
Why the 'Unsolvable' AI Regulation Problem Has Already Been Solved in Credit Risk

Author
Filip Geburczyk
Data scientist specializing in credit risk model implementation at a systemically important bank. Trained as a lawyer, now happy debugging messy ETLs and exploring the regulatory frameworks that shape the intersection of data and banking.

If you spend enough time in the AI space, you’ll eventually hear the “Math Defense.” It usually goes like this: Technology moves too fast for law. AI is a black box that defies traditional oversight. You can’t regulate math. Skeptics such as tech heavyweight Jimmy Wales argue that a cultural lag makes it impossible for regulators to keep pace with innovation.

But from inside the banking sector, this argument feels thin. Why? Because we’ve been regulating math for years.

While the rest of the world debates whether AI can be governed, European financial institutions have been quietly following a rigorous, 8-stage blueprint for deploying complex machine learning (ML) models. It’s called the Advanced Internal Ratings-Based (AIRB) approach, and it might just be the missing link for responsible AI across every other industry.

The Contrarian Reality: Innovation Through Regulation

The EU is often criticized for being the “world leader in regulation” rather than “innovation.” But as Scott Galloway points out, smart regulation is a form of innovation. By prioritizing a precautionary approach, the EU has created a framework that doesn’t just block tech. It ensures tech serves the public interest without breaking society.

In banking, we use AIRB models to calculate creditworthiness: essentially using ML to decide who gets a loan. Because these decisions affect both the bank’s stability and the customer’s life, the stakes are massive. The result is a blueprint for AI that is transparent, accountable, and, most importantly, functional.

The 8-Stage Blueprint for Responsible AI

The Capital Requirements Regulation (CRR) doesn’t just look at the final model; it dissects the entire lifecycle into eight distinct, checkable stages. Here is how it is done in credit risk, and how any company from retail to healthcare could adapt it:

1. Permission

In banking, you don’t just go live. You need permission from regulators for specific exposure classes. Similarly AI shouldn’t be a wild west. High-stakes applications should require a license to operate based on the risk profile of the use case.

2. Rigorous Development & Independent Validation

We don’t just check if the model works; we check how it was built. Crucially, the people who validate the model must be different from the people who built it. Internal review isn’t enough. Critical AI systems need independent validation to catch bias and statistical drift before they hit the real world.

3. Data Maintenance

High-quality data is the difference between an insight and a disaster. The CRR (together with another international standard called BCBS 239) mandates data lineage tracking and quality control. If your data is biased or dirty, your AI is a liability. Regulation should mandate high standards for data integrity and privacy.
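In practice, that kind of mandate translates into automated quality gates that run before any data reaches a model. Here is a minimal sketch of such a gate; the field names, thresholds, and violation messages are all hypothetical, shown only to make the idea concrete.

```python
def check_quality(records, required, max_missing_rate=0.01):
    """Return a list of data-quality violations for a batch of loan records.
    An empty list means the batch passes the gate."""
    if not records:
        return ["empty batch"]
    issues = []
    n = len(records)
    # completeness: required fields must be populated almost everywhere
    for field in required:
        missing = sum(1 for r in records if r.get(field) is None)
        if missing / n > max_missing_rate:
            issues.append(f"{field}: {missing}/{n} values missing")
    # plausibility: a negative income is a recording error, not an insight
    for r in records:
        income = r.get("income")
        if income is not None and income < 0:
            issues.append(f"negative income for applicant {r.get('id')}")
    return issues
```

A batch that fails the gate never reaches the rating model, and the violation list itself becomes part of the audit trail.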

4. Parameter Estimation

In credit risk, we focus on Probability of Default (PD), Loss Given Default (LGD) and Exposure at Default (EAD). We must justify these parameters against real-world observations. AI outputs shouldn’t just be magic numbers. Developers must be able to explain the why behind the parameters their models are optimizing for.
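These three parameters combine in the standard expected-loss formula, EL = PD × LGD × EAD. The sample figures below are made up for illustration; the formula itself is textbook credit risk.

```python
def expected_loss(pd_, lgd, ead):
    """Expected loss of a single exposure, in the same currency as EAD.
    pd_: probability of default (0..1), lgd: loss given default (0..1),
    ead: exposure at default."""
    return pd_ * lgd * ead

# e.g. 2% probability of default, 45% loss given default, EUR 100,000 exposure:
loss = expected_loss(0.02, 0.45, 100_000)  # roughly EUR 900 of expected loss
```

Because each factor is separately estimated and separately justified, a supervisor can challenge any one of them without having to reverse-engineer a single opaque score.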

5. Supervisory Approval

Reaching the deployment stage requires a pivotal moment of external oversight where you present your evidence of validation and data quality. Before deployment, a regulatory body conducts a formal go/no-go assessment. This checkpoint ensures that organizations are actually following the rules, not just paying lip service to them.

6. Governance & Oversight

The CRR requires a management body to understand the rating systems. You can’t hide behind the “it’s just an algorithm” excuse if things go wrong. Accountability starts at the top. Boards and executives must be literate in the AI tools their companies deploy.

7. Comprehensive Documentation

Documentation is the paper trail of accountability. Everything from assumptions and methodologies to data sources must be recorded. Transparency is impossible without a record. Detailed documentation allows for the reviewability of automated decisions.

8. Stress Testing

We subject our models to extreme but plausible scenarios, like an economic collapse or geopolitical event, to see if they hold up. AI shouldn’t just work in a perfect environment. It needs to be resilient to edge cases and adverse conditions.
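The mechanics of a simple stress test can be sketched in a few lines: shock the PD estimates with a downturn multiplier and compare portfolio expected loss under the baseline and stressed scenarios. The multiplier, exposures, and parameter values here are illustrative assumptions, not figures from any regulatory scenario.

```python
def portfolio_expected_loss(exposures, pd_multiplier=1.0):
    """Sum of PD * LGD * EAD across the book, with an optional PD shock.
    Shocked PDs are capped at 1.0 since a probability cannot exceed 100%."""
    return sum(min(e["pd"] * pd_multiplier, 1.0) * e["lgd"] * e["ead"]
               for e in exposures)

book = [
    {"pd": 0.01, "lgd": 0.40, "ead": 500_000},
    {"pd": 0.05, "lgd": 0.60, "ead": 200_000},
]
baseline = portfolio_expected_loss(book)                      # business as usual
stressed = portfolio_expected_loss(book, pd_multiplier=3.0)   # severe-recession shock
```

If the stressed figure blows past the capital held against the portfolio, the model (or the book) has a problem worth finding before the recession does.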

Why This Matters for the Broader AI Conversation

The argument that AI is too complex to regulate is often a smokescreen for businesses that don’t want to be held accountable.

The Dutch childcare benefits scandal is a sobering example of what happens when machine learning is deployed without these guardrails: thousands of lives were upended by a fraud-detection model that used nationality as a risk factor, a clear and illegal bias.

If the steps used in AIRB models had been applied there (independent validation, data quality checks, and clear governance), that harm could have been prevented.

The Key Takeaway

Regulation isn’t a hurdle to innovation; it’s the safety harness that allows us to move faster. We don’t need to reinvent the wheel for AI. The 8-stage AIRB framework proves that even the most complex black box models can be governed if we break the lifecycle down into manageable, transparent steps.

If we can regulate a bank’s billion-euro credit portfolio using ML, we can regulate a retail recommendation engine or medical diagnostic tool. The question isn’t whether we can regulate AI — it’s whether we have the will to apply what we already know works.