
Global AI Regulations and Their Impact on Industry Leaders – with Michael Berger of Munich Re


There is significant regulatory uncertainty in global AI oversight, largely because the legal landscape is fragmented across countries, which hinders effective governance of transnational AI systems. As a 2024 Nature study notes, the lack of harmonized international law complicates AI innovation, making it difficult for organizations to understand which standards apply in which jurisdictions.

The absence of robust AI governance and risk management frameworks exposes organizations to operational, ethical, and financial risks. Compliance failures are costly: fines under the EU AI Act can reach €35 million or 7% of global annual turnover, whichever is higher, for the most severe violations.

On a recent episode of the ‘AI in Business’ podcast, Emerj Editorial Director Matthew DeMello sat down with Michael Berger, Head of Insure AI at Munich Re, to discuss how companies should actively manage growing AI risks by setting governance frameworks, defining risk tolerance, and reducing aggregation risk through model diversification and task-specific fine-tuning.

This article highlights two essential insights every organization needs for effective AI governance:

  • Building governance and accountability for AI risk: Defining clear risk ownership and implementing governance frameworks to manage inevitable AI errors across jurisdictions.
  • Managing AI risk with governance and model strategy: Defining risk tolerance, implementing mitigation beyond regulations, and diversifying model architectures to reduce systematic bias and aggregation risk.

Guest: Michael Berger, Head of Insure AI, Munich Re

Expertise: Insurance, Technology, Data Management, and Technology-Based Risk Assessment.

Brief Recognition: Michael has spent the last 15 years at Munich Re, helping to shape its Insure AI operations. He holds a Master’s degree in Information and Data Science from UC Berkeley, a Master’s in Business Administration from the Bundeswehr University Munich, and a PhD in Finance.

Building Governance and Accountability for AI Risk

Michael opens the conversation by comparing how the EU and the US approach AI regulation differently:

  • The EU creates regulations upfront, setting clear rules and requirements before issues occur.
  • The US often shapes its approach through litigation, where court cases set precedents and best practices emerge over time.

For global companies, the difference means they must adapt AI deployments to each jurisdiction’s requirements, which increases compliance burdens but also encourages clearer thinking about risks.

He gives an example from Canada, where a passenger asked an airline’s AI-powered chatbot about discount policies. The chatbot hallucinated a nonexistent policy, the passenger relied on it, and the airline refused to honor it. The court nonetheless ruled the airline liable, even though it had not built the model itself.

Michael says such cases clarify who is responsible for AI outputs. That clarity helps businesses improve risk management, decide where to adopt AI confidently and where to be cautious, and ultimately supports healthier growth of the AI industry.

He argues that responsibility for AI-related errors, particularly hallucinations from generative AI, should rest not with end users or those affected by an AI system’s decisions, but with AI adopters and, potentially, AI developers.

He explains that while generative AI offers significant benefits across many use cases, these models are inherently probabilistic: they operate on likelihoods rather than certainties. Because of that inherent uncertainty, failures and hallucinations are not just possible but inevitable, and no technical fix can eliminate them:

“I think, as business leaders, we will just need to accept that those models are probabilistic models, and that they can fail at any point in time, so that there always exists a probability of a failure or hallucinations, and that this is not avoidable by any form of technical means.

Then I think we will need to accept this risk and embrace it, together with the potential upside that these models create.”

– Michael Berger, Head of Insure AI at Munich Re

Managing AI Risk with Governance and Model Strategy

Michael notes that discussions about AI have matured from treating it as a distant possibility to recognizing it as a current operational reality for many companies. With that shift comes a sharper understanding that AI’s potential always carries risk, and that the risk must be actively managed.

He says this new understanding has led to growing conversations about AI governance, or how to manage AI risks operationally. These conversations include defining risk tolerance levels for the organization, implementing mitigation measures that go beyond regulatory requirements to bring risk down to an acceptable level, and considering AI insurance as part of the strategy to cover potential liabilities.
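To make these conversations concrete, here is a minimal sketch, in Python, of how an organization might encode its risk tolerance as explicit, testable thresholds. The use-case categories and numbers are hypothetical illustrations, not figures from the episode:

```python
# A minimal sketch: encoding an organization's AI risk tolerance as
# explicit, testable thresholds. All categories and numbers are hypothetical.

RISK_TOLERANCE = {
    # Maximum acceptable error rate per use-case category, set by governance.
    "internal_summarization": 0.05,   # low stakes, humans review the output
    "customer_chatbot":       0.01,   # consumer-facing, liability exposure
    "credit_decisioning":     0.001,  # regulated, discrimination risk
}

def within_tolerance(use_case: str, measured_error_rate: float) -> bool:
    """Gate a deployment on its measured error rate versus the agreed tolerance."""
    return measured_error_rate <= RISK_TOLERANCE[use_case]

# Example: a chatbot measured at a 2% hallucination rate fails the gate,
# so the team must mitigate further, insure the residual risk, or not deploy.
print(within_tolerance("customer_chatbot", 0.02))  # -> False
```

Mitigation measures and insurance then map naturally onto the gap between the measured error rate and the agreed threshold.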

He mentions that at the level of a single company, the overall risk increases as more AI use cases are developed and more interactive AI models are put into production. Each additional model brings the possibility of errors or hallucinations, which can create liability or lead to financial costs.
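A back-of-the-envelope calculation shows how this portfolio effect compounds. Assuming, purely for illustration, that each deployed use case independently carries a 2% chance of a costly error over some review period:

```python
# Illustrative only: the chance of at least one AI failure grows quickly
# as more models go into production. Assumes independent failures, which
# actually understates correlated (aggregation) risk, discussed below.

def prob_any_failure(per_model_failure_rates):
    """P(at least one failure) = 1 - product of per-model success probabilities."""
    p_all_succeed = 1.0
    for p_fail in per_model_failure_rates:
        p_all_succeed *= (1.0 - p_fail)
    return 1.0 - p_all_succeed

for n_models in (1, 5, 10, 25):
    p = prob_any_failure([0.02] * n_models)  # 2% per model is a made-up rate
    print(f"{n_models:>2} models in production -> {p:.1%} chance of at least one failure")
```

At 25 models, the chance of at least one failure in the period approaches 40%, even though each individual model looks safe on its own.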

He also points out that the risk becomes more serious in sensitive use cases where private consumers are directly affected by AI decisions. In such scenarios, the issue of AI-driven discrimination becomes critical.

“I think here it’s a significant change in risk, because previously, humans were making decisions, and it might be that discrimination cases were more rare, or at least not systematic.

However, with an AI model being used across the board and impacting many people now, discrimination risk is a risk which can suddenly turn systematic. So if the AI model is found to be discriminatory, then it might impact many consumer groups where those models have been used – not just for a single company, but also potentially across companies.”

– Michael Berger, Head of Insure AI at Munich Re

While human decision-making might also involve discrimination, it is often less systematic. With AI, however, if a model is biased, that bias can be applied consistently and at scale, creating systematic discrimination that affects large groups of people.

Michael further explains that the risk can extend beyond a single company, especially when foundational models are involved. If a foundational model is trained in a way that embeds discriminatory patterns and is then used by many companies for similar sensitive applications, the discriminatory effects can spread widely. 

Embedding discrimination in foundational models creates what he calls an “aggregation risk,” where a flaw in one model can cause harm across multiple organizations simultaneously.

He believes companies must be aware of the aggregation risk when planning and deploying AI, particularly when using foundational models for consumer-impacting decisions.

Michael argues that smaller, task-specific models are better from a risk perspective because their intended use cases are clearly defined. That narrow scope makes them easier to test, makes their error rates easier to measure, and leaves them less prone to unpredictable performance shifts. In contrast, very large models can behave inconsistently across use cases, sometimes showing low error rates in one scenario but very high rates in another.
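As a rough illustration of why narrow scope helps governance, an error rate measured on a frozen, task-specific evaluation set comes with a quantifiable confidence interval. The counts below are hypothetical:

```python
# A sketch of estimating a task-specific model's error rate with a
# normal-approximation 95% confidence interval. Counts are hypothetical.
import math

def error_rate_ci(n_errors: int, n_samples: int, z: float = 1.96):
    """Return the point estimate and an approximate 95% CI for the error rate."""
    p = n_errors / n_samples
    half_width = z * math.sqrt(p * (1 - p) / n_samples)
    return p, max(0.0, p - half_width), min(1.0, p + half_width)

# Hypothetical: the task-specific model made 12 errors on 1,000 eval cases.
p, lo, hi = error_rate_ci(12, 1_000)
print(f"error rate = {p:.1%} (95% CI: {lo:.1%} to {hi:.1%})")
```

A very large general-purpose model offers no comparable guarantee, because no finite evaluation set covers its open-ended range of uses.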

He gives the example of the 2023 GPT-4 update, where a model with error rates below 5 percent on certain tasks suddenly saw those rates jump to over 90 percent after retraining. The swing, he says, highlights the brittleness of large, general-purpose models.
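One way to catch such drift operationally is to re-score every new model version against the frozen evaluation set and block rollout when the error rate leaves an approved band. This is a hedged sketch: `run_eval` is a hypothetical hook, not a real API, and the thresholds are illustrative:

```python
# A sketch of regression-testing a model across provider updates.
# `run_eval` is a hypothetical callable that scores the current model
# version on a frozen, task-specific test set and returns its error rate.

BASELINE_ERROR_RATE = 0.04  # measured when the use case was approved
MAX_ALLOWED_DRIFT = 0.02    # governance-approved tolerance band

def check_for_drift(run_eval) -> None:
    current = run_eval()
    if current > BASELINE_ERROR_RATE + MAX_ALLOWED_DRIFT:
        raise RuntimeError(
            f"Model drifted: error rate {current:.1%} exceeds approved "
            f"baseline {BASELINE_ERROR_RATE:.1%} plus tolerance; halt rollout."
        )

check_for_drift(lambda: 0.03)  # stubbed eval within the band: passes silently
```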

To address the problem, Michael recommends that companies consider using different foundational models, or even deliberately choosing a slightly weaker model architecture if it is less related to those used elsewhere in the organization for similar tasks. Closing his point, he emphasizes that diversification can reduce aggregation risk while still delivering adequate performance.
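A toy calculation suggests why diversification helps. If companies split evenly across several independent foundational models, the expected harm rate is unchanged, but the probability that a single embedded flaw harms most of the market at once falls sharply. The 1% flaw probability below is an illustrative assumption:

```python
# A toy model of aggregation risk: companies split evenly across
# n independent foundation models, each of which carries an assumed
# 1% chance of embedding a systematic flaw (e.g., discriminatory bias).
import math
from math import comb

P_FLAW = 0.01  # illustrative assumption, not a measured figure

def tail_risk(n_models: int, threshold: float = 0.5) -> float:
    """P(more than `threshold` of all companies are harmed simultaneously)."""
    # Smallest number of flawed models whose combined market share
    # exceeds the threshold fraction of companies.
    k_min = math.floor(threshold * n_models) + 1
    return sum(comb(n_models, k) * P_FLAW**k * (1 - P_FLAW)**(n_models - k)
               for k in range(k_min, n_models + 1))

# Expected harm is P_FLAW either way; diversification trims the correlated tail.
for n in (1, 2, 5, 10):
    print(f"{n:>2} distinct models -> P(>50% of companies harmed) = {tail_risk(n):.2e}")
```

With one shared model, a flaw harms the whole market with probability 1%; with ten unrelated models, the chance of simultaneous harm to more than half the market drops to roughly 2 × 10⁻¹⁰, in the spirit of the aggregation-risk reduction Michael describes.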
