
Unacceptable risk provisions take effect


The European Union Artificial Intelligence Act (EU AI Act) entered into force on 1 August 2024, marking a historic moment as the world’s first comprehensive legislation regulating AI systems according to a risk-based approach.


This Sunday, 2 February, the first key provisions of the EU AI Act will take effect, including the ban on prohibited ‘unacceptable risk’ AI systems and the deadline to meet AI literacy requirements.

From this date, using or marketing AI systems posing an ‘unacceptable risk’ will be prohibited in the EU, with penalties of up to EUR 35,000,000 or 7% of global annual turnover, whichever is higher, for non-compliance.

The full list of activities prohibited by the Act includes:

  • Harmful subliminal, manipulative and deceptive techniques
  • Harmful exploitation of vulnerabilities
  • Unacceptable social scoring
  • Individual crime risk assessment and prediction (with some exceptions)
  • Untargeted scraping of internet or CCTV material to develop or expand facial recognition databases
  • Emotion recognition in the areas of workplace and education (with some exceptions)
  • Biometric categorisation to infer certain sensitive categories (with some exceptions)
  • Real-time remote biometric identification (RBI) in publicly accessible spaces for law enforcement purposes (with some exceptions).

Commenting on the global impact, Fiona Ghosh, partner at Ashurst, noted: “As the rest of the AI Act comes into effect in stages, the regulatory framework in Europe will change significantly. This comes at a time when other jurisdictions, in particular the US (and to a lesser extent the UK) are signalling a move away from regulation, which may well accelerate in response to competition from elsewhere. The extent of the resulting divergence remains to be seen.”

However, the EU AI Act will apply to providers and deployers regardless of their location, meaning US-based companies with EU operations will still be impacted, despite the US currently lacking federal AI legislation similar to the EU AI Act.

“The AI Act will have a truly global application. That’s because it applies not only to organisations in the EU using AI or those providing, importing or distributing AI to the EU market, but also AI provision and use where the output is used in the EU. So, for instance, a company using AI for recruitment in the EU – even if it is based elsewhere – would still be captured by these new rules,” explained Marcus Evans, partner at Norton Rose Fulbright.

Sunday, 2 February 2025, marks not only the deadline for the prohibitions on unacceptable-risk AI systems, but also for meeting the EU AI Act’s literacy requirements. These requirements are designed to ensure organisations provide adequate training and upskilling for the teams operating AI systems.

Speaking on this and upcoming deadlines, Matt Worsfold, risk advisory partner at Ashurst, commented: “Businesses should therefore use this deadline as a prompt to start working through a plan for compliance. With approximately 18 months until the more substantive deadline for the rest of the Act, businesses need to commence, at a minimum, their discovery and cataloguing exercises to identify potential AI systems and use cases. This exercise will likely take a significant amount of time given the challenges that will need to be faced; for example, many organisations have been procuring and/or building AI systems for a number of years, are unlikely to have central registers, and are going to need to engage extensively with third parties.”
