Key Highlights of the EU AI Act: Balancing Innovation and Safety in Artificial Intelligence
The European Union (EU) has made a groundbreaking move towards regulating artificial intelligence (AI) with the European Parliament's approval of the EU AI Act on March 13, 2024. The Act, which is expected to become law in May, aims to strike a balance between innovation and safety by regulating high-risk AI systems and promoting responsible AI development.
One of the key highlights of the EU AI Act is its risk-based classification system, which categorizes AI systems according to the level of risk they pose, from minimal to unacceptable. High-risk AI systems will be subject to stringent requirements, including conformity assessments to ensure they meet safety and data protection standards before they can be placed on the market. The Act also prohibits the use of AI in certain scenarios, such as social scoring systems and emotion recognition systems in schools and workplaces.
General-purpose AI systems, which have a wide range of potential uses, will also be subject to specific requirements under the EU AI Act. Companies will be required to provide detailed summaries of the data used to train these models, label AI-generated deepfakes, report serious incidents, and disclose energy usage. One hypothetical way a provider might track these obligations internally is sketched below.
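To make these obligations a little more concrete, here is a minimal sketch of how a provider might structure such a disclosure record internally. The `GPAIDisclosure` class and its field names are illustrative assumptions for this article only; the Act does not prescribe any particular data format.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative only: a hypothetical internal record a provider might keep
# to track the disclosure obligations described above. The EU AI Act does
# not define this structure or these field names.
@dataclass
class GPAIDisclosure:
    model_name: str
    training_data_summary: str          # detailed summary of training data
    labels_generated_deepfakes: bool    # AI-generated content is labeled
    serious_incidents: List[str] = field(default_factory=list)  # reported incidents
    energy_usage_kwh: float = 0.0       # disclosed energy consumption

# Example usage (all values are made up):
record = GPAIDisclosure(
    model_name="example-gpai-model",
    training_data_summary="Public web text and licensed corpora (summary).",
    labels_generated_deepfakes=True,
    energy_usage_kwh=1.2e6,
)
print(record)
```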
To encourage innovation while ensuring regulatory compliance, the EU AI Act introduces regulatory sandboxes, allowing real-world testing of AI technologies under less stringent regulatory requirements. The Act also establishes an AI Office within the European Commission to oversee general-purpose AI models and coordinate governance among member states.
Enforcement and penalties for non-compliance with the EU AI Act are strict, with fines of up to €35 million or 7% of a company's global annual turnover, whichever is higher, for prohibited AI violations. The Act also sets lower penalty tiers for other violations and for supplying incorrect information to authorities.
Overall, the EU AI Act represents a significant step towards regulating AI in the European Union. By promoting responsible AI development and ensuring the protection of citizens’ rights, the Act aims to foster innovation and growth in the AI sector while safeguarding against potential risks.