New AI Regulation in the EU

On August 1, 2024, the European Union's Artificial Intelligence Regulation (RIA, also known as the AI Act) entered into force: the world's first comprehensive legal framework for the ethical and safe development and use of AI. It classifies AI systems by risk level and imposes strict obligations, and companies must adapt to avoid sanctions of up to 7% of their global annual turnover.

This regulatory framework establishes guidelines to ensure ethical, safe and responsible use of AI, promoting innovation and protecting fundamental rights.

AI Risk Classification

  1. Unacceptable Risk: Prohibited practices such as behavioral manipulation, social scoring, and real-time remote biometric identification in public spaces (with narrow exceptions).
  2. High Risk: Strict controls in critical areas such as infrastructure, education, employment, and essential services. Obligations include risk management, transparency, and human oversight.
  3. Limited Risk: Transparency obligations, such as informing users that they are interacting with an AI system.
  4. Minimal Risk: No specific obligations, although general legal risks must still be managed.
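The four-tier scheme above can be sketched as a simple mapping from risk tier to headline obligations. This is an illustrative summary only; the tier names follow the regulation, but the `RiskTier` enum and `OBLIGATIONS` mapping are hypothetical names chosen for this sketch, and the obligation lists are abbreviated.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # strict controls in critical areas
    LIMITED = "limited"            # transparency duties
    MINIMAL = "minimal"            # no specific obligations

# Hypothetical, abbreviated mapping of each tier to its headline obligations
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited"],
    RiskTier.HIGH: ["risk management", "transparency", "human oversight"],
    RiskTier.LIMITED: ["inform users they are interacting with AI"],
    RiskTier.MINIMAL: [],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the headline obligations for a given risk tier."""
    return OBLIGATIONS[tier]
```

A deployer could use such a lookup as a first triage step when inventorying AI systems, before a detailed legal assessment.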

Impact on Businesses

  • Developers: Assess risks, document and audit systems, and incorporate ethical principles.
  • Deployers (business users): Select compliant vendors, monitor AI use, and train employees.

Deadlines and Penalties

The regulation applies in phases: prohibitions on unacceptable-risk practices from February 2, 2025, obligations for general-purpose AI models from August 2, 2025, and most remaining provisions from August 2, 2026. Non-compliance carries penalties of up to 7% of global annual turnover or 35 million euros, whichever is higher.
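The "whichever is higher" rule means the ceiling for the most serious infringements scales with company size. A minimal sketch of that arithmetic, assuming turnover is expressed in euros (the function name is chosen for illustration):

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound of the fine for the most serious infringements:
    the higher of 7% of global annual turnover or EUR 35 million."""
    return max(0.07 * global_annual_turnover_eur, 35_000_000.0)

# A company with EUR 1 billion in turnover faces a ceiling of EUR 70 million,
# while for smaller companies the flat EUR 35 million floor applies.
```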

Supervisory Authority

The European AI Office will oversee the implementation of the regulation, coordinating with national authorities such as AESIA, the Spanish Agency for the Supervision of Artificial Intelligence.

Conclusion

The RIA establishes a legal framework for ethical technological development, obliging companies to adapt and to mitigate legal, operational and reputational risks. It is a step towards a responsible future for AI.