The risk-based European AI Act came into force on Thursday, 1 August 2024. Its provisions will take effect in stages through mid-2026. Within six months, bans on certain AI uses in specific scenarios, such as law enforcement’s use of remote biometrics in public spaces, are due to apply.
The AI Act takes a tiered approach, grading AI applications by their potential risk. Under this approach, most AI applications are considered “low-risk” and are not subject to regulation at all.
The “limited risk” tier covers AI technologies such as chatbots and tools that can be used to create deepfakes. These must meet transparency requirements so that users are not misled, for example by disclosing that content is AI-generated.
“High-risk” AI applications include biometric data processing and facial recognition, AI-based medical software, and the use of AI in areas such as education and employment. Such systems must be registered in an EU database, and their developers must ensure compliance with risk- and quality-management requirements.
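To make the tiering concrete, here is a minimal sketch in Python of how the Act’s risk levels could be modeled in an internal compliance inventory; the tier names and the obligations listed are illustrative paraphrases of this article, not the Act’s legal text:

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers under the EU AI Act (simplified for illustration)."""
    PROHIBITED = "prohibited"  # banned uses, e.g. certain remote biometrics
    HIGH = "high"              # e.g. medical software, education, employment
    LIMITED = "limited"        # e.g. chatbots, deepfake tools
    LOW = "low"                # most applications: no obligations under the Act

# Illustrative, non-exhaustive duties per tier, paraphrasing the article.
OBLIGATIONS = {
    RiskTier.PROHIBITED: ["do not deploy"],
    RiskTier.HIGH: ["register in the EU database",
                    "maintain risk and quality management"],
    RiskTier.LIMITED: ["disclose AI interaction / synthetic content"],
    RiskTier.LOW: [],
}
```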
The AI Act provides for a multi-tiered system of penalties: fines of up to 7% of global annual turnover for the use of prohibited AI applications, up to 3% for violations of other obligations, and up to 1.5% for supplying incorrect information to regulators.
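As a rough illustration of how the turnover-based caps scale, consider the sketch below; the turnover figure is hypothetical, and note that the Act also sets fixed-euro maximums not covered in this article:

```python
# Fine caps as a share of global annual turnover, per the tiers above.
FINE_CAPS = {
    "prohibited_use": 0.07,          # up to 7% for prohibited AI applications
    "other_obligations": 0.03,       # up to 3% for other violations
    "incorrect_information": 0.015,  # up to 1.5% for misleading regulators
}

def max_fine(global_turnover_eur: float, violation: str) -> float:
    """Upper bound of the turnover-based fine for a violation type."""
    return global_turnover_eur * FINE_CAPS[violation]

# Hypothetical company with EUR 2 billion in global annual turnover:
turnover = 2_000_000_000
print(max_fine(turnover, "prohibited_use"))         # 140000000.0 (EUR 140m)
print(max_fine(turnover, "incorrect_information"))  # 30000000.0  (EUR 30m)
```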
A separate section of the new law concerns developers of so-called general-purpose AI (GPAI). Here, too, the EU has adopted a risk-based approach, with transparency as the core requirement for developers of such systems. Only a subset of the most powerful models is expected to be required to carry out risk assessment and mitigation measures.
Specific guidance for GPAI developers has not yet been drawn up, as there is no track record of applying the new law. The AI Office, the body responsible for strategic oversight and for building the AI ecosystem, has launched a consultation and called on developers to take part in the process. The code of practice for GPAI is expected to be finalized by April 2025.
OpenAI’s guide to the AI Act, released late last month, said the company expects to “work closely with the EU AI Office and other relevant bodies as the new law is implemented in the coming months,” which includes producing technical documentation and other guidance for GPAI model providers and developers.
“If your organization is trying to determine how to comply with the AI Act, you should first try to classify any AI systems in scope. Identify what GPAI and other AI systems you use, determine how they are classified, and consider what obligations flow from your use cases,” the guide says. “These issues can be complex, so you should consult with legal counsel.”
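The guide’s first steps (inventory the systems in scope, classify them, then map the obligations) could be organized along the lines of the following sketch; the system names and the classify logic are hypothetical stand-ins for an organization’s own legal assessment:

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str       # hypothetical internal identifier
    is_gpai: bool   # built on a general-purpose model?
    use_case: str   # e.g. "hiring", "customer chatbot"

def classify(system: AISystem) -> str:
    """Toy classification step; a real assessment requires legal review."""
    if system.use_case in {"hiring", "education", "medical"}:
        return "high"     # high-risk areas named in the article
    if system.use_case in {"customer chatbot", "content generation"}:
        return "limited"  # transparency obligations
    return "low"

# Step 1: inventory the AI systems in use (names are made up).
inventory = [
    AISystem("resume-screener", is_gpai=True, use_case="hiring"),
    AISystem("support-bot", is_gpai=True, use_case="customer chatbot"),
]

# Steps 2-3: classify each system and surface the resulting tier.
for s in inventory:
    print(f"{s.name}: tier={classify(s)}, gpai={s.is_gpai}")
```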