The Commission’s flagship AI Act, hailed as ‘the world’s first comprehensive AI law’, entered into force on 1 August 2024. It becomes fully applicable two years later, with some exceptions: the prohibitions take effect after six months, and the rules and obligations for general-purpose AI (GPAI) providers after twelve. Despite these looming deadlines, many crucial details of the AI Act remain unspecified, and stakeholders are concerned that the drafting process is too rushed to allow meaningful engagement.
The first draft of the Code of Practice for GPAI providers was published on 14 November 2024 and the second on 19 December. A third draft is expected on 17 February 2025, with the final version due in April. Each draft reflects the views of participants in the Code of Practice Working Groups and Provider Workshops, which together involve around 1,000 stakeholders.
In an open letter, the digital rights groups Access Now and European Digital Rights (EDRi) argue that such short timeframes do not “enable more targeted and useful feedback.” When the first draft was issued, for instance, stakeholders had just 10 days to review it, and the hundreds of written responses that followed were processed in only two weeks.
Adding to the Commission’s challenge, the new AI Office is “massively understaffed,” according to MEP Axel Voss, shadow rapporteur for the AI Act; it currently has only 85 staffers. Finding enough expertise to regulate the highly technical aspects of AI training and testing practices is another obstacle.
Industry representatives are among the most vocal critics of the AI Act drafting process. The Computer and Communications Industry Association (CCIA), which represents leading tech companies, raises similar concerns about the lack of time. “The shortcomings of the AI Act, particularly the overly tight timeline for applying its rules, are already becoming evident,” says Boniface de Champris, the CCIA’s Senior Policy Manager.
Beyond the GPAI Code of Practice, the AI Act also prohibits certain AI practices deemed to pose unacceptable risk, including social scoring and the creation of facial recognition databases from photos scraped from the internet. These prohibitions are due to come into force on 2 February. But with less than two weeks to go, the European Commission has still not published details on how the prohibitions will apply in practice.
Many AI developers worry about their ability to comply with the regulation on such short notice. “With the AI Act set to take effect in two weeks, businesses remain uncertain about critical issues,” the tech lobby group DigitalEurope told POLITICO.
If stakeholders are not consulted in a timely manner, the legitimacy and success of the world’s first comprehensive AI law are at risk. More broadly, the EU AI Act is a litmus test of the new Commission’s ability to deliver clear, responsible, and innovation-friendly regulation. It is certainly off to a rocky start.