The European Union is considering a tiered approach to regulating generative artificial intelligence, according to a draft reviewed by Bloomberg. The initiative would be among the first attempts to rein in the fast-moving technology.
The proposal would sort foundation models, general-purpose AI systems that can be adapted to many different tasks, into three tiers. The most powerful would face rigorous external vetting, according to the draft.
The EU is poised to become the first Western government to impose mandatory rules on AI. Under the forthcoming AI Act, AI systems that predict crime or screen job applicants would have to undergo risk assessments, among other requirements. Negotiators aim to refine the legislation at a meeting on Oct. 25 and finalize it by the end of the year.
The first tier would cover all foundation models. The second, for "very capable" systems, would be defined by the amount of compute used to train their large language models, which ingest vast troves of data to build capability and may exceed today's state of the art in ways that are not yet well understood, the draft suggests.
The third tier, for large-scale general-purpose AI systems, would cover the most widely used AI tools, measured by their total number of users.
Demand for generative AI has surged since OpenAI launched ChatGPT last year, setting off a race across the tech industry to build rival products. These tools can generate text, images, and video from simple prompts, drawing on large language models to produce output that often convincingly mimics human work.
The EU is still wrestling with several open questions, including how exactly to regulate generative AI and whether to ban real-time facial recognition in public spaces. Proposals from the bloc's legislative bodies have drawn criticism that the rules could hamper smaller companies' ability to compete with tech giants.
In a recent meeting, representatives of the EU's main institutions broadly backed the tiered approach, and technical specialists then drew up a more detailed plan. According to the Oct. 16 document seen by Bloomberg, the ideas are still taking shape and could change as negotiations continue.
EU’s proposed approach to AI in three tiers
Here's how the EU could address each of the three tiers:
Foundational model oversight
The EU would require AI developers to meet strict transparency obligations before bringing a model to market. That includes detailed documentation of how the model was built and trained, along with the results of internal "red-teaming" exercises, in which independent experts probe the models for flaws and vulnerabilities. The models would also be assessed against standardized benchmarks.
Once a model is commercially available, the onus would be on the companies to furnish detailed insights to enterprises leveraging the technology. This would empower these businesses to independently evaluate the foundational models.
Companies would also have to provide a detailed overview of the data used to build the model and explain how they handle copyright, including ensuring that content creators can opt out of having their work used in AI training. In addition, AI-generated content would have to be distinguishable from other content.
The working definition for a foundational model, as suggested by the negotiators, is a system proficient in executing a diverse array of specific tasks.
Advanced AI scrutiny
Models in this category would face a tougher regime. Before commercial release, they would undergo regular "red-teaming" by external specialists vetted by the EU's newly created AI Office, with findings reported directly to that agency.
In addition, companies would be mandated to implement mechanisms to identify overarching risks associated with these models. Once these advanced models are commercially available, independent auditors and research entities would undertake compliance checks. This would encompass verifying adherence to stipulated transparency guidelines.
There’s also a proposal on the table to establish a collaborative platform where companies can exchange insights on best practices. Additionally, a voluntary code of conduct, endorsed by the European Commission, is under consideration.
The "very capable foundation model" designation would hinge on the amount of compute used in training, measured in FLOPs (floating-point operations), a gauge of total training compute. The commission would set the specific threshold later and revise it periodically.
Companies would be able to challenge the designation. Conversely, the commission could deem a model "very capable" even if it falls below the threshold, following a review. Negotiators are also weighing a second criterion, a model's "potential impact," measured by the number of high-risk AI applications built on top of it.
Large-scale AI regulation
For these systems, the EU would require rigorous external "red-teaming" to identify vulnerabilities, with the results reported to the commission's AI Office. Companies would also have to put systems in place to assess and mitigate risks.
A system would qualify as a general-purpose AI (GPAI) at scale based on its user base: either 10,000 registered business users or 45 million registered individual users. The commission would later clarify how users are to be counted.
Companies could challenge a GPAI-at-scale designation. Conversely, the EU could subject systems or models that fall short of those thresholds to the same rules if they pose risks.
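As a rough illustration, the user-count test could be sketched in code. This is a hypothetical sketch, not any official methodology: the function name, the override flag standing in for a commission designation, and the use of inclusive thresholds are all assumptions; only the two threshold figures come from the draft as reported.

```python
# Thresholds reported in the draft; how users would actually be counted
# is left to the commission to define later.
BUSINESS_USER_THRESHOLD = 10_000
INDIVIDUAL_USER_THRESHOLD = 45_000_000

def is_gpai_at_scale(business_users: int, individual_users: int,
                     commission_designation: bool = False) -> bool:
    """Return True if a system would count as GPAI at scale.

    Either threshold alone suffices, and (per the draft) the commission
    could designate a system regardless of its user counts.
    """
    if commission_designation:
        return True
    return (business_users >= BUSINESS_USER_THRESHOLD
            or individual_users >= INDIVIDUAL_USER_THRESHOLD)
```

The key design point the draft implies is that the criteria are disjunctive: clearing either threshold, or a direct commission designation, is enough on its own.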
There’s an ongoing dialogue to establish protective measures to prevent the generation of unlawful or harmful content by both GPAI systems and very capable AI models.
The newly established AI Office would oversee the additional rules for both GPAI at scale and very capable foundation models. It would have the authority to request documentation, run compliance tests, maintain a roster of approved red-teamers, and open investigations, according to the draft. In extreme cases, the office could even pull a model from the market.
While the agency would operate under the commission’s umbrella, it would function autonomously. Funding for staffing the AI Office could potentially be sourced from fees levied on GPAI at scale and very capable foundation models.