EU AI Act: Analyzing the Third Draft of the Code of Practice for GPAI Developers


The Third Code of Practice Draft

The third draft of the EU Code of Practice for GPAI developers introduces some promising updates but also raises concerns, particularly for smaller developers and researchers. While the improvements show progress, the current approach to systemic risks and transparency may hinder future-proof AI development. Below is a summary of the key points:

Systemic Risk Concerns

The draft categorizes systemic risks, identifying issues such as cyber offense, harmful manipulation, and loss of control. While cyber offense is well-defined and measurable, the other categories—harmful manipulation and loss of control—are highly speculative. These risks are seen as challenging to assess and could disproportionately burden smaller and open developers. The inclusion of these risks may stifle innovation, as it creates barriers for developers who do not have the resources to evaluate these complex and abstract risks.

Transparency Issues

The draft diminishes some of the transparency commitments present in previous versions. While open-source developers without systemic risks are exempt from certain administrative burdens, the lack of a requirement for model training data disclosure could hinder downstream providers from evaluating models in new settings. Additionally, information about energy consumption and personal data handling is now only available to national authorities, limiting public access to essential information for safety and compliance.

Copyright and Legal Clarity

The copyright section of the draft introduces clearer guidelines on opt-out mechanisms and compliance with EU copyright laws. However, smaller developers face challenges in navigating these regulations, especially with the legal complexities of open-source AI models that span multiple jurisdictions. The lack of clarity regarding SME exemptions raises concerns about the disproportionate burden placed on smaller entities.
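To make the opt-out discussion concrete: one widely used machine-readable opt-out signal is a robots.txt directive that disallows named AI crawlers, which rights holders can use to reserve their content under the EU text-and-data-mining rules. The sketch below is illustrative only; `ExampleAIBot` is a hypothetical crawler name, not one named in the draft.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt illustrating a machine-readable opt-out:
# the site owner disallows a named AI crawler while permitting others.
robots_txt = """\
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# The named AI crawler is opted out; a generic crawler is not.
print(parser.can_fetch("ExampleAIBot", "https://example.com/article"))  # False
print(parser.can_fetch("OtherBot", "https://example.com/article"))      # True
```

A developer honoring such opt-outs would run a check like this before ingesting a page into a training corpus; the draft's open question is whether smaller developers can afford to track such signals consistently across jurisdictions.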

Moving Forward

While the draft contains positive aspects, the overall balance seems misaligned with the current understanding of AI technology. Key issues like transparency and copyright are urgent and could have a more immediate societal impact than the more speculative risks identified in the draft. A more robust commitment to transparency and fair competition is needed to ensure that the AI ecosystem remains open and accountable.

What Undercode Says:

The third draft of the EU AI Code of Practice presents a critical moment for the future of AI development in Europe. The draft has made some strides toward refining systemic risk categories, but the inclusion of speculative risks like “harmful manipulation” and “loss of control” highlights a fundamental issue in the EU’s approach. These categories are ill-defined and lack concrete evidence of their immediate relevance to current AI models. The result is an overestimation of potential risks and a regulatory framework that may unintentionally hinder innovation, especially for smaller, open developers.

Undercode’s perspective underscores that while systemic risks like cyber offense can be assessed and mitigated with relative certainty, the other identified risks require much more research to fully understand their relevance. For example, “harmful manipulation” is currently too ambiguous, with no solid evidence linking state-of-the-art models to major issues like misinformation. Likewise, “loss of control” remains a largely theoretical scenario, more akin to science fiction than a plausible risk.

This presents a significant challenge for smaller and open developers, who rely on collaborative, scientifically-grounded approaches to tackle risks. The draft’s focus on speculative risks may place undue burdens on these developers, potentially limiting their ability to contribute to the broader AI ecosystem. The exclusion of smaller actors and researchers could result in a concentration of power in the hands of a few large players, which is counterproductive to the goal of creating an inclusive, transparent AI landscape.

Moreover, the draft weakens previous transparency commitments by reducing the disclosure requirements for training data and model energy consumption. For open-source models, this lack of transparency could hinder downstream providers from understanding the conditions under which a model can be deployed safely. Open systems have historically played a vital role in advancing AI, both by fostering innovation and ensuring the research behind AI systems is verifiable. Limiting transparency is a step backward in promoting public safety and accountability.

Finally, the draft’s provisions on copyright offer some clarity on the legal challenges facing smaller developers, but they still fall short in ensuring that SMEs are not disproportionately burdened by the complex legal landscape of AI development. Clearer guidelines are needed to help these developers navigate the regulatory environment without facing legal uncertainty.

The overarching theme is that the current draft, while containing positive aspects, could do more to support smaller, open AI developers and provide the transparency necessary for a thriving, competitive ecosystem. Without these adjustments, the EU’s approach could inadvertently stifle the very innovation it seeks to encourage.

Fact Checker Results:

  1. The classification of risks, especially “harmful manipulation” and “loss of control,” remains speculative, with limited evidence supporting their immediate relevance.
  2. Transparency measures in the current draft are less robust compared to previous versions, which could impede open collaboration and the development of responsible AI practices.
  3. Copyright provisions are clearer but still place a heavy burden on smaller developers, lacking sufficient clarity on SME exemptions.

References:

Reported By: https://huggingface.co/blog/frimelle/eu-third-cop-draft

