
EU AI Act Digest Newsletter #5

Welcome to the EU AI Act Digest Newsletter!

Welcome to the 5th edition of this newsletter on AI policy & regulation, read by hundreds of subscribers in 100+ countries.
We hope you enjoy reading it as much as we enjoy writing it.
⏰ Register: the EU AI Act Online Course will be released to the first cohort in September.

Australia’s Commitment to Responsible AI in Government

The Australian Government has introduced its “Policy for the Responsible Use of AI in Government,” setting a high standard for the ethical and effective adoption of artificial intelligence across government agencies. This policy is designed to ensure that AI is leveraged to enhance public services while maintaining public trust and accountability. Below are key excerpts that highlight the strategic priorities of this important initiative.

Strengthening Public Trust

“One of the biggest challenges to the successful adoption of AI is a lack of public trust around government’s adoption and use. Lack of public trust acts as a handbrake on adoption. The public is concerned about how their data is used, a lack of transparency and accountability in how AI is deployed, and the way decision-making assisted by these technologies affects them. This policy addresses these concerns by implementing mandatory and optional measures for agencies, such as monitoring and evaluation of performance, being more transparent about their AI use, and adopting standardised governance.”

Adapting to Technological Change

“AI is a rapidly changing technology and the scale and nature of change is uncertain. This policy has been designed to ensure a flexible approach to the rapidly changing nature of AI and requires agencies to pivot and adapt to changes in the technological and policy environment. The policy aims to embed a forward-leaning, adaptive approach for government’s use of AI that is designed to evolve and develop over time.”

Embracing the Benefits of AI

“The adoption of AI technology and capability varies across the APS. This policy is designed to unify government’s approach by providing baseline requirements on governance, assurance, and transparency of AI. This will remove barriers to government adoption by giving agencies confidence in their approach to AI and incentivising safe and responsible use for public benefit. The policy provides a unified approach for government to engage with AI confidently, safely, and responsibly, and realise its benefits.”

Read the full report here

The State of Artificial Intelligence in the Pacific Islands

The Pacific Islands are beginning to harness the potential of artificial intelligence (AI) to address their unique challenges, including geographic isolation, vulnerability to natural disasters, and the preservation of cultural heritage. The report, “The State of Artificial Intelligence in the Pacific Islands,” outlines the current landscape, highlighting both the opportunities and challenges in integrating AI across the region. Below are key excerpts that encapsulate the strategic priorities and challenges faced by the Pacific Islands.

AI’s Potential to Transform the Pacific Islands

“AI can improve disaster forecasting and response, enhance healthcare delivery, optimize resource management, and promote sustainable development, thereby creating new opportunities for growth and resilience in the Pacific Islands. However, to fully unlock these benefits, adequate safeguards must be put in place to mitigate the potential risks and downsides of AI.”

Current State of AI Readiness and Governance

“The Pacific Islands region is gradually embracing the potential of AI and digital transformation… However, there is still much work to be done, especially in terms of data protection and privacy laws, which are lacking in many countries in the region. As the region continues to embrace AI and digital transformation, it will be crucial to address these gaps and ensure that the benefits of these technologies are accessible to all citizens.”

Building a Foundation for Responsible AI

“The Pacific Islands have not yet developed systematic and comprehensive government-led AI governance and ethics frameworks, with their efforts primarily directed towards broader digital and ICT initiatives. By examining and adapting the best practices and international efforts of the EU, US, China, regional players, and international organizations, the Pacific Islands can leverage their position as latecomers to selectively adopt advantageous strategies.”

Read the full report here

OpenAI’s Dilemma: Watermarking AI-Generated Content

OpenAI, the company behind ChatGPT, has developed a tool capable of watermarking AI-generated text—a feature that could help the company comply with the EU’s Artificial Intelligence (AI) Act. However, despite the tool’s readiness, OpenAI has hesitated to release it, fearing it could drive users away.

The AI Act, which took effect on 1 August 2024, mandates that AI-generated content, including text, audio, images, and videos, must be marked as artificially generated. This requirement becomes enforceable from 2 August 2026, giving companies like OpenAI time to adapt. Despite this, OpenAI remains cautious about implementing its watermarking tool, which it has reportedly had ready for over a year.

According to sources quoted by the Wall Street Journal, OpenAI’s concerns stem from potential user backlash. A survey conducted by the company revealed that nearly 30% of loyal ChatGPT users would reduce their usage if the watermarking tool were implemented and competitors did not follow suit. Nevertheless, another survey showed that globally, four out of five people support the idea of an AI detection tool.

In a recent blog post, OpenAI confirmed the existence of the watermarking tool, describing it as “highly accurate.” However, the company is still evaluating the potential risks, such as the tool’s impact on non-native English speakers and its vulnerability to circumvention through translation or rewording by other generative models.

The watermarking tool would allow ChatGPT to digitally stamp content, making it easier to identify as AI-generated when checked by AI detection tools. While current AI detection methods can be unreliable, this tool could help mitigate misuse, particularly in academic and professional settings where the integrity of content is critical.
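OpenAI has not disclosed how its watermark works, but one published approach (Kirchenbauer et al.'s "green list" scheme) conveys the general idea: a secret key deterministically marks part of the vocabulary as "green", the generator subtly biases sampling toward green tokens, and a detector flags text containing suspiciously many of them. The sketch below is a simplified illustration of that published technique, not OpenAI's actual method; the key and token-level hashing are illustrative assumptions.

```python
import hashlib

SECRET_KEY = "demo-key"  # hypothetical key; a real system would keep this secret

def is_green(token: str) -> bool:
    # Hash the key together with the token; roughly half of all
    # tokens end up "green" in a way only the key holder can compute.
    digest = hashlib.sha256((SECRET_KEY + token).encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(tokens: list[str]) -> float:
    # A detector's core statistic: ordinary human text should score
    # near 0.5, while watermarked text scores significantly higher.
    return sum(is_green(t) for t in tokens) / len(tokens)

human_tokens = "the quick brown fox jumps over the lazy dog".split()
print(f"green fraction: {green_fraction(human_tokens):.2f}")
```

This also hints at the weaknesses OpenAI cites: paraphrasing or translating the text replaces tokens and washes the green-token bias back toward chance.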

Watermarking is also a key feature mentioned in the draft AI Pact, a voluntary set of commitments that companies can sign to prepare for compliance with the AI Act. The European Commission hopes to formalize these commitments in an event planned for September.

OpenAI has reiterated its commitment to complying with the AI Act but remains cautious about the broader implications of deploying the watermarking tool. As of now, the company continues to explore alternatives while engaging in ongoing debates about the tool’s potential impact.

The Risks of Recursive Data in AI: Understanding Model Collapse

A recent study published in Nature sheds light on a critical issue facing the development of artificial intelligence (AI): the phenomenon of “model collapse” when AI models are trained on recursively generated data. This research explores the long-term implications of using AI-generated content to train future models, revealing how such practices can degrade the quality and reliability of AI systems over time. Below are key excerpts from the study that highlight the core findings and concerns.

The Phenomenon of Model Collapse

“We find that indiscriminate use of model-generated content in training causes irreversible defects in the resulting models, in which tails of the original content distribution disappear. We refer to this effect as ‘model collapse’ and show that it can occur in large language models (LLMs) as well as in variational autoencoders (VAEs) and Gaussian mixture models (GMMs).”

Long-Term Risks of Recursive Training

“Indiscriminately learning from data produced by other models causes ‘model collapse’—a degenerative process whereby, over time, models forget the true underlying data distribution, even in the absence of a shift in the distribution over time. We demonstrate that this process is inevitable if training on recursively generated data continues unchecked.”

Implications for the Future of AI

“Our evaluation suggests a ‘first mover advantage’ when it comes to training models such as LLMs. To sustain learning over a long period of time, we need to ensure that access to the original data source is preserved and that further data not generated by LLMs remain available. Otherwise, it may become increasingly difficult to train newer versions of LLMs without access to data that were crawled from the Internet before the mass adoption of the technology.”
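The degenerative loop the study describes can be illustrated with a toy simulation (an illustrative sketch, not the paper's experimental setup): a "model" that estimates category frequencies from samples, then trains the next generation only on its own output. Rare categories that happen to draw zero samples vanish permanently, mirroring how the tails of the original distribution disappear.

```python
import random
from collections import Counter

def fit(samples: list[int], categories: list[int]) -> dict[int, float]:
    # "Train" a model: estimate each category's probability from data.
    counts = Counter(samples)
    total = len(samples)
    return {c: counts[c] / total for c in categories}

def generate(model: dict[int, float], n: int, rng: random.Random) -> list[int]:
    # Sample synthetic "training data" from the fitted model.
    cats = list(model)
    weights = [model[c] for c in cats]
    return rng.choices(cats, weights=weights, k=n)

rng = random.Random(42)
categories = list(range(20))

# Generation 0: the true distribution is uniform over 20 categories,
# observed through only 100 samples.
data = [rng.randrange(20) for _ in range(100)]

support = []
for gen in range(30):
    model = fit(data, categories)
    support.append(sum(1 for p in model.values() if p > 0))
    # The next generation trains exclusively on the previous model's output.
    data = generate(model, 100, rng)

# Once a category draws zero samples it can never reappear, so the
# number of surviving categories only shrinks across generations.
print("surviving categories per generation:", support)
```

The shrinking support is exactly the "irreversible defect" the authors describe: the loss is one-directional, which is why they stress preserving access to original, human-generated data.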

Read the full report here

South Africa Unveils Its National AI Policy Framework

South Africa’s National AI Policy Framework lays the foundation for the country’s integration of artificial intelligence across various sectors. This framework not only aims to drive economic growth but also ensures that AI is developed and deployed ethically and inclusively. Below are key excerpts that highlight the strategic priorities of this ambitious initiative.

Ethical AI: A Foundation for Trust

“For South Africa to exploit the full potential of AI, the country needs to carefully take into consideration ethical, social, and economic implications, ensuring that AI benefits are broadly shared, and risks are managed effectively. A cornerstone of this framework is the commitment to ethical AI development and use. It integrates comprehensive guidelines to ensure AI systems are transparent, accountable, and designed to promote fairness while mitigating biases. This includes establishing robust data governance frameworks to protect privacy and enhance data security, alongside setting standards for AI transparency and explainability to foster trust among users and stakeholders.”

Bridging the Digital Divide

“The persistent digital divide, characterized by unequal access to technology and education, poses a significant challenge. Bridging this divide is crucial for ensuring equitable AI adoption and benefits. Addressing these historical weights requires deliberate policy interventions that ensure inclusive access to AI benefits. Investments in fundamental digital infrastructure are necessary to bridge the digital divide and enable widespread AI adoption. The framework acknowledges that overcoming these barriers is essential to ensure that AI initiatives are inclusive and equitable, addressing historical disparities and promoting broad access to AI benefits.”

Strategic Investments in Digital Infrastructure

“Investing in supercomputing infrastructure and advanced digital connectivity is essential for creating an environment conducive to AI innovation. The policy framework emphasizes the need for a robust digital infrastructure to support AI research and development. This includes not only the development of supercomputing capabilities but also the expansion of digital connectivity through technologies like 4G, 5G, and high-capacity fiber networks. Such investments are critical to enable the widespread adoption and effective use of AI technologies across various sectors.”

Developing a Skilled and Adaptable Workforce

“We must develop a skilled workforce that can harness the potential of AI. This involves integrating AI education from basic schooling to higher education and creating specialized training and continuous learning programs. The framework calls for fostering partnerships between academia and industry to ensure that AI education is aligned with real-world applications. By cultivating a robust AI talent pool, South Africa aims to ensure that its workforce is well-equipped to drive and sustain AI innovation across all sectors.”

Read the full paper here

Deadline for General-Purpose AI Consultation Extended

The European Commission has granted an extension for the submission of inputs on the Code of Practice for general-purpose AI, following a request from a coalition of eleven tech industry associations. These groups, representing major tech companies from both the EU and the US, argued that the initial six-week consultation period was insufficient, particularly as it fell during the busy summer months. The associations, which include Allied for Startups, the American Chamber of Commerce to the European Union, and national tech organizations from France, Germany, and Poland, called for a minimum two-week extension to allow for more comprehensive feedback.

In their letter dated 8 August, the signatories emphasized the need for a more reasonable timeframe to ensure that the input provided would be meaningful and well-considered. The organizations involved represent industry giants such as Google, Meta, Oracle, Amazon, Microsoft, and Samsung, reflecting the widespread interest and significant stakes in shaping the AI regulations.

Responding to the appeal, the European AI Office has now extended the deadline for submissions to 18 September, providing stakeholders with additional time to contribute their perspectives on the future of AI governance in Europe.

Podcast: Inside the EU AI Act with Gabriele Mazzini

In this episode, we sit down with Gabriele Mazzini, the architect and lead author of the EU AI Act, to explore the intricacies and implications of this groundbreaking legislation.

Discover how the EU AI Act addresses the ethical, legal, and social challenges posed by AI technologies. Gabriele shares expert insights on the Act’s key provisions, its impact on innovation, and what it means for businesses, developers, and consumers.

You can listen to the episode here

Join the waiting list for our New EU AI Act Course

For those looking to gain a deeper understanding of the EU AI Act, AI Ireland has launched a comprehensive course that covers all aspects of the legislation. This course is designed to help you navigate the complexities of the Act, ensuring that your organization remains compliant and informed. To learn more and join the waiting list, click here.

🎤 Are you looking for a speaker in AI, tech & business?

I would welcome the opportunity to:

➵ Give a talk at your company;
➵ Speak at your event;
➵ Coordinate a private AI Bootcamp for your team (15+ people).

Contact us here to find out more.

 

Stay Ahead of AI Regulations: Subscribe to Our EU AI Act Digest Newsletter!

Are you navigating the complexities of the EU AI Act? Don’t miss out on the latest updates, insights, and expert analysis. Subscribe to our EU AI Act Digest Newsletter today and ensure you’re always in the know.

Get the information you need to stay compliant, innovate responsibly, and lead the way in AI. Click below to subscribe and join our community of forward-thinking professionals!

Subscribe Now
