The European Commission has launched a comprehensive investigation into the risks posed by generative AI technologies on some of the world’s largest online platforms and search engines.
The March 14 request targets eight online services: Google Search, Microsoft Bing, Facebook, X, Instagram, Snapchat, TikTok and YouTube.
According to the European Commission, the questions concern both the creation and the dissemination of generative AI content.
These platforms were asked to provide detailed information about their risk-management measures, particularly those addressing AI “hallucinations,” the proliferation of deepfakes, and the automated manipulation of content that could mislead voters.
The Commission’s investigation extends to a wide range of concerns, including the impact of generative AI on election integrity, the spread of illegal content, the protection of fundamental rights, gender-based violence, child protection, and mental health. The request covers both the creation and the distribution of AI-generated content.
The focus on election issues follows, in part, the Commission’s broader efforts to mitigate risks posed by the rise of AI, including the introduction of the Digital Services Act (DSA).
The DSA requires Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) to comply with comprehensive rules designed to prevent the dissemination of illegal content and to mitigate negative effects on fundamental rights, electoral processes, mental health, and child protection.
Response deadlines in April
Each service must provide the requested election-related information by April 5 and information on the remaining categories by April 26.
Failure to provide accurate, complete, and transparent information may result in significant penalties; the Commission has stressed its power to impose fines for inaccurate, incomplete, or misleading answers.
Additionally, if a platform fails to respond within the stipulated period, the Commission may compel compliance through a formal decision, potentially exposing the platform to further financial penalties.
The action marks an important step in the enforcement of the DSA and underscores the EU’s commitment to mitigating the risks associated with digital technologies and ensuring a safe online environment.
The news comes months after reports on a separate EU initiative, the Artificial Intelligence Act, which would ban certain biometric applications of AI while allowing exceptions for law enforcement.