Italy Ends Probe Into DeepSeek Over AI Hallucination Warnings
AGCM Investigation Background
Italy’s antitrust authority, the AGCM, has closed its investigation into the Chinese AI service DeepSeek. The probe, opened last June, examined whether the platform adequately warned users about the risk of its generating false or misleading information, commonly called AI “hallucinations.”
Commitments From DeepSeek
DeepSeek, operated jointly by Hangzhou DeepSeek Artificial Intelligence and Beijing DeepSeek Artificial Intelligence, offered binding commitments to address the regulator’s concerns. The measures center on clearer disclosures to users about the possibility of AI-generated errors.
Improving Transparency
According to the AGCM, the commitments make warnings about hallucinations clearer, more immediate, and easier to understand, so that users are better informed about potential inaccuracies in AI outputs.
Official Announcement
The AGCM published the decision in its weekly bulletin on Monday. By accepting DeepSeek’s commitments, the authority closed the case without imposing penalties.
Why It Matters
AI “hallucinations” can mislead users who take outputs at face value. Clear, prominent warnings help users interact with AI more safely and support responsible AI deployment in Europe.
