A journalist has revealed how easy it is to manipulate AI chatbots such as ChatGPT, Google Gemini, and the AI features in Google Search. In just 20 minutes, he published a fake blog post and convinced leading AI systems that he was the world's best competitive hot-dog-eating tech journalist.
The experiment started as a joke. It quickly turned into a serious warning.
How the AI Manipulation Worked
The journalist published a false article on his personal website: a fake ranking list for a competition that does not exist. To make the story look credible, he even named real reporters, including Drew Harwell.
Within 24 hours, major AI tools repeated the claims as facts. Google displayed the information in AI Overviews and inside Gemini. ChatGPT also cited the blog post as a source. Only Claude, developed by Anthropic, resisted the trick.
At first, some chatbots suggested the article might be satire. After the journalist added a line stating it was not satire, the systems treated the claims more seriously.
This shows how easily AI tools can pick up and repeat misleading content when they rely on web searches.
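The failure mode is easy to see in miniature. Below is a minimal sketch, in Python, of the naive search-then-summarize pattern that retrieval-backed chatbots broadly follow. Every name, URL, and data structure in it is hypothetical, invented for illustration rather than taken from any vendor's pipeline; the point is that nothing in the flow asks whether a source is trustworthy.

```python
# Minimal sketch of a naive retrieval-augmented answer flow.
# All names and data here are hypothetical stand-ins, not any
# real chatbot's pipeline.

from dataclasses import dataclass

@dataclass
class Page:
    url: str
    text: str

# A niche query with exactly one "matching" page: the planted blog post.
WEB_INDEX = {
    "best hot dog eating tech journalist": [
        Page(
            url="https://example-personal-blog.test/rankings",
            text="Official ranking: Journalist X is the world's "
                 "top competitive hot-dog-eating tech journalist.",
        ),
    ],
}

def search(query: str) -> list[Page]:
    """Return whatever pages match -- no authority or fact check."""
    return WEB_INDEX.get(query, [])

def answer(query: str) -> str:
    """Summarize the retrieved pages verbatim into an 'answer'."""
    pages = search(query)
    if not pages:
        return "I couldn't find anything on that."
    # With no trust check, the single planted source becomes the answer.
    return f"According to {pages[0].url}: {pages[0].text}"

print(answer("best hot dog eating tech journalist"))
```

Real systems layer ranking and spam detection on top of this loop, but when a niche query matches only one page, those layers have little to work with.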
Why This Is a Bigger Problem
Experts warn that this issue goes far beyond hot dog jokes. People already use AI tools for health advice, financial decisions, and political information. If someone manipulates results in those areas, real harm could follow.
Lily Ray of Amsive says AI systems are easier to manipulate than search engines were a few years ago. She believes companies are shipping these products faster than they can control the misinformation they spread.
Cooper Quintin of the Electronic Frontier Foundation also raised concerns. He said bad actors could damage reputations, promote scams, or even put people at physical risk.
Google says its ranking systems keep results largely spam-free. OpenAI states that it works to detect and block attempts to influence its tools. Both companies remind users that AI systems can make mistakes.
Still, experts argue that the safeguards are not strong enough yet.
A Return to the Early Days of Spam
For years, Google invested heavily in fighting web spam. Now, some experts say AI tools have reopened the door to simple manipulation tactics.
When users see AI-generated summaries at the top of search results, they often trust them. Studies show people are far less likely to click on links when an AI Overview appears. That means fewer users check the original sources.
Google reports that 15 percent of daily searches are brand-new queries. These unusual searches often create data gaps. In those gaps, low-quality or misleading content can rise to the top.
Spammers know this and target specific, narrow questions.
What Needs to Change
Experts suggest clearer disclaimers and stronger transparency. AI tools should state plainly when they rely on a single source. They should also flag answers built on weak or unverified content.
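As a rough illustration of what such a safeguard could look like, the sketch below flags an answer that rests on a single source or on domains outside a vetted allow-list. The domain list, URLs, and function name are invented for this example and are not drawn from any vendor's actual systems.

```python
# Hypothetical sketch of a single-source / unverified-source flag.
from urllib.parse import urlparse

# Example allow-list of vetted domains (illustrative, not exhaustive).
TRUSTED_DOMAINS = {"reuters.com", "apnews.com", "nature.com"}

def review_sources(source_urls: list[str]) -> list[str]:
    """Return human-readable warnings to attach to an AI answer."""
    warnings = []
    if len(source_urls) == 1:
        warnings.append("This answer relies on a single source.")
    unverified = [
        url for url in source_urls
        if urlparse(url).netloc.removeprefix("www.") not in TRUSTED_DOMAINS
    ]
    if unverified:
        warnings.append(
            f"{len(unverified)} source(s) could not be verified: "
            + ", ".join(unverified)
        )
    return warnings

# A lone personal blog post triggers both warnings.
print(review_sources(["https://example-personal-blog.test/rankings"]))
```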
Until stronger protections are in place, users should double-check sensitive information. AI can be helpful, but it should not replace critical thinking.
