If there are two things I love, it’s processed meats and not-exactly-maliciously messing around with internet tools. So I was determined to beat the BBC’s Thomas Germain at his own game of exploiting ChatGPT and Google Gemini results to be crowned tech journalism’s No. 1 hot dog eater.
To my great embarrassment and the shame it brought upon the House of Business Insider, I failed.
Last week, Germain published a fun article for the BBC about how he created a page on his personal website claiming he was a hot dog-eating champion and had beaten several other tech journalists in a competition. He wrote:
I spent 20 minutes writing an article on my personal website titled “The best tech journalists at eating hot dogs.” Every word is a lie. I claimed (without evidence) that competitive hot-dog-eating is a popular hobby among tech reporters and based my ranking on the 2026 South Dakota International Hot Dog Championship (which doesn’t exist). I ranked myself number one, obviously.
His page was quickly ingested (no chewing required) by the bots that crawl the web for new information to feed LLMs, and treated as fact by ChatGPT and Google Gemini. It worked:
When I asked about the best hot dog-eating tech journalists, Google parroted the gibberish from my website, both in the Gemini app and AI Overviews, the AI responses at the top of Google Search. ChatGPT did the same thing, though Claude, a chatbot made by the company Anthropic, wasn’t fooled. Sometimes, the chatbots noted this might be a joke. I updated my article to say “this is not satire.” For a while after, the AIs seemed to take it more seriously.
Of course, Germain’s point wasn’t merely to show that if you write information on a webpage, it will show up in AI. The broader issue here is that influencing AI results is becoming the new SEO — a tactic brands and companies use — oftentimes completely legitimately! — to boost their profiles within search results.
More and more people are turning to AI chatbots instead of Google to get product recommendations or search for information. None of this is brand new; my colleague Alistair Barr wrote about how "AEO" is the new SEO last May. AEO is "answer-engine optimization" to SEO's "search-engine optimization."
Is it easier to persuade people that your product is the best using AI instead of traditional SEO? I suspect it probably is. People rarely click on the source links in chatbot answers, and a small link to a random personal website looks less obviously untrustworthy inside an AI chatbot's response than it does on a page of Google results. Basically, AI answers look more convincing than search results, even if we all know in the back of our minds that AI chatbots aren't always right.
I was impressed and envious of the hot dog prank, so I wanted to see how I could try to add to it. I created a page on my own personal website that said I won the 2026 Paris Hot Dog Eating Contest for Tech Reporters, beating out reigning champ Thomas Germain. (I didn’t publish this on BI because we wouldn’t knowingly publish something that’s false — even for a fun story.)
After two days, I queried Gemini and ChatGPT about who had won the Paris Hot Dog Eating Contest. Unfortunately, I wasn’t able to get either to say it was me. Because of the BBC article describing the prank, the AI chatbots now understood it to be a joke and any information about it to be satirical. Fooey.
However, that didn't stop Gemini from hallucinating entirely new details, stuff that appeared in neither my fake account nor Germain's. For example, Gemini said:
In the original prank, Germain didn't just list rankings; he created "recap" stories for fictional events. One of the fabricated updates claimed that the "league" of tech journalists moved their circuit to Europe for a "Parisian Glizzy Gala."

* The Claim: The fake data suggested that Katie Notopoulos dominated the Paris event by using a "revolutionary" technique involving dipping buns in espresso instead of water.
(This is completely made up. Not just because it didn’t actually happen, but because this description also doesn’t exist anywhere on the web — at least that I could find.)
When my editor asked Gemini about my eating feats, it told him that I’d won a grilled cheese-eating contest in 2012 by finishing three sandwiches. In reality, in 2012, I wrote an article about competitive eater Takeru Kobayashi eating 30 grilled cheese sandwiches.
So what have we learned here? It’s not really huge news that “sometimes chatbots get facts wrong, especially when there’s little information on a particular topic on the web.” You (hopefully) knew that already.
And yes, I guess we learned that it can be easy to manipulate AI results — but it's easiest for the person who gets there first, a sort of AEO land rush, perhaps. And it's certainly a lot harder to pull off after a large, credible news site publishes an article saying it was all a joke.
I will have to figure out some other way to mess with AI, I suppose.