Sid Venkataramakrishnan is a digital sociologist and former Financial Times reporter.
Much ink has been spilt and many keys pressed to figure out whether AI is a bubble. Just last month, OpenAI’s ersatz Ghibli took X by storm. A “big net win for society”, as head honcho Sam Altman described it, and for the “democratisation of creating content” which Hayao Miyazaki and other dastardly animators spent so long gatekeeping.
One hint that we might just be stuck in a hype cycle is the proliferation of what you might call “second-order slop” or “slopaganda”: a tidal wave of newsletters and X threads expressing awe at every press release and product announcement to hoover up some of that sweet, sweet advertising cash.
That AI companies are actively patronising and fanning a cottage economy of self-described educators and influencers to bring in new customers suggests the emperor has no clothes (and six fingers).
There are an awful lot of AI newsletters out there, but the two that kept appearing in my X ads were Superhuman AI, run by Zain Kahn, and Rowan Cheung’s The Rundown. Both claim to have more than a million subscribers — an impressive figure, given that the FT, as of February, had 1.6mn subscribers across its newsletters.
If you actually read the AI newsletters, it becomes harder to see why anyone’s staying signed up. They offer a simulacrum of tech reporting, with deeper insights or scepticism stripped out and replaced with techno-euphoria. Often they resemble the kind of press release summaries ChatGPT could have written.
[Image: Corporate needs you to find the differences between these two images]
Yet AI companies apparently see enough upside to put money into these endeavours. In a 2023 interview, Kahn claimed that advertising spots on Superhuman pull in “six figures a month”. It currently costs $1,899 for a 150-character write-up as a featured tool in the newsletter.

The Rundown hasn’t discussed its revenue breakdown. But if you want evidence of Big Tech’s favour, look no further than Cheung’s 35-minute interview with Mark Zuckerberg from 2024. It’s impressively Rogan-esque in its refusal to pose anything approaching a tough question.
Kahn and Cheung did not respond to requests for comment.
When I asked for his view on the AI newsletter economy, Ed Zitron — writer, PR man and scourge of slop — was characteristically excoriating.
“It’s content built for people who don’t really read or listen or know stuff . . . It’s propaganda under a different name,” he said. Zitron also expressed doubts about seven-figure subscriber counts, saying that it’s easy enough to juice metrics by buying a list of email addresses.
It’s not just a noted critic of the “AI eats the world” train who’s less than impressed.
“These are basically content slop on the internet and adding very little upside on content value,” a data scientist at one of the Magnificent Seven told me. “It’s a new version of the Indian ‘news’ regurgitation portals which have gamified the SEO and SEM [search engine optimisation and marketing] playbook.”
But newsletters are only the cream of the crop of slopaganda. X now teems with AI influencers willing to promote AI products for minimal sums (the lowest pricing I got was $40 a retweet). Most say they’re in Bangladesh, with a smattering of accounts claiming to be based in Australia or Europe. In apparent contravention of X’s paid partnerships policy, none disclose when they’re getting paid to promote content.
High follower counts are similarly suspicious. In at least some cases, it’s the same people operating multiple accounts: a landing page for a smaller newsletter, 80/20 AI, mentions three X profiles with different names promoting the same content, in violation of the platform’s guidelines on authentic content.
When I asked 80/20’s founder Alamin Hossain about this, he responded “Can I know, why do you ask?” and then went silent. X did not respond to a request for comment on whether these accounts broke its rules.
And yet AI firms seem more than happy to pay for unethical spam to sell their products. APOB AI, one of a million generic image generators, has a whole Google Doc of suggested posts to copy in multiple languages and across different platforms.
“I made an AI Instagram influencer in just 60 seconds. Now it’s generating $5,000+ a month,” reads a recommended tweet on X.
Except they didn’t make anything: the accompanying images of two women are taken from elsewhere. The post has received more than 420,000 views since last October.
“That was created with AI,” the account’s owner told me when I asked about the theft. When presented with evidence, he claimed he had not generated the images himself but had simply taken them from two other accounts. A reverse image search shows that yet another AI page used the same photos.
In its own way, slopaganda exposes that AI’s emblem is not the Shoggoth but the Ouroboros. It’s a circle of AI firms, the VCs backing those firms, talking shops made up of those firms’ employees, and a long tail of hangers-on: content creators, newsletter writers and ‘marketing experts’ willing to say anything for cash.
“When these people pop up around an industry, it should be a sign that this is a worrisome bubble or that there are people actively looking to exploit it,” said Zitron.