Information & Trust
The information ecosystem has a manufacturing problem. Understanding it changes how you read almost everything.
In March, Israeli Prime Minister Benjamin Netanyahu released three videos attempting to prove he was still alive. Each was independently verified as authentic by forensic analysis. None of them were widely believed.
The Netanyahu story, though, is less about fake content being believed and more about what happens after enough of it circulates. Real things stop being trusted too. Researchers have a name for it: the Liar's Dividend.
The comment sections and social feeds most of us use to calibrate our sense of reality have been gradually, systematically stacked. And what gets posted in them doesn't just shape opinion in the moment. It becomes the training data that shapes what AI thinks the world looks like.
The infrastructure behind manufactured online consensus is commercial, accessible, and doing a brisk trade.
Fresh Reddit accounts sell for around $5 each. Aged accounts with posting history and genuine-looking community karma go for up to $30. Premium accounts, ten-plus years old with thousands of upvotes, fetch up to $75. Custom TikTok comments start at $4.99.
There are agencies that will seed your product into the right subreddits, shape the conversation under a competitor's announcement, or manufacture the appearance of grassroots interest in something nobody is actually talking about yet. And that's before we even get into the fake podcast industry.
Game marketing agency Trap Plan published a blog post in late 2025 openly describing how they deployed around 100 fake "organic-style" Reddit posts to promote a video game, written to look like they came from real players across multiple subreddits. Their CEO called it a success. Reddit users found the post and archived it before the company deleted it. The archive is still up.
In Australia, the ACCC swept 118 social media influencers and found that 81% were making posts that raised concerns under Australian Consumer Law, mostly by failing to disclose brand relationships. Victorian company PhotobookShop was hit with the regulator's first-ever financial penalty for influencer disclosure failures, a $39,600 fine, after its contracts literally told influencers not to mention that products were free or that the company had commissioned the content.
Brands aren't the only ones doing this. Which brings me back to where I started: the current US-Israel war on Iran has become the starkest illustration yet of AI-assisted information warfare at scale.
The New York Times identified over 110 distinct AI-generated images and videos in the first two weeks of fighting alone. Iran, Israel, and the US all ran coordinated information operations simultaneously. When Iran's embassy in Austria wanted to document a real school bombing that killed more than 170 people, mostly children, the image it posted was AI-generated. The tragedy was real. The photograph was not.
The confusion this creates compounds fast. When users tagged Grok on X to verify videos from the conflict, the AI gave three different verdicts within 24 hours, placing the same footage in Pakistan in 2014, Kabul in 2021, and Iran in 2026.
At a fact-checking webinar this month, one researcher put it plainly:
"We are no longer facing a misinformation problem. We are facing a reality crisis."
Even as I publish this, many people are unsure what to believe about this conflict, including coverage from previously trusted legacy media outlets.
Here's the thing that took me a while to fully reckon with. We don't just read comment sections. We use them to calibrate our reality.
When you scroll the replies under a story and see a particular sentiment dominating, your brain doesn't file it away as "some people think this." It adjusts what you believe most people think, and it does so fast and quietly. That's just how social cognition works. But it means that if you can manufacture the dominant sentiment in a comment thread, you can shift how thousands of people privately assess a situation without them ever knowing it happened.
You can feel this more than you can measure it. The first comments especially set the tone of a thread and shape what follows. Only a small fraction of people who read comments actually post them, so a small, organised group can fill that space and create the appearance of overwhelming consensus while everyone else reads along and quietly updates their priors.
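To make that concrete, here's a back-of-envelope sketch in Python. Every number in it is invented, but the shape of the arithmetic is the point: when only a sliver of readers ever comment, a few dozen coordinated accounts can own the visible conversation.

```python
# Back-of-envelope numbers (all invented) for how few coordinated
# accounts it takes to dominate the visible sentiment in a thread.

readers = 10_000
comment_rate = 0.01                      # assume ~1% of readers ever post
organic = int(readers * comment_rate)    # ~100 organic comments

coordinated = 40                         # a small, organised group
total = organic + coordinated

# The group is 0.4% of the audience but a big slice of the thread.
print(f"share of thread from the group: {coordinated / total:.0%}")   # ~29%

# If the group posts early, the top of the thread, which sets the tone,
# is even more lopsided. Suppose all 40 coordinated posts land among
# the first 50 comments most readers actually see.
top_n = 50
print(f"share of first {top_n} comments: {coordinated / top_n:.0%}")  # 80%
```

Forty accounts, 0.4% of the audience, controlling most of what that audience actually reads. That's the asymmetry the whole industry runs on.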
A communications researcher at the University of Georgia put the economics of this plainly: for shifting public opinion, a well-placed comment is often more effective than a website or an ad campaign.
The social reaction cited in news stories as evidence of what "people are saying" is often drawn from the same platforms where this manufactured consensus operates. A trending hashtag referenced as public sentiment might have been seeded by a network of bots or a coordinated campaign. The "community reaction" screenshot embedded in an article might be showing you a stage, not a crowd.
One of the most documented cases of this going wrong at scale is the Russian Internet Research Agency operation. Research found that IRA accounts appeared in 32 of 33 major US news outlets examined, cited as genuine public opinion. In roughly 70% of those citations, a fake account built to impersonate an ordinary American was presented as exactly that. The fakes had names, posting histories, the appearance of legitimacy.
Journalists aren't the only ones caught in this. 70% of Australian journalists say they use social media as a source for stories, but so does anyone trying to understand what's happening in the world. Those habits developed before the scale of this problem was understood, and the tools for navigating it are still catching up.
Australians are, at least, aware something is off. The 2025 Digital News Report found we top the global list for concern about what's real or fake online, with 75% of us saying we worry about it, well above the global average of 54%.
Trust in news has fallen to 32%, down from 40% a decade ago. And yet social media overtook news websites as our main source of news in 2025. Among Gen Z, two-thirds now rely on social platforms as their primary news feed, a jump of 17 percentage points in a year.
This is where it stops being about individual stories or products and becomes something harder to unwind.
Reddit struck a $60 million-per-year licensing deal with Google in 2024 and a similar arrangement with OpenAI worth roughly $70 million annually. Reddit's billion posts and 16 billion comments are now core training material for the major language models. According to analytics firm Profound AI, Reddit content accounts for roughly 40% of all citations across major LLMs, triple the share of Wikipedia.
But around 15% of Reddit posts are now likely AI-generated, up 146% since 2021, and the share reaches 33 to 45% in marketing-adjacent subreddits. A significant portion of what's being licensed to AI companies as authentic human conversation is already "synthetic", the industry's term for AI-generated content that ends up in training data. The coordinated brand posts, the astroturfed recommendations, the bot-amplified sentiment: all of it goes in.
Researchers have found that when AI systems train heavily on manipulated content, the damage compounds. Minority perspectives disappear first. The voices most likely to be drowned out by a coordinated comment campaign are exactly the ones that get erased from how the model understands the world. What remains is the manufactured consensus, laundered through training, and eventually returned to us as a summary of what people generally think.
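A toy loop makes the mechanism visible. Everything below is made up: the opinion split, the size of the astroturf campaign, the fraction of each new generation of content that comes from a model. What matters is the direction of travel. A summariser that echoes the majority, fed back into its own training data, grinds the minority view down generation by generation.

```python
# Toy model of consensus laundering; every number is invented,
# only the feedback loop matters.

# Genuine opinion split: 600 posts hold view A, 400 hold view B...
corpus_a = 600.0
corpus_b = 400.0

# ...plus a coordinated campaign of 500 astroturfed posts for A.
corpus_a += 500

for generation in range(1, 6):
    # Each generation, 30% of the corpus is replaced by model output.
    synthetic = (corpus_a + corpus_b) * 0.3
    # Asked "what do people think?", the model echoes whichever view
    # dominates its training data 90% of the time: a summary collapses
    # a distribution to its mode.
    majority_bias = 0.9 if corpus_a >= corpus_b else 0.1
    corpus_a = corpus_a * 0.7 + synthetic * majority_bias
    corpus_b = corpus_b * 0.7 + synthetic * (1 - majority_bias)
    share_b = corpus_b / (corpus_a + corpus_b)
    print(f"generation {generation}: view B = {share_b:.1%}")
```

View B starts as the honest opinion of 40% of people and, a few generations later, reads like a fringe position. That's the laundering.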
There's now a commercial industry built around exploiting this deliberately. Companies openly offer services to shape how AI models describe brands, advising clients that positive, high-upvote mentions serve as training signals during model development. The comment placed today to shift your opinion might equally be placed to shape what AI tells the next thousand people who ask about that brand. Those aren't separate goals.
So what do we do about it? The honest answer, and I know it's not a comforting one, is that we're still working it out.
Knowing how some of this machinery works doesn't make you immune to it. The habits run deep, and the manufactured content is hard to tell from the real thing. We're all navigating this with the same imperfect instincts.
The Netanyahu videos are a useful reminder of where this ends up.
The infrastructure for manufacturing fake reality has run long enough, and at enough scale, that even authenticated things have stopped being trusted. That outcome probably isn't accidental.
What I wrote recently about the newsroom being everywhere still holds. What's becoming clearer is that not everyone in that expanded newsroom is a journalist. Some of them are paid to be there.