Harry Tucker

Information & Trust

The conversation you're reading isn't real

The information ecosystem has a manufacturing problem. Understanding it changes how you read almost everything.

In March, Israeli Prime Minister Benjamin Netanyahu released three videos attempting to prove he was still alive. Each was independently forensically verified as authentic. None of them were widely believed.

The Netanyahu story, though, is less about fake content being believed and more about what happens after enough of it circulates. Real things stop being trusted too. Researchers have a name for it: the Liar's Dividend.

The comment sections and social feeds most of us use to calibrate our sense of reality have been gradually, systematically stacked. And what gets posted in them doesn't just shape opinion in the moment. It becomes the training data that shapes what AI thinks the world looks like.

There's an industry for this

The infrastructure behind manufactured online consensus is commercial, accessible, and doing a brisk trade.

Fresh Reddit accounts sell for around $5 each. Aged accounts with posting history and genuine-looking community karma go for up to $30. Premium accounts, ten-plus years old with thousands of upvotes, fetch up to $75. Custom TikTok comments start at $4.99.

| Account type | Profile | Typical stats | Price |
| --- | --- | --- | --- |
| Fresh Reddit account | Created in the last 48 hours, no posting history | 0 karma, 0 posts, blank profile | $5 |
| Aged Reddit account | 1–5 years old, cultivated posting history | ~4,000 karma, subreddit participation, comment trail | $30 |
| Premium Reddit account | 10+ years, established community reputation | 24,000+ karma, awards, moderator history | $75 |
| Custom TikTok comment | Written to brief, posted by a real-looking profile | Tailored messaging, targeted placement | $4.99 |
Prices based on publicly documented marketplace listings, 2026. These are retail — bulk rates are lower.

There are agencies that will seed your product into the right subreddits, shape the conversation under a competitor's announcement, or manufacture the appearance of grassroots interest in something nobody is actually talking about yet. And that's before we even get into the fake podcast industry.

Game marketing agency Trap Plan published a blog post in late 2025 openly describing how it deployed around 100 fake "organic-style" Reddit posts to promote a video game, written to look like they came from real players across multiple subreddits. Its CEO called it a success. Reddit users found the post and archived it before the company deleted it. The archive is still up.

In Australia, the ACCC swept 118 social media influencers and found that 81% had made posts raising concerns under Australian Consumer Law, mostly by failing to disclose brand relationships. Victorian company PhotobookShop was hit with the regulator's first-ever financial penalty, a $39,600 fine, for influencer disclosure failures after its contracts literally told influencers not to mention that products were free or that the company had commissioned the content.

Brands aren't the only ones doing this, either. Which brings me back to where I started: the current US-Israel war on Iran has become the starkest illustration yet of AI-assisted information warfare at scale.

The New York Times identified over 110 distinct AI-generated images and videos in the first two weeks of fighting alone. Iran, Israel, and the US all ran coordinated information operations simultaneously. When Iran's embassy in Austria wanted to document a real school bombing that killed more than 170 people, mostly children, the image it posted was AI-generated. The tragedy was real. The photograph was not.

The confusion this creates compounds fast. When users tagged Grok on X to verify videos from the conflict, the AI gave three different verdicts within 24 hours, placing the same footage in Pakistan in 2014, Kabul in 2021, and Iran in 2026.

Same footage — three AI verdicts in 24 hours
- March 18, 09:14 UTC: "This footage appears to show a military strike in Waziristan, Pakistan, likely recorded during coalition operations in 2014." Confidence: 78%
- March 18, 15:47 UTC: "Based on visual analysis, this clip is consistent with footage from Kabul, Afghanistan during the August 2021 evacuation period." Confidence: 82%
- March 19, 08:22 UTC: "This video shows a recent airstrike in Isfahan, Iran, consistent with March 2026 conflict footage." Confidence: 91%
Reconstructed from reporting by Boom Live. The confidence score rose with each contradictory answer.

At a fact-checking webinar this month, one researcher put it plainly:

"We are no longer facing a misinformation problem. We are facing a reality crisis."

Even as I publish this, many people are unsure what to believe about this conflict, including reporting from previously trusted legacy media outlets.

It's working on you more than you realise

Here's the thing that took me a while to fully reckon with. We don't just read comment sections. We use them to calibrate our reality.

When we scroll the replies under a story and see a particular sentiment dominating, our brains don't file it away as "some people think this." They quietly and quickly adjust what we believe most people think. That is just how our social cognition works. But it means that if you can manufacture the dominant sentiment in a comment thread, you can shift how thousands of people privately assess a situation without them ever knowing it happened.

r/technology 4h ago
New study finds Platform X's algorithm amplifies divisive content 3x more than competitors
tech_watcher_99 · 3h
Honestly this study is so flawed. They only looked at US data and the methodology has been debunked in three separate reviews. Classic rage-bait research.
▲ 847
actually_informed · 3h
Can confirm. I work in data science and this kind of p-hacking is everywhere in media studies. The sample size alone should disqualify it.
▲ 612
reasonable_takes · 2h
The researchers behind this have a known agenda. Look into their funding sources before you share this uncritically.
▲ 431
The three comments above were posted within minutes of each other by accounts exhibiting coordinated posting behaviour. The comments below are from unaffiliated users who arrived later.
sarah_k · 1h
Wait, has the methodology actually been debunked? I can't find those reviews anywhere...
▲ 23
j_martinez_real · 45m
Idk, I read the actual paper and the methodology section seems pretty standard. The sample was 14 countries not just the US?
▲ 8
Illustrative reconstruction. Less than 10% of people who read comments actually post them; a small coordinated group can fill the space and set the dominant frame before genuine users arrive.

You can feel this more than you can measure it. The first comments especially set the tone for a thread and everything that follows. Only a small fraction of people who read comments actually post them, so a small, organised group can fill that space and create the appearance of overwhelming consensus while everyone else reads along and quietly updates their priors.
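The arithmetic behind that dynamic is easy to sketch. Here is a toy Python simulation of it; every number in it (the 1% posting rate, the group size, the vote counts) is an illustrative assumption, not a measured value, but it shows how a group of 15 coordinated accounts can own the visible top of a thread that 10,000 people read:

```python
import random

def simulate_thread(readers=10_000, post_rate=0.01, coordinated=15, seed=0):
    """Toy model of a comment thread. A coordinated group posts first and
    upvotes itself; genuine readers post rarely and accrue modest organic
    scores. Returns the share of the top-10 visible comments that are
    coordinated. All parameters are illustrative assumptions."""
    random.seed(seed)
    # Coordinated comments arrive early with ring-inflated vote counts.
    comments = [("coordinated", random.randint(300, 900)) for _ in range(coordinated)]
    # Of 10,000 readers, roughly 1% ever comment; their votes stay organic.
    genuine = sum(1 for _ in range(readers) if random.random() < post_rate)
    comments += [("genuine", random.randint(1, 50)) for _ in range(genuine)]
    # Platforms surface the highest-scored comments first.
    top10 = sorted(comments, key=lambda c: c[1], reverse=True)[:10]
    return sum(1 for kind, _ in top10 if kind == "coordinated") / 10

print(simulate_thread())  # → 1.0: every top-10 slot is coordinated
```

Around 100 genuine commenters show up in this toy thread, but because the coordinated ring controls the early vote counts, they occupy all ten slots a casual reader ever sees.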

A communications researcher at the University of Georgia put the economics of this plainly: for shifting public opinion, a well-placed comment is often more effective than a website or an ad campaign.

What this means when you're reading the news

The social reaction cited in news stories as evidence of what "people are saying" is often drawn from the same platforms where this manufactured consensus operates. A trending hashtag referenced as public sentiment might have been seeded by a network of bots or a coordinated campaign. The "community reaction" screenshot embedded in an article might be showing you a stage, not a crowd.

One of the most documented cases of this going wrong at scale is the Russian Internet Research Agency operation. Research found that IRA accounts appeared in 32 of 33 major US news outlets examined, cited as genuine public opinion. In roughly 70% of cases, a fake account designed to impersonate an ordinary American was being presented as one. The fakes had names, posting histories, the appearance of legitimacy.

Journalists aren't the only ones caught in this. Seventy percent of Australian journalists say they use social media as a source for stories, but so does anyone trying to understand what's happening in the world. Those habits developed before the scale of this problem was understood, and the tools for navigating it are still catching up.

Australians are, at least, aware something is off. The 2025 Digital News Report found we top the global list for concern about what's real or fake online, with 75% of us saying we worry about it, well above the global average of 54%.

Trust in news has fallen to 32%, down from 40% a decade ago. And yet social media overtook news websites as our main source of news in 2025. For Gen Z, two-thirds now rely on social platforms as their primary news feed, a jump of 17 percentage points in a year.

Then this all goes into AI

This is where it stops being about individual stories or products and becomes something harder to unwind.

Reddit struck a $60 million-per-year licensing deal with Google in 2024 and a similar arrangement with OpenAI worth roughly $70 million annually. Reddit's billion posts and 16 billion comments are now core training material for the major language models. According to analytics firm Profound AI, Reddit content accounts for roughly 40% of all citations across major LLMs, triple the share of Wikipedia.

Share of citations across major language models, by source:

- Reddit: ~40%
- Wikipedia: ~13%
- News sites: ~11%
- Other: ~36%
Source: Profound AI analysis. Reddit is the single largest citation source for major LLMs — triple Wikipedia's share.

But around 15% of Reddit posts are now likely AI-generated, up 146% since 2021, and reaching 33 to 45% in marketing-adjacent subreddits. A significant share of what's being licensed to AI companies as authentic human conversation is already what the industry calls "synthetic data": machine-generated content that ends up in the training corpus. The coordinated brand posts, the astroturfed recommendations, the bot-amplified sentiment, all of it goes in.

When AI systems train heavily on manipulated content, researchers have found the damage compounds. Minority perspectives disappear first. The voices most likely to be drowned out by a coordinated comment campaign are exactly the ones that get erased from how the model understands the world. What remains is the manufactured consensus, laundered through training, and eventually returned to us as a summary of what people generally think.

How manufactured opinion becomes AI training data:

1. Comment is placed: a paid account or bot posts a review, opinion, or recommendation designed to look organic.
2. Amplified to consensus: upvoted by coordinated networks until it reads as the dominant or popular position.
3. Licensed as human data: Reddit sells its corpus to Google ($60M/yr) and OpenAI ($70M/yr) as authentic conversation.
4. Absorbed during training: the language model ingests manufactured sentiment as representative of what people think.
5. Returned as truth: AI surfaces laundered opinion back to users as a summary of public consensus.
Each step is individually documented in this piece. Together they form a pipeline that launders manufactured opinion into machine-learned "truth" — and it runs continuously.

There's now a commercial industry built around exploiting this deliberately. Companies openly offer services to shape how AI models describe brands, advising clients that positive, high-upvote mentions serve as training signals during model development. The comment placed today to shift your opinion might equally be placed to shape what AI tells the next thousand people who ask about that brand. Those aren't separate goals.

So where does that leave us?

It might not sound that comforting, but we're still working it out.

Knowing how some of this machinery works doesn't make you immune to it. The habits run deep, and the manufactured content is hard to tell from the real thing. We're all navigating this with the same imperfect instincts.

The Netanyahu videos are a useful reminder of where this ends up.

The infrastructure for manufacturing fake reality has run long enough, and at enough scale, that even authenticated things have stopped being trusted. That outcome probably isn't accidental.

The newsroom being everywhere, which I wrote about recently, is still true. What's becoming clearer is that not everyone in that expanded newsroom is a journalist. Some of them are paid to be there.


Harry Tucker writes about how infrastructure, technology and information systems actually work. Who benefits, and who pays.
