PsyOps 2.0: How Bots Exploit Human Bias to Control Online Narratives
The battleground for public opinion has shifted. No longer confined to political ads or televised debates, influence operations now unfold quietly in comment sections, viral threads, and trending hashtags. At the center of this shift are fake accounts, bots, and troll profiles designed not just to spread misinformation but to manipulate how we think.
These accounts don’t rely on brute force or obvious spam. Instead, they exploit the psychological shortcuts—known as cognitive biases—that shape our perception of truth and trust online. From echoing what we already believe to mimicking social consensus, bots have learned how to game human behavior at scale.
This is PsyOps 2.0: a new era of psychological operations where the targets are not just systems but minds. In this article, we’ll explore how fake accounts are engineered to exploit our mental habits and how their influence is quietly shaping the narratives that define our world.
The Rise of Bots and Digital Manipulation
The use of automated accounts, also known as bots, has evolved dramatically over the past decade. Originally deployed to automate simple tasks like retweeting or boosting visibility, bots have become far more sophisticated in both design and purpose. Today, they’re part of coordinated information campaigns used by state actors, political movements, and even private interest groups to influence public discourse at scale.
According to a 2020 report by the Oxford Internet Institute, organized social media manipulation campaigns were identified in 81 countries, with bots playing a central role in spreading propaganda, amplifying polarizing content, and drowning out dissenting voices. These aren’t just spammy accounts with random usernames; they often mimic real users, complete with profile pictures, bios, and consistent posting habits. If you’ve spent any meaningful time on social media, you’ve undoubtedly come across some of the less sophisticated ones.
Platforms like X (formerly Twitter), Facebook, and TikTok have all acknowledged the presence of bot networks and foreign interference campaigns. Despite efforts to detect and remove them, many bots continue to evade detection by blending in with real users and adapting their behavior based on engagement patterns.
What makes this new generation of bots particularly dangerous is not just their volume but their ability to manipulate perception. They don’t just share content; they shape conversations, reinforce biases, and manufacture the illusion of consensus. And increasingly, they’re doing it with the help of AI. If you are interested in learning more, the BBC link below has a good overview of bots and how they spread fake news.

The Psychology Behind the Manipulation
The success of online manipulation campaigns doesn’t rely on advanced technology alone—it depends on our minds. Bots and fake accounts are designed to exploit well-documented cognitive biases, the mental shortcuts all humans use to make sense of complex information. I've talked about a number of these biases in previous articles, but I think it is worth going over them again.
One of the most commonly targeted is confirmation bias, our tendency to favor information that aligns with our preexisting beliefs. When bots amplify content that reinforces a user’s worldview, it feels more truthful, simply because it "fits." This tactic has been observed in multiple disinformation campaigns, including those documented during the 2016 and 2020 US elections. A Stanford Internet Observatory report noted that accounts linked to foreign influence operations often tailored messages to appeal to deeply held political identities, ensuring they would be accepted and shared without scrutiny.
Another powerful tool is social proof, the psychological principle that people tend to follow the behavior of others, especially in uncertain situations. When a tweet receives thousands of likes or retweets, it appears more credible, regardless of its factual accuracy. Researchers from Indiana University found that social bots were responsible for a disproportionate share of retweeting activity during key political events, giving fringe ideas the appearance of mainstream support (source).
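To get a feel for how little automation it takes to manufacture that impression, here is a minimal sketch in Python. The numbers are entirely hypothetical and are not drawn from the Indiana University study; the point is simply that a small, coordinated network posting in lockstep can account for most of a post’s visible engagement.

```python
# Toy simulation with hypothetical numbers: a small bot network can produce
# the bulk of a fringe post's retweets, manufacturing apparent consensus.
import random

random.seed(42)

NUM_HUMANS = 10_000          # assumed audience of real accounts
NUM_BOTS = 300               # assumed coordinated bot accounts (~3% of all accounts)
HUMAN_RETWEET_PROB = 0.01    # assumed chance a real user retweets a fringe post

human_retweets = sum(random.random() < HUMAN_RETWEET_PROB for _ in range(NUM_HUMANS))
bot_retweets = NUM_BOTS      # every bot in the network is scripted to retweet

total = human_retweets + bot_retweets
print(f"Human retweets: {human_retweets}")
print(f"Bot retweets:   {bot_retweets}")
print(f"Bots are {NUM_BOTS / (NUM_HUMANS + NUM_BOTS):.1%} of accounts "
      f"but produce {bot_retweets / total:.1%} of the retweets.")
```

Under these assumptions, roughly three percent of the accounts end up generating about three quarters of the retweets, and a casual reader scrolling past the engagement counter has no way to tell the difference.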
Authority bias is also routinely exploited. Some bots impersonate experts or verified accounts to lend legitimacy to their messages. In 2019, Facebook removed fake pages that posed as media outlets and journalists as part of a coordinated effort to mislead users in the Middle East and North Africa (Facebook Report, August 2019).
Then there’s the availability heuristic, our tendency to judge how true or common something is by how easily it comes to mind. Repetition breeds familiarity, and bots are used to repeat the same messages across platforms, increasing their visibility and perceived legitimacy.
These psychological vulnerabilities aren’t new. What’s changed is the scale and precision with which they’re being targeted. With the help of AI and data analytics, manipulators can now tailor their strategies to specific demographics, interests, and emotional triggers, turning cognitive bias into a powerful weapon.
Inside the Influence Machine: How Campaigns Are Engineered
You might think there is no rhyme or reason behind how bots emerge, but you would be wrong. Behind every coordinated bot network is a carefully structured operation, often blending automation, human oversight, and data-driven targeting. These aren’t random acts of trolling; they’re systematized campaigns built to manipulate attention, emotion, and belief at scale.
Many of these operations begin with data harvesting. Public social media profiles provide a wealth of information: political leanings, interests, location, and even emotional vulnerabilities. This data is used to segment audiences and tailor content to resonate with specific communities. In some cases, disinformation actors use hacked data or leak sites to further refine their targeting.
Once targets are identified, operators deploy networks of fake personas: a mix of bots, sockpuppet accounts (false identities run by real people), and sometimes genuine users paid to participate. These accounts are programmed or instructed to post, comment, and engage in ways that mimic organic behavior. Some are designed to stir outrage; others, to quietly validate certain narratives through agreement, likes, or shares.
The content itself is often algorithmically optimized. Disinformation operatives monitor engagement metrics in real-time to test which headlines, hashtags, or emotional triggers perform best. Content that gains traction is then amplified across multiple platforms, often using cross-platform coordination to reinforce the same message in different digital environments.
Sometimes these campaigns are aided, knowingly or not, by legitimate influencers or media outlets who pick up and amplify manipulated content. This process, known as “information laundering,” makes the original source harder to trace and lends unearned legitimacy to the message.
Perhaps most concerning is the increasing use of AI-generated content. With tools capable of producing convincing text, images, and even video, operators can now flood the information ecosystem with synthetic media that looks and feels authentic. This creates not just misinformation but a climate of uncertainty, where even true information becomes suspect.
What emerges is a feedback loop: engineered content exploits human bias, gains engagement, and is further amplified by both algorithms and people—making manipulation not just possible, but scalable.
Fighting Back: What Comes Next
The threat of psychological manipulation via fake accounts is no longer theoretical; it’s here, it’s ongoing, and it’s evolving. As we’ve seen, these influence campaigns don’t rely solely on technology. They weaponize human psychology, targeting our cognitive biases to shape what we trust, share, and believe. They’re built with precision, run at scale, and are increasingly enhanced with artificial intelligence.
Social media platforms have taken some action by removing coordinated networks, labeling state-affiliated media, and investing in moderation tools. But many experts argue these steps are not enough. Detection remains difficult, and enforcement is uneven. Bots continue to adapt faster than the systems designed to stop them.
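To make that cat-and-mouse dynamic concrete, here is a deliberately simple detection heuristic in Python. The account names, thresholds, and profile numbers are all invented for illustration; real platform defenses rely on far richer behavioral and network signals than anything shown here.

```python
# Illustrative only: a naive rule-based bot filter with hypothetical thresholds.
from dataclasses import dataclass

@dataclass
class Account:
    name: str
    posts_per_day: float    # average posting rate
    retweet_ratio: float    # fraction of activity that is pure amplification
    account_age_days: int

def looks_automated(acct: Account) -> bool:
    """Flag accounts that post at inhuman rates, or are brand new and only retweet."""
    if acct.posts_per_day > 100:
        return True
    if acct.account_age_days < 30 and acct.retweet_ratio > 0.95:
        return True
    return False

accounts = [
    Account("news_junkie_42", posts_per_day=35, retweet_ratio=0.60, account_age_days=2100),
    Account("patriot_eagle_9", posts_per_day=240, retweet_ratio=0.99, account_age_days=12),
    # A bot tuned to mimic human pacing slips under both thresholds:
    Account("concerned_mom_ohio", posts_per_day=18, retweet_ratio=0.80, account_age_days=400),
]

for acct in accounts:
    print(acct.name, "-> flagged" if looks_automated(acct) else "-> passes")
```

The third account is the problem: once an operator slows the posting rate and ages accounts before activating them, threshold rules like these stop firing, which is part of why detection keeps lagging behind the bots it targets.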
Governments, too, are struggling to keep pace with the speed and sophistication of these operations. While some countries have introduced laws targeting disinformation and foreign interference, others risk overreach, raising concerns about censorship and abuse of power.
Ultimately, the most resilient defense may lie with users themselves. Recognizing how manipulation works and how content taps into our own mental shortcuts is a crucial step. Digital literacy, critical thinking, and healthy skepticism are more important than ever.
We can no longer afford to be passive consumers of information: we are data points, targets, and sometimes unwitting amplifiers in a global contest for influence. The question now is not whether we are being manipulated online. The question is: how often and by whom?
As the line between reality and manipulation continues to blur, the responsibility to understand and resist it becomes not just personal—but civic. It is everyone's duty in a democracy to seek the truth and fight against those bad actors who perpetuate this false information. Keep fighting!