by Peter Petrangelo

When people started intentionally disrupting online communities, the internet quickly decided to call them “trolls.” Named after creatures from Scandinavian folklore, the digital species shared numerous characteristics with its mythological counterpart: unintelligent, ill-intentioned, and typically easy to distinguish from humans. Such parallels may have made sense at the time, as the jokes pushed by internet trolls tended to be simple in nature, with confusion a worthy aim in and of itself.
But the internet evolved, and the trolls did too. Though they bear the same name, they no longer possess all the traits they once did. They are sophisticated, understanding how our culture is built and how to leverage it. They are no longer easily distinguishable; it takes more than one look to spot a troll. They have diversified: rather than remaining a monolithic species, they now come from many different continents and cultures. Crucially, trolls have also become very organised. Yet our collective understanding of their identity is still stuck on the definition of who they once were. We are confident in our superiority over them, and that confidence blinds us to their influence and to our susceptibility to becoming victims of their warfare.
Gaining prominence
Though organised troll groups had been active for many years, they were first recognised as a serious threat following the 2016 US presidential election. Most of what we know about troll farms is based on the aftermath of this event. The much-debated Mueller report confirmed that the efforts of the prominent Saint Petersburg-based trolling group known as the Internet Research Agency (or IRA) began in 2013. The leader of the organisation was determined to be Yevgeny Prigozhin, head of a hot-dog empire turned Kremlin henchman. Known as “Putin’s chef” thanks to his government catering contracts, Prigozhin took both financing and direction from the Kremlin, Russia’s Ministry of Defence, and the GRU. The Mueller report also highlighted the troll farms’ involvement across all major social media platforms, including Facebook, Instagram, YouTube, Tumblr and Twitter.
While many of the organisational details of these agencies were kept classified, some personal accounts have come out that illuminate the structure of the propaganda machine at work. According to these witnesses, the trolls were divided into groups, each in a different room, according to their abilities and, most importantly, their language skills. Each day, they received an email with assigned subjects around which to construct a narrative. Topics varied; one individual who worked in the Russian-speaking team mentions that the subject could be anything from praising Putin to highlighting supposed US involvement in the spread of Ebola. That same person recalls conversations amongst the English-speaking team on “the best time to post commentary to attract an American audience and bragging about creating thousands of fake social media accounts.”

Following the Mueller investigation, the IRA and the people at its helm were indicted, and Facebook announced it had shut down 32 accounts and pages related to the IRA. But the problem has persisted. There have been consistent reports of IRA-associated individuals conducting work outside the organisation. The former head of the IRA, Alexander Malkevich, went on to run NewsFront, another disinformation outlet. Reports also point to IRA members’ involvement in the construction of troll farms abroad, operating from places like Ghana or Nigeria to provide the endeavour with additional cover. The issue was never resolved; as the farms got smarter and spread themselves around the globe, it only became tougher to combat them.
Spreading the narrative
The 2016 US presidential election not only taught us who the trolls are; more crucially, it taught us how Russia conducts these campaigns. The methodology that crystallised is very effective, but also severely misunderstood and underestimated. The approach can be summarised as “build a following and trust first.” The run-up to the election demonstrated that easy-to-spot accounts, which instantly strike people as propaganda machines, are not particularly effective in achieving their goals; they do far better when disguised as something else.
These accounts and pages tend to begin with posts that are totally unrelated to the objectives they are trying to achieve. As disinformation expert Renée DiResta recounts, when asked to investigate Russian influence during the 2016 election, she initially suspected that the data set she had been given was corrupted, because it began with a series of Kermit the Frog memes. Only later did DiResta realise that the trolls had perfectly understood that memes are a unit of cultural communication, a building block on which trust can be built, trust they can later leverage to push their desired narrative.
The trolls’ strategy is not limited to anonymous pages posting typical social media content like memes, infographics, and motivational quotes. They also leverage people, such as Vladimir Bondarenko.
Vladimir is a former aviation engineer and a vocal critic of the Ukrainian government. Justifiably so, one could say: he is convinced that its actions destroyed the Ukrainian aviation industry and left him jobless. As a result, he turned to blogging. His story and popularity even earned him a profile in Ukraine Today. One sympathises, or rather one could, if Vladimir were real. He is in fact a carefully crafted character linked to troll farms uncovered during Meta research into profiles with AI-generated faces. Yes, Vladimir is an output of thispersondoesnotexist.com, bearing all the curses given to him by its creator, like a strange, asymmetrical left ear, a common glitch when the website is used to generate faces.
What’s remarkable about this strategy is how effective Russian trolls are at disguising themselves. A leaked 2019 Facebook report on troll farm activity after the 2016 election reveals internal research on US-focused accounts run by troll farms. By then, the IRA had already been sanctioned and theoretically eliminated from all major platforms; in these instances, the troll farms were mostly run from Kosovo and North Macedonia. The research found that these troll farms were behind:
- The top 15 Christian Facebook pages
- 10 out of the top 15 Black American Facebook pages
- 4 out of the top 12 Native American Facebook pages
Collectively, those pages reached 140 million people in the US alone, around 40 million more than the page with the second-largest audience: Walmart. 75% of those people never followed any of the pages. Globally, the pages reached 360 million people. As staggering as these numbers may seem, they by no means represent the full extent of the problem, as they only account for pages detected between the 2016 election and the publication of the report in 2019. And if raw view counts were not enough, the effectiveness of Russian campaigns is also attested by their successful attempts to organise real-life events, some of which accumulated as many as 250,000 followers.
Trolls across the spectrum
Understanding how trolls disguise themselves leads to another subject that is far from obvious: what is the narrative that they actually spread? It is often assumed that the rhetoric they promote shamelessly replicates the propaganda pushed by the Kremlin itself. While this may be true in Russia proper, the same cannot necessarily be said when it comes to the West. To illustrate the point, let’s look at a picture you might have recently seen on your social media feed:
The post was first uploaded by an account called Redfish. On its surface, the page may seem like just another left-leaning progressive Twitter account. It pushes anti-capitalist rhetoric, honours figures like Malcolm X, and remembers February the 14th not as Valentine’s Day but as the anniversary of Native Hawaiians stabbing the coloniser James Cook.
However, in 2018 the account was exposed as an organisation primarily staffed by people who had last worked for the state-run Russia Today. Its aim appears to be more subtle than simply spreading disinformation. As the graphic above shows, the goal is rather to divert attention away from the Kremlin and exacerbate divisions in the social fabric of Western countries. This doesn’t mean these accounts don’t manipulate and misinform; the map, for example, conveniently ignores Russia’s participation in the Syrian conflict. Redfish has also posted a video arguing that Western sanctions are ineffective, that ordinary Russians don’t care about the ability to buy an iPhone, and that the only result of the sanctions will be lost jobs and the accompanying economic fallout.
This is far from a lone incident; the aforementioned Ghana operation, for instance, tended to post about racism in the US without any explicit link to Russia. We can only speculate about the goals here. One could be to extend Russia’s reach by building trust across different communities, that is, to build accounts with followings that can be put to other purposes in the long term. Another could be inspired by the so-called Thirty-Six Stratagems, a famous Chinese military doctrine, one of which dictates that you should “let the enemy’s own spy sow discord in the enemy camp”: you can undermine an enemy’s abilities by shifting its attention towards settling internal disputes instead of defending itself.
In short, even if the narrative in question does not help the Kremlin directly, the divisions such accounts create in society may in and of themselves help Moscow achieve its goals. If that is the objective, one might wonder what means could be used to achieve it; theoretically, there is nothing stopping the trolls from building an anti-Russia account that, while not openly supporting the Kremlin, could later be used to spread other narratives. Redfish itself has posted infographics on the number of anti-war protests taking place in Russia.
Weaponising social media
All of these strategies have proven particularly effective because Russian trolls acutely understand the environment in which they operate and how to use its particularities to achieve their goals. Social media in and of itself possesses characteristics that help spread disinformation. Much has been said about how platforms are programmed to promote content that engages us, which typically means content that sparks anger and arguments. So if there is content that, based on your activity, is likely to make you angry, it will probably find its way to you, providing a basis for the proliferation of lies across the most susceptible groups. In this sense, every social media user may participate in spreading propaganda by amplifying the message at hand through views and reactions of all kinds.
One of the reasons we underestimate Russian trolls is that we rarely (at least knowingly) encounter them online. Yet encounters were never a prerequisite for them to spread their message. There is a second way to influence information flows, one whose point is not necessarily to reach as many people as possible directly; instead, it may be about reaching just enough people to achieve what Samuel Woolley and Philip Howard coined as manufactured consensus. The term refers to the discrepancy between the balance of opinions we see online and the one that actually exists offline. In one of her studies, for example, DiResta noticed that 99% of voices online expressed anti-vaccine views on a particular topic being debated in a US constituency. Yet when she looked at poll results for real people living in the region, only 15% expressed such views.
Of course, an average constituent doesn’t come into contact with such polls; they do, however, see the internet discourse. The aim of manufactured consensus is thus to create a situation in which you change your worldview based purely on what you see online: “if everyone online thinks Bill Gates wants to put a chip in you using a vaccine, surely there is something to it.” The goal is to amplify topics artificially and give them a prominence that does not correspond to the views of the population. This can be done both directly by trolls and by people who have fallen for their propaganda. Once the message gains enough mass to achieve this effect, the trolls themselves are no longer needed; people will spread the disinformation on their own. The trolls have achieved what they desired: they’ve shifted the discourse to focus on what they want us to talk about. The narrative has been planted, and we participate in it.
Taking back control
All of these tactics combine to form a coherent, extremely well-designed way of attacking our discourse and exerting influence on the West. Trolls are no longer easy to spot. They are organised, and they come from different continents and cultures; detecting them is not as simple as checking whether they operate from a .ru email address. They are highly aware of the social dynamics and the environments in which they operate, and they can produce memes that go viral. Many things must be done to regain control over our own discourse. But the first step is clear: understanding that we are all susceptible to falling for their narrative.
To illustrate the point, consider the situation you are in. You trust INTERZINE, and articles posted here might have convinced you before. Alternatively, someone you trust, perhaps someone whose views align with yours, sent you a link to this piece. Now go to the top of the page and Google the name of the author of this article. I am convinced you didn’t take the time to research that person. If you had, you might have realised that the person doesn’t even exist. I made that name up. Yet you kept on reading.
Bart Boczon is a data scientist working in the media industry. He is interested in Big Tech and the influence it exerts on our everyday lives.