Given current world events, there’s an incredible amount of misinformation and disinformation around at the moment. Whether we’re talking 5G, the pandemic, vaccines, or invasions, there’s a lot out there.
One of the biggest problems where bad information online is concerned is bot farms. A huge army of automated accounts sowing seeds of doubt and nonsense isn’t helpful, and it can be tricky to do much about it. Occasionally, we hear tales of successful takedowns. This is one such story.
The difference between misinformation and disinformation
It’s important to know where these two terms diverge, and how. Misinformation is when someone spreads incorrect information without any malicious intent. They’re simply getting something wrong, and throwing it out there anyway. Someone may even be repeating a talking point in a genuine attempt to be helpful. Unfortunately, incorrect information spread across social media and elsewhere can lead to all sorts of problems.
Disinformation is designed to deceive from the outset. It’s carefully crafted lies and subterfuge, manipulating the truth for dubious purposes. It can be done by anyone with a random account online, or deployed in more sweeping fashion by a government. It’s a popular tool during wartime, and it’s being used daily throughout the invasion of Ukraine.
What is a click farm?
A common question is “What does a bot farm actually look like?”, and with good reason. People imagine rows of servers doing server-type things. Someone once asked me if they “resembled robots”. There are different types of farm, which can make the confusion even worse.
Click farms are incredibly common, and are used to perform basic tasks on social media.
These farms may employ people to monitor dozens or hundreds of mobiles and interact in some way with content intended to go viral.
Click farms often descend into click fraud and other activities such as SIM smuggling, with law enforcement inevitably becoming involved.
What is a bot farm?
Bot farms, as the name suggests, attempt to go one better and automate a lot of these activities. It wasn’t so long ago that researchers uncovered an insecure bot farm and dug into how it operated. The intention of that farm was to perform political manipulation. Bots playing host to friends (also bots) across their networks pushed dubious and divisive content. They also joined specific Facebook groups to push the content further still. No doubt some of the material promoted was disinformation.
This is what’s currently happening in relation to the invasion of Ukraine.
When bot farms are dismantled
It’s been revealed that no fewer than five significantly sized bot farms have been shut down by the Security Service of Ukraine (SSU) since the invasion began.
According to the SSU, these bot farms are responsible for at least some of the many bomb threats called in from the start of the year onward.
The farm itself involved a wealth of equipment across two individuals’ residential properties, with a third person handling technical maintenance. 3,000 SIM cards, numerous laptops, and multiple GSM gateways were among the items seized.
Combating disinformation tactics
There’s never been a more urgent need for fact-checking services. The Ukrainian Center for Countering Disinformation has set up its own Telegram bot, aimed at fact-checking dubious claims within minutes. There are also several well-known fact-checking sites providing breakdowns of bogus claims and viral content in relation to the Russian invasion of Ukraine. These include Full Fact, Snopes, and AFP Fact Check.
Before retweeting or sharing content, it’s always worth checking the facts first. Even engaging with disinformation to counter or correct it can boost its spread, making it even harder to shut down. This is why, for example, people will often screenshot bogus claims to counter them instead of quoting them directly. In many cases, it may well be better to simply report the content rather than engage with it directly.