Is identifying a virtual bot still possible?

Bots are now an integral part of the content platform landscape. Whether clearly labeled as such or more discreet, they are present in every one of our feeds. But how do we distinguish them from other content? How should this kind of content be managed and moderated? How can abuse be prevented? Let's take a look.

Hey you!

The border between the real and the virtual is becoming more and more blurred. The emergence of the metaverse, the popularity of fictitious influencers… In short, the phenomenon is spreading. A recent study published by the Swiss Federal Institute of Technology in Lausanne states that 20% of global trends on Twitter are created from scratch by bots. According to the same study, no fewer than 108,000 fake accounts are involved in these operations on the platform.

The risk of bot abuse

Originally, it was easy to tell the difference between a bot and an account run by a real person. Today, the task is more delicate. Not only is the line blurring, but this blurring can also lead to abuse. Bots sometimes power phishing schemes that can be used for disinformation, boycott, defamation and hate campaigns… This is what happened during the last American elections.

An almost impossible distinction between real and virtual

Recently, researchers from the University of Pennsylvania and Stony Brook University (New York) looked at a key question: how do you distinguish human users from bots? The idea was to determine whether it is still possible today, for an ordinary user, to tell the difference between an account genuinely run by a human and an account managed entirely by a virtual bot. The researchers analyzed more than three million tweets written by three thousand bot accounts and as many authentic profiles. They then classified them according to 17 distinctive characteristics (age, gender, personality traits, emotion, etc.). The result? It would be very difficult, if not impossible, for a human observer to tell the difference between a real and a fake profile. So how can you protect yourself from the associated risks?
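To make the approach a little more concrete, here is a minimal sketch of a feature-based classifier of the kind such a study might rely on. It is not the researchers' actual pipeline: the data is synthetic and the 17 features are hypothetical stand-ins for the characteristics listed above.

    # Illustrative sketch only: a toy feature-based bot classifier, not the study's real pipeline.
    # The 17 columns stand in for hypothetical characteristics (age, emotion, personality traits...).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(42)
    n_accounts = 6000  # roughly 3,000 bot accounts and 3,000 authentic profiles

    features = rng.normal(size=(n_accounts, 17))          # synthetic per-account features
    labels = rng.integers(0, 2, size=n_accounts)          # 1 = bot, 0 = human (random here)

    X_train, X_test, y_train, y_test = train_test_split(
        features, labels, test_size=0.25, random_state=0
    )

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_train, y_train)

    print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
    # With purely random synthetic labels, accuracy hovers around 0.5: individual profiles
    # carry no usable signal, echoing the study's point that one account at a time is
    # nearly indistinguishable from a human.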

The importance of a moderation policy

The study also highlights another, rather reassuring, finding. Analyzed individually, bots are very hard to detect. Analyzed as a group, however, similarities quickly emerge and their artificial nature is revealed (a rough sketch of the idea follows at the end of this post). It would therefore be possible to regulate them and integrate them into a content moderation policy.

This fits a broader trend: pressure on platforms is increasing to push them to set up ethics committees and stricter, better-regulated publication rules. This is, for example, what the French platform MYM has chosen to put in place with an independent ethics committee. This committee defines the rules for publishing content on the platform and ensures that they are respected. It is a way of guaranteeing the quality and variety of the content, the user experience and the work of the content creators. So, is this type of measure going to become widespread?
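As a purely illustrative sketch (and not any platform's actual tooling), here is one way to picture group-level detection: describe each account with a behaviour vector and flag clusters of accounts that are suspiciously similar to one another. The vectors, thresholds and account counts below are all hypothetical.

    # Illustrative sketch only: flag groups of accounts whose behaviour is suspiciously similar.
    # The "behaviour vector" (posting times, hashtags, text embeddings...) is a hypothetical input.
    import numpy as np
    from sklearn.metrics.pairwise import cosine_similarity

    rng = np.random.default_rng(7)

    humans = rng.normal(size=(40, 12))                     # 40 independent, human-like accounts
    template = rng.normal(size=12)                         # one shared template...
    bots = template + rng.normal(scale=0.05, size=(10, 12))  # ...lightly perturbed by 10 bot accounts

    accounts = np.vstack([humans, bots])
    similarity = cosine_similarity(accounts)
    np.fill_diagonal(similarity, 0.0)

    # An account is suspicious if it is almost identical to several other accounts.
    THRESHOLD, MIN_TWINS = 0.98, 3
    suspicious = [i for i in range(len(accounts))
                  if (similarity[i] > THRESHOLD).sum() >= MIN_TWINS]
    print("suspicious account indices:", suspicious)  # typically the last 10 (the coordinated group)

In this toy setup, the coordinated accounts stand out precisely because they resemble each other, which is the group-level signal the study points to.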