Election campaigns are increasingly carried out in social networks to influence voters. Social bots are being used for this purpose, which raises the question of how much influence they have on voters and whether they can even endanger a democracy. Furthermore, the question arises as to who can be held responsible for this and how users of social networks can protect themselves against social bots.
Democracy – a secure system?
The fall of the Weimar Republic through Adolf Hitler’s seizure of power showed that a democracy is a system that must be protected. Hitler suspended civil rights through the „Reichstagsbrandverordnung“ (Reichstag Fire Decree) [1]. The separation of powers was abolished by the „Ermächtigungsgesetz“ (Enabling Act) [2]. Furthermore, the freedom of assembly and of the press was restricted by the „Verordnung des Reichspräsidenten zum Schutze des Deutschen Volkes“ (Ordinance of the Reich President for the Protection of the German People) [3]. In the Federal Republic of Germany, democracy is secured by various laws. The German constitution, the Basic Law, can only be changed with a two-thirds majority of the Bundestag and a two-thirds majority of the Bundesrat [4]. In addition, some provisions cannot be changed at all: Article 1 of the Basic Law (human dignity), Article 20 of the Basic Law (state structure) and provisions that would enable the abolition of fundamental rights [5]. Nevertheless, there are always people or organizations that want to harm democracy. For this reason, the constitution is protected by the Bundesamt für Verfassungsschutz (Federal Office for the Protection of the Constitution, Germany’s domestic intelligence service), the Militärischer Abschirmdienst (Military Counter-Intelligence Service) and the Landesbehörden für Verfassungsschutz (state offices for the protection of the constitution) [6].
Bots
Bots are computer programs that perform repetitive tasks automatically, without relying on interaction with a human user. Most bots consist of simple algorithms; complex bots are rather rare. [7]
Like any technology, bots are neither good nor bad in themselves: they can be used to help people or for malicious purposes. For example, web crawlers index content from the Internet so that it can be found via search engines, shop bots enable price comparison portals, and bots in computer games make some games attractive in the first place.
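To make the crawler idea concrete, here is a minimal sketch using only the Python standard library. The start URL, page limit and error handling are illustrative choices, not taken from any particular search engine; a real crawler would also have to respect robots.txt and rate limits:

```python
# Minimal web-crawler sketch: fetch a page, collect its links,
# and follow them breadth-first up to a fixed page limit.
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    """Collects the href targets of all <a> tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url, max_pages=10):
    """Breadth-first crawl that collects the URLs it has visited."""
    queue, seen = [start_url], set()
    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
        except Exception:
            continue  # skip unreachable pages
        parser = LinkExtractor()
        parser.feed(html)
        # Resolve relative links against the current page.
        queue.extend(urljoin(url, link) for link in parser.links)
    return seen  # the "index" of visited pages

if __name__ == "__main__":
    print(crawl("https://example.com"))
```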
Users of other bots have clearly darker intentions, for example aim bots, which automatically aim at an opponent’s head, or mining bots, which automatically farm in-game resources; the items and resources gained this way can later be sold for real money. Other bots bid in the last second of an auction (auction sniping) [8], click on advertisements (click fraud) [9] or even steal credit card information (credit card fraud) [10]. Entire bot networks (botnets) are able to overwhelm an attack target with targeted requests (DDoS).
What makes Bots dangerous?
Bots are risky precisely because they are easy to develop and can cause major damage or generate large profits. Even attackers with only basic programming knowledge are able to program a bot and use or sell it. Bots are not expensive either: renting one starts at around $50 per day [11].
Social Bots
Social Bots simulate human users in social networks [12]. They are used to spread fake news and (political) opinions [13]. Through targeted disinformation, they deceive users of social networks or create polarizing effects [14]. Social bots are not forbidden in Germany [15].
Like other bots, social bots are simple programs. To access data or create their own content, they mostly use the APIs provided by the platform operators. Using simple if-then event chains, they search the social network for posts with certain keywords or hashtags and then act on the posts they find, for example by replying to them, as the sketch below illustrates. [16]
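As an illustration of such an if-then event chain, here is a minimal sketch in Python. The functions fetch_recent_posts and post_reply are hypothetical stand-ins for whatever API a platform operator provides; they are stubbed out here for demonstration and are not real library calls:

```python
# Sketch of a social bot's if-then event loop as described above.
import time

# keyword -> canned response posted whenever the keyword appears
RULES = {
    "#election": "Candidate X has a plan that really works!",
    "#economy":  "The economy was never stronger than under Candidate X.",
}

def fetch_recent_posts(keyword):
    """Stub: a real bot would query the platform API for posts
    containing the keyword. Returns (post_id, text) pairs."""
    return [(42, f"What do you think about the {keyword} debate?")]

def post_reply(post_id, text):
    """Stub: a real bot would publish the reply via the API."""
    print(f"replying to post {post_id}: {text}")

def run_bot(rounds=1, poll_seconds=60):
    answered = set()  # avoid replying to the same post twice
    for _ in range(rounds):
        for keyword, response in RULES.items():
            for post_id, _text in fetch_recent_posts(keyword):  # IF a post matches...
                if post_id not in answered:
                    post_reply(post_id, response)                # ...THEN reply to it.
                    answered.add(post_id)
        if rounds > 1:
            time.sleep(poll_seconds)  # simple polling instead of a stream API

if __name__ == "__main__":
    run_bot()
```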
The beneficiaries of social bots are primarily political actors such as lobby groups, political parties and governments, but also other groups with an interest in shaping public opinion, such as private individuals, groups and organisations. There are also harmless bots, which for example reply to tweets with Star Wars hashtags. [16]
Use of social bots in election campaigns
Social bots are used above all in election campaigns. In the 2016 US presidential election, Donald Trump’s 13 million followers included several million bots that shared his tweets and created a favorable mood for the Republicans. Bots from Russia also spread about half a million tweets. [17]
In the Brexit vote (2016), both sides, opponents and supporters of Brexit, used bots. Around 800,000 bots used hashtags on the Brexit theme, such as #leave. From 10 June to 11 July, 10 million tweets were written on the subject of Brexit. Accounts that could be assigned to bots sent an average of 5 of these tweets each, while real users sent an average of only 1.2. In total, 13,500 accounts could be identified as bots; 31% of them used #leave and 17% #remain (#remain was mainly used by Brexit opponents). [18]
Bots were also used in Germany in the 2017 Bundestag election: 7.6% of the tweets about the election were sent by bots, of which 15% can be attributed to the AfD and 12% to Die Linke (the Left Party) [19].
Influence of bots on humans – even a few bots can have an effect
To date there is no empirical study of how far voters actually let themselves be influenced by bots in elections. The theory of the spiral of silence, however, states that the willingness to commit oneself publicly to a topic depends on the perceived climate of opinion [21]: if an opinion contradicts the prevailing one, there is an inhibition against representing it publicly, and the more often an opinion is voiced, the smaller this inhibition becomes. A bot share of just 2–4% can break the spiral, and the probability that the bots’ opinion then prevails is about two thirds [20]. The toy simulation below illustrates the underlying tipping effect.
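The following Python sketch is a generic voter model with “zealots” (agents that never change their opinion, standing in for bots). It is not the model of the cited study; population size, bot share and step count are illustrative assumptions, but it shows how a small, unwavering minority can pull the expressed majority toward its position:

```python
# Toy simulation: a small fixed-opinion minority ("bots") biases
# the opinions of a much larger, persuadable human population.
import random

def simulate(n=1000, bot_share=0.03, steps=200_000, seed=1):
    random.seed(seed)
    n_bots = int(n * bot_share)
    # Opinion +1 is the bots' opinion; humans start split 50/50.
    opinions = [1] * n_bots + [random.choice([-1, 1]) for _ in range(n - n_bots)]
    for _ in range(steps):
        i = random.randrange(n_bots, n)  # pick a random human...
        j = random.randrange(n)          # ...and a random voice they hear
        opinions[i] = opinions[j]        # the human adopts that opinion
        # (indices below n_bots are bots: they are never updated)
    humans = opinions[n_bots:]
    return sum(1 for o in humans if o == 1) / len(humans)

if __name__ == "__main__":
    print(f"share of humans holding the bots' opinion: {simulate():.2f}")
```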
Responsibilities
Since only 18% of Internet users feel able to recognize social bots [21], the question arises as to who is responsible. Several parties come into question: the platforms, governments, or the users themselves.
Twitter has taken several measures to detect social bots. For this purpose, it compares accounts against the typical metrics of human users; deviating patterns can indicate bots (the sketch below illustrates the principle). The registration process has also been improved: users must confirm their e-mail address or mobile phone number when signing up, which is intended to prevent automatic registrations by bots. Furthermore, tweets that appear frequently with the same hashtags are evaluated. [22]
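The metric-comparison principle can be sketched in a few lines of Python. This only illustrates the idea; Twitter’s actual detection logic is not public, and the baseline numbers below are made up:

```python
# Flag accounts whose posting rate deviates strongly from a
# human baseline -- an illustration of metric comparison, not
# any platform's real detection algorithm.
from statistics import mean, stdev

# tweets per day for a sample of typical human accounts (assumed data)
human_rates = [2, 5, 1, 8, 3, 6, 4, 2, 7, 5]

def looks_automated(tweets_per_day, baseline=human_rates, threshold=3.0):
    """Flag an account whose rate lies more than `threshold`
    standard deviations above the mean of the human baseline."""
    mu, sigma = mean(baseline), stdev(baseline)
    z = (tweets_per_day - mu) / sigma
    return z > threshold

print(looks_automated(4))    # False: within the human range
print(looks_automated(400))  # True: far outside the human pattern
```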
Facebook, Google and other platforms have presented the EU with a code of conduct against disinformation in which they undertake to block fake accounts and social bots quickly. Advertising revenue is also to be withdrawn from accounts that spread false information, and privacy-compliant data access for researchers is to be established. [23]
The German federal government wants a labelling obligation for messages originating from social bots: if bot messages are not identified as such, the operator is to be fined. Furthermore, platforms are to be obliged to make the use of bots transparent. [23]
The EU wants to implement an early warning system for defending against fake news. This would fall within the remit of the European External Action Service (EEAS), which is to hire more qualified staff, receive better data analysis tools, and be given 5 million euros for this purpose. [24]
However, there is also criticism of a labelling obligation. Linus Neumann of the Chaos Computer Club (CCC), for example, considers the importance of social bots to be massively exaggerated and says that politicians want to distract from a general loss of trust among citizens. According to Dr. Christian Grimme of the Westfälische Wilhelms-Universität Münster, there is no case law yet: it would first have to be proven in court whether an account is a bot or not. Marie-Teresa Weber of Bitkom raises the question of who judges “good” and “evil” in the fight against bots. [25]
Improving users’ competence
Prof. Dr. Thorsten Quandt (Westfälische Wilhelms-Universität Münster) does not see the problem in the technology but in its social application: one should not start with the technology; education is the decisive point [26].
So how can social bots be recognized? There are several clues in user profiles and behavior that indicate whether an account is a real user or a bot [27]:
- Does the profile contain any personal information?
- How many posts does the account publish?
- Are the posts always on the same topic?
- Does the account react unusually fast?
- Does it fail to answer complicated questions?
- Does it hold many conversations at the same time?
- Do the posts follow a pattern?
There are also online services such as Botometer (https://botometer.iuni.iu.edu) and Botcheck (https://botcheck.me) that try to detect whether a social media profile belongs to a real user or a bot. They use algorithms based on machine learning [28, 29]. A minimal sketch of this approach follows below.
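The sketch below uses Python with scikit-learn. The training data is synthetic and the feature set (posting rate, profile completeness, reply delay) merely mirrors the checklist above; this is not Botometer’s actual model, only an illustration of the machine-learning idea:

```python
# Train a classifier on simple per-account features and score
# new accounts. Synthetic data, illustrative features only.
from sklearn.ensemble import RandomForestClassifier

# features: [posts_per_day, has_profile_info (0/1), avg_reply_delay_s]
X_train = [
    [3,   1, 420],   # human: few posts, filled profile, slow replies
    [5,   1, 300],
    [2,   1, 900],
    [180, 0, 2],     # bot: flood of posts, empty profile, instant replies
    [250, 0, 1],
    [120, 0, 3],
]
y_train = [0, 0, 0, 1, 1, 1]  # 0 = human, 1 = bot

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Score an unseen account: probability that it is a bot.
candidate = [[200, 0, 2]]
print(f"bot probability: {clf.predict_proba(candidate)[0][1]:.2f}")
```

Real services of this kind train on thousands of labelled accounts and far richer features; the principle of learning a decision boundary between human-like and bot-like behavior is the same.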
Social Bots and Artificial Intelligence
As described, social bots are not intelligent; they are simple algorithms. Nevertheless, the development of bots does not stop there. Artificial intelligence becomes a danger when social bots create their own content or behave like real people: at the latest then, social bots will no longer be recognizable as machines.
Microsoft has developed an AI-based social bot called Tay. The aim was to test how an AI learns in everyday use. Tay was designed as a Twitter bot that learned from every conversation. During a first test run in March 2016, Tay turned into a genocidal troll, so that it was shut down after only 16 hours and 96,000 tweets. [30]
Philosophical and ethical questions
In addition to technical questions, philosophical and ethical questions must also be clarified: would an AI develop a morality, would it accept limits, and does mankind have to do everything that is technically possible at all [32]? The transhumanists want to merge the human brain with computer chips to counter the danger that people no longer understand machine solutions [31].
Stephen Hawking warned against crossing a red line: just because something is technically possible does not mean that mankind must do it [32]. The atomic bomb is an example of this.
Science fiction takes up the topic in films: in Terminator (1984), intelligent machines fight against mankind [33], and in 2001: A Space Odyssey (1968), an AI comes to the conclusion that the end of mankind would be better for the universe [34].
Conclusion
German democracy is a system protected by laws in such a way that it cannot easily be overturned. In addition, there are institutions that actively protect democracy. Social bots are used to spread (political) opinions and to influence voters. To counteract this, politicians, platform operators and users must become aware of their responsibilities. An obligation to label content created by bots is not readily enforceable without case law. It is therefore important to teach digital skills relating to social networks, fake news and the recognition of social bots, and to sensitize users.
The greatest danger comes from social bots that use artificial intelligence, because they are very difficult to even recognize as bots. In the future, modern problems will require modern solutions, for example platforms themselves using AI to recognize AI-based bots that spread inflammatory messages. The example of Microsoft’s Tay shows how dangerous AI-based social bots can be. Here we must appeal to humanity not to do everything that is technologically feasible.
References
[1] https://de.wikipedia.org/wiki/Verordnung_des_Reichspr%C3%A4sidenten_zum_Schutz_von_Volk_und_Staat
[2] https://de.wikipedia.org/wiki/Erm%C3%A4chtigungsgesetz
[3] https://de.wikipedia.org/wiki/Verordnung_des_Reichspr%C3%A4sidenten_zum_Schutze_des_Deutschen_Volkes
[4] https://de.wikipedia.org/wiki/Verfassungs%C3%A4nderung
[5] https://de.wikipedia.org/wiki/Ewigkeitsklausel
[6] https://de.wikipedia.org/wiki/Verfassungsschutz
[7] https://de.wikipedia.org/wiki/Bot
[8] https://en.wikipedia.org/wiki/Auction_sniping
[9] https://www.cloudflare.com/learning/bots/what-is-click-fraud/
[10] https://en.wikipedia.org/wiki/Credit_card_fraud
[11] https://securelist.com/the-cost-of-launching-a-ddos-attack/77784/
[12] https://onlinemarketing.de/lexikon/definition-social-bots
[13] https://www.oeffentliche-it.de/-/social-bots
[18] https://www.datacenter-insider.de/brexit-forscher-kommen-der-twitterbot-armee-auf-die-spur-a-661027/
[20] https://medienrot.de/laute-minderheiten-und-die-schweigespirale/
[21] https://de.wikinews.org/wiki/K%C3%BCnstliche_Intelligenz:_Gefahren_durch_Social_Bots
[26] https://www.bundestag.de/dokumente/textarchiv/2017/kw04-pa-bildung-forschung-social-bots-488818
[27] https://www.tagesschau.de/faktenfinder/tutorials/social-bots-erkennen-101.html
[28] https://botometer.iuni.iu.edu/#!/faq#how-does-it-works
[29] https://medium.com/@robhat/identifying-propaganda-bots-on-twitter-5240e7cb81a9
[30] https://de.wikipedia.org/wiki/Tay_(Bot)
[32] http://mediario.de/wordpress/wp-content/uploads/Social-Bots-und-KI.pdf
[33] https://de.wikipedia.org/wiki/Terminator_(Film)
[34] https://en.wikipedia.org/wiki/2001:_A_Space_Odyssey_(film)