Social bot


A social bot is a software agent that communicates autonomously on social media. The messages it distributes (e.g. tweets) can be simple, and bots may operate in groups and in various configurations with partial human control (hybrid bots). Social bots can also use artificial intelligence to express messages in more natural human dialogue.

Uses

  • To influence people's decisions, e.g. to advertise a product, support a political campaign, or inflate engagement statistics for social media pages.[1]
  • To provide low-cost customer service agents that answer questions users might have.
  • To automatically answer commonly asked questions on platforms such as Discord.

Lutz Finger identifies several immediate uses for social bots:[2]

  • fostering fame: an arbitrary number of undisclosed bots acting as fake followers can help simulate real success
  • spamming: advertising bots in online chats are similar to email spam, but far more direct
  • mischief: e.g. signing up an opponent with many fake identities, then spamming the account or helping others discover it, in order to discredit the opponent
  • biasing public opinion: influencing trends with countless messages of similar content in different phrasings[3]
  • limiting free speech: important messages can be pushed out of sight by a deluge of automated bot messages
  • phishing passwords or other personal data

History

Social bots, besides being able to (re-)produce or reuse messages autonomously, also share many traits with spambots with respect to their tendency to infiltrate large user groups.

Twitterbots are already well-known examples, but corresponding autonomous agents on Facebook and elsewhere have also been observed. Nowadays, social bots are equipped with or can generate convincing internet personas that are well capable of influencing real people.

Using social bots is against the terms of service of many platforms, such as Twitter and Instagram, although it is allowed to some degree by others, such as Reddit and Discord. Even platforms that restrict social bots intend a certain degree of automation by making social media APIs available. Social media platforms have also developed their own automated tools to filter out messages that come from bots, although these are not advanced enough to detect all bot messages.[4]

The legal regulation of social bots is becoming more urgent to policy makers in many countries. However, due to the difficulty of recognizing social bots and separating them from "legitimate" automation via social media APIs, it is currently unclear how such regulation could be designed, or whether it could be enforced. In any case, social bots are expected to play a role in the future shaping of public opinion by acting autonomously as incessant, never-tiring influencers.

Russian disinformation campaigns

Social bots have been used by the Russian government to spread misinformation aimed at destabilizing the West, weakening NATO allies, and more. In 2014, Russia used social bots to spread misinformation about its invasion of Crimea, and in 2022 it did the same when invading Ukraine, attempting to convince people that the invasion was for the good of the Ukrainian people.[5]

2016 U.S. presidential election

Social bots appear to have played a significant role in the 2016 United States presidential election, and their history appears to go back at least to the 2010 United States midterm elections. It is estimated that 9–15% of active Twitter accounts may be social bots, and that 15% of the Twitter accounts active in the discussion of the election were bots. At least 400,000 bots were responsible for about 3.8 million tweets, roughly 19% of the total volume.

Boston Marathon bombings

Social bots were used in the early detection of critical events such as the Boston Marathon bombings, although the unverified sharing of information diminished the overall positive effect of their use.[6]

Detection

The first generation of bots could sometimes be distinguished from real users by their often superhuman capacity to post messages around the clock, and at massive rates. Later developments have succeeded in imprinting more "human" activity and behavioral patterns on the agents. To unambiguously detect social bots, a variety of criteria[7] must be applied together using pattern-detection techniques, some of which are:[8]

  • cartoon figures as user pictures
  • profile pictures captured from random real users (identity fraud)
  • reposting rate
  • temporal patterns[9]
  • sentiment expression
  • followers-to-friends ratio[10]
  • length of user names
  • variability in (re)posted messages
  • engagement rate (like/followers rate)

Botometer[11] (formerly BotOrNot) is a public web service that checks the activity of a Twitter account and scores how likely the account is to be a bot. The system leverages over a thousand features.[12][13] An active method that worked well for detecting early spam bots was to set up honeypot accounts that posted obviously nonsensical content, which was then blindly reposted (retweeted) by bots.[14] However, recent studies[15] show that bots evolve quickly and that detection methods have to be updated constantly, as they may otherwise become useless after a few years.
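A minimal rule-based scorer can illustrate how several of the criteria listed above (posting rate, followers-to-friends ratio, username length, message variability) might be combined into a single bot-likelihood score. This is a hypothetical sketch for illustration only; the thresholds and weights are assumptions, not values from Botometer or any published detector, which rely on far richer feature sets and machine learning.

```python
def bot_score(account: dict) -> float:
    """Return a score in [0, 1]; higher means more bot-like.

    Illustrative heuristic only — thresholds are assumed, not taken
    from any real detection system.
    """
    signals = []

    # Superhuman posting rate: average posts per day since creation.
    rate = account["post_count"] / max(account["account_age_days"], 1)
    signals.append(min(rate / 100.0, 1.0))  # saturate at 100 posts/day

    # Followers-to-friends ratio: bots often follow many, gain few.
    ratio = account["followers"] / max(account["friends"], 1)
    signals.append(1.0 if ratio < 0.1 else 0.0)

    # Long, digit-heavy usernames are a weak signal.
    name = account["username"]
    digit_frac = sum(c.isdigit() for c in name) / len(name)
    signals.append(0.5 * min(len(name) / 15.0, 1.0) + 0.5 * digit_frac)

    # Low variability in (re)posted messages.
    texts = account["recent_posts"]
    unique_frac = len(set(texts)) / max(len(texts), 1)
    signals.append(1.0 - unique_frac)

    return sum(signals) / len(signals)


# A spam-like account: high rate, skewed ratio, repetitive posts.
spammy = {
    "post_count": 50_000, "account_age_days": 100,
    "followers": 12, "friends": 4_000,
    "username": "news_update_9321",
    "recent_posts": ["Buy now!"] * 9 + ["Great deal!"],
}
print(f"bot score: {bot_score(spammy):.2f}")
```

As the section notes, such static rules only worked against first-generation bots; modern bots deliberately mimic human temporal and behavioral patterns, which is why detectors must combine many features and be retrained continually.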

References

  1. "The influence of social bots". www.akademische-gesellschaft.com. Retrieved 2022-03-01.
  2. Lutz Finger (Feb 17, 2015). "Do Evil - The Business Of Social Media Bots". forbes.com.
  3. Frederick, Kara (2019). "The New War of Ideas: Counterterrorism Lessons for the Digital Disinformation Fight". Center for a New American Security.
  4. Efthimion, Phillip; Payne, Scott; Proferes, Nicholas (2018-07-20). "Supervised Machine Learning Bot Detection Techniques to Identify Social Twitter Bots". SMU Data Science Review. 1 (2).
  5. Schaffer, Aaron (February 14, 2022). "Social media is a key battleground in the Russia-Ukraine standoff". The Washington Post.
  6. Ferrara, Emilio; Varol, Onur; Davis, Clayton; Menczer, Filippo; Flammini, Alessandro. "The Rise of Social Bots". cacm.acm.org. Retrieved 2022-03-01.
  7. Dewangan, Madhuri; Rishabh Kaushal (2016). "SocialBot: Behavioral Analysis and Detection". International Symposium on Security in Computing and Communication. doi:10.1007/978-981-10-2738-3_39.
  8. Ferrara, Emilio; Varol, Onur; Davis, Clayton; Menczer, Filippo; Flammini, Alessandro (2016). "The Rise of Social Bots". Communications of the ACM. 59 (7): 96–104. arXiv:1407.5225. doi:10.1145/2818717. S2CID 1914124.
  9. Mazza, Michele; Stefano Cresci; Marco Avvenuti; Walter Quattrociocchi; Maurizio Tesconi (2019). "RTbust: Exploiting Temporal Patterns for Botnet Detection on Twitter". In Proceedings of the 10th ACM Conference on Web Science (WebSci '19). arXiv:1902.04506. doi:10.1145/3292522.3326015.
  10. "How to Find and Remove Fake Followers from Twitter and Instagram : Social Media Examiner".
  11. "Botometer".
  12. Davis, Clayton A.; Onur Varol; Emilio Ferrara; Alessandro Flammini; Filippo Menczer (2016). "BotOrNot: A System to Evaluate Social Bots". Proc. WWW Developers Day Workshop. arXiv:1602.00975. doi:10.1145/2872518.2889302.
  13. Varol, Onur; Emilio Ferrara; Clayton A. Davis; Filippo Menczer; Alessandro Flammini (2017). "Online Human-Bot Interactions: Detection, Estimation, and Characterization". Proc. International AAAI Conf. on Web and Social Media (ICWSM).
  14. "How to Spot a Social Bot on Twitter". technologyreview.com. 2014-07-28.
  15. Grimme, Christian; Preuss, Mike; Adam, Lena; Trautmann, Heike (2017). "Social Bots: Human-Like by Means of Human Control?". Big Data. 5 (4): 279–293. arXiv:1706.07624. doi:10.1089/big.2017.0044. PMID 29235915. S2CID 10464463.
This article is issued from Wikipedia. The text is licensed under Creative Commons - Attribution - Sharealike. Additional terms may apply for the media files.