Social bots, or automated social media accounts that pose as real people, have infiltrated all manner of conversations today, including conversations about consequential subjects such as the COVID-19 pandemic. These bots are not like robocalls or spam emails; recent studies have shown that social media users find them mostly indistinguishable from real humans.
Now, a new study by University of Pennsylvania and Stony Brook University researchers, published in Findings of the Association for Computational Linguistics, offers a closer look at how these bots disguise themselves. Using state-of-the-art machine learning and natural language processing techniques, the researchers estimated how well bots mimic 17 human attributes, including age, gender, and a range of emotions.
The study sheds light on how bots behave on social media platforms and interact with genuine accounts, as well as on the current capabilities of bot-generation technologies.
It also suggests a new strategy for detecting bots: while the language used by any one bot reflected convincingly human personality traits, the bots' similarity to one another betrayed their artificial nature.
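The intuition behind this strategy can be illustrated with a minimal sketch. Here, each account is represented by a hypothetical vector of estimated trait scores (e.g., age, gender, emotions); an account is flagged when its trait profile is nearly identical to that of some other account. The trait vectors, the `flag_likely_bots` function, and the similarity threshold are all illustrative assumptions, not the method or data from the actual study.

```python
import math

def cosine(u, v):
    # Cosine similarity between two trait vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def flag_likely_bots(trait_vectors, threshold=0.99):
    """Flag accounts whose trait profile is nearly identical to
    another account's: individually human-like, collectively uniform.
    `trait_vectors` maps account name -> list of trait scores."""
    flagged = []
    for name, vec in trait_vectors.items():
        max_sim = max(
            cosine(vec, other)
            for other_name, other in trait_vectors.items()
            if other_name != name
        )
        if max_sim >= threshold:
            flagged.append(name)
    return flagged

# Hypothetical trait profiles: three near-duplicate bots, one human.
accounts = {
    "bot_a": [0.80, 0.20, 0.50],
    "bot_b": [0.81, 0.19, 0.50],
    "bot_c": [0.79, 0.21, 0.51],
    "human": [0.10, 0.90, 0.20],
}
print(flag_likely_bots(accounts))
```

Each bot profile here could pass as human on its own; only the mutual near-duplication across accounts gives the group away, which mirrors the detection idea described above.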
“This research gives us insight into how bots are able to engage with these platforms undetected,” says lead author Salvatore Giorgi, a graduate student in the Department of Computer and Information Science in the School of Engineering and Applied Science. “If a Twitter user thinks an account is human, then they might be more likely to engage with that account. Depending on the bot’s intent, the end result of this interaction could be innocuous, but it could also lead to engaging with potentially dangerous misinformation.”
Read more at Penn Engineering Today.