Even before the last U.S. Presidential election, we had blogged about how bots could influence the electoral process by amplifying messages from candidates, spreading misinformation and negative sentiment about a candidate and his or her policy positions, or even discouraging citizens from voting. An unprecedented number of bots now use fake accounts on Facebook, Twitter, and other social media channels to post and share partisan propaganda and fake content.
According to submissions by Twitter executives to the U.S. Congress, bots linked to Russian accounts retweeted posts from Donald Trump’s Twitter account nearly half a million times, and retweeted Hillary Clinton’s tweets nearly 50,000 times, before Election Day. Between them, the two Presidential candidates had over seven million fake followers on Twitter alone. More recently, bots with suspected links to Russia have posted and retweeted divisive messages after a spate of school shootings and during other heavily covered news events. Despite the big social networks taking action to purge bot accounts, implementing review processes to flag fake content, and introducing tools that reveal the entities behind controversial and partisan content, there is still no definitive federal legislation against bots.
In response, lawmakers in California and other states have cited the lack of federal action in curtailing bots on social media sites such as Facebook and Twitter, and have proposed legislation to do just that. State Senator Bob Hertzberg (D-Los Angeles) noted, “We need to know if we are having debates with real people or if we’re being manipulated. Right now we have no law and it’s just the Wild West.”
California Senate Bill 1001 would make it illegal for a bot to communicate with a person in the state with “the intention of misleading and without clearly and conspicuously disclosing that the bot is not a natural person.” If approved, the law would require social media platforms to implement reporting procedures for flagging bot violations and to delete a reported bot account within 72 hours. Further, networks would have to provide the California Attorney General with statistics every two weeks on the corrective actions taken against automated accounts.
California isn’t the only state to take cognizance of political trolling by bots. New York Governor Andrew Cuomo is working with the state Assembly to pass legislation that would require political ads on social media to disclose the buyer’s identity. With researchers estimating that bots account for between 9 and 15 percent of active Twitter accounts, social networks have a monumental task ahead of them.
The already immense complexity of identifying bot accounts is compounded by the fact that automated programs are becoming far more sophisticated: they increasingly mimic human behavior, and they blend automation with real human interactions from paid workers at ‘troll farms’ and other nefarious organizations.
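To make the detection challenge concrete, here is a minimal sketch of the kind of rule-based scoring that simple bot classifiers start from. Everything in it is illustrative: the `Account` fields, thresholds, and weights are assumptions for demonstration, not values used by Twitter, Facebook, or any real detection system, and the sketch shows exactly why such heuristics fail against bots that mimic human posting patterns.

```python
from dataclasses import dataclass

@dataclass
class Account:
    # Hypothetical publicly observable metrics; field names are illustrative.
    tweets_per_day: float   # average posting rate
    account_age_days: int   # days since account creation
    followers: int
    following: int
    default_profile: bool   # profile was never customized

def bot_score(acct: Account) -> float:
    """Combine simple red flags into a 0.0-1.0 heuristic score.

    Higher scores suggest automation. Thresholds and weights are
    arbitrary examples; a bot that posts at human speed from an aged,
    customized account sails past every one of these checks.
    """
    score = 0.0
    if acct.tweets_per_day > 72:                      # ~1 post per 20 min, nonstop
        score += 0.35
    if acct.account_age_days < 30:                    # very new account
        score += 0.25
    if acct.following > 10 * max(acct.followers, 1):  # mass-following pattern
        score += 0.25
    if acct.default_profile:                          # no human customization
        score += 0.15
    return round(score, 2)

suspect = Account(tweets_per_day=200, account_age_days=5,
                  followers=12, following=4000, default_profile=True)
human = Account(tweets_per_day=3, account_age_days=2000,
                followers=400, following=350, default_profile=False)
print(bot_score(suspect), bot_score(human))  # prints: 1.0 0.0
```

Production systems replace these hand-set rules with machine-learned models over hundreds of features, but the cat-and-mouse dynamic is the same: once a signal is known, operators tune their bots to stay under it.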
It’s clear that the need for sophisticated bot mitigation is starting to get the attention it deserves. Watch this space for more updates.