Are Facebook And Twitter Ready For The US Election?

Four years on from Russian interference in the US election, what have Facebook and Twitter done to fight misinformation? And will it work?

Fri Oct 30 2020

BusinessBecause

Amid the chaos of a disputed election, foreign actors may find it easier to infiltrate both Twitter and Facebook and spread further false claims.

Paul M. Barrett, the deputy director of the NYU Stern School of Business Center for Business and Human Rights, predicted in a report that this time around, Iran and China may join Russia in disseminating disinformation.

US national security officials have said that Iran is responsible for a slew of threatening emails sent to Democratic voters ahead of the election. They have also said that both Iran and Russia have obtained some voter registration information. 

Compared to four years ago, what impact could this have in 2020?

“The problem with 2016 was that the platforms and their users—and the US government—weren’t at all prepared for Russian interference or domestically generated mis- and disinformation,” explains Paul. 

“It's hard to gauge whether users are more on their guard for harmful content, but the platforms certainly are.” 


Facebook and Twitter introduce new policies to tackle misinformation

In 2019, Twitter decided to ban all paid-for political ads on its platform. Facebook introduced a similar policy this year, banning political ads in the week leading up to the election and for an unspecified period after November 3rd.

Both platforms moved to restrict the spread of a New York Post story about Joe Biden’s son, Hunter Biden, which contained hacked materials and personal email addresses. Twitter said sharing the article violated its hacked materials policy, while Facebook limited its spread while it was fact-checked.

The platforms have also started to provide more information about a news article’s sources, something David Rand, a professor at the MIT Sloan School of Management and in MIT’s Department of Brain and Cognitive Sciences, believes is a positive step. 

“This sort of tactic makes intuitive sense because well-established mainstream news sources, though far from perfect, have higher editing and reporting standards than, say, obscure websites that produce fabricated content with no author attribution,” he wrote in a New York Times op-ed.

Policies like these are clearly aimed at protecting the integrity of the US election in 2020. The fact that the companies are acting shows a willingness to curb the spread of misinformation that was rife in 2016. The election also comes at the end of a year in which the big tech platforms have been hounded for their lack of accountability and anti-competitive behavior.

Recent research by David does, however, raise questions about the effectiveness of this approach.

David, along with Gordon Pennycook of the University of Regina’s Hill and Levene Schools of Business, and Nicholas Dias of the Annenberg School for Communication, found that emphasizing sources had virtually no impact on whether people believed news headlines.

Attaching warning labels could also prove counterproductive. Though people were less likely to believe and share headlines labeled as false, only a small percentage of headlines are fact-checked, and bots can create and spread misinformation far faster than those stories can be verified.

“A system of sparsely supplied warnings could be less helpful than a system of no warnings, since the former can seem to imply that anything without a warning is true,” David wrote in the Times. 

So, what’s the solution?


How to fix misinformation on social media

Paul of NYU Stern thinks that, in one sense, the social media companies can never do enough.

“The platforms host too much traffic for even the best combination of artificial intelligence and human moderation to stop all harmful content,” he says.

“In addition to continuing to improve technological and human screening, they should be revising their basic algorithms for search, ranking, and recommendation. Currently, those algorithms still reportedly favor promotion of sensationalistic and anger-inducing content—a tendency that purveyors of harmful content exploit.”

A more modest step would be to remove, rather than label or demote, content that has been determined to be demonstrably false. This should be coupled with an increase in the number of content moderators, hired in-house rather than outsourced, says Paul.

