How Facebook, Twitter Are Looking to Stop Voter Suppression for US Midterm Elections


Facebook and Twitter aren’t just trying to drive people to the polls – they’re racing to battle back bad actors who seek to deter their users from voting.

With the 2018 midterms days away, both social media platforms are waging a quiet war against fast-spreading falsehoods about how, when and where to vote — including posts containing inaccuracies about how to mail in ballots, or doctored photos that show long lines at polling stations. To do so, they’re taking new, aggressive steps to scan, vet and remove content that they see as a direct threat to democracy.

For Facebook and Twitter, the challenge is to ensure that false information about voting — potentially seeded by foreign governments or malicious domestic actors, then unwittingly amplified by web users — doesn’t deter or intimidate voters on November 6. Typically, these web giants shy away from correcting or removing false comments on their platforms, arguing they aren’t arbiters of truth.

For example, Twitter users in recent weeks have fueled a rumor that federal immigration agents might be stationed at polling places across the country checking voters’ citizenship status. “I hear ICE agents will be at polling stations on election day, looking to deport illegals trying to vote,” read an Oct. 28 tweet.

The post, which had gained little traction, was removed after The Washington Post contacted Twitter about it on Thursday. Experts fear such tweets might scare immigrants who have obtained citizenship and are allowed to vote from casting ballots, and could go viral. (The Trump administration has said — on Twitter – that it does not conduct “enforcement operations at polling locations.”)

But both tech companies are proceeding cautiously, trying to find the right balance between combating perceived voter suppression and preserving free expression. “STOP VOTER FRAUD WEAR A ICE HAT ON ELECTION DAY,” suggested another tweet that was still viewable on Twitter as of Friday.

Facebook and Twitter say they have fine-tuned their policies — and their algorithms — in a bid to thwart threats and misinformation around voting. Government officials also stress they are keeping watch, and many state-election leaders and voting-rights organizations say they have reported problematic posts to the companies.

Monitoring for misleading messages, however, is not an easy task — and the stakes for tech giants are sky high after suspicious accounts with possible Russian ties used similar tactics during the 2016 election. On Twitter, they targeted their inaccurate voting information specifically to Hispanic, African-American and LGBT voters, according to documents released by congressional lawmakers.

“We’re concerned there’s going to be misinformation,” said Jim Condos, the secretary of state for Vermont and the leader of the National Association of Secretaries of State.

The heightened oversight complements the get-out-the-vote reminders and other tools that will be available atop Facebook’s News Feed and Twitter’s timeline come Election Day. On Friday, Twitter announced it would display an election countdown and links to resources for users to learn more about their local candidates. Other companies like Snap and Spotify similarly are encouraging their users to vote.

Attempts to depress turnout are hardly new: For decades, government officials have battled back anonymous snail-mail flyers and robo-calls that misled voters on the locations of their polling places or the date of the election. But voter suppression increasingly has become more of a digital scourge — from robo-texts en masse to viral photos and videos in the age of Facebook and Twitter.

“Technology has made that information delivery more efficient and cheap, therefore potentially much more widespread,” said Wendy Weiser, the director of the Democracy Program at the Brennan Center for Justice at NYU School of Law. “And it has made it more difficult for people to detect.”

Facebook this year has set up a special reporting channel for state election officials to flag voting misinformation for review and removal. Behind the scenes, it has also deployed machine-learning tools to scan for obviously false content, including posts that share the wrong date of the election, a company official said Friday.

Under rules it revised in October, Facebook has banned an even broader swath of content, including posts that wrongly claim that people can vote online. And the company has said it would send some posts — like incorrect claims about long lines at polling places — for fact checkers to review, an official explained Friday. If they’re found to be false, Facebook has said it will limit their reach in users’ news feeds.

Twitter has adopted a different approach: It has partnered with the National Association of Secretaries of State (NASS) and the National Association of State Election Directors (NASED). Both organizations, along with the Democratic and Republican Parties, are supposed to funnel reports of voting disinformation through a web portal for Twitter employees to review for potential removal, the groups said this week.

Company executives said they already took action against multiple tweets involving immigration agents at polling places. Another meme, purporting to be from the Democratic Party, suggested men stay home on Election Day to amplify the votes of women. Twitter also required the user to remove that one.

“In order to make sure that we are prepared to counter and combat things like attempting to spread mis- or disinformation about elections or voting, or registering to vote . . . we’ve really been trying not just to tighten our policies, but our enforcement capacity,” said Del Harvey, vice president for trust and safety at Twitter.

State-election regulators also have taken advantage of these tools. In Nevada and Oregon, for example, officials flagged tweets that wrongly suggested Democrats are supposed to turn in their ballots on November 7, the day after the election. Twitter has since taken the posts down. Organizations like NASED reported the tweets, even though they gained limited traction, to prevent the misinformation from going viral — and because they can’t tell who’s behind the posts in the first place.

“We don’t know on the front end if something is an innocent mistake. We don’t know on the front end if someone thinks they’re being funny. We don’t know on the front end if it’s part of a larger misinformation campaign,” said Amy Cohen, the group’s executive director. “But we’re focused on addressing the misinformation, regardless of the messenger.”

To state election officials, voting-rights advocates and academics, the new systems represent an important start. Those companies also are working alongside the Department of Homeland Security and FBI to monitor online threats. But fears linger that social media companies won’t respond quickly enough on Election Day. Other watchdogs fret they have limited visibility into what people are sharing on Facebook, Twitter and other sites in real time. Misinformation can easily spread across social-media sites, making it more difficult to track coordinated campaigns.

Election Protection, a coalition of voting-rights organizations, said it received a report in recent days about a post on Facebook — bearing the logo of the Department of Homeland Security — that was similar to the tweets about ICE agents at polling stations. The group since has shared it with Facebook, which said such posts violate its policies.

“Falsely claiming that ICE or law enforcement will be at polling places is a common voter suppression tactic,” said David Brody, senior fellow for privacy and technology at the Lawyers’ Committee for Civil Rights Under Law, one of the organizations that make up Election Protection.

In Vermont, Condos said his state is one of many that lack the staff, expertise and money to devote an entire team to monitoring election misinformation.

“We just rely on people contacting us and letting us know,” he said.

Experts stress it’s hard to gauge the broader effects of these posts, tweets and other content on voters’ behaviors — absent more data. On Facebook, in particular, the site’s efforts to limit the reach of content designed to dissuade voters make it hard for researchers to track what’s happening there.

“Right now, all we see are anecdotes about bad action – but we have no sense of the prevalence or how effective it might be,” said Nathaniel Persily, a professor at Stanford Law School.

For social media sites, though, one test came Tuesday, when conservative commentator Dinesh D’Souza told more than 1 million followers on Twitter, “Felons can’t vote. But pardoned ones can @realDonaldTrump @tedcruz.”

D’Souza previously pleaded guilty to violating federal campaign finance laws but was pardoned by President Donald Trump this year. His tweet had been shared more than 6,000 times by Friday, and D’Souza posted a similar version on Facebook that picked up more than 2,700 shares. While the post may have been meant as a joke, it’s also false: In many states, ex-offenders can vote — pardon not required.

Both posts remain in public view.

© The Washington Post 2018
