
Facebook, Google, Twitter tell lawmakers they’re doing more to safeguard elections


Officials from Facebook, Google and Twitter testified before the House Intelligence Committee on Thursday. (Getty Images)

Officials from Facebook, Google and Twitter signaled to lawmakers Thursday that the companies are more prepared to deal with misinformation on their platforms during the 2020 US presidential election, even as bad actors change tactics to try to evade detection.

Foreign interference has been a top concern for lawmakers after Russian trolls used social media sites to sow discord among Americans during the 2016 election. Since then, officials from all three companies say the firms have taken steps to remove fake accounts and clarify who’s behind political advertising. 

Still, lawmakers from the House Intelligence Committee expressed skepticism that the companies are doing enough this election season amid new threats such as deepfakes, artificial intelligence-powered videos that can make it seem like people are doing things they aren't. 

“I’m concerned about whether social media platforms like YouTube, Facebook, Instagram and others, wittingly or otherwise, optimize for extreme content. These technologies are designed to engage users and keep them coming back, which is pushing us further apart and isolating Americans into information silos,” Rep. Adam Schiff, who chairs the House Intelligence Committee, said during a virtual hearing about election security and foreign interference. Schiff, a Democrat from California, said he can’t say he’s “confident” that the 2020 election will be free of interference, even though it would be more difficult for Russians to run the same playbook.

In their opening remarks, representatives from Facebook, Google and Twitter outlined what their companies are doing to safeguard election security. 

Nathaniel Gleicher, who heads cybersecurity policy at Facebook, said more than 35,000 people work on safety and security at the company and that nearly 40 teams focus on elections. The company pulled down more than 52 separate networks in 2019 and is labeling posts by state-controlled media outlets. This week, Facebook launched a new online hub for voting information. 

“Over the past three years, we’ve worked to protect more than 200 elections around the world. We’ve learned lessons from each of these, and we’re applying these lessons to protect the 2020 election in November,” Gleicher said. 

Some lawmakers, though, scrutinized Facebook’s approach to political content. The company doesn’t send posts and ads from politicians to its third-party fact checkers. Last year, the social network also faced criticism for leaving up an altered video of House Speaker Nancy Pelosi that made it seem like she was slurring her words. 

In one of its most recent controversial decisions, Facebook left up posts by President Donald Trump that critics, including its own employees, say could incite violence. In one of the posts, Trump used the phrase "when the looting starts, the shooting starts" in response to news about protests that began after the death of George Floyd, a Black man who died in Minneapolis after a white police officer knelt on his neck for nearly 9 minutes. Twitter determined the same remarks violated its rules against "glorifying violence" and placed a notice over the tweet. Users could still see the tweet if they clicked on the notice.

During the hearing, Gleicher said he found Trump's remarks "abhorrent" but that Facebook's approach was "anchored in freedom of expression and respect for the democratic process."

“That post was so abhorrent as you said…that I find it abhorrent that you would have allowed that to stay up,” Rep. Raja Krishnamoorthi, a Democrat from Illinois, told Gleicher. 

Facebook has taken action against ads by Trump’s re-election campaign. On Thursday, Facebook removed ads from the Trump campaign for violating its rules against hate. The ads featured an inverted red triangle, a symbol used by the Nazis to designate political prisoners in concentration camps.  

"We don't allow symbols that represent hateful organizations or hateful ideologies, unless they're put up with context or condemnation," Gleicher said. Facebook will also automatically remove other content that includes the symbol, though the company isn't "perfect" when it comes to content moderation, he said.

At one point during the hearing, Schiff asked Facebook for more clarity about how its algorithm works and whether it prioritizes engagement and attention. Gleicher said his work doesn't focus on algorithms, so he would have to get back to the lawmaker. He said Facebook's algorithm takes many factors into account but couldn't say whether engagement was the top one.

Nick Pickles, who oversees global public policy strategy and development at Twitter, said the company, like Facebook, has rules against voter suppression, fake accounts and impersonating others. In 2019, Twitter banned political ads from the platform.

“Online political advertising represents entirely new challenges to civic discourse that today’s democratic infrastructure may not be prepared to handle,” Pickles said.

Twitter has also started fact-checking and labeling tweets, including posts by Trump, that contain misinformation about voting or the coronavirus.

Google, which owns video service YouTube, said that during the 2016 election the company found relatively little government activity that violated its rules. Richard Salgado, director for law enforcement and information security at Google, said advertisers purchasing US election ads now need to verify who they are and that Google discloses who paid for each ad. Like Twitter and Facebook, Google also has a searchable database of ads. 

“Looking ahead to the November elections, we know that the COVID-19 pandemic, widespread protests and other significant events can provide fodder for nation states or disinformation campaigns,” Salgado said.

To start the questioning, Schiff singled out Google as being known as the “least transparent” of the big tech companies when it comes to disinformation. “How do you respond to the criticism that Google has essentially adopted a strategy of keeping its head down and avoiding attention to its platform while the others draw heat?” Schiff asked.

Salgado denied the claim, adding that YouTube, as well as Google's advertising unit, releases regular transparency reports. But he wouldn't commit to Google building a database of disinformation posts, as Twitter has, that would allow researchers to study the content.


