Facebook Plans Big Overhaul of Political Ads After Criticism – by Sarah Frier September 21, 2017, 5:36 PM PDT


  • CEO Zuckerberg vows more transparency ahead of Senate grilling
  • Company to give details of Russian election ads to Congress

 

Facebook Pledges Sweeping Overhaul of Political Ads

Facebook Inc., under fire over Russia’s use of its social network to spread pre-election discord in the U.S. last year, pledged a sweeping overhaul of political advertising and said it will give Congress all the evidence it has on the campaigns.

More than 3,000 Facebook ads linked to Russia have already been studied by special counsel Robert Mueller, who is investigating President Donald Trump’s ties to the country. Facebook initially didn’t want to share detailed information like this with Congress, but changed its mind on Thursday after a lengthy privacy and legal review.

“I don’t want anyone to use our tools to undermine democracy,” Chief Executive Officer Mark Zuckerberg said in a video on Facebook Thursday. “That’s not what we stand for. The integrity of elections is fundamental to democracy around the world.”

Read More: Mark Zuckerberg’s Fake News Problem Isn’t Going Away

Many Facebook ads are bought through a self-service system that doesn’t require interaction with a salesperson, making it harder to know who’s behind a purchase. Zuckerberg said that while it would be impossible to totally eliminate abuse of the system, Facebook can make it much more difficult for bad actors to be effective. The changes Zuckerberg outlined Thursday represent an attempt to bring more transparency to the process and prevent a regulatory crackdown.

He said he will add 250 employees to work on election integrity and make political ads on Facebook more transparent. For example, users will be able to visit an advertiser’s page and see all the other political ads that advertiser is running for different audiences on the social network. Meanwhile, the company will work more closely with election officials and other technology companies to share information on any troubling marketing campaigns.


“It is a new challenge for internet communities to have to deal with nation-states attempting to subvert elections,” Zuckerberg said. “But if that’s what we must do then we will rise to the occasion.”

The response comes after widespread criticism, especially from Democrats, about the company’s lack of clarity and cooperation. Some have called for tighter rules or regulation of online election ads, which don’t have the same requirement as television ads to include a clear sponsor and maintain a public record. Facebook may head off that threat if it satisfies lawmakers by regulating itself. Lawmakers, for their part, indicated they view Thursday’s moves as a good sign, but not quite enough. Facebook is expected to appear before a congressional committee in October.

Mark Warner of Virginia, the top Democrat on the Senate Intelligence Committee and one of the company’s loudest critics, wrote on Twitter that Facebook’s move was an “important & absolutely necessary first step.”

Representative Adam Schiff of California, the senior Democrat on the House Intelligence Committee, said the Facebook data will help the panel try to figure out whether the company was tough enough in its internal investigation and why it took so long to learn of the ads.

Article continues:

Facebook among tech firms battling gag orders over government surveillance – Olivia Solon Last modified on Monday 10 July 2017 10.01 BST


US government prevents companies from revealing many user data requests – a practice which firms and civil liberties activists call unconstitutional

Tech companies and civil liberties campaigners say the gag orders violate the first and fourth amendments. Photograph: Leon Neal/AFP/Getty Images

Tech companies including Facebook, Twitter and Microsoft are fighting gag orders from US courts preventing them from talking about government surveillance of their users, arguing it has a chilling effect on free speech.

Facebook, Twitter and Microsoft all have policies to notify users of government requests for account information unless they are prohibited by law from doing so in exceptional circumstances such as life-threatening emergencies, child sexual exploitation and terrorism.

However, it seems that the US government is attaching gag orders – many with no time limit – to their data requests in about half of all cases. This means that people are having their digital lives ransacked without their knowledge and with no chance for public scrutiny or appeal.

Tech companies and civil liberties campaigners argue that the gag orders are unconstitutional, violating the fourth amendment, which gives people the right to know if the government searches or seizes their property, and the first amendment, which protects the companies’ right to talk to their customers and discuss how the government conducts its investigations.

Article continues:

Facebook’s Tough-on-Terror Talk Overlooks White Extremists – Sam Biddle July 6, 2017, 8:38 a.m.


Photo: Mohammed Elshamy/Anadolu Agency/Getty Images

Publicly traded companies don’t typically need to issue statements saying that they do not support terrorism. But Facebook is no ordinary company; its sheer scale means it is credited as a force capable of swaying elections, commerce, and, yes, violent radicalization. In June, the social network published an article outlining its counterterrorism policy, stating unequivocally that “There’s no place on Facebook for terrorism.” Bad news for foreign plotters and jihadis, maybe, but what about Americans who want violence in America?

In its post, Facebook said it will use a combination of artificial intelligence-enabled scanning and “human expertise” to “keep terrorist content off Facebook, something we have not talked about publicly before.” The detailed article takes what seems to be a zero-tolerance stance on terrorism-related content:

We remove terrorists and posts that support terrorism whenever we become aware of them. When we receive reports of potential terrorism posts, we review those reports urgently and with scrutiny. And in the rare cases when we uncover evidence of imminent harm, we promptly inform authorities. Although academic research finds that the radicalization of members of groups like ISIS and Al Qaeda primarily occurs offline, we know that the internet does play a role — and we don’t want Facebook to be used for any terrorist activity whatsoever.

Keeping the (to put it mildly) highly motivated membership of ISIS and Al Qaeda off any site is no small feat; replacing a banned account or deleted post with a new one is a cinch. But if Facebook is serious about refusing violent radicals a seat at the table, it’s only doing half its job, as the site remains a cozy home for domestic — let’s be frank: white — extremists in the United States, whose views and hopes are often no less heinous and gory than those of the Islamic State.

In Facebook’s post, ISIS and Al Qaeda are mentioned by name 11 times, while the word “domestic” doesn’t appear once, nor are U.S.-based terror networks referenced in any other way. That gap in the company’s counterextremism policy is curious.

Article continues:

Leaks ‘expose peculiar Facebook moderation policy’ – BBC News May 22, 2017


Facebook logo. Photograph: Getty Images

The guidelines Facebook uses to decide what users see are ‘confusing’, say staff

 

How Facebook censors what its users see has been revealed by internal documents, the Guardian newspaper says.

It said the manuals revealed the criteria used to judge whether posts were too violent, sexual, racist or hateful, or whether they supported terrorism.

The Guardian said Facebook’s moderators were “overwhelmed” and had only seconds to decide if posts should stay.

The leak comes soon after British MPs said social media giants were “failing” to tackle toxic content.

Careful policing

The newspaper said it had managed to get hold of more than 100 manuals used internally at Facebook to educate moderators about what could, and could not, be posted on the site.

The social network has acknowledged that the documents seen by the newspaper were similar to what it used internally.

The manuals cover a vast array of sensitive subjects, including hate speech, revenge porn, self-harm, suicide, cannibalism and threats of violence.

Facebook moderators interviewed by the newspaper said the policies Facebook used to judge content were “inconsistent” and “peculiar”.

The decision-making process for judging whether content about sexual topics should stay or go was among the most “confusing”, they said.

The Open Rights Group, which campaigns on digital rights issues, said the report started to show how much influence Facebook could wield over its two billion users.

Article continues:

Facebook Battles Snapchat Over Future of the Camera – Mathew Ingram Apr 18, 2017


SAN JOSE, CA – APRIL 18: Facebook CEO Mark Zuckerberg delivers the keynote address at Facebook’s F8 Developer Conference on April 18, 2017 at McEnery Convention Center in San Jose, California. The conference will explore Facebook’s new technology initiatives and products. (Photo by Justin Sullivan/Getty Images)

Snapchat’s name didn’t come up during Mark Zuckerberg’s address at Facebook’s annual developer conference on Tuesday, but the company’s presence was felt regardless, since Facebook’s strategy consists largely of colonizing the ground already staked out by its smaller competitor.

This became immediately apparent even before the Facebook CEO started his keynote, when Snapchat’s parent, Snap Inc., announced that it had added 3D “lenses” or filters to its Snapchat app, which lets users add virtual elements like rainbows to real-world locations.

Just hours after that news broke, Zuckerberg announced that Facebook was rolling out similar 3D add-ons that combine the real world and the virtual, including ways of adding animated effects to real objects. Plants can be given virtual flowers, 3D games can be played on real tabletops, and virtual notes can be left in real locations.

The key insight behind all of this, the Facebook CEO said, is that augmented reality’s near future is one in which smartphone cameras are the key interface, not the bulky headsets or eyeglasses used for full-scale virtual reality.

Article continues:

‘Disputed by multiple fact-checkers’: Facebook rolls out new alert to combat fake news – Elle Hunt – Tuesday 21 March 2017 20.37 EDT


Feature – which flags content as ‘disputed’ – trialled on story that falsely claimed thousands of Irish people were brought to the US as slaves

The warning message that appears when some Facebook users try to post a fake news article. Photograph: Facebook

Facebook has started rolling out its third-party fact-checking tool in the fight against fake news, alerting users to “disputed content”.

The site announced in December it would be partnering with independent fact-checkers to crack down on the spread of misinformation on its platform.

The tool was first observed by Facebook users attempting to link to a story that falsely claimed hundreds of thousands of Irish people were brought to the US as slaves.

Titled “The Irish slave trade – the slaves that time forgot”, the story published by the Rhode Island entertainment blog Newport Buzz was widely shared on the platform in the lead-up to St Patrick’s Day on 17 March.

For some users, attempting to share the story prompts a red alert stating the article has been disputed by both Snopes.com and the Associated Press. Clicking on that warning produces a second pop-up with more information “About disputed content”.

“Sometimes people share fake news without knowing it. When independent fact-checkers dispute this content, you may be able to visit their websites to find out why,” it reads. “Only fact-checkers signed up to Poynter’s non-partisan code of principles are shown.”

The Poynter code promotes excellence in non-partisan and transparent fact-checking for journalism. The pop-up also links to Snopes.com, AP and Facebook’s official help page.

Article continues:

Homeland Security Asks Travelers for Facebook and LinkedIn Accounts – Jeff John Roberts Updated: Dec 23, 2016 9:54 AM PST


Customs Agents On The New York And Canada Border

Starting this week, the federal government began asking some travelers to the U.S. to supply details about their social media accounts. As you can see below, Uncle Sam now wants visitors to disclose their presence on popular services like Facebook, Instagram and Twitter in what appears to be a long-shot attempt to screen for terrorists.

The collection of social media data, which was first proposed by Homeland Security this summer, does not apply to U.S. citizens. Instead, it is for now aimed at foreigners from 32 countries who apply to arrive in the U.S. under the “visa waiver program,” which lets short-term visitors skip the formal process of applying for a visa.

Here is a screenshot from the online application that shows a list of social networks in the drop-down menu:

Screenshot of the online visa-waiver application’s social media drop-down menu, December 23, 2016.

Article continues:
