How big tech companies could team up to stop deepfakes

Thanks to everyone who came out for last night’s sold-out event with Anna Wiener! We can’t wait to meet even more of you at our next Interface Live. Expect details soon!

The president was narrowly acquitted in his impeachment trial in the Senate on Wednesday, as Republican partisans refused to even hear evidence in the case. The move ushered in a legitimately scary new era of American politics, in which Trump has effectively been given free rein to indulge his most corrupt and authoritarian impulses, with little to no reason to fear that the legislative branch will hold him in check. (He celebrated acquittal by tweeting a graphic showing him staying in power for tens of thousands of years.)

The institutions that prop up our democracy — that uphold the rule of law — are eroding. That’s one reason why the spread of misinformation, hate speech, and other malicious content on social platforms has felt like such a crisis in the past three years. An enormous chunk of our political discourse takes place on, or is informed by, what we see on Facebook, YouTube, and Twitter. As they’ve grown in size and influence, they’ve all become institutions in their own right. An open question is whether they can have a positive effect in upholding democratic values and the rule of law — or whether they will accelerate the polarization of the electorate until it reaches some awful breaking point.

For that reason, around here we pay attention when the platforms take action to fight disinformation. To be sure, even a perfect information environment doesn’t guarantee a good outcome in a democracy — the case against Trump included multiple smoking guns, and Republican senators simply chose to ignore them — but governance benefits from a shared set of facts. So let’s see what they’re up to.

On Monday, YouTube laid out its policies for handling disinformation. None of the policies are new, but the announcement served as a kind of statement of purpose ahead of the (disastrous!) Iowa caucus. Here’s Julia Alexander in The Verge:

When it comes to manipulated videos, YouTube will remove “content that has been technically manipulated or doctored in a way that misleads users (beyond clips taken out of context) and may pose a serious risk of egregious harm.” The company previously took down a video of House Speaker Nancy Pelosi that was manipulated to make her appear intoxicated. [...]

Videos that give people incorrect information about voting, including attempts to mislead them with an incorrect voting date, are also not allowed. Neither are videos that advance “false claims related to the technical eligibility requirements for current political candidates and sitting elected government officials to serve in office.” YouTube will further terminate channels that attempt to impersonate another person or channel, or artificially increase the number of views, likes, and comments on a video.

Tech platforms are generally loath to evaluate claims of truth, particularly those involving politicians, but the big three are all dead set on removing anything that gives the wrong date for an election. Confidence in content moderation! You love to see it.

On Tuesday, Twitter followed suit with some fresh new policies to counter disinformation — specifically, altered photos and videos, or as they are increasingly being called, synthetic media. Beginning in March, Twitter said, it will add labels or outright remove deepfaked tweets. Here’s Davey Alba and Kate Conger in the New York Times:

To determine whether a tweet should be removed or labeled, Twitter said in a blog post, it will apply several tests: Is the media included with a tweet significantly altered or fabricated to mislead? Is it shared in a deceptive manner? In those cases, the tweet will probably get a label.

But if a tweet is “likely to impact public safety or cause serious harm,” it will be taken down. Twitter said it might also show a warning to people before they engaged with a tweet carrying manipulated content, or limit that tweet’s reach.
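To make the structure of the policy concrete, here is that decision logic rendered as a toy Python function. The predicate names are mine, not Twitter’s, and the hard part in practice is deciding the booleans, not combining them:

```python
from enum import Enum

class Action(Enum):
    ALLOW = "allow"    # no intervention
    LABEL = "label"    # label, warn before engagement, or limit reach
    REMOVE = "remove"  # take the tweet down

def moderate(significantly_altered: bool,
             shared_deceptively: bool,
             likely_serious_harm: bool) -> Action:
    """Toy encoding of the tests described in Twitter's blog post."""
    # Not altered and not shared deceptively: the policy doesn't apply.
    if not (significantly_altered or shared_deceptively):
        return Action.ALLOW
    # Likely to impact public safety or cause serious harm: removal.
    if likely_serious_harm:
        return Action.REMOVE
    # Misleading but not dangerous: probably a label.
    return Action.LABEL
```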

This is much more difficult than removing tweets that misstate the date of the election. For one thing, it leaves open the question of how Twitter will handle parody and satire. Still, I liked this quote from Yoel Roth, the company’s head of site integrity: “Whether you’re using advanced machine learning tools or just slowing down a video using a 99-cent app on your phone, our focus under this policy is to look at the outcome, not how it was achieved.”

Looking at the outcome is a useful frame for making individual policy decisions. There are lots of terrible pieces of social content that are essentially harmless, because no one sees them. And then there are the small few that go viral and do lots of damage. It makes sense that Twitter would focus its moderation efforts at that level. Promising to intervene in cases where there is serious harm isn’t just sensible — it’s also scalable.

Elsewhere, disinformation researcher Aviv Ovadya has some good suggestions for how tech platforms can respond to the threat of synthetic media. My favorite: they could use their monopoly powers to require app developers to insert watermarks — which could then be easily detected by other tech monopolies. Ovadya writes:

A further lever that could make these controls more ubiquitous would be if the Apple and Google app stores required all synthetic media tools to implement them. This would then have those companies impacting creation and partially governing how synthetic media can be created on their platforms. Finally, a company like Facebook could also take advantage of the existence of hidden watermarks to treat synthesized content differently, impacting distribution (and governing their own influence; though they may be able to offer some of that governing power to independent bodies, as they do with third-party fact checkers).

All of these restrictions are limited in impact — for example, malicious actors might still find tools that don’t have any controls. But with the right incentives, those tools are likely to be harder to access and inferior in quality, as they may be more difficult to monetize if they are not available on popular platforms. No mitigation to this challenge is a silver bullet. We need defense-in-depth.

Many of the concerns I have about Big Tech are rooted in the sheer size of the companies, and all the unintended consequences that come with scale. If the big guys want to show off the benefits that come with scale, insisting on an ecosystem that watermarks synthetic media could be an excellent place to start.
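For a sense of how that might work mechanically, here is a deliberately naive Python sketch of creation-time embedding and upload-time detection (assuming numpy and Pillow; the tag and function names are hypothetical). It hides a fixed bit pattern in the low bits of an image’s first few bytes — something a production scheme could never get away with, since real watermarks have to be imperceptible and robust to recompression, cropping, and deliberate removal:

```python
import numpy as np
from PIL import Image

MARK = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # hypothetical 8-bit tag

def embed_watermark(path_in: str, path_out: str) -> None:
    """What a synthetic-media tool would do at creation time."""
    pixels = np.array(Image.open(path_in).convert("RGB"))
    flat = pixels.reshape(-1)  # contiguous view over all channel bytes
    flat[: MARK.size] = (flat[: MARK.size] & 0xFE) | MARK  # clear low bit, write tag
    Image.fromarray(pixels).save(path_out, format="PNG")  # lossless, so the bits survive

def has_watermark(path: str) -> bool:
    """What a platform would check at upload time."""
    flat = np.array(Image.open(path).convert("RGB")).reshape(-1)
    return bool(np.array_equal(flat[: MARK.size] & 1, MARK))
```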

The Ratio

Today in news that could affect public perception of the big tech platforms.

Trending up: YouTube sent Clearview AI a cease-and-desist letter demanding that the facial-recognition company stop scraping its site to build a database of faces for law enforcement. Clearview’s CEO made clear he plans to fight it, though — and legal precedent is on his side.

Trending down: Instagram is still promoting anti-vaxx content. It keeps saying that it has removed vaccine misinformation, but reporters keep finding it.

Governing

Cybersecurity experts examined The App from the doomed Iowa caucus and said it looked “hastily thrown together” and that it was built by someone who was “following a tutorial.” Drag them! Jason Koebler, Joseph Cox, and Emanuel Maiberg of Vice explain what happened:

Election security experts have been saying for years that we should not put election systems online, and that we shouldn’t be using apps to transmit results. And if U.S. election officials are going to use apps like this, they should be open to scrutiny and independent security audits.

“We were really concerned about the fact there was so much opacity. I said over and over again: trust is the product of transparency times communication. The DNC steadfastly refused to offer any transparency. It was hard to know what to expect except the worst,” Greg Miller, cofounder of the Open Source Election Technology Institute, which publicly warned the IDP against using the app weeks ago, told Motherboard.

Shadow, the company behind the Iowa caucus app, is part of a web of startups connected to the well-funded nonprofit Acronym. Here’s how it landed a contract with the Iowa Democratic Party — and why the project went so wrong. (Emily Glazer, Deepa Seetharaman, and Alexa Corse / The Wall Street Journal)

The first legal challenge to Singapore’s law against online misinformation was rejected today. It’s a blow to opponents who say the law is being used to stifle dissent before elections. Which, y’know, it is.

The Justice Department is ramping up its antitrust probe of Google, reaching out to more than a dozen companies including publishers, advertising firms and agencies. The move suggests Google’s online ad tools have become a major focus of the investigation. (Keach Hagey and Rob Copeland / The Wall Street Journal)

White House adviser Peter Navarro said Amazon CEO Jeff Bezos backed out of a meeting the two had planned, regarding counterfeit products on the platform. An Amazon spokeswoman said senior executives have met with Navarro and other White House officials “on multiple occasions.” (Jeff Stein and Abha Bhattarai / The Washington Post)

Amazon’s search for government incentives for its second headquarters location was reportedly driven in part by Jeff Bezos’ jealousy of Elon Musk. Bezos wanted Amazon to receive government handouts on the scale of what Tesla got when it opened its plant in Nevada. (Spencer Soper, Matt Day, and Henry Goldman / Bloomberg)

California’s new privacy law has created a new market for privacy-focused startups. They’re offering personal data scrubbing and software to help companies comply with the law. (David Ingram / NBC)

Last year, Waymo re-classified some of its workers from contractors to vendors. Now, those same workers are complaining about slashed benefits and unruly customers. They say the company hasn’t been very responsive to their concerns. (Colin Lecher and Andrew J. Hawkins / The Verge)

Face masks are mandatory in at least two provinces in China, as the government tries to contain the coronavirus. Now, residents are saying the masks trip up facial recognition technology, which is used for many everyday transactions. (Anne Quito / Quartz)

Companies like WeChat and ByteDance are working to stop misinformation about the coronavirus in China. They’re filling a void left by the government, which has been slow to acknowledge the crisis. (South China Morning Post)

Regulators in Ireland launched inquiries into Google and Tinder over how they process user data. They currently have 23 ongoing inquiries into big US tech companies, which also include Facebook and Twitter. (Associated Press)

Industry

Facebook is shutting down the mobile web arm of its Audience Network starting on April 11. The network offered advertisers a way to extend their Facebook ad campaigns to a network of third-party apps. Lara O’Reilly at Digiday explains the decision:

The open web environment outside Facebook’s properties has changed significantly in the years since its Audience Network launched. The majority of browsers have now turned off third-party web tracking by default. And Google, whose web browser commands the largest market share, indicated last month that it plans to switch off support for third-party cookies within two years. That move would likely hamper Facebook’s Audience Network.

The open real-time bidding environment is also under increasing scrutiny from regulators, particularly in Europe where the U.K.’s data protection authority has repeatedly called on ad tech companies to clean up their act or else face penalties under Europe’s General Data Protection Regulation.

Facebook issued a security advisory about a flaw in WhatsApp Desktop that could allow an attacker to remotely access files on a user’s computer. The company has shipped a new version of the app to fix the flaw. (Sean Gallagher / Ars Technica)

Jeff Weiner announced he is stepping down as CEO of LinkedIn, after 11 years running the company. Ryan Roslansky, LinkedIn’s senior vice president of product, is taking over. Weiner will become executive chairman — a fake job befitting a former CEO of LinkedIn. (Nicholas Thompson / Wired)

The pro-Trump media outlet The Epoch Times has started pouring money into YouTube ads, after being kicked off Facebook last year. The paper has been known to spread conspiracy theories. (Kevin Roose / The New York Times)

YouTube has pledged to spend $100 million on kids programming. It’s focused on videos that “drive outcomes associated with the following character strengths”: Courage, compassion, communication, gratitude, curiosity, humility, teamwork, integrity, perseverance, self-control, empathy and creativity. Make the president watch it! (Lucas Shaw / Bloomberg)

BuzzFeed is recruiting teenagers to make election-themed TikTok and Instagram videos. The teen ambassadors will be part of the outlet’s 2020 coverage. (Sarah Scire / NiemanLab)

Justin Bieber is really trying to make his song “Yummy” go viral on TikTok. It’s the latest example of the platform’s cultural relevance — and of the way stars are eternally damned to master new platforms if they hope to maintain their place in the zeitgeist. (Julia Alexander / The Verge)

Tinder brought in $1.2 billion in revenue last year, the company’s latest earnings release shows. Its parent company Match Group made $2.1 billion, meaning Tinder accounts for more than half its revenue. (Ashley Carman / The Verge)

Tinder’s chief product officer, Ravi Mehta, has left the company after less than a year in the role. He allegedly clashed with CEO Elie Seidman about the direction of the app. Now Seidman will run the product team. (Alex Heath / The Information)

And finally...

It’s been a rough decade for Tyler.

Talk to us

Send us tips, comments, questions, and reasons to be hopeful about American democracy. Please! casey@theverge.com and zoe@theverge.com.
