Why Facebook Is Banning Highly Manipulated Deepfake Videos

AUDIE CORNISH, HOST:

Deepfakes are banned, at least from Facebook. The company announced that it will no longer allow the high-tech manipulated videos to appear on its platform because of their potential to mislead users. The announcement comes the same week that Facebook is scheduled to appear before the House Energy and Commerce Committee. The story was first reported by Tony Romm for The Washington Post.

Welcome to the program.

TONY ROMM: Hey. Thanks for having me.

CORNISH: So we said deepfakes are manipulated videos. What else can you tell us? What are some examples we may have seen?

ROMM: These are highly manipulated videos. It's not just somebody taking video editing software and slowing something down or speeding it up. We're talking about the use of artificial intelligence and very high-powered computers that can be used to make it seem like someone is saying something that they actually aren't.

So imagine a video of President Trump speaking on the campaign trail and some malicious actor took that and altered what he was saying entirely, maybe to threaten an attack on a country that Trump wasn't actually threatening. These are the kinds of serious forms of manipulation and misinformation that are made possible because of deepfake technology.

CORNISH: So how is Facebook going about its ban? What's the distinction it's making?

ROMM: So Facebook has said that these sorts of deepfakes, videos that are manipulated in ways that aren't evident to an average user, are no longer allowed on the platform. So you can't post those kinds of videos where someone appears to be saying something that they aren't actually saying.

Now, there are some exceptions here, and they've given a lot of researchers and lawmakers great pause. For example, you're allowed to post deepfakes if they're done under the guise of satire or parody, which is a pretty open-ended standard on Facebook's part. And Facebook has also said that lesser forms of manipulation will be allowed. So that's created a whole controversy about whether there are loopholes that malicious actors could exploit to still spread misinformation online.

CORNISH: What's their reasoning for these - I guess you're calling them loopholes.

ROMM: Right. It all comes back to Facebook's idea that it is not the arbiter of speech. Facebook really doesn't want to be in the position of telling people what's true and what's false. And then you add in the difficulty of determining what's manipulated or what qualifies as parody and satire. Facebook typically wants to err on the side of allowing people to express themselves rather than shutting down that speech. It would rather let some of that bad stuff go out and police it after the fact than ban it outright.

Now, that hasn't sat well with lawmakers, experts and others who remember all too well that during the 2016 presidential election, malicious actors from Russia sought to spread misinformation and exploit loopholes at companies like Facebook to weaponize those social media platforms. So the concern here is that the same thing could happen under Facebook's new deepfake policy.

CORNISH: And yet they make this announcement just before they are supposed to offer testimony before House lawmakers. What are the politics that they're walking into?

ROMM: Yeah, it's funny how that works. Typically, Facebook does these things in time for congressional hearings. And the politics they're walking into - there are really two issues here. The first is that a lot of Democrats think that Facebook and its tech industry peers still haven't done enough in the aftermath of the 2016 election to ensure that the 2020 election isn't riddled with disinformation that could affect how people think about the candidates or how they vote at the end of the day. And the second issue is that there's still widespread concern that this deepfake policy isn't going to prevent the kinds of abuse that we've seen in the past.

So you may remember about a year ago, there was a video of House Speaker Nancy Pelosi that made the rounds on Facebook. It was doctored to make her look like she was drunk. Essentially, it was slowed down, so she appeared to be inebriated. Now, if you talk to experts, they don't call that a deepfake. They call it a cheap fake because it was made using less expensive technology. It wasn't artificial intelligence.

And that kind of video would still be allowed under Facebook's policy. Facebook says it would fact-check it, but it wouldn't outright ban it. And so there are a lot of concerns that that kind of thing could plague the 2020 election and the candidates running.

CORNISH: That's Tony Romm, senior tech policy reporter for The Washington Post.

Thank you for your time.

ROMM: Thanks for having me.

(SOUNDBITE OF MUSIC)

Transcript provided by NPR, Copyright NPR.