
How Facebook, Twitter Can Fight Fake News

SAN FRANCISCO -- Online platforms such as Facebook, Google, Twitter and lesser-known ad networks need to do more to combat fake news and other forms of disinformation, despite the financial consequences, a security expert said at the RSA Conference here Thursday (April 19).

Credit: Panuwat Phimpha/Shutterstock

"If you're running one of these companies, you need to aggressively block ad buys on sites that traffic in misinformation, clickbait or hate speech -- even if it means a hit to your bottom line," said Dr. Daniel Rogers, CEO and co-founder of Terbium Labs in Baltimore. 

Rogers said Facebook, which he called "a weapon of mass influence," and Twitter weren't doing enough to stop the spread of fake news, because more fake content and more political arguments online mean more money for them.

"I hope Facebook sees this presentation," he said. "I hope Twitter sees this. I want them to be accountable."


Fake news is an information-security problem, Rogers said, even if it's not often acknowledged as such. He explained that properly handled data needs to possess three attributes: confidentiality, integrity and availability.

"We worry about confidentiality and availability," Rogers said, such as by guarding against data breaches or making sure servers stay online. "But we need to worry as much about integrity."

"Putin influencing the U.S. elections is definitely a hack," Rogers continued. "It's not technical, but it's an information-security problem."

Rogers said he didn't like the term "fake news," preferring "disinformation" instead. He defined that as information that someone has deliberately manufactured, distorted or taken out of context in order to fool others.

Misinformation, he said, was similar to disinformation, but less malicious because it was the result of someone's honest mistake or misunderstanding.

Referring to a chart compiled by Harvard University's First Draft News website, Rogers named seven categories of misinformation and disinformation, ranging from satire and parody on one extreme ("no intention to cause harm, but has potential to fool") to fabricated content on the other ("new content that is 100 percent false, designed to deceive and do harm").

In between were false connections, such as misleading headlines or photo captions; misleading content, such as tying together unrelated events; false context that frames an issue in a misleading way; impostor content, when falsehoods are attributed to otherwise reliable sources; and manipulated content, such as when a video or photo is doctored.

The Russians, Rogers said, were experts at disseminating all these kinds of disinformation.

"RT [formerly Russia Today] happens to be the most trusted news source in parts of the world such as the Caribbean," he said. "Sixty to eighty percent of what they publish is legitimate. But twenty to forty percent is not, having been crafted by Russian propaganda experts."

Russian trolls working from Saint Petersburg created American online protest groups on both the right and the left during the 2016 presidential election campaign, Rogers said. In one instance, the Russians organized both a Black Lives Matter protest and a white-supremacist counter-protest in San Antonio on the same day in May 2016.

"If you're running one of these companies, you need to aggressively block ad buys on sites that traffic in misinformation, clickbait or hate speech -- even if it means a hit to your bottom line," said Dr. Daniel Rogers, CEO and co-founder of Terbium Labs

Such influence operations, as they're called, are easy to perform, he said. First, find a divisive issue, such as race relations in the U.S. or nationality issues in Ukraine; then amplify the extremes around the issue ("appeal to base emotions"); next, pre-emptively claim victimhood, such as by saying others are trying to silence you; and finally, watch the discord and strife unfold from a distance.

"If you want to win a vote, scratch a bigot's itch," Rogers said, quoting Gilbert and Sullivan from the 1878 comic opera "HMS Pinafore."

So what can be done about fake news? Rogers said there were many high-minded solutions that wouldn't work, such as encouraging "news literacy," creating some kind of arbiter to decide what's true and what isn't, or demanding that American consumers be "smarter."

Instead, he said, the U.S. should look to the countries of Eastern and Central Europe that have been dealing with Russian disinformation campaigns for more than a decade.

"Call out fake news as soon as you see it," Rogers said. If you have doubts about a story, check out the websites Politifact, First Draft or the Trust Project.

But big online platforms have a responsibility to combat fake news as well, Rogers said.

"If you run a social network, go aggressively after bots and bot armies used to disseminate misinformation and state propaganda," he said.

"If you run a content platform" -- he listed Facebook, Twitter and Disqus -- "enforce hate-speech policies and abuse policies. Stop enabling the organized dissemination of extremely divisive messaging."

"You must take a side," he said, addressing the large online platforms.

Asked by Tom's Guide why Twitter was not doing more to combat bots, which are generally easy to spot and which are often created en masse, Rogers was blunt.

"It's part of Twitter's business model for there to be bots on Twitter," he said. "Something like fifteen to twenty percent of Twitter users are bots. They tolerate it to maintain their share price. I would consider it stock fraud, but that's just me."