Posted by Bill Ferriter on Sunday, 11/20/2016
One of the most interesting conversations currently taking place around Donald Trump's surprise victory in our Presidential election has been the role that fake news peddled and promoted in Facebook news streams may have played in swaying voters. Mark Zuckerberg -- Facebook's charismatic founder -- has called the notion that fake news is a problem on his site "a pretty crazy idea" and argued that a clear process is in place that allows users to flag suspicious or hateful content for further review.
But that position was openly challenged over and over again all week long.
Buzzfeed, a popular online source covering digital media and technology, opened the criticism by publishing the frightening results of an analysis of the election stories generating the most engagement -- think likes, shares and comments -- on Facebook in the final three months of the election. Here's what they found: in those final months, the top-performing fake election stories generated more total engagement on Facebook than the top stories from major news outlets.
NPR went on to interview Facebook executives and employees to gain insight into just what happens when suspicious or hateful content is flagged for review on the site.
Turns out, the process isn't consistent, thorough or reliable. It's true that every piece of content is reviewed by a human being, but those human beings are mostly working in other countries simply because Facebook has subcontracted the work to save money. Worse yet, while every decision is supposed to take the complete context of a situation into consideration, employees are evaluated based on the number of pieces of content that they review in a single day.
From the NPR article:
"Current and former employees of Facebook say that they've observed these subcontractors in action; that they are told to go fast — very fast; that they're evaluated on speed; and that on average, a worker makes a decision about a piece of flagged content once every 10 seconds.
Let's do a back-of-the-envelope calculation. Say a worker is doing an eight-hour shift, at the rate of one post per 10 seconds. That means they're clearing 2,880 posts a day per person. When NPR ran these numbers by current and former employees, they said that sounds reasonable."
Perhaps the most interesting article was this Washington Post interview with Paul Horner, who writes fake news stories for a living. Horner reports making close to $10,000 a MONTH off of the clicks on advertisements included on the fake news sites that he maintains. Every post that he writes on his slick-looking ABC News ripoff website, for example, can make him rich, as long as it goes viral on Facebook. And what does Horner think of the people sharing his content over and over again?
It's not pretty:
"Honestly, people are definitely dumber. They just keep passing stuff around. Nobody fact-checks anything anymore — I mean, that’s how Trump got elected. He just said whatever he wanted, and people believed everything, and when the things he said turned out not to be true, people didn’t care because they’d already accepted it. It’s real scary. I’ve never seen anything like it."
Scrutiny of Facebook's treatment of fake news -- and of the hefty rewards that advertising-driven companies like Google pay to peddlers of lies -- has pressured both services into much-needed action: They are working to develop policies that will ban fake news sources like Horner's from their corporate advertising programs, drying up the revenue streams that provide the motivation to pollute the web with hoaxes and lies.
But I think that's the wrong solution to Facebook's fake news problem.
We don't need new policies and tools from tech companies to identify sketchy content on the web. Instead, we need to develop citizens who take careful steps to verify that the information they are reading anywhere on the web is reliable. That's a new literacy in today's complicated media ecology -- and it is a new literacy that we give too little attention to in schools.
The good news is that teaching students to identify sketchy content isn't all that hard to do.
There are simple questions that kids can ask when evaluating the reliability of a web source that can turn them into top-notch bunk filters without needing any help from Facebook or Google. Here are three:
How believable is this story to me?
The first lesson that I try to teach my students about spotting sketchy news stories is that their common sense is the most powerful tool that they have for fighting back against misinformation on the web. If a story just doesn't seem plausible, it's probably fake -- and the fact of the matter is that the vast majority of fake news stories really ARE that easy to spot. People with good common sense don't get fooled very often -- as long as they are willing to trust their intuition.
Try that with two recent headlines on Horner's fake ABC News website: "Obama Signs Executive Order Banning National Anthem at All Sporting Events" and "Obama Signs Executive Order Banning Pledge of Allegiance from All Schools Nationwide." Do either of those headlines seem even a little bit believable? Would a person who served as President of our country REALLY want to ban things like the National Anthem or the Pledge of Allegiance? No matter what you think about the people or parties leading our nation, chances are that they care enough about our country to protect our national symbols. That's just common sense.
And double-checking your common sense is super easy: Just take questionable headlines and drop them into Google. In most cases -- including the notion that Obama is banning the Pledge of Allegiance -- you'll see that reliable sites like Snopes and FactCheck.org, which are committed to debunking lies on the Internet, have already reviewed the claims in question.
What do I know about this news source?
I also try to teach my students that spending a few minutes researching the author and the website of every piece of news that they are exploring can help them to spot sketchy news stories. Does the web address look reliable? What can you learn from the "About Us" or "Contact" links found on the page? What kind of search results are returned when you Google the name of the author of the article that you are reading?
Asking those questions about Paul Horner's ABC News website would identify it as a fraud in no time.
The web address -- http://abcnews.com.co/ -- is the first giveaway. Why would a major news network add a ".co" to the end of its web address? What's more, the contact information on the site shows that the headquarters of ABC News is a Tudor-style home in Topeka, Kansas -- and just a few minutes of digging into the background of Dr. Jimmy Rustling, one of the lead authors on the site, brings up this tongue-in-cheek bio of the author and this set of Google search results explaining that "Jimmy Rustling" and "Rustle my Jimmies" are slang terms for evoking strong emotions.
Can I spot any loaded words in the piece I am reading?
The final lesson that I try to teach my students is that loaded words and phrases -- descriptions that imply a strong emotion and/or position -- are signs indicating that the author or source is trying to push readers to feel a certain way about a topic instead of simply reporting the news in an unbiased way. They are an easy way to spot opinions instead of facts -- and while opinions aren't automatically wrong, they need to be questioned by readers instead of accepted at face value.
What's interesting is that Paul Horner's fake news site avoids loaded words for the most part -- which is one of the reasons that it is so successful at generating attention. Each piece sounds like an unbiased report of fact -- even if those facts are impossible to believe.
But you don't have to go far to find loaded words in news sources. Can you spot the loaded words in these headlines from Fox News and the Huffington Post: "Arizona Presidential Electors Being Harassed" and "This Is What It Means to Imprison a Whole Category of People"?
In the first headline, I'd want my students to notice that "being harassed" is a loaded phrase that could mean a heck of a lot of things. Good readers would want to know what that harassment looked like before making a decision about the importance of the event. In the second headline, I'd want my students to notice that "imprison a whole category of people" is a phrase designed to elicit fear. Good readers would want to unpack that. Are newly elected officials REALLY trying to imprison entire categories of Americans? Or is "imprison" a metaphor?
In many ways, this is my favorite lesson to teach because kids LOVE looking for loaded words and phrases. Spotting the sneaky ways that authors are trying to influence readers -- and then trying to decide if the evidence in the article actually supports the author's opinions -- is like a scavenger hunt to them.
I've pulled all this content together into a handout that you can use if you are interested in teaching your students how to spot fake news sources. You can find it posted online here on my Teachers Pay Teachers website.
Does any of this make sense to you? More importantly, are you taking active steps to teach your kids the skills necessary to spot sketchy news stories?
If teaching students about managing information, thinking critically and engaging in collaborative dialogue resonates with you, check out Teaching the iGeneration — Bill’s book on using digital tools to introduce students to essential skills like information management, collaborative dialogue and critical thinking.