Fake news is a problem; everybody knows that. When technology helps bad actors spread lies and sow discord, it’s bad for democracy, which relies on citizens making considered judgments at the polls. It’s also a boon to authoritarians, who can stamp out criticism and bury unfavorable news by creating confusion about what’s true and what’s false.
The more interesting question is: What kind of problem is it?
Two recent data points offer some hope. Last week, we wrote about big social media companies’ decisions to ban the conspiracy theorist Alex Jones, the latest sign that the major websites where fake news often spreads are becoming more engaged with the problem. Less well publicized was the fact that DARPA, the Pentagon’s research and development arm, has been making progress on tools that can detect “deepfakes,” the ultra-realistic fake audio and video created using artificial intelligence that some people worry could unleash a torrent of politically motivated fakery.
Part of the problem with fake news is that people tend to believe what they want to believe – technology won’t solve that. But with industry and government both now paying closer attention, maybe, just maybe, technology can make the problem more manageable.
Wonder what might affect the levels of “ignorant” and “gullible” in the United States?
On Monday, Stone shared the now-deleted image, but, luckily, screenshots never die! The image featured the words “Space Force ― in space no one can hear you lie” with the accompanying caption: “I love this – proud to be in this crew – but the only lies being told are by liberal scumbags.”
Keep on rocking with the Huffington Post!