
Disinformation as a service? DaaS not good!

Mike Elgan | Sept. 11, 2017
'Computational propaganda' started in politics, but may be coming soon to the world of business.

Disinformation campaigns using computational propaganda techniques emerged globally in 2010, and reportedly went mainstream last year during the U.S. presidential campaign.

"The overall political goal of disinformation is to confuse and to erode your trust in information altogether," according to University of Washington professor and researcher Kate Starbird.

The main targets so far have been governments and the media. Outlets that actually traffic in fake news consistently attack real news outlets as "fake news." The objective is to get as many people as possible to throw up their hands and conclude that it's all fake news and no source can be trusted. That confusion weakens the power of the press to hold politicians accountable and erodes public trust in democratic institutions.

The phrase "computational propaganda" was coined by the Computational Propaganda Project at Oxford University, with which it remains closely associated.


The future of fake news

The computer-enhanced disinformation campaigns launched by Russia and others have been fairly crude, and the efforts to cover their tracks have been limited. The future of disinformation is likely to be much more sophisticated and harder to defend against.

Disinformation is rapidly going multimedia, for example. Advances in A.I. and CGI will enable convincing audio and video that can make it appear that anyone is saying or doing anything.

University of Washington researchers used A.I. to create a fake video showing former president Barack Obama saying things he never actually said. And Stanford researchers developed something they call Face2Face, which creates real-time faked video, so basically anybody can be shown to say anything in a live video chat.

These techniques aren't perfect. But given time and better technology, they will be.

Adobe and a Canadian startup called Lyrebird have each demonstrated convincing fake voices of famous people, which can be made to say anything at all.

The Stanford and Adobe techniques could enable real-time spoofing of people on the phone or through video chat. That could be a new way to plant fake news in real media, by tricking real journalists with imposter sources. It could also be used for a social engineering technique called "CEO fraud," where an important and well-known person in a company calls an underling and asks them to do something - like transfer funds to an offshore account or send sensitive documents to someone.

Another glimpse at the future of DaaS comes from Cambridge Analytica, a firm hired to help elect President Trump. The company reportedly builds psychological profiles of individual social media users, then serves them custom ads that appeal to their particular obsessions, fears and aspirations. In the future, every political ad could be unique to each voter, swaying public opinion under the radar and beyond scrutiny.


