The rapid development in the field of artificial intelligence (AI) over the last several years has opened up new and profitable lines of business, empowered the Big Data revolution, and enabled emerging technologies like autonomous vehicles. There have been so many benefits associated with modern AI that it’s easy to see the technology as a panacea with few real downsides.
The reality, however, is far more complex.
In practice, AI is creating or amplifying problems that are only now becoming apparent. One issue that has drawn plenty of media attention in recent months is the growth and dissemination of deliberately misleading information online. The problem of so-called “fake news” is real, and it’s getting worse. New AI technologies are being used to create deceptive videos, and those videos are spreading at a disquieting rate. The good news is that other new AI tools are being developed to help stem the rising tide of problematic content online, and not a moment too soon.
Here’s what’s happening.
Don’t believe your eyes
The most obvious (and disturbing) way AI has been adapted to create misleading content can be summed up in one catch-all term: deepfakes. The term refers to images and videos that have been manipulated using AI to alter their content in a way that’s difficult to detect. Most people became aware of the technology when BuzzFeed News produced an alarmingly convincing video of Barack Obama making statements he never made. The only good news at the time was that videos manipulated in that way could be reliably detected by analyzing the way the subject blinked (or didn’t) in the video in question. Unfortunately, a newer version of the technique doesn’t suffer from the same flaw.
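To make the blink-analysis idea concrete, here is a toy sketch of how such a heuristic might work, not the actual detector researchers used. It assumes a hypothetical upstream face-landmark tracker has already produced one eye-aspect-ratio (EAR) value per video frame; the threshold and baseline blink rate are illustrative assumptions.

```python
def count_blinks(ear_series, closed_threshold=0.2):
    """Count blinks: each dip of the EAR below the threshold is one blink."""
    blinks = 0
    eyes_closed = False
    for ear in ear_series:
        if ear < closed_threshold and not eyes_closed:
            blinks += 1          # transition from open to closed starts a blink
            eyes_closed = True
        elif ear >= closed_threshold:
            eyes_closed = False  # eyes reopened; ready for the next blink
    return blinks


def looks_suspicious(ear_series, fps=30.0, min_blinks_per_minute=4.0):
    """Flag a clip whose blink rate falls far below a typical human baseline.

    People blink roughly 15-20 times per minute; 4 is a deliberately loose
    floor so that only near-blinkless clips get flagged.
    """
    minutes = len(ear_series) / fps / 60.0
    if minutes == 0:
        return False
    rate = count_blinks(ear_series) / minutes
    return rate < min_blinks_per_minute
```

For example, a 60-second clip at 30 fps whose subject never blinks would be flagged, while the same clip with a dip in the EAR every few seconds would pass. This simplicity is also why the technique was so easy to defeat: once fakers trained their models on footage with closed eyes, the blink rate looked normal again.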
Solutions under development
In response to the threat that convincing, fabricated videos represent to the general public, members of the US Congress have been pressing representatives from technology companies to find ways to combat their spread. There’s been plenty of conjecture as to whose responsibility it will even be to police the internet for fake content. Some believe that it will fall to the digital forensics industry to weed out the bad actors, while others insist that it’s going to require government intervention.
That’s part of the reason why the Department of Defense is funding research aimed at creating detection mechanisms that can outwit the fakers. The initiative will involve a competition among digital forensics experts to determine whether they even have the skills necessary to tackle the problem. It should also go a long way toward determining which parts of the problem fall within digital forensics and where additional purpose-built AI tools will need to be developed to support the effort – potentially through further government-backed technology programs.
A global arms race
If the situation surrounding AI-powered image and video manipulation and the technologies being readied to detect and fight them sounds familiar, it should. That’s because it calls to mind the earliest days of email spam, where black-hat marketers took advantage of a dearth of countermeasures to flood user inboxes with unsolicited messages. In the years since, we’ve witnessed a pitched battle between the two sides, culminating in the passage of legislation intended to curb abuses. As anyone with an email account can tell you, spam still exists anyway.
That may be close to the best we can ever expect in the battle to identify and eradicate manipulated images and videos online as well. It seems a foregone conclusion that every evolution in the manipulation technology will prompt an evolution in detection methods, ad infinitum. As with spam, that means it will be up to individuals to take the initiative by trusting content from known sources and maintaining a healthy skepticism toward unsolicited information. In reality, that’s the only countermeasure the AI fakers can never defeat: a sharp and inquisitive human mind.
To learn more about how technology is being developed to combat fraud and deception, read Machine Learning – One New Weapon To Combat Fraud.