Q4 2018

Security Cred, Saving Face(s), Deep Fakeout

Right Now

Grades for digital security

Consumers have been flying blind when it comes to the privacy and security cred of digital products and services. But ratings groups on both sides of the Atlantic are stepping into the breach.

Consumer Reports (CR), the respected United States source for evaluations of everything from lawnmowers to luggage, has begun rating digital privacy and security. The organization’s first foray was with mobile peer-to-peer payment services (Apple came out on top). But the rankings are just one part of a bigger movement. CR has proposed a set of standards named, appropriately, The Digital Standard and posted it to the software development platform GitHub so that anyone can access it and make suggestions. It’s a call to arms for secure products, consumer privacy and ownership, and ethical behavior.

In the UK, the 117-year-old British Standards Institution (BSI) recently launched its BSI Kitemark for IoT Devices standard. The quality mark will be displayed on Internet of Things products for both consumer and commercial applications. It’s a response to the UK government’s Secure by Design plan, which sets out measures for product security. We smell a trend: security labels on every smart device, whether tech companies like it or not.

 

Up Next

Facial recognition repellants

Facial recognition is here: at airports, in iPhones, and coming soon to the 2020 Olympics. So is the backlash. Orlando police nixed their use of facial recognition (for now) after a public outcry; Amazon employees protested their company’s sale of facial recognition software to law enforcement; and shoppers weren’t happy when a Reddit user busted a Calgary mall for using the technology on them. (The story was quickly picked up by media outlets; mall managers seemed somewhat confused about privacy law.)

But as long as facial identification tech has been in development, researchers have been devising ways to block it. At Carnegie Mellon, a group has made eyeglasses that trick facial recognition systems into misidentifying the wearer; the researchers say the specs could eventually be 3D-printable. Another technique uses artificial intelligence to fight image recognition: University of Toronto researchers have developed an algorithm that acts as a photo filter, changing the pixels in images to outwit AI image identification. Startups are building image-foiling tools, too. And it turns out (welcome back, 1990s) that Juggalo makeup thwarts the tech.
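For the curious, here is a rough sketch of the general idea behind such pixel-level defenses: the well-known fast gradient sign method, which nudges every pixel just enough to raise a recognition model’s error. This is an illustration of the technique class, not the Toronto team’s published algorithm, and the model, image, and label inputs are stand-ins.

```python
# Minimal sketch of an adversarial "photo filter" using the fast
# gradient sign method (FGSM). Illustrative only: `model` is any
# differentiable image classifier, not a specific recognition system.
import torch
import torch.nn.functional as F

def adversarial_filter(model, image, true_label, epsilon=0.03):
    """Return a copy of `image` perturbed to confuse `model`.

    image: tensor of shape (1, 3, H, W), pixel values in [0, 1]
    true_label: tensor of shape (1,) holding the class to evade
    epsilon: maximum per-pixel change, small enough to look unchanged
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step each pixel in the direction that increases the loss,
    # then clamp back to a valid image.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```

To a human eye the filtered photo looks essentially unchanged; to the model, the tiny coordinated shifts push it toward a wrong answer.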

The truly important question is whether private citizens will need to arm themselves with privacy tools just to exist in public, tracking-free. Hopefully not. But smart companies should realize by now that consumer fury and government regulations, which are gaining traction, are not a combination that endears a business to the market.

 

Never

Can deepfakes be stopped?

As if doctored pictures weren’t bad enough, we’re now entering the era of deepfakes: videos made using deep learning to meld together facial images of different people. Although the technology may have some positive uses, the downsides quickly became abundantly clear, as evidenced by the fake Barack Obama video produced by BuzzFeedVideo and featuring actor Jordan Peele’s impersonation.
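For context on how these videos are typically built, here is a minimal sketch of the widely described face-swap training setup: a shared encoder paired with one decoder per identity, so a face encoded from person A can be decoded as person B. The layer sizes below are placeholders, not any particular tool’s architecture.

```python
# Rough sketch of the classic face-swap ("deepfake") setup: one shared
# encoder learns features common to both faces, and a separate decoder
# per person learns to reconstruct that person. Layer sizes are
# placeholders for illustration.
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, stride=2, padding=1), nn.ReLU())

def deconv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.ConvTranspose2d(in_ch, out_ch, 4, stride=2, padding=1), nn.ReLU())

shared_encoder = nn.Sequential(conv_block(3, 64), conv_block(64, 128))
decoder_a = nn.Sequential(deconv_block(128, 64), deconv_block(64, 3), nn.Sigmoid())
decoder_b = nn.Sequential(deconv_block(128, 64), deconv_block(64, 3), nn.Sigmoid())

# Training minimizes reconstruction error for each identity:
#   loss_a = mse(decoder_a(shared_encoder(face_a)), face_a)
#   loss_b = mse(decoder_b(shared_encoder(face_b)), face_b)
# Swapping decodes A's features with B's decoder:
#   fake_b = decoder_b(shared_encoder(face_a))
```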

But researchers are on it, and so is the U.S. Defense Advanced Research Projects Agency (DARPA). Through its Media Forensics program, the agency is funding the development of technologies that identify faked images and videos. It has already claimed some success: catching the speaker and scene inconsistencies common in faked videos about 75% of the time.

Other researchers are also looking into how to spot fakes. The University at Albany’s Siwei Lyu has identified a pretty simple giveaway: humans blink a lot, and faked video faces don’t. That’s because fakes are built from still images, and for obvious reasons there aren’t many photos of subjects with their eyes closed. His team’s research uses an algorithm to assess blinking frame by frame and flag its absence. But Lyu also says that we’re at the beginning of a “chess game” of fakery. Indeed, at this year’s SIGGRAPH convention, researchers from the Technical University of Munich and Stanford University demonstrated their advanced video-faking technology. They came up short when an audience member asked about the ethical implications.
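Lyu’s team uses a learned neural model; as a simplified stand-in, here is a sketch of how a blink-rate check could work, using the common eye-aspect-ratio heuristic. The landmark input is assumed to come from an off-the-shelf face-landmark detector, and the thresholds are illustrative.

```python
# Simplified illustration of blink-based fake spotting: score eye
# openness per frame, count blinks, and flag clips whose blink rate
# falls far below human norms (people blink every few seconds).
# Not Lyu's actual model; `eye_landmarks_per_frame` is assumed to come
# from a face-landmark detector such as dlib's.
import math

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmark points around one eye."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
    horizontal = dist(eye[0], eye[3])
    return vertical / (2.0 * horizontal)  # small when the eye is shut

def looks_fake(eye_landmarks_per_frame, fps, ear_threshold=0.2,
               min_blinks_per_minute=2.0):
    """Flag a clip whose subject almost never blinks."""
    blinks, eyes_closed = 0, False
    for eye in eye_landmarks_per_frame:
        closed = eye_aspect_ratio(eye) < ear_threshold
        if closed and not eyes_closed:
            blinks += 1  # open-to-closed transition: a blink begins
        eyes_closed = closed
    minutes = len(eye_landmarks_per_frame) / fps / 60.0
    return blinks / max(minutes, 1e-9) < min_blinks_per_minute
```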

So can deepfakes be stopped? No. But policy, education, and technology should be able to ameliorate their effects.

 

About the Author

Danielle Beurteaux

Danielle Beurteaux is a New York–based writer who covers business, technology, and philanthropy. Her work has appeared in The New York Times, Popular Mechanics, CNN, and Institutional Investor’s Alpha, among other outlets.