‘Darkly’ Adds Privacy to Perception Software

A team of UT Austin and Princeton researchers has been awarded the 2014 PET Award for their potentially revolutionary paper A Scanner Darkly. The paper, which takes its name from the dystopian novel by Philip K. Dick, presents a system designed to add an additional layer of privacy to what it calls “perceptual computing.” Authors Suman Jana, Arvind Narayanan, and Vitaly Shmatikov outline the need for, design of, and implementation of Darkly, the first example of “privacy-preserving perceptual computing.”

Photo: Mopic / Shutterstock

The research and the resulting software are a response to security concerns surrounding increasingly popular perceptual devices such as Google Glass and Microsoft Kinect. Security experts and amateur programmers alike have jumped at the chance to explore the implications of this kind of technology.

According to a 2013 FedScoop article, the US Department of Homeland Security has been researching Kinect’s facial recognition, voice recognition, and motion-sensing capabilities to develop sophisticated threat-detection methods. More recently, cyber forensics researchers at the University of Massachusetts created a Google Glass program that can successfully determine phone and tablet swipe-style passwords through motion detection. Augmented reality browsers such as Junaio warn users of their own potential security risks with the reminder that “all content is coming from external third parties and content providers.”

Darkly aims to mitigate these concerns so that individuals can make the most of this promising new technology without the security risks it may carry. The paper groups potential privacy threats into three categories: over-collection and aggregation, specific inference, and semantic inference.

Over-Collection and Aggregation

Thanks to disclosures about government and corporate data-mining practices, the general public is by now well aware of the potential for harm from over-collection.

Darkly addresses this problem through opaque references. Where the original application code would hold pointers to images or other data it collects, Darkly replaces those pointers with opaque references that the application cannot “see.” This technique underpins many of Darkly’s other protections, but in the case of over-collection it means the application cannot directly access the data and therefore cannot establish patterns through aggregation.

This process is possible in part thanks to Darkly’s use of OpenCV, the open-source computer vision library. As the paper explains:

“Darkly exploits the fact that most OpenCV data structures for images and video include a separate pointer to the actual pixel data… Darkly creates a copy of the data structure, fills the meta-data, but puts the opaque reference in place of the data pointer.”
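
To make the opaque-reference mechanism concrete, the following C++ sketch shows one way such a scheme could work, assuming a simplified Image struct in place of OpenCV’s IplImage. The OpaqueId type, g_trusted_store table, and function names are invented for illustration and are not Darkly’s actual code.

    // Minimal, self-contained sketch of the opaque-reference idea described above.
    // Names (Image, OpaqueId, g_trusted_store) are invented for illustration; the
    // real Darkly system works with OpenCV structures such as IplImage.
    #include <cstdint>
    #include <unordered_map>
    #include <vector>

    // Stand-in for an OpenCV-style image header with a separate pixel-data pointer.
    struct Image {
        int width = 0;
        int height = 0;
        int channels = 0;
        uint8_t* data = nullptr;   // pointer to the actual pixel data
    };

    using OpaqueId = std::uintptr_t;

    // Trusted side: pixel buffers live here, keyed by opaque reference.
    static std::unordered_map<OpaqueId, std::vector<uint8_t>> g_trusted_store;
    static OpaqueId g_next_id = 1;

    // Produce the copy handed to the untrusted application: metadata is preserved,
    // but the data pointer is replaced by an opaque reference the app cannot follow.
    Image make_opaque_copy(const Image& real) {
        OpaqueId id = g_next_id++;
        g_trusted_store[id].assign(
            real.data,
            real.data + static_cast<size_t>(real.width) * real.height * real.channels);
        Image shadow = real;                           // copy width/height/channels
        shadow.data = reinterpret_cast<uint8_t*>(id);  // opaque reference, not a usable pointer
        return shadow;
    }

    // Trusted library calls resolve the opaque reference back to real pixels.
    const std::vector<uint8_t>* resolve(const Image& shadow) {
        auto it = g_trusted_store.find(reinterpret_cast<OpaqueId>(shadow.data));
        return it == g_trusted_store.end() ? nullptr : &it->second;
    }

Because the application only ever holds the shadow header, any attempt to read actual pixels has to go through the trusted side, which is where a system like Darkly can enforce its privacy controls.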

Specific and Semantic Inference

Individual image frames might reveal private information such as a credit card number or a person’s identity. Darkly uses privacy transforms to alter this data into something less recognizable.

For instance, its sketching transform turns the image of a face into an indiscernible “sketch.” This is one example of the paper’s central goal: balancing privacy and utility with as few sacrifices as possible. In this case, the sacrifice is simplicity, with the benefit of more nuanced privacy controls on the user’s end. Users can personalize their privacy settings by setting the privacy-transform level anywhere between 0 and 11, with 0 being the raw, unaltered image.
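
As a rough illustration of how such a dial might map to a transform, the following C++ sketch applies an increasingly coarse pixelation as the level rises from 0 to 11. Pixelation is only a stand-in for Darkly’s actual transforms (such as the sketching of faces), and the function name and block-size mapping are invented for illustration.

    // Illustrative privacy "dial": level 0 returns the raw image, higher levels
    // apply an increasingly coarse pixelation. A stand-in for Darkly's real
    // transforms, not its actual code.
    #include <algorithm>
    #include <cstdint>
    #include <vector>

    // Grayscale image as a flat row-major buffer (size == width * height).
    struct Gray {
        int width = 0, height = 0;
        std::vector<uint8_t> pixels;
    };

    Gray apply_privacy_transform(const Gray& in, int level) {
        level = std::clamp(level, 0, 11);
        if (level == 0) return in;          // dial at 0: raw, unaltered image
        const int block = 2 * level;        // coarser blocks as the dial rises
        Gray out = in;
        for (int by = 0; by < in.height; by += block) {
            for (int bx = 0; bx < in.width; bx += block) {
                // Average the block, then paint it back as one flat tile.
                long sum = 0;
                int count = 0;
                for (int y = by; y < std::min(by + block, in.height); ++y)
                    for (int x = bx; x < std::min(bx + block, in.width); ++x) {
                        sum += in.pixels[y * in.width + x];
                        ++count;
                    }
                uint8_t avg = static_cast<uint8_t>(sum / std::max(count, 1));
                for (int y = by; y < std::min(by + block, in.height); ++y)
                    for (int x = bx; x < std::min(bx + block, in.width); ++x)
                        out.pixels[y * in.width + x] = avg;
            }
        }
        return out;
    }

At level 0 the application sees the raw frame; at level 11 only coarse blocks survive, which is roughly the privacy-versus-utility trade-off the dial is meant to expose to the user.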

Semantic inference, on the other hand, refers to the intuitive conclusions that can be drawn from basic information. For instance, even if a sketched image cannot reveal personal identities, it still reveals that there are individuals in the line of sight. Anyone can draw their own conclusions from that fact alone, and the paper acknowledges that Darkly does not yet account for it.

The Darkly system yields a few significant outcomes. It blocks the application’s direct access to user data via opaque references. It does not alter the program’s original source code, acting instead as an additional layer or “shield.” This means that “for most applications, there is no change of functionality and no loss of accuracy even at the maximum privacy setting.”

The Darkly console, meanwhile, keeps users informed. It shows the user precisely what image data the application receives (a completely blank image in the case of opaque referencing) and exposes customizable privacy dials. It also allows for trusted storage by rerouting the application’s data storage to a user-controlled location.
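
The following C++ sketch shows, under stated assumptions, what rerouting an application’s writes to a user-controlled location might look like. The kTrustedRoot directory, reroute function, and trusted_save helper are invented for illustration and do not correspond to Darkly’s actual API.

    // Minimal sketch of the trusted-storage idea: any path the application asks
    // to write to is rerouted under a user-controlled directory before the write
    // happens. Names and paths here are illustrative only.
    #include <filesystem>
    #include <fstream>
    #include <string>
    #include <vector>

    namespace fs = std::filesystem;

    // User-controlled location for the application's output (illustrative path).
    const fs::path kTrustedRoot = "/home/user/darkly-trusted-store";

    // Rewrite the application's requested path so the data lands in trusted storage.
    fs::path reroute(const std::string& requested) {
        return kTrustedRoot / fs::path(requested).filename();
    }

    // Interposed "save" call: writes bytes only to the rerouted location.
    bool trusted_save(const std::string& requested, const std::vector<char>& bytes) {
        fs::create_directories(kTrustedRoot);
        std::ofstream out(reroute(requested), std::ios::binary);
        if (!out) return false;
        out.write(bytes.data(), static_cast<std::streamsize>(bytes.size()));
        return static_cast<bool>(out);
    }

A shield layer could invoke a helper like trusted_save wherever the application asks to persist image data, so the output ends up in storage the user controls rather than wherever the application chooses.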

The Darkly paper and system mark the beginning of an entirely new field of research: privacy-preserving perceptual computing.

The authors’ long-term plans include extending Darkly to audio data. They also hope to pursue machine learning research that could mitigate semantic inference. The project’s award for Outstanding Research in Privacy Enhancing Technologies shows an academic side of the computing industry that takes seriously the privacy concerns raised by new technologies.

Rather than fearing the worst, the Darkly team saw in these security concerns an opportunity to revolutionize privacy software.
