The darker side of machine learning

Posted on: Oct 28, 2016

While machine learning is introducing innovation and change to many sectors, it also is bringing trouble and worries to others. One of the most worrying aspects of emerging machine learning technologies is their invasiveness on user privacy.

From rooting out your intimate and embarrassing secrets to imitating you, machine learning is making it hard not only to hide your identity but also to keep ownership of it, and to prevent words you haven't uttered and actions you haven't taken from being attributed to you.

Here are some of the technologies that might have been created with good-natured intent, but can also be used for evil deeds when put into the wrong hands. This is a reminder that while we further delve into the seemingly countless possibilities of this exciting new technology, we should keep our eyes open for the repercussions and unwanted side-effects.

When facial recognition technology goes awry

Neural networks and deep learning algorithms that process images are working wonders to make our social media platforms, search engines, gaming consoles and authentication mechanisms smarter.

But can they also be put to ill use? The facial recognition app FindFace proved that they can. Rolled out in Russia earlier this year, the app lets anyone use its highly accurate facial recognition capability to identify anyone who has a profile on VK.com, the social media platform known as the "Russian Facebook," which boasts more than 200 million user accounts in Eastern Europe.

Its untethered access to VK's vast image database quickly turned FindFace into an attractive application for a number of different purposes. Within weeks of its launch, FindFace had acquired hundreds of thousands of users, and Moscow law enforcement was slated to license the service to enhance its network of 150,000 surveillance cameras.

But it was also put to sinister use by online vigilantes who used the technology to harass unfortunate victims, and there is concern that authoritarian regimes will use the same technology to identify dissidents and protestors in rallies and demonstrations. In an interview with the Guardian, the creators of the app said they were open to offers by the FSB, the Russian security service.

Experts at Kaspersky Lab have shared some tips on how to circumvent facial recognition apps such as FindFace, but the proposed poses and angles are somewhat awkward.

This warrants more discretion when posting pictures on social media, as they can quickly find their way into the repositories of one of the many data-gobbling machine learning engines roaming the internet. And who knows where they will resurface after that?

Machine learning that peeks behind the pixels

Blurring and pixelation are common techniques used to preserve privacy in images and video. They’re practices that have proven their effectiveness in obscuring faces, license plates and writings from the human eye.


But it seems that machine learning can see through the pixels.

Researchers at the University of Texas at Austin and Cornell Tech recently succeeded in training an image recognition machine learning algorithm that can undermine the privacy benefits of content-masking techniques such as pixelation and blurring. What's worrying, the researchers underlined, is that the feat was accomplished with mainstream machine learning techniques that are widely known and available, and could be put to nefarious use by bad actors.

The team used the technology to attack some of the most well-known image obfuscation techniques, including YouTube's blur tool, standard mosaicing (pixelation) and a JPEG-based encryption tool called Privacy-Preserving Photo Sharing (P3).

The algorithm doesn't actually reconstruct the obfuscated object; rather, if that object already appears in its training data, it is very likely to identify its blurred version. After training, the neural network was able to identify faces, objects and handwritten text with accuracy rates as high as 90 percent.
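The core idea can be illustrated with a toy sketch: if the attacker pixelates every image in a known database the same way the target photo was pixelated, a simple nearest-neighbor match often suffices to re-identify the subject. This is a minimal illustration only, not the researchers' actual method (they trained neural networks); the images, block size and matching rule here are all hypothetical.

```python
import numpy as np

def pixelate(img, block=8):
    """Mosaic a grayscale image by averaging over block x block cells."""
    h, w = img.shape
    out = img.copy().astype(float)
    for y in range(0, h, block):
        for x in range(0, w, block):
            out[y:y + block, x:x + block] = img[y:y + block, x:x + block].mean()
    return out

# Hypothetical "database" of known faces, modeled as random grayscale patches.
rng = np.random.default_rng(0)
database = [rng.random((32, 32)) for _ in range(50)]

# The attacker pixelates every known image ahead of time.
pixelated_db = [pixelate(img) for img in database]

def identify(obfuscated):
    """Match an obfuscated image against the pixelated database."""
    dists = [np.linalg.norm(obfuscated - p) for p in pixelated_db]
    return int(np.argmin(dists))

# An "anonymized" photo of subject 17 is still matched to subject 17.
leaked = pixelate(database[17])
print(identify(leaked))  # prints 17
```

The point is that pixelation discards detail but not identity: the coarse averages still form a distinctive fingerprint that a model trained on similarly obfuscated images can match.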

The researchers’ goal was to warn the tech community about the privacy implications of advanced machine learning. Richard McPherson, one of the researchers, warned that similar methods might be used to bypass voice obfuscation techniques.

According to the researchers, the only reliable way to defeat machine learning identification is to completely obscure the sensitive parts of the image with black boxes, or to cover those areas with some other random image before blurring them, so that even if the obfuscation is defeated, the real content is not revealed.
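The difference between blurring and true redaction can be sketched in a few lines: a solid black box destroys the original pixel values entirely, leaving nothing for a recognition model to match. The images and region coordinates below are hypothetical, just to show the property.

```python
import numpy as np

rng = np.random.default_rng(1)
photo_a = rng.random((32, 32))  # stand-in for one person's photo
photo_b = rng.random((32, 32))  # stand-in for another person's photo

def redact(img, y0, y1, x0, x1):
    """Overwrite a region with solid black. Unlike blurring, none of the
    original pixel values survive, so there is nothing to recover."""
    out = img.copy()
    out[y0:y1, x0:x1] = 0.0
    return out

# Two different "faces", redacted the same way, become indistinguishable
# inside the redacted region.
a = redact(photo_a, 8, 24, 8, 24)
b = redact(photo_b, 8, 24, 8, 24)
print(np.array_equal(a[8:24, 8:24], b[8:24, 8:24]))  # prints True
```

Blurring and pixelation, by contrast, are deterministic functions of the original pixels, which is exactly why a model that has seen the original can recognize its obfuscated form.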

The resulting scene might not be as appealing as before, but at least it can provide you with guaranteed privacy.

An algorithm that imitates your handwriting

Handwriting forgery has always been a complicated task, one that’ll take even the most proficient fraudsters considerable time and practice to master. But it’ll only take a computer a few samples of your handwriting to discern your writing style — and imitate it.

Researchers at University College London have developed a program called My Text in Your Handwriting, which analyzes as little as a paragraph's worth of handwritten script and then generates text that convincingly mimics that person's handwriting.

The technique is not flawless. It needs assistance and fine-tuning by a human, and it will not slip past forensic examiners. But it is by far the most accurate replication of human handwriting to date. In a test involving people who had prior knowledge of the technology, participants were fooled by the artificial handwriting 40 percent of the time, a figure that is likely to rise as the technology improves.