The Rise Of The (Self-Replicating) Machines

Posted on: Oct 22, 2018

By now, it’s a truism that automation will replace certain careers while leaving others intact. Experts believe the most vulnerable are jobs that require routine, rote tasks: those of a bookkeeper, a secretary or a factory worker. Each of these involves highly repetitive, predictable duties easily taught to machines.

By that logic, roles that require abstract thinking should be safe. These include graphic designers and software programmers, who must think deeply (and creatively) in order to solve problems.

Unfortunately, what was true several months ago may no longer be the case today. The rise of machine learning and self-replicating artificial intelligence (AI) has jeopardized many more professions, notably that of the programmer. Ironically, programmers’ best work may be their downfall: as developers build ever more powerful and intelligent algorithms, they risk coding themselves into obsolescence.

In all fairness, it is doubtful that the experts intentionally set out to make themselves (or anyone, for that matter) redundant. Machine learning, however, skews that equation.

Essentially, machine learning is just gathering data, identifying patterns and making decisions based on those patterns. A self-driving car algorithm can train itself to avoid obstacles like highway dividers, slow down at red lights or stop for pedestrians (though not always successfully). Amazon’s powerful recommendation engine is renowned for its spot-on accuracy -- and responsible for significant sales increases over the years.
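
To make that loop concrete, here is a minimal sketch in Python of gather data, identify a pattern, decide. The nearest-neighbor rule and the toy driving data are invented purely for illustration; real systems use vastly more data and far more sophisticated models.

```python
# Minimal sketch of the machine-learning loop: gather data,
# identify a pattern, decide on new inputs. All data is invented.

def nearest_neighbor(examples, query):
    """Predict the label of `query` by copying its closest example."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    closest = min(examples, key=lambda ex: distance(ex[0], query))
    return closest[1]

# 1. Gather data: (speed_mph, distance_to_obstacle_ft) -> action
examples = [
    ((60, 300), "maintain"),
    ((60, 40),  "brake"),
    ((25, 15),  "brake"),
    ((30, 500), "maintain"),
]

# 2-3. Identify the pattern and decide for a situation never seen before
print(nearest_neighbor(examples, (55, 35)))  # -> "brake"
```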

The most powerful subset of machine learning is deep learning, which models its computational framework on the structure of the human brain -- a design known as a neural network. The concept of a neural network isn’t new, having been in existence for decades. Thanks to increasingly capable computers and mathematical improvements, neural networks can finally cross the boundary from unwieldy theory to fully functioning prototype.

At its most basic, a neural network contains layers of inputs and outputs, with a specific weight on each connection between them. For example, an image recognition program could be tuned to notice a certain shade of a color when analyzing pictures; any change or adaptation would require the weight of each individual connection to be adjusted by hand. In the past, this led to elementary mistakes, such as a program confusing a cat’s face for a human one.
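
As a rough illustration (not any particular production system), here is what one such layer looks like in Python with NumPy. Every number, including the weights, is invented for the example.

```python
import numpy as np

# Toy forward pass through one layer of a neural network.
# Each output neuron computes a weighted sum of the inputs,
# then squashes it with an activation function.

inputs = np.array([0.8, 0.2, 0.5])        # e.g., pixel intensities

# One weight per (input, output) connection: 3 inputs -> 2 outputs
weights = np.array([[0.4, -0.6],
                    [0.9,  0.1],
                    [-0.3, 0.7]])
biases = np.array([0.1, -0.2])

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Weighted sum plus bias, then activation
outputs = sigmoid(inputs @ weights + biases)
print(outputs)  # two activations between 0 and 1

# "Adjusting the network" means nudging every entry of `weights`
# (and `biases`) -- the tedious hand-tuning early practitioners faced.
```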

The key to the rise of the neural network was automating this adjustment -- specifically, programming each layer of “neurons” to train itself. Given that one of Google’s neural networks contains close to one billion connections, using human intervention to adjust each individual weight would have been impossible. But the ability of neural networks to learn and adjust on their own opens up a whole new world: Google’s systems, for example, made quantum leaps in areas like translating languages and transcribing speech to text.
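
In practice, “training themselves” usually means gradient descent: each weight is nudged in whatever direction shrinks the error, over and over. A one-weight toy sketch in Python, learning an invented rule (y = 2x), shows the idea:

```python
# Sketch of automated weight adjustment: instead of a human tuning
# the weight, gradient descent nudges it to reduce the error.

data = [(0.0, 0.0), (1.0, 2.0), (2.0, 4.0)]  # true rule: y = 2x

w = 0.5           # initial guess for the single weight
lr = 0.1          # learning rate

for step in range(200):
    for x, target in data:
        pred = w * x
        error = pred - target
        # Gradient of squared error (pred - target)^2 w.r.t. w is
        # 2 * error * x; step against the gradient.
        w -= lr * 2 * error * x

print(round(w, 3))  # converges near 2.0 with no human intervention
```

A real network repeats exactly this update across millions or billions of weights at once, which is why hand-tuning was never an option.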

In many ways, machine learning is a logical progression. Constant human intervention, reprogramming various rules (if this, then that; if that, then do this), is time-consuming and expensive. Even a brute-force approach, in which networks constantly test thousands of different combinations of inputs and outputs at once, is far more efficient and economical than having developers step in. Just look at DeepMind’s AlphaGo Zero, which taught itself the notoriously abstract, open-ended game of Go through millions of games of self-play over three days -- without human intervention, to boot.
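
AlphaGo Zero’s actual method -- self-play reinforcement learning guided by a neural network -- is far more sophisticated than anything that fits here, but the trial-and-error flavor can be sketched with a toy game in Python. The game and the update rule below are invented for illustration:

```python
import random

# Toy illustration of learning by self-play trial and error. Two
# copies of the same policy play a trivial "pick the bigger number"
# game, and whatever move wins gets reinforced.

weights = [1.0, 1.0, 1.0]  # preference for moves 0, 1, 2

def pick_move():
    return random.choices([0, 1, 2], weights=weights)[0]

for game in range(10_000):
    a, b = pick_move(), pick_move()
    if a == b:
        continue
    winner = max(a, b)            # the higher move wins this toy game
    weights[winner] += 0.1        # reinforce what worked

print([round(w, 1) for w in weights])  # move 2 dominates over time
```

No one ever tells the program which move is best; it discovers that by playing itself, which is the essence of the approach the article describes.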