Adapted from The Future of You (2014) by Lesley Scott

When Google acquired eight robotics companies in the second half of 2013, eyebrows went up – particularly when it bought the maker of futuristic military-grade robots.[i] And when the search giant continued hiring expert after expert in the field of artificial intelligence (AI), tongues wagged. It recently sent the rumor mill into overdrive by purchasing the secretive, London-based DeepMind Technologies, which trains machines to learn the way we do.[ii]




All signs point to Google endowing machines with serious mental superpowers. Not that the road to machine enlightenment has been bump-free so far. If you’ve ever been foiled by autocorrect, scratched your head at Siri or wondered how a routine Google image search can produce such random and awful results, then you’ve come up against the current limitations of what machines can be taught to learn. Machine learning, however, is starting to become a lot more learned thanks to new algorithms in which lower-level concepts ground the understanding of higher-level concepts, creating multiple layers of learning at different levels of abstraction.[iii] This deep learning is based on the way human nervous systems and brains recognize patterns and process information. During our very earliest stages of development, the area of the brain that processes sounds can also handle eyesight,[iv] an intriguing “general purpose” idea that is being modeled mathematically and used to design man-made, or artificial, neural networks.[v]

Neural networks are generally so computationally costly that they have been limited to about 10 million connections. However, Andrew Ng – Director of the Stanford Artificial Intelligence Lab – suspected that training larger networks would yield better results. So he used the power of his gig at Google to commandeer 16,000 CPU cores and scale the network up to more than 1 billion connections.
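The layered idea can be sketched in a few lines of Python. The toy network below is purely illustrative (my own example, nothing from Ng’s actual system): a tiny hidden layer learns intermediate features of the XOR pattern, and a higher-level output unit builds on those features.

```python
import math
import random

random.seed(0)

# XOR: a pattern no single layer can capture, but one layer of
# intermediate "concepts" plus an output layer can.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

H = 4  # hidden units: the lower-level concepts
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    # Layer 1 turns raw inputs into intermediate features...
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + b) for w, b in zip(w1, b1)]
    # ...and layer 2 combines those features into the final answer.
    y = sigmoid(sum(wj * hj for wj, hj in zip(w2, h)) + b2)
    return h, y

def total_loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

loss_before = total_loss()
lr = 0.5
for _ in range(5000):
    for x, t in data:
        h, y = forward(x)
        # Backpropagation: pass the error back down through the layers.
        dy = (y - t) * y * (1 - y)
        for j in range(H):
            dh = dy * w2[j] * h[j] * (1 - h[j])
            w2[j] -= lr * dy * h[j]
            b1[j] -= lr * dh
            w1[j][0] -= lr * dh * x[0]
            w1[j][1] -= lr * dh * x[1]
        b2 -= lr * dy
loss_after = total_loss()
print("loss:", loss_before, "->", loss_after)
```

The point here is the architecture, not the scale: each layer’s output becomes the next layer’s raw material, and stacking more layers means more levels of abstraction.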


The resulting network, Google Brain, is not unlike a newborn’s brain, just smaller in scale (for comparison, an adult human brain has about 100 trillion connections). And when the e-infant was exposed to a week of YouTube, what do you think it learned? “Indeed, to our amusement,” note Ng and his collaborator Jeff Dean, “one of our artificial neurons learned to respond strongly to pictures of… cats.”[vi] The network had never been told what a cat was, nor given a single image labeled “cat.” By perusing a gazillion YouTube stills, it discovered for itself what a cat is. It’s an amazing example of self-taught, deep learning…to say nothing of the mysterious power of kittens on YouTube. (For cats of the two-legged kind, there’s now even an algorithm that can determine whether your urban subculture is biker, surfer, punk, goth or hipster.)[vii]
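Google Brain’s actual setup was a deep network of artificial neurons, but the core trick, finding structure in data that carries no labels at all, can be shown with something far simpler. The sketch below (plain Python, made-up numbers) uses k-means clustering, a different and much older unsupervised method, to separate two clumps of points without ever being told which clump is which:

```python
# Unsupervised learning in miniature: no labels, the algorithm finds
# the groups itself. (Illustrative only -- Google Brain's approach was
# a deep neural network, not k-means.)

points = [1.0, 1.2, 0.8, 1.1, 8.0, 8.3, 7.9, 8.1]  # two unlabeled clumps

def kmeans_1d(points, k=2, iters=10):
    centers = points[:k]  # naive init: start from the first k points
    for _ in range(iters):
        # Assign each point to its nearest center...
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: abs(p - centers[c]))
            clusters[nearest].append(p)
        # ...then move each center to the mean of its cluster.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

centers = kmeans_1d(points)
print(centers)
```

No labels, no supervision: the structure is in the data, and the algorithm surfaces it, which is the same spirit in which the network surfaced “cat.”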

Learning also encompasses natural language, which Google’s Director of Engineering, Ray Kurzweil, predicts search engines will be based on within five years.[viii] The current state of the art was showcased on prime-time television in the form of Watson, IBM’s incredibly powerful, interactive and artificially intelligent computer system capable of answering questions posed in natural language. It famously won the $1 million prize on Jeopardy!, soundly trouncing the human talent. Its knowledge bank was acquired by perusing Wikipedia and more than 200 million pages’ worth of natural-language documents. In the game show’s category Rhyme Time, when presented with the clue “A long tiresome speech delivered by a frothy pie topping,” Watson correctly responded, “What is a meringue harangue?”


Deep learning is going to be the site of some serious next-level stuff that will dramatically shape how we interact with our world and the business environment in which we operate, especially given the amazing brainpower and technology being recruited:


  • Yahoo acquired LookFlow, a startup known for its deep learning image-recognition tech, which will become part of Flickr.[ix]
  • Facebook recently hired deep learning expert Yann LeCun of New York University.[x]
  • The Google Brain team now includes professor Geoffrey Hinton, a giant in the field of deep learning who revolutionized speech recognition and translation.[xi] Of the estimated 50 experts worldwide in deep learning, about a dozen work for DeepMind, Google’s latest acquisition. “I think this is the main reason that Google bought DeepMind,” opines Yoshua Bengio, an AI researcher at the University of Montreal. “It has one of the largest concentrations of deep learning experts.”[xii]



Lesley Scott
Lesley Scott is editor of, a top longtime indie blog about style, & author of The Future of You (2014). She is an eccentric 5.0 Fashion Futurist who uses the lens of the fashion tribes to grok the deep trends and hot technologies shaping pop culture today & tomorrow. She is also a Channel Curator for and a regular contributor to. As host of the offbeat Fashiontribes Daily podcast, she delves into everything from the intersection of biology & fashion, to the Singularity & transhumanism, cyborgs, bionics, the aggressive rise of crafts and handmade stuff, futuristic bling...and just enough zombies to keep things interesting.