Deep Learning at Google [x]
Google [x] is Google’s secret lab, set up in 2010 to take scientific and engineering risks, just to see what might happen. ‘Google [x]’ is genuinely how the company spells the name, and it is seen as a secret lab because it has no webpage, something unusual at Google. Google [x]’s most famous product is probably Google Glass, the Internet-connected spectacles. A lesser-known project is Google Brain.
Early in the lab’s history, its researchers asked whether it is possible for a computer to learn to detect faces using only unlabelled images. In other words, can a computer be shown a series of images and figure out on its own which contain humans and which do not? To restate the problem: the computer will be shown images and told to classify (or group) them. The question is whether the classification will bring together all of the pictures of humans in one group. This is a tricky problem that has been worked on since the 1950s. To answer it, the Google team used an approach called Deep Learning.
Deep Learning systems draw their inspiration from what we know of the human brain. Our brains are made up of billions of neurons with trillions of interconnections between them. Google Brain is a cluster of 1,000 computers programmed to simulate about 1 million neurons with 1 billion connections between them. Each neuron is a block of computer code that accepts input signals from the neurons connected to it. Each signal is essentially a number. A neuron takes all the numbers sent to it, performs a calculation on them, and sends the result out to the other neurons it connects to. The calculation involves multiplying each input by a number called a weight. Learning occurs by setting the values these weights should take. For example, if a particular input is very unimportant, its weight might be close to zero; if it is important, its weight will be set high.
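The weighted-sum calculation described above can be sketched in a few lines of Python. This is a minimal illustration, not Google Brain’s actual code; the sigmoid activation function is a common choice assumed here, as the passage does not specify which one the system uses.

```python
import math

def neuron(inputs, weights, bias=0.0):
    """Multiply each input signal by its weight, sum the results,
    and squash the total into the range (0, 1) with a sigmoid."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# An unimportant input gets a weight near zero; an important one a high weight.
signals = [0.9, 0.2, 0.5]
weights = [0.05, 2.0, 1.0]  # the first input barely influences the output
output = neuron(signals, weights)
print(output)
```

Changing the first weight has almost no effect on the output, while changing the second shifts it substantially, which is exactly what “learning the weights” exploits.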
A deep learning strategy calls for the neurons to be organized into several layers. When looking at pictures, the first layer might classify dark and light pixels (a pixel is the smallest element in a computerized image). This classification is passed to the next layer, which might distinguish edges in the picture. That in turn is passed to the next layer, which might recognize features such as eyes.
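The stacking of layers can be sketched by feeding each layer’s outputs into the next. The weights below are hand-picked purely for illustration and the layer labels are assumptions; in a real deep network the weights are learned from data rather than written by hand.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weight_matrix):
    """One layer: every neuron receives all of the previous layer's
    outputs, applies its own weights, and emits one number."""
    return [sigmoid(sum(x * w for x, w in zip(inputs, weights)))
            for weights in weight_matrix]

# Pixel intensities -> an "edge"-like layer -> a "feature"-like layer,
# mirroring the stages described in the text.
pixels = [0.1, 0.9, 0.8, 0.2]                        # dark and light pixels
edges = layer(pixels, [[1, -1, 0, 0], [0, 0, 1, -1]])  # contrast detectors
features = layer(edges, [[2.0, 2.0]])                  # combines both edges
print(features)
```

Each call to `layer` plays the role of one stage in the hierarchy: the output of the pixel-level layer becomes the input of the edge-level layer, and so on upward.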
The Google [x] researchers took 10 million still images from YouTube and fed them into Google Brain. After three days of studying them, the system learned to accurately identify certain categories: human faces, human bodies and cats, achieving a 70 per cent relative improvement over the previous state-of-the-art techniques. (The cat category reflects the fact that YouTube is full of videos of cats.)
When Google adopted deep-learning-based speech recognition in its Android smartphone operating system, it achieved a 25 per cent reduction in word errors. ‘That’s the kind of drop you expect to take ten years to achieve,’ says computer scientist Geoffrey Hinton of the University of Toronto in Canada, a reflection of just how difficult it has been to make progress in this area. ‘That’s like ten breakthroughs all together.’