July 31, 2015

Google brings deep neural networks to your phone with Translate

Google’s neural networks are good for more than just making trippy art. If you download and run the company’s latest translation app, you will be using a deep neural network, and not just through the cloud, but right on your own phone. The update instantly adds twenty languages to the seven the app could already decode, but that’s just the beginning.


There is little in tech today that can’t be made infinitely better by putting the word ‘deep’ in front of it. What puts the ‘deep’ in deep neural networks really comes down to having multiple hidden layers of neurons between the input layer and the output layer. That’s where all the so-called deep learning comes into play.
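To make that concrete, here is a minimal numpy sketch (an illustration, not Google’s code) of what those hidden layers look like: each one is just a weight matrix followed by a nonlinearity, and the ‘deep’ part is stacking several of them between input and output.

```python
import numpy as np

def relu(x):
    # Standard nonlinearity applied after each layer's weights
    return np.maximum(0.0, x)

def forward(x, weights, biases):
    """Run an input through a stack of fully connected layers.
    Every (W, b) pair between the first and last is a 'hidden'
    layer -- the part that makes the network 'deep'."""
    a = x
    for W, b in zip(weights, biases):
        a = relu(a @ W + b)
    return a

# Toy dimensions: 4 inputs -> two hidden layers of 8 units -> 3 outputs.
rng = np.random.default_rng(0)
shapes = [(4, 8), (8, 8), (8, 3)]
weights = [rng.normal(scale=0.1, size=s) for s in shapes]
biases = [np.zeros(s[1]) for s in shapes]
print(forward(rng.normal(size=4), weights, biases))
```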
The original neural network from many decades ago, the perceptron, was at its heart just an algorithm. It was run as a single layer of neurons connected in a special way. Although the perceptron was intended to be a machine in its own right, its first practical implementation was in software running on a standard processor. Unfortunately, it seems that not all that much has changed.
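For reference, Rosenblatt’s whole learning rule fits in a few lines. This is a sketch with toy data, not any historical implementation:

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=1.0):
    """Single-layer perceptron with the classic error-driven update.
    Labels are expected in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:     # misclassified: nudge the boundary
                w += lr * yi * xi
                b += lr * yi
    return w, b

# Linearly separable toy problem (logical AND).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, -1, -1, 1])
w, b = train_perceptron(X, y)
print(np.sign(X @ w + b))                  # [-1. -1. -1.  1.]
```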
However, you would be wrong to think that deep neural networks implemented in hardware aren’t coming. Networks made from memristor arrays or constructed from FPGAs are certainly possible now, just not entirely portable. One reason these technologies haven’t been pressed into service already may be that phones are now just that good: they can do many practical computing tasks that once only the cloud could handle.
It probably wasn’t easy, but Google was able to extract the essence of its large, general-purpose translation architecture running in the cloud and pare it down to something you could use to translate a menu in a restaurant that blocks or otherwise lacks any cell signal. The network itself is still powerful enough to recognize letters rotated through a small angle, though not ones rotated too far.
To be able to run in real time, Google had to optimize several math operations. Technically speaking, that entailed tuning things like matrix multiplies so the processing fits into all levels of cache memory, and making use of the smartphone processor’s SIMD instructions. It also involved ‘training’ the network a little differently. For example, at one point ‘$’ started to be recognized as ‘S’, and fixing that bug meant adjusting the parameters of the artificial warping applied to the training characters.
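To illustrate the cache-blocking idea (the tile size here is an arbitrary placeholder, and the production version would presumably be hand-tuned native code using those SIMD instructions), a big matrix multiply can be carved into small tiles so that each sub-block stays resident in cache while it is reused:

```python
import numpy as np

def blocked_matmul(A, B, tile=64):
    """Cache-blocked matrix multiply: work on tile x tile sub-blocks so
    the working set of each inner step fits in a cache level."""
    m, k = A.shape
    k2, n = B.shape
    assert k == k2
    C = np.zeros((m, n), dtype=A.dtype)
    for i in range(0, m, tile):
        for j in range(0, n, tile):
            for p in range(0, k, tile):
                # Small panels of A and B are reused many times while
                # they are still hot in cache.
                C[i:i+tile, j:j+tile] += (
                    A[i:i+tile, p:p+tile] @ B[p:p+tile, j:j+tile]
                )
    return C

A = np.random.rand(256, 256).astype(np.float32)
B = np.random.rand(256, 256).astype(np.float32)
assert np.allclose(blocked_matmul(A, B), A @ B, atol=1e-3)
```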
One can imagine chips in the near future that have larger, more dedicated neural networks implemented in hardware, which can be accessed for all kinds of mission-critical functions. For example, the so-called convolutional neural networks used here for letter recognition are also used in a variety of other image processing operations. They are built from individual neural units much like those in the retina, which respond to overlapping receptive fields in the visual space.
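The core operation of such a network is simple to sketch. Below is a toy 2D convolution (computed as cross-correlation, the way most CNN libraries do it) showing how neighboring output neurons read overlapping patches of the input; the edge-detection kernel is just a stand-in for the learned filters:

```python
import numpy as np

def conv2d(image, kernel, stride=1):
    """Valid 2D convolution. Each output value is one 'neuron' looking
    at a small patch of the input; adjacent neurons see overlapping
    patches -- the overlapping receptive fields described above."""
    kh, kw = kernel.shape
    h = (image.shape[0] - kh) // stride + 1
    w = (image.shape[1] - kw) // stride + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = image[i*stride:i*stride+kh, j*stride:j*stride+kw]
            out[i, j] = np.sum(patch * kernel)
    return out

# A vertical-edge filter run over a toy image with a bright right half.
image = np.zeros((8, 8))
image[:, 4:] = 1.0
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
print(conv2d(image, sobel_x))   # strong responses along the edge
```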
A recent article in Tech Rev. mentions that several companies, including Qualcomm, have been developing neuro-inspired chips that function more like living neurons. In other words, they actually generate spikes, which are accumulated and propagated in such a way that their timing matters. These kinds of networks are the real deal: the kind that will eventually give humans a run for their money in a game of table tennis or an egg toss.
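The simplest model of such a spiking unit is the leaky integrate-and-fire neuron. The sketch below uses made-up constants, but it shows the essential behavior: voltage leaks away between inputs, so when a spike arrives matters as much as how many arrive.

```python
import numpy as np

def lif_neuron(input_current, dt=1e-3, tau=0.02, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron. Membrane voltage integrates the
    input while leaking toward rest; crossing threshold emits a spike
    and resets the voltage."""
    v = v_reset
    spike_times = []
    for t, current in enumerate(input_current):
        v += dt * (current - v) / tau      # integrate input, leak toward 0
        if v >= v_thresh:
            spike_times.append(t * dt)     # spike timing is the output
            v = v_reset
    return spike_times

# Steady supra-threshold drive for 200 ms yields a regular spike train.
drive = np.full(200, 1.5)
print(lif_neuron(drive))
```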