Software Based on the Human Brain’s Neural Networks Is Set to Improve Google Products


By: Talha Bhatti  |   October 31st, 2012   |   Google, News

Several months ago, Google introduced artificial intelligence software that identified common objects in YouTube videos. The self-teaching software, modeled on how actual human brain cells work, was a success. The search giant is now using the same technology to improve other products and services, including speech recognition.

 

The learning software that Google developed used human neural networks as a model. When brain cells are connected in a network, they can change one another through communication. This means that when a neural network is fed information, it can react and change accordingly. That ability is learning at its most fundamental level: the brain cells have figured out that they need to react in a certain way to specific types of data.
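
The weight-adjustment idea described above can be sketched with a single artificial neuron (a perceptron). This is an illustrative toy, not Google's actual system: the neuron's connection weights change whenever its output disagrees with the desired answer, until it has learned a simple rule (here, logical AND) purely from examples.

```python
# Toy sketch of learning by changing connections (perceptron rule).
# Illustrative only -- not Google's actual system.

def train_neuron(samples, labels, lr=1, epochs=10):
    """Adjust weights whenever the neuron's output disagrees with the label."""
    weights = [0] * len(samples[0])
    bias = 0
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            output = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
            error = target - output
            # "Communication" changes the connections between cells:
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Learn the logical AND function purely from labeled examples.
data = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]
w, b = train_neuron(data, labels)

def predict(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

print([predict(x) for x in data])  # [0, 0, 0, 1]
```

The neuron is never told the AND rule; it arrives at it by repeatedly reacting to the data, which is the fundamental behaviour the article describes.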

 

Mimicking neural networks is not a new concept, and researchers have used the approach in machine-based learning for many years. Google's implementation is far more powerful, however, because the firm's researchers have been able to add much more computing power. As a result, Google's learning software can be trained without human assistance, which makes it commercially viable. The artificial intelligence can now figure out what it needs to concentrate on without humans telling it what to do. If the software is used in an identification application, for example, it will work out that specific colours and shapes, rather than other factors, are what it should focus on.
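
The idea of the software discovering on its own which inputs matter can be illustrated with a small, hypothetical sketch: score each feature by how well it predicts the label, so an informative feature (say, shape) stands out while an unrelated one scores near zero. The data and scoring rule here are invented for illustration, not taken from Google's system.

```python
# Hypothetical sketch: score each binary feature by how strongly it
# agrees with the label, mimicking how a learner discovers which
# inputs (e.g. shape, colour) matter without being told.

def feature_scores(samples, labels):
    n = len(samples)
    scores = []
    for j in range(len(samples[0])):
        column = [s[j] for s in samples]
        # Fraction of examples where the feature value equals the label.
        agree = sum(1 for v, y in zip(column, labels) if v == y) / n
        # Rescale: 0.0 = uninformative (chance), 1.0 = perfectly predictive.
        scores.append(abs(agree - 0.5) * 2)
    return scores

# Feature 0 matches the label exactly; feature 1 is unrelated noise.
samples = [(1, 0), (0, 1), (1, 1), (0, 0)]
labels = [1, 0, 1, 0]
print(feature_scores(samples, labels))  # [1.0, 0.0]
```

A real system learns such distinctions implicitly through its weights rather than with an explicit score, but the outcome is the same: the informative input dominates, the irrelevant one is ignored.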

 

Google’s latest implementation of the technology is speech recognition, a growing part of its Android strategy. The mobile operating system is competing against Apple’s Siri, and Google wants its software, and more importantly Google search as a whole, to be the top choice of consumers. The neural networks are making speech-based features more accurate and, in turn, giving users a better overall experience. According to the head of Google’s speech recognition effort, Vincent Vanhoucke, “We got between 20 and 25 percent improvement in terms of words that are wrong. That means that many more people will have a perfect experience without errors.”

 

Other technologies that stand to benefit from the self-learning software include Google’s image search, its self-driving car, and Google Glass.

 

Google engineer Jeff Dean shared more about the neural networks and the experiments Google is currently running. He says, “Most people keep their model in a single machine, but we wanted to experiment with very large neural networks. If you scale up both the size of the model and the amount of data you train it with, you can learn finer distinctions or more complex features.”

 

The tests helped create more robust technology than is currently available. Dean goes on to explain that “these models can typically take a lot more context.” If a person says, “I like to eat pumpkin,” but the last word is muffled, the system can use context to figure out what the last word is likely to be; in this case, it can work out that the word is probably some sort of food. Google is also experimenting with systems that can understand text and images together.
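
The context trick Dean describes can be sketched in miniature: count which words tend to follow “eat” in some training text, then use those counts to pick between candidate words for the muffled audio. The tiny corpus and candidate list below are invented for illustration and bear no resemblance to Google's actual models.

```python
# Illustrative sketch (not Google's system): use word-following counts
# learned from a tiny corpus to guess a muffled final word from context.
from collections import Counter

corpus = [
    "i like to eat pumpkin",
    "we like to eat bread",
    "they want to eat soup",
    "i like to drive cars",
]

# Count which words appear immediately after "eat" in the training text.
follows_eat = Counter()
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        if prev == "eat":
            follows_eat[nxt] += 1

def guess_after_eat(candidates):
    """Pick the candidate most often seen after 'eat' in the corpus."""
    return max(candidates, key=lambda w: follows_eat[w])

# The muffled word sounded like either "pumpkin" or "napkin"; context
# favours the word that actually follows "eat" in the training data.
print(guess_after_eat(["pumpkin", "napkin"]))  # pumpkin
```

Real speech recognizers use far richer language models over much more context, but the principle is the same: surrounding words constrain what the unclear word can plausibly be.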

 

Of course, Google’s software is nowhere close to human intelligence, but it certainly represents another step toward the goal of artificial intelligence. University of Montreal professor Yoshua Bengio explains: “This is the route toward making more general artificial intelligence—there’s no way you will get an intelligent machine if it can’t take in a large volume of knowledge about the world.”

 

Source: Technology Review
