Posted by Curt on 16 November, 2013 at 11:17 am. 8 comments already!


Jack Clark:

Google no longer understands how its “deep learning” decision-making computer systems have made themselves so good at recognizing things in photos.

This means the internet giant may need fewer experts in future as it can instead rely on its semi-autonomous, semi-smart machines to solve problems all on their own.

The claims were made at the Machine Learning Conference in San Francisco on Friday by Google software engineer Quoc V. Le in a talk in which he outlined some of the ways the content-slurper is putting “deep learning” systems to work.

“Deep learning” involves large clusters of computers ingesting and automatically classifying data, such as pictures. Google uses the technology for Android's voice-controlled search, image recognition, and Google Translate, among other services.

The ad-slinger’s deep learning experiments caused a stir in June 2012 when a front-page New York Times article revealed that after Google fed its “DistBelief” technology with millions of YouTube videos, the software had learned to recognize the key features of cats.

A feline detector may sound trivial, but it’s the sort of digital brain-power needed to identify house numbers for Street View photos, individual faces on websites, or, say, if Google ever needs to identify rebel human forces creeping through the smoking ruins of a bombed-out Silicon Valley.

Google’s deep-learning tech works in a hierarchical way, so the bottom-most layer of the neural network can detect changes in color in an image’s pixels, and then the layer above may be able to use that to recognize certain types of edges. After adding successive analysis layers, different branches of the system can develop detection methods for faces, rocking chairs, computers, and so on.
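For readers who want a concrete picture of that layered idea, here is a minimal NumPy sketch of a stack of neural-network layers, where each layer can only build on what the layer below it reports. The layer sizes, random weights and class count are made up purely for illustration and are not Google's DistBelief architecture.

# A minimal sketch of the hierarchy described above: low layers see raw
# pixel values, higher layers combine lower-level responses into
# detectors for more complex patterns. Purely illustrative weights.
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, weights, biases):
    """One fully connected layer with a ReLU non-linearity."""
    return np.maximum(0.0, inputs @ weights + biases)

# Toy "image": an 8x8 RGB patch flattened into a 192-value vector.
pixels = rng.random(8 * 8 * 3)

# Three stacked layers standing in for the hierarchy in the article:
# layer 1 ~ colour/edge detectors, layer 2 ~ parts, layer 3 ~ objects.
sizes = [192, 64, 32, 10]
params = [(rng.normal(scale=0.1, size=(m, n)), np.zeros(n))
          for m, n in zip(sizes[:-1], sizes[1:])]

activation = pixels
for w, b in params:
    activation = layer(activation, w, b)

print("scores for 10 hypothetical object classes:", activation)

In a real system the weights in each layer are learned from millions of examples rather than drawn at random, which is what lets different branches of the network specialise in faces, chairs, or shredders.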

What stunned Quoc V. Le is that the machine has learned to pick out features in things like paper shredders that people can’t easily spot – you’ve seen one shredder, you’ve seen them all, practically. But not so for Google’s monster.

Learning “how to engineer features to recognize that that’s a shredder – that’s very complicated,” he explained. “I spent a lot of thoughts on it and couldn’t do it.”

Many of Quoc’s pals had trouble identifying paper shredders when he showed them pictures of the machines, he said. The computer system has a greater success rate, and he isn’t quite sure how he could write a program to do this.

At this point in the presentation another Googler who was sitting next to our humble El Reg hack burst out laughing, gasping: “Wow.”

“We had to rely on data to engineer the features for us, rather than engineer the features ourselves,” Quoc explained.

This means that for some things, Google researchers can no longer explain exactly how the system has learned to spot certain objects, because the programming appears to think independently from its creators, and its complex cognitive processes are inscrutable. This “thinking” is within an extremely narrow remit, but it is demonstrably effective and independently verifiable.
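To make the “rely on data to engineer the features” point concrete, here is a toy, hypothetical sketch: a hand-written vertical-edge filter (a classic Sobel kernel) alongside a filter that starts as random noise and is fitted to labelled example patches by plain gradient descent. None of this is Google's code; it is only meant to illustrate the contrast Quoc describes.

# Hand-engineered feature vs. a feature fitted from data (toy example).
import numpy as np

rng = np.random.default_rng(1)

# Hand-engineered feature: a Sobel kernel someone wrote down by hand.
sobel_x = np.array([[-1.0, 0.0, 1.0],
                    [-2.0, 0.0, 2.0],
                    [-1.0, 0.0, 1.0]])

# "Learned" feature: start from noise, fit it to labelled patches.
learned = rng.normal(scale=0.1, size=(3, 3))

# Toy dataset: 3x3 patches labelled 1 if they contain a vertical edge.
patches, labels = [], []
for _ in range(200):
    if rng.random() < 0.5:
        patch = np.hstack([np.zeros((3, 2)), np.ones((3, 1))])
        label = 1.0
    else:
        patch = np.full((3, 3), rng.random())
        label = 0.0
    patches.append(patch + rng.normal(scale=0.05, size=(3, 3)))
    labels.append(label)

# One filter, squared-error loss, plain gradient descent.
for _ in range(500):
    for patch, label in zip(patches, labels):
        pred = np.sum(learned * patch)
        grad = 2.0 * (pred - label) * patch
        learned -= 0.01 * grad

print("hand-engineered filter:\n", sobel_x)
print("filter recovered from data:\n", np.round(learned, 2))

The fitted filter ends up edge-like without anyone having designed it; scaled up to millions of images and many stacked layers, that is why the resulting detectors work well even when their creators cannot say exactly what they respond to.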

Read more
