Monday 11 January 2016

As The Development Of Driverless Cars Gathers Momentum, Here Is How Machines Are Taught To See

by Cambridge Research | Research


As the development of driverless cars gathers momentum, University of Cambridge researchers explain the complex process of getting machines in motion to see, so that they can move according to a predetermined path or direction.

In a video titled "Teaching machines to see", Alex Kendall of the Department of Engineering, University of Cambridge, explains how two technologies that use deep learning to help machines see and recognise their location and surroundings could be used in the development of driverless cars and autonomous robotics. They can even run on a regular camera or smartphone.

The human visual system, he says, is incredibly complex, and as children we learn to understand the world through hours and hours of experience. To give machines a similar ability, a technique called 'deep learning' is used.

So what is deep learning? It is a field of machine learning in which a machine learns much the way a toddler does: from many examples, rather than from explicitly programmed rules.
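What "learning from examples rather than rules" means can be made concrete in a few lines of code. The sketch below is not the Cambridge system, just a minimal, hypothetical PyTorch training loop: a small network is shown labelled examples and repeatedly nudges its internal weights to fit them, with no hand-written rules anywhere.

```python
# Minimal "learning from examples" loop (illustrative only).
import torch
import torch.nn as nn

# Synthetic stand-ins for labelled images: 256 random 64-dimensional
# inputs, each tagged with one of 10 made-up class labels.
inputs = torch.randn(256, 64)
labels = torch.randint(0, 10, (256,))

# A small neural network: its behaviour is defined entirely by weights
# that start out random and are shaped by the examples it sees.
model = nn.Sequential(
    nn.Linear(64, 128),
    nn.ReLU(),
    nn.Linear(128, 10),  # one score per class
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)  # how wrong are we on the examples?
    loss.backward()                        # work out how each weight should change
    optimizer.step()                       # nudge the weights accordingly

print(f"training loss after 20 passes: {loss.item():.3f}")
```

Real systems differ mainly in scale: millions of real photographs instead of random numbers, and far larger networks.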

But then there is the problem of localisation: what a machine learns in one locality does not necessarily work when it is tried in another.

He explains that three questions are important: (a) Where am I? (b) What is around me? and (c) What am I going to do next?

Answering them is not as easy as it sounds, because communities and local road networks differ from one place to another, which makes it difficult to take a system developed in one place and use it in another.
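Those three questions map naturally onto the stages of a software pipeline. The skeleton below is purely illustrative: the function names and the toy scene description are assumptions made for this example, not anything from the Cambridge system, and each stub stands in for what would really be a dedicated deep-learning model.

```python
# Hypothetical skeleton of the three questions as a pipeline (stubs only).
from dataclasses import dataclass

@dataclass
class Pose:
    x: float        # position on the map
    y: float
    heading: float  # direction of travel, in degrees

def where_am_i(frame: bytes) -> Pose:
    """(a) Estimate position and orientation from a single camera frame."""
    return Pose(x=0.0, y=0.0, heading=90.0)  # stub

def what_is_around_me(frame: bytes) -> dict:
    """(b) Describe the surroundings, e.g. the share of the view that is
    road, pavement, or pedestrian."""
    return {"road": 0.6, "pavement": 0.2, "pedestrian": 0.05}  # stub

def what_do_i_do_next(pose: Pose, scene: dict) -> str:
    """(c) Choose an action given where we are and what is around us."""
    return "slow_down" if scene.get("pedestrian", 0.0) > 0.01 else "continue"

frame = b"..."  # one camera frame
print(what_do_i_do_next(where_am_i(frame), what_is_around_me(frame)))
```

The localisation problem lives in the first two functions: a model trained on one city's streets may answer (a) and (b) poorly on another city's.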


In her talk 'How we teach computers to understand pictures', computer vision expert Fei-Fei Li explains why, despite technological advances, driverless cars are still some distance from full realisation, and shows what she and her team have done to get around the limitations.


The reason for this, she says, is that 'our technologies are still blind'. Cameras can see a child swimming in a pool, for example, but cannot alert parents when the child gets into difficulty or is drowning. Fei-Fei Li and her team are also helping to teach machines to see.

Fei-Fei Li's thinking is in line with the work of the University of Cambridge researchers, notably the idea that child-like learning must be an intrinsic part of the effort. Since there can be many variations of a single image, teaching a machine to see must be approached the way a child learns about their environment: from real-world experiences and examples.

In effect, the machine's camera takes the place of the child's eye, and the ability to make a machine ‘see’, accurately identifying where it is and what it’s looking at, will be a vital part of developing autonomous vehicles and robotics.
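The "what is it looking at" half of that ability is usually implemented as semantic segmentation: labelling every pixel of a camera frame as road, sky, pedestrian and so on. The sketch below is not the Cambridge researchers' network; it runs torchvision's pretrained FCN-ResNet50 as a readily available stand-in, on a random tensor standing in for a street-scene photo.

```python
# Per-pixel "what am I looking at?" with an off-the-shelf segmentation model.
import torch
from torchvision.models.segmentation import fcn_resnet50, FCN_ResNet50_Weights

weights = FCN_ResNet50_Weights.DEFAULT
model = fcn_resnet50(weights=weights).eval()
preprocess = weights.transforms()  # the resizing/normalisation the model expects

image = torch.rand(3, 480, 640)    # stand-in for an RGB street-scene photo
batch = preprocess(image).unsqueeze(0)

with torch.no_grad():
    scores = model(batch)["out"]   # shape: [1, num_classes, height, width]

labels = scores.argmax(dim=1)      # one class index per pixel
print(labels.shape, labels.unique())
```

The "where am I" half is the separate localisation task, in which a model estimates the camera's position and orientation from the same kind of image.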


“It’s remarkably good at recognising things in an image, because it’s had so much practice,” said Alex Kendall, a PhD student in the Department of Engineering, of the system the Cambridge team built. “However, there are a million knobs that we can turn to fine-tune the system so that it keeps getting better.”
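The "million knobs" are the network's trainable weights. Below is a hedged sketch of what counting and selectively turning them looks like in practice, using an arbitrary pretrained ResNet-18 from torchvision rather than the Cambridge system.

```python
# Counting the "knobs" and fine-tuning only a few of them (illustrative).
import torch
import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.DEFAULT)
print(sum(p.numel() for p in model.parameters()))  # roughly 11.7 million weights

# Freeze every knob, then re-open just a new final layer for fine-tuning.
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 10)  # fresh 10-class head (trainable)

# The optimiser only turns the knobs that are still trainable.
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)
```

Full fine-tuning would instead leave every weight trainable and use a small learning rate; deciding which knobs to turn, and by how much, is the practice Kendall describes.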


“Vision is our most powerful sense and driverless cars will also need to see,” said Professor Roberto Cipolla, who led the research. “But teaching a machine to see is far more difficult than it sounds.”

Read all about the [Cambridge Research] here.


