Hacker shows what Tesla Full Self-Driving’s vision depth perception neural net can see


A hacker managed to pull Tesla’s vision depth perception neural net from his car equipped with the “Full Self-Driving” package.

The footage shows how the vehicle perceives depth through a point-cloud view generated by computer vision.

Tesla has recently started to move away from its radar sensor, which is useful for detecting depth, and is instead relying only on camera-based computer vision.

This is a very different approach from the rest of the industry, which uses not only radar but also lidar sensors.

Tesla CEO Elon Musk maintains that cameras and neural nets are the keys to achieving self-driving.

He told Electrek last month:

The whole road system is designed to work with optical imagers (eyes) and neural nets (brain). That’s why cameras and silicon neural nets are the solution.

One of the problems with dropping radar is depth perception, which radar is great at. Tesla plans to detect depth instead with a point-cloud view generated by its cameras and neural nets.
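Tesla hasn’t published how its point cloud is produced, but conceptually, once a neural net predicts a depth value for each camera pixel, that depth map can be back-projected into 3D points with the standard pinhole camera model. Here is a minimal sketch of that back-projection step using numpy; the function name and the camera intrinsics (`fx`, `fy`, `cx`, `cy`) are illustrative placeholders, not Tesla’s actual calibration or code:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a per-pixel depth map (in meters) into a 3D point cloud
    using the pinhole camera model.

    Note: intrinsics here are hypothetical example values, not Tesla's
    real camera calibration.
    """
    h, w = depth.shape
    # Pixel coordinate grids: u = column index, v = row index
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx  # horizontal offset from the optical axis
    y = (v - cy) * z / fy  # vertical offset from the optical axis
    # Return an (H*W, 3) array of [x, y, z] points in camera coordinates
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Toy example: a flat 4x4 depth map, every pixel 10 m away
depth = np.full((4, 4), 10.0)
cloud = depth_to_point_cloud(depth, fx=500.0, fy=500.0, cx=2.0, cy=2.0)
print(cloud.shape)  # (16, 3)
```

The real system would do far more (multiple frames, ego-motion compensation, filtering), but this illustrates why per-pixel depth from a single camera is enough, in principle, to build the kind of 3D view shown in Green’s video.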

Green, a Tesla hacker known for gaining root access to the software in Tesla’s vehicles, has accessed the depth perception neural net in Tesla’s Full Self-Driving package and released a video of it:

As Green mentioned, this neural net can actually produce a 3D view of the vehicle’s surroundings.

But the resolution is not as high as what Tesla showed in a previous presentation.


However, this new neural net is what runs live in the vehicle as it drives around on Tesla’s Full Self-Driving package with “city driving” activated.

Green noted that this neural net only runs on the main front-facing camera, which is one of the three front-facing cameras and one of eight cameras total.
