Parallelizing Disparity Matching and ML

KieranMITDUT
Posts: 12
Joined: Sun Jun 09, 2019 2:15 am

Parallelizing Disparity Matching and ML

Post by KieranMITDUT » Wed Aug 28, 2019 5:01 pm

Nerian,

Would it be possible to pass the images straight through the SceneScan rather than packaging them up with the disparity map after it has been processed? This would be extremely useful for anyone running ML/object detection on the images prior to depth estimation. Given the typical latency of the depth maps (~50 ms), this would line up well with inference times for object detection and would cut overall perception-system latency by a significant amount. In our system, for example, the SceneScan latency accounts for 30% of the overall system latency, which could be eliminated entirely if the images were passed through without waiting for the disparity matching to complete.
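To illustrate the overlap I mean: if the raw images were available immediately, detection inference and the wait for the disparity map could run concurrently instead of back to back. This is just a sketch of that idea with stand-in functions (`run_detector`, `wait_for_disparity` are hypothetical placeholders, not part of any Nerian API), using 50 ms sleeps to model the two latencies:

```python
import concurrent.futures
import time

def run_detector(image):
    # Stand-in for ML inference (~50 ms); real code would call a model.
    time.sleep(0.05)
    return {"boxes": [], "image": image}

def wait_for_disparity(frame_id):
    # Stand-in for receiving the disparity map from the device (~50 ms).
    time.sleep(0.05)
    return {"frame": frame_id, "disparity": "map"}

def process_frame(frame_id, image):
    # Launch inference and the disparity wait concurrently, so the two
    # ~50 ms latencies overlap instead of adding up to ~100 ms.
    with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
        det_future = pool.submit(run_detector, image)
        disp_future = pool.submit(wait_for_disparity, frame_id)
        return det_future.result(), disp_future.result()
```

With the current behavior the detector cannot start until the disparity map arrives, so the per-frame budget is the sum of the two; with passthrough it would be roughly the maximum of the two.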

Maybe this is already a feature?

Thanks for your time

-Kieran

Re: Parallelizing Disparity Matching and ML

Post by KieranMITDUT » Wed Aug 28, 2019 5:07 pm

I also wanted to ask: why does the latency not decrease when you reduce the maximum disparity from 192 to 32?

-Kieran
k.schauwecker
Posts: 30
Joined: Mon Mar 25, 2019 1:12 pm

Re: Parallelizing Disparity Matching and ML

Post by k.schauwecker » Thu Aug 29, 2019 6:43 am

Hi Kieran,

I agree that the way the images are output at the moment is not very flexible. Unfortunately, the system was designed around the concept of always streaming out two images in parallel, and changing this now would require a lot of effort. Technically it would be possible to send the raw image earlier, but for the rectified image this is not feasible: the rectification happens in sync with the stereo matching, so the rectified image is not ready any earlier than the disparity map.

Regarding your measured latency vs. disparity range: for many configurations the network output is the bottleneck, not the stereo matching itself. This is particularly the case for color cameras, because an RGB image contains significantly more data. Using jumbo frames reduces the problem, but for color cameras the network remains the limiting factor.
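A rough back-of-the-envelope calculation shows why (the resolution, pixel formats, and link efficiency below are illustrative assumptions, not the device's actual configuration):

```python
def transmit_time_ms(width, height, bytes_per_pixel,
                     link_gbps=1.0, efficiency=0.95):
    # Time to push one frame through the network link, folding all
    # protocol overhead into a flat efficiency factor.
    bits = width * height * bytes_per_pixel * 8
    return bits / (link_gbps * 1e9 * efficiency) * 1e3

# Example: roughly 2-megapixel frames over gigabit Ethernet.
mono = transmit_time_ms(1920, 1080, 1)   # 8-bit mono rectified image
rgb  = transmit_time_ms(1920, 1080, 3)   # 24-bit RGB image
disp = transmit_time_ms(1920, 1080, 2)   # disparity sent as 16-bit
```

Under these assumptions an RGB frame alone takes on the order of 50 ms on the wire, three times a mono frame, so the transfer time dominates regardless of how much the disparity range shrinks the matching workload.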

Regards,
Konstantin