Thanks, Leo! By the way, good to hear from you.
Isn't it that VLA uses visual images? The AI recognizes and interprets shapes. Lidar doesn't produce a visual image.
And maybe, as was arguably seen in the recent Dongchedi ADAS test, adding Lidar as an add-on might introduce conflicting information. In that test some cars seemed indecisive, unable to choose between merging and braking, and eventually plowed into a road works section.
By the way, Jiri, nice interview!
Seriously? China can use AI to recognize stealth submarines and you can't train AI with lidar data? Are you kidding, Candice Yuan? Better call Meng Hao.
"How AI Can Destroy Submarines In Minutes: Inside China’s New Tech That May Change Naval Warfare Forever
China anti-submarine technology: Research conducted by Meng Hao, which was published in the Electronics Optics & Control journal, identifies an AI-powered anti-submarine warfare (ASW) system that can detect even the most silent submarines. Working as a wise battle commander, the system combines information from sonar buoys, radar, underwater sensors, and oceanic factors like temperature and salinity."
Real reason? Price war.
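For what it's worth, what that excerpt describes is essentially score-level sensor fusion: each source gives a detection confidence and the scores are combined, discounted by environmental context. A toy sketch of the idea only; the sensor names, weights and environmental penalty below are invented for illustration and are not the system from the paper.

```python
# Toy illustration of score-level sensor fusion (NOT the system from the paper;
# sensor names, weights and the environmental penalty are invented).
from dataclasses import dataclass

@dataclass
class SensorReading:
    name: str          # e.g. "sonar_buoy", "radar", "underwater_sensor"
    score: float       # per-sensor detection confidence in [0, 1]
    reliability: float # assumed weight for this sensor in [0, 1]

def fuse(readings, env_penalty=0.0):
    """Weighted average of per-sensor scores, discounted by an environmental
    penalty (e.g. an unfavourable temperature/salinity profile)."""
    total_w = sum(r.reliability for r in readings)
    if total_w == 0:
        return 0.0
    fused = sum(r.score * r.reliability for r in readings) / total_w
    return max(0.0, fused - env_penalty)

readings = [
    SensorReading("sonar_buoy", 0.62, 0.9),
    SensorReading("radar", 0.15, 0.4),
    SensorReading("underwater_sensor", 0.71, 0.8),
]
print(f"fused detection score: {fuse(readings, env_penalty=0.05):.2f}")
```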
I think the problem is, as they hinted in the interview, that it's hard to acquire Lidar or radar data to train models on compared to camera data.
Cameras are very standardized in the sense that they all capture the visible light spectrum as RGB at broadly comparable (often 4K) resolutions; even dashcam video might be useful for training. Lidars and radars, however, differ in capabilities and resolution, and I doubt it's easy to get good or useful data from competitors' cars for training, so at best you can use data from XPeng's own cars that shipped with Lidar, a much smaller population.
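To illustrate the standardization point: any RGB frame can be pushed into one training format almost trivially, whereas point clouds from different lidars first need to be mapped into a shared representation. A rough sketch; the resolutions, beam counts and voxel size are made up, not taken from the interview.

```python
# Sketch of why pooled camera data is easy and pooled lidar data is not.
# All numbers (resolutions, point counts, voxel size) are illustrative.
import numpy as np

def normalize_camera(frame: np.ndarray) -> np.ndarray:
    """Any RGB frame shares the same H x W x 3 layout and can be rescaled
    to one training resolution."""
    assert frame.ndim == 3 and frame.shape[2] == 3
    return frame.astype(np.float32) / 255.0

def voxelize_lidar(points: np.ndarray, cell=0.5) -> np.ndarray:
    """Point clouds from different lidars (16/32/128 beams, different ranges)
    must first be mapped into a shared representation, e.g. a voxel grid."""
    occupied = np.unique(np.floor(points[:, :3] / cell).astype(np.int32), axis=0)
    return occupied

dashcam = np.random.randint(0, 256, (2160, 3840, 3), dtype=np.uint8)  # 4K RGB
sparse_lidar = np.random.uniform(-50, 50, (20_000, 3))                # few beams
dense_lidar = np.random.uniform(-50, 50, (200_000, 3))                # many beams

print(normalize_camera(dashcam).shape)        # same format for every camera
print(voxelize_lidar(sparse_lidar).shape,     # very different densities,
      voxelize_lidar(dense_lidar).shape)      # even after voxelization
```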
The inability to utilise lidar for training, such that no depth map is created or used, seems like quite a flaw in the design, though I am of course no expert. Hopefully the time will come soon when we can compare mass-produced L4 vehicles with and without lidar.
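For context on the depth-map point: if a calibrated lidar is on the car, its points can in principle be projected into the camera image to give a sparse depth map alongside the RGB frames. A minimal sketch under assumed pinhole intrinsics; the numbers are illustrative and not from any specific vehicle.

```python
# Minimal sketch: projecting lidar points (already in the camera frame) into a
# sparse depth map. Intrinsics and image size are made up for illustration.
import numpy as np

def lidar_to_depth(points_cam: np.ndarray, K: np.ndarray, h: int, w: int) -> np.ndarray:
    """points_cam: (N, 3) xyz in camera coordinates (z forward).
    Returns an h x w depth image; pixels with no lidar return stay at 0."""
    depth = np.zeros((h, w), dtype=np.float32)
    pts = points_cam[points_cam[:, 2] > 0.1]   # keep points in front of the camera
    uvw = (K @ pts.T).T                        # pinhole projection
    u = (uvw[:, 0] / uvw[:, 2]).astype(int)
    v = (uvw[:, 1] / uvw[:, 2]).astype(int)
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    depth[v[ok], u[ok]] = pts[ok, 2]           # store range (z) per pixel
    return depth

K = np.array([[1000.0, 0, 960], [0, 1000.0, 540], [0, 0, 1]])  # assumed intrinsics
points = np.random.uniform([-20, -2, 1], [20, 2, 80], size=(50_000, 3))
depth = lidar_to_depth(points, K, 1080, 1920)
print(f"pixels with depth: {(depth > 0).mean():.1%}")  # sparse, as expected
```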