Multimodal Training Data for Sensor Fusion Applications | Cognata Simulation Platform

Fusing data from camera, radar, and LiDAR for ADAS and AV perception systems


Bringing together data from camera, radar, and LiDAR, sensor fusion enables ADAS and AV perception systems to leverage the sensors’ complementary strengths and better see the world around them. In this video, we see how LiDAR for distance calculation, camera for traffic light detection, and radar for accurate speed measurement of surrounding vehicles come together to navigate a safe, efficient maneuver through evening traffic in Munich, in the rain. Whether considered independently for late fusion or processed together for early fusion, accurate multimodal data is key for thoroughly training today’s perception systems.
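To make the late vs. early fusion distinction concrete, here is a minimal Python sketch. It is not Cognata's implementation or API; all names, values, and thresholds (e.g. `lidar_range_m`, the 3-second time-to-contact cutoff) are illustrative assumptions. Late fusion combines per-sensor, object-level results into a decision, while early fusion merges raw sensor data into a single representation before any detection network runs.

```python
import numpy as np

# Hypothetical per-sensor outputs for a single tracked object
# (illustrative values only, not from any real sensor or dataset).
lidar_range_m = 42.7              # LiDAR: precise distance to the lead vehicle
camera_light_state = "red"        # Camera: classified traffic-light state
radar_relative_speed_mps = -3.1   # Radar: closing speed of the car ahead (negative = approaching)


def late_fusion(range_m, light_state, rel_speed_mps):
    """Late fusion: each sensor is processed independently and only the
    object-level results are combined into a driving decision."""
    must_stop = light_state == "red"
    time_to_contact_s = range_m / abs(rel_speed_mps) if rel_speed_mps < 0 else float("inf")
    return {
        "brake": must_stop or time_to_contact_s < 3.0,  # assumed threshold
        "time_to_contact_s": time_to_contact_s,
    }


def early_fusion(lidar_points, camera_crop, radar_returns):
    """Early fusion: raw (or lightly processed) sensor data is merged into one
    feature representation before a downstream detection DNN consumes it.
    Here this is reduced to a toy flatten-and-concatenate step."""
    return np.concatenate([lidar_points.ravel(), camera_crop.ravel(), radar_returns.ravel()])


if __name__ == "__main__":
    print(late_fusion(lidar_range_m, camera_light_state, radar_relative_speed_mps))
    fused = early_fusion(
        np.zeros((100, 3)),   # placeholder LiDAR point cloud (x, y, z)
        np.zeros((8, 8, 3)),  # placeholder camera image crop
        np.zeros((16, 4)),    # placeholder radar returns
    )
    print(fused.shape)
```

In practice the fused representation would feed a perception network rather than a print statement; the point of the sketch is only the ordering of when the modalities are combined.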

Cognata generates automotive training and validation data for machine learning and deep neural networks (DNNs) to complement existing data, bridge gaps, and provide non-biased, accurate, diverse, and realistic datasets on demand.

Learn more at:
www.cognata.com/datasets
www.cognata.com/simulation