
Deepen AI unveils multi-sensor calibration for physical AI applications

Credit: dies-irae/iStock/Getty Images Plus/Getty Images

Deepen AI has released its latest targetless calibration platform, built to simplify and accelerate calibration for complex autonomous vehicles, automotive ADAS and robotics sensor suites.

The platform supports a wide range of configurations, including GNSS receivers, multiple lidars, radars, cameras and inertial measurement units (IMUs). It processes all inputs in one pass using a single continuous dataset such as a ROS bag.
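A single-pass workflow of this kind can be pictured as merging every sensor's stream into one time-ordered sequence that a calibration engine consumes once. The sketch below is purely illustrative (the topic names and `SensorMessage` type are hypothetical; Deepen AI's actual ingestion interface is not described in the announcement):

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class SensorMessage:
    stamp: float                       # timestamp in seconds
    topic: str = field(compare=False)  # e.g. "/lidar_front", "/camera0"
    payload: object = field(compare=False, default=None)

def merge_streams(*streams):
    """Merge per-sensor, time-sorted message streams into one ordered pass."""
    return list(heapq.merge(*streams))

# Hypothetical recording: each sensor's messages are already time-sorted,
# as they would be when read back from a single continuous ROS bag.
lidar  = [SensorMessage(0.00, "/lidar_front"), SensorMessage(0.10, "/lidar_front")]
camera = [SensorMessage(0.03, "/camera0"),     SensorMessage(0.08, "/camera0")]
imu    = [SensorMessage(0.01, "/imu"),         SensorMessage(0.06, "/imu")]

unified = merge_streams(lidar, camera, imu)
# A calibration engine would then walk `unified` front to back in one pass.
```

Keeping all sensors in one chronological stream is what lets intrinsics, extrinsics and time offsets be estimated jointly rather than sensor by sensor.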

As sensor stacks grow more sophisticated, traditional calibration methods have become a bottleneck in deploying autonomous systems at scale. These approaches are often manual, iterative and dependent on physical targets. Deepen AI’s solution introduces a fully automated, unified approach that calibrates all sensors simultaneously.

The platform estimates intrinsic, extrinsic and temporal parameters across the entire sensor suite in a single streamlined workflow, removing the need for sensor-by-sensor calibration. The company reports up to 0.05° angular accuracy and 0.7 cm positional accuracy, exceeding traditional target-based calibration techniques.
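The three parameter families mentioned above fit together naturally in code: intrinsics describe each sensor internally, extrinsics place it rigidly in a shared frame, and the temporal term aligns its clock. A minimal sketch of these structures and how they combine when mapping a lidar point into a camera image (all types and numbers here are hypothetical, not Deepen AI's representation):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Intrinsics:            # per-camera internal parameters
    fx: float; fy: float     # focal lengths (px)
    cx: float; cy: float     # principal point (px)

@dataclass
class Extrinsics:            # rigid transform: sensor frame -> camera frame
    R: List[List[float]]     # 3x3 rotation matrix
    t: List[float]           # translation (m)

@dataclass
class Temporal:
    offset_s: float          # clock offset relative to the reference sensor

def project(point, ext: Extrinsics, K: Intrinsics):
    """Map a 3-D point from the lidar frame into camera pixel coordinates."""
    x, y, z = (sum(ext.R[i][j] * point[j] for j in range(3)) + ext.t[i]
               for i in range(3))
    return (K.fx * x / z + K.cx, K.fy * y / z + K.cy)

# Toy example: identity rotation, no translation.
ext = Extrinsics(R=[[1, 0, 0], [0, 1, 0], [0, 0, 1]], t=[0.0, 0.0, 0.0])
K = Intrinsics(fx=500.0, fy=500.0, cx=320.0, cy=240.0)
u, v = project((0.0, 0.0, 2.0), ext, K)  # point on the optical axis
```

Joint estimation means the optimizer refines all of these fields for every sensor against the same dataset, so errors cannot accumulate across separate per-pair calibrations.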

Capabilities include:

  • Simultaneous calibration across all sensors using a single dataset
  • Support for multi-lidar, camera, radar, IMU, and GNSS configurations
  • Accuracy of up to 0.05° and 0.7 cm
  • No strict requirement for loop closure or fixed driving patterns
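To put the 0.05° figure in context, angular error translates into a positional offset that grows with range, which is why tight angular accuracy matters for long-range lidar and radar. A quick back-of-envelope check (the 50 m range is an illustrative assumption, not from the announcement):

```python
import math

ang_err_deg = 0.05            # reported angular accuracy
range_m = 50.0                # assumed distance to a lidar return
offset_m = math.tan(math.radians(ang_err_deg)) * range_m
print(f"{offset_m * 100:.1f} cm")  # lateral offset at 50 m
```

At 50 m, a 0.05° misalignment displaces a point by only about 4.4 cm, comparable to the 0.7 cm positional figure at close range.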

“Calibration has traditionally been one of the most time-consuming, complex and fragmented steps in deploying autonomous systems,” said Mohammad Musa, founder and CEO of Deepen AI. “With this release, teams can move to a system-level approach that delivers both speed and precision using real-world data.”

The system is designed to work without controlled environments or rigid data collection protocols, allowing teams to integrate calibration into existing workflows for both research and large-scale production deployments. The conditions required are simple: calibration can be performed in locations such as parking lots, garages or quiet streets, provided the environment is mostly static with minimal moving objects. A minimum of 30 seconds of continuous driving data is required.

The platform is already being deployed with customers working on highly complex sensor configurations, where multiple lidars and cameras need to be calibrated together as a single system. In such deployments, the full sensor stack has been calibrated during a normal drive through a parking garage, parking lot or small residential street, without any special driving patterns or looped trajectories.

Using only a short duration of driving data, Deepen AI simultaneously performed intrinsic, extrinsic and temporal calibration across all sensors in a single workflow. This unified approach not only simplifies operations and improves consistency, but also delivers accuracy that surpasses traditional target-based calibration methods, making it well suited for both research and production environments.