We use the best sensing technologies, including radars, LiDARs, and cameras, to "see" 360 degrees around the vehicle. Our fusion-based perception system allows our self-driving trucks to track vehicles up to one mile (about 1,600 meters) away.
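Long-range tracking of this kind is commonly built on recursive state estimation. The sketch below shows the idea with a deliberately simplified one-dimensional constant-velocity Kalman filter that maintains a range estimate for a distant vehicle between sensor returns; all class names, noise figures, and measurement values are illustrative assumptions, not actual PLUS system parameters.

```python
# Illustrative 1-D constant-velocity tracker for a distant vehicle.
# A scalar uncertainty term stands in for the full covariance matrix,
# purely to keep the sketch short; values are hypothetical.

class RangeTracker:
    def __init__(self, r0, v0, p=100.0):
        self.x = [r0, v0]   # state: range (m), range rate (m/s)
        self.p = p          # scalar uncertainty proxy

    def predict(self, dt):
        """Propagate the state forward between measurements."""
        self.x[0] += self.x[1] * dt
        self.p += 1.0       # process noise grows uncertainty over time

    def update(self, z, r_var=25.0):
        """Fold in a new range measurement z with variance r_var."""
        k = self.p / (self.p + r_var)     # Kalman gain
        self.x[0] += k * (z - self.x[0])  # correct toward the measurement
        self.p *= (1.0 - k)               # measurement shrinks uncertainty

# Track a vehicle roughly a mile out, closing at 20 m/s:
tracker = RangeTracker(r0=1600.0, v0=-20.0)
tracker.predict(dt=0.1)
tracker.update(z=1597.0)
```

Each predict/update cycle keeps the track alive through brief sensor dropouts, which is one reason fused, filtered tracking works at ranges where any single raw detection is noisy.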
Our localization and mapping algorithms accurately track the location of each self-driving truck and continuously update the map with the truck's surroundings.
At PLUS, we train and deploy a suite of deep learning models to perform complex tasks such as accurately detecting and analyzing ground objects and road structures, and predicting the behavior of our trucks and surrounding vehicles.
Multimodal sensors provide the redundancy needed to mitigate the effects of sensor failures. They also correct for localized environmental noise and significantly boost precision within their operating range.
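The precision boost from combining sensors can be made concrete with a standard inverse-variance weighted fusion of independent estimates: the fused result is always more precise than the best single sensor. This is a minimal sketch assuming Gaussian, independent sensor noise; the sensor names and noise variances are illustrative, not real system values.

```python
# Minimal sketch of multi-sensor fusion: combine independent estimates
# of the same quantity (e.g., range from radar and camera) by
# inverse-variance weighting. All numbers below are hypothetical.

def fuse(estimates):
    """estimates: list of (mean, variance) pairs from different sensors.
    Returns the fused (mean, variance) under a Gaussian noise model."""
    total_precision = sum(1.0 / var for _, var in estimates)
    fused_mean = sum(mean / var for mean, var in estimates) / total_precision
    return fused_mean, 1.0 / total_precision

# A long-range radar return and a camera depth estimate of one vehicle:
radar = (1520.0, 25.0)    # (meters, variance): radar is precise at range
camera = (1495.0, 400.0)  # camera depth is noisier at long range
mean, var = fuse([radar, camera])
# The fused variance is lower than either sensor's on its own.
```

Weighting by inverse variance also captures the redundancy argument: if one sensor degrades, its variance grows, its weight shrinks toward zero, and the fused estimate falls back smoothly on the remaining modalities.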
Complementary models and mechanisms such as odometry, visual SLAM, and point-cloud-based localization provide functional redundancy. Statistical models can also be used to ensure smooth transitions between different operating modes.
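One simple way to realize a smooth transition between localization modes is to blend the redundant estimates with a confidence weight that ramps gradually rather than switching hard. The sketch below blends a point-cloud pose with an odometry pose; the function name, confidence scale, and ramp width are assumptions for illustration only.

```python
# Sketch of smoothly transitioning between two localization sources
# (e.g., point-cloud matching and wheel odometry). Instead of a hard
# switch, a confidence-driven weight ramps linearly so the fused pose
# never jumps when one mode degrades. All parameters are hypothetical.

def blended_pose(pc_pose, odo_pose, pc_confidence, ramp=0.2):
    """pc_confidence in [0, 1]: below (0.5 - ramp) rely on odometry
    alone, above (0.5 + ramp) on the point-cloud pose alone, and blend
    linearly in between. Poses are (x, y) tuples in meters."""
    lo, hi = 0.5 - ramp, 0.5 + ramp
    w = min(1.0, max(0.0, (pc_confidence - lo) / (hi - lo)))
    return tuple(w * a + (1.0 - w) * b for a, b in zip(pc_pose, odo_pose))

# Point-cloud matching is fully trusted: use its pose outright.
pose = blended_pose((10.0, 5.0), (10.4, 5.2), pc_confidence=1.0)
```

The ramp acts as a crude hysteresis band: small fluctuations in confidence near the switching point move the output only slightly, instead of toggling the active mode on every frame.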
PLUS collaborates closely with original equipment manufacturers to ensure our software system is deeply integrated with the vehicle's redundant electrical and mechanical systems.
The complete Level 4 autonomous system and the minimal Safe Landing system together ensure safe and graceful performance degradation in the event of any software, sensor, or hardware failure.
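Graceful degradation of this kind is typically implemented as a fault-driven mode selector that steps down to progressively more conservative behaviors instead of failing hard. The sketch below shows that pattern; the mode names, subsystem labels, and failure classification are illustrative assumptions, not the actual PLUS fault taxonomy.

```python
# Sketch of graceful-degradation mode selection: given the set of
# currently failed subsystems, choose the most capable mode that is
# still safe. Mode and subsystem names are hypothetical.

MODES = ["L4_AUTONOMY", "DEGRADED_AUTONOMY", "SAFE_LANDING"]

def select_mode(failures):
    """failures: set of failed subsystem names, e.g. {'lidar'}."""
    critical = {"compute", "braking"}   # faults that end the mission
    if failures & critical:
        return "SAFE_LANDING"           # bring the truck to a safe stop
    if failures:
        return "DEGRADED_AUTONOMY"      # e.g., reduced speed on the
                                        # remaining redundant sensors
    return "L4_AUTONOMY"                # full capability, no faults

# A single sensor fault degrades the mission but does not end it:
mode = select_mode({"lidar"})
```

The key design choice is that every fault maps to some defined mode, so there is no failure combination for which the system's behavior is unspecified.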