PlusVision

Transformer-based Perception Software for All Levels of Autonomy

PlusVision represents a transformative leap in perception technology, utilizing advanced Transformer models, Occupancy Networks, and Bird’s Eye View (BEV) 3D spatial reasoning to achieve superior scene understanding in complex, dynamic environments. PlusVision offers unparalleled scalability across diverse driving scenarios, enhancing adaptability without extensive re-training.

PlusVision bridges the gap between today’s AV1.0 and the future of AV2.0, equipping customers to meet the diverse, real-world demands of autonomous mobility.


Proprietary transformer BEV model

Our proprietary transformer BEV model delivers superior 3D spatial reasoning and scene understanding, and adapts to a variety of sensor configurations and modalities. It is a turnkey perception solution that runs on power-efficient, automotive-grade chipsets.
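To make the idea concrete, the core of a transformer BEV model is cross-attention: a grid of Bird's Eye View queries attends to image features from all cameras at once, fusing them into a single top-down representation. The sketch below is purely illustrative; the function names, dimensions, and use of plain NumPy are our assumptions, not Plus's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def bev_cross_attention(bev_queries, cam_features, d_k):
    """Fuse multi-camera features into a BEV grid via cross-attention.

    bev_queries:  (num_bev_cells, d) learned queries, one per BEV grid cell
    cam_features: (num_cameras * num_tokens, d) image tokens from all cameras
    """
    # Each BEV cell scores its relevance to every camera token.
    scores = bev_queries @ cam_features.T / np.sqrt(d_k)   # (cells, tokens)
    weights = softmax(scores, axis=-1)
    # Weighted sum pulls camera evidence into each top-down grid cell.
    return weights @ cam_features                          # (cells, d)

rng = np.random.default_rng(0)
d = 16
bev = bev_cross_attention(
    rng.normal(size=(50 * 50, d)),   # 50x50 BEV grid, flattened
    rng.normal(size=(6 * 100, d)),   # 6 cameras x 100 tokens each
    d,
)
```

Because every camera token is visible to every BEV cell in one attention step, the model is naturally agnostic to how many cameras (or other sensors) contribute tokens, which is the basis of the sensor-flexibility claims above.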

Auto-labeling camera-only data for 3D perception

We use a multi-view camera auto-labeling algorithm to produce 3D reconstructed ground-truth data to fine-tune our AI models.
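The classical building block behind lifting multi-view 2D observations into 3D ground truth is triangulation: given the same point seen by two or more calibrated cameras, its 3D position can be recovered linearly (the DLT method). This is a minimal textbook sketch with made-up camera matrices, not Plus's auto-labeling algorithm.

```python
import numpy as np

def triangulate(projections, pixels):
    """Linear (DLT) triangulation of one 3D point.

    projections: list of 3x4 camera projection matrices
    pixels:      list of (u, v) observations, one per camera
    """
    rows = []
    for P, (u, v) in zip(projections, pixels):
        # Each observation contributes two linear constraints on X.
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)
    # The 3D point is the null-space direction of A (last right singular vector).
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # homogeneous -> Euclidean

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two hypothetical cameras: one at the origin, one shifted 1 m along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([1.0, 2.0, 10.0])

X_est = triangulate([P1, P2], [project(P1, X_true), project(P2, X_true)])
```

With perfect observations the estimate matches the true point exactly; a production auto-labeling pipeline would combine many views, outlier rejection, and temporal aggregation on top of this core geometry.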

Hardware-agnostic perception

PlusVision supports a wide range of sensor configurations and modalities to enable precise perception across different levels of autonomy. The SoC-agnostic software architecture is fully compatible with mainstream automotive chipsets.

PlusVision Transformer Model vs. Convolutional Neural Network Model

1. Zero-shot learning

Transformer: Strong zero-shot capabilities; pre-trained models generalize well to new data distributions.

CNN: Lacks zero-shot learning; requires extensive training and fine-tuning on each new data set or task.

2. Sensor flexibility

Transformer: Inherently sensor-agnostic; handles varied multi-modal fusion, producing 360-degree scene understanding with spatio-temporal consistency.

CNN: Sensitive to sensor setup and configuration; separate models for different sensor types make cross-view spatial awareness challenging.

3. Computational scaling

Transformer: A unified attention mechanism processes multiple sensor inputs jointly, keeping computational requirements efficient.

CNN: Complexity grows with multi-modal sensors, adding significant computational overhead; better suited to single-camera or small-scale setups.

More with PlusVision™

Contact us to learn how our PlusVision™ AI perception software can help you accelerate the development of next-gen products in advanced safety systems, ADAS applications, and higher levels of autonomy.

Get in touch