Stereolabs Docs: API Reference, Tutorials, and Integration
The ZED camera is designed to replicate the way human vision works. Using its two “eyes” and triangulation, the ZED builds a three-dimensional understanding of the observed scene, allowing your application to become space- and motion-aware.
This guide will help you get started with using the ZED camera. We recommend the following steps:
- Begin by reading the Getting Started section
- Learn more about the Camera and Sensors features of your camera
- Explore the Depth, Tracking, Mapping, and Spatial AI modules
- Check out the different Integrations available with the ZED
- Get started with application development by exploring the Tutorials and Samples
Key Features #
🎯 End-to-end spatial perception platform for human-like sensing capabilities.
⚡ Real-time performance: all ZED SDK algorithms are designed and optimized to run in real time.
📷 Reduce time-to-market with our comprehensive, ready-to-use hardware and software designed for multiple applications.
📖 User-friendly and intuitive, with easy-to-use integrations and well-documented API for streamlined development.
🛠️ Wide range of supported platforms, from desktop to embedded PCs.
| Depth Sensing | Object Detection | Body Tracking |
| --- | --- | --- |
| Positional Tracking | Geo Tracking | Spatial Mapping |
| Camera Control | Plane Detection | Multi Camera Fusion |
Supported Platforms #
Here is the list of all supported operating systems for the latest version of the ZED SDK. Check the recommended specifications to make sure your configuration is compatible with the ZED SDK.
Note: The ZED SDK requires an NVIDIA GPU with a Compute Capability greater than 5.
If you are not familiar with the corresponding versions between NVIDIA JetPack SDK and Jetson Linux, please take a look at our blog post.
The ZED SDK can be easily integrated into projects using multiple programming languages.
Thanks to its comprehensive API, ZED cameras can be interfaced with multiple third-party libraries and environments.