The ZED SDK allows you to add depth, motion sensing and spatial AI to your application. Available as a standalone installer, it includes applications, tools and sample projects with source code.
ZED SDK 5.1 introduces a fully reworked image capture and recording pipeline, improving stability, performance, and reliability. It adds Jetson Thor support, delivering up to 2.5× higher performance. An extra-close depth range is now supported. Streaming latency is significantly reduced for smoother real-time operation. A redesigned Virtual Stereo workflow for ZED X One achieves up to 60% lower CPU usage. OpenCV fisheye calibration is now supported natively, and the new ZED Studio app unifies camera, stream, and SVO management. Improved Positional Tracking boosts robustness, introduces a 2D mode, and refines statuses. Expanded support for CUDA 13, JetPack 7, Python 3.14, and C++17/20 makes ZED SDK 5.1 faster, more stable, and ready for the next generation of robotics and spatial AI.
For older releases and changelog, see the ZED SDK release archive.
- Added support for an extra-close depth range by lowering sl::InitParameters::depth_minimum_distance in all Neural depth modes.
- Fixed random race conditions in POSITIONAL_TRACKING::GEN_3.
- Fixed a race condition when using POSITIONAL_TRACKING::GEN_3 with DEPTH_MODE::NONE that resulted in inconsistent computation.
- Improved the performance of the getPositionalTrackingLandmarks and getPositionalTrackingLandmarks2D methods. Performance gains are especially visible in large mapped areas.
- Added the FUSION_REFERENCE_FRAME enum.
- Added the override_gravity field in FusionConfiguration.
- Updated the samples to use comparison operators (<, >, <=, >=) instead of equality checks (==, !=) when testing ERROR_CODE::SUCCESS for camera opening and frame grabbing. This follows best practices for distinguishing between warnings (negative values) and errors (positive values).

Oct 23, 2025
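The sign convention behind the recommended error-code checks (SUCCESS equals zero, warnings are negative, hard errors are positive) can be sketched with a stand-in enum. The names and numeric values below are illustrative only, not the SDK's actual `sl.ERROR_CODE` values:

```python
from enum import IntEnum

# Illustrative stand-in mirroring the convention described above:
# SUCCESS == 0, warnings < 0, hard errors > 0.
class ErrorCode(IntEnum):
    CORRUPTED_FRAME = -2      # warning: frame retrieved but degraded
    CAMERA_REBOOTING = -1     # warning: transient condition
    SUCCESS = 0
    FAILURE = 1               # error: operation failed
    CAMERA_NOT_DETECTED = 2   # error: no device found

def frame_usable(code: ErrorCode) -> bool:
    """A grab result is usable unless it is a hard error (> SUCCESS)."""
    return code <= ErrorCode.SUCCESS

def is_warning(code: ErrorCode) -> bool:
    """Warnings sit strictly below SUCCESS."""
    return code < ErrorCode.SUCCESS
```

With this convention, `code <= SUCCESS` accepts both clean frames and warning states, while `code == SUCCESS` would silently reject recoverable warnings, which is why comparison operators are preferred over equality checks.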
- Added a Virtual Stereo input for two ZED X One cameras via sl::InputType::setVirtualStereoFromCameraIDs or sl::InputType::setVirtualStereoFromSerialNumbers. It brings a significant performance optimization: CPU usage decreased by 60%, making this version 2.5× more CPU efficient than ZED_MediaServer, even in IPC mode. Refer to the virtual stereo/cpp sample for an example of how to use it.
- Added sl::ERROR_CODE::POTENTIAL_CALIBRATION_ISSUE, returned by the sl::Camera::open function if camera calibration is poor and may require a recalibration. To verify this, open ZED_DepthViewer and perform a visual check of the scene (is the depth image full, are planes flat, are objects' shapes correct, …).
- Added getSensorsDataBatch() to retrieve all high-frequency sensor data associated with the latest grabbed frame.
- Extended sl::VIEW to handle more color conventions. Added additional 3-channel (BGR) and GRAY color modes. DEPTH and CONFIDENCE views are now retrieved in color.
- Unified VIDEO_SETTINGS reset calls. All settings can now be reset with the base function sl::Camera::setCameraSettings(setting, sl::VIDEO_SETTINGS_VALUE_AUTO), for sl::Camera and sl::CameraOne (in live and streaming mode).
- Removed the bus_type parameter from sl::InputType::setFromCameraID and from sl::InputType::setFromSerialNumber. Also removed the CAMERA_TYPE enum. The Camera and CameraOne objects now handle the camera and bus types automatically.
- Changed the default sdk_verbose of InitParametersOne to 1 to match InitParameters.
- Fixed the rectification of sl::CameraOne: on some cameras the rectified field of view was much lower than expected. To revert to the previous rectification, set the environment variable ZED_SDK_OLD_FOV_COMPUTE=1.
- Added DRIVER_FAILURE to the sl::ERROR_CODE enum. This error code can be returned when the driver's initialization has failed. When using GMSL cameras, it is then recommended to restart the driver with sudo systemctl restart zed_x_daemon.service.
- Introduced a new stream protocol version. To revert to the previous protocol, set the environment variable ZED_SDK_STREAM_VERSION=1.
  Important: the Receiver must match or exceed the sender's version for compatibility.
- Added support for the AUTO_ANALOG_GAIN_RANGE, AUTO_DIGITAL_GAIN_RANGE, and AUTO_EXPOSURE_TIME_RANGE camera control values in streaming.
- Added the PositionalTrackingParameters::enable_2d_ground_mode setting. This mode constrains positional tracking to two dimensions and is recommended for grounded robots and vehicles.
- Added a SPATIAL_MEMORY status in GEN3 to indicate whether tracking is active and relocated in a pre-mapped environment, performing loop closure, or relocating.
- Refined the status returned by getPosition(), which now returns sl::POSITIONAL_TRACKING::OK where it previously returned sl::POSITIONAL_TRACKING::UNAVAILABLE.
- Updated the sl::Camera to sl::Fusion connection in the network configuration.
- sl.Mat.get_data() now provides explicit errors and clearer warnings. Added instructions to set up and validate CuPy integration with PyZed.
- Exposed sl.FusionConfiguration in the Python wrapper.
- Added GNSS_STATUS in both GNSSData and FusedPositionalTrackingStatus.
- Updated the --cimud option of the ZEDCalibration tool to handle all GMSL cameras.
- Added the hello_zed_gpu.py script in tutorials/tutorial1 - hello ZED/python to validate the usage of CuPy with the ZED SDK Python API.
- Added pytorch_yolov8_cupy_preproc, a CuPy-optimized YOLOv8 inference sample. Also updated the default weights to yolo11m.pt (previously yolov8m.pt).
- Improved handling of .area files, allowing for easier management of mapped environments. Also introduced the use of SPATIAL_MEMORY for better tracking feedback.
- Added the virtual stereo/cpp sample, showing how to use the new VirtualStereo API with two ZED X One cameras.
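Conceptually, a 2D ground mode like the one enabled by enable_2d_ground_mode collapses a full 6-DoF pose to (x, y, yaw). The sketch below is plain Python, not the SDK API; it assumes a Z-up coordinate convention and uses a hypothetical Pose type to show the projection:

```python
import math
from dataclasses import dataclass

@dataclass
class Pose:
    # Translation in meters, orientation as Euler angles in radians.
    x: float
    y: float
    z: float
    roll: float
    pitch: float
    yaw: float

def to_ground_plane(p: Pose) -> Pose:
    """Project a 6-DoF pose onto the ground plane (Z-up assumed):
    height, roll, and pitch are zeroed; only x, y, and yaw survive.
    This mirrors the idea of constraining tracking to two dimensions
    for grounded robots and vehicles."""
    return Pose(p.x, p.y, 0.0, 0.0, 0.0, p.yaw)

# Small IMU jitter in z/roll/pitch is discarded by the projection:
raw = Pose(1.2, -0.5, 0.07, 0.01, -0.02, math.pi / 2)
flat = to_ground_plane(raw)
```

Constraining the estimate this way prevents drift in height and tilt from accumulating in the reported pose, which is why such a mode is recommended for wheeled platforms.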