Camera Class Reference

This class serves as the primary interface between the camera and the various features provided by the SDK. More...

Functions

 Camera ()
 Default constructor. More...
 
 ~Camera ()
 Class destructor. More...
 
ERROR_CODE open (InitParameters init_parameters=InitParameters())
 Opens the ZED camera from the provided InitParameters. More...
 
InitParameters getInitParameters ()
 Returns the InitParameters used. More...
 
bool isOpened ()
 Reports if the camera has been successfully opened. More...
 
void close ()
 Close an opened camera. More...
 
ERROR_CODE grab (RuntimeParameters rt_parameters=RuntimeParameters())
 This method will grab the latest images from the camera, rectify them, and compute the measurements based on the RuntimeParameters provided (depth, point cloud, tracking, etc.) More...
 
RuntimeParameters getRuntimeParameters ()
 Returns the RuntimeParameters used. More...
 
CameraInformation getCameraInformation (Resolution image_size=Resolution(0, 0))
 Returns the CameraInformation associated with the camera being used. More...
 
void updateSelfCalibration ()
 Perform a new self-calibration process. More...
 
CUcontext getCUDAContext ()
 Gets the Camera-created CUDA context for sharing it with other CUDA-capable libraries. More...
 
CUstream getCUDAStream ()
 Gets the Camera-created CUDA stream for sharing it with other CUDA-capable libraries. More...
 
ERROR_CODE findPlaneAtHit (sl::uint2 coord, sl::Plane &plane, PlaneDetectionParameters parameters=PlaneDetectionParameters())
 Checks the plane at the given left image coordinates. More...
 
ERROR_CODE findFloorPlane (sl::Plane &floorPlane, sl::Transform &resetTrackingFloorFrame, float floor_height_prior=INVALID_VALUE, sl::Rotation world_orientation_prior=sl::Matrix3f::zeros(), float floor_height_prior_tolerance=INVALID_VALUE)
 Detect the floor plane of the scene. More...
 
HealthStatus getHealthStatus ()
 Returns HealthStatus. More...
 
 Camera (const Camera &)=delete
 The Camera object cannot be copied. Therefore, its copy constructor is disabled. More...
 
Video
ERROR_CODE retrieveImage (Mat &mat, VIEW view=VIEW::LEFT, MEM type=MEM::CPU, Resolution image_size=Resolution(0, 0))
 Retrieves images from the camera (or SVO file). More...
 
ERROR_CODE getCameraSettings (VIDEO_SETTINGS settings, int &setting)
 Returns the current value of the requested camera setting (gain, brightness, hue, exposure, etc.). More...
 
ERROR_CODE getCameraSettings (VIDEO_SETTINGS settings, int &min_val, int &max_val)
 Fills the current values of the requested settings for VIDEO_SETTINGS that supports two values (min/max). More...
 
ERROR_CODE getCameraSettings (VIDEO_SETTINGS settings, Rect &roi, sl::SIDE side=sl::SIDE::BOTH)
 Overloaded method for VIDEO_SETTINGS::AEC_AGC_ROI which takes a Rect as parameter. More...
 
ERROR_CODE setCameraSettings (VIDEO_SETTINGS settings, int value=VIDEO_SETTINGS_VALUE_AUTO)
 Sets the value of the requested camera setting (gain, brightness, hue, exposure, etc.). More...
 
ERROR_CODE setCameraSettings (VIDEO_SETTINGS settings, int min, int max)
 Sets the value of the requested camera setting that supports two values (min/max). More...
 
ERROR_CODE setCameraSettings (VIDEO_SETTINGS settings, Rect roi, sl::SIDE side=sl::SIDE::BOTH, bool reset=false)
 Overloaded method for VIDEO_SETTINGS::AEC_AGC_ROI which takes a Rect as parameter. More...
 
ERROR_CODE getCameraSettingsRange (VIDEO_SETTINGS settings, int &min, int &max)
 Get the range for the specified camera settings VIDEO_SETTINGS as min/max value. More...
 
bool isCameraSettingSupported (VIDEO_SETTINGS setting)
 Test if the video setting is supported by the camera. More...
 
float getCurrentFPS ()
 Returns the current framerate at which the grab() method is successfully called. More...
 
Timestamp getTimestamp (sl::TIME_REFERENCE reference_time)
 Returns the timestamp in the requested TIME_REFERENCE. More...
 
unsigned int getFrameDroppedCount ()
 Returns the number of frames dropped since grab() was called for the first time. More...
 
int getSVOPosition ()
 Returns the current playback position in the SVO file. More...
 
int getSVOPositionAtTimestamp (const sl::Timestamp &timestamp)
 Retrieves the frame index within the SVO file corresponding to the provided timestamp. More...
 
void setSVOPosition (int frame_number)
 Sets the playback cursor to the desired frame number in the SVO file. More...
 
int getSVONumberOfFrames ()
 Returns the number of frames in the SVO file. More...
 
ERROR_CODE ingestDataIntoSVO (const sl::SVOData &data)
 Ingests SVOData into an SVO file. More...
 
ERROR_CODE retrieveSVOData (const std::string &key, std::map< sl::Timestamp, sl::SVOData > &data, sl::Timestamp ts_begin=0, sl::Timestamp ts_end=0)
 Retrieves SVO data from the SVO file at the given channel key and in the given timestamp range. More...
 
std::vector< std::string > getSVODataKeys ()
 Get the external channels that can be retrieved from the SVO file. More...
 
Depth Sensing
ERROR_CODE retrieveMeasure (Mat &mat, MEASURE measure=MEASURE::DEPTH, MEM type=MEM::CPU, Resolution image_size=Resolution(0, 0))
 Computed measures, like depth, point cloud, or normals, can be retrieved using this method. More...
 
ERROR_CODE setRegionOfInterest (sl::Mat &roi_mask, std::unordered_set< MODULE > module={MODULE::ALL})
 Defines a region of interest to focus on for all the SDK, discarding other parts. More...
 
ERROR_CODE getRegionOfInterest (sl::Mat &roi_mask, sl::Resolution image_size=Resolution(0, 0), MODULE module=MODULE::ALL)
 Get the previously set or computed region of interest. More...
 
ERROR_CODE startRegionOfInterestAutoDetection (sl::RegionOfInterestParameters roi_param=sl::RegionOfInterestParameters())
 Starts the automatic detection of a region of interest to focus on for the whole SDK, discarding other parts. The detection is based on the general motion of the camera combined with the motion in the scene. The camera must move during this process; an internal motion detector based on the Positional Tracking module is used. It requires a few hundred frames of motion to compute the mask. More...
 
REGION_OF_INTEREST_AUTO_DETECTION_STATE getRegionOfInterestAutoDetectionStatus ()
 Returns the status of the automatic region of interest detection. The automatic detection is enabled by calling startRegionOfInterestAutoDetection. More...
 
ERROR_CODE getCurrentMinMaxDepth (float &min, float &max)
 Gets the current range of perceived depth. More...
 
Positional Tracking
ERROR_CODE enablePositionalTracking (PositionalTrackingParameters tracking_parameters=PositionalTrackingParameters())
 Initializes and starts the positional tracking processes. More...
 
POSITIONAL_TRACKING_STATE getPosition (Pose &camera_pose, REFERENCE_FRAME reference_frame=REFERENCE_FRAME::WORLD)
 Retrieves the estimated position and orientation of the camera in the specified reference frame. More...
 
sl::PositionalTrackingStatus getPositionalTrackingStatus ()
 Returns the current status of the positional tracking module. More...
 
ERROR_CODE saveAreaMap (String area_file_path)
 Saves the current area learning file. The file will contain spatial memory data generated by the tracking. More...
 
AREA_EXPORTING_STATE getAreaExportState ()
 Returns the state of the spatial memory export process. More...
 
ERROR_CODE resetPositionalTracking (const Transform &path)
 Resets the tracking, and re-initializes the position with the given transformation matrix. More...
 
void disablePositionalTracking (String area_file_path="")
 Disables the positional tracking. More...
 
bool isPositionalTrackingEnabled ()
 Tells if the tracking module is enabled. More...
 
PositionalTrackingParameters getPositionalTrackingParameters ()
 Returns the PositionalTrackingParameters used. More...
 
ERROR_CODE getSensorsData (SensorsData &data, TIME_REFERENCE reference_time)
 Retrieves the SensorsData (IMU, magnetometer, barometer) at a specific time reference. More...
 
ERROR_CODE setIMUPrior (const sl::Transform &transform)
 Set an optional IMU orientation hint that will be used to assist the tracking during the next grab(). More...
 
Spatial Mapping
ERROR_CODE enableSpatialMapping (SpatialMappingParameters spatial_mapping_parameters=SpatialMappingParameters())
 Initializes and starts the spatial mapping processes. More...
 
SPATIAL_MAPPING_STATE getSpatialMappingState ()
 Returns the current spatial mapping state. More...
 
void requestSpatialMapAsync ()
 Starts the spatial map generation process in a non-blocking thread from the spatial mapping process. More...
 
ERROR_CODE getSpatialMapRequestStatusAsync ()
 Returns the spatial map generation status. More...
 
ERROR_CODE retrieveSpatialMapAsync (Mesh &mesh)
 Retrieves the current generated spatial map only if SpatialMappingParameters::map_type was set as MESH. More...
 
ERROR_CODE retrieveSpatialMapAsync (FusedPointCloud &fpc)
 Retrieves the current generated spatial map only if SpatialMappingParameters::map_type was set as FUSED_POINT_CLOUD. More...
 
ERROR_CODE extractWholeSpatialMap (Mesh &mesh)
 Extract the current spatial map from the spatial mapping process only if SpatialMappingParameters::map_type was set as MESH. More...
 
ERROR_CODE extractWholeSpatialMap (FusedPointCloud &fpc)
 Extract the current spatial map from the spatial mapping process only if SpatialMappingParameters::map_type was set as FUSED_POINT_CLOUD. More...
 
void pauseSpatialMapping (bool status)
 Pauses or resumes the spatial mapping processes. More...
 
void disableSpatialMapping ()
 Disables the spatial mapping process. More...
 
SpatialMappingParameters getSpatialMappingParameters ()
 Returns the SpatialMappingParameters used. More...
 
Recording
ERROR_CODE enableRecording (RecordingParameters recording_parameters)
 Creates an SVO file to be filled by enableRecording() and disableRecording(). More...
 
RecordingStatus getRecordingStatus ()
 Get the recording information. More...
 
void pauseRecording (bool status)
 Pauses or resumes the recording. More...
 
void disableRecording ()
 Disables the recording initiated by enableRecording() and closes the generated file. More...
 
RecordingParameters getRecordingParameters ()
 Returns the RecordingParameters used. More...
 
Streaming
ERROR_CODE enableStreaming (StreamingParameters streaming_parameters=StreamingParameters())
 Creates a streaming pipeline. More...
 
void disableStreaming ()
 Disables the streaming initiated by enableStreaming(). More...
 
bool isStreamingEnabled ()
 Tells if the streaming is running. More...
 
StreamingParameters getStreamingParameters ()
 Returns the StreamingParameters used. More...
 
Object Detection
ERROR_CODE enableObjectDetection (ObjectDetectionParameters object_detection_parameters=ObjectDetectionParameters())
 Initializes and starts object detection module. More...
 
void disableObjectDetection (unsigned int instance_id=0, bool force_disable_all_instances=false)
 Disables the Object Detection process. More...
 
ERROR_CODE ingestCustomBoxObjects (const std::vector< CustomBoxObjectData > &objects_in, const unsigned int instance_id=0)
 Feed the 3D Object tracking method with your own 2D bounding boxes from your own detection algorithm. More...
 
ERROR_CODE ingestCustomMaskObjects (const std::vector< CustomMaskObjectData > &objects_in, const unsigned int instance_id=0)
 Feed the 3D Object tracking method with your own 2D bounding boxes with masks from your own detection algorithm. More...
 
ERROR_CODE retrieveObjects (Objects &objects, ObjectDetectionRuntimeParameters parameters=ObjectDetectionRuntimeParameters(), const unsigned int instance_id=0)
 Retrieve objects detected by the object detection module. More...
 
ERROR_CODE retrieveObjects (Objects &objects, CustomObjectDetectionRuntimeParameters parameters=CustomObjectDetectionRuntimeParameters(), const unsigned int instance_id=0)
 Retrieve objects detected by the object detection module. More...
 
ERROR_CODE getObjectsBatch (std::vector< sl::ObjectsBatch > &trajectories, unsigned int instance_id=0)
 Get a batch of detected objects. More...
 
bool isObjectDetectionEnabled (unsigned int instance_id=0)
 Tells if the object detection module is enabled. More...
 
ObjectDetectionParameters getObjectDetectionParameters (unsigned int instance_id=0)
 Returns the ObjectDetectionParameters used. More...
 
Body Tracking
ERROR_CODE enableBodyTracking (BodyTrackingParameters body_tracking_parameters=BodyTrackingParameters())
 Initializes and starts body tracking module. More...
 
void disableBodyTracking (unsigned int instance_id=0, bool force_disable_all_instances=false)
 Disables the body tracking process. More...
 
ERROR_CODE retrieveBodies (Bodies &bodies, BodyTrackingRuntimeParameters parameters=BodyTrackingRuntimeParameters(), unsigned int instance_id=0)
 Retrieves body tracking data from the body tracking module. More...
 
bool isBodyTrackingEnabled (unsigned int instance_id=0)
 Tells if the body tracking module is enabled. More...
 
BodyTrackingParameters getBodyTrackingParameters (unsigned int instance_id=0)
 Returns the BodyTrackingParameters used. More...
 
Fusion
ERROR_CODE startPublishing (CommunicationParameters configuration=CommunicationParameters())
 Set this camera as a data provider for the Fusion module. More...
 
ERROR_CODE stopPublishing ()
 Sets this camera back to a normal camera (not providing data).
Stops sending camera data to the Fusion module. More...
 
CommunicationParameters getCommunicationParameters ()
 Returns the CommunicationParameters used.
It corresponds to the structure given as argument to the startPublishing() method. More...
 

Static Functions

static String getSDKVersion ()
 Returns the version of the currently installed ZED SDK. More...
 
static void getSDKVersion (int &major, int &minor, int &patch)
 Returns the version of the currently installed ZED SDK. More...
 
static std::vector< sl::DeviceProperties > getDeviceList ()
 List all the connected devices with their associated information. More...
 
static std::vector< sl::StreamingProperties > getStreamingDeviceList ()
 List all the streaming devices with their associated information. More...
 
static sl::ERROR_CODE reboot (int sn, bool fullReboot=true)
 Performs a hardware reset of the ZED 2 and the ZED 2i. More...
 
static sl::ERROR_CODE reboot (sl::INPUT_TYPE inputType)
 Performs a hardware reset of all devices matching the InputType. More...
 

Detailed Description

This class serves as the primary interface between the camera and the various features provided by the SDK.

It enables seamless integration and access to a wide array of capabilities, including video streaming, depth sensing, object tracking, mapping, and much more.

A standard program will use the sl::Camera class like this:

#include <sl/Camera.hpp>

using namespace sl;

int main(int argc, char **argv) {
    // --- Initialize a Camera object and open the ZED
    // Create a ZED camera object
    Camera zed;

    // Set configuration parameters
    InitParameters init_params;
    init_params.camera_resolution = RESOLUTION::HD720; // Use HD720 video mode for USB cameras
    // init_params.camera_resolution = RESOLUTION::HD1200; // Use HD1200 video mode for GMSL cameras
    init_params.camera_fps = 60; // Set fps at 60

    // Open the camera
    ERROR_CODE err = zed.open(init_params);
    if (err != ERROR_CODE::SUCCESS) {
        std::cout << err << " exit program " << std::endl;
        return -1;
    }

    // --- Main loop grabbing images and depth values
    // Capture 50 frames and stop
    int i = 0;
    Mat image, depth;
    while (i < 50) {
        // Grab an image
        if (zed.grab() == ERROR_CODE::SUCCESS) { // A new image is available if grab() returns SUCCESS
            // Display a pixel color
            zed.retrieveImage(image, VIEW::LEFT); // Get the left image
            sl::uchar4 centerBGRA;
            image.getValue<sl::uchar4>(image.getWidth() / 2, image.getHeight() / 2, &centerBGRA);
            std::cout << "Image " << i << " center pixel B: " << (int)centerBGRA[0] << " G: " << (int)centerBGRA[1] << " R: " << (int)centerBGRA[2] << std::endl;

            // Display a pixel depth
            zed.retrieveMeasure(depth, MEASURE::DEPTH); // Get the depth map
            float centerDepth;
            depth.getValue<float>(depth.getWidth() / 2, depth.getHeight() / 2, &centerDepth);
            std::cout << "Image " << i << " center depth: " << centerDepth << std::endl;
            i++;
        }
    }

    // --- Close the Camera
    zed.close();
    return 0;
}

Constructor and Destructor

◆ Camera() [1/2]

Camera ( )

Default constructor.

Creates an empty Camera object. The parameters will be set when calling open(init_param) with the desired InitParameters.

A Camera object can be created like this:

Camera zed;

or

Camera* zed = new Camera();

◆ ~Camera()

~Camera ( )

Class destructor.

The destructor will call the close() function and clear the memory previously allocated by the object.

◆ Camera() [2/2]

Camera ( const Camera & )
delete

The Camera object cannot be copied. Therefore, its copy constructor is disabled.

If you need to share a Camera instance across several threads or objects, consider using a pointer.

See also
Camera()

Functions

◆ open()

ERROR_CODE open ( InitParameters  init_parameters = InitParameters())

Opens the ZED camera from the provided InitParameters.

The method will also check the hardware requirements and run a self-calibration.

Parameters
init_parameters: A structure containing all the initial parameters. Default: a preset of InitParameters.
Returns
An error code giving information about the internal process. If ERROR_CODE::SUCCESS is returned, the camera is ready to use. Every other code indicates an error and the program should be stopped.

Here is the proper way to call this function:

Camera zed; // Create a ZED camera object
InitParameters init_params; // Set configuration parameters
init_params.camera_resolution = RESOLUTION::HD720; // Use HD720 video mode
init_params.camera_fps = 60; // Set fps at 60

// Open the camera
ERROR_CODE err = zed.open(init_params);
if (err != ERROR_CODE::SUCCESS) {
    std::cout << toString(err) << std::endl; // Display the error
    exit(-1);
}
Note
If you are having issues opening a camera, the diagnostic tool provided in the SDK can help you identify the problem.
  • Windows: C:\Program Files (x86)\ZED SDK\tools\ZED Diagnostic.exe
  • Linux: /usr/local/zed/tools/ZED Diagnostic
If this method is called on an already opened camera, close() will be called.

◆ getInitParameters()

InitParameters getInitParameters ( )

Returns the InitParameters used.

It corresponds to the structure given as argument to the open() method.

Returns
InitParameters containing the parameters used to initialize the Camera object.

◆ isOpened()

bool isOpened ( )
inline

Reports if the camera has been successfully opened.

It has the same behavior as checking if open() returns ERROR_CODE::SUCCESS.

Returns
true if the ZED camera is already setup, otherwise false.

◆ close()

void close ( )

Close an opened camera.

If open() has been called, this method will close the connection to the camera (or the SVO file) and free the corresponding memory.

If open() wasn't called or failed, this method won't have any effect.

Note
If an asynchronous task is running within the Camera object, like saveAreaMap(), this method will wait for its completion.
To apply a new InitParameters, you will need to close the camera first and then open it again with the new InitParameters values.
Warning
If the CUDA context was created by open(), this method will destroy it.
Therefore you need to make sure to delete your GPU sl::Mat objects before the context is destroyed.
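As an illustrative sketch of the warning above (the allocation size and usage are hypothetical), GPU-backed sl::Mat objects should be freed while the CUDA context created by open() is still alive:

```cpp
// Hypothetical teardown order: free GPU-backed sl::Mat objects before close()
// destroys the CUDA context created by open().
sl::Mat gpu_image(1280, 720, sl::MAT_TYPE::U8_C4, sl::MEM::GPU);
// ... use gpu_image with retrieveImage(..., MEM::GPU) ...
gpu_image.free(sl::MEM::GPU); // release GPU memory while the context still exists
zed.close();                  // the CUDA context can now be destroyed safely
```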

◆ grab()

ERROR_CODE grab ( RuntimeParameters  rt_parameters = RuntimeParameters())

This method will grab the latest images from the camera, rectify them, and compute the measurements based on the RuntimeParameters provided (depth, point cloud, tracking, etc.)

As measures are computed in this method, its execution can last a few milliseconds, depending on the requested measures, your parameters, and your hardware.

This method is meant to be called frequently in the main loop of your application.

Note
Since ZED SDK 3.0, this method is blocking. It means that grab() will wait until a new frame is detected and available.
If no new frame is available before the timeout is reached, grab() will return ERROR_CODE::CAMERA_NOT_DETECTED since the camera has probably been disconnected.
Parameters
rt_parameters: A structure containing all the runtime parameters. Default: a preset of RuntimeParameters.
Returns
ERROR_CODE::SUCCESS means that no problem was encountered.
Note
Returned errors can be displayed using toString().
// Set runtime parameters after opening the camera
RuntimeParameters runtime_param;
runtime_param.confidence_threshold = 50; // Change the confidence threshold

Mat image;
while (true) {
    // Grab an image
    if (zed.grab(runtime_param) == ERROR_CODE::SUCCESS) { // A new image is available if grab() returns SUCCESS
        zed.retrieveImage(image, VIEW::LEFT); // Get the left image
        // Use the image for your application
    }
}

◆ getRuntimeParameters()

RuntimeParameters getRuntimeParameters ( )

Returns the RuntimeParameters used.

It corresponds to the structure given as argument to the grab() method.

Returns
RuntimeParameters containing the parameters that define the behavior of the grab method.

◆ getCameraInformation()

CameraInformation getCameraInformation ( Resolution  image_size = Resolution(0, 0))

Returns the CameraInformation associated with the camera being used.

To ensure accurate calibration, it is possible to specify a custom resolution as a parameter when obtaining scaled information, as calibration parameters are resolution-dependent.
When reading an SVO file, the parameters will correspond to the camera used for recording.

Parameters
image_size: You can specify a size different from the default image size to get the scaled camera information. Default = (0,0) meaning original image size (given by getCameraInformation().camera_configuration.resolution).
Returns
CameraInformation containing the calibration parameters of the ZED, as well as serial number and firmware version.
Note
The CameraInformation.camera_configuration will contain two types of calibration parameters:
  • camera_configuration.calibration_parameters: it contains the calibration for the rectified images. Rectified images are images that would come from perfect stereo camera (exact same camera, perfectly matched). Therefore, the camera matrix will be identical for left and right camera, and the distortion/rotation/translation matrix will be null (except for Tx, which is the exact distance between both eyes).
  • camera_configuration.calibration_parameters_raw: it contains the original calibration before rectification. Therefore it should be identical or very close to the calibration file SNXXXX.conf where XXXX is the serial number of the camera.
Warning
The returned parameters might vary between two executions due to the self-calibration run in the open() method.
Note
The calibration file SNXXXX.conf can be found in:
  • Windows: C:/ProgramData/Stereolabs/settings/
  • Linux: /usr/local/zed/settings/
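As a sketch of reading this information back (field accesses follow the CameraInformation structure described above; printed values depend on the connected camera):

```cpp
// Sketch: read the serial number and the left rectified calibration
// at the default resolution.
CameraInformation info = zed.getCameraInformation();
std::cout << "Serial number: " << info.serial_number << std::endl;
float fx = info.camera_configuration.calibration_parameters.left_cam.fx; // focal length in pixels
std::cout << "Left camera fx: " << fx << std::endl;
```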

◆ updateSelfCalibration()

void updateSelfCalibration ( )

Perform a new self-calibration process.

In some cases, due to temperature changes or strong vibrations, the stereo calibration becomes less accurate.
Use this method to update the self-calibration data and get more reliable depth values.

Note
The self calibration will occur at the next grab() call.
This method is similar to the previous resetSelfCalibration() used in 2.X SDK versions.
Warning
New values will then be available in getCameraInformation(); be sure to fetch them to keep your 2D <-> 3D conversions consistent.
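Since the self-calibration only occurs at the next grab() call, a typical usage pattern might look like this (sketch only):

```cpp
zed.updateSelfCalibration(); // request a new self-calibration
if (zed.grab() == ERROR_CODE::SUCCESS) { // the self-calibration runs during this grab()
    // Refresh the calibration parameters to keep 2D <-> 3D conversions consistent
    CalibrationParameters calib = zed.getCameraInformation().camera_configuration.calibration_parameters;
}
```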

◆ getCUDAContext()

CUcontext getCUDAContext ( )

Gets the Camera-created CUDA context for sharing it with other CUDA-capable libraries.

This can be useful for sharing GPU memories.

Note
If you're looking for the opposite mechanism, where an existing CUDA context is given to the Camera, please check InitParameters::sdk_cuda_ctx.
Returns
The CUDA context used for GPU calls.
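A minimal interop sketch using the CUDA driver API (the interop work itself is left hypothetical):

```cpp
// Sketch: make the SDK-created CUDA context current before calling another
// CUDA-capable library on memory shared with the SDK.
CUcontext zed_ctx = zed.getCUDAContext();
cuCtxPushCurrent(zed_ctx);
// ... launch kernels or copy GPU memory shared with the SDK here ...
CUcontext popped;
cuCtxPopCurrent(&popped); // restore the previous context
```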

◆ getCUDAStream()

CUstream getCUDAStream ( )

Gets the Camera-created CUDA stream for sharing it with other CUDA-capable libraries.

Returns
The CUDA stream used for GPU calls.

◆ retrieveImage()

ERROR_CODE retrieveImage ( Mat mat,
VIEW  view = VIEW::LEFT,
MEM  type = MEM::CPU,
Resolution  image_size = Resolution(0, 0) 
)

Retrieves images from the camera (or SVO file).

Multiple images are available along with a view of various measures for display purposes.
Available images and views are listed here.
As an example, VIEW::DEPTH can be used to get a gray-scale version of the depth map, but the actual depth values can be retrieved using retrieveMeasure().

Pixels
Most VIEW modes output images with 4 channels as BGRA (Blue, Green, Red, Alpha); for more information see the VIEW enum.

Memory
By default, images are copied from GPU memory to CPU memory (RAM) when this function is called.
If your application can use GPU images, using the type parameter can increase performance by avoiding this copy.
If the provided Mat object is already allocated and matches the requested image format, memory won't be re-allocated.

Image size
By default, images are returned in the resolution provided by getCameraInformation().camera_configuration.resolution.
However, you can request custom resolutions. For example, requesting a smaller image can help you speed up your application.

Warning
A sl::Mat resolution higher than the camera resolution cannot be requested.
Parameters
mat: The sl::Mat to store the image. The method will create the Mat if necessary at the proper resolution. If already created, it will just update its data (CPU or GPU depending on the MEM type).
view: Defines the image you want (see VIEW). Default : VIEW::LEFT.
type: Defines on which memory the image should be allocated. Default: MEM::CPU.
image_size: If specified, define the resolution of the output sl::Mat. If set to Resolution(0,0), the camera resolution will be taken. Default: (0,0).
Returns
ERROR_CODE::SUCCESS if the method succeeded.
ERROR_CODE::INVALID_FUNCTION_PARAMETERS if the view mode requires a module not enabled (VIEW::DEPTH with DEPTH_MODE::NONE for example).
ERROR_CODE::INVALID_RESOLUTION if the resolution is higher than one provided by getCameraInformation().camera_configuration.resolution.
ERROR_CODE::FAILURE if another error occurred.
Note
As this method retrieves the images grabbed by the grab() method, it should be called afterward.
Mat leftImage; // Create an sl::Mat object to store the image
while (true) {
    // Grab an image
    if (zed.grab() == ERROR_CODE::SUCCESS) { // A new image is available if grab() returns SUCCESS
        zed.retrieveImage(leftImage, VIEW::LEFT); // Get the rectified left image

        // Display the center pixel colors
        sl::uchar4 leftCenter;
        leftImage.getValue<sl::uchar4>(leftImage.getWidth() / 2, leftImage.getHeight() / 2, &leftCenter);
        std::cout << "left image color B: " << (int)leftCenter[0] << " G: " << (int)leftCenter[1] << " R: " << (int)leftCenter[2] << std::endl;
    }
}

◆ getCameraSettings() [1/3]

ERROR_CODE getCameraSettings ( VIDEO_SETTINGS  settings,
int &  setting 
)

Returns the current value of the requested camera setting (gain, brightness, hue, exposure, etc.).

Possible values (range) of each setting are available here.

Parameters
settings: The requested setting.
setting: The setting variable to fill.
Returns
ERROR_CODE to indicate if the method was successful. If successful, setting will be filled with the corresponding value.
int gain;
sl::ERROR_CODE err = zed.getCameraSettings(VIDEO_SETTINGS::GAIN, gain);
std::cout << "Current gain value: " << gain << std::endl;
Note
The method works only if the camera is open in LIVE or STREAM mode.
Settings are not exported in the SVO file format.

◆ getCameraSettings() [2/3]

ERROR_CODE getCameraSettings ( VIDEO_SETTINGS  settings,
int &  min_val,
int &  max_val 
)

Fills the current values of the requested settings for VIDEO_SETTINGS that supports two values (min/max).

This method only works with the following VIDEO_SETTINGS:

Possible values (range) of each setting are available here.

Parameters
settings: The requested setting.
min_val: The setting minimum variable to fill.
max_val: The setting maximum variable to fill.
Returns
ERROR_CODE to indicate if the method was successful. If successful, min_val and max_val will be filled with the corresponding values.
int aec_range_min = 0;
int aec_range_max = 0;
sl::ERROR_CODE err = zed.getCameraSettings(sl::VIDEO_SETTINGS::AUTO_EXPOSURE_TIME_RANGE, aec_range_min, aec_range_max);
std::cout << "Current AUTO_EXPOSURE_TIME_RANGE range values ==> min: " << aec_range_min << " max: " << aec_range_max << std::endl;
Note
Works only with ZED X cameras, which support low-level controls.

◆ getCameraSettings() [3/3]

ERROR_CODE getCameraSettings ( VIDEO_SETTINGS  settings,
Rect roi,
sl::SIDE  side = sl::SIDE::BOTH 
)

Overloaded method for VIDEO_SETTINGS::AEC_AGC_ROI which takes a Rect as parameter.

Parameters
setting: Must be set to VIDEO_SETTINGS::AEC_AGC_ROI; otherwise the method will have no impact.
roi: Roi that will be filled.
side: SIDE on which to get the ROI from.
Returns
ERROR_CODE to indicate if the method was successful. If successful, roi will be filled with the corresponding values.
Note
Works only if the camera is open in LIVE or STREAM mode with VIDEO_SETTINGS::AEC_AGC_ROI.
It will return ERROR_CODE::INVALID_FUNCTION_CALL or ERROR_CODE::INVALID_FUNCTION_PARAMETERS otherwise.
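A short sketch of reading back the current ROI (Rect members follow the sl::Rect structure):

```cpp
sl::Rect roi;
sl::ERROR_CODE err = zed.getCameraSettings(sl::VIDEO_SETTINGS::AEC_AGC_ROI, roi, sl::SIDE::LEFT);
if (err == sl::ERROR_CODE::SUCCESS)
    std::cout << "AEC/AGC ROI: x=" << roi.x << " y=" << roi.y
              << " size=" << roi.width << "x" << roi.height << std::endl;
```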

◆ setCameraSettings() [1/3]

ERROR_CODE setCameraSettings ( VIDEO_SETTINGS  settings,
int  value = VIDEO_SETTINGS_VALUE_AUTO 
)

Sets the value of the requested camera setting (gain, brightness, hue, exposure, etc.).

This method only applies for VIDEO_SETTINGS that require a single value.

Possible values (range) of each setting are available here.

Parameters
settings: The setting to be set.
value: The value to set. Default: auto mode
Returns
ERROR_CODE to indicate if the method was successful.
Warning
Setting VIDEO_SETTINGS::EXPOSURE or VIDEO_SETTINGS::GAIN to default will automatically set the other to default as well.
Note
The method works only if the camera is open in LIVE or STREAM mode.
// Set the gain to 50
zed.setCameraSettings(VIDEO_SETTINGS::GAIN, 50);

◆ setCameraSettings() [2/3]

ERROR_CODE setCameraSettings ( VIDEO_SETTINGS  settings,
int  min,
int  max 
)

Sets the value of the requested camera setting that supports two values (min/max).

This method only works with the following VIDEO_SETTINGS:

Possible values (range) of each setting are available here.

Parameters
settings: The setting to be set.
min: The minimum value that can be reached (-1 or 0 gives full range).
max: The maximum value that can be reached (-1 or 0 gives full range).
Returns
ERROR_CODE to indicate if the method was successful.
Warning
If VIDEO_SETTINGS settings is not supported or min >= max, it will return ERROR_CODE::INVALID_FUNCTION_PARAMETERS.
Note
The method works only if the camera is open in LIVE or STREAM mode.
// For ZED X-based products, set the automatic exposure range from 2 ms to 5 ms. The exposure time cannot go beyond those values
zed.setCameraSettings(VIDEO_SETTINGS::AUTO_EXPOSURE_TIME_RANGE, 2000, 5000);

◆ setCameraSettings() [3/3]

ERROR_CODE setCameraSettings ( VIDEO_SETTINGS  settings,
Rect  roi,
sl::SIDE  side = sl::SIDE::BOTH,
bool  reset = false 
)

Overloaded method for VIDEO_SETTINGS::AEC_AGC_ROI which takes a Rect as parameter.

Parameters
setting: Must be set to VIDEO_SETTINGS::AEC_AGC_ROI; otherwise the method will have no impact.
roi: Rect that defines the target to be applied for AEC/AGC computation. Must be given according to camera resolution.
side: SIDE on which to be applied for AEC/AGC computation.
reset: Cancel the manual ROI and reset it to the full image.
Note
Works only if the camera is open in LIVE or STREAM mode with VIDEO_SETTINGS::AEC_AGC_ROI.
Returns
ERROR_CODE to indicate if the method was successful.
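As an illustrative sketch, the ROI overload can be used as follows. The coordinates below are arbitrary, and the Rect(x, y, width, height) constructor order is an assumption:

```cpp
// Focus the auto exposure/gain computation on a 200x150 region
// starting at pixel (100, 50) of the left image.
sl::Rect roi(100, 50, 200, 150); // assuming Rect(x, y, width, height)
zed.setCameraSettings(sl::VIDEO_SETTINGS::AEC_AGC_ROI, roi, sl::SIDE::LEFT);

// Later, restore full-image AEC/AGC by resetting the ROI.
zed.setCameraSettings(sl::VIDEO_SETTINGS::AEC_AGC_ROI, sl::Rect(), sl::SIDE::BOTH, true);
```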

◆ getCameraSettingsRange()

ERROR_CODE getCameraSettingsRange ( VIDEO_SETTINGS  settings,
int &  min,
int &  max 
)

Gets the range (min/max values) of the specified camera setting VIDEO_SETTINGS.

Parameters
setting: Must be a valid VIDEO_SETTINGS that accepts a min/max range and is available for the current camera model.
min[out]: The minimum value of the range.
max[out]: The maximum value of the range.
Returns
ERROR_CODE to indicate if the method was successful.
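A minimal sketch querying the supported range before setting it. The unit of the returned values (microseconds here) is an assumption based on the setCameraSettings() example above:

```cpp
int min_val = 0, max_val = 0;
// Query the range supported by the current camera model for the automatic exposure time.
if (zed.getCameraSettingsRange(sl::VIDEO_SETTINGS::AUTO_EXPOSURE_TIME_RANGE, min_val, max_val) == sl::ERROR_CODE::SUCCESS)
    std::cout << "Supported exposure time range: [" << min_val << ", " << max_val << "] us" << std::endl;
```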

◆ isCameraSettingSupported()

bool isCameraSettingSupported ( VIDEO_SETTINGS  setting)

Tests whether the given video setting is supported by the camera.

Parameters
setting: The video setting to test.
Returns
true if the VIDEO_SETTINGS is supported by the camera, false otherwise.

◆ getCurrentFPS()

float getCurrentFPS ( )

Returns the current framerate at which the grab() method is successfully called.

The returned value is based on the difference of camera timestamps between two successful grab() calls.

Returns
The current SDK framerate.
Warning
The returned framerate (number of images grabbed per second) can be lower than InitParameters::camera_fps if the grab() function runs slower than the image stream or is called too often.
float current_fps = zed.getCurrentFPS();
std::cout << "Current framerate: " << current_fps << std::endl;

◆ getTimestamp()

Timestamp getTimestamp ( sl::TIME_REFERENCE  reference_time)

Returns the timestamp in the requested TIME_REFERENCE.

  • When requesting the TIME_REFERENCE::IMAGE timestamp, the UNIX nanosecond timestamp of the latest grabbed image will be returned.
    This value corresponds to the time at which the entire image was available in the PC memory. As such, it ignores the communication time, which corresponds to 1 or 2 frame-times depending on the fps (e.g. 16.6 ms to 33 ms at 60fps).
  • When requesting the TIME_REFERENCE::CURRENT timestamp, the current UNIX nanosecond timestamp is returned.


This function can also be used when playing back an SVO file.

Parameters
reference_time: The selected TIME_REFERENCE.
Returns
The timestamp in nanosecond. 0 if not available (SVO file without compression).
Note
As this function returns UNIX timestamps, the reference it uses is common across several Camera instances.
This can help organize the grabbed images in a multi-camera application.
Timestamp last_image_timestamp = zed.getTimestamp(TIME_REFERENCE::IMAGE);
Timestamp current_timestamp = zed.getTimestamp(TIME_REFERENCE::CURRENT);
std::cout << "Latest image timestamp: " << last_image_timestamp << "ns from Epoch." << std::endl;
std::cout << "Current timestamp: " << current_timestamp << "ns from Epoch." << std::endl;

◆ getFrameDroppedCount()

unsigned int getFrameDroppedCount ( )

Returns the number of frames dropped since grab() was called for the first time.

A dropped frame corresponds to a frame that never made it to the grab method.
This can happen if two frames were extracted from the camera when grab() is called. The older frame will be dropped so as to always use the latest (which minimizes latency).

Returns
The number of frames dropped since the first grab() call.
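As a short illustrative sketch, the counter can be polled inside a grab loop:

```cpp
if (zed.grab() == sl::ERROR_CODE::SUCCESS) {
    unsigned int dropped = zed.getFrameDroppedCount();
    if (dropped > 0)
        std::cout << "Frames dropped so far: " << dropped << std::endl;
}
```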

◆ getSVOPosition()

int getSVOPosition ( )

Returns the current playback position in the SVO file.

The position corresponds to the number of frames already read from the SVO file, starting from 0 to n.
Each grab() call increases this value by one (except when using InitParameters::svo_real_time_mode).

Returns
The current frame position in the SVO file. Returns -1 if the SDK is not reading an SVO.
Note
The method works only if the camera is open in SVO playback mode.
See also
setSVOPosition() for an example.

◆ getSVOPositionAtTimestamp()

int getSVOPositionAtTimestamp ( const sl::Timestamp &  timestamp)

Retrieves the frame index within the SVO file corresponding to the provided timestamp.

Parameters
timestamp: The target timestamp for which the frame index is to be determined.
Returns
The frame index within the SVO file that aligns with the given timestamp. Returns -1 if the timestamp falls outside the bounds of the SVO file.
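The following hedged sketch combines this method with setSVOPosition() to seek to a given timestamp; the timestamp value below is arbitrary, and constructing sl::Timestamp from a raw nanosecond value is an assumption:

```cpp
// Seek the SVO playback to the frame closest to a target UNIX timestamp (ns).
sl::Timestamp target_ts(1620000000000000000ULL); // arbitrary example value
int frame_index = zed.getSVOPositionAtTimestamp(target_ts);
if (frame_index >= 0) {
    zed.setSVOPosition(frame_index);
    // The next grab() call will read this frame.
}
```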

◆ setSVOPosition()

void setSVOPosition ( int  frame_number)

Sets the playback cursor to the desired frame number in the SVO file.

This method allows you to move around within a played-back SVO file. After calling, the next call to grab() will read the provided frame number.

Parameters
frame_number: The number of the desired frame to be decoded.
Note
The method works only if the camera is open in SVO playback mode.
#include <sl/Camera.hpp>

using namespace sl;

int main(int argc, char **argv) {
    // Create a ZED camera object
    Camera zed;

    // Set configuration parameters
    InitParameters init_params;
    init_params.input.setFromSVOFile("path/to/my/file.svo");

    // Open the camera
    ERROR_CODE err = zed.open(init_params);
    if (err != ERROR_CODE::SUCCESS) {
        std::cout << toString(err) << std::endl;
        exit(-1);
    }

    // Loop between frames 0 and 50
    Mat leftImage;
    while (zed.getSVOPosition() < zed.getSVONumberOfFrames() - 1) {
        std::cout << "Current frame: " << zed.getSVOPosition() << std::endl;

        // Loop if we reached frame 50
        if (zed.getSVOPosition() == 50)
            zed.setSVOPosition(0);

        // Grab an image
        if (zed.grab() == ERROR_CODE::SUCCESS) {
            zed.retrieveImage(leftImage, VIEW::LEFT); // Get the rectified left image
            // Use the image in your application
        }
    }

    // Close the Camera
    zed.close();
    return 0;
}

◆ getSVONumberOfFrames()

int getSVONumberOfFrames ( )

Returns the number of frames in the SVO file.

Returns
The total number of frames in the SVO file. -1 if the SDK is not reading an SVO.
Note
The method works only if the camera is open in SVO playback mode.
See also
setSVOPosition() for an example.

◆ ingestDataIntoSVO()

ERROR_CODE ingestDataIntoSVO ( const sl::SVOData data)

Ingests SVOData into an SVO file.

Parameters
data: Data to ingest in the SVO file.
Returns
sl::ERROR_CODE::SUCCESS in case of success, sl::ERROR_CODE::FAILURE otherwise.
Note
The method works only if the camera is recording.

◆ retrieveSVOData()

ERROR_CODE retrieveSVOData ( const std::string &  key,
std::map< sl::Timestamp, sl::SVOData > &  data,
sl::Timestamp  ts_begin = 0,
sl::Timestamp  ts_end = 0 
)

Retrieves SVO data from the SVO file at the given channel key and in the given timestamp range.

Parameters
key: The key of the SVOData that is going to be retrieved.
data: The map to be filled with SVOData objects, with timestamps as keys.
ts_begin: The beginning of the range.
ts_end: The end of the range.
Returns
sl::ERROR_CODE::SUCCESS in case of success, sl::ERROR_CODE::FAILURE otherwise.
Note
The method works only if the camera is in SVO mode.
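A hedged sketch of reading back a custom data channel from an SVO file. The channel key "TELEMETRY" is a made-up example, and the assumption that sl::SVOData exposes its payload through a getContent() accessor should be checked against the SVOData documentation:

```cpp
std::map<sl::Timestamp, sl::SVOData> data_map;
// Retrieve all entries of the hypothetical "TELEMETRY" channel
// (leaving ts_begin/ts_end at their defaults requests the full range).
if (zed.retrieveSVOData("TELEMETRY", data_map) == sl::ERROR_CODE::SUCCESS) {
    for (const auto &entry : data_map) {
        std::string content;
        entry.second.getContent(content); // assumed accessor for the raw payload
        std::cout << entry.first << ": " << content << std::endl;
    }
}
```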

◆ getSVODataKeys()

std::vector<std::string> getSVODataKeys ( )

Gets the external channel keys that can be retrieved from the SVO file.

Returns
List of available keys.
Note
The method returns an empty std::vector if not in SVO mode.

◆ retrieveMeasure()

ERROR_CODE retrieveMeasure ( Mat mat,
MEASURE  measure = MEASURE::DEPTH,
MEM  type = MEM::CPU,
Resolution  image_size = Resolution(0, 0) 
)

Computed measures, like depth, point cloud, or normals, can be retrieved using this method.

Multiple measures are available after a grab() call. A full list is available here.

Memory
By default, images are copied from GPU memory to CPU memory (RAM) when this function is called.
If your application can use GPU images, using the type parameter can increase performance by avoiding this copy.
If the provided Mat object is already allocated and matches the requested image format, memory won't be re-allocated.

Measure size
By default, measures are returned in the resolution provided by getCameraInformation().camera_configuration.resolution.
However, custom resolutions can be requested. For example, requesting a smaller measure can help you speed up your application.

Warning
A sl::Mat resolution higher than the camera resolution cannot be requested.
Parameters
mat: The Mat to store the measure. The method will create the Mat if necessary at the proper resolution. If already created, it will just update its data (CPU or GPU depending on the MEM type).
measure: Defines the measure you want (see MEASURE). Default: MEASURE::DEPTH.
type: Defines on which memory the image should be allocated. Default: MEM::CPU.
image_size: If specified, define the resolution of the output sl::Mat. If set to Resolution(0,0), the camera resolution will be taken. Default: (0,0).
Returns
ERROR_CODE::SUCCESS if the method succeeded.
ERROR_CODE::INVALID_FUNCTION_PARAMETERS if the view mode requires a module not enabled (VIEW::DEPTH with DEPTH_MODE::NONE for example).
ERROR_CODE::INVALID_RESOLUTION if the resolution is higher than one provided by getCameraInformation().camera_configuration.resolution.
ERROR_CODE::FAILURE if another error occurred.
Note
As this method retrieves the measures computed by the grab() call, it should only be called after a grab() call that returned ERROR_CODE::SUCCESS.
Measures containing "RIGHT" in their names require InitParameters::enable_right_side_measure to be enabled.
Mat imageMap, depthMap, pointCloud;
sl::Resolution resolution = zed.getCameraInformation().camera_configuration.resolution;
int x = resolution.width / 2; // Center coordinates
int y = resolution.height / 2;

while (true) {
    if (zed.grab() == ERROR_CODE::SUCCESS) { // Grab an image
        zed.retrieveImage(imageMap, VIEW::LEFT); // Get the image if necessary
        zed.retrieveMeasure(depthMap, MEASURE::DEPTH, MEM::CPU); // Get the depth map

        // Read a depth value
        float centerDepth = 0;
        depthMap.getValue<float>(x, y, &centerDepth, MEM::CPU); // each depth map pixel is a float value
        if (std::isnormal(centerDepth)) { // +Inf is "too far", -Inf is "too close", NaN is "unknown/occlusion"
            std::cout << "Depth value at center: " << centerDepth << " " << init_params.coordinate_units << std::endl;
        }

        zed.retrieveMeasure(pointCloud, MEASURE::XYZBGRA, MEM::CPU); // Get the point cloud

        // Read a point cloud value
        sl::float4 pcValue;
        pointCloud.getValue<sl::float4>(x, y, &pcValue); // each point cloud pixel contains 4 floats, so we are using a sl::float4
        if (std::isnormal(pcValue.z)) {
            std::cout << "Point cloud coordinates at center: X=" << pcValue.x << ", Y=" << pcValue.y << ", Z=" << pcValue.z << std::endl;

            unsigned char color[sizeof(float)];
            memcpy(color, &pcValue[3], sizeof(float));
            std::cout << "Point cloud color at center: B=" << (int)color[0] << ", G=" << (int)color[1] << ", R=" << (int)color[2] << std::endl;
        }
    }
}

◆ setRegionOfInterest()

ERROR_CODE setRegionOfInterest ( sl::Mat roi_mask,
std::unordered_set< MODULE >  module = {MODULE::ALL} 
)

Defines a region of interest to focus on for all the SDK, discarding other parts.

Parameters
roi_mask: The Mat defining the requested region of interest. Pixels with a value lower than 127 will be discarded from all modules (depth, positional tracking, etc.). If empty, all pixels are set as valid. The mask can be at a lower or higher resolution than the current images.
module: The list of SDK modules the ROI applies to. Default: all modules.
Returns
An ERROR_CODE if something went wrong.
Note
The method supports U8_C1, U8_C3, and U8_C4 image types.
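A hedged sketch building a binary U8_C1 mask that keeps only the upper part of the image (useful, e.g., to ignore a vehicle hood). The sl::Mat constructor and setValue signatures used here are assumptions to be checked against the Mat documentation:

```cpp
sl::Resolution res = zed.getCameraInformation().camera_configuration.resolution;
sl::Mat roi_mask(res, sl::MAT_TYPE::U8_C1, sl::MEM::CPU);

// Keep the top 80% of the image (255 = valid), discard the bottom 20% (0 = discarded).
for (size_t y = 0; y < res.height; y++)
    for (size_t x = 0; x < res.width; x++)
        roi_mask.setValue<sl::uchar1>(x, y, y < res.height * 0.8 ? 255 : 0);

zed.setRegionOfInterest(roi_mask);
```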

◆ getRegionOfInterest()

ERROR_CODE getRegionOfInterest ( sl::Mat roi_mask,
sl::Resolution  image_size = Resolution(0, 0),
MODULE  module = MODULE::ALL 
)

Get the previously set or computed region of interest.

Parameters
roi_mask: The Mat to be filled with the mask.
image_size: The optional size of the returned mask.
module: Specifies which module's ROI to get.
Returns
An ERROR_CODE if something went wrong.

◆ startRegionOfInterestAutoDetection()

ERROR_CODE startRegionOfInterestAutoDetection ( sl::RegionOfInterestParameters  roi_param = sl::RegionOfInterestParameters())

Starts the automatic detection of a region of interest to focus on for all the SDK, discarding other parts. The detection is based on the general motion of the camera combined with the motion in the scene. The camera must move for this process; an internal motion detector based on the Positional Tracking module is used. It requires a few hundred frames of motion to compute the mask.

Parameters
roi_paramThe RegionOfInterestParameters defining parameters for the detection
Note
This module expects a static portion in the image, typically a fairly close vehicle hood at the bottom. If there is no static element, the module may not work correctly or may detect an incorrect background area, especially with slow motion. The module works asynchronously: the status can be obtained using getRegionOfInterestAutoDetectionStatus(), and the result is either applied automatically or can be retrieved using getRegionOfInterest().
Returns
An ERROR_CODE if something went wrong.

◆ getRegionOfInterestAutoDetectionStatus()

REGION_OF_INTEREST_AUTO_DETECTION_STATE getRegionOfInterestAutoDetectionStatus ( )

Returns the status of the automatic region of interest detection. The automatic detection is enabled by calling startRegionOfInterestAutoDetection().

Returns
REGION_OF_INTEREST_AUTO_DETECTION_STATE the status

◆ getCurrentMinMaxDepth()

ERROR_CODE getCurrentMinMaxDepth ( float &  min,
float &  max 
)

Gets the current range of perceived depth.

Parameters
min[out]: Minimum depth detected (in selected sl::UNIT).
max[out]: Maximum depth detected (in selected sl::UNIT).
Returns
ERROR_CODE::SUCCESS if values can be extracted, ERROR_CODE::FAILURE otherwise.
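A minimal sketch, to be called after a successful grab():

```cpp
float min_depth = 0.f, max_depth = 0.f;
// Values are expressed in the sl::UNIT selected in InitParameters.
if (zed.getCurrentMinMaxDepth(min_depth, max_depth) == sl::ERROR_CODE::SUCCESS)
    std::cout << "Perceived depth range: [" << min_depth << ", " << max_depth << "]" << std::endl;
```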

◆ enablePositionalTracking()

ERROR_CODE enablePositionalTracking ( PositionalTrackingParameters  tracking_parameters = PositionalTrackingParameters())

Initializes and starts the positional tracking processes.

This method allows you to enable the position estimation of the SDK. It only has to be called once in the camera's lifetime.
When enabled, the position will be updated at each grab() call.
Tracking-specific parameters can be set by providing PositionalTrackingParameters to this method.

Parameters
tracking_parameters: A structure containing all the specific parameters for the positional tracking. Default: a preset of PositionalTrackingParameters.
Returns
ERROR_CODE::FAILURE if the PositionalTrackingParameters::area_file_path file wasn't found, ERROR_CODE::SUCCESS otherwise.
Warning
The positional tracking feature benefits from a high framerate. We found HD720@60fps to be the best compromise between image quality and framerate.
#include <sl/Camera.hpp>

using namespace sl;

int main(int argc, char **argv) {
    // --- Initialize a Camera object and open the ZED
    // Create a ZED camera object
    Camera zed;

    // Set configuration parameters
    InitParameters init_params;
    init_params.camera_resolution = RESOLUTION::HD720; // Use HD720 video mode
    init_params.camera_fps = 60; // Set fps at 60

    // Open the camera
    ERROR_CODE err = zed.open(init_params);
    if (err != ERROR_CODE::SUCCESS) {
        std::cout << toString(err) << std::endl;
        exit(-1);
    }

    // Set tracking parameters
    PositionalTrackingParameters track_params;
    track_params.enable_area_memory = true;

    // Enable positional tracking
    err = zed.enablePositionalTracking(track_params);
    if (err != ERROR_CODE::SUCCESS) {
        std::cout << "Tracking error: " << toString(err) << std::endl;
        exit(-1);
    }

    // --- Main loop
    while (true) {
        if (zed.grab() == ERROR_CODE::SUCCESS) { // Grab an image and compute the tracking
            Pose cameraPose;
            zed.getPosition(cameraPose, REFERENCE_FRAME::WORLD);
            std::cout << "Camera position: X=" << cameraPose.getTranslation().x << " Y=" << cameraPose.getTranslation().y << " Z=" << cameraPose.getTranslation().z << std::endl;
        }
    }

    // --- Close the Camera
    zed.close();
    return 0;
}

◆ getPosition()

POSITIONAL_TRACKING_STATE getPosition ( Pose camera_pose,
REFERENCE_FRAME  reference_frame = REFERENCE_FRAME::WORLD 
)

Retrieves the estimated position and orientation of the camera in the specified reference frame.

If the tracking has been initialized with PositionalTrackingParameters::enable_area_memory to true (default), this method can return POSITIONAL_TRACKING_STATE::SEARCHING.
This means that the tracking lost its link to the initial referential and is currently trying to relocate the camera. However, it will keep on providing position estimations.

Parameters
camera_pose[out]: The pose containing the position of the camera and other information (timestamp, confidence).
reference_frame[in]: Defines the reference from which you want the pose to be expressed. Default: REFERENCE_FRAME::WORLD.
Returns
The current state of the tracking process.
Note
Extract Rotation Matrix: Pose.getRotationMatrix()
Extract Translation Vector: Pose.getTranslation()
Extract Orientation / Quaternion: Pose.getOrientation()
Warning
This method requires the tracking to be enabled. enablePositionalTracking() .
Note
The position is provided in the InitParameters::coordinate_system . See COORDINATE_SYSTEM for its physical origin.
// --- Main loop
while (true) {
    if (zed.grab() == ERROR_CODE::SUCCESS) { // Grab an image and compute the tracking
        Pose cameraPose;
        zed.getPosition(cameraPose, REFERENCE_FRAME::WORLD);
        std::cout << "Camera position: X=" << cameraPose.getTranslation().x << " Y=" << cameraPose.getTranslation().y << " Z=" << cameraPose.getTranslation().z << std::endl;
        std::cout << "Camera Euler rotation: X=" << cameraPose.getEulerAngles().x << " Y=" << cameraPose.getEulerAngles().y << " Z=" << cameraPose.getEulerAngles().z << std::endl;
        std::cout << "Camera Rodrigues rotation: X=" << cameraPose.getRotationVector().x << " Y=" << cameraPose.getRotationVector().y << " Z=" << cameraPose.getRotationVector().z << std::endl;
        std::cout << "Camera quaternion orientation: X=" << cameraPose.getOrientation().x << " Y=" << cameraPose.getOrientation().y << " Z=" << cameraPose.getOrientation().z << " W=" << cameraPose.getOrientation().w << std::endl;
        std::cout << std::endl;
    }
}

◆ getPositionalTrackingStatus()

sl::PositionalTrackingStatus getPositionalTrackingStatus ( )

Returns the current status of the positional tracking module.

Returns
sl::PositionalTrackingStatus current status of positional tracking module.

◆ saveAreaMap()

ERROR_CODE saveAreaMap ( String  area_file_path)

Saves the current area learning file. The file will contain spatial memory data generated by the tracking.

If the tracking has been initialized with PositionalTrackingParameters::enable_area_memory to true (default), the method allows you to export the spatial memory.
Reloading the exported file in a future session with PositionalTrackingParameters::area_file_path initializes the tracking within the same referential.
This method is asynchronous, and only triggers the file generation. You can use getAreaExportState() to get the export state. The positional tracking keeps running while exporting.

Parameters
area_file_path: Path of an '.area' file to save the spatial memory database in.
Returns
ERROR_CODE::FAILURE if the area_file_path file wasn't found, ERROR_CODE::SUCCESS otherwise.
See also
getAreaExportState()
Note
This method is asynchronous because the generated data can be heavy; be sure to loop over the getAreaExportState() method with a waiting time.
Warning
If the camera wasn't moved during the tracking session, or not enough, the spatial memory won't be usable and the file won't be exported.
The getAreaExportState() method will return AREA_EXPORTING_STATE::FILE_EMPTY.
A few meters (~3m) of translation or a full rotation should be enough to get usable spatial memory.
However, as it should be used for relocation purposes, visiting a significant portion of the environment is recommended before exporting.
// --- Main loop
while (true) {
    if (zed.grab() == ERROR_CODE::SUCCESS) { // Grab an image and compute the tracking
        Pose cameraPose;
        zed.getPosition(cameraPose, REFERENCE_FRAME::WORLD);
    }
}

// Export the spatial memory for future sessions
zed.saveAreaMap("MyMap.area");
sl::AREA_EXPORTING_STATE export_state = sl::AREA_EXPORTING_STATE::RUNNING;
while (export_state == sl::AREA_EXPORTING_STATE::RUNNING) {
    export_state = zed.getAreaExportState();
    sl::sleep_ms(5);
}
std::cout << "export state: " << export_state << std::endl;

// --- Close the Camera
zed.close(); // The close method will wait for the end of the file creation using getAreaExportState().
return 0;

◆ getAreaExportState()

AREA_EXPORTING_STATE getAreaExportState ( )

Returns the state of the spatial memory export process.

As saveAreaMap() only starts the export, this method allows you to know whether the export has finished or failed.

Returns
The current state of the spatial memory export process.

◆ resetPositionalTracking()

ERROR_CODE resetPositionalTracking ( const Transform path)

Resets the tracking, and re-initializes the position with the given transformation matrix.

Parameters
path: Position of the camera in the world frame when the method is called.
Returns
ERROR_CODE::SUCCESS if the tracking has been reset, ERROR_CODE::FAILURE otherwise.
Note
This method will also flush the accumulated or loaded spatial memory.

◆ disablePositionalTracking()

void disablePositionalTracking ( String  area_file_path = "")

Disables the positional tracking.

The positional tracking is immediately stopped. If a file path is given, saveAreaMap() will be called asynchronously. See getAreaExportState() to get the exportation state.
If the tracking has been enabled, this function will automatically be called by close() .

Parameters
area_file_path: If set, saves the spatial memory into an '.area' file. Default: (empty)
area_file_path is the name and path of the database, e.g. "path/to/file/myArea1.area".
Note
The '.area' database depends on the depth mode and confidence threshold chosen during the recording. The same mode must be used to reload the database.

◆ isPositionalTrackingEnabled()

bool isPositionalTrackingEnabled ( )

Tells whether the positional tracking module is enabled.

Returns
true if the positional tracking module is enabled, false otherwise.

◆ getPositionalTrackingParameters()

PositionalTrackingParameters getPositionalTrackingParameters ( )

Returns the PositionalTrackingParameters used.

It corresponds to the structure given as argument to the enablePositionalTracking() method.

Returns
PositionalTrackingParameters containing the parameters used for positional tracking initialization.

◆ getSensorsData()

ERROR_CODE getSensorsData ( SensorsData data,
TIME_REFERENCE  reference_time 
)

Retrieves the SensorsData (IMU, magnetometer, barometer) at a specific time reference.

The delta time between previous and current values can be calculated using data.imu.timestamp

Note
The IMU quaternion (fused data) is given in the specified COORDINATE_SYSTEM of InitParameters.
Parameters
data[out]: The SensorsData variable to store the data.
reference_time[in]: Defines the time reference (TIME_REFERENCE::IMAGE or TIME_REFERENCE::CURRENT) at which the data is requested.
Returns
ERROR_CODE::SUCCESS if sensors data have been extracted.
ERROR_CODE::SENSORS_NOT_AVAILABLE if the camera model is a MODEL::ZED.
ERROR_CODE::MOTION_SENSORS_REQUIRED if the camera model is correct but the sensors module is not opened.
ERROR_CODE::INVALID_FUNCTION_PARAMETERS if the reference_time is not valid. See Warning.
Warning
In SVO or STREAM mode, TIME_REFERENCE::CURRENT is currently not available (yielding ERROR_CODE::INVALID_FUNCTION_PARAMETERS).
Only the quaternion data and barometer data (if available) at TIME_REFERENCE::IMAGE are available. Other values will be set to 0.
if (zed.getSensorsData(sensors_data, TIME_REFERENCE::CURRENT) == ERROR_CODE::SUCCESS) {
    std::cout << " - IMU:\n";
    std::cout << " \t Orientation: {" << sensors_data.imu.pose.getOrientation() << "}\n";
    std::cout << " \t Acceleration: {" << sensors_data.imu.linear_acceleration << "} [m/sec^2]\n";
    std::cout << " \t Angular Velocity: {" << sensors_data.imu.angular_velocity << "} [deg/sec]\n";

    std::cout << " - Magnetometer\n \t Magnetic Field: {" << sensors_data.magnetometer.magnetic_field_calibrated << "} [uT]\n";

    std::cout << " - Barometer\n \t Atmospheric pressure: " << sensors_data.barometer.pressure << " [hPa]\n";

    // Retrieve the camera sensors temperature
    std::cout << " - Temperature\n";
    float temperature;
    for (int s = 0; s < static_cast<int>(SensorsData::TemperatureData::SENSOR_LOCATION::LAST); s++) {
        auto sensor_loc = static_cast<SensorsData::TemperatureData::SENSOR_LOCATION>(s);
        // Depending on your camera model or its firmware, different sensors can give thermal information
        if (sensors_data.temperature.get(sensor_loc, temperature) == ERROR_CODE::SUCCESS)
            std::cout << " \t " << sensor_loc << ": " << temperature << "C\n";
    }
}

◆ setIMUPrior()

ERROR_CODE setIMUPrior ( const sl::Transform transform)

Set an optional IMU orientation hint that will be used to assist the tracking during the next grab().

This method can be used to assist the positional tracking rotation.

Note
This method is only effective if the camera has a model other than MODEL::ZED, which does not contain internal sensors.
Warning
It needs to be called before the grab() method.
Parameters
transform: The sl::Transform to be ingested into the IMU fusion. Note that only the rotation part is used.
Returns
ERROR_CODE::SUCCESS if the transform has been passed, ERROR_CODE::INVALID_FUNCTION_CALL otherwise (e.g. when used with a ZED camera which doesn't have IMU data).
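A hedged sketch of feeding an external orientation estimate before each grab. The source of the prior is application-specific; external_rotation is a hypothetical sl::Rotation from your own system, and the setRotationMatrix() call is an assumption to be checked against the Transform documentation:

```cpp
// Build a transform whose rotation comes from an external source
// (e.g. a vehicle AHRS); the translation part is ignored.
sl::Transform imu_prior;
imu_prior.setRotationMatrix(external_rotation); // external_rotation: hypothetical sl::Rotation

if (zed.setIMUPrior(imu_prior) == sl::ERROR_CODE::SUCCESS) {
    // The prior will assist the tracking during the next grab()
    zed.grab();
}
```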

◆ enableSpatialMapping()

ERROR_CODE enableSpatialMapping ( SpatialMappingParameters  spatial_mapping_parameters = SpatialMappingParameters())

Initializes and starts the spatial mapping processes.

The spatial mapping will create a geometric representation of the scene based on both tracking data and 3D point clouds.
The resulting output can be a Mesh or a FusedPointCloud. It can be obtained by calling extractWholeSpatialMap() or retrieveSpatialMapAsync(). Note that retrieveSpatialMapAsync() should be called after requestSpatialMapAsync().

Parameters
spatial_mapping_parameters: A structure containing all the specific parameters for the spatial mapping.
Default: a balanced parameter preset between geometric fidelity and output file size. For more information, see the SpatialMappingParameters documentation.
Returns
ERROR_CODE::SUCCESS if everything went fine, ERROR_CODE::FAILURE otherwise.
Warning
The tracking (enablePositionalTracking() ) and the depth (RuntimeParameters::enable_depth ) needs to be enabled to use the spatial mapping.
The performance greatly depends on the spatial_mapping_parameters.
Lower SpatialMappingParameters.range_meter and SpatialMappingParameters.resolution_meter for higher performance. If the mapping framerate is too slow in live mode, consider using an SVO file, or choose a lower mesh resolution.
Note
This feature uses host memory (RAM) to store the 3D map. The maximum amount of available memory allowed can be tweaked using the SpatialMappingParameters.
Exceeding the maximum memory allowed immediately stops the mapping.
#include <sl/Camera.hpp>

using namespace sl;

int main(int argc, char **argv) {
    // Create a ZED camera object
    Camera zed;

    // Set initial parameters
    InitParameters init_params;
    init_params.camera_resolution = RESOLUTION::HD720; // Use HD720 video mode (default fps: 60)
    init_params.coordinate_system = COORDINATE_SYSTEM::RIGHT_HANDED_Y_UP; // Use a right-handed Y-up coordinate system (the OpenGL one)
    init_params.coordinate_units = UNIT::METER; // Set units in meters

    // Open the camera
    ERROR_CODE err = zed.open(init_params);
    if (err != ERROR_CODE::SUCCESS)
        exit(-1);

    // Positional tracking needs to be enabled before using spatial mapping
    sl::PositionalTrackingParameters tracking_parameters;
    err = zed.enablePositionalTracking(tracking_parameters);
    if (err != ERROR_CODE::SUCCESS)
        exit(-1);

    // Enable spatial mapping
    sl::SpatialMappingParameters mapping_parameters;
    err = zed.enableSpatialMapping(mapping_parameters);
    if (err != ERROR_CODE::SUCCESS)
        exit(-1);

    // Grab data during 500 frames
    int i = 0;
    sl::Mesh mesh; // Create a mesh object
    while (i < 500) {
        // For each new grab, mesh data is updated
        if (zed.grab() == ERROR_CODE::SUCCESS) {
            // In the background, the spatial mapping will use newly retrieved images, depth and pose to update the mesh
            sl::SPATIAL_MAPPING_STATE mapping_state = zed.getSpatialMappingState();

            // Print spatial mapping state
            std::cout << "Images captured: " << i << " / 500 || Spatial mapping state: " << mapping_state << std::endl;
            i++;
        }
    }
    std::cout << std::endl;

    // Extract, filter and save the mesh in an obj file
    std::cout << "Extracting Mesh ..." << std::endl;
    zed.extractWholeSpatialMap(mesh); // Extract the whole mesh
    std::cout << "Filtering Mesh ..." << std::endl;
    mesh.filter(sl::MeshFilterParameters::MESH_FILTER::LOW); // Filter the mesh (remove unnecessary vertices and faces)
    std::cout << "Saving Mesh in mesh.obj ..." << std::endl;
    mesh.save("mesh.obj"); // Save the mesh in an obj file

    // Disable tracking and mapping and close the camera
    zed.disableSpatialMapping();
    zed.disablePositionalTracking();
    zed.close();
    return 0;
}

◆ getSpatialMappingState()

SPATIAL_MAPPING_STATE getSpatialMappingState ( )

Returns the current spatial mapping state.

As the spatial mapping runs asynchronously, this method allows you to get reported errors or status info.

Returns
The current state of the spatial mapping process.
See also
SPATIAL_MAPPING_STATE

◆ requestSpatialMapAsync()

void requestSpatialMapAsync ( )

Starts the spatial map generation process in a non-blocking thread from the spatial mapping process.

The spatial map generation can take a long time depending on the mapping resolution and covered area. This function will trigger the generation of a mesh without blocking the program.
You can get info about the current generation using getSpatialMapRequestStatusAsync(), and retrieve the mesh using retrieveSpatialMapAsync() .

Note
Only one mesh can be generated at a time. If the previous mesh generation is not over, new calls of the function will be ignored.

See enableSpatialMapping() for an example.

◆ getSpatialMapRequestStatusAsync()

ERROR_CODE getSpatialMapRequestStatusAsync ( )

Returns the spatial map generation status.

This status allows you to know whether the mesh can be retrieved by calling retrieveSpatialMapAsync().

Returns
ERROR_CODE::SUCCESS if the mesh is ready and not yet retrieved, otherwise ERROR_CODE::FAILURE.

See enableSpatialMapping() for an example.

◆ retrieveSpatialMapAsync() [1/2]

ERROR_CODE retrieveSpatialMapAsync ( Mesh mesh)

Retrieves the current generated spatial map only if SpatialMappingParameters::map_type was set as MESH.

After calling requestSpatialMapAsync(), this method allows you to retrieve the generated mesh.
The mesh will only be available when getSpatialMapRequestStatusAsync() returns ERROR_CODE::SUCCESS.

Parameters
mesh[out]: The mesh to be filled with the generated spatial map.
Returns
ERROR_CODE::SUCCESS if the mesh is retrieved, otherwise ERROR_CODE::FAILURE.
Note
This method only updates the necessary chunks and adds the new ones in order to improve update speed.
Warning
You should not modify the mesh between two calls of this method, otherwise it can lead to a corrupted mesh.
If SpatialMappingParameters::map_type has not been set to MESH, the object will be empty.

See enableSpatialMapping() for an example.
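The asynchronous request/status/retrieve flow can be sketched as follows inside a grab loop, assuming spatial mapping is already enabled with map_type set to MESH:

```cpp
sl::Mesh mesh;
bool request_pending = false;

while (zed.grab() == sl::ERROR_CODE::SUCCESS) {
    if (!request_pending) {
        zed.requestSpatialMapAsync(); // Trigger a mesh generation without blocking
        request_pending = true;
    } else if (zed.getSpatialMapRequestStatusAsync() == sl::ERROR_CODE::SUCCESS) {
        zed.retrieveSpatialMapAsync(mesh); // The mesh is ready: retrieve it
        request_pending = false;
        // Use the updated mesh here
    }
}
```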

◆ retrieveSpatialMapAsync() [2/2]

ERROR_CODE retrieveSpatialMapAsync ( FusedPointCloud fpc)

Retrieves the current generated spatial map only if SpatialMappingParameters::map_type was set as FUSED_POINT_CLOUD.

After calling requestSpatialMapAsync(), this method allows you to retrieve the generated fused point cloud.
The fused point cloud will only be available when getSpatialMapRequestStatusAsync() returns ERROR_CODE::SUCCESS.

Parameters
fpc[out]: The fused point cloud to be filled with the generated spatial map.
Returns
ERROR_CODE::SUCCESS if the mesh is retrieved, otherwise ERROR_CODE::FAILURE.
Note
This method only updates the necessary chunks and adds the new ones in order to improve update speed.
Warning
You should not modify the fused point cloud between two calls of this method; otherwise it can lead to a corrupted fused point cloud.
If SpatialMappingParameters::map_type has not been set up as FUSED_POINT_CLOUD, the object will be empty.

See enableSpatialMapping() for an example.

◆ extractWholeSpatialMap() [1/2]

ERROR_CODE extractWholeSpatialMap ( Mesh mesh)

Extract the current spatial map from the spatial mapping process only if SpatialMappingParameters::map_type was set as MESH.

If the object to be filled already contains a previous version of the mesh, only changes will be updated, optimizing performance.

Parameters
mesh[out]: The mesh to be filled with the generated spatial map.
Returns
ERROR_CODE::SUCCESS if the mesh is filled and available, otherwise ERROR_CODE::FAILURE.
Warning
This is a blocking method. You should either call it in a thread or at the end of the mapping process.
The extraction can be long; calling this method in the grab loop will block the depth and tracking computation, giving bad results.
If SpatialMappingParameters::map_type has not been set up as MESH, the object will be empty.

See enableSpatialMapping() for an example.
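At the end of a mapping session, the blocking extraction described above might be used like this (a sketch assuming `zed` is an opened sl::Camera with spatial mapping enabled as MESH; the output filename is hypothetical):

```cpp
#include <sl/Camera.hpp>

// Sketch: blocking whole-map extraction once mapping is finished.
void extract_and_save(sl::Camera &zed) {
    sl::Mesh mesh;
    // Blocking call: run it in a thread or after the grab loop has stopped
    if (zed.extractWholeSpatialMap(mesh) == sl::ERROR_CODE::SUCCESS)
        mesh.save("spatial_map.obj");  // hypothetical output path
    zed.disableSpatialMapping();
}
```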

◆ extractWholeSpatialMap() [2/2]

ERROR_CODE extractWholeSpatialMap ( FusedPointCloud fpc)

Extract the current spatial map from the spatial mapping process only if SpatialMappingParameters::map_type was set as FUSED_POINT_CLOUD.

If the object to be filled already contains a previous version of the fused point cloud, only changes will be updated, optimizing performance.

Parameters
fpc[out]: The fused point cloud to be filled with the generated spatial map.
Returns
ERROR_CODE::SUCCESS if the fused point cloud is filled and available, otherwise ERROR_CODE::FAILURE.
Warning
This is a blocking method. You should either call it in a thread or at the end of the mapping process.
The extraction can be long; calling this method in the grab loop will block the depth and tracking computation, giving bad results.
If SpatialMappingParameters::map_type has not been set up as FUSED_POINT_CLOUD, the object will be empty.

See enableSpatialMapping() for an example.

◆ pauseSpatialMapping()

void pauseSpatialMapping ( bool  status)

Pauses or resumes the spatial mapping processes.

As spatial mapping runs asynchronously, using this method can pause its computation to free some processing power, and resume it again later.
For example, it can be used to avoid mapping a specific area or to pause the mapping when the camera is static.

Parameters
status: If true, the integration is paused. If false, the spatial mapping is resumed.

◆ disableSpatialMapping()

void disableSpatialMapping ( )

Disables the spatial mapping process.

The spatial mapping is immediately stopped.
If the mapping has been enabled, this method will automatically be called by close().

Note
This method frees the memory allocated for the spatial mapping; consequently, meshes and fused point clouds cannot be retrieved after this call.

◆ getSpatialMappingParameters()

SpatialMappingParameters getSpatialMappingParameters ( )

Returns the SpatialMappingParameters used.

It corresponds to the structure given as argument to the enableSpatialMapping() method.

Returns
SpatialMappingParameters containing the parameters used for spatial mapping initialization.

◆ findPlaneAtHit()

ERROR_CODE findPlaneAtHit ( sl::uint2  coord,
sl::Plane plane,
PlaneDetectionParameters  parameters = PlaneDetectionParameters() 
)

Checks the plane at the given left image coordinates.

This method gives the 3D plane corresponding to a given pixel in the latest left image grabbed.
The pixel coordinates are expected to be contained in x = [0; width-1] and y = [0; height-1], where width and height are defined by the input resolution.

Parameters
coord[in]: The image coordinate. The coordinate must be taken from the full-size image.
plane[out]: The detected plane if the method succeeded.
parameters[in]: A structure containing all the specific parameters for the plane detection. Default: a preset of PlaneDetectionParameters.
Returns
ERROR_CODE::SUCCESS if a plane is found, otherwise ERROR_CODE::PLANE_NOT_FOUND.
Note
The reference frame is defined by the RuntimeParameters::measure3D_reference_frame given to the grab() method.
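As a sketch, hit-testing the plane under the image center might look like this (assumes `zed` is an opened sl::Camera and a successful grab() just occurred; the camera_configuration field layout follows recent SDK versions and should be checked against your headers):

```cpp
#include <sl/Camera.hpp>
#include <iostream>

// Sketch: detect the plane at the center of the left image.
void find_center_plane(sl::Camera &zed) {
    sl::Resolution res = zed.getCameraInformation().camera_configuration.resolution;
    sl::uint2 center(res.width / 2, res.height / 2);  // full-size image coordinates
    sl::Plane plane;
    if (zed.findPlaneAtHit(center, plane) == sl::ERROR_CODE::SUCCESS) {
        sl::float3 n = plane.getNormal();
        std::cout << "Plane normal: " << n.x << ", " << n.y << ", " << n.z << std::endl;
    }
}
```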

◆ findFloorPlane()

ERROR_CODE findFloorPlane ( sl::Plane floorPlane,
sl::Transform resetTrackingFloorFrame,
float  floor_height_prior = INVALID_VALUE,
sl::Rotation  world_orientation_prior = sl::Matrix3f::zeros(),
float  floor_height_prior_tolerance = INVALID_VALUE 
)

Detect the floor plane of the scene.

This method analyzes the latest image and depth to estimate the floor plane of the scene.
It expects the floor plane to be visible and larger than other candidate planes, such as a table.

Parameters
floorPlane[out]: The detected floor plane if the method succeeded.
resetTrackingFloorFrame[out]: The transform to align the tracking with the floor plane.
The initial position will then be at ground height, with the axes aligned with gravity.
The positional tracking needs to be reset/enabled with this transform as a parameter (PositionalTrackingParameters.initial_world_transform).
floor_height_prior[in]: Prior used to locate the floor plane from the known camera distance to the ground, expressed in the unit defined by InitParameters (coordinate_units).
If the prior is too far from the detected floor plane, the method will return ERROR_CODE::PLANE_NOT_FOUND.
world_orientation_prior[in]: Prior set to locate the floor plane depending on the known camera orientation to the ground.
If the prior is too far from the detected floor plane, the method will return ERROR_CODE::PLANE_NOT_FOUND.
floor_height_prior_tolerance[in]: Prior height tolerance, absolute value.
Returns
ERROR_CODE::SUCCESS if the floor plane is found and matches the priors (if defined), otherwise ERROR_CODE::PLANE_NOT_FOUND.
Note
The reference frame is defined by the sl::RuntimeParameters (measure3D_reference_frame) given to the grab() method.
The length unit is defined by sl:InitParameters (coordinate_units).
With the ZED, the assumption is made that the floor plane is the dominant plane in the scene. The ZED Mini uses gravity as prior.
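A sketch of the typical usage, re-anchoring the positional tracking on the detected floor (assumes `zed` is an opened sl::Camera with positional tracking enabled and a recent successful grab()):

```cpp
#include <sl/Camera.hpp>

// Sketch: detect the floor and reset tracking so the world origin sits on it.
void anchor_tracking_to_floor(sl::Camera &zed) {
    sl::Plane floor_plane;
    sl::Transform reset_transform;
    if (zed.findFloorPlane(floor_plane, reset_transform) == sl::ERROR_CODE::SUCCESS) {
        // Align the tracking world frame with the detected floor
        zed.resetPositionalTracking(reset_transform);
    }
}
```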

◆ enableRecording()

ERROR_CODE enableRecording ( RecordingParameters  recording_parameters)

Creates an SVO file that will be filled with the frames captured by subsequent calls to grab().

SVO files are custom video files containing the un-rectified images from the camera along with some metadata like timestamps or IMU orientation (if applicable).
They can be used to simulate a live ZED and test a sequence with various SDK parameters.
Depending on the application, various compression modes are available. See SVO_COMPRESSION_MODE.

Parameters
recording_parameters: A structure containing all the specific parameters for the recording such as filename and compression mode. Default: a preset of RecordingParameters.
Returns
An ERROR_CODE that defines if the SVO file was successfully created and can be filled with images.
Warning
This method can be called multiple times during a camera lifetime, but if video_filename already exists, the file will be overwritten.
#include <sl/Camera.hpp>

using namespace sl;

int main(int argc, char **argv) {
    // Create a ZED camera object
    Camera zed;

    // Set initial parameters
    InitParameters init_params;
    init_params.camera_resolution = RESOLUTION::HD720; // Use HD720 video mode (default fps: 60)
    init_params.coordinate_units = UNIT::METER;        // Set units in meters

    // Open the camera
    ERROR_CODE err = zed.open(init_params);
    if (err != ERROR_CODE::SUCCESS) {
        std::cout << toString(err) << std::endl;
        exit(-1);
    }

    // Enable video recording
    err = zed.enableRecording(RecordingParameters("myVideoFile.svo", SVO_COMPRESSION_MODE::H264));
    if (err != ERROR_CODE::SUCCESS) {
        std::cout << toString(err) << std::endl;
        exit(-1);
    }

    // Grab data during 500 frames
    int i = 0;
    while (i < 500) {
        // Grab a new frame
        if (zed.grab() == ERROR_CODE::SUCCESS) {
            // Record the grabbed frame in the video file
            i++;
        }
    }

    zed.disableRecording();
    std::cout << "Video has been saved ..." << std::endl;

    zed.close();
    return 0;
}

◆ getRecordingStatus()

RecordingStatus getRecordingStatus ( )

Get the recording information.

Returns
The recording state structure. For more details, see RecordingStatus.

◆ pauseRecording()

void pauseRecording ( bool  status)

Pauses or resumes the recording.

Parameters
status: If true, the recording is paused. If false, the recording is resumed.

◆ disableRecording()

void disableRecording ( )

Disables the recording initiated by enableRecording() and closes the generated file.

Note
This method will automatically be called by close() if enableRecording() was called.

See enableRecording() for an example.

◆ getRecordingParameters()

RecordingParameters getRecordingParameters ( )

Returns the RecordingParameters used.

It corresponds to the structure given as argument to the enableRecording() method.

Returns
RecordingParameters containing the parameters used for recording initialization.

◆ enableStreaming()

ERROR_CODE enableStreaming ( StreamingParameters  streaming_parameters = StreamingParameters())

Creates a streaming pipeline.

Parameters
streaming_parameters: A structure containing all the specific parameters for the streaming. Default: a preset of StreamingParameters.
Returns
ERROR_CODE::SUCCESS if streaming was successfully started.
ERROR_CODE::INVALID_FUNCTION_CALL if open() was not successfully called before.
ERROR_CODE::FAILURE if streaming RTSP protocol was not able to start.
ERROR_CODE::NO_GPU_COMPATIBLE if the streaming codec is not supported (in this case, use the H264 codec, which is supported on all NVIDIA GPUs the ZED SDK supports).
#include <sl/Camera.hpp>

using namespace sl;

int main(int argc, char **argv) {
    // Create a ZED camera object
    Camera zed;

    // Open the camera
    ERROR_CODE err = zed.open();
    if (err != ERROR_CODE::SUCCESS) {
        std::cout << toString(err) << std::endl;
        exit(-1);
    }

    // Enable video streaming
    sl::StreamingParameters stream_params;
    err = zed.enableStreaming(stream_params);
    if (err != ERROR_CODE::SUCCESS) {
        std::cout << toString(err) << std::endl;
        exit(-1);
    }

    // Grab data during 500 frames
    int i = 0;
    while (i < 500) {
        // Grab a new frame
        if (zed.grab() == ERROR_CODE::SUCCESS)
            i++;
    }

    zed.disableStreaming();
    zed.close();
    return 0;
}

◆ disableStreaming()

void disableStreaming ( )

Disables the streaming initiated by enableStreaming().

Note
This method will automatically be called by close() if enableStreaming() was called.

See enableStreaming() for an example.

◆ isStreamingEnabled()

bool isStreamingEnabled ( )

Tells if the streaming is running.

Returns
true if the stream is running, false otherwise.

◆ getStreamingParameters()

StreamingParameters getStreamingParameters ( )

Returns the StreamingParameters used.

It corresponds to the structure given as argument to the enableStreaming() method.

Returns
StreamingParameters containing the parameters used for streaming initialization.

◆ enableObjectDetection()

ERROR_CODE enableObjectDetection ( ObjectDetectionParameters  object_detection_parameters = ObjectDetectionParameters())

Initializes and starts object detection module.

The object detection module currently supports multiple classes of objects with OBJECT_DETECTION_MODEL::MULTI_CLASS_BOX or OBJECT_DETECTION_MODEL::MULTI_CLASS_BOX_ACCURATE.
The full list of detectable objects is available through OBJECT_CLASS and OBJECT_SUBCLASS.


Detected objects can be retrieved using the retrieveObjects() method.

Note
- This Deep Learning detection module is not available for MODEL::ZED cameras (ZED first generation).
- This feature uses AI to locate objects and requires a powerful GPU. A GPU with at least 3GB of memory is recommended.
Parameters
object_detection_parameters: A structure containing all the specific parameters for the object detection. Default: a preset of ObjectDetectionParameters.
Returns
ERROR_CODE::SUCCESS if everything went fine.
ERROR_CODE::CORRUPTED_SDK_INSTALLATION if the AI model is missing or corrupted. In this case, the SDK needs to be reinstalled.
ERROR_CODE::MODULE_NOT_COMPATIBLE_WITH_CAMERA if the camera used does not have an IMU (MODEL::ZED).
ERROR_CODE::MOTION_SENSORS_REQUIRED if the camera model is correct (not MODEL::ZED) but the IMU is missing. This usually happens because InitParameters::sensors_required was set to false and the IMU was not found.
ERROR_CODE::INVALID_FUNCTION_CALL if one of the object_detection_parameters parameters is not compatible with the parameters of another module (for example, depth_mode has been set to DEPTH_MODE::NONE).
ERROR_CODE::FAILURE otherwise.
Note
The IMU gives the gravity vector that helps in the 3D box localization. Therefore, the object detection module is not available for the MODEL::ZED camera model.
#include <sl/Camera.hpp>

using namespace sl;

int main(int argc, char **argv) {
    // Create a ZED camera object
    Camera zed;

    // Open the camera
    ERROR_CODE err = zed.open();
    if (err != ERROR_CODE::SUCCESS) {
        std::cout << "Opening camera error: " << toString(err) << std::endl;
        exit(-1);
    }

    // Enable position tracking (mandatory for object detection)
    PositionalTrackingParameters tracking_params;
    err = zed.enablePositionalTracking(tracking_params);
    if (err != ERROR_CODE::SUCCESS) {
        std::cout << "Enabling Positional Tracking error: " << toString(err) << std::endl;
        exit(-1);
    }

    // Set the object detection parameters
    ObjectDetectionParameters object_detection_params;

    // Enable the object detection
    err = zed.enableObjectDetection(object_detection_params);
    if (err != ERROR_CODE::SUCCESS) {
        std::cout << "Enabling Object Detection error: " << toString(err) << std::endl;
        exit(-1);
    }

    // Grab an image and detect objects on it
    Objects objects;
    while (true) {
        if (zed.grab() == ERROR_CODE::SUCCESS) {
            zed.retrieveObjects(objects);
            std::cout << objects.object_list.size() << " objects detected " << std::endl;
            // Use the objects in your application
        }
    }

    // Close the Camera
    zed.disableObjectDetection();
    zed.close();
    return 0;
}

◆ disableObjectDetection()

void disableObjectDetection ( unsigned int  instance_id = 0,
bool  force_disable_all_instances = false 
)

Disables the Object Detection process.

The object detection module immediately stops and frees its memory allocations.

Parameters
instance_id: Id of the object detection instance. Used when multiple instances of the object detection module are enabled at the same time.
force_disable_all_instances: Should disable all instances of the object detection module or just instance_id.
Note
If the object detection has been enabled, this method will automatically be called by close().

◆ ingestCustomBoxObjects()

ERROR_CODE ingestCustomBoxObjects ( const std::vector< CustomBoxObjectData > &  objects_in,
const unsigned int  instance_id = 0 
)

Feed the 3D Object tracking method with your own 2D bounding boxes from your own detection algorithm.

Parameters
objects_in: Vector of CustomBoxObjectData to feed the object detection.
instance_id: Id of the object detection instance. Used when multiple instances of the object detection module are enabled at the same time.
Returns
ERROR_CODE::SUCCESS if everything went fine.
Note
The detection should be done on the current grabbed left image, as the internal process will use all currently available data to extract 3D information and perform object tracking.
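As a sketch, feeding one external detection per frame could look like this (assumes `zed` is an opened sl::Camera with object detection enabled in a custom-detector mode; all box values are hypothetical placeholders for your own detector's output):

```cpp
#include <sl/Camera.hpp>
#include <vector>

// Sketch: ingest one hypothetical 2D detection into the ZED 3D tracker.
void ingest_external_detection(sl::Camera &zed) {
    sl::CustomBoxObjectData det;
    det.unique_object_id = sl::generate_unique_id();  // keep stable per tracked object
    det.probability = 0.9f;  // hypothetical detector confidence
    det.label = 0;           // your own class id
    det.is_grounded = true;  // object rests on the floor plane
    // 2D box in full-size left-image pixels: top-left, top-right, bottom-right, bottom-left
    det.bounding_box_2d = {sl::uint2(100, 100), sl::uint2(200, 100),
                           sl::uint2(200, 300), sl::uint2(100, 300)};

    std::vector<sl::CustomBoxObjectData> detections{det};
    zed.ingestCustomBoxObjects(detections);  // run on the current grabbed image
}
```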

◆ ingestCustomMaskObjects()

ERROR_CODE ingestCustomMaskObjects ( const std::vector< CustomMaskObjectData > &  objects_in,
const unsigned int  instance_id = 0 
)

Feed the 3D Object tracking method with your own 2D bounding boxes with masks from your own detection algorithm.

Parameters
objects_in: Vector of CustomMaskObjectData to feed the object detection.
instance_id: Id of the object detection instance. Used when multiple instances of the object detection module are enabled at the same time.
Returns
ERROR_CODE::SUCCESS if everything went fine.
Note
The detection should be done on the current grabbed left image, as the internal process will use all currently available data to extract 3D information and perform object tracking.

◆ retrieveObjects() [1/2]

ERROR_CODE retrieveObjects ( Objects objects,
ObjectDetectionRuntimeParameters  parameters = ObjectDetectionRuntimeParameters(),
const unsigned int  instance_id = 0 
)

Retrieve objects detected by the object detection module.

This method returns the result of the object detection, whether the module is running synchronously or asynchronously.

  • Asynchronous: this method immediately returns the last objects detected. If the current detection isn't done, the objects from the last detection will be returned, and Objects::is_new will be set to false.
  • Synchronous: this method executes detection and waits for it to finish before returning the detected objects.

It is recommended to keep the same Objects object as the input of all calls to this method. This will enable the identification and tracking of every object detected.

Parameters
objects: The detected objects will be saved into this object. If the object already contains data from a previous detection, it will be updated, keeping a unique ID for the same person.
parameters: Object detection runtime settings, can be changed at each detection. In async mode, the parameters update is applied on the next iteration.
instance_id: Id of the object detection instance. Used when multiple instances of the object detection module are enabled at the same time.
Returns
ERROR_CODE::SUCCESS if everything went fine, ERROR_CODE::FAILURE otherwise.
Objects objects; // Unique Objects to be updated after each grab

// --- Main loop
while (true) {
    if (zed.grab() == ERROR_CODE::SUCCESS) { // Grab an image from the camera
        zed.retrieveObjects(objects);
        for (auto object : objects.object_list) {
            std::cout << object.label << std::endl;
        }
    }
}

◆ retrieveObjects() [2/2]

ERROR_CODE retrieveObjects ( Objects objects,
CustomObjectDetectionRuntimeParameters  parameters = CustomObjectDetectionRuntimeParameters(),
const unsigned int  instance_id = 0 
)

Retrieve objects detected by the object detection module.

This method returns the result of the object detection, whether the module is running synchronously or asynchronously.

  • Asynchronous: this method immediately returns the last objects detected. If the current detection isn't done, the objects from the last detection will be returned, and Objects::is_new will be set to false.
  • Synchronous: this method executes detection and waits for it to finish before returning the detected objects.

It is recommended to keep the same Objects object as the input of all calls to this method. This will enable the identification and tracking of every object detected.

Parameters
objects: The detected objects will be saved into this object. If the object already contains data from a previous detection, it will be updated, keeping a unique ID for the same person.
parameters: Object detection runtime settings, can be changed at each detection. In async mode, the parameters update is applied on the next iteration.
instance_id: Id of the object detection instance. Used when multiple instances of the object detection module are enabled at the same time.
Returns
ERROR_CODE::SUCCESS if everything went fine, ERROR_CODE::FAILURE otherwise.
Objects objects; // Unique Objects to be updated after each grab

// --- Main loop
while (true) {
    if (zed.grab() == ERROR_CODE::SUCCESS) { // Grab an image from the camera
        zed.retrieveObjects(objects);
        for (auto object : objects.object_list) {
            std::cout << object.label << std::endl;
        }
    }
}

◆ getObjectsBatch()

ERROR_CODE getObjectsBatch ( std::vector< sl::ObjectsBatch > &  trajectories,
unsigned int  instance_id = 0 
)

Get a batch of detected objects.

Warning
This method needs to be called after retrieveObjects(), otherwise trajectories will be empty. It is the retrieveObjects() method that ingests the current/live objects into the batching queue.
Parameters
trajectories: A std::vector of sl::ObjectsBatch that will be filled by the batching queue process.
instance_id: Id of the object detection instance. Used when multiple instances of the object detection module are enabled at the same time.
Returns
ERROR_CODE::SUCCESS if everything went fine.
ERROR_CODE::INVALID_FUNCTION_CALL if batching module is not available or if object tracking was not enabled.
Note
Most of the time, the vector will be empty and will be filled every BatchParameters::latency.
Objects objects; // Unique Objects to be updated after each grab

// --- Main loop
while (true) {
    if (zed.grab() == ERROR_CODE::SUCCESS) { // Grab an image from the camera
        // Call retrieveObjects so that objects are ingested in the batching system
        zed.retrieveObjects(objects);

        // Get batch of objects
        std::vector<sl::ObjectsBatch> traj_;
        zed.getObjectsBatch(traj_);
        std::cout << "Size of batch: " << traj_.size() << std::endl;

        // See zed-examples/object detection/birds eye viewer for a complete example.
    }
}

◆ isObjectDetectionEnabled()

bool isObjectDetectionEnabled ( unsigned int  instance_id = 0)

Tells if the object detection module is enabled.

◆ getObjectDetectionParameters()

ObjectDetectionParameters getObjectDetectionParameters ( unsigned int  instance_id = 0)

Returns the ObjectDetectionParameters used.

It corresponds to the structure given as argument to the enableObjectDetection() method.

Returns
ObjectDetectionParameters containing the parameters used for object detection initialization.

◆ enableBodyTracking()

ERROR_CODE enableBodyTracking ( BodyTrackingParameters  body_tracking_parameters = BodyTrackingParameters())

Initializes and starts body tracking module.

The body tracking module currently supports multiple human skeleton detection models with BODY_TRACKING_MODEL::HUMAN_BODY_FAST, BODY_TRACKING_MODEL::HUMAN_BODY_MEDIUM and BODY_TRACKING_MODEL::HUMAN_BODY_ACCURATE.
These models only detect humans but provide a full skeleton map for each person.


Detected objects can be retrieved using the retrieveBodies() method.

Note
- This Deep Learning detection module is not available for MODEL::ZED cameras (ZED first generation).
- This feature uses AI to locate objects and requires a powerful GPU. A GPU with at least 3GB of memory is recommended.
Parameters
body_tracking_parameters: A structure containing all the specific parameters for the body tracking. Default: a preset of BodyTrackingParameters.
Returns
ERROR_CODE::SUCCESS if everything went fine.
ERROR_CODE::CORRUPTED_SDK_INSTALLATION if the AI model is missing or corrupted. In this case, the SDK needs to be reinstalled.
ERROR_CODE::MODULE_NOT_COMPATIBLE_WITH_CAMERA if the camera used does not have an IMU (MODEL::ZED).
ERROR_CODE::MOTION_SENSORS_REQUIRED if the camera model is correct (not MODEL::ZED) but the IMU is missing. This usually happens because InitParameters::sensors_required was set to false and the IMU was not found.
ERROR_CODE::INVALID_FUNCTION_CALL if one of the body_tracking_parameters parameters is not compatible with the parameters of another module (for example, depth_mode has been set to DEPTH_MODE::NONE).
ERROR_CODE::FAILURE otherwise.
#include <sl/Camera.hpp>

using namespace sl;

int main(int argc, char **argv) {
    // Create a ZED camera object
    Camera zed;

    // Open the camera
    ERROR_CODE err = zed.open();
    if (err != ERROR_CODE::SUCCESS) {
        std::cout << "Opening camera error: " << toString(err) << std::endl;
        exit(-1);
    }

    // Enable position tracking (mandatory for body tracking)
    PositionalTrackingParameters tracking_params;
    err = zed.enablePositionalTracking(tracking_params);
    if (err != ERROR_CODE::SUCCESS) {
        std::cout << "Enabling Positional Tracking error: " << toString(err) << std::endl;
        exit(-1);
    }

    // Set the body tracking parameters
    BodyTrackingParameters body_tracking_params;

    // Enable the body tracking
    err = zed.enableBodyTracking(body_tracking_params);
    if (err != ERROR_CODE::SUCCESS) {
        std::cout << "Enabling Body Tracking error: " << toString(err) << std::endl;
        exit(-1);
    }

    // Grab an image and detect bodies on it
    Bodies bodies;
    while (true) {
        if (zed.grab() == ERROR_CODE::SUCCESS) {
            zed.retrieveBodies(bodies);
            std::cout << bodies.body_list.size() << " bodies detected " << std::endl;
            // Use the bodies in your application
        }
    }

    // Close the Camera
    zed.disableBodyTracking();
    zed.close();
    return 0;
}

◆ disableBodyTracking()

void disableBodyTracking ( unsigned int  instance_id = 0,
bool  force_disable_all_instances = false 
)

Disables the body tracking process.

The body tracking module immediately stops and frees its memory allocations.

Parameters
instance_id: Id of the body tracking instance. Used when multiple instances of the body tracking module are enabled at the same time.
force_disable_all_instances: Should disable all instances of the body tracking module or just instance_id.
Note
If the body tracking has been enabled, this method will automatically be called by close().

◆ retrieveBodies()

ERROR_CODE retrieveBodies ( Bodies bodies,
BodyTrackingRuntimeParameters  parameters = BodyTrackingRuntimeParameters(),
unsigned int  instance_id = 0 
)

Retrieves body tracking data from the body tracking module.

This method returns the result of the body tracking, whether the module is running synchronously or asynchronously.

  • Asynchronous: this method immediately returns the last bodies tracked. If the current tracking isn't done, the bodies from the last tracking will be returned, and Bodies::is_new will be set to false.
  • Synchronous: this method executes detection and waits for it to finish before returning the bodies tracked.

It is recommended to keep the same Bodies object as the input of all calls to this method. This will enable the identification and the tracking of every detected body.

Parameters
bodies: The tracked bodies will be saved into this object. If the object already contains data from a previous detection, it will be updated, keeping a unique ID for the same person.
parameters: Body tracking runtime settings, can be changed at each detection. In async mode, the parameters update is applied on the next iteration.
instance_id: Id of the body tracking instance. Used when multiple instances of the body tracking module are enabled at the same time.
Returns
ERROR_CODE::SUCCESS if everything went fine, ERROR_CODE::FAILURE otherwise.
Bodies bodies; // Unique Bodies to be updated after each grab

// --- Main loop
while (true) {
    if (zed.grab() == ERROR_CODE::SUCCESS) { // Grab an image from the camera
        zed.retrieveBodies(bodies);
        for (auto body : bodies.body_list) {
            std::cout << body.label << std::endl;
        }
    }
}

◆ isBodyTrackingEnabled()

bool isBodyTrackingEnabled ( unsigned int  instance_id = 0)

Tells if the body tracking module is enabled.

◆ getBodyTrackingParameters()

BodyTrackingParameters getBodyTrackingParameters ( unsigned int  instance_id = 0)

Returns the BodyTrackingParameters used.

It corresponds to the structure given as argument to the enableBodyTracking() method.

Returns
BodyTrackingParameters containing the parameters used for body tracking module initialization.

◆ getHealthStatus()

HealthStatus getHealthStatus ( )

Returns HealthStatus.

This self-diagnostic can be enabled with sl::InitParameters::enable_image_validity_check.

Returns
A HealthStatus structure containing the self-diagnostic results for the image, depth, and sensors.
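A sketch of polling the diagnostic in the grab loop (assumes enable_image_validity_check was set in InitParameters; the HealthStatus field names below follow recent SDK versions and should be checked against your headers):

```cpp
#include <sl/Camera.hpp>
#include <iostream>

// Sketch: check the camera self-diagnostic after a grab.
void check_health(sl::Camera &zed) {
    sl::HealthStatus health = zed.getHealthStatus();
    // Field names are assumptions from recent SDK versions
    if (health.low_image_quality || health.low_depth_reliability)
        std::cout << "Degraded image or depth quality detected" << std::endl;
}
```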

◆ startPublishing()

ERROR_CODE startPublishing ( CommunicationParameters  configuration = CommunicationParameters())

Set this camera as a data provider for the Fusion module.

Metadata is exchanged with the Fusion.

Note
If you use it, you should include sl/Fusion.hpp to avoid undefined references.
Parameters
configuration: A structure containing all the initial parameters. Default: a preset of CommunicationParameters.
Returns
ERROR_CODE::SUCCESS if everything went fine, ERROR_CODE::FAILURE otherwise.
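A sketch of publishing this camera to a Fusion instance running in the same process (assumes sl/Fusion.hpp is available; setForSharedMemory() is used here for intra-process communication and should be checked against your SDK version):

```cpp
#include <sl/Camera.hpp>
#include <sl/Fusion.hpp>  // required to avoid undefined references

// Sketch: expose an opened camera to the Fusion module.
void publish_to_fusion(sl::Camera &zed) {
    sl::CommunicationParameters comm_params;
    comm_params.setForSharedMemory();  // intra-process communication
    zed.startPublishing(comm_params);
}
```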

◆ getSDKVersion() [1/2]

static String getSDKVersion ( )
static

Returns the version of the currently installed ZED SDK.

Returns
The ZED SDK version as a string with the following format: MAJOR.MINOR.PATCH
std::cout << Camera::getSDKVersion() << std::endl;

◆ getSDKVersion() [2/2]

static void getSDKVersion ( int &  major,
int &  minor,
int &  patch 
)
static

Returns the version of the currently installed ZED SDK.

Parameters
major: Major variable of the version filled.
minor: Minor variable of the version filled.
patch: Patch variable of the version filled.
int mj_v, mn_v, ptch_v;
Camera::getSDKVersion(mj_v, mn_v, ptch_v);
std::cout << "SDK Version v" << mj_v << "." << mn_v << "." << ptch_v << std::endl;

◆ getDeviceList()

static std::vector<sl::DeviceProperties> getDeviceList ( )
static

List all the connected devices with their associated information.

This method lists all the available cameras and provides their serial numbers, models, and other information.

Returns
The device properties for each connected camera.
Warning
As this method returns an std::vector, it is only safe to use in Release or ReleaseWithDebugInfos mode (not Debug).
This is due to a known compatibility issue between release (the SDK) and debug (your app) implementations of std::vector.

◆ getStreamingDeviceList()

static std::vector<sl::StreamingProperties> getStreamingDeviceList ( )
static

List all the streaming devices with their associated information.

Returns
The streaming properties for each connected camera.
Warning
As this method returns an std::vector, it is only safe to use in Release or ReleaseWithDebugInfos mode (not Debug).
This is due to a known compatibility issue between release (the SDK) and debug (your app) implementations of std::vector.
This method takes around 2 seconds to make sure all network information has been captured. Make sure to run it in a thread.

◆ reboot() [1/2]

static sl::ERROR_CODE reboot ( int  sn,
bool  fullReboot = true 
)
static

Performs a hardware reset of the ZED 2 and the ZED 2i.

Parameters
sn: Serial number of the camera to reset, or 0 to reset the first camera detected.
fullReboot: Perform a full reboot (sensors and video modules) if true, otherwise only the video module will be rebooted.
Returns
ERROR_CODE::SUCCESS if everything went fine.
ERROR_CODE::CAMERA_NOT_DETECTED if no camera was detected.
ERROR_CODE::FAILURE otherwise.
Note
This method only works for ZED 2, ZED 2i, and newer camera models.
Warning
This method will invalidate any sl::Camera object, since the device is rebooting.
Under Windows it is not possible to get exclusive access to HID devices, hence calling this method while the camera is opened by another process will cause it to freeze for a few seconds while the device is rebooting.
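As a sketch, resetting the first detected camera before opening it (note that any existing sl::Camera object is invalidated by the reboot):

```cpp
#include <sl/Camera.hpp>

// Sketch: hardware-reset the first detected camera, then reopen it.
int reboot_and_open() {
    if (sl::Camera::reboot(0) != sl::ERROR_CODE::SUCCESS)  // 0 = first camera found
        return -1;
    sl::Camera zed;
    return (zed.open() == sl::ERROR_CODE::SUCCESS) ? 0 : -1;
}
```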

◆ reboot() [2/2]

static sl::ERROR_CODE reboot ( sl::INPUT_TYPE  inputType)
static

Performs a hardware reset of all devices matching the InputType.

Parameters
inputType: Input type of the devices to reset.
Returns
ERROR_CODE::SUCCESS if everything went fine.
ERROR_CODE::CAMERA_NOT_DETECTED if no camera was detected.
ERROR_CODE::FAILURE otherwise.
ERROR_CODE::INVALID_FUNCTION_PARAMETERS for SVOs and streams.
Warning
This method will invalidate any sl::Camera object, since the device is rebooting.
Under Windows it is not possible to get exclusive access to HID devices, hence calling this method while the camera is opened by another process will cause it to freeze for a few seconds while the device is rebooting.