Fig. 6.7.9: Result of the hand-eye calibration process displayed in the Web GUI

6.7.4 Parameters

The hand-eye calibration component is called rc_hand_eye_calibration in the REST-API and is represented by the Hand-Eye Calibration tab in the Web GUI (Section 4.5). The user can change the calibration parameters there or use the REST-API interface (Section 8.2).

Parameter overview

This component offers the following run-time parameters.

Table 6.7.
Description of run-time parameters

The parameter descriptions are given with the corresponding Web GUI names in brackets.

grid_width (Grid Width (m)): Width of the calibration grid in meters. The width should be measured with great accuracy, preferably to sub-millimeter precision.

grid_height (Grid Height (m)): Height of the calibration grid in meters. The height should likewise be measured with great accuracy, preferably to sub-millimeter precision.
{
  "message": "string",
  "status": "int32",
  "success": "bool"
}

The slot argument is used to assign numbers to the different calibration poses. Each time set_pose is called, an image is recorded. The service call fails if the grid could not be detected in the current image.

Table 6.7.2: Return codes of the set_pose service call

status  success  Description
1       true     pose stored successfully
3       true     pose stored successfully; collected enough poses for calibration, i.e.
4       false
8       false
12      false
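Client-side handling of these return codes can be sketched as follows. This is an illustrative helper, not part of the REST-API; the status sets merely mirror the success column of Table 6.7.2.

```python
# Illustrative client-side handling of set_pose return codes (Table 6.7.2).
# Statuses 1 and 3 report success; statuses 4, 8 and 12 report failure.
SET_POSE_SUCCESS = {1, 3}
SET_POSE_FAILURE = {4, 8, 12}

def pose_was_stored(response: dict) -> bool:
    """True if the set_pose call stored the calibration pose."""
    return bool(response.get("success")) and response.get("status") in SET_POSE_SUCCESS

def enough_poses_collected(response: dict) -> bool:
    """Status 3 signals that enough poses were collected for calibration."""
    return response.get("status") == 3
```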
(continued from previous page)
  "success": "bool"
}

Table 6.7.3: Return codes of the calibrate service call

status  success  Description
0       true     calibration successful; returned resulting calibration pose
1       false    not enough poses to perform calibration
2       false    calibration result is invalid, please verify the input data
3       false    given calibration grid dimensions are not valid

save_calibration persistently saves the result of hand-eye calibration to the rc_visard and overwrites the existing one.
(continued from previous page)
  "pose": {
    "orientation": { "w": "float64", "x": "float64", "y": "float64", "z": "float64" },
    "position": { "x": "float64", "y": "float64", "z": "float64" }
  },
  "robot_mounted": "bool",
  "status": "int32",
  "success": "bool"
}

Table 6.7.6: Return codes of the get_calibration service call

status  success  Description
0       true     returned valid calibration pose
2       false    calibration result is not available
7 Optional software components

The rc_visard offers optional software components that can be activated by purchasing a separate license (Section 9.6). The rc_visard's optional software consists of the following components:

• SLAM (rc_slam, Section 7.1) performs simultaneous localization and mapping to correct accumulated pose errors. The rc_visard's covered trajectory is offered via the REST-API interface (Section 8.2).

• IO and Projector Control (rc_iocontrol, Section 7.2)
The pose estimate of the SLAM component will be initialized with the current estimate of the stereo INS; the origin will therefore be where the stereo INS was started. Since the SLAM component builds on the motion estimates of the stereo INS component, the latter will automatically be started if it is not yet running when SLAM is started. While the SLAM component is running, the corrected pose estimates will be available via the data streams pose, pose_rt, and dynamics of the rc_dynamics component.
Table 7.1.1: The rc_slam component's run-time parameters

Name                Type  Min    Max   Default  Description
autorecovery        bool  False  True  True     In case of fatal errors, recover corrected position and restart mapping
halt_on_low_memory  bool  False  True  True     When the memory runs low, go to halted state

This component reports the following status values. Table 7.1.
Fig. 7.1.1: Examples for combinations of relative start and end times for the get_trajectory service. All combinations shown select the same subset of the trajectory.
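The combinations in Fig. 7.1.1 can be reproduced with a small helper that maps relative offsets to absolute times. The sign convention assumed here (non-negative offsets count from the start of the trajectory, negative offsets from its end) is an interpretation of the figure, not an API guarantee.

```python
# Hypothetical resolution of relative start/end times for get_trajectory.
# Assumption: offsets >= 0 are measured from the trajectory start,
# offsets < 0 from the trajectory end (as suggested by Fig. 7.1.1).
def resolve_window(traj_start: float, traj_end: float,
                   start_offset: float, end_offset: float):
    """Map relative offsets in seconds to an absolute (start, end) window."""
    start = traj_start + start_offset if start_offset >= 0 else traj_end + start_offset
    end = traj_start + end_offset if end_offset >= 0 else traj_end + end_offset
    return start, end

# For a 75 s trajectory, "+15 s .. -15 s" and "+15 s .. +60 s" select
# the same subset, as in the figure:
window_a = resolve_window(0.0, 75.0, 15.0, -15.0)
window_b = resolve_window(0.0, 75.0, 15.0, 60.0)
```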
{
  "return_code": { "message": "string", "value": "int16" },
  "trajectory": {
    "name": "string",
    "parent": "string",
    "poses": [
      {
        "pose": {
          "orientation": { "w": "float64", "x": "float64", "y": "float64", "z": "float64" },
          "position": { "x": "float64", "y": "float64", "z": "float64" }
        },
        "timestamp": { "nsec": "int32", "sec": "int32" }
      }
    ],
    "producer": "string",
    "timestamp": { "nsec": "int32", "sec": "int32" }
  }
}

save_map stores the current state as a map to persistent memory.
This service requires no arguments. This service returns the following response:

{ "return_code": { "message": "string", "value": "int16" } }

remove_map removes the stored map from the persistent memory. This service requires no arguments. This service returns the following response:

{ "return_code": { "message": "string", "value": "int16" } }
Description of run-time parameters

out1_mode and out2_mode (Out1 and Out2): The output modes for GPIO Out 1 and Out 2 can be set individually:

Low sets the output permanently to low. This is the factory default of Out 2.
High sets the output permanently to high.
ExposureActive sets the output to high for the exposure time of every image. This is the factory default of Out 1.
ExposureAlternateActive sets the output to high for the exposure time of every second image.
7.2.2 Services

The IOControl component offers the following services.

get_io_values This service call retrieves the current state of the general purpose inputs and outputs. The returned time stamp is the time of measurement. The call returns an error if the rc_visard does not have an IOControl license. This service requires no arguments.
Note: The TagDetect components are optional and require a separate license (Section 9.6) to be purchased. Tag detection is made up of three steps: 1. Tag reading on the 2D image pair (see Tag reading, Section 7.3.2). 2. Estimation of the pose of each tag (see Pose estimation, Section 7.3.3). 3. Re-identification of previously seen tags (see Tag re-identification, Section 7.3.4). In the following, the two supported tag types are described, followed by a comparison. QR code Fig. 7.3.
AprilTag Fig. 7.3.2: A 16h5 tag (left) and a 36h11 tag (right). AprilTags consist of a mandatory white (a) and black (b) border and a variable number of data bits (c). AprilTags are similar to QR codes. However, they are specifically designed for robust identification at large distances. As with QR codes, we will call the tag pixels modules. Fig. 7.3.2 shows how AprilTags are structured. They are surrounded by a mandatory white and black border, each one module wide.
Comparison Both QR codes and AprilTags have their strengths and weaknesses. While QR codes allow arbitrary user-defined data to be stored, AprilTags have a pre-defined and limited set of tags. On the other hand, AprilTags have a lower resolution and can therefore be detected at larger distances. Moreover, the continuous white-to-black edge around AprilTags allows for more precise pose estimation. Note: If user-defined data is not required, AprilTags should be preferred over QR codes. 7.3.
Table 7.3.2: Maximum detection distance examples for AprilTags with a width of t = 4 cm

AprilTag family      Tag width  Maximum distance
36h11 (recommended)  8 modules  1.1 m
16h5                 6 modules  1.4 m

Table 7.3.3: Maximum detection distance examples for QR codes with a width of t = 8 cm

Tag width   Maximum distance
29 modules  0.49 m
21 modules  0.70 m

7.3.3 Pose estimation

For each detected tag, the pose of this tag in the camera coordinate frame is estimated.
have the same orientation and are approximately aligned in parallel to the image rows. However, even if the approximate size is not given, the TagDetect components try to detect such situations and filter out affected tags. The tables below give approximate precisions of the estimated poses of AprilTags and QR codes. We distinguish between lateral precision (i.e., in x and y direction) and precision in z direction.
After a number of tag detection runs, previously detected tag instances will be discarded if they are not detected in the meantime. This can be configured by the parameter forget_after_n_detections. 7.3.5 Interfaces There are two separate components for tag detection of the sensor, one for detecting AprilTags and one for QR codes, named rc_april_tag_detect and rc_qr_code_detect, respectively. Apart from the component names they share the same interface definition.
start starts the component by transitioning from IDLE to RUNNING. When running, the component receives images from the stereo camera and is ready to perform tag detections. To save computing resources on the sensor, the component should only be running when necessary. This service requires no arguments. This service returns the following response: { "accepted": "bool", "current_state": "string" } stop stops the component by transitioning to IDLE.
(continued from previous page)
    "message": "string",
    "value": "int16"
  },
  "tags": [
    {
      "id": "string",
      "instance_id": "string",
      "pose": {
        "orientation": { "w": "float64", "x": "float64", "y": "float64", "z": "float64" },
        "position": { "x": "float64", "y": "float64", "z": "float64" }
      },
      "pose_frame": "string",
      "size": "float64",
      "timestamp": { "nsec": "int32", "sec": "int32" }
    }
  ],
  "timestamp": { "nsec": "int32", "sec": "int32" }
}

Request: tags is the list of tag IDs that the TagDetect component should detect
following table contains a list of common codes:

Code  Description
0     Success
-1    An invalid argument was provided
-4    A timeout occurred while waiting for the image pair
-9    The license is not valid
-101  Internal error
-102  There was a backwards jump of system time
-200  A fatal internal error occurred
101   A warning occurred during tag reading
102   A warning occurred during pose estimation
200   Multiple warnings occurred; see list in message
201   The component was not in state RUNNING

Tags might be omitted from t
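The sign convention visible in these codes (zero for success, negative values for errors, positive values for warnings) can be exploited in client code; a minimal, illustrative sketch:

```python
def classify_return_code(value: int) -> str:
    """Classify a return code by the convention used in the table above:
    0 is success, negative values are errors, positive values are warnings."""
    if value == 0:
        return "success"
    return "error" if value < 0 else "warning"
```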
• support for static and robot-mounted rc_visard devices and optional integration with the on-board Hand-eye calibration (Section 6.7) component, to provide grasps in the user-configured external reference frame • a quality value associated with each suggested grasp, related to the flatness of the grasping surface • sorting of grasps according to gravity and size so that items on top of a pile are grasped first.
Fig. 7.4.1: Load carrier models and reference frame.

The user can optionally specify a prior for the load carrier pose. The detected load carrier pose is guaranteed to have the minimum rotation with respect to the load carrier prior pose.
Suction Grasp A grasp provided by the ItemPick and BoxPick components represents the recommended pose of the TCP (Tool Center Point) of the suction gripper. The grasp orientation is a right-handed coordinate system and is defined such that its z axis is normal to the surface pointing inside the object at the grasp position and its x axis is directed along the maximum elongation of the surface. The computed grasp pose is the center of the biggest ellipse that can be inscribed in each surface. Fig. 7.4.
• Disparity, error, and confidence images from the Stereo matching component (rc_stereomatching, Section 6.2). Sensor dynamics For each load carrier detection and grasp computation, the components estimate the gravity vector by subscribing to the IMU data stream from the Sensor dynamics component (rc_dynamics, Section 6.3). Note: The gravity vector is estimated from linear acceleration readings from the on-board IMU.
Table 7.4.2: The rc_itempick and rc_boxpick components' load carrier detection parameters

Name                          Type     Min    Max    Default  Description
load_carrier_crop_distance    float64  0.0    0.02   0.005    Safety margin in meters by which the load carrier inner dimensions are reduced to define the region of interest for grasp computation
load_carrier_model_tolerance  float64  0.003  0.025  0.008    Indicates how much the estimated load carrier dimensions are allowed to differ from the load carrier model dimensions in meters
Table 7.4.3: The rc_itempick and rc_boxpick components' surface clustering parameters

Name                   Type     Min    Max  Default  Description
cluster_max_dimension  float64  0.05   0.8  0.3      Only for rc_itempick. Diameter of the largest sphere enclosing each cluster in meters. Clusters larger than this value are filtered out before grasp computation.
cluster_max_curvature  float64  0.005  0.5  0.11     Maximum curvature allowed within one cluster. The smaller this value, the more clusters will be split apart.
Table 7.4.5: Possible states of the ItemPick and BoxPick components

State name  Description
IDLE        The component is idle.
RUNNING     The component is running and ready for load carrier detection and grasp computation.
FATAL       A fatal error has occurred.

7.4.5 Services

The user can explore and call the rc_itempick and rc_boxpick components' services, e.g. for development and testing, using Swagger UI (Section 8.2.4) or the rc_visard Web GUI (Section 4.5).
This service returns the following response:

{ "accepted": "bool", "current_state": "string" }

set_region_of_interest Persistently stores a region of interest on the rc_visard. All configured regions of interest are persistent over firmware updates and rollbacks. See Region of Interest for the definition of the region of interest type.
get_regions_of_interest Returns the configured regions of interest with the requested region_of_interest_ids. If no region_of_interest_ids are provided, all configured regions of interest are returned.
{ "return_code": { "message": "string", "value": "int16" } } set_load_carrier Persistently stores a load carrier on the rc_visard. All configured load carriers are persistent over firmware updates and rollbacks. See Load Carrier for the definition of the load carrier type.
{ "load_carrier_ids": [ "string" ] }

This service returns the following response:

{
  "load_carriers": [
    {
      "id": "string",
      "inner_dimensions": { "x": "float64", "y": "float64", "z": "float64" },
      "outer_dimensions": { "x": "float64", "y": "float64", "z": "float64" },
      "pose": {
        "orientation": { "w": "float64", "x": "float64", "y": "float64", "z": "float64" },
        "position": { "x": "float64", "y": "float64", "z": "float64" }
      },
      "pose_frame": "string",
      "rim_thickness": { "x": "float64", "y": "float64" }
    }
  ],
  "retur
{ "return_code": { "message": "string", "value": "int16" } } detect_load_carriers Triggers a load carrier detection. All images used by the node are guaranteed to be newer than the service trigger time.
(continued from previous page)
      "pose_frame": "string",
      "rim_thickness": { "x": "float64", "y": "float64" }
    }
  ],
  "return_code": { "message": "string", "value": "int16" },
  "timestamp": { "nsec": "int32", "sec": "int32" }
}

Required arguments:

pose_frame: defines the output pose frame for the detected load carriers.
load_carrier_ids
robot_pose: only if working in pose_frame="external" and the rc_visard is robot-mounted.
(continued from previous page)
        "w": "float64",
        "x": "float64",
        "y": "float64",
        "z": "float64"
      },
      "position": { "x": "float64", "y": "float64", "z": "float64" }
    }
  },
  "load_carrier_id": "string",
  "pose_frame": "string",
  "region_of_interest_id": "string",
  "robot_pose": {
    "orientation": { "w": "float64", "x": "float64", "y": "float64", "z": "float64" },
    "position": { "x": "float64", "y": "float64", "z": "float64" }
  }
}

This service returns the following response:

{
  "items": [
    {
      "pose": {
        "orientation": {
          "w"
(continued from previous page)
  "load_carriers": [
    {
      "id": "string",
      "inner_dimensions": { "x": "float64", "y": "float64", "z": "float64" },
      "outer_dimensions": { "x": "float64", "y": "float64", "z": "float64" },
      "pose": {
        "orientation": { "w": "float64", "x": "float64", "y": "float64", "z": "float64" },
        "position": { "x": "float64", "y": "float64", "z": "float64" }
      },
      "pose_frame": "string",
      "rim_thickness": { "x": "float64", "y": "float64" }
    }
  ],
  "return_code": { "message": "string", "value": "int16" },
  "t
compute_grasps (for ItemPick) Triggers the computation of grasping poses for a suction device. All images used by the node are guaranteed to be newer than the service trigger time. If successful, the service returns a sorted list of grasps and (optionally) the detected load carriers.
This service returns the following response:

{
  "grasps": [
    {
      "item_uuid": "string",
      "max_suction_surface_length": "float64",
      "max_suction_surface_width": "float64",
      "pose": {
        "orientation": { "w": "float64", "x": "float64", "y": "float64", "z": "float64" },
        "position": { "x": "float64", "y": "float64", "z": "float64" }
      },
      "pose_frame": "string",
      "quality": "float64",
      "timestamp": { "nsec": "int32", "sec": "int32" },
      "type": "string",
      "uuid": "string"
    }
  ],
  "load_carriers": [
    {
      "id": "string",
      "inner_dimensio
(continued from previous page)
  ],
  "return_code": { "message": "string", "value": "int16" },
  "timestamp": { "nsec": "int32", "sec": "int32" }
}

Required arguments:

pose_frame: defines the output pose frame for the computed grasps.
suction_surface_length: length of the suction device grasping surface.
suction_surface_width: width of the suction device grasping surface.
robot_pose: only if working in pose_frame="external" and the rc_visard is robot-mounted.
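Assembling the request body for compute_grasps can be sketched as follows. The helper name and the concrete values are illustrative only; the field names follow the argument list above.

```python
import json

def make_compute_grasps_args(pose_frame, suction_surface_length,
                             suction_surface_width, load_carrier_id=None):
    """Build the JSON body for a compute_grasps service call (sketch)."""
    args = {
        "pose_frame": pose_frame,
        "suction_surface_length": suction_surface_length,
        "suction_surface_width": suction_surface_width,
    }
    if load_carrier_id is not None:  # optional argument
        args["load_carrier_id"] = load_carrier_id
    return json.dumps(args)

# Example body for grasps in the camera frame with a 50 x 30 mm suction cup:
body = make_compute_grasps_args("camera", 0.05, 0.03)
```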
(continued from previous page)
      "y": "float64",
      "z": "float64"
    },
    "pose": {
      "orientation": { "w": "float64", "x": "float64", "y": "float64", "z": "float64" },
      "position": { "x": "float64", "y": "float64", "z": "float64" }
    }
  },
  "load_carrier_id": "string",
  "pose_frame": "string",
  "region_of_interest_id": "string",
  "robot_pose": {
    "orientation": { "w": "float64", "x": "float64", "y": "float64", "z": "float64" },
    "position": { "x": "float64", "y": "float64", "z": "float64" }
  },
  "suction_surface_length": "float6
(continued from previous page)
        "nsec": "int32",
        "sec": "int32"
      },
      "type": "string",
      "uuid": "string"
    }
  ],
  "items": [
    {
      "grasp_uuids": [ "string" ],
      "pose": {
        "orientation": { "w": "float64", "x": "float64", "y": "float64", "z": "float64" },
        "position": { "x": "float64", "y": "float64", "z": "float64" }
      },
      "pose_frame": "string",
      "rectangle": { "x": "float64", "y": "float64" },
      "timestamp": { "nsec": "int32", "sec": "int32" },
      "type": "string",
      "uuid": "string"
    }
  ],
  "load_carriers": [
    {
      "id": "string",
      "inne
(continued from previous page)
        "z": "float64"
        }
      },
      "pose_frame": "string",
      "rim_thickness": { "x": "float64", "y": "float64" }
    }
  ],
  "return_code": { "message": "string", "value": "int16" },
  "timestamp": { "nsec": "int32", "sec": "int32" }
}

Required arguments:

pose_frame: defines the output pose frame for the computed grasps and detected rectangles.
item_models: defines a list of rectangles with minimum and maximum size, with the minimum dimensions strictly smaller than the maximum dimensions.
For the SilhouetteMatch component to work, special object templates are required for each type of object to be detected. Roboception offers a template generation service on their website (https://roboception.com/en/ template-request/), where the user can upload CAD files or recorded data of the objects and request object templates for the SilhouetteMatch component. The object templates consist of significant edges of each object.
In the REST-API, a plane is defined by a normal and a distance. normal is a normalized 3-vector, specifying the normal of the plane. The normal points away from the camera. distance represents the distance of the plane from the camera along the normal.
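Under one common sign convention (a point p lies on the plane iff dot(normal, p) + distance = 0), the distance of an arbitrary point from such a plane can be computed as below. Whether this sign convention exactly matches the component's definition should be verified against actual data; the sketch is illustrative.

```python
# Signed distance of a 3D point from a plane given as (normal, distance).
# Assumption: the plane consists of all points p with dot(normal, p) + distance = 0,
# and normal is already normalized (as stated in the manual).
def signed_distance(normal, distance, point):
    return sum(n * p for n, p in zip(normal, point)) + distance

# A point 2 m in front of the plane z = 1 (normal (0, 0, 1), distance -1):
d = signed_distance((0.0, 0.0, 1.0), -1.0, (0.0, 0.0, 3.0))  # 2.0
```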
• The template of the object to be detected in the scene. • The coordinate frame in which the poses of the detected objects shall be returned (ref. Hand-eye calibration). Optionally, further information can be given to the SilhouetteMatch component: • An offset in case the objects are lying not on the base plane but on a plane parallel to it. The offset is the distance between both planes given in the direction towards the camera. If omitted, an offset of 0 is assumed.
Note: All changes and configuration updates to these components will affect the performance of the SilhouetteMatch component. Stereo camera and stereo matching The SilhouetteMatch component internally makes use of the rectified images from the Stereo camera component (rc_stereocamera, Section 6.1). Thus, the exposure time should be set properly to achieve the optimal performance of the component.
Note: Object detection can only be performed if the angle between the base-plane normal and the line of sight of the rc_visard does not exceed 10 degrees. 7.5.6 Parameters and status values The SilhouetteMatch software component is called rc_silhouettematch in the REST-API. The user can explore and configure the rc_silhouettematch component's run-time parameters, e.g. for development and testing, using the rc_visard Web GUI (Section 4.5) or Swagger UI (Section 8.2.4).
max_number_of_detected_objects (Maximum Object Number) This parameter gives the maximum number of objects to detect in the scene. If more than the given number of objects can be detected in the scene, only the objects with the highest matching results are returned. match_max_distance (Maximum Matching Distance) This parameter gives the maximum allowed pixel distance of an image edge pixel from the object edge pixel in the template to still be considered as matching.
(continued from previous page)
    "distance": "float64",
    "normal": { "x": "float64", "y": "float64", "z": "float64" }
  },
  "plane_estimation_method": "string",
  "pose_frame": "string",
  "region_of_interest_2d_id": "string",
  "robot_pose": {
    "orientation": { "w": "float64", "x": "float64", "y": "float64", "z": "float64" },
    "position": { "x": "float64", "y": "float64", "z": "float64" }
  }
}

This service returns the following response:

{
  "plane": {
    "distance": "float64",
    "normal": { "x": "float64", "y": "float64", "z"
This service requires the following arguments:

{
  "pose_frame": "string",
  "robot_pose": {
    "orientation": { "w": "float64", "x": "float64", "y": "float64", "z": "float64" },
    "position": { "x": "float64", "y": "float64", "z": "float64" }
  }
}

This service returns the following response:

{
  "plane": {
    "distance": "float64",
    "normal": { "x": "float64", "y": "float64", "z": "float64" },
    "pose_frame": "string"
  },
  "return_code": { "message": "string", "value": "int16" }
}

Required arguments: pose_frame: see Hand-eye calibration.
{
  "region_of_interest_2d": {
    "height": "uint32",
    "id": "string",
    "offset_x": "uint32",
    "offset_y": "uint32",
    "width": "uint32"
  }
}

This service returns the following response:

{ "return_code": { "message": "string", "value": "int16" } }

get_regions_of_interest_2d Returns the configured 2D regions of interest with the requested region_of_interest_2d_ids. If no region_of_interest_2d_ids are provided, all configured 2D regions of interest are returned.
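Before storing a 2D region of interest it can be useful to check that it fits inside the image. The sketch below is a hypothetical client-side check; it assumes the full image resolution of 1280 x 960 pixels used elsewhere in this manual.

```python
# Hypothetical client-side sanity check for a 2D region of interest.
def roi_is_valid(roi, img_width=1280, img_height=960):
    """True if the ROI has positive size and lies fully inside the image."""
    return (roi["width"] > 0 and roi["height"] > 0
            and roi["offset_x"] + roi["width"] <= img_width
            and roi["offset_y"] + roi["height"] <= img_height)
```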
{ "return_code": { "message": "string", "value": "int16" } } detect_object Triggers an object detection and returns the pose of all found object instances. The maximum number of returned instances can be controlled with the max_number_of_detected_objects parameter. All images used by the service are guaranteed to be newer than the service trigger time. See Detection of objects for more details about the object detection.
(continued from previous page)
    }
  ],
  "object_id": "string",
  "return_code": { "message": "string", "value": "int16" },
  "timestamp": { "nsec": "int32", "sec": "int32" }
}

Required arguments:

object_id in object_to_detect: ID of the template which should be detected.
pose_frame: see Hand-eye calibration.

Potentially required arguments:

robot_pose: see Hand-eye calibration.

Optional arguments:

offset: offset in meters by which the base-plane calibration will be shifted towards the camera.
• 200 OK – successful operation (returns array of Template)
• 404 Not Found – node not found

Referenced Data Models
• Template (Section 8.2.3)

GET /nodes/rc_silhouettematch/templates/{id}

Get an rc_silhouettematch template. If the requested content-type is application/octet-stream, the template is returned as a file.

Template request

GET /api/v1/nodes/rc_silhouettematch/templates/ HTTP/1.1
Host:

Template response HTTP/1.
• file – template file (required) Request Headers • Accept – multipart/form-data application/json Response Headers • Content-Type – application/json Status Codes • 200 OK – successful operation (returns Template) • 404 Not Found – node or template not found • 403 Forbidden – forbidden, e.g. because there is no valid license for this component.
8 Interfaces Four interfaces are provided for configuring and obtaining data from the rc_visard: 1. GigE Vision 2.0/GenICam (Section 8.1) Images and camera-related configuration. 2. REST-API (Section 8.2) API to configure the rc_visard, query status information, request streams, etc. 3. rc_dynamics streams (Section 8.3) Real-time streams containing state estimates with poses, velocities, etc. are provided over the rc_dynamics interface. It sends protobuf-encoded messages via UDP. 4.
8.1.1 Important GenICam parameters The following list gives an overview of the relevant GenICam features of the rc_visard that can be read and/or changed via the GenICam interface. In addition to the standard parameters, which are defined in the Standard Feature Naming Convention (SFNC, see http://www.emva.org/standards-technology/genicam/genicam-downloads/), rc_visard devices also offer custom parameters that account for special features of the Stereo camera (Section 6.
Category: AcquisitionControl AcquisitionFrameRate • type: Float, ranges from 1 Hz to 25 Hz • default: 25 Hz • description: Frame rate of the camera (FPS, Section 6.1.3). ExposureAuto • type: Enumeration, one of Continuous or Off • default: Continuous • description: Can be set to Off for manual exposure mode or to Continuous for auto exposure mode (Exposure, Section 6.1.3).
• description: Weighting of red or blue to green color channel. This feature is only available on color sensors (wb_ratio, Section 6.1.3). Category: DigitalIOControl Note: If IOControl license is not available, then the outputs will be configured according to the factory defaults and cannot be changed. The inputs will always return the logic value false, regardless of the signals on the physical inputs.
• type: Enumeration, is always DisparityC • description: Mode for the depth measurements, which is always DisparityC. Scan3dFocalLength (read-only) • type: Float • description: Focal length in pixels of the image stream selected by ComponentSelector. In case of the components Disparity, Confidence, and Error, the value also depends on the resolution that is implicitly selected by DepthQuality. Scan3dBaseline (read-only) • type: Float • description: Baseline of the stereo camera in meters.
• type: Boolean • default: False • description: Enables chunk data that is delivered with every image. Custom GenICam features of the rc_visard Category: ImageFormatControl ExposureTimeAutoMax • type: Float, ranges from 66 µs to 18000 µs • default: 7000 µs • description: Maximal exposure time in auto exposure mode (Auto, Section 6.1.3). ExposureRegionOffsetX • type: Integer in the range of 0 to 1280 • default: 0 • description: Horizontal offset of exposure region (Section 6.1.3) in pixel.
• description: Only effective in MultiPart mode. If this parameter is set to SingleComponent the images are sent immediately as a single component per frame/buffer when they become available. This is the same behavior as when MultiPart is not supported by the client. If set to SynchronizedComponents all enabled components are time synchronized on the rc_visard and only sent (in one frame/buffer) when they are all available for that timestamp.
• description: Maximum disparity value in pixels (Disparity Range, Section 6.2.4). DepthSmooth (read-only if StereoPlus license is not available) • type: Boolean • default: False • description: True for advanced smoothing of disparity values. (Smoothing, Section 6.2.4). DepthFill • type: Integer, ranges from 0 pixel to 4 pixels • default: 3 pixels • description: Value in pixels for Fill-In (Section 6.2.4).
Particularly useful chunk parameters are:

• ChunkComponentSelector selects for which component to extract the chunk data in MultiPart mode.
• ChunkComponentID and ChunkComponentIDValue provide the relation of the image to its component (e.g. camera image or disparity image) without guessing from the image format or size.
• ChunkLineStatusAll provides the status of all GPIOs at the time of image acquisition. See LineStatusAll above for a description of the bits.
• ChunkScan3d...
transformed into 3D object coordinates in the sensor coordinate frame (Section 3.7) using the equations described in Computing depth images and point clouds (Section 6.2.2). Assuming that d_ik is the 16-bit disparity value at column i and row k of the disparity image, the 3D reconstruction in meters can be written with the GenICam parameters as

P_x = (i − Scan3dPrincipalPointU) · Scan3dBaseline / (d_ik · Scan3dCoordinateScale),
P_y = (k − Scan3dPrincipalPointV) · Scan3dBaseline / (d_ik · Scan3dCoordinateScale),
P_z = Scan3dFocalLength · Scan3dBaseline / (d_ik · Scan3dCoordinateScale).
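The reconstruction equations above translate directly into code. In the sketch below, the Scan3d* values are plain function arguments (in practice they are read from the GenICam nodemap), and a non-positive disparity is treated as an invalid measurement.

```python
# 3D reconstruction of pixel (i, k) from its 16-bit disparity value d_ik,
# following the reconstruction equations above.
def reconstruct_point(i, k, d_ik, focal_length, baseline, ppu, ppv, scale):
    """Return (Px, Py, Pz) in meters, or None if the disparity is invalid."""
    d = d_ik * scale  # disparity in pixels (Scan3dCoordinateScale applied)
    if d <= 0.0:
        return None  # d_ik == 0 marks an invalid measurement
    px = (i - ppu) * baseline / d
    py = (k - ppv) * baseline / d
    pz = focal_length * baseline / d
    return (px, py, pz)
```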
tion (Section 4.3). Accessing this entry point with a web browser lets the user explore and test the full API during run-time using the Swagger UI (Section 8.2.4). For actual HTTP requests, the current API version is appended to the entry point of the API, i.e., http:// /api/v1. All data sent to and received by the REST-API follows the JavaScript Object Notation (JSON).
HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 157

{
  "name": "minconf",
  "min": 0,
  "default": 0,
  "max": 1,
  "value": 0,
  "type": "float64",
  "description": "Minimum confidence"
}

Note: The actual behavior, allowed requests, and specific return codes depend heavily on the specific resource, context, and action. Please refer to the rc_visard's available resources (Section 8.2.2) and to each software component's (Section 6) parameters and services. 8.2.
As an example, the rc_stereomatching parameters can be retrieved using curl -X GET http:///api/v1/nodes/rc_stereomatching/parameters Its median parameter could be set to 3 using curl -X PUT --header 'Content-Type: application/json' -d '{ "value": 3 }' http:/// ˓→api/v1/nodes/rc_stereomatching/parameters/median Note: Run-time parameters are specific to individual nodes and are documented in the respective software component (Section 6).
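The same parameter update can be issued from Python's standard library instead of curl. The host name rc-visard below is a placeholder for the device's address, and the request is only constructed here, not sent.

```python
import json
import urllib.request

# Build the PUT request corresponding to the curl call above
# (host "rc-visard" is a placeholder).
body = json.dumps({"value": 3}).encode()
req = urllib.request.Request(
    "http://rc-visard/api/v1/nodes/rc_stereomatching/parameters/median",
    data=body,
    headers={"Content-Type": "application/json"},
    method="PUT",
)
# urllib.request.urlopen(req) would perform the call against a reachable device.
```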
(continued from previous page)
      "snap"
    ],
    "services": [ "save_parameters", "reset_defaults", "change_state" ],
    "status": "stale"
  },
  {
    "name": "rc_stereocamera",
    "parameters": [ "fps", "exp_auto", "exp_value", "exp_max" ],
    "services": [ "save_parameters", "reset_defaults" ],
    "status": "running"
  },
  {
    "name": "rc_hand_eye_calibration",
    "parameters": [ "grid_width", "grid_height", "robot_mounted" ],
    "services": [ "save_parameters", "reset_defaults", "set_pose", "reset", "save", "calibrate", "get_calibration" ],
(continued from previous page)
    "services": [ "save_parameters", "reset_defaults" ],
    "status": "running"
  },
  {
    "name": "rc_stereovisodo",
    "parameters": [ "disprange", "nkey", "ncorner", "nfeature" ],
    "services": [ "save_parameters", "reset_defaults" ],
    "status": "stale"
  }
]

Response Headers
• Content-Type – application/json

Status Codes
• 200 OK – successful operation (returns array of NodeInfo)

Referenced Data Models
• NodeInfo (Section 8.2.3)

GET /nodes/{node}

Get info on a single node.
• node (string) – name of the node (required) Response Headers • Content-Type – application/json Status Codes • 200 OK – successful operation (returns NodeInfo) • 404 Not Found – node not found Referenced Data Models • NodeInfo (Section 8.2.3) GET /nodes/{node}/parameters Get parameters of a node. Template request GET /api/v1/nodes//parameters?name= HTTP/1.1 Host: Sample response HTTP/1.
Response Headers • Content-Type – application/json Status Codes • 200 OK – successful operation (returns array of Parameter) • 404 Not Found – node not found Referenced Data Models • Parameter (Section 8.2.3) PUT /nodes/{node}/parameters Update multiple parameters. Template request PUT /api/v1/nodes//parameters HTTP/1.1 Host: Accept: application/json [ { "name": "string", "value": {} } ] Sample response HTTP/1.
Parameters • node (string) – name of the node (required) Request JSON Array of Objects • parameters (Parameter) – array of parameters (required) Request Headers • Accept – application/json Response Headers • Content-Type – application/json Status Codes • 200 OK – successful operation (returns array of Parameter) • 404 Not Found – node not found • 403 Forbidden – Parameter update forbidden, e.g. because they are locked by a running GigE Vision application or there is no valid license for this component.
• Parameter (Section 8.2.3) PUT /nodes/{node}/parameters/{param} Update a specific parameter of a node. Template request PUT /api/v1/nodes//parameters/ HTTP/1.1 Host: Accept: application/json { "name": "string", "value": {} } Sample response HTTP/1.1 200 OK Content-Type: application/json { "default": "H", "description": "Quality, i.e.
GET /api/v1/nodes//services HTTP/1.1 Host: Sample response HTTP/1.1 200 OK Content-Type: application/json [ { "args": {}, "description": "Restarts the component.", "name": "restart", "response": { "accepted": "bool", "current_state": "string" } }, { "args": {}, "description": "Starts the component.", "name": "start", "response": { "accepted": "bool", "current_state": "string" } }, { "args": {}, "description": "Stops the component.
HTTP/1.1 200 OK Content-Type: application/json { "args": { "pose": { "orientation": { "w": "float64", "x": "float64", "y": "float64", "z": "float64" }, "position": { "x": "float64", "y": "float64", "z": "float64" } }, "slot": "int32" }, "description": "Save a pose (grid or gripper) for later calibration.
HTTP/1.1 200 OK Content-Type: application/json { "name": "set_pose", "response": { "message": "Grid detected, pose stored.
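Putting the pieces together, the request body for calling the set_pose service can be assembled as a nested dictionary mirroring the args structure shown above. A sketch only; the helper name, slot number, and pose values are illustrative, not taken from the manual:

```python
import json

def set_pose_args(slot, x, y, z, qx, qy, qz, qw):
    """Build the 'args' object for the set_pose service call,
    mirroring the args structure in the service description."""
    return {
        "args": {
            "slot": slot,
            "pose": {
                "position": {"x": x, "y": y, "z": z},
                "orientation": {"x": qx, "y": qy, "z": qz, "w": qw},
            },
        }
    }

# Example body for PUT /nodes/rc_hand_eye_calibration/services/set_pose
body = json.dumps(set_pose_args(0, 0.1, -0.2, 0.5, 0.0, 0.0, 0.0, 1.0))
```
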
(continued from previous page) "height": "960", "temp_left": "39.6", "temp_right": "38.2", "time": "0.00406513", "width": "1280" } } Parameters • node (string) – name of the node (required) Response Headers • Content-Type – application/json Status Codes • 200 OK – successful operation (returns NodeStatus) • 404 Not Found – node not found Referenced Data Models • NodeStatus (Section 8.2.
(continued from previous page) "destinations": [ "192.168.1.13:30000" ], "name": "pose", "protobuf": "Frame", "protocol": "UDP" }, { "description": "Pose of left camera (RealTime 200Hz)", "destinations": [ "192.168.1.100:20000", "192.168.1.
(continued from previous page) "destinations": [ "192.168.1.13:30000" ], "name": "pose", "protobuf": "Frame", "protocol": "UDP" } Parameters • stream (string) – name of the stream (required) Response Headers • Content-Type – application/json Status Codes • 200 OK – successful operation (returns Stream) • 404 Not Found – datastream not found Referenced Data Models • Stream (Section 8.2.3) PUT /datastreams/{stream} Update a datastream configuration. Template request PUT /api/v1/datastreams/ HTTP/1.
Status Codes • 200 OK – successful operation (returns Stream) • 404 Not Found – datastream not found Referenced Data Models • Stream (Section 8.2.3) DELETE /datastreams/{stream} Delete a destination from the datastream configuration. Template request DELETE /api/v1/datastreams/?destination= HTTP/1.1 Host: Sample response HTTP/1.
GET /logs Get list of available log files. Template request GET /api/v1/logs HTTP/1.1 Host: Sample response HTTP/1.1 200 OK Content-Type: application/json [ { "date": 1503060035.0625782, "name": "rcsense-api.log", "size": 730 }, { "date": 1503060035.741574, "name": "stereo.log", "size": 39024 }, { "date": 1503060044.0475223, "name": "camera.log", "size": 1091 }, { "date": 1503060035.2115774, "name": "dynamics.
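Given such a response, the most recent log file can be picked out by sorting on the date field (a Unix timestamp). A small sketch using the sample values from the response above (the helper name is an assumption):

```python
def newest_log(logs):
    """Return the name of the most recently modified log file."""
    return max(logs, key=lambda entry: entry["date"])["name"]

# Entries taken from the sample response above.
logs = [
    {"date": 1503060035.0625782, "name": "rcsense-api.log", "size": 730},
    {"date": 1503060035.741574, "name": "stereo.log", "size": 39024},
    {"date": 1503060044.0475223, "name": "camera.log", "size": 1091},
]
```
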
(continued from previous page) "component": "rc_stereo_ins", "level": "INFO", "message": "Running rc_stereo_ins version 2.4.0", "timestamp": 1503060034.083 }, { "component": "rc_stereo_ins", "level": "INFO", "message": "Starting up communication interfaces", "timestamp": 1503060034.085 }, { "component": "rc_stereo_ins", "level": "INFO", "message": "Autostart disabled", "timestamp": 1503060034.
Query Parameters • format (string) – return log as JSON or raw (one of json, raw; default: json) (optional) • limit (integer) – limit to last x lines in JSON format (default: 100) (optional) Response Headers • Content-Type – text/plain application/json Status Codes • 200 OK – successful operation (returns Log) • 404 Not Found – log not found Referenced Data Models • Log (Section 8.2.3) GET /system Get system information on sensor. Template request GET /api/v1/system HTTP/1.
• Content-Type – application/json Status Codes • 200 OK – successful operation (returns SysInfo) Referenced Data Models • SysInfo (Section 8.2.3) GET /system/license Get information about licenses installed on sensor. Template request GET /api/v1/system/license HTTP/1.1 Host: Sample response HTTP/1.
• 200 OK – successful operation • 400 Bad Request – not a valid license PUT /system/reboot Reboot the sensor. Template request PUT /api/v1/system/reboot HTTP/1.1 Host: Status Codes • 200 OK – successful operation GET /system/rollback Get information about currently active and inactive firmware/system images on sensor. Template request GET /api/v1/system/rollback HTTP/1.1 Host: Sample response HTTP/1.
• 400 Bad Request – already set to use inactive partition on next boot GET /system/update Get information about currently active and inactive firmware/system images on sensor. Template request GET /api/v1/system/update HTTP/1.1 Host: Sample response HTTP/1.1 200 OK Content-Type: application/json { "active_image": { "image_version": "rc_visard_v1.1.0" }, "fallback_booted": false, "inactive_image": { "image_version": "rc_visard_v1.0.
8.2.3 Data type definitions The REST-API defines the following data models, which are used to access or modify the available resources (Section 8.2.2) either as required attributes/parameters of the requests or as return types. FirmwareInfo: Information about currently active and inactive firmware images, and what image is/will be booted.
• svo (boolean) - visual odometry component Template object { "calibration": false, "fusion": false, "hand_eye_calibration": false, "rectification": false, "self_calibration": false, "slam": false, "stereo": false, "svo": false } LicenseComponents objects are nested in LicenseInfo. LicenseInfo: Information about the currently applied software license on the sensor.
(continued from previous page) }, { "component": "string", "level": "string", "message": "string", "timestamp": 0 } ], "name": "string", "size": 0 } Log objects are used in the following requests: • GET /logs/{log} LogEntry: Representation of a single log entry in a log file.
• services (array of string) - list of the services this node offers • status (string) - status of the node (one of unknown, down, stale, running) Template object { "name": "string", "parameters": [ "string", "string" ], "services": [ "string", "string" ], "status": "string" } NodeInfo objects are used in the following requests: • GET /nodes • GET /nodes/{node} NodeStatus: Detailed current status of the node including run-time statistics.
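When monitoring a sensor, the GET /nodes response can be scanned for nodes that are not in the running state. A sketch over NodeInfo-shaped dictionaries (function name assumed; sample data abbreviated from the earlier response):

```python
def nodes_not_running(nodes):
    """Names of nodes whose status is not 'running'."""
    return [n["name"] for n in nodes if n["status"] != "running"]

# Abbreviated NodeInfo objects as returned by GET /nodes.
nodes = [
    {"name": "rc_stereocamera", "parameters": ["fps"], "services": [], "status": "running"},
    {"name": "rc_stereovisodo", "parameters": ["disprange"], "services": [], "status": "stale"},
]
```
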
• default (type not defined) - the parameter’s default value • description (string) - description of the parameter • max (type not defined) - maximum value this parameter can be assigned to • min (type not defined) - minimum value this parameter can be assigned to • name (string) - name of the parameter • type (string) - the parameter’s primitive type represented as string (one of bool, int8, uint8, int16, uint16, int32, uint32, int64, uint64, float32, float64, string) • value (type not defined) - the param
• name (string) - name of the service • response (ServiceResponse) - see description of ServiceResponse Template object { "args": {}, "description": "string", "name": "string", "response": {} } Service objects are used in the following requests: • GET /nodes/{node}/services • GET /nodes/{node}/services/{service} • PUT /nodes/{node}/services/{service} ServiceArgs: Arguments required to call a service. The general representation of these arguments is a (nested) dictionary.
StreamType: Description of a data stream’s protocol. An object of type StreamType has the following properties: • protobuf (string) - type of data-serialization, i.e. name of protobuf message definition • protocol (string) - network protocol of the stream [UDP] Template object { "protobuf": "string", "protocol": "string" } StreamType objects are nested in Stream. SysInfo: System information about the sensor.
(continued from previous page) "serial": "string", "time": 0, "uptime": 0 } SysInfo objects are used in the following requests: • GET /system Template: rc_silhouettematch template An object of type Template has the following properties: • id (string) - Unique identifier of the template Template object { "id": "string" } Template objects are used in the following requests: • GET /nodes/rc_silhouettematch/templates • GET /nodes/rc_silhouettematch/templates/{id} • PUT /nodes/rc_silhouettematch/templates/{id}
Fig. 8.2.1: Initial view of the rc_visard’s Swagger UI with its resources and requests grouped into nodes, datastreams, logs, and system Using this interface, available resources and requests can be explored by clicking on them to expand or collapse them. The following figure shows an example of how to get a node’s current status by filling in the necessary parameter (node name) and clicking the Try it out! button.
Fig. 8.2.2: Result of requesting the rc_stereomatching node’s status Some actions, such as setting parameters or calling services, require more complex parameters to be passed with an HTTP request. The Swagger UI allows developers to explore the attributes required for these actions at run-time, as shown in the next example. In the figure below, the attributes required for the rc_hand_eye_calibration node’s set_pose service are explored by performing a GET request on this resource.
Fig. 8.2.3: The result of the GET request on the set_pose service shows the required arguments for this service call. Users can use this preformatted JSON string as a template for the service arguments to actually call the service:
Fig. 8.2.4: Filling in the arguments of the set_pose service request 8.3 The rc_dynamics interface The rc_dynamics interface offers access to the rc_visard’s dynamic-state estimates (Section 6.3.2) as continuous, real-time data streams. State estimates of all offered types can be configured to be streamed to any host in the network. The data-stream protocol (Section 8.3.3) is agnostic to the operating system and programming language.
Names of the available streams (Table 8.3.): dynamics, dynamics_ins, pose, pose_rt, pose_ins, pose_rt_ins, imu
message Frame {
  optional PoseStamped pose = 1;
  optional string parent    = 2; // Name of the parent frame
  optional string name      = 3; // Name of the frame
  optional string producer  = 4; // Name of the producer of this data
}

The producer field can take the values ins, slam, rt_ins, and rt_slam, indicating whether the data was computed by SLAM or Stereo INS, and whether it is real-time (rt) or not. • The real-time dynamics stream (Section 6.3.
message Pose {
  optional Vector3d position      = 1; // Position in meters
  optional Quaternion orientation = 2; // Orientation as unit quaternion
  repeated double covariance      = 3 [packed=true]; // Row-major representation of the
                                                     // 6x6 covariance matrix (x, y, z,
                                                     // rotation about X axis, rotation
                                                     // about Y axis, rotation about Z axis)
}

message Time {
  /// \brief Seconds
  optional int64 sec  = 1;
  /// \brief Nanoseconds
  optional int32 nsec = 2;
}

message Quaternion {
  optional double x
  optional double y
  optional double z
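The Time message splits a timestamp into seconds and nanoseconds; combining the two fields into a single floating-point Unix time is a one-liner. A sketch independent of any protobuf bindings (the function name is an assumption):

```python
def to_unix_time(sec, nsec):
    """Combine the protobuf Time fields (int64 seconds, int32 nanoseconds)
    into floating-point seconds since the epoch."""
    return sec + nsec * 1e-9
```

Note that a double only has about 16 significant digits, so sub-microsecond precision is lost at current Unix-time magnitudes; keep the sec/nsec pair when full precision matters.
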
The EKI Bridge gives access to the run-time parameters and offered services of all computational nodes described in Software components (Section 6) and Optional software components (Section 7). On the robot controller, the Ethernet connection to the rc_visard is configured using XML configuration files. The EKI XML configuration files of all nodes running on the rc_visard are available for download at: https://doc.rc-visard.com/latest/en/eki.
Note: By default, the XML configuration files use 998 as the flag to notify KRL that the response data record has been received. If this value is already in use, it should be changed in the corresponding XML configuration file. Return code The return_code element consists of a value and a message attribute. As for all other components, a successful request returns a res/return_code/@value of 0. Negative values indicate that the request failed. The error message is contained in res/return_code/@message.
Positions are converted from meters to millimeters, and orientations are converted from quaternions to KUKA ABC angles (in degrees). Note: No other unit conversions are included in the EKI Bridge. All dimensions and 3D coordinates that do not belong to a pose are expected and returned in meters. Arrays: Arrays are identified by adding the child element le (short for “list element”) to the list name.
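The pose conversion described above can be reproduced with the standard quaternion-to-ZYX-Euler formulas (A about Z, B about Y, C about X, in degrees) plus the meter-to-millimeter scaling. This is a sketch of the math, not the Bridge’s actual implementation, and the function names are assumptions:

```python
import math

def quat_to_kuka_abc(x, y, z, w):
    """Unit quaternion -> KUKA ABC angles in degrees (ZYX Euler convention)."""
    a = math.degrees(math.atan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z)))  # about Z
    b = math.degrees(math.asin(max(-1.0, min(1.0, 2 * (w * y - z * x)))))       # about Y
    c = math.degrees(math.atan2(2 * (w * x + y * z), 1 - 2 * (x * x + y * y)))  # about X
    return a, b, c

def pose_to_kuka(position_m, orientation_xyzw):
    """Pose given in meters and a quaternion -> (X, Y, Z) in mm and (A, B, C) in degrees."""
    xyz_mm = tuple(1000.0 * v for v in position_m)
    return xyz_mm, quat_to_kuka_abc(*orientation_xyzw)
```

For instance, a 90° rotation about Z (quaternion x=y=0, z=w=√2/2) maps to A=90°, B=C=0°.
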
(continued from previous page)
<ELEMENT Tag="req/node/" Type="STRING"/>
<ELEMENT Tag="req/service/" Type="STRING"/>
<ELEMENT Tag="req/args/" Type=""/>
<ELEMENT Tag="req/end_of_request" Type="BOOL"/>

The element includes a child XML element that is used by the EKI Bridge to identify the target service from the XML telegram. The service name is already included in the configuration file.
(continued from previous page) This telegram can be sent from KRL by specifying req as the path for EKI_Send:

DECL EKI_STATUS RET
RET = EKI_SetString("rc_itempick-get_load_carriers", "req/args/load_carrier_ids/le", "load_carrier2")
RET = EKI_Send("rc_itempick-get_load_carriers", "req")

Response XML structure The element in the XML configuration file for a generic service follows the specification below:
(continued from previous page)

DECL FRAME pose = {X 0.0, Y 0.0, Z 0.0, A 0.0, B 0.0, C 0.0}
RET = EKI_CheckBuffer("rc_april_tag_detect-detect", "res/tags/le/pose")
num_instances = RET.
DECL EKI_STATUS RET
RET = EKI_Send("rc_stereomatching-parameters", "req")

The response from the EKI Bridge contains all parameters:
Note: The rc_visard does not have a backup battery for its real-time clock and hence does not retain the time across power cycles. At power-up, the system time starts in the year 2000 and is then set automatically via NTP if a server can be found. The current system time as well as the NTP and PTP status can be queried via the REST-API (Section 8.2) and seen on the Web GUI’s System tab (Section 4.5).
9 Maintenance Warning: The customer does not need to open the rc_visard’s housing to perform maintenance. Unauthorized opening will void the warranty. 9.1 Lens cleaning Glass lenses with antireflective coating are used to reduce glare. Please take special care when cleaning the lenses. To clean them, use a soft lens-cleaning brush to remove dust or dirt particles.
If a new firmware update is available for your rc_visard device, the respective file can be downloaded to a local computer from http://www.roboception.com/download. Step 2: Upload the update file. To update via the rc_visard’s REST-API, users may refer to the POST /system/update request. To update the firmware via the Web GUI, locate the Software Update row on the System tab and press the Upload Update button (see Fig. 9.3.1). Select the desired update image file (file extension .
Warning: Do not close the web browser tab containing the Web GUI and do not reload this tab, because doing so aborts the update procedure. In that case, repeat the update procedure from the beginning. Step 3: Reboot the rc_visard. To apply a firmware update to the rc_visard device, a reboot is required after the new image version has been uploaded. Note: The new image version is uploaded to the inactive partition of the rc_visard.
9.7 Downloading log files During operation, the rc_visard logs important information, warnings, and errors to files. If the rc_visard exhibits unexpected or erroneous behavior, the log files can be used to trace its origin. Log messages can be viewed and filtered using the Web GUI’s Logs tab (Section 4.5). When contacting support (Contact, Section 12), the log files are very useful for tracking down possible problems. To download them as a .tar.gz file, click Download all logs on the Web GUI’s Logs tab.
10 Accessories 10.1 Connectivity kit Roboception offers an optional connectivity kit to aid customers in setting up the rc_visard. It consists of: • a network cable with straight M12 plug to straight RJ45 connector, in either 2 m or 5 m length; • a power adapter cable with straight M12 socket to DC barrel connector, 30 cm in length; • a 24 V, 30 W desktop power supply. Connecting the rc_visard to residential or office grid power requires a power supply that meets EN 55011 Class B emission standards.
• Straight M12 plug to straight RJ45 connector, 10 m length: MURR Electronics Art.-Nr.: 7700-48521S4W1000 • Angled M12 plug to straight RJ45 connector, 10 m length: MURR Electronics Art.-Nr.: 7700-48551S4W1000 10.2.2 Power connections An 8-pin A-coded M12 plug connector is provided for power and GPIO connectivity. Various cabling solutions can be obtained from third party vendors. A selection of M12 to open ended cables is provided below.
11 Troubleshooting 11.1 LED colors During the boot process, the LED changes color several times to indicate the boot stage:

Table 11.1.1: LED color codes
LED color – Boot stage
white – power supply OK
yellow – normal boot process in progress
purple, blue – intermediate boot stages
green – boot complete, rc_visard ready

The LED also signals warning and error states to support the user during troubleshooting. Table 11.1.
The sensor may slow down processing when cooling is insufficient or the ambient temperature exceeds the specified range. Reliability issues and/or mechanical damage This may indicate that ambient conditions (vibration, shock, resonance, and temperature) are outside the specified range. Please refer to the specification of environmental conditions (Section 3.3.1). • Operating the rc_visard outside the specified ambient conditions might lead to damage and will void the warranty.
• If the rc_visard is in manual exposure mode, increase the exposure time (see Parameters, Section 6.1.3), or • switch to auto-exposure mode (see Parameters, Section 6.1.3). The camera image is too noisy Large gain factors cause high-amplitude image noise. To decrease the image noise, • use an additional light source to increase the scene’s light intensity, or • choose a greater maximum auto-exposure time (see Parameters, Section 6.1.3).
The disparity images’ frame rate is too low • Check and increase the frame rate of the camera images (see Parameters, Section 6.1.3). The frame rate of the disparity image cannot exceed the frame rate of the camera images. • Choose a lower Disparity Image Quality (Section 6.2.4). High-resolution disparity images are only available at about 3 Hz. The full 25 Hz can only be achieved for low-resolution disparity images, as described in the technical specifications (Section 3.2.1).
The state estimates are too noisy • Adapt the parameters for visual odometry as described in Parameters (Section 6.4.1). • Check whether the camera pose stream has enough accuracy. Pose estimation has jumps • Has the SLAM component been turned on? SLAM can cause jumps when reducing errors due to a loop closure. • Adapt the parameters for visual odometry as described in Parameters (Section 6.4.1). Pose frequency is too low • Use the real-time pose stream with a 200 Hz update rate. See Stereo INS (Section 6.
12 Contact 12.1 Support For support issues, please see http://www.roboception.com/support or contact support@roboception.de. 12.2 Downloads Software SDKs, etc. can be downloaded from http://www.roboception.com/download. 12.3 Address Roboception GmbH Kaflerstrasse 2 81241 Munich Germany Web: http://www.roboception.com Email: info@roboception.
13 Appendix 13.1 Pose formats 13.1.1 XYZABC format The XYZABC format expresses a pose by 6 values: XYZ is the position in millimeters, and ABC are Euler angles in degrees. The Euler-angle convention is ZYX, i.e., A rotates around the Z axis, B rotates around the Y axis, and C rotates around the X axis.
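The ZYX convention above means the full rotation is composed as R = Rz(A)·Ry(B)·Rx(C). The following sketch (plain Python, no NumPy; function names are assumptions) makes the composition order explicit:

```python
import math

def rot_z(deg):
    r = math.radians(deg); c, s = math.cos(r), math.sin(r)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def rot_y(deg):
    r = math.radians(deg); c, s = math.cos(r), math.sin(r)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_x(deg):
    r = math.radians(deg); c, s = math.cos(r), math.sin(r)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def matmul(p, q):
    """3x3 matrix product."""
    return [[sum(p[i][k] * q[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def xyzabc_rotation(a, b, c):
    """Rotation matrix of the ABC angles: R = Rz(A) @ Ry(B) @ Rx(C)."""
    return matmul(rot_z(a), matmul(rot_y(b), rot_x(c)))
```

For example, A=90°, B=C=0° rotates the x axis onto the y axis, as expected for a rotation about Z.
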
Note: In XYZ+quaternion format, the pose is defined in meters, whereas in the XYZABC format, the pose is defined in millimeters. 13.1.