OAK-D Lite Camera — ROS2 Setup

RoboFoundry
May 25, 2022

In the previous article we covered the basic setup of the OAK-D Lite camera: hooking it up to a computer and running the DepthAI GUI app to view the data coming from the camera and apply various models to it.

In this article, we are going to hook it up to a Raspberry Pi 4 based robot and set up all the software on the Raspberry Pi so it can publish its point cloud and other camera data on ROS2 topics.

For reference, I’m running Ubuntu 20.04 and ROS2 Foxy on Raspberry Pi 4 4GB version.

Raspberry Pi Setup

First, on the Raspberry Pi, plug in the USB-C cable from the camera to one of the USB ports on the RPi.

Follow the steps on the Luxonis depthai-ros GitHub page.
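If the camera is not detected over USB, the Luxonis documentation has you install a udev rule granting non-root access to the device (Movidius vendor ID 03e7). A sketch of that step, taken from their docs:

```shell
# Allow non-root access to the OAK device (Movidius vendor ID 03e7)
echo 'SUBSYSTEM=="usb", ATTRS{idVendor}=="03e7", MODE="0666"' | sudo tee /etc/udev/rules.d/80-movidius.rules
sudo udevadm control --reload-rules && sudo udevadm trigger
```

Unplug and replug the camera after adding the rule so it takes effect.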

They have two separate repos; the other one, specific to the OAK-D Lite, is here. I tried to set that one up first, but it failed to compile on the first attempt, possibly because I forgot to run rosdep install on the workspace. The depthai repo, however, worked fine the first time for me. Later, when I went back to the oak-d-lite repo and reran rosdep and the build, it compiled properly as well. Note that the depthai repo includes several more launch files and RViz config files than the oak-d-lite repo.

Here are the steps I followed for ROS2 Foxy. They may differ based on your ROS distro and environment, so you may have to tweak them.

Install the prerequisites.

sudo wget -qO- https://raw.githubusercontent.com/luxonis/depthai-ros/main/install_dependencies.sh | sudo bash
sudo apt install libopencv-dev
# if you don't have rosdep installed and initialized, execute the following steps:
sudo apt install python-rosdep   # (melodic) or: sudo apt install python3-rosdep
sudo rosdep init
rosdep update
# you will need this to import external repos
sudo apt install python3-vcstool

Install the main repos and build the code

# The following setup procedure assumes you have cmake version >= 3.10.2 and OpenCV version >= 4.0.0
mkdir -p luxonis_depthai_ws/src
cd luxonis_depthai_ws
wget https://raw.githubusercontent.com/luxonis/depthai-ros/main/underlay.repos
vcs import src < underlay.repos
rosdep install --from-paths src --ignore-src -r -y
source /opt/ros/foxy/setup.bash
colcon build
source install/setup.bash

It will take almost 10 minutes to build everything, so be patient. Once everything compiles and you have sourced the workspace with the last command above, your ROS2 packages are ready to run.

Launch the example node [note: there is a small typo on their GitHub page, where the name of the launch file is incorrect].

ros2 launch depthai_examples stereo.launch.py

You may see some errors about RViz2 failing to launch. Since I am running the RPi headless there are no GUI apps, and even if they were installed they would not work unless I set up display forwarding, which I have not done since there is no need. You can ignore those errors for now and run the following two commands to make sure everything works as expected.

ros2 topic list
# this will show output like this:
/camera/left/image_raw
/camera/right/image_raw
/clicked_point
/initialpose
/left/camera_info
/move_base_simple/goal
/parameter_events
/right/camera_info
/right/image_rect
/robot_description
/rosout
/stereo/points
/tf
/tf_static

You can echo the topic to see if the data is actually being published.

ros2 topic echo /stereo/points
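The echo output for a PointCloud2 message is mostly a long raw data array, which is hard to read by eye. If you want to sanity-check the actual coordinates, the byte buffer can be decoded with a small sketch like the one below. This is a minimal illustration, not part of the Luxonis setup: it assumes little-endian float32 x/y/z fields at offsets 0/4/8, which is a common PointCloud2 layout; check the fields array of the actual message to confirm the offsets and point_step for your camera. The unpack_xyz helper name is my own.

```python
import struct

def unpack_xyz(data: bytes, point_step: int, x_off=0, y_off=4, z_off=8):
    """Yield (x, y, z) tuples from a PointCloud2-style byte buffer.

    Assumes little-endian float32 x/y/z fields at the given offsets;
    verify against the message's 'fields' array before relying on this.
    """
    for base in range(0, len(data), point_step):
        x = struct.unpack_from('<f', data, base + x_off)[0]
        y = struct.unpack_from('<f', data, base + y_off)[0]
        z = struct.unpack_from('<f', data, base + z_off)[0]
        yield (x, y, z)

# Synthetic two-point buffer with a 16-byte stride (x, y, z + 4 bytes padding)
buf = struct.pack('<fff4x', 1.0, 2.0, 3.0) + struct.pack('<fff4x', 4.0, 5.0, 6.0)
points = list(unpack_xyz(buf, point_step=16))
print(points)  # [(1.0, 2.0, 3.0), (4.0, 5.0, 6.0)]
```

On the real topic you would pass in msg.data and msg.point_step from the received sensor_msgs/PointCloud2 message.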

Host Machine

Once everything is confirmed on the RPi, go to your host machine [laptop/desktop] where the ROS2 GUI tools are installed and launch RViz2. You can either clone the GitHub repo there as well, or open the RViz config file directly from the RPi if you have network access to browse its files [I have set up Samba file sharing to be able to do that]. Navigate to src/luxonis/depthai-ros-examples/depthai_examples/rviz/ and open the stereoPointCloud.rviz file in RViz2.

You should see something like this on your RViz2 screen, with the camera image stream and point cloud data showing up. The point cloud may render very small; if you want, you can increase the point size from 0.01 to 0.05 to see it better.

That’s pretty much it: you are now listening to the data published by the OAK-D Lite camera over a ROS2 topic on another machine. Overall, the entire end-to-end setup took me less than an hour.

Next, you can configure your robot to use this point cloud data with Nav2, using some of the examples out there. That’s a topic for another day and beyond the scope of this article.

I’m really impressed with the camera, its capabilities, and the software Luxonis has provided. It is a real pleasure to see everything work in one shot after following the instructions. Kudos to the OAK-D Lite product team on great execution!

References

luxonis/depthai-ros (github.com)

iftahnaf/ros2_oak_d_lite: ros2 foxy oak-d-lite camera interface (github.com)
