OpenCV YOLO object detection with PiCamera and ROS2

RoboFoundry
5 min read · May 15, 2022


In this article I’ll go over my experience of getting YOLO object detection working with ROS2 and a Raspberry Pi Camera. The basic setup is not that complicated; you need the following things to get started:

  1. Raspberry Pi with ROS2 Foxy installed
  2. Raspberry Pi Camera and optional camera mount
  3. Laptop/Desktop running ROS2 Foxy [you don’t really need any fancy GPU; I’m running it on a 2009 Dell Inspiron with an Intel i5]

You’ll need to understand some basics of OpenCV and YOLO object detection; for that, you can refer to the articles in the references section at the bottom.

Follow the instructions here to properly connect your Pi Camera V2 with Raspberry Pi running Ubuntu and configure the RPi to get the basic Camera functionality working.

For this particular exercise, you don’t really need to calibrate your Pi Camera, but if you want to do that you can follow the article here.

Run the following installation commands on RPi

sudo apt install ros-foxy-v4l2-camera
sudo apt install v4l-utils

You can run one of these commands to check that the camera is detected, and test the camera by saving a snapshot image to make sure everything is working OK.

v4l2-ctl --list-devices
# should output something like this:
bcm2835-codec-decode (platform:bcm2835-codec):
/dev/video10
/dev/video11
/dev/video12
/dev/media0
bcm2835-isp (platform:bcm2835-isp):
/dev/video13
/dev/video14
/dev/video15
/dev/video16
/dev/media1
mmal service 16.1 (platform:bcm2835-v4l2):
/dev/video0
# OR
ls /dev/video* -l

Test Camera for image and video [raspistill and raspivid should be already there on your Ubuntu install]

## test it out in standalone mode with raspistill
raspistill -o test.jpg
## or video, you can open the video file using vlc player
raspivid -o testvideo.h264 -t 5000

Now that everything seems to be working for the Pi Camera, let’s launch the v4l2 camera node for ROS2 to start publishing the image stream to a ROS2 topic. If your camera shows up at a different device number, you may have to update the video_device parameter in the command below.

ros2 run v4l2_camera v4l2_camera_node --ros-args --param video_device:="/dev/video0"

If everything comes up without errors [you may see some warnings, but those should be harmless], you should see the topics below being published. In addition, if you launch rqt from a terminal on your laptop, you should be able to see the image stream being published live and see what the camera sees.

ros2 topic list
# you should see the following topics being published
/camera_info
/image_raw
/parameter_events
/rosout

Your rqt screen should look like this if you add the node inspector and image viewer. If you move your Pi Camera around, you should see the live image stream changing.

At this point we have done all we can from RPi, which is to publish the image stream from Pi Camera over a ROS2 topic named /image_raw.

Now let’s set up the laptop/desktop computer which will run the object detection software. Run the following commands on your laptop/desktop, or even in a docker setup.

The main ROS2 project we are going to use is the darknet_ros repository on GitHub; just make sure you get the foxy branch for ROS2.

mkdir -p ~/darknet_ros2_ws/src
cd ~/darknet_ros2_ws/src
# the following is a single-line command that may appear wrapped in the blog, so make sure you copy the entire command at once and paste it
git clone -b foxy --recursive git@github.com:leggedrobotics/darknet_ros.git
# change to workspace root folder
cd ~/darknet_ros2_ws

Before we can build the workspace, we need to make a few changes to the configuration file to make sure the darknet_ros node is listening to the /image_raw topic we are publishing from RPi.

To do that, open the ~/darknet_ros2_ws folder in Visual Studio Code, open the src/darknet_ros/darknet_ros/config/ros.yaml file, and make sure the first 6 lines look like this; the rest of it is unchanged.

ros.yaml
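The ros.yaml screenshot from the original post is not reproduced here. As a sketch of what the top of that file typically looks like in the foxy branch (the exact layout may differ in your checkout; the only change needed is pointing the subscriber topic at /image_raw instead of the default), it should read roughly like this:

```yaml
subscribers:

  camera_reading:
    topic: /image_raw
    queue_size: 1
```

Leave queue_size and everything below these lines as they came in the repository.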

Now we can build the workspace and run the darknet_ros node.

# build workspace
colcon build --cmake-args " -DCMAKE_BUILD_TYPE=Release"
# source workspace
source install/setup.bash
# run the darknet_ros node to start doing the object detection; the following command should also launch an OpenCV window
ros2 launch darknet_ros darknet_ros.launch.py

Once you launch the darknet_ros node, you should see an OpenCV window open up with a YOLO V3 title. If you point the camera at various objects in your room, it should start detecting them, drawing bounding boxes with confidence levels around the detected objects like this. Just keep in mind the YOLO V3 window doesn’t always pop up on top and may take a few seconds to appear, so you may have to click on it in your taskbar to bring it to the front.

It will also publish the detection data on ROS2 topics, so other nodes can subscribe to these messages and do something useful with that information.

ros2 topic list
# should show something like this
/camera_info
/darknet_ros/bounding_boxes
/darknet_ros/detection_image
/darknet_ros/found_object
/image_raw
/parameter_events
/rosout
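To act on the detections in your own node, you would subscribe to /darknet_ros/bounding_boxes. As a hedged sketch of the post-processing logic such a subscriber might apply (the field names class_id, probability, and xmin/ymin/xmax/ymax follow the darknet_ros_msgs/BoundingBox message as I understand it — verify them against the message definition in your checkout), here it is on plain dictionaries so it runs without ROS2 installed:

```python
# Sketch of post-processing for darknet_ros bounding boxes.
# Field names (class_id, probability, xmin/ymin/xmax/ymax) are assumed from
# darknet_ros_msgs/BoundingBox; check your installed message definition.

def filter_detections(boxes, min_probability=0.5):
    """Keep detections at or above the confidence threshold,
    sorted by probability, highest first."""
    kept = [b for b in boxes if b["probability"] >= min_probability]
    return sorted(kept, key=lambda b: b["probability"], reverse=True)

def box_area(box):
    """Pixel area of a bounding box (a rough proximity cue)."""
    return (box["xmax"] - box["xmin"]) * (box["ymax"] - box["ymin"])

if __name__ == "__main__":
    # Example payload mimicking one /darknet_ros/bounding_boxes message
    detections = [
        {"class_id": "person", "probability": 0.91,
         "xmin": 10, "ymin": 20, "xmax": 110, "ymax": 220},
        {"class_id": "chair", "probability": 0.42,
         "xmin": 200, "ymin": 50, "xmax": 260, "ymax": 150},
        {"class_id": "dog", "probability": 0.77,
         "xmin": 30, "ymin": 40, "xmax": 90, "ymax": 120},
    ]
    for det in filter_detections(detections):
        print(det["class_id"], det["probability"], box_area(det))
```

In a real node you would wrap this in an rclpy subscription callback on /darknet_ros/bounding_boxes and read the same fields off each message instead of a dictionary.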

If you examine the node graph in rqt after launching the darknet_ros node, it should look like this. Looking at it, it should be pretty clear that the v4l2_camera node on your RPi is publishing /image_raw, which is being subscribed to and processed by the darknet_ros node.

Hope this helps someone else get the object detection working with their Pi Camera. Enjoy!!

References
