The Raspberry Pi 5 is a versatile platform for embedded and edge computing applications. When running computer vision models, however, the single-board computer can fall short on accuracy, speed, or both. To address this, Raspberry Pi released the Raspberry Pi AI Kit and the Raspberry Pi AI HAT+, which add a dedicated neural network accelerator.
This additional hardware enables the Raspberry Pi 5 to run models like YOLO with live camera streams while keeping the computer's compact form factor.
Raspberry Pi recommends the active cooler for the best performance.
The Raspberry Pi AI Kit may still be for sale, but it has been superseded by the AI HAT+, which is why the AI HAT+ is recommended here. Either the 13 TOPS or 26 TOPS version will work for this codelab.
Follow the official documentation for instructions about installing the AI HAT+ on the Raspberry Pi 5.
See a demonstration of the AI HAT+ object detection.
The Raspberry Pi boots from a microSD card. You need to install Raspberry Pi OS on the microSD card that you will use with your Pi. For more details about alternative methods of setting up your Raspberry Pi, refer to the Viam docs.
In the Raspberry Pi Imager's OS customization settings, check Set hostname and enter a hostname, for example test.pi. Check Set username and password; avoid the default username pi (not recommended for security reasons), and specify a password. You will use these credentials later to connect to the Pi and install viam-server. To set up the Pi wirelessly, check Configure wireless LAN and enter your wireless network credentials. SSID (short for Service Set Identifier) is your Wi-Fi network name, and the password is the network password. Change the Wireless LAN country to where your router is currently being operated. Select YES to apply the OS customization settings, then confirm YES to erase the data on the microSD card. You may also be prompted by your operating system to enter an administrator password. After granting permissions to the Imager, it will begin writing and then verifying the Linux installation.
Once the Pi has booted, connect to it over SSH and update the system packages:
ssh <USERNAME>@<HOSTNAME>.local
sudo apt update
sudo apt upgrade
With the OS installed, it's time to set up the system packages for the AI HAT+.
Connect to the Pi with SSH, if you're not still connected from the previous step.
ssh <USERNAME>@<HOSTNAME>.local
sudo raspi-config
Select Advanced Options, then PCIe Speed. Choose Yes to enable PCIe Gen 3 mode, then select Finish to exit the configuration interface and reboot:
sudo reboot
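After rebooting, you can optionally sanity-check that the link actually negotiated Gen 3 speed. This is a hedged sketch, not part of the official setup: it assumes the accelerator appears at PCI address 0000:01:00.0 (as in the `hailortcli` output shown later in this guide) and reads the standard Linux sysfs attribute `current_link_speed`:

```python
from pathlib import Path

def pcie_link_speed(bdf: str = "0000:01:00.0",
                    sysfs_root: str = "/sys/bus/pci/devices") -> str:
    """Read the negotiated PCIe link speed for a device from sysfs.

    PCIe Gen 3 negotiates '8.0 GT/s PCIe'; if you still see '5.0 GT/s PCIe'
    (Gen 2), the raspi-config change did not take effect.
    """
    return (Path(sysfs_root) / bdf / "current_link_speed").read_text().strip()

# On the Pi, something like: pcie_link_speed()  ->  '8.0 GT/s PCIe'
```

The device address (`bdf`) may differ on your system; `ls /sys/bus/pci/devices` lists the available ones.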
Install the hailo-all package, which contains the firmware, device drivers, and processing libraries for the AI HAT+:
sudo apt install -y hailo-all
This may take a few minutes depending on your network speed.
sudo reboot
Once the Pi is back up, verify that the accelerator is detected:
hailortcli fw-control identify
You should see output similar to the following:
Executing on device: 0000:01:00.0
Identifying board
Control Protocol Version: 2
Firmware Version: 4.17.0 (release,app,extended context switch buffer)
Logger Version: 0
Board Name: Hailo-8
Device Architecture: HAILO8L
Serial Number: HLDDLBB234500054
Part Number: HM21LB1C2LAE
Product Name: HAILO-8L AI ACC M.2 B+M KEY MODULE EXT TMP
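If you want to check the reported firmware version or device architecture from a script, the `Key: Value` lines in this output are easy to parse. A minimal sketch (the field names come from the sample output above, not from any official API):

```python
def parse_identify(output: str) -> dict[str, str]:
    """Parse 'Key: Value' lines from `hailortcli fw-control identify` output."""
    info = {}
    for line in output.splitlines():
        # partition() splits on the first ':' only, so values that contain
        # colons (like the PCI address) stay intact.
        key, sep, value = line.partition(":")
        if sep and value.strip():
            info[key.strip()] = value.strip()
    return info

# parse_identify(output)["Device Architecture"]  ->  "HAILO8L"
```

Lines without a `Key: Value` shape, such as "Identifying board", are simply skipped.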
Now that we have physically connected our hardware components, place them in a spot with a good view of the traffic. In the next section, we'll configure our machine.
To install viam-server on the Raspberry Pi device that you want to use, select the Linux / Aarch64 platform for the Raspberry Pi, and leave your installation method as viam-agent. viam-agent will download and install viam-server on your Raspberry Pi. Follow the instructions to run the command provided in the setup instructions from the SSH prompt of your Raspberry Pi.
Add a new component of type camera and find the webcam module, which adds support for working with a USB webcam. Leave the default name camera-1 for now. From the Attributes section of the camera-1 panel, select a video_path. Then expand the TEST section of the camera-1 panel to ensure you have configured the camera properly and can see a video feed.
Add a new service of type vision and find the hailo-rt module, which adds support for the Hailo Runtime used by the AI HAT+. Select "Add module" and leave the default name vision-1 for now. From the Depends on section of the vision-1 panel, select camera-1 from the "Search resources" dropdown. Then expand the TEST section of the vision-1 panel to ensure you have configured the service properly and can see images from camera-1 with object detection boxes on top.
Add a new component of type sensor and find the detections module, which adds support for capturing object detection data from a vision service. Leave the default name sensor-1 for now. From the Attributes section of the sensor-1 panel, add the following JSON configuration:
{
  "camera": "camera-1",
  "detector": "vision-1",
  "labels": ["car", "bus", "person"]
}
Also add the data management service, leaving the default name data_manager-1. Then expand the TEST section of the sensor-1 panel to ensure you have configured the sensor properly and see a list of the configured labels with the number of detections refreshed regularly. With all the components and services in place, you can move on to creating a live tele-operations dashboard for your machine!
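To make the sensor's behavior concrete: it essentially tallies the vision service's detections per configured label on each reading. The following is a standalone sketch of that logic, not the `detections` module's actual implementation; detections are simplified here to (label, confidence) tuples, and the confidence threshold is an assumption:

```python
def count_detections(detections, labels, min_confidence=0.5):
    """Tally detections per configured label, mirroring the sensor's readings shape."""
    readings = {label: 0 for label in labels}
    for label, confidence in detections:
        # Ignore labels we aren't tracking and low-confidence detections.
        if label in readings and confidence >= min_confidence:
            readings[label] += 1
    return readings

detections = [("car", 0.91), ("car", 0.84), ("person", 0.67), ("dog", 0.88)]
print(count_detections(detections, ["car", "bus", "person"]))
# → {'car': 2, 'bus': 0, 'person': 1}
```

Note that labels with zero detections still appear in the readings, which is what lets the dashboard plot a continuous time series for each one.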
This step walks through how to use the teleop (or tele-operations) feature of the Viam app.
Add a Camera widget. Select camera-1 from the "Camera name" field and keep the "Refresh type" as "Live".
Add a Time series widget. Set the "Title" to "Traffic" and the "Time range (min)" to 30. Choose sensor-1 for the "Resource name", Readings for the "Capture method", cars for the "Title", and readings.car for the "Path". Repeat the same steps for bus and person.
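A "Path" value like `readings.car` is a dot-separated lookup into each captured reading document. This sketch shows conceptually how such a path resolves (the dashboard's real implementation is Viam's; the sample document shape is an assumption for illustration):

```python
def resolve_path(document: dict, path: str):
    """Follow a dot-separated path like 'readings.car' through nested dicts."""
    value = document
    for key in path.split("."):
        value = value[key]
    return value

# A hypothetical captured reading from sensor-1:
capture = {"readings": {"car": 4, "bus": 1, "person": 2}}
print(resolve_path(capture, "readings.car"))  # → 4
```

Each Time series entry plots the value at its configured path over the selected time range.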
At this point, you have created an edge device that can perform real-time object detection, and you can monitor it remotely from anywhere! You can keep building on this project with additional features:
The default YOLO-based models can do more than detect traffic! You can find the full list of identifiable objects included in the module repo: https://github.com/HipsterBrown/viam-pi-hailo-ml/
If none of those items suit your project requirements, you can explore the full suite of model types in the Hailo Model Zoo or retrain a model on a custom dataset.
In addition to the project ideas mentioned above, consider other ways to continue your journey with Viam.