ONNX Runtime - Object detection with YOLO
This example uses the ONNX Runtime to run object detection with a YOLO model. It differs from the YOLO example in that the latter uses the Ultralytics SDK (PyTorch).
The YOLO ONNX model was obtained by simply exporting the YOLOv8n model to the ONNX format.
The pre-processing and post-processing are implemented with OpenCV and NumPy.
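The shape of that pipeline can be sketched as follows. This is a simplified, NumPy-only illustration, not the actual stage code: the tensor layout (1x3x640x640) is the standard YOLOv8 ONNX export shape, the resizing/letterboxing step (done with OpenCV in the real stage) is omitted, and the `nms` helper is a minimal reimplementation of non-maximum suppression written for this sketch.

```python
import numpy as np

def preprocess(frame: np.ndarray) -> np.ndarray:
    """Turn an HxWx3 uint8 BGR frame into the 1x3xHxW float32 tensor a
    YOLOv8 ONNX export expects (resizing to 640x640 omitted here)."""
    x = frame.astype(np.float32) / 255.0   # scale pixel values to [0, 1]
    x = x.transpose(2, 0, 1)               # HWC -> CHW
    return x[np.newaxis, ...]              # add the batch dimension

def nms(boxes: np.ndarray, scores: np.ndarray, iou_thres: float = 0.45):
    """Minimal non-maximum suppression over xyxy boxes: keep the highest
    scoring box, drop every box that overlaps it too much, repeat."""
    order = scores.argsort()[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # Intersection of the best box with all remaining boxes
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        order = order[1:][iou <= iou_thres]
    return keep

frame = np.zeros((640, 640, 3), dtype=np.uint8)
print(preprocess(frame).shape)  # (1, 3, 640, 640)

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # the two overlapping boxes collapse to one: [0, 2]
```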
Requirements
- Pipeless: check the installation guide.
- The Python OpenCV and NumPy packages. Install them by running:

```shell
pip install opencv-python numpy
```
Run the example
Create an empty Pipeless project
```shell
pipeless init my-project --template empty # Using the empty template we avoid the interactive shell
cd my-project
```
Feel free to replace `my-project` with any name you want.
Download the stage folder
```shell
wget -O - https://github.com/pipeless-ai/pipeless/archive/main.tar.gz | tar -xz --strip=2 "pipeless-main/examples/onnx-yolo"
```
(Optional) If you have CUDA or TensorRT installed, you can enable them in `process.json`:
```json
{
    "runtime": "onnx",
    "model_uri": "https://pipeless-public.s3.eu-west-3.amazonaws.com/yolov8n.onnx",
    "inference_params": {
        "execution_provider": "tensorrt"
    }
}
```
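If you want to sanity-check an edited `process.json` before starting Pipeless, a small stdlib-only script like the one below can catch typos early. The accepted field names are taken from the example config above; the set of execution provider values is an assumption for illustration, not a complete schema:

```python
import json
from pathlib import Path

# Assumed set of execution providers for this check; "tensorrt" and "cuda"
# come from the text above, "cpu" is assumed to be the default fallback.
KNOWN_PROVIDERS = {"tensorrt", "cuda", "cpu"}

def check_config(text: str) -> dict:
    """Parse a process.json payload and assert the fields the example uses."""
    config = json.loads(text)
    assert config["runtime"] == "onnx", "this example targets the onnx runtime"
    assert config["model_uri"].startswith(("http://", "https://", "file://"))
    provider = config["inference_params"]["execution_provider"]
    assert provider in KNOWN_PROVIDERS, f"unexpected provider: {provider}"
    return config

# Validate the exact config shown above (read it from disk in practice,
# e.g. Path("onnx-yolo/process.json").read_text())
sample = """
{
    "runtime": "onnx",
    "model_uri": "https://pipeless-public.s3.eu-west-3.amazonaws.com/yolov8n.onnx",
    "inference_params": {
        "execution_provider": "tensorrt"
    }
}
"""
config = check_config(sample)
print("process.json looks valid:", config["inference_params"]["execution_provider"])
```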
Start Pipeless
The following command leaves Pipeless running in the current terminal:

```shell
pipeless start --stages-dir .
```
Provide a stream
Open a new terminal and run:
```shell
pipeless add stream --input-uri "v4l2" --output-uri "screen" --frame-path "onnx-yolo"
```
This command assumes you have a webcam available; if you don't, just change the input URI (for example, to a `file://` URI pointing at a video file).