How to Deploy YOLOv8 on NVIDIA Jetson Orin for Real-Time Object Detection

YOLOv8 is one of the most widely used object detection models in the world, and the NVIDIA Jetson Orin Nano is one of the most capable low-power edge AI boards you can buy. Together, they let you run real-time AI vision — detecting, classifying, and tracking objects in live video — without sending a single byte to the cloud. This guide shows you exactly how to make them work together.

What is YOLOv8?

YOLO stands for You Only Look Once. Unlike older detection systems that scan an image multiple times, YOLO processes the entire frame in a single pass through a neural network — making it extraordinarily fast. YOLOv8, released by Ultralytics in 2023, improved on its predecessors with a new anchor-free detection head, better accuracy, and a clean Python API that makes deployment on edge hardware much simpler.
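To make "anchor-free" concrete: instead of regressing offsets against preset anchor boxes, the head predicts, for each grid cell, the distances from the cell center to the four box edges. The toy sketch below illustrates that decoding step (a simplified illustration, not Ultralytics' actual implementation):

```python
def decode_anchor_free(cx, cy, l, t, r, b, stride):
    """Decode one anchor-free prediction into pixel-space box corners.

    cx, cy  -- grid-cell center (in grid units)
    l,t,r,b -- predicted distances to the left/top/right/bottom edges
    stride  -- downsampling factor of this feature map (e.g. 8, 16, 32)
    """
    x1 = (cx - l) * stride
    y1 = (cy - t) * stride
    x2 = (cx + r) * stride
    y2 = (cy + b) * stride
    return x1, y1, x2, y2

# A cell centered at (10.5, 6.5) on a stride-16 feature map:
print(decode_anchor_free(10.5, 6.5, 1.0, 1.5, 1.0, 1.5, 16))
# (152.0, 80.0, 184.0, 128.0)
```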

YOLOv8 supports five tasks out of the box: object detection, instance segmentation, pose estimation, classification, and oriented bounding boxes — all from a single framework.
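Each task uses the same Python API, just different pretrained weights. The mapping below reflects Ultralytics' standard checkpoint naming for the nano variant:

```python
# Ultralytics checkpoint names for each YOLOv8 task (nano "n" variant shown).
# Swap the size letter (n/s/m/l/x) to trade speed for accuracy.
TASK_WEIGHTS = {
    "detect":   "yolov8n.pt",       # object detection (80 COCO classes)
    "segment":  "yolov8n-seg.pt",   # instance segmentation
    "pose":     "yolov8n-pose.pt",  # keypoint / pose estimation
    "classify": "yolov8n-cls.pt",   # whole-image classification
    "obb":      "yolov8n-obb.pt",   # oriented bounding boxes
}

print(TASK_WEIGHTS["pose"])  # yolov8n-pose.pt
```

So `YOLO(TASK_WEIGHTS["segment"])` would load the segmentation model with the same API shown in the detection examples below.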

Why deploy on NVIDIA Jetson instead of the cloud?

  • Low latency: inference completes on-device in tens of milliseconds (under 20ms for YOLOv8n with TensorRT) — no round trip to a server
  • No internet required: works in factories, fields, aircraft, and remote locations
  • Privacy: video never leaves your device
  • Cost: no per-image API fees — run it 24/7 for free after purchase
  • Power: Jetson Orin Nano uses as little as 7W — a fraction of a cloud GPU

What you need

  • NVIDIA Jetson Orin Nano Developer Kit (8GB recommended)
  • MicroSD card (64GB+) or NVMe SSD
  • USB camera or CSI-2 camera module
  • JetPack 6.x flashed via NVIDIA SDK Manager
  • Python 3.10 and pip

Step 1 — Flash JetPack 6.x

Download NVIDIA SDK Manager on your host PC and flash JetPack 6.x onto your Jetson. This installs Ubuntu 22.04, CUDA 12.x, cuDNN, TensorRT 10.x, and DeepStream in one step. After flashing, run:

sudo apt update && sudo apt upgrade -y
sudo apt install python3-pip python3-dev -y

Step 2 — Install Ultralytics YOLOv8

pip install ultralytics

Note: the default PyPI wheels for torch and torchvision (including the cu121 index) are built for x86_64 and will not give you CUDA acceleration on the Jetson's ARM CPU. Install torch and torchvision from NVIDIA's JetPack-matched wheels instead (see NVIDIA's "PyTorch for Jetson" page) — otherwise YOLOv8 silently falls back to CPU inference.

Verify the install works by running a quick detection on a test image:

from ultralytics import YOLO
model = YOLO("yolov8n.pt")
results = model("https://ultralytics.com/images/bus.jpg")
results[0].show()

Step 3 — Export to TensorRT for maximum speed

Running YOLOv8 in PyTorch mode gives around 8–12 FPS on the Jetson Orin Nano. Exporting to TensorRT FP16 lifts this to roughly 35–60 FPS — a 4–5× speedup with a negligible accuracy drop (about 0.2 mAP for YOLOv8n, as the benchmarks below show).
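In per-frame terms the speedup is easy to quantify: FPS is the reciprocal of latency, so 10 FPS means 100 ms per frame while 55 FPS is about 18 ms. A quick back-of-the-envelope check:

```python
def latency_ms(fps):
    """Per-frame latency in milliseconds for a given throughput."""
    return 1000.0 / fps

pytorch_fps, trt_fps = 10, 55  # rough Orin Nano numbers from the benchmarks below
print(f"PyTorch : {latency_ms(pytorch_fps):.1f} ms/frame")   # 100.0 ms/frame
print(f"TensorRT: {latency_ms(trt_fps):.1f} ms/frame")       # 18.2 ms/frame
print(f"Speedup : {trt_fps / pytorch_fps:.1f}x")             # 5.5x
```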

from ultralytics import YOLO
model = YOLO("yolov8n.pt")
# Export to TensorRT FP16 — takes 5–10 minutes first time
model.export(format="engine", device=0, half=True, imgsz=640)
print("Done! Model saved as yolov8n.engine")

Step 4 — Run real-time detection on a live camera

Load your exported TensorRT engine and point it at a live camera stream:

from ultralytics import YOLO
import cv2

model = YOLO("yolov8n.engine", task="detect")  # TensorRT engine
cap   = cv2.VideoCapture(0)                     # USB camera; CSI cameras need a GStreamer pipeline string instead

while True:
    ret, frame = cap.read()
    if not ret: break

    results = model(frame, verbose=False)
    annotated = results[0].plot()               # draw boxes on frame

    cv2.imshow("YOLOv8 — Jetson Real-Time", annotated)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
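To check whether the loop actually hits real-time rates, you can drop in a small FPS counter (a hypothetical helper, not part of Ultralytics — it smooths instantaneous readings with an exponential moving average):

```python
import time

class FPSMeter:
    """Exponential moving average of frames per second."""
    def __init__(self, alpha=0.1):
        self.alpha = alpha   # smoothing factor: higher reacts faster
        self.fps = None
        self._last = None

    def tick(self):
        """Call once per frame; returns the smoothed FPS (None on first call)."""
        now = time.perf_counter()
        if self._last is not None:
            inst = 1.0 / (now - self._last)
            self.fps = inst if self.fps is None else \
                self.alpha * inst + (1 - self.alpha) * self.fps
        self._last = now
        return self.fps
```

In the loop above you would call `fps = meter.tick()` each iteration and draw the value onto the frame with `cv2.putText` before `imshow`.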

Benchmarks: PyTorch vs TensorRT on Jetson Orin Nano

Model     Runtime         FPS (640px)   mAP50-95
YOLOv8n   PyTorch         ~10 FPS       37.3
YOLOv8n   TensorRT FP16   ~55 FPS       37.1
YOLOv8s   TensorRT FP16   ~30 FPS       44.9
YOLOv8m   TensorRT FP16   ~16 FPS       50.2

Step 5 — Train a custom model on your own data

The pre-trained COCO model detects 80 common objects. For industrial inspection, agriculture, or sports, you need your own classes. Training on Jetson is possible but slow — we recommend training on a cloud GPU (Google Colab, Paperspace), then copying the trained .pt weights to the Jetson and running the TensorRT export there. TensorRT engines are built for the specific GPU they are exported on, so an .engine file created on a cloud GPU will not load on the Orin.
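The data.yaml that the training call expects follows Ultralytics' dataset config format: image folder paths plus the class list. A minimal sketch — the root path and class names below are placeholders for your own dataset:

```yaml
# data.yaml — Ultralytics dataset config (placeholder paths and classes)
path: /datasets/widgets      # dataset root
train: images/train          # training images, relative to path
val: images/val              # validation images, relative to path

names:
  0: scratch
  1: dent
  2: crack
```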

from ultralytics import YOLO

model = YOLO("yolov8n.pt")   # start from pretrained weights
model.train(
    data    = "data.yaml",   # your dataset config
    epochs  = 100,
    imgsz   = 640,
    device  = 0,             # GPU 0
    batch   = 16
)
# Then copy best.pt to the Jetson and run the TensorRT export there
# (engine files are tied to the GPU they were built on)

Common use cases on Jetson + YOLOv8

  • Industrial inspection: detect defects on production lines at 60fps
  • Agriculture: identify plant diseases, pests, and growth stages
  • Sports analytics: track players, ball, and shot types in live video
  • Security: people counting, intrusion detection, PPE compliance
  • Sky imaging: detect airplanes, satellites, and astronomical objects

The HemiHex Jetson Inspection Kit comes pre-configured with JetPack 6.x and everything you need to start deploying YOLOv8 models immediately — no setup headaches. Shop the Jetson Kit →
