Run a head-to-head benchmark of YOLO11 and YOLOv8 directly on your Jetson hardware. Measure real FPS, inference latency, and mAP accuracy — and decide which model is right for your application.
What you will learn
- Key architecture differences between YOLOv8 and YOLO11
- How to run a standardised benchmark on Jetson hardware
- How to measure FPS, latency, memory usage, and mAP50-95
- When to use YOLO11 vs YOLOv8 for your specific use case
- How TensorRT FP16 vs FP32 affects speed and accuracy
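To build intuition for the FP16 vs FP32 trade-off before benchmarking, you can round-trip a value through IEEE 754 half precision using only the standard library (`struct`'s `'e'` format). This is an illustrative sketch, not part of the benchmark script: FP16 keeps roughly three decimal digits of precision, which is why mAP typically drops only slightly while Jetson tensor cores run FP16 much faster.

```python
import struct

def to_fp16(x: float) -> float:
    """Round-trip a Python float through IEEE 754 half precision (struct 'e')."""
    return struct.unpack('e', struct.pack('e', x))[0]

weight = 0.1234567            # an example model weight
rounded = to_fp16(weight)
# the difference is the rounding error FP16 introduces (~1e-5 here)
print(weight, rounded, abs(weight - rounded))
```

The relative error of FP16 rounding is bounded by about 2^-11 (one part in two thousand), which is small compared with the score thresholds used in detection.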
Step 1 — Run the benchmark script
```bash
cd ~/tutorials/14-yolo11-benchmark
python3 benchmark.py --models yolov8n yolov8s yolo11n yolo11s --imgsz 640
```
The script runs each model on 500 frames from a standard test video and prints a comparison table.
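If you want to adapt the script to your own models, its command-line handling can be sketched with `argparse`. The flag names below mirror the invocation above; the `--frames` flag and the internals of `benchmark.py` are assumptions for illustration:

```python
import argparse

def parse_args(argv=None):
    parser = argparse.ArgumentParser(description="Benchmark YOLO models on Jetson")
    parser.add_argument("--models", nargs="+", required=True,
                        help="model names, e.g. yolov8n yolo11n")
    parser.add_argument("--imgsz", type=int, default=640,
                        help="inference image size in pixels")
    parser.add_argument("--frames", type=int, default=500,  # assumed default
                        help="frames to benchmark per model")
    return parser.parse_args(argv)

# parse the same arguments the tutorial passes on the command line
args = parse_args(["--models", "yolov8n", "yolov8s", "yolo11n", "yolo11s",
                   "--imgsz", "640"])
print(args.models, args.imgsz, args.frames)
```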
Step 2 — Benchmark code
```python
from ultralytics import YOLO
import time
import cv2

def benchmark_model(model_path, source, frames=500, warmup=10):
    model = YOLO(model_path)
    cap = cv2.VideoCapture(source)
    times = []
    for i in range(frames + warmup):
        ret, frame = cap.read()
        if not ret:
            break
        start = time.perf_counter()
        model(frame, verbose=False)
        # discard warm-up frames: the first calls include engine load and CUDA init
        if i >= warmup:
            times.append(time.perf_counter() - start)
    cap.release()
    avg_ms = sum(times) / len(times) * 1000
    avg_fps = 1000 / avg_ms
    print(f"{model_path:30s} {avg_fps:.1f} FPS | {avg_ms:.1f} ms/frame")
    return avg_fps, avg_ms

# source 0 is the default camera; pass a video file path to match the script's test video
benchmark_model("yolov8n.engine", 0)
benchmark_model("yolo11n.engine", 0)
```
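Averages can hide tail latency, and the loop above measures nothing about memory. Given the list of per-frame times collected during a run, a summary along these lines (a sketch using only the standard library; `ru_maxrss` reports peak resident memory in kilobytes on Linux) covers both:

```python
import resource
import statistics

def latency_summary(times):
    """Summarise per-frame latencies (seconds) as milliseconds."""
    ms = sorted(t * 1000 for t in times)
    return {
        "p50_ms": statistics.median(ms),          # typical frame
        "p95_ms": ms[int(0.95 * (len(ms) - 1))],  # tail latency
        "max_ms": ms[-1],                         # worst frame
    }

# example per-frame timings in seconds; pass your real `times` list instead
times = [0.017, 0.018, 0.017, 0.019, 0.031]
print(latency_summary(times))

# peak resident set size of this process (kilobytes on Linux)
peak_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
print(f"peak RSS: {peak_kb / 1024:.0f} MB")
```

A wide gap between p50 and p95 usually points to thermal throttling or a background process stealing GPU time mid-run.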
Expected results on Jetson Orin Nano (TensorRT FP16)
| Model | FPS | Latency (ms/frame) | Parameters |
|---|---|---|---|
| YOLOv8n | ~55 | ~18ms | 3.2M |
| YOLO11n | ~60 | ~17ms | 2.6M |
| YOLOv8s | ~30 | ~33ms | 11.2M |
| YOLO11s | ~33 | ~30ms | 9.4M |
✅ Next: Tutorial 15 — TensorRT Export Guide | Back to Jetson Kit