Tutorial 16 — DeepStream Multi-Camera Pipeline on Jetson

Run AI inference on 4 camera streams simultaneously at 30+ FPS each — on a single Jetson board. NVIDIA DeepStream SDK manages the entire video pipeline efficiently, using hardware-accelerated decoding and a single shared inference engine.

What you will learn

  • How DeepStream’s GStreamer pipeline works
  • How to connect up to 4 cameras and run one shared inference engine
  • How to configure nvstreammux for multi-source inputs
  • How to read metadata from all streams simultaneously
  • How to output a 2×2 tiled display of all camera views
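To make the pipeline topology concrete before diving in, here is a sketch of the element chain the demo assembles: each camera feeds `nvstreammux`, which batches frames for a single `nvinfer` engine, and `nvmultistreamtiler` lays the results out in a 2×2 grid. The exact pipeline string is an assumption (the real pipeline adds caps filters and converters); the element names are standard DeepStream plugins.

```python
# Hypothetical sketch of the gst-launch-style description DeepStream builds.
# Real pipelines add capsfilter/nvvideoconvert stages between these elements.
def build_pipeline_description(num_sources=4):
    sources = " ".join(
        f"v4l2src device=/dev/video{i} ! nvvideoconvert ! mux.sink_{i}"
        for i in range(num_sources)
    )
    return (
        f"{sources} "
        f"nvstreammux name=mux batch-size={num_sources} width=1280 height=720 ! "
        "nvinfer config-file-path=config_infer.txt ! "
        "nvmultistreamtiler rows=2 columns=2 ! nvdsosd ! nv3dsink"
    )
```

The key design point: every camera shares one `nvinfer` instance, so the TensorRT engine is loaded once and runs batched inference instead of four separate models.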

Step 1 — Run the 4-camera demo

cd ~/tutorials/16-deepstream-multicam
python3 multicam.py \
    --sources /dev/video0 /dev/video1 /dev/video2 /dev/video3 \
    --show-tiled
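Before launching, it helps to confirm all four device nodes actually exist. A minimal helper, assuming standard `/dev/videoN` naming (the `find_video_devices` name and the testing parameter are ours):

```python
import glob
import re

def find_video_devices(paths=None):
    """Return sorted /dev/videoN nodes; pass a path list explicitly for testing."""
    if paths is None:
        paths = glob.glob("/dev/video*")
    # Keep only real numbered nodes, e.g. /dev/video0, not metadata devices
    return sorted(p for p in paths if re.fullmatch(r"/dev/video\d+", p))
```

Note that many USB cameras expose two nodes (capture plus metadata), so seeing eight entries for four cameras is normal; `v4l2-ctl --list-devices` shows which node is the capture one.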

Step 2 — DeepStream pipeline configuration

# config_multicam.txt — pre-configured on your kit
[source0]
enable=1
type=1       # 1 = USB (V4L2) camera; use type=4 for RTSP sources
camera-v4l2-dev-node=0

[source1]
enable=1
type=1
camera-v4l2-dev-node=1

# [source2] and [source3] follow the same pattern with dev nodes 2 and 3

[streammux]
batch-size=4
width=1280
height=720
batched-push-timeout=40000   # microseconds to wait for a full batch (40 ms)

[primary-gie]
enable=1
model-engine-file=../models/yolov8n.engine
batch-size=4
interval=0
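A common mistake is letting `[streammux]` batch-size drift out of sync with the number of enabled sources after editing the config. A small sanity check using Python's standard `configparser` (the `check_batch_size` helper name is ours; DeepStream itself does not ship this):

```python
import configparser

def check_batch_size(cfg_path):
    """Return (enabled source count, streammux batch-size) from a DeepStream config."""
    cfg = configparser.ConfigParser(inline_comment_prefixes=("#",))
    cfg.read(cfg_path)
    n_sources = sum(
        1 for s in cfg.sections()
        if s.startswith("source") and cfg[s].get("enable") == "1"
    )
    mux_batch = int(cfg["streammux"]["batch-size"])
    return n_sources, mux_batch
```

If the two numbers differ, either streams go unprocessed or the muxer waits out its timeout on every batch, costing latency.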

Step 3 — Read detections from all streams

import pyds
from gi.repository import Gst

def osd_sink_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))

    # Walk the per-frame metadata list: one entry per camera in the batch
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        source_id = frame_meta.source_id   # which camera (0-3)

        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            print(f"Camera {source_id}: {obj_meta.obj_label} ({obj_meta.confidence:.2f})")
            l_obj = l_obj.next
        l_frame = l_frame.next

    return Gst.PadProbeReturn.OK
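Printing every detection floods the console at 4 × 30 FPS. In practice you would accumulate counts per camera inside the probe and report them periodically. A minimal, hedged sketch of such an accumulator (the `DetectionTally` class is ours, not part of DeepStream):

```python
from collections import Counter

class DetectionTally:
    """Accumulate detection counts per (camera, label) pair."""
    def __init__(self):
        self.counts = Counter()

    def add(self, source_id, label):
        # Called once per object inside the buffer probe
        self.counts[(source_id, label)] += 1

    def summary(self, source_id):
        """Label -> count dict for one camera."""
        return {lbl: n for (sid, lbl), n in self.counts.items() if sid == source_id}
```

An instance can be passed to the probe via the `u_data` argument of `add_probe`, keeping the probe itself fast, which matters because it runs on the streaming thread.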

Next: Tutorial 17 — MQTT Factory Integration | Back to Jetson Kit
