Give a robot real-time visual awareness. In this tutorial you will build a vision pipeline that detects obstacles, estimates their distance from bounding-box size or a depth camera, and sends avoidance commands to a robot, all running locally on the Jetson.
What you will learn
- How to detect obstacles and people in a robot’s field of view
- How to estimate distance using bounding box size and depth camera
- How to send movement commands via ROS 2 (pre-installed)
- How to implement simple reactive obstacle avoidance
- How to build a pick-and-place object detection system
Step 1 — Run the robot vision demo
cd ~/tutorials/09-robot-vision
python3 robot_vision.py --source 0 --show
Step 2 — Estimate distance from bounding box size
# Simple distance estimation from a known object height (pinhole model):
# distance = (real_height_m * focal_length_px) / bbox_height_px
FOCAL_LENGTH = 600   # in pixels; calibrate for your camera
KNOWN_HEIGHT = 1.7   # metres (average person height)

def estimate_distance(bbox_height_px):
    return (KNOWN_HEIGHT * FOCAL_LENGTH) / bbox_height_px

# results comes from a YOLO inference call, e.g. results = model(frame)
for box in results[0].boxes:
    h_px = float(box.xyxy[0][3] - box.xyxy[0][1])   # bbox height in pixels
    dist = estimate_distance(h_px)
    print(f"Object at approx {dist:.1f}m")
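Two helpers you may want alongside the snippet above, sketched under the assumption that the depth camera delivers a NumPy array of per-pixel distances in metres (the function names `calibrate_focal_length` and `depth_at_bbox` are illustrative, not part of the demo script). The first computes FOCAL_LENGTH from a single measurement of a known object; the second reads distance directly from the depth image, which is more reliable than bounding-box size when depth data is available:

```python
import numpy as np

def calibrate_focal_length(bbox_height_px, distance_m, object_height_m=1.7):
    # Rearranged pinhole model: F = h_px * d / H.
    # Measure a person's bbox height at a known distance once, then reuse F.
    return bbox_height_px * distance_m / object_height_m

def depth_at_bbox(depth_m, xyxy):
    # Median depth over the central region of the box.
    x1, y1, x2, y2 = [int(v) for v in xyxy]
    # Shrink the box by 25% per side so background pixels at the
    # edges of the detection do not skew the estimate.
    dx, dy = (x2 - x1) // 4, (y2 - y1) // 4
    roi = depth_m[y1 + dy:y2 - dy, x1 + dx:x2 - dx]
    valid = roi[roi > 0]          # depth cameras report 0 for invalid pixels
    return float(np.median(valid)) if valid.size else None
```

For example, if a person standing 2 m away measures 510 px tall in the image, `calibrate_focal_length(510, 2.0)` gives a focal length of 600 px, matching the constant above.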
Step 3 — Send avoidance command via ROS 2
import rclpy
from geometry_msgs.msg import Twist
def send_command(linear_x, angular_z):
msg = Twist()
msg.linear.x = linear_x
msg.angular.z = angular_z
publisher.publish(msg)
# If obstacle closer than 0.5m — stop and turn
if nearest_obstacle < 0.5:
send_command(0.0, 0.5) # stop, rotate left
else:
send_command(0.3, 0.0) # move forward
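A bare threshold like the one above can make the robot oscillate when an obstacle sits right at 0.5 m: the reading jitters across the boundary and the robot alternates between stopping and driving. One common fix is hysteresis with two thresholds. A minimal sketch (the helper name `choose_command` and the threshold values are assumptions, not part of the demo script):

```python
def choose_command(nearest_obstacle_m, turning, stop_m=0.5, resume_m=0.7):
    """Return (linear_x, angular_z, turning) with hysteresis.

    Start turning when an obstacle comes within stop_m, but keep
    turning until the path is clear beyond resume_m, so small
    fluctuations in the distance reading do not cause oscillation.
    """
    if turning:
        if nearest_obstacle_m < resume_m:
            return 0.0, 0.5, True    # still too close: keep rotating left
        return 0.3, 0.0, False       # clear again: resume forward motion
    if nearest_obstacle_m < stop_m:
        return 0.0, 0.5, True        # obstacle ahead: stop and rotate left
    return 0.3, 0.0, False           # clear: cruise forward
```

Feed the returned values straight into `send_command` each control cycle, carrying the `turning` flag between calls.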
✅ Next: Tutorial 10 — Barcode, QR & OCR | Back to Jetson Kit