
robot-perception

Comprehensive best practices for robot perception systems covering cameras, LiDARs, depth sensors, IMUs, and multi-sensor setups. Use this skill when working with RGB image processing, depth maps, point clouds, sensor calibration (intrinsic, extrinsic, hand-eye), object detection, semantic segmentation, 3D reconstruction, visual servoing, or perception pipeline optimization. Trigger whenever the user mentions OpenCV, Open3D, PCL, RealSense, ZED, OAK-D, camera calibration, AprilTags, ArUco markers, stereo vision, RGBD, point cloud filtering, ICP registration, coordinate transforms, camera intrinsics, distortion correction, image undistortion, sensor streaming, frame synchronization, or any computer vision task in a robotics context. Also covers multi-camera rigs, time synchronization across sensors, perception latency budgets, and production deployment of perception pipelines.
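The camera-calibration and intrinsics topics listed above all build on the pinhole camera model. As a minimal illustrative sketch (not taken from the skill itself; the intrinsic values below are made up), projecting a 3D point in the camera frame onto the image plane looks like:

```python
def project_point(X, Y, Z, fx, fy, cx, cy):
    """Project a 3D camera-frame point (metres) to pixel coordinates
    using the pinhole model: u = fx*X/Z + cx, v = fy*Y/Z + cy."""
    if Z <= 0:
        raise ValueError("point must be in front of the camera (Z > 0)")
    return fx * X / Z + cx, fy * Y / Z + cy

# Example: a point 2 m ahead, slightly right of and above the optical axis,
# with hypothetical intrinsics (fx = fy = 600 px, principal point 320, 240).
u, v = project_point(0.1, -0.05, 2.0, 600.0, 600.0, 320.0, 240.0)
# u = 600 * 0.1 / 2 + 320 = 350.0
# v = 600 * (-0.05) / 2 + 240 = 225.0
```

Real pipelines would obtain fx, fy, cx, cy from a calibration routine (e.g. OpenCV's checkerboard calibration) and apply a distortion model before this projection.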

Packaged view

This page reorganizes the original catalog entry to put fit, installability, and workflow context first. The original raw source appears below.

Stars: 150
Hot score: 96
Updated: March 20, 2026
Overall rating: C (2.9)
Composite score: 2.9
Best-practice grade: B (71.9)

Install command

npx @skill-hub/cli install arpitg1304-robotics-agent-skills-robot-perception

Repository

arpitg1304/robotics-agent-skills

Skill path: skills/robot-perception



Best for

Primary workflow: Design Product.

Technical facets: Full Stack, DevOps, Designer.

Target audience: Development teams looking for install-ready agent workflows.

License: Unknown.

Original source

Catalog source: SkillHub Club.

Repository owner: arpitg1304.

This is a mirrored public skill entry. Review the repository before installing it into production workflows.

What it helps with

  • Install robot-perception into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
  • Review https://github.com/arpitg1304/robotics-agent-skills before adding robot-perception to shared team environments
  • Use robot-perception for development workflows

Works across

Claude Code, Codex CLI, Gemini CLI, OpenCode

Favorites: 0.

Sub-skills: 0.

Aggregator: No.
