pytorch-distributed
Distributed training strategies including DistributedDataParallel (DDP) and Fully Sharded Data Parallel (FSDP). Covers multi-node setup, checkpointing, and process management using torchrun. (ddp, fsdp, distributeddataparallel, torchrun, nccl, rank, process-group)
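For orientation, here is a minimal DDP sketch of the pattern this skill covers. It is illustrative only, not code from the packaged skill, and assumes a CUDA build of PyTorch with the NCCL backend plus a torchrun launch (which sets RANK, LOCAL_RANK, and WORLD_SIZE for each process).

```python
# Hedged DDP sketch (illustrative; not taken from the packaged skill).
# Single node:  torchrun --nproc_per_node=4 train.py
# Multi-node:   torchrun --nnodes=2 --node_rank=0 --rdzv_backend=c10d \
#               --rdzv_endpoint=host0:29500 --nproc_per_node=4 train.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun populates RANK, LOCAL_RANK, and WORLD_SIZE in the environment.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(10, 10).to(local_rank)  # stand-in model
    ddp_model = DDP(model, device_ids=[local_rank])
    opt = torch.optim.SGD(ddp_model.parameters(), lr=0.01)

    x = torch.randn(32, 10, device=local_rank)  # stand-in batch
    loss = ddp_model(x).sum()
    loss.backward()  # DDP all-reduces gradients across ranks here
    opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

torchrun handles process spawning and rendezvous across nodes; DDP then keeps the replicas in sync by all-reducing gradients during backward().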
Packaged view
This page reorganizes the original catalog entry to put fit, installability, and workflow context first. The original raw source appears below.
Install command
npx @skill-hub/cli install benchflow-ai-skillsbench-pytorch-distributed
Repository
Skill path: registry/terminal_bench_2.0/full_batch_reviewed/terminal_bench_2_0_torch-tensor-parallelism/environment/skills/pytorch-distributed
Best for
Primary workflow: Analyze Data & AI.
Technical facets: Full Stack, Data / AI.
Target audience: Development teams looking for install-ready agent workflows.
License: Unknown.
Original source
Catalog source: SkillHub Club.
Repository owner: benchflow-ai.
This is a mirrored public skill entry. Review the repository before installing it into production workflows.
What it helps with
- Install pytorch-distributed into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
- Review https://github.com/benchflow-ai/SkillsBench before adding pytorch-distributed to shared team environments
- Use pytorch-distributed as a reference for DDP/FSDP training loops, torchrun launches, and distributed checkpointing in development workflows (see the sketch after this list)
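Since the skill also covers FSDP and checkpointing, the sketch below shows one common pattern under the same assumptions as the DDP example: shard a model with FSDP, then gather a full state dict on rank 0 for an ordinary torch.save. It uses the torch.distributed.fsdp state-dict APIs (StateDictType, FullStateDictConfig); newer PyTorch releases also provide torch.distributed.checkpoint for sharded saves.

```python
# Hedged FSDP + checkpoint sketch (illustrative; not from the packaged skill).
# Launch the same way as the DDP example, e.g. torchrun --nproc_per_node=4 fsdp_ckpt.py
import os
import torch
import torch.distributed as dist
from torch.distributed.fsdp import (
    FullyShardedDataParallel as FSDP,
    FullStateDictConfig,
    StateDictType,
)

dist.init_process_group(backend="nccl")
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)

# Stand-in model; real use would shard a large network, typically with
# an auto_wrap_policy so individual submodules become FSDP units.
model = torch.nn.Sequential(torch.nn.Linear(1024, 1024), torch.nn.Linear(1024, 1024))
model = FSDP(model.cuda())

# Gather a full (unsharded) state dict on rank 0, offloaded to CPU,
# so a single process can write an ordinary torch.save checkpoint.
save_cfg = FullStateDictConfig(offload_to_cpu=True, rank0_only=True)
with FSDP.state_dict_type(model, StateDictType.FULL_STATE_DICT, save_cfg):
    state = model.state_dict()
if dist.get_rank() == 0:
    torch.save(state, "checkpoint.pt")

dist.destroy_process_group()
```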
Works across
Favorites: 0.
Sub-skills: 0.
Aggregator: No.