SkillHub Club · Ship Full Stack · Full Stack

torch-tensor-parallelism

Guidance for implementing tensor parallelism in PyTorch, including ColumnParallelLinear and RowParallelLinear layers. This skill should be used when implementing distributed tensor parallel operations, sharding linear layers across multiple GPUs, or simulating collective operations like all-gather and all-reduce for parallel computation.
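
The two layers shard a weight matrix along opposite axes: ColumnParallelLinear splits the output dimension, so the full result is recovered by all-gathering the per-rank outputs, while RowParallelLinear splits the input dimension, so the partial products are all-reduced (summed) across ranks. Below is a minimal single-process sketch of that pattern. The class names come from the description above, but the constructor signatures, the world_size argument, and the use of concatenation and summation to simulate all-gather and all-reduce are assumptions made for illustration, not this skill's actual API.

import torch
import torch.nn as nn

class ColumnParallelLinear(nn.Module):
    # Shards the weight along the output (column) dimension. Each
    # simulated rank holds one slice of the columns; concatenating the
    # per-rank outputs stands in for an all-gather.
    def __init__(self, in_features, out_features, world_size):
        super().__init__()
        assert out_features % world_size == 0
        shard = out_features // world_size
        self.shards = nn.ModuleList(
            [nn.Linear(in_features, shard, bias=False) for _ in range(world_size)]
        )

    def forward(self, x):
        return torch.cat([lin(x) for lin in self.shards], dim=-1)

class RowParallelLinear(nn.Module):
    # Shards the weight along the input (row) dimension. The input is
    # split across ranks and the partial products are summed, which
    # stands in for an all-reduce.
    def __init__(self, in_features, out_features, world_size):
        super().__init__()
        assert in_features % world_size == 0
        self.shard = in_features // world_size
        self.shards = nn.ModuleList(
            [nn.Linear(self.shard, out_features, bias=False) for _ in range(world_size)]
        )

    def forward(self, x):
        chunks = x.split(self.shard, dim=-1)
        partials = [lin(c) for lin, c in zip(self.shards, chunks)]
        return torch.stack(partials).sum(dim=0)

# Sanity check: column-parallel followed by row-parallel matches the
# equivalent unsharded computation with the reassembled weights.
torch.manual_seed(0)
world_size, d_in, d_hidden, d_out = 2, 8, 16, 8
col = ColumnParallelLinear(d_in, d_hidden, world_size)
row = RowParallelLinear(d_hidden, d_out, world_size)
w1 = torch.cat([lin.weight for lin in col.shards], dim=0)  # (d_hidden, d_in)
w2 = torch.cat([lin.weight for lin in row.shards], dim=1)  # (d_out, d_hidden)
x = torch.randn(4, d_in)
assert torch.allclose(row(col(x)), x @ w1.T @ w2.T, atol=1e-5)

In a real multi-GPU run each rank would own exactly one shard, the concatenation would become torch.distributed.all_gather, and the summation torch.distributed.all_reduce. Pairing the layers in this order, as in a transformer MLP block, keeps the intermediate activations sharded and needs only a single all-reduce in the forward pass.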

Packaged view

This page reorganizes the original catalog entry to put fit, installability, and workflow context first. The original raw source appears below.

Stars: 781
Hot score: 99
Updated: March 20, 2026
Overall rating: C (4.3)
Composite score: 4.3
Best-practice grade: B (78.7)

Install command

npx @skill-hub/cli install benchflow-ai-skillsbench-torch-tensor-parallelism

Repository

benchflow-ai/SkillsBench

Skill path: registry/terminal_bench_2.0/letta_skills_batch/terminal_bench_2_0_torch-tensor-parallelism/environment/skills/torch-tensor-parallelism

Best for

Primary workflow: Ship Full Stack.

Technical facets: Full Stack.

Target audience: Development teams looking for install-ready agent workflows.

License: Unknown.

Original source

Catalog source: SkillHub Club.

Repository owner: benchflow-ai.

This is still a mirrored public skill entry; review the repository before installing it into production workflows.

What it helps with

  • Install torch-tensor-parallelism into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
  • Review https://github.com/benchflow-ai/SkillsBench before adding torch-tensor-parallelism to shared team environments
  • Use torch-tensor-parallelism for development workflows

Works across

Claude Code · Codex CLI · Gemini CLI · OpenCode

Favorites: 0.

Sub-skills: 0.

Aggregator: No.
