
huggingface-accelerate

The simplest distributed training API: four lines of code add distributed support to any PyTorch script. It provides a unified API over DeepSpeed, FSDP, Megatron-LM, and DDP, automatic device placement, and mixed precision (FP16/BF16/FP8), with interactive configuration and a single launch command. It is the HuggingFace ecosystem standard.
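
The "four lines" refer to Accelerate's documented integration pattern: construct an Accelerator, pass your model, optimizer, and dataloader through prepare(), and replace loss.backward() with accelerator.backward(loss). A minimal sketch of that pattern (the toy model and data here are illustrative, not from the catalog entry):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

# Illustrative model and data; the Accelerate calls follow its documented API.
model = torch.nn.Linear(10, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
dataset = TensorDataset(torch.randn(256, 10), torch.randint(0, 2, (256,)))
dataloader = DataLoader(dataset, batch_size=32)

accelerator = Accelerator()  # line 1: also accepts mixed_precision="fp16"/"bf16"/"fp8"
model, optimizer, dataloader = accelerator.prepare(  # line 2: wrap for the current setup
    model, optimizer, dataloader
)

for inputs, targets in dataloader:  # batches land on the right device automatically
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(inputs), targets)
    accelerator.backward(loss)  # line 3: replaces loss.backward()
    optimizer.step()
```

Configuration and launch use the documented CLI: `accelerate config` walks through an interactive setup once, and `accelerate launch train.py` then runs the same script on a single GPU, DDP, DeepSpeed, or FSDP without further code changes.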

Packaged view

This page reorganizes the original catalog entry to put fit, installability, and workflow context first. The original raw source lives below.

Stars: 5,242
Hot score: 99
Updated: March 20, 2026
Overall rating: C (5.0)
Composite score: 5.0
Best-practice grade: C (62.3)

Install command

npx @skill-hub/cli install orchestra-research-ai-research-skills-accelerate
Tags: Distributed Training, HuggingFace, Accelerate, DeepSpeed, FSDP, Mixed Precision, PyTorch, DDP, Unified API, Simple

Repository

Orchestra-Research/AI-Research-SKILLs

Skill path: 08-distributed-training/accelerate


Best for

Primary workflow: Ship Full Stack.

Technical facets: Full Stack, Backend.

Target audience: Development teams looking for install-ready agent workflows.

License: Unknown.

Original source

Catalog source: SkillHub Club.

Repository owner: Orchestra-Research.

This remains a mirrored public skill entry; review the repository before installing it into production workflows.

What it helps with

  • Install huggingface-accelerate into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
  • Review https://github.com/Orchestra-Research/AI-Research-SKILLs before adding huggingface-accelerate to shared team environments
  • Use huggingface-accelerate in distributed-training development workflows

Works across

Claude Code, Codex CLI, Gemini CLI, OpenCode

Favorites: 0.

Sub-skills: 0.

Aggregator: No.
