megatron-memory-estimator
Estimate GPU memory usage for Megatron-based MoE (Mixture of Experts) and dense models. Use when users need to (1) estimate memory from HuggingFace model configs (DeepSeek-V3, Qwen, etc.), (2) plan GPU resource allocation for training, (3) compare different parallelism strategies (TP/PP/EP/CP), (4) determine if a model fits in available GPU memory, or (5) optimize training configurations for memory efficiency.
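The fit question in that description comes down to whether weights, gradients, and optimizer states (plus activations) fit on each GPU once the model is split across parallel ranks. The following is a minimal, hypothetical back-of-envelope sketch of that kind of arithmetic, not the estimator's actual implementation; it assumes bf16 weights with fp32 gradients and fp32 Adam states (roughly 18 bytes per parameter) and no distributed-optimizer sharding.

```python
def per_gpu_model_memory_gib(
    n_params: float,           # total trainable parameters
    tp: int = 1,               # tensor-parallel size
    pp: int = 1,               # pipeline-parallel size
    bytes_per_param: int = 18, # bf16 weight (2) + fp32 grad (4) + fp32 master (4) + Adam m/v (8)
) -> float:
    """Rough per-GPU memory for weights, gradients, and optimizer states only.

    Ignores activations, expert parallelism (EP), context parallelism (CP),
    distributed-optimizer/ZeRO sharding, and framework overheads, all of which
    a real estimator needs to account for.
    """
    params_per_gpu = n_params / (tp * pp)
    return params_per_gpu * bytes_per_param / 1024**3


if __name__ == "__main__":
    # Example: a 70B-parameter dense model with TP=8, PP=4.
    print(f"{per_gpu_model_memory_gib(70e9, tp=8, pp=4):.1f} GiB per GPU (static states only)")
```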
Packaged view
This page reorganizes the original catalog entry around fit, installability, and workflow context first. The original raw source lives below.
Install command
npx @skill-hub/cli install yzlnew-infra-skills-megatron-memory-estimator
Repository
Skill path: megatron-memory-estimator
Best for
Primary workflow: Ship Full Stack.
Technical facets: Full Stack.
Target audience: Development teams looking for install-ready agent workflows.
License: Unknown.
Original source
Catalog source: SkillHub Club.
Repository owner: yzlnew.
This is still a mirrored public skill entry. Review the repository before installing into production workflows.
What it helps with
- Install megatron-memory-estimator into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
- Review https://github.com/yzlnew/infra-skills before adding megatron-memory-estimator to shared team environments
- Use megatron-memory-estimator to sanity-check per-GPU memory for Megatron MoE and dense training runs before committing cluster time (see the sketch after this list)
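The HuggingFace-config workflow mentioned in the description starts from standard config fields such as hidden_size and num_hidden_layers. Below is a minimal, hypothetical sketch of deriving a dense-model parameter count from such a config; it is illustrative only, not the skill's own accounting, and it ignores MoE expert weights, biases, grouped-query attention, and tied embeddings.

```python
from transformers import AutoConfig


def rough_dense_param_count(config) -> int:
    """Back-of-envelope parameter count for a standard dense decoder-only transformer."""
    h = config.hidden_size
    layers = config.num_hidden_layers
    ffn = config.intermediate_size
    vocab = config.vocab_size
    # Per layer: attention projections (4 * h * h for Q, K, V, O) plus a gated MLP (3 * h * ffn).
    per_layer = 4 * h * h + 3 * h * ffn
    # Input embedding plus an untied output projection.
    embeddings = 2 * vocab * h
    return layers * per_layer + embeddings


# Example usage (assumes network access to the Hugging Face Hub):
# cfg = AutoConfig.from_pretrained("Qwen/Qwen2.5-7B")
# print(f"{rough_dense_param_count(cfg) / 1e9:.1f}B parameters")
```

The resulting count can then feed a per-GPU estimate like the one sketched near the top of this page.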
Works across
Favorites: 0.
Sub-skills: 0.
Aggregator: No.