experiment_analysis
Analyze completed experiments and craft executive-ready summaries with insights and recommendations.
Packaged view
This page reorganizes the original catalog entry to lead with fit, installability, and workflow context. The original raw source appears below.
Install command
npx @skill-hub/cli install edwardmonteiro-aiskillinpractice-experiment-analysis
Repository
Skill path: skills/optimization/experiment_analysis
Best for
Primary workflow: Ship Full Stack.
Technical facets: Full Stack.
Target audience: everyone.
License: Unknown.
Original source
Catalog source: SkillHub Club.
Repository owner: edwardmonteiro.
This is a mirrored public skill entry. Review the repository before installing it into production workflows.
What it helps with
- Install experiment_analysis into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
- Review https://github.com/edwardmonteiro/Aiskillinpractice before adding experiment_analysis to shared team environments
- Use experiment_analysis for development workflows
Works across
Favorites: 0.
Sub-skills: 0.
Aggregator: No.
Original source / Raw SKILL.md
---
name: optimization.experiment_analysis
phase: optimization
roles:
  - Data Analyst
  - Product Manager
description: Analyze completed experiments and craft executive-ready summaries with insights and recommendations.
variables:
  required:
    - name: experiment_name
      description: Identifier for the experiment.
    - name: primary_metric
      description: Primary metric evaluated.
  optional:
    - name: secondary_metrics
      description: Additional metrics tracked.
    - name: audience
      description: Audience for the analysis (e.g., execs, squad).
outputs:
  - Results summary with statistical interpretation.
  - Customer and business impact assessment.
  - Recommendations and decision rationale.
---
# Purpose
Accelerate experiment readouts by combining statistical rigor with storytelling tailored to executive stakeholders.
# Pre-run Checklist
- ✅ Export experiment results (variant metrics, significance, sample sizes).
- ✅ Gather qualitative feedback or session notes if applicable.
- ✅ Align on rollout decisions pending the analysis.
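Before invoking the skill, it can help to sanity-check that the exported results file has the columns the analysis expects. A minimal sketch, assuming hypothetical column names (`variant`, `metric`, `value`, `sample_size`, `p_value`) — adjust them to match your stats engine's actual export:

```python
import csv
import io

# Hypothetical required columns -- rename to match your stats engine's export.
REQUIRED_COLUMNS = {"variant", "metric", "value", "sample_size", "p_value"}

def check_export(csv_text: str) -> set:
    """Return the set of required columns missing from the export header."""
    reader = csv.reader(io.StringIO(csv_text))
    header = next(reader, [])
    return REQUIRED_COLUMNS - {col.strip().lower() for col in header}

# Example: an export that is missing the p_value column.
sample = "variant,metric,value,sample_size\ncontrol,signup_rate,0.041,10532\n"
missing = check_export(sample)
print(sorted(missing))
```

Running this against the export before the readout avoids a round of back-and-forth when a significance column was left out of the download.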
# Invocation Guidance
```bash
codex run --skill optimization.experiment_analysis \
  --input data/{{experiment_name}}-results.csv \
  --vars "experiment_name={{experiment_name}}" \
         "primary_metric={{primary_metric}}" \
         "secondary_metrics={{secondary_metrics}}" \
         "audience={{audience}}"
```
# Recommended Input Attachments
- Experiment tracking sheet or stats engine export.
- Screenshots of variants.
- Customer feedback related to the experiment.
# Claude Workflow Outline
1. Summarize experiment purpose, setup, and success criteria.
2. Present results for primary and secondary metrics with statistical significance.
3. Interpret findings, including customer behavior shifts and operational considerations.
4. Recommend decisions (ship, iterate, stop) with supporting rationale.
5. Highlight next steps, follow-up analyses, and knowledge base updates.
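For step 2, the significance call for a conversion-style primary metric can be reproduced independently of the stats engine. A minimal sketch using a two-sided two-proportion z-test (one common choice; your experiment platform may use a different test):

```python
from math import sqrt, erf

def two_proportion_z(success_a: int, n_a: int, success_b: int, n_b: int):
    """Two-sided two-proportion z-test; returns (z statistic, p-value)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Illustrative numbers: 4.8% control vs 5.4% variant conversion.
z, p = two_proportion_z(480, 10_000, 540, 10_000)
print(f"z={z:.2f}, p={p:.4f}")
```

Quoting the test used (and its assumptions about sample independence) in the readout makes the "Significance" column of the template auditable.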
# Output Template
```
# Experiment Analysis — {{experiment_name}}
## Overview
- Objective:
- Dates:
- Audience:
## Results Summary
| Metric | Control | Variant | Δ | Significance | Notes |
| --- | --- | --- | --- | --- | --- |
## Interpretation
- Customer Impact:
- Business Impact:
- Operational Considerations:
## Recommendation
- Decision:
- Rationale:
- Dependencies:
## Next Steps
- Action:
- Owner:
- Timeline:
```
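Rows for the Results Summary table above can be generated mechanically from the exported metrics. A small sketch (the 0.05 significance threshold and the row format are illustrative assumptions):

```python
def results_row(metric: str, control: float, variant: float,
                p_value: float, note: str = "") -> str:
    """Format one row of the Results Summary markdown table."""
    delta = variant - control
    # Assumed threshold: flag p < 0.05 as significant.
    sig = ("significant (p<0.05)" if p_value < 0.05
           else f"not significant (p={p_value:.2f})")
    return f"| {metric} | {control:.3f} | {variant:.3f} | {delta:+.3f} | {sig} | {note} |"

print(results_row("signup_rate", 0.048, 0.054, 0.054, "primary metric"))
```

Generating rows this way keeps the deltas and significance labels consistent with the underlying export rather than hand-copied.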
# Follow-up Actions
- Present findings in the growth or optimization forum.
- Update experiment backlog with learnings and links to artifacts.
- Coordinate rollout or rollback actions per recommendation.