ab-test-setup
When the user wants to plan, design, or implement an A/B test or experiment. Also use when the user mentions "A/B test," "split test," "experiment," "test this change," "variant copy," "multivariate test," or "hypothesis." For tracking implementation, see analytics-tracking.
Packaged view
This page reorganizes the original catalog entry around fit, installability, and workflow context first. The original raw source lives below.
Install command
npx @skill-hub/cli install aitytech-agentkits-marketing-ab-test-setup
Repository
Skill path: skills/ab-test-setup
Best for
Primary workflow: Write Technical Docs.
Technical facets: Data / AI, Tech Writer, Designer, Testing.
Target audience: everyone.
License: Unknown.
Original source
Catalog source: SkillHub Club.
Repository owner: aitytech.
This is a mirrored public skill entry. Review the repository before installing it into production workflows.
What it helps with
- Install ab-test-setup into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
- Review https://github.com/aitytech/agentkits-marketing before adding ab-test-setup to shared team environments
- Use ab-test-setup for CRO workflows
Works across
Favorites: 0.
Sub-skills: 0.
Aggregator: No.
Original source / Raw SKILL.md
---
name: ab-test-setup
version: "1.0.0"
brand: AgentKits Marketing by AityTech
category: cro
difficulty: intermediate
description: When the user wants to plan, design, or implement an A/B test or experiment. Also use when the user mentions "A/B test," "split test," "experiment," "test this change," "variant copy," "multivariate test," or "hypothesis." For tracking implementation, see analytics-tracking.
triggers:
- A/B test
- split test
- experiment
- test this change
- variant copy
- multivariate test
- hypothesis
- statistical significance
prerequisites:
- page-cro
- analytics-attribution
related_skills:
- page-cro
- analytics-attribution
agents:
- conversion-optimizer
- researcher
mcp_integrations:
optional:
- google-analytics
success_metrics:
- test_velocity
- win_rate
output_schema: ab-test-plan
---
# A/B Test Setup
You are an expert in experimentation and A/B testing. Your goal is to help design tests that produce statistically valid, actionable results.
## Initial Assessment
Before designing a test, understand:
1. **Test Context**
- What are you trying to improve?
- What change are you considering?
- What made you want to test this?
2. **Current State**
- Baseline conversion rate?
- Current traffic volume?
- Any historical test data?
3. **Constraints**
- Technical implementation complexity?
- Timeline requirements?
- Tools available?
---
## Core Principles
### 1. Start with a Hypothesis
- Not just "let's see what happens"
- Specific prediction of outcome
- Based on reasoning or data
### 2. Test One Thing
- Single variable per test
- Otherwise you don't know what worked
- Save MVT for later
### 3. Statistical Rigor
- Pre-determine sample size
- Don't peek and stop early
- Commit to the methodology
### 4. Measure What Matters
- Primary metric tied to business value
- Secondary metrics for context
- Guardrail metrics to prevent harm
---
## Hypothesis Framework
### Structure
```
Because [observation/data],
we believe [change]
will cause [expected outcome]
for [audience].
We'll know this is true when [metrics].
```
### Examples
**Weak hypothesis:**
"Changing the button color might increase clicks."
**Strong hypothesis:**
"Because users report difficulty finding the CTA (per heatmaps and feedback), we believe making the button larger and using contrasting color will increase CTA clicks by 15%+ for new visitors. We'll measure click-through rate from page view to signup start."
### Good Hypotheses Include
- **Observation**: What prompted this idea
- **Change**: Specific modification
- **Effect**: Expected outcome and direction
- **Audience**: Who this applies to
- **Metric**: How you'll measure success
---
## Test Types
### A/B Test (Split Test)
- Two versions: Control (A) vs. Variant (B)
- Single change between versions
- Most common, easiest to analyze
### A/B/n Test
- Multiple variants (A vs. B vs. C...)
- Requires more traffic
- Good for testing several options
### Multivariate Test (MVT)
- Multiple changes in combinations
- Tests interactions between changes
- Requires significantly more traffic
- Complex analysis
### Split URL Test
- Different URLs for variants
- Good for major page changes
- Sometimes easier to implement
---
## Sample Size Calculation
### Inputs Needed
1. **Baseline conversion rate**: Your current rate
2. **Minimum detectable effect (MDE)**: Smallest change worth detecting
3. **Statistical significance level**: Usually 95%
4. **Statistical power**: Usually 80%
### Quick Reference
| Baseline Rate | 10% Lift | 20% Lift | 50% Lift |
|---------------|----------|----------|----------|
| 1% | 150k/variant | 39k/variant | 6k/variant |
| 3% | 47k/variant | 12k/variant | 2k/variant |
| 5% | 27k/variant | 7k/variant | 1.2k/variant |
| 10% | 12k/variant | 3k/variant | 550/variant |
### Formula Resources
- Evan Miller's calculator: https://www.evanmiller.org/ab-testing/sample-size.html
- Optimizely's calculator: https://www.optimizely.com/sample-size-calculator/
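The same calculation the linked calculators perform can be sketched directly with the standard two-proportion formula. A minimal Python version using only the standard library, assuming a two-sided test at 95% confidence and 80% power:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variant(baseline, relative_mde, alpha=0.05, power=0.80):
    """Approximate sample per variant for a two-proportion test
    (normal approximation, two-sided alpha)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 at 95% confidence
    z_beta = NormalDist().inv_cdf(power)           # 0.84 at 80% power
    delta = baseline * relative_mde                # absolute effect to detect
    return ceil(2 * (z_alpha + z_beta) ** 2
                * baseline * (1 - baseline) / delta ** 2)

# 5% baseline with a 20% relative lift lands near the 7.8k/variant
# figure implied by the quick-reference table above
print(sample_size_per_variant(0.05, 0.20))
```

This is a sketch, not a replacement for your testing tool's own calculator, which may use exact or sequential methods.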
### Test Duration
```
Duration = Sample size needed per variant × Number of variants
           ──────────────────────────────────────────────────
                   Daily traffic to test page
```
Minimum: 1-2 business cycles (usually 1-2 weeks)
Maximum: Avoid running too long (novelty effects, external factors)
---
## Metrics Selection
### Primary Metric
- Single metric that matters most
- Directly tied to hypothesis
- What you'll use to call the test
### Secondary Metrics
- Support primary metric interpretation
- Explain why/how the change worked
- Help understand user behavior
### Guardrail Metrics
- Things that shouldn't get worse
- Revenue, retention, satisfaction
- Stop test if significantly negative
### Metric Examples by Test Type
**Homepage CTA test:**
- Primary: CTA click-through rate
- Secondary: Time to click, scroll depth
- Guardrail: Bounce rate, downstream conversion
**Pricing page test:**
- Primary: Plan selection rate
- Secondary: Time on page, plan distribution
- Guardrail: Support tickets, refund rate
**Signup flow test:**
- Primary: Signup completion rate
- Secondary: Field-level completion, time to complete
- Guardrail: User activation rate (post-signup quality)
---
## Designing Variants
### Control (A)
- Current experience, unchanged
- Don't modify during test
### Variant (B+)
**Best practices:**
- Single, meaningful change
- Bold enough to make a difference
- True to the hypothesis
**What to vary:**
Headlines/Copy:
- Message angle
- Value proposition
- Specificity level
- Tone/voice
Visual Design:
- Layout structure
- Color and contrast
- Image selection
- Visual hierarchy
CTA:
- Button copy
- Size/prominence
- Placement
- Number of CTAs
Content:
- Information included
- Order of information
- Amount of content
- Social proof type
### Documenting Variants
```
Control (A):
- Screenshot
- Description of current state
Variant (B):
- Screenshot or mockup
- Specific changes made
- Hypothesis for why this will win
```
---
## Traffic Allocation
### Standard Split
- 50/50 for A/B test
- Equal split for multiple variants
### Conservative Rollout
- 90/10 or 80/20 initially
- Limits risk of bad variant
- Longer to reach significance
### Ramping
- Start small, increase over time
- Good for technical risk mitigation
- Most tools support this
### Considerations
- Consistency: Users see same variant on return
- Segment sizes: Ensure segments are large enough
- Time of day/week: Balanced exposure
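Consistency on return visits is usually achieved by hashing a stable user ID rather than re-randomizing per page view. A minimal sketch of deterministic bucketing (the function name and weighting scheme are illustrative, not from any specific tool):

```python
import hashlib

def assign_variant(user_id, test_name,
                   variants=("control", "variant"), weights=(0.5, 0.5)):
    """Deterministic bucketing: the same user always gets the same variant."""
    digest = hashlib.sha256(f"{test_name}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform in [0, 1]
    cumulative = 0.0
    for variant, weight in zip(variants, weights):
        cumulative += weight
        if bucket < cumulative:
            return variant
    return variants[-1]

print(assign_variant("user-42", "homepage-cta"))  # stable across sessions
```

Salting the hash with the test name keeps assignments independent across concurrent tests.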
---
## Implementation Approaches
### Client-Side Testing
**Tools**: PostHog, Optimizely, VWO, custom
**How it works**:
- JavaScript modifies page after load
- Quick to implement
- Can cause flicker
**Best for**:
- Marketing pages
- Copy/visual changes
- Quick iteration
### Server-Side Testing
**Tools**: PostHog, LaunchDarkly, Split, custom
**How it works**:
- Variant determined before page renders
- No flicker
- Requires development work
**Best for**:
- Product features
- Complex changes
- Performance-sensitive pages
### Feature Flags
- Binary on/off (not true A/B)
- Good for rollouts
- Can convert to A/B with percentage split
---
## Running the Test
### Pre-Launch Checklist
- [ ] Hypothesis documented
- [ ] Primary metric defined
- [ ] Sample size calculated
- [ ] Test duration estimated
- [ ] Variants implemented correctly
- [ ] Tracking verified
- [ ] QA completed on all variants
- [ ] Stakeholders informed
### During the Test
**DO:**
- Monitor for technical issues
- Check segment quality
- Document any external factors
**DON'T:**
- Peek at results and stop early
- Make changes to variants
- Add traffic from new sources
- End early because you "know" the answer
### Peeking Problem
Looking at results before reaching sample size and stopping when you see significance leads to:
- False positives
- Inflated effect sizes
- Wrong decisions
**Solutions:**
- Pre-commit to sample size and stick to it
- Use sequential testing if you must peek
- Trust the process
---
## Analyzing Results
### Statistical Significance
- 95% confidence = p-value < 0.05
- Means: if there were truly no difference, a result this extreme would occur less than 5% of the time
- Not a guarantee—just a threshold
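That p-value typically comes from a two-proportion z-test. A self-contained sketch using the normal approximation (your testing tool's exact method may differ):

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for control vs. variant conversion counts."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# 500/10,000 control vs. 580/10,000 variant (illustrative numbers)
p = two_proportion_p_value(500, 10_000, 580, 10_000)
print(f"p = {p:.4f}")  # below 0.05, so significant at 95%
```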
### Practical Significance
Statistical ≠ Practical
- Is the effect size meaningful for business?
- Is it worth the implementation cost?
- Is it sustainable over time?
### What to Look At
1. **Did you reach sample size?**
- If not, result is preliminary
2. **Is it statistically significant?**
- Check confidence intervals
- Check p-value
3. **Is the effect size meaningful?**
- Compare to your MDE
- Project business impact
4. **Are secondary metrics consistent?**
- Do they support the primary?
- Any unexpected effects?
5. **Any guardrail concerns?**
- Did anything get worse?
- Long-term risks?
6. **Segment differences?**
- Mobile vs. desktop?
- New vs. returning?
- Traffic source?
### Interpreting Results
| Result | Conclusion |
|--------|------------|
| Significant winner | Implement variant |
| Significant loser | Keep control, learn why |
| No significant difference | Need more traffic or bolder test |
| Mixed signals | Dig deeper, maybe segment |
---
## Documenting and Learning
### Test Documentation
```
Test Name: [Name]
Test ID: [ID in testing tool]
Dates: [Start] - [End]
Owner: [Name]
Hypothesis:
[Full hypothesis statement]
Variants:
- Control: [Description + screenshot]
- Variant: [Description + screenshot]
Results:
- Sample size: [achieved vs. target]
- Primary metric: [control] vs. [variant] ([% change], [confidence])
- Secondary metrics: [summary]
- Segment insights: [notable differences]
Decision: [Winner/Loser/Inconclusive]
Action: [What we're doing]
Learnings:
[What we learned, what to test next]
```
### Building a Learning Repository
- Central location for all tests
- Searchable by page, element, outcome
- Prevents re-running failed tests
- Builds institutional knowledge
---
## Output Format
### Test Plan Document
```
# A/B Test: [Name]
## Hypothesis
[Full hypothesis using framework]
## Test Design
- Type: A/B / A/B/n / MVT
- Duration: X weeks
- Sample size: X per variant
- Traffic allocation: 50/50
## Variants
[Control and variant descriptions with visuals]
## Metrics
- Primary: [metric and definition]
- Secondary: [list]
- Guardrails: [list]
## Implementation
- Method: Client-side / Server-side
- Tool: [Tool name]
- Dev requirements: [If any]
## Analysis Plan
- Success criteria: [What constitutes a win]
- Segment analysis: [Planned segments]
```
### Results Summary
When test is complete
### Recommendations
Next steps based on results
---
## Common Mistakes
### Test Design
- Testing too small a change (undetectable)
- Testing too many things (can't isolate)
- No clear hypothesis
- Wrong audience
### Execution
- Stopping early
- Changing things mid-test
- Not checking implementation
- Uneven traffic allocation
### Analysis
- Ignoring confidence intervals
- Cherry-picking segments
- Over-interpreting inconclusive results
- Not considering practical significance
---
## Questions to Ask
If you need more context:
1. What's your current conversion rate?
2. How much traffic does this page get?
3. What change are you considering and why?
4. What's the smallest improvement worth detecting?
5. What tools do you have for testing?
6. Have you tested this area before?
---
## Related Skills
- **page-cro**: For generating test ideas based on CRO principles
- **analytics-tracking**: For setting up test measurement
- **copywriting**: For creating variant copy
---
## Skill Companion Files
> Additional files collected from the skill directory layout.
### references/statistical-guide.md
```markdown
# A/B Testing Statistical Guide
Based on research from [Evan Miller](https://www.evanmiller.org/ab-testing/sample-size.html), [CXL](https://cxl.com/ab-test-calculator/), [Optimizely](https://www.optimizely.com/sample-size-calculator/).
---
## Statistical Foundations
### Key Concepts
**Statistical Significance (Confidence Level)**
- Probability threshold for ruling out chance under the no-difference (null) hypothesis
- Standard: 95% (alpha = 0.05)
- Conservative: 99% (alpha = 0.01)
- Aggressive: 90% (alpha = 0.10)
**Statistical Power**
- Probability of detecting a real effect
- Standard: 80% (beta = 0.20)
- Conservative: 90% (beta = 0.10)
- Minimum: 70% (beta = 0.30)
**Minimum Detectable Effect (MDE)**
- Smallest improvement you want to detect
- Lower MDE = larger sample needed
- Realistic MDE = 10-20% for most tests
---
## Sample Size Quick Reference
### Sample Size per Variant (95% confidence, 80% power)
| Baseline Rate | 5% MDE | 10% MDE | 15% MDE | 20% MDE | 30% MDE |
|---------------|--------|---------|---------|---------|---------|
| 1% | 630K | 157K | 70K | 39K | 17K |
| 2% | 315K | 78K | 35K | 19K | 8.7K |
| 3% | 207K | 52K | 23K | 13K | 5.8K |
| 5% | 125K | 31K | 14K | 7.8K | 3.5K |
| 10% | 62K | 15K | 6.9K | 3.9K | 1.7K |
| 15% | 41K | 10K | 4.6K | 2.6K | 1.1K |
| 20% | 31K | 7.8K | 3.5K | 1.9K | 860 |
| 30% | 20K | 5.1K | 2.3K | 1.3K | 570 |
**Formula:** n = 2 × (z_α/2 + z_β)² × p(1-p) / δ², where p is the baseline rate and δ is the *absolute* MDE (baseline × relative MDE)
---
## Test Duration Guidelines
### Minimum Duration Rules
1. **Absolute minimum:** 7 days (capture weekly patterns)
2. **Recommended minimum:** 14 days
3. **Maximum:** 6-8 weeks (cookie churn and sample pollution erode data quality)
### Duration Formula
```
Days = (Sample Size per Variant × Number of Variants) / Daily Traffic to Test
```
### Quick Duration Estimates
| Daily Visitors | Total Sample Needed | Duration |
|----------------|---------------|----------|
| 1,000 | 10,000 | 10-14 days |
| 5,000 | 10,000 | 3-4 days* |
| 10,000 | 10,000 | 2-3 days* |
| 1,000 | 50,000 | 50-70 days |
| 5,000 | 50,000 | 10-14 days |
| 10,000 | 50,000 | 5-7 days* |
*Still run minimum 7 days for business cycle validity
---
## Test Types
### A/B Test (Split Test)
- 2 variants (Control + Test)
- 50/50 traffic split
- Simplest to analyze
- Best for clear hypotheses
### A/B/n Test
- 3+ variants
- Traffic split evenly
- Requires more sample
- Good for comparing multiple ideas
### Multivariate Test (MVT)
- Multiple elements changed simultaneously
- Tests element combinations
- Requires MUCH larger sample
- Only for high-traffic sites
### Sample Size for Multiple Variants
```
Adjusted α = α / number of comparisons   (Bonferroni correction)
Sample per variant = recompute with the adjusted α (stricter α → larger sample)
```
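In code the correction itself is just a division, but note how the stricter alpha raises the z-score, and therefore the sample, each comparison needs (sketch):

```python
from statistics import NormalDist

def bonferroni_alpha(alpha, n_comparisons):
    """Adjusted significance level when several variants compare to control."""
    return alpha / n_comparisons

# A/B/C/D test: 3 variants each compared against control
adjusted = bonferroni_alpha(0.05, 3)
print(round(adjusted, 4))  # each comparison now needs ~98.3% confidence

# stricter alpha -> larger critical z -> larger required sample per variant
print(round(NormalDist().inv_cdf(1 - adjusted / 2), 2))
```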
---
## Hypothesis Framework
### PICO Format
**P** - Population (who are you testing?)
**I** - Intervention (what are you changing?)
**C** - Comparison (what's the control?)
**O** - Outcome (what's the primary metric?)
### Good Hypothesis Template
"If we [change], then [metric] will [improve/decrease] by [amount], because [rationale]."
**Example:**
"If we change the CTA from 'Submit' to 'Get My Free Report', then form submissions will increase by 15%, because benefit-focused CTAs reduce friction and increase motivation."
### Hypothesis Prioritization (ICE)
| Factor | Score 1-10 |
|--------|------------|
| **I**mpact | Expected lift × audience size |
| **C**onfidence | Evidence supporting hypothesis |
| **E**ase | Implementation effort |
Priority = (I + C + E) / 3
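Scoring a backlog with ICE is easy to automate. A sketch (the hypothesis names are illustrative):

```python
def ice_score(impact, confidence, ease):
    """Simple ICE average on 1-10 scales."""
    return (impact + confidence + ease) / 3

backlog = [
    {"name": "Benefit-focused CTA copy", "impact": 7, "confidence": 8, "ease": 9},
    {"name": "Redesigned pricing table", "impact": 9, "confidence": 5, "ease": 3},
    {"name": "Add testimonial above fold", "impact": 5, "confidence": 6, "ease": 8},
]

# highest-priority tests first
for test in sorted(backlog, reverse=True,
                   key=lambda t: ice_score(t["impact"], t["confidence"], t["ease"])):
    score = ice_score(test["impact"], test["confidence"], test["ease"])
    print(f"{score:.1f}  {test['name']}")
```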
---
## Common Pitfalls
### Peeking Problem
**Issue:** Checking results early and stopping when significant
**Impact:** 30%+ false positive rate
**Solution:**
- Pre-commit to sample size
- Use sequential testing if must peek
- Set decision rules upfront
### Multiple Comparison Problem
**Issue:** Testing many variants/metrics
**Impact:** Inflated false positive rate
**Solution:**
- Bonferroni correction (α / n tests)
- Designate ONE primary metric
- Pre-register secondary metrics
### Novelty/Primacy Effect
**Issue:** New = initial spike, then decay
**Impact:** False positives from early results
**Solution:**
- Run full duration
- Segment new vs returning users
- Monitor for trend changes
### Simpson's Paradox
**Issue:** Aggregate result hides segment differences
**Impact:** Wrong winner for key segments
**Solution:**
- Segment analysis
- Check for interaction effects
- Consider stratified randomization
---
## Analysis Framework
### Step 1: Check Validity
- [ ] Sample size reached?
- [ ] Ran minimum 7 days?
- [ ] Sample Ratio Mismatch (SRM) check?
- [ ] Any technical issues during test?
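The SRM check in the list above asks whether the observed traffic split plausibly came from the intended allocation; with a 50/50 split it reduces to a binomial z-test. A sketch (the 0.01 flag threshold is a common convention, not from the original):

```python
from math import sqrt
from statistics import NormalDist

def srm_p_value(n_control, n_variant):
    """p-value that an observed split came from a true 50/50 allocation."""
    total = n_control + n_variant
    z = (n_control - total / 2) / sqrt(total * 0.25)
    return 2 * (1 - NormalDist().cdf(abs(z)))

# 5,000 vs. 5,300 exposures: too lopsided for 50/50 -> investigate first
p = srm_p_value(5_000, 5_300)
print("SRM suspected" if p < 0.01 else "split looks fine")
```

A failed SRM check usually indicates a bucketing or tracking bug, so fix that before trusting any metric in the test.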
### Step 2: Primary Metric
- Conversion rate for each variant
- Lift (% change vs control)
- Confidence interval
- Statistical significance (p-value)
### Step 3: Secondary Metrics
- Check for guardrail metric degradation
- Look for trade-offs
- Consider per-segment impact
### Step 4: Segment Analysis
| Segment | Control | Variant | Lift | Significant? |
|---------|---------|---------|------|--------------|
| All Users | 5.0% | 5.5% | +10% | Yes (95%) |
| New Users | 4.0% | 4.8% | +20% | Yes (95%) |
| Returning | 7.0% | 6.5% | -7% | No (75%) |
| Mobile | 4.5% | 5.2% | +16% | Yes (92%) |
| Desktop | 5.5% | 5.8% | +5% | No (70%) |
### Step 5: Decision
| Result | Action |
|--------|--------|
| Winner (≥95% confidence) | Ship variant |
| Loser (≤5% chance variant wins) | Keep control |
| Inconclusive | Increase sample or accept uncertainty |
| Mixed (segment differences) | Consider personalization |
---
## Test Velocity Optimization
### Increasing Test Speed
1. **Raise MDE** - Accept detecting only larger effects
2. **Lower confidence** - Accept more risk (90% vs 95%)
3. **Test high-traffic areas** - Faster sample accumulation
4. **Prioritize ruthlessly** - Fewer, higher-impact tests
### Testing Cadence
| Traffic Level | Suggested Cadence |
|---------------|-------------------|
| <10K monthly | 1-2 tests/quarter |
| 10K-100K monthly | 1-2 tests/month |
| 100K-1M monthly | 2-4 tests/month |
| >1M monthly | Continuous testing |
---
## Sample Size Calculators
### Recommended Tools
1. **Evan Miller** - evanmiller.org/ab-testing/sample-size.html
2. **Optimizely** - optimizely.com/sample-size-calculator
3. **Statsig** - statsig.com/calculator
4. **VWO** - vwo.com/tools/duration-calculator
5. **CXL** - cxl.com/ab-test-calculator
### Calculator Inputs
| Input | What to Enter |
|-------|---------------|
| Baseline rate | Current conversion rate |
| MDE | Minimum improvement worth detecting |
| Significance | 95% (standard) |
| Power | 80% (standard) |
| Test type | One-tailed (variant better) or two-tailed |
| Traffic split | Usually 50/50 |
---
## Documentation Template
### Test Brief
```markdown
## Test: [Name]
**Hypothesis:** If we [change], then [metric] will [outcome] because [reason].
**Primary Metric:** [Metric name]
**Secondary Metrics:** [List]
**Guardrail Metrics:** [List]
**Audience:** [Who]
**Traffic Allocation:** [X%] Control / [Y%] Variant
**Duration:** [Start] to [End]
**Sample Size Target:** [Number per variant]
**Implementation Notes:** [Technical details]
```
### Test Results
```markdown
## Results: [Test Name]
**Duration:** [Start] to [End]
**Sample Size:** [Control: X, Variant: Y]
### Primary Metric: [Name]
| Variant | Rate | Lift | Confidence |
|---------|------|------|------------|
| Control | X% | - | - |
| Variant | Y% | +Z% | W% |
**Decision:** [Ship / Kill / Iterate / Inconclusive]
**Learnings:** [What we learned]
**Next Steps:** [Follow-up tests or actions]
```
```