SkillHub Club · Build Mobile · Full Stack · Mobile

app-store-optimization

App Store Optimization toolkit for researching keywords, optimizing metadata, and tracking mobile app performance on Apple App Store and Google Play Store.

Packaged view

This page reorganizes the original catalog entry around fit, installability, and workflow context first. The original raw source lives below.

Stars: 3,076
Hot score: 99
Updated: March 20, 2026
Overall rating: 4.0
Composite score: 4.0
Best-practice grade: C (67.6)

Install command

npx @skill-hub/cli install openclaw-skills-app-store-optimization

Repository

openclaw/skills

Skill path: skills/alirezarezvani/app-store-optimization



Best for

Primary workflow: Build Mobile.

Technical facets: Full Stack, Mobile.

Target audience: everyone.

License: Unknown.

Original source

Catalog source: SkillHub Club.

Repository owner: openclaw.

This is a mirrored public skill entry. Review the repository before installing it into production workflows.

What it helps with

  • Install app-store-optimization into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
  • Review https://github.com/openclaw/skills before adding app-store-optimization to shared team environments
  • Use app-store-optimization for keyword research, metadata optimization, competitor analysis, launch planning, and A/B testing workflows

Works across

Claude Code · Codex CLI · Gemini CLI · OpenCode

Favorites: 0.

Sub-skills: 0.

Aggregator: No.

Original source / Raw SKILL.md

---
name: app-store-optimization
description: App Store Optimization toolkit for researching keywords, optimizing metadata, and tracking mobile app performance on Apple App Store and Google Play Store.
triggers:
  - ASO
  - app store optimization
  - app store ranking
  - app keywords
  - app metadata
  - play store optimization
  - app store listing
  - improve app rankings
  - app visibility
  - app store SEO
  - mobile app marketing
  - app conversion rate
---

# App Store Optimization (ASO)

ASO tools for researching keywords, optimizing metadata, analyzing competitors, and improving app store visibility on Apple App Store and Google Play Store.

---

## Table of Contents

- [Keyword Research Workflow](#keyword-research-workflow)
- [Metadata Optimization Workflow](#metadata-optimization-workflow)
- [Competitor Analysis Workflow](#competitor-analysis-workflow)
- [App Launch Workflow](#app-launch-workflow)
- [A/B Testing Workflow](#ab-testing-workflow)
- [Before/After Examples](#beforeafter-examples)
- [Tools and References](#tools-and-references)

---

## Keyword Research Workflow

Discover and evaluate keywords that drive app store visibility.

### Workflow: Conduct Keyword Research

1. Define target audience and core app functions:
   - Primary use case (what problem does the app solve)
   - Target user demographics
   - Competitive category
2. Generate seed keywords from:
   - App features and benefits
   - User language (not developer terminology)
   - App store autocomplete suggestions
3. Expand keyword list using:
   - Modifiers (free, best, simple)
   - Actions (create, track, organize)
   - Audiences (for students, for teams, for business)
4. Evaluate each keyword:
   - Search volume (estimated monthly searches)
   - Competition (number and quality of ranking apps)
   - Relevance (alignment with app function)
5. Score and prioritize keywords:
   - Primary: Title and keyword field (iOS)
   - Secondary: Subtitle and short description
   - Tertiary: Full description only
6. Map keywords to metadata locations
7. Document keyword strategy for tracking
8. **Validation:** Keywords scored; placement mapped; no competitor brand names included; no plurals in iOS keyword field
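Steps 2-3 can be sketched as a small expansion helper; the seed, modifier, action, and audience lists below are illustrative, not a fixed vocabulary:

```python
def expand_keywords(seeds, modifiers=(), actions=(), audiences=()):
    """Expand seed keywords with modifier, action, and audience variants (step 3)."""
    expanded = set(seeds)
    for seed in seeds:
        expanded.update(f"{m} {seed}" for m in modifiers)       # "free todo"
        expanded.update(f"{a} {seed}" for a in actions)         # "create todo"
        expanded.update(f"{seed} {aud}" for aud in audiences)   # "todo for students"
    return sorted(expanded)

candidates = expand_keywords(
    seeds=["todo", "task manager"],
    modifiers=["free", "best", "simple"],
    actions=["create", "track"],
    audiences=["for students", "for teams"],
)
# 16 candidates, ready for the evaluation step
```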

### Keyword Evaluation Criteria

| Factor | Weight | High Score Indicators |
|--------|--------|----------------------|
| Relevance | 35% | Describes core app function |
| Volume | 25% | 10,000+ monthly searches |
| Competition | 25% | Top 10 apps have <4.5 avg rating |
| Conversion | 15% | Transactional intent ("best X app") |
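The weight column maps directly onto a composite score. A minimal sketch, assuming the four 0-100 sub-scores come from manual evaluation or a third-party tool:

```python
WEIGHTS = {"relevance": 0.35, "volume": 0.25, "competition": 0.25, "conversion": 0.15}

def keyword_score(relevance, volume, competition, conversion):
    """Weighted composite on a 0-100 scale. `competition` must already be
    inverted so that a high score means weak competition."""
    subscores = {"relevance": relevance, "volume": volume,
                 "competition": competition, "conversion": conversion}
    return sum(WEIGHTS[k] * subscores[k] for k in WEIGHTS)

# Worksheet example from the keyword research guide:
# relevance 95, volume 72, competition 45, conversion 85 -> ~75.3
```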

### Keyword Placement Priority

| Location | Search Weight | Character Limit |
|----------|---------------|-----------------|
| App Title | Highest | 30 (iOS) / 30 (Android) |
| Subtitle (iOS) | High | 30 |
| Keyword Field (iOS) | High | 100 |
| Short Description (Android) | High | 80 |
| Full Description | Medium | 4,000 |

See: [references/keyword-research-guide.md](references/keyword-research-guide.md)

---

## Metadata Optimization Workflow

Optimize app store listing elements for search ranking and conversion.

### Workflow: Optimize App Metadata

1. Audit current metadata against platform limits:
   - Title character count and keyword presence
   - Subtitle/short description usage
   - Keyword field efficiency (iOS)
   - Description keyword density
2. Optimize title following formula:
   ```
   [Brand Name] - [Primary Keyword] [Secondary Keyword]
   ```
3. Write subtitle (iOS) or short description (Android):
   - Focus on primary benefit
   - Include secondary keyword
   - Use action verbs
4. Optimize keyword field (iOS only):
   - Remove duplicates from title
   - Remove plurals (Apple indexes both forms)
   - No spaces after commas
   - Prioritize by score
5. Rewrite full description:
   - Hook paragraph with value proposition
   - Feature bullets with keywords
   - Social proof section
   - Call to action
6. Validate character counts for each field
7. Calculate keyword density (target 2-3% primary)
8. **Validation:** All fields within character limits; primary keyword in title; no keyword stuffing (>5%); natural language preserved
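Steps 1, 6, and 7 lend themselves to automation. A minimal validator sketch using this skill's platform limits (field names are illustrative; note that Google reduced the Play title limit to 30 characters in 2021):

```python
# Per-field character limits (iOS: title/subtitle/keywords; Android: title/short_description).
LIMITS = {
    "ios": {"title": 30, "subtitle": 30, "keywords": 100, "description": 4000},
    "android": {"title": 30, "short_description": 80, "description": 4000},
}

def validate_metadata(platform, fields):
    """Return over-limit problems for a dict of field name -> text (step 6)."""
    problems = []
    for name, text in fields.items():
        limit = LIMITS[platform].get(name)
        if limit is not None and len(text) > limit:
            problems.append(f"{name}: {len(text)}/{limit} characters")
    return problems

def keyword_density(description, keyword):
    """Fraction of description words occupied by the keyword phrase (step 7)."""
    words = description.lower().split()
    kw = keyword.lower().split()
    hits = sum(words[i:i + len(kw)] == kw for i in range(len(words) - len(kw) + 1))
    return hits * len(kw) / len(words) if words else 0.0
```

A density above 0.05 for any single keyword is the stuffing threshold flagged in the validation step.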

### Platform Character Limits

| Field | Apple App Store | Google Play Store |
|-------|-----------------|-------------------|
| Title | 30 characters | 30 characters |
| Subtitle | 30 characters | N/A |
| Short Description | N/A | 80 characters |
| Keywords | 100 characters | N/A |
| Promotional Text | 170 characters | N/A |
| Full Description | 4,000 characters | 4,000 characters |
| What's New | 4,000 characters | 500 characters |

### Description Structure

```
PARAGRAPH 1: Hook (50-100 words)
├── Address user pain point
├── State main value proposition
└── Include primary keyword

PARAGRAPH 2-3: Features (100-150 words)
├── Top 5 features with benefits
├── Bullet points for scannability
└── Secondary keywords naturally integrated

PARAGRAPH 4: Social Proof (50-75 words)
├── Download count or rating
├── Press mentions or awards
└── Summary of user testimonials

PARAGRAPH 5: Call to Action (25-50 words)
├── Clear next step
└── Reassurance (free trial, no signup)
```

See: [references/platform-requirements.md](references/platform-requirements.md)

---

## Competitor Analysis Workflow

Analyze top competitors to identify keyword gaps and positioning opportunities.

### Workflow: Analyze Competitor ASO Strategy

1. Identify top 10 competitors:
   - Direct competitors (same core function)
   - Indirect competitors (overlapping audience)
   - Category leaders (top downloads)
2. Extract competitor keywords from:
   - App titles and subtitles
   - First 100 words of descriptions
   - Visible metadata patterns
3. Build competitor keyword matrix:
   - Map which keywords each competitor targets
   - Calculate coverage percentage per keyword
4. Identify keyword gaps:
   - Keywords with <40% competitor coverage
   - High volume terms competitors miss
   - Long-tail opportunities
5. Analyze competitor visual assets:
   - Icon design patterns
   - Screenshot messaging and style
   - Video presence and quality
6. Compare ratings and review patterns:
   - Average rating by competitor
   - Common praise themes
   - Common complaint themes
7. Document positioning opportunities
8. **Validation:** 10+ competitors analyzed; keyword matrix complete; gaps identified with volume estimates; visual audit documented

### Competitor Analysis Matrix

| Analysis Area | Data Points |
|---------------|-------------|
| Keywords | Title keywords, description frequency |
| Metadata | Character utilization, keyword density |
| Visuals | Icon style, screenshot count/style |
| Ratings | Average rating, total count, velocity |
| Reviews | Top praise, top complaints |

### Gap Analysis Template

| Opportunity Type | Example | Action |
|------------------|---------|--------|
| Keyword gap | "habit tracker" (40% coverage) | Add to keyword field |
| Feature gap | Competitor lacks widget | Highlight in screenshots |
| Visual gap | No videos in top 5 | Create app preview |
| Messaging gap | None mention "free" | Test free positioning |
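The coverage numbers behind the matrix and gap template reduce to a simple calculation; a sketch with a hypothetical three-competitor matrix:

```python
def keyword_coverage(matrix):
    """Map each keyword to the fraction of competitors targeting it (step 3).
    `matrix` maps competitor name -> set of keywords it targets."""
    keywords = set().union(*matrix.values())
    return {kw: sum(kw in kws for kws in matrix.values()) / len(matrix)
            for kw in keywords}

def keyword_gaps(coverage, threshold=0.4):
    """Keywords under the coverage threshold: differentiation candidates (step 4)."""
    return sorted(kw for kw, cov in coverage.items() if cov < threshold)

matrix = {
    "CompA": {"task manager", "to-do list", "planner"},
    "CompB": {"task manager", "to-do list"},
    "CompC": {"task manager", "habit"},
}
coverage = keyword_coverage(matrix)   # "task manager" -> 1.0, "habit" -> ~0.33
gaps = keyword_gaps(coverage)         # low-coverage differentiation candidates
```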

---

## App Launch Workflow

Execute a structured launch for maximum initial visibility.

### Workflow: Launch App to Stores

1. Complete pre-launch preparation (4 weeks before):
   - Finalize keywords and metadata
   - Prepare all visual assets
   - Set up analytics (Firebase, Mixpanel)
   - Build press kit and media list
2. Submit for review (2 weeks before):
   - Complete all store requirements
   - Verify compliance with guidelines
   - Prepare launch communications
3. Configure post-launch systems:
   - Set up review monitoring
   - Prepare response templates
   - Configure rating prompt timing
4. Execute launch day:
   - Verify app is live in both stores
   - Announce across all channels
   - Begin review response cycle
5. Monitor initial performance (days 1-7):
   - Track download velocity hourly
   - Monitor reviews and respond within 24 hours
   - Document any issues for quick fixes
6. Conduct 7-day retrospective:
   - Compare performance to projections
   - Identify quick optimization wins
   - Plan first metadata update
7. Schedule first update (2 weeks post-launch)
8. **Validation:** App live in stores; analytics tracking; review responses within 24h; download velocity documented; first update scheduled

### Pre-Launch Checklist

| Category | Items |
|----------|-------|
| Metadata | Title, subtitle, description, keywords |
| Visual Assets | Icon, screenshots (all sizes), video |
| Compliance | Age rating, privacy policy, content rights |
| Technical | App binary, signing certificates |
| Analytics | SDK integration, event tracking |
| Marketing | Press kit, social content, email ready |

### Launch Timing Considerations

| Factor | Recommendation |
|--------|----------------|
| Day of week | Tuesday-Wednesday (avoid weekends) |
| Time of day | Morning in target market timezone |
| Seasonal | Align with relevant category seasons |
| Competition | Avoid major competitor launch dates |

See: [references/aso-best-practices.md](references/aso-best-practices.md)

---

## A/B Testing Workflow

Test metadata and visual elements to improve conversion rates.

### Workflow: Run A/B Test

1. Select test element (prioritize by impact):
   - Icon (highest impact)
   - Screenshot 1 (high impact)
   - Title (high impact)
   - Short description (medium impact)
2. Form hypothesis:
   ```
   If we [change], then [metric] will [improve/increase] by [amount]
   because [rationale].
   ```
3. Create variants:
   - Control: Current version
   - Treatment: Single variable change
4. Calculate required sample size:
   - Baseline conversion rate
   - Minimum detectable effect (usually 5%)
   - Statistical significance (95%)
5. Launch test:
   - Apple: Use Product Page Optimization
   - Android: Use Store Listing Experiments
6. Run test for minimum duration:
   - At least 7 days
   - Until statistical significance reached
7. Analyze results:
   - Compare conversion rates
   - Check statistical significance
   - Document learnings
8. **Validation:** Single variable tested; sample size sufficient; significance reached (95%); results documented; winner implemented

### A/B Test Prioritization

| Element | Conversion Impact | Test Complexity |
|---------|-------------------|-----------------|
| App Icon | 10-25% lift possible | Medium (design needed) |
| Screenshot 1 | 15-35% lift possible | Medium |
| Title | 5-15% lift possible | Low |
| Short Description | 5-10% lift possible | Low |
| Video | 10-20% lift possible | High |

### Sample Size Quick Reference

| Baseline CVR | Impressions Needed (per variant) |
|--------------|----------------------------------|
| 1% | 31,000 |
| 2% | 15,500 |
| 5% | 6,200 |
| 10% | 3,100 |
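The rows above all work out to roughly 310 expected conversions per variant (impressions × CVR ≈ 310), a rule of thumb rather than a fixed-power calculation. For exact numbers at a chosen lift, the standard two-proportion formula can be sketched as follows (95% significance and 80% power assumed as defaults):

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variant(baseline_cvr, relative_lift, alpha=0.05, power=0.8):
    """Impressions per variant to detect `relative_lift` over `baseline_cvr`
    with a two-sided test, using the normal approximation."""
    p1 = baseline_cvr
    p2 = baseline_cvr * (1 + relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_power = NormalDist().inv_cdf(power)          # ~0.84 for power = 0.8
    n = ((z_alpha + z_power) ** 2
         * (p1 * (1 - p1) + p2 * (1 - p2))
         / (p2 - p1) ** 2)
    return ceil(n)

# Detecting a 10% relative lift on a 5% baseline CVR needs ~31,000
# impressions per variant; smaller lifts or lower baselines need far more.
```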

### Test Documentation Template

```
TEST ID: ASO-2025-001
ELEMENT: App Icon
HYPOTHESIS: A bolder color icon will increase conversion by 10%
START DATE: [Date]
END DATE: [Date]

RESULTS:
├── Control CVR: 4.2%
├── Treatment CVR: 4.8%
├── Lift: +14.3%
├── Significance: 97%
└── Decision: Implement treatment

LEARNINGS:
- Bold colors outperform muted tones in this category
- Apply to screenshot backgrounds for next test
```

---

## Before/After Examples

### Title Optimization

**Productivity App:**

| Version | Title | Analysis |
|---------|-------|----------|
| Before | "MyTasks" | No keywords, brand only (8 chars) |
| After | "MyTasks - Todo List & Planner" | Primary + secondary keywords (29 chars) |

**Fitness App:**

| Version | Title | Analysis |
|---------|-------|----------|
| Before | "FitTrack Pro" | Generic modifier (12 chars) |
| After | "FitTrack: Workout Log & Gym" | Category keywords (27 chars) |

### Subtitle Optimization (iOS)

| Version | Subtitle | Analysis |
|---------|----------|----------|
| Before | "Get Things Done" | Vague, no keywords |
| After | "Daily Task Manager & Planner" | Two keywords, benefit clear |

### Keyword Field Optimization (iOS)

**Before (Inefficient - 70 chars, 5 keywords):**
```
task manager, todo list, productivity app, daily planner, reminder app
```

**After (Optimized - 98 chars, 14 keywords):**
```
task,todo,checklist,reminder,organize,daily,planner,schedule,deadline,goals,habit,widget,sync,team
```

**Improvements:**
- Removed spaces after commas (reclaims 4 characters)
- Removed duplicates (task manager → task)
- Removed plurals (reminders → reminder)
- Removed words in title
- Added more relevant keywords
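These cleanup rules can be sketched as a helper. The plural handling here is deliberately naive (drops a trailing 's'), so review its output by hand before submitting:

```python
def build_keyword_field(keywords, title="", max_chars=100):
    """Build an iOS keyword field: split phrases into words, drop words already
    in the title, keep singular forms only, comma-join without spaces, and stop
    at the 100-character limit."""
    title_words = set(title.lower().replace("-", " ").split())
    seen, kept = set(), []
    for phrase in keywords:
        for word in phrase.lower().split():      # "task manager" -> "task", "manager"
            singular = word[:-1] if word.endswith("s") and len(word) > 3 else word
            if singular in title_words or singular in seen:
                continue
            if len(",".join(kept + [singular])) > max_chars:
                return ",".join(kept)
            seen.add(singular)
            kept.append(singular)
    return ",".join(kept)

field = build_keyword_field(
    ["task manager", "todo list", "reminders", "daily planner"],
    title="MyTasks - Todo List",
)
# field == "task,manager,reminder,daily,planner"
```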

### Description Opening

**Before:**
```
MyTasks is a comprehensive task management solution designed
to help busy professionals organize their daily activities
and boost productivity.
```

**After:**
```
Forget missed deadlines. MyTasks keeps every task, reminder,
and project in one place—so you focus on doing, not remembering.
Trusted by 500,000+ professionals.
```

**Improvements:**
- Leads with user pain point
- Specific benefit (not generic "boost productivity")
- Social proof included
- Keywords natural, not stuffed

### Screenshot Caption Evolution

| Version | Caption | Issue |
|---------|---------|-------|
| Before | "Task List Feature" | Feature-focused, passive |
| Better | "Create Task Lists" | Action verb, but still feature |
| Best | "Never Miss a Deadline" | Benefit-focused, emotional |

---

## Tools and References

### Scripts

| Script | Purpose | Usage |
|--------|---------|-------|
| [keyword_analyzer.py](scripts/keyword_analyzer.py) | Analyze keywords for volume and competition | `python keyword_analyzer.py --keywords "todo,task,planner"` |
| [metadata_optimizer.py](scripts/metadata_optimizer.py) | Validate metadata character limits and density | `python metadata_optimizer.py --platform ios --title "App Title"` |
| [competitor_analyzer.py](scripts/competitor_analyzer.py) | Extract and compare competitor keywords | `python competitor_analyzer.py --competitors "App1,App2,App3"` |
| [aso_scorer.py](scripts/aso_scorer.py) | Calculate overall ASO health score | `python aso_scorer.py --app-id com.example.app` |
| [ab_test_planner.py](scripts/ab_test_planner.py) | Plan tests and calculate sample sizes | `python ab_test_planner.py --cvr 0.05 --lift 0.10` |
| [review_analyzer.py](scripts/review_analyzer.py) | Analyze review sentiment and themes | `python review_analyzer.py --app-id com.example.app` |
| [launch_checklist.py](scripts/launch_checklist.py) | Generate platform-specific launch checklists | `python launch_checklist.py --platform ios` |
| [localization_helper.py](scripts/localization_helper.py) | Manage multi-language metadata | `python localization_helper.py --locales "en,es,de,ja"` |

### References

| Document | Content |
|----------|---------|
| [platform-requirements.md](references/platform-requirements.md) | iOS and Android metadata specs, visual asset requirements |
| [aso-best-practices.md](references/aso-best-practices.md) | Optimization strategies, rating management, launch tactics |
| [keyword-research-guide.md](references/keyword-research-guide.md) | Research methodology, evaluation framework, tracking |

### Assets

| Template | Purpose |
|----------|---------|
| [aso-audit-template.md](assets/aso-audit-template.md) | Structured audit checklist for app store listings |

---

## Platform Limitations

### Data Constraints

| Constraint | Impact |
|------------|--------|
| No official keyword volume data | Estimates based on third-party tools |
| Competitor data limited to public info | Cannot see internal metrics |
| Review access limited to public reviews | No access to private feedback |
| Historical data unavailable for new apps | Cannot compare to past performance |

### Platform Behavior

| Platform | Behavior |
|----------|----------|
| iOS | Keyword changes require app submission |
| iOS | Promotional text editable without update |
| Android | Metadata changes index in 1-2 hours |
| Android | No separate keyword field (use description) |
| Both | Algorithm changes without notice |

### When Not to Use This Skill

| Scenario | Alternative |
|----------|-------------|
| Web apps | Use web SEO skills |
| Enterprise apps (not public) | Internal distribution tools |
| Beta/TestFlight only | Focus on feedback, not ASO |
| Paid advertising strategy | Use paid acquisition skills |

---

## Related Skills

| Skill | Integration Point |
|-------|-------------------|
| [content-creator](../content-creator/) | App description copywriting |
| [marketing-demand-acquisition](../marketing-demand-acquisition/) | Launch promotion campaigns |
| [marketing-strategy-pmm](../marketing-strategy-pmm/) | Go-to-market planning |


---

## Referenced Files

> The following files are referenced in this skill and included for context.

### references/keyword-research-guide.md

```markdown
# Keyword Research Guide

Systematic approach to discovering, evaluating, and selecting keywords for app store optimization.

---

## Table of Contents

- [Keyword Research Methodology](#keyword-research-methodology)
- [Keyword Evaluation Framework](#keyword-evaluation-framework)
- [Competitor Keyword Analysis](#competitor-keyword-analysis)
- [Keyword Mapping Strategy](#keyword-mapping-strategy)
- [Keyword Tracking and Iteration](#keyword-tracking-and-iteration)

---

## Keyword Research Methodology

### Phase 1: Seed Keyword Generation

Start by generating initial keyword ideas from multiple sources.

**Source 1: Core App Functions**

List every action or problem the app solves:

```
Example for a task management app:
- Create tasks
- Set reminders
- Track deadlines
- Organize projects
- Collaborate with team
- Plan daily schedule
```

**Source 2: User Language Mapping**

Match developer terminology to user searches:

| Developer Term | User Search Terms |
|----------------|-------------------|
| Task management | todo list, task app, tasks |
| Project organization | project planner, project tracker |
| Deadline tracking | due date reminder, deadline app |
| Time blocking | schedule planner, calendar app |
| GTD methodology | getting things done, productivity system |

**Source 3: App Store Autocomplete**

Type seed keywords into App Store/Play Store search and record suggestions:

```
"todo" → todo list, todo app, todo list app, todolist widget
"task" → task manager, task planner, task list, tasks to do
"remind" → reminder app, reminder, reminders widget, remind me
```

**Source 4: Competitor Analysis**

Extract keywords from top 10 competitors in category (detailed in section below).

### Phase 2: Keyword Expansion

**Expansion Techniques:**

| Technique | Example (seed: "todo") |
|-----------|------------------------|
| Add modifiers | free todo, best todo, simple todo |
| Add actions | make todo list, create todo, organize todo |
| Add platforms | todo app iphone, todo for mac, todo widget |
| Add audiences | todo for students, business todo, family todo |
| Add features | todo with reminders, todo calendar, todo sync |
| Add problems | forgot tasks todo, procrastination todo |

**Keyword Matrix Template:**

| Core Term | Modifier 1 | Modifier 2 | Full Keyword |
|-----------|------------|------------|--------------|
| todo | free | app | free todo app |
| todo | best | iphone | best todo iphone |
| task | manager | simple | simple task manager |
| reminder | daily | widget | daily reminder widget |
| planner | weekly | calendar | weekly planner calendar |

### Phase 3: Keyword Filtering

Remove irrelevant or low-quality keywords:

**Exclusion Criteria:**

| Criterion | Reason | Example |
|-----------|--------|---------|
| Competitor brand names | Policy violation | "todoist alternative" |
| Unrelated categories | Low conversion | "todo games" |
| Plural duplicates (iOS) | Wasted space | "tasks" when "task" exists |
| Single characters | No search value | "to do" vs "todo" |

---

## Keyword Evaluation Framework

### Keyword Scoring Model

Evaluate each keyword on four dimensions:

**1. Search Volume (0-100)**

| Volume Level | Score | Monthly Searches |
|--------------|-------|------------------|
| Very High | 80-100 | 50,000+ |
| High | 60-79 | 10,000-49,999 |
| Medium | 40-59 | 1,000-9,999 |
| Low | 20-39 | 100-999 |
| Very Low | 0-19 | <100 |

**2. Competition (0-100, inverted)**

| Competition | Score | Top 10 App Ratings |
|-------------|-------|-------------------|
| Very Low | 80-100 | Average <4.0 stars |
| Low | 60-79 | Average 4.0-4.2 stars |
| Medium | 40-59 | Average 4.3-4.5 stars |
| High | 20-39 | Average 4.6-4.8 stars |
| Very High | 0-19 | Average 4.9+ stars |

**3. Relevance (0-100)**

| Relevance | Score | Criteria |
|-----------|-------|----------|
| Exact Match | 90-100 | Keyword describes core function |
| Strong Match | 70-89 | Keyword describes major feature |
| Moderate Match | 50-69 | Keyword describes secondary feature |
| Weak Match | 30-49 | Keyword tangentially related |
| No Match | 0-29 | Keyword unrelated to app |

**4. Conversion Potential (0-100)**

| Intent | Score | User Query Type |
|--------|-------|-----------------|
| Transactional | 80-100 | "best [app type]", "[app type] app" |
| Commercial | 60-79 | "free [app type]", "[app type] for [use]" |
| Informational | 40-59 | "how to [action]", "what is [concept]" |
| Navigational | 20-39 | "[brand name]", "[specific app]" |

### Composite Score Calculation

```
Keyword Score = (Volume × 0.25) + (Competition × 0.25) +
                (Relevance × 0.35) + (Conversion × 0.15)
```

**Score Interpretation:**

| Score Range | Priority | Action |
|-------------|----------|--------|
| 80-100 | Primary | Target in title and keyword field |
| 60-79 | Secondary | Include in subtitle/description |
| 40-59 | Tertiary | Use in long description only |
| 0-39 | Deprioritize | Do not target |

### Keyword Evaluation Worksheet

```
KEYWORD EVALUATION

Keyword: "task manager app"
Date: [Date]

SCORES:
├── Search Volume: 72/100 (High - ~25,000/month)
├── Competition: 45/100 (Medium - 4.4 avg rating in top 10)
├── Relevance: 95/100 (Exact match to core function)
└── Conversion: 85/100 (Transactional intent)

COMPOSITE SCORE: 75.3/100

RECOMMENDATION: Secondary Priority
- Include in subtitle or short description
- Not competitive enough for title (dominated by Todoist, Any.do)
- Consider long-tail variant: "simple task manager app"
```

---

## Competitor Keyword Analysis

### Competitor Identification

**Step 1: Direct Competitors**
Apps solving the same problem for the same audience.

**Step 2: Indirect Competitors**
Apps solving related problems or targeting overlapping audiences.

**Step 3: Category Leaders**
Top 10-20 apps by downloads in primary category.

### Competitor Keyword Extraction

**From App Title:**
```
Competitor: "Todoist: To-Do List & Tasks"
Keywords: todoist, to-do list, tasks, to do
```

**From Subtitle (iOS):**
```
Competitor subtitle: "Task Manager & Planner"
Keywords: task manager, planner
```

**From Description (First 100 words):**
Identify frequently used terms:
```
"Todoist is the world's favorite task manager and to-do list app.
Organize work and life, hit your goals, and find productivity..."

Extracted: task manager, to-do list, organize, goals, productivity
```

### Competitor Keyword Matrix

| Keyword | Comp 1 | Comp 2 | Comp 3 | Comp 4 | Comp 5 | Coverage |
|---------|--------|--------|--------|--------|--------|----------|
| task manager | ✓ | ✓ | ✓ | ✓ | ✓ | 100% |
| to-do list | ✓ | ✓ | ✓ | ✓ | | 80% |
| planner | ✓ | ✓ | | ✓ | ✓ | 80% |
| reminder | ✓ | ✓ | ✓ | | | 60% |
| productivity | ✓ | | ✓ | ✓ | | 60% |
| checklist | | ✓ | | ✓ | ✓ | 60% |
| project | ✓ | ✓ | | | | 40% |
| habit | | | ✓ | | ✓ | 40% |

**Analysis:**
- 100% coverage = Highly competitive, essential keyword
- 60-80% coverage = Important category term
- 40% coverage = Potential differentiator
- <40% coverage = Unique opportunity or irrelevant

### Keyword Gap Analysis

Identify keywords competitors miss:

```
KEYWORD GAP ANALYSIS

Underserved Keywords (Low competitor coverage, decent volume):
1. "daily planner widget" - 2/10 competitors, 5,000 searches
2. "task list for teams" - 3/10 competitors, 3,500 searches
3. "todo with calendar sync" - 1/10 competitors, 2,800 searches

Opportunity Assessment:
- "daily planner widget" → Add widget feature, target keyword
- "task list for teams" → Already have feature, update metadata
- "todo with calendar sync" → Feature gap, add to roadmap
```

---

## Keyword Mapping Strategy

### Keyword Placement Map

Assign each keyword to specific metadata locations:

```
KEYWORD PLACEMENT MAP

PRIMARY (Title + Keyword Field):
├── task manager (Score: 82)
├── todo list (Score: 78)
└── planner (Score: 75)

SECONDARY (Subtitle + Short Description):
├── reminder app (Score: 68)
├── daily tasks (Score: 65)
└── organize (Score: 62)

TERTIARY (Full Description):
├── checklist (Score: 55)
├── productivity (Score: 52)
├── schedule (Score: 48)
├── deadline (Score: 45)
└── project management (Score: 42)
```

### iOS Keyword Field Strategy

**100 Character Optimization:**

```
STEP 1: List all target keywords
task,manager,todo,list,planner,reminder,organize,daily,checklist,
productivity,schedule,deadline,project,goals,habit,widget,sync,
team,collaborate,notes,calendar

STEP 2: Remove duplicates from title
Title: "TaskFlow - Todo List Manager"
Remove: task, todo, list, manager

STEP 3: Remove plurals
Keep: reminder (not reminders)
Keep: goal (not goals)

STEP 4: Prioritize by score and fit
Final 100 chars:
planner,reminder,organize,daily,checklist,schedule,deadline,
project,goal,habit,widget,sync,team

Character count: 95/100
```

### Android Description Keyword Integration

**Natural keyword placement in 4,000 characters:**

```
PARAGRAPH 1 (Hook - 300 chars):
Keywords: task manager, todo list, organize
"TaskFlow is the task manager trusted by 2 million users. Create
your perfect todo list and organize everything that matters..."

PARAGRAPH 2 (Features - 800 chars):
Keywords: reminder, checklist, deadline, project
"Set smart reminders that notify you at the right time. Build
checklists for any project. Never miss a deadline with..."

PARAGRAPH 3 (Benefits - 600 chars):
Keywords: productivity, schedule, goals
"Boost your productivity with proven planning methods. Schedule
your day in minutes. Track goals and celebrate..."

PARAGRAPH 4 (Differentiators - 500 chars):
Keywords: widget, sync, team, collaborate
"Beautiful widgets keep tasks visible. Sync across all devices
instantly. Invite your team to collaborate on..."

Total keyword coverage: 14 keywords naturally integrated
```

---

## Keyword Tracking and Iteration

### Ranking Tracking Cadence

| Frequency | Action |
|-----------|--------|
| Daily | Track top 5-10 primary keywords |
| Weekly | Full keyword set review |
| Monthly | Competitor keyword comparison |
| Quarterly | Full keyword research refresh |

### Keyword Performance Metrics

| Metric | Target | Action if Below |
|--------|--------|-----------------|
| Top 10 ranking | 3+ keywords | Increase keyword weight |
| Top 50 ranking | 10+ keywords | Maintain current strategy |
| Ranking velocity | Improving trend | Continue optimization |
| Conversion rate | >5% | Review relevance alignment |

### Iteration Process

**Monthly Keyword Audit:**

```
1. EXPORT current rankings
   - List all tracked keywords
   - Record current position
   - Note 30-day trend (up/down/stable)

2. IDENTIFY opportunities
   - Keywords improving but not top 10
   - Keywords declining from previous position
   - New high-volume keywords in category

3. PRIORITIZE changes
   - Boost: Keywords at position 11-20
   - Maintain: Keywords at position 1-10
   - Replace: Keywords at position 50+ with no improvement

4. IMPLEMENT updates
   - Adjust keyword field (iOS)
   - Update description (Android)
   - Modify subtitle if needed

5. DOCUMENT changes
   - Record what changed and why
   - Set reminder for 2-week check-in
```

### Keyword Testing Log Template

```
KEYWORD TEST LOG

Test ID: KW-2025-001
Date Started: [Date]
Keywords Changed:
  - Added: "habit tracker" (replacing "goals app")
  - Added: "daily routine" (replacing "schedule planner")

Rationale:
- "habit tracker" has 3x volume of "goals app"
- "daily routine" trending up 40% in category

Baseline Rankings:
- "habit tracker": Not ranked
- "daily routine": Position 87

30-Day Results:
- "habit tracker": Position 34 (+53)
- "daily routine": Position 28 (+59)

Conclusion: Test successful - retain new keywords
Next Action: Target subtitle position for "habit tracker"
```

```

### references/platform-requirements.md

```markdown
# Platform Requirements Reference

Technical specifications and metadata requirements for Apple App Store and Google Play Store.

---

## Table of Contents

- [Apple App Store Requirements](#apple-app-store-requirements)
- [Google Play Store Requirements](#google-play-store-requirements)
- [Visual Asset Specifications](#visual-asset-specifications)
- [Localization Requirements](#localization-requirements)
- [Compliance Guidelines](#compliance-guidelines)

---

## Apple App Store Requirements

### Metadata Character Limits

| Field | Character Limit | Notes |
|-------|----------------|-------|
| App Name (Title) | 30 characters | Visible in search results |
| Subtitle | 30 characters | iOS 11+ only, appears below title |
| Promotional Text | 170 characters | Editable without app update |
| Description | 4,000 characters | Not indexed for search |
| Keywords Field | 100 characters | Comma-separated, no spaces after commas |
| What's New | 4,000 characters | Release notes for updates |
| Developer Name | 255 characters | Company or individual name |
| Support URL | Required | Must be valid HTTPS URL |
| Privacy Policy URL | Required | Must be valid HTTPS URL |

### Keyword Field Optimization Rules

1. **No duplicates** - Words in title are already indexed
2. **No plurals** - Apple indexes both singular and plural forms
3. **No spaces after commas** - Wastes character space
4. **No brand names** - Violates App Store guidelines
5. **No category names** - Already indexed via category selection

**Example - Efficient keyword field:**
```
task,todo,checklist,reminder,productivity,organize,schedule,planner,goals,habit
```

**Example - Inefficient keyword field (avoid):**
```
task manager, todo list, productivity app, task tracking
```

### App Store Connect Metadata Fields

| Category | Field | Required |
|----------|-------|----------|
| **App Information** | Name | Yes |
| | Subtitle | No |
| | Category | Yes |
| | Secondary Category | No |
| | Content Rights | Yes |
| | Age Rating | Yes |
| **Version Information** | Description | Yes |
| | Keywords | Yes |
| | Promotional Text | No |
| | What's New | Yes (for updates) |
| | Support URL | Yes |
| | Marketing URL | No |
| **Pricing** | Price Tier | Yes |
| | Availability | Yes |

### Age Rating Content Descriptors

| Content Type | None | Infrequent | Frequent |
|--------------|------|------------|----------|
| Cartoon Violence | 4+ | 9+ | 12+ |
| Realistic Violence | 9+ | 12+ | 17+ |
| Sexual Content | 12+ | 17+ | 17+ |
| Profanity | 4+ | 12+ | 17+ |
| Alcohol/Drug Reference | 12+ | 17+ | 17+ |
| Gambling | 12+ | 17+ | 17+ |
| Horror/Fear | 9+ | 12+ | 17+ |

---

## Google Play Store Requirements

### Metadata Character Limits

| Field | Character Limit | Notes |
|-------|----------------|-------|
| App Title | 30 characters | Reduced from 50 in 2021 |
| Short Description | 80 characters | Visible on store listing |
| Full Description | 4,000 characters | Indexed for search keywords |
| Developer Name | 64 characters | Organization or individual |
| Developer Email | Required | Public support contact |
| Privacy Policy URL | Required | Must be valid HTTPS URL |

### Description Keyword Strategy

Google Play has no separate keyword field. Keywords are extracted from:

1. **App Title** - Highest weight, most important
2. **Short Description** - High weight, visible in search
3. **Full Description** - Medium weight, use naturally throughout
4. **Developer Name** - Low weight but indexed

**Keyword Density Guidelines:**
- Primary keyword: 2-3% density in full description
- Secondary keywords: 1-2% each
- Avoid keyword stuffing (>5% triggers spam detection)
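
These density bands can be sketched as a simple check. The 2-3% target and ~5% stuffing threshold are the guideline figures above, not published Play Store rules:

```python
# Classify a description's primary-keyword density against the
# guideline bands above (2-3% target, >5% treated as stuffing).
def check_density(description: str, keyword: str) -> str:
    words = description.lower().split()
    if not words:
        return "empty"
    hits = sum(1 for w in words if w == keyword.lower())
    density = hits / len(words) * 100
    if density > 5:
        return "stuffing"
    if density < 2:
        return "too low"
    if density > 3:
        return "too high"
    return "ok"
```

This only matches single-word keywords exactly; a production check would also handle multi-word phrases and word stems.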

### Google Play Console Metadata

| Category | Field | Required |
|----------|-------|----------|
| **Store Listing** | Title | Yes |
| | Short Description | Yes |
| | Full Description | Yes |
| | App Icon | Yes |
| | Feature Graphic | Yes |
| | Screenshots | Yes (min 2) |
| | Video | No |
| **Store Settings** | App Category | Yes |
| | Tags | No |
| | Contact Email | Yes |
| | Privacy Policy | Yes |
| **Content Rating** | IARC Questionnaire | Yes |

### Content Rating (IARC)

| Rating | Age | Description |
|--------|-----|-------------|
| PEGI 3 / Everyone | 3+ | Suitable for all ages |
| PEGI 7 / Everyone 10+ | 7+ | Mild violence, comic mischief |
| PEGI 12 / Teen | 12+ | Moderate violence, mild language |
| PEGI 16 / Mature 17+ | 16+ | Intense violence, strong language |
| PEGI 18 / Adults Only | 18+ | Extreme content |

---

## Visual Asset Specifications

### App Icon Requirements

**Apple App Store:**

| Device | Size | Format |
|--------|------|--------|
| iPhone | 1024x1024 px | PNG, no alpha |
| iPad | 1024x1024 px | PNG, no alpha |
| App Store | 1024x1024 px | PNG, no alpha |
| Spotlight | 120x120 px | PNG |
| Settings | 87x87 px | PNG |

**Google Play Store:**

| Asset | Size | Format |
|-------|------|--------|
| App Icon | 512x512 px | PNG, 32-bit |
| Feature Graphic | 1024x500 px | PNG or JPG |
| Promo Graphic | 180x120 px | PNG or JPG |
| TV Banner | 1280x720 px | PNG or JPG |

### Screenshot Requirements

**Apple App Store:**

| Device | Portrait | Landscape |
|--------|----------|-----------|
| iPhone 6.9" | 1320x2868 px | 2868x1320 px |
| iPhone 6.5" | 1290x2796 px | 2796x1290 px |
| iPhone 5.5" | 1242x2208 px | 2208x1242 px |
| iPad Pro 12.9" | 2048x2732 px | 2732x2048 px |
| iPad 10.5" | 1668x2224 px | 2224x1668 px |

- Minimum: 2 screenshots per device
- Maximum: 10 screenshots per device
- Format: PNG or JPG, no alpha channel
- First 3 screenshots are critical (most users don't scroll)

**Google Play Store:**

| Device | Dimensions | Notes |
|--------|------------|-------|
| Phone | 320-3840 px | Min 2:1 aspect ratio |
| 7" Tablet | 320-3840 px | Min 2:1 aspect ratio |
| 10" Tablet | 320-3840 px | Min 2:1 aspect ratio |
| Chromebook | 320-3840 px | Optional |
| TV | 320-3840 px | For TV apps only |

- Minimum: 2 screenshots
- Maximum: 8 screenshots
- Format: PNG or JPG
- No transparency or borders

### App Preview Video

**Apple App Store:**
- Duration: 15-30 seconds
- Resolution: Match device screenshot size
- Format: M4V, MP4, MOV
- Frame rate: 30 fps
- Audio: Optional but recommended

**Google Play Store:**
- YouTube video link only
- No duration limit (recommend under 2 minutes)
- Landscape orientation preferred
- Must not contain age-restricted content

---

## Localization Requirements

### Priority Markets by Revenue

| Rank | Market | Language Code |
|------|--------|---------------|
| 1 | United States | en-US |
| 2 | Japan | ja |
| 3 | United Kingdom | en-GB |
| 4 | Germany | de-DE |
| 5 | China | zh-Hans (iOS), zh-CN (Android) |
| 6 | South Korea | ko |
| 7 | France | fr-FR |
| 8 | Canada | en-CA, fr-CA |
| 9 | Australia | en-AU |
| 10 | Russia | ru |

### Apple App Store Localization

Supported localizations: 40+ languages

| Language | Locale Code |
|----------|-------------|
| English (US) | en-US |
| English (UK) | en-GB |
| Spanish | es-ES |
| Spanish (Mexico) | es-MX |
| French | fr-FR |
| German | de-DE |
| Japanese | ja |
| Korean | ko |
| Simplified Chinese | zh-Hans |
| Traditional Chinese | zh-Hant |

### Google Play Store Localization

Supported localizations: 75+ languages

Each locale requires:
- Title (30 chars)
- Short description (80 chars)
- Full description (4,000 chars)
- Screenshots (can reuse or localize)

---

## Compliance Guidelines

### Apple App Store Review Guidelines Summary

| Category | Key Requirements |
|----------|------------------|
| **Safety** | No objectionable content, privacy protection |
| **Performance** | App must work as described, no crashes |
| **Business** | Accurate app description, clear pricing |
| **Design** | Follow Human Interface Guidelines |
| **Legal** | Comply with local laws, proper licensing |

**Common Rejection Reasons:**
1. Bugs and crashes (50%+ of rejections)
2. Broken links or placeholder content
3. Misleading app descriptions
4. Privacy policy missing or incomplete
5. In-app purchase issues

### Google Play Developer Policies

| Policy Area | Requirements |
|-------------|--------------|
| **Restricted Content** | No hate speech, violence, gambling (without license) |
| **Privacy** | Data collection disclosure, privacy policy |
| **Monetization** | Clear pricing, compliant IAPs |
| **Ads** | No deceptive ads, proper disclosure |
| **Store Listing** | Accurate description, no keyword stuffing |

**Common Suspension Reasons:**
1. Policy violation (content, ads, permissions)
2. Repetitive content (clone apps)
3. Impersonation (fake apps)
4. Intellectual property infringement
5. Malicious behavior

### Privacy Requirements

**Apple (App Tracking Transparency):**
- ATT prompt required for tracking
- Privacy nutrition labels mandatory
- Data collection disclosure required

**Google (Data Safety):**
- Data safety section mandatory
- Data collection and sharing disclosure
- Security practices declaration

---

## Quick Reference Card

### Apple vs Google Comparison

| Attribute | Apple App Store | Google Play Store |
|-----------|-----------------|-------------------|
| Title Length | 30 chars | 30 chars |
| Subtitle | 30 chars | N/A |
| Short Description | N/A | 80 chars |
| Full Description | 4,000 chars | 4,000 chars |
| Keywords Field | 100 chars | N/A (in description) |
| Promotional Text | 170 chars | N/A |
| Icon Size | 1024x1024 px | 512x512 px |
| Min Screenshots | 2 | 2 |
| Max Screenshots | 10 | 8 |
| Review Time | 24-48 hours | 1-7 days |
| Metadata Update | Requires review | 1-2 hours to index |

```

### references/aso-best-practices.md

```markdown
# ASO Best Practices Reference

Optimization strategies for improving app store visibility, conversion, and rankings.

---

## Table of Contents

- [Keyword Optimization](#keyword-optimization)
- [Metadata Optimization](#metadata-optimization)
- [Visual Asset Optimization](#visual-asset-optimization)
- [Rating and Review Management](#rating-and-review-management)
- [Launch Strategy](#launch-strategy)
- [A/B Testing Framework](#ab-testing-framework)
- [Conversion Optimization](#conversion-optimization)
- [Common Mistakes to Avoid](#common-mistakes-to-avoid)

---

## Keyword Optimization

### Keyword Research Process

1. **Brainstorm seed keywords** - Core terms users search for
2. **Expand with variations** - Synonyms, related terms, long-tail
3. **Analyze competition** - Check difficulty scores
4. **Evaluate search volume** - Prioritize high-volume terms
5. **Test and iterate** - Monitor rankings and adjust

### Keyword Selection Criteria

| Factor | Weight | Evaluation Method |
|--------|--------|-------------------|
| Relevance | 40% | Does it describe app function? |
| Search Volume | 30% | Monthly search estimates |
| Competition | 20% | Number of ranking apps |
| Conversion Potential | 10% | User intent alignment |

### Keyword Placement Priority

| Location | Search Weight | Example |
|----------|---------------|---------|
| App Title | Highest | "TaskMaster - Todo List Manager" |
| Subtitle (iOS) | High | "Organize Your Daily Tasks" |
| Keyword Field (iOS) | High | "planner,reminder,checklist" |
| Short Description (Android) | High | "Simple task manager for busy professionals" |
| Full Description | Medium | Natural keyword usage throughout |

### Long-Tail Keyword Strategy

Long-tail keywords have lower volume but higher conversion:

| Type | Example | Volume | Competition | Conversion |
|------|---------|--------|-------------|------------|
| Short-tail | "todo app" | High | High | Low |
| Mid-tail | "daily task manager" | Medium | Medium | Medium |
| Long-tail | "free todo list with reminders" | Low | Low | High |

**Formula for keyword priority:**
```
Score = (Volume × 0.3) + ((1 / Competition) × 0.3) + (Relevance × 0.4)
```
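
As a runnable sketch, assuming all three inputs are pre-normalized to a 0-1 scale; the competition term is inverted here as `1 - competition` so the score stays bounded, which preserves the ranking order of the `1 / Competition` form:

```python
# Keyword priority score per the formula above, with inputs
# normalized to 0-1. Lower competition yields a higher score.
def keyword_priority(volume: float, competition: float, relevance: float) -> float:
    return round(volume * 0.3 + (1 - competition) * 0.3 + relevance * 0.4, 3)
```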

---

## Metadata Optimization

### Title Optimization

**Structure Formula:**
```
[Brand Name] - [Primary Keyword] [Secondary Keyword/Benefit]
```

**Examples by category:**

| Category | Before | After |
|----------|--------|-------|
| Productivity | "MyTasks" | "MyTasks - Todo List & Planner" |
| Fitness | "FitTrack" | "FitTrack: Workout & Gym Log" |
| Finance | "MoneyApp" | "MoneyApp - Budget Tracker" |
| Photo | "SnapEdit" | "SnapEdit: Photo Editor & AI" |

**Title Optimization Checklist:**
- [ ] Primary keyword within first 3 words
- [ ] Brand name is memorable and unique
- [ ] Character count matches platform limit
- [ ] No keyword stuffing
- [ ] Readable and natural sounding

### Description Optimization

**Full Description Structure:**

```
PARAGRAPH 1: Hook + Primary Benefit (50-100 words)
- Address user pain point
- State main value proposition
- Include primary keyword naturally

PARAGRAPH 2-3: Feature Highlights (100-150 words)
- Top 3-5 features with benefits
- Use bullet points or emojis for scannability
- Include secondary keywords

PARAGRAPH 4: Social Proof (50-75 words)
- Download numbers or ratings
- Press mentions or awards
- User testimonials (summarized)

PARAGRAPH 5: Call to Action (25-50 words)
- Clear next step
- Urgency or incentive
- Reassurance (free trial, no credit card)
```

**Keyword Density Target:**
- Primary keyword: 2-3% (8-12 mentions in 4000 chars)
- Secondary keywords: 1-2% each (4-8 mentions each)

### Subtitle Optimization (iOS)

**Effective Subtitle Formulas:**

| Formula | Example |
|---------|---------|
| [Verb] + [Benefit] | "Organize Your Life" |
| [Adjective] + [Category] | "Smart Task Manager" |
| [Feature] + [Feature] | "Lists, Reminders & Notes" |
| [Audience] + [Solution] | "For Busy Professionals" |

---

## Visual Asset Optimization

### App Icon Best Practices

| Principle | Do | Don't |
|-----------|-----|-------|
| Simplicity | Single focal element | Multiple competing elements |
| Recognizability | Works at 60x60px | Requires large size to read |
| Uniqueness | Distinct from competitors | Generic category icon |
| Color | Bold, contrasting colors | Muted or similar to background |
| Text | None or single letter | Full words or app name |

**Icon Testing Questions:**
1. Is it recognizable at 29x29px (smallest iOS size)?
2. Does it stand out in search results?
3. Does it communicate app function?
4. Is it distinct from top 10 category competitors?

### Screenshot Optimization

**Screenshot Hierarchy:**

| Position | Purpose | Content Strategy |
|----------|---------|------------------|
| Screenshot 1 | Hook/Hero | Main value proposition + key UI |
| Screenshot 2 | Primary Feature | Most-used feature demonstration |
| Screenshot 3 | Secondary Feature | Differentiating capability |
| Screenshot 4 | Social Proof | Ratings, awards, user count |
| Screenshot 5+ | Additional Features | Supporting functionality |

**Caption Best Practices:**
- Maximum 5-7 words per caption
- Action-oriented verbs ("Track", "Organize", "Discover")
- Benefit-focused, not feature-focused
- Consistent typography and style

**Example Caption Evolution:**

| Weak | Better | Best |
|------|--------|------|
| "Task List Feature" | "Create Task Lists" | "Never Forget a Task Again" |
| "Calendar View" | "See Your Schedule" | "Plan Your Week in Seconds" |
| "Notifications" | "Get Reminders" | "Stay on Top of Deadlines" |

### Video Preview Strategy

**Video Structure (30 seconds):**

| Seconds | Content |
|---------|---------|
| 0-5 | Hook: Show end result or main benefit |
| 5-15 | Demo: Core feature in action |
| 15-25 | Features: Quick feature montage |
| 25-30 | CTA: Logo and download prompt |

---

## Rating and Review Management

### Review Response Framework

**For Negative Reviews (1-2 stars):**

```
Structure:
1. Acknowledge the issue (1 sentence)
2. Apologize without making excuses (1 sentence)
3. Offer solution or next step (1-2 sentences)
4. Invite direct contact (1 sentence)

Example:
"We're sorry the syncing issues are affecting your experience.
Our team is actively working on a fix for the next update.
In the meantime, please try logging out and back in, which
resolves this for most users. If issues persist, email us at
[your support address] and we'll prioritize your case."
```

**For Positive Reviews (4-5 stars):**

```
Structure:
1. Thank sincerely (1 sentence)
2. Acknowledge specific praise (1 sentence)
3. Encourage continued use or sharing (1 sentence)

Example:
"Thank you for the kind words! We're thrilled the reminder
feature helps you stay organized. If you're enjoying the app,
we'd love if you'd share it with friends who might benefit."
```

### Rating Improvement Tactics

| Tactic | Implementation | Expected Impact |
|--------|----------------|-----------------|
| In-app prompt timing | After positive action (task completed, milestone reached) | +0.3 stars |
| Bug fix velocity | Address 1-star issues within 7 days | +0.2 stars |
| Response rate | Reply to 80%+ of reviews | +0.1 stars |
| Feature requests | Implement top-requested features | +0.2 stars |

### Review Prompt Best Practices

**When to prompt:**
- After user completes 5+ successful sessions
- After milestone achievement (first task completed, 7-day streak)
- After positive in-app feedback ("Was this helpful? Yes")

**When NOT to prompt:**
- First session
- After error or crash
- During critical workflow
- More than once per 30 days
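
These rules combine into a single gate. A minimal sketch: the session count, crash flag, and timestamps would come from your own analytics store, and the names here are illustrative:

```python
from datetime import datetime, timedelta
from typing import Optional

# Decide whether an in-app review prompt is allowed right now,
# following the when/when-not rules above.
def should_prompt_for_review(
    sessions: int,
    last_prompt: Optional[datetime],
    had_recent_crash: bool,
    in_critical_flow: bool,
    now: datetime,
) -> bool:
    if sessions < 5 or had_recent_crash or in_critical_flow:
        return False
    if last_prompt is not None and now - last_prompt < timedelta(days=30):
        return False
    return True
```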

---

## Launch Strategy

### Pre-Launch Checklist

**4 Weeks Before Launch:**
- [ ] Finalize app name and keywords
- [ ] Complete all metadata fields
- [ ] Prepare all visual assets
- [ ] Set up analytics (Firebase, Mixpanel)
- [ ] Create press kit and media assets
- [ ] Build email list for launch notification

**2 Weeks Before Launch:**
- [ ] Submit for app review
- [ ] Prepare social media content
- [ ] Brief press and influencers
- [ ] Set up review response templates
- [ ] Configure in-app rating prompts

**Launch Day:**
- [ ] Verify app is live in stores
- [ ] Announce across all channels
- [ ] Monitor reviews and respond quickly
- [ ] Track download velocity
- [ ] Document any issues for Day 2 fix

### Update Cadence

| Update Type | Frequency | ASO Impact |
|-------------|-----------|------------|
| Bug fixes | As needed | Prevents rating drops |
| Minor features | Every 2-4 weeks | Maintains freshness signal |
| Major features | Every 4-8 weeks | Opportunity for "What's New" |
| Metadata refresh | Every 4-6 weeks | Keyword optimization cycle |

### Seasonal Optimization

| Season | Optimization Focus | Example Categories |
|--------|--------------------|--------------------|
| Jan (New Year) | Resolutions, goals | Fitness, Productivity |
| Feb (Valentine's) | Dating, relationships | Dating, Photo |
| Mar-Apr (Tax) | Finance, organization | Finance, Productivity |
| May-Jun (Summer) | Travel, fitness | Travel, Health |
| Aug-Sep (Back to School) | Education, organization | Education, Productivity |
| Nov-Dec (Holidays) | Shopping, social | Shopping, Social |

---

## A/B Testing Framework

### Test Prioritization Matrix

| Element | Impact | Ease | Priority |
|---------|--------|------|----------|
| App Icon | High | Medium | 1 |
| Screenshot 1 | High | Medium | 2 |
| Title | High | Easy | 3 |
| Short Description | Medium | Easy | 4 |
| Screenshots 2-5 | Medium | Medium | 5 |
| Video | Medium | Hard | 6 |

### Sample Size Calculator

**Formula:**
```
Sample Size = (2 × Z² × p × (1 - p)) / E²

Where:
Z = 1.96 (for 95% confidence)
p = baseline conversion rate
E = minimum detectable effect, as an absolute difference in conversion rate
```
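
The formula translates directly to code. A sketch, treating E as the absolute difference in conversion rate to detect; the quick-reference table uses its own assumptions, so its figures need not match this exactly:

```python
import math

# Per-variant sample size from the simplified formula above.
# z=1.96 corresponds to 95% confidence; a full power calculation
# would add a second Z term for statistical power.
def sample_size_per_variant(baseline_cvr: float, min_effect: float, z: float = 1.96) -> int:
    n = (2 * z**2 * baseline_cvr * (1 - baseline_cvr)) / min_effect**2
    return math.ceil(n)
```

For example, `sample_size_per_variant(0.05, 0.01)` asks how many impressions are needed per variant to detect a one-point absolute change from a 5% baseline.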

**Quick Reference:**

| Baseline CVR | Min. Impressions for 5% Lift |
|--------------|------------------------------|
| 1% | 31,000 per variant |
| 2% | 15,500 per variant |
| 5% | 6,200 per variant |
| 10% | 3,100 per variant |

### Test Duration Guidelines

| Daily Impressions | Minimum Test Duration |
|-------------------|----------------------|
| 1,000+ | 7 days |
| 500-1,000 | 14 days |
| 100-500 | 30 days |
| <100 | Not recommended |

---

## Conversion Optimization

### Conversion Funnel Metrics

| Stage | Metric | Benchmark |
|-------|--------|-----------|
| Discovery | Impressions | Category dependent |
| Consideration | Page Views | 30-50% of impressions |
| Conversion | Installs | 3-8% of page views |
| Activation | First Open | 70-90% of installs |

### Conversion Optimization Levers

| Lever | Typical Lift | Effort |
|-------|--------------|--------|
| Icon redesign | 10-25% | High |
| Screenshot optimization | 15-35% | Medium |
| Title keyword optimization | 5-15% | Low |
| Description rewrite | 5-10% | Low |
| Video addition | 10-20% | High |
| Localization | 20-50% per market | Medium |

---

## Common Mistakes to Avoid

### Keyword Mistakes

| Mistake | Problem | Solution |
|---------|---------|----------|
| Keyword stuffing | Spam detection, rejection | Natural usage, 2-3% density |
| Competitor names | Guideline violation | Focus on category terms |
| Duplicate keywords | Wasted character space | Remove duplicates from keyword field |
| Ignoring long-tail | Missing conversion | Include specific phrases |

### Metadata Mistakes

| Mistake | Problem | Solution |
|---------|---------|----------|
| Vague descriptions | Low conversion | Specific benefits and features |
| Feature-focused copy | Doesn't resonate | Benefit-focused messaging |
| Outdated information | Misleading users | Update with each release |
| Missing localization | Lost global revenue | Prioritize top 5 markets |

### Visual Asset Mistakes

| Mistake | Problem | Solution |
|---------|---------|----------|
| Text-heavy screenshots | Unreadable on phones | Minimal text, clear UI focus |
| Inconsistent style | Unprofessional appearance | Design system for all assets |
| Portrait-only screenshots | Missed tablet users | Include landscape variants |
| No social proof | Lower trust | Add ratings, awards, press |

### Launch Mistakes

| Mistake | Problem | Solution |
|---------|---------|----------|
| Launching on Friday | No support over weekend | Launch Tuesday-Wednesday |
| No analytics setup | Can't measure success | Firebase/Mixpanel before launch |
| Immediate rating prompt | Negative ratings | Wait for positive experience |
| Ignoring reviews | Declining ratings | Respond within 24-48 hours |

```

### scripts/keyword_analyzer.py

```python
"""
Keyword analysis module for App Store Optimization.
Analyzes keyword search volume, competition, and relevance for app discovery.
"""

from typing import Dict, List, Any, Optional, Tuple
import re
from collections import Counter


class KeywordAnalyzer:
    """Analyzes keywords for ASO effectiveness."""

    # Competition level thresholds (based on number of competing apps)
    COMPETITION_THRESHOLDS = {
        'low': 1000,
        'medium': 5000,
        'high': 10000
    }

    # Search volume categories (monthly searches estimate)
    VOLUME_CATEGORIES = {
        'very_low': 1000,
        'low': 5000,
        'medium': 20000,
        'high': 100000,
        'very_high': 500000
    }

    def __init__(self):
        """Initialize keyword analyzer."""
        self.analyzed_keywords = {}

    def analyze_keyword(
        self,
        keyword: str,
        search_volume: int = 0,
        competing_apps: int = 0,
        relevance_score: float = 0.0
    ) -> Dict[str, Any]:
        """
        Analyze a single keyword for ASO potential.

        Args:
            keyword: The keyword to analyze
            search_volume: Estimated monthly search volume
            competing_apps: Number of apps competing for this keyword
            relevance_score: Relevance to your app (0.0-1.0)

        Returns:
            Dictionary with keyword analysis
        """
        competition_level = self._calculate_competition_level(competing_apps)
        volume_category = self._categorize_search_volume(search_volume)
        difficulty_score = self._calculate_keyword_difficulty(
            search_volume,
            competing_apps
        )

        # Calculate potential score (0-100)
        potential_score = self._calculate_potential_score(
            search_volume,
            competing_apps,
            relevance_score
        )

        analysis = {
            'keyword': keyword,
            'search_volume': search_volume,
            'volume_category': volume_category,
            'competing_apps': competing_apps,
            'competition_level': competition_level,
            'relevance_score': relevance_score,
            'difficulty_score': difficulty_score,
            'potential_score': potential_score,
            'recommendation': self._generate_recommendation(
                potential_score,
                difficulty_score,
                relevance_score
            ),
            'keyword_length': len(keyword.split()),
            'is_long_tail': len(keyword.split()) >= 3
        }

        self.analyzed_keywords[keyword] = analysis
        return analysis

    def compare_keywords(self, keywords_data: List[Dict[str, Any]]) -> Dict[str, Any]:
        """
        Compare multiple keywords and rank by potential.

        Args:
            keywords_data: List of dicts with keyword, search_volume, competing_apps, relevance_score

        Returns:
            Comparison report with ranked keywords
        """
        analyses = []
        for kw_data in keywords_data:
            analysis = self.analyze_keyword(
                keyword=kw_data['keyword'],
                search_volume=kw_data.get('search_volume', 0),
                competing_apps=kw_data.get('competing_apps', 0),
                relevance_score=kw_data.get('relevance_score', 0.0)
            )
            analyses.append(analysis)

        # Sort by potential score (descending)
        ranked_keywords = sorted(
            analyses,
            key=lambda x: x['potential_score'],
            reverse=True
        )

        # Categorize keywords
        primary_keywords = [
            kw for kw in ranked_keywords
            if kw['potential_score'] >= 70 and kw['relevance_score'] >= 0.8
        ]

        secondary_keywords = [
            kw for kw in ranked_keywords
            if 50 <= kw['potential_score'] < 70 and kw['relevance_score'] >= 0.6
        ]

        long_tail_keywords = [
            kw for kw in ranked_keywords
            if kw['is_long_tail'] and kw['relevance_score'] >= 0.7
        ]

        return {
            'total_keywords_analyzed': len(analyses),
            'ranked_keywords': ranked_keywords,
            'primary_keywords': primary_keywords[:5],  # Top 5
            'secondary_keywords': secondary_keywords[:10],  # Top 10
            'long_tail_keywords': long_tail_keywords[:10],  # Top 10
            'summary': self._generate_comparison_summary(
                primary_keywords,
                secondary_keywords,
                long_tail_keywords
            )
        }

    def find_long_tail_opportunities(
        self,
        base_keyword: str,
        modifiers: List[str]
    ) -> List[Dict[str, Any]]:
        """
        Generate long-tail keyword variations.

        Args:
            base_keyword: Core keyword (e.g., "task manager")
            modifiers: List of modifiers (e.g., ["free", "simple", "team"])

        Returns:
            List of long-tail keyword suggestions
        """
        long_tail_keywords = []

        # Generate combinations
        for modifier in modifiers:
            # Modifier + base
            variation1 = f"{modifier} {base_keyword}"
            long_tail_keywords.append({
                'keyword': variation1,
                'pattern': 'modifier_base',
                'estimated_competition': 'low',
                'rationale': f"Less competitive variation of '{base_keyword}'"
            })

            # Base + modifier
            variation2 = f"{base_keyword} {modifier}"
            long_tail_keywords.append({
                'keyword': variation2,
                'pattern': 'base_modifier',
                'estimated_competition': 'low',
                'rationale': f"Specific use-case variation of '{base_keyword}'"
            })

        # Add question-based long-tail
        question_words = ['how', 'what', 'best', 'top']
        for q_word in question_words:
            question_keyword = f"{q_word} {base_keyword}"
            long_tail_keywords.append({
                'keyword': question_keyword,
                'pattern': 'question_based',
                'estimated_competition': 'very_low',
                'rationale': "Informational search query"
            })

        return long_tail_keywords

    def extract_keywords_from_text(
        self,
        text: str,
        min_word_length: int = 3
    ) -> List[Tuple[str, int]]:
        """
        Extract potential keywords from text (descriptions, reviews).

        Args:
            text: Text to analyze
            min_word_length: Minimum word length to consider

        Returns:
            List of (keyword, frequency) tuples
        """
        # Clean and normalize text
        text = text.lower()
        text = re.sub(r'[^\w\s]', ' ', text)

        # Extract words
        words = text.split()

        # Filter by length
        words = [w for w in words if len(w) >= min_word_length]

        # Remove common stop words
        stop_words = {
            'the', 'and', 'for', 'with', 'this', 'that', 'from', 'have',
            'but', 'not', 'you', 'all', 'can', 'are', 'was', 'were', 'been'
        }
        filtered_words = [w for w in words if w not in stop_words]

        # Count single-word frequency
        word_counts = Counter(filtered_words)

        # Extract 2-word phrases before stop-word removal so that only
        # words actually adjacent in the text are paired
        phrases = [
            f"{words[i]} {words[i + 1]}"
            for i in range(len(words) - 1)
            if words[i] not in stop_words and words[i + 1] not in stop_words
        ]

        phrase_counts = Counter(phrases)

        # Combine and sort
        all_keywords = list(word_counts.items()) + list(phrase_counts.items())
        all_keywords.sort(key=lambda x: x[1], reverse=True)

        return all_keywords[:50]  # Top 50

    def calculate_keyword_density(
        self,
        text: str,
        target_keywords: List[str]
    ) -> Dict[str, float]:
        """
        Calculate keyword density in text.

        Args:
            text: Text to analyze (title, description)
            target_keywords: Keywords to check density for

        Returns:
            Dictionary of keyword: density (percentage)
        """
        text_lower = text.lower()
        total_words = len(text_lower.split())

        densities = {}
        for keyword in target_keywords:
            keyword_lower = keyword.lower()
            # Word-boundary match so 'task' is not counted inside 'tasks'
            occurrences = len(re.findall(rf'\b{re.escape(keyword_lower)}\b', text_lower))
            density = (occurrences / total_words) * 100 if total_words > 0 else 0
            densities[keyword] = round(density, 2)

        return densities

    def _calculate_competition_level(self, competing_apps: int) -> str:
        """Determine competition level based on number of competing apps."""
        if competing_apps < self.COMPETITION_THRESHOLDS['low']:
            return 'low'
        elif competing_apps < self.COMPETITION_THRESHOLDS['medium']:
            return 'medium'
        elif competing_apps < self.COMPETITION_THRESHOLDS['high']:
            return 'high'
        else:
            return 'very_high'

    def _categorize_search_volume(self, search_volume: int) -> str:
        """Categorize search volume."""
        if search_volume < self.VOLUME_CATEGORIES['very_low']:
            return 'very_low'
        elif search_volume < self.VOLUME_CATEGORIES['low']:
            return 'low'
        elif search_volume < self.VOLUME_CATEGORIES['medium']:
            return 'medium'
        elif search_volume < self.VOLUME_CATEGORIES['high']:
            return 'high'
        else:
            return 'very_high'

    def _calculate_keyword_difficulty(
        self,
        search_volume: int,
        competing_apps: int
    ) -> float:
        """
        Calculate keyword difficulty score (0-100).
        Higher score = harder to rank.
        """
        if competing_apps == 0:
            return 0.0

        # Competition factor (0-1)
        competition_factor = min(competing_apps / 50000, 1.0)

        # Volume factor (0-1) - higher volume = more difficulty
        volume_factor = min(search_volume / 1000000, 1.0)

        # Difficulty score (weighted average)
        difficulty = (competition_factor * 0.7 + volume_factor * 0.3) * 100

        return round(difficulty, 1)

    def _calculate_potential_score(
        self,
        search_volume: int,
        competing_apps: int,
        relevance_score: float
    ) -> float:
        """
        Calculate overall keyword potential (0-100).
        Higher score = better opportunity.
        """
        # Volume score (0-40 points)
        volume_score = min((search_volume / 100000) * 40, 40)

        # Competition score (0-30 points) - inverse relationship
        if competing_apps > 0:
            competition_score = max(30 - (competing_apps / 500), 0)
        else:
            competition_score = 30

        # Relevance score (0-30 points)
        relevance_points = relevance_score * 30

        total_score = volume_score + competition_score + relevance_points

        return round(min(total_score, 100), 1)

    def _generate_recommendation(
        self,
        potential_score: float,
        difficulty_score: float,
        relevance_score: float
    ) -> str:
        """Generate actionable recommendation for keyword."""
        if relevance_score < 0.5:
            return "Low relevance - avoid targeting"

        if potential_score >= 70:
            return "High priority - target immediately"
        elif potential_score >= 50:
            if difficulty_score < 50:
                return "Good opportunity - include in metadata"
            else:
                return "Competitive - use in description, not title"
        elif potential_score >= 30:
            return "Secondary keyword - use for long-tail variations"
        else:
            return "Low potential - deprioritize"

    def _generate_comparison_summary(
        self,
        primary_keywords: List[Dict[str, Any]],
        secondary_keywords: List[Dict[str, Any]],
        long_tail_keywords: List[Dict[str, Any]]
    ) -> str:
        """Generate summary of keyword comparison."""
        summary_parts = []

        summary_parts.append(
            f"Identified {len(primary_keywords)} high-priority primary keywords."
        )

        if primary_keywords:
            top_keyword = primary_keywords[0]['keyword']
            summary_parts.append(
                f"Top recommendation: '{top_keyword}' (potential score: {primary_keywords[0]['potential_score']})."
            )

        summary_parts.append(
            f"Found {len(secondary_keywords)} secondary keywords for description and metadata."
        )

        summary_parts.append(
            f"Discovered {len(long_tail_keywords)} long-tail opportunities with lower competition."
        )

        return " ".join(summary_parts)


def analyze_keyword_set(keywords_data: List[Dict[str, Any]]) -> Dict[str, Any]:
    """
    Convenience function to analyze a set of keywords.

    Args:
        keywords_data: List of keyword data dictionaries

    Returns:
        Complete analysis report
    """
    analyzer = KeywordAnalyzer()
    return analyzer.compare_keywords(keywords_data)

```
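To get a feel for how the two scoring formulas above trade off, here is a standalone sketch that mirrors `_calculate_difficulty_score` and `_calculate_potential_score` (reimplemented for illustration only; the class above is the authoritative version):

```python
# Standalone mirror of the analyzer's scoring formulas, for experimentation.

def difficulty(search_volume: int, competing_apps: int) -> float:
    """0-100; higher means harder to rank (mirrors _calculate_difficulty_score)."""
    if competing_apps == 0:
        return 0.0
    competition_factor = min(competing_apps / 50_000, 1.0)  # weighted 70%
    volume_factor = min(search_volume / 1_000_000, 1.0)     # weighted 30%
    return round((competition_factor * 0.7 + volume_factor * 0.3) * 100, 1)

def potential(search_volume: int, competing_apps: int, relevance: float) -> float:
    """0-100; higher means better opportunity (mirrors _calculate_potential_score)."""
    volume_score = min((search_volume / 100_000) * 40, 40)                           # 0-40
    competition_score = max(30 - competing_apps / 500, 0) if competing_apps else 30  # 0-30
    return round(min(volume_score + competition_score + relevance * 30, 100), 1)

# A niche keyword: moderate volume, few competitors, highly relevant
print(difficulty(50_000, 2_000))      # → 4.3
print(potential(50_000, 2_000, 0.9))  # → 73.0
```

A keyword with these numbers lands in the "High priority - target immediately" tier (potential ≥ 70) while staying easy to rank for, which is exactly the profile the recommendation logic is designed to surface.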

### scripts/metadata_optimizer.py

```python
"""
Metadata optimization module for App Store Optimization.
Optimizes titles, descriptions, and keyword fields with platform-specific character limit validation.
"""

from typing import Dict, List, Any, Optional, Tuple
import re


class MetadataOptimizer:
    """Optimizes app store metadata for maximum discoverability and conversion."""

    # Platform-specific character limits
    CHAR_LIMITS = {
        'apple': {
            'title': 30,
            'subtitle': 30,
            'promotional_text': 170,
            'description': 4000,
            'keywords': 100,
            'whats_new': 4000
        },
        'google': {
            'title': 30,  # Google Play reduced the title limit from 50 to 30 chars in 2021
            'short_description': 80,
            'full_description': 4000
        }
    }

    def __init__(self, platform: str = 'apple'):
        """
        Initialize metadata optimizer.

        Args:
            platform: 'apple' or 'google'
        """
        if platform not in ['apple', 'google']:
            raise ValueError("Platform must be 'apple' or 'google'")

        self.platform = platform
        self.limits = self.CHAR_LIMITS[platform]

    def optimize_title(
        self,
        app_name: str,
        target_keywords: List[str],
        include_brand: bool = True
    ) -> Dict[str, Any]:
        """
        Optimize app title with keyword integration.

        Args:
            app_name: Your app's brand name
            target_keywords: List of keywords to potentially include
            include_brand: Whether to include brand name

        Returns:
            Optimized title options with analysis
        """
        max_length = self.limits['title']

        title_options = []

        # Option 1: Brand name only
        if include_brand:
            option1 = app_name[:max_length]
            title_options.append({
                'title': option1,
                'length': len(option1),
                'remaining_chars': max_length - len(option1),
                'keywords_included': [],
                'strategy': 'brand_only',
                'pros': ['Maximum brand recognition', 'Clean and simple'],
                'cons': ['No keyword targeting', 'Lower discoverability']
            })

        # Option 2: Brand + Primary Keyword
        if target_keywords:
            primary_keyword = target_keywords[0]
            option2 = self._build_title_with_keywords(
                app_name,
                [primary_keyword],
                max_length
            )
            if option2:
                title_options.append({
                    'title': option2,
                    'length': len(option2),
                    'remaining_chars': max_length - len(option2),
                    'keywords_included': [primary_keyword],
                    'strategy': 'brand_plus_primary',
                    'pros': ['Targets main keyword', 'Maintains brand identity'],
                    'cons': ['Limited keyword coverage']
                })

        # Option 3: Brand + Multiple Keywords (if space allows)
        if len(target_keywords) > 1:
            option3 = self._build_title_with_keywords(
                app_name,
                target_keywords[:2],
                max_length
            )
            if option3:
                title_options.append({
                    'title': option3,
                    'length': len(option3),
                    'remaining_chars': max_length - len(option3),
                    'keywords_included': target_keywords[:2],
                    'strategy': 'brand_plus_multiple',
                    'pros': ['Multiple keyword targets', 'Better discoverability'],
                    'cons': ['May feel cluttered', 'Less brand focus']
                })

        # Option 4: Keyword-first approach (for new apps)
        if target_keywords and not include_brand:
            option4 = " ".join(target_keywords[:2])[:max_length]
            title_options.append({
                'title': option4,
                'length': len(option4),
                'remaining_chars': max_length - len(option4),
                'keywords_included': target_keywords[:2],
                'strategy': 'keyword_first',
                'pros': ['Maximum SEO benefit', 'Clear functionality'],
                'cons': ['No brand recognition', 'Generic appearance']
            })

        return {
            'platform': self.platform,
            'max_length': max_length,
            'options': title_options,
            'recommendation': self._recommend_title_option(title_options)
        }

    def optimize_description(
        self,
        app_info: Dict[str, Any],
        target_keywords: List[str],
        description_type: str = 'full'
    ) -> Dict[str, Any]:
        """
        Optimize app description with keyword integration and conversion focus.

        Args:
            app_info: Dict with 'name', 'key_features', 'unique_value', 'target_audience'
            target_keywords: List of keywords to integrate naturally
            description_type: 'full', 'short' (Google), 'subtitle' (Apple)

        Returns:
            Optimized description with analysis
        """
        if description_type == 'short' and self.platform == 'google':
            return self._optimize_short_description(app_info, target_keywords)
        elif description_type == 'subtitle' and self.platform == 'apple':
            return self._optimize_subtitle(app_info, target_keywords)
        else:
            return self._optimize_full_description(app_info, target_keywords)

    def optimize_keyword_field(
        self,
        target_keywords: List[str],
        app_title: str = "",
        app_description: str = ""
    ) -> Dict[str, Any]:
        """
        Optimize Apple's 100-character keyword field.

        Rules:
        - No spaces between commas
        - No plural forms if singular exists
        - No duplicates
        - Keywords in title/subtitle are already indexed

        Args:
            target_keywords: List of target keywords
            app_title: Current app title (to avoid duplication)
            app_description: Current description (to check coverage)

        Returns:
            Optimized keyword field (comma-separated, no spaces)
        """
        if self.platform != 'apple':
            return {'error': 'Keyword field optimization only applies to Apple App Store'}

        max_length = self.limits['keywords']

        # Extract words already in title (these don't need to be in keyword field)
        title_words = set(app_title.lower().split()) if app_title else set()

        # Process keywords
        processed_keywords = []
        for keyword in target_keywords:
            keyword_lower = keyword.lower().strip()

            # Skip if already in title
            if keyword_lower in title_words:
                continue

            # Remove duplicates and process
            words = keyword_lower.split()
            for word in words:
                if word not in processed_keywords and word not in title_words:
                    processed_keywords.append(word)

        # Remove plurals if singular exists
        deduplicated = self._remove_plural_duplicates(processed_keywords)

        # Build keyword field within 100 character limit
        keyword_field = self._build_keyword_field(deduplicated, max_length)

        # Count how often each target keyword appears in the description
        coverage = self._calculate_coverage(target_keywords, app_description)

        included = keyword_field.split(',') if keyword_field else []
        field_words = set(included)

        return {
            'keyword_field': keyword_field,
            'length': len(keyword_field),
            'remaining_chars': max_length - len(keyword_field),
            'keywords_included': included,
            'keywords_count': len(included),
            'keywords_excluded': [
                kw for kw in target_keywords
                if not set(kw.lower().split()) <= field_words
            ],
            'description_coverage': coverage,
            'optimization_tips': [
                'Keywords in title are auto-indexed - no need to repeat',
                'Use singular forms only (Apple indexes plurals automatically)',
                'No spaces between commas to maximize character usage',
                'Update keyword field with each app update to test variations'
            ]
        }

    def validate_character_limits(
        self,
        metadata: Dict[str, str]
    ) -> Dict[str, Any]:
        """
        Validate all metadata fields against platform character limits.

        Args:
            metadata: Dictionary of field_name: value

        Returns:
            Validation report with errors and warnings
        """
        validation_results = {
            'is_valid': True,
            'errors': [],
            'warnings': [],
            'field_status': {}
        }

        for field_name, value in metadata.items():
            if field_name not in self.limits:
                validation_results['warnings'].append(
                    f"Unknown field '{field_name}' for {self.platform} platform"
                )
                continue

            max_length = self.limits[field_name]
            actual_length = len(value)
            remaining = max_length - actual_length

            field_status = {
                'value': value,
                'length': actual_length,
                'limit': max_length,
                'remaining': remaining,
                'is_valid': actual_length <= max_length,
                'usage_percentage': round((actual_length / max_length) * 100, 1)
            }

            validation_results['field_status'][field_name] = field_status

            if actual_length > max_length:
                validation_results['is_valid'] = False
                validation_results['errors'].append(
                    f"'{field_name}' exceeds limit: {actual_length}/{max_length} chars"
                )
            elif remaining > max_length * 0.2:  # More than 20% unused
                validation_results['warnings'].append(
                    f"'{field_name}' under-utilizes space: {remaining} chars remaining"
                )

        return validation_results

    def calculate_keyword_density(
        self,
        text: str,
        target_keywords: List[str]
    ) -> Dict[str, Any]:
        """
        Calculate keyword density in text.

        Args:
            text: Text to analyze
            target_keywords: Keywords to check

        Returns:
            Density analysis
        """
        text_lower = text.lower()
        total_words = len(text_lower.split())

        keyword_densities = {}
        for keyword in target_keywords:
            # Match whole words/phrases only, so 'art' does not count inside 'smart'
            pattern = r'\b' + re.escape(keyword.lower()) + r'\b'
            count = len(re.findall(pattern, text_lower))
            density = (count / total_words * 100) if total_words > 0 else 0

            keyword_densities[keyword] = {
                'occurrences': count,
                'density_percentage': round(density, 2),
                'status': self._assess_density(density)
            }

        # Overall assessment
        total_keyword_occurrences = sum(kw['occurrences'] for kw in keyword_densities.values())
        overall_density = (total_keyword_occurrences / total_words * 100) if total_words > 0 else 0

        return {
            'total_words': total_words,
            'keyword_densities': keyword_densities,
            'overall_keyword_density': round(overall_density, 2),
            'assessment': self._assess_overall_density(overall_density),
            'recommendations': self._generate_density_recommendations(keyword_densities)
        }

    def _build_title_with_keywords(
        self,
        app_name: str,
        keywords: List[str],
        max_length: int
    ) -> Optional[str]:
        """Build title combining app name and keywords within limit."""
        separators = [' - ', ': ', ' | ']

        for sep in separators:
            for kw in keywords:
                title = f"{app_name}{sep}{kw}"
                if len(title) <= max_length:
                    return title

        return None

    def _optimize_short_description(
        self,
        app_info: Dict[str, Any],
        target_keywords: List[str]
    ) -> Dict[str, Any]:
        """Optimize Google Play short description (80 chars)."""
        max_length = self.limits['short_description']

        # Focus on unique value proposition with primary keyword
        unique_value = app_info.get('unique_value', '')
        primary_keyword = target_keywords[0] if target_keywords else ''

        # Template: [Primary Keyword] - [Unique Value]
        short_desc = (f"{primary_keyword.title()} - {unique_value}" if primary_keyword else unique_value)[:max_length]

        return {
            'short_description': short_desc,
            'length': len(short_desc),
            'remaining_chars': max_length - len(short_desc),
            'keywords_included': [primary_keyword] if primary_keyword and primary_keyword.lower() in short_desc.lower() else [],
            'strategy': 'keyword_value_proposition'
        }

    def _optimize_subtitle(
        self,
        app_info: Dict[str, Any],
        target_keywords: List[str]
    ) -> Dict[str, Any]:
        """Optimize Apple App Store subtitle (30 chars)."""
        max_length = self.limits['subtitle']

        # Very concise - primary keyword or key feature
        primary_keyword = target_keywords[0] if target_keywords else ''
        key_feature = app_info.get('key_features', [''])[0] if app_info.get('key_features') else ''

        candidates = [
            primary_keyword[:max_length],
            key_feature[:max_length],
            f"{primary_keyword} App"[:max_length] if primary_keyword else ''
        ]
        # Drop empty or whitespace-only candidates while preserving order
        options = [opt for opt in candidates if opt.strip()]

        return {
            'subtitle_options': options,
            'max_length': max_length,
            'recommendation': options[0] if options else ''
        }

    def _optimize_full_description(
        self,
        app_info: Dict[str, Any],
        target_keywords: List[str]
    ) -> Dict[str, Any]:
        """Optimize full app description (4000 chars for both platforms)."""
        max_length = self.limits.get('description', self.limits.get('full_description', 4000))

        # Structure: Hook → Features → Benefits → Social Proof → CTA
        sections = []

        # Hook (lead with the primary keyword, then the value proposition)
        primary_keyword = target_keywords[0] if target_keywords else ''
        unique_value = app_info.get('unique_value', '')
        hook = f"{primary_keyword.title()} - {unique_value}\n\n" if primary_keyword else f"{unique_value}\n\n"
        sections.append(hook)

        # Features (with keywords naturally integrated)
        features = app_info.get('key_features', [])
        if features:
            sections.append("KEY FEATURES:\n")
            for i, feature in enumerate(features[:5], 1):
                # Integrate keywords naturally
                feature_text = f"• {feature}"
                if i <= len(target_keywords):
                    keyword = target_keywords[i-1]
                    if keyword.lower() not in feature.lower():
                        feature_text = f"• {feature} with {keyword}"
                sections.append(f"{feature_text}\n")
            sections.append("\n")

        # Benefits
        target_audience = app_info.get('target_audience', 'users')
        sections.append(f"PERFECT FOR:\n{target_audience}\n\n")

        # Social proof placeholder
        sections.append("WHY USERS LOVE US:\n")
        sections.append("Join thousands of satisfied users who have transformed their workflow.\n\n")

        # CTA
        sections.append("Download now and start experiencing the difference!")

        # Combine and validate length
        full_description = "".join(sections)
        if len(full_description) > max_length:
            full_description = full_description[:max_length-3] + "..."

        # Calculate keyword density
        density = self.calculate_keyword_density(full_description, target_keywords)

        return {
            'full_description': full_description,
            'length': len(full_description),
            'remaining_chars': max_length - len(full_description),
            'keyword_analysis': density,
            'structure': {
                'has_hook': True,
                'has_features': len(features) > 0,
                'has_benefits': True,
                'has_cta': True
            }
        }

    def _remove_plural_duplicates(self, keywords: List[str]) -> List[str]:
        """Collapse naive plurals ('apps' -> 'app'); Apple indexes plural forms automatically."""
        deduplicated = []
        seen = set()

        for keyword in keywords:
            # Strip a trailing 's', but leave words like 'fitness' or 'chess' intact
            if keyword.endswith('s') and not keyword.endswith('ss') and len(keyword) > 2:
                keyword = keyword[:-1]

            if keyword not in seen:
                deduplicated.append(keyword)
                seen.add(keyword)

        return deduplicated

    def _build_keyword_field(self, keywords: List[str], max_length: int) -> str:
        """Build comma-separated keyword field within character limit."""
        keyword_field = ""

        for keyword in keywords:
            test_field = f"{keyword_field},{keyword}" if keyword_field else keyword
            if len(test_field) <= max_length:
                keyword_field = test_field
            else:
                break

        return keyword_field

    def _calculate_coverage(self, keywords: List[str], text: str) -> Dict[str, int]:
        """Count whole-word occurrences of each keyword in text."""
        text_lower = text.lower()
        coverage = {}

        for keyword in keywords:
            # Whole-word match, consistent with calculate_keyword_density
            pattern = r'\b' + re.escape(keyword.lower()) + r'\b'
            coverage[keyword] = len(re.findall(pattern, text_lower))

        return coverage

    def _assess_density(self, density: float) -> str:
        """Assess individual keyword density."""
        if density < 0.5:
            return "too_low"
        elif density <= 2.5:
            return "optimal"
        else:
            return "too_high"

    def _assess_overall_density(self, density: float) -> str:
        """Assess overall keyword density."""
        if density < 2:
            return "Under-optimized: Consider adding more keyword variations"
        elif density <= 5:
            return "Optimal: Good keyword integration without stuffing"
        elif density <= 8:
            return "High: Approaching keyword stuffing - reduce keyword usage"
        else:
            return "Too High: Keyword stuffing detected - rewrite for natural flow"

    def _generate_density_recommendations(
        self,
        keyword_densities: Dict[str, Dict[str, Any]]
    ) -> List[str]:
        """Generate recommendations based on keyword density analysis."""
        recommendations = []

        for keyword, data in keyword_densities.items():
            if data['status'] == 'too_low':
                recommendations.append(
                    f"Increase usage of '{keyword}' - currently only {data['occurrences']} times"
                )
            elif data['status'] == 'too_high':
                recommendations.append(
                    f"Reduce usage of '{keyword}' - appears {data['occurrences']} times (keyword stuffing risk)"
                )

        if not recommendations:
            recommendations.append("Keyword density is well-balanced")

        return recommendations

    def _recommend_title_option(self, options: List[Dict[str, Any]]) -> str:
        """Recommend best title option based on strategy."""
        if not options:
            return "No valid options available"

        # Prefer brand_plus_primary for established apps
        for option in options:
            if option['strategy'] == 'brand_plus_primary':
                return f"Recommended: '{option['title']}' (Balance of brand and SEO)"

        # Fallback to first option
        return f"Recommended: '{options[0]['title']}' ({options[0]['strategy']})"


def optimize_app_metadata(
    platform: str,
    app_info: Dict[str, Any],
    target_keywords: List[str]
) -> Dict[str, Any]:
    """
    Convenience function to optimize all metadata fields.

    Args:
        platform: 'apple' or 'google'
        app_info: App information dictionary
        target_keywords: Target keywords list

    Returns:
        Complete metadata optimization package
    """
    optimizer = MetadataOptimizer(platform)

    return {
        'platform': platform,
        'title': optimizer.optimize_title(
            app_info['name'],
            target_keywords
        ),
        'description': optimizer.optimize_description(
            app_info,
            target_keywords,
            'full'
        ),
        'keyword_field': optimizer.optimize_keyword_field(
            target_keywords
        ) if platform == 'apple' else None
    }

```
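The trickiest helper above is the 100-character keyword-field packer. A minimal standalone sketch of its greedy strategy (mirroring `_build_keyword_field`, with a shortened limit so the cutoff is visible):

```python
# Greedy comma-packing of Apple's keyword field (mirrors _build_keyword_field).
# No space after the comma: every one of the 100 characters counts.

def build_keyword_field(keywords: list[str], max_length: int = 100) -> str:
    field = ""
    for kw in keywords:
        candidate = f"{field},{kw}" if field else kw
        if len(candidate) <= max_length:
            field = candidate
        else:
            break  # greedy prefix: stop at the first keyword that no longer fits
    return field

# With a 20-char limit, 'productivity' no longer fits, and the greedy
# break also skips the shorter 'todo' that comes after it.
print(build_keyword_field(["task", "planner", "productivity", "todo"], max_length=20))
# → task,planner
```

Design note: because the loop `break`s at the first keyword that doesn't fit, one long keyword early in the list blocks shorter ones after it; swapping `break` for `continue` would keep trying later keywords and usually packs more characters. As written, order the input strictly by priority.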

### scripts/competitor_analyzer.py

```python
"""
Competitor analysis module for App Store Optimization.
Analyzes top competitors' ASO strategies and identifies opportunities.
"""

from typing import Dict, List, Any, Optional
from collections import Counter
import re


class CompetitorAnalyzer:
    """Analyzes competitor apps to identify ASO opportunities."""

    def __init__(self, category: str, platform: str = 'apple'):
        """
        Initialize competitor analyzer.

        Args:
            category: App category (e.g., "Productivity", "Games")
            platform: 'apple' or 'google'
        """
        self.category = category
        self.platform = platform
        self.competitors = []

    def analyze_competitor(
        self,
        app_data: Dict[str, Any]
    ) -> Dict[str, Any]:
        """
        Analyze a single competitor's ASO strategy.

        Args:
            app_data: Dictionary with app_name, title, description, rating, ratings_count, keywords

        Returns:
            Comprehensive competitor analysis
        """
        app_name = app_data.get('app_name', '')
        title = app_data.get('title', '')
        description = app_data.get('description', '')
        rating = app_data.get('rating', 0.0)
        ratings_count = app_data.get('ratings_count', 0)
        keywords = app_data.get('keywords', [])

        analysis = {
            'app_name': app_name,
            'title_analysis': self._analyze_title(title),
            'description_analysis': self._analyze_description(description),
            'keyword_strategy': self._extract_keyword_strategy(title, description, keywords),
            'rating_metrics': {
                'rating': rating,
                'ratings_count': ratings_count,
                'rating_quality': self._assess_rating_quality(rating, ratings_count)
            },
            'competitive_strength': self._calculate_competitive_strength(
                rating,
                ratings_count,
                len(description)
            ),
            'key_differentiators': self._identify_differentiators(description)
        }

        self.competitors.append(analysis)
        return analysis

    def compare_competitors(
        self,
        competitors_data: List[Dict[str, Any]]
    ) -> Dict[str, Any]:
        """
        Compare multiple competitors and identify patterns.

        Args:
            competitors_data: List of competitor data dictionaries

        Returns:
            Comparative analysis with insights
        """
        # Analyze each competitor
        analyses = []
        for comp_data in competitors_data:
            analysis = self.analyze_competitor(comp_data)
            analyses.append(analysis)

        # Extract common keywords across competitors
        all_keywords = []
        for analysis in analyses:
            all_keywords.extend(analysis['keyword_strategy']['primary_keywords'])

        common_keywords = self._find_common_keywords(all_keywords)

        # Identify keyword gaps (used by some but not all)
        keyword_gaps = self._identify_keyword_gaps(analyses)

        # Rank competitors by strength
        ranked_competitors = sorted(
            analyses,
            key=lambda x: x['competitive_strength'],
            reverse=True
        )

        # Analyze rating distribution
        rating_analysis = self._analyze_rating_distribution(analyses)

        # Identify best practices
        best_practices = self._identify_best_practices(ranked_competitors)

        return {
            'category': self.category,
            'platform': self.platform,
            'competitors_analyzed': len(analyses),
            'ranked_competitors': ranked_competitors,
            'common_keywords': common_keywords,
            'keyword_gaps': keyword_gaps,
            'rating_analysis': rating_analysis,
            'best_practices': best_practices,
            'opportunities': self._identify_opportunities(
                analyses,
                common_keywords,
                keyword_gaps
            )
        }

    def identify_gaps(
        self,
        your_app_data: Dict[str, Any],
        competitors_data: List[Dict[str, Any]]
    ) -> Dict[str, Any]:
        """
        Identify gaps between your app and competitors.

        Args:
            your_app_data: Your app's data
            competitors_data: List of competitor data

        Returns:
            Gap analysis with actionable recommendations
        """
        # Analyze your app
        your_analysis = self.analyze_competitor(your_app_data)

        # Analyze competitors
        competitor_comparison = self.compare_competitors(competitors_data)

        # Identify keyword gaps
        your_keywords = set(your_analysis['keyword_strategy']['primary_keywords'])
        competitor_keywords = set(competitor_comparison['common_keywords'])
        missing_keywords = competitor_keywords - your_keywords

        # Identify rating gap
        avg_competitor_rating = competitor_comparison['rating_analysis']['average_rating']
        rating_gap = avg_competitor_rating - your_analysis['rating_metrics']['rating']

        # Identify description length gap (guard against an empty competitor list)
        ranked = competitor_comparison['ranked_competitors']
        avg_competitor_desc_length = (
            sum(len(comp['description_analysis']['text']) for comp in ranked) / len(ranked)
            if ranked else 0
        )
        your_desc_length = len(your_analysis['description_analysis']['text'])
        desc_length_gap = avg_competitor_desc_length - your_desc_length

        return {
            'your_app': your_analysis,
            'keyword_gaps': {
                'missing_keywords': list(missing_keywords)[:10],
                'recommendations': self._generate_keyword_recommendations(missing_keywords)
            },
            'rating_gap': {
                'your_rating': your_analysis['rating_metrics']['rating'],
                'average_competitor_rating': avg_competitor_rating,
                'gap': round(rating_gap, 2),
                'action_items': self._generate_rating_improvement_actions(rating_gap)
            },
            'content_gap': {
                'your_description_length': your_desc_length,
                'average_competitor_length': int(avg_competitor_desc_length),
                'gap': int(desc_length_gap),
                'recommendations': self._generate_content_recommendations(desc_length_gap)
            },
            'competitive_positioning': self._assess_competitive_position(
                your_analysis,
                competitor_comparison
            )
        }

    def _analyze_title(self, title: str) -> Dict[str, Any]:
        """Analyze title structure and keyword usage."""
        # Split on common separators (' - ', ': ', ' | ') without breaking
        # hyphenated words like 'To-Do'
        parts = [part.strip() for part in re.split(r'\s+-\s+|:\s+|\s+\|\s+', title) if part.strip()]

        return {
            'title': title,
            'length': len(title),
            'has_brand': bool(parts),
            'has_keywords': len(parts) > 1,
            'components': parts,
            'word_count': len(title.split()),
            'strategy': 'brand_plus_keywords' if len(parts) > 1 else 'brand_only'
        }

    def _analyze_description(self, description: str) -> Dict[str, Any]:
        """Analyze description structure and content."""
        lines = description.split('\n')
        word_count = len(description.split())

        # Check for structural elements
        has_bullet_points = '•' in description or '*' in description
        has_sections = any(line.isupper() for line in lines if len(line) > 0)
        has_call_to_action = any(
            cta in description.lower()
            for cta in ['download', 'try', 'get', 'start', 'join']
        )

        # Extract features mentioned
        features = self._extract_features(description)

        return {
            'text': description,
            'length': len(description),
            'word_count': word_count,
            'structure': {
                'has_bullet_points': has_bullet_points,
                'has_sections': has_sections,
                'has_call_to_action': has_call_to_action
            },
            'features_mentioned': features,
            'readability': 'good' if 50 <= word_count <= 300 else 'needs_improvement'
        }

    def _extract_keyword_strategy(
        self,
        title: str,
        description: str,
        explicit_keywords: List[str]
    ) -> Dict[str, Any]:
        """Extract keyword strategy from metadata."""
        # Extract keywords from title
        title_keywords = [word.lower() for word in title.split() if len(word) > 3]

        # Extract frequently used words from description
        desc_words = re.findall(r'\b\w{4,}\b', description.lower())
        word_freq = Counter(desc_words)
        frequent_words = [word for word, count in word_freq.most_common(15) if count > 2]

        # Combine with explicit keywords
        all_keywords = list(set(title_keywords + frequent_words + explicit_keywords))

        return {
            'primary_keywords': title_keywords,
            'description_keywords': frequent_words[:10],
            'explicit_keywords': explicit_keywords,
            'total_unique_keywords': len(all_keywords),
            'keyword_focus': self._assess_keyword_focus(title_keywords, frequent_words)
        }

    def _assess_rating_quality(self, rating: float, ratings_count: int) -> str:
        """Assess the quality of ratings."""
        if ratings_count < 100:
            return 'insufficient_data'
        elif rating >= 4.5 and ratings_count > 1000:
            return 'excellent'
        elif rating >= 4.0 and ratings_count > 500:
            return 'good'
        elif rating >= 3.5:
            return 'average'
        else:
            return 'poor'

    def _calculate_competitive_strength(
        self,
        rating: float,
        ratings_count: int,
        description_length: int
    ) -> float:
        """
        Calculate overall competitive strength (0-100).

        Factors:
        - Rating quality (40%)
        - Rating volume (30%)
        - Metadata quality (30%)
        """
        # Rating quality score (0-40)
        rating_score = (rating / 5.0) * 40

        # Rating volume score (0-30)
        volume_score = min((ratings_count / 10000) * 30, 30)

        # Metadata quality score (0-30)
        metadata_score = min((description_length / 2000) * 30, 30)

        total_score = rating_score + volume_score + metadata_score

        return round(total_score, 1)

    def _identify_differentiators(self, description: str) -> List[str]:
        """Identify key differentiators from description."""
        differentiator_keywords = [
            'unique', 'only', 'first', 'best', 'leading', 'exclusive',
            'revolutionary', 'innovative', 'patent', 'award'
        ]

        differentiators = []
        sentences = description.split('.')

        for sentence in sentences:
            sentence_lower = sentence.lower()
            if any(keyword in sentence_lower for keyword in differentiator_keywords):
                differentiators.append(sentence.strip())

        return differentiators[:5]

    def _find_common_keywords(self, all_keywords: List[str]) -> List[str]:
        """Find keywords used by multiple competitors."""
        keyword_counts = Counter(all_keywords)
        # Return keywords used by at least 2 competitors
        common = [kw for kw, count in keyword_counts.items() if count >= 2]
        return sorted(common, key=lambda x: keyword_counts[x], reverse=True)[:20]

    def _identify_keyword_gaps(self, analyses: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
        """Identify keywords used by some competitors but not others."""
        all_keywords_by_app = {}

        for analysis in analyses:
            app_name = analysis['app_name']
            keywords = analysis['keyword_strategy']['primary_keywords']
            all_keywords_by_app[app_name] = set(keywords)

        # Find keywords used by some but not all
        all_keywords_set = set()
        for keywords in all_keywords_by_app.values():
            all_keywords_set.update(keywords)

        gaps = []
        for keyword in all_keywords_set:
            using_apps = [
                app for app, keywords in all_keywords_by_app.items()
                if keyword in keywords
            ]
            if 1 < len(using_apps) < len(analyses):
                gaps.append({
                    'keyword': keyword,
                    'used_by': using_apps,
                    'usage_percentage': round(len(using_apps) / len(analyses) * 100, 1)
                })

        return sorted(gaps, key=lambda x: x['usage_percentage'], reverse=True)[:15]

    def _analyze_rating_distribution(self, analyses: List[Dict[str, Any]]) -> Dict[str, Any]:
        """Analyze rating distribution across competitors."""
        ratings = [a['rating_metrics']['rating'] for a in analyses]
        ratings_counts = [a['rating_metrics']['ratings_count'] for a in analyses]

        return {
            'average_rating': round(sum(ratings) / len(ratings), 2),
            'highest_rating': max(ratings),
            'lowest_rating': min(ratings),
            'average_ratings_count': int(sum(ratings_counts) / len(ratings_counts)),
            'total_ratings_in_category': sum(ratings_counts)
        }

    def _identify_best_practices(self, ranked_competitors: List[Dict[str, Any]]) -> List[str]:
        """Identify best practices from top competitors."""
        if not ranked_competitors:
            return []

        top_competitor = ranked_competitors[0]
        practices = []

        # Title strategy
        title_analysis = top_competitor['title_analysis']
        if title_analysis['has_keywords']:
            practices.append(
                f"Title Strategy: Include primary keyword in title (e.g., '{title_analysis['title']}')"
            )

        # Description structure
        desc_analysis = top_competitor['description_analysis']
        if desc_analysis['structure']['has_bullet_points']:
            practices.append("Description: Use bullet points to highlight key features")

        if desc_analysis['structure']['has_sections']:
            practices.append("Description: Organize content with clear section headers")

        # Rating strategy
        rating_quality = top_competitor['rating_metrics']['rating_quality']
        if rating_quality in ['excellent', 'good']:
            practices.append(
                f"Ratings: Maintain high rating quality ({top_competitor['rating_metrics']['rating']}★) "
                f"with significant volume ({top_competitor['rating_metrics']['ratings_count']} ratings)"
            )

        return practices[:5]

    def _identify_opportunities(
        self,
        analyses: List[Dict[str, Any]],
        common_keywords: List[str],
        keyword_gaps: List[Dict[str, Any]]
    ) -> List[str]:
        """Identify ASO opportunities based on competitive analysis."""
        opportunities = []

        # Keyword opportunities from gaps
        if keyword_gaps:
            underutilized_keywords = [
                gap['keyword'] for gap in keyword_gaps
                if gap['usage_percentage'] < 50
            ]
            if underutilized_keywords:
                opportunities.append(
                    f"Target underutilized keywords: {', '.join(underutilized_keywords[:5])}"
                )

        # Rating opportunity
        avg_rating = sum(a['rating_metrics']['rating'] for a in analyses) / len(analyses)
        if avg_rating < 4.5:
            opportunities.append(
                f"Category average rating is {avg_rating:.1f} - opportunity to differentiate with higher ratings"
            )

        # Content depth opportunity
        avg_desc_length = sum(
            a['description_analysis']['length'] for a in analyses
        ) / len(analyses)
        if avg_desc_length < 1500:
            opportunities.append(
                "Competitors have relatively short descriptions - opportunity to provide more comprehensive information"
            )

        return opportunities[:5]

    def _extract_features(self, description: str) -> List[str]:
        """Extract feature mentions from description."""
        # Look for bullet points or numbered lists
        lines = description.split('\n')
        features = []

        for line in lines:
            line = line.strip()
            # Check if line starts with bullet or number
            if line and (line[0] in ['•', '*', '-', '✓'] or line[0].isdigit()):
                # Clean the line
                cleaned = re.sub(r'^[•*\-✓\d.)\s]+', '', line)
                if cleaned:
                    features.append(cleaned)

        return features[:10]

    def _assess_keyword_focus(
        self,
        title_keywords: List[str],
        description_keywords: List[str]
    ) -> str:
        """Assess keyword focus strategy."""
        overlap = set(title_keywords) & set(description_keywords)

        if len(overlap) >= 3:
            return 'consistent_focus'
        elif len(overlap) >= 1:
            return 'moderate_focus'
        else:
            return 'broad_focus'

    def _generate_keyword_recommendations(self, missing_keywords: set) -> List[str]:
        """Generate recommendations for missing keywords."""
        if not missing_keywords:
            return ["Your keyword coverage is comprehensive"]

        recommendations = []
        missing_list = list(missing_keywords)[:5]

        recommendations.append(
            f"Consider adding these competitor keywords: {', '.join(missing_list)}"
        )
        recommendations.append(
            "Test keyword variations in subtitle/promotional text first"
        )
        recommendations.append(
            "Monitor competitor keyword changes monthly"
        )

        return recommendations

    def _generate_rating_improvement_actions(self, rating_gap: float) -> List[str]:
        """Generate actions to improve ratings."""
        actions = []

        if rating_gap > 0.5:
            actions.append("CRITICAL: Significant rating gap - prioritize user satisfaction improvements")
            actions.append("Analyze negative reviews to identify top issues")
            actions.append("Implement in-app rating prompts after positive experiences")
            actions.append("Respond to all negative reviews professionally")
        elif rating_gap > 0.2:
            actions.append("Focus on incremental improvements to close rating gap")
            actions.append("Optimize timing of rating requests")
        else:
            actions.append("Ratings are competitive - maintain quality and continue improvements")

        return actions

    def _generate_content_recommendations(self, desc_length_gap: int) -> List[str]:
        """Generate content recommendations based on length gap."""
        recommendations = []

        if desc_length_gap > 500:
            recommendations.append(
                "Expand description to match competitor detail level"
            )
            recommendations.append(
                "Add use case examples and success stories"
            )
            recommendations.append(
                "Include more feature explanations and benefits"
            )
        elif desc_length_gap < -500:
            recommendations.append(
                "Consider condensing description for better readability"
            )
            recommendations.append(
                "Focus on most important features first"
            )
        else:
            recommendations.append(
                "Description length is competitive"
            )

        return recommendations

    def _assess_competitive_position(
        self,
        your_analysis: Dict[str, Any],
        competitor_comparison: Dict[str, Any]
    ) -> str:
        """Assess your competitive position."""
        your_strength = your_analysis['competitive_strength']
        competitors = competitor_comparison['ranked_competitors']

        if not competitors:
            return "No comparison data available"

        # Find where you'd rank
        better_than_count = sum(
            1 for comp in competitors
            if your_strength > comp['competitive_strength']
        )

        position_percentage = (better_than_count / len(competitors)) * 100

        if position_percentage >= 75:
            return "Strong Position: Top quartile in competitive strength"
        elif position_percentage >= 50:
            return "Competitive Position: Above average, opportunities for improvement"
        elif position_percentage >= 25:
            return "Challenging Position: Below average, requires strategic improvements"
        else:
            return "Weak Position: Bottom quartile, major ASO overhaul needed"


def analyze_competitor_set(
    category: str,
    competitors_data: List[Dict[str, Any]],
    platform: str = 'apple'
) -> Dict[str, Any]:
    """
    Convenience function to analyze a set of competitors.

    Args:
        category: App category
        competitors_data: List of competitor data
        platform: 'apple' or 'google'

    Returns:
        Complete competitive analysis
    """
    analyzer = CompetitorAnalyzer(category, platform)
    return analyzer.compare_competitors(competitors_data)

```
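The module above ranks each competitor with a weighted 0-100 strength formula: rating quality contributes up to 40 points, rating volume up to 30 (capped at 10,000 ratings), and metadata depth up to 30 (capped at 2,000 description characters). A minimal standalone sketch of that formula, duplicated here for illustration (the function name `competitive_strength` is hypothetical, not part of the module's API):

```python
# Standalone copy of the weighted formula in _calculate_competitive_strength,
# shown for illustration only (the function name here is hypothetical).
def competitive_strength(rating: float, ratings_count: int,
                         description_length: int) -> float:
    rating_score = (rating / 5.0) * 40                          # rating quality: up to 40 pts
    volume_score = min((ratings_count / 10000) * 30, 30)        # volume caps at 10k ratings
    metadata_score = min((description_length / 2000) * 30, 30)  # caps at 2,000 characters
    return round(rating_score + volume_score + metadata_score, 1)

print(competitive_strength(4.5, 5000, 1000))   # 66.0
print(competitive_strength(5.0, 50000, 5000))  # 100.0 (volume and metadata capped)
```

Because volume and metadata are hard-capped, two apps with 10,000 and 500,000 ratings score identically on those components; only the rating average differentiates them beyond that point.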

### scripts/aso_scorer.py

```python
"""
ASO scoring module for App Store Optimization.
Calculates comprehensive ASO health score across multiple dimensions.
"""

from typing import Dict, List, Any, Optional


class ASOScorer:
    """Calculates overall ASO health score and provides recommendations."""

    # Score weights for different components (total = 100)
    WEIGHTS = {
        'metadata_quality': 25,
        'ratings_reviews': 25,
        'keyword_performance': 25,
        'conversion_metrics': 25
    }

    # Benchmarks for scoring
    BENCHMARKS = {
        'title_keyword_usage': {'min': 1, 'target': 2},
        'description_length': {'min': 500, 'target': 2000},
        'keyword_density': {'min': 2, 'optimal': 5, 'max': 8},
        'average_rating': {'min': 3.5, 'target': 4.5},
        'ratings_count': {'min': 100, 'target': 5000},
        'keywords_top_10': {'min': 2, 'target': 10},
        'keywords_top_50': {'min': 5, 'target': 20},
        'conversion_rate': {'min': 0.02, 'target': 0.10}
    }

    def __init__(self):
        """Initialize ASO scorer."""
        self.score_breakdown = {}

    def calculate_overall_score(
        self,
        metadata: Dict[str, Any],
        ratings: Dict[str, Any],
        keyword_performance: Dict[str, Any],
        conversion: Dict[str, Any]
    ) -> Dict[str, Any]:
        """
        Calculate comprehensive ASO score (0-100).

        Args:
            metadata: Title, description quality metrics
            ratings: Rating average and count
            keyword_performance: Keyword ranking data
            conversion: Impression-to-install metrics

        Returns:
            Overall score with detailed breakdown
        """
        # Calculate component scores
        metadata_score = self.score_metadata_quality(metadata)
        ratings_score = self.score_ratings_reviews(ratings)
        keyword_score = self.score_keyword_performance(keyword_performance)
        conversion_score = self.score_conversion_metrics(conversion)

        # Calculate weighted overall score
        overall_score = (
            metadata_score * (self.WEIGHTS['metadata_quality'] / 100) +
            ratings_score * (self.WEIGHTS['ratings_reviews'] / 100) +
            keyword_score * (self.WEIGHTS['keyword_performance'] / 100) +
            conversion_score * (self.WEIGHTS['conversion_metrics'] / 100)
        )

        # Store breakdown
        self.score_breakdown = {
            'metadata_quality': {
                'score': metadata_score,
                'weight': self.WEIGHTS['metadata_quality'],
                'weighted_contribution': round(metadata_score * (self.WEIGHTS['metadata_quality'] / 100), 1)
            },
            'ratings_reviews': {
                'score': ratings_score,
                'weight': self.WEIGHTS['ratings_reviews'],
                'weighted_contribution': round(ratings_score * (self.WEIGHTS['ratings_reviews'] / 100), 1)
            },
            'keyword_performance': {
                'score': keyword_score,
                'weight': self.WEIGHTS['keyword_performance'],
                'weighted_contribution': round(keyword_score * (self.WEIGHTS['keyword_performance'] / 100), 1)
            },
            'conversion_metrics': {
                'score': conversion_score,
                'weight': self.WEIGHTS['conversion_metrics'],
                'weighted_contribution': round(conversion_score * (self.WEIGHTS['conversion_metrics'] / 100), 1)
            }
        }

        # Generate recommendations
        recommendations = self.generate_recommendations(
            metadata_score,
            ratings_score,
            keyword_score,
            conversion_score
        )

        # Assess overall health
        health_status = self._assess_health_status(overall_score)

        return {
            'overall_score': round(overall_score, 1),
            'health_status': health_status,
            'score_breakdown': self.score_breakdown,
            'recommendations': recommendations,
            'priority_actions': self._prioritize_actions(recommendations),
            'strengths': self._identify_strengths(self.score_breakdown),
            'weaknesses': self._identify_weaknesses(self.score_breakdown)
        }

    def score_metadata_quality(self, metadata: Dict[str, Any]) -> float:
        """
        Score metadata quality (0-100).

        Evaluates:
        - Title optimization
        - Description quality
        - Keyword usage
        """
        scores = []

        # Title score (0-35 points)
        title_keywords = metadata.get('title_keyword_count', 0)
        title_length = metadata.get('title_length', 0)

        title_score = 0
        if title_keywords >= self.BENCHMARKS['title_keyword_usage']['target']:
            title_score = 35
        elif title_keywords >= self.BENCHMARKS['title_keyword_usage']['min']:
            title_score = 25
        else:
            title_score = 10

        # Penalize titles that leave most of the available space unused
        if title_length <= 25:
            title_score -= 5

        scores.append(min(title_score, 35))

        # Description score (0-35 points)
        desc_length = metadata.get('description_length', 0)
        desc_quality = metadata.get('description_quality', 0.0)  # 0-1 scale

        desc_score = 0
        if desc_length >= self.BENCHMARKS['description_length']['target']:
            desc_score = 25
        elif desc_length >= self.BENCHMARKS['description_length']['min']:
            desc_score = 15
        else:
            desc_score = 5

        # Add quality bonus
        desc_score += desc_quality * 10
        scores.append(min(desc_score, 35))

        # Keyword density score (0-30 points)
        keyword_density = metadata.get('keyword_density', 0.0)

        if self.BENCHMARKS['keyword_density']['min'] <= keyword_density <= self.BENCHMARKS['keyword_density']['optimal']:
            density_score = 30
        elif keyword_density < self.BENCHMARKS['keyword_density']['min']:
            # Too low - proportional scoring
            density_score = (keyword_density / self.BENCHMARKS['keyword_density']['min']) * 20
        else:
            # Too high (keyword stuffing) - penalty
            excess = keyword_density - self.BENCHMARKS['keyword_density']['optimal']
            density_score = max(30 - (excess * 5), 0)

        scores.append(density_score)

        return round(sum(scores), 1)

    def score_ratings_reviews(self, ratings: Dict[str, Any]) -> float:
        """
        Score ratings and reviews (0-100).

        Evaluates:
        - Average rating
        - Total ratings count
        - Review velocity
        """
        average_rating = ratings.get('average_rating', 0.0)
        total_ratings = ratings.get('total_ratings', 0)
        recent_ratings = ratings.get('recent_ratings_30d', 0)

        # Rating quality score (0-50 points)
        if average_rating >= self.BENCHMARKS['average_rating']['target']:
            rating_quality_score = 50
        elif average_rating >= self.BENCHMARKS['average_rating']['min']:
            # Proportional scoring between min and target
            proportion = (average_rating - self.BENCHMARKS['average_rating']['min']) / \
                        (self.BENCHMARKS['average_rating']['target'] - self.BENCHMARKS['average_rating']['min'])
            rating_quality_score = 30 + (proportion * 20)
        elif average_rating >= 3.0:
            rating_quality_score = 20
        else:
            rating_quality_score = 10

        # Rating volume score (0-30 points)
        if total_ratings >= self.BENCHMARKS['ratings_count']['target']:
            rating_volume_score = 30
        elif total_ratings >= self.BENCHMARKS['ratings_count']['min']:
            # Proportional scoring
            proportion = (total_ratings - self.BENCHMARKS['ratings_count']['min']) / \
                        (self.BENCHMARKS['ratings_count']['target'] - self.BENCHMARKS['ratings_count']['min'])
            rating_volume_score = 15 + (proportion * 15)
        else:
            # Very low volume
            rating_volume_score = (total_ratings / self.BENCHMARKS['ratings_count']['min']) * 15

        # Rating velocity score (0-20 points)
        if recent_ratings > 100:
            velocity_score = 20
        elif recent_ratings > 50:
            velocity_score = 15
        elif recent_ratings > 10:
            velocity_score = 10
        else:
            velocity_score = 5

        total_score = rating_quality_score + rating_volume_score + velocity_score

        return round(min(total_score, 100), 1)

    def score_keyword_performance(self, keyword_performance: Dict[str, Any]) -> float:
        """
        Score keyword ranking performance (0-100).

        Evaluates:
        - Top 10 rankings
        - Top 50 rankings
        - Ranking trends
        """
        top_10_count = keyword_performance.get('top_10', 0)
        top_50_count = keyword_performance.get('top_50', 0)
        top_100_count = keyword_performance.get('top_100', 0)
        improving_keywords = keyword_performance.get('improving_keywords', 0)

        # Top 10 score (0-50 points) - most valuable rankings
        if top_10_count >= self.BENCHMARKS['keywords_top_10']['target']:
            top_10_score = 50
        elif top_10_count >= self.BENCHMARKS['keywords_top_10']['min']:
            proportion = (top_10_count - self.BENCHMARKS['keywords_top_10']['min']) / \
                        (self.BENCHMARKS['keywords_top_10']['target'] - self.BENCHMARKS['keywords_top_10']['min'])
            top_10_score = 25 + (proportion * 25)
        else:
            top_10_score = (top_10_count / self.BENCHMARKS['keywords_top_10']['min']) * 25

        # Top 50 score (0-30 points)
        if top_50_count >= self.BENCHMARKS['keywords_top_50']['target']:
            top_50_score = 30
        elif top_50_count >= self.BENCHMARKS['keywords_top_50']['min']:
            proportion = (top_50_count - self.BENCHMARKS['keywords_top_50']['min']) / \
                        (self.BENCHMARKS['keywords_top_50']['target'] - self.BENCHMARKS['keywords_top_50']['min'])
            top_50_score = 15 + (proportion * 15)
        else:
            top_50_score = (top_50_count / self.BENCHMARKS['keywords_top_50']['min']) * 15

        # Coverage score (0-10 points) - based on top 100
        coverage_score = min((top_100_count / 30) * 10, 10)

        # Trend score (0-10 points) - are rankings improving?
        if improving_keywords > 5:
            trend_score = 10
        elif improving_keywords > 0:
            trend_score = 5
        else:
            trend_score = 0

        total_score = top_10_score + top_50_score + coverage_score + trend_score

        return round(min(total_score, 100), 1)

    def score_conversion_metrics(self, conversion: Dict[str, Any]) -> float:
        """
        Score conversion performance (0-100).

        Evaluates:
        - Impression-to-install conversion rate
        - Download velocity
        """
        conversion_rate = conversion.get('impression_to_install', 0.0)
        downloads_30d = conversion.get('downloads_last_30_days', 0)
        downloads_trend = conversion.get('downloads_trend', 'stable')  # 'up', 'stable', 'down'

        # Conversion rate score (0-70 points)
        if conversion_rate >= self.BENCHMARKS['conversion_rate']['target']:
            conversion_score = 70
        elif conversion_rate >= self.BENCHMARKS['conversion_rate']['min']:
            proportion = (conversion_rate - self.BENCHMARKS['conversion_rate']['min']) / \
                        (self.BENCHMARKS['conversion_rate']['target'] - self.BENCHMARKS['conversion_rate']['min'])
            conversion_score = 35 + (proportion * 35)
        else:
            conversion_score = (conversion_rate / self.BENCHMARKS['conversion_rate']['min']) * 35

        # Download velocity score (0-20 points)
        if downloads_30d > 10000:
            velocity_score = 20
        elif downloads_30d > 1000:
            velocity_score = 15
        elif downloads_30d > 100:
            velocity_score = 10
        else:
            velocity_score = 5

        # Trend bonus (0-10 points)
        if downloads_trend == 'up':
            trend_score = 10
        elif downloads_trend == 'stable':
            trend_score = 5
        else:
            trend_score = 0

        total_score = conversion_score + velocity_score + trend_score

        return round(min(total_score, 100), 1)

    def generate_recommendations(
        self,
        metadata_score: float,
        ratings_score: float,
        keyword_score: float,
        conversion_score: float
    ) -> List[Dict[str, Any]]:
        """Generate prioritized recommendations based on scores."""
        recommendations = []

        # Metadata recommendations
        if metadata_score < 60:
            recommendations.append({
                'category': 'metadata_quality',
                'priority': 'high',
                'action': 'Optimize app title and description',
                'details': 'Add more keywords to title, expand description to 1500-2000 characters, improve keyword density to 3-5%',
                'expected_impact': 'Improve discoverability and ranking potential'
            })
        elif metadata_score < 80:
            recommendations.append({
                'category': 'metadata_quality',
                'priority': 'medium',
                'action': 'Refine metadata for better keyword targeting',
                'details': 'Test variations of title/subtitle, optimize keyword field for Apple',
                'expected_impact': 'Incremental ranking improvements'
            })

        # Ratings recommendations
        if ratings_score < 60:
            recommendations.append({
                'category': 'ratings_reviews',
                'priority': 'high',
                'action': 'Improve rating quality and volume',
                'details': 'Address top user complaints, implement in-app rating prompts, respond to negative reviews',
                'expected_impact': 'Better conversion rates and trust signals'
            })
        elif ratings_score < 80:
            recommendations.append({
                'category': 'ratings_reviews',
                'priority': 'medium',
                'action': 'Increase rating velocity',
                'details': 'Optimize timing of rating requests, encourage satisfied users to rate',
                'expected_impact': 'Sustained rating quality'
            })

        # Keyword performance recommendations
        if keyword_score < 60:
            recommendations.append({
                'category': 'keyword_performance',
                'priority': 'high',
                'action': 'Improve keyword rankings',
                'details': 'Target long-tail keywords with lower competition, update metadata with high-potential keywords, build backlinks',
                'expected_impact': 'Significant improvement in organic visibility'
            })
        elif keyword_score < 80:
            recommendations.append({
                'category': 'keyword_performance',
                'priority': 'medium',
                'action': 'Expand keyword coverage',
                'details': 'Target additional related keywords, test seasonal keywords, localize for new markets',
                'expected_impact': 'Broader reach and more discovery opportunities'
            })

        # Conversion recommendations
        if conversion_score < 60:
            recommendations.append({
                'category': 'conversion_metrics',
                'priority': 'high',
                'action': 'Optimize store listing for conversions',
                'details': 'Improve screenshots and icon, strengthen value proposition in description, add video preview',
                'expected_impact': 'Higher impression-to-install conversion'
            })
        elif conversion_score < 80:
            recommendations.append({
                'category': 'conversion_metrics',
                'priority': 'medium',
                'action': 'Test visual asset variations',
                'details': 'A/B test different icon designs and screenshot sequences',
                'expected_impact': 'Incremental conversion improvements'
            })

        return recommendations

    def _assess_health_status(self, overall_score: float) -> str:
        """Assess overall ASO health status."""
        if overall_score >= 80:
            return "Excellent - Top-tier ASO performance"
        elif overall_score >= 65:
            return "Good - Competitive ASO with room for improvement"
        elif overall_score >= 50:
            return "Fair - Needs strategic improvements"
        else:
            return "Poor - Requires immediate ASO overhaul"

    def _prioritize_actions(
        self,
        recommendations: List[Dict[str, Any]]
    ) -> List[Dict[str, Any]]:
        """Prioritize actions by impact and urgency."""
        # Sort by priority (high first) and expected impact
        priority_order = {'high': 0, 'medium': 1, 'low': 2}

        sorted_recommendations = sorted(
            recommendations,
            key=lambda x: priority_order.get(x['priority'], len(priority_order))
        )

        return sorted_recommendations[:3]  # Top 3 priority actions

    def _identify_strengths(self, score_breakdown: Dict[str, Any]) -> List[str]:
        """Identify areas of strength (scores >= 75)."""
        strengths = []

        for category, data in score_breakdown.items():
            if data['score'] >= 75:
                strengths.append(
                    f"{category.replace('_', ' ').title()}: {data['score']}/100"
                )

        return strengths if strengths else ["Focus on building strengths across all areas"]

    def _identify_weaknesses(self, score_breakdown: Dict[str, Any]) -> List[str]:
        """Identify areas needing improvement (scores < 60)."""
        weaknesses = []

        for category, data in score_breakdown.items():
            if data['score'] < 60:
                weaknesses.append(
                    f"{category.replace('_', ' ').title()}: {data['score']}/100 - needs improvement"
                )

        return weaknesses if weaknesses else ["All areas performing adequately"]


def calculate_aso_score(
    metadata: Dict[str, Any],
    ratings: Dict[str, Any],
    keyword_performance: Dict[str, Any],
    conversion: Dict[str, Any]
) -> Dict[str, Any]:
    """
    Convenience function to calculate ASO score.

    Args:
        metadata: Metadata quality metrics
        ratings: Ratings data
        keyword_performance: Keyword ranking data
        conversion: Conversion metrics

    Returns:
        Complete ASO score report
    """
    scorer = ASOScorer()
    return scorer.calculate_overall_score(
        metadata,
        ratings,
        keyword_performance,
        conversion
    )

```
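Since the four `WEIGHTS` entries are each 25, the weighted combination in `calculate_overall_score` reduces to the mean of the four component scores. A minimal sketch of just that weighting step (the helper name `overall_score` is hypothetical; the real method also returns the breakdown, recommendations, and health status):

```python
# Mirrors only the weighting step of ASOScorer.calculate_overall_score;
# the helper name is hypothetical.
WEIGHTS = {'metadata_quality': 25, 'ratings_reviews': 25,
           'keyword_performance': 25, 'conversion_metrics': 25}

def overall_score(components: dict) -> float:
    # Each weight is a percentage of the 0-100 component score
    return round(sum(components[k] * (WEIGHTS[k] / 100) for k in WEIGHTS), 1)

scores = {'metadata_quality': 70.0, 'ratings_reviews': 85.0,
          'keyword_performance': 55.0, 'conversion_metrics': 60.0}
print(overall_score(scores))  # 67.5
```

Adjusting the `WEIGHTS` dict (e.g. raising `conversion_metrics` for a mature app) changes the emphasis without touching the component scorers.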

### scripts/ab_test_planner.py

```python
"""
A/B testing module for App Store Optimization.
Plans and tracks A/B tests for metadata and visual assets.
"""

from typing import Dict, List, Any, Optional
import math


class ABTestPlanner:
    """Plans and tracks A/B tests for ASO elements."""

    # Minimum detectable effect sizes (conservative estimates)
    MIN_EFFECT_SIZES = {
        'icon': 0.10,  # 10% conversion improvement
        'screenshot': 0.08,  # 8% conversion improvement
        'title': 0.05,  # 5% conversion improvement
        'description': 0.03  # 3% conversion improvement
    }

    # Statistical confidence levels
    CONFIDENCE_LEVELS = {
        'high': 0.95,  # 95% confidence
        'standard': 0.90,  # 90% confidence
        'exploratory': 0.80  # 80% confidence
    }

    def __init__(self):
        """Initialize A/B test planner."""
        self.active_tests = []

    def design_test(
        self,
        test_type: str,
        variant_a: Dict[str, Any],
        variant_b: Dict[str, Any],
        hypothesis: str,
        success_metric: str = 'conversion_rate'
    ) -> Dict[str, Any]:
        """
        Design an A/B test with hypothesis and variables.

        Args:
            test_type: Type of test ('icon', 'screenshot', 'title', 'description')
            variant_a: Control variant details
            variant_b: Test variant details
            hypothesis: Expected outcome hypothesis
            success_metric: Metric to optimize

        Returns:
            Test design with configuration
        """
        test_design = {
            'test_id': self._generate_test_id(test_type),
            'test_type': test_type,
            'hypothesis': hypothesis,
            'variants': {
                'a': {
                    'name': 'Control',
                    'details': variant_a,
                    'traffic_split': 0.5
                },
                'b': {
                    'name': 'Variation',
                    'details': variant_b,
                    'traffic_split': 0.5
                }
            },
            'success_metric': success_metric,
            'secondary_metrics': self._get_secondary_metrics(test_type),
            'minimum_effect_size': self.MIN_EFFECT_SIZES.get(test_type, 0.05),
            'recommended_confidence': 'standard',
            'best_practices': self._get_test_best_practices(test_type)
        }

        self.active_tests.append(test_design)
        return test_design

    def calculate_sample_size(
        self,
        baseline_conversion: float,
        minimum_detectable_effect: float,
        confidence_level: str = 'standard',
        power: float = 0.80
    ) -> Dict[str, Any]:
        """
        Calculate required sample size for statistical significance.

        Args:
            baseline_conversion: Current conversion rate (0-1)
            minimum_detectable_effect: Minimum effect size to detect (0-1)
            confidence_level: 'high', 'standard', or 'exploratory'
            power: Statistical power (typically 0.80 or 0.90)

        Returns:
            Sample size calculation with duration estimates
        """
        alpha = 1 - self.CONFIDENCE_LEVELS[confidence_level]

        # Expected conversion for variant B
        expected_conversion_b = baseline_conversion * (1 + minimum_detectable_effect)

        # Z-scores for the two-tailed significance level and for power
        z_alpha = self._get_z_score(1 - alpha / 2)  # Two-tailed test
        z_beta = self._get_z_score(power)

        # Pooled standard deviation
        p_pooled = (baseline_conversion + expected_conversion_b) / 2
        sd_pooled = math.sqrt(2 * p_pooled * (1 - p_pooled))

        # Sample size per variant
        n_per_variant = math.ceil(
            ((z_alpha + z_beta) ** 2 * sd_pooled ** 2) /
            ((expected_conversion_b - baseline_conversion) ** 2)
        )

        total_sample_size = n_per_variant * 2

        # Estimate duration based on typical traffic
        duration_estimates = self._estimate_test_duration(
            total_sample_size,
            baseline_conversion
        )

        return {
            'sample_size_per_variant': n_per_variant,
            'total_sample_size': total_sample_size,
            'baseline_conversion': baseline_conversion,
            'expected_conversion_improvement': minimum_detectable_effect,
            'expected_conversion_b': expected_conversion_b,
            'confidence_level': confidence_level,
            'statistical_power': power,
            'duration_estimates': duration_estimates,
            'recommendations': self._generate_sample_size_recommendations(
                n_per_variant,
                duration_estimates
            )
        }

    def calculate_significance(
        self,
        variant_a_conversions: int,
        variant_a_visitors: int,
        variant_b_conversions: int,
        variant_b_visitors: int
    ) -> Dict[str, Any]:
        """
        Calculate statistical significance of test results.

        Args:
            variant_a_conversions: Conversions for control
            variant_a_visitors: Visitors for control
            variant_b_conversions: Conversions for variation
            variant_b_visitors: Visitors for variation

        Returns:
            Significance analysis with decision recommendation
        """
        # Calculate conversion rates
        rate_a = variant_a_conversions / variant_a_visitors if variant_a_visitors > 0 else 0
        rate_b = variant_b_conversions / variant_b_visitors if variant_b_visitors > 0 else 0

        # Calculate improvement
        if rate_a > 0:
            relative_improvement = (rate_b - rate_a) / rate_a
        else:
            relative_improvement = 0

        absolute_improvement = rate_b - rate_a

        # Calculate standard error
        se_a = math.sqrt(rate_a * (1 - rate_a) / variant_a_visitors) if variant_a_visitors > 0 else 0
        se_b = math.sqrt(rate_b * (1 - rate_b) / variant_b_visitors) if variant_b_visitors > 0 else 0
        se_diff = math.sqrt(se_a**2 + se_b**2)

        # Calculate z-score
        z_score = absolute_improvement / se_diff if se_diff > 0 else 0

        # Calculate p-value (two-tailed)
        p_value = 2 * (1 - self._standard_normal_cdf(abs(z_score)))

        # Determine significance
        is_significant_95 = p_value < 0.05
        is_significant_90 = p_value < 0.10

        # Generate decision
        decision = self._generate_test_decision(
            relative_improvement,
            is_significant_95,
            is_significant_90,
            variant_a_visitors + variant_b_visitors
        )

        return {
            'variant_a': {
                'conversions': variant_a_conversions,
                'visitors': variant_a_visitors,
                'conversion_rate': round(rate_a, 4)
            },
            'variant_b': {
                'conversions': variant_b_conversions,
                'visitors': variant_b_visitors,
                'conversion_rate': round(rate_b, 4)
            },
            'improvement': {
                'absolute': round(absolute_improvement, 4),
                'relative_percentage': round(relative_improvement * 100, 2)
            },
            'statistical_analysis': {
                'z_score': round(z_score, 3),
                'p_value': round(p_value, 4),
                'is_significant_95': is_significant_95,
                'is_significant_90': is_significant_90,
                'confidence_level': '95%' if is_significant_95 else ('90%' if is_significant_90 else 'Not significant')
            },
            'decision': decision
        }

    def track_test_results(
        self,
        test_id: str,
        results_data: Dict[str, Any]
    ) -> Dict[str, Any]:
        """
        Track ongoing test results and provide recommendations.

        Args:
            test_id: Test identifier
            results_data: Current test results

        Returns:
            Test tracking report with next steps
        """
        # Find test
        test = next((t for t in self.active_tests if t['test_id'] == test_id), None)
        if not test:
            return {'error': f'Test {test_id} not found'}

        # Calculate significance
        significance = self.calculate_significance(
            results_data['variant_a_conversions'],
            results_data['variant_a_visitors'],
            results_data['variant_b_conversions'],
            results_data['variant_b_visitors']
        )

        # Calculate test progress
        total_visitors = results_data['variant_a_visitors'] + results_data['variant_b_visitors']
        required_sample = results_data.get('required_sample_size', 10000)
        progress_percentage = min((total_visitors / required_sample) * 100, 100)

        # Generate recommendations
        recommendations = self._generate_tracking_recommendations(
            significance,
            progress_percentage,
            test['test_type']
        )

        return {
            'test_id': test_id,
            'test_type': test['test_type'],
            'progress': {
                'total_visitors': total_visitors,
                'required_sample_size': required_sample,
                'progress_percentage': round(progress_percentage, 1),
                'is_complete': progress_percentage >= 100
            },
            'current_results': significance,
            'recommendations': recommendations,
            'next_steps': self._determine_next_steps(
                significance,
                progress_percentage
            )
        }

    def generate_test_report(
        self,
        test_id: str,
        final_results: Dict[str, Any]
    ) -> Dict[str, Any]:
        """
        Generate final test report with insights and recommendations.

        Args:
            test_id: Test identifier
            final_results: Final test results

        Returns:
            Comprehensive test report
        """
        test = next((t for t in self.active_tests if t['test_id'] == test_id), None)
        if not test:
            return {'error': f'Test {test_id} not found'}

        significance = self.calculate_significance(
            final_results['variant_a_conversions'],
            final_results['variant_a_visitors'],
            final_results['variant_b_conversions'],
            final_results['variant_b_visitors']
        )

        # Generate insights
        insights = self._generate_test_insights(
            test,
            significance,
            final_results
        )

        # Implementation plan
        implementation_plan = self._create_implementation_plan(
            test,
            significance
        )

        return {
            'test_summary': {
                'test_id': test_id,
                'test_type': test['test_type'],
                'hypothesis': test['hypothesis'],
                'duration_days': final_results.get('duration_days', 'N/A')
            },
            'results': significance,
            'insights': insights,
            'implementation_plan': implementation_plan,
            'learnings': self._extract_learnings(test, significance)
        }

    def _generate_test_id(self, test_type: str) -> str:
        """Generate unique test ID."""
        import time
        timestamp = int(time.time())
        return f"{test_type}_{timestamp}"

    def _get_secondary_metrics(self, test_type: str) -> List[str]:
        """Get secondary metrics to track for test type."""
        metrics_map = {
            'icon': ['tap_through_rate', 'impression_count', 'brand_recall'],
            'screenshot': ['tap_through_rate', 'time_on_page', 'scroll_depth'],
            'title': ['impression_count', 'tap_through_rate', 'search_visibility'],
            'description': ['time_on_page', 'scroll_depth', 'tap_through_rate']
        }
        return metrics_map.get(test_type, ['tap_through_rate'])

    def _get_test_best_practices(self, test_type: str) -> List[str]:
        """Get best practices for specific test type."""
        practices_map = {
            'icon': [
                'Test only one element at a time (color vs. style vs. symbolism)',
                'Ensure icon is recognizable at small sizes (60x60px)',
                'Consider cultural context for global audience',
                'Test against top competitor icons'
            ],
            'screenshot': [
                'Test order of screenshots (users see first 2-3)',
                'Use captions to tell story',
                'Show key features and benefits',
                'Test with and without device frames'
            ],
            'title': [
                'Test keyword variations, not major rebrand',
                'Keep brand name consistent',
                'Ensure title fits within character limits',
                'Test on both search and browse contexts'
            ],
            'description': [
                'Test structure (bullet points vs. paragraphs)',
                'Test call-to-action placement',
                'Test feature vs. benefit focus',
                'Maintain keyword density'
            ]
        }
        return practices_map.get(test_type, ['Test one variable at a time'])

    def _estimate_test_duration(
        self,
        required_sample_size: int,
        baseline_conversion: float
    ) -> Dict[str, Any]:
        """Estimate test duration based on typical traffic levels.

        baseline_conversion is accepted for interface symmetry but the
        estimate only depends on page views per day.
        """
        # Assume different daily traffic scenarios
        traffic_scenarios = {
            'low': 100,      # 100 page views/day
            'medium': 1000,  # 1000 page views/day
            'high': 10000    # 10000 page views/day
        }

        estimates = {}
        for scenario, daily_views in traffic_scenarios.items():
            days = math.ceil(required_sample_size / daily_views)
            estimates[scenario] = {
                'daily_page_views': daily_views,
                'estimated_days': days,
                'estimated_weeks': round(days / 7, 1)
            }

        return estimates

    def _generate_sample_size_recommendations(
        self,
        sample_size: int,
        duration_estimates: Dict[str, Any]
    ) -> List[str]:
        """Generate recommendations based on sample size."""
        recommendations = []

        if sample_size > 50000:
            recommendations.append(
                "Large sample size required - consider testing smaller effect size or increasing traffic"
            )

        if duration_estimates['medium']['estimated_days'] > 30:
            recommendations.append(
                "Long test duration - consider higher minimum detectable effect or focus on high-impact changes"
            )

        if duration_estimates['low']['estimated_days'] > 60:
            recommendations.append(
                "Insufficient traffic for reliable testing - consider user acquisition or broader targeting"
            )

        if not recommendations:
            recommendations.append("Sample size and duration are reasonable for this test")

        return recommendations

    def _get_z_score(self, percentile: float) -> float:
        """Look up the z-score for a percentile.

        Covers the values produced by CONFIDENCE_LEVELS and typical power
        settings; falls back to 1.96 for anything else.
        """
        # Common z-scores
        z_scores = {
            0.80: 0.84,
            0.85: 1.04,
            0.90: 1.28,
            0.95: 1.645,
            0.975: 1.96,
            0.99: 2.33
        }
        return z_scores.get(percentile, 1.96)

    def _standard_normal_cdf(self, z: float) -> float:
        """Approximate the standard normal CDF.

        Abramowitz & Stegun formula 26.2.17, accurate to roughly 1e-7;
        math.erf gives an exact alternative: 0.5 * (1 + erf(z / sqrt(2))).
        """
        # Polynomial approximation of the upper tail
        t = 1.0 / (1.0 + 0.2316419 * abs(z))
        d = 0.3989423 * math.exp(-z * z / 2.0)
        p = d * t * (0.3193815 + t * (-0.3565638 + t * (1.781478 + t * (-1.821256 + t * 1.330274))))

        if z > 0:
            return 1.0 - p
        else:
            return p

    def _generate_test_decision(
        self,
        improvement: float,
        is_significant_95: bool,
        is_significant_90: bool,
        total_visitors: int
    ) -> Dict[str, Any]:
        """Generate test decision and recommendation."""
        if total_visitors < 1000:
            return {
                'decision': 'continue',
                'rationale': 'Insufficient data - continue test to reach minimum sample size',
                'action': 'Keep test running'
            }

        if is_significant_95:
            if improvement > 0:
                return {
                    'decision': 'implement_b',
                    'rationale': f'Variant B shows {improvement*100:.1f}% improvement with 95% confidence',
                    'action': 'Implement Variant B'
                }
            else:
                return {
                    'decision': 'keep_a',
                    'rationale': 'Variant A performs better with 95% confidence',
                    'action': 'Keep current version (A)'
                }

        elif is_significant_90:
            if improvement > 0:
                return {
                    'decision': 'implement_b_cautiously',
                    'rationale': f'Variant B shows {improvement*100:.1f}% improvement with 90% confidence',
                    'action': 'Consider implementing B, monitor closely'
                }
            else:
                return {
                    'decision': 'keep_a',
                    'rationale': 'Variant A performs better with 90% confidence',
                    'action': 'Keep current version (A)'
                }

        else:
            return {
                'decision': 'inconclusive',
                'rationale': 'No statistically significant difference detected',
                'action': 'Either keep A or test different hypothesis'
            }

    def _generate_tracking_recommendations(
        self,
        significance: Dict[str, Any],
        progress: float,
        test_type: str
    ) -> List[str]:
        """Generate recommendations for ongoing test."""
        recommendations = []

        if progress < 50:
            recommendations.append(
                f"Test is {progress:.0f}% complete - continue collecting data"
            )
        elif progress < 100:
            recommendations.append(
                f"Test is {progress:.0f}% complete - avoid concluding early; "
                "stopping on interim results inflates false-positive risk"
            )

        if progress >= 100:
            if significance['statistical_analysis']['is_significant_95']:
                recommendations.append(
                    "Sufficient data collected with significant results - ready to conclude test"
                )
            else:
                recommendations.append(
                    "Sample size reached but no significant difference - consider extending test or concluding"
                )

        return recommendations

    def _determine_next_steps(
        self,
        significance: Dict[str, Any],
        progress: float
    ) -> str:
        """Determine next steps for test."""
        if progress < 100:
            return f"Continue test until reaching 100% sample size (currently {progress:.0f}%)"

        decision = significance.get('decision', {}).get('decision', 'inconclusive')

        if decision == 'implement_b':
            return "Implement Variant B and monitor metrics for 2 weeks"
        elif decision == 'keep_a':
            return "Keep Variant A and design new test with different hypothesis"
        else:
            return "Test inconclusive - either keep A or design new test"

    def _generate_test_insights(
        self,
        test: Dict[str, Any],
        significance: Dict[str, Any],
        results: Dict[str, Any]
    ) -> List[str]:
        """Generate insights from test results."""
        insights = []

        improvement = significance['improvement']['relative_percentage']

        if significance['statistical_analysis']['is_significant_95']:
            insights.append(
                f"Strong evidence: Variant B {'improved' if improvement > 0 else 'decreased'} "
                f"conversion by {abs(improvement):.1f}% with 95% confidence"
            )

        insights.append(
            f"Tested {test['test_type']} changes: {test['hypothesis']}"
        )

        # Add context-specific insights
        if test['test_type'] == 'icon' and improvement > 5:
            insights.append(
                "Icon change had substantial impact - visual first impression is critical"
            )

        return insights

    def _create_implementation_plan(
        self,
        test: Dict[str, Any],
        significance: Dict[str, Any]
    ) -> List[Dict[str, str]]:
        """Create implementation plan for winning variant."""
        plan = []

        if significance.get('decision', {}).get('decision') == 'implement_b':
            plan.append({
                'step': '1. Update store listing',
                'details': f"Replace {test['test_type']} with Variant B across all platforms"
            })
            plan.append({
                'step': '2. Monitor metrics',
                'details': 'Track conversion rate for 2 weeks to confirm sustained improvement'
            })
            plan.append({
                'step': '3. Document learnings',
                'details': 'Record insights for future optimization'
            })

        return plan

    def _extract_learnings(
        self,
        test: Dict[str, Any],
        significance: Dict[str, Any]
    ) -> List[str]:
        """Extract key learnings from test."""
        learnings = []

        improvement = significance['improvement']['relative_percentage']

        learnings.append(
            f"Testing {test['test_type']} can yield {abs(improvement):.1f}% conversion change"
        )

        if test['test_type'] == 'title':
            learnings.append(
                "Title changes affect search visibility and user perception"
            )
        elif test['test_type'] == 'screenshot':
            learnings.append(
                "First 2-3 screenshots are critical for conversion"
            )

        return learnings


def plan_ab_test(
    test_type: str,
    variant_a: Dict[str, Any],
    variant_b: Dict[str, Any],
    hypothesis: str,
    baseline_conversion: float
) -> Dict[str, Any]:
    """
    Convenience function to plan an A/B test.

    Args:
        test_type: Type of test
        variant_a: Control variant
        variant_b: Test variant
        hypothesis: Test hypothesis
        baseline_conversion: Current conversion rate

    Returns:
        Complete test plan
    """
    planner = ABTestPlanner()

    test_design = planner.design_test(
        test_type,
        variant_a,
        variant_b,
        hypothesis
    )

    sample_size = planner.calculate_sample_size(
        baseline_conversion,
        planner.MIN_EFFECT_SIZES.get(test_type, 0.05)
    )

    return {
        'test_design': test_design,
        'sample_size_requirements': sample_size
    }

```
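The significance math in `calculate_significance` is a standard two-proportion z-test. A minimal standalone sketch of that same computation (illustrative numbers, using `math.erf` for the exact normal CDF instead of the class's polynomial approximation):

```python
import math

def z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-proportion z-test, mirroring ABTestPlanner.calculate_significance."""
    rate_a, rate_b = conv_a / n_a, conv_b / n_b
    # Standard error of the difference between the two rates
    se = math.sqrt(rate_a * (1 - rate_a) / n_a + rate_b * (1 - rate_b) / n_b)
    z = (rate_b - rate_a) / se
    # Two-tailed p-value from the exact normal CDF
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

z, p = z_test(300, 10000, 360, 10000)
print(round(z, 2), p < 0.05)
```

With 10,000 visitors per arm, a lift from 3.0% to 3.6% conversion clears the 95% bar here (z ≈ 2.38), matching the `is_significant_95` branch in the class.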

### scripts/review_analyzer.py

```python
"""
Review analysis module for App Store Optimization.
Analyzes user reviews for sentiment, issues, and feature requests.
"""

from typing import Dict, List, Any, Optional, Tuple
from collections import Counter
import re


class ReviewAnalyzer:
    """Analyzes user reviews for actionable insights."""

    # Sentiment keywords
    POSITIVE_KEYWORDS = [
        'great', 'awesome', 'excellent', 'amazing', 'love', 'best', 'perfect',
        'fantastic', 'wonderful', 'brilliant', 'outstanding', 'superb'
    ]

    NEGATIVE_KEYWORDS = [
        'bad', 'terrible', 'awful', 'horrible', 'hate', 'worst', 'useless',
        'broken', 'crash', 'bug', 'slow', 'disappointing', 'frustrating'
    ]

    # Issue indicators (review text is lowercased but punctuation is kept,
    # so "doesn't" is matched with and without the apostrophe)
    ISSUE_KEYWORDS = [
        'crash', 'bug', 'error', 'broken', 'not working', 'doesnt work',
        "doesn't work", 'freezes', 'slow', 'laggy', 'glitch', 'problem',
        'issue', 'fail'
    ]

    # Feature request indicators
    FEATURE_REQUEST_KEYWORDS = [
        'wish', 'would be nice', 'should add', 'need', 'want', 'hope',
        'please add', 'missing', 'lacks', 'feature request'
    ]

    def __init__(self, app_name: str):
        """
        Initialize review analyzer.

        Args:
            app_name: Name of the app
        """
        self.app_name = app_name
        self.reviews = []
        self.analysis_cache = {}

    def analyze_sentiment(
        self,
        reviews: List[Dict[str, Any]]
    ) -> Dict[str, Any]:
        """
        Analyze sentiment across reviews.

        Args:
            reviews: List of review dicts with 'text', 'rating', 'date'

        Returns:
            Sentiment analysis summary
        """
        self.reviews = reviews

        sentiment_counts = {
            'positive': 0,
            'neutral': 0,
            'negative': 0
        }

        detailed_sentiments = []

        for review in reviews:
            text = review.get('text', '').lower()
            rating = review.get('rating', 3)

            # Calculate sentiment score
            sentiment_score = self._calculate_sentiment_score(text, rating)
            sentiment_category = self._categorize_sentiment(sentiment_score)

            sentiment_counts[sentiment_category] += 1

            detailed_sentiments.append({
                'review_id': review.get('id', ''),
                'rating': rating,
                'sentiment_score': sentiment_score,
                'sentiment': sentiment_category,
                'text_preview': text[:100] + '...' if len(text) > 100 else text
            })

        # Calculate percentages
        total = len(reviews)
        sentiment_distribution = {
            'positive': round((sentiment_counts['positive'] / total) * 100, 1) if total > 0 else 0,
            'neutral': round((sentiment_counts['neutral'] / total) * 100, 1) if total > 0 else 0,
            'negative': round((sentiment_counts['negative'] / total) * 100, 1) if total > 0 else 0
        }

        # Calculate average rating
        avg_rating = sum(r.get('rating', 0) for r in reviews) / total if total > 0 else 0

        return {
            'total_reviews_analyzed': total,
            'average_rating': round(avg_rating, 2),
            'sentiment_distribution': sentiment_distribution,
            'sentiment_counts': sentiment_counts,
            'sentiment_trend': self._assess_sentiment_trend(sentiment_distribution),
            'detailed_sentiments': detailed_sentiments[:50]  # Limit output
        }

    def extract_common_themes(
        self,
        reviews: List[Dict[str, Any]],
        min_mentions: int = 3
    ) -> Dict[str, Any]:
        """
        Extract frequently mentioned themes and topics.

        Args:
            reviews: List of review dicts
            min_mentions: Minimum mentions to be considered common

        Returns:
            Common themes analysis
        """
        # Extract all words from reviews
        all_words = []
        all_phrases = []

        for review in reviews:
            text = review.get('text', '').lower()
            # Clean text
            text = re.sub(r'[^\w\s]', ' ', text)
            words = text.split()

            # Filter out common words
            stop_words = {
                'the', 'and', 'for', 'with', 'this', 'that', 'from', 'have',
                'app', 'apps', 'very', 'really', 'just', 'but', 'not', 'you'
            }
            words = [w for w in words if w not in stop_words and len(w) > 3]

            all_words.extend(words)

            # Extract 2-word phrases (bigrams)
            for i in range(len(words) - 1):
                phrase = f"{words[i]} {words[i+1]}"
                all_phrases.append(phrase)

        # Count frequency
        word_freq = Counter(all_words)
        phrase_freq = Counter(all_phrases)

        # Filter by min_mentions
        common_words = [
            {'word': word, 'mentions': count}
            for word, count in word_freq.most_common(30)
            if count >= min_mentions
        ]

        common_phrases = [
            {'phrase': phrase, 'mentions': count}
            for phrase, count in phrase_freq.most_common(20)
            if count >= min_mentions
        ]

        # Categorize themes
        themes = self._categorize_themes(common_words, common_phrases)

        return {
            'common_words': common_words,
            'common_phrases': common_phrases,
            'identified_themes': themes,
            'insights': self._generate_theme_insights(themes)
        }

    def identify_issues(
        self,
        reviews: List[Dict[str, Any]],
        rating_threshold: int = 3
    ) -> Dict[str, Any]:
        """
        Identify bugs, crashes, and other issues from reviews.

        Args:
            reviews: List of review dicts
            rating_threshold: Only analyze reviews at or below this rating

        Returns:
            Issue identification report
        """
        issues = []

        for review in reviews:
            rating = review.get('rating', 5)
            if rating > rating_threshold:
                continue

            text = review.get('text', '').lower()

            # Check for issue keywords
            mentioned_issues = []
            for keyword in self.ISSUE_KEYWORDS:
                if keyword in text:
                    mentioned_issues.append(keyword)

            if mentioned_issues:
                issues.append({
                    'review_id': review.get('id', ''),
                    'rating': rating,
                    'date': review.get('date', ''),
                    'issue_keywords': mentioned_issues,
                    'text': text[:200] + '...' if len(text) > 200 else text
                })

        # Group by issue type
        issue_frequency = Counter()
        for issue in issues:
            for keyword in issue['issue_keywords']:
                issue_frequency[keyword] += 1

        # Categorize issues
        categorized_issues = self._categorize_issues(issues)

        # Calculate issue severity
        severity_scores = self._calculate_issue_severity(
            categorized_issues,
            len(reviews)
        )

        return {
            'total_issues_found': len(issues),
            'issue_frequency': dict(issue_frequency.most_common(15)),
            'categorized_issues': categorized_issues,
            'severity_scores': severity_scores,
            'top_issues': self._rank_issues_by_severity(severity_scores),
            'recommendations': self._generate_issue_recommendations(
                categorized_issues,
                severity_scores
            )
        }

    def find_feature_requests(
        self,
        reviews: List[Dict[str, Any]]
    ) -> Dict[str, Any]:
        """
        Extract feature requests and desired improvements.

        Args:
            reviews: List of review dicts

        Returns:
            Feature request analysis
        """
        feature_requests = []

        for review in reviews:
            text = review.get('text', '').lower()
            rating = review.get('rating', 3)

            # Check for feature request indicators
            is_feature_request = any(
                keyword in text
                for keyword in self.FEATURE_REQUEST_KEYWORDS
            )

            if is_feature_request:
                # Extract the specific request
                request_text = self._extract_feature_request_text(text)

                feature_requests.append({
                    'review_id': review.get('id', ''),
                    'rating': rating,
                    'date': review.get('date', ''),
                    'request_text': request_text,
                    'full_review': text[:200] + '...' if len(text) > 200 else text
                })

        # Cluster similar requests
        clustered_requests = self._cluster_feature_requests(feature_requests)

        # Prioritize based on frequency and rating context
        prioritized_requests = self._prioritize_feature_requests(clustered_requests)

        return {
            'total_feature_requests': len(feature_requests),
            'clustered_requests': clustered_requests,
            'prioritized_requests': prioritized_requests,
            'implementation_recommendations': self._generate_feature_recommendations(
                prioritized_requests
            )
        }

    def track_sentiment_trends(
        self,
        reviews_by_period: Dict[str, List[Dict[str, Any]]]
    ) -> Dict[str, Any]:
        """
        Track sentiment changes over time.

        Args:
            reviews_by_period: Dict of period_name: reviews

        Returns:
            Trend analysis
        """
        trends = []

        for period, reviews in reviews_by_period.items():
            sentiment = self.analyze_sentiment(reviews)

            trends.append({
                'period': period,
                'total_reviews': len(reviews),
                'average_rating': sentiment['average_rating'],
                'positive_percentage': sentiment['sentiment_distribution']['positive'],
                'negative_percentage': sentiment['sentiment_distribution']['negative']
            })

        # Calculate trend direction
        if len(trends) >= 2:
            first_period = trends[0]
            last_period = trends[-1]

            rating_change = last_period['average_rating'] - first_period['average_rating']
            sentiment_change = last_period['positive_percentage'] - first_period['positive_percentage']

            trend_direction = self._determine_trend_direction(
                rating_change,
                sentiment_change
            )
        else:
            trend_direction = 'insufficient_data'

        return {
            'periods_analyzed': len(trends),
            'trend_data': trends,
            'trend_direction': trend_direction,
            'insights': self._generate_trend_insights(trends, trend_direction)
        }

    def generate_response_templates(
        self,
        issue_category: str
    ) -> List[Dict[str, str]]:
        """
        Generate response templates for common review scenarios.

        Args:
            issue_category: Category of issue ('crash', 'feature_request', 'positive', etc.)

        Returns:
            Response templates
        """
        templates = {
            'crash': [
                {
                    'scenario': 'App crash reported',
                    'template': "Thank you for bringing this to our attention. We're sorry you experienced a crash. "
                               "Our team is investigating this issue. Could you please share more details about when "
                               "this occurred (device model, iOS/Android version) by contacting support@[company].com? "
                               "We're committed to fixing this quickly."
                },
                {
                    'scenario': 'Crash already fixed',
                    'template': "Thank you for your feedback. We've identified and fixed this crash issue in version [X.X]. "
                               "Please update to the latest version. If the problem persists, please reach out to "
                               "support@[company].com and we'll help you directly."
                }
            ],
            'bug': [
                {
                    'scenario': 'Bug reported',
                    'template': "Thanks for reporting this bug. We take these issues seriously. Our team is looking into it "
                               "and we'll have a fix in an upcoming update. We appreciate your patience and will notify you "
                               "when it's resolved."
                }
            ],
            'feature_request': [
                {
                    'scenario': 'Feature request received',
                    'template': "Thank you for this suggestion! We're always looking to improve [app_name]. We've added your "
                               "request to our roadmap and will consider it for a future update. Follow us @[social] for "
                               "updates on new features."
                },
                {
                    'scenario': 'Feature already planned',
                    'template': "Great news! This feature is already on our roadmap and we're working on it. Stay tuned for "
                               "updates in the coming months. Thanks for your feedback!"
                }
            ],
            'positive': [
                {
                    'scenario': 'Positive review',
                    'template': "Thank you so much for your kind words! We're thrilled that you're enjoying [app_name]. "
                               "Reviews like yours motivate our team to keep improving. If you ever have suggestions, "
                               "we'd love to hear them!"
                }
            ],
            'negative_general': [
                {
                    'scenario': 'General complaint',
                    'template': "We're sorry to hear you're not satisfied with your experience. We'd like to make this right. "
                               "Please contact us at support@[company].com so we can understand the issue better and help "
                               "you directly. Thank you for giving us a chance to improve."
                }
            ]
        }

        return templates.get(issue_category, templates['negative_general'])

    def _calculate_sentiment_score(self, text: str, rating: int) -> float:
        """Calculate sentiment score (-1 to 1)."""
        # Start with rating-based score
        rating_score = (rating - 3) / 2  # Convert 1-5 to -1 to 1

        # Adjust based on text sentiment
        positive_count = sum(1 for keyword in self.POSITIVE_KEYWORDS if keyword in text)
        negative_count = sum(1 for keyword in self.NEGATIVE_KEYWORDS if keyword in text)

        text_score = (positive_count - negative_count) / 10  # Normalize

        # Weighted average (60% rating, 40% text)
        final_score = (rating_score * 0.6) + (text_score * 0.4)

        return max(min(final_score, 1.0), -1.0)

    def _categorize_sentiment(self, score: float) -> str:
        """Categorize sentiment score."""
        if score > 0.3:
            return 'positive'
        elif score < -0.3:
            return 'negative'
        else:
            return 'neutral'

    def _assess_sentiment_trend(self, distribution: Dict[str, float]) -> str:
        """Assess overall sentiment trend."""
        positive = distribution['positive']
        negative = distribution['negative']

        if positive > 70:
            return 'very_positive'
        elif positive > 50:
            return 'positive'
        elif negative > 50:
            # Check the stricter threshold first so 'critical' is reachable
            return 'critical'
        elif negative > 30:
            return 'concerning'
        else:
            return 'mixed'

    def _categorize_themes(
        self,
        common_words: List[Dict[str, Any]],
        common_phrases: List[Dict[str, Any]]
    ) -> Dict[str, List[str]]:
        """Categorize themes from words and phrases."""
        themes = {
            'features': [],
            'performance': [],
            'usability': [],
            'support': [],
            'pricing': []
        }

        # Keywords for each category
        feature_keywords = {'feature', 'functionality', 'option', 'tool'}
        performance_keywords = {'fast', 'slow', 'crash', 'lag', 'speed', 'performance'}
        usability_keywords = {'easy', 'difficult', 'intuitive', 'confusing', 'interface', 'design'}
        support_keywords = {'support', 'help', 'customer', 'service', 'response'}
        pricing_keywords = {'price', 'cost', 'expensive', 'cheap', 'subscription', 'free'}

        for word_data in common_words:
            word = word_data['word']
            if any(kw in word for kw in feature_keywords):
                themes['features'].append(word)
            elif any(kw in word for kw in performance_keywords):
                themes['performance'].append(word)
            elif any(kw in word for kw in usability_keywords):
                themes['usability'].append(word)
            elif any(kw in word for kw in support_keywords):
                themes['support'].append(word)
            elif any(kw in word for kw in pricing_keywords):
                themes['pricing'].append(word)

        return {k: v for k, v in themes.items() if v}  # Remove empty categories

    def _generate_theme_insights(self, themes: Dict[str, List[str]]) -> List[str]:
        """Generate insights from themes."""
        insights = []

        for category, keywords in themes.items():
            if keywords:
                insights.append(
                    f"{category.title()}: Users frequently mention {', '.join(keywords[:3])}"
                )

        return insights[:5]

    def _categorize_issues(self, issues: List[Dict[str, Any]]) -> Dict[str, List[Dict[str, Any]]]:
        """Categorize issues by type."""
        categories = {
            'crashes': [],
            'bugs': [],
            'performance': [],
            'compatibility': []
        }

        for issue in issues:
            keywords = issue['issue_keywords']

            if 'crash' in keywords or 'freezes' in keywords:
                categories['crashes'].append(issue)
            elif 'bug' in keywords or 'error' in keywords or 'broken' in keywords:
                categories['bugs'].append(issue)
            elif 'slow' in keywords or 'laggy' in keywords:
                categories['performance'].append(issue)
            else:
                categories['compatibility'].append(issue)

        return {k: v for k, v in categories.items() if v}

    def _calculate_issue_severity(
        self,
        categorized_issues: Dict[str, List[Dict[str, Any]]],
        total_reviews: int
    ) -> Dict[str, Dict[str, Any]]:
        """Calculate severity scores for each issue category."""
        severity_scores = {}

        for category, issues in categorized_issues.items():
            count = len(issues)
            percentage = (count / total_reviews) * 100 if total_reviews > 0 else 0

            # Calculate average rating of affected reviews
            avg_rating = sum(i['rating'] for i in issues) / count if count > 0 else 0

            # Severity score (0-100)
            severity = min((percentage * 10) + ((5 - avg_rating) * 10), 100)

            severity_scores[category] = {
                'count': count,
                'percentage': round(percentage, 2),
                'average_rating': round(avg_rating, 2),
                'severity_score': round(severity, 1),
                'priority': 'critical' if severity > 70 else ('high' if severity > 40 else 'medium')
            }

        return severity_scores

    def _rank_issues_by_severity(
        self,
        severity_scores: Dict[str, Dict[str, Any]]
    ) -> List[Dict[str, Any]]:
        """Rank issues by severity score."""
        ranked = sorted(
            [{'category': cat, **data} for cat, data in severity_scores.items()],
            key=lambda x: x['severity_score'],
            reverse=True
        )
        return ranked

    def _generate_issue_recommendations(
        self,
        categorized_issues: Dict[str, List[Dict[str, Any]]],
        severity_scores: Dict[str, Dict[str, Any]]
    ) -> List[str]:
        """Generate recommendations for addressing issues."""
        recommendations = []

        for category, score_data in severity_scores.items():
            if score_data['priority'] == 'critical':
                recommendations.append(
                    f"URGENT: Address {category} issues immediately - affecting {score_data['percentage']}% of reviews"
                )
            elif score_data['priority'] == 'high':
                recommendations.append(
                    f"HIGH PRIORITY: Focus on {category} issues in next update"
                )

        return recommendations

    def _extract_feature_request_text(self, text: str) -> str:
        """Extract the specific feature request from review text."""
        # Simple extraction - find sentence with feature request keywords
        sentences = text.split('.')
        for sentence in sentences:
            if any(keyword in sentence for keyword in self.FEATURE_REQUEST_KEYWORDS):
                return sentence.strip()
        return text[:100]  # Fallback

    def _cluster_feature_requests(
        self,
        feature_requests: List[Dict[str, Any]]
    ) -> List[Dict[str, Any]]:
        """Cluster similar feature requests."""
        # Simplified clustering - group by common keywords
        clusters = {}

        for request in feature_requests:
            text = request['request_text'].lower()
            # Extract key words
            words = [w for w in text.split() if len(w) > 4]

            # Try to find matching cluster
            matched = False
            for cluster_key in clusters:
                if any(word in cluster_key for word in words[:3]):
                    clusters[cluster_key].append(request)
                    matched = True
                    break

            if not matched and words:
                cluster_key = ' '.join(words[:2])
                clusters[cluster_key] = [request]

        return [
            {'feature_theme': theme, 'request_count': len(requests), 'examples': requests[:3]}
            for theme, requests in clusters.items()
        ]

    def _prioritize_feature_requests(
        self,
        clustered_requests: List[Dict[str, Any]]
    ) -> List[Dict[str, Any]]:
        """Prioritize feature requests by frequency."""
        return sorted(
            clustered_requests,
            key=lambda x: x['request_count'],
            reverse=True
        )[:10]

    def _generate_feature_recommendations(
        self,
        prioritized_requests: List[Dict[str, Any]]
    ) -> List[str]:
        """Generate recommendations for feature requests."""
        recommendations = []

        if prioritized_requests:
            top_request = prioritized_requests[0]
            recommendations.append(
                f"Most requested feature: {top_request['feature_theme']} "
                f"({top_request['request_count']} mentions) - consider for next major release"
            )

        if len(prioritized_requests) > 1:
            recommendations.append(
                f"Also consider: {prioritized_requests[1]['feature_theme']}"
            )

        return recommendations

    def _determine_trend_direction(
        self,
        rating_change: float,
        sentiment_change: float
    ) -> str:
        """Determine overall trend direction."""
        if rating_change > 0.2 and sentiment_change > 5:
            return 'improving'
        elif rating_change < -0.2 and sentiment_change < -5:
            return 'declining'
        else:
            return 'stable'

    def _generate_trend_insights(
        self,
        trends: List[Dict[str, Any]],
        trend_direction: str
    ) -> List[str]:
        """Generate insights from trend analysis."""
        insights = []

        if trend_direction == 'improving':
            insights.append("Positive trend: User satisfaction is increasing over time")
        elif trend_direction == 'declining':
            insights.append("WARNING: User satisfaction is declining - immediate action needed")
        else:
            insights.append("Sentiment is stable - maintain current quality")

        # Review velocity insight
        if len(trends) >= 2:
            recent_reviews = trends[-1]['total_reviews']
            previous_reviews = trends[-2]['total_reviews']

            if recent_reviews > previous_reviews * 1.5:
                insights.append("Review volume increasing - growing user base or recent controversy")

        return insights


def analyze_reviews(
    app_name: str,
    reviews: List[Dict[str, Any]]
) -> Dict[str, Any]:
    """
    Convenience function to perform comprehensive review analysis.

    Args:
        app_name: App name
        reviews: List of review dictionaries

    Returns:
        Complete review analysis
    """
    analyzer = ReviewAnalyzer(app_name)

    return {
        'sentiment_analysis': analyzer.analyze_sentiment(reviews),
        'common_themes': analyzer.extract_common_themes(reviews),
        'issues_identified': analyzer.identify_issues(reviews),
        'feature_requests': analyzer.find_feature_requests(reviews)
    }

```
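The scoring formulas in `_calculate_sentiment_score` and `_calculate_issue_severity` above can be sanity-checked with a small standalone sketch. The keyword sets below are illustrative placeholders, not the class's actual `POSITIVE_KEYWORDS`/`NEGATIVE_KEYWORDS` constants:

```python
# Standalone sketch of the scoring formulas used by ReviewAnalyzer.
# POSITIVE/NEGATIVE are illustrative stand-ins for the class constants.
POSITIVE = {'great', 'love', 'excellent'}
NEGATIVE = {'crash', 'slow', 'broken'}

def sentiment_score(text: str, rating: int) -> float:
    """Blend rating (60%) and keyword counts (40%) into a -1..1 score."""
    rating_score = (rating - 3) / 2                    # map 1-5 onto -1..1
    pos = sum(1 for kw in POSITIVE if kw in text)
    neg = sum(1 for kw in NEGATIVE if kw in text)
    text_score = (pos - neg) / 10                      # rough normalization
    return max(min(rating_score * 0.6 + text_score * 0.4, 1.0), -1.0)

def issue_severity(percentage: float, avg_rating: float) -> float:
    """Severity 0-100: weights prevalence plus how low affected ratings are."""
    return min(percentage * 10 + (5 - avg_rating) * 10, 100)

# A 5-star review mentioning two positive keywords:
score = sentiment_score('great app, love it', 5)   # 0.6 + 0.08 = 0.68
# Crashes reported in 5% of reviews, averaging 1.5 stars:
sev = issue_severity(5.0, 1.5)                     # 50 + 35 = 85 -> critical
```

The worked numbers show why a small but angry cohort still scores as critical: the `(5 - avg_rating)` term amplifies low-rated issue clusters even when their share of total reviews is modest.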

### scripts/launch_checklist.py

```python
"""
Launch checklist module for App Store Optimization.
Generates comprehensive pre-launch and update checklists.
"""

from typing import Dict, List, Any, Optional
from datetime import datetime, timedelta


class LaunchChecklistGenerator:
    """Generates comprehensive checklists for app launches and updates."""

    def __init__(self, platform: str = 'both'):
        """
        Initialize checklist generator.

        Args:
            platform: 'apple', 'google', or 'both'
        """
        if platform not in ['apple', 'google', 'both']:
            raise ValueError("Platform must be 'apple', 'google', or 'both'")

        self.platform = platform

    def generate_prelaunch_checklist(
        self,
        app_info: Dict[str, Any],
        launch_date: Optional[str] = None
    ) -> Dict[str, Any]:
        """
        Generate comprehensive pre-launch checklist.

        Args:
            app_info: App information (name, category, target_audience)
            launch_date: Target launch date (YYYY-MM-DD)

        Returns:
            Complete pre-launch checklist
        """
        checklist = {
            'app_info': app_info,
            'launch_date': launch_date,
            'checklists': {}
        }

        # Generate platform-specific checklists
        if self.platform in ['apple', 'both']:
            checklist['checklists']['apple'] = self._generate_apple_checklist(app_info)

        if self.platform in ['google', 'both']:
            checklist['checklists']['google'] = self._generate_google_checklist(app_info)

        # Add universal checklist items
        checklist['checklists']['universal'] = self._generate_universal_checklist(app_info)

        # Generate timeline
        if launch_date:
            checklist['timeline'] = self._generate_launch_timeline(launch_date)

        # Calculate completion status
        checklist['summary'] = self._calculate_checklist_summary(checklist['checklists'])

        return checklist

    def validate_app_store_compliance(
        self,
        app_data: Dict[str, Any],
        platform: str = 'apple'
    ) -> Dict[str, Any]:
        """
        Validate compliance with app store guidelines.

        Args:
            app_data: App data including metadata, privacy policy, etc.
            platform: 'apple' or 'google'

        Returns:
            Compliance validation report
        """
        validation_results = {
            'platform': platform,
            'is_compliant': True,
            'errors': [],
            'warnings': [],
            'recommendations': []
        }

        if platform == 'apple':
            self._validate_apple_compliance(app_data, validation_results)
        elif platform == 'google':
            self._validate_google_compliance(app_data, validation_results)

        # Determine overall compliance
        validation_results['is_compliant'] = len(validation_results['errors']) == 0

        return validation_results

    def create_update_plan(
        self,
        current_version: str,
        planned_features: List[str],
        update_frequency: str = 'monthly'
    ) -> Dict[str, Any]:
        """
        Create update cadence and feature rollout plan.

        Args:
            current_version: Current app version
            planned_features: List of planned features
            update_frequency: 'weekly', 'biweekly', 'monthly', 'quarterly'

        Returns:
            Update plan with cadence and feature schedule
        """
        # Calculate next versions
        next_versions = self._calculate_next_versions(
            current_version,
            update_frequency,
            len(planned_features)
        )

        # Distribute features across versions
        feature_schedule = self._distribute_features(
            planned_features,
            next_versions
        )

        # Generate "What's New" templates
        whats_new_templates = [
            self._generate_whats_new_template(version_data)
            for version_data in feature_schedule
        ]

        return {
            'current_version': current_version,
            'update_frequency': update_frequency,
            'planned_updates': len(feature_schedule),
            'feature_schedule': feature_schedule,
            'whats_new_templates': whats_new_templates,
            'recommendations': self._generate_update_recommendations(update_frequency)
        }

    def optimize_launch_timing(
        self,
        app_category: str,
        target_audience: str,
        current_date: Optional[str] = None
    ) -> Dict[str, Any]:
        """
        Recommend optimal launch timing.

        Args:
            app_category: App category
            target_audience: Target audience description
            current_date: Current date (YYYY-MM-DD), defaults to today

        Returns:
            Launch timing recommendations
        """
        if not current_date:
            current_date = datetime.now().strftime('%Y-%m-%d')

        # Analyze launch timing factors
        day_of_week_rec = self._recommend_day_of_week(app_category)
        seasonal_rec = self._recommend_seasonal_timing(app_category, current_date)
        competitive_rec = self._analyze_competitive_timing(app_category)

        # Calculate optimal dates
        optimal_dates = self._calculate_optimal_dates(
            current_date,
            day_of_week_rec,
            seasonal_rec
        )

        return {
            'current_date': current_date,
            'optimal_launch_dates': optimal_dates,
            'day_of_week_recommendation': day_of_week_rec,
            'seasonal_considerations': seasonal_rec,
            'competitive_timing': competitive_rec,
            'final_recommendation': self._generate_timing_recommendation(
                optimal_dates,
                seasonal_rec
            )
        }

    def plan_seasonal_campaigns(
        self,
        app_category: str,
        current_month: Optional[int] = None
    ) -> Dict[str, Any]:
        """
        Identify seasonal opportunities for ASO campaigns.

        Args:
            app_category: App category
            current_month: Current month (1-12), defaults to current

        Returns:
            Seasonal campaign opportunities
        """
        if not current_month:
            current_month = datetime.now().month

        # Identify relevant seasonal events
        seasonal_opportunities = self._identify_seasonal_opportunities(
            app_category,
            current_month
        )

        # Generate campaign ideas
        campaigns = [
            self._generate_seasonal_campaign(opportunity)
            for opportunity in seasonal_opportunities
        ]

        return {
            'current_month': current_month,
            'category': app_category,
            'seasonal_opportunities': seasonal_opportunities,
            'campaign_ideas': campaigns,
            'implementation_timeline': self._create_seasonal_timeline(campaigns)
        }

    def _generate_apple_checklist(self, app_info: Dict[str, Any]) -> List[Dict[str, Any]]:
        """Generate Apple App Store specific checklist."""
        return [
            {
                'category': 'App Store Connect Setup',
                'items': [
                    {'task': 'App Store Connect account created', 'status': 'pending'},
                    {'task': 'App bundle ID registered', 'status': 'pending'},
                    {'task': 'App Privacy declarations completed', 'status': 'pending'},
                    {'task': 'Age rating questionnaire completed', 'status': 'pending'}
                ]
            },
            {
                'category': 'Metadata (Apple)',
                'items': [
                    {'task': 'App title (30 chars max)', 'status': 'pending'},
                    {'task': 'Subtitle (30 chars max)', 'status': 'pending'},
                    {'task': 'Promotional text (170 chars max)', 'status': 'pending'},
                    {'task': 'Description (4000 chars max)', 'status': 'pending'},
                    {'task': 'Keywords (100 chars, comma-separated)', 'status': 'pending'},
                    {'task': 'Category selection (primary + secondary)', 'status': 'pending'}
                ]
            },
            {
                'category': 'Visual Assets (Apple)',
                'items': [
                    {'task': 'App icon (1024x1024px)', 'status': 'pending'},
                    {'task': 'Screenshots (iPhone 6.7" required)', 'status': 'pending'},
                    {'task': 'Screenshots (iPhone 5.5" required)', 'status': 'pending'},
                    {'task': 'Screenshots (iPad Pro 12.9" if iPad app)', 'status': 'pending'},
                    {'task': 'App preview video (optional but recommended)', 'status': 'pending'}
                ]
            },
            {
                'category': 'Technical Requirements (Apple)',
                'items': [
                    {'task': 'Build uploaded to App Store Connect', 'status': 'pending'},
                    {'task': 'TestFlight testing completed', 'status': 'pending'},
                    {'task': 'App tested on required iOS versions', 'status': 'pending'},
                    {'task': 'Crash-free rate > 99%', 'status': 'pending'},
                    {'task': 'All links in app/metadata working', 'status': 'pending'}
                ]
            },
            {
                'category': 'Legal & Privacy (Apple)',
                'items': [
                    {'task': 'Privacy Policy URL provided', 'status': 'pending'},
                    {'task': 'Terms of Service URL (if applicable)', 'status': 'pending'},
                    {'task': 'Data collection declarations accurate', 'status': 'pending'},
                    {'task': 'Third-party SDKs disclosed', 'status': 'pending'}
                ]
            }
        ]

    def _generate_google_checklist(self, app_info: Dict[str, Any]) -> List[Dict[str, Any]]:
        """Generate Google Play Store specific checklist."""
        return [
            {
                'category': 'Play Console Setup',
                'items': [
                    {'task': 'Google Play Console account created', 'status': 'pending'},
                    {'task': 'Developer profile completed', 'status': 'pending'},
                    {'task': 'Payment merchant account linked (if paid app)', 'status': 'pending'},
                    {'task': 'Content rating questionnaire completed', 'status': 'pending'}
                ]
            },
            {
                'category': 'Metadata (Google)',
                'items': [
                    {'task': 'App title (50 chars max)', 'status': 'pending'},
                    {'task': 'Short description (80 chars max)', 'status': 'pending'},
                    {'task': 'Full description (4000 chars max)', 'status': 'pending'},
                    {'task': 'Category selection', 'status': 'pending'},
                    {'task': 'Tags (up to 5)', 'status': 'pending'}
                ]
            },
            {
                'category': 'Visual Assets (Google)',
                'items': [
                    {'task': 'App icon (512x512px)', 'status': 'pending'},
                    {'task': 'Feature graphic (1024x500px)', 'status': 'pending'},
                    {'task': 'Screenshots (2-8 required, phone)', 'status': 'pending'},
                    {'task': 'Screenshots (tablet, if applicable)', 'status': 'pending'},
                    {'task': 'Promo video (YouTube link, optional)', 'status': 'pending'}
                ]
            },
            {
                'category': 'Technical Requirements (Google)',
                'items': [
                    {'task': 'APK/AAB uploaded to Play Console', 'status': 'pending'},
                    {'task': 'Internal testing completed', 'status': 'pending'},
                    {'task': 'App tested on required Android versions', 'status': 'pending'},
                    {'task': 'Target API level meets requirements', 'status': 'pending'},
                    {'task': 'All permissions justified', 'status': 'pending'}
                ]
            },
            {
                'category': 'Legal & Privacy (Google)',
                'items': [
                    {'task': 'Privacy Policy URL provided', 'status': 'pending'},
                    {'task': 'Data safety section completed', 'status': 'pending'},
                    {'task': 'Ads disclosure (if applicable)', 'status': 'pending'},
                    {'task': 'In-app purchase disclosure (if applicable)', 'status': 'pending'}
                ]
            }
        ]

    def _generate_universal_checklist(self, app_info: Dict[str, Any]) -> List[Dict[str, Any]]:
        """Generate universal (both platforms) checklist."""
        return [
            {
                'category': 'Pre-Launch Marketing',
                'items': [
                    {'task': 'Landing page created', 'status': 'pending'},
                    {'task': 'Social media accounts setup', 'status': 'pending'},
                    {'task': 'Press kit prepared', 'status': 'pending'},
                    {'task': 'Beta tester feedback collected', 'status': 'pending'},
                    {'task': 'Launch announcement drafted', 'status': 'pending'}
                ]
            },
            {
                'category': 'ASO Preparation',
                'items': [
                    {'task': 'Keyword research completed', 'status': 'pending'},
                    {'task': 'Competitor analysis done', 'status': 'pending'},
                    {'task': 'A/B test plan created for post-launch', 'status': 'pending'},
                    {'task': 'Analytics tracking configured', 'status': 'pending'}
                ]
            },
            {
                'category': 'Quality Assurance',
                'items': [
                    {'task': 'All core features tested', 'status': 'pending'},
                    {'task': 'User flows validated', 'status': 'pending'},
                    {'task': 'Performance testing completed', 'status': 'pending'},
                    {'task': 'Accessibility features tested', 'status': 'pending'},
                    {'task': 'Security audit completed', 'status': 'pending'}
                ]
            },
            {
                'category': 'Support Infrastructure',
                'items': [
                    {'task': 'Support email/system setup', 'status': 'pending'},
                    {'task': 'FAQ page created', 'status': 'pending'},
                    {'task': 'Documentation for users prepared', 'status': 'pending'},
                    {'task': 'Team trained on handling reviews', 'status': 'pending'}
                ]
            }
        ]

    def _generate_launch_timeline(self, launch_date: str) -> List[Dict[str, Any]]:
        """Generate timeline with milestones leading to launch."""
        launch_dt = datetime.strptime(launch_date, '%Y-%m-%d')

        milestones = [
            {
                'date': (launch_dt - timedelta(days=90)).strftime('%Y-%m-%d'),
                'milestone': '90 days before: Complete keyword research and competitor analysis'
            },
            {
                'date': (launch_dt - timedelta(days=60)).strftime('%Y-%m-%d'),
                'milestone': '60 days before: Finalize metadata and visual assets'
            },
            {
                'date': (launch_dt - timedelta(days=45)).strftime('%Y-%m-%d'),
                'milestone': '45 days before: Begin beta testing program'
            },
            {
                'date': (launch_dt - timedelta(days=30)).strftime('%Y-%m-%d'),
                'milestone': '30 days before: Submit app for review (Apple review typically takes 24-48 hours; Google review times vary from hours to several days)'
            },
            {
                'date': (launch_dt - timedelta(days=14)).strftime('%Y-%m-%d'),
                'milestone': '14 days before: Prepare launch marketing materials'
            },
            {
                'date': (launch_dt - timedelta(days=7)).strftime('%Y-%m-%d'),
                'milestone': '7 days before: Set up analytics and monitoring'
            },
            {
                'date': launch_dt.strftime('%Y-%m-%d'),
                'milestone': 'Launch Day: Release app and execute marketing plan'
            },
            {
                'date': (launch_dt + timedelta(days=7)).strftime('%Y-%m-%d'),
                'milestone': '7 days after: Monitor metrics, respond to reviews, address critical issues'
            },
            {
                'date': (launch_dt + timedelta(days=30)).strftime('%Y-%m-%d'),
                'milestone': '30 days after: Analyze launch metrics, plan first update'
            }
        ]

        return milestones

    def _calculate_checklist_summary(self, checklists: Dict[str, List[Dict[str, Any]]]) -> Dict[str, Any]:
        """Calculate completion summary."""
        total_items = 0
        completed_items = 0

        for categories in checklists.values():
            for category in categories:
                for item in category['items']:
                    total_items += 1
                    if item['status'] == 'completed':
                        completed_items += 1

        completion_percentage = (completed_items / total_items * 100) if total_items > 0 else 0

        return {
            'total_items': total_items,
            'completed_items': completed_items,
            'pending_items': total_items - completed_items,
            'completion_percentage': round(completion_percentage, 1),
            'is_ready_to_launch': completion_percentage == 100
        }

    def _validate_apple_compliance(
        self,
        app_data: Dict[str, Any],
        validation_results: Dict[str, Any]
    ) -> None:
        """Validate Apple App Store compliance."""
        # Check for required fields
        if not app_data.get('privacy_policy_url'):
            validation_results['errors'].append("Privacy Policy URL is required")

        if not app_data.get('app_icon'):
            validation_results['errors'].append("App icon (1024x1024px) is required")

        # Check metadata character limits
        title = app_data.get('title', '')
        if len(title) > 30:
            validation_results['errors'].append(f"Title exceeds 30 characters ({len(title)})")

        # Warnings for best practices
        subtitle = app_data.get('subtitle', '')
        if not subtitle:
            validation_results['warnings'].append("Subtitle is empty - consider adding for better discoverability")

        keywords = app_data.get('keywords', '')
        if len(keywords) < 80:
            validation_results['warnings'].append(
                f"Keywords field underutilized ({len(keywords)}/100 chars) - add more keywords"
            )

    def _validate_google_compliance(
        self,
        app_data: Dict[str, Any],
        validation_results: Dict[str, Any]
    ) -> None:
        """Validate Google Play Store compliance."""
        # Check for required fields
        if not app_data.get('privacy_policy_url'):
            validation_results['errors'].append("Privacy Policy URL is required")

        if not app_data.get('feature_graphic'):
            validation_results['errors'].append("Feature graphic (1024x500px) is required")

        # Check metadata character limits
        title = app_data.get('title', '')
        if len(title) > 50:
            validation_results['errors'].append(f"Title exceeds 50 characters ({len(title)})")

        short_desc = app_data.get('short_description', '')
        if len(short_desc) > 80:
            validation_results['errors'].append(f"Short description exceeds 80 characters ({len(short_desc)})")

        # Warnings
        if not short_desc:
            validation_results['warnings'].append("Short description is empty")

    def _calculate_next_versions(
        self,
        current_version: str,
        update_frequency: str,
        feature_count: int
    ) -> List[str]:
        """Calculate next version numbers."""
        # Parse current version (semantic versioning; missing parts default to 0)
        parts = (current_version.split('.') + ['0', '0'])[:3]
        major, minor, patch = (int(p) for p in parts)

        versions = []
        for i in range(feature_count):
            if update_frequency in ('weekly', 'biweekly'):
                patch += 1
            else:  # monthly or quarterly
                minor += 1
                patch = 0

            versions.append(f"{major}.{minor}.{patch}")

        return versions

    def _distribute_features(
        self,
        features: List[str],
        versions: List[str]
    ) -> List[Dict[str, Any]]:
        """Distribute features across versions."""
        features_per_version = max(1, len(features) // len(versions))

        schedule = []
        for i, version in enumerate(versions):
            start_idx = i * features_per_version
            end_idx = start_idx + features_per_version if i < len(versions) - 1 else len(features)

            schedule.append({
                'version': version,
                'features': features[start_idx:end_idx],
                'release_priority': 'high' if i == 0 else ('medium' if i < len(versions) // 2 else 'low')
            })

        return schedule

    def _generate_whats_new_template(self, version_data: Dict[str, Any]) -> Dict[str, str]:
        """Generate What's New template for version."""
        features_list = '\n'.join([f"• {feature}" for feature in version_data['features']])

        template = f"""Version {version_data['version']}

{features_list}

We're constantly improving your experience. Thanks for using [App Name]!

Have feedback? Contact us at support@[company].com"""

        return {
            'version': version_data['version'],
            'template': template
        }

    def _generate_update_recommendations(self, update_frequency: str) -> List[str]:
        """Generate recommendations for update strategy."""
        recommendations = []

        if update_frequency == 'weekly':
            recommendations.append("Weekly updates signal active development, but make sure quality doesn't suffer")
        elif update_frequency == 'monthly':
            recommendations.append("Monthly updates are optimal for most apps - balance features and stability")

        recommendations.extend([
            "Include bug fixes in every update",
            "Update 'What's New' section with each release",
            "Respond to reviews mentioning fixed issues"
        ])

        return recommendations

    def _recommend_day_of_week(self, app_category: str) -> Dict[str, Any]:
        """Recommend best day of week to launch."""
        # General recommendations based on category
        if app_category.lower() in ['games', 'entertainment']:
            return {
                'recommended_day': 'Thursday',
                'rationale': 'People download entertainment apps before the weekend'
            }
        elif app_category.lower() in ['productivity', 'business']:
            return {
                'recommended_day': 'Tuesday',
                'rationale': 'Business users are most active mid-week'
            }
        else:
            return {
                'recommended_day': 'Wednesday',
                'rationale': 'Mid-week launches balance visibility with time to gather early reviews'
            }

    def _recommend_seasonal_timing(self, app_category: str, current_date: str) -> Dict[str, Any]:
        """Recommend seasonal timing considerations."""
        current_dt = datetime.strptime(current_date, '%Y-%m-%d')
        month = current_dt.month

        # Avoid certain periods
        avoid_periods = []
        if month == 12:
            avoid_periods.append("Late December - low user engagement during holidays")
        if month in [7, 8]:
            avoid_periods.append("Summer months - some categories see lower engagement")

        # Recommend periods
        good_periods = []
        if month in [1, 9]:
            good_periods.append("New Year/Back-to-school - high user engagement")
        if month in [10, 11]:
            good_periods.append("Pre-holiday season - good for shopping/gift apps")

        return {
            'current_month': month,
            'avoid_periods': avoid_periods,
            'good_periods': good_periods
        }

    def _analyze_competitive_timing(self, app_category: str) -> Dict[str, str]:
        """Analyze competitive timing considerations."""
        return {
            'recommendation': 'Research competitor launch schedules in your category',
            'strategy': 'Avoid launching same week as major competitor updates'
        }

    def _calculate_optimal_dates(
        self,
        current_date: str,
        day_rec: Dict[str, Any],
        seasonal_rec: Dict[str, Any]
    ) -> List[str]:
        """Calculate optimal launch dates."""
        current_dt = datetime.strptime(current_date, '%Y-%m-%d')

        # Find next occurrence of recommended day
        target_day = day_rec['recommended_day']
        days_map = {'Monday': 0, 'Tuesday': 1, 'Wednesday': 2, 'Thursday': 3, 'Friday': 4}
        target_day_num = days_map.get(target_day, 2)

        days_ahead = (target_day_num - current_dt.weekday()) % 7
        if days_ahead == 0:
            days_ahead = 7

        next_target_date = current_dt + timedelta(days=days_ahead)

        optimal_dates = [
            next_target_date.strftime('%Y-%m-%d'),
            (next_target_date + timedelta(days=7)).strftime('%Y-%m-%d'),
            (next_target_date + timedelta(days=14)).strftime('%Y-%m-%d')
        ]

        return optimal_dates

    def _generate_timing_recommendation(
        self,
        optimal_dates: List[str],
        seasonal_rec: Dict[str, Any]
    ) -> str:
        """Generate final timing recommendation."""
        if seasonal_rec['avoid_periods']:
            return f"Consider launching on {optimal_dates[1]} to avoid {seasonal_rec['avoid_periods'][0]}"
        elif seasonal_rec['good_periods']:
            return f"Launch on {optimal_dates[0]} to capitalize on {seasonal_rec['good_periods'][0]}"
        else:
            return f"Recommended launch date: {optimal_dates[0]}"

    def _identify_seasonal_opportunities(
        self,
        app_category: str,
        current_month: int
    ) -> List[Dict[str, Any]]:
        """Identify seasonal opportunities for category."""
        opportunities = []

        # Universal opportunities
        if current_month == 1:
            opportunities.append({
                'event': 'New Year Resolutions',
                'dates': 'January 1-31',
                'relevance': 'high' if app_category.lower() in ['health', 'fitness', 'productivity'] else 'medium'
            })

        if current_month in [11, 12]:
            opportunities.append({
                'event': 'Holiday Shopping Season',
                'dates': 'November-December',
                'relevance': 'high' if app_category.lower() in ['shopping', 'gifts'] else 'low'
            })

        # Category-specific
        if app_category.lower() == 'education' and current_month in [8, 9]:
            opportunities.append({
                'event': 'Back to School',
                'dates': 'August-September',
                'relevance': 'high'
            })

        return opportunities

    def _generate_seasonal_campaign(self, opportunity: Dict[str, Any]) -> Dict[str, Any]:
        """Generate campaign idea for seasonal opportunity."""
        return {
            'event': opportunity['event'],
            'campaign_idea': f"Create themed visuals and messaging for {opportunity['event']}",
            'metadata_updates': 'Update app description and screenshots with seasonal themes',
            'promotion_strategy': 'Consider limited-time features or discounts'
        }

    def _create_seasonal_timeline(self, campaigns: List[Dict[str, Any]]) -> List[str]:
        """Create implementation timeline for campaigns."""
        return [
            f"30 days before: Plan {campaign['event']} campaign strategy"
            for campaign in campaigns
        ]


def generate_launch_checklist(
    platform: str,
    app_info: Dict[str, Any],
    launch_date: Optional[str] = None
) -> Dict[str, Any]:
    """
    Convenience function to generate launch checklist.

    Args:
        platform: Platform ('apple', 'google', or 'both')
        app_info: App information
        launch_date: Target launch date

    Returns:
        Complete launch checklist
    """
    generator = LaunchChecklistGenerator(platform)
    return generator.generate_prelaunch_checklist(app_info, launch_date)

```
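
The milestone offsets in the launch timeline above reduce to simple date arithmetic with `datetime` and `timedelta`. A minimal standalone sketch — the helper name and the offset tuple here are illustrative, not part of the toolkit:

```python
from datetime import datetime, timedelta

# Illustrative offsets (days relative to launch) mirroring the timeline above.
MILESTONE_OFFSETS = (-90, -60, -45, -30, -14, -7, 0, 7, 30)

def milestone_dates(launch_date: str, offsets=MILESTONE_OFFSETS) -> list:
    """Return ISO dates offset from the given launch date."""
    launch_dt = datetime.strptime(launch_date, '%Y-%m-%d')
    return [(launch_dt + timedelta(days=d)).strftime('%Y-%m-%d') for d in offsets]

dates = milestone_dates('2026-06-01')
print(dates[0])   # 90 days before launch
print(dates[6])   # launch day
```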

### scripts/localization_helper.py

```python
"""
Localization helper module for App Store Optimization.
Manages multi-language ASO optimization strategies.
"""

from typing import Dict, List, Any, Optional, Tuple


class LocalizationHelper:
    """Helps manage multi-language ASO optimization."""

    # Priority markets by language (based on app store revenue and user base)
    PRIORITY_MARKETS = {
        'tier_1': [
            {'language': 'en-US', 'market': 'United States', 'revenue_share': 0.25},
            {'language': 'zh-CN', 'market': 'China', 'revenue_share': 0.20},
            {'language': 'ja-JP', 'market': 'Japan', 'revenue_share': 0.10},
            {'language': 'de-DE', 'market': 'Germany', 'revenue_share': 0.08},
            {'language': 'en-GB', 'market': 'United Kingdom', 'revenue_share': 0.06}
        ],
        'tier_2': [
            {'language': 'fr-FR', 'market': 'France', 'revenue_share': 0.05},
            {'language': 'ko-KR', 'market': 'South Korea', 'revenue_share': 0.05},
            {'language': 'es-ES', 'market': 'Spain', 'revenue_share': 0.03},
            {'language': 'it-IT', 'market': 'Italy', 'revenue_share': 0.03},
            {'language': 'pt-BR', 'market': 'Brazil', 'revenue_share': 0.03}
        ],
        'tier_3': [
            {'language': 'ru-RU', 'market': 'Russia', 'revenue_share': 0.02},
            {'language': 'es-MX', 'market': 'Mexico', 'revenue_share': 0.02},
            {'language': 'nl-NL', 'market': 'Netherlands', 'revenue_share': 0.02},
            {'language': 'sv-SE', 'market': 'Sweden', 'revenue_share': 0.01},
            {'language': 'pl-PL', 'market': 'Poland', 'revenue_share': 0.01}
        ]
    }

    # Character limit multipliers by language (some languages need more/less space)
    CHAR_MULTIPLIERS = {
        'en': 1.0,
        'zh': 0.6,  # Chinese characters are more compact
        'ja': 0.7,  # Japanese uses kanji
        'ko': 0.8,  # Korean is relatively compact
        'de': 1.3,  # German words are typically longer
        'fr': 1.2,  # French tends to be longer
        'es': 1.1,  # Spanish slightly longer
        'pt': 1.1,  # Portuguese similar to Spanish
        'ru': 1.1,  # Russian similar length
        'ar': 1.0,  # Arabic varies
        'it': 1.1   # Italian similar to Spanish
    }
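
    # Worked example (illustrative, not part of the toolkit): a 28-character
    # English title estimated for German is 28 * 1.3 ≈ 36 characters, which
    # exceeds Apple's 30-character title limit, so the German title must be
    # condensed rather than translated one-to-one.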

    def __init__(self, app_category: str = 'general'):
        """
        Initialize localization helper.

        Args:
            app_category: App category to prioritize relevant markets
        """
        self.app_category = app_category
        self.localization_plans = []

    def identify_target_markets(
        self,
        current_market: str = 'en-US',
        budget_level: str = 'medium',
        target_market_count: int = 5
    ) -> Dict[str, Any]:
        """
        Recommend priority markets for localization.

        Args:
            current_market: Current/primary market
            budget_level: 'low', 'medium', or 'high'
            target_market_count: Number of markets to target

        Returns:
            Prioritized market recommendations
        """
        # Determine tier priorities based on budget
        if budget_level == 'low':
            priority_tiers = ['tier_1']
            max_markets = min(target_market_count, 3)
        elif budget_level == 'medium':
            priority_tiers = ['tier_1', 'tier_2']
            max_markets = min(target_market_count, 8)
        else:  # high budget
            priority_tiers = ['tier_1', 'tier_2', 'tier_3']
            max_markets = target_market_count

        # Collect markets from priority tiers
        recommended_markets = []
        for tier in priority_tiers:
            for market in self.PRIORITY_MARKETS[tier]:
                if market['language'] != current_market:
                    recommended_markets.append({
                        **market,
                        'tier': tier,
                        'estimated_translation_cost': self._estimate_translation_cost(
                            market['language']
                        )
                    })

        # Sort by revenue share and limit
        recommended_markets.sort(key=lambda x: x['revenue_share'], reverse=True)
        recommended_markets = recommended_markets[:max_markets]

        # Calculate potential ROI
        total_potential_revenue_share = sum(m['revenue_share'] for m in recommended_markets)

        return {
            'recommended_markets': recommended_markets,
            'total_markets': len(recommended_markets),
            'estimated_total_revenue_lift': f"{total_potential_revenue_share*100:.1f}%",
            'estimated_cost': self._estimate_total_localization_cost(recommended_markets),
            'implementation_priority': self._prioritize_implementation(recommended_markets)
        }

    def translate_metadata(
        self,
        source_metadata: Dict[str, str],
        source_language: str,
        target_language: str,
        platform: str = 'apple'
    ) -> Dict[str, Any]:
        """
        Generate localized metadata with character limit considerations.

        Args:
            source_metadata: Original metadata (title, description, etc.)
            source_language: Source language code (e.g., 'en')
            target_language: Target language code (e.g., 'es')
            platform: 'apple' or 'google'

        Returns:
            Localized metadata with character limit validation
        """
        # Get character multiplier
        target_lang_code = target_language.split('-')[0]
        char_multiplier = self.CHAR_MULTIPLIERS.get(target_lang_code, 1.0)

        # Platform-specific limits
        if platform == 'apple':
            limits = {'title': 30, 'subtitle': 30, 'description': 4000, 'keywords': 100}
        else:
            limits = {'title': 50, 'short_description': 80, 'description': 4000}

        localized_metadata = {}
        warnings = []

        for field, text in source_metadata.items():
            if field not in limits:
                continue

            # Estimate target length
            estimated_length = int(len(text) * char_multiplier)
            limit = limits[field]

            localized_metadata[field] = {
                'original_text': text,
                'original_length': len(text),
                'estimated_target_length': estimated_length,
                'character_limit': limit,
                'fits_within_limit': estimated_length <= limit,
                'translation_notes': self._get_translation_notes(
                    field,
                    target_language,
                    estimated_length,
                    limit
                )
            }

            if estimated_length > limit:
                warnings.append(
                    f"{field}: Estimated length ({estimated_length}) may exceed limit ({limit}) - "
                    f"condensing may be required"
                )

        return {
            'source_language': source_language,
            'target_language': target_language,
            'platform': platform,
            'localized_fields': localized_metadata,
            'character_multiplier': char_multiplier,
            'warnings': warnings,
            'recommendations': self._generate_translation_recommendations(
                target_language,
                warnings
            )
        }

    def adapt_keywords(
        self,
        source_keywords: List[str],
        source_language: str,
        target_language: str,
        target_market: str
    ) -> Dict[str, Any]:
        """
        Adapt keywords for target market (not just direct translation).

        Args:
            source_keywords: Original keywords
            source_language: Source language code
            target_language: Target language code
            target_market: Target market (e.g., 'France', 'Japan')

        Returns:
            Adapted keyword recommendations
        """
        # Cultural adaptation considerations
        cultural_notes = self._get_cultural_keyword_considerations(target_market)

        # Search behavior differences
        search_patterns = self._get_search_patterns(target_market)

        adapted_keywords = []
        for keyword in source_keywords:
            adapted_keywords.append({
                'source_keyword': keyword,
                'adaptation_strategy': self._determine_adaptation_strategy(
                    keyword,
                    target_market
                ),
                'cultural_considerations': cultural_notes,
                'priority': 'high' if keyword in source_keywords[:3] else 'medium'
            })

        return {
            'source_language': source_language,
            'target_language': target_language,
            'target_market': target_market,
            'adapted_keywords': adapted_keywords,
            'search_behavior_notes': search_patterns,
            'recommendations': [
                'Use native speakers for keyword research',
                'Test keywords with local users before finalizing',
                'Consider local competitors\' keyword strategies',
                'Monitor search trends in target market'
            ]
        }

    def validate_translations(
        self,
        translated_metadata: Dict[str, str],
        target_language: str,
        platform: str = 'apple'
    ) -> Dict[str, Any]:
        """
        Validate translated metadata for character limits and quality.

        Args:
            translated_metadata: Translated text fields
            target_language: Target language code
            platform: 'apple' or 'google'

        Returns:
            Validation report
        """
        # Platform limits
        if platform == 'apple':
            limits = {'title': 30, 'subtitle': 30, 'description': 4000, 'keywords': 100}
        else:
            limits = {'title': 50, 'short_description': 80, 'description': 4000}

        validation_results = {
            'is_valid': True,
            'field_validations': {},
            'errors': [],
            'warnings': []
        }

        for field, text in translated_metadata.items():
            if field not in limits:
                continue

            actual_length = len(text)
            limit = limits[field]
            is_within_limit = actual_length <= limit

            validation_results['field_validations'][field] = {
                'text': text,
                'length': actual_length,
                'limit': limit,
                'is_valid': is_within_limit,
                'usage_percentage': round((actual_length / limit) * 100, 1)
            }

            if not is_within_limit:
                validation_results['is_valid'] = False
                validation_results['errors'].append(
                    f"{field} exceeds limit: {actual_length}/{limit} characters"
                )

        # Quality checks
        quality_issues = self._check_translation_quality(
            translated_metadata,
            target_language
        )

        validation_results['quality_checks'] = quality_issues

        if quality_issues:
            validation_results['warnings'].extend(
                [f"Quality issue: {issue}" for issue in quality_issues]
            )

        return validation_results

    def calculate_localization_roi(
        self,
        target_markets: List[str],
        current_monthly_downloads: int,
        localization_cost: float,
        expected_lift_percentage: float = 0.15
    ) -> Dict[str, Any]:
        """
        Estimate ROI of localization investment.

        Args:
            target_markets: List of market codes
            current_monthly_downloads: Current monthly downloads
            localization_cost: Total cost to localize
            expected_lift_percentage: Expected download increase (default 15%)

        Returns:
            ROI analysis
        """
        # Estimate market-specific lift
        market_data = []
        total_expected_lift = 0

        for market_code in target_markets:
            # Find market in priority lists
            market_info = None
            for markets in self.PRIORITY_MARKETS.values():
                for m in markets:
                    if m['language'] == market_code:
                        market_info = m
                        break
                if market_info:
                    break

            if not market_info:
                continue

            # Estimate downloads from this market
            market_downloads = int(current_monthly_downloads * market_info['revenue_share'])
            expected_increase = int(market_downloads * expected_lift_percentage)
            total_expected_lift += expected_increase

            market_data.append({
                'market': market_info['market'],
                'current_monthly_downloads': market_downloads,
                'expected_increase': expected_increase,
                'revenue_potential': market_info['revenue_share']
            })

        # Calculate payback period (assuming $2 revenue per download)
        revenue_per_download = 2.0
        monthly_additional_revenue = total_expected_lift * revenue_per_download
        payback_months = (localization_cost / monthly_additional_revenue) if monthly_additional_revenue > 0 else float('inf')

        return {
            'markets_analyzed': len(market_data),
            'market_breakdown': market_data,
            'total_expected_monthly_lift': total_expected_lift,
            'expected_monthly_revenue_increase': f"${monthly_additional_revenue:,.2f}",
            'localization_cost': f"${localization_cost:,.2f}",
            'payback_period_months': round(payback_months, 1) if payback_months != float('inf') else 'N/A',
            'annual_roi': f"{((monthly_additional_revenue * 12 - localization_cost) / localization_cost * 100):.1f}%" if payback_months != float('inf') else 'Negative',
            'recommendation': self._generate_roi_recommendation(payback_months)
        }
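
    # Worked example (illustrative): 10,000 monthly downloads, target markets
    # covering 40% of revenue share, a 15% lift, and a $5,000 localization cost
    # give 10,000 * 0.40 * 0.15 = 600 extra downloads/month, about $1,200/month
    # at the assumed $2 per download, so payback ≈ $5,000 / $1,200 ≈ 4.2 months.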

    def _estimate_translation_cost(self, language: str) -> Dict[str, float]:
        """Estimate translation cost for a language."""
        # Base cost per word (professional translation)
        base_cost_per_word = 0.12

        # Language-specific multipliers
        multipliers = {
            'zh-CN': 1.5,  # Chinese requires specialist
            'ja-JP': 1.5,  # Japanese requires specialist
            'ko-KR': 1.3,
            'ar-SA': 1.4,  # Arabic (right-to-left)
            'default': 1.0
        }

        multiplier = multipliers.get(language, multipliers['default'])

        # Typical word counts for app store metadata
        typical_word_counts = {
            'title': 5,
            'subtitle': 5,
            'description': 300,
            'keywords': 20,
            'screenshots': 50  # Caption text
        }

        total_words = sum(typical_word_counts.values())
        estimated_cost = total_words * base_cost_per_word * multiplier

        return {
            'cost_per_word': base_cost_per_word * multiplier,
            'total_words': total_words,
            'estimated_cost': round(estimated_cost, 2)
        }

    def _estimate_total_localization_cost(self, markets: List[Dict[str, Any]]) -> str:
        """Estimate total cost for multiple markets."""
        total = sum(m['estimated_translation_cost']['estimated_cost'] for m in markets)
        return f"${total:,.2f}"

    def _prioritize_implementation(self, markets: List[Dict[str, Any]]) -> List[Dict[str, str]]:
        """Create phased implementation plan."""
        phases = []

        # Phase 1: Top revenue markets
        phase_1 = markets[:3]
        if phase_1:
            phases.append({
                'phase': 'Phase 1 (First 30 days)',
                'markets': ', '.join([m['market'] for m in phase_1]),
                'rationale': 'Highest revenue potential markets'
            })

        # Phase 2: Remaining tier 1 and top tier 2
        phase_2 = markets[3:6]
        if phase_2:
            phases.append({
                'phase': 'Phase 2 (Days 31-60)',
                'markets': ', '.join([m['market'] for m in phase_2]),
                'rationale': 'Strong revenue markets with good ROI'
            })

        # Phase 3: Remaining markets
        phase_3 = markets[6:]
        if phase_3:
            phases.append({
                'phase': 'Phase 3 (Days 61-90)',
                'markets': ', '.join([m['market'] for m in phase_3]),
                'rationale': 'Complete global coverage'
            })

        return phases

    def _get_translation_notes(
        self,
        field: str,
        target_language: str,
        estimated_length: int,
        limit: int
    ) -> List[str]:
        """Get translation-specific notes for field."""
        notes = []

        if estimated_length > limit:
            notes.append(f"Condensing required - aim for {limit - 10} characters to leave a buffer")

        if field == 'title' and target_language.startswith('zh'):
            notes.append("Chinese characters convey more meaning - may need fewer characters")

        if field == 'keywords' and target_language.startswith('de'):
            notes.append("German compound words may be longer - prioritize shorter keywords")

        return notes

    def _generate_translation_recommendations(
        self,
        target_language: str,
        warnings: List[str]
    ) -> List[str]:
        """Generate translation recommendations."""
        recommendations = [
            "Use professional native speakers for translation",
            "Test translations with local users before finalizing"
        ]

        if warnings:
            recommendations.append("Work with translator to condense text while preserving meaning")

        if target_language.startswith('zh') or target_language.startswith('ja'):
            recommendations.append("Consider cultural context and local idioms")

        return recommendations

    def _get_cultural_keyword_considerations(self, target_market: str) -> List[str]:
        """Get cultural considerations for keywords in a given market."""
        # Simplified example - real implementation would be more comprehensive
        considerations = {
            'China': ['Avoid politically sensitive terms', 'Consider local alternatives to blocked services'],
            'Japan': ['Honorific language important', 'Technical terms often use katakana'],
            'Germany': ['Privacy and security terms resonate', 'Efficiency and quality valued'],
            'France': ['French language protection laws', 'Prefer French terms over English'],
            'default': ['Research local search behavior', 'Test with native speakers']
        }

        return considerations.get(target_market, considerations['default'])

    def _get_search_patterns(self, target_market: str) -> List[str]:
        """Get search pattern notes for market."""
        patterns = {
            'China': ['Use both simplified characters and romanization', 'Brand names often romanized'],
            'Japan': ['Mix of kanji, hiragana, and katakana', 'English words common in tech'],
            'Germany': ['Compound words common', 'Specific technical terminology'],
            'default': ['Research local search trends', 'Monitor competitor keywords']
        }

        return patterns.get(target_market, patterns['default'])

    def _determine_adaptation_strategy(self, keyword: str, target_market: str) -> str:
        """Determine how to adapt keyword for market."""
        # Simplified logic
        if target_market in ['China', 'Japan', 'Korea']:
            return 'full_localization'  # Complete translation needed
        elif target_market in ['Germany', 'France', 'Spain']:
            return 'adapt_and_translate'  # Some adaptation needed
        else:
            return 'direct_translation'  # Direct translation usually sufficient

    def _check_translation_quality(
        self,
        translated_metadata: Dict[str, str],
        target_language: str
    ) -> List[str]:
        """Basic quality checks for translations."""
        issues = []

        # Check for untranslated placeholders
        for field, text in translated_metadata.items():
            if '[' in text or '{' in text or 'TODO' in text.upper():
                issues.append(f"{field} contains placeholder text")

        # Check for excessive punctuation
        for field, text in translated_metadata.items():
            if text.count('!') > 3:
                issues.append(f"{field} has excessive exclamation marks")

        return issues

    def _generate_roi_recommendation(self, payback_months: float) -> str:
        """Generate ROI recommendation."""
        if payback_months <= 3:
            return "Excellent ROI - proceed immediately"
        elif payback_months <= 6:
            return "Good ROI - recommended investment"
        elif payback_months <= 12:
            return "Moderate ROI - consider if strategic market"
        else:
            return "Low ROI - reconsider or focus on higher-priority markets first"


def plan_localization_strategy(
    current_market: str,
    budget_level: str,
    monthly_downloads: int
) -> Dict[str, Any]:
    """
    Convenience function to plan localization strategy.

    Args:
        current_market: Current market code
        budget_level: Budget level
        monthly_downloads: Current monthly downloads

    Returns:
        Complete localization plan
    """
    helper = LocalizationHelper()

    target_markets = helper.identify_target_markets(
        current_market=current_market,
        budget_level=budget_level
    )

    # Collect the language codes of the recommended markets
    market_codes = [m['language'] for m in target_markets['recommended_markets']]

    # Parse the estimated cost string (e.g. "$1,500") into a float for the ROI calculation
    estimated_cost = float(target_markets['estimated_cost'].replace('$', '').replace(',', ''))

    roi_analysis = helper.calculate_localization_roi(
        market_codes,
        monthly_downloads,
        estimated_cost
    )

    return {
        'target_markets': target_markets,
        'roi_analysis': roi_analysis
    }

```
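
The `_check_translation_quality` helper above can also be exercised as a standalone function. This sketch merges its two passes into a single loop over the metadata fields; the sample metadata is illustrative:

```python
from typing import Dict, List

def check_translation_quality(translated_metadata: Dict[str, str]) -> List[str]:
    """Flag likely problems in translated metadata fields."""
    issues = []
    for field, text in translated_metadata.items():
        # Untranslated placeholders such as "[APP_NAME]", "{name}", or "TODO"
        if '[' in text or '{' in text or 'TODO' in text.upper():
            issues.append(f"{field} contains placeholder text")
        # More than three exclamation marks reads as spammy in most locales
        if text.count('!') > 3:
            issues.append(f"{field} has excessive exclamation marks")
    return issues

metadata = {
    'title': 'FitFlow - Entrenamientos en casa',
    'description': '[TODO: translate] Great workouts!!!!',
}
print(check_translation_quality(metadata))
# → ['description contains placeholder text', 'description has excessive exclamation marks']
```

Note that merging the passes interleaves issue types per field, whereas the class method reports all placeholder issues first; the set of issues found is the same.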

### assets/aso-audit-template.md

```markdown
# ASO Audit Template

Use this template to conduct a systematic App Store Optimization audit.

---

## App Information

| Field | Value |
|-------|-------|
| App Name | |
| Platform | [ ] iOS [ ] Android |
| Category | |
| Current Downloads | |
| Current Rating | |
| Audit Date | |

---

## Metadata Audit

### Title Analysis

| Criterion | iOS (30 chars) | Android (30 chars) |
|-----------|----------------|---------------------|
| Current Title | | |
| Character Count | /30 | /30 |
| Primary Keyword Present | [ ] Yes [ ] No | [ ] Yes [ ] No |
| Brand Name Position | | |

**Title Score:** ___/10

**Recommendations:**
- [ ]
- [ ]

### Subtitle / Short Description

| Criterion | iOS Subtitle (30 chars) | Android Short Desc (80 chars) |
|-----------|-------------------------|-------------------------------|
| Current Text | | |
| Character Count | /30 | /80 |
| Keywords Included | | |
| Benefit-Focused | [ ] Yes [ ] No | [ ] Yes [ ] No |

**Score:** ___/10

**Recommendations:**
- [ ]
- [ ]

### Keyword Field (iOS Only)

| Criterion | Status |
|-----------|--------|
| Current Keywords | |
| Character Count | /100 |
| Duplicates Present | [ ] Yes [ ] No |
| Plurals Included | [ ] Yes [ ] No |
| Brand Names Included | [ ] Yes [ ] No |

**Score:** ___/10

**Recommendations:**
- [ ]
- [ ]

### Full Description

| Criterion | iOS | Android |
|-----------|-----|---------|
| Character Count | /4000 | /4000 |
| Primary Keyword Density | % | % |
| Secondary Keywords (count) | | |
| Feature Bullets Present | [ ] Yes [ ] No | [ ] Yes [ ] No |
| Social Proof Included | [ ] Yes [ ] No | [ ] Yes [ ] No |
| CTA Present | [ ] Yes [ ] No | [ ] Yes [ ] No |

**Score:** ___/10

**Recommendations:**
- [ ]
- [ ]

---

## Visual Asset Audit

### App Icon

| Criterion | Status |
|-----------|--------|
| Recognizable at 60x60px | [ ] Yes [ ] No |
| Distinct from competitors | [ ] Yes [ ] No |
| Matches app design | [ ] Yes [ ] No |
| No text/words | [ ] Yes [ ] No |

**Score:** ___/10

**Recommendations:**
- [ ]
- [ ]

### Screenshots

| Screenshot | Caption | Feature Shown | Score |
|------------|---------|---------------|-------|
| 1 (Hero) | | | /10 |
| 2 | | | /10 |
| 3 | | | /10 |
| 4 | | | /10 |
| 5 | | | /10 |

| Criterion | Status |
|-----------|--------|
| Total Screenshots | /10 (iOS) or /8 (Android) |
| Captions Present | [ ] Yes [ ] No |
| Consistent Style | [ ] Yes [ ] No |
| First 3 Show Value | [ ] Yes [ ] No |
| Device Frames Used | [ ] Yes [ ] No |

**Overall Screenshot Score:** ___/10

**Recommendations:**
- [ ]
- [ ]

### App Preview Video

| Criterion | Status |
|-----------|--------|
| Video Present | [ ] Yes [ ] No |
| Duration | seconds |
| Shows Core Features | [ ] Yes [ ] No |
| Hook in First 5 Seconds | [ ] Yes [ ] No |
| CTA at End | [ ] Yes [ ] No |

**Score:** ___/10

---

## Keyword Performance Audit

### Current Keyword Rankings

| Keyword | Current Rank | Volume | Competition | Score |
|---------|--------------|--------|-------------|-------|
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |

### Keyword Opportunities

| Keyword | Current Rank | Potential | Action |
|---------|--------------|-----------|--------|
| | | | |
| | | | |
| | | | |

---

## Rating & Review Audit

### Rating Summary

| Metric | Value |
|--------|-------|
| Current Average Rating | /5.0 |
| Total Ratings | |
| Ratings (Last 30 Days) | |
| 5-Star Percentage | % |
| 1-Star Percentage | % |

### Review Analysis

| Category | Count | Common Themes |
|----------|-------|---------------|
| Positive (4-5 stars) | | |
| Neutral (3 stars) | | |
| Negative (1-2 stars) | | |

### Response Rate

| Metric | Value |
|--------|-------|
| Reviews Responded | % |
| Avg Response Time | hours |

**Rating Score:** ___/10

**Recommendations:**
- [ ]
- [ ]

---

## Competitor Comparison

### Top 3 Competitors

| Metric | Your App | Competitor 1 | Competitor 2 | Competitor 3 |
|--------|----------|--------------|--------------|--------------|
| Name | | | | |
| Rating | | | | |
| Total Ratings | | | | |
| Downloads | | | | |
| Title Keywords | | | | |
| Screenshot Count | | | | |

### Competitive Gaps

| Gap Identified | Opportunity |
|----------------|-------------|
| | |
| | |
| | |

---

## Overall ASO Score

| Category | Weight | Score | Weighted |
|----------|--------|-------|----------|
| Title/Metadata | 25% | /10 | |
| Keywords | 25% | /10 | |
| Visual Assets | 25% | /10 | |
| Ratings/Reviews | 25% | /10 | |
| **TOTAL** | 100% | | **/100** |

---

## Priority Action Items

### High Priority (This Week)

1. [ ]
2. [ ]
3. [ ]

### Medium Priority (This Month)

1. [ ]
2. [ ]
3. [ ]

### Low Priority (This Quarter)

1. [ ]
2. [ ]
3. [ ]

---

## Audit Sign-Off

| Role | Name | Date |
|------|------|------|
| Auditor | | |
| Reviewer | | |
| App Owner | | |

---

## Notes

_Additional observations and context:_

```



---

## Skill Companion Files

> Additional files collected from the skill directory layout.

### README.md

```markdown
# App Store Optimization (ASO) Skill

**Version**: 1.0.0
**Last Updated**: November 7, 2025
**Author**: Claude Skills Factory

## Overview

A comprehensive App Store Optimization (ASO) skill for researching, optimizing, and tracking mobile app performance on the Apple App Store and Google Play Store. It helps app developers and marketers maximize their app's visibility, downloads, and success in competitive app marketplaces.

## What This Skill Does

This skill provides end-to-end ASO capabilities across seven key areas:

1. **Research & Analysis**: Keyword research, competitor analysis, market trends, review sentiment
2. **Metadata Optimization**: Title, description, keywords with platform-specific character limits
3. **Conversion Optimization**: A/B testing framework, visual asset optimization
4. **Rating & Review Management**: Sentiment analysis, response strategies, issue identification
5. **Launch & Update Strategies**: Pre-launch checklists, timing optimization, update planning
6. **Analytics & Tracking**: ASO scoring, keyword rankings, performance benchmarking
7. **Localization**: Multi-language strategy, translation management, ROI analysis

## Key Features

### Comprehensive Keyword Research
- Search volume and competition analysis
- Long-tail keyword discovery
- Competitor keyword extraction
- Keyword difficulty scoring
- Strategic prioritization

### Platform-Specific Metadata Optimization
- **Apple App Store**:
  - Title (30 chars)
  - Subtitle (30 chars)
  - Promotional Text (170 chars)
  - Description (4000 chars)
  - Keywords field (100 chars)
- **Google Play Store**:
  - Title (30 chars)
  - Short Description (80 chars)
  - Full Description (4000 chars)
- Character limit validation
- Keyword density analysis
- Multiple optimization strategies
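
The limits above can be enforced with a small validator. The field names and `platform` argument here are illustrative, not necessarily the module's exact API; note that Google Play titles have been capped at 30 characters since 2021:

```python
from typing import Dict, List

# Per-platform character limits (Google Play title has been 30 chars since 2021)
LIMITS = {
    'apple': {'title': 30, 'subtitle': 30, 'promotional_text': 170,
              'description': 4000, 'keywords': 100},
    'google': {'title': 30, 'short_description': 80, 'full_description': 4000},
}

def validate_character_limits(metadata: Dict[str, str], platform: str) -> List[str]:
    """Return a list of fields that exceed the platform's limits."""
    violations = []
    for field, text in metadata.items():
        limit = LIMITS[platform].get(field)  # unknown fields are skipped
        if limit is not None and len(text) > limit:
            violations.append(f"{field}: {len(text)}/{limit} characters")
    return violations

print(validate_character_limits(
    {'title': 'FitFlow - Home Workouts & Meal Plans Deluxe Pro',
     'short_description': 'AI plans'},
    platform='google'))
# → ['title: 47/30 characters']
```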

### Competitor Intelligence
- Automated competitor discovery
- Metadata strategy analysis
- Visual asset assessment
- Gap identification
- Competitive positioning

### ASO Health Scoring
- 0-100 overall score
- Four-category breakdown (Metadata, Ratings, Keywords, Conversion)
- Strengths and weaknesses identification
- Prioritized action recommendations
- Expected impact estimates
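
The four-category breakdown maps naturally onto a summed 0-100 score. A minimal sketch assuming each category is scored 0-25 as in the README's examples; the score bands are illustrative, not the module's actual thresholds:

```python
def overall_aso_score(category_scores: dict) -> dict:
    """Combine four 0-25 category scores into a 0-100 health score."""
    required = ('metadata', 'ratings', 'keywords', 'conversion')
    total = sum(category_scores[c] for c in required)
    # Illustrative bands; the real scorer may use different cutoffs
    if total >= 80:
        band = 'excellent'
    elif total >= 60:
        band = 'good'
    elif total >= 40:
        band = 'needs work'
    else:
        band = 'poor'
    return {'total': total, 'band': band,
            'weakest': min(required, key=category_scores.get)}

print(overall_aso_score({'metadata': 18, 'ratings': 22,
                         'keywords': 12, 'conversion': 15}))
# → {'total': 67, 'band': 'good', 'weakest': 'keywords'}
```

Surfacing the weakest category is what drives the prioritized recommendations: improving the lowest-scoring area usually yields the largest gain per unit of effort.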

### Scientific A/B Testing
- Test design and hypothesis formulation
- Sample size calculation
- Statistical significance analysis
- Duration estimation
- Implementation recommendations
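
Sample size for a conversion-rate test can be estimated with the standard two-proportion normal approximation. This is a hedged stand-in for whatever `ab_test_planner.py` implements internally (requires Python 3.8+ for `statistics.NormalDist`):

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variant(baseline_cr: float, min_detectable_lift: float,
                            alpha: float = 0.05, power: float = 0.8) -> int:
    """Visitors needed per variant to detect a relative lift in conversion
    rate (two-sided test, normal approximation for two proportions)."""
    p1 = baseline_cr
    p2 = baseline_cr * (1 + min_detectable_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # e.g. 1.96 for alpha=0.05
    z_beta = NormalDist().inv_cdf(power)            # e.g. 0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2
    return ceil(n)

# e.g. 4.2% baseline, hoping to detect a 10% relative lift:
# roughly 37-38k visitors per variant
print(sample_size_per_variant(0.042, 0.10))
```

Smaller detectable effects require dramatically more traffic, which is why the Limitations section warns that A/B testing needs significant volume.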

### Global Localization
- Market prioritization (Tier 1/2/3)
- Translation cost estimation
- Character limit adaptation by language
- Cultural keyword considerations
- ROI analysis

### Review Intelligence
- Sentiment analysis
- Common theme extraction
- Bug and issue identification
- Feature request clustering
- Professional response templates

### Launch Planning
- Platform-specific checklists
- Timeline generation
- Compliance validation
- Optimal timing recommendations
- Seasonal campaign planning

## Python Modules

This skill includes eight Python modules:

### 1. keyword_analyzer.py
**Purpose**: Analyzes keywords for search volume, competition, and relevance

**Key Functions**:
- `analyze_keyword()`: Single keyword analysis
- `compare_keywords()`: Multi-keyword comparison and ranking
- `find_long_tail_opportunities()`: Generate long-tail variations
- `calculate_keyword_density()`: Analyze keyword usage in text
- `extract_keywords_from_text()`: Extract keywords from reviews/descriptions
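
The module's source is not reproduced here, but the density calculation can be sketched as occurrences per hundred words; this is a minimal stand-in, not the module's actual implementation:

```python
import re

def keyword_density(text: str, keyword: str) -> float:
    """Occurrences of a keyword (case-insensitive) per 100 words of text."""
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0
    hits = len(re.findall(re.escape(keyword), text, flags=re.IGNORECASE))
    return round(100 * hits / len(words), 2)

desc = ("FitFlow is a fitness app with AI workout plans. "
        "The fitness app tracks nutrition and progress photos.")
print(keyword_density(desc, "fitness app"))
# → 11.76
```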

### 2. metadata_optimizer.py
**Purpose**: Optimizes titles, descriptions, keywords with character limit validation

**Key Functions**:
- `optimize_title()`: Generate optimal title options
- `optimize_description()`: Create conversion-focused descriptions
- `optimize_keyword_field()`: Maximize Apple's 100-char keyword field
- `validate_character_limits()`: Ensure platform compliance
- `calculate_keyword_density()`: Analyze keyword integration

### 3. competitor_analyzer.py
**Purpose**: Analyzes competitor ASO strategies

**Key Functions**:
- `analyze_competitor()`: Single competitor deep-dive
- `compare_competitors()`: Multi-competitor analysis
- `identify_gaps()`: Find competitive opportunities
- `_calculate_competitive_strength()`: Score competitor ASO quality

### 4. aso_scorer.py
**Purpose**: Calculates comprehensive ASO health score

**Key Functions**:
- `calculate_overall_score()`: 0-100 ASO health score
- `score_metadata_quality()`: Evaluate metadata optimization
- `score_ratings_reviews()`: Assess rating quality and volume
- `score_keyword_performance()`: Analyze ranking positions
- `score_conversion_metrics()`: Evaluate conversion rates
- `generate_recommendations()`: Prioritized improvement actions

### 5. ab_test_planner.py
**Purpose**: Plans and tracks A/B tests for ASO elements

**Key Functions**:
- `design_test()`: Create test hypothesis and structure
- `calculate_sample_size()`: Determine required visitors
- `calculate_significance()`: Assess statistical validity
- `track_test_results()`: Monitor ongoing tests
- `generate_test_report()`: Create comprehensive test reports

### 6. localization_helper.py
**Purpose**: Manages multi-language ASO optimization

**Key Functions**:
- `identify_target_markets()`: Prioritize localization markets
- `translate_metadata()`: Adapt metadata for languages
- `adapt_keywords()`: Cultural keyword adaptation
- `validate_translations()`: Character limit validation
- `calculate_localization_roi()`: Estimate investment returns

### 7. review_analyzer.py
**Purpose**: Analyzes user reviews for actionable insights

**Key Functions**:
- `analyze_sentiment()`: Calculate sentiment distribution
- `extract_common_themes()`: Identify frequent topics
- `identify_issues()`: Surface bugs and problems
- `find_feature_requests()`: Extract desired features
- `track_sentiment_trends()`: Monitor changes over time
- `generate_response_templates()`: Create review responses
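
A minimal stand-in for `analyze_sentiment`, bucketing reviews by star rating (a common proxy when no NLP model is available; the review dict shape is assumed):

```python
from collections import Counter
from typing import Dict, List

def sentiment_distribution(reviews: List[dict]) -> Dict[str, float]:
    """Bucket reviews into positive (4-5 stars), neutral (3), negative (1-2),
    returned as percentages."""
    def bucket(rating: int) -> str:
        if rating >= 4:
            return 'positive'
        if rating == 3:
            return 'neutral'
        return 'negative'

    counts = Counter(bucket(r['rating']) for r in reviews)
    total = len(reviews) or 1  # avoid division by zero on empty input
    return {b: round(100 * counts[b] / total, 1)
            for b in ('positive', 'neutral', 'negative')}

reviews = [{'rating': 5}, {'rating': 4}, {'rating': 3}, {'rating': 1}]
print(sentiment_distribution(reviews))
# → {'positive': 50.0, 'neutral': 25.0, 'negative': 25.0}
```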

### 8. launch_checklist.py
**Purpose**: Generates comprehensive launch and update checklists

**Key Functions**:
- `generate_prelaunch_checklist()`: Complete submission validation
- `validate_app_store_compliance()`: Check guidelines compliance
- `create_update_plan()`: Plan update cadence
- `optimize_launch_timing()`: Recommend launch dates
- `plan_seasonal_campaigns()`: Identify seasonal opportunities

## Installation

### For Claude Code (Desktop/CLI)

#### Project-Level Installation
```bash
# Copy skill folder to project
cp -r app-store-optimization /path/to/your/project/.claude/skills/

# Claude will auto-load the skill when working in this project
```

#### User-Level Installation (Available in All Projects)
```bash
# Copy skill folder to user-level skills
cp -r app-store-optimization ~/.claude/skills/

# Claude will load this skill in all your projects
```

### For Claude Apps (Browser)

1. Use the `skill-creator` skill to import the skill
2. Or manually import via Claude Apps interface

### Verification

To verify installation:
```bash
# Check if skill folder exists
ls ~/.claude/skills/app-store-optimization/

# You should see:
# SKILL.md
# keyword_analyzer.py
# metadata_optimizer.py
# competitor_analyzer.py
# aso_scorer.py
# ab_test_planner.py
# localization_helper.py
# review_analyzer.py
# launch_checklist.py
# sample_input.json
# expected_output.json
# HOW_TO_USE.md
# README.md
```

## Usage Examples

### Example 1: Complete Keyword Research

```
Hey Claude—I just added the "app-store-optimization" skill. Can you research keywords for my fitness app? I'm targeting people who want home workouts, yoga, and meal planning. Analyze top competitors like Nike Training Club and Peloton.
```

**What Claude will do**:
- Use `keyword_analyzer.py` to research keywords
- Use `competitor_analyzer.py` to analyze Nike Training Club and Peloton
- Provide prioritized keyword list with search volumes, competition levels
- Identify gaps and long-tail opportunities
- Recommend primary keywords for title and secondary keywords for description

### Example 2: Optimize App Store Metadata

```
Hey Claude—I just added the "app-store-optimization" skill. Optimize my app's metadata for both Apple App Store and Google Play Store:
- App: FitFlow
- Category: Health & Fitness
- Features: AI workout plans, nutrition tracking, progress photos
- Keywords: fitness app, workout planner, home fitness
```

**What Claude will do**:
- Use `metadata_optimizer.py` to create optimized titles (multiple options)
- Generate platform-specific descriptions (short and full)
- Optimize Apple's 100-character keyword field
- Validate all character limits
- Calculate keyword density
- Provide before/after comparison

### Example 3: Calculate ASO Health Score

```
Hey Claude—I just added the "app-store-optimization" skill. Calculate my app's ASO score:
- Average rating: 4.3 stars (8,200 ratings)
- Keywords in top 10: 4
- Keywords in top 50: 15
- Conversion rate: 3.8%
- Title: "FitFlow - Home Workouts"
- Description: 1,500 characters with 3 keyword mentions
```

**What Claude will do**:
- Use `aso_scorer.py` to calculate overall score (0-100)
- Break down by category (Metadata: X/25, Ratings: X/25, Keywords: X/25, Conversion: X/25)
- Identify strengths and weaknesses
- Generate prioritized recommendations
- Estimate impact of improvements

### Example 4: A/B Test Planning

```
Hey Claude—I just added the "app-store-optimization" skill. I want to A/B test my app icon. My current conversion rate is 4.2%. How many visitors do I need and how long should I run the test?
```

**What Claude will do**:
- Use `ab_test_planner.py` to design test
- Calculate required sample size (based on minimum detectable effect)
- Estimate test duration for low/medium/high traffic scenarios
- Provide test structure and success metrics
- Explain how to analyze results

### Example 5: Review Sentiment Analysis

```
Hey Claude—I just added the "app-store-optimization" skill. Analyze my last 500 reviews and tell me:
- Overall sentiment
- Most common complaints
- Top feature requests
- Bugs needing immediate fixes
```

**What Claude will do**:
- Use `review_analyzer.py` to process reviews
- Calculate sentiment distribution
- Extract common themes
- Identify and prioritize issues
- Cluster feature requests
- Generate response templates

### Example 6: Pre-Launch Checklist

```
Hey Claude—I just added the "app-store-optimization" skill. Generate a complete pre-launch checklist for both app stores. My launch date is March 15, 2026.
```

**What Claude will do**:
- Use `launch_checklist.py` to generate checklists
- Create Apple App Store checklist (metadata, assets, technical, legal)
- Create Google Play Store checklist (metadata, assets, technical, legal)
- Add universal checklist (marketing, QA, support)
- Generate timeline with milestones
- Calculate completion percentage

## Best Practices

### Keyword Research
1. Start with 20-30 seed keywords
2. Analyze top 5 competitors in your category
3. Balance high-volume and long-tail keywords
4. Prioritize relevance over search volume
5. Update keyword research quarterly
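
The long-tail side of step 3 can be bootstrapped by combining seed keywords with intent modifiers, as `find_long_tail_opportunities()` does. A simple sketch; the modifier list is illustrative:

```python
from itertools import product

def long_tail_variations(seeds, modifiers):
    """Generate long-tail candidates by prefixing and suffixing seeds."""
    variations = set()
    for seed, mod in product(seeds, modifiers):
        variations.add(f"{mod} {seed}")
        variations.add(f"{seed} {mod}")
    return sorted(variations)

seeds = ["home workout", "yoga"]
modifiers = ["free", "for beginners"]
for kw in long_tail_variations(seeds, modifiers)[:4]:
    print(kw)
```

Each candidate should then be vetted for relevance and volume before it earns a slot in the title or keyword field.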

### Metadata Optimization
1. Front-load keywords in title (first 15 characters most important)
2. Use every available character (don't waste space)
3. Write for humans first, search engines second
4. A/B test major changes before committing
5. Update descriptions with each major release

### A/B Testing
1. Test one element at a time (icon vs. screenshots vs. title)
2. Run tests to statistical significance (90%+ confidence)
3. Test high-impact elements first (icon has biggest impact)
4. Allow sufficient duration (at least 1 week, preferably 2-3)
5. Document learnings for future tests

### Localization
1. Start with top 5 revenue markets (US, China, Japan, Germany, UK)
2. Use professional translators, not machine translation
3. Test translations with native speakers
4. Adapt keywords for cultural context
5. Monitor ROI by market

### Review Management
1. Respond to reviews within 24-48 hours
2. Always be professional, even with negative reviews
3. Address specific issues raised
4. Thank users for positive feedback
5. Use insights to prioritize product improvements

## Technical Requirements

- **Python**: 3.7+ (for Python modules)
- **Platform Support**: Apple App Store, Google Play Store
- **Data Formats**: JSON input/output
- **Dependencies**: Standard library only (no external packages required)

## Limitations

### Data Dependencies
- Keyword search volumes are estimates (no official Apple/Google data)
- Competitor data limited to publicly available information
- Review analysis requires access to public reviews
- Historical data may not be available for new apps

### Platform Constraints
- Apple: Metadata changes require app submission (except Promotional Text)
- Google: Metadata changes take 1-2 hours to index
- A/B testing requires significant traffic for statistical significance
- Store algorithms are proprietary and change without notice

### Scope
- Does not include paid user acquisition (Apple Search Ads, Google Ads)
- Does not cover in-app analytics implementation
- Does not handle technical app development
- Focuses on organic discovery and conversion optimization

## Troubleshooting

### Issue: Python modules not found
**Solution**: Ensure all .py files are in the same directory as SKILL.md

### Issue: Character limit validation failing
**Solution**: Check that you're using the correct platform ('apple' or 'google')

### Issue: Keyword research returning limited results
**Solution**: Provide more context about your app, features, and target audience

### Issue: ASO score seems inaccurate
**Solution**: Ensure you're providing accurate metrics (ratings, keyword rankings, conversion rate)

## Version History

### Version 1.0.0 (November 7, 2025)
- Initial release
- 8 Python modules with comprehensive ASO capabilities
- Support for both Apple App Store and Google Play Store
- Keyword research, metadata optimization, competitor analysis
- ASO scoring, A/B testing, localization, review analysis
- Launch planning and seasonal campaign tools

## Support & Feedback

This skill is designed to help app developers and marketers succeed in competitive app marketplaces. For the best results:

1. Provide detailed context about your app
2. Include specific metrics when available
3. Ask follow-up questions for clarification
4. Iterate based on results

## Credits

Developed by Claude Skills Factory
Based on industry-standard ASO best practices
Platform requirements current as of November 2025

## License

This skill is provided as-is for use with Claude Code and Claude Apps. Customize and extend as needed for your specific use cases.

---

**Ready to optimize your app?** Start with keyword research, then move to metadata optimization, and finally implement A/B testing for continuous improvement. The skill handles everything from pre-launch planning to ongoing optimization.

For detailed usage examples, see [HOW_TO_USE.md](HOW_TO_USE.md).

```

### _meta.json

```json
{
  "owner": "alirezarezvani",
  "slug": "app-store-optimization",
  "displayName": "App Store Optimization",
  "latest": {
    "version": "1.0.0",
    "publishedAt": 1770402531210,
    "commit": "https://github.com/openclaw/skills/commit/792941c23326c617caa53dfd8c19be5e2ab88ae6"
  },
  "history": [
    {
      "version": "0.1.0",
      "publishedAt": 1770028069136,
      "commit": "https://github.com/clawdbot/skills/commit/d9a0182cc89c260cdec408409c1a65a87804d144"
    }
  ]
}

```
