seo-wordpress-manager
Batch update Yoast SEO metadata (titles, descriptions, focus keyphrases) in WordPress via GraphQL. Use when the user wants to update SEO metadata, optimize titles, fix meta descriptions, or manage Yoast SEO fields across multiple posts. Supports preview mode, progress tracking, and resume capability.
Packaged view
This page reorganizes the original catalog entry to put fit, installability, and workflow context first. The original raw source follows below.
Install command
npx @skill-hub/cli install dragosroua-claude-content-skills-seo-wordpress-manager
Repository
Skill path: skills/seo-wordpress-manager
Best for
Primary workflow: Grow & Distribute.
Technical facets: Full Stack, Tech Writer.
Target audience: everyone.
License: Unknown.
Original source
Catalog source: SkillHub Club.
Repository owner: dragosroua.
This is a mirrored public skill entry; review the repository before installing it into production workflows.
What it helps with
- Install seo-wordpress-manager into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
- Review https://github.com/dragosroua/claude-content-skills before adding seo-wordpress-manager to shared team environments
- Use seo-wordpress-manager for development workflows
Works across
Favorites: 0.
Sub-skills: 0.
Aggregator: No.
Original source / Raw SKILL.md
---
name: seo-wordpress-manager
description: Batch update Yoast SEO metadata (titles, descriptions, focus keyphrases) in WordPress via GraphQL. Use when the user wants to update SEO metadata, optimize titles, fix meta descriptions, or manage Yoast SEO fields across multiple posts. Supports preview mode, progress tracking, and resume capability.
---
# SEO WordPress Manager Skill
## Purpose
This skill manages Yoast SEO metadata in WordPress sites via the WPGraphQL API. It enables batch updates of:
- SEO titles
- Meta descriptions
- Focus keyphrases
- Open Graph metadata
## When to Use This Skill
- User asks to "update SEO titles" or "fix meta descriptions"
- User wants to batch process WordPress posts for SEO
- User mentions Yoast SEO optimization
- User needs to update SEO metadata across multiple posts
## Prerequisites
### WordPress Setup Required
1. **WPGraphQL plugin** installed and activated
2. **WPGraphQL for Yoast SEO** extension installed
3. **Application Password** created for authentication
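Application Passwords authenticate over plain HTTP Basic auth. A minimal sketch of building the header a client would send (credentials here are placeholders; WordPress ignores the spaces it displays in a freshly generated password, so stripping them client-side is safe):

```python
import base64

def app_password_header(username: str, app_password: str) -> dict:
    """Build the HTTP Basic auth header WordPress Application Passwords
    expect. Display spaces in the generated password are stripped before
    encoding."""
    token = f"{username}:{app_password.replace(' ', '')}"
    return {"Authorization": "Basic " + base64.b64encode(token.encode()).decode()}

# Placeholder credentials, not real ones:
print(app_password_header("editor", "abcd EFGH 1234"))
```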
### Yoast SEO GraphQL Mutations
Add this to your theme's `functions.php` to enable mutations:
```php
// Enable Yoast SEO mutations via WPGraphQL
add_action('graphql_register_types', function() {
    register_graphql_mutation('updatePostSeo', [
        'inputFields' => [
            'postId' => ['type' => 'Int', 'description' => 'Post ID'],
            'title' => ['type' => 'String', 'description' => 'SEO Title'],
            'metaDesc' => ['type' => 'String', 'description' => 'Meta Description'],
            'focusKeyphrase' => ['type' => 'String', 'description' => 'Focus Keyphrase'],
        ],
        'outputFields' => [
            'success' => ['type' => 'Boolean'],
            'post' => ['type' => 'Post'],
        ],
        'mutateAndGetPayload' => function($input) {
            $post_id = absint($input['postId']);
            if (!current_user_can('edit_post', $post_id)) {
                throw new \GraphQL\Error\UserError('You do not have permission to edit this post.');
            }
            if (isset($input['title'])) {
                update_post_meta($post_id, '_yoast_wpseo_title', sanitize_text_field($input['title']));
            }
            if (isset($input['metaDesc'])) {
                update_post_meta($post_id, '_yoast_wpseo_metadesc', sanitize_textarea_field($input['metaDesc']));
            }
            if (isset($input['focusKeyphrase'])) {
                update_post_meta($post_id, '_yoast_wpseo_focuskw', sanitize_text_field($input['focusKeyphrase']));
            }
            return [
                'success' => true,
                'post' => get_post($post_id),
            ];
        }
    ]);
});
```
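On the client side, calling this mutation is a plain GraphQL POST. A sketch of the request body (the `UpdatePostSeoInput` type name assumes WPGraphQL's convention of deriving the input type from the mutation name; actually sending it needs an HTTP client plus the Basic auth credentials):

```python
# Sketch: build the JSON body a client would POST to /graphql to call the
# mutation registered above. Only the payload is constructed here.
MUTATION = """
mutation UpdatePostSeo($input: UpdatePostSeoInput!) {
  updatePostSeo(input: $input) {
    success
  }
}
"""

def build_seo_mutation(post_id, title=None, meta_desc=None, focus_keyphrase=None):
    """Assemble query + variables, including only the fields being changed."""
    fields = {"postId": post_id}
    if title is not None:
        fields["title"] = title
    if meta_desc is not None:
        fields["metaDesc"] = meta_desc
    if focus_keyphrase is not None:
        fields["focusKeyphrase"] = focus_keyphrase
    return {"query": MUTATION, "variables": {"input": fields}}

print(build_seo_mutation(123, title="New Optimized Title | Brand"))
```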
## Configuration
Create a `config.json` in the skill directory:
```json
{
  "wordpress": {
    "graphql_url": "https://your-site.com/graphql",
    "username": "your-username",
    "app_password": "your-app-password"
  },
  "batch": {
    "size": 10,
    "delay_seconds": 1
  },
  "state_file": "./seo_update_progress.json"
}
```
Or use environment variables:
- `WP_GRAPHQL_URL`
- `WP_USERNAME`
- `WP_APP_PASSWORD`
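A sketch of how such a loader can resolve settings, letting `config.json` values win and the environment fill the gaps (the real logic lives in the bundled `config_loader`/`wp_graphql_client` modules; this is an assumption about their behavior, not a copy of it):

```python
import os

def load_wp_settings(config=None):
    """Resolve connection settings: config.json values take precedence,
    environment variables fill the gaps; raise if anything is still missing.
    (Sketch only, not the bundled implementation.)"""
    wp = (config or {}).get("wordpress", {})
    settings = {
        "graphql_url": wp.get("graphql_url") or os.environ.get("WP_GRAPHQL_URL"),
        "username": wp.get("username") or os.environ.get("WP_USERNAME"),
        "app_password": wp.get("app_password") or os.environ.get("WP_APP_PASSWORD"),
    }
    missing = [k for k, v in settings.items() if not v]
    if missing:
        raise ValueError(f"Missing WordPress settings: {', '.join(missing)}")
    return settings
```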
## Workflow
### Step 1: Analyze Posts for SEO Issues
```bash
python scripts/analyze_seo.py --all --output analysis.json
```
This fetches posts and flags SEO issues (missing titles, overlong descriptions, and so on).
Output includes instructions for Claude to generate optimized SEO content.
### Step 2: Generate Optimized SEO Content
Claude analyzes the `analysis.json` output and generates a `changes.json` file with:
- Optimized SEO titles (50-60 chars)
- Compelling meta descriptions (150-160 chars)
- Relevant focus keyphrases
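When a generated description overshoots the target, trimming at a word boundary reads better than a hard cut. A small helper sketch (hypothetical, not part of the bundled scripts) that drafts could be passed through before they land in `changes.json`:

```python
def fit_meta_desc(text: str, max_len: int = 160) -> str:
    """Collapse whitespace and trim a draft meta description to max_len
    characters at a word boundary (hard cut only if there is no space)."""
    text = " ".join(text.split())
    if len(text) <= max_len:
        return text
    cut = text[:max_len + 1]
    idx = cut.rfind(" ")
    return (cut[:idx] if idx > 0 else text[:max_len]).rstrip(" ,;:")
```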
### Step 3: Preview Changes (Dry Run)
```bash
python scripts/preview_changes.py --input changes.json
```
### Step 4: Apply Updates
```bash
python scripts/yoast_batch_updater.py --input changes.json --apply
```
### Step 5: Resume if Interrupted
```bash
python scripts/yoast_batch_updater.py --resume
```
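Resume works off the progress state file written during the run. A sketch of the filtering idea (the `completed` key mirrors what a tracker like the bundled `ProgressTracker` might persist; the exact schema is an assumption):

```python
import json
from pathlib import Path

def remaining_updates(changes_file: str, state_file: str) -> list:
    """Given changes.json and a progress state file, return only the
    updates whose post IDs are not yet marked completed."""
    updates = json.loads(Path(changes_file).read_text())["updates"]
    state = json.loads(Path(state_file).read_text())
    done = set(state.get("completed", []))
    return [u for u in updates if str(u["post_id"]) not in done]
```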
## Input Format
The skill expects a JSON file with changes:
```json
{
  "updates": [
    {
      "post_id": 123,
      "post_title": "Original Post Title",
      "current": {
        "seo_title": "Old Title | Site Name",
        "meta_desc": "Old description"
      },
      "new": {
        "seo_title": "New Optimized Title | Site Name",
        "meta_desc": "New compelling meta description under 160 chars"
      }
    }
  ]
}
```
## Output
The skill produces:
1. **Preview report** showing before/after for each post
2. **Progress state file** for resuming interrupted batches
3. **Final report** with success/failure counts
## Safety Features
- **Dry-run mode** by default - preview before applying
- **Confirmation prompt** before batch updates
- **Progress tracking** - resume interrupted sessions
- **Rate limiting** - configurable delay between API calls
- **Backup** - logs current values before changing them
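The backup file written before each run (`state/seo_backup.json`) can be folded back into a `changes.json`-shaped document to undo a batch. A sketch of that inversion (re-running the updater on its output as a rollback is an assumption, not a built-in feature):

```python
import json
from pathlib import Path

def backup_to_revert_changes(backup_file: str) -> dict:
    """Turn the pre-run backup into a changes.json-shaped document whose
    "new" values are the original ones, so applying it restores them."""
    backups = json.loads(Path(backup_file).read_text())["backups"]
    return {
        "updates": [
            {
                "post_id": b["post_id"],
                "post_title": b.get("post_title", ""),
                "current": {},  # live values unknown at restore time
                "new": b.get("original_values", {}),
            }
            for b in backups
        ]
    }
```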
## Example Usage
User: "Update the meta descriptions for all posts in the 'tutorials' category to be more compelling"
Claude will:
1. Run `analyze_seo.py` to fetch posts and identify SEO issues
2. Analyze each post's content and current SEO data
3. Generate optimized titles, descriptions, and keyphrases
4. Create a `changes.json` file with the improvements
5. Run `preview_changes.py` to show before/after comparison
6. Ask for confirmation
7. Run `yoast_batch_updater.py --apply` to apply changes
8. Report results with success/failure counts
---
## Referenced Files
> The following files are referenced in this skill and included for context.
### scripts/analyze_seo.py
```python
#!/usr/bin/env python3
"""
SEO Analyzer
Fetches posts and identifies SEO issues for Claude to optimize.
This script prepares data for Claude to analyze and generate improvements.
"""
import sys
import json
import argparse
from pathlib import Path
from dataclasses import dataclass, asdict
from typing import Optional
# Add shared modules to path
sys.path.insert(0, str(Path(__file__).parent.parent.parent.parent / "shared"))
from config_loader import SkillConfig
from utils import save_json, timestamp
from wp_graphql_client import WPGraphQLClient, load_credentials_from_config
@dataclass
class SEOIssue:
    """Represents an SEO issue found in a post"""
    post_id: int
    post_title: str
    post_url: str
    post_excerpt: str
    current_seo_title: str
    current_meta_desc: str
    current_focus_keyphrase: str
    issues: list[str]
    priority: str  # "high", "medium", "low"


def analyze_title(title: str, post_title: str) -> list[str]:
    """Analyze SEO title for issues"""
    issues = []
    if not title or title.strip() == "":
        issues.append("MISSING: No SEO title set")
        return issues
    length = len(title)
    if length > 60:
        issues.append(f"TOO_LONG: Title is {length} chars (max 60)")
    elif length < 30:
        issues.append(f"TOO_SHORT: Title is {length} chars (min 30 recommended)")
    # Check if it's just the post title (no optimization)
    if title.strip().lower() == post_title.strip().lower():
        issues.append("GENERIC: SEO title is same as post title (not optimized)")
    return issues


def analyze_description(description: str, default_patterns: list[str] = None) -> list[str]:
    """Analyze meta description for issues"""
    issues = []
    default_patterns = default_patterns or []
    if not description or description.strip() == "":
        issues.append("MISSING: No meta description set")
        return issues
    length = len(description)
    if length > 160:
        issues.append(f"TOO_LONG: Description is {length} chars (max 160)")
    elif length < 120:
        issues.append(f"TOO_SHORT: Description is {length} chars (min 120 recommended)")
    # Check for default/generic patterns
    desc_lower = description.lower()
    for pattern in default_patterns:
        if pattern.lower() in desc_lower:
            issues.append(f"DEFAULT: Contains generic text '{pattern}'")
            break
    return issues


def analyze_keyphrase(keyphrase: str) -> list[str]:
    """Analyze focus keyphrase for issues"""
    issues = []
    if not keyphrase or keyphrase.strip() == "":
        issues.append("MISSING: No focus keyphrase set")
    return issues


def determine_priority(issues: list[str]) -> str:
    """Determine priority based on issues"""
    if any("MISSING" in issue for issue in issues):
        return "high"
    if any("TOO_LONG" in issue or "DEFAULT" in issue for issue in issues):
        return "medium"
    return "low"


def analyze_posts(
    posts: list[dict],
    default_patterns: list[str] = None,
    check_keyphrase: bool = False
) -> list[SEOIssue]:
    """Analyze posts for SEO issues"""
    results = []
    for post in posts:
        seo = post.get("seo", {}) or {}
        issues = []
        # Analyze each field
        title_issues = analyze_title(
            seo.get("title", ""),
            post.get("title", "")
        )
        issues.extend(title_issues)
        desc_issues = analyze_description(
            seo.get("metaDesc", ""),
            default_patterns
        )
        issues.extend(desc_issues)
        if check_keyphrase:
            kw_issues = analyze_keyphrase(seo.get("focuskw", ""))
            issues.extend(kw_issues)
        # Only include posts with issues
        if issues:
            # Get excerpt from content if available
            excerpt = ""
            if post.get("excerpt"):
                excerpt = post["excerpt"][:300]
            elif post.get("content"):
                # Strip HTML and get first 300 chars
                import re
                text = re.sub(r'<[^>]+>', '', post["content"])
                excerpt = text[:300].strip()
            results.append(SEOIssue(
                post_id=post.get("databaseId", 0),
                post_title=post.get("title", ""),
                post_url=post.get("uri", ""),
                post_excerpt=excerpt,
                current_seo_title=seo.get("title", ""),
                current_meta_desc=seo.get("metaDesc", ""),
                current_focus_keyphrase=seo.get("focuskw", ""),
                issues=issues,
                priority=determine_priority(issues)
            ))
    return results


def generate_analysis_report(issues: list[SEOIssue]) -> dict:
    """Generate analysis report for Claude to process"""
    # Group by priority
    high = [i for i in issues if i.priority == "high"]
    medium = [i for i in issues if i.priority == "medium"]
    low = [i for i in issues if i.priority == "low"]
    # Count issue types
    issue_counts = {}
    for issue in issues:
        for i in issue.issues:
            issue_type = i.split(":")[0]
            issue_counts[issue_type] = issue_counts.get(issue_type, 0) + 1
    return {
        "metadata": {
            "analyzed_at": timestamp(),
            "total_posts_analyzed": len(issues),
            "posts_with_issues": len(issues)
        },
        "summary": {
            "high_priority": len(high),
            "medium_priority": len(medium),
            "low_priority": len(low),
            "issue_breakdown": issue_counts
        },
        "posts_needing_optimization": [asdict(i) for i in issues],
        "instructions_for_claude": """
## How to Process This Analysis
For each post in `posts_needing_optimization`, generate optimized SEO content:
### SEO Title Guidelines:
- Length: 50-60 characters
- Structure: Primary Keyword - Secondary | Brand
- Front-load important keywords
- Make it compelling and click-worthy
### Meta Description Guidelines:
- Length: 150-160 characters
- Include a call-to-action
- Include the focus keyphrase naturally
- Make it compelling - this is your ad copy in search results
### Focus Keyphrase Guidelines:
- One primary keyword/phrase per post
- Long-tail keywords often perform better
- Match search intent
### Output Format:
Generate a changes.json file with this structure:
```json
{
  "updates": [
    {
      "post_id": 123,
      "post_title": "Original Title",
      "current": {
        "seo_title": "old title",
        "meta_desc": "old description",
        "focus_keyphrase": "old keyword"
      },
      "new": {
        "seo_title": "New Optimized Title | Brand",
        "meta_desc": "Compelling 150-160 char description with CTA",
        "focus_keyphrase": "target keyword"
      }
    }
  ]
}
```
"""
    }


def main():
    parser = argparse.ArgumentParser(description="Analyze WordPress posts for SEO issues")
    parser.add_argument("--config", help="Path to config.json")
    parser.add_argument("--output", "-o", help="Output file path")
    parser.add_argument("--category", help="Filter by category slug")
    parser.add_argument("--limit", type=int, help="Limit number of posts")
    parser.add_argument("--all", action="store_true", help="Fetch all posts")
    parser.add_argument("--check-keyphrase", action="store_true",
                        help="Include focus keyphrase in analysis")
    parser.add_argument("--default-patterns", nargs="+",
                        help="Patterns to detect default/generic descriptions")
    args = parser.parse_args()
    # Load config
    config_path = Path(args.config) if args.config else None
    if not config_path:
        default_config = Path(__file__).parent.parent / "config.json"
        if default_config.exists():
            config_path = default_config
    try:
        credentials = load_credentials_from_config(config_path)
    except ValueError as e:
        print(f"Configuration error: {e}", file=sys.stderr)
        sys.exit(1)
    client = WPGraphQLClient(credentials)
    # Fetch posts
    print("Fetching posts from WordPress...")
    if args.all:
        posts = client.get_all_posts_with_seo(category=args.category)
    else:
        limit = args.limit or 100
        result = client.get_posts_with_seo(limit=limit, category=args.category)
        posts = result.get("posts", {}).get("nodes", [])
    print(f"Fetched {len(posts)} posts")
    # Analyze posts
    print("Analyzing SEO issues...")
    default_patterns = args.default_patterns or []
    issues = analyze_posts(
        posts,
        default_patterns=default_patterns,
        check_keyphrase=args.check_keyphrase
    )
    print(f"Found {len(issues)} posts with SEO issues")
    # Generate report
    report = generate_analysis_report(issues)
    # Output
    if args.output:
        output_path = Path(args.output)
        save_json(report, output_path)
        print(f"\nAnalysis saved to: {output_path}")
    else:
        print(json.dumps(report, indent=2))
    # Print summary
    summary = report["summary"]
    print(f"\n{'=' * 60}")
    print("SEO ANALYSIS SUMMARY")
    print(f"{'=' * 60}")
    print(f"Posts with issues: {len(issues)}")
    print(f" High priority: {summary['high_priority']}")
    print(f" Medium priority: {summary['medium_priority']}")
    print(f" Low priority: {summary['low_priority']}")
    print(f"\nIssue breakdown:")
    for issue_type, count in summary["issue_breakdown"].items():
        print(f" {issue_type}: {count}")
    print(f"\nNext step: Have Claude review the analysis and generate changes.json")


if __name__ == "__main__":
    main()
```
### scripts/preview_changes.py
```python
#!/usr/bin/env python3
"""
Preview SEO changes before applying them
Shows side-by-side comparison of current vs proposed changes
"""
import sys
import json
import argparse
from pathlib import Path
from typing import Optional
try:
    from rich.console import Console
    from rich.table import Table
    from rich.panel import Panel
    from rich.text import Text
    RICH_AVAILABLE = True
except ImportError:
    RICH_AVAILABLE = False


def truncate(text: str, max_len: int = 60) -> str:
    """Truncate text with ellipsis"""
    if not text:
        return "(empty)"
    if len(text) <= max_len:
        return text
    return text[:max_len - 3] + "..."


def length_indicator(text: str, max_len: int) -> str:
    """Show length with warning if over limit"""
    if not text:
        return "0"
    length = len(text)
    if length > max_len:
        return f"{length} (OVER by {length - max_len})"
    return str(length)


def print_plain_preview(changes: dict) -> None:
    """Print preview without rich formatting"""
    updates = changes.get("updates", [])
    print("=" * 80)
    print("SEO CHANGES PREVIEW")
    print("=" * 80)
    print(f"Total posts to update: {len(updates)}")
    print()
    for i, update in enumerate(updates, 1):
        post_id = update.get("post_id")
        post_title = update.get("post_title", "Unknown")
        current = update.get("current", {})
        new = update.get("new", {})
        print(f"\n{'─' * 80}")
        print(f"[{i}/{len(updates)}] Post ID: {post_id}")
        print(f"Title: {post_title}")
        print()
        # SEO Title
        if new.get("seo_title"):
            print("SEO Title:")
            print(f" CURRENT: {truncate(current.get('seo_title', ''), 70)}")
            print(f" NEW: {truncate(new.get('seo_title', ''), 70)}")
            print(f" Length: {length_indicator(current.get('seo_title', ''), 60)} → {length_indicator(new.get('seo_title', ''), 60)}")
            print()
        # Meta Description
        if new.get("meta_desc"):
            print("Meta Description:")
            print(f" CURRENT: {truncate(current.get('meta_desc', ''), 70)}")
            print(f" NEW: {truncate(new.get('meta_desc', ''), 70)}")
            print(f" Length: {length_indicator(current.get('meta_desc', ''), 160)} → {length_indicator(new.get('meta_desc', ''), 160)}")
            print()
        # Focus Keyphrase
        if new.get("focus_keyphrase"):
            print("Focus Keyphrase:")
            print(f" CURRENT: {current.get('focus_keyphrase', '(none)')}")
            print(f" NEW: {new.get('focus_keyphrase', '(none)')}")
            print()
    print("=" * 80)
    print(f"SUMMARY: {len(updates)} posts will be updated")
    print("=" * 80)


def print_rich_preview(changes: dict) -> None:
    """Print preview with rich formatting"""
    console = Console()
    updates = changes.get("updates", [])
    console.print(Panel.fit(
        f"[bold]SEO Changes Preview[/bold]\nTotal posts: {len(updates)}",
        border_style="blue"
    ))
    for i, update in enumerate(updates, 1):
        post_id = update.get("post_id")
        post_title = update.get("post_title", "Unknown")
        current = update.get("current", {})
        new = update.get("new", {})
        table = Table(title=f"[{i}/{len(updates)}] {truncate(post_title, 50)} (ID: {post_id})")
        table.add_column("Field", style="cyan")
        table.add_column("Current", style="red")
        table.add_column("New", style="green")
        table.add_column("Length", style="yellow")
        # SEO Title
        if new.get("seo_title"):
            curr_title = current.get("seo_title", "")
            new_title = new.get("seo_title", "")
            table.add_row(
                "SEO Title",
                truncate(curr_title, 40),
                truncate(new_title, 40),
                f"{len(curr_title)} → {len(new_title)}"
            )
        # Meta Description
        if new.get("meta_desc"):
            curr_desc = current.get("meta_desc", "")
            new_desc = new.get("meta_desc", "")
            table.add_row(
                "Meta Desc",
                truncate(curr_desc, 40),
                truncate(new_desc, 40),
                f"{len(curr_desc)} → {len(new_desc)}"
            )
        # Focus Keyphrase
        if new.get("focus_keyphrase"):
            table.add_row(
                "Keyphrase",
                current.get("focus_keyphrase", "(none)"),
                new.get("focus_keyphrase", "(none)"),
                "-"
            )
        console.print(table)
        console.print()
    # Summary
    title_changes = sum(1 for u in updates if u.get("new", {}).get("seo_title"))
    desc_changes = sum(1 for u in updates if u.get("new", {}).get("meta_desc"))
    kw_changes = sum(1 for u in updates if u.get("new", {}).get("focus_keyphrase"))
    summary = Table(title="Summary", show_header=False)
    summary.add_column("Metric", style="cyan")
    summary.add_column("Count", style="yellow")
    summary.add_row("Total posts", str(len(updates)))
    summary.add_row("Title changes", str(title_changes))
    summary.add_row("Description changes", str(desc_changes))
    summary.add_row("Keyphrase changes", str(kw_changes))
    console.print(Panel(summary, border_style="green"))


def validate_changes(changes: dict) -> list[str]:
    """Validate changes and return list of warnings"""
    warnings = []
    updates = changes.get("updates", [])
    for update in updates:
        post_id = update.get("post_id")
        new = update.get("new", {})
        # Check title length
        if new.get("seo_title") and len(new["seo_title"]) > 60:
            warnings.append(f"Post {post_id}: SEO title is {len(new['seo_title'])} chars (max 60)")
        # Check description length
        if new.get("meta_desc"):
            desc_len = len(new["meta_desc"])
            if desc_len > 160:
                warnings.append(f"Post {post_id}: Meta description is {desc_len} chars (max 160)")
            elif desc_len < 120:
                warnings.append(f"Post {post_id}: Meta description is {desc_len} chars (min 120 recommended)")
    return warnings


def main():
    parser = argparse.ArgumentParser(description="Preview SEO changes")
    parser.add_argument("--input", "-i", required=True, help="Path to changes JSON file")
    parser.add_argument("--validate", "-v", action="store_true", help="Show validation warnings")
    parser.add_argument("--plain", action="store_true", help="Use plain text output (no colors)")
    args = parser.parse_args()
    # Load changes
    input_path = Path(args.input)
    if not input_path.exists():
        print(f"Error: File not found: {input_path}", file=sys.stderr)
        sys.exit(1)
    with open(input_path, "r", encoding="utf-8") as f:
        changes = json.load(f)
    # Validate
    if args.validate:
        warnings = validate_changes(changes)
        if warnings:
            print("VALIDATION WARNINGS:")
            print("-" * 40)
            for warning in warnings:
                print(f" ! {warning}")
            print()
    # Preview
    if args.plain or not RICH_AVAILABLE:
        print_plain_preview(changes)
    else:
        print_rich_preview(changes)


if __name__ == "__main__":
    main()
```
### scripts/yoast_batch_updater.py
```python
#!/usr/bin/env python3
"""
Yoast SEO Batch Updater
Applies SEO changes to WordPress posts via GraphQL with progress tracking
"""
import sys
import json
import time
import argparse
from pathlib import Path
from typing import Optional
from datetime import datetime
# Add shared modules to path
sys.path.insert(0, str(Path(__file__).parent.parent.parent.parent / "shared"))
from config_loader import SkillConfig
from utils import ProgressTracker, save_json, load_json, ensure_dir, timestamp
from wp_graphql_client import WPGraphQLClient, load_credentials_from_config
class YoastBatchUpdater:
    """Batch update Yoast SEO fields with progress tracking"""

    def __init__(
        self,
        client: WPGraphQLClient,
        state_file: Path,
        backup_file: Path,
        batch_size: int = 10,
        delay_seconds: float = 1.0,
        dry_run: bool = True
    ):
        self.client = client
        self.tracker = ProgressTracker(state_file)
        self.backup_file = backup_file
        self.batch_size = batch_size
        self.delay_seconds = delay_seconds
        self.dry_run = dry_run
        self.backup_data: list[dict] = []
        self.results: list[dict] = []

    def load_changes(self, changes_file: Path) -> list[dict]:
        """Load changes from JSON file"""
        data = load_json(changes_file)
        return data.get("updates", [])

    def backup_current_values(self, updates: list[dict]) -> None:
        """Backup current SEO values before making changes"""
        self.backup_data = [
            {
                "post_id": u["post_id"],
                "post_title": u.get("post_title", ""),
                "original_values": u.get("current", {}),
                "backup_timestamp": timestamp()
            }
            for u in updates
        ]
        ensure_dir(self.backup_file.parent)
        save_json({"backups": self.backup_data, "created_at": timestamp()}, self.backup_file)
        print(f"Backup saved to: {self.backup_file}")

    def apply_update(self, update: dict) -> dict:
        """Apply a single update and return result"""
        post_id = update["post_id"]
        new_values = update.get("new", {})
        result = {
            "post_id": post_id,
            "post_title": update.get("post_title", ""),
            "success": False,
            "error": None,
            "changes_applied": {},
            "timestamp": timestamp()
        }
        if self.dry_run:
            result["success"] = True
            result["dry_run"] = True
            result["changes_applied"] = new_values
            return result
        try:
            # Apply the update via GraphQL
            response = self.client.update_post_seo(
                post_id=post_id,
                title=new_values.get("seo_title"),
                meta_desc=new_values.get("meta_desc"),
                focus_keyphrase=new_values.get("focus_keyphrase")
            )
            update_result = response.get("updatePostSeo", {})
            if update_result.get("success"):
                result["success"] = True
                result["changes_applied"] = new_values
            else:
                result["error"] = "Update returned success=false"
        except Exception as e:
            result["error"] = str(e)
        return result

    def run(self, changes_file: Path, resume: bool = False) -> dict:
        """Run the batch update process"""
        updates = self.load_changes(changes_file)
        if not updates:
            return {"error": "No updates found in changes file"}
        # Determine which updates to process
        if resume and not self.tracker.is_complete():
            remaining_ids = set(self.tracker.get_remaining())
            updates_to_process = [u for u in updates if str(u["post_id"]) in remaining_ids]
            print(f"Resuming: {len(updates_to_process)} posts remaining")
        else:
            updates_to_process = updates
            # Start fresh tracking
            self.tracker.start([str(u["post_id"]) for u in updates])
        # Backup current values
        self.backup_current_values(updates)
        print(f"\n{'DRY RUN - ' if self.dry_run else ''}Processing {len(updates_to_process)} updates...")
        print(f"Batch size: {self.batch_size}, Delay: {self.delay_seconds}s")
        print("-" * 60)
        # Process updates
        for i, update in enumerate(updates_to_process, 1):
            post_id = str(update["post_id"])
            post_title = update.get("post_title", "Unknown")[:40]
            self.tracker.mark_current(post_id)
            print(f"[{i}/{len(updates_to_process)}] Processing: {post_title} (ID: {post_id})...", end=" ")
            result = self.apply_update(update)
            self.results.append(result)
            if result["success"]:
                self.tracker.mark_completed(post_id)
                status = "OK (dry-run)" if result.get("dry_run") else "OK"
                print(f"[{status}]")
            else:
                self.tracker.mark_failed(post_id, result.get("error"))
                print(f"[FAILED: {result.get('error', 'Unknown error')}]")
            # Rate limiting
            if i < len(updates_to_process):
                time.sleep(self.delay_seconds)
        # Generate summary
        stats = self.tracker.get_stats()
        summary = {
            "completed_at": timestamp(),
            "dry_run": self.dry_run,
            "total_processed": len(self.results),
            "successful": stats["completed"],
            "failed": stats["failed"],
            "results": self.results
        }
        return summary

    def generate_report(self, summary: dict, output_file: Optional[Path] = None) -> str:
        """Generate a markdown report of the update process"""
        lines = [
            "# SEO Batch Update Report",
            "",
            f"**Completed:** {summary.get('completed_at', 'N/A')}",
            f"**Mode:** {'DRY RUN' if summary.get('dry_run') else 'LIVE'}",
            "",
            "## Summary",
            "",
            f"- Total processed: {summary.get('total_processed', 0)}",
            f"- Successful: {summary.get('successful', 0)}",
            f"- Failed: {summary.get('failed', 0)}",
            "",
        ]
        # Successful updates
        successful = [r for r in summary.get("results", []) if r.get("success")]
        if successful:
            lines.extend([
                "## Successful Updates",
                "",
            ])
            for result in successful[:20]:  # Limit to first 20
                lines.append(f"- **{result['post_title']}** (ID: {result['post_id']})")
                changes = result.get("changes_applied", {})
                if changes.get("seo_title"):
                    lines.append(f" - Title: {changes['seo_title'][:50]}...")
                if changes.get("meta_desc"):
                    lines.append(f" - Description: {changes['meta_desc'][:50]}...")
            if len(successful) > 20:
                lines.append(f"\n... and {len(successful) - 20} more")
            lines.append("")
        # Failed updates
        failed = [r for r in summary.get("results", []) if not r.get("success")]
        if failed:
            lines.extend([
                "## Failed Updates",
                "",
            ])
            for result in failed:
                lines.append(f"- **{result['post_title']}** (ID: {result['post_id']})")
                lines.append(f" - Error: {result.get('error', 'Unknown')}")
            lines.append("")
        report = "\n".join(lines)
        if output_file:
            ensure_dir(output_file.parent)
            with open(output_file, "w", encoding="utf-8") as f:
                f.write(report)
        return report


def main():
    parser = argparse.ArgumentParser(description="Yoast SEO Batch Updater")
    parser.add_argument("--input", "-i", help="Path to changes JSON file")
    parser.add_argument("--resume", action="store_true", help="Resume interrupted batch")
    parser.add_argument("--dry-run", action="store_true", default=True,
                        help="Preview changes without applying (default)")
    parser.add_argument("--apply", action="store_true", help="Actually apply changes")
    parser.add_argument("--config", type=str, help="Path to config.json")
    parser.add_argument("--batch-size", type=int, default=10, help="Batch size")
    parser.add_argument("--delay", type=float, default=1.0, help="Delay between requests (seconds)")
    parser.add_argument("--report", type=str, help="Output report file path")
    args = parser.parse_args()
    # Validate arguments
    if not args.input and not args.resume:
        print("Error: --input is required (unless using --resume)", file=sys.stderr)
        sys.exit(1)
    # Load config
    config_path = Path(args.config) if args.config else None
    if not config_path:
        default_config = Path(__file__).parent.parent / "config.json"
        if default_config.exists():
            config_path = default_config
    skill_config = SkillConfig("seo-wordpress-manager", config_path)
    # Setup paths
    state_dir = Path(__file__).parent.parent / "state"
    state_file = state_dir / "seo_update_progress.json"
    backup_file = state_dir / "seo_backup.json"
    # Determine dry_run mode
    dry_run = not args.apply
    if not dry_run:
        print("\n" + "=" * 60)
        print("WARNING: LIVE MODE - Changes will be applied to WordPress!")
        print("=" * 60)
        confirm = input("Type 'yes' to confirm: ")
        if confirm.lower() != "yes":
            print("Aborted.")
            sys.exit(0)
    try:
        credentials = load_credentials_from_config(config_path)
    except ValueError as e:
        print(f"Configuration error: {e}", file=sys.stderr)
        sys.exit(1)
    client = WPGraphQLClient(credentials)
    updater = YoastBatchUpdater(
        client=client,
        state_file=state_file,
        backup_file=backup_file,
        batch_size=args.batch_size,
        delay_seconds=args.delay,
        dry_run=dry_run
    )
    # Determine input file
    if args.resume:
        # Look for the original input file from state
        if not state_file.exists():
            print("Error: No state file found to resume from", file=sys.stderr)
            sys.exit(1)
        # We need the original changes file to resume
        if not args.input:
            print("Error: --input is required when resuming", file=sys.stderr)
            sys.exit(1)
        changes_file = Path(args.input)
    else:
        changes_file = Path(args.input)
    if not changes_file.exists():
        print(f"Error: Changes file not found: {changes_file}", file=sys.stderr)
        sys.exit(1)
    # Run the batch update
    summary = updater.run(changes_file, resume=args.resume)
    # Generate report
    report_file = Path(args.report) if args.report else state_dir / "seo_update_report.md"
    report = updater.generate_report(summary, report_file)
    print("\n" + "=" * 60)
    print("BATCH UPDATE COMPLETE")
    print("=" * 60)
    print(f"Successful: {summary.get('successful', 0)}")
    print(f"Failed: {summary.get('failed', 0)}")
    print(f"Report saved to: {report_file}")
    if summary.get("dry_run"):
        print("\nThis was a DRY RUN. To apply changes, use --apply flag.")


if __name__ == "__main__":
    main()
```