onboarding
One-time team setup that creates Kurt profile and foundation rules
Packaged view
This page reorganizes the original catalog entry around fit, installability, and workflow context first. The original raw source lives below.
Install command
npx @skill-hub/cli install boringdata-kurt-demo-onboarding-skill
Repository
Skill path: .claude/skills/onboarding-skill
Best for
Primary workflow: Ship Full Stack.
Technical facets: Full Stack.
Target audience: everyone.
License: Unknown.
Original source
Catalog source: SkillHub Club.
Repository owner: boringdata.
This is a mirrored public skill entry. Review the repository before installing it into production workflows.
What it helps with
- Install onboarding into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
- Review https://github.com/boringdata/kurt-demo before adding onboarding to shared team environments
- Use onboarding for team setup and profile-management workflows
Works across
Favorites: 0.
Sub-skills: 0.
Aggregator: No.
Original source / Raw SKILL.md
---
name: onboarding
description: One-time team setup that creates Kurt profile and foundation rules
---
# Onboarding Skill
**Purpose:** Organizational setup and profile management
**Entry:** `/create-profile` or `/update-profile` commands
**Output:** `.kurt/profile.md` + foundation rules + indexed content
---
## Overview
This skill manages organizational onboarding through modular operations:
1. **create-profile** - Complete onboarding flow for new teams
2. **update-profile** - Update existing profile (selective updates)
3. **setup-content** - Map and fetch organizational content
4. **setup-analytics** - Configure analytics for domains (optional)
5. **setup-rules** - Extract foundation rules (publisher, style, personas)
**Key principle:** Operations are standalone and composable. They can be invoked independently OR as part of create-profile/update-profile flows.
---
## Operations
### create-profile
Complete onboarding flow for new teams.
**Entry:** `/create-profile` or `onboarding create-profile`
**Flow:**
1. Questionnaire - Capture team context
2. Map content - Organizational websites
3. Setup analytics - Optional traffic configuration
4. Extract rules - Foundation rules
5. Create profile - Generate `.kurt/profile.md`
See: `subskills/create-profile.md`
### update-profile
Update existing profile with selective changes.
**Entry:** `/update-profile` or `onboarding update-profile`
**Options:**
- Update content map (add/remove domains)
- Update analytics configuration
- Re-extract foundation rules
- Update team information
See: `subskills/update-profile.md`
### setup-content
Map and fetch organizational content.
**Entry:** `onboarding setup-content`
**Called by:**
- create-profile subskill
- update-profile subskill
- project-management check-onboarding (if content missing)
See: `subskills/map-content.md`
### setup-analytics
Configure analytics for organizational domains.
**Entry:** `onboarding setup-analytics`
**Called by:**
- create-profile subskill (optional)
- update-profile subskill
- project-management check-onboarding (if user wants analytics)
See: `subskills/setup-analytics.md`
### setup-rules
Extract foundation rules from organizational content.
**Entry:** `onboarding setup-rules`
**Called by:**
- create-profile subskill
- update-profile subskill
- project-management check-onboarding (if rules missing)
See: `subskills/extract-foundation.md`
---
## Routing Logic
```bash
# Parse first argument to determine operation
OPERATION=$1

case "$OPERATION" in
  create-profile)
    # Full onboarding flow
    invoke: subskills/create-profile.md
    ;;
  update-profile)
    # Selective updates to existing profile
    invoke: subskills/update-profile.md
    ;;
  setup-content)
    # Map and fetch organizational content
    invoke: subskills/map-content.md
    ;;
  setup-analytics)
    # Configure analytics
    invoke: subskills/setup-analytics.md
    ;;
  setup-rules)
    # Extract foundation rules
    invoke: subskills/extract-foundation.md
    ;;
  *)
    echo "Unknown operation: $OPERATION"
    echo ""
    echo "Available operations:"
    echo "  create-profile   - Complete onboarding for new teams"
    echo "  update-profile   - Update existing profile"
    echo "  setup-content    - Map organizational content"
    echo "  setup-analytics  - Configure analytics"
    echo "  setup-rules      - Extract foundation rules"
    echo ""
    echo "Usage: onboarding <operation>"
    exit 1
    ;;
esac
```
---
## Step 1: Welcome Message
```
═══════════════════════════════════════════════════════
Welcome to Kurt!
═══════════════════════════════════════════════════════
I'll help you set up Kurt for your team. This takes 5-10 minutes.
You can skip questions you're unsure about - we'll help you discover
the answers as you work.
Press Enter to start...
```
Wait for user to press Enter.
---
## Step 2: Run Questionnaire
Invoke: `onboarding-skill/subskills/questionnaire`
This captures:
- Company name, team, industry
- Communication goals
- Content types created
- Target personas
- Source content URLs
- CMS configuration (optional)
- Workflow description (optional)
**Output:** JSON file with all captured data at `.kurt/temp/onboarding-data.json`
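For illustration, a captured data file might look like the following. The field names match the ones read by later steps, but the full schema is owned by the questionnaire subskill and the values here are hypothetical:

```shell
# Hypothetical sample of .kurt/temp/onboarding-data.json; the real file is
# produced by the questionnaire subskill.
mkdir -p .kurt/temp
cat > .kurt/temp/onboarding-data.json <<'EOF'
{
  "company_name": "Acme Corp",
  "team_name": "Docs Team",
  "industry": "SaaS",
  "goals": ["Grow organic traffic"],
  "content_types": ["blog", "docs"],
  "sources": ["https://acme.example"],
  "cms_platform": "none",
  "content_fetched": false
}
EOF

# Later steps read individual fields with jq:
jq -r '.company_name' .kurt/temp/onboarding-data.json   # prints: Acme Corp
```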
---
## Step 3: Map Content (if sources provided)
Check if sources were provided:
```bash
SOURCES=$(jq -r '.sources | length' .kurt/temp/onboarding-data.json)

if [ "$SOURCES" -gt 0 ]; then
  echo ""
  echo "───────────────────────────────────────────────────────"
  echo "Step 1/4: Mapping Content Sources"
  echo "───────────────────────────────────────────────────────"
  echo ""

  # Invoke map-content subskill
  # This calls kurt CLI to map and fetch
fi
```
Invoke: `onboarding-skill/subskills/map-content`
**Input:** Sources from onboarding-data.json
**Output:** Updates onboarding-data.json with content stats
---
## Step 4: Setup Analytics (optional, if content fetched)
Check if content was fetched:
```bash
CONTENT_FETCHED=$(jq -r '.content_fetched' .kurt/temp/onboarding-data.json)

if [ "$CONTENT_FETCHED" = "true" ]; then
  echo ""
  echo "───────────────────────────────────────────────────────"
  echo "Step 2/4: Analytics Setup (Optional)"
  echo "───────────────────────────────────────────────────────"
  echo ""
fi
```
Invoke: `onboarding-skill/subskills/setup-analytics`
**Input:** Domains detected from content in onboarding-data.json
**Output:** Updates onboarding-data.json with analytics configuration
**Note:** Optional step - user can skip without blocking
---
## Step 5: Extract Foundation Rules (if content fetched)
Check if content was fetched:
```bash
CONTENT_FETCHED=$(jq -r '.content_fetched' .kurt/temp/onboarding-data.json)

if [ "$CONTENT_FETCHED" = "true" ]; then
  echo ""
  echo "───────────────────────────────────────────────────────"
  echo "Step 3/4: Extract Foundation Rules"
  echo "───────────────────────────────────────────────────────"
  echo ""
fi
```
Invoke: `onboarding-skill/subskills/extract-foundation`
**Input:** Content status from onboarding-data.json
**Output:** Creates rules files, updates onboarding-data.json with rule paths
---
## Step 6: Create Profile
Always run (even if steps 3-5 skipped):
```
───────────────────────────────────────────────────────
Step 4/4: Creating Your Profile
───────────────────────────────────────────────────────
Generating your Kurt profile...
```
Invoke: `onboarding-skill/subskills/create-profile`
**Input:** All data from onboarding-data.json
**Output:** `.kurt/profile.md`
---
## Step 7: Cleanup
```bash
# Remove temporary onboarding data
rm -f .kurt/temp/onboarding-data.json

# Ensure the temp directory exists for future runs
mkdir -p .kurt/temp
```
---
## Step 8: Success Message
```
═══════════════════════════════════════════════════════
🎉 Setup Complete!
═══════════════════════════════════════════════════════
Your Kurt profile is ready:
Location: .kurt/profile.md
{{COMPLETION_SUMMARY}}
───────────────────────────────────────────────────────
What's Next?
───────────────────────────────────────────────────────
{{NEXT_STEPS}}
Need help? Type /help or see .kurt/profile.md for your setup
```
**Completion summary based on what was done:**
- If content mapped: "✓ Content mapped: X documents"
- If rules extracted: "✓ Rules extracted: publisher, style, personas"
- If workflow mentioned: "Suggestion: Run `workflow-skill add` to codify your workflow"
**Next steps based on gaps:**
- If no content: "1. Map content: kurt map url <website>"
- If no rules: "2. Extract rules: writing-rules-skill publisher --auto-discover"
- If workflow mentioned: "3. Define workflow: workflow-skill add"
- Always: "4. Create project: /create-project"
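The conditional summary above can be sketched as a small shell fragment. The flag names here are illustrative, not the skill's actual variable names; in the real flow they would come from onboarding-data.json:

```shell
# Illustrative flags standing in for values gathered during onboarding.
CONTENT_MAPPED=true
DOCS=42
RULES_EXTRACTED=false

COMPLETION_SUMMARY=""
if [ "$CONTENT_MAPPED" = "true" ]; then
  COMPLETION_SUMMARY+="✓ Content mapped: $DOCS documents\n"
fi
if [ "$RULES_EXTRACTED" = "true" ]; then
  COMPLETION_SUMMARY+="✓ Rules extracted: publisher, style, personas\n"
fi

printf '%b' "$COMPLETION_SUMMARY"   # prints: ✓ Content mapped: 42 documents
```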
---
## Error Handling
**If subskill fails:**
```
⚠️ {{SUBSKILL_NAME}} failed
Error: {{ERROR_MESSAGE}}
Options:
a) Retry
b) Skip this step
c) Cancel onboarding
Choose: _
```
**If .kurt/ directory doesn't exist:**
```
⚠️ Kurt not initialized
Run: kurt init
Then retry: /start
```
---
## Integration Points
**Called by:**
- `/create-profile` slash command → create-profile operation
- `/update-profile` slash command → update-profile operation
- `project-management check-onboarding` → Can invoke setup-content, setup-analytics, setup-rules if incomplete
**Invokes:**
- `onboarding-skill/subskills/questionnaire` - Capture team context
- `onboarding-skill/subskills/map-content` - Map organizational content
- `onboarding-skill/subskills/setup-analytics` - Configure analytics
- `onboarding-skill/subskills/extract-foundation` - Extract foundation rules
- `onboarding-skill/subskills/create-profile` - Generate/update profile
- `onboarding-skill/subskills/update-profile` - Selective profile updates
- `project-management extract-rules --foundation-only` - Delegates rule extraction
- `writing-rules-skill` - For extracting rules (via extract-foundation)
**Calls:**
- `kurt CLI` - For content mapping (`kurt map`), fetching (`kurt fetch`), analytics
- `writing-rules-skill` - For rule extraction
- `project-management extract-rules` - Delegates foundation rule extraction
**Creates:**
- `.kurt/profile.md` - Team profile with organizational context
- `.kurt/temp/onboarding-data.json` - Temporary data (deleted after create-profile)
- Foundation rules - Publisher profile, primary voice, personas
- Analytics configuration - Stored in profile.md
---
*This skill owns all organizational setup. Operations are composable and can be invoked independently or as part of create/update flows.*
---
## Referenced Files
> The following files are referenced in this skill and included for context.
### subskills/create-profile.md
```markdown
# Create Profile Subskill
**Purpose:** Generate `.kurt/profile.md` from collected onboarding data
**Parent Skill:** onboarding-skill
**Input:** `.kurt/temp/onboarding-data.json`
**Output:** `.kurt/profile.md`
---
## Overview
This subskill takes all collected data from the onboarding process and generates a complete team profile file.
---
## Step 1: Load Template and Data
```bash
# Load profile template
TEMPLATE=$(cat .kurt/templates/profile-template.md)
# Load onboarding data
DATA_FILE=".kurt/temp/onboarding-data.json"
# Extract all fields
COMPANY_NAME=$(jq -r '.company_name // "Not specified"' "$DATA_FILE")
TEAM_NAME=$(jq -r '.team_name // "Not specified"' "$DATA_FILE")
INDUSTRY=$(jq -r '.industry // "Not specified"' "$DATA_FILE")
TEAM_ROLE=$(jq -r '.team_role // "Content team"' "$DATA_FILE")
# Dates
CREATED_DATE=$(date +%Y-%m-%d)
UPDATED_DATE=$(date +%Y-%m-%d)
```
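The `// "default"` used above is jq's alternative operator: it substitutes the default whenever the key is missing or null, so a sparse data file still yields usable values:

```shell
# Missing key: falls back to the default.
echo '{}' | jq -r '.company_name // "Not specified"'   # prints: Not specified

# Present key: the real value wins.
echo '{"company_name":"Acme"}' | jq -r '.company_name // "Not specified"'   # prints: Acme
```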
---
## Step 2: Format Lists
### Goals
```bash
GOALS=$(jq -r '.goals[]?' "$DATA_FILE")
GOALS_LIST=""
if [ -n "$GOALS" ]; then
  while IFS= read -r goal; do
    GOALS_LIST+="- $goal\n"
  done <<< "$GOALS"
else
  GOALS_LIST="- To be defined\n"
fi
```
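The read-loop pattern above is repeated for each list below; it can be exercised standalone. Note that the `\n` escapes are stored literally and only expanded when the list is printed with `printf '%b'` (a hedged alternative to passing the list as the format string):

```shell
# Build a markdown bullet list from newline-separated values.
ITEMS=$'first goal\nsecond goal'
LIST=""
while IFS= read -r item; do
  LIST+="- $item\n"
done <<< "$ITEMS"

printf '%b' "$LIST"
# prints:
# - first goal
# - second goal
```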
### Content Types
```bash
CONTENT_TYPES=$(jq -r '.content_types[]?' "$DATA_FILE")
CONTENT_TYPES_LIST=""
if [ -n "$CONTENT_TYPES" ]; then
  while IFS= read -r type; do
    CONTENT_TYPES_LIST+="- $type\n"
  done <<< "$CONTENT_TYPES"
else
  CONTENT_TYPES_LIST="- To be defined\n"
fi
```
### Personas
```bash
KNOWN_PERSONAS=$(jq -r '.known_personas[]?' "$DATA_FILE")
PERSONA_DESC=$(jq -r '.persona_description // ""' "$DATA_FILE")
PERSONAS_TO_DISCOVER=$(jq -r '.personas_to_discover // false' "$DATA_FILE")
KNOWN_PERSONAS_LIST=""
TO_DISCOVER_PERSONAS=""
if [ -n "$KNOWN_PERSONAS" ]; then
  while IFS= read -r persona; do
    KNOWN_PERSONAS_LIST+="- $persona\n"
  done <<< "$KNOWN_PERSONAS"
else
  KNOWN_PERSONAS_LIST="None specified yet\n"
fi

if [ "$PERSONAS_TO_DISCOVER" = "true" ]; then
  TO_DISCOVER_PERSONAS="To be extracted from content"
  if [ -n "$PERSONA_DESC" ]; then
    TO_DISCOVER_PERSONAS+=": $PERSONA_DESC"
  fi
else
  TO_DISCOVER_PERSONAS="N/A"
fi
```
### Sources
```bash
COMPANY_WEBSITE=$(jq -r '.company_website // "Not specified"' "$DATA_FILE")
DOCS_URL=$(jq -r '.docs_url // ""' "$DATA_FILE")
BLOG_URL=$(jq -r '.blog_url // ""' "$DATA_FILE")
RESEARCH_SOURCES=$(jq -r '.research_sources[]?' "$DATA_FILE")
OTHER_COMPANY_SOURCES=""
[ -n "$DOCS_URL" ] && OTHER_COMPANY_SOURCES+="- Documentation: $DOCS_URL\n"
[ -n "$BLOG_URL" ] && OTHER_COMPANY_SOURCES+="- Blog: $BLOG_URL\n"
RESEARCH_SOURCES_LIST=""
if [ -n "$RESEARCH_SOURCES" ]; then
  while IFS= read -r source; do
    RESEARCH_SOURCES_LIST+="- $source\n"
  done <<< "$RESEARCH_SOURCES"
else
  RESEARCH_SOURCES_LIST="None specified\n"
fi

# Content status
CONTENT_FETCHED=$(jq -r '.content_fetched // false' "$DATA_FILE")
TOTAL_DOCS=$(jq -r '.content_stats.total_documents // 0' "$DATA_FILE")
FETCHED_DOCS=$(jq -r '.content_stats.fetched // 0' "$DATA_FILE")

if [ "$CONTENT_FETCHED" = "true" ]; then
  COMPANY_CONTENT_STATUS="Fetched and indexed ($FETCHED_DOCS documents)"
  RESEARCH_CONTENT_STATUS="Fetched and indexed"
else
  COMPANY_CONTENT_STATUS="Not yet mapped"
  RESEARCH_CONTENT_STATUS="Not yet mapped"
fi
```
### CMS
```bash
CMS_PLATFORM=$(jq -r '.cms_platform // "none"' "$DATA_FILE")
CMS_CONFIGURED=$(jq -r '.cms_configured // false' "$DATA_FILE")
if [ "$CMS_PLATFORM" != "none" ] && [ "$CMS_CONFIGURED" = "true" ]; then
  CMS_STATUS="Configured ($CMS_PLATFORM)"
else
  CMS_STATUS="Not configured"
fi
```
### Analytics
```bash
ANALYTICS_CONFIGURED=$(jq -r '.analytics_configured // false' "$DATA_FILE")
ANALYTICS_DOMAINS_JSON=$(jq -r '.analytics_domains // []' "$DATA_FILE")
ANALYTICS_DOMAINS_COUNT=$(echo "$ANALYTICS_DOMAINS_JSON" | jq 'length')
if [ "$ANALYTICS_CONFIGURED" = "true" ] && [ "$ANALYTICS_DOMAINS_COUNT" -gt 0 ]; then
  # Analytics is configured; format domain details
  ANALYTICS_DOMAINS=""
  for i in $(seq 0 $((ANALYTICS_DOMAINS_COUNT - 1))); do
    DOMAIN=$(echo "$ANALYTICS_DOMAINS_JSON" | jq -r ".[$i].domain")
    PLATFORM=$(echo "$ANALYTICS_DOMAINS_JSON" | jq -r ".[$i].platform")
    LAST_SYNCED=$(echo "$ANALYTICS_DOMAINS_JSON" | jq -r ".[$i].last_synced")
    PAGEVIEWS=$(echo "$ANALYTICS_DOMAINS_JSON" | jq -r ".[$i].pageviews_60d")
    DOCS_WITH_DATA=$(echo "$ANALYTICS_DOMAINS_JSON" | jq -r ".[$i].documents_with_data")
    P25=$(echo "$ANALYTICS_DOMAINS_JSON" | jq -r ".[$i].thresholds.p25")
    P75=$(echo "$ANALYTICS_DOMAINS_JSON" | jq -r ".[$i].thresholds.p75")

    ANALYTICS_DOMAINS+="**$DOMAIN** ($PLATFORM)\n"
    ANALYTICS_DOMAINS+="- Last synced: $LAST_SYNCED\n"
    ANALYTICS_DOMAINS+="- Documents with traffic data: $DOCS_WITH_DATA\n"
    ANALYTICS_DOMAINS+="- Total pageviews (60d): $PAGEVIEWS\n"
    ANALYTICS_DOMAINS+="- Traffic thresholds:\n"
    ANALYTICS_DOMAINS+="  - HIGH: >$P75 views/month (top 25%)\n"
    ANALYTICS_DOMAINS+="  - MEDIUM: $P25-$P75 views/month (middle 50%)\n"
    ANALYTICS_DOMAINS+="  - LOW: 1-$P25 views/month (bottom 25%)\n"
    ANALYTICS_DOMAINS+="  - ZERO: 0 views/month\n\n"
  done
else
  # Analytics not configured
  ANALYTICS_DOMAINS=""
fi
```
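Running one `jq` process per field per domain re-parses the JSON many times. A hedged alternative, assuming the same `analytics_domains` shape, emits one formatted block per domain in a single pass using jq string interpolation:

```shell
# Hypothetical input matching the shape read above.
ANALYTICS_DOMAINS_JSON='[{"domain":"acme.example","platform":"plausible","pageviews_60d":1234}]'

# A single jq pass formats every domain; \(...) interpolates fields into the string.
echo "$ANALYTICS_DOMAINS_JSON" | jq -r '.[] |
  "**\(.domain)** (\(.platform))\n- Total pageviews (60d): \(.pageviews_60d)"'
# prints:
# **acme.example** (plausible)
# - Total pageviews (60d): 1234
```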
### Competitors
```bash
COMPETITORS_JSON=$(jq -r '.competitors // []' "$DATA_FILE")
COMPETITORS_COUNT=$(echo "$COMPETITORS_JSON" | jq 'length')
if [ "$COMPETITORS_COUNT" -gt 0 ]; then
  COMPETITORS_LIST=""
  for i in $(seq 0 $((COMPETITORS_COUNT - 1))); do
    COMPETITOR=$(echo "$COMPETITORS_JSON" | jq -r ".[$i]")
    COMPETITORS_LIST+="- $COMPETITOR\n"
  done
else
  COMPETITORS_LIST="None specified yet\n\nAdd competitors during competitive analysis projects or update profile.md directly."
fi
```
### Workflows
```bash
WORKFLOW_DESC=$(jq -r '.workflow_description // ""' "$DATA_FILE")
HAS_WORKFLOW=$(jq -r '.has_workflow_to_create // false' "$DATA_FILE")
if [ -n "$WORKFLOW_DESC" ]; then
  WORKFLOWS_LIST="To be created: $WORKFLOW_DESC\n\nRun: workflow-skill add"
else
  WORKFLOWS_LIST="None defined yet\n\nRun: workflow-skill add to create workflows"
fi
```
---
## Step 3: Format Extracted Rules
### Publisher
```bash
PUBLISHER_EXTRACTED=$(jq -r '.rules_extracted.publisher.extracted // false' "$DATA_FILE")
PUBLISHER_PATH=$(jq -r '.rules_extracted.publisher.path // ""' "$DATA_FILE")
if [ "$PUBLISHER_EXTRACTED" = "true" ]; then
  PUBLISHER_STATUS="Extracted"
else
  PUBLISHER_STATUS="Not yet extracted"
  PUBLISHER_PATH="Run: writing-rules-skill publisher --auto-discover"
fi
```
### Style
```bash
STYLE_EXTRACTED=$(jq -r '.rules_extracted.style.extracted // false' "$DATA_FILE")
STYLE_PATH=$(jq -r '.rules_extracted.style.path // ""' "$DATA_FILE")
STYLE_NAME=$(jq -r '.rules_extracted.style.name // ""' "$DATA_FILE")
if [ "$STYLE_EXTRACTED" = "true" ]; then
  STYLE_COUNT=1
  STYLE_LIST="- $STYLE_NAME ($STYLE_PATH)\n"
else
  STYLE_COUNT=0
  STYLE_LIST="None yet. Run: writing-rules-skill style --type corporate --auto-discover\n"
fi
```
### Structure
```bash
# Count structure files (may exist from previous work)
STRUCTURE_FILES=$(ls rules/structure/*.md 2>/dev/null | wc -l)
if [ "$STRUCTURE_FILES" -gt 0 ]; then
  STRUCTURE_COUNT=$STRUCTURE_FILES
  STRUCTURE_LIST=""
  for file in rules/structure/*.md; do
    name=$(basename "$file" .md)
    STRUCTURE_LIST+="- $name ($file)\n"
  done
else
  STRUCTURE_COUNT=0
  STRUCTURE_LIST="None yet. Run: writing-rules-skill structure --type <type> --auto-discover\n"
fi
```
### Personas
```bash
PERSONAS_EXTRACTED=$(jq -r '.rules_extracted.personas.extracted // false' "$DATA_FILE")
PERSONA_COUNT=$(jq -r '.rules_extracted.personas.count // 0' "$DATA_FILE")
PERSONA_FILES=$(jq -r '.rules_extracted.personas.files[]?' "$DATA_FILE")
if [ "$PERSONAS_EXTRACTED" = "true" ]; then
  PERSONA_LIST=""
  while IFS= read -r file; do
    name=$(basename "$file" .md)
    PERSONA_LIST+="- $name ($file)\n"
  done <<< "$PERSONA_FILES"
else
  PERSONA_LIST="None yet. Run: writing-rules-skill persona --audience-type all --auto-discover\n"
fi
```
### Custom Rules
```bash
# Check if custom rule types exist
CUSTOM_RULES=$(yq '.rule_types | to_entries | .[] | select(.value.built_in != true) | .key' rules/rules-config.yaml 2>/dev/null)
if [ -n "$CUSTOM_RULES" ]; then
  CUSTOM_RULES_STATUS="Custom rule types defined:\n"
  while IFS= read -r rule_type; do
    CUSTOM_RULES_STATUS+="- $rule_type\n"
  done <<< "$CUSTOM_RULES"
else
  CUSTOM_RULES_STATUS="None. Create with: writing-rules-skill add\n"
fi
```
---
## Step 4: Calculate Next Steps
```bash
NEXT_STEPS_LIST=""
# Content not mapped
if [ "$TOTAL_DOCS" -eq 0 ]; then
  NEXT_STEPS_LIST+="1. Map content sources:\n"
  [ -n "$COMPANY_WEBSITE" ] && NEXT_STEPS_LIST+="   kurt map url $COMPANY_WEBSITE --cluster-urls\n"
  NEXT_STEPS_LIST+="\n"
fi

# Content mapped but not fetched
if [ "$TOTAL_DOCS" -gt 0 ] && [ "$CONTENT_FETCHED" != "true" ]; then
  NEXT_STEPS_LIST+="2. Fetch content:\n"
  NEXT_STEPS_LIST+="   kurt fetch --with-status NOT_FETCHED\n\n"
fi

# Publisher not extracted
if [ "$PUBLISHER_EXTRACTED" != "true" ]; then
  NEXT_STEPS_LIST+="3. Extract publisher profile:\n"
  NEXT_STEPS_LIST+="   writing-rules-skill publisher --auto-discover\n\n"
fi

# Style not extracted
if [ "$STYLE_EXTRACTED" != "true" ]; then
  NEXT_STEPS_LIST+="4. Extract style guide:\n"
  NEXT_STEPS_LIST+="   writing-rules-skill style --type corporate --auto-discover\n\n"
fi

# Personas not extracted
if [ "$PERSONAS_EXTRACTED" != "true" ]; then
  NEXT_STEPS_LIST+="5. Extract personas:\n"
  NEXT_STEPS_LIST+="   writing-rules-skill persona --audience-type all --auto-discover\n\n"
fi

# Workflow to create
if [ "$HAS_WORKFLOW" = "true" ]; then
  NEXT_STEPS_LIST+="6. Define workflow:\n"
  NEXT_STEPS_LIST+="   workflow-skill add\n\n"
fi
# If no gaps were found, show the all-set message; otherwise always
# close the list by suggesting project creation. (Checking emptiness
# after unconditionally appending would make the check dead code.)
if [ -z "$NEXT_STEPS_LIST" ]; then
  NEXT_STEPS_LIST="You're all set! Create a project with:\n  /create-project\n"
else
  NEXT_STEPS_LIST+="7. Create your first project:\n"
  NEXT_STEPS_LIST+="   /create-project\n"
fi
```
---
## Step 5: Replace Template Placeholders
```bash
# Replace all placeholders
PROFILE_CONTENT="$TEMPLATE"
# Simple replacements
PROFILE_CONTENT="${PROFILE_CONTENT//\{\{CREATED_DATE\}\}/$CREATED_DATE}"
PROFILE_CONTENT="${PROFILE_CONTENT//\{\{UPDATED_DATE\}\}/$UPDATED_DATE}"
PROFILE_CONTENT="${PROFILE_CONTENT//\{\{COMPANY_NAME\}\}/$COMPANY_NAME}"
PROFILE_CONTENT="${PROFILE_CONTENT//\{\{INDUSTRY\}\}/$INDUSTRY}"
PROFILE_CONTENT="${PROFILE_CONTENT//\{\{TEAM_NAME\}\}/$TEAM_NAME}"
PROFILE_CONTENT="${PROFILE_CONTENT//\{\{TEAM_ROLE\}\}/$TEAM_ROLE}"
# Lists (printf '%b' expands the stored \n escapes without treating
# the list itself as a format string)
PROFILE_CONTENT="${PROFILE_CONTENT//\{\{GOALS_LIST\}\}/$(printf '%b' "$GOALS_LIST")}"
PROFILE_CONTENT="${PROFILE_CONTENT//\{\{CONTENT_TYPES_LIST\}\}/$(printf '%b' "$CONTENT_TYPES_LIST")}"
PROFILE_CONTENT="${PROFILE_CONTENT//\{\{KNOWN_PERSONAS_LIST\}\}/$(printf '%b' "$KNOWN_PERSONAS_LIST")}"
PROFILE_CONTENT="${PROFILE_CONTENT//\{\{TO_DISCOVER_PERSONAS\}\}/$TO_DISCOVER_PERSONAS}"

# Sources
PROFILE_CONTENT="${PROFILE_CONTENT//\{\{COMPANY_WEBSITE\}\}/$COMPANY_WEBSITE}"
PROFILE_CONTENT="${PROFILE_CONTENT//\{\{DOCS_URL\}\}/$DOCS_URL}"
PROFILE_CONTENT="${PROFILE_CONTENT//\{\{BLOG_URL\}\}/$BLOG_URL}"
PROFILE_CONTENT="${PROFILE_CONTENT//\{\{OTHER_COMPANY_SOURCES\}\}/$(printf '%b' "$OTHER_COMPANY_SOURCES")}"
PROFILE_CONTENT="${PROFILE_CONTENT//\{\{RESEARCH_SOURCES_LIST\}\}/$(printf '%b' "$RESEARCH_SOURCES_LIST")}"
PROFILE_CONTENT="${PROFILE_CONTENT//\{\{COMPANY_CONTENT_STATUS\}\}/$COMPANY_CONTENT_STATUS}"
PROFILE_CONTENT="${PROFILE_CONTENT//\{\{RESEARCH_CONTENT_STATUS\}\}/$RESEARCH_CONTENT_STATUS}"
PROFILE_CONTENT="${PROFILE_CONTENT//\{\{CMS_PLATFORM\}\}/$CMS_PLATFORM}"
PROFILE_CONTENT="${PROFILE_CONTENT//\{\{CMS_STATUS\}\}/$CMS_STATUS}"
# Analytics (handle conditional sections)
if [ "$ANALYTICS_CONFIGURED" = "true" ]; then
  # Remove the {{^ANALYTICS_CONFIGURED}} (not-configured) section
  PROFILE_CONTENT=$(echo "$PROFILE_CONTENT" | sed '/{{^ANALYTICS_CONFIGURED}}/,/{{\/ANALYTICS_CONFIGURED}}/d')
  # Remove conditional tags
  PROFILE_CONTENT=$(echo "$PROFILE_CONTENT" | sed 's/{{#ANALYTICS_CONFIGURED}}//')
  PROFILE_CONTENT=$(echo "$PROFILE_CONTENT" | sed 's/{{\/ANALYTICS_CONFIGURED}}//')
  # Replace analytics placeholders
  PROFILE_CONTENT=$(echo "$PROFILE_CONTENT" | sed "s|{{#ANALYTICS_DOMAINS}}|$ANALYTICS_DOMAINS|")
  PROFILE_CONTENT=$(echo "$PROFILE_CONTENT" | sed 's/{{\/ANALYTICS_DOMAINS}}//')
else
  # Remove the {{#ANALYTICS_CONFIGURED}} (configured) section
  PROFILE_CONTENT=$(echo "$PROFILE_CONTENT" | sed '/{{#ANALYTICS_CONFIGURED}}/,/{{\/ANALYTICS_CONFIGURED}}/d')
  # Remove conditional tags for the not-configured section
  PROFILE_CONTENT=$(echo "$PROFILE_CONTENT" | sed 's/{{^ANALYTICS_CONFIGURED}}//')
  PROFILE_CONTENT=$(echo "$PROFILE_CONTENT" | sed 's/{{\/ANALYTICS_CONFIGURED}}//')
fi
# Competitors
PROFILE_CONTENT="${PROFILE_CONTENT//\{\{COMPETITORS_LIST\}\}/$(printf '%b' "$COMPETITORS_LIST")}"

# Workflows
PROFILE_CONTENT="${PROFILE_CONTENT//\{\{WORKFLOWS_LIST\}\}/$(printf '%b' "$WORKFLOWS_LIST")}"

# Rules
PROFILE_CONTENT="${PROFILE_CONTENT//\{\{PUBLISHER_STATUS\}\}/$PUBLISHER_STATUS}"
PROFILE_CONTENT="${PROFILE_CONTENT//\{\{PUBLISHER_PATH\}\}/$PUBLISHER_PATH}"
PROFILE_CONTENT="${PROFILE_CONTENT//\{\{STYLE_COUNT\}\}/$STYLE_COUNT}"
PROFILE_CONTENT="${PROFILE_CONTENT//\{\{STYLE_LIST\}\}/$(printf '%b' "$STYLE_LIST")}"
PROFILE_CONTENT="${PROFILE_CONTENT//\{\{STRUCTURE_COUNT\}\}/$STRUCTURE_COUNT}"
PROFILE_CONTENT="${PROFILE_CONTENT//\{\{STRUCTURE_LIST\}\}/$(printf '%b' "$STRUCTURE_LIST")}"
PROFILE_CONTENT="${PROFILE_CONTENT//\{\{PERSONA_COUNT\}\}/$PERSONA_COUNT}"
PROFILE_CONTENT="${PROFILE_CONTENT//\{\{PERSONA_LIST\}\}/$(printf '%b' "$PERSONA_LIST")}"
PROFILE_CONTENT="${PROFILE_CONTENT//\{\{CUSTOM_RULES_STATUS\}\}/$(printf '%b' "$CUSTOM_RULES_STATUS")}"

# Content stats
PROFILE_CONTENT="${PROFILE_CONTENT//\{\{TOTAL_DOCS\}\}/$TOTAL_DOCS}"
PROFILE_CONTENT="${PROFILE_CONTENT//\{\{COMPANY_DOCS\}\}/$FETCHED_DOCS}"
PROFILE_CONTENT="${PROFILE_CONTENT//\{\{RESEARCH_DOCS\}\}/0}"
PROFILE_CONTENT="${PROFILE_CONTENT//\{\{LAST_INDEXED\}\}/$CREATED_DATE}"

# Next steps
PROFILE_CONTENT="${PROFILE_CONTENT//\{\{NEXT_STEPS_LIST\}\}/$(printf '%b' "$NEXT_STEPS_LIST")}"
# User notes (empty for now)
PROFILE_CONTENT="${PROFILE_CONTENT//\{\{USER_NOTES\}\}/}"
```
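The `${var//pattern/replacement}` expansion used throughout replaces every occurrence of the pattern, and the backslashes escape the braces so `{{...}}` is matched literally:

```shell
# Every occurrence of the placeholder is replaced, not just the first.
TEMPLATE="Company: {{COMPANY_NAME}} ({{COMPANY_NAME}})"
COMPANY_NAME="Acme"
RESULT="${TEMPLATE//\{\{COMPANY_NAME\}\}/$COMPANY_NAME}"
echo "$RESULT"   # prints: Company: Acme (Acme)
```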
---
## Step 6: Write Profile File
```bash
echo "Creating your Kurt profile..."
# Write to .kurt/profile.md
echo "$PROFILE_CONTENT" > .kurt/profile.md
if [ $? -eq 0 ]; then
  echo "✓ Profile created: .kurt/profile.md"
else
  echo "❌ Error: Could not create profile file"
  echo ""
  echo "Check permissions on .kurt/ directory"
  exit 1
fi
```
---
## Step 7: Display Profile Preview
```
───────────────────────────────────────────────────────
Profile Created
───────────────────────────────────────────────────────
Company: {{COMPANY_NAME}}
Team: {{TEAM_NAME}}
Content types: {{CONTENT_TYPE_COUNT}}
Rules extracted: {{RULES_EXTRACTED_COUNT}}
Content indexed: {{TOTAL_DOCS}} documents
───────────────────────────────────────────────────────
View your full profile:
cat .kurt/profile.md
Update your profile anytime by editing:
.kurt/profile.md
```
---
## Step 8: Return Success
Return success to parent skill for final success message.
---
## Error Handling
**If template missing:**
```
❌ Error: Profile template not found
Expected: .kurt/templates/profile-template.md
Please check your Kurt plugin installation.
```
**If .kurt/ directory not writable:**
```
❌ Error: Cannot write to .kurt/ directory
Check permissions:
ls -la .kurt/
```
**If data file missing:**
```
❌ Error: Onboarding data not found
Expected: .kurt/temp/onboarding-data.json
This file should have been created by the questionnaire subskill.
Please restart onboarding: /start
```
---
## Output
Creates `.kurt/profile.md` with complete team profile based on all collected data.
---
*This subskill generates the final profile file from collected onboarding data.*
```
### subskills/update-profile.md
```markdown
# Update Profile Subskill
**Purpose:** Selectively update existing profile
**Parent Skill:** onboarding-skill
**Pattern:** Menu-driven updates to profile sections
---
## Overview
This subskill allows users to update specific parts of their team profile without running the full onboarding flow.
**Update options:**
- Content map (add/remove organizational domains)
- Analytics configuration (add domains, re-sync)
- Foundation rules (re-extract with new content)
- Team information (company/team details)
---
## Step 1: Check Profile Exists
```bash
# Verify profile exists
if [ ! -f ".kurt/profile.md" ]; then
  echo "⚠️ No profile found"
  echo ""
  echo "You need to create a profile first."
  echo ""
  echo "Run: /create-profile"
  exit 1
fi

echo "✓ Profile found"
echo ""
```
---
## Step 2: Show Update Menu
```
═══════════════════════════════════════════════════════
Update Profile
═══════════════════════════════════════════════════════
What would you like to update?
a) Content Map - Add/remove organizational domains
b) Analytics - Configure or update analytics for domains
c) Foundation Rules - Re-extract publisher, style, personas
d) Team Information - Update company/team details
e) All of the above
f) Cancel
Choose: _
```
**Wait for user response**
---
## Step 3: Route to Operations
Based on user choice, invoke the appropriate operation(s):
### Option (a): Update Content Map
```
Updating content map...
```
**Invoke:** `onboarding setup-content`
This will:
1. Show current domains in content map
2. Ask if user wants to add or remove domains
3. For new domains: Run map → cluster → fetch workflow
4. For removal: Remove from sources/ and update profile
**After completion:**
```
Content map updated.
Would you like to re-extract foundation rules with the new content? (Y/n):
```
If yes, invoke `onboarding setup-rules`
---
### Option (b): Update Analytics
```
Updating analytics configuration...
```
**Invoke:** `onboarding setup-analytics`
This will:
1. Show current analytics configuration
2. Offer to add new domains or update existing
3. Run analytics onboard/sync for selected domains
---
### Option (c): Update Foundation Rules
```
Re-extracting foundation rules...
```
**Prerequisites check:**
```bash
# Verify content is available
content_count=$(kurt content list --with-status FETCHED 2>/dev/null | wc -l | tr -d ' ')
if [ "$content_count" -lt 10 ]; then
  echo "⚠️ Need at least 10 fetched pages to extract rules"
  echo ""
  echo "Would you like to add more content first? (Y/n):"
  read -p "> " choice
  if [ "$choice" != "n" ]; then
    # Invoke setup-content
    onboarding setup-content
  fi
fi
```
**Invoke:** `onboarding setup-rules`
This delegates to `project-management extract-rules --foundation-only` which will:
1. Show current rules
2. Offer to re-extract publisher, style, or personas
3. Use writing-rules-skill with preview mode
---
### Option (d): Update Team Information
```
Updating team information...
```
**Invoke:** `onboarding-skill/subskills/questionnaire --update-only`
This will:
1. Show current team information from profile
2. Ask which fields to update
3. Update onboarding-data.json (or just update directly)
---
### Option (e): Update All
Run all update operations in sequence:
```
Running complete profile update...
Step 1/4: Content Map
```
Invoke: `onboarding setup-content`
```
Step 2/4: Analytics
```
Invoke: `onboarding setup-analytics`
```
Step 3/4: Foundation Rules
```
Invoke: `onboarding setup-rules`
```
Step 4/4: Team Information
```
Invoke: `onboarding-skill/subskills/questionnaire --update-only`
---
### Option (f): Cancel
```
Update cancelled.
```
Exit without changes.
---
## Step 4: Regenerate Profile
After any updates complete, regenerate the profile.md file:
```
Updating profile...
```
**Invoke:** `onboarding-skill/subskills/create-profile --update`
This will:
1. Load existing profile data
2. Merge with any updated information
3. Regenerate `.kurt/profile.md`
---
## Step 5: Show Updated Summary
```
═══════════════════════════════════════════════════════
✅ Profile Updated
═══════════════════════════════════════════════════════
{{COMPANY_NAME}} - {{TEAM_NAME}}
Updated:
{{#if CONTENT_UPDATED}}
✓ Content Map - {{DOMAIN_COUNT}} domains
{{/if}}
{{#if ANALYTICS_UPDATED}}
✓ Analytics - {{ANALYTICS_DOMAIN_COUNT}} domains configured
{{/if}}
{{#if RULES_UPDATED}}
✓ Foundation Rules - Re-extracted
{{/if}}
{{#if INFO_UPDATED}}
✓ Team Information - Updated
{{/if}}
Profile location: .kurt/profile.md
───────────────────────────────────────────────────────
What's Next?
───────────────────────────────────────────────────────
{{#if CONTENT_UPDATED or RULES_UPDATED}}
Your updated foundation is ready for project work.
{{/if}}
{{#if NOT ANALYTICS_CONFIGURED}}
💡 Consider setting up analytics for traffic-based content prioritization.
Run: /update-profile → choose option (b)
{{/if}}
Ready to create a project? Run: /create-project
```
---
## Error Handling
### If operation fails
```
⚠️ Update failed: {{OPERATION_NAME}}
Error: {{ERROR_MESSAGE}}
Options:
a) Retry this operation
b) Skip and continue with other updates
c) Cancel all updates
Choose: _
```
### If profile is corrupted
```
⚠️ Profile appears to be corrupted
The profile file exists but couldn't be parsed.
Options:
a) View profile file (for debugging)
b) Backup and recreate (runs /create-profile)
c) Cancel
Choose: _
```
---
## Key Design Principles
1. **Selective updates** - Users choose what to update, not all-or-nothing
2. **Delegates to operations** - Reuses setup-content, setup-analytics, setup-rules
3. **Profile regeneration** - Always regenerates profile.md after changes
4. **Non-destructive** - Can add content/analytics without removing existing
5. **Fail gracefully** - Errors in one operation don't block others
---
*This subskill provides selective profile updates by orchestrating onboarding operations.*
```
### subskills/map-content.md
```markdown
# Map Content Subskill
**Purpose:** Map and fetch content from sources captured in questionnaire
**Parent Skill:** onboarding-skill
**Input:** `.kurt/temp/onboarding-data.json`
**Output:** Updates JSON with content statistics
---
## Overview
This subskill takes the sources from the questionnaire and:
1. Maps content using `kurt map url`
2. Optionally fetches content using `kurt fetch`
3. Updates onboarding-data.json with content stats
---
## Step 1: Load Sources from JSON
```bash
# Read sources from onboarding data
COMPANY_WEBSITE=$(jq -r '.company_website // empty' .kurt/temp/onboarding-data.json)
DOCS_URL=$(jq -r '.docs_url // empty' .kurt/temp/onboarding-data.json)
RESEARCH_SOURCES=$(jq -r '.research_sources[]?' .kurt/temp/onboarding-data.json)
CMS_PLATFORM=$(jq -r '.cms_platform // "none"' .kurt/temp/onboarding-data.json)
CMS_CONFIGURED=$(jq -r '.cms_configured // false' .kurt/temp/onboarding-data.json)
# Build list of all URLs to map
URLS_TO_MAP=()
[ -n "$COMPANY_WEBSITE" ] && URLS_TO_MAP+=("$COMPANY_WEBSITE")
[ -n "$DOCS_URL" ] && URLS_TO_MAP+=("$DOCS_URL")
while IFS= read -r url; do
[ -n "$url" ] && URLS_TO_MAP+=("$url")
done <<< "$RESEARCH_SOURCES"
```
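The array-building pattern above can be exercised in isolation. A minimal, self-contained sketch with hard-coded sample values standing in for the jq output (the URLs are illustrative only):

```shell
# Demo of the URL-collection pattern from Step 1, with sample data
# standing in for the jq output. URLs are illustrative only.
COMPANY_WEBSITE="https://example.com"
DOCS_URL=""   # empty values are skipped by the -n guard
RESEARCH_SOURCES='https://a.example.com
https://b.example.com'

URLS_TO_MAP=()
[ -n "$COMPANY_WEBSITE" ] && URLS_TO_MAP+=("$COMPANY_WEBSITE")
[ -n "$DOCS_URL" ] && URLS_TO_MAP+=("$DOCS_URL")
while IFS= read -r url; do
  [ -n "$url" ] && URLS_TO_MAP+=("$url")
done <<< "$RESEARCH_SOURCES"

echo "${#URLS_TO_MAP[@]}"
# → 3
```

The `IFS= read -r` loop preserves each line verbatim, and the `-n` guard drops blank lines, so an unset source never adds an empty entry.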
---
## Step 2: Map Content Sources
```
───────────────────────────────────────────────────────
Mapping Content Sources
───────────────────────────────────────────────────────
```
**For each URL:**
```bash
for url in "${URLS_TO_MAP[@]}"; do
  # Inner loop so "retry" re-attempts the SAME URL and checks the result;
  # a bare retry without re-checking would silently swallow a second failure
  while true; do
    echo ""
    echo "Discovering content from: $url"
    echo ""
    # Map URL with clustering
    if kurt map url "$url" --cluster-urls; then
      echo "✓ Mapped successfully"
      break
    fi
    echo "⚠️ Failed to map: $url"
    echo ""
    echo "Options:"
    echo "  a) Retry"
    echo "  b) Skip this source"
    echo "  c) Cancel mapping"
    echo ""
    read -p "Choose: " choice
    case "$choice" in
      a) continue ;;                         # Retry this URL
      b) echo "Skipped: $url"; break ;;      # Skip this source
      c) echo "Mapping cancelled"; exit 1 ;; # Cancel
    esac
  done
done
```
**If CMS configured:**
```bash
if [ "$CMS_CONFIGURED" = "true" ] && [ "$CMS_PLATFORM" != "none" ]; then
echo ""
echo "Mapping CMS content from: $CMS_PLATFORM"
echo ""
# Map CMS with clustering
kurt map cms --platform "$CMS_PLATFORM" --cluster-urls
if [ $? -eq 0 ]; then
echo "✓ CMS content mapped"
else
echo "⚠️ Failed to map CMS content"
fi
fi
```
---
## Step 3: Display Mapping Summary
```bash
# Get content statistics
TOTAL_MAPPED=$(kurt content list --with-status NOT_FETCHED | wc -l)
CLUSTERS=$(kurt cluster-urls --format json | jq '. | length')
```
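One portability note on the counts above: `wc -l` counts newline characters, and `echo` always appends one, so counting the lines of a shell variable via `echo "$var" | wc -l` yields 1 even when the variable is empty. Piping a command's output directly, as done here, avoids that trap:

```shell
# Direct pipe of truly empty input: zero lines
printf '' | wc -l | tr -d ' '
# → 0

# echo of an empty string still emits a newline: one "line"
echo "" | wc -l | tr -d ' '
# → 1
```

The `tr -d ' '` strips the leading padding that BSD `wc` (e.g. on macOS) adds to its output.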
```
───────────────────────────────────────────────────────
Content Discovery Summary
───────────────────────────────────────────────────────
✓ Discovered {{TOTAL_MAPPED}} documents
✓ Organized into {{CLUSTERS}} topic clusters
{{#if COMPANY_WEBSITE}}
• {{COMPANY_WEBSITE}}: {{COMPANY_PAGES}} pages
{{/if}}
{{#if DOCS_URL}}
• {{DOCS_URL}}: {{DOCS_PAGES}} pages
{{/if}}
{{#each RESEARCH_SOURCES}}
• {{this}}: {{PAGES}} pages
{{/each}}
{{#if CMS_CONFIGURED}}
• CMS ({{CMS_PLATFORM}}): {{CMS_DOCS}} documents
{{/if}}
───────────────────────────────────────────────────────
Would you like to fetch this content now? (y/n):
```
**Wait for user response**
---
## Step 4: Fetch Content (If User Confirms)
**If yes:**
```
Fetching and indexing content...
This may take a few minutes...
```
```bash
# Fetch all mapped content
kurt fetch --with-status NOT_FETCHED
if [ $? -eq 0 ]; then
FETCHED_COUNT=$(kurt content list --with-status FETCHED | wc -l)
echo ""
echo "✓ $FETCHED_COUNT documents fetched and indexed"
echo "✓ Content ready for analysis"
echo ""
# Update JSON
jq '.content_fetched = true' .kurt/temp/onboarding-data.json > .kurt/temp/onboarding-data.tmp.json
mv .kurt/temp/onboarding-data.tmp.json .kurt/temp/onboarding-data.json
else
echo "⚠️ Some content failed to fetch"
echo ""
echo "You can retry later with: kurt fetch --with-status NOT_FETCHED"
echo ""
# Update JSON with partial fetch
jq '.content_fetched = false' .kurt/temp/onboarding-data.json > .kurt/temp/onboarding-data.tmp.json
mv .kurt/temp/onboarding-data.tmp.json .kurt/temp/onboarding-data.json
fi
```
**If no:**
```
Skipped. You can fetch content later with:
kurt fetch --with-status NOT_FETCHED
```
```bash
# Update JSON
jq '.content_fetched = false' .kurt/temp/onboarding-data.json > .kurt/temp/onboarding-data.tmp.json
mv .kurt/temp/onboarding-data.tmp.json .kurt/temp/onboarding-data.json
```
---
## Step 5: Update JSON with Content Stats
```bash
# Get final stats
TOTAL_DOCS=$(kurt content list | wc -l)
FETCHED_DOCS=$(kurt content list --with-status FETCHED | wc -l)
NOT_FETCHED_DOCS=$(kurt content list --with-status NOT_FETCHED | wc -l)
# Update JSON with stats
jq --arg total "$TOTAL_DOCS" \
--arg fetched "$FETCHED_DOCS" \
--arg not_fetched "$NOT_FETCHED_DOCS" \
'.content_stats = {
"total_documents": ($total | tonumber),
"fetched": ($fetched | tonumber),
"not_fetched": ($not_fetched | tonumber)
}' .kurt/temp/onboarding-data.json > .kurt/temp/onboarding-data.tmp.json
mv .kurt/temp/onboarding-data.tmp.json .kurt/temp/onboarding-data.json
```
---
## Step 6: Return Control to Parent
Return success. Parent skill (onboarding-skill) continues to the next step (setup-analytics, then extract-foundation).
---
## Error Handling
**If kurt CLI not available:**
```
❌ Error: kurt CLI not found
Please install kurt-core:
pip install kurt-core
Then verify installation:
kurt --version
```
Exit with error code 1.
**If kurt database not initialized:**
```
❌ Error: Kurt database not initialized
Please run:
kurt init
Then retry: /start
```
Exit with error code 1.
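Both availability checks can be fronted by a small preflight helper. A sketch using only the standard `command -v` lookup; the database-initialized check is tool-specific and would need a real `kurt` invocation, so only the CLI-presence half is shown:

```shell
# Preflight sketch: verify a CLI exists before using it.
# `check_cli kurt` would gate the mapping steps above.
check_cli() {
  if ! command -v "$1" >/dev/null 2>&1; then
    echo "❌ Error: $1 CLI not found" >&2
    return 1
  fi
}

check_cli sh && echo "sh available"
# → sh available
```

Returning nonzero (rather than `exit 1` inside the function) lets the caller decide whether a missing tool is fatal.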
**If all URLs fail to map:**
```
⚠️ No content could be mapped
This might be because:
• URLs are inaccessible
• No sitemap found and crawling failed
• Network issues
Options:
a) Retry with different URLs
b) Skip content mapping (add sources later)
c) Cancel onboarding
Choose: _
```
**If fetch fails:**
```
⚠️ Content fetch failed
Error: {{ERROR_MESSAGE}}
Options:
a) Retry fetch
b) Continue without fetching (can fetch later)
c) Cancel onboarding
Choose: _
```
---
## Output Format
Updates `.kurt/temp/onboarding-data.json` with:
```json
{
...existing fields...,
"content_fetched": true,
"content_stats": {
"total_documents": 47,
"fetched": 47,
"not_fetched": 0
}
}
```
---
*This subskill handles all content mapping and fetching using kurt CLI.*
```
### subskills/setup-analytics.md
```markdown
# Setup Analytics Subskill
**Purpose:** Configure analytics for organizational domains (optional)
**Parent Skill:** onboarding-skill
**Input:** `.kurt/temp/onboarding-data.json`
**Output:** Updates JSON with analytics configuration
---
## Overview
This subskill offers analytics setup for the domains discovered during content mapping:
1. Detects which domains have content
2. Offers analytics setup (PostHog) for each domain
3. Tests connection and syncs initial data
4. Updates onboarding-data.json with analytics config
**Analytics is optional** - users can skip or set up later.
---
## Step 1: Check Prerequisites
```bash
# Verify content has been mapped
CONTENT_FETCHED=$(jq -r '.content_fetched // false' .kurt/temp/onboarding-data.json)
if [ "$CONTENT_FETCHED" != "true" ]; then
echo "⚠️ No content fetched yet. Skipping analytics setup."
echo ""
echo "You can set up analytics later with:"
echo " kurt analytics onboard <domain>"
echo ""
# Update JSON and skip
jq '.analytics_configured = false | .analytics_skipped = true' .kurt/temp/onboarding-data.json > .kurt/temp/onboarding-data.tmp.json
mv .kurt/temp/onboarding-data.tmp.json .kurt/temp/onboarding-data.json
exit 0
fi
```
---
## Step 2: Detect Domains from Content
```bash
# Get unique domains from fetched content
DOMAINS=$(kurt content list --with-status FETCHED --format json | \
jq -r '.[] | .source_url' | \
sed -E 's|^https?://([^/]+).*|\1|' | \
sed 's/^www\.//' | \
sort -u)
# Count non-empty domain lines (`echo | wc -l` would report 1 even when
# $DOMAINS is empty, so the zero-domain check below would never fire)
DOMAIN_COUNT=$(printf '%s' "$DOMAINS" | grep -c .)
if [ "$DOMAIN_COUNT" -eq 0 ]; then
echo "⚠️ No domains found in content. Skipping analytics."
jq '.analytics_configured = false | .analytics_skipped = true' .kurt/temp/onboarding-data.json > .kurt/temp/onboarding-data.tmp.json
mv .kurt/temp/onboarding-data.tmp.json .kurt/temp/onboarding-data.json
exit 0
fi
```
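The sed pipeline above can be verified on sample data. A self-contained demo (the hostnames are illustrative):

```shell
# Demo of the domain-normalization pipeline from Step 2 on sample URLs
urls='https://www.example.com/docs/page1
https://docs.example.com/guide
http://www.example.com/blog'

domains=$(printf '%s\n' "$urls" \
  | sed -E 's|^https?://([^/]+).*|\1|' \
  | sed 's/^www\.//' \
  | sort -u)

printf '%s\n' "$domains"
# → docs.example.com
#   example.com
```

Stripping `www.` before `sort -u` is what collapses `www.example.com` and `example.com` pages into one domain entry.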
---
## Step 3: Offer Analytics Setup
```
───────────────────────────────────────────────────────
Analytics Integration (Optional)
───────────────────────────────────────────────────────
```
**Explain benefits:**
```
💡 Analytics Integration
Kurt can integrate with PostHog web analytics to help you:
• Prioritize high-traffic pages for updates
• Identify pages losing traffic (needs refresh)
• Spot zero-traffic pages (potentially orphaned)
• Make data-driven content decisions
Example: When updating tutorials, Kurt will prioritize the ones
getting the most traffic for maximum impact.
Setup takes ~2-3 minutes per domain.
```
**Show detected domains:**
```
We detected content from {{DOMAIN_COUNT}} domain(s):
{{#each DOMAINS}}
• {{this}}
{{/each}}
Would you like to set up analytics? (Y/n):
```
**Wait for user response**
---
## Step 4: Handle User Choice
### If User Declines (n)
```
Skipped. You can set up analytics later with:
kurt analytics onboard <domain>
Continuing without analytics...
```
```bash
# Update JSON
jq '.analytics_configured = false | .analytics_skipped = true | .analytics_domains = []' \
.kurt/temp/onboarding-data.json > .kurt/temp/onboarding-data.tmp.json
mv .kurt/temp/onboarding-data.tmp.json .kurt/temp/onboarding-data.json
```
Return to parent skill.
### If User Accepts (Y/yes)
```
Great! Let's set up analytics for your domains.
```
Continue to Step 5.
---
## Step 5: Configure Analytics for Each Domain
```
───────────────────────────────────────────────────────
Configuring Analytics
───────────────────────────────────────────────────────
```
**For each domain:**
```bash
CONFIGURED_DOMAINS=()
for domain in $DOMAINS; do
  # Inner loop so "retry" repeats the SAME domain; a bare `continue` in the
  # outer for loop would silently advance to the next domain instead.
  while true; do
    echo ""
    echo "Setting up analytics for: $domain"
    echo ""
    # Prompt for PostHog details
    echo "Platform: PostHog (currently the only supported platform)"
    echo ""
    read -p "PostHog Project ID (phc_...): " PROJECT_ID
    # Validate project ID format
    if [[ ! "$PROJECT_ID" =~ ^phc_ ]]; then
      echo "⚠️ Invalid format. PostHog project IDs start with 'phc_'"
      echo ""
      echo "Options:"
      echo "  a) Retry"
      echo "  b) Skip this domain"
      echo "  c) Skip all analytics"
      echo ""
      read -p "Choose: " choice
      case "$choice" in
        a) continue ;;                                      # Retry this domain
        b) echo "Skipped: $domain"; continue 2 ;;           # Skip this domain
        c) echo "Skipping all analytics setup"; break 2 ;;  # Skip all
      esac
      continue  # Unrecognized choice: re-prompt this domain
    fi
    read -sp "PostHog API Key (phx_...): " API_KEY
    echo ""
    # Validate API key format
    if [[ ! "$API_KEY" =~ ^phx_ ]]; then
      echo "⚠️ Invalid format. PostHog API keys start with 'phx_'"
      echo ""
      echo "Options:"
      echo "  a) Retry"
      echo "  b) Skip this domain"
      echo ""
      read -p "Choose: " choice
      case "$choice" in
        a) continue ;;                            # Retry this domain
        b) echo "Skipped: $domain"; continue 2 ;; # Skip this domain
      esac
      continue  # Unrecognized choice: re-prompt this domain
    fi
    echo ""
    echo "Testing connection..."
    # Run onboard command
    if kurt analytics onboard "https://$domain" \
         --platform posthog \
         --project-id "$PROJECT_ID" \
         --api-key "$API_KEY"; then
      echo "✓ Connected to PostHog"
      echo ""
      # Initial sync
      echo "Syncing initial analytics data..."
      if kurt analytics sync "$domain"; then
        echo "✓ Analytics configured for $domain"
      else
        echo "⚠️ Sync failed for $domain"
        echo "You can retry later with: kurt analytics sync $domain"
        # Still counts as configured since the connection works
      fi
      CONFIGURED_DOMAINS+=("$domain")
      break  # Done with this domain
    else
      echo "❌ Failed to connect to PostHog"
      echo ""
      echo "Please check:"
      echo "  • Project ID is correct"
      echo "  • API key has read permissions"
      echo "  • Network connection is working"
      echo ""
      echo "Options:"
      echo "  a) Retry this domain"
      echo "  b) Skip this domain"
      echo "  c) Skip all remaining domains"
      echo ""
      read -p "Choose: " choice
      case "$choice" in
        a) continue ;;                                    # Retry this domain
        b) echo "Skipped: $domain"; continue 2 ;;         # Skip this domain
        c) echo "Skipping remaining domains"; break 2 ;;  # Skip all remaining
      esac
    fi
    echo ""
  done
done
```
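The two prefix checks can be isolated into reusable validators. A sketch; the `phc_`/`phx_` prefixes come from this skill's own prompts and should be confirmed against current PostHog documentation before relying on them:

```shell
# Prefix validators matching the checks in Step 5 (prefixes per this
# skill's prompts; confirm against PostHog docs before relying on them).
valid_project_id() { case "$1" in phc_*) return 0 ;; *) return 1 ;; esac; }
valid_api_key()    { case "$1" in phx_*) return 0 ;; *) return 1 ;; esac; }

valid_project_id "phc_abc123" && echo "project id ok"
valid_api_key "phx_abc123" && echo "api key ok"
valid_project_id "not-a-key" || echo "rejected"
# → project id ok
#   api key ok
#   rejected
```

Using `case` glob patterns keeps the validators POSIX-portable, whereas the inline `[[ ... =~ ... ]]` form requires bash.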
---
## Step 6: Display Analytics Summary
```bash
CONFIGURED_COUNT=${#CONFIGURED_DOMAINS[@]}
if [ $CONFIGURED_COUNT -gt 0 ]; then
echo ""
echo "───────────────────────────────────────────────────────"
echo "Analytics Configuration Complete"
echo "───────────────────────────────────────────────────────"
echo ""
echo "✓ Configured analytics for $CONFIGURED_COUNT domain(s):"
for domain in "${CONFIGURED_DOMAINS[@]}"; do
# Get stats for this domain (one summary call, reused for both fields)
STATS=$(kurt analytics summary "$domain" --format json 2>/dev/null || echo '{}')
TOTAL_VIEWS=$(echo "$STATS" | jq -r '.pageviews_60d_total // 0')
DOCS_WITH_DATA=$(echo "$STATS" | jq -r '.documents_with_data // 0')
echo " • $domain"
echo " - $DOCS_WITH_DATA documents with traffic data"
if [ "$TOTAL_VIEWS" -gt 0 ]; then
echo " - $TOTAL_VIEWS pageviews (last 60 days)"
fi
done
echo ""
echo "✓ Analytics data will help prioritize content updates"
echo "✓ Data auto-syncs when stale (>7 days)"
echo ""
else
echo ""
echo "⚠️ No domains configured with analytics"
echo ""
echo "You can set up analytics later with:"
echo " kurt analytics onboard <domain>"
echo ""
fi
```
---
## Step 7: Update JSON with Analytics Config
```bash
# Build JSON array of configured domains
DOMAINS_JSON="[]"
for domain in "${CONFIGURED_DOMAINS[@]}"; do
# Get analytics metadata
PLATFORM="posthog"
LAST_SYNCED=$(date -u +"%Y-%m-%dT%H:%M:%SZ")
# Get traffic stats if available
STATS=$(kurt analytics summary "$domain" --format json 2>/dev/null || echo '{}')
PAGEVIEWS_60D=$(echo "$STATS" | jq -r '.pageviews_60d_total // 0')
DOCS_WITH_DATA=$(echo "$STATS" | jq -r '.documents_with_data // 0')
P25=$(echo "$STATS" | jq -r '.p25_pageviews_30d // 0')
P75=$(echo "$STATS" | jq -r '.p75_pageviews_30d // 0')
# Add to array
DOMAINS_JSON=$(echo "$DOMAINS_JSON" | jq \
--arg domain "$domain" \
--arg platform "$PLATFORM" \
--arg last_synced "$LAST_SYNCED" \
--arg pageviews "$PAGEVIEWS_60D" \
--arg docs_with_data "$DOCS_WITH_DATA" \
--arg p25 "$P25" \
--arg p75 "$P75" \
'. + [{
"domain": $domain,
"platform": $platform,
"last_synced": $last_synced,
"pageviews_60d": ($pageviews | tonumber),
"documents_with_data": ($docs_with_data | tonumber),
"thresholds": {
"p25": ($p25 | tonumber),
"p75": ($p75 | tonumber)
}
}]')
done
# Update onboarding JSON
jq \
--argjson domains "$DOMAINS_JSON" \
--arg configured "$([ $CONFIGURED_COUNT -gt 0 ] && echo 'true' || echo 'false')" \
'.analytics_configured = ($configured == "true") |
.analytics_skipped = false |
.analytics_domains = $domains' \
.kurt/temp/onboarding-data.json > .kurt/temp/onboarding-data.tmp.json
mv .kurt/temp/onboarding-data.tmp.json .kurt/temp/onboarding-data.json
```
---
## Step 8: Return Control to Parent
Return success. Parent skill (onboarding-skill) continues to next step (extract-foundation).
---
## Error Handling
**If kurt analytics command not available:**
```
❌ Error: Analytics commands not available
Your kurt-core version may not support analytics.
Please upgrade kurt-core:
pip install --upgrade kurt-core
Then verify:
kurt analytics --help
```
Exit with error code 1.
**If PostHog connection fails repeatedly:**
```
⚠️ Unable to connect to PostHog after multiple attempts
Common issues:
• Incorrect project ID or API key
• Network connectivity problems
• PostHog service outage
You can:
• Continue without analytics (skip)
• Set up analytics later: kurt analytics onboard <domain>
• Check PostHog status: https://status.posthog.com
```
User can choose to skip or cancel.
**If sync fails but connection works:**
```
⚠️ Connection successful but initial sync failed
This might be temporary. You can:
• Continue setup (retry sync later)
• Retry sync now
• Skip this domain
Analytics will still be configured, but no data yet.
```
---
## Output Format
Updates `.kurt/temp/onboarding-data.json` with:
```json
{
...existing fields...,
"analytics_configured": true,
"analytics_skipped": false,
"analytics_domains": [
{
"domain": "docs.company.com",
"platform": "posthog",
"last_synced": "2025-11-02T12:00:00Z",
"pageviews_60d": 45234,
"documents_with_data": 221,
"thresholds": {
"p25": 45,
"p75": 890
}
},
{
"domain": "blog.company.com",
"platform": "posthog",
"last_synced": "2025-11-02T12:05:00Z",
"pageviews_60d": 12890,
"documents_with_data": 65,
"thresholds": {
"p25": 20,
"p75": 450
}
}
]
}
```
---
## Notes
- **Optional step:** Users can skip entirely without blocking onboarding
- **Per-domain:** Each domain gets its own PostHog configuration
- **Validation:** Project ID and API key formats are validated
- **Initial sync:** Runs immediately after setup to get baseline data
- **Error recovery:** Multiple retry options if setup fails
- **Future-proof:** Designed to support additional platforms (GA4, Plausible)
---
*This subskill handles optional analytics setup during onboarding.*
```
### subskills/extract-foundation.md
```markdown
# Extract Foundation Subskill
**Purpose:** Extract foundation rules (publisher, style, personas) from fetched content
**Parent Skill:** onboarding-skill
**Input:** `.kurt/temp/onboarding-data.json`
**Output:** Updates JSON with extracted rule paths
---
## Overview
This subskill extracts the core rules needed for consistent content creation:
1. Publisher profile (company context)
2. Style guide (writing voice)
3. Target personas (audience profiles)
---
## Step 1: Check Content Status
```bash
# Check if content was fetched
CONTENT_FETCHED=$(jq -r '.content_fetched // false' .kurt/temp/onboarding-data.json)
if [ "$CONTENT_FETCHED" != "true" ]; then
echo "⚠️ Content not fetched yet"
echo ""
echo "Foundation rules require indexed content for extraction."
echo ""
echo "Options:"
echo " a) Skip rule extraction (can extract later)"
echo " b) Go back and fetch content"
echo " c) Cancel onboarding"
echo ""
read -p "Choose: " choice
case "$choice" in
a)
# Skip extraction
jq '.rules_extracted.skipped = true' .kurt/temp/onboarding-data.json > .kurt/temp/onboarding-data.tmp.json
mv .kurt/temp/onboarding-data.tmp.json .kurt/temp/onboarding-data.json
exit 0
;;
b)
# Invoke map-content again
echo "Returning to content fetching..."
exit 2 # Signal to parent to re-run map-content
;;
c)
echo "Onboarding cancelled"
exit 1
;;
esac
fi
# Check minimum content requirement
FETCHED_COUNT=$(jq -r '.content_stats.fetched // 0' .kurt/temp/onboarding-data.json)
if [ "$FETCHED_COUNT" -lt 3 ]; then
echo "⚠️ Insufficient content for reliable rule extraction"
echo ""
echo "Found: $FETCHED_COUNT documents"
echo "Recommended: 5-10 documents minimum"
echo ""
echo "Options:"
echo " a) Continue anyway (rules may be less reliable)"
echo " b) Add more content sources"
echo " c) Skip rule extraction"
echo ""
read -p "Choose: " choice
case "$choice" in
a)
echo "Continuing with available content..."
;;
b)
echo "Please add more content sources, then retry /start"
exit 1
;;
c)
jq '.rules_extracted.skipped = true' .kurt/temp/onboarding-data.json > .kurt/temp/onboarding-data.tmp.json
mv .kurt/temp/onboarding-data.tmp.json .kurt/temp/onboarding-data.json
exit 0
;;
esac
fi
```
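The exit codes above form a small contract with the parent skill: 0 = continue, 1 = cancel, 2 = re-run map-content. A hypothetical sketch of the parent-side dispatch (the real orchestration lives in onboarding-skill and may differ):

```shell
# Hypothetical parent-side dispatch on a subskill's exit code.
# The 0/1/2 meanings follow Step 1 above; names here are illustrative.
handle_subskill_exit() {
  case "$1" in
    0) echo "continue" ;;           # proceed to next step
    2) echo "rerun-map-content" ;;  # subskill asked for content re-fetch
    *) echo "abort" ;;              # any other code cancels onboarding
  esac
}

handle_subskill_exit 0
handle_subskill_exit 2
handle_subskill_exit 1
# → continue
#   rerun-map-content
#   abort
```

Treating every unrecognized code as "abort" keeps the default path fail-safe if a subskill exits unexpectedly.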
---
## Step 2-4: Extract Foundation Rules
**Delegate to extract-rules subskill:**
```
───────────────────────────────────────────────────────
Extracting Foundation Rules
───────────────────────────────────────────────────────
Analyzing your content to extract:
• Publisher profile (company context)
• Primary voice (writing style)
• Personas (audience profiles)
```
**Invoke:** `project-management extract-rules --foundation-only`
This delegates to the extract-rules subskill which will:
1. Check prerequisites (content indexed)
2. Show preview of documents to analyze
3. Extract publisher profile
4. Extract primary voice
5. Extract personas
6. Show summary of extracted rules
**The extract-rules subskill owns the extraction logic and provides:**
- Document preview before extraction
- Progress reporting
- Error handling
- Retry logic
**After extraction completes:**
```bash
# Get paths of extracted rules
PUBLISHER_PATH=$(ls rules/publisher/publisher-profile.md 2>/dev/null)
STYLE_FILES=$(ls rules/style/*.md 2>/dev/null)
PERSONA_FILES=$(ls rules/personas/*.md 2>/dev/null)
# Count non-empty lines (`echo | wc -l` reports 1 even when no files matched)
PERSONA_COUNT=$(printf '%s' "$PERSONA_FILES" | grep -c .)
# Update JSON with rule paths
if [ -n "$PUBLISHER_PATH" ]; then
jq --arg path "$PUBLISHER_PATH" \
'.rules_extracted.publisher = {
"extracted": true,
"path": $path
}' .kurt/temp/onboarding-data.json > .kurt/temp/onboarding-data.tmp.json
mv .kurt/temp/onboarding-data.tmp.json .kurt/temp/onboarding-data.json
else
jq '.rules_extracted.publisher = {
"extracted": false,
"error": "Extraction failed or skipped"
}' .kurt/temp/onboarding-data.json > .kurt/temp/onboarding-data.tmp.json
mv .kurt/temp/onboarding-data.tmp.json .kurt/temp/onboarding-data.json
fi
if [ -n "$STYLE_FILES" ]; then
STYLE_FILE=$(echo "$STYLE_FILES" | head -1)
STYLE_NAME=$(basename "$STYLE_FILE" .md)
jq --arg path "$STYLE_FILE" \
--arg name "$STYLE_NAME" \
'.rules_extracted.style = {
"extracted": true,
"path": $path,
"name": $name
}' .kurt/temp/onboarding-data.json > .kurt/temp/onboarding-data.tmp.json
mv .kurt/temp/onboarding-data.tmp.json .kurt/temp/onboarding-data.json
else
jq '.rules_extracted.style = {
"extracted": false,
"error": "Extraction failed or skipped"
}' .kurt/temp/onboarding-data.json > .kurt/temp/onboarding-data.tmp.json
mv .kurt/temp/onboarding-data.tmp.json .kurt/temp/onboarding-data.json
fi
if [ "$PERSONA_COUNT" -gt 0 ]; then
PERSONAS_JSON=$(echo "$PERSONA_FILES" | jq -R -s 'split("\n") | map(select(length > 0))')
jq --argjson personas "$PERSONAS_JSON" \
--arg count "$PERSONA_COUNT" \
'.rules_extracted.personas = {
"extracted": true,
"count": ($count | tonumber),
"files": $personas
}' .kurt/temp/onboarding-data.json > .kurt/temp/onboarding-data.tmp.json
mv .kurt/temp/onboarding-data.tmp.json .kurt/temp/onboarding-data.json
else
jq '.rules_extracted.personas = {
"extracted": false,
"error": "Extraction failed or skipped"
}' .kurt/temp/onboarding-data.json > .kurt/temp/onboarding-data.tmp.json
mv .kurt/temp/onboarding-data.tmp.json .kurt/temp/onboarding-data.json
fi
```
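Parsing `ls` output, as above, breaks on filenames with spaces and needs a counting workaround; bash's `nullglob` gives an exact file count without either problem. A sketch using temporary illustrative files (the real skill globs `rules/personas/*.md`):

```shell
# Safer glob counting with nullglob (paths here are temporary demo files;
# the real skill globs rules/personas/*.md).
tmp=$(mktemp -d)
touch "$tmp/backend-engineer.md" "$tmp/platform-engineer.md"

shopt -s nullglob           # unmatched globs expand to nothing, not themselves
persona_files=("$tmp"/*.md)
shopt -u nullglob

echo "${#persona_files[@]}"
# → 2
rm -rf "$tmp"
```

With `nullglob` unset, an unmatched `*.md` glob would expand to the literal string `*.md` and be miscounted as one file.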
---
## Step 5: Summary
```
───────────────────────────────────────────────────────
Foundation Rules Extracted
───────────────────────────────────────────────────────
{{#if PUBLISHER_EXTRACTED}}
✓ Publisher Profile
rules/publisher/publisher-profile.md
{{/if}}
{{#if STYLE_EXTRACTED}}
✓ Style Guide ({{STYLE_TYPE}})
{{STYLE_PATH}}
{{/if}}
{{#if PERSONAS_EXTRACTED}}
✓ Personas ({{PERSONA_COUNT}})
{{#each PERSONA_FILES}}
{{this}}
{{/each}}
{{/if}}
{{#if ANY_FAILED}}
⚠️ Some extractions failed
You can extract missing rules later with:
{{#if NOT PUBLISHER_EXTRACTED}} writing-rules-skill publisher --auto-discover{{/if}}
{{#if NOT STYLE_EXTRACTED}} writing-rules-skill style --type corporate --auto-discover{{/if}}
{{#if NOT PERSONAS_EXTRACTED}} writing-rules-skill persona --audience-type all --auto-discover{{/if}}
{{/if}}
```
---
## Step 6: Return Control
Return success. Parent skill continues to create-profile step.
---
## Error Handling
**If writing-rules-skill not available:**
```
❌ Error: writing-rules-skill not found
This skill should be available in .claude/skills/
Check your Kurt plugin installation.
```
**If all extractions fail:**
```
⚠️ No rules could be extracted
This might be because:
• Content is insufficient (need 5+ documents)
• Content is not indexed (need FETCHED status)
• Content doesn't have clear patterns
Options:
a) Skip rule extraction (create profile without rules)
b) Add more content and retry
c) Cancel onboarding
Choose: _
```
**If partial success:**
```
⚠️ Some rules extracted, but not all
Extracted: {{SUCCESS_LIST}}
Failed: {{FAILED_LIST}}
Continue with partial extraction? (y/n):
```
---
## Output Format
Updates `.kurt/temp/onboarding-data.json` with:
```json
{
...existing fields...,
"rules_extracted": {
"publisher": {
"extracted": true,
"path": "rules/publisher/publisher-profile.md"
},
"style": {
"extracted": true,
"path": "rules/style/technical-developer-voice.md",
"name": "technical-developer-voice",
"type": "technical-docs"
},
"personas": {
"extracted": true,
"count": 2,
"files": [
"rules/personas/backend-engineer.md",
"rules/personas/platform-engineer.md"
]
}
}
}
```
---
*This subskill extracts foundation rules using the writing-rules-skill.*
```