
discover-brand

This skill orchestrates autonomous discovery of brand materials across enterprise platforms (Notion, Confluence, Google Drive, Box, SharePoint, Figma, Gong, Granola, Slack). It should be used when the user asks to "discover brand materials", "find brand documents", "search for brand guidelines", "audit brand content", "what brand materials do we have", "find our style guide", "where are our brand docs", "do we have a style guide", "discover brand voice", "brand content audit", or "find brand assets".

Packaged view

This page reorganizes the original catalog entry around fit, installability, and workflow context first. The original raw source lives below.

Stars: 9,735
Hot score: 99
Updated: March 20, 2026
Overall rating: C (4.0)
Composite score: 4.0
Best-practice grade: C (58.6)

Install command

npx @skill-hub/cli install anthropics-knowledge-work-plugins-discover-brand

Repository

anthropics/knowledge-work-plugins

Skill path: partner-built/brand-voice/skills/discover-brand

Open repository

Best for

Primary workflow: Research & Ops.

Technical facets: Full Stack, Tech Writer, Designer.

Target audience: everyone.

License: Unknown.

Original source

Catalog source: SkillHub Club.

Repository owner: anthropics.

This is a mirrored public skill entry. Review the repository before installing it into production workflows.

What it helps with

  • Install discover-brand into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
  • Review https://github.com/anthropics/knowledge-work-plugins before adding discover-brand to shared team environments
  • Use discover-brand for development workflows

Works across

Claude Code, Codex CLI, Gemini CLI, OpenCode

Favorites: 0.

Sub-skills: 0.

Aggregator: No.

Original source / Raw SKILL.md

---
name: discover-brand
description: >
  This skill orchestrates autonomous discovery of brand materials across enterprise
  platforms (Notion, Confluence, Google Drive, Box, SharePoint, Figma, Gong, Granola, Slack).
  It should be used when the user asks to "discover brand materials",
  "find brand documents", "search for brand guidelines", "audit brand content",
  "what brand materials do we have", "find our style guide", "where are our brand docs",
  "do we have a style guide", "discover brand voice", "brand content audit",
  or "find brand assets".
---

# Brand Discovery

Orchestrate autonomous discovery of brand materials across enterprise platforms. This skill coordinates the discover-brand agent to search connected platforms (Notion, Confluence, Google Drive, Box, Microsoft 365, Figma, Gong, Granola, Slack), triage sources, and produce a structured discovery report with open questions.

## Discovery Workflow

### 0. Orient the User

Before starting, briefly explain what's about to happen so the user knows what to expect:

"Here's how brand discovery works:

1. **Search** — I'll search your connected platforms (Notion, Google Drive, Slack, etc.) for brand-related materials: style guides, pitch decks, templates, transcripts, and more.
2. **Analyze** — I'll categorize and rank what I find, pull the best sources, and produce a discovery report with what I found, any conflicts, and open questions.
3. **Generate guidelines** — Once you've reviewed the report, I can generate a structured brand voice guideline document from the results.
4. **Save** — Guidelines are saved to `.claude/brand-voice-guidelines.md` in your working folder once you approve them. Nothing is written until that step.

The search usually takes a few minutes depending on how many platforms are connected. Ready to get started?"

Wait for the user to confirm before proceeding. If they have questions about the process, answer them first.

### 1. Check Settings

Read `.claude/brand-voice.local.md` if it exists. Extract:
- Company name
- Which platforms are enabled (notion, confluence, google-drive, box, microsoft-365, figma, gong, granola, slack)
- Search depth preference (standard or deep)
- Max sources limit
- Any known brand material locations listed under "Known Brand Materials"

If no settings file exists, proceed with all connected platforms and standard search depth.
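As a sketch, the settings check in this step could look like the following. The `key: value` parsing and the `load_settings` helper are assumptions for illustration; the actual format of `.claude/brand-voice.local.md` is not specified here.

```python
from pathlib import Path

# Hypothetical defaults, matching step 1's fallback: all platforms,
# standard search depth, no max-sources limit.
DEFAULTS = {
    "company": None,
    "platforms": ["notion", "confluence", "google-drive", "box",
                  "microsoft-365", "figma", "gong", "granola", "slack"],
    "depth": "standard",
    "max_sources": None,
}

def load_settings(path=".claude/brand-voice.local.md"):
    """Read simple `key: value` lines from the settings file, if present."""
    settings = dict(DEFAULTS)
    p = Path(path)
    if not p.exists():
        # No settings file: proceed with all platforms and standard depth.
        return settings
    for line in p.read_text().splitlines():
        if ":" not in line:
            continue
        key, _, value = line.partition(":")
        key = key.strip().lower().replace(" ", "_")
        value = value.strip()
        if key == "platforms":
            settings["platforms"] = [v.strip() for v in value.split(",") if v.strip()]
        elif key in ("company", "depth", "max_sources"):
            settings[key] = value
    return settings
```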

### 2. Validate Platform Coverage

Before confirming scope, check which platforms are actually connected and classify them:

**Document platforms** (where brand guidelines, style guides, templates, and decks live):
- Notion, Confluence, Google Drive, Box, Microsoft 365 (SharePoint/OneDrive)

**Supplementary platforms** (valuable for patterns, but not where brand docs are stored):
- Slack, Gong, Granola, Figma

Apply these rules:

1. **If zero document platforms are connected**: **Stop.** Tell the user: "You don't have any document storage platforms connected (Google Drive, SharePoint, Notion, Confluence, or Box). Brand guidelines and style guides almost always live on one of these. Please connect at least one before running discovery. Gong/Granola/Slack transcripts are valuable supplements but unlikely to contain formal brand documents."

2. **If no Google Drive AND no Microsoft 365 AND no Box**: **Warn** (but proceed): "None of your primary file storage platforms (Google Drive, SharePoint, Box) are connected. Brand documents frequently live on these platforms. Discovery will proceed with [connected platforms], but results may have significant gaps. Consider connecting Google Drive or SharePoint."

3. **If only one platform total is connected**: **Warn** (but proceed): "Only [platform] is connected. Discovery works best with 2+ platforms for cross-source validation. Results from a single platform will have lower confidence scores."
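A minimal sketch of the three rules above, assuming platforms arrive as a list of slugs matching the settings file (`notion`, `google-drive`, and so on). The outcome tuples are illustrative, not a real API.

```python
# Platform classes from step 2.
DOCUMENT_PLATFORMS = {"notion", "confluence", "google-drive", "box", "microsoft-365"}
FILE_STORAGE = {"google-drive", "microsoft-365", "box"}

def check_coverage(connected):
    """Apply the stop/warn rules; returns ("stop"|"proceed", warnings)."""
    connected = set(connected)
    # Rule 1: no document platforms at all -> stop before discovery.
    if not connected & DOCUMENT_PLATFORMS:
        return ("stop", ["no document platforms connected"])
    warnings = []
    # Rule 2: no primary file storage -> warn but proceed.
    if not connected & FILE_STORAGE:
        warnings.append("no primary file storage platform connected")
    # Rule 3: a single platform -> warn about lower-confidence results.
    if len(connected) == 1:
        warnings.append("single platform: lower confidence")
    return ("proceed", warnings)
```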

### 3. Confirm Scope with User

Before launching discovery, confirm:
- Which platforms to search (default: all connected)
- Whether to include conversation transcripts (Gong, Granola) or just documents
- Any known locations to prioritize

Keep this brief — one question, not a questionnaire.

### 4. Delegate to Discover-Brand Agent

Launch the discover-brand agent via the Task tool. Provide:
- Company name (from settings or user input)
- Enabled platforms
- Search depth
- Any known URLs or locations to check first

The agent executes the 4-phase discovery algorithm autonomously:
1. **Broad Discovery** — parallel searches across platforms
2. **Source Triage** — categorize and rank sources
3. **Deep Fetch** — retrieve and extract from top sources
4. **Discovery Report** — structured output with open questions

### 5. Present Discovery Report

When the agent returns, present the report to the user with a summary:
- Total sources found and analyzed
- Key brand elements discovered
- Any conflicts between sources
- Open questions requiring team input

### 6. Offer Next Steps

After presenting the report, offer:
1. **Generate guidelines now** — chain to `/brand-voice:generate-guidelines` using discovery report as input
2. **Resolve open questions first** — work through high-priority questions before generating
3. **Save report** — store the discovery report to Notion or as a local file
4. **Expand search** — search additional platforms or deeper if coverage is low

## Open Questions

Open questions arise when the discovery agent encounters ambiguity it cannot resolve:
- Conflicting documents (e.g., 2023 style guide vs. 2024 brand update)
- Missing critical sections (e.g., no social media guidelines found)
- Inconsistent terminology across platforms

Every open question includes an agent recommendation. Present questions as "confirm or override" — not dead ends.

## Integration with Other Skills

- **Guideline Generation**: The discovery report is returned by the discover-brand agent via the Task tool. Pass it directly to the guideline-generation skill as structured input, replacing the need for users to manually gather sources.
- **Brand Voice Enforcement**: Once guidelines are generated from discovery, enforcement uses them automatically.

## Error Handling

- If zero platforms are connected, inform the user which platforms the plugin supports and how to connect them.
- If all searches return empty results, flag the discovery as "low coverage" and suggest the user provide documents manually or check platform connections.
- If a platform is connected but returns permission errors, note the gap and continue with other platforms.

## Reference Files

For detailed discovery patterns and algorithms, consult:

- **`references/search-strategies.md`** — Platform-specific search queries, query patterns by platform, and tips for maximizing discovery coverage
- **`references/source-ranking.md`** — Source category definitions, ranking algorithm weights, and triage decision criteria


---

## Referenced Files

> The following files are referenced in this skill and included for context.

### references/search-strategies.md

```markdown
# Search Strategies by Platform

Platform-specific query patterns and tips for maximizing brand material discovery.

## Notion (Primary Discovery Engine)

Notion AI Search federates across connected sources (Google Drive, SharePoint, OneDrive, Slack, Jira, Teams), making it the most valuable single search endpoint.

### Query Patterns

**Direct brand searches:**
- "brand guidelines"
- "style guide"
- "brand voice"
- "tone of voice"
- "messaging framework"

**Sales and marketing content:**
- "pitch deck"
- "sales playbook"
- "email templates"
- "value proposition"
- "competitive positioning"

**Operational brand content:**
- "brand update"
- "rebrand"
- "launch messaging"
- "press release template"
- "social media guidelines"

### Tips
- Notion AI Search returns results from connected sources — one search covers multiple platforms
- Search for the company name + "brand" to find company-specific guidelines
- Check for databases titled "Brand Assets", "Marketing Resources", or "Content Library"
- Look for pages tagged with "brand", "marketing", "style"

## Atlassian Confluence

Enterprises often store official brand documentation in Confluence spaces.

### Query Patterns

**Space-level search:**
- Search spaces named: "Marketing", "Brand", "Communications", "Sales Enablement"
- Check for spaces with labels: "brand", "style-guide", "guidelines"

**Page-level search:**
- "brand style guide"
- "voice and tone guidelines"
- "messaging framework"
- "content standards"
- "editorial guidelines"

**Template search:**
- "email template"
- "proposal template"
- "presentation template"

### Tips
- Confluence spaces often have hierarchical page trees — find the brand root page and explore children
- Check for recently updated pages (brand guidelines evolve)
- Look for pages with many watchers — indicates important shared content
- Search attachments for PDF brand guides uploaded to Confluence pages

## Box

Cloud file storage — official brand documents, shared decks, and style guides frequently live here.

### Query Patterns

**Folder search:**
- "Brand Guidelines"
- "Marketing Assets"
- "Brand Kit"
- "Style Guide"
- "Sales Collateral"

**Document search:**
- "brand guide" (PDF, DOCX, Word)
- "style guide" (PDF, DOCX, Word)
- "messaging document"
- "brand standards"

### Tips
- Box often contains the "source of truth" brand guides as PDFs or Word docs
- Check shared folders with company-wide access
- Look for folders shared with the marketing or brand team
- Search for recently modified brand documents to find the latest version
- Use Box metadata search to filter by content type or custom attributes

## Google Drive

Google Drive stores shared brand documents, marketing materials, and official style guides.

### Query Patterns

**Folder search:**
- "Brand Guidelines"
- "Marketing"
- "Brand Assets"
- "Style Guide"

**Document search:**
- "brand guide" (PDFs, Google Docs, Google Slides)
- "style guide"
- "messaging framework"
- "brand standards"
- "brand voice"

### Tips
- Check shared drives — brand materials often live in team-wide shared drives
- Look for recently modified brand documents to find the latest version
- Search by owner (marketing team members) to surface brand-owned content
- Google Docs and Slides are common formats for living brand documents
- Check for Google Slides presentations with brand overview decks

## Microsoft 365 (SharePoint / OneDrive)

Enterprise organizations often centralize brand documentation on SharePoint sites.

### Query Patterns

**SharePoint site search:**
- Search marketing or communications SharePoint sites
- "brand guidelines"
- "style guide"
- "brand standards"
- "messaging framework"

**Document library search:**
- Check document libraries in marketing/communications sites
- "brand guide" (Word, PDF, PowerPoint)
- "brand book"
- "editorial guidelines"

**OneDrive search:**
- "brand" files in shared OneDrive folders
- "style guide" shared with the organization

### Tips
- Check marketing/communications SharePoint sites first — most common location for brand docs
- Search document libraries for PDF brand guides and PowerPoint brand decks
- Look for brand-tagged content using SharePoint metadata
- OneDrive shared folders may contain working drafts of brand materials
- Use SharePoint search and document library tools when available

## Slack

Slack channels contain informal brand discussions, decisions, and evolving brand voice patterns.

### Query Patterns

**Channel search:**
- Look for channels: #brand, #marketing, #brand-voice, #style-guide, #creative
- Check channel topics and descriptions for brand-related content

**Message search:**
- "brand guidelines"
- "brand voice"
- "tone of voice"
- "style guide"
- "brand update"

**Pinned messages:**
- Check pinned messages in #brand and #marketing channels
- Pinned items often contain approved brand resources or decisions

### Tips
- Search #brand and #marketing channels first — most likely to contain brand discussions
- Look for pinned messages — teams often pin brand guidelines and decisions
- Find brand discussion threads for context on brand evolution
- Slack is a conversational source — ranks as CONVERSATIONAL tier in source triage
- Recent messages may reveal brand changes not yet documented formally

## Gong (Conversation Intelligence)

Sales call transcripts reveal how the brand actually communicates in practice.

### Query Patterns

**Call search:**
- Search for calls tagged: "won", "closed-won" (successful brand application)
- Filter by top performers (their language patterns define implicit brand voice)
- Look for calls tagged: "demo", "discovery", "closing"

**Transcript search:**
- Search transcripts for company-specific value propositions
- Look for recurring phrases across successful calls
- Find calls where competitors are discussed (reveals positioning)

### Tips
- Focus on successful calls — they represent brand voice that works
- Compare top performer language with average performer language
- Look for consistent opening lines and closing patterns
- Note objection handling language — reveals brand positioning under pressure

## Granola (Meeting Intelligence)

Meeting notes and transcripts from the AI notepad for meetings.

### Query Patterns

**Meeting search:**
- Query for meetings related to: "brand", "positioning", "messaging"
- List recent meetings and filter for customer-facing calls
- Search for strategy sessions and brand planning meetings

**Transcript retrieval:**
- Get full transcripts from high-signal meetings
- Look for recurring themes in meeting notes
- Find meetings with key stakeholders (marketing leads, executives)

### Tips
- Granola captures both meeting notes and transcripts — both are valuable
- Meeting notes often contain summarized brand decisions and action items
- Look for recurring meeting series (weekly brand syncs, marketing standups)
- Cross-reference Granola meeting notes with Gong call transcripts when both are available

## Figma (Brand Design Systems)

Visual brand elements inform voice and tone indirectly.

### Query Patterns

**File search:**
- "brand design system"
- "design tokens"
- "brand kit"
- "style guide"
- "component library"

**Component search:**
- Look for files with brand colors, typography scales
- Find component documentation with usage guidelines
- Check for content/copy guidelines in design system docs

### Tips
- Figma design systems often include writing guidelines alongside visual specs
- Check file descriptions and comments for brand context
- Look for "voice and tone" sections within design system documentation
- Brand personality descriptions in design files reveal voice attributes

## Cross-Platform Search Tips

### Maximizing Coverage
1. Start with Notion AI Search (broadest coverage via federation)
2. Follow up with Confluence for enterprise documentation
3. Check Google Drive for official brand documents
4. Check Box for cloud-stored brand documents
5. Search SharePoint/OneDrive for enterprise files
6. Search Slack for brand discussions and decisions
7. Search Gong for conversational brand patterns
8. Search Granola for meeting transcripts and notes
9. Review Figma for design-embedded brand guidelines

### Avoiding Duplicates
- Track source URLs to detect the same document across platforms
- When the same content appears on multiple platforms, prefer the most recently updated version
- Note when Notion federation returns results from other platforms to avoid double-searching

### Recency Focus
- Focus on content from the last 12 months for operational, conversational, and contextual sources
- Only search further back when looking for explicit brand documents (style guides, brand books)
- When results include older content, prefer the most recently updated version
- Flag any non-AUTHORITATIVE source older than 12 months — it may reflect outdated positioning

### Handling No Results
- If a platform returns zero results, try broader queries ("marketing", "sales")
- Check if the platform is actually connected and authenticated
- Note the gap in the discovery report — the absence of content on a platform is itself useful information

```
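The "Avoiding Duplicates" tips above can be sketched as a small pass over discovered sources: track URLs, and when the same document surfaces on several platforms keep only the most recently updated copy. The source records and field names (`url`, `updated`, `platform`) are assumptions for illustration.

```python
def dedupe(sources):
    """Keep one record per URL, preferring the most recent `updated` value."""
    best = {}
    for src in sources:
        key = src["url"]
        # ISO dates compare correctly as strings.
        if key not in best or src["updated"] > best[key]["updated"]:
            best[key] = src
    return list(best.values())
```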

### references/source-ranking.md

```markdown
# Source Ranking Algorithm

How to categorize, rank, and prioritize discovered brand sources.

## Source Categories

### AUTHORITATIVE
Official, approved brand documentation. Highest trust level.

**Signals:**
- Published style guides or brand books
- C-suite or marketing leadership authored/approved
- Lives in an official "Brand" folder or Confluence space
- Has version numbers or approval dates
- Referenced by other documents as "the brand guide"

**Examples:**
- "Acme Corp Brand Guidelines v3.2.pdf"
- "Official Style Guide" page in Confluence Marketing space
- Brand book in Google Drive with company-wide sharing
- Brand book in Box with company-wide sharing
- Official Style Guide in SharePoint Marketing site

**Trust weight:** 1.0 (baseline)

### OPERATIONAL
Brand applied in practice. Shows how guidelines manifest in real content.

**Signals:**
- Templates actively used by teams
- Sales playbooks with messaging guidance
- Email sequences with established tone
- Presentation templates with brand messaging

**Examples:**
- "Cold Email Templates Q4 2024"
- "Enterprise Sales Playbook"
- "Customer Success Response Templates"
- Pitch deck templates in Google Slides
- Email templates in Outlook
- Sales playbook on SharePoint

**Trust weight:** 0.8

### CONVERSATIONAL
Implicit brand voice from actual communications.

**Signals:**
- Sales call transcripts (especially successful ones)
- Meeting notes with customer-facing language
- Internal discussions about positioning
- Slack threads discussing brand decisions

**Examples:**
- Gong recordings of top performer calls
- Meeting notes from brand strategy sessions
- Customer success call transcripts
- Slack #brand channel discussions about tone

**Trust weight:** 0.6 (valuable for patterns, not prescriptive)

### CONTEXTUAL
Background information that informs brand but doesn't define it directly.

**Signals:**
- Design files without explicit brand guidelines
- Competitor analysis documents
- Industry reports
- Product documentation

**Examples:**
- Figma component library (visual only)
- "Competitive Landscape Q3 2024"
- Product feature specifications

**Trust weight:** 0.3

### STALE
Outdated content superseded by newer versions.

**Signals:**
- Older version when a newer version exists
- Pre-rebrand materials after a rebrand
- Documents explicitly marked as deprecated
- Content more than 2 years old without updates

**Examples:**
- "Brand Guidelines v1.0" when v3.2 exists
- "2022 Style Guide" when "2024 Brand Update" exists
- Documents in an "Archive" or "Deprecated" folder

**Trust weight:** 0.1 (flag for review, do not rely on)

## Ranking Algorithm

Apply these five ranking factors in order of priority:

### 1. Recency (Weight: 30%)

More recent sources are more likely to reflect current brand voice.

- **Score 1.0**: Updated within last 6 months
- **Score 0.7**: Updated within last year
- **Score 0.4**: Updated within last 2 years
- **Score 0.1**: Older than 2 years

When two sources conflict, the more recent one wins unless the older source is explicitly marked as the "official" guide.

Always prefer the most recent version of any document. When multiple sources cover the same topic, weight the newest one heavily. Flag any non-AUTHORITATIVE source older than 12 months in the discovery report.

### Recency Cutoffs

In addition to soft recency scoring, apply hard cutoffs to prevent stale content from polluting discovery:

**AUTHORITATIVE sources**: No hard cutoff. Official brand guides remain valid regardless of age unless explicitly superseded by a newer version.

**OPERATIONAL, CONVERSATIONAL, CONTEXTUAL sources**: Exclude from deep fetch if older than 12 months, with one exception: if zero sources in a category fall within the 12-month window, include the single most recent source from that category and flag it as potentially stale.

**STALE sources**: Exclude entirely from deep fetch. Include in the discovery report for reference only.

These cutoffs apply at the deep-fetch stage (Phase 3). All sources are still collected during broad discovery (Phase 1) and triage (Phase 2) — the cutoffs filter what gets fully retrieved and analyzed.

### 2. Explicitness (Weight: 25%)

Sources that explicitly define brand voice outrank those that merely demonstrate it.

- **Score 1.0**: Explicit brand instructions ("Our voice is...")
- **Score 0.7**: Documented tone guidelines ("Emails should be...")
- **Score 0.4**: Implicit patterns in templates or examples
- **Score 0.2**: Inferred from conversational patterns

### 3. Authority (Weight: 20%)

Higher organizational authority indicates more trustworthy brand definitions.

- **Score 1.0**: Official brand team or C-suite authored
- **Score 0.7**: Marketing leadership authored
- **Score 0.4**: Team leads or senior ICs
- **Score 0.2**: Individual contributor or unknown author

### 4. Specificity (Weight: 15%)

Detailed, actionable guidance outranks vague principles.

- **Score 1.0**: Specific rules with examples ("Use 'platform' not 'tool'")
- **Score 0.7**: Detailed guidelines ("Tone should be warm but professional")
- **Score 0.4**: General principles ("Be authentic")
- **Score 0.2**: Abstract values only ("We believe in innovation")

### 5. Cross-Source Consistency (Weight: 10%)

Elements corroborated across multiple sources rank higher.

- **Score 1.0**: Appears in 3+ independent sources
- **Score 0.7**: Appears in 2 independent sources
- **Score 0.4**: Appears in 1 source only
- **Score 0.1**: Contradicted by another source

## Composite Score Calculation

    final_score = (recency × 0.30) + (explicitness × 0.25) + (authority × 0.20)
                + (specificity × 0.15) + (consistency × 0.10)

Multiply by the category trust weight:

    ranked_score = final_score × category_trust_weight

### Example Scoring

**Source: "Brand Voice Guidelines v3.2" (Confluence, updated 3 months ago)**
- Recency: 1.0 (3 months old)
- Explicitness: 1.0 (explicit brand instructions)
- Authority: 1.0 (marketing VP authored)
- Specificity: 0.7 (good guidelines, some gaps)
- Consistency: 0.7 (corroborated by email templates)
- Category: AUTHORITATIVE (1.0)
- **Final: (1.0×0.30 + 1.0×0.25 + 1.0×0.20 + 0.7×0.15 + 0.7×0.10) × 1.0 = 0.925**

**Source: "Top Performer Call — Enterprise Close" (Gong, 2 months ago)**
- Recency: 1.0
- Explicitness: 0.2 (implicit patterns only)
- Authority: 0.4 (senior AE)
- Specificity: 0.7 (specific phrases used)
- Consistency: 0.4 (single source)
- Category: CONVERSATIONAL (0.6)
- **Final: (1.0×0.30 + 0.2×0.25 + 0.4×0.20 + 0.7×0.15 + 0.4×0.10) × 0.6 = 0.345**

## Adaptive Scoring: No Authoritative Sources

When discovery finds **zero AUTHORITATIVE sources**, the scoring algorithm adapts to reflect that conversational and operational sources are the primary brand evidence.

### Adjusted Trust Weights (No Authoritative Sources)

| Category | Default Weight | Adapted Weight | Rationale |
|----------|---------------|----------------|-----------|
| AUTHORITATIVE | 1.0 | 1.0 | (n/a — none found) |
| OPERATIONAL | 0.8 | 0.9 | Templates become primary explicit evidence |
| CONVERSATIONAL | 0.6 | 0.85 | Transcripts are the best signal for how the brand actually communicates |
| CONTEXTUAL | 0.3 | 0.4 | Design and competitive context more valuable without formal docs |
| STALE | 0.1 | 0.2 | Even old docs matter more when nothing current exists |

### Adjusted Explicitness Scoring (No Authoritative Sources)

When no authoritative sources exist, conversational patterns carry more prescriptive weight:

- **Score 0.2 → 0.5**: "Inferred from conversational patterns" — these ARE the brand evidence now
- **Score 0.4 → 0.6**: "Implicit patterns in templates or examples"
- Other explicitness scores unchanged

### Example: Transcript Scoring With Adaptation

**Source: "Top Performer Call — Enterprise Close" (Gong, 2 months ago)**
- Recency: 1.0
- Explicitness: 0.5 (adapted from 0.2 — patterns are primary evidence)
- Authority: 0.4 (senior AE)
- Specificity: 0.7 (specific phrases used)
- Consistency: 0.4 (single source)
- Category: CONVERSATIONAL (0.85 adapted)
- **Adapted score: (1.0×0.30 + 0.5×0.25 + 0.4×0.20 + 0.7×0.15 + 0.4×0.10) × 0.85 = 0.552**

This puts the transcript well above the 0.5 deep-fetch threshold, ensuring conversational sources meaningfully contribute to guideline generation.

### When to Apply

Apply adaptive scoring when:
- Phase 2 triage produces zero AUTHORITATIVE sources
- Flag in the discovery report: "No formal brand guidelines found — scoring adapted to weight conversational and operational sources higher"

## Triage Decision Criteria

### Include in Deep Fetch (Top 5-15 sources)
- Ranked score > 0.5
- All AUTHORITATIVE sources regardless of score
- At least one source per category if available (this overrides the score threshold)
- At least one source per platform if available

### Flag for Review
- Sources with conflicting information
- STALE sources that may still be referenced by teams
- Sources with high specificity but low authority

### Exclude
- Ranked score < 0.1
- Clearly irrelevant results (e.g., "brand" used in product name, not brand guidelines)
- Duplicate content already captured from another platform

```
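As a cross-check, the composite formula and the two default-weight worked examples in `references/source-ranking.md` can be reproduced with a short script. The weights come directly from the reference file; the `ranked_score` helper itself is illustrative.

```python
# Ranking-factor weights from source-ranking.md.
WEIGHTS = {"recency": 0.30, "explicitness": 0.25, "authority": 0.20,
           "specificity": 0.15, "consistency": 0.10}

def ranked_score(factors, trust_weight):
    """Weighted sum of the five factors, scaled by category trust weight."""
    final = sum(factors[name] * w for name, w in WEIGHTS.items())
    return round(final * trust_weight, 3)

# Worked example 1: "Brand Voice Guidelines v3.2" (AUTHORITATIVE, weight 1.0).
guide = {"recency": 1.0, "explicitness": 1.0, "authority": 1.0,
         "specificity": 0.7, "consistency": 0.7}

# Worked example 2: top-performer Gong call (CONVERSATIONAL, weight 0.6).
call = {"recency": 1.0, "explicitness": 0.2, "authority": 0.4,
        "specificity": 0.7, "consistency": 0.4}

print(ranked_score(guide, 1.0))  # → 0.925
print(ranked_score(call, 0.6))   # → 0.345
```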