SkillHub Club · Ship Full Stack · Full Stack · Integration

hook-sdk-integration

LLM invocation patterns from hooks via SDK. Use when you need background agents, CLI calls, or cost optimization.

Packaged view

This page reorganizes the original catalog entry to put fit, installability, and workflow context first; the original raw source appears below.

Stars: 0
Hot score: 74
Updated: March 20, 2026
Overall rating: C (2.6)
Composite score: 2.6
Best-practice grade: B (84.0)

Install command

npx @skill-hub/cli install chkim-su-skillmaker-hook-sdk-integration

Repository

chkim-su/skillmaker

Skill path: skills/hook-sdk-integration


Open repository

Best for

Primary workflow: Ship Full Stack.

Technical facets: Full Stack, Integration.

Target audience: everyone.

License: Unknown.

Original source

Catalog source: SkillHub Club.

Repository owner: chkim-su.

This is a mirrored public skill entry; review the repository before installing it into production workflows.

What it helps with

  • Install hook-sdk-integration into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
  • Review https://github.com/chkim-su/skillmaker before adding hook-sdk-integration to shared team environments
  • Use hook-sdk-integration for development workflows

Works across

Claude Code · Codex CLI · Gemini CLI · OpenCode

Favorites: 0.

Sub-skills: 0.

Aggregator: No.

Original source / Raw SKILL.md

---
name: hook-sdk-integration
description: LLM invocation patterns from hooks via SDK. Use when you need background agents, CLI calls, or cost optimization.
allowed-tools: ["Read", "Grep", "Glob"]
---

# Hook SDK Integration

Patterns for making LLM calls from hooks using u-llm-sdk / claude-only-sdk.

## IMPORTANT: SDK Detailed Guide

**Load when implementing SDK**:
```
Skill("skillmaker:llm-sdk-guide")
```

This skill covers SDK call pattern **interfaces**.
`llm-sdk-guide` covers SDK **detailed APIs and types**.

## Quick Start

```bash
# Background agent pattern (non-blocking)
(python3 sdk-agent.py "$INPUT" &)
echo '{"status": "started"}'
exit 0
```
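The same non-blocking launch can be sketched in Python with only the standard library (the inline `-c` command stands in for the `sdk-agent.py` script from the bash snippet above; the response shape mirrors it):

```python
import json
import subprocess
import sys

# Launch the agent detached so the hook can return immediately.
# An inline command stands in for the real sdk-agent.py here.
proc = subprocess.Popen(
    [sys.executable, "-c", "print('agent finished')"],
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
    start_new_session=True,  # detach from the hook's process group
)

# The hook emits its status without waiting on the agent.
response = json.dumps({"status": "started", "pid": proc.pid})
print(response)
```

As in the bash version, the hook's stdout is just the status JSON; the agent's own output should go to a log file instead.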

## Key Findings (Verified: 2025-12-30)

| Item | Result |
|------|--------|
| SDK calls | Possible from hooks |
| Latency | ~30s (CLI session initialization) |
| Background | Non-blocking execution possible (0.01s return) |
| Cost | Included in subscription (no additional API cost) |

## Architecture

```
Hook (bash) → Background (&) → SDK (Python) → CLI → Subscription usage
     │                                                    │
     └─── Immediate return (0.01s) ───────────────────────┘
```

## Pattern Selection

| Situation | Pattern | Reason |
|-----------|---------|--------|
| Need fast evaluation | `type: "prompt"` | In-session execution, fast |
| Need isolation | Direct CLI call | Separate MCP config possible |
| Complex logic | SDK + Background | Type-safe, non-blocking |
| Cost reduction | Local LLM (ollama) | Free, privacy |

## SDK Configuration (Python)

```python
from u_llm_sdk import LLM, LLMConfig
from llm_types import Provider, ModelTier, AutoApproval

config = LLMConfig(
    provider=Provider.CLAUDE,
    tier=ModelTier.LOW,
    auto_approval=AutoApproval.FULL,
    timeout=60.0,
)

async with LLM(config) as llm:
    result = await llm.run("Your prompt")
```

## Cost Structure

| Method | Cost |
|--------|------|
| `type: "prompt"` | Included in subscription |
| Claude CLI | Included in subscription |
| SDK via CLI | Included in subscription |
| Direct API | Per-token billing |

## References

- **[LLM SDK Detailed Guide](../llm-sdk-guide/SKILL.md)** - SDK API details
- [SDK Integration Patterns](references/sdk-patterns.md)
- [Background Agent Implementation](references/background-agent.md)
- [Cost Optimization](references/cost-optimization.md)
- [Real-World Projects](references/real-world-projects.md)


---

## Referenced Files

> The following files are referenced in this skill and included for context.

### ../llm-sdk-guide/SKILL.md

```markdown
---
name: llm-sdk-guide
description: U-llm-sdk and claude-only-sdk patterns. Use when working on projects with LLM service, designing LLM integrations, or implementing AI-powered features.
allowed-tools: ["Read", "Grep", "Glob"]
---

# LLM SDK Guide

Two SDKs for LLM integration:
- **U-llm-sdk**: Multi-provider (Claude, Codex, Gemini) with unified `LLMResult`
- **claude-only-sdk**: Claude-specific advanced features (agents, sessions, orchestration)

## Architecture: CLI-based (NOT API)

**IMPORTANT**: These SDKs wrap CLI tools, NOT direct API calls!

```
SDK Layer (Python)
    │
    ▼
asyncio.create_subprocess_exec()  ← Spawns CLI process
    │
    ▼
CLI Execution (e.g., `claude -p "prompt" --mcp-config ...`)
    │
    ▼
Separate Session with own context window
```

This enables:
- **MCP Isolation**: Each spawned process has its own MCP config (`--mcp-config`)
- **Context Separation**: Child process context doesn't pollute parent session
- **Session Independence**: Each call creates isolated Claude Code session
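A minimal sketch of this spawn-the-CLI design, with `echo` standing in for the actual `claude` binary:

```python
import asyncio

async def run_cli(prompt: str) -> str:
    # The SDK layer spawns the CLI as a subprocess; `echo` stands in
    # for a real `claude -p "<prompt>" --mcp-config ...` invocation.
    proc = await asyncio.create_subprocess_exec(
        "echo", prompt,
        stdout=asyncio.subprocess.PIPE,
    )
    out, _ = await proc.communicate()
    return out.decode().strip()

print(asyncio.run(run_cli("hello")))
```

Each call spawns a fresh process, which is what yields the MCP isolation and context separation listed above.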

## U-llm-sdk Quick Start

```python
from u_llm_sdk import LLM, LLMConfig
from llm_types import Provider, ModelTier, AutoApproval

config = LLMConfig(
    provider=Provider.CLAUDE,
    tier=ModelTier.HIGH,
    auto_approval=AutoApproval.EDITS_ONLY,
)

async with LLM(config) as llm:
    result = await llm.run("Your prompt")
    print(result.text)
```

## claude-only-sdk Quick Start

```python
from claude_only_sdk import ClaudeAdvanced, SessionTemplate

async with ClaudeAdvanced() as client:
    result = await client.run_with_template(
        "Review src/auth.py",
        SessionTemplate.SECURITY_ANALYST,
    )
```

## MCP Isolation Pattern

Use `mcp_config` to run isolated sessions with specific MCP servers:

```python
from claude_only_sdk import ClaudeAdvanced, ClaudeAdvancedConfig

config = ClaudeAdvancedConfig(
    mcp_config="./config/serena.mcp.json",  # Only Serena MCP loaded
    timeout=300.0,
)

async with ClaudeAdvanced(config) as client:
    # This runs in separate session with only Serena MCP
    result = await client.run("Use Serena to analyze code")
```

This pattern is useful for:
- Keeping heavy MCP servers out of main session context
- Running specialized tools in isolated environments
- Reducing token usage in main conversation

## Core Components

### U-llm-sdk
| Component | Purpose |
|-----------|---------|
| `LLM` / `LLMSync` | Async/Sync clients |
| `LLMConfig` | Unified configuration |
| `BaseProvider` | Provider abstraction |
| `InterventionHook` | RAG integration protocol |

### claude-only-sdk
| Component | Purpose |
|-----------|---------|
| `ClaudeAdvanced` | Extended Claude client |
| `AgentDefinition` | Specialized agent config |
| `SessionManager` | Virtual session injection |
| `TaskExecutor` | Parallel task execution |
| `AutonomousOrchestrator` | Prompt-based parallelization |

## Key Patterns

### Provider Selection
```python
async with LLM.auto() as llm:  # Claude > Codex > Gemini
    result = await llm.run("prompt")
```

### Session Continuity
```python
llm = LLM(config).resume(session_id)
async with llm:
    result = await llm.run("Continue...")
```

### Agent Definition (Claude)
```python
planner = AgentDefinition(
    name="planner",
    system_prompt="You are a planning specialist...",
    tier=ModelTier.HIGH,
    allowed_tools=["Read", "Grep", "Glob"],
)
```

## Output Handling

```python
result: LLMResult
result.text              # Response text (may be empty for FILE_EDIT!)
result.files_modified    # List[FileChange]
result.commands_run      # List[CommandRun]
result.session_id        # For continuation
result.result_type       # TEXT/CODE/FILE_EDIT/COMMAND/MIXED
```

For detailed patterns: [references/u-llm-sdk.md]
For Claude features: [references/claude-only-sdk.md]
For types reference: [references/llm-types.md]
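Since `result.text` can be empty for `FILE_EDIT` results, output handling usually needs a fallback. A self-contained sketch (these stand-in types only mirror the fields listed above; the real classes live in u-llm-sdk):

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import List

class ResultType(Enum):  # stand-in for the SDK's result_type values
    TEXT = auto()
    FILE_EDIT = auto()

@dataclass
class LLMResult:  # stand-in mirroring the fields above
    text: str
    result_type: ResultType
    files_modified: List[str] = field(default_factory=list)

def summarize(result: LLMResult) -> str:
    # text may be empty for FILE_EDIT, so fall back to the file list
    if result.result_type is ResultType.FILE_EDIT and not result.text:
        return "edited: " + ", ".join(result.files_modified)
    return result.text

print(summarize(LLMResult("", ResultType.FILE_EDIT, ["src/auth.py"])))
# → edited: src/auth.py
```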

```

### references/sdk-patterns.md

```markdown
# SDK Integration Patterns

## SDK Architecture (CLI-based)

SDK Layer (Python) → asyncio.create_subprocess_exec → Claude CLI → Anthropic API

**Key point**: the SDK does not call the API directly; it spawns the CLI.

## Basic u-llm-sdk Usage

```python
from u_llm_sdk import LLM, LLMConfig
from llm_types import Provider, ModelTier, AutoApproval

config = LLMConfig(
    provider=Provider.CLAUDE,
    tier=ModelTier.LOW,
    auto_approval=AutoApproval.FULL,
    timeout=60.0,
)

async with LLM(config) as llm:
    result = await llm.run("Your prompt")
```

## claude-only-sdk MCP Isolation

```python
from claude_only_sdk import ClaudeAdvanced, ClaudeAdvancedConfig

config = ClaudeAdvancedConfig(
    mcp_config="./config/serena.mcp.json",
    timeout=300.0,
)

async with ClaudeAdvanced(config) as client:
    result = await client.run("Use Serena to analyze code")
```

## LLMResult Structure

| Field | Description |
|------|------|
| result.text | Response text |
| result.files_modified | List[FileChange] |
| result.commands_run | List[CommandRun] |
| result.session_id | Session ID |
| result.result_type | TEXT/CODE/FILE_EDIT/COMMAND/MIXED |

## Provider Selection

| Provider | CLI | Description |
|----------|-----|------|
| Provider.CLAUDE | claude | Anthropic Claude |
| Provider.CODEX | codex | OpenAI Codex |
| Provider.GEMINI | gemini | Google Gemini |

## Resuming a Session

```python
llm = LLM(config).resume(session_id)
async with llm:
    result = await llm.run("Continue...")
```

```

### references/background-agent.md

```markdown
# Background Agent Implementation

## Verification Results (2025-12-30)

| Test | Result |
|--------|------|
| Hook return time | 0.010s (immediate) |
| Background LLM call | Succeeded |
| Main session impact | None |

## Pattern: Non-blocking SDK Calls

```bash
#!/bin/bash
# background-sdk-hook.sh

INPUT=$(cat)
SESSION_ID=$(echo "$INPUT" | jq -r '.session_id')
LOG_DIR=".claude/hooks/logs"
mkdir -p "$LOG_DIR"

# Start the background process
(
    python3 /path/to/sdk-agent.py "$SESSION_ID" > "$LOG_DIR/bg-$SESSION_ID.json" 2>&1
) &

BACKGROUND_PID=$!

# Return immediately (non-blocking)
jq -n --arg pid "$BACKGROUND_PID" '{"status": "started", "pid": $pid}'
exit 0
```

## Python Background Agent

```python
#!/usr/bin/env python3
# sdk-agent.py

import asyncio
import json
import sys
from pathlib import Path

sys.path.insert(0, "/path/to/u-llm-sdk/src")

async def background_agent(session_id: str):
    result = {"session_id": session_id, "completed": False}
    
    try:
        from u_llm_sdk import LLM, LLMConfig
        from llm_types import Provider, ModelTier, AutoApproval

        config = LLMConfig(
            provider=Provider.CLAUDE,
            tier=ModelTier.LOW,
            auto_approval=AutoApproval.FULL,
            timeout=60.0,
        )

        async with LLM(config) as llm:
            llm_result = await llm.run("Your evaluation prompt")
            result["response"] = llm_result.text[:500]
            result["completed"] = True

    except Exception as e:
        result["error"] = str(e)

    return result

if __name__ == "__main__":
    session_id = sys.argv[1] if len(sys.argv) > 1 else "unknown"
    output = asyncio.run(background_agent(session_id))
    print(json.dumps(output, indent=2))
```

## Result Collection Pattern

Read the background agent's result from a later hook:

```bash
#!/bin/bash
# stop-hook-collect-results.sh

INPUT=$(cat)
SESSION_ID=$(echo "$INPUT" | jq -r '.session_id')
RESULT_FILE=".claude/hooks/logs/bg-$SESSION_ID.json"

if [[ -f "$RESULT_FILE" ]]; then
    RESULT=$(cat "$RESULT_FILE")
    COMPLETED=$(echo "$RESULT" | jq -r '.completed')
    
    if [[ "$COMPLETED" == "true" ]]; then
        RESPONSE=$(echo "$RESULT" | jq -r '.response')
        echo "Background agent result: $RESPONSE"
    fi
fi

exit 0
```

## Example Configuration (settings.json)

```json
{
  "hooks": {
    "PostToolUse": [{
      "matcher": "Edit|Write",
      "hooks": [{
        "type": "command",
        "command": ".claude/hooks/background-sdk-hook.sh"
      }]
    }],
    "Stop": [{
      "hooks": [{
        "type": "command",
        "command": ".claude/hooks/stop-hook-collect-results.sh"
      }]
    }]
  }
}
```

## Cautions

1. **Timeout**: the background process is independent of the hook timeout (60s)
2. **Result files**: key them by session ID to prevent collisions
3. **Cleanup**: clean up temporary files in a SessionEnd hook
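The cleanup caution can be sketched as a small step (demonstrated in a temporary directory; the `bg-*.json` naming follows the hooks earlier in this file):

```python
import glob
import os
import tempfile

# Simulate a hook log directory with one leftover result file.
log_dir = tempfile.mkdtemp()
open(os.path.join(log_dir, "bg-abc123.json"), "w").close()

# SessionEnd cleanup: remove per-session background result files.
for path in glob.glob(os.path.join(log_dir, "bg-*.json")):
    os.remove(path)

print(glob.glob(os.path.join(log_dir, "bg-*.json")))  # → []
```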

```

### references/cost-optimization.md

```markdown
# Cost Optimization

## Cost Structure (verified 2025-12-30)

### Claude Code Subscription

| Plan | Monthly cost | Included usage |
|------|---------|------------|
| Pro | $20 | ~40-80 hours of Sonnet/week |
| Max 5x | $100 | ~140 hours of Sonnet/week |
| Max 20x | $200 | ~480 hours of Sonnet/week |

### Direct API Billing

| Model | Input (per MTok) | Output (per MTok) |
|------|-------------|-------------|
| Opus 4.5 | $5 | $25 |
| Sonnet 4 | $3 | $15 |
| Opus 4/4.1 | $15 | $75 |
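As a rough illustration of per-token billing (Sonnet 4 rates from the table above; the token counts are invented):

```python
# Sonnet 4 direct-API rates: $3 per million input tokens,
# $15 per million output tokens (from the table above).
input_tokens, output_tokens = 10_000, 2_000
cost = input_tokens / 1e6 * 3 + output_tokens / 1e6 * 15
print(f"${cost:.3f}")  # → $0.060
```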

## SDK Cost = Subscription Usage

```
SDK → CLI → subscription usage (no additional cost)
     └─ not a direct API call!
```

**Key point**: the SDK spawns the CLI, so subscribers incur no additional API cost.

## Cost Optimization Strategies

### 1. ModelTier Selection

```python
from llm_types import ModelTier

# Cost order: LOW < MEDIUM < HIGH
config = LLMConfig(tier=ModelTier.LOW)  # simple evaluations
config = LLMConfig(tier=ModelTier.MEDIUM)  # general tasks
config = LLMConfig(tier=ModelTier.HIGH)  # complex analysis
```

### 2. Prompt Minimization

```python
# Bad: long prompt
await llm.run("Please carefully analyze this command and determine if it is safe...")

# Good: short prompt
await llm.run("Safe? YES/NO: " + command)
```

### 3. Selective Invocation

```python
# Don't call the LLM on every command; only on risky patterns
DANGEROUS_PATTERNS = ["rm ", "sudo ", "chmod ", "DROP TABLE"]

if any(p in command for p in DANGEROUS_PATTERNS):
    result = await llm.run(f"Is this dangerous? {command}")
```

### 4. Local LLM Alternatives

| Option | Cost | Quality | Speed |
|------|------|------|------|
| Claude (subscription) | Included | High | Moderate |
| ollama (local) | Free | Medium | Slow |
| llama.cpp | Free | Medium | Slow |

```bash
# Example ollama usage
RESULT=$(ollama run llama3.2 "Is this safe? $COMMAND" 2>/dev/null)
```

## type: "prompt" vs SDK

| Property | type: "prompt" | SDK |
|------|----------------|-----|
| Cost | Included in subscription | Included in subscription |
| Speed | Fast | Slow (~30s) |
| Complexity | Low | High |
| Customization | Limited | Flexible |

**Recommendation**: use `type: "prompt"` for simple evaluations and the SDK for complex logic.

```

### references/real-world-projects.md

```markdown
# Real-World Project Examples

## GitHub Projects

### 1. claude-code-hooks-mastery
**URL**: https://github.com/disler/claude-code-hooks-mastery

**Highlights**:
- Demos of 8 hook lifecycle events
- Single-file UV scripts
- JSON payload capture

**Structure**:
```
.claude/hooks/
├── capture_user_prompt.py
├── capture_pre_tool_use.py
├── capture_post_tool_use.py
└── capture_stop.py
```

### 2. claude-hooks (TypeScript)
**URL**: https://github.com/johnlindquist/claude-hooks

**Highlights**:
- TypeScript type safety
- Typed payloads for every hook type
- Session log persistence

### 3. claude-code-infrastructure-showcase
**URL**: https://github.com/diet103/claude-code-infrastructure-showcase

**Highlights**:
- Infrastructure from 6 months of real-world use
- skill-activation-prompt hook
- 10 specialized agents
- 3 slash commands

**Structure**:
```
.claude/
├── hooks/
│   ├── skill-activation-prompt.sh
│   ├── post-tool-use-tracker.sh
│   └── tsc-check.sh
├── agents/
└── commands/
```

### 4. claude-hooks (Python)
**URL**: https://github.com/decider/claude-hooks

**Highlights**:
- Python-based validation
- Automated quality checks
- Notification integration

## Usage Patterns

### Pattern 1: Skill Auto-Activation

```bash
# skill-activation-prompt.sh
INPUT=$(cat)
PROMPT=$(echo "$INPUT" | jq -r '.prompt' | tr '[:upper:]' '[:lower:]')

if echo "$PROMPT" | grep -qE "커밋|commit"; then  # matches "commit" in Korean or English
    echo "💡 Suggestion: /commit"
fi
```

### Pattern 2: TypeScript Check

```bash
# tsc-check.sh (PostToolUse:Edit)
INPUT=$(cat)
FILE=$(echo "$INPUT" | jq -r '.tool_input.file_path // empty')

if [[ "$FILE" == *.ts ]] || [[ "$FILE" == *.tsx ]]; then
    npx tsc --noEmit "$FILE" 2>&1 || exit 2
fi
```

### 패턴 3: Git Branch per Session

```bash
# session-branch.sh (SessionStart)
INPUT=$(cat)
SESSION_ID=$(echo "$INPUT" | jq -r '.session_id')

git checkout -b "claude/$SESSION_ID" 2>/dev/null || true
```

## GitButler Integration

**URL**: https://blog.gitbutler.com/automate-your-ai-workflows-with-claude-code-hooks

**Approach**:
- Separate Git index per session
- File tracking in PreToolUse/PostToolUse
- Commit to the session branch on Stop

## Anthropic Official Best Practices

**URL**: https://www.anthropic.com/engineering/claude-code-best-practices

**Key points**:
- Automating GitHub events in headless mode
- The /project:fix-github-issue command pattern
- Automatic label assignment

## Plugin Ecosystem (since Nov 2025)

**URL**: https://www.anthropic.com/news/claude-code-plugins

**Highlights**:
- Packages bundling slash commands, agents, MCP servers, and hooks
- One-line installation
- Dan Ávila: DevOps, documentation generation, testing
- Seth Hobson: 80+ specialized sub-agents

```
