streaming-llm-responses
Implement real-time streaming UI patterns for AI chat applications. Use when adding response lifecycle handlers, progress indicators, client effects, or thread state synchronization. Covers onResponseStart/End, onEffect, ProgressUpdateEvent, and client tools. NOT when building basic chat without real-time feedback.
Packaged view
This page reorganizes the original catalog entry around fit, installability, and workflow context first. The original raw source lives below.
Install command
npx @skill-hub/cli install mjunaidca-mjs-agent-skills-streaming-llm-responses
Repository
Skill path: .claude/skills/streaming-llm-responses
Best for
Primary workflow: Analyze Data & AI.
Technical facets: Full Stack, Frontend, Data / AI.
Target audience: everyone.
License: Unknown.
Original source
Catalog source: SkillHub Club.
Repository owner: mjunaidca.
This is a mirrored public skill entry; review the repository before installing it into production workflows.
What it helps with
- Install streaming-llm-responses into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
- Review https://github.com/mjunaidca/mjs-agent-skills before adding streaming-llm-responses to shared team environments
- Use streaming-llm-responses for development workflows
Works across
Favorites: 0.
Sub-skills: 0.
Aggregator: No.
Original source / Raw SKILL.md
---
name: streaming-llm-responses
description: |
Implement real-time streaming UI patterns for AI chat applications. Use when adding response
lifecycle handlers, progress indicators, client effects, or thread state synchronization.
Covers onResponseStart/End, onEffect, ProgressUpdateEvent, and client tools.
NOT when building basic chat without real-time feedback.
---
# Streaming LLM Responses
Build responsive, real-time chat interfaces with streaming feedback.
## Quick Start
```typescript
import { useChatKit } from "@openai/chatkit-react";
const chatkit = useChatKit({
api: { url: API_URL, domainKey: DOMAIN_KEY },
onResponseStart: () => setIsResponding(true),
onResponseEnd: () => setIsResponding(false),
onEffect: ({ name, data }) => {
if (name === "update_status") updateUI(data);
},
});
```
---
## Response Lifecycle
```
User sends message
↓
onResponseStart() fires
↓
[Streaming: tokens arrive, ProgressUpdateEvents shown]
↓
onResponseEnd() fires
↓
UI unlocks, ready for next interaction
```
---
## Core Patterns
### 1. Response Lifecycle Handlers
Lock UI during AI response to prevent race conditions:
```typescript
// useAppStore is an app-specific store (e.g. Zustand) exposing lock/unlock actions
import { useState } from "react";
import { ChatKit, useChatKit } from "@openai/chatkit-react";

function ChatWithLifecycle() {
const [isResponding, setIsResponding] = useState(false);
const lockInteraction = useAppStore((s) => s.lockInteraction);
const unlockInteraction = useAppStore((s) => s.unlockInteraction);
const chatkit = useChatKit({
api: { url: API_URL, domainKey: DOMAIN_KEY },
onResponseStart: () => {
setIsResponding(true);
lockInteraction(); // Disable map/canvas/form interactions
},
onResponseEnd: () => {
setIsResponding(false);
unlockInteraction();
},
onError: ({ error }) => {
console.error("ChatKit error:", error);
setIsResponding(false);
unlockInteraction();
},
});
return (
<div>
{isResponding && <LoadingOverlay />}
<ChatKit control={chatkit.control} />
</div>
);
}
```
### 2. Client Effects (Fire-and-Forget)
Server sends effects to update client UI without expecting a response:
**Backend - Streaming Effects:**
```python
from chatkit.types import ClientEffectEvent
async def respond(self, thread, item, context):
# ... agent processing ...
# Fire client effect to update UI
yield ClientEffectEvent(
name="update_status",
data={
"state": {"energy": 80, "happiness": 90},
"flash": "Status updated!"
}
)
# Another effect
yield ClientEffectEvent(
name="show_notification",
data={"message": "Task completed!"}
)
```
**Frontend - Handling Effects:**
```typescript
const chatkit = useChatKit({
api: { url: API_URL, domainKey: DOMAIN_KEY },
onEffect: ({ name, data }) => {
switch (name) {
case "update_status":
applyStatusUpdate(data.state);
if (data.flash) setFlashMessage(data.flash);
break;
case "add_marker":
addMapMarker(data);
break;
case "select_mode":
setSelectionMode(data.mode);
break;
}
},
});
```
### 3. Progress Updates
Show "Searching...", "Loading...", "Analyzing..." during long operations:
```python
from chatkit.types import ProgressUpdateEvent
@function_tool
async def search_articles(ctx: AgentContext, query: str) -> str:
    """Search for articles matching the query."""
    yield ProgressUpdateEvent(message="Searching articles...")
    results = await article_store.search(query)
    yield ProgressUpdateEvent(message=f"Found {len(results)} articles...")
    for i, article in enumerate(results):
        if i % 5 == 0:
            yield ProgressUpdateEvent(
                message=f"Processing article {i+1}/{len(results)}..."
            )
    # An async generator cannot `return` a value; yield the final
    # tool result as the last event instead.
    yield format_results(results)
```
### 4. Thread Lifecycle Events
Track thread changes for persistence and UI updates:
```typescript
const chatkit = useChatKit({
api: { url: API_URL, domainKey: DOMAIN_KEY },
onThreadChange: ({ threadId }) => {
setThreadId(threadId);
if (threadId) localStorage.setItem("lastThreadId", threadId);
clearSelections();
},
onThreadLoadStart: ({ threadId }) => {
setIsLoadingThread(true);
},
onThreadLoadEnd: ({ threadId }) => {
setIsLoadingThread(false);
},
});
```
### 5. Client Tools (State Query)
AI needs to read client-side state to make decisions:
**Backend - Defining Client Tool:**
```python
@function_tool(name_override="get_selected_items")
async def get_selected_items(ctx: AgentContext) -> dict:
"""Get the items currently selected on the canvas.
This is a CLIENT TOOL - executed in browser, result comes back.
"""
yield ProgressUpdateEvent(message="Reading selection...")
pass # Actual execution happens on client
```
**Frontend - Handling Client Tools:**
```typescript
const chatkit = useChatKit({
api: { url: API_URL, domainKey: DOMAIN_KEY },
onClientTool: ({ name, params }) => {
switch (name) {
case "get_selected_items":
return { itemIds: selectedItemIds };
case "get_current_viewport":
return {
center: mapRef.current.getCenter(),
zoom: mapRef.current.getZoom(),
};
case "get_form_data":
return { values: formRef.current.getValues() };
default:
throw new Error(`Unknown client tool: ${name}`);
}
},
});
```
---
## Client Effects vs Client Tools
| Type | Direction | Response Required | Use Case |
|------|-----------|-------------------|----------|
| **Client Effect** | Server → Client | No (fire-and-forget) | Update UI, show notifications |
| **Client Tool** | Server → Client → Server | Yes (return value) | Get client state for AI decision |
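The table's distinction maps directly onto two dispatch paths: effects are optional and silently ignorable, while tools must answer or fail loudly. A small sketch of that split (the registry names are illustrative, not ChatKit APIs):

```typescript
// Effects: fire-and-forget, no return value; unknown names are ignored.
type EffectHandler = (data: unknown) => void;
// Tools: must return a value the server can hand back to the AI.
type ToolHandler = (params: unknown) => unknown;

const effects: Record<string, EffectHandler> = {};
const tools: Record<string, ToolHandler> = {};

function handleEffect(name: string, data: unknown): void {
  effects[name]?.(data); // missing handler is fine: fire-and-forget
}

function handleClientTool(name: string, params: unknown): unknown {
  const tool = tools[name];
  if (!tool) throw new Error(`Unknown client tool: ${name}`); // tools must answer
  return tool(params);
}
```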
---
## Common Patterns by Use Case
### Interactive Map/Canvas
```typescript
onResponseStart: () => lockCanvas(),
onResponseEnd: () => unlockCanvas(),
onEffect: ({ name, data }) => {
if (name === "add_marker") addMarker(data);
if (name === "pan_to") panTo(data.location);
},
onClientTool: ({ name }) => {
if (name === "get_selection") return getSelectedItems();
},
```
### Form-Based UI
```typescript
onResponseStart: () => setFormDisabled(true),
onResponseEnd: () => setFormDisabled(false),
onClientTool: ({ name }) => {
if (name === "get_form_values") return form.getValues();
},
```
### Game/Simulation
```typescript
onResponseStart: () => pauseSimulation(),
onResponseEnd: () => resumeSimulation(),
onEffect: ({ name, data }) => {
if (name === "update_entity") updateEntity(data);
if (name === "show_notification") showToast(data.message);
},
```
---
## Thread Title Generation
Dynamically update thread title based on conversation:
```python
class TitleAgent:
async def generate_title(self, first_message: str) -> str:
result = await Runner.run(
Agent(
name="TitleGenerator",
instructions="Generate a 3-5 word title.",
model="gpt-4o-mini", # Fast model
),
input=f"First message: {first_message}",
)
return result.final_output
# In ChatKitServer
async def respond(self, thread, item, context):
if not thread.title and item:
title = await self.title_agent.generate_title(item.content)
thread.title = title
await self.store.save_thread(thread, context)
```
---
## Anti-Patterns
1. **Not locking UI during response** - Leads to race conditions
2. **Blocking in effects** - Effects should be fire-and-forget
3. **Heavy computation in onEffect** - Use requestAnimationFrame for DOM updates
4. **Missing error handling** - Always handle onError to unlock UI
5. **Not persisting thread state** - Use onThreadChange to save context
---
## Verification
Run: `python3 scripts/verify.py`
Expected: `✓ streaming-llm-responses skill ready`
## If Verification Fails
1. Check: references/ folder has streaming-patterns.md
2. **Stop and report** if still failing
## References
- [references/streaming-patterns.md](references/streaming-patterns.md) - Complete streaming configuration
---
## Referenced Files
> The following files are referenced in this skill and included for context.
### references/streaming-patterns.md
```markdown
# Streaming Patterns Reference
Complete useChatKit configuration with all streaming handlers.
## Full Configuration
```typescript
import { useChatKit } from "@openai/chatkit-react";
const chatkit = useChatKit({
api: { url: API_URL, domainKey: DOMAIN_KEY },
// === Lifecycle Events ===
onReady: () => {
console.log("ChatKit initialized");
},
onError: ({ error }) => {
console.error("ChatKit error:", error);
setIsResponding(false);
unlockInteraction();
},
onResponseStart: () => {
setIsResponding(true);
lockInteraction();
},
onResponseEnd: () => {
setIsResponding(false);
unlockInteraction();
},
// === Thread Events ===
onThreadChange: ({ threadId }) => {
setThreadId(threadId);
if (threadId) localStorage.setItem("lastThreadId", threadId);
clearSelections();
},
onThreadLoadStart: ({ threadId }) => {
console.log("Loading thread:", threadId);
setIsLoadingThread(true);
},
onThreadLoadEnd: ({ threadId }) => {
console.log("Thread loaded:", threadId);
setIsLoadingThread(false);
},
// === Client Interaction ===
onEffect: ({ name, data }) => {
switch (name) {
case "update_status":
applyStatusUpdate(data.state);
if (data.flash) setFlashMessage(data.flash);
break;
case "add_marker":
addMapMarker(data);
break;
case "pan_to":
panToLocation(data.location);
break;
case "select_mode":
setSelectionMode(data.mode);
break;
case "show_notification":
showToast(data.message);
break;
}
},
onClientTool: ({ name, params }) => {
switch (name) {
case "get_selected_items":
return { itemIds: selectedItemIds };
case "get_current_viewport":
return {
center: mapRef.current.getCenter(),
zoom: mapRef.current.getZoom(),
};
case "get_form_data":
return { values: formRef.current.getValues() };
default:
throw new Error(`Unknown client tool: ${name}`);
}
},
// === Analytics ===
onLog: ({ name, data }) => {
if (name === "message.feedback") {
trackFeedback(data);
}
if (name === "message.share") {
trackShare(data);
}
},
});
```
## Effect Catalog
Common effect types and their handling:
| Effect Name | Data Shape | UI Action |
|-------------|------------|-----------|
| `update_status` | `{ state: {...}, flash?: string }` | Update state store, show toast |
| `add_marker` | `{ lat, lng, label }` | Add map marker |
| `pan_to` | `{ location: [lat, lng] }` | Pan map to location |
| `select_mode` | `{ mode: string, lineId?: string }` | Enable selection mode |
| `show_notification` | `{ message: string, type?: string }` | Show toast notification |
| `update_entity` | `{ id, ...props }` | Update entity in store |
| `clear_selection` | `{}` | Clear current selection |
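The catalog above can be expressed as a discriminated union so `onEffect` handling is exhaustively type-checked. The shapes mirror the table and are illustrative, not a published ChatKit type:

```typescript
// One variant per row of the effect catalog table.
type Effect =
  | { name: "update_status"; data: { state: Record<string, unknown>; flash?: string } }
  | { name: "add_marker"; data: { lat: number; lng: number; label: string } }
  | { name: "pan_to"; data: { location: [number, number] } }
  | { name: "select_mode"; data: { mode: string; lineId?: string } }
  | { name: "show_notification"; data: { message: string; type?: string } }
  | { name: "update_entity"; data: { id: string } & Record<string, unknown> }
  | { name: "clear_selection"; data: Record<string, never> };

// Narrowing on `name` gives the matching `data` shape in each branch.
function describe(effect: Effect): string {
  switch (effect.name) {
    case "pan_to":
      return `pan to ${effect.data.location.join(",")}`;
    case "show_notification":
      return `toast: ${effect.data.message}`;
    default:
      return effect.name;
  }
}
```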
## Backend Effect Emission
```python
from chatkit.types import ClientEffectEvent, ProgressUpdateEvent
async def respond(self, thread, item, context):
# Progress updates during processing
yield ProgressUpdateEvent(message="Starting analysis...")
# Do work
result = await process_request(item.content)
yield ProgressUpdateEvent(message="Finalizing...")
# Fire client effect
yield ClientEffectEvent(
name="update_status",
data={
"state": result.state,
"flash": "Analysis complete!"
}
)
# Another effect
yield ClientEffectEvent(
name="pan_to",
data={"location": result.location}
)
```
## Client Tool Implementation
Client tools allow the AI to query client-side state:
```python
from agents import function_tool
from chatkit.types import ProgressUpdateEvent
@function_tool(name_override="get_viewport_bounds")
async def get_viewport_bounds(ctx: AgentContext) -> dict:
"""Get the current map viewport bounds.
Returns the northeast and southwest corners of the visible area.
"""
yield ProgressUpdateEvent(message="Reading viewport...")
# The actual execution happens on the client
# The return type documents expected response shape
pass
@function_tool(name_override="get_selected_features")
async def get_selected_features(ctx: AgentContext) -> list:
"""Get the currently selected map features.
Returns a list of feature IDs that the user has selected.
"""
yield ProgressUpdateEvent(message="Reading selection...")
pass
```
## Error Handling
Always unlock UI on error:
```typescript
onError: ({ error }) => {
console.error("ChatKit error:", error);
// Always unlock UI
setIsResponding(false);
unlockInteraction();
// Show user-friendly error
showErrorToast("Something went wrong. Please try again.");
// Optionally report to monitoring
reportError(error);
},
```
## Evidence Sources
Patterns derived from:
- `cat-lounge/backend/app/cat_agent.py`
- `cat-lounge/frontend/src/components/ChatKitPanel.tsx`
- `metro-map/backend/app/agents/metro_map_agent.py`
- `metro-map/frontend/src/components/ChatKitPanel.tsx`
- `news-guide/backend/app/agents/news_agent.py`
- `news-guide/backend/app/agents/title_agent.py`
```
### scripts/verify.py
```python
#!/usr/bin/env python3
"""Verify streaming-llm-responses skill has required references."""
import os
import sys
def main():
skill_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
refs_dir = os.path.join(skill_dir, "references")
required = ["streaming-patterns.md"]
missing = [r for r in required if not os.path.isfile(os.path.join(refs_dir, r))]
if not missing:
print("✓ streaming-llm-responses skill ready")
sys.exit(0)
else:
print(f"✗ Missing: {', '.join(missing)}")
sys.exit(1)
if __name__ == "__main__":
main()
```