
llm

Guidelines for implementing LLM (Language Model) functionality in the application

Packaged view

This page reorganizes the original catalog entry to put fit, installability, and workflow context first. The original raw source appears below.

Stars: 10,265

Hot score: 99

Updated: March 20, 2026

Overall rating: C (4.0)

Composite score: 4.0

Best-practice grade: B (77.6)

Install command

npx @skill-hub/cli install elie222-inbox-zero-llm

Repository

elie222/inbox-zero

Skill path: .claude/skills/llm



Best for

Primary workflow: Ship Full Stack.

Technical facets: Full Stack.

Target audience: everyone.

License: Unknown.

Original source

Catalog source: SkillHub Club.

Repository owner: elie222.

This is a mirrored public skill entry. Review the repository before installing it into shared or production workflows.

What it helps with

  • Install llm into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
  • Review https://github.com/elie222/inbox-zero before adding llm to shared team environments
  • Use llm for development workflows

Works across

Claude Code, Codex CLI, Gemini CLI, OpenCode

Favorites: 0.

Sub-skills: 0.

Aggregator: No.

Original source / Raw SKILL.md

---
name: llm
description: Guidelines for implementing LLM (Language Model) functionality in the application
---
# LLM Implementation Guidelines

## Directory Structure

LLM-related code is organized in specific directories:

- `apps/web/utils/ai/` - Main LLM implementations
- `apps/web/utils/llms/` - Core LLM utilities and configurations
- `apps/web/__tests__/` - LLM-specific tests

## Key Files

- `utils/llms/index.ts` - Core LLM functionality
- `utils/llms/model.ts` - Model definitions and configurations
- `utils/usage.ts` - Usage tracking and monitoring

## Implementation Pattern

Follow this standard structure for LLM-related functions:

```typescript
import { z } from "zod";
import { createScopedLogger } from "@/utils/logger";
import { createGenerateObject } from "@/utils/llms";
import type { EmailAccountWithAI } from "@/utils/llms/types";
// getModel is assumed to live in utils/llms/model per the "Key Files" list above
import { getModel } from "@/utils/llms/model";

const logger = createScopedLogger("feature-name");

export async function featureFunction(options: {
  inputData: InputType; // replace InputType with the feature's input type
  emailAccount: EmailAccountWithAI;
}) {
  const { inputData, emailAccount } = options;

  if (!inputData /* || other validation conditions */) {
    logger.warn("Invalid input for feature function");
    return null;
  }

  const system = `[Detailed system prompt that defines the LLM's role and task]`;

  const prompt = `[User prompt with context and specific instructions]

<data>
...
</data>

${emailAccount.about ? `<user_info>${emailAccount.about}</user_info>` : ""}`;

  const modelOptions = getModel(emailAccount.user);

  const generateObject = createGenerateObject({
    userEmail: emailAccount.email,
    label: "Feature Name",
    modelOptions,
  });

  const result = await generateObject({
    ...modelOptions,
    system,
    prompt,
    schema: z.object({
      field1: z.string(),
      field2: z.number(),
      nested: z.object({
        subfield: z.string(),
      }),
      array_field: z.array(z.string()),
    }),
  });

  return result.object;
}
```
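The prompt in the template wraps data in XML-like tags (`<data>`, `<user_info>`). A minimal formatting helper in that spirit is sketched below; the name `toPromptTag` and its signature are hypothetical, not the repo's actual utilities, and it also applies the whitespace-collapsing and truncation advice from the best practices:

```typescript
// Hypothetical helper -- the repo's real formatting utilities may differ.
// Wraps a value in an XML-like tag, collapsing excess whitespace and
// truncating long inputs so prompts stay within token budgets.
function toPromptTag(tag: string, value: string, maxLength = 2000): string {
  const cleaned = value.replace(/\s+/g, " ").trim();
  const truncated =
    cleaned.length > maxLength ? `${cleaned.slice(0, maxLength)}...` : cleaned;
  return `<${tag}>${truncated}</${tag}>`;
}
```

Formatting data through one helper like this keeps structure consistent across similar functions, which is the point of best practice 5.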

## Best Practices

1. **System and User Prompts**:

   - Keep system prompts and user prompts separate
   - System prompt should define the LLM's role and task specifications
   - User prompt should contain the actual data and context

2. **Schema Validation**:

   - Always define a Zod schema for response validation
   - Make schemas as specific as possible to guide the LLM output

3. **Logging**:

   - Use descriptive scoped loggers for each feature
   - Log inputs and outputs with appropriate log levels
   - Include relevant context in log messages

4. **Error Handling**:

   - Implement early returns for invalid inputs
   - Use proper error types and logging
   - Implement fallbacks for AI failures
   - Add retry logic for transient failures using `withRetry`

5. **Input Formatting**:

   - Use XML-like tags to structure data in prompts
   - Remove excessive whitespace and truncate long inputs
   - Format data consistently across similar functions

6. **Type Safety**:

   - Use TypeScript types for all parameters and return values
   - Define clear interfaces for complex input/output structures

7. **Code Organization**:
   - Keep related AI functions in the same file or directory
   - Extract common patterns into utility functions
   - Document complex AI logic with clear comments
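Best practice 4 references a `withRetry` helper for transient failures. Its actual implementation is not shown in this skill, but the pattern it names (bounded retries with exponential backoff) can be sketched as follows; the signature and option names here are assumptions:

```typescript
// Minimal retry sketch -- the repo's real withRetry may differ.
// Retries an async operation with exponential backoff, rethrowing the
// last error once the retry budget is exhausted.
async function withRetry<T>(
  fn: () => Promise<T>,
  { retries = 3, baseDelayMs = 250 } = {},
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      if (attempt < retries) {
        // Exponential backoff: 250ms, 500ms, 1000ms, ...
        await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
      }
    }
  }
  throw lastError;
}
```

Wrapping the `generateObject` call in such a helper keeps transient provider errors from surfacing to users, while still failing fast on the final attempt.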

## Testing

See [llm-test.mdc](mdc:.cursor/rules/llm-test.mdc)