
moai-platform-neon

Provides detailed guidance for using Neon's serverless PostgreSQL, covering branching, auto-scaling compute, connection pooling, and PITR. Includes complete code examples for API integration, branch management, and edge function deployment with Drizzle and Prisma ORMs.

Packaged view

This page reorganizes the original catalog entry around fit, installability, and workflow context first. The original raw source lives below.

Stars: 838
Hot score: 99
Updated: March 20, 2026
Overall rating: A (8.6)
Composite score: 6.3
Best-practice grade: B (71.9)

Install command

npx @skill-hub/cli install modu-ai-moai-adk-moai-platform-neon
Tags: postgresql, serverless, database-branching, edge-computing, devops

Repository

modu-ai/moai-adk

Skill path: src/moai_adk/templates/.claude/skills/moai-platform-neon


Open repository

Best for

Primary workflow: Run DevOps.

Technical facets: DevOps, Backend, Integration.

Target audience: Developers building serverless applications with PostgreSQL who need database branching for preview environments, auto-scaling compute, or edge deployment compatibility.

License: Unknown.

Original source

Catalog source: SkillHub Club.

Repository owner: modu-ai.

This is a mirrored public skill entry. Review the repository before installing it into production workflows.

What it helps with

  • Install moai-platform-neon into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
  • Review https://github.com/modu-ai/moai-adk before adding moai-platform-neon to shared team environments
  • Use moai-platform-neon for cloud workflows

Works across

Claude Code, Codex CLI, Gemini CLI, OpenCode

Favorites: 0.

Sub-skills: 0.

Aggregator: No.

Original source / Raw SKILL.md

---
name: "moai-platform-neon"
description: "Neon serverless PostgreSQL specialist covering auto-scaling, database branching, PITR, and connection pooling. Use when building serverless apps needing PostgreSQL, implementing preview environments, or optimizing database costs."
version: 2.0.0
category: "platform"
modularized: true
user-invocable: false
tags: ['neon', 'postgresql', 'serverless', 'branching', 'auto-scaling']
context7-libraries: ['/neondatabase/neon']
related-skills: ['moai-platform-supabase', 'moai-lang-typescript', 'moai-domain-database']
updated: 2026-01-08
status: "active"
allowed-tools:
  - Read
  - Write
  - Bash
  - Grep
  - Glob
  - mcp__context7__resolve-library-id
  - mcp__context7__get-library-docs
---

# moai-platform-neon: Neon Serverless PostgreSQL Specialist

## Quick Reference

Specialized knowledge for Neon serverless PostgreSQL, covering auto-scaling, scale-to-zero compute, database branching, Point-in-Time Recovery (PITR), and modern ORM integration.

### Core Capabilities

Serverless Compute: Auto-scaling PostgreSQL with scale-to-zero for cost optimization
Database Branching: Instant copy-on-write branches for dev, staging, and preview environments
Point-in-Time Recovery: 30-day PITR with instant restore to any timestamp
Connection Pooling: Built-in connection pooler for serverless and edge compatibility
PostgreSQL 16: Full PostgreSQL 16 compatibility with extensions support

### Quick Decision Guide

Need serverless PostgreSQL with auto-scaling: Use Neon
Need database branching for CI/CD: Use Neon branching
Need edge-compatible database: Use Neon with connection pooling
Need instant preview environments: Use Neon branch per PR
Need vector search: Consider Supabase with pgvector instead

### Context7 Documentation Access

To fetch the latest Neon documentation:

Step 1: Resolve the library ID using mcp__context7__resolve-library-id with library name "neondatabase/neon"

Step 2: Fetch documentation using mcp__context7__get-library-docs with the resolved Context7 ID, specifying topics like "branching", "connection pooling", or "auto-scaling"

---

## Module Index

This skill is organized into focused modules for progressive disclosure:

### Core Modules

[Database Branching](modules/branching-workflows.md): Copy-on-write branches for development, preview environments, and CI/CD integration with GitHub Actions

[Auto-Scaling and Compute](modules/auto-scaling.md): Compute unit configuration, scale-to-zero settings, and cost optimization strategies

[Connection Pooling](modules/connection-pooling.md): Serverless connection pooling for edge runtimes, WebSocket configuration, and pool sizing

[PITR and Backups](modules/pitr-backups.md): Point-in-time recovery, branch restoration, and backup strategies

### Supporting Files

[Reference Guide](reference.md): API reference, environment configuration, and provider comparison

[Code Examples](examples.md): Complete working examples for common integration patterns

---

## Implementation Guide

### Setup and Configuration

Package Installation:
```bash
npm install @neondatabase/serverless
npm install drizzle-orm  # Optional: Drizzle ORM
npm install @prisma/client prisma  # Optional: Prisma ORM
```

Environment Configuration:
```env
# Direct connection (for migrations)
DATABASE_URL=postgresql://user:password@ep-example-123456.us-east-2.aws.neon.tech/dbname?sslmode=require

# Pooled connection (for serverless/edge); the host carries a -pooler suffix
DATABASE_URL_POOLED=postgresql://user:password@ep-example-123456-pooler.us-east-2.aws.neon.tech/dbname?sslmode=require

# Neon API for branching
NEON_API_KEY=neon_api_key_xxx
NEON_PROJECT_ID=project-xxx
```

### Serverless Driver Usage

Basic Query Execution:
```typescript
import { neon } from '@neondatabase/serverless'

const sql = neon(process.env.DATABASE_URL!)

// Simple query
const users = await sql`SELECT * FROM users WHERE active = true`

// Parameterized query (SQL injection safe)
const userId = 'user-123'
const user = await sql`SELECT * FROM users WHERE id = ${userId}`

// Transaction support
const result = await sql.transaction([
  sql`UPDATE accounts SET balance = balance - 100 WHERE id = ${fromId}`,
  sql`UPDATE accounts SET balance = balance + 100 WHERE id = ${toId}`
])
```

### Drizzle ORM Integration

Schema Definition:
```typescript
import { pgTable, uuid, text, timestamp, boolean, jsonb } from 'drizzle-orm/pg-core'

export const users = pgTable('users', {
  id: uuid('id').primaryKey().defaultRandom(),
  email: text('email').notNull().unique(),
  name: text('name'),
  createdAt: timestamp('created_at').defaultNow(),
  metadata: jsonb('metadata')
})
```

Drizzle Client Setup:
```typescript
import { neon } from '@neondatabase/serverless'
import { drizzle } from 'drizzle-orm/neon-http'
import * as schema from './schema'

const sql = neon(process.env.DATABASE_URL!)
export const db = drizzle(sql, { schema })

// Query examples
const allUsers = await db.select().from(schema.users)
```

### Prisma ORM Integration

Prisma with Neon Serverless Driver:
```typescript
import { Pool, neonConfig } from '@neondatabase/serverless'
import { PrismaNeon } from '@prisma/adapter-neon'
import { PrismaClient } from '@prisma/client'
import ws from 'ws'

// Required in Node.js environments, which lack a built-in WebSocket
neonConfig.webSocketConstructor = ws

const pool = new Pool({ connectionString: process.env.DATABASE_URL })
const adapter = new PrismaNeon(pool)
export const prisma = new PrismaClient({ adapter })
```

---

## Provider Decision Guide

### When to Use Neon

Serverless Applications: Auto-scaling and scale-to-zero reduce costs significantly
Preview Environments: Instant branching enables per-PR databases with production data
Edge Deployment: Connection pooling provides edge runtime compatibility
Development Workflow: Branch from production for realistic development data
Cost Optimization: Pay only for active compute time with scale-to-zero

### When to Consider Alternatives

Need Vector Search: Consider Supabase with pgvector or dedicated vector database
Need Real-time Subscriptions: Consider Supabase or Convex for real-time features
Need NoSQL Flexibility: Consider Firestore or Convex for document storage
Need Built-in Auth: Consider Supabase for integrated authentication

### Pricing Reference

Free Tier: 3GB storage, 100 compute hours per month
Pro Tier: Usage-based pricing with additional storage and compute
Scale-to-Zero: No charges during idle periods

---

## Works Well With

- moai-platform-supabase - Alternative when RLS or pgvector needed
- moai-lang-typescript - TypeScript patterns for Drizzle and Prisma
- moai-domain-backend - Backend architecture with database integration
- moai-domain-database - General database patterns and optimization

---

Status: Production Ready
Version: 2.0.0
Generated with: MoAI-ADK Skill Factory v2.0
Last Updated: 2026-01-06
Technology: Neon Serverless PostgreSQL


---

## Referenced Files

> The following files are referenced in this skill and included for context.

### modules/branching-workflows.md

```markdown
# Database Branching Workflows

## Overview

Neon database branching creates instant copy-on-write clones of your database, enabling isolated development environments, preview deployments, and safe testing with production data.

---

## Core Concepts

### Copy-on-Write Architecture

Branch Creation: Instant creation with no data copying
Storage Efficiency: Branches share unchanged data with parent
Write Isolation: Changes in branch do not affect parent
Inheritance: New data in parent does not propagate to existing branches

### Branch Types

Main Branch: Primary production database
Development Branch: Long-lived branch for development work
Feature Branch: Short-lived branch for specific features
Preview Branch: Ephemeral branch for PR preview environments
Restore Branch: Branch created for point-in-time recovery
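
For illustration, a branch's type can be inferred from its name. The `classifyBranch` helper below is an assumption of this guide, not a Neon API; Neon does not attach a type to branches, so the mapping simply follows the naming patterns used throughout this skill:

```typescript
// Hypothetical helper: infer a branch's role from the naming patterns
// used in this skill (pr-*, feature-*, restore-*). Neon itself does not
// type branches; any unmatched name is treated as a development branch.
type BranchType = 'main' | 'development' | 'feature' | 'preview' | 'restore'

function classifyBranch(name: string): BranchType {
  if (name === 'main') return 'main'
  if (name.startsWith('pr-')) return 'preview'
  if (name.startsWith('feature-')) return 'feature'
  if (name.startsWith('restore-')) return 'restore'
  return 'development'
}
```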

---

## Branch Management API

### NeonBranchManager Implementation

```typescript
class NeonBranchManager {
  private apiKey: string
  private projectId: string
  private baseUrl = 'https://console.neon.tech/api/v2'

  constructor(apiKey: string, projectId: string) {
    this.apiKey = apiKey
    this.projectId = projectId
  }

  private async request(path: string, options: RequestInit = {}) {
    const response = await fetch(`${this.baseUrl}${path}`, {
      ...options,
      headers: {
        'Authorization': `Bearer ${this.apiKey}`,
        'Content-Type': 'application/json',
        ...options.headers
      }
    })
    if (!response.ok) throw new Error(`Neon API error: ${response.statusText}`)
    return response.json()
  }

  async createBranch(name: string, parentId: string = 'main') {
    return this.request(`/projects/${this.projectId}/branches`, {
      method: 'POST',
      body: JSON.stringify({
        branch: { name, parent_id: parentId }
      })
    })
  }

  async deleteBranch(branchId: string) {
    return this.request(`/projects/${this.projectId}/branches/${branchId}`, {
      method: 'DELETE'
    })
  }

  async listBranches() {
    return this.request(`/projects/${this.projectId}/branches`)
  }

  async getBranchConnectionString(branchId: string) {
    const endpoints = await this.request(
      `/projects/${this.projectId}/branches/${branchId}/endpoints`
    )
    return endpoints.endpoints[0]?.connection_uri
  }
}
```

---

## Preview Environment Pattern

### Per-PR Database Branches

```typescript
async function createPreviewEnvironment(prNumber: number) {
  const branchManager = new NeonBranchManager(
    process.env.NEON_API_KEY!,
    process.env.NEON_PROJECT_ID!
  )

  // Create branch from main with PR identifier
  const branch = await branchManager.createBranch(`pr-${prNumber}`, 'main')

  // Get connection string for the new branch
  const connectionString = await branchManager.getBranchConnectionString(branch.branch.id)

  return {
    branchId: branch.branch.id,
    branchName: branch.branch.name,
    connectionString,
    createdAt: new Date().toISOString()
  }
}

async function cleanupPreviewEnvironment(prNumber: number) {
  const branchManager = new NeonBranchManager(
    process.env.NEON_API_KEY!,
    process.env.NEON_PROJECT_ID!
  )

  // Find and delete the PR branch
  const { branches } = await branchManager.listBranches()
  const prBranch = branches.find(b => b.name === `pr-${prNumber}`)

  if (prBranch) {
    await branchManager.deleteBranch(prBranch.id)
  }
}
```

---

## GitHub Actions Integration

### Preview Environment Workflow

```yaml
name: Preview Environment

on:
  pull_request:
    types: [opened, synchronize, closed]

env:
  NEON_API_KEY: ${{ secrets.NEON_API_KEY }}
  NEON_PROJECT_ID: ${{ secrets.NEON_PROJECT_ID }}

jobs:
  create-preview:
    if: github.event.action != 'closed'
    runs-on: ubuntu-latest
    outputs:
      branch_id: ${{ steps.create-branch.outputs.branch_id }}
      database_url: ${{ steps.create-branch.outputs.database_url }}
    steps:
      - name: Create Neon Branch
        id: create-branch
        run: |
          RESPONSE=$(curl -s -X POST \
            -H "Authorization: Bearer $NEON_API_KEY" \
            -H "Content-Type: application/json" \
            -d '{"branch":{"name":"pr-${{ github.event.number }}"}}' \
            "https://console.neon.tech/api/v2/projects/$NEON_PROJECT_ID/branches")

          BRANCH_ID=$(echo "$RESPONSE" | jq -r '.branch.id')
          echo "branch_id=$BRANCH_ID" >> $GITHUB_OUTPUT

          # Get connection string
          ENDPOINTS=$(curl -s \
            -H "Authorization: Bearer $NEON_API_KEY" \
            "https://console.neon.tech/api/v2/projects/$NEON_PROJECT_ID/branches/$BRANCH_ID/endpoints")

          DATABASE_URL=$(echo "$ENDPOINTS" | jq -r '.endpoints[0].connection_uri')
          echo "database_url=$DATABASE_URL" >> $GITHUB_OUTPUT

      - name: Comment PR with Database URL
        uses: actions/github-script@v7
        with:
          script: |
            github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: `Preview database created: \`pr-${{ github.event.number }}\`\n\nBranch ID: \`${{ steps.create-branch.outputs.branch_id }}\``
            })

  deploy-preview:
    needs: create-preview
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Run Migrations
        env:
          DATABASE_URL: ${{ needs.create-preview.outputs.database_url }}
        run: |
          npm ci
          npm run db:migrate

      - name: Deploy Preview
        env:
          DATABASE_URL: ${{ needs.create-preview.outputs.database_url }}
        run: |
          # Deploy to preview environment (Vercel, Netlify, etc.)
          echo "Deploying with preview database..."

  cleanup-preview:
    if: github.event.action == 'closed'
    runs-on: ubuntu-latest
    steps:
      - name: Find Branch ID
        id: find-branch
        run: |
          BRANCHES=$(curl -s \
            -H "Authorization: Bearer $NEON_API_KEY" \
            "https://console.neon.tech/api/v2/projects/$NEON_PROJECT_ID/branches")

          BRANCH_ID=$(echo "$BRANCHES" | jq -r '.branches[] | select(.name == "pr-${{ github.event.number }}") | .id')
          echo "branch_id=$BRANCH_ID" >> $GITHUB_OUTPUT

      - name: Delete Neon Branch
        if: steps.find-branch.outputs.branch_id != ''
        run: |
          curl -X DELETE \
            -H "Authorization: Bearer $NEON_API_KEY" \
            "https://console.neon.tech/api/v2/projects/$NEON_PROJECT_ID/branches/${{ steps.find-branch.outputs.branch_id }}"
```

---

## Development Workflow Pattern

### Feature Branch Database

```typescript
interface FeatureBranchConfig {
  featureName: string
  baseBranch?: string
  autoCleanupDays?: number
}

async function createFeatureBranch(config: FeatureBranchConfig) {
  const { featureName, baseBranch = 'main', autoCleanupDays = 7 } = config

  const branchManager = new NeonBranchManager(
    process.env.NEON_API_KEY!,
    process.env.NEON_PROJECT_ID!
  )

  // Create branch with feature prefix
  const branchName = `feature-${featureName}-${Date.now()}`
  const branch = await branchManager.createBranch(branchName, baseBranch)

  // Store cleanup metadata
  const metadata = {
    branchId: branch.branch.id,
    branchName,
    featureName,
    createdAt: new Date().toISOString(),
    expiresAt: new Date(Date.now() + autoCleanupDays * 24 * 60 * 60 * 1000).toISOString()
  }

  return {
    ...metadata,
    connectionString: await branchManager.getBranchConnectionString(branch.branch.id)
  }
}

async function cleanupExpiredBranches() {
  const branchManager = new NeonBranchManager(
    process.env.NEON_API_KEY!,
    process.env.NEON_PROJECT_ID!
  )

  const { branches } = await branchManager.listBranches()
  const now = new Date()

  for (const branch of branches) {
    // Check if branch is a feature branch and expired
    if (branch.name.startsWith('feature-')) {
      const createdAt = new Date(branch.created_at)
      const ageInDays = (now.getTime() - createdAt.getTime()) / (24 * 60 * 60 * 1000)

      if (ageInDays > 7) {
        console.log(`Cleaning up expired branch: ${branch.name}`)
        await branchManager.deleteBranch(branch.id)
      }
    }
  }
}
```

---

## Branch Reset Pattern

### Reset Branch to Parent State

```typescript
async function resetBranchToParent(branchName: string) {
  const branchManager = new NeonBranchManager(
    process.env.NEON_API_KEY!,
    process.env.NEON_PROJECT_ID!
  )

  // Find the current branch
  const { branches } = await branchManager.listBranches()
  const currentBranch = branches.find(b => b.name === branchName)

  if (!currentBranch) {
    throw new Error(`Branch ${branchName} not found`)
  }

  const parentId = currentBranch.parent_id

  // Delete current branch
  await branchManager.deleteBranch(currentBranch.id)

  // Recreate with same name from parent
  const newBranch = await branchManager.createBranch(branchName, parentId)

  return {
    branchId: newBranch.branch.id,
    connectionString: await branchManager.getBranchConnectionString(newBranch.branch.id),
    resetAt: new Date().toISOString()
  }
}
```

---

## Best Practices

### Branch Naming Conventions

Preview Branches: pr-{number} for pull request previews
Feature Branches: feature-{name}-{timestamp} for development
Staging Branches: staging or staging-{version} for staging environments
Restore Branches: restore-{timestamp} for point-in-time recovery
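
The conventions above can be captured in small name builders. These helpers are hypothetical (the slug rules and timestamp formats are choices of this sketch, not requirements of Neon):

```typescript
// Hypothetical name builders matching the conventions above.
function previewBranchName(prNumber: number): string {
  return `pr-${prNumber}`
}

function featureBranchName(feature: string, at: Date = new Date()): string {
  // Slugify so arbitrary feature titles become safe, readable branch names
  const slug = feature
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')
    .replace(/^-+|-+$/g, '')
  return `feature-${slug}-${at.getTime()}`
}

function restoreBranchName(at: Date = new Date()): string {
  // ISO timestamp with ':' and '.' replaced for readability
  return `restore-${at.toISOString().replace(/[:.]/g, '-')}`
}
```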

### Lifecycle Management

Automatic Cleanup: Configure scheduled cleanup for expired branches
Branch Limits: Monitor branch count against project limits
Connection Management: Use pooled connections for branch databases
Cost Awareness: Branches consume compute when active

### Security Considerations

Sensitive Data: Consider data masking for non-production branches
Access Control: Limit API key permissions for branch operations
Audit Trail: Log branch creation and deletion operations
Credential Rotation: Rotate branch credentials on schedule
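
Data masking can be as simple as rewriting personal fields right after branch creation. The `maskEmail` helper and the `users` table in the commented query are illustrative assumptions, not part of Neon:

```typescript
// Illustrative only: deterministic email masking for non-production branches.
// Keeping the original domain preserves a realistic distribution for testing.
function maskEmail(email: string, id: string): string {
  const domain = email.split('@')[1] ?? 'example.com'
  return `user-${id}@${domain}`
}

// Applied with the serverless driver after creating a branch (sketch):
// const sql = neon(branchConnectionString)
// const rows = await sql`SELECT id, email FROM users`
// for (const row of rows) {
//   await sql`UPDATE users SET email = ${maskEmail(row.email, row.id)} WHERE id = ${row.id}`
// }
```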

---

Version: 2.0.0
Last Updated: 2026-01-06

```

### modules/auto-scaling.md

```markdown
# Auto-Scaling and Compute Management

## Overview

Neon auto-scaling automatically adjusts compute resources based on workload demand, with scale-to-zero capability for cost optimization during idle periods.

---

## Compute Unit Specifications

### Available Compute Sizes

0.25 CU Specifications:
- vCPU: 0.25
- RAM: 1 GB
- Use Case: Development, testing, low-traffic applications

0.5 CU Specifications:
- vCPU: 0.5
- RAM: 2 GB
- Use Case: Light production workloads, staging environments

1 CU Specifications:
- vCPU: 1
- RAM: 4 GB
- Use Case: Standard production applications

2 CU Specifications:
- vCPU: 2
- RAM: 8 GB
- Use Case: Medium workloads, moderate traffic

4 CU Specifications:
- vCPU: 4
- RAM: 16 GB
- Use Case: Heavy workloads, high traffic applications

8 CU Specifications:
- vCPU: 8
- RAM: 32 GB
- Use Case: High-performance requirements, data processing

---

## Auto-Scaling Configuration

### Configuration via API

```typescript
interface AutoScalingConfig {
  minCu: number      // Minimum compute units (0.25 for scale-to-zero)
  maxCu: number      // Maximum compute units
  suspendTimeout: number  // Seconds before suspension (0 to disable)
}

async function configureAutoScaling(endpointId: string, config: AutoScalingConfig) {
  const response = await fetch(
    `https://console.neon.tech/api/v2/projects/${process.env.NEON_PROJECT_ID}/endpoints/${endpointId}`,
    {
      method: 'PATCH',
      headers: {
        'Authorization': `Bearer ${process.env.NEON_API_KEY}`,
        'Content-Type': 'application/json'
      },
      body: JSON.stringify({
        endpoint: {
          autoscaling_limit_min_cu: config.minCu,
          autoscaling_limit_max_cu: config.maxCu,
          suspend_timeout_seconds: config.suspendTimeout
        }
      })
    }
  )

  if (!response.ok) {
    throw new Error(`Failed to configure auto-scaling: ${response.statusText}`)
  }

  return response.json()
}
```

### Environment-Specific Configurations

Development Configuration:
```typescript
const devConfig: AutoScalingConfig = {
  minCu: 0.25,        // Scale to zero when idle
  maxCu: 0.5,         // Limited compute for development
  suspendTimeout: 300  // Suspend after 5 minutes idle
}
```

Staging Configuration:
```typescript
const stagingConfig: AutoScalingConfig = {
  minCu: 0.25,        // Scale to zero during off-hours
  maxCu: 1,           // Moderate compute for testing
  suspendTimeout: 600  // Suspend after 10 minutes idle
}
```

Production Configuration:
```typescript
const productionConfig: AutoScalingConfig = {
  minCu: 0.5,         // Always-on minimum for faster response
  maxCu: 4,           // Scale up for peak traffic
  suspendTimeout: 3600 // Suspend after 1 hour idle (or 0 to disable)
}
```

High-Traffic Production Configuration:
```typescript
const highTrafficConfig: AutoScalingConfig = {
  minCu: 1,           // Higher baseline for consistent performance
  maxCu: 8,           // Maximum scale for peak loads
  suspendTimeout: 0   // Never suspend (always-on)
}
```

---

## Scale-to-Zero Behavior

### How Scale-to-Zero Works

Idle Detection: Neon monitors connection activity and query execution
Suspension: After suspend_timeout seconds of inactivity, compute suspends
Wake-Up: First connection request triggers automatic wake-up
Cold Start: Typical wake-up time is 500ms to 2 seconds
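
Because the first request after suspension can time out while compute resumes, callers may want a small retry wrapper. This `withColdStartRetry` sketch is an assumption of this guide (the attempt count and backoff values are illustrative, not Neon recommendations):

```typescript
// Retry an async operation to absorb cold-start latency after wake-up.
async function withColdStartRetry<T>(
  fn: () => Promise<T>,
  retries = 3,
  baseDelayMs = 500
): Promise<T> {
  let lastError: unknown
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn()
    } catch (err) {
      lastError = err
      if (attempt === retries) break
      // Linear backoff while the endpoint resumes
      await new Promise(resolve => setTimeout(resolve, baseDelayMs * (attempt + 1)))
    }
  }
  throw lastError
}

// Usage sketch:
// const users = await withColdStartRetry(() => sql`SELECT * FROM users`)
```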

### Cold Start Optimization

Connection Pooling Impact:
- Use pooled connections to reduce cold start frequency
- Pooler maintains connection state during suspension
- First query after wake-up experiences latency

Warming Strategies:
```typescript
// Health check endpoint to keep database warm
async function warmDatabase() {
  const sql = neon(process.env.DATABASE_URL_POOLED!)

  // Simple query to prevent suspension
  await sql`SELECT 1`
}

// Schedule periodic warming (every 4 minutes for 5-minute timeout)
setInterval(warmDatabase, 4 * 60 * 1000)
```

### Cost Implications

Scale-to-Zero Benefits:
- Zero compute charges during idle periods
- Ideal for development and low-traffic applications
- Automatic cost optimization without manual intervention

Trade-offs:
- Cold start latency on first request after suspension
- Not suitable for latency-sensitive applications
- Consider always-on minimum for production workloads

---

## Endpoint Management

### Get Current Endpoint Configuration

```typescript
async function getEndpointConfig(endpointId: string) {
  const response = await fetch(
    `https://console.neon.tech/api/v2/projects/${process.env.NEON_PROJECT_ID}/endpoints/${endpointId}`,
    {
      headers: {
        'Authorization': `Bearer ${process.env.NEON_API_KEY}`
      }
    }
  )

  return response.json()
}
```

### List All Endpoints

```typescript
async function listEndpoints() {
  const response = await fetch(
    `https://console.neon.tech/api/v2/projects/${process.env.NEON_PROJECT_ID}/endpoints`,
    {
      headers: {
        'Authorization': `Bearer ${process.env.NEON_API_KEY}`
      }
    }
  )

  return response.json()
}
```

### Monitor Endpoint Status

```typescript
interface EndpointStatus {
  id: string
  state: 'active' | 'idle' | 'suspended'
  currentCu: number
  lastActiveAt: string
}

async function getEndpointStatus(endpointId: string): Promise<EndpointStatus> {
  const { endpoint } = await getEndpointConfig(endpointId)

  return {
    id: endpoint.id,
    state: endpoint.current_state,
    // The endpoint API does not report live CU; use the configured minimum as a floor
    currentCu: endpoint.autoscaling_limit_min_cu,
    lastActiveAt: endpoint.last_active
  }
}
```

---

## Cost Optimization Strategies

### Development Environments

Strategy: Aggressive scale-to-zero with low maximum compute

```typescript
// Minimize costs for development databases
await configureAutoScaling(devEndpointId, {
  minCu: 0.25,
  maxCu: 0.5,
  suspendTimeout: 180  // 3 minutes - quick suspension
})
```

### Staging Environments

Strategy: Balance between cost and performance for testing

```typescript
// Cost-effective staging with reasonable performance
await configureAutoScaling(stagingEndpointId, {
  minCu: 0.25,
  maxCu: 2,
  suspendTimeout: 600  // 10 minutes
})
```

### Production Environments

Strategy: Prioritize performance with cost awareness

```typescript
// Production with always-on minimum
await configureAutoScaling(prodEndpointId, {
  minCu: 0.5,         // Avoid cold starts
  maxCu: 4,           // Handle traffic spikes
  suspendTimeout: 0   // Never suspend
})
```

### Off-Hours Optimization

```typescript
// Reduce compute during off-peak hours
async function adjustForOffHours(endpointId: string, isOffHours: boolean) {
  const config = isOffHours
    ? { minCu: 0.25, maxCu: 1, suspendTimeout: 300 }
    : { minCu: 0.5, maxCu: 4, suspendTimeout: 0 }

  await configureAutoScaling(endpointId, config)
}

// Schedule-based adjustment
const hour = new Date().getUTCHours()
const isOffHours = hour >= 2 && hour < 8  // 2 AM - 8 AM UTC
await adjustForOffHours(productionEndpointId, isOffHours)
```

---

## Monitoring and Alerts

### Compute Usage Tracking

```typescript
interface ComputeMetrics {
  currentCu: number
  avgCu: number
  peakCu: number
  suspendedMinutes: number
  activeMinutes: number
}

async function getComputeMetrics(endpointId: string): Promise<ComputeMetrics> {
  // Fetch from Neon console API or monitoring integration
  const { endpoint } = await getEndpointConfig(endpointId)

  return {
    currentCu: endpoint.current_state === 'active' ? endpoint.autoscaling_limit_min_cu : 0,
    avgCu: 0,  // Calculate from historical data
    peakCu: endpoint.autoscaling_limit_max_cu,
    suspendedMinutes: 0,  // Calculate from suspension logs
    activeMinutes: 0  // Calculate from activity logs
  }
}
```

### Alert Thresholds

High Compute Alert: Notify when consistently at max CU
Frequent Suspensions: Alert if cold starts affecting performance
Cost Threshold: Alert when monthly compute exceeds budget
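
These thresholds can be expressed as a pure check over the `ComputeMetrics` shape above. The specific cutoffs (90% of max CU, idle time exceeding active time) are assumptions of this sketch, to be tuned per project:

```typescript
// Hypothetical alert evaluation; thresholds are illustrative choices.
interface ComputeAlert {
  kind: 'high-compute' | 'frequent-suspensions' | 'cost-threshold'
  message: string
}

function evaluateAlerts(
  metrics: { avgCu: number; suspendedMinutes: number; activeMinutes: number },
  maxCu: number,
  monthlyBudgetHours: number
): ComputeAlert[] {
  const alerts: ComputeAlert[] = []
  // Consistently at max CU: average within 90% of the configured ceiling
  if (metrics.avgCu >= maxCu * 0.9) {
    alerts.push({ kind: 'high-compute', message: `avg ${metrics.avgCu} CU is near the ${maxCu} CU limit` })
  }
  // Frequent suspensions: more idle than active time hints at cold-start impact
  if (metrics.suspendedMinutes > metrics.activeMinutes) {
    alerts.push({ kind: 'frequent-suspensions', message: 'endpoint suspends more than it runs' })
  }
  // Budget: active compute hours exceed the monthly allowance
  if (metrics.activeMinutes / 60 > monthlyBudgetHours) {
    alerts.push({ kind: 'cost-threshold', message: 'monthly compute hours exceed budget' })
  }
  return alerts
}
```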

---

## Best Practices

### Configuration Guidelines

Development: Use 0.25-0.5 CU range with short suspension timeout
Staging: Use 0.25-2 CU range with moderate suspension timeout
Production: Use 0.5-4 CU range, consider disabling suspension for critical apps
High-Traffic: Use 1-8 CU range with suspension disabled

### Performance Considerations

Connection Pooling: Always use pooled connections with auto-scaling
Query Optimization: Optimize queries to reduce compute time
Indexing: Proper indexes reduce CPU usage and allow smaller compute
Caching: Implement application-level caching to reduce database load

### Cost Management

Monitor Usage: Track compute hours and identify optimization opportunities
Right-Size: Adjust max CU based on actual peak usage
Schedule Scaling: Reduce compute during known low-traffic periods
Branch Cleanup: Delete unused branches to avoid dormant compute costs

---

Version: 2.0.0
Last Updated: 2026-01-06

```

### modules/connection-pooling.md

```markdown
# Connection Pooling for Serverless

## Overview

Neon provides built-in connection pooling optimized for serverless environments, eliminating connection overhead and enabling edge runtime compatibility.

---

## Connection Types

### Direct Connection

Purpose: Database migrations, admin operations, session-based work
Format: postgresql://user:password@ep-example-123456.us-east-2.aws.neon.tech/dbname?sslmode=require
Characteristics: Full PostgreSQL protocol, session persistence, limited connections

### Pooled Connection

Purpose: Application queries, serverless functions, edge runtimes
Format: postgresql://user:password@ep-example-123456-pooler.us-east-2.aws.neon.tech/dbname?sslmode=require
Characteristics: Connection multiplexing, HTTP-based driver support, high concurrency

---

## Environment Configuration

### Dual Connection Setup

```env
# Direct connection for migrations and admin tasks
DATABASE_URL=postgresql://user:password@ep-example-123456.us-east-2.aws.neon.tech/dbname?sslmode=require

# Pooled connection for application queries (-pooler host suffix)
DATABASE_URL_POOLED=postgresql://user:password@ep-example-123456-pooler.us-east-2.aws.neon.tech/dbname?sslmode=require
```

### Connection Selection Logic

```typescript
// Use appropriate connection based on context
function getConnectionString(context: 'migration' | 'application' | 'edge'): string {
  switch (context) {
    case 'migration':
      // Direct connection for schema changes
      return process.env.DATABASE_URL!
    case 'application':
      // Pooled connection for general queries
      return process.env.DATABASE_URL_POOLED!
    case 'edge':
      // Pooled connection required for edge runtimes
      return process.env.DATABASE_URL_POOLED!
    default:
      return process.env.DATABASE_URL_POOLED!
  }
}
```

---

## Serverless Driver Configuration

### HTTP-Based Driver

```typescript
import { neon } from '@neondatabase/serverless'

// HTTP driver - ideal for serverless and edge
const sql = neon(process.env.DATABASE_URL_POOLED!)

// Simple query
const users = await sql`SELECT * FROM users LIMIT 10`

// Parameterized query
const userId = 'user-123'
const user = await sql`SELECT * FROM users WHERE id = ${userId}`
```

### WebSocket-Based Driver

```typescript
import { Pool, neonConfig } from '@neondatabase/serverless'
import ws from 'ws'

// Required for Node.js environments
neonConfig.webSocketConstructor = ws

// WebSocket pool for session-based operations
const pool = new Pool({
  connectionString: process.env.DATABASE_URL_POOLED!,
  max: 10  // Maximum connections in local pool
})

const client = await pool.connect()
try {
  await client.query('BEGIN')
  // Transaction operations
  await client.query('COMMIT')
} finally {
  client.release()
}
```

---

## Edge Runtime Integration

### Vercel Edge Functions

```typescript
import { neon } from '@neondatabase/serverless'

export const config = {
  runtime: 'edge'
}

export default async function handler(request: Request) {
  // Must use pooled connection for edge
  const sql = neon(process.env.DATABASE_URL_POOLED!)

  const { searchParams } = new URL(request.url)
  const limit = parseInt(searchParams.get('limit') || '10')

  const users = await sql`
    SELECT id, name, email
    FROM users
    WHERE active = true
    ORDER BY created_at DESC
    LIMIT ${limit}
  `

  return Response.json(users, {
    headers: {
      'Cache-Control': 'public, s-maxage=60'
    }
  })
}
```

### Cloudflare Workers

```typescript
import { neon } from '@neondatabase/serverless'

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const sql = neon(env.DATABASE_URL_POOLED)

    const data = await sql`SELECT COUNT(*) as count FROM users`

    return Response.json(data)
  }
}
```

### Next.js App Router

```typescript
// app/api/users/route.ts
import { neon } from '@neondatabase/serverless'
import { NextResponse } from 'next/server'

export const runtime = 'edge'

export async function GET() {
  const sql = neon(process.env.DATABASE_URL_POOLED!)

  const users = await sql`
    SELECT id, name, email, created_at
    FROM users
    WHERE active = true
    LIMIT 100
  `

  return NextResponse.json(users)
}
```

---

## Pool Configuration

### Pool Sizing Guidelines

Development Environment:
- Max Connections: 5-10
- Reason: Limited concurrent requests, cost optimization

Staging Environment:
- Max Connections: 10-25
- Reason: Moderate testing load, simulates production

Production Environment:
- Max Connections: 25-100
- Reason: Handle concurrent requests, account for connection overhead
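
The guidelines above can be captured in a small helper. This is a minimal sketch: the names and numbers are values chosen from within each recommended range, not prescribed constants — tune them for your actual workload.

```typescript
// Map an environment name to a concrete pool `max` value, following the
// sizing guidelines above.
type PoolEnv = 'development' | 'staging' | 'production'

const POOL_MAX: Record<PoolEnv, number> = {
  development: 10,  // 5-10: limited concurrency, cost optimization
  staging: 25,      // 10-25: moderate testing load
  production: 50    // 25-100: concurrent requests plus connection overhead
}

function poolMaxFor(env: string): number {
  // Unknown environments fall back to the conservative development size
  return POOL_MAX[env as PoolEnv] ?? POOL_MAX.development
}
```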

### WebSocket Pool Configuration

```typescript
import { Pool, neonConfig } from '@neondatabase/serverless'
import ws from 'ws'

neonConfig.webSocketConstructor = ws

const pool = new Pool({
  connectionString: process.env.DATABASE_URL_POOLED!,
  max: 25,                    // Maximum connections
  idleTimeoutMillis: 30000,   // Close idle connections after 30s
  connectionTimeoutMillis: 5000  // Connection timeout
})

// Graceful shutdown
process.on('SIGTERM', async () => {
  await pool.end()
  process.exit(0)
})
```

---

## Connection Pooling with ORMs

### Drizzle with Pooled Connection

```typescript
import { neon } from '@neondatabase/serverless'
import { drizzle } from 'drizzle-orm/neon-http'
import * as schema from './schema'

// HTTP driver with pooled connection
const sql = neon(process.env.DATABASE_URL_POOLED!)
export const db = drizzle(sql, { schema })

// All queries go through pooled connection
const users = await db.select().from(schema.users)
```

### Prisma with Pooled Connection

```typescript
import { Pool, neonConfig } from '@neondatabase/serverless'
import { PrismaNeon } from '@prisma/adapter-neon'
import { PrismaClient } from '@prisma/client'
import ws from 'ws'

neonConfig.webSocketConstructor = ws

const pool = new Pool({ connectionString: process.env.DATABASE_URL_POOLED! })
const adapter = new PrismaNeon(pool)

export const prisma = new PrismaClient({
  adapter,
  log: process.env.NODE_ENV === 'development' ? ['query'] : []
})
```

---

## Transaction Handling

### HTTP Driver Transactions

```typescript
import { neon } from '@neondatabase/serverless'

const sql = neon(process.env.DATABASE_URL_POOLED!)

// Transaction with an array of statements (all commit or all roll back).
// Note: HTTP transactions cannot reference the results of earlier statements
// in the same batch, so the order-event insert runs afterwards, once the
// returned id is known.
const [orderRows] = await sql.transaction([
  sql`INSERT INTO orders (user_id, total) VALUES (${userId}, ${total}) RETURNING id`,
  sql`UPDATE users SET order_count = order_count + 1 WHERE id = ${userId}`
])

await sql`INSERT INTO order_events (order_id, event) VALUES (${orderRows[0].id}, 'created')`
```

### WebSocket Pool Transactions

```typescript
import { Pool, neonConfig } from '@neondatabase/serverless'
import ws from 'ws'

neonConfig.webSocketConstructor = ws
const pool = new Pool({ connectionString: process.env.DATABASE_URL_POOLED! })

async function transferFunds(fromId: string, toId: string, amount: number) {
  const client = await pool.connect()

  try {
    await client.query('BEGIN')

    // Check balance
    const { rows } = await client.query(
      'SELECT balance FROM accounts WHERE id = $1 FOR UPDATE',
      [fromId]
    )

    if (rows.length === 0) {
      throw new Error(`Account ${fromId} not found`)
    }
    if (rows[0].balance < amount) {
      throw new Error('Insufficient funds')
    }

    // Perform transfer
    await client.query(
      'UPDATE accounts SET balance = balance - $1 WHERE id = $2',
      [amount, fromId]
    )
    await client.query(
      'UPDATE accounts SET balance = balance + $1 WHERE id = $2',
      [amount, toId]
    )

    await client.query('COMMIT')
  } catch (error) {
    await client.query('ROLLBACK')
    throw error
  } finally {
    client.release()
  }
}
```

---

## Error Handling

### Connection Error Handling

```typescript
import { neon } from '@neondatabase/serverless'

const sql = neon(process.env.DATABASE_URL_POOLED!)

async function queryWithRetry<T>(query: () => Promise<T>, maxRetries = 3): Promise<T> {
  let lastError: Error | null = null

  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      return await query()
    } catch (error) {
      lastError = error as Error

      // Retry on connection errors
      if (error instanceof Error && error.message.includes('ECONNREFUSED')) {
        console.log(`Connection attempt ${attempt} failed, retrying...`)
        await new Promise(resolve => setTimeout(resolve, 1000 * attempt))
        continue
      }

      // Don't retry on other errors
      throw error
    }
  }

  throw lastError
}

// Usage
const users = await queryWithRetry(() => sql`SELECT * FROM users`)
```

### Pool Exhaustion Handling

```typescript
import { Pool, neonConfig } from '@neondatabase/serverless'
import ws from 'ws'

neonConfig.webSocketConstructor = ws

const pool = new Pool({
  connectionString: process.env.DATABASE_URL_POOLED!,
  max: 25,
  connectionTimeoutMillis: 5000
})

async function queryWithPoolMonitoring<T>(query: () => Promise<T>): Promise<T> {
  const waitingCount = pool.waitingCount
  const totalCount = pool.totalCount

  if (waitingCount > 10) {
    console.warn(`High pool contention: ${waitingCount} waiting, ${totalCount} total`)
  }

  return query()
}
```

---

## Best Practices

### Connection Selection

Always Use Pooled for Application Code: Reduces per-request connection overhead
Use Direct for Migrations: Schema changes need the full PostgreSQL protocol
Edge Runtimes Require Pooled: Direct TCP connections are not available at the edge
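
The selection rules above can be expressed as a small helper. This is a sketch; it assumes both `DATABASE_URL` (direct) and `DATABASE_URL_POOLED` are set, as described in the reference section.

```typescript
// Pick the right connection string for a given purpose: migrations need the
// full PostgreSQL protocol (direct connection); application and edge code
// should always go through the pooler.
type Purpose = 'app' | 'edge' | 'migration'

function connectionStringFor(
  purpose: Purpose,
  env: Record<string, string | undefined>
): string {
  const key = purpose === 'migration' ? 'DATABASE_URL' : 'DATABASE_URL_POOLED'
  const value = env[key]
  if (!value) throw new Error(`${key} is not set`)
  return value
}
```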

### Performance Optimization

Reuse Connections: Create pool once, reuse across requests
Avoid Connection Leaks: Always release connections in finally blocks
Monitor Pool Metrics: Track waiting count and total connections
Set Appropriate Timeouts: Balance between availability and resource usage

### Security Considerations

SSL Always Required: Neon enforces sslmode=require
Credential Rotation: Rotate database credentials periodically
Environment Variables: Never hardcode connection strings
IP Allowlists: Configure IP restrictions for production

---

Version: 2.0.0
Last Updated: 2026-01-06

```

### modules/pitr-backups.md

```markdown
# Point-in-Time Recovery and Backups

## Overview

Neon provides automatic Point-in-Time Recovery (PITR) with up to 30 days of retention, enabling instant database restoration to any timestamp without manual backup management.

---

## PITR Fundamentals

### How PITR Works

Write-Ahead Logging: All database changes recorded to WAL
Continuous Backup: WAL continuously streamed to Neon storage
Instant Restore: Branch creation from any point in retention window
Copy-on-Write: Restore creates new branch without affecting original

### Retention Periods

Free Tier: 7 days of PITR history
Pro Tier: 7-30 days configurable
Enterprise: Custom retention periods
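
The retention tiers above translate into a simple restorability check. This is a hedged sketch — the tier names and day counts mirror the list; confirm your project's actual configured retention in the Neon console before relying on it.

```typescript
// Maximum PITR lookback per tier, in days (Enterprise is customizable;
// 30 here is an illustrative default)
const RETENTION_DAYS = { free: 7, pro: 30, enterprise: 30 } as const

function isRestorable(
  target: Date,
  tier: keyof typeof RETENTION_DAYS,
  now: Date = new Date()
): boolean {
  const oldest = new Date(now.getTime() - RETENTION_DAYS[tier] * 24 * 60 * 60 * 1000)
  // Restorable timestamps lie inside the retention window and not in the future
  return target >= oldest && target <= now
}
```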

---

## Restore Operations

### Restore to Specific Timestamp

```typescript
async function restoreToPoint(timestamp: Date, branchName?: string) {
  const name = branchName || `restore-${timestamp.toISOString().replace(/[:.]/g, '-')}`

  const response = await fetch(
    `https://console.neon.tech/api/v2/projects/${process.env.NEON_PROJECT_ID}/branches`,
    {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${process.env.NEON_API_KEY}`,
        'Content-Type': 'application/json'
      },
      body: JSON.stringify({
        branch: {
          name,
          parent_id: 'main',
          parent_timestamp: timestamp.toISOString()
        }
      })
    }
  )

  if (!response.ok) {
    throw new Error(`PITR restore failed: ${response.statusText}`)
  }

  return response.json()
}
```

### Common Restore Scenarios

Restore to 1 Hour Ago:
```typescript
const oneHourAgo = new Date(Date.now() - 60 * 60 * 1000)
const restored = await restoreToPoint(oneHourAgo, 'restore-1-hour')
```

Restore to Yesterday:
```typescript
const yesterday = new Date()
yesterday.setDate(yesterday.getDate() - 1)
yesterday.setHours(0, 0, 0, 0)
const restored = await restoreToPoint(yesterday, 'restore-yesterday')
```

Restore to Specific Datetime:
```typescript
const specificTime = new Date('2024-01-15T14:30:00Z')
const restored = await restoreToPoint(specificTime, 'restore-jan15-1430')
```

---

## Restore Workflow

### Complete Restore Process

```typescript
interface RestoreResult {
  branchId: string
  branchName: string
  connectionString: string
  restoredTo: string
  createdAt: string
}

async function performRestore(
  targetTimestamp: Date,
  options: {
    branchName?: string
    sourceBranch?: string
    validateFirst?: boolean
  } = {}
): Promise<RestoreResult> {
  const {
    branchName = `restore-${Date.now()}`,
    sourceBranch = 'main',
    validateFirst = true
  } = options

  // Validate timestamp is within retention period
  if (validateFirst) {
    const retentionDays = 30
    const oldestAllowed = new Date(Date.now() - retentionDays * 24 * 60 * 60 * 1000)

    if (targetTimestamp < oldestAllowed) {
      throw new Error(`Timestamp ${targetTimestamp.toISOString()} is outside ${retentionDays}-day retention period`)
    }

    if (targetTimestamp > new Date()) {
      throw new Error('Cannot restore to future timestamp')
    }
  }

  // Create restore branch
  const response = await fetch(
    `https://console.neon.tech/api/v2/projects/${process.env.NEON_PROJECT_ID}/branches`,
    {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${process.env.NEON_API_KEY}`,
        'Content-Type': 'application/json'
      },
      body: JSON.stringify({
        branch: {
          name: branchName,
          parent_id: sourceBranch,
          parent_timestamp: targetTimestamp.toISOString()
        }
      })
    }
  )

  if (!response.ok) {
    const error = await response.text()
    throw new Error(`Restore failed: ${error}`)
  }

  const data = await response.json()

  // Get connection string for restored branch
  const endpointsResponse = await fetch(
    `https://console.neon.tech/api/v2/projects/${process.env.NEON_PROJECT_ID}/branches/${data.branch.id}/endpoints`,
    {
      headers: {
        'Authorization': `Bearer ${process.env.NEON_API_KEY}`
      }
    }
  )

  const endpointsData = await endpointsResponse.json()

  return {
    branchId: data.branch.id,
    branchName: data.branch.name,
    connectionString: endpointsData.endpoints[0]?.connection_uri,
    restoredTo: targetTimestamp.toISOString(),
    createdAt: new Date().toISOString()
  }
}
```

---

## Data Validation After Restore

### Verify Restored Data

```typescript
import { neon } from '@neondatabase/serverless'

async function validateRestore(
  originalConnectionString: string,
  restoredConnectionString: string,
  tableName: string
): Promise<boolean> {
  const originalDb = neon(originalConnectionString)
  const restoredDb = neon(restoredConnectionString)

  // Compare row counts (identifiers are interpolated with sql(name), as
  // elsewhere in this guide — a plain ${tableName} would be sent as a value)
  const [originalCount] = await originalDb`SELECT COUNT(*) as count FROM ${originalDb(tableName)}`
  const [restoredCount] = await restoredDb`SELECT COUNT(*) as count FROM ${restoredDb(tableName)}`

  console.log(`Original count: ${originalCount.count}, Restored count: ${restoredCount.count}`)

  // Compare checksums for critical tables; ORDER BY belongs inside the
  // aggregate so the concatenation order is deterministic
  const [originalChecksum] = await originalDb`
    SELECT md5(string_agg(id::text, '' ORDER BY id)) as checksum
    FROM ${originalDb(tableName)}
  `
  const [restoredChecksum] = await restoredDb`
    SELECT md5(string_agg(id::text, '' ORDER BY id)) as checksum
    FROM ${restoredDb(tableName)}
  `

  const isValid = originalChecksum.checksum === restoredChecksum.checksum

  console.log(`Checksum validation: ${isValid ? 'PASS' : 'FAIL'}`)

  return isValid
}
```

### Restore Verification Checklist

```typescript
interface VerificationResult {
  tableChecks: Map<string, boolean>
  rowCounts: Map<string, number>
  overallStatus: 'success' | 'warning' | 'failure'
  issues: string[]
}

async function verifyRestore(connectionString: string): Promise<VerificationResult> {
  const sql = neon(connectionString)

  const criticalTables = ['users', 'orders', 'payments', 'products']
  const tableChecks = new Map<string, boolean>()
  const rowCounts = new Map<string, number>()
  const issues: string[] = []

  for (const table of criticalTables) {
    try {
      const [result] = await sql`
        SELECT COUNT(*) as count FROM ${sql(table)}
      `
      rowCounts.set(table, parseInt(result.count))
      tableChecks.set(table, true)
    } catch (error) {
      tableChecks.set(table, false)
      issues.push(`Table ${table}: ${error}`)
    }
  }

  const failedChecks = [...tableChecks.values()].filter(v => !v).length

  return {
    tableChecks,
    rowCounts,
    overallStatus: failedChecks === 0 ? 'success' : failedChecks < criticalTables.length / 2 ? 'warning' : 'failure',
    issues
  }
}
```

---

## Automated Backup Verification

### Scheduled PITR Testing

```typescript
interface BackupTestResult {
  testId: string
  timestamp: Date
  restoreSuccess: boolean
  validationSuccess: boolean
  duration: number
  cleanedUp: boolean
}

async function testPITRCapability(): Promise<BackupTestResult> {
  const testId = `pitr-test-${Date.now()}`
  const startTime = Date.now()
  let restoreSuccess = false
  let validationSuccess = false
  let cleanedUp = false

  try {
    // Test restore from 1 hour ago
    const testTimestamp = new Date(Date.now() - 60 * 60 * 1000)

    const restore = await performRestore(testTimestamp, {
      branchName: testId,
      validateFirst: true
    })

    restoreSuccess = true

    // Validate restored data
    const verification = await verifyRestore(restore.connectionString)
    validationSuccess = verification.overallStatus === 'success'

    // Cleanup test branch
    await fetch(
      `https://console.neon.tech/api/v2/projects/${process.env.NEON_PROJECT_ID}/branches/${restore.branchId}`,
      {
        method: 'DELETE',
        headers: {
          'Authorization': `Bearer ${process.env.NEON_API_KEY}`
        }
      }
    )
    cleanedUp = true

  } catch (error) {
    console.error('PITR test failed:', error)
  }

  return {
    testId,
    timestamp: new Date(),
    restoreSuccess,
    validationSuccess,
    duration: Date.now() - startTime,
    cleanedUp
  }
}
```

### GitHub Actions PITR Test

```yaml
name: PITR Verification

on:
  schedule:
    - cron: '0 2 * * *'  # Daily at 2 AM UTC
  workflow_dispatch:

jobs:
  test-pitr:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'

      - name: Install dependencies
        run: npm ci

      - name: Run PITR Test
        env:
          NEON_API_KEY: ${{ secrets.NEON_API_KEY }}
          NEON_PROJECT_ID: ${{ secrets.NEON_PROJECT_ID }}
          DATABASE_URL: ${{ secrets.DATABASE_URL }}
        run: npm run test:pitr

      - name: Report Results
        if: failure()
        uses: actions/github-script@v7
        with:
          script: |
            github.rest.issues.create({
              owner: context.repo.owner,
              repo: context.repo.repo,
              title: 'PITR Verification Failed',
              body: 'Daily PITR verification test failed. Please investigate immediately.',
              labels: ['critical', 'infrastructure']
            })
```

---

## Disaster Recovery Procedures

### Emergency Restore Procedure

Step 1: Identify Recovery Point
- Determine exact timestamp before incident
- Account for timezone differences (use UTC)
- Consider data propagation delays

Step 2: Create Restore Branch
```typescript
const incidentTime = new Date('2024-01-15T14:30:00Z')
const safeTime = new Date(incidentTime.getTime() - 5 * 60 * 1000)  // 5 minutes before
const restored = await performRestore(safeTime, { branchName: 'emergency-restore' })
```

Step 3: Validate Restored Data
- Check critical table row counts
- Verify recent transactions
- Confirm data integrity

Step 4: Switch Production
- Update connection strings to restored branch
- Monitor for errors
- Keep original branch for investigation

Step 5: Post-Incident Cleanup
- Document timeline and decisions
- Archive investigation branch
- Update runbooks as needed

### Production Cutover

```typescript
async function cutoverToRestored(restoredBranchId: string) {
  // Get connection details for restored branch
  const endpoints = await fetch(
    `https://console.neon.tech/api/v2/projects/${process.env.NEON_PROJECT_ID}/branches/${restoredBranchId}/endpoints`,
    {
      headers: { 'Authorization': `Bearer ${process.env.NEON_API_KEY}` }
    }
  ).then(r => r.json())

  const newConnectionString = endpoints.endpoints[0]?.connection_uri

  // Update environment configuration
  // This varies by platform (Vercel, Railway, etc.)
  console.log('New connection string:', newConnectionString)
  console.log('Update DATABASE_URL_POOLED in your platform')

  return {
    newConnectionString,
    cutoverTime: new Date().toISOString(),
    instructions: [
      '1. Update DATABASE_URL_POOLED environment variable',
      '2. Trigger application redeployment',
      '3. Monitor for connection errors',
      '4. Verify data integrity in production'
    ]
  }
}
```

---

## Best Practices

### Retention Configuration

Align Retention with Compliance: Match retention period to regulatory requirements
Consider Recovery Objectives: Longer retention for lower RPO requirements
Balance Cost: Longer retention increases storage costs

### Testing Strategy

Regular PITR Tests: Weekly or daily automated restore tests
Full Recovery Drills: Quarterly complete disaster recovery exercises
Document Procedures: Maintain up-to-date runbooks

### Monitoring

Track PITR Health: Monitor WAL streaming status
Alert on Issues: Immediate notification of backup failures
Retention Warnings: Alert before data exits retention window

---

Version: 2.0.0
Last Updated: 2026-01-06

```

### reference.md

```markdown
# Neon Platform Reference Guide

## API Reference

### Neon Management API

Base URL: https://console.neon.tech/api/v2

Authentication: Bearer token via NEON_API_KEY header

### Common Endpoints

Project Endpoints:
- GET /projects - List all projects
- GET /projects/{project_id} - Get project details
- PATCH /projects/{project_id} - Update project settings

Branch Endpoints:
- GET /projects/{project_id}/branches - List branches
- POST /projects/{project_id}/branches - Create branch
- DELETE /projects/{project_id}/branches/{branch_id} - Delete branch
- GET /projects/{project_id}/branches/{branch_id}/endpoints - Get branch endpoints

Compute Endpoints:
- GET /projects/{project_id}/endpoints - List compute endpoints
- PATCH /projects/{project_id}/endpoints/{endpoint_id} - Update endpoint settings
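
All of these endpoints take the same bearer-token header. As a minimal sketch (error handling kept deliberately thin), listing branches looks like:

```typescript
// Minimal client for the branch endpoints listed above
const NEON_API = 'https://console.neon.tech/api/v2'

function branchesUrl(projectId: string, branchId?: string): string {
  const base = `${NEON_API}/projects/${projectId}/branches`
  return branchId ? `${base}/${branchId}` : base
}

async function listBranches(projectId: string, apiKey: string) {
  const response = await fetch(branchesUrl(projectId), {
    headers: { 'Authorization': `Bearer ${apiKey}` }
  })
  if (!response.ok) {
    throw new Error(`Neon API error: ${response.status} ${response.statusText}`)
  }
  return response.json()
}
```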

### API Response Formats

Branch Creation Response:
```json
{
  "branch": {
    "id": "br-xxx",
    "name": "feature-branch",
    "project_id": "project-xxx",
    "parent_id": "br-main",
    "created_at": "2024-01-01T00:00:00Z",
    "current_state": "ready"
  },
  "endpoints": [
    {
      "id": "ep-xxx",
      "branch_id": "br-xxx",
      "host": "ep-xxx.region.neon.tech",
      "connection_uri": "postgresql://..."
    }
  ]
}
```

Endpoint Configuration Response:
```json
{
  "endpoint": {
    "id": "ep-xxx",
    "autoscaling_limit_min_cu": 0.25,
    "autoscaling_limit_max_cu": 4,
    "suspend_timeout_seconds": 300,
    "current_state": "active"
  }
}
```

---

## Environment Configuration

### Connection String Formats

Direct Connection (for migrations and admin tasks):
```
postgresql://user:password@ep-xxx.region.aws.neon.tech/dbname?sslmode=require
```

Pooled Connection (for serverless and edge):
```
postgresql://user:password@ep-xxx-pooler.region.aws.neon.tech/dbname?sslmode=require
```

### Environment Variables

Required Variables:
- DATABASE_URL: Primary connection string for migrations
- DATABASE_URL_POOLED: Pooled connection for application use
- NEON_API_KEY: API key for branch management
- NEON_PROJECT_ID: Project identifier for API calls

Optional Variables:
- NEON_BRANCH_ID: Specific branch identifier
- NEON_ENDPOINT_ID: Specific endpoint identifier
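
A fail-fast startup check for these variables might look like the following sketch; the names match the required list above.

```typescript
// Names of the variables the rest of this guide assumes are present
const REQUIRED_ENV = [
  'DATABASE_URL',
  'DATABASE_URL_POOLED',
  'NEON_API_KEY',
  'NEON_PROJECT_ID'
] as const

function missingEnv(env: Record<string, string | undefined>): string[] {
  return REQUIRED_ENV.filter(name => !env[name])
}

function assertEnv(env: Record<string, string | undefined>): void {
  const missing = missingEnv(env)
  if (missing.length > 0) {
    throw new Error(`Missing required environment variables: ${missing.join(', ')}`)
  }
}

// At startup: assertEnv(process.env)
```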

### Connection String Parameters

SSL Mode Options:
- sslmode=require: Require SSL (recommended)
- sslmode=verify-full: Verify SSL certificate

Connection Pool Options:
- connection_limit: Maximum connections (pooled only)
- pool_timeout: Connection timeout in seconds

---

## Compute Unit Reference

### Compute Unit (CU) Specifications

0.25 CU: 0.25 vCPU, 1 GB RAM - Development and testing
0.5 CU: 0.5 vCPU, 2 GB RAM - Light production workloads
1 CU: 1 vCPU, 4 GB RAM - Standard production
2 CU: 2 vCPU, 8 GB RAM - Medium workloads
4 CU: 4 vCPU, 16 GB RAM - Heavy workloads
8 CU: 8 vCPU, 32 GB RAM - High-performance requirements

### Auto-Scaling Configuration

Minimum CU: Lowest compute level (0.25 for scale-to-zero)
Maximum CU: Highest compute level for peak load
Suspend Timeout: Seconds of inactivity before scaling to zero

Recommended Settings by Use Case:

Development: min 0.25, max 0.5, timeout 300
Staging: min 0.25, max 1, timeout 600
Production: min 0.5, max 4, timeout 3600
High-Traffic: min 1, max 8, timeout 0 (never suspend)
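
Those profiles can be applied through the endpoint PATCH API. This is a sketch: the request body shape mirrors the endpoint configuration response shown in the API reference above, but verify the field names against the current Neon API documentation.

```typescript
// Recommended autoscaling profiles from the table above
const PROFILES = {
  development: { min: 0.25, max: 0.5, suspend: 300 },
  staging: { min: 0.25, max: 1, suspend: 600 },
  production: { min: 0.5, max: 4, suspend: 3600 },
  highTraffic: { min: 1, max: 8, suspend: 0 }  // 0 = never suspend
} as const

async function applyProfile(endpointId: string, profile: keyof typeof PROFILES) {
  const { min, max, suspend } = PROFILES[profile]
  const response = await fetch(
    `https://console.neon.tech/api/v2/projects/${process.env.NEON_PROJECT_ID}/endpoints/${endpointId}`,
    {
      method: 'PATCH',
      headers: {
        'Authorization': `Bearer ${process.env.NEON_API_KEY}`,
        'Content-Type': 'application/json'
      },
      body: JSON.stringify({
        endpoint: {
          autoscaling_limit_min_cu: min,
          autoscaling_limit_max_cu: max,
          suspend_timeout_seconds: suspend
        }
      })
    }
  )
  if (!response.ok) throw new Error(`Endpoint update failed: ${response.status}`)
  return response.json()
}
```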

---

## PostgreSQL Extensions

### Supported Extensions

Core Extensions:
- pg_stat_statements: Query performance statistics
- pg_trgm: Trigram text similarity
- uuid-ossp: UUID generation
- hstore: Key-value storage
- pgcrypto: Cryptographic functions

Spatial Extensions:
- postgis: Spatial and geographic objects
- postgis_topology: Topology support

Full-Text Search:
- pg_search: Full-text search improvements
- unaccent: Remove accents from text

JSON Processing:
- jsonb_plperl: Perl JSON functions
- jsonb_plpython3u: Python JSON functions

### Extension Installation

Enable Extension:
```sql
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
CREATE EXTENSION IF NOT EXISTS postgis;
```

---

## Provider Comparison Matrix

### Feature Comparison

Serverless Compute:
- Neon: Full auto-scaling with scale-to-zero
- Supabase: Fixed compute tiers
- PlanetScale: MySQL-based serverless

Database Branching:
- Neon: Instant copy-on-write branches
- Supabase: Manual database cloning
- PlanetScale: Schema-only branching

Point-in-Time Recovery:
- Neon: 30-day PITR with instant restore
- Supabase: 7-day backups (Pro tier)
- PlanetScale: Continuous backups

Connection Pooling:
- Neon: Built-in HTTP and WebSocket pooler
- Supabase: PgBouncer integration
- PlanetScale: Native connection handling

Edge Compatibility:
- Neon: Full edge runtime support
- Supabase: Edge functions with pooler
- PlanetScale: Edge-compatible driver

### Pricing Comparison

Free Tier Storage:
- Neon: 3 GB
- Supabase: 500 MB
- PlanetScale: 5 GB

Free Tier Compute:
- Neon: 100 compute hours
- Supabase: Fixed 500 MB RAM
- PlanetScale: 1 billion row reads

---

## Error Codes and Troubleshooting

### Common Error Codes

Connection Errors:
- ECONNREFUSED: Endpoint suspended, will wake on next request
- SSL_CERTIFICATE_REQUIRED: Missing sslmode=require parameter
- TOO_MANY_CONNECTIONS: Pool exhausted, use pooled connection

Branch Errors:
- BRANCH_NOT_FOUND: Invalid branch ID
- BRANCH_LIMIT_EXCEEDED: Project branch limit reached
- PARENT_BRANCH_NOT_FOUND: Invalid parent for branching

API Errors:
- 401 Unauthorized: Invalid or expired API key
- 403 Forbidden: Insufficient permissions
- 429 Too Many Requests: Rate limit exceeded
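
A hedged sketch of how these status codes map to a retry decision: 429 and transient 5xx responses are worth retrying with backoff, while 401 and 403 will not succeed on retry.

```typescript
// Decide whether a Neon API call is worth retrying based on its status code
function shouldRetry(status: number): boolean {
  if (status === 429) return true  // rate limited: back off, then retry
  if (status >= 500) return true   // server-side or transient failure
  return false                     // 401/403 etc. will not succeed on retry
}
```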

### Troubleshooting Guide

Slow Cold Starts:
- Increase minimum compute units
- Reduce suspend timeout
- Use connection pooling

Connection Timeouts:
- Switch to pooled connection string
- Verify network connectivity
- Check endpoint status via API

Branch Creation Failures:
- Verify parent branch exists
- Check project branch limits
- Ensure API key has write permissions

---

## Security Best Practices

### Connection Security

Always Use SSL: Set sslmode=require in connection strings
Rotate Credentials: Regularly rotate database passwords and API keys
Use Environment Variables: Never hardcode credentials in source code
Restrict IP Access: Configure IP allow lists for production
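
The "Always Use SSL" rule can be enforced with a quick guard. This sketch checks only the sslmode query parameter of a connection string, nothing more.

```typescript
// Returns true when the connection string requests SSL via sslmode
function hasSsl(connectionString: string): boolean {
  const params = new URL(connectionString).searchParams
  return ['require', 'verify-full'].includes(params.get('sslmode') ?? '')
}
```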

### API Key Management

Scoped Keys: Create keys with minimal required permissions
Key Rotation: Rotate API keys on regular schedule
Audit Access: Monitor API key usage via Neon console
Revoke Unused: Remove keys no longer in use

### Database Security

Row-Level Security: Implement RLS for multi-tenant applications
Prepared Statements: Always use parameterized queries
Audit Logging: Enable pg_stat_statements for query monitoring
Backup Verification: Regularly test PITR restoration

---

Version: 2.0.0
Last Updated: 2026-01-06

```

### examples.md

```markdown
# Neon Platform Code Examples

## Basic Serverless Driver

### Simple Query Execution

```typescript
import { neon } from '@neondatabase/serverless'

const sql = neon(process.env.DATABASE_URL!)

// Simple select
const users = await sql`SELECT * FROM users WHERE active = true`

// Parameterized query (SQL injection safe)
const userId = 'user-123'
const user = await sql`SELECT * FROM users WHERE id = ${userId}`

// Insert with returning
const newUser = await sql`
  INSERT INTO users (email, name)
  VALUES (${email}, ${name})
  RETURNING *
`

// Update with conditions
const updated = await sql`
  UPDATE users
  SET last_login = NOW()
  WHERE id = ${userId}
  RETURNING *
`
```

### Transaction Support

```typescript
import { neon } from '@neondatabase/serverless'

const sql = neon(process.env.DATABASE_URL!)

// Transaction with multiple statements
const result = await sql.transaction([
  sql`UPDATE accounts SET balance = balance - 100 WHERE id = ${fromId}`,
  sql`UPDATE accounts SET balance = balance + 100 WHERE id = ${toId}`,
  sql`INSERT INTO transfers (from_id, to_id, amount) VALUES (${fromId}, ${toId}, 100)`
])
```

---

## WebSocket Connection

### Session-Based Operations

```typescript
import { Pool, neonConfig } from '@neondatabase/serverless'
import ws from 'ws'

// Required for Node.js environments
neonConfig.webSocketConstructor = ws

const pool = new Pool({ connectionString: process.env.DATABASE_URL })

// Use pool for session-based operations
const client = await pool.connect()
try {
  await client.query('BEGIN')
  await client.query('INSERT INTO logs (message) VALUES ($1)', ['Action started'])
  await client.query('UPDATE counters SET value = value + 1 WHERE name = $1', ['actions'])
  await client.query('INSERT INTO logs (message) VALUES ($1)', ['Action completed'])
  await client.query('COMMIT')
} catch (error) {
  await client.query('ROLLBACK')
  throw error
} finally {
  client.release()
}
```

---

## Drizzle ORM Integration

### Complete Schema Definition

```typescript
// schema.ts
import { pgTable, uuid, text, timestamp, boolean, jsonb, integer } from 'drizzle-orm/pg-core'
import { relations } from 'drizzle-orm'

export const users = pgTable('users', {
  id: uuid('id').primaryKey().defaultRandom(),
  email: text('email').notNull().unique(),
  name: text('name'),
  avatarUrl: text('avatar_url'),
  createdAt: timestamp('created_at').defaultNow(),
  updatedAt: timestamp('updated_at').defaultNow(),
  metadata: jsonb('metadata')
})

export const projects = pgTable('projects', {
  id: uuid('id').primaryKey().defaultRandom(),
  name: text('name').notNull(),
  description: text('description'),
  ownerId: uuid('owner_id').references(() => users.id, { onDelete: 'cascade' }),
  isPublic: boolean('is_public').default(false),
  starCount: integer('star_count').default(0),
  createdAt: timestamp('created_at').defaultNow(),
  updatedAt: timestamp('updated_at').defaultNow()
})

export const usersRelations = relations(users, ({ many }) => ({
  projects: many(projects)
}))

export const projectsRelations = relations(projects, ({ one }) => ({
  owner: one(users, {
    fields: [projects.ownerId],
    references: [users.id]
  })
}))
```

### Drizzle Client with Queries

```typescript
// db.ts
import { neon } from '@neondatabase/serverless'
import { drizzle } from 'drizzle-orm/neon-http'
import { eq, desc, and, like } from 'drizzle-orm'
import * as schema from './schema'

const sql = neon(process.env.DATABASE_URL!)
export const db = drizzle(sql, { schema })

// Query examples
async function getUserWithProjects(userId: string) {
  return db.query.users.findFirst({
    where: eq(schema.users.id, userId),
    with: {
      projects: {
        orderBy: desc(schema.projects.createdAt)
      }
    }
  })
}

async function searchProjects(query: string, limit = 10) {
  return db
    .select()
    .from(schema.projects)
    .where(
      and(
        eq(schema.projects.isPublic, true),
        like(schema.projects.name, `%${query}%`)
      )
    )
    .orderBy(desc(schema.projects.starCount))
    .limit(limit)
}

async function createProject(data: { name: string; ownerId: string; description?: string }) {
  const [project] = await db
    .insert(schema.projects)
    .values(data)
    .returning()
  return project
}
```

---

## Prisma ORM Integration

### Prisma Schema

```prisma
// schema.prisma
generator client {
  provider        = "prisma-client-js"
  previewFeatures = ["driverAdapters"]
}

datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}

model User {
  id        String    @id @default(uuid())
  email     String    @unique
  name      String?
  avatarUrl String?   @map("avatar_url")
  projects  Project[]
  createdAt DateTime  @default(now()) @map("created_at")
  updatedAt DateTime  @updatedAt @map("updated_at")

  @@map("users")
}

model Project {
  id          String   @id @default(uuid())
  name        String
  description String?
  owner       User     @relation(fields: [ownerId], references: [id], onDelete: Cascade)
  ownerId     String   @map("owner_id")
  isPublic    Boolean  @default(false) @map("is_public")
  starCount   Int      @default(0) @map("star_count")
  createdAt   DateTime @default(now()) @map("created_at")
  updatedAt   DateTime @updatedAt @map("updated_at")

  @@map("projects")
}
```

### Prisma Client Setup

```typescript
// db.ts
import { Pool, neonConfig } from '@neondatabase/serverless'
import { PrismaNeon } from '@prisma/adapter-neon'
import { PrismaClient } from '@prisma/client'
import ws from 'ws'

neonConfig.webSocketConstructor = ws

const pool = new Pool({ connectionString: process.env.DATABASE_URL })
const adapter = new PrismaNeon(pool)
export const prisma = new PrismaClient({ adapter })

// Query examples
async function getUserWithProjects(userId: string) {
  return prisma.user.findUnique({
    where: { id: userId },
    include: {
      projects: {
        orderBy: { createdAt: 'desc' }
      }
    }
  })
}

async function searchProjects(query: string, limit = 10) {
  return prisma.project.findMany({
    where: {
      isPublic: true,
      name: { contains: query, mode: 'insensitive' }
    },
    orderBy: { starCount: 'desc' },
    take: limit
  })
}
```

---

## Branch Management

### Complete Branch Manager Class

```typescript
class NeonBranchManager {
  private apiKey: string
  private projectId: string
  private baseUrl = 'https://console.neon.tech/api/v2'

  constructor(apiKey: string, projectId: string) {
    this.apiKey = apiKey
    this.projectId = projectId
  }

  private async request<T>(path: string, options: RequestInit = {}): Promise<T> {
    const response = await fetch(`${this.baseUrl}${path}`, {
      ...options,
      headers: {
        'Authorization': `Bearer ${this.apiKey}`,
        'Content-Type': 'application/json',
        ...options.headers
      }
    })
    if (!response.ok) {
      const error = await response.text()
      throw new Error(`Neon API error: ${response.status} - ${error}`)
    }
    return response.json()
  }

  // parent_id must be a branch ID (e.g. 'br-...'), not a branch name;
  // omit it to branch from the project's default branch
  async createBranch(name: string, parentId?: string) {
    return this.request<{ branch: Branch; endpoints: Endpoint[] }>(
      `/projects/${this.projectId}/branches`,
      {
        method: 'POST',
        body: JSON.stringify({
          branch: { name, ...(parentId && { parent_id: parentId }) }
        })
      }
    )
  }

  async createBranchAtTimestamp(name: string, timestamp: Date, parentId?: string) {
    return this.request<{ branch: Branch; endpoints: Endpoint[] }>(
      `/projects/${this.projectId}/branches`,
      {
        method: 'POST',
        body: JSON.stringify({
          branch: {
            name,
            ...(parentId && { parent_id: parentId }),
            parent_timestamp: timestamp.toISOString()
          }
        })
      }
    )
  }

  async deleteBranch(branchId: string) {
    return this.request<{ branch: Branch }>(
      `/projects/${this.projectId}/branches/${branchId}`,
      { method: 'DELETE' }
    )
  }

  async listBranches() {
    return this.request<{ branches: Branch[] }>(
      `/projects/${this.projectId}/branches`
    )
  }

  async getBranchEndpoints(branchId: string) {
    return this.request<{ endpoints: Endpoint[] }>(
      `/projects/${this.projectId}/branches/${branchId}/endpoints`
    )
  }

  async getBranchConnectionString(branchId: string): Promise<string | undefined> {
    const { endpoints } = await this.getBranchEndpoints(branchId)
    return endpoints[0]?.connection_uri
  }
}

interface Branch {
  id: string
  name: string
  project_id: string
  parent_id: string
  created_at: string
  current_state: string
}

interface Endpoint {
  id: string
  branch_id: string
  host: string
  connection_uri: string
}
```
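A common use of the manager is wiring preview databases to pull requests. The sketch below is hypothetical: the `previewBranchName` convention and function names are assumptions, and the structural `BranchManager` type merely mirrors the class above so the snippet stands alone.

```typescript
// Structural type mirroring the NeonBranchManager methods used here
type BranchManager = {
  createBranch(name: string, parentId?: string): Promise<{ branch: { id: string } }>
  getBranchConnectionString(branchId: string): Promise<string | undefined>
  deleteBranch(branchId: string): Promise<unknown>
}

// Pure helper: derive a tidy, bounded branch name from a PR number and head ref
function previewBranchName(prNumber: number, headRef: string): string {
  const slug = headRef
    .toLowerCase()
    .replace(/[^a-z0-9-]+/g, '-') // collapse anything unsafe into single dashes
    .replace(/^-+|-+$/g, '')      // trim leading/trailing dashes
  return `preview/pr-${prNumber}-${slug}`.slice(0, 63)
}

// Create a branch for the PR and resolve its connection string
async function createPreviewEnvironment(
  manager: BranchManager,
  prNumber: number,
  headRef: string
) {
  const { branch } = await manager.createBranch(previewBranchName(prNumber, headRef))
  const connectionString = await manager.getBranchConnectionString(branch.id)
  return { branchId: branch.id, connectionString }
}

// Drop the branch when the PR closes
async function teardownPreviewEnvironment(manager: BranchManager, branchId: string) {
  await manager.deleteBranch(branchId)
}
```

In CI this would typically run with `new NeonBranchManager(process.env.NEON_API_KEY!, process.env.NEON_PROJECT_ID!)` on PR open and close events (those environment variable names are assumptions).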

---

## Edge Function Integration

### Vercel Edge Function

```typescript
import { neon } from '@neondatabase/serverless'

export const config = {
  runtime: 'edge'
}

export default async function handler(request: Request) {
  const sql = neon(process.env.DATABASE_URL_POOLED!)

  const { searchParams } = new URL(request.url)
  const limit = Number.parseInt(searchParams.get('limit') ?? '10', 10) || 10

  const users = await sql`
    SELECT id, name, email, created_at
    FROM users
    WHERE active = true
    ORDER BY created_at DESC
    LIMIT ${limit}
  `

  return Response.json(users, {
    headers: {
      'Cache-Control': 'public, s-maxage=60, stale-while-revalidate=300'
    }
  })
}
```

### Next.js App Router

```typescript
// app/api/users/route.ts
import { neon } from '@neondatabase/serverless'
import { NextResponse } from 'next/server'

export const runtime = 'edge'

export async function GET(request: Request) {
  const sql = neon(process.env.DATABASE_URL_POOLED!)

  const users = await sql`SELECT * FROM users WHERE active = true LIMIT 100`

  return NextResponse.json(users)
}

export async function POST(request: Request) {
  const sql = neon(process.env.DATABASE_URL_POOLED!)
  const { email, name } = await request.json()

  const [user] = await sql`
    INSERT INTO users (email, name)
    VALUES (${email}, ${name})
    RETURNING *
  `

  return NextResponse.json(user, { status: 201 })
}
```
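The `POST` handler above trusts the request body as-is. A small validation step keeps malformed input out of the `INSERT`; the `parseNewUser` helper below is hypothetical (not part of the skill) and is a pure function, so it runs without a database.

```typescript
type NewUser = { email: string; name: string }

// Hypothetical validator: returns a normalized payload, or null if the
// body is missing, not an object, or fails the field checks
function parseNewUser(body: unknown): NewUser | null {
  if (typeof body !== 'object' || body === null) return null
  const { email, name } = body as Record<string, unknown>
  // coarse shape check only; real email validation is more involved
  if (typeof email !== 'string' || !/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(email)) return null
  if (typeof name !== 'string' || name.trim().length === 0) return null
  return { email, name: name.trim() }
}
```

In the `POST` handler, this would look like `const input = parseNewUser(await request.json())`, returning a `NextResponse.json({ error: 'invalid body' }, { status: 400 })` when `input` is `null`.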

---

## Migration Workflow

### Drizzle Migrations

```typescript
// migrate.ts
import { migrate } from 'drizzle-orm/neon-http/migrator'
import { neon } from '@neondatabase/serverless'
import { drizzle } from 'drizzle-orm/neon-http'

async function runMigrations() {
  // Use direct connection for migrations (not pooled)
  const sql = neon(process.env.DATABASE_URL!)
  const db = drizzle(sql)

  console.log('Running migrations...')

  await migrate(db, { migrationsFolder: './drizzle' })

  console.log('Migrations completed successfully')
}

runMigrations().catch((error) => {
  console.error(error)
  process.exit(1) // non-zero exit so CI fails when a migration fails
})
```
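The comment in `migrate.ts` about using a direct connection is easy to violate by accident. Neon's pooled endpoints embed `-pooler` in the hostname, so a simple guard can catch the mistake before any DDL runs; the helper names below are assumptions, not part of the skill.

```typescript
// Detect Neon's pooled (PgBouncer) endpoints by their "-pooler" hostname
// segment; Node's WHATWG URL parser handles postgres:// URLs with an
// authority component
function isPooledNeonUrl(databaseUrl: string): boolean {
  return new URL(databaseUrl).hostname.includes('-pooler')
}

// Hypothetical guard to call at the top of a migration script
function assertDirectConnection(databaseUrl: string): void {
  if (isPooledNeonUrl(databaseUrl)) {
    throw new Error(
      'DATABASE_URL points at a pooled (-pooler) endpoint; migrations need a direct connection'
    )
  }
}
```

Calling `assertDirectConnection(process.env.DATABASE_URL!)` as the first line of `runMigrations()` turns a subtle runtime problem into an immediate, descriptive failure.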

### Drizzle Config

```typescript
// drizzle.config.ts
import { defineConfig } from 'drizzle-kit'

// drizzle-kit 0.21+ uses `dialect` and `url`; older releases used
// `driver: 'pg'` with a `connectionString` field instead
export default defineConfig({
  schema: './src/db/schema.ts',
  out: './drizzle',
  dialect: 'postgresql',
  dbCredentials: {
    url: process.env.DATABASE_URL!
  }
})
```

---

Version: 2.0.0
Last Updated: 2026-01-06
