
chatgpt-exporter-ultimate

Export all your ChatGPT conversations instantly — full context, timestamps, and metadata in seconds.

Packaged view

This page reorganizes the original catalog entry around fit, installability, and workflow context first. The original raw source lives below.

Stars: 3,087
Hot score: 99
Updated: March 20, 2026
Overall rating: C (4.0)
Composite score: 4.0
Best-practice grade: C (65.6)

Install command

npx @skill-hub/cli install openclaw-skills-chatgpt-exporter-ultimate

Repository

openclaw/skills

Skill path: skills/globalcaos/chatgpt-exporter-ultimate


Open repository

Best for

Primary workflow: Ship Full Stack.

Technical facets: Full Stack.

Target audience: everyone.

License: MIT.

Original source

Catalog source: SkillHub Club.

Repository owner: openclaw.

This is a mirrored public skill entry. Review the repository before installing it into production workflows.

What it helps with

  • Install chatgpt-exporter-ultimate into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
  • Review https://github.com/openclaw/skills before adding chatgpt-exporter-ultimate to shared team environments
  • Use chatgpt-exporter-ultimate for development workflows

Works across

Claude Code · Codex CLI · Gemini CLI · OpenCode

Favorites: 0.

Sub-skills: 0.

Aggregator: No.

Original source / Raw SKILL.md

---
name: chatgpt-exporter-ultimate
version: 1.3.1
description: "Export all your ChatGPT conversations instantly — full context, timestamps, and metadata in seconds."
metadata:
  openclaw:
    owner: kn7623hrcwt6rg73a67xw3wyx580asdw
    category: utilities
    tags:
      - chatgpt
      - export
      - backup
      - conversations
    license: MIT
---

# ChatGPT Exporter Ultimate

Your entire ChatGPT history, exported in seconds — not tomorrow.

## What You Get

- **Every conversation.** Projects, chats, timestamps, roles, metadata. Nothing left behind.
- **Instant.** No 24-hour wait. No email with a ZIP of cryptic JSON. Just your data, now.
- **Context preserved.** Conversations stay readable. Who said what, when, and why — all intact.

## How It Works

Install the skill. Run it. Get your full export. That's it.

ChatGPT's built-in export makes you wait a day and hands you raw JSON. This skill respects your time.

**Won't:** email you a ZIP file 24 hours later, as if you had requested declassified government documents.

👉 Explore the full project: [github.com/globalcaos/clawdbot-moltbot-openclaw](https://github.com/globalcaos/clawdbot-moltbot-openclaw)

*Clone it. Fork it. Break it. Make it yours.*


---

## Skill Companion Files

> Additional files collected from the skill directory layout.

### _meta.json

```json
{
  "owner": "globalcaos",
  "slug": "chatgpt-exporter-ultimate",
  "displayName": "ChatGPT Exporter Ultimate",
  "latest": {
    "version": "1.3.1",
    "publishedAt": 1771659264512,
    "commit": "https://github.com/openclaw/skills/commit/36d889db2331a71e38db35143a179fda3ac7f48b"
  },
  "history": [
    {
      "version": "2.0.0",
      "publishedAt": 1771172818869,
      "commit": "https://github.com/openclaw/skills/commit/d4923ea022d1c3ac6ae1f2413cfbb19e548c98c6"
    },
    {
      "version": "1.0.5",
      "publishedAt": 1771020761071,
      "commit": "https://github.com/openclaw/skills/commit/9afce2b09a951d5432022f82a5cb0b15171fb24d"
    },
    {
      "version": "1.0.3",
      "publishedAt": 1770816012076,
      "commit": "https://github.com/openclaw/skills/commit/318e478b4b275188b7336b6469e08054a554be60"
    },
    {
      "version": "1.0.2",
      "publishedAt": 1770411833144,
      "commit": "https://github.com/openclaw/skills/commit/87e726bcb08dbaeb81f8252e6f27b7428c6caa34"
    },
    {
      "version": "1.0.1",
      "publishedAt": 1770393865424,
      "commit": "https://github.com/openclaw/skills/commit/e0ef2a5f308892278c56628924f8b1c01ba29e4d"
    }
  ]
}

```

### scripts/bookmarklet.js

```javascript
// ChatGPT Full Export Bookmarklet v2.0
// Now exports conversations inside Projects (folders) too!
// Paste this entire script in Chrome DevTools console while on chatgpt.com
// It will download all conversations as a single JSON file

(async function () {
  console.log("🚀 ChatGPT Exporter v2.0 starting...");

  // Get access token
  console.log("🔑 Getting access token...");
  const sessionResp = await fetch("/api/auth/session", { credentials: "include" });
  const { accessToken } = await sessionResp.json();

  if (!accessToken) {
    alert("❌ Not logged in! Please log into ChatGPT first.");
    return;
  }

  const headers = { Authorization: `Bearer ${accessToken}` };

  // Phase 1: Fetch conversation IDs from the main listing endpoint
  console.log("📋 Phase 1: Fetching main conversation list...");
  const allIds = new Map(); // id -> { title, create_time }
  let offset = 0;
  const limit = 100;

  while (true) {
    const resp = await fetch(`/backend-api/conversations?offset=${offset}&limit=${limit}`, {
      headers,
    });
    const data = await resp.json();
    for (const item of data.items) {
      allIds.set(item.id, { title: item.title, create_time: item.create_time });
    }
    console.log(`   Listed ${allIds.size} conversations...`);
    if (data.items.length < limit) break;
    offset += limit;
    await new Promise((r) => setTimeout(r, 200));
  }

  const listedCount = allIds.size;
  console.log(`📊 Main listing: ${listedCount} conversations`);

  // Phase 2: Search-based discovery to find conversations inside Projects/folders
  console.log("🔍 Phase 2: Searching for conversations in Projects...");
  const searchTerms = [
    // Common words in multiple languages to maximize coverage
    "a", "e", "i", "o", "u", "el", "la", "de", "que", "per", "com", "en",
    "es", "un", "una", "the", "is", "to", "and", "for", "how", "what",
    "can", "my", "new", "AI", "code", "python", "help", "project", "plan",
    "mail", "work", "home", "casa", "buy", "water", "make", "create",
    "fix", "error", "list", "write", "find", "get", "set", "add", "use",
    "run", "file", "data", "test", "build", "open", "send", "read",
    "show", "app", "web", "api", "key", "log", "config", "install",
    "update",
  ];

  for (const term of searchTerms) {
    try {
      const resp = await fetch(
        `/backend-api/conversations/search?query=${encodeURIComponent(term)}&limit=50`,
        { headers },
      );
      const data = await resp.json();
      for (const item of data.items || []) {
        if (!allIds.has(item.conversation_id)) {
          allIds.set(item.conversation_id, {
            title: item.title,
            create_time: null, // will be filled when fetching full conversation
            source: "project",
          });
        }
      }
    } catch (e) {
      // search term returned error, skip
    }
    await new Promise((r) => setTimeout(r, 100));
  }

  const projectCount = allIds.size - listedCount;
  console.log(`🔍 Found ${projectCount} additional conversations in Projects`);
  console.log(`📊 Total: ${allIds.size} conversations`);

  // Phase 3: Fetch each conversation's full content
  console.log("📥 Phase 3: Fetching full conversations...");
  const results = [];
  const errors = [];
  let idx = 0;
  const total = allIds.size;

  for (const [convId, meta] of allIds) {
    idx++;
    const progress = `[${idx}/${total}]`;

    try {
      const resp = await fetch(`/backend-api/conversation/${convId}`, { headers });

      if (!resp.ok) {
        throw new Error(`HTTP ${resp.status}`);
      }

      const data = await resp.json();

      // Extract messages from mapping tree
      const messages = [];
      for (const node of Object.values(data.mapping || {})) {
        if (node.message?.content?.parts && node.message.author?.role !== "system") {
          const textParts = node.message.content.parts.filter((p) => typeof p === "string");
          if (textParts.length > 0) {
            messages.push({
              role: node.message.author.role,
              text: textParts.join("\n"),
              time: node.message.create_time || 0,
            });
          }
        }
      }
      messages.sort((a, b) => a.time - b.time);

      results.push({
        id: convId,
        title: data.title || meta.title || "Untitled",
        created: data.create_time,
        updated: data.update_time,
        gizmo_id: data.gizmo_id || null,
        messages,
      });

      console.log(`✅ ${progress} ${data.title || "Untitled"}`);
    } catch (e) {
      console.error(`❌ ${progress} Error: ${e.message}`);
      errors.push({ id: convId, title: meta.title, error: e.message });
    }

    // Rate limiting
    if (idx < total) {
      await new Promise((r) => setTimeout(r, 100));
    }
  }

  // Create download
  console.log("📦 Creating download...");

  const exportData = {
    exported: new Date().toISOString(),
    exporter_version: "2.0",
    total: allIds.size,
    listed: listedCount,
    from_projects: projectCount,
    successful: results.length,
    errors: errors.length,
    conversations: results,
    failedConversations: errors,
  };

  const blob = new Blob([JSON.stringify(exportData, null, 2)], { type: "application/json" });
  const url = URL.createObjectURL(blob);
  const a = document.createElement("a");
  a.href = url;
  a.download = `chatgpt-export-${new Date().toISOString().split("T")[0]}.json`;
  document.body.appendChild(a);
  a.click();
  document.body.removeChild(a);
  URL.revokeObjectURL(url);

  console.log("");
  console.log("🎉 Export complete!");
  console.log(`   📋 Listed: ${listedCount}`);
  console.log(`   🔍 From Projects: ${projectCount}`);
  console.log(`   ✅ Exported: ${results.length}`);
  console.log(`   ❌ Errors: ${errors.length}`);
  console.log("   📁 Check your Downloads folder");

  alert(
    `✅ Export complete!\n\nListed: ${listedCount}\nFrom Projects: ${projectCount}\nExported: ${results.length}\nErrors: ${errors.length}\n\nCheck your Downloads folder.`,
  );
})();

```
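
Both the bookmarklet and the TypeScript exporter flatten ChatGPT's `mapping` field, which stores messages as a linked tree of nodes rather than a flat array. A minimal, self-contained sketch of that structure and of the flattening step (toy data; the node and message field names are the real ones the API returns, and the loop mirrors the bookmarklet's Phase 3 logic):

```javascript
// Toy example of the `mapping` tree returned by
// /backend-api/conversation/<id>: nodes keyed by id, linked by
// parent/children; the root node carries no message.
const mapping = {
  root: { id: "root", message: null, parent: null, children: ["m1"] },
  m1: {
    id: "m1",
    parent: "root",
    children: ["m2"],
    message: {
      author: { role: "user" },
      content: { parts: ["Hello!"] },
      create_time: 100,
    },
  },
  m2: {
    id: "m2",
    parent: "m1",
    children: [],
    message: {
      author: { role: "assistant" },
      content: { parts: ["Hi there."] },
      create_time: 101,
    },
  },
};

// Flattening: keep non-system messages with string parts, sort by time.
const messages = [];
for (const node of Object.values(mapping)) {
  if (node.message?.content?.parts && node.message.author?.role !== "system") {
    const textParts = node.message.content.parts.filter((p) => typeof p === "string");
    if (textParts.length > 0) {
      messages.push({
        role: node.message.author.role,
        text: textParts.join("\n"),
        time: node.message.create_time || 0,
      });
    }
  }
}
messages.sort((a, b) => a.time - b.time);
console.log(messages.map((m) => m.role).join(",")); // user,assistant
```

Sorting by `create_time` reconstructs a linear transcript; a branch-aware walk (as in the TypeScript exporter's `extractMessages`) would instead follow the `children` links.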

### scripts/export-conversations.ts

```typescript
#!/usr/bin/env npx tsx
/**
 * ChatGPT Conversation Exporter
 *
 * Uses browser relay to fetch conversations from ChatGPT's internal API.
 * Requires user to be logged into ChatGPT with browser relay attached.
 *
 * Usage: npx tsx export-conversations.ts [--limit N] [--format json|md|both]
 */

import { writeFileSync, mkdirSync, existsSync } from "fs";
import { join } from "path";

// Configuration
const CHATGPT_BASE = "https://chatgpt.com";
const API_BASE = `${CHATGPT_BASE}/backend-api`;
const DELAY_MS = 500; // Delay between requests to avoid rate limiting

interface ConversationItem {
  id: string;
  title: string;
  create_time: number;
  update_time: number;
}

interface ConversationResponse {
  items: ConversationItem[];
  total: number;
  limit: number;
  offset: number;
}

interface MessageContent {
  content_type: string;
  parts?: string[];
}

interface Message {
  id: string;
  author: { role: string };
  content: MessageContent;
  create_time?: number;
}

interface MappingNode {
  id: string;
  message?: Message;
  parent?: string;
  children?: string[];
}

interface FullConversation {
  id: string;
  title: string;
  create_time: number;
  update_time: number;
  mapping: Record<string, MappingNode>;
  current_node: string;
}

// JavaScript to execute in browser context
const FETCH_CONVERSATIONS_JS = `
(async () => {
  const results = [];
  let offset = 0;
  const limit = 100;
  
  while (true) {
    const response = await fetch(
      'https://chatgpt.com/backend-api/conversations?offset=' + offset + '&limit=' + limit,
      { credentials: 'include' }
    );
    
    if (!response.ok) {
      throw new Error('Failed to fetch: ' + response.status);
    }
    
    const data = await response.json();
    results.push(...data.items);
    
    if (data.items.length < limit) {
      break;
    }
    offset += limit;
    
    // Small delay to be nice
    await new Promise(r => setTimeout(r, 200));
  }
  
  return JSON.stringify({ items: results, total: results.length });
})()
`;

const createFetchConversationJS = (id: string) => `
(async () => {
  const response = await fetch(
    'https://chatgpt.com/backend-api/conversation/${id}',
    { credentials: 'include' }
  );
  
  if (!response.ok) {
    throw new Error('Failed to fetch conversation: ' + response.status);
  }
  
  return JSON.stringify(await response.json());
})()
`;

function slugify(text: string): string {
  return text
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")
    .replace(/^-|-$/g, "")
    .slice(0, 50);
}

function extractMessages(
  conversation: FullConversation,
): Array<{ role: string; content: string; timestamp?: number }> {
  const messages: Array<{ role: string; content: string; timestamp?: number }> = [];
  const mapping = conversation.mapping;

  // Find root node (no parent)
  let currentId = Object.keys(mapping).find((id) => !mapping[id].parent);
  if (!currentId) return messages;

  // Walk the tree following children
  const visited = new Set<string>();
  const queue = [currentId];

  while (queue.length > 0) {
    const nodeId = queue.shift()!;
    if (visited.has(nodeId)) continue;
    visited.add(nodeId);

    const node = mapping[nodeId];
    if (node?.message?.content?.parts && node.message.author.role !== "system") {
      const content = node.message.content.parts.join("\n");
      if (content.trim()) {
        messages.push({
          role: node.message.author.role,
          content,
          timestamp: node.message.create_time,
        });
      }
    }

    // Add children to queue
    if (node?.children) {
      queue.push(...node.children);
    }
  }

  return messages;
}

function conversationToMarkdown(conversation: FullConversation): string {
  const messages = extractMessages(conversation);
  const date = new Date(conversation.create_time * 1000).toISOString().split("T")[0];

  let md = `# ${conversation.title || "Untitled Conversation"}\n\n`;
  md += `**Date:** ${date}\n`;
  md += `**ID:** ${conversation.id}\n\n`;
  md += `---\n\n`;

  for (const msg of messages) {
    const roleLabel = msg.role === "user" ? "**You:**" : "**ChatGPT:**";
    md += `${roleLabel}\n\n${msg.content}\n\n---\n\n`;
  }

  return md;
}

// Main export function - designed to be called from agent context
export async function exportChatGPTConversations(options: {
  browserEvaluate: (js: string) => Promise<string>;
  outputDir?: string;
  format?: "json" | "md" | "both";
  limit?: number;
  onProgress?: (current: number, total: number, title: string) => void;
}): Promise<{ exported: number; outputDir: string; errors: string[] }> {
  const {
    browserEvaluate,
    outputDir = join(process.cwd(), "chatgpt-export", new Date().toISOString().split("T")[0]),
    format = "both",
    limit,
    onProgress,
  } = options;

  const errors: string[] = [];

  // Create output directories
  mkdirSync(join(outputDir, "conversations"), { recursive: true });

  // Fetch conversation list
  console.log("Fetching conversation list...");
  const listResult = await browserEvaluate(FETCH_CONVERSATIONS_JS);
  const { items } = JSON.parse(listResult) as { items: ConversationItem[] };

  console.log(`Found ${items.length} conversations`);

  // Save index
  writeFileSync(join(outputDir, "index.json"), JSON.stringify(items, null, 2));

  // Fetch each conversation
  const toFetch = limit ? items.slice(0, limit) : items;
  let exported = 0;

  for (let i = 0; i < toFetch.length; i++) {
    const item = toFetch[i];
    const slug = slugify(item.title || "untitled");

    onProgress?.(i + 1, toFetch.length, item.title || "Untitled");

    try {
      const convResult = await browserEvaluate(createFetchConversationJS(item.id));
      const conversation = JSON.parse(convResult) as FullConversation;

      // Save JSON
      if (format === "json" || format === "both") {
        writeFileSync(
          join(outputDir, "conversations", `${item.id}.json`),
          JSON.stringify(conversation, null, 2),
        );
      }

      // Save Markdown
      if (format === "md" || format === "both") {
        const md = conversationToMarkdown(conversation);
        writeFileSync(join(outputDir, "conversations", `${item.id}_${slug}.md`), md);
      }

      exported++;

      // Rate limiting delay
      if (i < toFetch.length - 1) {
        await new Promise((r) => setTimeout(r, DELAY_MS));
      }
    } catch (err) {
      const errMsg = `Failed to export ${item.id} (${item.title}): ${err}`;
      console.error(errMsg);
      errors.push(errMsg);
    }
  }

  // Create summary
  const summary = `# ChatGPT Export Summary

**Date:** ${new Date().toISOString()}
**Total Conversations:** ${items.length}
**Exported:** ${exported}
**Errors:** ${errors.length}

## Conversations

${items.map((i) => `- [${i.title || "Untitled"}](conversations/${i.id}_${slugify(i.title || "untitled")}.md)`).join("\n")}
`;

  writeFileSync(join(outputDir, "summary.md"), summary);

  return { exported, outputDir, errors };
}

// CLI entry point
if (import.meta.url === `file://${process.argv[1]}`) {
  console.log("This script should be run from the OpenClaw agent context.");
  console.log("The agent will use the browser tool to execute the export.");
  console.log('\nUsage from agent: "Export my ChatGPT conversations"');
}

```
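
The TypeScript exporter is decoupled from any particular browser automation: it only requires a `browserEvaluate(js)` callback that runs a JavaScript string in a logged-in chatgpt.com tab and returns the stringified result. That makes it easy to dry-run with canned responses. A sketch of such a stub (the two URL fragments matched here are the ones the script's generated snippets contain; everything else is illustrative):

```javascript
// Stub browserEvaluate for dry-running the exporter without a browser.
// It answers the two request shapes the script issues: the paginated
// conversation listing and single-conversation fetches.
const cannedList = {
  items: [{ id: "c1", title: "Demo", create_time: 1, update_time: 2 }],
  total: 1,
};
const cannedConversation = {
  id: "c1",
  title: "Demo",
  create_time: 1,
  update_time: 2,
  current_node: "m1",
  mapping: {
    m1: {
      id: "m1",
      children: [],
      message: {
        id: "m1",
        author: { role: "user" },
        content: { content_type: "text", parts: ["hi"] },
        create_time: 1,
      },
    },
  },
};

async function stubBrowserEvaluate(js) {
  if (js.includes("/backend-api/conversations?")) {
    return JSON.stringify(cannedList); // paginated listing request
  }
  // Otherwise assume a single-conversation fetch.
  return JSON.stringify(cannedConversation);
}
```

Passing `stubBrowserEvaluate` as `options.browserEvaluate` exercises the whole export path (index.json, per-conversation JSON/Markdown, summary.md) with no network access or live session.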

### scripts/export.sh

```bash
#!/bin/bash
# ChatGPT Conversation Exporter
# Usage: ./export.sh <access_token> [output_dir]

set -e

TOKEN="$1"
OUTPUT_DIR="${2:-$HOME/.openclaw/workspace/chatgpt-export/$(date +%Y-%m-%d)}"

if [ -z "$TOKEN" ]; then
  echo "Usage: ./export.sh <access_token> [output_dir]"
  echo ""
  echo "Get your access token by running this in browser console on chatgpt.com:"
  echo "  fetch('/api/auth/session').then(r=>r.json()).then(d=>console.log(d.accessToken))"
  exit 1
fi

mkdir -p "$OUTPUT_DIR/conversations"
echo "📁 Output directory: $OUTPUT_DIR"

# Fetch conversation list
echo "📋 Fetching conversation list..."
OFFSET=0
LIMIT=100
ALL_IDS=""

while true; do
  RESP=$(curl -s "https://chatgpt.com/backend-api/conversations?offset=$OFFSET&limit=$LIMIT" \
    -H "Authorization: Bearer $TOKEN" \
    -H "Content-Type: application/json")
  
  # Extract IDs and titles
  COUNT=$(echo "$RESP" | jq '.items | length')
  
  if [ "$OFFSET" -eq 0 ]; then
    echo "$RESP" | jq '.items' > "$OUTPUT_DIR/index.json"
    TOTAL=$(echo "$RESP" | jq '.total // (.items | length)')
    echo "📊 Found $TOTAL conversations"
  else
    # Append to index
    jq -s '.[0] + .[1]' "$OUTPUT_DIR/index.json" <(echo "$RESP" | jq '.items') > "$OUTPUT_DIR/index.tmp.json"
    mv "$OUTPUT_DIR/index.tmp.json" "$OUTPUT_DIR/index.json"
  fi
  
  # Add IDs to list
  IDS=$(echo "$RESP" | jq -r '.items[].id')
  ALL_IDS="$ALL_IDS $IDS"
  
  if [ "$COUNT" -lt "$LIMIT" ]; then
    break
  fi
  
  OFFSET=$((OFFSET + LIMIT))
  sleep 0.2
done

# Count total
TOTAL_IDS=$(echo "$ALL_IDS" | wc -w)
echo "📥 Fetching $TOTAL_IDS conversations..."

# Fetch each conversation
EXPORTED=0
ERRORS=0

for ID in $ALL_IDS; do
  EXPORTED=$((EXPORTED + 1))
  
  # Get conversation
  CONV=$(curl -s "https://chatgpt.com/backend-api/conversation/$ID" \
    -H "Authorization: Bearer $TOKEN" \
    -H "Content-Type: application/json")
  
  # Check for error
  if echo "$CONV" | jq -e '.detail' > /dev/null 2>&1; then
    echo "❌ [$EXPORTED/$TOTAL_IDS] Error fetching $ID"
    ERRORS=$((ERRORS + 1))
    continue
  fi
  
  TITLE=$(echo "$CONV" | jq -r '.title // "Untitled"')
  SLUG=$(echo "$TITLE" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9]/-/g' | cut -c1-50)
  
  # Save JSON
  echo "$CONV" > "$OUTPUT_DIR/conversations/${ID}.json"
  
  # Convert to Markdown
  {
    echo "# $TITLE"
    echo ""
    echo "**ID:** $ID"
    echo "**Created:** $(echo "$CONV" | jq -r '.create_time')"
    echo ""
    echo "---"
    echo ""
    
    # Extract messages (simplified - just text parts)
    echo "$CONV" | jq -r '
      .mapping | to_entries[] | 
      select(.value.message.content.parts != null) |
      select(.value.message.author.role != "system") |
      {
        role: .value.message.author.role,
        content: (.value.message.content.parts | map(select(type == "string")) | join("\n")),
        time: .value.message.create_time
      } |
      select(.content != "")
    ' | jq -s 'sort_by(.time)' | jq -r '.[] | 
      if .role == "user" then "## You\n\n\(.content)\n\n---\n" 
      else "## ChatGPT\n\n\(.content)\n\n---\n" end'
  } > "$OUTPUT_DIR/conversations/${ID}_${SLUG}.md"
  
  printf "✅ [%d/%d] %s\r" "$EXPORTED" "$TOTAL_IDS" "$TITLE"
  
  # Rate limit
  sleep 0.1
done

echo ""
echo ""
echo "🎉 Export complete!"
echo "   📁 $OUTPUT_DIR"
echo "   ✅ Exported: $((EXPORTED - ERRORS))"
echo "   ❌ Errors: $ERRORS"

```
