
feishu-comments

Read comments from Feishu documents. Use when: user asks to check/read/fetch comments on a Feishu doc, review feedback on a document, or collaborate on document revisions via comments.

Packaged view

This page reorganizes the original catalog entry to put fit, installability, and workflow context first; the original raw source appears below.

Stars
3,127
Hot score
99
Updated
March 20, 2026
Overall rating
C (4.0)
Composite score
4.0
Best-practice grade
S (96.0)

Install command

npx @skill-hub/cli install openclaw-skills-feishu-comments

Repository

openclaw/skills

Skill path: skills/deadblue22/feishu-comments



Best for

Primary workflow: Write Technical Docs.

Technical facets: Full Stack, Tech Writer.

Target audience: everyone.

License: Unknown.

Original source

Catalog source: SkillHub Club.

Repository owner: openclaw.

This is a mirrored public skill entry. Review the repository before installing it into production workflows.

What it helps with

  • Install feishu-comments into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
  • Review https://github.com/openclaw/skills before adding feishu-comments to shared team environments
  • Use feishu-comments to read and review Feishu document comments in development workflows

Works across

Claude Code · Codex CLI · Gemini CLI · OpenCode

Favorites: 0.

Sub-skills: 0.

Aggregator: No.

Original source / Raw SKILL.md

---
name: feishu-comments
description: |
  Read comments from Feishu documents. Use when: user asks to check/read/fetch comments on a Feishu doc, review feedback on a document, or collaborate on document revisions via comments.
---

# Feishu Document Comments

Fetch comments from Feishu docx documents via the Drive Comment API.

## Requirements

- **Feishu app credentials** configured in `~/.openclaw/openclaw.json` (reads `appId` and `appSecret` from `channels.feishu`)
- **System dependencies**: `curl`, `python3` (must be available on PATH)
- **Feishu app permission**: `docs:document.comment:read` or `drive:drive`
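The credential lookup described above can be sketched in a few lines of Python; the `channels.feishu` path and key names follow the config layout stated in the requirements, and this is a minimal illustrative sketch, not part of the skill:

```python
import json
from pathlib import Path

def load_feishu_credentials(config_path: str = "~/.openclaw/openclaw.json"):
    """Read appId/appSecret from channels.feishu in the openclaw config."""
    cfg = json.loads(Path(config_path).expanduser().read_text())
    feishu = cfg["channels"]["feishu"]
    return feishu["appId"], feishu["appSecret"]
```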

## Usage

Run the bundled script to get all comments on a document:

```bash
bash skills/feishu-comments/scripts/get_comments.sh <doc_token>
```

To fetch specific comments by ID:

```bash
bash skills/feishu-comments/scripts/get_comments.sh <doc_token> "id1,id2,id3"
```

By default only open, anchored comments are listed; pass `--all` to also include resolved and orphaned comments.

Resolve `skills/` paths relative to the workspace directory.

## When to Use

- After `feishu_doc` `list_blocks` shows `comment_ids` on blocks
- When user asks to review or check comments on a document
- During document collaboration review cycles

## Output Format

Each comment shows:
- Comment ID, status (Open/Resolved), scope (Global/Local)
- Quoted text (for local/inline comments)
- All replies with user ID and text content
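The reply text shown above is flattened from Feishu's rich-text elements (text runs, @-mentions, doc links), mirroring the element handling in the bundled script; a standalone sketch:

```python
def reply_text(reply: dict) -> str:
    """Flatten a comment reply's rich-text elements into plain text."""
    parts = []
    for el in reply.get("content", {}).get("elements", []):
        kind = el.get("type", "")
        if kind == "text_run":
            parts.append(el.get("text_run", {}).get("text", ""))
        elif kind == "person":
            parts.append("@" + el.get("person", {}).get("user_id", "?"))
        elif kind == "docs_link":
            parts.append(el.get("docs_link", {}).get("url", ""))
    return "".join(parts)
```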

## Extracting doc_token

From URL `https://xxx.feishu.cn/docx/ABC123def` → doc_token = `ABC123def`

For wiki pages, first use `feishu_wiki` to get `obj_token`, then use that as the doc_token.
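For illustration, the extraction amounts to taking the last path segment of the URL (a minimal sketch; `urlparse` also strips any query string a real link may carry):

```python
from urllib.parse import urlparse

def extract_doc_token(url: str) -> str:
    """Return the last path segment of a Feishu docx URL as the doc_token."""
    path = urlparse(url).path.rstrip("/")
    return path.rsplit("/", 1)[-1]
```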

## How It Works

The bundled shell script:
1. Reads Feishu app credentials (`appId`, `appSecret`) from `~/.openclaw/openclaw.json`
2. Obtains a `tenant_access_token` via the Feishu auth API
3. Calls the Drive Comment API to list and batch-query comments
4. Formats and outputs comment content to stdout

No data is sent to any third party beyond the Feishu/Lark API endpoints.
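The request flow above can be sketched with the standard library alone. The endpoints are the ones the script calls; the sketch below only builds the requests (nothing is sent):

```python
import json
import urllib.request

API_BASE = "https://open.feishu.cn"  # the script switches to open.larksuite.com for Lark tenants

def token_request(app_id: str, app_secret: str) -> urllib.request.Request:
    """Step 2: build the tenant_access_token request."""
    body = json.dumps({"app_id": app_id, "app_secret": app_secret}).encode()
    return urllib.request.Request(
        f"{API_BASE}/open-apis/auth/v3/tenant_access_token/internal",
        data=body,
        headers={"Content-Type": "application/json"},
    )

def list_comments_request(doc_token: str, tenant_token: str) -> urllib.request.Request:
    """Step 3: build the Drive Comment list request."""
    return urllib.request.Request(
        f"{API_BASE}/open-apis/drive/v1/files/{doc_token}/comments"
        "?file_type=docx&user_id_type=open_id",
        headers={"Authorization": f"Bearer {tenant_token}"},
    )
```

Sending each request with `urllib.request.urlopen` and decoding the JSON body reproduces steps 2 and 3 of the script.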

## Limitations

- Cannot create or reply to comments (the bundled `resolve_comments.sh` can only mark existing comments as resolved)
- API error responses are printed to stderr (may contain request IDs but no sensitive data)


---

## Skill Companion Files

> Additional files collected from the skill directory layout.

### _meta.json

```json
{
  "owner": "deadblue22",
  "slug": "feishu-comments",
  "displayName": "Feishu Comments",
  "latest": {
    "version": "1.2.0",
    "publishedAt": 1773115811951,
    "commit": "https://github.com/openclaw/skills/commit/84aa50692bedec4e8fcf3af71f1ae0e1d3fdc057"
  },
  "history": [
    {
      "version": "1.1.0",
      "publishedAt": 1772605597563,
      "commit": "https://github.com/openclaw/skills/commit/7978ef61682d5225720a3d7f5b8a7be7bf7a8ef4"
    },
    {
      "version": "1.0.1",
      "publishedAt": 1772471595971,
      "commit": "https://github.com/openclaw/skills/commit/b047f7a49cef98b3e169219931b55a749e0cd349"
    }
  ]
}

```

### scripts/get_comments.sh

```bash
#!/bin/bash
# Fetch comments from a Feishu docx document with orphan detection
# Usage: get_comments.sh <doc_token> [comment_id1,comment_id2,...] [--all]
# By default only shows Open + anchored comments. Use --all to include orphaned/resolved.
# If comment_ids not provided, fetches all comments first, then batch queries them.

set -euo pipefail

DOC_TOKEN="${1:?Usage: get_comments.sh <doc_token> [comment_id1,comment_id2,...] [--all]}"
COMMENT_IDS="${2:-}"
SHOW_ALL="${3:-}"

# Read credentials from openclaw config
CONFIG_FILE="$HOME/.openclaw/openclaw.json"
APP_ID=$(grep -m1 '"appId"' "$CONFIG_FILE" | sed 's/.*: *"\(.*\)".*/\1/')
APP_SECRET=$(grep -m1 '"appSecret"' "$CONFIG_FILE" | sed 's/.*: *"\(.*\)".*/\1/')

# Detect domain (feishu vs lark)
DOMAIN=$(grep -m1 '"domain"' "$CONFIG_FILE" | sed 's/.*: *"\(.*\)".*/\1/' || echo "feishu")
if [ "$DOMAIN" = "lark" ]; then
  API_BASE="https://open.larksuite.com"
else
  API_BASE="https://open.feishu.cn"
fi

# Get tenant_access_token
TOKEN_RESP=$(curl -s -X POST "${API_BASE}/open-apis/auth/v3/tenant_access_token/internal" \
  -H "Content-Type: application/json" \
  -d "{\"app_id\":\"${APP_ID}\",\"app_secret\":\"${APP_SECRET}\"}")

# "|| true" keeps set -e/pipefail from aborting before the empty-token check below
TENANT_TOKEN=$(echo "$TOKEN_RESP" | python3 -c "import sys,json; print(json.load(sys.stdin)['tenant_access_token'])" 2>/dev/null || true)

if [ -z "$TENANT_TOKEN" ]; then
  echo "Error: Failed to get tenant_access_token"
  echo "$TOKEN_RESP"
  exit 1
fi

# Get document raw content for orphan detection
DOC_CONTENT=$(curl -s -X GET \
  "${API_BASE}/open-apis/docx/v1/documents/${DOC_TOKEN}/raw_content" \
  -H "Authorization: Bearer ${TENANT_TOKEN}" | python3 -c "
import sys, json
data = json.load(sys.stdin)
print(data.get('data', {}).get('content', ''))
" 2>/dev/null || echo "")

# If no comment_ids provided, list all comments first
if [ -z "$COMMENT_IDS" ] || [ "$COMMENT_IDS" = "--all" ]; then
  # Handle case where --all is in position 2
  if [ "$COMMENT_IDS" = "--all" ]; then
    SHOW_ALL="--all"
    COMMENT_IDS=""
  fi

  ALL_COMMENTS=$(curl -s -X GET \
    "${API_BASE}/open-apis/drive/v1/files/${DOC_TOKEN}/comments?file_type=docx&user_id_type=open_id" \
    -H "Authorization: Bearer ${TENANT_TOKEN}")
  
  # Extract comment IDs
  COMMENT_IDS=$(echo "$ALL_COMMENTS" | python3 -c "
import sys, json
data = json.load(sys.stdin)
if data.get('code') != 0:
    print(json.dumps(data, indent=2, ensure_ascii=False), file=sys.stderr)
    sys.exit(1)
items = data.get('data', {}).get('items', [])
if not items:
    print('No comments found.', file=sys.stderr)
    sys.exit(0)
ids = [item['comment_id'] for item in items]
print(','.join(ids))
" 2>&1)

  # If the output starts with 'No comments' or is an error, print and exit
  if echo "$COMMENT_IDS" | grep -q "^No comments\|^Error\|^{"; then
    echo "$COMMENT_IDS"
    exit 0
  fi
fi

# Convert comma-separated IDs to JSON array
IDS_JSON=$(echo "$COMMENT_IDS" | python3 -c "
import sys
ids = sys.stdin.read().strip().split(',')
import json
print(json.dumps(ids))
")

# Batch query comments
RESULT=$(curl -s -X POST \
  "${API_BASE}/open-apis/drive/v1/files/${DOC_TOKEN}/comments/batch_query?file_type=docx&user_id_type=open_id" \
  -H "Authorization: Bearer ${TENANT_TOKEN}" \
  -H "Content-Type: application/json" \
  -d "{\"comment_ids\": ${IDS_JSON}}")

# Pretty print with orphan detection
echo "$RESULT" | python3 -c "
import sys, json

data = json.load(sys.stdin)
if data.get('code') != 0:
    print(json.dumps(data, indent=2, ensure_ascii=False))
    sys.exit(1)

items = data.get('data', {}).get('items', [])
if not items:
    print('No comments found.')
    sys.exit(0)

doc_content = '''${DOC_CONTENT}'''
show_all = '${SHOW_ALL}' == '--all'

active_count = 0
orphaned_count = 0
resolved_count = 0

for item in items:
    cid = item.get('comment_id', '?')
    is_solved = item.get('is_solved', False)
    is_whole = item.get('is_whole', False)
    quote = item.get('quote', '')
    
    # Detect orphan: quote text no longer in document
    quote_snippet = quote[:50] if quote else ''
    is_orphaned = bool(quote_snippet and quote_snippet not in doc_content) if doc_content else False
    
    if is_solved:
        resolved_count += 1
        status = '✅ Resolved'
    elif is_orphaned:
        orphaned_count += 1
        status = '👻 Orphaned (anchor text gone)'
    else:
        active_count += 1
        status = '💬 Open'
    
    scope = 'Global' if is_whole else 'Local'
    
    # Default: skip resolved and orphaned unless --all
    if not show_all and (is_solved or is_orphaned):
        continue
    
    print(f'--- Comment {cid} [{status}] ({scope}) ---')
    if quote:
        print(f'  Quote: \"{quote}\"')
    
    replies = item.get('reply_list', {}).get('replies', [])
    for r in replies:
        uid = r.get('user_id', '?')
        elements = r.get('content', {}).get('elements', [])
        text_parts = []
        for el in elements:
            t = el.get('type', '')
            if t == 'text_run':
                text_parts.append(el.get('text_run', {}).get('text', ''))
            elif t == 'person':
                text_parts.append(f'@{el.get(\"person\", {}).get(\"user_id\", \"?\")}')
            elif t == 'docs_link':
                text_parts.append(el.get('docs_link', {}).get('url', ''))
        text = ''.join(text_parts)
        print(f'  [{uid}]: {text}')
    print()

# Summary line
print(f'--- Summary: {active_count} active, {orphaned_count} orphaned, {resolved_count} resolved ---')
"

```

### scripts/resolve_comments.sh

```bash
#!/bin/bash
# Resolve (close) comments on a Feishu docx document
# Usage: resolve_comments.sh <doc_token> <comment_id1,comment_id2,...>
# Or:    resolve_comments.sh <doc_token> --orphaned   (auto-resolve all orphaned comments)

set -euo pipefail

DOC_TOKEN="${1:?Usage: resolve_comments.sh <doc_token> <comment_id1,...|--orphaned>}"
TARGET="${2:?Usage: resolve_comments.sh <doc_token> <comment_id1,...|--orphaned>}"

# Read credentials from openclaw config
CONFIG_FILE="$HOME/.openclaw/openclaw.json"
APP_ID=$(grep -m1 '"appId"' "$CONFIG_FILE" | sed 's/.*: *"\(.*\)".*/\1/')
APP_SECRET=$(grep -m1 '"appSecret"' "$CONFIG_FILE" | sed 's/.*: *"\(.*\)".*/\1/')

# Detect domain
DOMAIN=$(grep -m1 '"domain"' "$CONFIG_FILE" | sed 's/.*: *"\(.*\)".*/\1/' || echo "feishu")
if [ "$DOMAIN" = "lark" ]; then
  API_BASE="https://open.larksuite.com"
else
  API_BASE="https://open.feishu.cn"
fi

# Get tenant_access_token
# "|| true" keeps set -e/pipefail from aborting before the empty-token check below
TENANT_TOKEN=$(curl -s -X POST "${API_BASE}/open-apis/auth/v3/tenant_access_token/internal" \
  -H "Content-Type: application/json" \
  -d "{\"app_id\":\"${APP_ID}\",\"app_secret\":\"${APP_SECRET}\"}" | python3 -c "import sys,json; print(json.load(sys.stdin)['tenant_access_token'])" 2>/dev/null || true)

if [ -z "$TENANT_TOKEN" ]; then
  echo "Error: Failed to get tenant_access_token"
  exit 1
fi

# If --orphaned, find orphaned comment IDs automatically
if [ "$TARGET" = "--orphaned" ]; then
  # Get doc content
  DOC_CONTENT=$(curl -s -X GET \
    "${API_BASE}/open-apis/docx/v1/documents/${DOC_TOKEN}/raw_content" \
    -H "Authorization: Bearer ${TENANT_TOKEN}" | python3 -c "
import sys, json
data = json.load(sys.stdin)
print(data.get('data', {}).get('content', ''))
" 2>/dev/null || echo "")

  # List all comments
  ALL_COMMENTS=$(curl -s -X GET \
    "${API_BASE}/open-apis/drive/v1/files/${DOC_TOKEN}/comments?file_type=docx&user_id_type=open_id" \
    -H "Authorization: Bearer ${TENANT_TOKEN}")

  COMMENT_IDS_STR=$(echo "$ALL_COMMENTS" | python3 -c "
import sys, json
data = json.load(sys.stdin)
items = data.get('data', {}).get('items', [])
ids = [item['comment_id'] for item in items]
print(','.join(ids))
" 2>/dev/null || true)

  if [ -z "$COMMENT_IDS_STR" ]; then
    echo "No comments found."
    exit 0
  fi

  # Batch query for details
  IDS_JSON=$(echo "$COMMENT_IDS_STR" | python3 -c "import sys,json; print(json.dumps(sys.stdin.read().strip().split(',')))")
  DETAIL=$(curl -s -X POST \
    "${API_BASE}/open-apis/drive/v1/files/${DOC_TOKEN}/comments/batch_query?file_type=docx&user_id_type=open_id" \
    -H "Authorization: Bearer ${TENANT_TOKEN}" \
    -H "Content-Type: application/json" \
    -d "{\"comment_ids\": ${IDS_JSON}}")

  # Find orphaned (open + anchor text gone)
  TARGET=$(echo "$DETAIL" | DOC_CONTENT="$DOC_CONTENT" python3 -c "
import sys, json, os
data = json.load(sys.stdin)
doc_content = os.environ.get('DOC_CONTENT', '')
items = data.get('data', {}).get('items', [])
orphaned = []
for item in items:
    if item.get('is_solved', False):
        continue
    quote = item.get('quote', '')[:50]
    if quote and quote not in doc_content:
        orphaned.append(item['comment_id'])
if orphaned:
    print(','.join(orphaned))
else:
    print('')
" 2>/dev/null)

  if [ -z "$TARGET" ]; then
    echo "No orphaned comments found."
    exit 0
  fi
  echo "Found orphaned comments: $TARGET"
fi

# Resolve each comment
IFS=',' read -ra IDS <<< "$TARGET"
for cid in "${IDS[@]}"; do
  RESP=$(curl -s -X PATCH \
    "${API_BASE}/open-apis/drive/v1/files/${DOC_TOKEN}/comments/${cid}?file_type=docx" \
    -H "Authorization: Bearer ${TENANT_TOKEN}" \
    -H "Content-Type: application/json" \
    -d '{"is_solved": true}')
  
  CODE=$(echo "$RESP" | python3 -c "import sys,json; print(json.load(sys.stdin).get('code', -1))" 2>/dev/null || true)
  if [ "$CODE" = "0" ]; then
    echo "✅ Resolved: ${cid}"
  else
    echo "❌ Failed: ${cid} - $(echo "$RESP" | python3 -c "import sys,json; print(json.load(sys.stdin).get('msg', 'unknown error'))" 2>/dev/null || true)"
  fi
done

```
