
querying-json

Extracts specific fields from JSON files efficiently using jq instead of reading entire files, saving 80-95% of context. Use this skill when querying JSON files, filtering or transforming data, or pulling specific fields from large JSON files.

Packaged view

This page reorganizes the original catalog entry around fit, installability, and workflow context first. The original raw source lives below.

Stars: 6
Hot score: 82
Updated: March 20, 2026
Overall rating: C (1.1)
Composite score: 1.1
Best-practice grade: B (77.6)

Install command

npx @skill-hub/cli install iota9star-my-skills-jq

Repository

iota9star/my-skills

Skill path: skills/jq

Open repository

Best for

Primary workflow: Analyze Data & AI.

Technical facets: Full Stack, Data / AI.

Target audience: everyone.

License: Unknown.

Original source

Catalog source: SkillHub Club.

Repository owner: iota9star.

This is a mirrored public skill entry. Review the repository before installing it into production workflows.

What it helps with

  • Install querying-json into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
  • Review https://github.com/iota9star/my-skills before adding querying-json to shared team environments
  • Use querying-json for development workflows

Works across

Claude Code, Codex CLI, Gemini CLI, OpenCode

Favorites: 0.

Sub-skills: 0.

Aggregator: No.

Original source / Raw SKILL.md

---
name: querying-json
description: Extracts specific fields from JSON files efficiently using jq instead of reading entire files, saving 80-95% context. Use this skill when querying JSON files, filtering/transforming data, or getting specific field(s) from large JSON files
---

# jq: JSON Data Extraction Tool

**Always invoke jq skill to extract JSON fields - do not execute bash commands directly.**

Use jq to extract specific fields from JSON files without loading entire file contents into context.

## When to Use jq vs Read

**Use jq when:**
- Need specific field(s) from structured data file
- File is large (>50 lines) and only need subset
- Querying nested structures
- Filtering/transforming data
- **Saves 80-95% context** vs reading entire file

**Just use Read when:**
- File is small (<50 lines)
- Need to understand overall structure
- Making edits (need full context anyway)
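As a rough sketch of the trade-off, the snippet below builds a small sample file (the file name and contents are illustrative, not from the original) and pulls a single field with jq, so only one short line enters context instead of the whole document:

```bash
# Create a sample JSON file (illustrative; any large JSON behaves the same)
cat > /tmp/sample-package.json <<'EOF'
{
  "name": "demo-app",
  "version": "1.2.3",
  "dependencies": { "react": "^18.2.0", "lodash": "^4.17.21" }
}
EOF

# Extract just the one field needed instead of reading the whole file
jq -r '.version' /tmp/sample-package.json
# → 1.2.3
```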

## Common File Types

JSON files where jq excels:
- package.json, tsconfig.json
- Lock files such as package-lock.json (note: yarn.lock is not JSON)
- API responses
- Configuration files

## Default Strategy

**Invoke the jq skill** to extract specific fields from JSON files efficiently, instead of reading entire files; this saves 80-95% of context.

Common workflow: fd skill → jq skill → other skills (fzf, sd, bat) for extraction and transformation.
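A minimal sketch of that discover-then-extract flow, using `find` as a stand-in for the fd skill (fd may not be installed; directory and field names are illustrative):

```bash
# Set up a couple of sample config files (illustrative)
mkdir -p /tmp/jq-demo
echo '{"name":"svc-a","port":8080}' > /tmp/jq-demo/a.json
echo '{"name":"svc-b","port":9090}' > /tmp/jq-demo/b.json

# Discover JSON files, then extract one field from each
find /tmp/jq-demo -name '*.json' | sort | while read -r f; do
  jq -r '.name // empty' "$f"
done
# → svc-a
# → svc-b
```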

## Quick Examples

```bash
# Get version from package.json
jq -r .version package.json

# Get nested dependency version
jq -r '.dependencies.react' package.json

# List all dependencies
jq -r '.dependencies | keys[]' package.json
```

## Pipeline Combinations
- **jq | fzf**: Interactive selection from JSON data
- **jq | sd**: Transform JSON to other formats
- **jq | bat**: View extracted JSON with syntax highlighting

## Skill Combinations

### For Discovery Phase
- **fd → jq**: Find JSON config files and extract specific fields
- **ripgrep → jq**: Find JSON usage patterns and extract values
- **extracting-code-structure → jq**: Understand API structures before extraction

### For Analysis Phase
- **jq → fzf**: Interactive selection from structured data
- **jq → bat**: View extracted data with syntax highlighting
- **jq → tokei**: Extract statistics from JSON output

### For Refactoring Phase
- **jq → sd**: Transform JSON data to other formats
- **jq → yq**: Convert JSON to YAML for different tools
- **jq → xargs**: Use extracted values as command arguments

### Multi-Skill Workflows
- **jq → yq → sd → bat**: JSON to YAML transformation with preview
- **jq → fzf → xargs**: Interactive selection and execution based on JSON data
- **fd → jq → ripgrep**: Find config files, extract values, search usage

### Common Integration Examples
```bash
# Extract and select dependencies
jq -r '.dependencies | keys[]' package.json | fzf --multi | xargs npm info

# Get and filter package versions
jq -r '.devDependencies | to_entries[] | "\(.key)@\(.value)"' package.json | fzf
```

### Note: Choosing Between jq and yq
- **JSON files** (package.json, tsconfig.json): Use `jq`
- **YAML files** (docker-compose.yml, GitHub Actions): Use `yq`

## Detailed Reference

For comprehensive jq patterns, syntax, and examples, load the [jq guide](./reference/jq-guide.md) when you need:
- Core patterns (80% of use cases)
- Array manipulation and filtering
- Complex nested data extraction
- Data transformation patterns
- Real-world workflow examples
- Advanced patterns
- Pipe composition
- Error handling and edge cases
- Integration with other tools

---

## Referenced Files

> The following files are referenced in this skill and included for context.

### reference/jq-guide.md

```markdown
# jq: JSON Query and Extraction Reference

**Goal: Extract specific data from JSON without reading entire file.**

## The Essential Pattern

```bash
jq '.field' file.json
```

Use `-r` flag for raw output (removes quotes from strings):
```bash
jq -r '.field' file.json
```

**Use `-r` by default for string values** - cleaner output.

---

# Core Patterns (80% of Use Cases)

## 1. Extract Top-Level Field
```bash
jq -r '.version' package.json
jq -r '.name' package.json
```

## 2. Extract Nested Field
```bash
jq -r '.dependencies.react' package.json
jq -r '.scripts.build' package.json
jq -r '.config.database.host' config.json
```

## 3. Extract Multiple Fields
```bash
jq '{name, version, description}' package.json
```
Creates object with just those fields.

Or as separate lines:
```bash
jq -r '.name, .version' package.json
```

## 4. Extract from Array by Index
```bash
jq '.[0]' array.json           # First element
jq '.items[2]' data.json        # Third element
```

## 5. Extract All Array Elements
```bash
jq '.[]' array.json             # All elements
jq '.items[]' data.json         # All items
```

## 6. Extract Field from Each Array Element
```bash
jq -r '.dependencies | keys[]' package.json      # All dependency names
jq -r '.items[].name' data.json                  # Name from each item
```

## 7. Filter Array by Condition
```bash
jq '.items[] | select(.active == true)' data.json
jq '.items[] | select(.price > 100)' data.json
```

## 8. Get Object Keys
```bash
jq -r 'keys[]' object.json
jq -r '.dependencies | keys[]' package.json
```

## 9. Check if Field Exists
```bash
jq 'has("field")' file.json
```

## 10. Handle Missing Fields (Use // for Default)
```bash
jq -r '.field // "default"' file.json
```

---

# Common Real-World Workflows

## "What version is this package?"
```bash
jq -r '.version' package.json
```

## "What's the main entry point?"
```bash
jq -r '.main' package.json
```

## "List all dependencies"
```bash
jq -r '.dependencies | keys[]' package.json
```

## "What version of React?"
```bash
jq -r '.dependencies.react' package.json
```

## "List all scripts"
```bash
jq -r '.scripts | keys[]' package.json
```

## "Get specific script command"
```bash
jq -r '.scripts.build' package.json
```

## "Check TypeScript compiler options"
```bash
jq '.compilerOptions' tsconfig.json
```

## "Get target from tsconfig"
```bash
jq -r '.compilerOptions.target' tsconfig.json
```

## "List all services from docker-compose JSON"
```bash
jq -r '.services | keys[]' docker-compose.json
```

## "Get environment variables for a service"
```bash
jq '.services.api.environment' docker-compose.json
```

---

# Advanced Patterns (20% Use Cases)

## Combine Multiple Queries
```bash
jq '{version, deps: (.dependencies | keys)}' package.json
```

## Map Array Elements
```bash
jq '[.items[] | .name]' data.json          # Array of names
```

## Count Array Length
```bash
jq '.items | length' data.json
jq '.dependencies | length' package.json
```

## Sort Array
```bash
jq '.items | sort_by(.name)' data.json
```

## Group and Transform
```bash
jq 'group_by(.category)' data.json
```

## Complex Filter
```bash
jq '.items[] | select(.active and .price > 100) | .name' data.json
```

---

# Pipe Composition

jq uses `|` for piping within queries:
```bash
jq '.items | map(.name) | sort' data.json
```

Can also pipe to shell commands:
```bash
jq -r '.dependencies | keys[]' package.json | wc -l     # Count dependencies
jq -r '.dependencies | keys[]' package.json | sort      # Sorted dependency list
```

---

# Common Flags

- `-r` - Raw output (no quotes) - **USE THIS FOR STRINGS**
- `-c` - Compact output (single line)
- `-e` - Exit with error if output is false/null
- `-S` - Sort object keys
- `-M` - Monochrome (no colors)

**Default to `-r` for string extraction.**

---

# Handling Edge Cases

## If Field Might Not Exist
```bash
jq -r '.field // "not found"' file.json
```

## If Result Might Be Null
```bash
jq -r '.field // empty' file.json          # Output nothing if null
```

## If Array Might Be Empty
```bash
jq -r '.items[]? // empty' file.json       # ? suppresses errors
```

## Multiple Possible Paths
```bash
jq -r '.field1 // .field2 // "default"' file.json
```

---

# Error Handling

If field doesn't exist:
```bash
# BAD: jq '.nonexistent' file.json
# → null (but no error)

# GOOD: Check existence first
jq -e 'has("field")' file.json && jq '.field' file.json
```

Or use default:
```bash
jq -r '.field // "not found"' file.json
```

---

# Integration with Other Tools

## With ripgrep
```bash
# Get dependencies, then search code for usage
jq -r '.dependencies | keys[]' package.json | while read dep; do
  rg -l "from ['\"]$dep['\"]"
done
```

## With Edit Tool
Common workflow:
1. Use jq to extract current value
2. Modify value
3. Use Edit tool to update JSON (or jq for complex updates)
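For step 3 done entirely in jq, one common pattern (sketched here with an illustrative file; `--arg` binds a shell value as a jq variable) writes to a temporary file and then replaces the original, since jq has no in-place edit flag:

```bash
# Sample file (illustrative)
echo '{"name":"demo","version":"1.0.0"}' > /tmp/pkg.json

# 1. Extract the current value
current=$(jq -r '.version' /tmp/pkg.json)

# 2. Modify the value in the shell (bump the patch component)
next="${current%.*}.1"   # 1.0.0 -> 1.0.1

# 3. Update via jq, using a temp file because jq cannot edit in place
jq --arg v "$next" '.version = $v' /tmp/pkg.json > /tmp/pkg.json.tmp \
  && mv /tmp/pkg.json.tmp /tmp/pkg.json

jq -r '.version' /tmp/pkg.json
# → 1.0.1
```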

## Reading STDIN
```bash
echo '{"key":"value"}' | jq -r '.key'
```

---

# Best Practices

## 1. Always Use -r for String Fields
```bash
# BAD:  jq '.version' package.json  → "1.0.0" (with quotes)
# GOOD: jq -r '.version' package.json  → 1.0.0 (raw)
```

## 2. Test Queries on Small Examples First
```bash
echo '{"test":"value"}' | jq -r '.test'
```

## 3. Use // for Defaults
```bash
jq -r '.field // "default"' file.json
```

## 4. Use keys[] for Object Properties
```bash
jq -r 'keys[]' object.json
```

## 5. Combine with Shell Pipes
```bash
jq -r '.dependencies | keys[]' package.json | grep react
```

---

# Quick Reference

## Most Common Commands

```bash
# Single field
jq -r '.field' file.json

# Nested field
jq -r '.parent.child' file.json

# Array element
jq '.array[0]' file.json

# All array elements
jq '.array[]' file.json

# Object keys
jq -r 'keys[]' file.json

# Filter array
jq '.array[] | select(.field == "value")' file.json

# Multiple fields
jq '{field1, field2}' file.json

# With default
jq -r '.field // "default"' file.json
```

---

# When to Use Read Instead

Use Read tool when:
- File is < 50 lines
- Need to see overall structure
- Making edits (need full context)
- Exploring unknown JSON structure

Use jq when:
- File is large
- Know exactly what field(s) are needed
- Want to save context tokens

---

# Summary

**Default pattern:**
```bash
jq -r '.field' file.json
```

**Key principles:**
1. Use `-r` for string output (raw, no quotes)
2. Use `.` notation for nested fields
3. Use `[]` for array access
4. Use `//` for defaults
5. Use `keys[]` for object properties
6. Pipe with `|` inside jq, pipe to shell after

**Massive context savings: Extract only what is needed instead of reading entire JSON files.**

```
