
fs-street

Fetches articles from Farnam Street RSS. Use when asking about decision-making, mental models, learning, or wisdom from Farnam Street blog.

Packaged view

This page reorganizes the original catalog entry to put fit, installability, and workflow context first. The original raw source appears below.

Stars: 3,115
Hot score: 99
Updated: March 20, 2026
Overall rating: C (4.0)
Composite score: 4.0
Best-practice grade: S (100.0)

Install command

npx @skill-hub/cli install openclaw-skills-fs-street

Repository

openclaw/skills

Skill path: skills/hjw21century/fs-street


Open repository

Best for

Primary workflow: Ship Full Stack.

Technical facets: Full Stack.

Target audience: everyone.

License: Unknown.

Original source

Catalog source: SkillHub Club.

Repository owner: openclaw.

This is a mirrored public skill entry. Review the repository before installing it into production workflows.

What it helps with

  • Install fs-street into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
  • Review https://github.com/openclaw/skills before adding fs-street to shared team environments
  • Use fs-street for development workflows

Works across

Claude Code · Codex CLI · Gemini CLI · OpenCode

Favorites: 0.

Sub-skills: 0.

Aggregator: No.

Original source / Raw SKILL.md

---
name: fs-street
description: Fetches articles from Farnam Street RSS. Use when asking about decision-making, mental models, learning, or wisdom from Farnam Street blog.
---

# Farnam Street

Fetches articles from Farnam Street blog, covering topics like mental models, decision-making, leadership, and learning.

## Quick Start

```
# Basic queries (Chinese inputs; English glosses in parentheses)
昨天的文章        (yesterday's article)
今天的FS文章      (today's FS article)
2024-06-13的文章  (the article for 2024-06-13)

# Search
有哪些可用的日期  (which dates are available)
```

## Query Types

| Type | Examples | Description |
|------|----------|-------------|
| Relative date | `昨天的文章` `今天的文章` `前天` | Yesterday, today, day before |
| Absolute date | `2024-06-13的文章` | YYYY-MM-DD format |
| Date range | `有哪些日期` `可用的日期` | Show available dates |
| Topic search | `关于决策的文章` `思维模型` | Search by keyword |

## Workflow

```
- [ ] Step 1: Parse date from user request
- [ ] Step 2: Fetch RSS data
- [ ] Step 3: Check content availability
- [ ] Step 4: Format and display results
```

---

## Step 1: Parse Date

| User Input | Target Date | Calculation |
|------------|-------------|-------------|
| `昨天` | Yesterday | today - 1 day |
| `前天` | Day before | today - 2 days |
| `今天` | Today | Current date |
| `2024-06-13` | 2024-06-13 | Direct parse |

**Format**: Always use `YYYY-MM-DD`
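The date mapping above can be sketched in a few lines of Python. This is a minimal illustration; `resolve_date` is a hypothetical helper, not part of the shipped script:

```python
from datetime import datetime, timezone, timedelta

def resolve_date(query, today=None):
    """Map a relative keyword or a YYYY-MM-DD string to a YYYY-MM-DD string."""
    now = today or datetime.now(timezone.utc)
    offsets = {"today": 0, "yesterday": 1, "day-before": 2}
    if query in offsets:
        return (now - timedelta(days=offsets[query])).strftime("%Y-%m-%d")
    # Absolute date: round-trip through strptime to validate the format
    return datetime.strptime(query, "%Y-%m-%d").strftime("%Y-%m-%d")
```

An invalid absolute date raises `ValueError`, which is a natural place to prompt the user for the `YYYY-MM-DD` format.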

---

## Step 2: Fetch RSS

```bash
python skills/fs-street/scripts/fetch_blog.py --date YYYY-MM-DD
```

**Available commands**:

```bash
# Get specific date
python skills/fs-street/scripts/fetch_blog.py --date 2024-06-13

# Get date range
python skills/fs-street/scripts/fetch_blog.py --date-range

# Relative dates
python skills/fs-street/scripts/fetch_blog.py --relative yesterday
```

**Requirements**: `pip install feedparser requests`

---

## Step 3: Check Content

### When NOT Found

```markdown
Sorry, no article available for 2024-06-14

Available date range: 2023-04-19 ~ 2024-06-13

Suggestions:
- View 2024-06-13 article
- View 2024-06-12 article
```
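The fallback above mirrors the JSON shape the bundled script emits when a date has no match. A sketch of that payload (illustrative helper, not the script itself):

```python
def not_found_payload(target_date, min_date, max_date):
    """Build the not-found response: an error marker plus the available range."""
    return {
        "error": "not_found",
        "message": f"No content found for {target_date}",
        "target_date": target_date,
        "available_range": {"min": min_date, "max": max_date},
    }
```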

### Members Only Content

Some articles are marked `[FS Members]`: these are premium content, and the feed may include only a teaser.
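The detection is a simple string check, matching what the bundled `fetch_blog.py` does (a sketch of that logic):

```python
def is_members_only(title, content=""):
    """An entry is premium if its title carries the marker or its body shows the join teaser."""
    if "[FS Members]" in title:
        return True
    return "Members Only content" in content or "Not a member? Join Us" in content
```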

---

## Step 4: Format Results

**Example Output**:

```markdown
# Farnam Street · 2024-06-13

> Experts vs. Imitators: How to tell the difference between real expertise and imitation

## Content

If you want the highest quality information, you have to speak to the best people. The problem is that many people claim to be experts who really aren't.

**Key Insights**:
- Imitators can't answer questions at a deeper level
- Experts can tell you all the ways they've failed
- Imitators don't know the limits of their expertise

---
Source: Farnam Street
URL: https://fs.blog/experts-vs-imitators/
```

---

## Configuration

| Variable | Description | Default |
|----------|-------------|---------|
| RSS_URL | RSS feed URL | `https://fs.blog/feed/` |

No API keys required.

---

## Troubleshooting

| Issue | Solution |
|-------|----------|
| RSS fetch fails | Check network connectivity |
| Invalid date | Use YYYY-MM-DD format |
| No content | Check available date range |
| Members only | Some articles are premium content |

---

## CLI Reference

```bash
# Get specific date
python skills/fs-street/scripts/fetch_blog.py --date 2024-06-13

# Get date range
python skills/fs-street/scripts/fetch_blog.py --date-range

# Relative dates
python skills/fs-street/scripts/fetch_blog.py --relative yesterday
```


---

## Skill Companion Files

> Additional files collected from the skill directory layout.

### _meta.json

```json
{
  "owner": "hjw21century",
  "slug": "fs-street",
  "displayName": "Fs Street",
  "latest": {
    "version": "0.1.0",
    "publishedAt": 1770998364746,
    "commit": "https://github.com/openclaw/skills/commit/386f5b22d044b64864f4b7969527eae7d15c2617"
  },
  "history": []
}

```

### references/output-format.md

```markdown
# Output Format Reference

## Markdown Template

```markdown
# Farnam Street · {date}

> {article_title}

## Content

{article_content}

---

Source: Farnam Street
URL: {article_url}
```

## Fields

| Field | Description |
|-------|-------------|
| title | Article title |
| link | Article URL |
| pubDate | Publication date |
| content | Article content (HTML) |
| is_members_only | Whether this is premium content |

```
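Filling that template from a fetched entry dict is a one-liner with `str.format`. A sketch under the field names in the table above; `render_entry` is illustrative, not shipped with the skill:

```python
TEMPLATE = """# Farnam Street · {date}

> {title}

## Content

{content}

---

Source: Farnam Street
URL: {link}"""

def render_entry(entry, date):
    """Fill the output template from an entry's title, content, and link fields."""
    return TEMPLATE.format(date=date, title=entry["title"],
                           content=entry["content"], link=entry["link"])
```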

### scripts/fetch_blog.py

```python
#!/usr/bin/env python3
"""
Farnam Street Blog Fetcher
Fetches articles from Farnam Street RSS and returns structured data.
"""
import sys
import json
import argparse
from datetime import datetime, timezone, timedelta

try:
    import feedparser
    import requests
except ImportError:
    print("Error: Required packages not installed.")
    print("Run: pip install feedparser requests")
    sys.exit(1)

# RSS URL
RSS_URL = "https://fs.blog/feed/"
REQUEST_TIMEOUT = 30


def fetch_rss():
    """Download and parse RSS from Farnam Street"""
    try:
        response = requests.get(RSS_URL, timeout=REQUEST_TIMEOUT)
        response.raise_for_status()
        return feedparser.parse(response.content)
    except requests.RequestException as e:
        print(json.dumps({"error": f"Failed to fetch RSS: {e}"}))
        sys.exit(1)


def get_date_range(feed):
    """Get available date range from RSS entries

    Returns:
        tuple: (min_date, max_date) in YYYY-MM-DD format, or (None, None)
    """
    dates = []
    for entry in feed.entries:
        # Parse from pubDate
        if hasattr(entry, 'published_parsed') and entry.published_parsed:
            dt = datetime(*entry.published_parsed[:6], tzinfo=timezone.utc)
            dates.append(dt.strftime("%Y-%m-%d"))

    if not dates:
        return None, None

    return min(dates), max(dates)


def extract_date_from_link(link):
    """Extract date from URL (FS Blog doesn't use dates in links)

    Args:
        link: URL string

    Returns:
        None (FS Blog doesn't encode dates in URLs)
    """
    # FS Blog doesn't encode dates in article URLs
    return None


def get_content_by_date(feed, target_date):
    """Extract content for a specific date

    Args:
        feed: Feedparser parsed feed
        target_date: Date string in YYYY-MM-DD format

    Returns:
        dict with keys: title, link, content, pubDate, or None if not found
    """
    # Validate the date format up front (raises ValueError on bad input)
    datetime.strptime(target_date, "%Y-%m-%d")

    for entry in feed.entries:
        # Check by pubDate
        if hasattr(entry, 'published_parsed') and entry.published_parsed:
            dt = datetime(*entry.published_parsed[:6], tzinfo=timezone.utc)
            entry_date = dt.strftime("%Y-%m-%d")

            if entry_date == target_date:
                return extract_entry_content(entry)

    return None


def extract_entry_content(entry):
    """Extract content from an RSS entry

    Returns:
        dict with keys: title, link, content, pubDate, is_members_only
    """
    # Check if members only
    title = entry.get("title", "")
    is_members_only = "[FS Members]" in title

    # Get full content
    if hasattr(entry, 'content') and entry.content:
        content = entry.content[0].get('value', '')
    elif hasattr(entry, 'summary'):
        content = entry.summary
    else:
        content = title

    # Check for members only in content
    if "Members Only content" in content or "Not a member? Join Us" in content:
        is_members_only = True

    return {
        "title": title,
        "link": entry.get("link", ""),
        "pubDate": entry.get("published"),
        "content": content,
        "is_members_only": is_members_only
    }


def search_by_keyword(feed, keyword):
    """Search articles by keyword in title

    Args:
        feed: Feedparser parsed feed
        keyword: Search keyword

    Returns:
        List of matching articles
    """
    results = []
    keyword_lower = keyword.lower()

    for entry in feed.entries:
        title = entry.get("title", "")
        if keyword_lower in title.lower():
            results.append({
                "title": title,
                "link": entry.get("link", ""),
                "pubDate": entry.get("published"),
                "summary": entry.get("summary", "")
            })

    return results


def main():
    parser = argparse.ArgumentParser(description='Fetch Farnam Street articles')
    parser.add_argument('--date-range', action='store_true', help='Show available date range')
    parser.add_argument('--date', type=str, help='Get content for specific date (YYYY-MM-DD)')
    parser.add_argument('--relative', type=str, choices=['yesterday', 'today', 'day-before'],
                       help='Relative date: yesterday, today, day-before')
    parser.add_argument('--search', type=str, help='Search articles by keyword')

    args = parser.parse_args()

    # Fetch RSS
    feed = fetch_rss()

    # Date range mode
    if args.date_range:
        min_date, max_date = get_date_range(feed)
        print(json.dumps({
            "min_date": min_date,
            "max_date": max_date,
            "total_entries": len(feed.entries)
        }, indent=2))
        return

    # Search mode
    if args.search:
        results = search_by_keyword(feed, args.search)
        print(json.dumps({
            "keyword": args.search,
            "count": len(results),
            "results": results[:10]  # Limit to 10 results
        }, indent=2, ensure_ascii=False))
        return

    # Calculate target date
    if args.relative:
        if args.relative == 'yesterday':
            target_date = (datetime.now(timezone.utc) - timedelta(days=1)).strftime("%Y-%m-%d")
        elif args.relative == 'day-before':
            target_date = (datetime.now(timezone.utc) - timedelta(days=2)).strftime("%Y-%m-%d")
        else:  # today
            target_date = datetime.now(timezone.utc).strftime("%Y-%m-%d")
    elif args.date:
        target_date = args.date
    else:
        # Default: yesterday
        target_date = (datetime.now(timezone.utc) - timedelta(days=1)).strftime("%Y-%m-%d")

    # Get content
    content = get_content_by_date(feed, target_date)

    if content:
        # Unescape HTML entities in the article body
        import html
        content["content"] = html.unescape(content["content"])

        print(json.dumps(content, indent=2, ensure_ascii=False))
    else:
        # Return empty result with available range
        min_date, max_date = get_date_range(feed)
        print(json.dumps({
            "error": "not_found",
            "message": f"No content found for {target_date}",
            "target_date": target_date,
            "available_range": {
                "min": min_date,
                "max": max_date
            }
        }, indent=2))


if __name__ == "__main__":
    main()

```