producthunt
Search and retrieve content from Product Hunt. Get posts, topics, users, and collections via the GraphQL API.
Packaged view
This page reorganizes the original catalog entry around fit, installability, and workflow context first. The original raw source lives below.
Install command
npx @skill-hub/cli install resciencelab-opc-skills-producthunt
Repository
Skill path: skills/producthunt
Open repository
Best for
Primary workflow: Write Technical Docs.
Technical facets: Full Stack, Backend, Tech Writer.
Target audience: everyone.
License: Unknown.
Original source
Catalog source: SkillHub Club.
Repository owner: ReScienceLab.
This is a mirrored public skill entry. Review the repository before installing it into production workflows.
What it helps with
- Install producthunt into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
- Review https://github.com/ReScienceLab/opc-skills before adding producthunt to shared team environments
- Use producthunt for development workflows
Works across
Favorites: 0.
Sub-skills: 0.
Aggregator: No.
Original source / Raw SKILL.md
---
name: producthunt
description: Search and retrieve content from Product Hunt. Get posts, topics, users, and collections via the GraphQL API.
triggers:
- "producthunt"
- "product hunt"
- "PH"
- "launch"
---
# ProductHunt Skill
Get posts, topics, users, and collections from Product Hunt via the official GraphQL API.
## Prerequisites
Set your access token in `~/.zshrc` (or your shell's equivalent profile):
```bash
export PRODUCTHUNT_ACCESS_TOKEN="your_developer_token"
```
Get your token from: https://www.producthunt.com/v2/oauth/applications
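If the scripts fail with authorization errors, a quick way to confirm the variable is actually exported in your current shell (env var name taken from the export step above):

```python
# Confirm the token env var is visible to Python (name from the export above).
import os

token = os.environ.get("PRODUCTHUNT_ACCESS_TOKEN")
if token:
    print(f"token set ({len(token)} chars)")
else:
    print("token missing - re-source ~/.zshrc or export it in this shell")
```

Remember that editing `~/.zshrc` only affects new shells; `source ~/.zshrc` updates the current one.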
**Quick Check**:
```bash
cd <skill_directory>
python3 scripts/get_posts.py --limit 3
```
## Commands
All commands run from the skill directory.
### Posts
```bash
python3 scripts/get_post.py chatgpt # Get post by slug
python3 scripts/get_post.py 12345 # Get post by ID
python3 scripts/get_posts.py --limit 20 # Today's featured posts
python3 scripts/get_posts.py --topic ai --limit 10 # Posts in topic
python3 scripts/get_posts.py --after 2026-01-01 # Posts after date
python3 scripts/get_post_comments.py POST_ID --limit 20
```
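The `--cursor` flag maps onto GraphQL cursor pagination: each page returns `pageInfo { hasNextPage endCursor }`, and the next request passes `endCursor` back. A minimal sketch of walking every page (`fetch_page` here is a stand-in for the scripts' `graphql` helper, not part of the skill):

```python
# Sketch of the cursor-pagination loop behind the --cursor flag.
# fetch_page is a stand-in for a real GraphQL call returning one page.
def paginate(fetch_page, cursor=None):
    """Yield nodes from every page, following pageInfo.endCursor."""
    while True:
        page = fetch_page(cursor)
        for edge in page["edges"]:
            yield edge["node"]
        info = page["pageInfo"]
        if not info["hasNextPage"]:
            break
        cursor = info["endCursor"]

# Fake two-page result to show the shape of the data:
pages = {
    None: {"edges": [{"node": 1}, {"node": 2}],
           "pageInfo": {"hasNextPage": True, "endCursor": "c1"}},
    "c1": {"edges": [{"node": 3}],
           "pageInfo": {"hasNextPage": False, "endCursor": None}},
}
print(list(paginate(lambda c: pages[c])))  # → [1, 2, 3]
```

In practice you would run a script once, copy the `endCursor` from its pagination output, and pass it back via `--cursor`.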
### Topics
```bash
python3 scripts/get_topic.py artificial-intelligence # Get topic by slug
python3 scripts/get_topics.py --query "AI" --limit 20 # Search topics
python3 scripts/get_topics.py --limit 50 # Popular topics
```
### Users
```bash
python3 scripts/get_user.py rrhoover # Get user by username
python3 scripts/get_user_posts.py rrhoover --limit 20 # User's posts
```
### Collections
```bash
python3 scripts/get_collection.py SLUG_OR_ID # Get collection
python3 scripts/get_collections.py --featured --limit 20
```
## API Info
- **Endpoint**: https://api.producthunt.com/v2/api/graphql
- **Type**: GraphQL
- **Rate Limits**: 6250 complexity points / 15 min
- **Docs**: https://api.producthunt.com/v2/docs
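For reference, a minimal stdlib-only sketch of the authenticated call the bundled `producthunt_api.graphql` helper presumably wraps (the helper's actual internals are not included in this entry; endpoint and header scheme are standard GraphQL-over-HTTP with a bearer token):

```python
# Sketch of an authenticated GraphQL POST to the Product Hunt v2 endpoint.
# Builds the request without sending it; endpoint from the API Info above.
import json
import os
import urllib.request

ENDPOINT = "https://api.producthunt.com/v2/api/graphql"

def build_request(query: str, variables: dict) -> urllib.request.Request:
    """Build an authenticated GraphQL POST request (not yet sent)."""
    payload = json.dumps({"query": query, "variables": variables}).encode()
    return urllib.request.Request(
        ENDPOINT,
        data=payload,
        headers={
            "Authorization": f"Bearer {os.environ.get('PRODUCTHUNT_ACCESS_TOKEN', '')}",
            "Content-Type": "application/json",
        },
    )

req = build_request("query { viewer { user { username } } }", {})
print(req.get_full_url())  # → https://api.producthunt.com/v2/api/graphql
```

To actually send it, pass the request to `urllib.request.urlopen(req)` and parse the JSON body; keep the complexity-point rate limit above in mind when paginating in a loop.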
---
## Referenced Files
> The following files are referenced in this skill and included for context.
### scripts/get_posts.py
```python
#!/usr/bin/env python3
"""
Get posts with filters
Usage: python3 scripts/get_posts.py --featured --limit 20
python3 scripts/get_posts.py --topic ai --limit 10
"""
import argparse
from datetime import datetime, timezone
from producthunt_api import graphql, print_posts_list, print_pagination
QUERY = """
query GetPosts($first: Int, $after: String, $featured: Boolean, $topic: String, $postedAfter: DateTime, $postedBefore: DateTime) {
posts(first: $first, after: $after, featured: $featured, topic: $topic, postedAfter: $postedAfter, postedBefore: $postedBefore) {
totalCount
pageInfo { hasNextPage endCursor }
edges {
node {
id
name
tagline
slug
votesCount
commentsCount
url
website
featuredAt
}
}
}
}
"""
def main():
parser = argparse.ArgumentParser(description="Get ProductHunt posts")
parser.add_argument("--limit", "-l", type=int, default=20, help="Max posts")
parser.add_argument("--featured", "-f", action="store_true", help="Featured posts only")
parser.add_argument("--topic", "-t", help="Filter by topic slug")
parser.add_argument("--after", help="Posts after date (YYYY-MM-DD)")
parser.add_argument("--before", help="Posts before date (YYYY-MM-DD)")
parser.add_argument("--cursor", "-c", help="Pagination cursor")
args = parser.parse_args()
variables = {
"first": min(args.limit, 50),
"after": args.cursor,
"featured": True if args.featured else None,
"topic": args.topic,
}
if args.after:
variables["postedAfter"] = f"{args.after}T00:00:00Z"
if args.before:
variables["postedBefore"] = f"{args.before}T23:59:59Z"
data = graphql(QUERY, variables)
posts_data = data.get("posts", {})
edges = posts_data.get("edges", [])
posts = [e["node"] for e in edges]
filters = []
if args.featured:
filters.append("featured")
if args.topic:
filters.append(f"topic:{args.topic}")
if args.after:
filters.append(f"after:{args.after}")
label = f"posts({','.join(filters)})" if filters else "posts"
print_posts_list(posts, label)
print_pagination(posts_data.get("pageInfo"), posts_data.get("totalCount"))
if __name__ == "__main__":
main()
```
### scripts/get_post.py
```python
#!/usr/bin/env python3
"""
Get post by ID or slug
Usage: python3 scripts/get_post.py POST_ID_OR_SLUG
"""
import argparse
import json
from producthunt_api import graphql, clean_post, print_post
QUERY = """
query GetPost($id: ID, $slug: String) {
post(id: $id, slug: $slug) {
id
name
tagline
slug
description
votesCount
commentsCount
url
website
featuredAt
createdAt
makers { name username }
topics(first: 5) { edges { node { name slug } } }
}
}
"""
def main():
parser = argparse.ArgumentParser(description="Get ProductHunt post")
parser.add_argument("identifier", help="Post ID or slug")
parser.add_argument("--json", "-j", action="store_true", help="Output as JSON")
args = parser.parse_args()
variables = {}
if args.identifier.isdigit():
variables["id"] = args.identifier
else:
variables["slug"] = args.identifier
data = graphql(QUERY, variables)
post = data.get("post")
if not post:
print(f"Post not found: {args.identifier}")
return
if args.json:
print(json.dumps(post, indent=2))
return
cleaned = clean_post(post)
print_post(cleaned)
if post.get("description"):
        print("---")
desc = post["description"][:500]
print(f"description: {desc}")
topics = post.get("topics", {}).get("edges", [])
if topics:
topic_names = [e["node"]["slug"] for e in topics]
print(f"topics: {', '.join(topic_names)}")
if __name__ == "__main__":
main()
```
### scripts/get_post_comments.py
```python
#!/usr/bin/env python3
"""
Get comments on a post
Usage: python3 scripts/get_post_comments.py POST_ID --limit 20
"""
import argparse
from producthunt_api import graphql, print_comments_list, print_pagination
QUERY = """
query GetPostComments($id: ID, $slug: String, $first: Int, $after: String) {
post(id: $id, slug: $slug) {
id
name
commentsCount
comments(first: $first, after: $after) {
totalCount
pageInfo { hasNextPage endCursor }
edges {
node {
id
body
votesCount
createdAt
user { name username }
}
}
}
}
}
"""
def main():
parser = argparse.ArgumentParser(description="Get post comments")
parser.add_argument("identifier", help="Post ID or slug")
parser.add_argument("--limit", "-l", type=int, default=20, help="Max comments")
parser.add_argument("--cursor", "-c", help="Pagination cursor")
args = parser.parse_args()
variables = {"first": min(args.limit, 50), "after": args.cursor}
if args.identifier.isdigit():
variables["id"] = args.identifier
else:
variables["slug"] = args.identifier
data = graphql(QUERY, variables)
post = data.get("post")
if not post:
print(f"Post not found: {args.identifier}")
return
print(f"post: {post.get('name')} (id:{post.get('id')})")
print(f"total_comments: {post.get('commentsCount')}")
comments_data = post.get("comments", {})
edges = comments_data.get("edges", [])
comments = [e["node"] for e in edges]
print_comments_list(comments)
print_pagination(comments_data.get("pageInfo"))
if __name__ == "__main__":
main()
```
### scripts/get_topic.py
```python
#!/usr/bin/env python3
"""
Get topic by ID or slug
Usage: python3 scripts/get_topic.py artificial-intelligence
"""
import argparse
import json
from producthunt_api import graphql, clean_topic, print_topic
QUERY = """
query GetTopic($id: ID, $slug: String) {
topic(id: $id, slug: $slug) {
id
name
slug
description
postsCount
followersCount
url
}
}
"""
def main():
parser = argparse.ArgumentParser(description="Get ProductHunt topic")
parser.add_argument("identifier", help="Topic ID or slug")
parser.add_argument("--json", "-j", action="store_true", help="Output as JSON")
args = parser.parse_args()
variables = {}
if args.identifier.isdigit():
variables["id"] = args.identifier
else:
variables["slug"] = args.identifier
data = graphql(QUERY, variables)
topic = data.get("topic")
if not topic:
print(f"Topic not found: {args.identifier}")
return
if args.json:
print(json.dumps(topic, indent=2))
return
print_topic(clean_topic(topic))
if __name__ == "__main__":
main()
```
### scripts/get_topics.py
```python
#!/usr/bin/env python3
"""
Get topics with optional search
Usage: python3 scripts/get_topics.py --query "AI" --limit 20
"""
import argparse
from producthunt_api import graphql, print_topics_list, print_pagination
QUERY = """
query GetTopics($first: Int, $after: String, $query: String) {
topics(first: $first, after: $after, query: $query, order: FOLLOWERS_COUNT) {
totalCount
pageInfo { hasNextPage endCursor }
edges {
node {
id
name
slug
description
postsCount
followersCount
}
}
}
}
"""
def main():
parser = argparse.ArgumentParser(description="Get ProductHunt topics")
parser.add_argument("--query", "-q", help="Search query")
parser.add_argument("--limit", "-l", type=int, default=20, help="Max topics")
parser.add_argument("--cursor", "-c", help="Pagination cursor")
args = parser.parse_args()
variables = {
"first": min(args.limit, 50),
"after": args.cursor,
"query": args.query,
}
data = graphql(QUERY, variables)
topics_data = data.get("topics", {})
edges = topics_data.get("edges", [])
topics = [e["node"] for e in edges]
label = f"topics(query:{args.query})" if args.query else "topics"
print_topics_list(topics, label)
print_pagination(topics_data.get("pageInfo"), topics_data.get("totalCount"))
if __name__ == "__main__":
main()
```
### scripts/get_user.py
```python
#!/usr/bin/env python3
"""
Get user by username or ID
Usage: python3 scripts/get_user.py rrhoover
"""
import argparse
import json
from producthunt_api import graphql, clean_user, print_user
QUERY = """
query GetUser($id: ID, $username: String) {
user(id: $id, username: $username) {
id
name
username
headline
url
twitterUsername
websiteUrl
isMaker
createdAt
profileImage
}
}
"""
def main():
parser = argparse.ArgumentParser(description="Get ProductHunt user")
parser.add_argument("identifier", help="Username or user ID")
parser.add_argument("--json", "-j", action="store_true", help="Output as JSON")
args = parser.parse_args()
variables = {}
if args.identifier.isdigit():
variables["id"] = args.identifier
else:
variables["username"] = args.identifier
data = graphql(QUERY, variables)
user = data.get("user")
if not user:
print(f"User not found: {args.identifier}")
return
if args.json:
print(json.dumps(user, indent=2))
return
print_user(clean_user(user))
if user.get("createdAt"):
print(f"joined: {user['createdAt']}")
if __name__ == "__main__":
main()
```
### scripts/get_user_posts.py
```python
#!/usr/bin/env python3
"""
Get user's posts (submitted or made)
Usage: python3 scripts/get_user_posts.py rrhoover --limit 20
"""
import argparse
from producthunt_api import graphql, print_posts_list, print_pagination
QUERY = """
query GetUserPosts($id: ID, $username: String, $first: Int, $after: String) {
user(id: $id, username: $username) {
id
name
username
submittedPosts(first: $first, after: $after) {
totalCount
pageInfo { hasNextPage endCursor }
edges {
node {
id
name
tagline
slug
votesCount
commentsCount
url
featuredAt
}
}
}
madePosts(first: $first, after: $after) {
totalCount
pageInfo { hasNextPage endCursor }
edges {
node {
id
name
tagline
slug
votesCount
commentsCount
url
featuredAt
}
}
}
}
}
"""
def main():
parser = argparse.ArgumentParser(description="Get user's posts")
parser.add_argument("identifier", help="Username or user ID")
parser.add_argument("--limit", "-l", type=int, default=20, help="Max posts")
parser.add_argument("--made", "-m", action="store_true", help="Show made posts instead of submitted")
parser.add_argument("--cursor", "-c", help="Pagination cursor")
args = parser.parse_args()
variables = {"first": min(args.limit, 50), "after": args.cursor}
if args.identifier.isdigit():
variables["id"] = args.identifier
else:
variables["username"] = args.identifier
data = graphql(QUERY, variables)
user = data.get("user")
if not user:
print(f"User not found: {args.identifier}")
return
print(f"user: @{user.get('username')} ({user.get('name')})")
if args.made:
posts_data = user.get("madePosts", {})
label = "made_posts"
else:
posts_data = user.get("submittedPosts", {})
label = "submitted_posts"
edges = posts_data.get("edges", [])
posts = [e["node"] for e in edges]
print_posts_list(posts, label)
print_pagination(posts_data.get("pageInfo"), posts_data.get("totalCount"))
if __name__ == "__main__":
main()
```
### scripts/get_collection.py
```python
#!/usr/bin/env python3
"""
Get collection by ID or slug
Usage: python3 scripts/get_collection.py COLLECTION_SLUG
"""
import argparse
import json
from producthunt_api import graphql, clean_collection, format_count
QUERY = """
query GetCollection($id: ID, $slug: String) {
collection(id: $id, slug: $slug) {
id
name
tagline
description
url
followersCount
featuredAt
createdAt
user { name username }
posts(first: 10) {
totalCount
edges {
node {
id
name
tagline
votesCount
}
}
}
}
}
"""
def main():
parser = argparse.ArgumentParser(description="Get ProductHunt collection")
parser.add_argument("identifier", help="Collection ID or slug")
parser.add_argument("--json", "-j", action="store_true", help="Output as JSON")
args = parser.parse_args()
variables = {}
if args.identifier.isdigit():
variables["id"] = args.identifier
else:
variables["slug"] = args.identifier
data = graphql(QUERY, variables)
collection = data.get("collection")
if not collection:
print(f"Collection not found: {args.identifier}")
return
if args.json:
print(json.dumps(collection, indent=2))
return
print(f"id: {collection.get('id')}")
print(f"name: {collection.get('name')}")
print(f"tagline: {collection.get('tagline')}")
print(f"followers: {format_count(collection.get('followersCount'))}")
print(f"url: {collection.get('url')}")
user = collection.get("user", {})
if user:
print(f"creator: @{user.get('username')} ({user.get('name')})")
if collection.get("description"):
print(f"description: {collection['description'][:200]}")
posts_data = collection.get("posts", {})
posts = [e["node"] for e in posts_data.get("edges", [])]
if posts:
        print("---")
print(f"posts[{posts_data.get('totalCount', len(posts))}]{{name,votes}}:")
for p in posts:
print(f" {p['name']},{format_count(p['votesCount'])}")
if __name__ == "__main__":
main()
```
### scripts/get_collections.py
```python
#!/usr/bin/env python3
"""
Get collections with filters
Usage: python3 scripts/get_collections.py --featured --limit 20
"""
import argparse
from producthunt_api import graphql, clean_collection, format_count, print_pagination
QUERY = """
query GetCollections($first: Int, $after: String, $featured: Boolean, $userId: ID) {
collections(first: $first, after: $after, featured: $featured, userId: $userId, order: FOLLOWERS_COUNT) {
totalCount
pageInfo { hasNextPage endCursor }
edges {
node {
id
name
tagline
url
followersCount
featuredAt
}
}
}
}
"""
def main():
parser = argparse.ArgumentParser(description="Get ProductHunt collections")
parser.add_argument("--limit", "-l", type=int, default=20, help="Max collections")
parser.add_argument("--featured", "-f", action="store_true", help="Featured collections only")
parser.add_argument("--user", "-u", help="Filter by user ID")
parser.add_argument("--cursor", "-c", help="Pagination cursor")
args = parser.parse_args()
variables = {
"first": min(args.limit, 50),
"after": args.cursor,
"featured": True if args.featured else None,
"userId": args.user,
}
data = graphql(QUERY, variables)
collections_data = data.get("collections", {})
edges = collections_data.get("edges", [])
collections = [e["node"] for e in edges]
filters = []
if args.featured:
filters.append("featured")
if args.user:
filters.append(f"user:{args.user}")
label = f"collections({','.join(filters)})" if filters else "collections"
print(f"{label}[{len(collections)}]{{name,tagline,followers}}:")
for c in collections:
tagline = (c.get('tagline') or '')[:40]
print(f" {c['name']},{tagline},{format_count(c['followersCount'])}")
print_pagination(collections_data.get("pageInfo"), collections_data.get("totalCount"))
if __name__ == "__main__":
main()
```