Marketplace
Find the right skill for the job.
Browse the full catalog through outcome-first channels, technical facets, rating filters, and server-side pagination built for a large public marketplace.
tikhub-api-helper
Search and query TikHub APIs for TikTok, Douyin, Xiaohongshu, Lemon8, Instagram, YouTube, Twitter, Reddit, and more. Use when the user needs to fetch data from social media platforms. Supports both English and Chinese queries.
resolving-sync-conflicts
This skill automatically detects and resolves synchronization conflicts between local task data and remote Todoist data, offering multiple resolution strategies to preserve user intent and prevent data loss during bi-directional sync.
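One of the resolution strategies such a skill might offer is last-write-wins keyed on modification timestamps. A minimal sketch (field names are hypothetical, not the skill's actual data model):

```python
from datetime import datetime

def resolve_conflict(local: dict, remote: dict) -> dict:
    """Last-write-wins: keep whichever copy was modified most recently.

    Both dicts are assumed to carry an ISO-8601 'modified_at' field;
    ties fall back to the remote copy as the source of truth.
    """
    local_ts = datetime.fromisoformat(local["modified_at"])
    remote_ts = datetime.fromisoformat(remote["modified_at"])
    return local if local_ts > remote_ts else remote

local = {"content": "Buy milk", "modified_at": "2024-05-01T10:00:00+00:00"}
remote = {"content": "Buy oat milk", "modified_at": "2024-05-01T12:00:00+00:00"}
winner = resolve_conflict(local, remote)
```

A real bi-directional sync would layer merge and user-prompt strategies on top of this; last-write-wins alone can silently discard a concurrent local edit.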
Defense-in-Depth Validation
Validate data at every layer it passes through so that invalid states cannot propagate.
data-validation
This skill provides comprehensive data validation using Pydantic v2, ensuring data quality monitoring and schema alignment for PlanetScale PostgreSQL, primarily for API validation, database schema consistency, and data quality assurance.
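As a rough illustration of the Pydantic v2 validation style this entry builds on (the model and field names are hypothetical, and `pydantic` must be installed):

```python
from pydantic import BaseModel, Field, ValidationError

class Reading(BaseModel):
    # Schema mirrors a hypothetical PostgreSQL table definition.
    sensor_id: int = Field(gt=0)
    value: float
    unit: str

ok = Reading.model_validate({"sensor_id": 7, "value": 21.5, "unit": "C"})

try:
    Reading.model_validate({"sensor_id": -1, "value": "n/a", "unit": "C"})
except ValidationError as err:
    # Collect which fields failed, e.g. for a data-quality report.
    bad_fields = {e["loc"][0] for e in err.errors()}
```

Keeping the Pydantic model as the single source of truth is what makes the database-schema consistency checks mentioned above tractable.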
license-finder
Imported from https://github.com/opendatahub-io/ai-helpers.
flow-nexus-neural
Provides tools to train and deploy neural networks in distributed E2B sandboxes. Supports multiple architectures like feedforward, LSTM, GAN, and transformer. Includes a template marketplace and distributed cluster training with federated learning options. Requires external MCP server setup and authentication.
k8s-diagnostics
Kubernetes diagnostics for metrics, health checks, resource comparisons, and cluster analysis. Use when analyzing cluster health, comparing environments, or gathering diagnostic data.
analyzing-time-series
Comprehensive diagnostic analysis of time series data. Use when users provide CSV time series data and want to understand its characteristics before forecasting - stationarity, seasonality, trend, forecastability, and transform recommendations.
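Two of the simplest diagnostics in this family are first differencing (the classic transform for removing a linear trend) and comparing half-series means to flag an obvious drift. A pure-Python sketch, not the skill's actual method:

```python
from statistics import fmean

def difference(series):
    """First difference: a standard transform to remove a linear trend."""
    return [b - a for a, b in zip(series, series[1:])]

def half_means(series):
    """Compare first- and second-half means; a large gap hints at a trend."""
    half = len(series) // 2
    return fmean(series[:half]), fmean(series[half:])

trend = [2.0 * t for t in range(10)]   # strictly increasing series
diffed = difference(trend)             # constant once the trend is removed
lo, hi = half_means(trend)             # second-half mean drifts upward
```

Production diagnostics would use formal stationarity tests (e.g. augmented Dickey-Fuller) and seasonal decomposition, but the differencing idea is the same.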
cloudflare-mcp-server
This skill enables developers to build MCP (Model Context Protocol) servers on Cloudflare Workers, providing tools, resources, and reusable prompts to create AI-capable applications with stateful backends and multiple transport options.
data-visualization
Creates effective data visualizations using various libraries and tools, with a focus on clarity and insight communication. Trigger keywords: chart, graph, plot, visualization, dashboard, matplotlib, d3, plotly.
dividend-tracking
Sync dividend data from a Fidelity CSV to the Dividends sheet. Reads dividend.csv from notebooks/updates/, calculates actual dividends received (shares × amount per share), writes to the input area (rows 2-46), then clicks the Add Dividend button to process. Triggers on "sync dividends", "update dividends", "dividend tracker", "layer 2 income", or "monthly dividend analysis".
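The core calculation described above is per-row arithmetic over the CSV. A sketch with a made-up excerpt (column names and tickers are illustrative, not the actual Fidelity export layout):

```python
import csv, io

# Hypothetical excerpt in the style of the export the skill reads.
dividend_csv = """symbol,shares,amount_per_share
VTI,42,0.9719
SCHD,120,0.7360
"""

rows = list(csv.DictReader(io.StringIO(dividend_csv)))
# Actual dividends received = shares held × dividend per share.
received = {
    r["symbol"]: round(float(r["shares"]) * float(r["amount_per_share"]), 2)
    for r in rows
}
```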
open-rag-eval
Imported from https://www.skillhub.club/skills/vectara-open-rag-eval.
modernize-scientific-stack
Guide for modernizing legacy Python 2 scientific computing code to Python 3 with modern libraries. This skill should be used when migrating scientific scripts involving data processing, numerical computation, or analysis from Python 2 to Python 3, or when updating deprecated scientific computing patterns to modern equivalents (pandas, numpy, pathlib).
obspy-data-api
An overview of the core data API of ObsPy, a Python framework for processing seismological data. It is useful for parsing common seismological file formats, or manipulating custom data into standard objects for downstream use cases such as ObsPy's signal processing routines or SeisBench's modeling API.
openspec
Imported from https://github.com/littleCareless/dish-ai-commit.
detecting-patterns
This skill analyzes productivity, task completion, meetings, habits, goals, and energy patterns from your daily logs and reviews to identify optimization opportunities and generate actionable insights for personal improvement.
Legacy Code Reviewer
Expert system for identifying deprecated patterns, suggesting refactoring to modern standards (Python 3.12+, ES2024+), checking test coverage, and leveraging AI-powered tools. Proactively applied when users request refactoring, updates, or analysis of legacy codebases.
password-recovery
Digital forensic skill for recovering passwords and sensitive data from disk images, deleted files, and binary data. This skill should be used when tasks involve extracting passwords from disk images, recovering deleted file contents, analyzing binary files for fragments, or forensic data recovery scenarios. Applies to tasks mentioning disk images, deleted files, password fragments, or data recovery.
raman-fitting
This skill provides guidance for Raman spectrum peak fitting tasks. It should be used when analyzing spectroscopic data, fitting Lorentzian or Gaussian peaks to Raman spectra, or working with graphene/carbon material characterization. The skill emphasizes critical data parsing verification, physical constraints from domain knowledge, and systematic debugging of curve fitting problems.
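A minimal example of the kind of constrained Lorentzian fit this entry describes, using `scipy.optimize.curve_fit` on synthetic data (the peak position and bounds are illustrative, loosely modeled on a graphene G-band, not taken from the skill itself):

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, amp, center, width):
    """Standard Lorentzian line shape used for Raman peaks."""
    return amp * width**2 / ((x - center) ** 2 + width**2)

# Synthetic, noise-free peak near 1580 cm^-1.
x = np.linspace(1500.0, 1700.0, 201)
y = lorentzian(x, amp=100.0, center=1580.0, width=15.0)

# Physically motivated constraints: positive amplitude and width,
# center restricted to the scanned window.
popt, _ = curve_fit(
    lorentzian, x, y,
    p0=[80.0, 1570.0, 10.0],
    bounds=([0.0, 1500.0, 0.0], [np.inf, 1700.0, 100.0]),
)
```

The bounds illustrate the "physical constraints from domain knowledge" the description emphasizes: an unconstrained fit on noisy data can happily return negative widths or peaks outside the measured range.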
reshard-c4-data
Guidance for data resharding tasks that involve reorganizing files across directory structures with constraints on file sizes and directory contents. This skill applies when redistributing datasets, splitting large files, or reorganizing data into shards while maintaining constraints like maximum files per directory or maximum file sizes. Use when tasks involve resharding, data partitioning, or directory-constrained file reorganization.
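The directory-constraint part of such a task can be planned before any file is moved. A sketch of a shard plan honoring a maximum-files-per-directory limit (the naming scheme is hypothetical):

```python
def plan_shards(filenames, max_per_dir=3):
    """Assign files to numbered shard directories, never exceeding
    max_per_dir files in any one directory."""
    plan = {}
    for i, name in enumerate(sorted(filenames)):
        shard = f"shard-{i // max_per_dir:04d}"
        plan.setdefault(shard, []).append(name)
    return plan

files = [f"part-{i}.json.gz" for i in range(7)]
plan = plan_shards(files, max_per_dir=3)
```

Separating planning from execution like this makes the reorganization auditable and restartable; the file-size constraint would be handled analogously by accumulating byte counts per shard.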
programming
Python and R programming for data analysis, automation, and reproducible analytics
full_analysis
Imported from https://github.com/benchflow-ai/SkillsBench.
data-profiler
Profile datasets to understand schema, quality, and characteristics. Use when analyzing data files (CSV, JSON, Parquet), discovering dataset properties, assessing data quality, or when user mentions data profiling, schema detection, data analysis, or quality metrics. Provides basic and intermediate profiling including distributions, uniqueness, and pattern detection.
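A toy version of the basic metrics named above (null rate, uniqueness, top value) over an inline CSV, using only the standard library; the sample data is invented:

```python
import csv, io
from collections import Counter

sample = """id,country,age
1,US,34
2,DE,29
3,US,
4,FR,41
"""

rows = list(csv.DictReader(io.StringIO(sample)))
profile = {}
for col in rows[0]:
    values = [r[col] for r in rows]
    non_null = [v for v in values if v != ""]
    profile[col] = {
        "null_rate": 1 - len(non_null) / len(values),   # missing-value rate
        "unique": len(set(non_null)),                   # distinct non-null values
        "top": Counter(non_null).most_common(1)[0][0],  # most frequent value
    }
```

Real profilers add type inference, distribution statistics, and regex-based pattern detection on top of this skeleton.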
memory-optimization
Optimize Python code for reduced memory usage and improved memory efficiency. Use when asked to reduce memory footprint, fix memory leaks, optimize data structures for memory, handle large datasets efficiently, or diagnose memory issues. Covers object sizing, generator patterns, efficient data structures, and memory profiling strategies.
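The generator pattern mentioned above is the simplest of these techniques: a list comprehension materializes every element up front, while the equivalent generator holds only its current iteration state. A small demonstration:

```python
import sys

n = 100_000
as_list = [i * i for i in range(n)]   # O(n) memory: all values stored
as_gen = (i * i for i in range(n))    # O(1) memory: values produced on demand

list_bytes = sys.getsizeof(as_list)  # container overhead alone is large
gen_bytes = sys.getsizeof(as_gen)    # a few dozen bytes, regardless of n

# Both yield the same stream of values when consumed.
total = sum(i * i for i in range(10))
```

Note that `sys.getsizeof` measures only the container, not the referenced integers; tools like `tracemalloc` give the fuller picture the skill's profiling strategies would rely on.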