Marketplace

Find the right skill for the job.

Browse the full catalog through outcome-first channels, technical facets, rating filters, and server-side pagination built for a large public marketplace.

Start with the job to be done
3655 results
Analyze Data & AI · All facets
Page 55 of 153
SkillHub Club · Analyze Data & AI

outlines

Guarantee valid JSON/XML/code structure during generation, use Pydantic models for type-safe outputs, support local models (Transformers, vLLM), and maximize inference speed with Outlines, dottxt.ai's structured generation library.

C 4.0
Full Stack · Data / AI
8.9K
rank 87
hot 99
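The core mechanic behind structured generation can be sketched without the library: at each decoding step, mask out any token that would violate the target structure, so the output is valid by construction. This toy character-level version (the FSM, vocabulary, and `mock_model_scores` stand-in are all illustrative, not Outlines' actual API) constrains generation to a `DD-DD` pattern:

```python
import random
import string

# Character-level "FSM": each position lists the characters the structure allows.
ALLOWED = [set(string.digits), set(string.digits), {"-"},
           set(string.digits), set(string.digits)]

def mock_model_scores(vocab):
    # Stand-in for a language model: an arbitrary preference over the vocabulary.
    return {ch: random.random() for ch in vocab}

def constrained_generate(vocab):
    out = []
    for allowed in ALLOWED:
        scores = mock_model_scores(vocab)
        # Mask: only characters permitted by the current state survive.
        legal = {ch: s for ch, s in scores.items() if ch in allowed}
        out.append(max(legal, key=legal.get))
    return "".join(out)

vocab = string.digits + string.ascii_lowercase + "-"
sample = constrained_generate(vocab)
print(sample)  # always matches DD-DD, never malformed
```

Outlines generalizes this idea from a fixed character pattern to regexes, JSON schemas, and grammars over real tokenizer vocabularies.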
SkillHub Club · Analyze Data & AI

agent-browser

Browser automation CLI for AI agents. Use when the user needs to interact with websites, including navigating pages, filling forms, clicking buttons, taking screenshots, extracting data, testing web apps, or automating any browser task. Triggers include requests to "open a website", "fill out a form", "click a button", "take a screenshot", "scrape data from a page", "test this web app", "login to a site", "automate browser actions", or any task requiring programmatic web interaction.

C 4.0
Full Stack · Data / AI · Testing
8.9K
rank 87
hot 99
SkillHub Club · Analyze Data & AI

cmux-browser

End-user browser automation with cmux. Use when you need to open sites, interact with pages, wait for state changes, and extract data from cmux browser surfaces.

C 4.0
Full Stack · Data / AI
8.4K
rank 87
hot 99
SkillHub Club · Analyze Data & AI

dummy-dataset

Generate realistic dummy datasets for testing with customizable columns, constraints, and output formats (CSV, JSON, SQL, Python script). Use when creating test data, building mock datasets, or generating sample data for development and demos.

C 4.0
Full Stack · Data / AI · Testing
7.7K
rank 87
hot 99
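The pattern such a generator follows is simple: a per-column generator plus constraints, serialized to the requested format. A minimal stdlib sketch (column names and value ranges are illustrative, not the skill's actual interface):

```python
import csv
import io
import random

random.seed(7)  # reproducible dummy data

# One generator per column; the age lambda encodes a constraint (18-65).
COLUMNS = {
    "id": lambda i: i,
    "name": lambda i: random.choice(["Ada", "Grace", "Alan", "Edsger"]),
    "age": lambda i: random.randint(18, 65),
}

def make_rows(n):
    return [{col: gen(i) for col, gen in COLUMNS.items()} for i in range(n)]

def to_csv(rows):
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(COLUMNS))
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

rows = make_rows(5)
print(to_csv(rows))
```

Swapping `to_csv` for a JSON or SQL-INSERT emitter covers the other output formats the card mentions.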
SkillHub Club · Analyze Data & AI

user-segmentation

Segment users from feedback data based on behavior, JTBD, and needs. Identifies at least 3 distinct user segments. Use when segmenting a user base, analyzing diverse user feedback, or building a segmentation model.

C 4.0
Full Stack · Data / AI
7.7K
rank 87
hot 99
SkillHub Club · Analyze Data & AI

sentiment-analysis

Analyze user feedback data to identify segments with sentiment scores, JTBD, and product satisfaction insights. Use when analyzing user feedback at scale, running sentiment analysis on reviews or surveys, or identifying satisfaction patterns.

C 4.0
Full Stack · Data / AI
7.7K
rank 87
hot 99
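As a rough illustration of what a sentiment score over feedback text means, here is a toy lexicon-based scorer (the word lists and scoring rule are invented for the sketch; a real pipeline would use a trained model rather than keyword counts):

```python
# Hypothetical sentiment lexicons, for illustration only.
POSITIVE = {"love", "great", "fast", "reliable"}
NEGATIVE = {"slow", "crash", "confusing", "broken"}

def sentiment_score(text):
    """Score in [-1, 1]: +1 all-positive, -1 all-negative, 0 neutral."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

feedback = [
    "Love the new dashboard, great and fast!",
    "App is slow and the export is broken.",
]
for f in feedback:
    print(round(sentiment_score(f), 2), f)
```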
SkillHub Club · Analyze Data & AI

ab-test-analysis

Analyze A/B test results with statistical significance, sample size validation, confidence intervals, and ship/extend/stop recommendations. Use when evaluating experiment results, checking if a test reached significance, interpreting split test data, or deciding whether to ship a variant.

C 4.0
Full Stack · Data / AI · Testing
7.7K
rank 87
hot 99
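The significance check at the heart of such an analysis is the standard two-proportion z-test. A stdlib-only sketch (the function name and example numbers are illustrative; a full analysis would also validate sample size and report confidence intervals):

```python
import math

def ab_ztest(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: conversions and sample sizes for variants A and B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value via the normal CDF (expressed with erf).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Example: 5.0% vs 6.5% conversion on 4000 users each.
z, p = ab_ztest(conv_a=200, n_a=4000, conv_b=260, n_b=4000)
print(f"z={z:.2f}, p={p:.4f}")  # ship-worthy at alpha=0.05 if p < 0.05
```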
SkillHub Club · Analyze Data & AI

init

Create a new AgentHub collaboration session with task, agent count, and evaluation criteria.

C 4.0
Full Stack · Data / AI
5.9K
rank 85
hot 99
SkillHub Club · Analyze Data & AI

product-analytics

Use when defining product KPIs, building metric dashboards, running cohort or retention analysis, or interpreting feature adoption trends across product stages.

C 4.0
Full Stack · Data / AI
5.8K
rank 85
hot 99
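Cohort retention, one of the analyses listed above, reduces to simple set arithmetic over an event log. A sketch with invented field names and data:

```python
from collections import defaultdict

# (user, day, action) tuples; data and schema are illustrative.
events = [
    ("u1", 0, "signup"), ("u2", 0, "signup"), ("u3", 1, "signup"),
    ("u1", 7, "active"), ("u3", 8, "active"),
]

# Cohort users by signup day.
cohorts = defaultdict(set)
for user, day, action in events:
    if action == "signup":
        cohorts[day].add(user)

def retention(cohort_day, offset):
    """Fraction of a signup cohort active exactly `offset` days later."""
    cohort = cohorts[cohort_day]
    returned = {u for u, d, a in events
                if a == "active" and d == cohort_day + offset and u in cohort}
    return len(returned) / len(cohort)

print(retention(0, 7))  # day-7 retention for the day-0 cohort
```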
SkillHub Club · Analyze Data & AI

sentence-transformers

Framework for state-of-the-art sentence, text, and image embeddings. Provides 5000+ pre-trained models for semantic similarity, clustering, and retrieval. Supports multilingual, domain-specific, and multimodal models. Use for generating embeddings for RAG, semantic search, or similarity tasks. Best for production embedding generation.

C 4.0
Full Stack · Data / AI
Sentence Transformers · Embeddings · Semantic Similarity · RAG
5.2K
rank 85
hot 99
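The similarity and retrieval tasks mentioned above come down to cosine similarity between embedding vectors. This sketch uses hand-made toy vectors so it runs standalone; in practice the vectors would come from a Sentence Transformers model's `encode` call:

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Toy 3-d "embeddings" (real models produce hundreds of dimensions).
toy_embeddings = {
    "a cat sits on the mat": [0.9, 0.1, 0.0],
    "a kitten rests on a rug": [0.8, 0.2, 0.1],
    "quarterly revenue grew 8%": [0.0, 0.1, 0.95],
}

query = toy_embeddings["a cat sits on the mat"]
ranked = sorted(toy_embeddings,
                key=lambda s: cosine(query, toy_embeddings[s]),
                reverse=True)
print(ranked[1])  # nearest non-identical sentence
```

Semantic search and RAG retrieval are this ranking step applied over a large indexed corpus.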
SkillHub Club · Analyze Data & AI

implementing-llms-litgpt

Implements and trains LLMs using Lightning AI's LitGPT with 20+ pretrained architectures (Llama, Gemma, Phi, Qwen, Mistral). Use when you need clean model implementations, an educational understanding of architectures, or production fine-tuning with LoRA/QLoRA. Single-file implementations with no abstraction layers.

C 4.0
Full Stack · Data / AI
Model Architecture · LitGPT · Lightning AI · LLM Implementation
5.2K
rank 85
hot 99
SkillHub Club · Analyze Data & AI

tensorboard

Visualize training metrics, debug models with histograms, compare experiments, inspect model graphs, and profile performance with TensorBoard, Google's ML visualization toolkit.

C 4.0
Full Stack · Data / AI · Testing
MLOps · TensorBoard · Visualization · Training Metrics
5.2K
rank 85
hot 99
SkillHub Club · Analyze Data & AI

nemo-evaluator-sdk

Evaluates LLMs across 100+ benchmarks from 18+ harnesses (MMLU, HumanEval, GSM8K, safety, VLM) with multi-backend execution. Use when you need scalable evaluation on local Docker, Slurm HPC, or cloud platforms. NVIDIA's enterprise-grade platform with a container-first architecture for reproducible benchmarking.

C 4.0
Full Stack · Backend · DevOps
Evaluation · NeMo · NVIDIA · Benchmarking
5.2K
rank 85
hot 99
SkillHub Club · Analyze Data & AI

weights-and-biases

Track ML experiments with automatic logging, visualize training in real time, optimize hyperparameters with sweeps, and manage a model registry with W&B, a collaborative MLOps platform.

C 4.0
Full Stack · Data / AI
MLOps · Weights And Biases · WandB · Experiment Tracking
5.2K
rank 85
hot 99
SkillHub Club · Analyze Data & AI

evaluating-code-models

Evaluates code generation models across HumanEval, MBPP, MultiPL-E, and 15+ benchmarks with pass@k metrics. Use when benchmarking code models, comparing coding abilities, testing multi-language support, or measuring code generation quality. Industry standard from BigCode Project used by HuggingFace leaderboards.

C 4.0
Full Stack · Data / AI · Testing
Evaluation · Code Generation · HumanEval · MBPP
5.2K
rank 85
hot 99
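The pass@k metric these benchmarks report has a standard unbiased estimator (introduced with HumanEval): generate n samples per problem, count c correct, and compute pass@k = 1 - C(n-c, k) / C(n, k). A minimal sketch, independent of any particular harness:

```python
from math import comb

def pass_at_k(n, c, k):
    """Unbiased pass@k: n samples drawn, c of them correct."""
    if n - c < k:
        # Too few failures to fill a k-sample draw: some draw must contain a pass.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# 20 samples, 5 correct: pass@1 equals the raw rate c/n = 0.25,
# while pass@10 is far higher because any one hit in 10 draws counts.
print(pass_at_k(n=20, c=5, k=1))
print(pass_at_k(n=20, c=5, k=10))
```

Benchmark-level pass@k is the mean of this per-problem estimate across the suite.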
SkillHub Club · Analyze Data & AI

hqq-quantization

Half-Quadratic Quantization for LLMs without calibration data. Use when quantizing models to 4/3/2-bit precision without needing calibration datasets, for fast quantization workflows, or when deploying with vLLM or HuggingFace Transformers.

C 4.0
Full Stack · DevOps · Data / AI
Quantization · HQQ · Optimization · Memory Efficiency
5.2K
rank 85
hot 99
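To ground what low-bit quantization does, here is a plain round-to-nearest 4-bit quantizer over a toy weight list. HQQ itself goes further, refining scale and zero-point with a half-quadratic solver rather than this naive min/max fit, so treat this only as the storage/dequantization mechanics:

```python
def quantize_4bit(weights):
    """Map floats onto 16 integer levels (4 bits) via min/max scaling."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 15 or 1.0  # guard against constant inputs
    q = [round((w - lo) / scale) for w in weights]
    return q, scale, lo

def dequantize(q, scale, lo):
    return [v * scale + lo for v in q]

w = [-0.51, -0.20, 0.03, 0.42, 0.77]
q, scale, zero = quantize_4bit(w)
w_hat = dequantize(q, scale, zero)
max_err = max(abs(a - b) for a, b in zip(w, w_hat))
print(q, round(max_err, 4))  # integers in 0..15; error bounded by scale/2
```

Calibration-free methods like HQQ matter because they reach good scale/zero-point choices per weight group without running sample data through the model.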
SkillHub Club · Analyze Data & AI

skypilot-multi-cloud-orchestration

Multi-cloud orchestration for ML workloads with automatic cost optimization. Use when you need to run training or batch jobs across multiple clouds, leverage spot instances with auto-recovery, or optimize GPU costs across providers.

C 4.0
Full Stack · Data / AI
Infrastructure · Multi-Cloud · Orchestration · GPU
5.2K
rank 85
hot 99
SkillHub Club · Analyze Data & AI

constitutional-ai

Anthropic's method for training harmless AI through self-improvement. A two-phase approach: supervised learning with self-critique/revision, then RLAIF (RL from AI Feedback). Use for safety alignment and reducing harmful outputs without human labels. Powers Claude's safety system.

C 4.0
Full Stack · Data / AI
Safety Alignment · Constitutional AI · RLAIF · Self-Critique
5.2K
rank 85
hot 99
SkillHub Club · Analyze Data & AI

dspy

Build complex AI systems with declarative programming, optimize prompts automatically, and create modular RAG systems and agents with DSPy, Stanford NLP's framework for systematic LM programming.

C 4.0
Full Stack · Data / AI
Prompt Engineering · DSPy · Declarative Programming · RAG
5.2K
rank 85
hot 99
SkillHub Club · Analyze Data & AI

lambda-labs-gpu-cloud

Reserved and on-demand GPU cloud instances for ML training and inference. Use when you need dedicated GPU instances with simple SSH access, persistent filesystems, or high-performance multi-node clusters for large-scale training.

C 4.0
Full Stack · Data / AI
Infrastructure · GPU Cloud · Training · Inference
5.2K
rank 85
hot 99
SkillHub Club · Analyze Data & AI

tests

Imported from https://github.com/PrimeIntellect-ai/verifiers.

C 4.0
Full Stack · Data / AI · Testing
3.9K
rank 83
hot 99
SkillHub Club · Analyze Data & AI

environments

Imported from https://github.com/PrimeIntellect-ai/verifiers.

C 4.0
Full Stack · Data / AI
3.9K
rank 83
hot 99
SkillHub Club · Analyze Data & AI

envs

Imported from https://github.com/PrimeIntellect-ai/verifiers.

C 4.0
Full Stack · Data / AI
3.9K
rank 83
hot 99
SkillHub Club · Analyze Data & AI

evaluate-environments

Run and analyze evaluations for verifiers environments using prime eval. Use when asked to smoke-test environments, run benchmark sweeps, resume interrupted evaluations, compare models, inspect sample-level outputs, or produce evaluation summaries suitable for deciding next steps.

C 4.0
Full Stack · Data / AI · Testing
3.9K
rank 83
hot 99