
regression-models

Bayesian regression models including linear, logistic, Poisson, negative binomial, and robust regression with Stan and JAGS implementations.

Packaged view

This page reorganizes the original catalog entry to put fit, installability, and workflow context first; the original raw source appears below.

Stars
781
Hot score
99
Updated
March 19, 2026
Overall rating
C (4.6)
Composite score
4.6
Best-practice grade
B (80.4)

Install command

npx @skill-hub/cli install benchflow-ai-skillsbench-regression-models

Repository

benchflow-ai/SkillsBench

Skill path: registry/terminal_bench_2.0/full_batch_reviewed/terminal_bench_2_0_rstan-to-pystan/environment/skills/regression-models

Best for

Primary workflow: Ship Full Stack.

Technical facets: Full Stack.

Target audience: everyone.

License: Unknown.

Original source

Catalog source: SkillHub Club.

Repository owner: benchflow-ai.

This is a mirrored public skill entry; review the repository before installing it into production workflows.

What it helps with

  • Install regression-models into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
  • Review https://github.com/benchflow-ai/SkillsBench before adding regression-models to shared team environments
  • Use regression-models for development workflows

Works across

Claude Code, Codex CLI, Gemini CLI, OpenCode

Favorites: 0.

Sub-skills: 0.

Aggregator: No.

Original source / Raw SKILL.md

---
name: regression-models
description: Bayesian regression models including linear, logistic, Poisson, negative binomial, and robust regression with Stan and JAGS implementations.
---

# Regression Models

## Linear Regression

### Stan
```stan
data {
  int<lower=0> N;
  int<lower=0> K;
  matrix[N, K] X;
  vector[N] y;
}
parameters {
  real alpha;
  vector[K] beta;
  real<lower=0> sigma;
}
model {
  alpha ~ normal(0, 10);
  beta ~ normal(0, 5);
  sigma ~ exponential(1);
  y ~ normal(alpha + X * beta, sigma);
}
generated quantities {
  array[N] real y_rep;
  for (n in 1:N)
    y_rep[n] = normal_rng(alpha + X[n] * beta, sigma);
}
```

### JAGS
```
model {
  for (i in 1:N) {
    y[i] ~ dnorm(mu[i], tau)
    mu[i] <- alpha + inprod(X[i,], beta[])
  }
  alpha ~ dnorm(0, 0.001)
  for (k in 1:K) { beta[k] ~ dnorm(0, 0.001) }
  tau ~ dgamma(0.001, 0.001)
  sigma <- 1/sqrt(tau)
}
```
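Both programs encode the same data-generating process, y_i = alpha + X_i · beta + eps_i with eps_i ~ Normal(0, sigma); note that JAGS's `dnorm` takes a precision (tau = 1/sigma²), not a standard deviation. A pure-Python sketch of that process for the K = 1 case, with a closed-form least-squares check (the alpha, beta, and sigma values are illustrative, not from the source):

```python
import random

random.seed(42)
N = 2000
alpha, beta, sigma = 1.5, 2.0, 1.0   # illustrative "true" values

# Simulate y = alpha + beta * x + Normal(0, sigma) noise
x = [random.uniform(-2, 2) for _ in range(N)]
y = [alpha + beta * xi + random.gauss(0, sigma) for xi in x]

# Closed-form simple least squares (the K = 1 case of the models above)
xbar, ybar = sum(x) / N, sum(y) / N
sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
sxx = sum((xi - xbar) ** 2 for xi in x)
beta_hat = sxy / sxx
alpha_hat = ybar - beta_hat * xbar

# JAGS parameterization: dnorm takes precision tau = 1 / sigma^2
tau = 1.0 / sigma ** 2
```

With N = 2000 the point estimates land close to the simulated truth, which is the sanity check you would also run against posterior means from either sampler.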

## Logistic Regression

### Stan
```stan
data {
  int<lower=0> N;
  int<lower=0> K;
  matrix[N, K] X;
  array[N] int<lower=0,upper=1> y;
}
parameters {
  real alpha;
  vector[K] beta;
}
model {
  alpha ~ normal(0, 2.5);
  beta ~ normal(0, 2.5);
  y ~ bernoulli_logit(alpha + X * beta);
}
```

### JAGS
```
model {
  for (i in 1:N) {
    y[i] ~ dbern(p[i])
    logit(p[i]) <- alpha + inprod(X[i,], beta[])
  }
  alpha ~ dnorm(0, 0.4)    # SD ≈ 1.58
  for (k in 1:K) { beta[k] ~ dnorm(0, 0.4) }
}
```
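The JAGS priors above are stated in precision units: a precision of 0.4 corresponds to a standard deviation of 1/√0.4 ≈ 1.58, somewhat tighter than the Stan block's `normal(0, 2.5)`. A small conversion helper (a sketch; the 2.5 target comes from the Stan block above):

```python
import math

def precision_to_sd(tau: float) -> float:
    """JAGS dnorm uses precision tau = 1 / sd**2."""
    return 1.0 / math.sqrt(tau)

def sd_to_precision(sd: float) -> float:
    """Inverse conversion, for writing JAGS priors from an SD target."""
    return 1.0 / sd ** 2

jags_sd = precision_to_sd(0.4)      # SD implied by the JAGS prior above
stan_equiv = sd_to_precision(2.5)   # precision matching Stan's normal(0, 2.5)
```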

## Poisson Regression

### Stan
```stan
// data and parameters blocks as in logistic regression,
// with counts declared as array[N] int<lower=0> y
model {
  alpha ~ normal(0, 5);
  beta ~ normal(0, 2.5);
  y ~ poisson_log(alpha + X * beta);
}
```

### JAGS
```
model {
  for (i in 1:N) {
    y[i] ~ dpois(lambda[i])
    log(lambda[i]) <- alpha + inprod(X[i,], beta[])
  }
  alpha ~ dnorm(0, 0.04)                       # SD = 5, matching the Stan prior
  for (k in 1:K) { beta[k] ~ dnorm(0, 0.16) }  # SD = 2.5
}
```
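Both versions use a log link, so coefficients act multiplicatively on the rate: a one-unit increase in a predictor multiplies the expected count by exp(beta). A quick check of that interpretation (coefficient values are illustrative):

```python
import math

alpha, beta = 0.5, 0.3   # illustrative coefficients

def rate(x: float) -> float:
    """Expected count under the log link: lambda = exp(alpha + beta * x)."""
    return math.exp(alpha + beta * x)

# Effect of a one-unit increase in x: the rate scales by exp(beta)
ratio = rate(2.0) / rate(1.0)
```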

## Negative Binomial (Overdispersed Counts)

### Stan
```stan
parameters {
  real alpha;
  vector[K] beta;
  real<lower=0> phi;  // Overdispersion
}
model {
  phi ~ exponential(1);
  y ~ neg_binomial_2_log(alpha + X * beta, phi);
}
```
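In Stan's `neg_binomial_2` parameterization the mean is mu and the variance is mu + mu²/phi, so smaller phi means stronger overdispersion and the Poisson (variance = mean) limit is recovered as phi → ∞. Illustrating the formula:

```python
def nb2_variance(mu: float, phi: float) -> float:
    """Variance of neg_binomial_2(mu, phi): mu + mu**2 / phi."""
    return mu + mu ** 2 / phi

v_small_phi = nb2_variance(10.0, 1.0)   # strong overdispersion: far above the mean
v_large_phi = nb2_variance(10.0, 1e9)   # approaches the Poisson variance (= mu)
```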

## Robust Regression (Student-t Errors)

### Stan
```stan
parameters {
  real alpha;
  vector[K] beta;
  real<lower=0> sigma;
  real<lower=1> nu;  // Degrees of freedom
}
model {
  nu ~ gamma(2, 0.1);  // Prior on df
  y ~ student_t(nu, alpha + X * beta, sigma);
}
```
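Student-t errors downweight outliers; for nu > 2 the variance of student_t(nu, mu, sigma) is sigma² · nu/(nu − 2), and as nu grows the distribution approaches the normal, so estimating nu lets the model learn how heavy the tails need to be. A sketch of that scaling:

```python
def student_t_variance(nu: float, sigma: float = 1.0) -> float:
    """Variance of a student-t with nu > 2 degrees of freedom and scale sigma."""
    if nu <= 2:
        raise ValueError("variance is undefined for nu <= 2")
    return sigma ** 2 * nu / (nu - 2)

heavy = student_t_variance(3.0)           # heavy tails: 3x the squared scale
near_normal = student_t_variance(1000.0)  # essentially sigma^2, i.e. normal errors
```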

## QR Decomposition (For Correlated Predictors)

```stan
// data block (N, K, matrix X, vector y) as in linear regression
transformed data {
  matrix[N, K] Q = qr_thin_Q(X) * sqrt(N - 1.0);
  matrix[K, K] R = qr_thin_R(X) / sqrt(N - 1.0);
  matrix[K, K] R_inv = inverse(R);
}
parameters {
  vector[K] theta;
  real<lower=0> sigma;
}
model {
  y ~ normal(Q * theta, sigma);
}
generated quantities {
  vector[K] beta = R_inv * theta;
}
```
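The QR trick samples theta on the decorrelated basis Q and maps it back with beta = R⁻¹ theta; the √(N − 1) scaling cancels in that round trip. A pure-Python Gram-Schmidt sketch for K = 2 (the design matrix and coefficients here are hypothetical toy data) verifying that beta is recovered:

```python
import math

# Toy design matrix with correlated columns (hypothetical data, K = 2)
X = [[1.0, 2.1], [2.0, 3.9], [3.0, 6.2], [4.0, 7.8]]
beta = [0.7, -0.3]   # illustrative "true" coefficients

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

# Thin QR via Gram-Schmidt: X = Q R, with R upper triangular
x1 = [row[0] for row in X]
x2 = [row[1] for row in X]
r11 = math.sqrt(dot(x1, x1))
q1 = [v / r11 for v in x1]
r12 = dot(q1, x2)
u = [b - r12 * a for a, b in zip(q1, x2)]
r22 = math.sqrt(dot(u, u))

# theta = R @ beta is what the model estimates on the Q basis
theta = [r11 * beta[0] + r12 * beta[1], r22 * beta[1]]

# Back-substitution recovers beta = R^-1 @ theta (the generated quantities step)
b1 = theta[1] / r22
b0 = (theta[0] - r12 * b1) / r11
beta_rec = [b0, b1]
```

Because R is upper triangular, the inverse mapping is just back-substitution, which is why the reparameterization is cheap even when X's columns are strongly correlated.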

## Prior Recommendations

| Parameter | Weakly informative prior | Notes / source |
|-----------|-------------------|-----------|
| Intercept | `normal(0, 10)` | Scale of outcome |
| Coefficients | `normal(0, 2.5)` | Gelman et al. |
| SD (sigma) | `exponential(1)` | Half-normal alternative |
| Logistic coef | `normal(0, 2.5)` | ~4 logit units = extreme |
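The "≈4 logit units = extreme" rule of thumb comes from the inverse-logit: 4 units already maps to about 0.982 probability, so a `normal(0, 2.5)` coefficient prior keeps a one-unit predictor change from routinely pushing probabilities to the extremes. Checking the number:

```python
import math

def inv_logit(eta: float) -> float:
    """Map a logit-scale value back to a probability."""
    return 1.0 / (1.0 + math.exp(-eta))

p_extreme = inv_logit(4.0)   # ~0.982: already near-certain
p_zero = inv_logit(0.0)      # 0.5 at the prior mean
```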