resource-sharing
Resource Sharing Skill
Packaged view
This page reorganizes the original catalog entry around fit, installability, and workflow context first. The original raw source lives below.
Stars: 10
Hot score: 84
Updated: March 20, 2026
Overall rating: C (3.7)
Composite score: 3.7
Best-practice grade: B (75.1)
Install command
npx @skill-hub/cli install plurigrid-asi-resource-sharing
Repository: https://github.com/plurigrid/asi
Best for
Primary workflow: Ship Full Stack.
Technical facets: Full Stack.
Target audience: everyone.
License: Unknown.
Original source
Catalog source: SkillHub Club.
Repository owner: plurigrid.
This is a mirrored public skill entry; review the repository before installing it into production workflows.
What it helps with
- Install resource-sharing into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
- Review https://github.com/plurigrid/asi before adding resource-sharing to shared team environments
- Use resource-sharing for development workflows
Works across
Claude Code, Codex CLI, Gemini CLI, OpenCode
Favorites: 0.
Sub-skills: 0.
Aggregator: No.
Original source / Raw SKILL.md
---
name: resource-sharing
description: Resource Sharing Skill
version: 1.0.0
---
# Resource Sharing Skill
> Distribute computational load across machines using GF(3) balanced allocation
## Overview
Resource sharing implements the "all category resource sharing machines" pattern:
- **MINUS (-1)**: Nodes with excess capacity (receivers)
- **ERGODIC (0)**: Coordinator/broker nodes
- **PLUS (+1)**: Nodes with excess load (senders)
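As a quick illustration (a minimal sketch, not part of the skill itself), the balance condition over a mesh can be checked by summing the node trits mod 3:

```shell
#!/usr/bin/env bash
# Minimal sketch: check the GF(3) balance invariant for a set of node trits.
# One sender (+1), one broker (0), and one receiver (-1) sum to 0 mod 3.
trits=(1 0 -1)

sum=0
for t in "${trits[@]}"; do
  sum=$((sum + t))
done

# Normalize to a non-negative residue before the mod-3 test.
if (( ((sum % 3) + 3) % 3 == 0 )); then
  status="balanced"
else
  status="unbalanced"
fi
echo "mesh is $status (trit sum: $sum)"
```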
## Architecture
```
┌─────────────────────────────────────────────────────────┐
│                  Resource Sharing Mesh                  │
│                                                         │
│  ┌──────────┐     ┌──────────┐     ┌──────────┐         │
│  │  Node A  │◄───►│  Broker  │◄───►│  Node B  │         │
│  │ PLUS +1  │     │ ERGODIC  │     │ MINUS -1 │         │
│  │ (sender) │     │   (0)    │     │(receiver)│         │
│  └────┬─────┘     └──────────┘     └────▲─────┘         │
│       │                                 │               │
│       │          Work Migration         │               │
│       └─────────────────────────────────┘               │
│                                                         │
│        GF(3) Invariant: Σ node_trits ≡ 0 (mod 3)        │
└─────────────────────────────────────────────────────────┘
```
## Node Classification
```bash
# Determine node trit based on load
node_trit() {
  load=$(uptime | awk -F'load average:' '{print $2}' | cut -d, -f1 | tr -d ' ')
  cpus=$(sysctl -n hw.ncpu 2>/dev/null || nproc)
  ratio=$(echo "$load / $cpus" | bc -l)
  if (( $(echo "$ratio > 0.8" | bc -l) )); then
    echo "+1"   # PLUS: overloaded, needs to shed work
  elif (( $(echo "$ratio < 0.3" | bc -l) )); then
    echo "-1"   # MINUS: underloaded, can accept work
  else
    echo "0"    # ERGODIC: balanced
  fi
}
```
## Resource Transfer Protocol
### Via LocalSend
```bash
# Share a file to the least loaded peer
share_to_idle() {
  local file="$1"
  peers=$(tailscale status --json | jq -r '.Peer[].HostName')
  for peer in $peers; do
    # Assumes node_trit is installed and on PATH on each peer
    trit=$(ssh "$peer" 'node_trit')
    if [[ "$trit" == "-1" ]]; then
      localsend send --target "$peer" "$file"
      return
    fi
  done
  echo "No idle peers available"
}
```
### Via Tailscale
```bash
# Direct workload migration to an underloaded node
migrate_workload() {
  local pid="$1"
  local target
  target=$(find_minus_node)   # helper returning a MINUS (-1) peer hostname
  # Checkpoint process state into a directory (requires CRIU, typically root)
  criu dump -t "$pid" -D /tmp/checkpoint
  # Transfer the checkpoint to the target
  tailscale file cp /tmp/checkpoint "$target:"
  # Restore on the target
  ssh "$target" "criu restore -D /tmp/checkpoint"
}
```
## Babashka Resource Monitor
```clojure
#!/usr/bin/env bb
(require '[babashka.process :refer [shell]]
         '[clojure.string :as str])

(defn load-avg []
  (-> (shell {:out :string} "sysctl -n vm.loadavg")   ; macOS; on Linux read /proc/loadavg
      :out
      (str/split #"\s+")
      second          ; "{ 1.23 2.34 3.45 }" -> "1.23"
      parse-double))

(defn cpu-count []
  (-> (shell {:out :string} "sysctl -n hw.ncpu")
      :out
      str/trim
      parse-long))

(defn node-trit []
  (let [ratio (/ (load-avg) (cpu-count))]
    (cond
      (> ratio 0.8) +1   ; PLUS: shed load
      (< ratio 0.3) -1   ; MINUS: accept load
      :else 0)))         ; ERGODIC: balanced

(defn scum-processes []
  (->> (shell {:out :string} "ps axo pid,%cpu,%mem,comm")
       :out
       str/split-lines
       rest                        ; drop the header row
       (map str/trim)              ; strip leading spaces so field 1 is %CPU
       (map #(str/split % #"\s+" 4))
       (filter #(> (parse-double (nth % 1)) 30))))

(println "Node trit:" (node-trit))
(println "SCUM processes:" (count (scum-processes)))
```
## GF(3) Load Balancing
```
For N nodes with trits t₁, t₂, ..., tₙ:
Invariant: Σtᵢ ≡ 0 (mod 3)
Rebalancing rules:
1. If Σtᵢ > 0: migrate work from PLUS to MINUS nodes
2. If Σtᵢ < 0: wake idle work on MINUS nodes
3. If Σtᵢ = 0: system balanced, no action needed
```
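The rebalancing rules above can be sketched as a small shell helper (hypothetical; it assumes node trits have already been collected, e.g. via `node_trit` on each peer):

```shell
# Hypothetical helper: map a list of node trits to the rebalancing rule.
rebalance_action() {
  local sum=0 t
  for t in "$@"; do
    sum=$((sum + t))
  done
  if (( sum > 0 )); then
    echo "migrate work from PLUS to MINUS nodes"
  elif (( sum < 0 )); then
    echo "wake idle work on MINUS nodes"
  else
    echo "balanced: no action needed"
  fi
}
```

For example, `rebalance_action 1 1 -1` reports migration pressure from senders, while `rebalance_action 0 0 0` reports a balanced mesh.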
## Integration with SCUM Score
```
Resource Decision Matrix:
SCUM Score | Node Trit | Action
---------- | --------- | ------
>0.35 | +1 | Migrate process to MINUS node
>0.35 | 0 | Throttle locally
>0.35 | -1 | Allow (node can handle)
<0.35 | any | No action needed
```
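A hedged sketch of this decision matrix as a shell function (hypothetical; `awk` handles the floating-point comparison so no extra `bc` dependency is introduced):

```shell
# Hypothetical sketch of the decision matrix: SCUM score + local trit -> action.
scum_action() {
  local score="$1" trit="$2"
  # Scores at or below the 0.35 threshold need no action, regardless of trit.
  if awk -v s="$score" 'BEGIN { exit !(s <= 0.35) }'; then
    echo "no action needed"
    return
  fi
  case "$trit" in
    +1|1) echo "migrate process to MINUS node" ;;
    0)    echo "throttle locally" ;;
    -1)   echo "allow (node can handle)" ;;
  esac
}
```

For example, `scum_action 0.50 0` prints "throttle locally".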
## Justfile Recipes
```just
# Check mesh balance
mesh-balance:
    #!/usr/bin/env bash
    for host in $(tailscale status --json | jq -r '.Peer[].HostName'); do
        echo -n "$host: "
        ssh "$host" 'uptime' 2>/dev/null || echo "unreachable"
    done

# Find receiving nodes
find-minus:
    #!/usr/bin/env bash
    tailscale status --json | jq -r '.Peer[].HostName' | while read -r h; do
        load=$(ssh "$h" "uptime | awk -F: '{print \$NF}' | cut -d, -f1")
        echo "$h: $load"
    done | sort -t: -k2 -n | head -3

# Migrate SCUM to idle node
migrate-scum PID:
    #!/usr/bin/env bash
    target=$(just find-minus | head -1 | cut -d: -f1)
    echo "Migrating PID {{PID}} to $target"
```
---
**Skill Name**: resource-sharing
**Trit**: 0 (ERGODIC - Coordinator)
**GF(3) Role**: Brokers load between PLUS and MINUS nodes
**Integration**: scum-score, localsend-mcp, tailscale-mesh
## Scientific Skill Interleaving
This skill connects to the K-Dense-AI/claude-scientific-skills ecosystem:
### Graph Theory
- **networkx** [○] via bicomodule
- Universal graph hub
### Bibliography References
- `general`: 734 citations in bib.duckdb
## Cat# Integration
This skill maps to **Cat# = Comod(P)** as a bicomodule in the equipment structure:
```
Trit: 0 (ERGODIC)
Home: Prof
Poly Op: ⊗
Kan Role: Adj
Color: #26D826
```
### GF(3) Naturality
The skill participates in triads satisfying:
```
(-1) + (0) + (+1) ≡ 0 (mod 3)
```
This ensures compositional coherence in the Cat# equipment structure.