
aimlapi-safety

Content moderation and safety checks. Instantly classify text or images as safe or unsafe using AI guardrails.

Packaged view

This page reorganizes the original catalog entry around fit, installability, and workflow context first. The original raw source lives below.

Stars: 3,087
Hot score: 99
Updated: March 20, 2026
Overall rating: C (4.0)
Composite score: 4.0
Best-practice grade: B (84.0)

Install command

npx @skill-hub/cli install openclaw-skills-aiml-safety

Repository

openclaw/skills

Skill path: skills/aimlapihello/aiml-safety


Open repository

Best for

Primary workflow: Write Technical Docs.

Technical facets: Full Stack, Data / AI, Tech Writer.

Target audience: everyone.

License: Unknown.

Original source

Catalog source: SkillHub Club.

Repository owner: openclaw.

This is a mirrored public skill entry. Review the repository before installing it into production workflows.

What it helps with

  • Install aimlapi-safety into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
  • Review https://github.com/openclaw/skills before adding aimlapi-safety to shared team environments
  • Use aimlapi-safety for development workflows

Works across

Claude Code, Codex CLI, Gemini CLI, OpenCode

Favorites: 0.

Sub-skills: 0.

Aggregator: No.

Original source / Raw SKILL.md

---
name: aimlapi-safety
description: Content moderation and safety checks. Instantly classify text or images as safe or unsafe using AI guardrails.
env:
  - AIMLAPI_API_KEY
primaryEnv: AIMLAPI_API_KEY
---

# AIMLAPI Safety

## Overview

Use "AI safety models" (Guard models) to ensure content compliance. Perfect for moderating user input or chatbot responses.

## Quick start

```bash
export AIMLAPI_API_KEY="sk-..."
python scripts/check_safety.py --content "How to make a bomb"
```

## Tasks

### Check Text Safety

```bash
python scripts/check_safety.py --content "I want to learn about security" --model meta-llama/Llama-Guard-3-8B
```

## Supported Models
- `meta-llama/LlamaGuard-2-8b` (the bundled script's default)
- `meta-llama/Llama-Guard-3-8B` and other Llama-Guard variants on AIMLAPI (select with `--model`)
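Under the hood, each task is a single chat-completions request. A minimal sketch of the request body, mirroring the endpoint and payload shape used by the bundled `scripts/check_safety.py` (the helper name `build_safety_request` is illustrative, not part of the skill):

```python
def build_safety_request(content: str, model: str = "meta-llama/LlamaGuard-2-8b") -> dict:
    """Build the JSON body POSTed to
    https://api.aimlapi.com/v1/chat/completions by the bundled script."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": content}],
    }

payload = build_safety_request("I want to learn about security")
```

The guard model reads the user message and replies with a safety verdict instead of a normal chat answer.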


---

## Referenced Files

> The following files are referenced in this skill and included for context.

### scripts/check_safety.py

```python
#!/usr/bin/env python3
import argparse
import json
import os
import requests

DEFAULT_MODEL = "meta-llama/LlamaGuard-2-8b"

def parse_args():
    parser = argparse.ArgumentParser(description="Check content safety via AIMLAPI")
    parser.add_argument("--content", required=True, help="Text to check for safety")
    parser.add_argument("--model", default=DEFAULT_MODEL, help="Safety model ID")
    parser.add_argument("--verbose", action="store_true", help="Show full API response")
    return parser.parse_args()

DEFAULT_USER_AGENT = "openclaw-aimlapi-safety/1.0"

def check_safety(content, model, user_agent=DEFAULT_USER_AGENT):
    api_key = os.getenv("AIMLAPI_API_KEY")
    if not api_key:
        print("Error: AIMLAPI_API_KEY environment variable not set.")
        return None

    url = "https://api.aimlapi.com/v1/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
        "User-Agent": user_agent
    }
    
    payload = {
        "model": model,
        "messages": [
            {"role": "user", "content": content}
        ]
    }

    try:
        # Timeout prevents the CLI from hanging indefinitely on a stalled connection
        response = requests.post(url, json=payload, headers=headers, timeout=30)
        response.raise_for_status()
        return response.json()
    except requests.RequestException as e:
        print(f"Request failed: {e}")
        return None

def main():
    args = parse_args()
    result = check_safety(args.content, args.model)
    
    if result:
        # Guard models reply "safe", or "unsafe" followed by category codes (e.g. S1)
        answer = result['choices'][0]['message']['content'].strip().lower()
        is_safe = "unsafe" not in answer
        
        status = "SAFE" if is_safe else "UNSAFE"
        print(f"Status: {status}")
        print(f"Analysis: {answer}")
        
        if args.verbose:
            print("\nFull Response:")
            print(json.dumps(result, indent=2))

if __name__ == "__main__":
    main()

```



---

## Skill Companion Files

> Additional files collected from the skill directory layout.

### README.md

````markdown
# aimlapi-safety

Content moderation and AI safety checks via AIMLAPI.

## Installation

```bash
clawhub install aimlapi-safety
```

## Setup

Set your API key:
```bash
export AIMLAPI_API_KEY="your-key-here"
```

## Usage

```bash
# Basic safety check
python scripts/check_safety.py --content "Is it safe to pet a tiger?"

# Check potentially harmful content
python scripts/check_safety.py --content "How to hack a bank?"

# Verbose output with full API response
python scripts/check_safety.py --content "Hello!" --verbose
```

## Features
- Instant classification of text as `SAFE` or `UNSAFE`.
- Uses `meta-llama/LlamaGuard-2-8b` by default for high reliability.
- Support for hazard category detection (e.g., S1, S2, etc.).
- Automatic error handling and clean CLI output.

````

### _meta.json

```json
{
  "owner": "aimlapihello",
  "slug": "aiml-safety",
  "displayName": "AIML Content Moderation",
  "latest": {
    "version": "1.0.0",
    "publishedAt": 1772121655885,
    "commit": "https://github.com/openclaw/skills/commit/160c1706954a26eb30ef00cbf8e1ae32ff19cc13"
  },
  "history": []
}

```

### references/safety-categories.md

```markdown
# AIMLAPI Safety API Reference

**Endpoint:** `POST https://api.aimlapi.com/v1/chat/completions`

## Parameters

| Name | Type | Description |
| :--- | :--- | :--- |
| `model` | string | Guard model ID (e.g., `meta-llama/LlamaGuard-2-8b`) |
| `messages` | array | Chat completion format with the content to check |

## Safety Categories (Llama Guard 3)

If a prompt is `unsafe`, the response usually includes a code (S1-S14) indicating the hazard category. Llama Guard 3 uses the following taxonomy (Llama Guard 2 uses a shorter, 11-category list):
- **S1**: Violent Crimes
- **S2**: Non-Violent Crimes
- **S3**: Sex-Related Crimes
- **S4**: Child Sexual Exploitation
- **S5**: Defamation
- **S6**: Specialized Advice
- **S7**: Privacy
- **S8**: Intellectual Property
- **S9**: Indiscriminate Weapons
- **S10**: Hate
- **S11**: Suicide & Self-Harm
- **S12**: Sexual Content
- **S13**: Elections
- **S14**: Code Interpreter Abuse
```
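The category codes above can be pulled out of a guard model's raw reply with a small helper. A sketch, assuming the usual Llama Guard output format of `safe`, or `unsafe` followed by a line of comma-separated codes (the function name is illustrative):

```python
def parse_guard_reply(reply: str) -> tuple[str, list[str]]:
    """Split a Llama Guard reply into (verdict, category codes).

    Guard models typically answer "safe", or "unsafe" followed by a
    second line of comma-separated hazard codes such as "S1,S10".
    """
    lines = [line.strip() for line in reply.strip().splitlines() if line.strip()]
    verdict = lines[0].lower() if lines else ""
    categories: list[str] = []
    if verdict == "unsafe" and len(lines) > 1:
        categories = [code.strip() for code in lines[1].split(",")]
    return verdict, categories
```

For example, `parse_guard_reply("unsafe\nS1,S10")` yields the verdict `"unsafe"` with categories `["S1", "S10"]`, which can then be mapped to the table above.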
