
apple-intelligence

Apple Intelligence skills for on-device AI features including Foundation Models, Visual Intelligence, and intelligent assistants. Use when implementing AI-powered features.

Packaged view

This page reorganizes the original catalog entry around fit, installability, and workflow context first. The original raw source lives below.

Stars: 101
Hot score: 94
Updated: March 20, 2026
Overall rating: C (3.2)
Composite score: 3.2
Best-practice grade: A (88.4)

Install command

npx @skill-hub/cli install rshankras-claude-code-apple-skills-apple-intelligence

Repository

rshankras/claude-code-apple-skills

Skill path: skills/apple-intelligence


Open repository

Best for

Primary workflow: Design Product.

Technical facets: Full Stack, Data / AI, Designer.

Target audience: everyone.

License: Unknown.

Original source

Catalog source: SkillHub Club.

Repository owner: rshankras.

This is a mirrored public skill entry. Review the repository before installing it into production workflows.

What it helps with

  • Install apple-intelligence into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
  • Review https://github.com/rshankras/claude-code-apple-skills before adding apple-intelligence to shared team environments
  • Use apple-intelligence for development workflows

Works across

Claude Code, Codex CLI, Gemini CLI, OpenCode

Favorites: 0.

Sub-skills: 0.

Aggregator: No.

Original source / Raw SKILL.md

---
name: apple-intelligence
description: Apple Intelligence skills for on-device AI features including Foundation Models, Visual Intelligence, and intelligent assistants. Use when implementing AI-powered features.
allowed-tools: [Read, Write, Edit, Glob, Grep, Bash, AskUserQuestion]
---

# Apple Intelligence Skills

Skills for implementing Apple Intelligence features including on-device LLMs, visual recognition, and intelligent assistants.

## When This Skill Activates

Use this skill when the user:
- Wants to add AI/LLM features to their app
- Needs on-device text generation or understanding
- Asks about Foundation Models or Apple Intelligence
- Wants to implement structured AI output
- Needs prompt engineering guidance
- Wants camera-based visual intelligence features

## Available Skills

### foundation-models/
On-device LLM integration with prompt engineering best practices.
- Model availability checking
- Session management
- @Generable structured output
- Tool calling patterns
- Snapshot streaming
- Prompt engineering techniques
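The pieces above (availability checking, session management, and `@Generable` structured output) can be sketched together as follows. This is a minimal illustration assuming the Foundation Models API shape Apple documents (`SystemLanguageModel`, `LanguageModelSession`, the `@Generable` and `@Guide` macros); the `TripIdea` type and its prompt are hypothetical examples, not part of this skill.

```swift
import FoundationModels

// Hypothetical output type. The @Generable macro constrains the
// on-device model to produce exactly this structure instead of free text.
@Generable
struct TripIdea {
    @Guide(description: "A short, catchy title")
    var title: String
    @Guide(description: "A two-sentence summary of the trip")
    var summary: String
}

func generateTripIdea() async throws -> TripIdea? {
    // Check model availability before creating a session;
    // older devices or disabled Apple Intelligence return unavailable.
    guard case .available = SystemLanguageModel.default.availability else {
        return nil // caller falls back to non-AI UI
    }

    let session = LanguageModelSession(
        instructions: "You suggest short day-trip ideas."
    )
    let response = try await session.respond(
        to: "Suggest a day trip near Cupertino.",
        generating: TripIdea.self
    )
    return response.content
}
```

Because the output is a typed value rather than raw text, there is no string parsing step, which is the main argument for structured output over prompt-and-parse.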

### visual-intelligence/
Integrate with iOS Visual Intelligence for camera-based search.
- IntentValueQuery implementation
- SemanticContentDescriptor handling
- AppEntity for searchable content
- Display representations
- Deep linking from results
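A rough sketch of how these pieces fit together, assuming the App Intents shapes Apple documents (`IntentValueQuery`, `SemanticContentDescriptor`, `AppEntity`, `DisplayRepresentation`). The `LandmarkEntity` type and the `LandmarkStore.matches(in:)` helper are hypothetical placeholders for your own content and matching logic.

```swift
import AppIntents
import VisualIntelligence

// Hypothetical searchable entity surfaced in Visual Intelligence results.
struct LandmarkEntity: AppEntity {
    static var typeDisplayRepresentation: TypeDisplayRepresentation = "Landmark"
    static var defaultQuery = LandmarkEntityQuery() // assumed EntityQuery, omitted here

    var id: String
    var name: String

    var displayRepresentation: DisplayRepresentation {
        DisplayRepresentation(title: "\(name)")
    }
}

// Called when the user points the Visual Intelligence camera at something
// and the system asks your app for matching content.
struct LandmarkQuery: IntentValueQuery {
    func values(for input: SemanticContentDescriptor) async throws -> [LandmarkEntity] {
        guard let pixelBuffer = input.pixelBuffer else { return [] }
        // Hypothetical helper: match the captured frame against your catalog.
        return try await LandmarkStore.shared.matches(in: pixelBuffer)
    }
}
```

Tapping a result then deep-links into your app via the entity's identifier, which is why the `AppEntity` conformance and a stable `id` matter here.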

## Key Principles

### 1. Privacy First
- All processing happens on-device
- No cloud connectivity required
- User data never leaves the device

### 2. Graceful Degradation
- Always check model availability
- Provide fallback UI for unsupported devices
- Handle errors gracefully
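One way to apply these three points in SwiftUI, assuming the documented `SystemLanguageModel.Availability` cases (`deviceNotEligible`, `appleIntelligenceNotEnabled`); `SummaryGeneratorView` and `ManualSummaryView` are hypothetical views standing in for your AI-powered and fallback UI.

```swift
import FoundationModels
import SwiftUI

struct SmartSummaryView: View {
    private let model = SystemLanguageModel.default

    var body: some View {
        switch model.availability {
        case .available:
            SummaryGeneratorView()   // AI-powered path
        case .unavailable(.deviceNotEligible):
            ManualSummaryView()      // device can never run the model: hide AI features
        case .unavailable(.appleIntelligenceNotEnabled):
            // User can fix this one, so say how.
            Text("Turn on Apple Intelligence in Settings to enable summaries.")
        case .unavailable(_):
            ManualSummaryView()      // model not ready yet, or other transient reason
        }
    }
}
```

Branching on the unavailability reason matters: a device that is not eligible should never advertise the feature, while a user who simply has the toggle off should be told how to enable it.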

### 3. Efficient Prompting
- Keep prompts focused and specific
- Use structured output when possible
- Respect context window limits (4,096 tokens)

## Reference Documentation

- `/Users/ravishankar/Downloads/docs/FoundationModels-Using-on-device-LLM-in-your-app.md`
- `/Users/ravishankar/Downloads/docs/Implementing-Visual-Intelligence-in-iOS.md`