
super-brain

AI self-enhancement system that lets the AI remember users across sessions and continuously evolve. Use this skill when you need long-term memory of user preferences, conversation-history tracking, learned service techniques, or proactive personalized service.

Packaged view

This page reorganizes the original catalog entry around fit, installability, and workflow context first. The original raw source lives below.

Stars
3,084
Hot score
99
Updated
March 20, 2026
Overall rating
C
Composite score
5.2
Best-practice grade
C (62.8)

Install command

npx @skill-hub/cli install openclaw-skills-super-brain

Repository

openclaw/skills

Skill path: skills/aboutyao/super-brain

AI self-enhancement system that lets the AI remember users across sessions and continuously evolve. Use this skill when you need long-term memory of user preferences, conversation-history tracking, learned service techniques, or proactive personalized service.

Open repository

Best for

Primary workflow: Analyze Data & AI.

Technical facets: Full Stack, Data / AI.

Target audience: everyone.

License: Unknown.

Original source

Catalog source: SkillHub Club.

Repository owner: openclaw.

This is still a mirrored public skill entry. Review the repository before installing into production workflows.

What it helps with

  • Install super-brain into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
  • Review https://github.com/openclaw/skills before adding super-brain to shared team environments
  • Use super-brain for development workflows

Works across

Claude Code, Codex CLI, Gemini CLI, OpenCode

Favorites: 0.

Sub-skills: 0.

Aggregator: No.

Original source / Raw SKILL.md

---
name: super-brain
description: AI self-enhancement system that lets the AI remember users across sessions and continuously evolve. Use this skill when you need long-term memory of user preferences, conversation-history tracking, learned service techniques, or proactive personalized service.
---

# Super Brain (AI超脑)

> Give the AI persistent memory and the ability to keep evolving

## 📦 Required After Install

**After installing this skill for the first time, you must run:**

```bash
python3 ~/.openclaw/skills/super-brain/scripts/install.py
```

This creates a startup checklist in the user's workspace so the AI remembers to use Super Brain in every session.

**If you skip this step, the AI may forget to use its Super Brain capabilities.**

---

## ⚠️ Mandatory Startup Flow - Run in Every Session

**Before handling any user message, execute the following startup flow:**

```python
# 1. Load the user profile (expand ~ explicitly; sqlite3 does not do it for you)
import os
import sqlite3

conn = sqlite3.connect(os.path.expanduser('~/.openclaw/super-brain.db'))
cursor = conn.cursor()
cursor.execute("SELECT * FROM user_profile WHERE user_id = ?", [user_id])
profile = cursor.fetchone()

# 2. Apply preferences
if profile:
    # communication style, technical level, known domains
    pass

# 3. Check active projects and effective patterns
cursor.execute("SELECT * FROM user_projects WHERE user_id = ? AND status = 'active'", [user_id])
cursor.execute("SELECT * FROM response_patterns WHERE user_id = ? AND pattern_type = 'effective'", [user_id])

conn.close()
```

**If this flow is not executed, Super Brain is not activated and its memory capabilities are unavailable.**

---

## 🎯 Automatic Trigger Scenarios

The following scenarios **automatically trigger** Super Brain activation (no explicit user request needed):
- At session start (a user ID is recognized)
- The user mentions words like "last time", "previously", or "continue"
- The conversation involves long-term projects or goals
- Personalized service is needed
- A complex task needs swarm-intelligence decomposition
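The keyword triggers above can be sketched as a simple check. This is a hypothetical sketch: the function name and keyword list are illustrative, not part of the skill's scripts, and the English keywords are added for illustration alongside the Chinese trigger words.

```python
# Hypothetical sketch: continuity keywords mirror the trigger words above
# ("last time", "previously", "continue"); English equivalents added for illustration.
CONTINUITY_KEYWORDS = ["上次", "之前", "继续", "last time", "previously", "continue"]

def should_activate_super_brain(message, has_user_id=True):
    """Activate at session start (known user) or on a continuity keyword."""
    if has_user_id:
        # Session start with a recognized user ID always activates.
        return True
    return any(keyword in message for keyword in CONTINUITY_KEYWORDS)
```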

## 🏗️ System Architecture

```
super-brain/
├── brain.db              # SQLite: user profiles, conversation insights, learned patterns
├── vector_db/            # ChromaDB: semantic memory
└── cache/                # temporary cache
```

## 🗄️ Database Structure

### Core Tables

**user_profile** - user profile
```sql
user_id TEXT PRIMARY KEY
communication_style TEXT      -- concise/detailed, formal/casual
preferred_format TEXT         -- table/list/paragraph/code
technical_level TEXT          -- beginner/intermediate/advanced
known_domains TEXT            -- JSON: ["Python", "Blockchain"]
decision_pattern TEXT         -- data-driven/intuitive
```

**conversation_insights** - conversation insights
```sql
id TEXT PRIMARY KEY
user_id TEXT
session_id TEXT
topic TEXT                    -- topic
key_facts TEXT                -- JSON: key facts
user_mood TEXT                -- mood
preferences_detected TEXT     -- JSON: detected preferences
unresolved_questions TEXT     -- JSON: open questions
ai_helpfulness_score INTEGER  -- self-assessment
```

**response_patterns** - response patterns
```sql
id TEXT PRIMARY KEY
pattern_type TEXT             -- effective/ineffective
trigger_context TEXT          -- trigger scenario
what_i_did TEXT               -- what the AI did
user_reaction TEXT            -- user reaction
learned_lesson TEXT           -- lesson learned
```

**user_projects** - user projects
```sql
id TEXT PRIMARY KEY
user_id TEXT
project_name TEXT
status TEXT                   -- planning/active/paused/completed
milestones TEXT               -- JSON
key_decisions TEXT            -- JSON
next_steps TEXT
```

**pending_reminders** - proactive service queue
```sql
id TEXT PRIMARY KEY
user_id TEXT
reminder_type TEXT            -- follow_up/suggestion/checkpoint
content TEXT
trigger_at TIMESTAMP
```

**intelligent_decisions** - intelligent decision log
```sql
id TEXT PRIMARY KEY
user_id TEXT
decision_context TEXT         -- decision scenario
decision_type TEXT            -- recommendation/prediction/optimization
ai_suggestion TEXT            -- AI suggestion
user_choice TEXT              -- user's choice
outcome_score INTEGER         -- outcome rating
confidence REAL               -- AI confidence
created_at TIMESTAMP
```

**privacy_settings** - privacy configuration
```sql
user_id TEXT PRIMARY KEY
store_conversations BOOLEAN   -- store conversations?
store_mood BOOLEAN            -- store mood?
store_detailed_facts BOOLEAN  -- store details vs. summaries only
auto_delete_days INTEGER      -- auto-delete after N days (0 = never)
sensitive_filter_enabled BOOLEAN  -- sensitive-information filtering
encryption_enabled BOOLEAN    -- encrypted storage?
last_updated TIMESTAMP
```

**data_access_log** - data-access audit
```sql
id INTEGER PRIMARY KEY
user_id TEXT
access_type TEXT              -- read/write/delete
accessed_by TEXT              -- who accessed it
access_reason TEXT            -- reason for access
timestamp TIMESTAMP
```
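Writing one audit row against this table can be sketched as follows. This is a minimal sketch assuming the columns shown above; the helper name `log_data_access` is hypothetical, not defined by the skill.

```python
import sqlite3

def log_data_access(conn, user_id, access_type, accessed_by, reason):
    """Append one audit row (hypothetical helper; columns follow data_access_log)."""
    conn.execute(
        "INSERT INTO data_access_log (user_id, access_type, accessed_by, access_reason, timestamp) "
        "VALUES (?, ?, ?, ?, CURRENT_TIMESTAMP)",
        (user_id, access_type, accessed_by, reason),
    )
    conn.commit()
```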

## 📋 Standard Workflow

### 1. At Session Start

**Must run:**
```python
# 1. Load the user profile
profile = query("SELECT * FROM user_profile WHERE user_id = ?", [user_id])

if not profile:
    # New user: create a profile
    create_profile(user_id)
else:
    # Returning user: apply known preferences
    apply_preferences(profile)

# 2. Check pending reminders
reminders = query("SELECT * FROM pending_reminders WHERE user_id = ? AND status = 'pending'", [user_id])
for r in reminders:
    consider_raising_reminder(r)

# 3. Check active projects
projects = query("SELECT * FROM user_projects WHERE user_id = ? AND status = 'active'", [user_id])
if projects:
    load_project_context(projects)

# 4. Load effective patterns
effective_patterns = query("SELECT * FROM response_patterns WHERE user_id = ? AND pattern_type = 'effective' ORDER BY use_count DESC", [user_id])
```

### 2. After Each Turn

**Runs automatically:**
```python
# 1. Extract key information
key_facts = extract_key_facts(user_message, ai_response)
mood = detect_mood(user_message)
preferences = detect_preference_changes(user_message)

# 2. Evaluate effectiveness
understanding_score = evaluate_understanding(user_message)
helpfulness_score = evaluate_helpfulness(user_feedback_signals)

# 3. Store the insight
insert("""
    INSERT INTO conversation_insights 
    (id, user_id, session_id, topic, key_facts, user_mood, 
     preferences_detected, ai_understanding_score, ai_helpfulness_score)
    VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)
""", [generate_id(), user_id, session_id, topic, 
      json.dumps(key_facts), mood, json.dumps(preferences),
      understanding_score, helpfulness_score])

# 4. Update the user profile (if anything changed)
if preferences:
    update_profile(user_id, preferences)
```

### 3. At Session End

**Must run:**
```python
# 1. Generate a session summary
session_summary = {
    "topic": extract_main_topic(),
    "goal_achieved": check_goal_completion(),
    "key_decisions": extract_decisions(),
    "unresolved": extract_unresolved(),
    "next_steps": infer_next_steps()
}

# 2. Learn patterns from the session
learn_from_session()

# 3. Create reminders
if has_unresolved_tasks():
    create_follow_up_reminder(user_id, unresolved_tasks)

# 4. Reflect
perform_reflection()

# 5. Update statistics
update("UPDATE user_profile SET total_sessions = total_sessions + 1, last_session = ? WHERE user_id = ?",
       [now(), user_id])
```

## 🔒 Privacy Protection

### Default Privacy Settings

When a profile is first created for a user, apply a conservative privacy level:
```python
DEFAULT_PRIVACY_SETTINGS = {
    'store_conversations': True,
    'store_mood': True,
    'store_detailed_facts': False,  # summaries only by default
    'auto_delete_days': 90,         # auto-delete after 90 days
    'sensitive_filter_enabled': True,
    'encryption_enabled': False
}
```

### Sensitive-Information Filtering

Automatically detect and filter sensitive information before storage (the patterns below match Chinese credential and PII phrasing and are kept as functional data):
```python
import re

SENSITIVE_PATTERNS = [
    # Account credentials
    r'密码[::]\s*\S+',
    r'password[::]\s*\S+',
    r'密钥[::]\s*\S+',
    r'secret[::]\s*\S+',
    r'token[::]\s*\S+',
    r'api[_-]?key[::]\s*\S+',
    
    # Personally identifiable information
    r'身份证[::]\s*\d{15,18}',
    r'身份证号[::]\s*\d{15,18}',
    r'银行卡[::]\s*\d{13,19}',
    r'信用卡[::]\s*\d{13,19}',
    r'手机[::]\s*1[3-9]\d{9}',
    r'电话[::]\s*\d{11}',
    
    # Address information
    r'地址[::]\s*[\u4e00-\u9fa5]{2,10}[省市县区镇街道]{1,3}.+',
    
    # Other sensitive items (verification codes)
    r'验证码[::]\s*\d{4,6}',
]

def contains_sensitive_info(text):
    """Check whether the text contains sensitive information"""
    for pattern in SENSITIVE_PATTERNS:
        if re.search(pattern, text, re.IGNORECASE):
            return True, pattern
    return False, None

def sanitize_for_storage(text, user_id):
    """Sanitize text before storing it"""
    has_sensitive, pattern = contains_sensitive_info(text)
    if has_sensitive:
        # Log the filter event (without storing the sensitive content)
        log_privacy_filter(user_id, pattern)
        # Return a filtered placeholder instead of the original text
        return "[Sensitive information removed]"
    return text
```

### Data Retention Policy

Automatically purge expired data:
```python
from datetime import datetime, timedelta

def apply_data_retention_policy(user_id):
    """Apply the user's data retention policy"""
    settings = get_privacy_settings(user_id)
    days = settings['auto_delete_days']
    
    if days > 0:
        cutoff_date = datetime.now() - timedelta(days=days)
        
        # Delete expired insights
        execute("""
            DELETE FROM conversation_insights 
            WHERE user_id = ? AND timestamp < ?
        """, [user_id, cutoff_date])
        
        # Delete expired patterns
        execute("""
            DELETE FROM response_patterns 
            WHERE user_id = ? AND last_used < ?
        """, [user_id, cutoff_date])
```

### User Data Control Commands

Users can control their data with the following commands:

```
/brain status          - show Super Brain status and data volume
/brain config          - view/modify privacy settings
/brain forget          - delete this conversation's memory
/brain forget all      - delete all historical data
/brain export          - export my data
/brain pause           - pause recording (this session)
/brain resume          - resume recording
```

Implementation example:
```python
def handle_brain_command(command, user_id):
    """Handle a Super Brain control command"""
    
    if command == 'status':
        stats = get_user_data_stats(user_id)
        return f"""
📊 Super Brain status
Data overview:
  • Session insights: {stats['insights_count']}
  • Learned patterns: {stats['patterns_count']}
  • Active projects: {stats['projects_count']}
  • Total storage: {stats['storage_size']} MB
Privacy settings:
  • Store conversations: {'on' if stats['store_conversations'] else 'off'}
  • Auto-delete: {stats['auto_delete_days']} days
        """
    
    elif command == 'forget all':
        # Require confirmation
        return "⚠️ Delete ALL data? Reply 'confirm delete' to continue."
    
    elif command == 'confirm delete':
        delete_all_user_data(user_id)
        return "✅ All data deleted; Super Brain has been reset."
    
    elif command.startswith('config'):
        # Parse the configuration change,
        # e.g. /brain config store_mood=false
        return update_privacy_config(user_id, command)
```

## 🔧 Core Operations Guide

### Initialize the Database

Initialize before first use:
```python
# Run scripts/init_db.py
# or execute schema.sql manually
```
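If the init script is unavailable, the same effect can be sketched with sqlite3's `executescript`. The paths and the function name are illustrative defaults, not guaranteed by the skill.

```python
import os
import sqlite3

def init_brain_db(db_path="~/.openclaw/super-brain.db",
                  schema_path="references/schema.sql"):
    """Create the Super Brain database by applying the bundled schema file."""
    db_path = os.path.expanduser(db_path)  # sqlite3 does not expand ~ itself
    parent = os.path.dirname(db_path)
    if parent:
        os.makedirs(parent, exist_ok=True)
    conn = sqlite3.connect(db_path)
    with open(schema_path, encoding="utf-8") as f:
        conn.executescript(f.read())  # runs every CREATE TABLE/INDEX statement
    conn.commit()
    conn.close()
```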

### Query the User Profile

```python
profile = query_one("""
    SELECT * FROM user_profile 
    WHERE user_id = ?
""", [user_id])

# Apply to the current session
if profile:
    if profile['communication_style'] == 'concise':
        set_response_style(brief=True)
    if 'Python' in json.loads(profile['known_domains']):
        assume_knowledge(level='intermediate', domain='Python')
```

### Semantic Search over History

```python
# Using ChromaDB
results = chroma_collection.query(
    query_texts=["the user's earlier questions about blockchain"],
    where={"user_id": user_id},
    n_results=5
)
```

### Detect User Preferences

The keyword lists below match Chinese user messages and are kept as functional data:

```python
def detect_preferences(user_message):
    preferences = {}
    
    # Format preferences ("use a table", "compare" / "keep it brief", "in simple terms")
    if "用表格" in user_message or "对比" in user_message:
        preferences['preferred_format'] = 'table'
    elif "简洁" in user_message or "简单说" in user_message:
        preferences['communication_style'] = 'concise'
    
    # Technical-background signals ("API", "architecture", "implementation")
    if any(word in user_message for word in ["API", "架构", "实现"]):
        preferences['technical_level'] = 'advanced'
    
    # Decision style ("data", "statistics", "research" vs. "feeling", "intuition")
    if any(word in user_message for word in ["数据", "统计", "研究"]):
        preferences['decision_pattern'] = 'data_driven'
    elif any(word in user_message for word in ["感觉", "直觉", "觉得"]):
        preferences['decision_pattern'] = 'intuitive'
    
    return preferences
```

### Evaluate Response Effectiveness

```python
def evaluate_effectiveness(user_message, ai_response, next_user_message):
    """
    Judge this turn's effectiveness from the user's next message
    """
    # Positive signals ("thanks", "got it", "okay", "nice", "perfect")
    positive_signals = ['谢谢', '明白了', '好的', '赞', '👍', '完美']
    # Negative signals ("wrong", "incorrect", "didn't get it", "say it again")
    negative_signals = ['不对', '错了', '没懂', '再说', '?', '???']
    
    if any(s in next_user_message for s in positive_signals):
        return 'effective'
    elif any(s in next_user_message for s in negative_signals):
        return 'ineffective'
    elif len(next_user_message) < 5:  # lukewarm response
        return 'neutral'
    else:
        return 'effective'  # continuing a deeper conversation counts as effective
```

### Create Follow-up Reminders

```python
def create_reminder(user_id, reminder_type, content, trigger_at):
    insert("""
        INSERT INTO pending_reminders (id, user_id, reminder_type, content, trigger_at, status)
        VALUES (?, ?, ?, ?, ?, 'pending')
    """, [generate_id(), user_id, reminder_type, content, trigger_at])
```

## 🧠 The Six Modules: Memory · Learning · Summarization · Reflection · Innovation · Intelligence

### 6️⃣ Intelligence Module - Swarm Intelligence

The intelligence module is Super Brain's most advanced capability: **true distributed intelligence**.

When it receives a complex task, the main brain will:
1. Analyze the task's complexity
2. Split it into subtasks
3. Spawn sub-agents to execute in parallel
4. Give every agent access to the shared Super Brain database
5. Orchestrate the agents in an orderly way
6. Merge the results into a unified output
#### A. Task Decomposition Engine

```python
class TaskDecomposer:
    """Task decomposition engine"""
    
    def analyze_and_decompose(self, user_id, task_description):
        """Analyze a task and split it into subtasks"""
        
        # 1. Assess task complexity
        complexity = self.assess_complexity(task_description)
        required_skills = self.identify_required_skills(task_description)
        
        # 2. Query Super Brain for user-related background
        user_context = query_super_brain(user_id, task_description)
        
        # 3. Decide: run directly vs. decompose
        if complexity < COMPLEXITY_THRESHOLD:
            return {'mode': 'direct', 'task': task_description}
        
        # 4. Generate subtasks
        subtasks = self.generate_subtasks(
            task_description,
            required_skills,
            user_context
        )
        
        # 5. Identify dependencies between subtasks
        dependency_graph = self.build_dependency_graph(subtasks)
        
        return {
            'mode': 'decompose',
            'subtasks': subtasks,
            'dependencies': dependency_graph,
            'parallel_groups': self.group_parallel_tasks(subtasks, dependency_graph)
        }
    
    def generate_subtasks(self, task, skills, context):
        """Generate the subtask list"""
        # Example: designing an AI app → split into design, architecture, code, tests
        subtasks = []
        
        if 'design' in skills or 'ui' in skills:
            subtasks.append({
                'id': generate_id(),
                'type': 'design',
                'description': f'Design the UI/UX for {task}',
                'required_agent': 'design-agent',
                'estimated_time': '5min',
                'dependencies': []
            })
        
        if 'architecture' in skills or 'backend' in skills:
            subtasks.append({
                'id': generate_id(),
                'type': 'architecture',
                'description': f'Design the technical architecture for {task}',
                'required_agent': 'architect-agent',
                'estimated_time': '5min',
                'dependencies': []
            })
        
        if 'code' in skills:
            subtasks.append({
                'id': generate_id(),
                'type': 'code',
                'description': f'Implement the core code for {task}',
                'required_agent': 'coder-agent',
                'estimated_time': '10min',
                'dependencies': ['architecture']  # depends on the architecture design
            })
        
        if 'test' in skills:
            subtasks.append({
                'id': generate_id(),
                'type': 'test',
                'description': f'Write test cases for {task}',
                'required_agent': 'test-agent',
                'estimated_time': '5min',
                'dependencies': ['code']  # depends on the code implementation
            })
        
        return subtasks
```

#### B. Sub-Agent Orchestrator

```python
class AgentOrchestrator:
    """Sub-agent orchestrator"""
    
    def __init__(self, shared_brain_db):
        self.brain_db = shared_brain_db  # every sub-agent shares the same Super Brain
    
    def spawn_agent(self, agent_type, subtask, user_id):
        """Spawn a sub-agent"""
        
        # 1. Prepare the sub-agent's context (loaded from the shared brain)
        context = self.prepare_shared_context(user_id, subtask)
        
        # 2. Call sessions_spawn to create the sub-agent
        agent_session = sessions_spawn({
            'agentId': self.select_best_agent(agent_type),
            'task': subtask['description'],
            'runtime': 'subagent',
            'mode': 'run',
            'attachAs': {
                'mountPath': self.brain_db  # shared Super Brain database
            }
        })
        
        # 3. Register the sub-agent with Super Brain
        self.register_agent(agent_session, subtask, user_id)
        
        return agent_session
    
    def prepare_shared_context(self, user_id, subtask):
        """Prepare context from the shared Super Brain"""
        return {
            'user_profile': query_user_profile(self.brain_db, user_id),
            'related_projects': query_active_projects(self.brain_db, user_id),
            'recent_insights': query_recent_insights(self.brain_db, user_id),
            'effective_patterns': query_effective_patterns(self.brain_db, user_id)
        }
    
    def coordinate_parallel_execution(self, parallel_groups, user_id):
        """Coordinate parallel execution"""
        
        all_results = []
        
        for group in parallel_groups:
            # Subtasks within the same group can run in parallel
            group_agents = []
            
            for subtask in group:
                agent = self.spawn_agent(
                    subtask['required_agent'],
                    subtask,
                    user_id
                )
                group_agents.append(agent)
            
            # Wait for this group to finish
            group_results = self.wait_for_completion(group_agents)
            all_results.extend(group_results)
            
            # Update the shared brain so other agents can see these results
            self.update_shared_brain(group_results)
        
        return all_results
```

#### C. Shared-Brain Mechanism

```python
class SharedBrain:
    """Shared brain - the unified memory for all sub-agents"""
    
    def __init__(self, db_path):
        self.db = sqlite3.connect(db_path)
    
    def write_agent_output(self, agent_id, subtask_id, output):
        """A sub-agent writes its output into the shared brain"""
        
        self.db.execute("""
            INSERT INTO agent_outputs 
            (agent_id, subtask_id, output, timestamp)
            VALUES (?, ?, ?, ?)
        """, [agent_id, subtask_id, json.dumps(output), datetime.now()])
        
        self.db.commit()
        
        # Notify agents waiting on this subtask
        self.notify_dependent_agents(subtask_id)
    
    def read_agent_output(self, subtask_id):
        """A sub-agent reads another agent's output"""
        
        result = self.db.execute("""
            SELECT output FROM agent_outputs 
            WHERE subtask_id = ?
        """, [subtask_id]).fetchone()
        
        return json.loads(result[0]) if result else None
    
    def get_task_state(self, main_task_id):
        """Get the overall task state"""
        
        return self.db.execute("""
            SELECT 
                COUNT(*) as total_subtasks,
                SUM(CASE WHEN status = 'completed' THEN 1 ELSE 0 END) as completed,
                SUM(CASE WHEN status = 'running' THEN 1 ELSE 0 END) as running,
                SUM(CASE WHEN status = 'pending' THEN 1 ELSE 0 END) as pending
            FROM agent_tasks
            WHERE main_task_id = ?
        """, [main_task_id]).fetchone()
```

#### D. Result Merger

```python
class ResultMerger:
    """Result merger"""
    
    def merge_subtask_results(self, subtask_results, main_task):
        """Merge the results from multiple sub-agents"""
        
        merged = {
            'main_task': main_task,
            'components': {},
            'integration_points': [],
            'final_output': None
        }
        
        # 1. Organize each subtask's output by type
        for result in subtask_results:
            task_type = result['subtask_type']
            merged['components'][task_type] = result['output']
        
        # 2. Identify integration points
        # e.g. code must align with the architecture; tests must align with the code
        merged['integration_points'] = self.find_integration_points(
            merged['components']
        )
        
        # 3. Check consistency
        inconsistencies = self.check_consistency(merged['components'])
        if inconsistencies:
            merged['warnings'] = inconsistencies
        
        # 4. Generate the final output
        merged['final_output'] = self.generate_unified_output(merged)
        
        return merged
    
    def generate_unified_output(self, merged):
        """Generate the unified final output"""
        
        output_parts = []
        
        # Assemble the parts in order
        if 'architecture' in merged['components']:
            output_parts.append("## Architecture\n" + merged['components']['architecture'])
        
        if 'design' in merged['components']:
            output_parts.append("## UI/UX Design\n" + merged['components']['design'])
        
        if 'code' in merged['components']:
            output_parts.append("## Core Code\n" + merged['components']['code'])
        
        if 'test' in merged['components']:
            output_parts.append("## Test Plan\n" + merged['components']['test'])
        
        return "\n\n".join(output_parts)
```

#### E. Collaboration Orchestration Flow

```
Full workflow:

User task
    │
    ▼
┌──────────────────────┐
│ Main brain assesses  │
│ complexity           │
│ assess_complexity    │
└──────────┬───────────┘
           │
    ┌──────┴──────┐
    │             │
Simple task   Complex task
    │             │
    ▼             ▼
Run directly  ┌─────────────────────┐
              │ Task decomposition  │
              │ decompose_task      │
              └──────────┬──────────┘
                         │
                         ▼
              ┌─────────────────────┐
              │ Build dependency    │
              │ graph               │
              │ build_dependency    │
              └──────────┬──────────┘
                         │
           ┌─────────────┼──────────────┐
           │             │              │
   Group 1 parallel  Group 2 parallel  Group 3 serial
   (no deps)         (needs group 1)   (needs group 2)
           │             │              │
           ▼             ▼              ▼
    ┌──────────────────────────────────┐
    │      Sub-agent orchestrator      │
    │      spawn_and_coordinate        │
    │                                  │
    │  ┌─────┐  ┌─────┐  ┌─────┐      │
    │  │Agent│  │Agent│  │Agent│      │
    │  │  A  │  │  B  │  │  C  │      │
    │  └──┬──┘  └──┬──┘  └──┬──┘      │
    │     │        │        │         │
    │     └────────┴────────┘         │
    │              │                  │
    │  Shared Super Brain database    │
    │  (all read/write one brain.db)  │
    └──────────────┬───────────────────┘
                   │
                   ▼
          ┌─────────────────┐
          │  Result merger  │
          │  merge_results  │
          └────────┬────────┘
                   │
                   ▼
       Unified output to the user
```

#### F. Database Extensions

```sql
-- Agent task table
CREATE TABLE IF NOT EXISTS agent_tasks (
    id TEXT PRIMARY KEY,
    main_task_id TEXT,              -- main task ID
    subtask_id TEXT,                -- subtask ID
    agent_type TEXT,                -- design/architecture/code/test
    status TEXT CHECK(status IN ('pending', 'running', 'completed', 'failed')),
    started_at TIMESTAMP,
    completed_at TIMESTAMP,
    result_summary TEXT,
    shared_brain_snapshot TEXT      -- Super Brain snapshot taken at execution time
);

-- Agent output table (the shared-brain core)
CREATE TABLE IF NOT EXISTS agent_outputs (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    agent_id TEXT,
    subtask_id TEXT,
    output TEXT,                    -- JSON-formatted output
    timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    consumed_by TEXT                -- which agent read it
);

-- Agent collaboration log
CREATE TABLE IF NOT EXISTS agent_collaboration_log (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    main_task_id TEXT,
    from_agent TEXT,
    to_agent TEXT,
    action TEXT,                    -- write/read/notify
    data_ref TEXT,                  -- referenced data ID
    timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
```

## 💡 Intelligence Scenario Examples

### Scenario 1: Design a Complete AI App

```
User: Help me design an AI-powered learning app

Main brain: [analyze]
  → Complex task spanning design, architecture, code, and tests
  → Needs multi-agent collaboration

Main brain: [decompose]
  ├── Subtask A: UI/UX design            (design agent)
  ├── Subtask B: technical architecture  (architect agent)
  ├── Subtask C: core feature code       (coder agent)
  └── Subtask D: test cases              (test agent)

Main brain: [parallel execution]
  Wave 1: Agent A + Agent B (no dependencies, run in parallel)
  
  Agent A writes to the shared brain:
    - UI mockups
    - user flow diagrams
  
  Agent B writes to the shared brain:
    - database schema
    - API architecture diagram
  
  Wave 2: Agent C (depends on A + B)
  
  Agent C reads the shared brain:
    - sees the UI design → knows which screens to build
    - sees the architecture → knows the tech stack
  
  Agent C writes to the shared brain:
    - core code implementation
  
  Wave 3: Agent D (depends on C)
  
  Agent D reads the shared brain:
    - sees the code → writes matching tests

Main brain: [merge]
  → Integrates the four sub-agents' outputs
  → Checks consistency
  → Produces the complete plan

Main brain: [output]
  Your complete AI learning app plan:
  
  ## Architecture
  [architect agent's output]
  
  ## UI/UX Design
  [design agent's output]
  
  ## Core Code
  [coder agent's output]
  
  ## Test Plan
  [test agent's output]
  
  All modules are aligned; you can start implementing right away!
```

### Scenario 2: Research a Complex Question

```
User: Analyze 2024 AI trends and give me a report

Main brain: [decompose]
  ├── Subtask A: search for technical breakthroughs  (search agent)
  ├── Subtask B: analyze market data                 (analysis agent)
  ├── Subtask C: research policy and regulation      (research agent)
  └── Subtask D: assemble the report                 (writing agent)

[Run A+B+C in parallel, then D merges]

Main brain: [output]
  2024 AI trends report:
  
  ## Technical Breakthroughs
  [search agent's synthesis]
  
  ## Market Data
  [analysis agent's synthesis]
  
  ## Policy & Regulation
  [research agent's synthesis]
  
  ## Combined Forecast
  [writing agent's combined analysis]
```

## ⚠️ Cautions

1. **Complexity threshold**: not every task needs decomposition; run simple tasks directly
2. **Agent-count control**: keep the number of concurrently running agents within system limits
3. **Shared-brain synchronization**: ensure consistency when sub-agents read and write the shared brain
4. **Error isolation**: one failing sub-agent must not affect the others
5. **Resource management**: monitor sub-agents' execution time and resource consumption
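The error-isolation point can be sketched as running each sub-agent behind its own try/except so that one failure doesn't abort the group. A minimal sketch with illustrative names (the skill itself does not define this helper):

```python
def run_agents_isolated(agents):
    """Run each (name, callable) pair; collect results and failures separately."""
    results, errors = [], []
    for name, run in agents:
        try:
            results.append((name, run()))
        except Exception as exc:  # isolate the failing agent; keep the rest running
            errors.append((name, exc))
    return results, errors
```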

### Scenario 1: Recognize a Returning User

```
User: Continue the project from last time

AI: [queries Super Brain]
    → Finds active project "Super Brain skill design"
    → Last completed: database schema design
    → User preference: discuss before implementing
    
AI: Sure, continuing the Super Brain skill design.
    Last time we finished the database schema,
    so next up is the Phase 1 implementation.
    You've mentioned you like to discuss architecture first;
    shall we walk through the implementation plan?
```

### Scenario 2: Apply Known Preferences

```
User: Analyze this for me

AI: [queries Super Brain]
    → Profile: communication_style='concise'
    → Historical pattern: the user asks to "keep it simple" after long answers
    
AI: Sure. Three key takeaways:
    1. ...
    2. ...
    3. ...
    Which one should I expand?
```

### Scenario 3: Proactive Service

```
[At session start, Super Brain detects]
→ A reminder created 3 days ago: "Follow up on Super Brain implementation progress"
→ The current time is past trigger_at

AI: By the way, three days ago we were designing the Super Brain skill.
    Did you get to implement it? Anything I can help with?
```
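The trigger check behind this scenario can be sketched against the `pending_reminders` table defined earlier. The helper name is illustrative; timestamps are compared as ISO-format strings, which SQLite stores and orders lexicographically.

```python
import sqlite3
from datetime import datetime

def due_reminders(conn, user_id):
    """Return (id, content) for pending reminders whose trigger time has passed."""
    now = datetime.now().isoformat(sep=' ')
    return conn.execute(
        "SELECT id, content FROM pending_reminders "
        "WHERE user_id = ? AND status = 'pending' AND trigger_at <= ?",
        (user_id, now),
    ).fetchall()
```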

## ⚠️ Cautions

### Privacy & Security

1. **Local data storage**: Super Brain data is stored locally at `~/.openclaw/super-brain.db` by default and is never uploaded to the cloud
2. **Sensitive-information filtering**: passwords, ID numbers, and other sensitive data are automatically detected and filtered
3. **Data retention**: old data is auto-deleted after 90 days by default (configurable)
4. **Full user control**: users can view, export, or delete their data at any time
5. **Access transparency**: it's fine to tell users "I remember you mentioned...", but avoid being creepy about it

### Usage Recommendations

6. **Gradual learning**: don't assume one conversation fully reveals the user
7. **Pattern validation**: verify newly detected preferences across multiple sessions before confirming them
8. **Graceful degradation**: if a query fails, degrade gracefully without disrupting normal service
9. **Shared environments**: be cautious on public or shared machines; consider disabling Super Brain there

## 📚 Reference Docs

- [schema.sql](references/schema.sql) - complete database schema
- [workflow.md](references/workflow.md) - detailed workflows
- [examples.md](references/examples.md) - usage examples

---

*Make every conversation a better starting point*


---

## Referenced Files

> The following files are referenced in this skill and included for context.

### references/schema.sql

```sql
-- AI Super Brain - complete database schema
-- Run this script to initialize the Super Brain database

-- ============================================
-- 1. User profile
-- ============================================
CREATE TABLE IF NOT EXISTS user_profile (
    user_id TEXT PRIMARY KEY,
    
    -- Communication preferences
    communication_style TEXT CHECK(communication_style IN ('concise', 'detailed', 'balanced')),
    preferred_format TEXT CHECK(preferred_format IN ('table', 'list', 'paragraph', 'code')),
    emoji_usage TEXT CHECK(emoji_usage IN ('like', 'neutral', 'dislike')),
    
    -- Technical background
    technical_level TEXT CHECK(technical_level IN ('beginner', 'intermediate', 'advanced')),
    known_domains TEXT,                 -- JSON: ["Python", "Blockchain"]
    learning_goals TEXT,                -- JSON: ["Rust", "AI"]
    
    -- Behavior patterns
    decision_pattern TEXT CHECK(decision_pattern IN ('data_driven', 'intuitive', 'mixed')),
    thinking_depth TEXT CHECK(thinking_depth IN ('quick_iteration', 'deep_thinking')),
    response_speed TEXT CHECK(response_speed IN ('immediate', 'thoughtful')),
    
    -- Relationship evolution
    first_contact TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    total_sessions INTEGER DEFAULT 0,
    last_session TIMESTAMP,
    relationship_stage TEXT CHECK(relationship_stage IN ('stranger', 'familiar', 'attuned'))
);

-- ============================================
-- 2. Conversation insights
-- ============================================
CREATE TABLE IF NOT EXISTS conversation_insights (
    id TEXT PRIMARY KEY,
    user_id TEXT NOT NULL,
    session_id TEXT NOT NULL,
    timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    
    -- Session summary
    topic TEXT,
    session_goal TEXT,
    outcome TEXT,
    
    -- Key facts
    key_facts TEXT,                     -- JSON
    user_mood TEXT CHECK(user_mood IN ('excited', 'calm', 'frustrated', 'urgent', 'neutral')),
    energy_level INTEGER CHECK(energy_level BETWEEN 1 AND 10),
    
    -- Detected preferences
    preferences_detected TEXT,          -- JSON
    preferences_confirmed TEXT,         -- JSON
    
    -- Open questions
    unresolved_questions TEXT,          -- JSON
    pending_tasks TEXT,                 -- JSON
    
    -- AI self-assessment
    ai_understanding_score INTEGER CHECK(ai_understanding_score BETWEEN 1 AND 10),
    ai_helpfulness_score INTEGER CHECK(ai_helpfulness_score BETWEEN 1 AND 10),
    ai_efficiency_score INTEGER CHECK(ai_efficiency_score BETWEEN 1 AND 10),
    
    -- Links
    related_previous_sessions TEXT,     -- JSON
    related_projects TEXT               -- JSON
);

CREATE INDEX IF NOT EXISTS idx_insights_user ON conversation_insights(user_id);
CREATE INDEX IF NOT EXISTS idx_insights_session ON conversation_insights(session_id);
CREATE INDEX IF NOT EXISTS idx_insights_time ON conversation_insights(timestamp);

-- ============================================
-- 3. Response patterns
-- ============================================
CREATE TABLE IF NOT EXISTS response_patterns (
    id TEXT PRIMARY KEY,
    user_id TEXT NOT NULL,
    
    -- Pattern type
    pattern_type TEXT CHECK(pattern_type IN ('effective', 'ineffective', 'neutral')),
    
    -- Trigger conditions
    trigger_context TEXT,
    trigger_keywords TEXT,
    user_state TEXT,
    
    -- AI behavior
    what_i_did TEXT,
    response_format TEXT,
    response_length TEXT,
    
    -- User reaction
    user_reaction TEXT,
    user_feedback_explicit TEXT,
    user_feedback_implicit TEXT,
    
    -- Learning
    learned_lesson TEXT,
    alternative_approach TEXT,
    confidence_score REAL CHECK(confidence_score BETWEEN 0 AND 1),
    
    -- Statistics
    use_count INTEGER DEFAULT 1,
    success_count INTEGER DEFAULT 0,
    last_used TIMESTAMP,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

CREATE INDEX IF NOT EXISTS idx_patterns_user ON response_patterns(user_id);
CREATE INDEX IF NOT EXISTS idx_patterns_type ON response_patterns(pattern_type);

-- ============================================
-- 4. User projects
-- ============================================
CREATE TABLE IF NOT EXISTS user_projects (
    id TEXT PRIMARY KEY,
    user_id TEXT NOT NULL,
    
    -- Project info
    project_name TEXT NOT NULL,
    description TEXT,
    domain TEXT,
    
    -- Status
    status TEXT CHECK(status IN ('planning', 'active', 'paused', 'completed', 'abandoned')),
    priority INTEGER CHECK(priority BETWEEN 1 AND 5),
    
    -- Milestones
    start_date TIMESTAMP,
    target_date TIMESTAMP,
    milestones TEXT,                    -- JSON
    current_phase TEXT,
    
    -- Links
    related_insights TEXT,              -- JSON
    last_discussed TIMESTAMP,
    
    -- Notes
    key_decisions TEXT,                 -- JSON
    blockers TEXT,                      -- JSON
    next_steps TEXT                     -- JSON
);

CREATE INDEX IF NOT EXISTS idx_projects_user ON user_projects(user_id);
CREATE INDEX IF NOT EXISTS idx_projects_status ON user_projects(status);

-- ============================================
-- 5. Proactive service queue
-- ============================================
CREATE TABLE IF NOT EXISTS pending_reminders (
    id TEXT PRIMARY KEY,
    user_id TEXT NOT NULL,
    
    -- Reminder type
    reminder_type TEXT CHECK(reminder_type IN ('follow_up', 'suggestion', 'checkpoint', 'pattern_alert', 'milestone_celebrate')),
    
    -- Content
    content TEXT NOT NULL,
    trigger_reason TEXT,
    
    -- Trigger conditions
    trigger_at TIMESTAMP,
    trigger_condition TEXT,
    
    -- Context
    context_session_id TEXT,
    context_insight_id TEXT,
    
    -- Execution state
    status TEXT DEFAULT 'pending' CHECK(status IN ('pending', 'sent', 'dismissed')),
    sent_at TIMESTAMP,
    user_response TEXT
);

CREATE INDEX IF NOT EXISTS idx_reminders_user ON pending_reminders(user_id);
CREATE INDEX IF NOT EXISTS idx_reminders_status ON pending_reminders(status);
CREATE INDEX IF NOT EXISTS idx_reminders_trigger ON pending_reminders(trigger_at);

-- ============================================
-- 6. Behavior patterns
-- ============================================
CREATE TABLE IF NOT EXISTS behavior_patterns (
    id TEXT PRIMARY KEY,
    user_id TEXT NOT NULL,
    
    -- Pattern description
    pattern_name TEXT,
    pattern_description TEXT,
    pattern_category TEXT CHECK(pattern_category IN ('communication', 'decision', 'learning', 'emotion')),
    
    -- Identifying signals
    trigger_signals TEXT,               -- JSON
    manifestation TEXT,
    frequency TEXT CHECK(frequency IN ('always', 'often', 'sometimes', 'rarely')),
    
    -- Impact
    positive_impact TEXT,
    negative_impact TEXT,
    
    -- Response strategy
    best_response_strategy TEXT,
    things_to_avoid TEXT,
    
    -- Statistics
    observed_count INTEGER DEFAULT 1,
    first_observed TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    last_observed TIMESTAMP,
    confirmed BOOLEAN DEFAULT 0
);

CREATE INDEX IF NOT EXISTS idx_behavior_user ON behavior_patterns(user_id);

-- ============================================
-- 7. Privacy settings
-- ============================================
CREATE TABLE IF NOT EXISTS privacy_settings (
    user_id TEXT PRIMARY KEY,
    store_conversations BOOLEAN DEFAULT 1,
    store_mood BOOLEAN DEFAULT 1,
    store_detailed_facts BOOLEAN DEFAULT 0,
    auto_delete_days INTEGER DEFAULT 90,
    sensitive_filter_enabled BOOLEAN DEFAULT 1,
    encryption_enabled BOOLEAN DEFAULT 0,
    last_updated TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- ============================================
-- 8. Data-access audit log
-- ============================================
CREATE TABLE IF NOT EXISTS data_access_log (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    user_id TEXT NOT NULL,
    access_type TEXT CHECK(access_type IN ('read', 'write', 'delete', 'export')),
    accessed_by TEXT,               -- system component / user command
    access_reason TEXT,
    records_affected INTEGER,
    timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

CREATE INDEX IF NOT EXISTS idx_access_log_user ON data_access_log(user_id);
CREATE INDEX IF NOT EXISTS idx_access_log_time ON data_access_log(timestamp);

-- ============================================
-- 9. Intelligent decision records
-- ============================================
CREATE TABLE IF NOT EXISTS intelligent_decisions (
    id TEXT PRIMARY KEY,
    user_id TEXT NOT NULL,
    
    -- Decision info
    decision_context TEXT,          -- description of the decision scenario
    decision_type TEXT CHECK(decision_type IN ('recommendation', 'prediction', 'optimization', 'support')),
    
    -- AI suggestion
    ai_suggestion TEXT,
    ai_reasoning TEXT,              -- the AI's reasoning process
    confidence REAL CHECK(confidence BETWEEN 0 AND 1),
    
    -- User choice
    user_choice TEXT,               -- what the user actually chose
    user_feedback TEXT,             -- user feedback
    
    -- Outcome evaluation
    outcome_score INTEGER CHECK(outcome_score BETWEEN 1 AND 10),
    outcome_notes TEXT,
    
    -- Learning
    prediction_accuracy BOOLEAN,    -- whether the prediction was correct
    lesson_learned TEXT,            -- lesson learned
    
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    resolved_at TIMESTAMP
);

CREATE INDEX IF NOT EXISTS idx_decisions_user ON intelligent_decisions(user_id);
CREATE INDEX IF NOT EXISTS idx_decisions_type ON intelligent_decisions(decision_type);

-- ============================================
-- 10. Prediction-accuracy tracking
-- ============================================
CREATE TABLE IF NOT EXISTS prediction_accuracy (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    user_id TEXT NOT NULL,
    prediction_type TEXT,           -- question/timing/need
    prediction_content TEXT,        -- what was predicted
    actual_outcome TEXT,            -- what actually happened
    was_correct BOOLEAN,
    confidence REAL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

CREATE INDEX IF NOT EXISTS idx_accuracy_user ON prediction_accuracy(user_id);

-- ============================================
-- 12. Agent task management (swarm intelligence)
-- ============================================
CREATE TABLE IF NOT EXISTS agent_tasks (
    id TEXT PRIMARY KEY,
    main_task_id TEXT,              -- main task ID
    subtask_id TEXT,                -- subtask ID
    agent_type TEXT,                -- design/architecture/code/test/search/analysis
    status TEXT CHECK(status IN ('pending', 'running', 'completed', 'failed')),
    started_at TIMESTAMP,
    completed_at TIMESTAMP,
    result_summary TEXT,
    shared_brain_snapshot TEXT,     -- Super Brain snapshot at execution time
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

CREATE INDEX IF NOT EXISTS idx_agent_tasks_main ON agent_tasks(main_task_id);
CREATE INDEX IF NOT EXISTS idx_agent_tasks_status ON agent_tasks(status);

-- ============================================
-- 13. Agent output sharing (shared-brain core)
-- ============================================
CREATE TABLE IF NOT EXISTS agent_outputs (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    agent_id TEXT NOT NULL,
    subtask_id TEXT NOT NULL,
    output TEXT,                    -- JSON-encoded output
    timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    consumed_by TEXT                -- which agent read this output
);

CREATE INDEX IF NOT EXISTS idx_agent_outputs_subtask ON agent_outputs(subtask_id);

-- ============================================
-- 14. Agent collaboration log
-- ============================================
CREATE TABLE IF NOT EXISTS agent_collaboration_log (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    main_task_id TEXT NOT NULL,
    from_agent TEXT,
    to_agent TEXT,
    action TEXT CHECK(action IN ('write', 'read', 'notify', 'complete')),
    data_ref TEXT,                  -- referenced data ID
    timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

CREATE INDEX IF NOT EXISTS idx_collab_main ON agent_collaboration_log(main_task_id);

-- ============================================
-- 16. Self-evolution log
-- ============================================
CREATE TABLE IF NOT EXISTS self_evolution_log (
    id TEXT PRIMARY KEY,
    user_id TEXT,
    evolution_type TEXT CHECK(evolution_type IN (
        'performance_analysis',
        'prompt_update',
        'skill_create',
        'knowledge_gain',
        'strategy_change'
    )),
    before_state TEXT,        -- JSON: state before the improvement
    after_state TEXT,         -- JSON: state after the improvement
    improvements TEXT,        -- JSON: improvement suggestions
    improvement_score REAL,   -- effectiveness score of the improvement
    applied BOOLEAN DEFAULT 0,
    timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

CREATE INDEX IF NOT EXISTS idx_evolution_user ON self_evolution_log(user_id);
CREATE INDEX IF NOT EXISTS idx_evolution_type ON self_evolution_log(evolution_type);

-- ============================================
-- 17. Knowledge-gap tracking
-- ============================================
CREATE TABLE IF NOT EXISTS knowledge_gaps (
    id TEXT PRIMARY KEY,
    user_id TEXT,
    topic TEXT NOT NULL,
    context TEXT,             -- triggering context
    frequency INTEGER DEFAULT 1,
    last_occurred TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    priority REAL DEFAULT 0.5,
    suggested_action TEXT,
    status TEXT CHECK(status IN ('detected', 'learning', 'resolved')),
    resolution_notes TEXT
);

CREATE INDEX IF NOT EXISTS idx_gaps_user ON knowledge_gaps(user_id);
CREATE INDEX IF NOT EXISTS idx_gaps_topic ON knowledge_gaps(topic);

-- ============================================
-- 18. Skill version history
-- ============================================
CREATE TABLE IF NOT EXISTS skill_versions (
    id TEXT PRIMARY KEY,
    skill_name TEXT NOT NULL,
    version TEXT NOT NULL,
    changes TEXT,             -- JSON: what changed
    performance_before REAL,
    performance_after REAL,
    performance_delta REAL,
    created_by TEXT,          -- 'auto' or 'manual'
    timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

CREATE INDEX IF NOT EXISTS idx_versions_skill ON skill_versions(skill_name);

-- ============================================
-- 19. Performance-metric tracking
-- ============================================
CREATE TABLE IF NOT EXISTS performance_metrics (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    user_id TEXT,
    metric_name TEXT,         -- understanding_accuracy, user_satisfaction, etc.
    metric_value REAL,
    period_start TIMESTAMP,
    period_end TIMESTAMP,
    sample_size INTEGER,
    timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

CREATE INDEX IF NOT EXISTS idx_metrics_user ON performance_metrics(user_id);
CREATE INDEX IF NOT EXISTS idx_metrics_name ON performance_metrics(metric_name);

-- ============================================
-- 20. Improvement-suggestion queue
-- ============================================
CREATE TABLE IF NOT EXISTS improvement_queue (
    id TEXT PRIMARY KEY,
    user_id TEXT,
    improvement_type TEXT CHECK(improvement_type IN (
        'prompt_update',
        'skill_create',
        'knowledge_learn',
        'strategy_adjust'
    )),
    priority INTEGER,         -- 1 = highest, 5 = lowest
    description TEXT,
    rationale TEXT,           -- why the improvement is needed
    expected_impact TEXT,
    status TEXT CHECK(status IN ('pending', 'approved', 'applied', 'rejected')),
    applied_at TIMESTAMP,
    result TEXT,              -- outcome after applying
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

CREATE INDEX IF NOT EXISTS idx_improvement_user ON improvement_queue(user_id);
CREATE INDEX IF NOT EXISTS idx_improvement_status ON improvement_queue(status);

-- ============================================
-- 21. System configuration
-- ============================================
CREATE TABLE IF NOT EXISTS super_brain_config (
    key TEXT PRIMARY KEY,
    value TEXT,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

INSERT OR IGNORE INTO super_brain_config (key, value) VALUES
('version', '2.0'),
('initialized_at', datetime('now')),
('total_users', '0'),
('total_insights', '0'),
('privacy_policy_version', '1.0'),
('swarm_intelligence_enabled', 'true'),
('self_evolution_enabled', 'true');

```
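The seed block above uses `INSERT OR IGNORE`, so re-running initialization never overwrites existing config values. A minimal self-contained demonstration, assuming only a one-table subset of the schema:

```python
import sqlite3

# In-memory copy of just the config table from the schema above.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE IF NOT EXISTS super_brain_config (
    key TEXT PRIMARY KEY,
    value TEXT,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP)""")

for _ in range(2):  # run the seed twice, as a re-initialization would
    conn.execute(
        "INSERT OR IGNORE INTO super_brain_config (key, value) VALUES (?, ?)",
        ("version", "2.0"))

# A later attempt with a conflicting value is silently ignored too.
conn.execute(
    "INSERT OR IGNORE INTO super_brain_config (key, value) VALUES (?, ?)",
    ("version", "9.9"))

value, = conn.execute(
    "SELECT value FROM super_brain_config WHERE key = 'version'").fetchone()
print(value)  # prints 2.0
```

This is why `init_db.py` below can call `executescript` on the whole schema without guarding the seed rows separately.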

### references/workflow.md

```markdown
# AI Super Brain - Detailed Workflow

## Flowchart

```
┌─────────────────────────────────────────────────────────────┐
│                        Session start                        │
└─────────────────────────┬───────────────────────────────────┘
                          │
                          ▼
              ┌───────────────────────┐
              │  1. Load user profile │
              │  query user_profile   │
              └───────────┬───────────┘
                          │
              ┌───────────┴───────────┐
              │                       │
          New user               Known user
              │                       │
              ▼                       ▼
    ┌─────────────────┐    ┌─────────────────┐
    │ Create default  │    │ Apply known     │
    │ profile         │    │ preferences     │
    │ • communication │    │ • reply style   │
    │   _style=NULL   │    │ • tech level    │
    │ • technical_    │    │ • decision mode │
    │   level=NULL    │    │                 │
    └────────┬────────┘    └────────┬────────┘
             │                      │
             └──────────┬───────────┘
                        │
                        ▼
            ┌───────────────────────┐
            │  2. Check pending     │
            │  reminders: query     │
            │  pending_reminders    │
            └───────────┬───────────┘
                        │
            ┌───────────┴───────────┐
            │                       │
      Reminders due            None pending
            │                       │
            ▼                       ▼
    ┌─────────────────┐    ┌─────────────────┐
    │ Decide whether  │    │ Continue to     │
    │ to raise it     │    │ next step       │
    │ • good timing?  │    │                 │
    │ • important?    │    │                 │
    └────────┬────────┘    └─────────────────┘
             │
             ▼
    ┌─────────────────┐
    │ Raise it at a   │
    │ good moment and │
    │ mark it as sent │
    └─────────────────┘
                          │
                          ▼
              ┌───────────────────────┐
              │  3. Load project      │
              │  context: query       │
              │  user_projects        │
              └───────────┬───────────┘
                          │
              ┌───────────┴───────────┐
              │                       │
        Active project           No project
              │                       │
              ▼                       ▼
    ┌─────────────────┐    ┌─────────────────┐
    │ Load project    │    │ Continue to     │
    │ state           │    │ next step       │
    │ • current phase │    │                 │
    │ • open todos    │    │                 │
    │ • key decisions │    │                 │
    └─────────────────┘    └─────────────────┘
                          │
                          ▼
              ┌───────────────────────┐
              │  4. Load effective    │
              │  patterns: query      │
              │  response_patterns    │
              │ WHERE type='effective'│
              └───────────┬───────────┘
                          │
                          ▼
              ┌───────────────────────┐
              │   Start conversation  │
              └───────────────────────┘
```

## In-Conversation Flow

```
┌─────────────────────────────────────────────────────────────┐
│                    User message received                    │
└─────────────────────────┬───────────────────────────────────┘
                          │
                          ▼
              ┌───────────────────────┐
              │  1. Semantic parsing  │
              │  • extract intent     │
              │  • detect mood        │
              └───────────┬───────────┘
                          │
                          ▼
              ┌───────────────────────┐
              │  2. Query relevant    │
              │  history (if needed)  │
              └───────────┬───────────┘
                          │
              ┌───────────┴───────────┐
              │                       │
   User asks about "earlier"     New question
              │                       │
              ▼                       ▼
    ┌─────────────────┐    ┌─────────────────┐
    │ Semantic search │    │ Optional: check │
    │ of conversation_│    │ if a similar    │
    │ insights        │    │ question came up│
    └─────────────────┘    └─────────────────┘
                          │
                          ▼
              ┌───────────────────────┐
              │  3. Generate reply,   │
              │  applying known prefs │
              └───────────┬───────────┘
                          │
                          ▼
              ┌───────────────────────┐
              │  4. Real-time checks  │
              │  • preference signals │
              │  • mood changes       │
              │  • project info       │
              └───────────┬───────────┘
                          │
                          ▼
              ┌───────────────────────┐
              │  5. Store short-term  │
              │  memory (not yet      │
              │  persisted)           │
              └───────────────────────┘
```

## Session-End Flow

```
┌─────────────────────────────────────────────────────────────┐
│                         Session end                         │
└─────────────────────────┬───────────────────────────────────┘
                          │
                          ▼
              ┌───────────────────────┐
              │  1. Build session     │
              │  summary:             │
              │  • topics covered     │
              │  • goal completion    │
              │  • key decisions      │
              │  • open issues        │
              └───────────┬───────────┘
                          │
                          ▼
              ┌───────────────────────┐
              │  2. Store session     │
              │  insights:            │
              │  INSERT INTO          │
              │  conversation_insights│
              └───────────┬───────────┘
                          │
                          ▼
              ┌───────────────────────┐
              │  3. Learn response    │
              │  patterns:            │
              │  • effective vs not   │
              │  • update response_   │
              │    patterns           │
              └───────────┬───────────┘
                          │
                          ▼
              ┌───────────────────────┐
              │  4. Update profile:   │
              │  • confirm new prefs  │
              │  • refresh stats      │
              └───────────┬───────────┘
                          │
                          ▼
              ┌───────────────────────┐
              │  5. Create follow-up  │
              │  reminders (if tasks  │
              │  remain open)         │
              └───────────┬───────────┘
                          │
                          ▼
              ┌───────────────────────┐
              │  6. Reflect:          │
              │  • understanding      │
              │    accuracy           │
              │  • helpfulness        │
              │  • improvement ideas  │
              └───────────┬───────────┘
                          │
                          ▼
              ┌───────────────────────┐
              │         Done          │
              └───────────────────────┘
```

## Scheduled Tasks (cron-triggered)

### Daily
```
00:00 trigger
├── Summarize all of today's sessions
├── Identify new behavior patterns
├── Generate reminder suggestions for tomorrow
└── Clear expired caches
```

### Weekly
```
Sunday 23:00 trigger
├── Weekly deep-dive summary
├── Relationship-evolution assessment
├── Learning-effectiveness review
└── Generate an improvement plan
```
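These schedules could be wired up with ordinary crontab entries. The script names below are hypothetical placeholders, not files shipped with this skill:

```shell
# Hypothetical crontab entries (script names are placeholders).
# Daily maintenance at midnight:
0 0 * * * python3 ~/.openclaw/skills/super-brain/scripts/daily_summary.py
# Weekly deep-dive summary, Sunday 23:00:
0 23 * * 0 python3 ~/.openclaw/skills/super-brain/scripts/weekly_review.py
```

Whatever scripts are used, they should open the same `~/.openclaw/super-brain.db` database the session hooks write to.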

## Key Decision Points

### When to bring up history?
- ✅ The user explicitly asks about "earlier" or "last time"
- ✅ The current topic is strongly related to past sessions
- ✅ An outstanding follow-up reminder exists
- ❌ Forcing a link to an unrelated topic
- ❌ Too old (>3 months) and irrelevant

### How to apply preferences?
```
confirmed_preferences     → apply directly
newly_detected (once)     → adjust slightly, watch the reaction
newly_detected (2+ times) → factor into responses
contradictory signals     → ask the user to confirm
```
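The preference table above can be sketched as a small dispatch function. Note that `status` and `detections` are assumed bookkeeping values, not columns defined in this skill's schema:

```python
def preference_action(status: str, detections: int = 0) -> str:
    """Map a preference's evidence level to how strongly to apply it.

    Mirrors the decision table above; 'status' and 'detections' are
    hypothetical fields the caller would maintain per preference."""
    if status == "confirmed":
        return "apply directly"
    if status == "contradictory":
        return "ask user to confirm"
    if status == "newly_detected":
        # one sighting: probe gently; two or more: start factoring it in
        return "factor in" if detections >= 2 else "adjust slightly"
    return "ignore"

print(preference_action("newly_detected", 1))  # prints: adjust slightly
```

Keeping this logic in one place makes it easy to audit why a given preference was (or was not) applied in a session.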

### When to serve proactively?
- A project deadline is approaching
- A task the user committed to is due
- A repeated question is detected (a learning opportunity)
- The user's mood is low (show appropriate care)
- A milestone is reached (celebrate it)

## Error Handling

```
Database query fails
    ↓
Log the error
    ↓
Degrade gracefully (skip mentioning history)
    ↓
Continue serving normally

ChromaDB unavailable
    ↓
Fall back to SQLite LIKE queries
    ↓
Possibly lower precision, but no interruption
```
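The ChromaDB fallback could look roughly like this. The `chroma` client and its `query` call are assumptions modeled on the ChromaDB collection API; only the SQLite branch is concrete:

```python
import sqlite3

def search_insights(conn, user_id, query, chroma=None):
    """Semantic search with graceful degradation.

    Try the vector store first; fall back to a SQLite LIKE scan over
    conversation_insights if it is unavailable. 'chroma' is an assumed
    ChromaDB collection object (may be None)."""
    if chroma is not None:
        try:
            return chroma.query(query_texts=[query], n_results=5)
        except Exception:
            pass  # degrade silently, as the workflow above describes
    cur = conn.execute(
        "SELECT topic, key_facts FROM conversation_insights "
        "WHERE user_id = ? AND (topic LIKE ? OR key_facts LIKE ?)",
        (user_id, f"%{query}%", f"%{query}%"),
    )
    return cur.fetchall()
```

Callers never need to know which path answered; the result is just less precise when only the LIKE scan is available.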

```

### references/examples.md

```markdown
# AI Super Brain - Usage Examples

## Example 1: A new user's first conversation

**User:** Hi, help me write a Python script

**AI internal flow:**
```python
# 1. Check the user profile
profile = query("SELECT * FROM user_profile WHERE user_id = ?", [user_id])
# Result: None (new user)

# 2. Create a profile
create_profile(user_id)

# 3. Start the conversation (no historical preferences yet)
# ... provide Python help ...

# 4. Detect preferences during the conversation
# User says "give me the full code" -> technical_level='intermediate'
# User says "keep it simple"        -> communication_style='concise'

# 5. Store insights at session end
insert_insight(
    user_id=user_id,
    topic="Python scripting",
    key_facts={"need": "automation script", "language": "Python"},
    preferences_detected={"communication_style": "concise"}
)
```

**AI reply:**
> Hi! I can help you write a Python script.
> What should it do: data processing, task automation, or something else?

---

## Example 2: A returning user resumes a project

**User:** Continue the project from last time

**AI internal flow:**
```python
# 1. Load the profile
profile = query("SELECT * FROM user_profile WHERE user_id = ?", [user_id])
# Result: exists, communication_style='concise', known_domains=["Python", "databases"]

# 2. Check projects
projects = query("SELECT * FROM user_projects WHERE user_id = ? AND status = 'active'", [user_id])
# Result: project_name='Super Brain skill design', current_phase='database schema'

# 3. Check reminders
reminders = query("SELECT * FROM pending_reminders WHERE user_id = ? AND status = 'pending'", [user_id])
# Result: content='Follow up on Super Brain implementation progress', trigger_at=3 days ago

# 4. Apply preferences
set_response_style(brief=True)  # the user prefers concise replies

# 5. Build a context-aware reply
```

**AI reply:**
> OK, continuing the **Super Brain skill design** project.
>
> Last time we finished the database schema; next up is the Phase 1 implementation.
>
> By the way, three days ago you said you'd start implementing. How did it go?
> Want me to begin with the initialization script?

---

## Example 3: Learning a user preference

**Conversation 1:**
- AI: [long answer]
- User: "Keep it short"
- AI: [detects a preference shift, stores communication_style='concise']

**Conversation 2:**
- AI: [concise answer]
- User: "Got it, thanks!"
- AI: [detects positive feedback, confirms the preference works]

**Conversation 3:**
- AI: [automatically uses the concise style]
- User: [continues normally, no correction]
- AI: [preference is stable, moved into confirmed_preferences]

---

## Example 4: Proactive service

**Scenario:** Three days ago the user said they would finish a task

**AI detection:**
```python
# At session start
reminder = query("""
    SELECT * FROM pending_reminders 
    WHERE user_id = ? 
    AND reminder_type = 'follow_up'
    AND trigger_at < now()
    AND status = 'pending'
""")

if reminder:
    raise_reminder(reminder.content)
    update_reminder_status(reminder.id, 'sent')
```

**AI proactively says:**
> By the way, three days ago you said you'd finish XX. Did you get to it?
> Anything I can help with?

---

## Example 5: Project tracking

**User:** I want to build a new app

**AI flow:**
```python
# Create the project
create_project(
    user_id=user_id,
    project_name="New app",
    description="App the user wants to build",
    status="planning",
    milestones=[
        {"name": "Requirements analysis", "done": False},
        {"name": "Prototype design", "done": False},
        {"name": "Development", "done": False}
    ]
)

# Update in later conversations
update_project_milestone(project_id, "Requirements analysis", done=True)
add_project_decision(project_id, "Chose React Native for cross-platform")
```

---

## Example 6: Reflection and improvement

**Failure scenario:**
- AI: [detailed technical explanation]
- User: "I don't get it, use plain language"
- AI: 
```python
# Store the ineffective pattern
insert_pattern(
    pattern_type='ineffective',
    trigger_context='technical question',
    what_i_did='detailed technical explanation',
    user_reaction='correction',
    learned_lesson="confirm the user's technical level first",
    alternative_approach='start with a plain analogy, then go deeper'
)

# Update the profile
update_profile(user_id, {'technical_level': 'beginner'})
```

**Next time:**
- AI: [explains in plain terms first, goes deeper once understood]
- User: "Got it!"
- AI: [pattern works, success_count + 1]

---

## Example 7: Cross-session memory

**Last week:**
- User: "I'm learning Rust"
- AI: [stores learning_goals=["Rust"]]

**This week:**
- User: "Can you look over some code for me?"
- AI: 
```python
profile = load_profile(user_id)
if "Rust" in profile['learning_goals']:
    # Default to Rust examples
    response = generate_response(language_hint='Rust')
```

**AI reply:**
> Sure. Here's how that logic could look in Rust...
> (I remember you've been learning Rust lately)

```

### scripts/init_db.py

```python
#!/usr/bin/env python3
"""
AI Super Brain - database initialization script
Run: python init_db.py [db_path]
Default path: ./brain.db
"""

import sqlite3
import sys
import os

DEFAULT_DB_PATH = "brain.db"
SCHEMA_PATH = os.path.join(os.path.dirname(__file__), "../references/schema.sql")

def init_database(db_path=DEFAULT_DB_PATH):
    """Initialize the Super Brain database."""
    
    # Refuse to clobber an existing database without confirmation
    if os.path.exists(db_path):
        print(f"⚠️  Database already exists: {db_path}")
        response = input("Re-initialize? (this erases all data) [y/N]: ")
        if response.lower() != 'y':
            print("Aborted")
            return
        os.remove(db_path)
        print("🗑️  Old database removed")
    
    # Create the parent directory if needed
    os.makedirs(os.path.dirname(db_path) if os.path.dirname(db_path) else '.', exist_ok=True)
    
    # Connect to the database
    conn = sqlite3.connect(db_path)
    cursor = conn.cursor()
    
    # Read and execute the schema
    with open(SCHEMA_PATH, 'r', encoding='utf-8') as f:
        schema = f.read()
    
    cursor.executescript(schema)
    conn.commit()
    
    # Verify table creation
    cursor.execute("SELECT name FROM sqlite_master WHERE type='table'")
    tables = [row[0] for row in cursor.fetchall()]
    
    expected_tables = [
        'user_profile',
        'conversation_insights', 
        'response_patterns',
        'user_projects',
        'pending_reminders',
        'behavior_patterns',
        'super_brain_config'
    ]
    
    print(f"✅ Database initialized: {db_path}")
    print(f"📊 Created {len(tables)} tables:")
    for table in expected_tables:
        status = "✓" if table in tables else "✗"
        print(f"  {status} {table}")
    
    # Show the configuration
    cursor.execute("SELECT key, value FROM super_brain_config")
    configs = cursor.fetchall()
    print("\n⚙️  Config:")
    for key, value in configs:
        print(f"  {key}: {value}")
    
    conn.close()
    print("\n🚀 Super Brain is ready!")

if __name__ == "__main__":
    db_path = sys.argv[1] if len(sys.argv) > 1 else DEFAULT_DB_PATH
    init_database(db_path)

```



---

## Skill Companion Files

> Additional files collected from the skill directory layout.

### _meta.json

```json
{
  "owner": "aboutyao",
  "slug": "super-brain",
  "displayName": "Super Brain",
  "latest": {
    "version": "1.0.0",
    "publishedAt": 1773161985247,
    "commit": "https://github.com/openclaw/skills/commit/ae9e796dbd025e46c0cc95220c7b01c2ba25ec70"
  },
  "history": []
}

```

### scripts/auto_record.py

```python
#!/usr/bin/env python3
"""
Super Brain auto-record script.
Runs after every conversation to record insights, learn response patterns,
and detect knowledge gaps.
"""

import sqlite3
import json
import re
from datetime import datetime
from pathlib import Path

DB_PATH = Path.home() / '.openclaw' / 'super-brain.db'

def get_connection():
    return sqlite3.connect(DB_PATH)

def extract_insight(user_msg, ai_msg):
    """Extract an insight from one user/AI exchange."""
    return {
        'topic': extract_topic(user_msg),
        'key_facts': extract_facts(user_msg, ai_msg),
        'mood': detect_mood(user_msg),
        'timestamp': datetime.now().isoformat()
    }

def extract_topic(msg):
    """Extract the topic via simple keyword matching.

    The keywords are Chinese because this skill targets Chinese-language
    conversations; they mean roughly: super brain, skill, database, code,
    design, AI, learning, project."""
    keywords = ['超脑', '技能', '数据库', '代码', '设计', 'AI', '学习', '项目']
    for kw in keywords:
        if kw in msg:
            return kw
    return 'general'

def extract_facts(user_msg, ai_msg):
    """Extract key facts (matched phrases are Chinese preference signals)."""
    facts = []
    if '简洁' in user_msg or '简单' in user_msg:  # "concise" / "simple"
        facts.append('prefers concise replies')
    if '详细' in user_msg or '展开' in user_msg:  # "detailed" / "expand"
        facts.append('prefers detailed explanations')
    if '表格' in user_msg:                        # "table"
        facts.append('prefers tabular form')
    return facts

def detect_mood(msg):
    """Detect the user's mood from Chinese sentiment keywords."""
    positive = ['好', '赞', '👍', '谢谢', '太棒', '完美']
    negative = ['不对', '错', '差', '糟糕', '烦']
    confused = ['?', '?', '不懂', '没懂', '什么意思']
    
    if any(s in msg for s in positive):
        return 'positive'
    elif any(s in msg for s in negative):
        return 'negative'
    elif any(s in msg for s in confused):
        return 'confused'
    return 'neutral'

def classify_response(user_reaction):
    """Classify how the user reacted (keyword lists are Chinese signals)."""
    effective = ['谢谢', '好的', '明白了', '对', '赞', '👍', '完美', '太棒', '可以', '懂了']
    ineffective = ['不对', '错了', '没懂', '不是', '???', '再说', '不明白']
    
    reaction_lower = user_reaction.lower()
    
    # Check ineffective first so positive keywords can't mask a complaint
    if any(s in reaction_lower for s in ineffective):
        return 'ineffective'
    elif any(s in reaction_lower for s in effective):
        return 'effective'
    return 'neutral'

def record_session(user_id, session_id, messages):
    """Record a full session (messages alternate user/AI)."""
    conn = get_connection()
    cursor = conn.cursor()
    
    try:
        # 1. Record one insight per user/AI exchange
        for i in range(0, len(messages), 2):
            user_msg = messages[i] if i < len(messages) else ''
            ai_msg = messages[i+1] if i+1 < len(messages) else ''
            
            insight = extract_insight(user_msg, ai_msg)
            
            cursor.execute('''
                INSERT INTO conversation_insights 
                (id, user_id, session_id, topic, key_facts, user_mood, timestamp)
                VALUES (?, ?, ?, ?, ?, ?, ?)
            ''', [
                f'insight-{session_id}-{i//2}',
                user_id,
                session_id,
                insight['topic'],
                json.dumps(insight['key_facts']),
                insight['mood'],
                insight['timestamp']
            ])
        
        # 2. Record response patterns (AI reply followed by user reaction)
        for i in range(1, len(messages)-1, 2):
            if i+1 < len(messages):
                ai_response = messages[i]
                user_reaction = messages[i+1]
                
                pattern_type = classify_response(user_reaction)
                
                if pattern_type != 'neutral':
                    cursor.execute('''
                        INSERT INTO response_patterns 
                        (id, user_id, pattern_type, trigger_context, what_i_did, 
                         user_reaction, timestamp)
                        VALUES (?, ?, ?, ?, ?, ?, ?)
                    ''', [
                        f'pattern-{session_id}-{i//2}',
                        user_id,
                        pattern_type,
                        messages[i-1] if i > 0 else '',
                        ai_response[:200],  # store only the first 200 characters
                        user_reaction,
                        datetime.now().isoformat()
                    ])
        
        # 3. Update the session counter
        cursor.execute('''
            UPDATE user_profile 
            SET total_sessions = total_sessions + 1,
                last_session = ?
            WHERE user_id = ?
        ''', [datetime.now().isoformat(), user_id])
        
        conn.commit()
        print(f'✅ Session {session_id} recorded')
        
    except Exception as e:
        print(f'❌ Recording failed: {e}')
        conn.rollback()
    finally:
        conn.close()

def detect_knowledge_gap(user_msg, ai_msg, user_id):
    """Detect a knowledge gap (signals are Chinese dissatisfaction markers)."""
    signals = ['不对', '错了', '不是', '不懂', '?', '???', '没明白']
    
    if any(s in user_msg for s in signals):
        conn = get_connection()
        cursor = conn.cursor()
        
        topic = extract_topic(user_msg)
        
        # Check whether this gap is already tracked
        cursor.execute('''
            SELECT id, frequency FROM knowledge_gaps 
            WHERE user_id = ? AND topic = ? AND status = 'detected'
        ''', [user_id, topic])
        
        existing = cursor.fetchone()
        
        if existing:
            # Bump the frequency
            cursor.execute('''
                UPDATE knowledge_gaps 
                SET frequency = frequency + 1, last_occurred = ?
                WHERE id = ?
            ''', [datetime.now().isoformat(), existing[0]])
        else:
            # Insert a new gap
            cursor.execute('''
                INSERT INTO knowledge_gaps 
                (id, user_id, topic, context, frequency, suggested_action)
                VALUES (?, ?, ?, ?, 1, ?)
            ''', [
                f'gap-{datetime.now().strftime("%Y%m%d%H%M%S")}',
                user_id,
                topic,
                user_msg[:100],
                f'Suggest studying up on {topic}'
            ])
        
        conn.commit()
        conn.close()
        print(f'⚠️ Knowledge gap detected: {topic}')

def meta_analyze_session(user_id, session_id):
    """Meta-cognitive analysis of a session."""
    conn = get_connection()
    cursor = conn.cursor()
    
    # Fetch this session's insights
    cursor.execute('''
        SELECT * FROM conversation_insights 
        WHERE user_id = ? AND session_id = ?
    ''', [user_id, session_id])
    
    insights = cursor.fetchall()
    
    # Fetch response patterns from the last day
    cursor.execute('''
        SELECT pattern_type, COUNT(*) as cnt 
        FROM response_patterns 
        WHERE user_id = ? AND timestamp > datetime('now', '-1 day')
        GROUP BY pattern_type
    ''', [user_id])
    
    patterns = {row[0]: row[1] for row in cursor.fetchall()}
    
    # Compute the effectiveness rate
    total = patterns.get('effective', 0) + patterns.get('ineffective', 0)
    effective_rate = patterns.get('effective', 0) / total if total > 0 else 0.5
    
    # Log to the self-evolution journal
    cursor.execute('''
        INSERT INTO self_evolution_log 
        (id, user_id, evolution_type, after_state, improvement_score)
        VALUES (?, ?, ?, ?, ?)
    ''', [
        f'evo-{datetime.now().strftime("%Y%m%d%H%M%S")}',
        user_id,
        'performance_analysis',
        json.dumps({
            'effective_rate': effective_rate,
            'insights_count': len(insights),
            'patterns': patterns
        }),
        effective_rate
    ])
    
    # Record the performance metric
    cursor.execute('''
        INSERT INTO performance_metrics 
        (user_id, metric_name, metric_value)
        VALUES (?, ?, ?)
    ''', [user_id, 'effective_rate', effective_rate])
    
    conn.commit()
    conn.close()
    
    print(f'📊 Meta-analysis complete: effectiveness {effective_rate:.1%}')
    
    return effective_rate

if __name__ == '__main__':
    import sys
    
    if len(sys.argv) < 3:
        print('Usage: python auto_record.py <user_id> <session_id> [messages_json]')
        sys.exit(1)
    
    user_id = sys.argv[1]
    session_id = sys.argv[2]
    
    if len(sys.argv) > 3:
        messages = json.loads(sys.argv[3])
        record_session(user_id, session_id, messages)
    
    meta_analyze_session(user_id, session_id)

```

### scripts/data_manager.py

```python
#!/usr/bin/env python3
"""
Super Brain data-management script.
Automatically cleans, archives, and optimizes the database.
"""

import sqlite3
import json
import gzip
import shutil
from datetime import datetime, timedelta
from pathlib import Path

DB_PATH = Path.home() / '.openclaw' / 'super-brain.db'
ARCHIVE_PATH = Path.home() / '.openclaw' / 'super-brain-archive'

# Default retention period in days
DEFAULT_RETENTION_DAYS = {
    'conversation_insights': 90,
    'response_patterns': 180,
    'agent_outputs': 30,
    'agent_collaboration_log': 60,
    'self_evolution_log': 365,
    'performance_metrics': 90,
    'data_access_log': 30
}


class DataManager:
    """Data manager."""
    
    def __init__(self, user_id=None):
        self.user_id = user_id
        self.conn = sqlite3.connect(DB_PATH)
        self.conn.row_factory = sqlite3.Row
        
    def get_db_size(self):
        """Database size in KB."""
        return DB_PATH.stat().st_size / 1024  # KB
    
    def get_table_stats(self):
        """Per-table row counts and estimated sizes."""
        cursor = self.conn.cursor()
        stats = {}
        
        tables = [
            'conversation_insights', 'response_patterns', 
            'agent_outputs', 'self_evolution_log',
            'knowledge_gaps', 'performance_metrics'
        ]
        
        for table in tables:
            try:
                cursor.execute(f'SELECT COUNT(*) FROM {table}')
                count = cursor.fetchone()[0]
                
                # Estimate the table size via the dbstat virtual table
                # (requires SQLite built with SQLITE_ENABLE_DBSTAT_VTAB)
                cursor.execute('''
                    SELECT SUM(pgsize) / 1024 as size_kb
                    FROM dbstat
                    WHERE name = ?
                ''', [table])
                size = cursor.fetchone()[0] or 0
                
                stats[table] = {
                    'rows': count,
                    'size_kb': size
                }
            except sqlite3.Error:
                stats[table] = {'rows': 0, 'size_kb': 0}
        
        return stats
    
    def cleanup_old_data(self, retention_days=None):
        """Delete rows that have aged out of their retention window."""
        retention = retention_days or DEFAULT_RETENTION_DAYS
        cursor = self.conn.cursor()
        
        cleaned = {}
        
        # (table name, whether the delete is scoped to the current user)
        targets = [
            ('conversation_insights', True),
            ('response_patterns', True),
            ('agent_outputs', False),
            ('performance_metrics', True),
            ('data_access_log', False),
        ]
        
        for table, per_user in targets:
            try:
                days = retention.get(table, DEFAULT_RETENTION_DAYS[table])
                cutoff = (datetime.now() - timedelta(days=days)).isoformat()
                if per_user:
                    cursor.execute(
                        f'DELETE FROM {table} WHERE timestamp < ? AND user_id = ?',
                        [cutoff, self.user_id])
                else:
                    cursor.execute(
                        f'DELETE FROM {table} WHERE timestamp < ?', [cutoff])
                cleaned[table] = cursor.rowcount
            except sqlite3.Error:
                cleaned[table] = 0
        
        self.conn.commit()
        
        return cleaned
    
    def archive_old_data(self, days=90):
        """Archive old rows to a gzip-compressed JSON file."""
        ARCHIVE_PATH.mkdir(exist_ok=True)

        cursor = self.conn.cursor()
        cutoff = datetime.now() - timedelta(days=days)

        archived = {}

        # Only tables with a timestamp column can be archived this way.
        try:
            cursor.execute('''
                SELECT * FROM conversation_insights
                WHERE timestamp < ? AND user_id = ?
            ''', [cutoff.isoformat(), self.user_id])

            rows = cursor.fetchall()

            if rows:
                data = [dict(row) for row in rows]
                archive_file = ARCHIVE_PATH / f'conversation_insights_{datetime.now().strftime("%Y%m%d")}.json.gz'

                with gzip.open(archive_file, 'wt', encoding='utf-8') as f:
                    json.dump(data, f, ensure_ascii=False, indent=2)

                archived['conversation_insights'] = {
                    'rows': len(rows),
                    'file': str(archive_file)
                }
        except Exception as e:
            print(f'   ⚠️ 归档失败: {e}')

        return archived

    def vacuum_database(self):
        """Reclaim free pages with VACUUM (needs an exclusive connection)."""
        self.conn.close()

        conn = sqlite3.connect(DB_PATH)
        conn.execute('VACUUM')
        conn.close()

        # Reconnect for subsequent operations.
        self.conn = sqlite3.connect(DB_PATH)
        self.conn.row_factory = sqlite3.Row

        return True
    
    def optimize_database(self):
        """Rebuild indexes and refresh the query planner statistics."""
        cursor = self.conn.cursor()

        indexes = [
            'idx_insights_user', 'idx_insights_session',
            'idx_patterns_user', 'idx_agent_tasks_main'
        ]

        for idx in indexes:
            try:
                cursor.execute(f'REINDEX {idx}')
            except sqlite3.Error:
                # Index may not exist yet; skip it.
                pass

        # Refresh statistics for the query planner.
        cursor.execute('ANALYZE')

        self.conn.commit()

        return True
    
    def get_cleanup_preview(self, retention_days=None):
        """Count the rows cleanup_old_data would delete, without deleting them."""
        retention = retention_days or DEFAULT_RETENTION_DAYS
        cursor = self.conn.cursor()

        preview = {}

        # Same per-user tables and default retention as cleanup_old_data.
        for table, default_days in [('conversation_insights', 90),
                                    ('response_patterns', 180)]:
            try:
                days = retention.get(table, default_days)
                cutoff = datetime.now() - timedelta(days=days)
                cursor.execute(
                    f'SELECT COUNT(*) FROM {table} '
                    f'WHERE timestamp < ? AND user_id = ?',
                    [cutoff.isoformat(), self.user_id])
                preview[table] = cursor.fetchone()[0]
            except sqlite3.Error:
                preview[table] = 0

        return preview

    def close(self):
        """Close the database connection."""
        self.conn.close()


def auto_maintenance(user_id, retention_days=None):
    """Run full maintenance: cleanup, archive, then optimize."""
    manager = DataManager(user_id)

    print('🔧 超脑数据维护')
    print('=' * 50)

    # 1. Report the current state.
    size_before = manager.get_db_size()
    stats_before = manager.get_table_stats()

    print('\n📊 维护前:')
    print(f'   数据库大小: {size_before:.2f} KB')
    total_rows = sum(s['rows'] for s in stats_before.values())
    print(f'   总数据量: {total_rows} 行')

    # 2. Preview what cleanup would remove.
    preview = manager.get_cleanup_preview(retention_days)
    print('\n🔍 将要清理:')
    for table, count in preview.items():
        if count > 0:
            print(f'   {table}: {count} 条')

    # 3. Delete expired rows.
    print('\n🗑️ 执行清理...')
    cleaned = manager.cleanup_old_data(retention_days)
    total_cleaned = sum(cleaned.values())
    print(f'   已清理: {total_cleaned} 条记录')

    # 4. Archive old rows to compressed JSON.
    print('\n📦 归档旧数据...')
    archived = manager.archive_old_data()
    if archived:
        for table, info in archived.items():
            print(f'   {table}: {info["rows"]} 条 → {info["file"]}')
    else:
        print('   无需归档')

    # 5. Rebuild indexes and reclaim free pages.
    print('\n⚡ 优化数据库...')
    manager.optimize_database()
    manager.vacuum_database()
    print('   ✓ VACUUM完成')
    print('   ✓ 索引重建完成')

    # 6. Report the result.
    size_after = manager.get_db_size()
    stats_after = manager.get_table_stats()

    print('\n✅ 维护完成:')
    print(f'   数据库大小: {size_after:.2f} KB (减少 {size_before-size_after:.2f} KB)')
    total_rows_after = sum(s['rows'] for s in stats_after.values())
    print(f'   总数据量: {total_rows_after} 行 (清理 {total_rows-total_rows_after} 行)')

    manager.close()

    return {
        'cleaned': total_cleaned,
        'archived': len(archived),
        'size_before': size_before,
        'size_after': size_after
    }


if __name__ == '__main__':
    import sys
    
    if len(sys.argv) < 2:
        print('用法:')
        print('  维护: python data_manager.py <user_id>')
        print('  统计: python data_manager.py <user_id> stats')
        print('  预览: python data_manager.py <user_id> preview')
        sys.exit(1)
    
    user_id = sys.argv[1]
    action = sys.argv[2] if len(sys.argv) > 2 else 'maintain'
    
    if action == 'stats':
        manager = DataManager(user_id)
        stats = manager.get_table_stats()
        size = manager.get_db_size()
        
        print(f'📊 数据库统计:')
        print(f'   大小: {size:.2f} KB')
        for table, info in stats.items():
            print(f'   {table}: {info["rows"]} 行, {info["size_kb"]:.2f} KB')
        
        manager.close()
        
    elif action == 'preview':
        manager = DataManager(user_id)
        preview = manager.get_cleanup_preview()
        
        print(f'🔍 清理预览:')
        for table, count in preview.items():
            print(f'   {table}: {count} 条将被清理')
        
        manager.close()
        
    else:
        # Run full maintenance.
        auto_maintenance(user_id)

```
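
The retention logic in `cleanup_old_data` reduces to one pattern: compute a cutoff timestamp per table, delete anything older, and read `cursor.rowcount` for the tally. A minimal, self-contained sketch of that pattern against an in-memory SQLite database (the `logs` table and column names are illustrative, not the skill's real schema):

```python
import sqlite3
from datetime import datetime, timedelta

def cleanup(conn, table, days, user_id=None):
    """Delete rows older than `days`; scope to one user when user_id is given."""
    cutoff = (datetime.now() - timedelta(days=days)).isoformat()
    cur = conn.cursor()
    if user_id is not None:
        cur.execute(f'DELETE FROM {table} WHERE timestamp < ? AND user_id = ?',
                    [cutoff, user_id])
    else:
        cur.execute(f'DELETE FROM {table} WHERE timestamp < ?', [cutoff])
    conn.commit()
    return cur.rowcount  # number of rows removed

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE logs (user_id TEXT, timestamp TEXT)')
old = (datetime.now() - timedelta(days=120)).isoformat()
new = datetime.now().isoformat()
conn.executemany('INSERT INTO logs VALUES (?, ?)',
                 [('u1', old), ('u1', new), ('u2', old)])
removed = cleanup(conn, 'logs', 90, user_id='u1')
print(removed)  # 1: only u1's 120-day-old row falls past the 90-day cutoff
```

ISO-8601 timestamps compare correctly as strings, which is why the `timestamp < ?` comparison works without parsing dates in SQL.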

### scripts/ethics_engine.py

```python
#!/usr/bin/env python3
"""
Super Brain ethical constraint module.
Ensures AI behavior stays within the configured ethical rules.
"""

import json
from datetime import datetime
from pathlib import Path

DB_PATH = Path.home() / '.openclaw' / 'super-brain.db'

# Default ethical rules. Rule text and trigger patterns are kept in Chinese
# because they match Chinese user content.
DEFAULT_ETHICAL_RULES = [
    {
        'id': 'eth-privacy-001',
        'constraint_type': 'privacy',
        'rule_name': '敏感信息保护',
        'rule_description': '不存储、不传播用户的敏感信息(密码、身份证、银行卡等)',
        'trigger_condition': {'patterns': ['密码', '身份证', '银行卡', 'token', 'secret']},
        'required_action': {'action': 'filter', 'replace_with': '[已过滤]'},
        'severity': 'block'
    },
    {
        'id': 'eth-safety-001',
        'constraint_type': 'safety',
        'rule_name': '有害内容拦截',
        'rule_description': '不生成、不传播有害内容(暴力、歧视、非法)',
        'trigger_condition': {'patterns': ['暴力', '歧视', '非法']},
        'required_action': {'action': 'reject', 'message': '无法协助此类请求'},
        'severity': 'block'
    },
    {
        'id': 'eth-fairness-001',
        'constraint_type': 'fairness',
        'rule_name': '偏见纠正',
        'rule_description': '主动识别和纠正潜在的偏见',
        'trigger_condition': {'bias_indicators': ['所有', '总是', '从不']},
        'required_action': {'action': 'warn', 'suggest': '考虑使用更中性的表达'},
        'severity': 'warning'
    },
    {
        'id': 'eth-transparency-001',
        'constraint_type': 'transparency',
        'rule_name': '决策透明',
        'rule_description': '重要决策需要解释原因',
        'trigger_condition': {'decision_types': ['recommendation', 'action']},
        'required_action': {'action': 'explain', 'template': '我建议...因为...'},
        'severity': 'warning'
    },
    {
        'id': 'eth-autonomy-001',
        'constraint_type': 'autonomy',
        'rule_name': '用户自主',
        'rule_description': '用户可以随时查看、修改、删除自己的数据',
        'trigger_condition': {'user_actions': ['view', 'modify', 'delete']},
        'required_action': {'action': 'facilitate', 'response': '立即执行用户请求'},
        'severity': 'warning'
    }
]


class EthicalConstraintEngine:
    """Ethical constraint engine."""

    def __init__(self):
        # Copy the defaults so DB-loaded rules do not mutate the module-level list.
        self.rules = list(DEFAULT_ETHICAL_RULES)
        self.load_rules_from_db()

    def load_rules_from_db(self):
        """Load additional rules from the database."""
        try:
            import sqlite3
            conn = sqlite3.connect(DB_PATH)
            cursor = conn.cursor()

            cursor.execute('''
                SELECT id, constraint_type, rule_name, trigger_condition, 
                       required_action, severity
                FROM ethical_constraints
                WHERE enabled = 1
            ''')

            for row in cursor.fetchall():
                self.rules.append({
                    'id': row[0],
                    'constraint_type': row[1],
                    'rule_name': row[2],
                    'trigger_condition': json.loads(row[3]) if row[3] else {},
                    'required_action': json.loads(row[4]) if row[4] else {},
                    'severity': row[5]
                })

            conn.close()
        except sqlite3.Error:
            # Table does not exist yet; keep the default rules.
            pass
    
    def check_content(self, content, content_type='text'):
        """Return the constraint rules violated by the given content."""
        violations = []

        for rule in self.rules:
            if self._matches_trigger(content, rule['trigger_condition']):
                violations.append({
                    'rule_id': rule['id'],
                    'rule_name': rule['rule_name'],
                    'severity': rule['severity'],
                    'action': rule['required_action'],
                    # Carry the triggering patterns so filtering can replace them.
                    'patterns': rule['trigger_condition'].get('patterns', [])
                })

        return violations

    def _matches_trigger(self, content, trigger):
        """Check whether the content matches a rule's trigger condition."""
        if 'patterns' in trigger:
            for pattern in trigger['patterns']:
                if pattern in content:
                    return True
        return False

    def apply_constraint(self, content, violations):
        """Apply the required action for each violation."""
        if not violations:
            return content, []

        actions_taken = []

        for violation in violations:
            action = violation['action']

            if action.get('action') == 'filter':
                # Replace the patterns that triggered the rule. They live on the
                # violation (copied from trigger_condition), not on the action.
                for pattern in violation.get('patterns', []):
                    content = content.replace(pattern, action.get('replace_with', '[已过滤]'))
                actions_taken.append(f"已过滤敏感信息: {violation['rule_name']}")

            elif action.get('action') == 'reject':
                # Refuse the request outright.
                return None, [action.get('message', '请求被拒绝')]

            elif action.get('action') == 'warn':
                actions_taken.append(f"警告: {violation['rule_name']}")

        return content, actions_taken
    
    def log_decision(self, user_id, decision_type, context, reasoning, ethical_check):
        """Record a decision trace (for explainability)."""
        try:
            import sqlite3
            conn = sqlite3.connect(DB_PATH)
            cursor = conn.cursor()

            cursor.execute('''
                INSERT INTO decision_traces 
                (id, user_id, decision_type, decision_context, 
                 reasoning, ethical_check, created_at)
                VALUES (?, ?, ?, ?, ?, ?, ?)
            ''', [
                f'decision-{datetime.now().strftime("%Y%m%d%H%M%S")}',
                user_id,
                decision_type,
                json.dumps(context, ensure_ascii=False),
                json.dumps(reasoning, ensure_ascii=False),
                json.dumps(ethical_check, ensure_ascii=False),
                datetime.now().isoformat()
            ])

            conn.commit()
            conn.close()
        except Exception as e:
            print(f'⚠️ 决策记录失败: {e}')

    def get_decision_trace(self, user_id, limit=10):
        """Return recent decision traces (for explainability)."""
        try:
            import sqlite3
            conn = sqlite3.connect(DB_PATH)
            conn.row_factory = sqlite3.Row
            cursor = conn.cursor()

            cursor.execute('''
                SELECT * FROM decision_traces
                WHERE user_id = ?
                ORDER BY created_at DESC
                LIMIT ?
            ''', [user_id, limit])

            traces = [dict(row) for row in cursor.fetchall()]
            conn.close()

            return traces
        except sqlite3.Error:
            return []


def init_ethical_constraints():
    """Seed the default ethical rules into the database."""
    import sqlite3

    conn = sqlite3.connect(DB_PATH)
    cursor = conn.cursor()

    # Insert the default rules, skipping any that already exist.
    for rule in DEFAULT_ETHICAL_RULES:
        cursor.execute('''
            INSERT OR IGNORE INTO ethical_constraints
            (id, constraint_type, rule_name, rule_description,
             trigger_condition, required_action, severity)
            VALUES (?, ?, ?, ?, ?, ?, ?)
        ''', [
            rule['id'],
            rule['constraint_type'],
            rule['rule_name'],
            rule['rule_description'],
            json.dumps(rule['trigger_condition'], ensure_ascii=False),
            json.dumps(rule['required_action'], ensure_ascii=False),
            rule['severity']
        ])

    conn.commit()
    conn.close()

    print('✅ 伦理约束规则已初始化')


if __name__ == '__main__':
    print('🛡️ 超脑伦理约束系统')
    print('=' * 50)
    
    engine = EthicalConstraintEngine()
    
    # Quick self-test: the privacy rule should trigger on this content.
    test_content = "我的密码是123456"
    violations = engine.check_content(test_content)
    
    print(f'\n测试内容: "{test_content}"')
    print(f'检测结果: {len(violations)} 个违规')
    
    if violations:
        filtered, actions = engine.apply_constraint(test_content, violations)
        print(f'处理后: "{filtered}"')
        print(f'执行动作: {actions}')

```
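
The check/apply cycle above is essentially substring matching followed by a per-rule action. A stripped-down sketch of the same flow (the rule set, names, and replacement text here are illustrative, not the skill's shipped rules):

```python
# Illustrative rule set: one filter rule, one reject rule.
RULES = [
    {'name': 'secrets', 'patterns': ['password', 'token'],
     'action': 'filter', 'replace_with': '[REDACTED]'},
    {'name': 'harm', 'patterns': ['attack plan'],
     'action': 'reject', 'message': 'request refused'},
]

def check(content):
    """Return the rules whose patterns occur in the content."""
    return [r for r in RULES if any(p in content for p in r['patterns'])]

def apply_rules(content, violations):
    """Filter or reject content; returns (content_or_None, notes)."""
    notes = []
    for rule in violations:
        if rule['action'] == 'reject':
            return None, [rule['message']]
        if rule['action'] == 'filter':
            for p in rule['patterns']:
                content = content.replace(p, rule['replace_with'])
            notes.append(f"filtered: {rule['name']}")
    return content, notes

out, notes = apply_rules('my password is 123456', check('my password is 123456'))
print(out)  # my [REDACTED] is 123456
```

Note the asymmetry the real engine also has: `reject` short-circuits and discards the content, while `filter` and `warn` accumulate and let the (possibly modified) content through.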

### scripts/install.py

```python
#!/usr/bin/env python3
"""
Super Brain first-run install script.
Makes sure the AI recalls Super Brain at the start of every session.
"""

import sys
from pathlib import Path

def install():
    """Install the Super Brain startup reminder into the user workspace."""

    workspace = Path.home() / '.openclaw' / 'workspace'
    workspace.mkdir(parents=True, exist_ok=True)
    tools_md = workspace / 'TOOLS.md'

    # Startup checklist written into TOOLS.md. The snippet uses a heredoc to
    # avoid shell quoting issues, and tilde fences so the embedded block does
    # not terminate this file's own markdown code fence.
    startup_checklist = """
## ⚠️ 会话启动检查清单(超脑)

**每次会话开始必须执行:**

~~~bash
python3 - <<'EOF'
import os
import sqlite3
from pathlib import Path

user_id = os.environ.get('USER_ID', '')
conn = sqlite3.connect(str(Path.home() / '.openclaw' / 'super-brain.db'))
conn.row_factory = sqlite3.Row
cursor = conn.cursor()
cursor.execute('SELECT * FROM user_profile WHERE user_id = ?', [user_id])
profile = cursor.fetchone()
if profile:
    print('用户画像:', profile['communication_style'], '/', profile['technical_level'])
conn.close()
EOF
~~~

**不执行 = 超脑未激活**
"""
    
    # Create TOOLS.md if it does not exist yet.
    if not tools_md.exists():
        tools_md.write_text(f"# TOOLS.md - 本地配置\n{startup_checklist}\n")
        print("✅ 已创建 TOOLS.md")
    else:
        # Otherwise only add the checklist if it is not already present.
        content = tools_md.read_text()
        if "超脑" not in content and "会话启动检查清单" not in content:
            # Prepend so the checklist is read first.
            new_content = f"# TOOLS.md - 本地配置\n{startup_checklist}\n\n{content}"
            tools_md.write_text(new_content)
            print("✅ 已添加超脑启动检查清单到 TOOLS.md")
        else:
            print("ℹ️ TOOLS.md 已包含超脑配置")

    print("\n📋 安装完成!")
    print("   超脑会在每次会话开始时自动激活")
    print("   数据库: ~/.openclaw/super-brain.db")
    return 0

if __name__ == "__main__":
    sys.exit(install())

```
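
The idempotence of `install()` rests on one idea: write the checklist only when a marker string is absent, so repeated runs never duplicate it. A minimal sketch of that pattern using a temporary directory (the `MARKER`/`SNIPPET` values and file name are illustrative):

```python
from pathlib import Path
import tempfile

MARKER = '## startup checklist'
SNIPPET = f'{MARKER}\nrun the session bootstrap\n'

def ensure_snippet(path: Path) -> bool:
    """Install SNIPPET once; return True if the file was modified."""
    if not path.exists():
        path.write_text(SNIPPET)
        return True
    text = path.read_text()
    if MARKER in text:
        return False  # already installed; second run is a no-op
    path.write_text(SNIPPET + '\n' + text)  # prepend, as install() does
    return True

tmp = Path(tempfile.mkdtemp()) / 'TOOLS.md'
print(ensure_snippet(tmp))  # True  (file created)
print(ensure_snippet(tmp))  # False (marker found, nothing changed)
```

Checking for the marker before writing is what makes the script safe to re-run after skill updates.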

### scripts/vector_memory.py

```python
#!/usr/bin/env python3
"""
Super Brain vector memory module.
Uses ChromaDB for semantic search, with a SQLite keyword fallback.
"""

import sqlite3
import json
from datetime import datetime
from pathlib import Path

# Storage paths
VECTOR_DB_PATH = Path.home() / '.openclaw' / 'super-brain-vectors'
DB_PATH = Path.home() / '.openclaw' / 'super-brain.db'

# ChromaDB is optional; without it we fall back to SQLite keyword search.
try:
    import chromadb
    from chromadb.config import Settings
    CHROMADB_AVAILABLE = True
except ImportError:
    CHROMADB_AVAILABLE = False
    print('⚠️ ChromaDB未安装,使用降级模式')


class VectorMemory:
    """Vector memory manager with a SQLite fallback."""

    def __init__(self, user_id):
        self.user_id = user_id
        self.client = None
        self.collection = None

        if CHROMADB_AVAILABLE:
            self._init_chromadb()

    def _init_chromadb(self):
        """Initialize the persistent ChromaDB client and collection."""
        try:
            self.client = chromadb.PersistentClient(
                path=str(VECTOR_DB_PATH),
                settings=Settings(anonymized_telemetry=False)
            )
            self.collection = self.client.get_or_create_collection(
                name=f'memory_{self.user_id[:8]}',
                metadata={'user_id': self.user_id}
            )
        except Exception as e:
            print(f'⚠️ ChromaDB初始化失败: {e}')
            self.client = None
            self.collection = None

    def add_memory(self, text, metadata=None):
        """Add a memory to the vector store, falling back to SQLite."""
        if not self.collection:
            return self._add_to_sqlite(text, metadata)

        try:
            memory_id = f'mem-{datetime.now().strftime("%Y%m%d%H%M%S")}'

            self.collection.add(
                documents=[text],
                metadatas=[metadata or {}],
                ids=[memory_id]
            )

            return memory_id
        except Exception as e:
            print(f'⚠️ 添加向量记忆失败: {e}')
            return self._add_to_sqlite(text, metadata)

    def _add_to_sqlite(self, text, metadata):
        """Fallback: store the memory as a conversation insight in SQLite."""
        conn = sqlite3.connect(DB_PATH)
        cursor = conn.cursor()

        memory_id = f'mem-{datetime.now().strftime("%Y%m%d%H%M%S")}'

        cursor.execute('''
            INSERT INTO conversation_insights 
            (id, user_id, session_id, topic, key_facts, user_mood, timestamp)
            VALUES (?, ?, ?, ?, ?, ?, ?)
        ''', [
            memory_id,
            self.user_id,
            f'session-{datetime.now().strftime("%Y%m%d")}',
            (metadata or {}).get('topic', 'general'),
            json.dumps({'text': text}, ensure_ascii=False),
            'neutral',
            datetime.now().isoformat()
        ])

        conn.commit()
        conn.close()

        return memory_id
    
    def search_memory(self, query, n_results=5):
        """Semantic search over memories, falling back to keyword search."""
        if not self.collection:
            return self._search_sqlite(query, n_results)

        try:
            results = self.collection.query(
                query_texts=[query],
                n_results=n_results
            )

            return {
                'ids': results['ids'][0] if results['ids'] else [],
                'documents': results['documents'][0] if results['documents'] else [],
                'metadatas': results['metadatas'][0] if results['metadatas'] else [],
                'distances': results['distances'][0] if results['distances'] else []
            }
        except Exception as e:
            print(f'⚠️ 向量搜索失败: {e}')
            return self._search_sqlite(query, n_results)

    def _search_sqlite(self, query, limit):
        """Fallback: keyword LIKE search over SQLite."""
        keywords = query.split()[:3]  # use at most the first 3 keywords
        if not keywords:
            return {'ids': [], 'documents': [], 'metadatas': [], 'distances': []}

        conn = sqlite3.connect(DB_PATH)
        conn.row_factory = sqlite3.Row
        cursor = conn.cursor()

        like_conditions = ' OR '.join('key_facts LIKE ?' for _ in keywords)
        params = [f'%{kw}%' for kw in keywords] + [self.user_id]

        cursor.execute(f'''
            SELECT id, topic, key_facts, timestamp 
            FROM conversation_insights 
            WHERE ({like_conditions}) AND user_id = ?
            ORDER BY timestamp DESC 
            LIMIT ?
        ''', params + [limit])

        rows = cursor.fetchall()
        conn.close()

        return {
            'ids': [row['id'] for row in rows],
            'documents': [row['key_facts'] for row in rows],
            'metadatas': [{'topic': row['topic']} for row in rows],
            'distances': [0.5] * len(rows)  # fixed placeholder distance
        }

    def get_stats(self):
        """Return the memory count and which backend is serving it."""
        if self.collection:
            return {
                'count': self.collection.count(),
                'backend': 'chromadb'
            }
        else:
            conn = sqlite3.connect(DB_PATH)
            cursor = conn.cursor()
            cursor.execute('SELECT COUNT(*) FROM conversation_insights WHERE user_id = ?', 
                          [self.user_id])
            count = cursor.fetchone()[0]
            conn.close()

            return {
                'count': count,
                'backend': 'sqlite'
            }


def remember_conversation(user_id, user_msg, ai_msg, topic='general'):
    """Store one user/AI exchange as a memory."""
    memory = VectorMemory(user_id)

    # Combine both sides of the exchange into one document.
    text = f'用户: {user_msg}\nAI: {ai_msg}'

    memory_id = memory.add_memory(text, {
        'topic': topic,
        'timestamp': datetime.now().isoformat(),
        'type': 'conversation'
    })

    return memory_id


def recall_similar(user_id, query, n=5):
    """Recall the memories most similar to the query."""
    memory = VectorMemory(user_id)
    return memory.search_memory(query, n)


if __name__ == '__main__':
    import sys
    
    if len(sys.argv) < 3:
        print('用法:')
        print('  添加记忆: python vector_memory.py <user_id> add "<text>"')
        print('  搜索记忆: python vector_memory.py <user_id> search "<query>"')
        print('  查看统计: python vector_memory.py <user_id> stats')
        sys.exit(1)
    
    user_id = sys.argv[1]
    action = sys.argv[2]
    
    memory = VectorMemory(user_id)
    
    if action == 'add':
        text = sys.argv[3] if len(sys.argv) > 3 else 'test'
        memory_id = memory.add_memory(text)
        print(f'✅ 已添加记忆: {memory_id}')
        
    elif action == 'search':
        query = sys.argv[3] if len(sys.argv) > 3 else 'test'
        results = memory.search_memory(query)
        print(f'🔍 找到 {len(results["ids"])} 条相关记忆:')
        for i, doc in enumerate(results['documents']):
            print(f'  {i+1}. {doc[:100]}...')
            
    elif action == 'stats':
        stats = memory.get_stats()
        print(f'📊 记忆统计:')
        print(f'  后端: {stats["backend"]}')
        print(f'  数量: {stats["count"]}')

```
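
When ChromaDB is absent, `_search_sqlite` falls back to OR-ing together `LIKE` clauses over the query's keywords. The core of that fallback, runnable against an in-memory database (the `insights` table and its columns are illustrative stand-ins for the skill's real schema):

```python
import sqlite3

def keyword_search(conn, query, user_id, limit=5):
    """OR together LIKE clauses for up to three query keywords."""
    keywords = query.split()[:3]
    if not keywords:
        return []  # empty query: nothing to match
    clause = ' OR '.join('key_facts LIKE ?' for _ in keywords)
    params = [f'%{kw}%' for kw in keywords] + [user_id, limit]
    return conn.execute(
        f'SELECT id, key_facts FROM insights '
        f'WHERE ({clause}) AND user_id = ? LIMIT ?', params).fetchall()

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE insights (id TEXT, user_id TEXT, key_facts TEXT)')
conn.executemany('INSERT INTO insights VALUES (?, ?, ?)', [
    ('m1', 'u1', 'likes python and sqlite'),
    ('m2', 'u1', 'prefers dark mode'),
    ('m3', 'u2', 'likes python'),
])
hits = keyword_search(conn, 'python tips', 'u1')
print([r[0] for r in hits])  # ['m1']
```

Only the `LIKE` placeholders vary in the SQL string; keywords, user id, and limit are all bound as parameters, so user text never lands in the SQL itself.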