
azure-ai

Use for Azure AI: Search, Speech, OpenAI, Document Intelligence. Helps with search, vector/hybrid search, speech-to-text, text-to-speech, transcription, OCR. USE FOR: AI Search, query search, vector search, hybrid search, semantic search, speech-to-text, text-to-speech, transcribe, OCR, convert text to speech. DO NOT USE FOR: Function apps/Functions (use azure-functions), databases (azure-postgres/azure-kusto), general Azure resources.

Packaged view

This page reorganizes the original catalog entry around fit, installability, and workflow context first. The original raw source lives below.

Stars: 1,779
Hot score: 99
Updated: March 20, 2026
Overall rating: C (4.0)
Composite score: 4.0
Best-practice grade: B (73.6)

Install command

npx @skill-hub/cli install microsoft-skills-azure-ai

Repository

microsoft/skills

Skill path: .github/plugins/azure-skills/skills/azure-ai



Best for

Primary workflow: Analyze Data & AI.

Technical facets: Full Stack, Data / AI.

Target audience: everyone.

License: Unknown.

Original source

Catalog source: SkillHub Club.

Repository owner: microsoft.

This is a mirrored public skill entry; review the repository before installing it into production workflows.

What it helps with

  • Install azure-ai into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
  • Review https://github.com/microsoft/skills before adding azure-ai to shared team environments
  • Use azure-ai when working with Azure AI Search, Speech, OpenAI, or Document Intelligence

Works across

Claude Code · Codex CLI · Gemini CLI · OpenCode

Favorites: 0.

Sub-skills: 0.

Aggregator: No.

Original source / Raw SKILL.md

---
name: azure-ai
description: "Use for Azure AI: Search, Speech, OpenAI, Document Intelligence. Helps with search, vector/hybrid search, speech-to-text, text-to-speech, transcription, OCR. USE FOR: AI Search, query search, vector search, hybrid search, semantic search, speech-to-text, text-to-speech, transcribe, OCR, convert text to speech. DO NOT USE FOR: Function apps/Functions (use azure-functions), databases (azure-postgres/azure-kusto), general Azure resources."
---

# Azure AI Services

## Services

| Service | Use When | MCP Tools | CLI |
|---------|----------|-----------|-----|
| AI Search | Full-text, vector, hybrid search | `azure__search` | `az search` |
| Speech | Speech-to-text, text-to-speech | `azure__speech` | - |
| OpenAI | GPT models, embeddings, DALL-E | - | `az cognitiveservices` |
| Document Intelligence | Form extraction, OCR | - | - |

## MCP Server (Preferred)

When Azure MCP is enabled:

### AI Search
- `azure__search` with command `search_index_list` - List search indexes
- `azure__search` with command `search_index_get` - Get index details
- `azure__search` with command `search_query` - Query search index

### Speech
- `azure__speech` with command `speech_transcribe` - Speech to text
- `azure__speech` with command `speech_synthesize` - Text to speech

**If Azure MCP is not enabled:** Run `/azure:setup` or enable via `/mcp`.

## AI Search Capabilities

| Feature | Description |
|---------|-------------|
| Full-text search | Linguistic analysis, stemming |
| Vector search | Semantic similarity with embeddings |
| Hybrid search | Combined keyword + vector |
| AI enrichment | Entity extraction, OCR, sentiment |
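
Hybrid search fuses the keyword-ranked and vector-ranked result lists into one ranking; Azure AI Search does this with Reciprocal Rank Fusion (RRF). A minimal sketch of the RRF idea (the document IDs are invented, and `k=60` is the commonly cited RRF constant, not a value taken from this skill):

```python
def rrf_merge(keyword_ranked, vector_ranked, k=60):
    """Fuse two ranked lists of document IDs with Reciprocal Rank Fusion."""
    scores = {}
    for ranked in (keyword_ranked, vector_ranked):
        for rank, doc_id in enumerate(ranked, start=1):
            # Each list contributes 1/(k + rank); appearing high in both lists wins.
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# "doc2" ranks near the top of both lists, so it comes out first overall.
merged = rrf_merge(["doc1", "doc2", "doc3"], ["doc2", "doc4", "doc1"])
```

This is why hybrid queries tend to beat either mode alone: a document only needs a strong rank in one list to surface, and agreement between lists compounds.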

## Speech Capabilities

| Feature | Description |
|---------|-------------|
| Speech-to-text | Real-time and batch transcription |
| Text-to-speech | Neural voices, SSML support |
| Speaker diarization | Identify who spoke when |
| Custom models | Domain-specific vocabulary |

## SDK Quick References

For programmatic access to these services, see the condensed SDK guides:

- **AI Search**: [Python](references/sdk/azure-search-documents-py.md) | [TypeScript](references/sdk/azure-search-documents-ts.md) | [.NET](references/sdk/azure-search-documents-dotnet.md)
- **OpenAI**: [.NET](references/sdk/azure-ai-openai-dotnet.md)
- **Vision**: [Python](references/sdk/azure-ai-vision-imageanalysis-py.md) | [Java](references/sdk/azure-ai-vision-imageanalysis-java.md)
- **Transcription**: [Python](references/sdk/azure-ai-transcription-py.md)
- **Translation**: [Python](references/sdk/azure-ai-translation-text-py.md) | [TypeScript](references/sdk/azure-ai-translation-ts.md)
- **Document Intelligence**: [.NET](references/sdk/azure-ai-document-intelligence-dotnet.md) | [TypeScript](references/sdk/azure-ai-document-intelligence-ts.md)
- **Content Safety**: [Python](references/sdk/azure-ai-contentsafety-py.md) | [TypeScript](references/sdk/azure-ai-contentsafety-ts.md) | [Java](references/sdk/azure-ai-contentsafety-java.md)

## Service Details

For deep documentation on specific services:

- AI Search indexing and queries -> [Azure AI Search documentation](https://learn.microsoft.com/azure/search/search-what-is-azure-search)
- Speech transcription patterns -> [Azure AI Speech documentation](https://learn.microsoft.com/azure/ai-services/speech-service/overview)


---

## Referenced Files

> The following files are referenced in this skill and included for context.

### references/sdk/azure-search-documents-py.md

```markdown
# Azure AI Search — Python SDK Quick Reference

> Condensed from **azure-search-documents-py**. Full patterns (agentic retrieval, integrated vectorization, skillsets)
> in the **azure-search-documents-py** plugin skill if installed.

## Install
```bash
pip install azure-search-documents azure-identity
```

## Quick Start
```python
from azure.search.documents import SearchClient
from azure.search.documents.indexes import SearchIndexClient, SearchIndexerClient
from azure.search.documents.models import VectorizedQuery
```

## Non-Obvious Patterns
- `SearchIndexingBufferedSender` for batch uploads with auto-batching/retries
- Vector field type: `Collection(Edm.Single)` with `vector_search_dimensions` + `vector_search_profile_name`
- Async client: `from azure.search.documents.aio import SearchClient`
- `KnowledgeBaseRetrievalClient` for agentic retrieval with LLM-powered Q&A

## Best Practices
1. Use hybrid search for best relevance combining vector and keyword
2. Enable semantic ranking for natural language queries
3. Index in batches of 100-1000 documents for efficiency
4. Use filters to narrow results before ranking
5. Configure vector dimensions to match your embedding model
6. Use HNSW algorithm for large-scale vector search
7. Create suggesters at index creation time (cannot add later)
8. Use `SearchIndexingBufferedSender` for batch uploads
9. Always define semantic configuration for agentic retrieval indexes
10. Use `create_or_update_index` for idempotent index creation
11. Close clients with context managers or explicit `close()`

```
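
Best practice 3 above recommends indexing in batches of 100-1000 documents. `SearchIndexingBufferedSender` handles this automatically; when calling `upload_documents` directly, a plain chunker does the job. A sketch (the document shape is invented):

```python
def batched(docs, batch_size=500):
    """Yield successive slices; 100-1000 docs per upload call is the recommended range."""
    for i in range(0, len(docs), batch_size):
        yield docs[i:i + batch_size]

# With a real client: for batch in batched(docs): search_client.upload_documents(batch)
batches = list(batched([{"id": str(n)} for n in range(1200)], batch_size=500))
```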

### references/sdk/azure-search-documents-ts.md

```markdown
# Azure AI Search — TypeScript SDK Quick Reference

> Condensed from **azure-search-documents-ts**. Full patterns (semantic config, vector profiles, autocomplete)
> in the **azure-search-documents-ts** plugin skill if installed.

## Install
```bash
npm install @azure/search-documents @azure/identity
```

## Quick Start
```typescript
import { SearchClient, SearchIndexClient, SearchIndexerClient } from "@azure/search-documents";
const searchClient = new SearchClient(endpoint, indexName, credential);
```

## Non-Obvious Patterns
- Vector search uses `vectorSearchOptions.queries` array with `kind: "vector"`
- Semantic search requires `queryType: "semantic"` + `semanticSearchOptions`
- Batch ops: `searchClient.indexDocuments({ actions: [{ upload: doc }, { delete: doc }] })`

## Best Practices
1. Use hybrid search — combine vector + text for best results
2. Enable semantic ranking — improves relevance for natural language queries
3. Batch document uploads — use `uploadDocuments` with arrays, not single docs
4. Use filters for security — implement document-level security with filters
5. Index incrementally — use `mergeOrUploadDocuments` for updates
6. Monitor query performance — use `includeTotalCount: true` sparingly in production

```

### references/sdk/azure-search-documents-dotnet.md

```markdown
# Azure AI Search — .NET SDK Quick Reference

> Condensed from **azure-search-documents-dotnet**. Full patterns (FieldBuilder, hybrid search, semantic answers)
> in the **azure-search-documents-dotnet** plugin skill if installed.

## Install
```bash
dotnet add package Azure.Search.Documents
```

## Quick Start
```csharp
using Azure.Search.Documents;
using Azure.Search.Documents.Indexes;
var client = new SearchClient(new Uri(endpoint), indexName, credential);
```

## Non-Obvious Patterns
- `FieldBuilder` + model attributes (`[SimpleField]`, `[SearchableField]`, `[VectorSearchField]`) for type-safe index definitions
- `VectorizedQuery` for vector search; set via `SearchOptions.VectorSearch.Queries`
- Semantic answers: `result.Value.SemanticSearch.Answers` / captions on each result

## Best Practices
1. Use `DefaultAzureCredential` over API keys for production
2. Use `FieldBuilder` with model attributes for type-safe index definitions
3. Use `CreateOrUpdateIndexAsync` for idempotent index creation
4. Batch document operations for better throughput
5. Use `Select` to return only needed fields
6. Configure semantic search for natural language queries
7. Combine vector + keyword + semantic for best relevance

```

### references/sdk/azure-ai-openai-dotnet.md

```markdown
# Azure OpenAI — .NET SDK Quick Reference

> Condensed from **azure-ai-openai-dotnet**. Full patterns (function calling, structured outputs, RAG with Search)
> in the **azure-ai-openai-dotnet** plugin skill if installed.

## Install
```bash
dotnet add package Azure.AI.OpenAI
```

## Quick Start
```csharp
using Azure.AI.OpenAI;
using OpenAI.Chat;
var azureClient = new AzureOpenAIClient(new Uri(endpoint), credential);
ChatClient chatClient = azureClient.GetChatClient("gpt-4o-mini");
```

## Non-Obvious Patterns
- Client hierarchy: `AzureOpenAIClient.GetChatClient()` / `GetEmbeddingClient()` / `GetImageClient()` / `GetAudioClient()`
- Reasoning models (o1): use `DeveloperChatMessage` instead of `SystemChatMessage`, set `ReasoningEffortLevel`
- RAG: `#pragma warning disable AOAI001` then `options.AddDataSource(new AzureSearchChatDataSource{...})`
- Structured outputs: `ChatResponseFormat.CreateJsonSchemaFormat(...)`

## Best Practices
1. Use Entra ID in production — avoid API keys
2. Reuse client instances — create once, share across requests
3. Handle rate limits — implement exponential backoff for 429 errors
4. Stream for long responses — use `CompleteChatStreamingAsync`
5. Set appropriate timeouts for long completions
6. Use structured outputs for consistent response format
7. Monitor token usage via `completion.Usage` for cost management
8. Validate tool call arguments before execution

```
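
Best practice 3 above calls for exponential backoff on 429 (rate limit) errors. The retry shape is the same in any language; here is a Python sketch where `RuntimeError` and the `flaky` caller stand in for the SDK's real throttling exception and chat call:

```python
import random
import time

def with_backoff(call, max_retries=5, base_delay=1.0):
    """Retry a throttled call with exponential backoff plus jitter."""
    for attempt in range(max_retries):
        try:
            return call()
        except RuntimeError:  # stand-in for the SDK's 429/throttling exception type
            if attempt == max_retries - 1:
                raise
            # 1x, 2x, 4x, ... the base delay, plus jitter to avoid thundering herds.
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, base_delay))

attempts = {"n": 0}

def flaky():
    """Simulates a call that is throttled twice, then succeeds."""
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("429 Too Many Requests")
    return "ok"

result = with_backoff(flaky, base_delay=0.001)
```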

### references/sdk/azure-ai-vision-imageanalysis-py.md

```markdown
# Azure AI Vision Image Analysis — Python SDK Quick Reference

> Condensed from **azure-ai-vision-imageanalysis-py**. Full patterns (dense captions, smart crops, people detection)
> in the **azure-ai-vision-imageanalysis-py** plugin skill if installed.

## Install
```bash
pip install azure-ai-vision-imageanalysis
```

## Quick Start
```python
from azure.ai.vision.imageanalysis import ImageAnalysisClient
from azure.ai.vision.imageanalysis.models import VisualFeatures
client = ImageAnalysisClient(endpoint=endpoint, credential=credential)
```

## Non-Obvious Patterns
- `analyze_from_url(image_url=..., visual_features=[...])` for URL; `analyze(image_data=bytes)` for file
- VisualFeatures enum: `CAPTION`, `DENSE_CAPTIONS`, `TAGS`, `OBJECTS`, `READ`, `PEOPLE`, `SMART_CROPS`
- Async: `from azure.ai.vision.imageanalysis.aio import ImageAnalysisClient`

## Best Practices
1. Select only needed visual features to optimize latency and cost
2. Use async client for high-throughput scenarios
3. Handle HttpResponseError for invalid images or auth issues
4. Enable `gender_neutral_caption` for inclusive descriptions
5. Specify `language` for localized captions
6. Use `smart_crops_aspect_ratios` matching your thumbnail requirements
7. Cache results when analyzing the same image multiple times

```

### references/sdk/azure-ai-vision-imageanalysis-java.md

```markdown
# Azure AI Vision Image Analysis — Java SDK Quick Reference

> Condensed from **azure-ai-vision-imageanalysis-java**. Full patterns (dense captions, smart crops, people detection)
> in the **azure-ai-vision-imageanalysis-java** plugin skill if installed.

## Install
```xml
<dependency>
  <groupId>com.azure</groupId>
  <artifactId>azure-ai-vision-imageanalysis</artifactId>
  <version>1.1.0-beta.1</version>
</dependency>
```

## Quick Start
```java
import com.azure.ai.vision.imageanalysis.ImageAnalysisClient;
import com.azure.ai.vision.imageanalysis.ImageAnalysisClientBuilder;
import com.azure.ai.vision.imageanalysis.models.*;
ImageAnalysisClient client = new ImageAnalysisClientBuilder()
    .endpoint(endpoint).credential(credential).buildClient();
```

## Non-Obvious Patterns
- File input: `BinaryData.fromFile(new File("img.jpg").toPath())`
- URL: `client.analyzeFromUrl(url, Arrays.asList(VisualFeatures.CAPTION), options)`
- `ImageAnalysisOptions.setSmartCropsAspectRatios(Arrays.asList(1.0, 1.5))`

## Best Practices
1. Select only needed features to reduce latency and cost
2. Caption/Dense Captions require GPU-supported regions
3. Use `setGenderNeutralCaption(true)` for inclusive output
4. Specify language with `setLanguage("en")` for localized captions
5. Use async client for high-throughput scenarios

```

### references/sdk/azure-ai-transcription-py.md

```markdown
# Azure AI Transcription — Python SDK Quick Reference

> Condensed from **azure-ai-transcription-py**. Full patterns (real-time streaming, diarization, timestamps)
> in the **azure-ai-transcription-py** plugin skill if installed.

## Install
```bash
pip install azure-ai-transcription
```

## Quick Start
```python
import os
from azure.ai.transcription import TranscriptionClient
client = TranscriptionClient(endpoint=os.environ["TRANSCRIPTION_ENDPOINT"],
    credential=os.environ["TRANSCRIPTION_KEY"])
```

## Non-Obvious Patterns
- Auth uses subscription key string directly (not AzureKeyCredential); DefaultAzureCredential not supported
- Batch: `client.begin_transcription(name=..., locale="en-US", content_urls=[...], diarization_enabled=True)`
- Real-time: `stream = client.begin_stream_transcription(locale="en-US"); stream.send_audio_file("audio.wav")`

## Best Practices
1. Enable diarization when multiple speakers are present
2. Use batch transcription for long files stored in blob storage
3. Capture timestamps for subtitle generation
4. Specify language to improve recognition accuracy
5. Handle streaming backpressure for real-time transcription
6. Close transcription sessions when complete

```
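
Best practice 3 above suggests capturing timestamps for subtitle generation. Converting segment offsets into SRT timestamps is plain arithmetic; a sketch (the segment-to-offset mapping from the transcription result is assumed, not shown):

```python
def to_srt_time(seconds):
    """Format a second offset as an SRT timestamp: HH:MM:SS,mmm."""
    ms = round(seconds * 1000)
    hours, rem = divmod(ms, 3_600_000)
    minutes, rem = divmod(rem, 60_000)
    secs, ms = divmod(rem, 1000)
    return f"{hours:02}:{minutes:02}:{secs:02},{ms:03}"

stamp = to_srt_time(3661.5)  # one hour, one minute, 1.5 seconds in
```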

### references/sdk/azure-ai-translation-text-py.md

```markdown
# Azure AI Text Translation — Python SDK Quick Reference

> Condensed from **azure-ai-translation-text-py**. Full patterns (transliteration, dictionary lookup, sentence boundaries)
> in the **azure-ai-translation-text-py** plugin skill if installed.

## Install
```bash
pip install azure-ai-translation-text
```

## Quick Start
```python
from azure.ai.translation.text import TextTranslationClient
from azure.core.credentials import AzureKeyCredential
client = TextTranslationClient(credential=AzureKeyCredential(key), region=region)
```

## Non-Obvious Patterns
- API key auth requires `region` param: `TextTranslationClient(credential=..., region="eastus")`
- Source language param: `from_parameter="fr"` (not `from` — reserved word)
- Dict example model: `from azure.ai.translation.text.models import DictionaryExampleTextItem`
- Async: `from azure.ai.translation.text.aio import TextTranslationClient`

## Best Practices
1. Batch translations — send multiple texts in one request (up to 100)
2. Specify source language when known to improve accuracy
3. Use async client for high-throughput scenarios
4. Cache language list — supported languages change infrequently
5. Handle profanity appropriately for your application
6. Use `html` text_type when translating HTML content
7. Include alignment for applications needing word mapping

```
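
Best practice 4 above recommends caching the language list, since supported languages change infrequently. A process-level cache via `functools.lru_cache` is enough; the fetcher below is a stand-in for the real get-languages call:

```python
from functools import lru_cache

def _fetch_languages():
    """Stand-in for the service's get-languages request (one network round trip)."""
    return {"en": "English", "fr": "French"}

@lru_cache(maxsize=1)
def supported_languages():
    """Fetch the language list once per process and reuse it thereafter."""
    return _fetch_languages()
```

Call `supported_languages()` freely; only the first call pays for the round trip.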

### references/sdk/azure-ai-translation-ts.md

```markdown
# Azure Translation — TypeScript SDK Quick Reference

> Condensed from **azure-ai-translation-ts**. Full patterns (document translation, batch SAS, transliterate)
> in the **azure-ai-translation-ts** plugin skill if installed.

## Install
```bash
npm install @azure-rest/ai-translation-text @azure/identity
```

## Quick Start
```typescript
import TextTranslationClient, { TranslatorCredential, isUnexpected } from "@azure-rest/ai-translation-text";
const credential: TranslatorCredential = { key: process.env.TRANSLATOR_SUBSCRIPTION_KEY!, region: process.env.TRANSLATOR_REGION! };
const client = TextTranslationClient(process.env.TRANSLATOR_ENDPOINT!, credential);
```

## Non-Obvious Patterns
- REST client — `TextTranslationClient` is a function, not a class
- Translate via `client.path("/translate").post({ body: { inputs: [...] } })`
- Document translation: separate package `@azure-rest/ai-translation-document`
- Batch docs require SAS URLs for source/target blob containers

## Best Practices
1. Auto-detect source — omit `language` parameter to auto-detect
2. Batch requests — translate multiple texts in one call for efficiency
3. Use SAS tokens — for document translation, use time-limited SAS URLs
4. Handle errors — always check `isUnexpected(response)` before accessing body
5. Regional endpoints — use regional endpoints for lower latency

```

### references/sdk/azure-ai-document-intelligence-dotnet.md

```markdown
# Azure Document Intelligence — .NET SDK Quick Reference

> Condensed from **azure-ai-document-intelligence-dotnet**. Full patterns (custom models, classifiers, layout extraction)
> in the **azure-ai-document-intelligence-dotnet** plugin skill if installed.

## Install
```bash
dotnet add package Azure.AI.DocumentIntelligence
```

## Quick Start
```csharp
using Azure.AI.DocumentIntelligence;
var client = new DocumentIntelligenceClient(new Uri(endpoint), credential);
var adminClient = new DocumentIntelligenceAdministrationClient(new Uri(endpoint), credential);
```

## Non-Obvious Patterns
- Analyze is async LRO: `await client.AnalyzeDocumentAsync(WaitUntil.Completed, "prebuilt-invoice", uri)`
- Field access: `document.Fields.TryGetValue("VendorName", out DocumentField field)`
- Custom model build: `BuildDocumentModelOptions(modelId, DocumentBuildMode.Template, blobSource)`
- Entra ID requires custom subdomain, not regional endpoint

## Best Practices
1. Use DefaultAzureCredential for production
2. Reuse client instances — clients are thread-safe
3. Handle long-running operations with `WaitUntil.Completed`
4. Check field confidence — always verify `Confidence` property
5. Use appropriate model — prebuilt for common docs, custom for specialized
6. Use custom subdomain — required for Entra ID authentication

```
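
Best practice 4 above says to always verify the `Confidence` property on extracted fields. A common pattern is to accept high-confidence fields automatically and route the rest to human review; a sketch (the field names, dict shape, and 0.8 threshold are illustrative, not from the SDK):

```python
def reliable_fields(fields, threshold=0.8):
    """Split extracted fields into auto-accepted and flagged-for-review sets."""
    accepted = {name: f for name, f in fields.items() if f["confidence"] >= threshold}
    flagged = sorted(set(fields) - set(accepted))
    return accepted, flagged

accepted, flagged = reliable_fields({
    "VendorName": {"value": "Contoso", "confidence": 0.97},
    "Total": {"value": "118.00", "confidence": 0.42},
})
```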

### references/sdk/azure-ai-document-intelligence-ts.md

```markdown
# Azure Document Intelligence — TypeScript SDK Quick Reference

> Condensed from **azure-ai-document-intelligence-ts**. Full patterns (custom models, classifiers, batch polling)
> in the **azure-ai-document-intelligence-ts** plugin skill if installed.

## Install
```bash
npm install @azure-rest/ai-document-intelligence @azure/identity
```

## Quick Start
```typescript
import DocumentIntelligence, { isUnexpected, getLongRunningPoller, AnalyzeOperationOutput } from "@azure-rest/ai-document-intelligence";
const client = DocumentIntelligence(endpoint, new DefaultAzureCredential());
```

## Non-Obvious Patterns
- REST client — `DocumentIntelligence` is a function, not a class
- Analyze path: `client.path("/documentModels/{modelId}:analyze", "prebuilt-layout").post({...})`
- Must use `getLongRunningPoller(client, initialResponse)` then `poller.pollUntilDone()`
- Local file: send as `base64Source` in body, not as binary stream
- Pagination: `import { paginate } from "@azure-rest/ai-document-intelligence"`

## Best Practices
1. Use `getLongRunningPoller()` — document analysis is async, always poll
2. Check `isUnexpected()` — type guard for proper error handling
3. Choose the right model — prebuilt when possible, custom for specialized docs
4. Handle confidence scores — set thresholds for your use case
5. Use `paginate()` helper for listing models
6. Prefer neural mode for custom models over template

```
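
The local-file pattern above (send `base64Source` in the body, not a binary stream) is easy to get wrong. A minimal Python sketch of building that body (the helper name is ours; the surrounding request plumbing is omitted):

```python
import base64

def analyze_body_for_local_file(data: bytes) -> dict:
    """Local files go in the analyze request body as base64Source, not as a stream."""
    return {"base64Source": base64.b64encode(data).decode("ascii")}

body = analyze_body_for_local_file(b"%PDF-1.7 ...")
```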

### references/sdk/azure-ai-contentsafety-py.md

```markdown
# Azure AI Content Safety — Python SDK Quick Reference

> Condensed from **azure-ai-contentsafety-py**. Full patterns (blocklist management, image analysis, 8-severity mode)
> in the **azure-ai-contentsafety-py** plugin skill if installed.

## Install
```bash
pip install azure-ai-contentsafety
```

## Quick Start
```python
from azure.ai.contentsafety import ContentSafetyClient, BlocklistClient
from azure.ai.contentsafety.models import AnalyzeTextOptions, TextCategory
client = ContentSafetyClient(endpoint=endpoint, credential=credential)
```

## Non-Obvious Patterns
- Two clients: `ContentSafetyClient` (analyze) and `BlocklistClient` (blocklist management)
- Image from file: base64-encode bytes, pass via `ImageData(content=base64_str)`
- 8-severity mode: `AnalyzeTextOptions(text=..., output_type=AnalyzeTextOutputType.EIGHT_SEVERITY_LEVELS)`
- Blocklist analyze: `AnalyzeTextOptions(text=..., blocklist_names=[...], halt_on_blocklist_hit=True)`

## Best Practices
1. Use blocklists for domain-specific terms
2. Set severity thresholds appropriate for your use case
3. Handle multiple categories — content can be harmful in multiple ways
4. Use `halt_on_blocklist_hit` for immediate rejection
5. Log analysis results for audit and improvement
6. Consider 8-severity mode for finer-grained control
7. Pre-moderate AI outputs before showing to users

```
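
Best practices 2-3 above note that thresholds should fit the use case and that content can be harmful in multiple categories at once. A per-category threshold policy can be expressed as a small lookup; the threshold values here are an example policy, not service defaults, and the analysis dict shape is a simplification:

```python
# Example policy: stricter on Hate/SelfHarm, looser on Sexual/Violence.
THRESHOLDS = {"Hate": 2, "SelfHarm": 2, "Sexual": 4, "Violence": 4}

def should_block(analysis):
    """Block when any category's severity reaches its per-category threshold."""
    return any(sev >= THRESHOLDS.get(cat, 4) for cat, sev in analysis.items())

decision = should_block({"Hate": 0, "Violence": 4})  # Violence hits its threshold
```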

### references/sdk/azure-ai-contentsafety-ts.md

```markdown
# Azure AI Content Safety — TypeScript SDK Quick Reference

> Condensed from **azure-ai-contentsafety-ts**. Full patterns (blocklist CRUD, image moderation, severity thresholds)
> in the **azure-ai-contentsafety-ts** plugin skill if installed.

## Install
```bash
npm install @azure-rest/ai-content-safety @azure/identity @azure/core-auth
```

## Quick Start
```typescript
import ContentSafetyClient, { isUnexpected } from "@azure-rest/ai-content-safety";
import { AzureKeyCredential } from "@azure/core-auth";
const client = ContentSafetyClient(endpoint, new AzureKeyCredential(key));
```

## Non-Obvious Patterns
- REST client — `ContentSafetyClient` is a function, not a class
- Text: `client.path("/text:analyze").post({ body: { text, categories: [...] } })`
- Image: `client.path("/image:analyze").post({ body: { image: { content: base64 } } })`
- Blocklist create: `.path("/text/blocklists/{blocklistName}", name).patch({...})`
- API key import: `AzureKeyCredential` from `@azure/core-auth` (not `@azure/identity`)

## Best Practices
1. Always use `isUnexpected()` — type guard for error handling
2. Set appropriate thresholds — different categories may need different severity levels
3. Use blocklists for domain-specific terms to supplement AI detection
4. Log moderation decisions — keep audit trail for compliance
5. Handle edge cases — empty text, very long text, unsupported image formats

```

### references/sdk/azure-ai-contentsafety-java.md

```markdown
# Azure AI Content Safety — Java SDK Quick Reference

> Condensed from **azure-ai-contentsafety-java**. Full patterns (blocklist management, image moderation, 8-severity)
> in the **azure-ai-contentsafety-java** plugin skill if installed.

## Install
```xml
<dependency>
  <groupId>com.azure</groupId>
  <artifactId>azure-ai-contentsafety</artifactId>
  <version>1.1.0-beta.1</version>
</dependency>
```

## Quick Start
```java
import com.azure.ai.contentsafety.ContentSafetyClient;
import com.azure.ai.contentsafety.ContentSafetyClientBuilder;
import com.azure.ai.contentsafety.BlocklistClient;
import com.azure.ai.contentsafety.BlocklistClientBuilder;
ContentSafetyClient client = new ContentSafetyClientBuilder()
    .endpoint(endpoint).credential(credential).buildClient();
```

## Non-Obvious Patterns
- Two separate builders: `ContentSafetyClientBuilder` and `BlocklistClientBuilder`
- Image from file: `new ContentSafetyImageData().setContent(BinaryData.fromBytes(bytes))`
- Image from URL: `new ContentSafetyImageData().setBlobUrl(url)`
- Blocklist create uses raw `BinaryData` + `RequestOptions` (not typed model)

## Best Practices
1. Blocklist changes take ~5 minutes to take effect
2. Only request needed categories to reduce latency
3. Typically block severity >= 4 for strict moderation
4. Process multiple items in parallel for throughput
5. Cache blocklist results where appropriate

```
