MCP Integration
Use Speakable as a Model Context Protocol (MCP) server to give AI coding assistants real-time accessibility analysis capabilities directly in your editor.
What is MCP?
The Model Context Protocol is an open standard that lets AI assistants call external tools during conversations. When you connect Speakable as an MCP server, your AI assistant can analyze HTML accessibility on demand, with no copy-pasting into a separate tool.
Ask your AI assistant things like "check if this component is accessible" or "what would a screen reader say for this HTML" and it will call Speakable automatically to give you the answer.
Available Tools
analyze_html
Predict how NVDA, JAWS, and VoiceOver will announce HTML content. Returns the predicted speech output line by line.
| Parameter | Type | Description |
|---|---|---|
| html | string (required) | The HTML content to analyze |
| screen_reader | enum (optional) | nvda, jaws, voiceover, or all (default: all) |
| selector | string (optional) | CSS selector to focus on specific elements |
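For illustration, an `arguments` object for this tool might look like the following (the surrounding JSON-RPC envelope is supplied by your MCP client; the HTML snippet is made up):

```json
{
  "html": "<button><svg>...</svg></button>",
  "screen_reader": "nvda"
}
```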
audit_html
Generate an accessibility audit report. Reports landmark structure, heading hierarchy, interactive elements, missing accessible names, and issues with severity levels.
| Parameter | Type | Description |
|---|---|---|
| html | string (required) | The HTML content to audit |
| selector | string (optional) | CSS selector to focus on specific elements |
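As a hypothetical example, an `arguments` payload that audits only the main landmark could look like this (markup and selector invented for illustration; the h1-to-h3 skip is the kind of heading-hierarchy issue the report covers):

```json
{
  "html": "<main><h1>Dashboard</h1><h3>Stats</h3></main>",
  "selector": "main"
}
```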
diff_html
Compare two HTML versions and report accessibility changes. Shows added, removed, and changed nodes — useful for detecting regressions before merging.
| Parameter | Type | Description |
|---|---|---|
| before_html | string (required) | The original HTML content |
| after_html | string (required) | The updated HTML content |
| selector | string (optional) | CSS selector to focus on specific elements |
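A sketch of `arguments` for a regression check (markup invented for illustration): replacing a link's visible text with a bare icon is exactly the kind of change this tool is meant to surface.

```json
{
  "before_html": "<a href=\"/settings\">Settings</a>",
  "after_html": "<a href=\"/settings\"><svg>...</svg></a>"
}
```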
Setup
Add Speakable to your MCP configuration. The server runs locally via npx — no global installation required.
Kiro
Add to .kiro/settings/mcp.json in your workspace or ~/.kiro/settings/mcp.json globally:
```json
{
  "mcpServers": {
    "speakable": {
      "command": "npx",
      "args": ["-y", "@reticular/speakable-mcp"],
      "disabled": false,
      "autoApprove": ["analyze_html", "audit_html", "diff_html"]
    }
  }
}
```
VS Code / Cursor
Add to .vscode/mcp.json in your workspace:
```json
{
  "servers": {
    "speakable": {
      "command": "npx",
      "args": ["-y", "@reticular/speakable-mcp"]
    }
  }
}
```
Claude Desktop
Add to ~/Library/Application Support/Claude/claude_desktop_config.json (macOS) or %APPDATA%/Claude/claude_desktop_config.json (Windows):
```json
{
  "mcpServers": {
    "speakable": {
      "command": "npx",
      "args": ["-y", "@reticular/speakable-mcp"]
    }
  }
}
```
Windsurf
Add to ~/.codeium/windsurf/mcp_config.json:
```json
{
  "mcpServers": {
    "speakable": {
      "command": "npx",
      "args": ["-y", "@reticular/speakable-mcp"]
    }
  }
}
```
Alternative: Global Install
If you prefer not to use npx, install globally and reference the binary directly:
```shell
npm install -g @reticular/speakable
```
Then use "command": "speakable-mcp" in your config instead of the npx variant.
Example Usage
Once configured, your AI assistant can call Speakable tools naturally during conversation.
"Check if this button is accessible:"
```html
<button><svg>...</svg></button>
```
```
--- NVDA ---
button

--- JAWS ---
button

--- VoiceOver ---
button

Warnings: Button has no accessible name. Add aria-label or visible text.
```
"The button has no accessible name — screen readers will just say 'button' with no context. Add an aria-label to fix it:"
```html
<button aria-label="Close dialog"><svg>...</svg></button>
```
How It Works
1. Your AI assistant receives HTML in the conversation (from a file, your question, or generated code).
2. It calls the Speakable MCP tool with the HTML content via JSON-RPC over stdio.
3. Speakable runs the full analysis pipeline locally (parse → extract → render) and returns predicted screen reader output.
4. The assistant uses the output to identify issues, suggest fixes, or confirm accessibility correctness.
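The call in step 2 is ordinary JSON-RPC 2.0. The sketch below is a minimal illustration of the `tools/call` request shape, assuming the parameter names from the tables above (`make_tool_call` is a hypothetical helper, not part of Speakable — your MCP client builds this message for you):

```python
import json

def make_tool_call(request_id, tool, arguments):
    """Build a JSON-RPC 2.0 request for an MCP tools/call invocation.

    Hypothetical helper: MCP clients construct this message themselves
    and write it to the server's stdin.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Example: ask Speakable to analyze a button with NVDA only.
request = make_tool_call(1, "analyze_html", {
    "html": "<button><svg></svg></button>",
    "screen_reader": "nvda",
})
payload = json.loads(request)
```

The response comes back on the server's stdout as a matching JSON-RPC result containing the predicted speech output.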
Tips
- Use autoApprove in Kiro to skip confirmation prompts for Speakable tools — they only read HTML and never modify files.
- The MCP server runs the same engine as the CLI and web tool — identical output, zero network requests, fully local.
- Use diff_html to check if a refactor changed screen reader output before committing.