
MCP Integration

Use Speakable as a Model Context Protocol (MCP) server to give AI coding assistants real-time accessibility analysis capabilities directly in your editor.

What is MCP?

The Model Context Protocol is an open standard that lets AI assistants call external tools during conversations. When you connect Speakable as an MCP server, your AI assistant can analyze HTML accessibility on demand, with no copy-pasting into a separate tool.

Ask your AI assistant things like "check if this component is accessible" or "what would a screen reader say for this HTML" and it will call Speakable automatically to give you the answer.

Available Tools

analyze_html

Predict how NVDA, JAWS, and VoiceOver will announce HTML content. Returns the predicted speech output line by line.

Parameter       Type                Description
html            string (required)   The HTML content to analyze
screen_reader   enum (optional)     nvda, jaws, voiceover, or all (default: all)
selector        string (optional)   CSS selector to focus on specific elements
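As a sketch of what happens on the wire, the assistant's request to analyze_html arrives as a standard MCP tools/call message over JSON-RPC. The HTML and screen_reader values below are illustrative:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "analyze_html",
    "arguments": {
      "html": "<button aria-label=\"Close dialog\"><svg>...</svg></button>",
      "screen_reader": "nvda"
    }
  }
}
```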
audit_html

Generate an accessibility audit report. Reports landmark structure, heading hierarchy, interactive elements, missing names, and issues with severity levels.

Parameter   Type                Description
html        string (required)   The HTML content to audit
selector    string (optional)   CSS selector to focus on specific elements
diff_html

Compare two HTML versions and report accessibility changes. Shows added, removed, and changed nodes — useful for detecting regressions before merging.

Parameter     Type                Description
before_html   string (required)   The original HTML content
after_html    string (required)   The updated HTML content
selector      string (optional)   CSS selector to focus on specific elements
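A hedged sketch of a diff_html call, using an illustrative before/after pair where an aria-label was dropped during a refactor:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "diff_html",
    "arguments": {
      "before_html": "<button aria-label=\"Close dialog\"><svg>...</svg></button>",
      "after_html": "<button><svg>...</svg></button>"
    }
  }
}
```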

Setup

Add Speakable to your MCP configuration. The server runs locally via npx — no global installation required.

Kiro

Add to .kiro/settings/mcp.json in your workspace or ~/.kiro/settings/mcp.json globally:

mcp.json
{
  "mcpServers": {
    "speakable": {
      "command": "npx",
      "args": ["-y", "@reticular/speakable-mcp"],
      "disabled": false,
      "autoApprove": ["analyze_html", "audit_html", "diff_html"]
    }
  }
}

VS Code / Cursor

Add to .vscode/mcp.json in your workspace:

.vscode/mcp.json
{
  "servers": {
    "speakable": {
      "command": "npx",
      "args": ["-y", "@reticular/speakable-mcp"]
    }
  }
}

Claude Desktop

Add to ~/Library/Application Support/Claude/claude_desktop_config.json (macOS) or %APPDATA%/Claude/claude_desktop_config.json (Windows):

claude_desktop_config.json
{
  "mcpServers": {
    "speakable": {
      "command": "npx",
      "args": ["-y", "@reticular/speakable-mcp"]
    }
  }
}

Windsurf

Add to ~/.codeium/windsurf/mcp_config.json:

mcp_config.json
{
  "mcpServers": {
    "speakable": {
      "command": "npx",
      "args": ["-y", "@reticular/speakable-mcp"]
    }
  }
}

Alternative: Global Install

If you prefer not to use npx, install globally and reference the binary directly:

Terminal
npm install -g @reticular/speakable

Then use "command": "speakable-mcp" in your config instead of the npx variant.
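For example, the Claude Desktop entry above would become the following sketch (the npx-specific args are dropped; the same shape applies to the other editors):

```json
{
  "mcpServers": {
    "speakable": {
      "command": "speakable-mcp"
    }
  }
}
```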

Example Usage

Once configured, your AI assistant can call Speakable tools naturally during conversation.

You ask

"Check if this button is accessible:"

<button><svg>...</svg></button>

Speakable responds via MCP
--- NVDA ---
button

--- JAWS ---
button

--- VoiceOver ---
button

Warnings:
Button has no accessible name. Add aria-label or visible text.

AI assistant explains

"The button has no accessible name — screen readers will just say 'button' with no context. Add an aria-label to fix it:"

<button aria-label="Close dialog"><svg>...</svg></button>

How It Works

1. Your AI assistant receives HTML in the conversation (from a file, your question, or generated code).

2. It calls the Speakable MCP tool with the HTML content via JSON-RPC over stdio.

3. Speakable runs the full analysis pipeline locally (parse → extract → render) and returns predicted screen reader output.

4. The assistant uses the output to identify issues, suggest fixes, or confirm accessibility correctness.
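Steps 2 and 3 can be sketched as a single request/response pair on stdin/stdout. The response follows the MCP tools/call result shape; message bodies are abbreviated and illustrative:

```json
// assistant → server (stdin)
{"jsonrpc": "2.0", "id": 1, "method": "tools/call", "params": {"name": "analyze_html", "arguments": {"html": "<button><svg>...</svg></button>"}}}

// server → assistant (stdout)
{"jsonrpc": "2.0", "id": 1, "result": {"content": [{"type": "text", "text": "--- NVDA ---\nbutton\n..."}]}}
```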

Tips

Use autoApprove in Kiro to skip confirmation prompts for Speakable tools — they only read HTML and never modify files.

The MCP server runs the same engine as the CLI and web tool — identical output, zero network requests, fully local.

Use diff_html to check if a refactor changed screen reader output before committing.