
Overview

AssemblyAISTTService provides real-time speech recognition using AssemblyAI’s WebSocket API. It supports interim results, end-of-turn detection, and configurable audio processing parameters for accurate transcription in conversational AI applications.

Installation

To use AssemblyAI services, install the required dependency:
pip install "pipecat-ai[assemblyai]"

Prerequisites

AssemblyAI Account Setup

Before using AssemblyAI STT services, you need:
  1. AssemblyAI Account: Sign up at AssemblyAI Console
  2. API Key: Generate an API key from your dashboard
  3. Model Selection: Choose from available transcription models and features

Required Environment Variables

  • ASSEMBLYAI_API_KEY: Your AssemblyAI API key for authentication
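For local development, the key is typically exported as an environment variable (or placed in a .env file) before starting your app. The value below is a placeholder:

```shell
# Replace the placeholder with the key from your AssemblyAI dashboard.
export ASSEMBLYAI_API_KEY="your-api-key-here"
```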

Configuration

AssemblyAISTTService

api_key
str
required
AssemblyAI API key for authentication.
language
Language
default:"Language.EN"
deprecated
Language code for transcription. AssemblyAI currently supports English. Deprecated in v0.0.105. Use settings=AssemblyAISTTService.Settings(...) instead.
api_endpoint_base_url
str
default:"wss://streaming.assemblyai.com/v3/ws"
WebSocket endpoint URL. Override for custom or proxied deployments.
sample_rate
int
default:"16000"
Audio sample rate in Hz.
encoding
str
default:"pcm_s16le"
Audio encoding format.
connection_params
AssemblyAIConnectionParams
default:"None"
deprecated
Connection configuration parameters. Deprecated in v0.0.105. Use settings=AssemblyAISTTService.Settings(...) instead. See AssemblyAIConnectionParams below for field mapping.
vad_force_turn_endpoint
bool
default:"True"
Controls the turn detection mode.
  • When True (Pipecat mode, default): Forces AssemblyAI to return finals as soon as possible so that Pipecat’s turn detection (e.g., Smart Turn) decides when the user is done. A VAD stop sends ForceEndpoint as a ceiling. No UserStartedSpeakingFrame/UserStoppedSpeakingFrame is emitted from STT.
  • When False (AssemblyAI turn detection mode, u3-rt-pro only): AssemblyAI’s model controls turn endings using built-in turn detection, using AssemblyAI API defaults for all parameters unless explicitly set. Emits UserStartedSpeakingFrame/UserStoppedSpeakingFrame from STT.
should_interrupt
bool
default:"True"
Whether to interrupt the bot when the user starts speaking in AssemblyAI turn detection mode (vad_force_turn_endpoint=False). Only applies when using AssemblyAI’s built-in turn detection.
speaker_format
Optional[str]
default:"None"
Optional format string for speaker labels when diarization is enabled. Use {speaker} for speaker label and {text} for transcript text. Example: "<{speaker}>{text}</{speaker}>" or "{speaker}: {text}". If None, transcript text is not modified.
settings
AssemblyAISTTService.Settings
default:"None"
Runtime-configurable settings for the STT service. See Settings below.
ttfs_p99_latency
float
default:"ASSEMBLYAI_TTFS_P99"
P99 latency from speech end to final transcript in seconds. Override for your deployment.

AssemblyAIConnectionParams

connection_params is deprecated as of v0.0.105. Use settings=AssemblyAISTTService.Settings(...) instead. The sample_rate and encoding fields remain as direct constructor arguments. All other fields have moved into Settings — speech_model maps to model.
Connection-level parameters previously passed via the connection_params constructor argument.
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| sample_rate | int | 16000 | Audio sample rate in Hz. |
| encoding | Literal | "pcm_s16le" | Audio encoding format. Options: "pcm_s16le", "pcm_mulaw". |
| end_of_turn_confidence_threshold | float | None | Confidence threshold for end-of-turn detection. |
| min_turn_silence | int | None | Minimum silence duration (ms) when confident about end-of-turn. |
| min_end_of_turn_silence_when_confident | int | None | DEPRECATED. Use min_turn_silence instead. Will be removed in a future version. |
| max_turn_silence | int | None | Maximum silence duration (ms) before forcing end-of-turn. |
| keyterms_prompt | List[str] | None | List of key terms to guide transcription. Will be JSON serialized before sending. |
| prompt | str | None | BETA: Optional text prompt to guide transcription. Only used when speech_model is "u3-rt-pro". Cannot be used with keyterms_prompt. We suggest starting with no prompt. See AssemblyAI prompting best practices for guidance. |
| speech_model | Literal | "u3-rt-pro" | Speech model to use. Options: "universal-streaming-english", "universal-streaming-multilingual", "u3-rt-pro". Defaults to "u3-rt-pro" if not specified. |
| language_detection | bool | None | Enable automatic language detection. Only applicable to universal-streaming-multilingual. Turn messages include language information. |
| format_turns | bool | True | Whether to format transcript turns. Only applicable to universal-streaming-english and universal-streaming-multilingual models. For u3-rt-pro, formatting is automatic and built-in. |
| speaker_labels | bool | None | Enable speaker diarization. Final transcripts include a speaker field (e.g., “Speaker A”, “Speaker B”). |
| vad_threshold | float | None | Voice activity detection confidence threshold (0.0 to 1.0) for classifying audio frames as silence; frames with VAD confidence below this value are considered silent. Only applicable to u3-rt-pro. Increase for noisy environments to reduce false speech detection. For best performance with an external VAD (e.g., Silero), align this value with your VAD’s activation threshold. Defaults to None (not sent); the API default is 0.3. |
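As the keyterms_prompt description notes, the list of key terms is JSON serialized before being sent over the WebSocket. A minimal illustration of that serialization step (the surrounding session message is built by the service and is not shown here):

```python
import json

# Key terms you want AssemblyAI to bias transcription toward.
keyterms = ["Pipecat", "AssemblyAI", "diarization"]

# The service JSON-serializes the list before sending; roughly:
serialized = json.dumps(keyterms)
print(serialized)  # → ["Pipecat", "AssemblyAI", "diarization"]
```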

Settings

Runtime-configurable settings passed via the settings constructor argument using AssemblyAISTTService.Settings(...). These can be updated mid-conversation with STTUpdateSettingsFrame. See Service Settings for details.
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| model | str | None | STT model identifier. (Inherited from base STT settings.) |
| language | Language \| str | Language.EN | Language for speech recognition. (Inherited from base STT settings.) |
| formatted_finals | bool | True | Whether to enable transcript formatting. |
| word_finalization_max_wait_time | int | None | Maximum time to wait for word finalization, in milliseconds. |
| end_of_turn_confidence_threshold | float | None | Confidence threshold for end-of-turn detection. |
| min_turn_silence | int | None | Minimum silence duration (ms) when confident about end-of-turn. |
| max_turn_silence | int | None | Maximum silence duration (ms) before forcing end-of-turn. |
| keyterms_prompt | List[str] | None | List of key terms to guide transcription. |
| prompt | str | None | Optional text prompt to guide transcription (u3-rt-pro only). |
| language_detection | bool | None | Enable automatic language detection. |
| format_turns | bool | True | Whether to format transcript turns. |
| speaker_labels | bool | None | Enable speaker diarization. |
| vad_threshold | float | None | VAD confidence threshold (0.0–1.0) for classifying audio frames as silence. |
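Runtime-configurable settings can be changed mid-conversation by pushing an STTUpdateSettingsFrame. A sketch of the payload; the frame push itself (shown in the comment) requires a running Pipecat pipeline, and the specific values here are illustrative:

```python
# Settings fields to change at runtime; names match the table above.
new_settings = {
    "keyterms_prompt": ["Pipecat", "AssemblyAI"],
    "min_turn_silence": 160,   # ms
    "max_turn_silence": 1200,  # ms
}

# In a running pipeline, push the update upstream of the STT service:
#   from pipecat.frames.frames import STTUpdateSettingsFrame
#   await task.queue_frames([STTUpdateSettingsFrame(settings=new_settings)])
```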

Usage

Basic Setup

import os

from pipecat.services.assemblyai.stt import AssemblyAISTTService

stt = AssemblyAISTTService(
    api_key=os.getenv("ASSEMBLYAI_API_KEY"),
)

With Custom Settings

import os

from pipecat.services.assemblyai.stt import AssemblyAISTTService

stt = AssemblyAISTTService(
    api_key=os.getenv("ASSEMBLYAI_API_KEY"),
    settings=AssemblyAISTTService.Settings(
        keyterms_prompt=["Pipecat", "AssemblyAI"],
    ),
    vad_force_turn_endpoint=True,
)

With AssemblyAI Built-in Turn Detection

AssemblyAI’s u3-rt-pro model supports built-in turn detection for more natural conversation flow:
import os

from pipecat.services.assemblyai.stt import AssemblyAISTTService

stt = AssemblyAISTTService(
    api_key=os.getenv("ASSEMBLYAI_API_KEY"),
    vad_force_turn_endpoint=False,  # Use AssemblyAI's built-in turn detection
    settings=AssemblyAISTTService.Settings(
        # Optional: Tune turn detection timing
        min_turn_silence=100,  # Minimum silence (ms) when confident about end-of-turn
        max_turn_silence=1000,  # Maximum silence (ms) before forcing end-of-turn
    ),
)

With Speaker Diarization

Enable speaker identification for multi-party conversations:
import os

from pipecat.services.assemblyai.stt import AssemblyAISTTService

stt = AssemblyAISTTService(
    api_key=os.getenv("ASSEMBLYAI_API_KEY"),
    settings=AssemblyAISTTService.Settings(
        speaker_labels=True,  # Enable speaker diarization
    ),
    speaker_format="{speaker}: {text}",  # Format transcripts with speaker labels
)
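Since speaker_format is a plain format string with {speaker} and {text} placeholders, you can preview how labeled transcripts will render using ordinary str.format substitution. The speaker and text values below are illustrative:

```python
# The two example formats from the speaker_format description.
plain = "{speaker}: {text}"
tagged = "<{speaker}>{text}</{speaker}>"

# Illustrative values; in practice these come from AssemblyAI final transcripts.
speaker, text = "Speaker A", "Hello there."

print(plain.format(speaker=speaker, text=text))
# → Speaker A: Hello there.
print(tagged.format(speaker=speaker, text=text))
# → <Speaker A>Hello there.</Speaker A>
```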

Notes

  • u3-rt-pro model: The default model is now u3-rt-pro, which provides the best performance and supports built-in turn detection.
  • Turn detection modes:
    • Pipecat mode (vad_force_turn_endpoint=True, default): Forces AssemblyAI to return finals ASAP so Pipecat’s turn detection (e.g., Smart Turn) decides when the user is done. The service sends a ForceEndpoint message when VAD detects the user has stopped speaking.
    • AssemblyAI mode (vad_force_turn_endpoint=False, u3-rt-pro only): AssemblyAI’s model controls turn endings using built-in turn detection. The service emits UserStartedSpeakingFrame and UserStoppedSpeakingFrame based on AssemblyAI’s detection.
  • Speaker diarization: Enable speaker_labels=True in Settings to automatically identify different speakers. Final transcripts will include a speaker field (e.g., “Speaker A”, “Speaker B”). Use the speaker_format parameter to format transcripts with speaker labels.
  • Language detection: When using universal-streaming-multilingual with language_detection=True, Turn messages include language_code and language_confidence fields for automatic language detection.
  • Prompting: The prompt parameter (u3-rt-pro only) allows you to guide transcription for specific names, terms, or domain vocabulary. This is a beta feature; AssemblyAI recommends testing without a prompt first. Cannot be used with keyterms_prompt.
  • Dynamic settings updates: You can update keyterms_prompt, prompt, min_turn_silence, and max_turn_silence at runtime using STTUpdateSettingsFrame without reconnecting.
The connection_params= / InputParams / params= pattern is deprecated as of v0.0.105. Use Settings / settings= instead. See the Service Settings guide for migration details.
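When language detection is enabled, downstream code can read the language fields off each Turn message. The exact message schema is AssemblyAI's; the payload below is a hypothetical illustration containing only the two fields named in the notes above, with the rest of the shape invented for the example:

```python
import json

# Hypothetical Turn message payload. Only language_code and
# language_confidence are taken from the notes above; the other
# fields are illustrative.
raw = (
    '{"transcript": "hola, como estas", '
    '"language_code": "es", "language_confidence": 0.97}'
)

turn = json.loads(raw)
print(turn["language_code"], turn["language_confidence"])
# → es 0.97
```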

Event Handlers

AssemblyAI STT supports the standard service connection events:
| Event | Description |
| --- | --- |
| on_connected | Connected to AssemblyAI WebSocket |
| on_disconnected | Disconnected from AssemblyAI WebSocket |
@stt.event_handler("on_connected")
async def on_connected(service):
    print("Connected to AssemblyAI")