Overview
AzureLLMService provides access to Azure OpenAI’s language models through an OpenAI-compatible interface. It inherits from OpenAILLMService and supports streaming responses, function calling, and context management with enterprise-grade security and compliance.
Azure LLM API Reference
Pipecat’s API methods for Azure OpenAI integration
Example Implementation
Complete example with function calling
Azure OpenAI Documentation
Official Azure OpenAI documentation and setup
Azure Portal
Create OpenAI resources and get credentials
Installation
To use Azure OpenAI services, install the required dependency:
Prerequisites
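A typical install pulls in the Azure extra of the `pipecat-ai` package (the extra name is assumed from Pipecat's packaging conventions; check the release notes for your version):

```shell
pip install "pipecat-ai[azure]"
```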
Azure OpenAI Setup
Before using Azure OpenAI LLM services, you need:
- Azure Account: Sign up at the Azure Portal
- OpenAI Resource: Create an Azure OpenAI resource in your subscription
- Model Deployment: Deploy your chosen model (GPT-4, GPT-4o, etc.)
- Credentials: Get your API key, endpoint, and deployment name
Required Environment Variables
- `AZURE_CHATGPT_API_KEY`: Your Azure OpenAI API key
- `AZURE_CHATGPT_ENDPOINT`: Your Azure OpenAI endpoint URL
- `AZURE_CHATGPT_MODEL`: Your model deployment name
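These can be read with `os.getenv` before constructing the service; a minimal sketch that fails fast when a variable is missing (the helper name is illustrative, not part of Pipecat):

```python
import os


def load_azure_credentials() -> dict:
    """Read Azure OpenAI credentials from the environment, failing fast if any are missing."""
    names = ("AZURE_CHATGPT_API_KEY", "AZURE_CHATGPT_ENDPOINT", "AZURE_CHATGPT_MODEL")
    creds = {name: os.getenv(name) for name in names}

    # Collect every unset or empty variable so the error message is complete
    missing = [name for name, value in creds.items() if not value]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
    return creds
```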
Configuration
- `api_key`: Azure OpenAI API key for authentication.
- `endpoint`: Azure OpenAI endpoint URL (e.g., `"https://your-resource.openai.azure.com/"`).
- `model`: Your model deployment name. Deprecated in v0.0.105; use `settings=AzureLLMService.Settings(model=...)` instead.
- `api_version`: Azure OpenAI API version string.
Because AzureLLMService inherits from OpenAILLMService, it also accepts the following parameters:
- `params`: Deprecated in v0.0.105; use `settings=AzureLLMService.Settings(...)` instead.
- `timeout`: Request timeout in seconds. Used when `retry_on_timeout` is enabled to determine when to retry.
- `retry_on_timeout`: Whether to retry the request once if it times out. The retry attempt has no timeout limit.
Settings
Runtime-configurable settings passed via the `settings` constructor argument using `AzureLLMService.Settings(...)`. These can be updated mid-conversation with `LLMUpdateSettingsFrame`. See Service Settings for details.
AzureLLMService uses the same settings as OpenAILLMService. See the OpenAI LLM Settings section for the full parameter reference.
Usage
Basic Setup
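A minimal construction sketch, assuming the `pipecat.services.azure.llm` import path and the environment variables listed above (verify both against your installed Pipecat version):

```python
import os

from pipecat.services.azure.llm import AzureLLMService

# Credentials come from the environment variables described in Prerequisites.
# The model value is your Azure deployment name, not the base model name.
llm = AzureLLMService(
    api_key=os.getenv("AZURE_CHATGPT_API_KEY"),
    endpoint=os.getenv("AZURE_CHATGPT_ENDPOINT"),
    settings=AzureLLMService.Settings(model=os.getenv("AZURE_CHATGPT_MODEL")),
)
```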
With Custom Settings
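A sketch with additional settings; the field names (`temperature`, `max_tokens`) follow the OpenAI settings referenced above and should be checked against the OpenAI LLM Settings section for your version:

```python
import os

from pipecat.services.azure.llm import AzureLLMService

# Settings beyond the model can be set at construction time and
# updated later with LLMUpdateSettingsFrame.
llm = AzureLLMService(
    api_key=os.getenv("AZURE_CHATGPT_API_KEY"),
    endpoint=os.getenv("AZURE_CHATGPT_ENDPOINT"),
    settings=AzureLLMService.Settings(
        model=os.getenv("AZURE_CHATGPT_MODEL"),
        temperature=0.7,
        max_tokens=1000,
    ),
)
```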
Updating Settings at Runtime
Model settings can be changed mid-conversation using `LLMUpdateSettingsFrame`:
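A sketch of a runtime update, assuming a running pipeline where `task` is your `PipelineTask`; the settings dictionary keys follow the OpenAI settings referenced above:

```python
from pipecat.frames.frames import LLMUpdateSettingsFrame

# Inside an async handler: queue a settings update into the running pipeline.
# Only the provided keys change; other settings keep their current values.
await task.queue_frame(LLMUpdateSettingsFrame(settings={"temperature": 0.3}))
```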
Notes
- Deployment name vs model name: The `model` parameter should be your Azure deployment name, not the underlying model name (e.g., use `"my-gpt4-deployment"` instead of `"gpt-4"`).
- API version: Different API versions support different features. Check the Azure OpenAI documentation for version-specific capabilities.
- Full OpenAI compatibility: Since `AzureLLMService` inherits from `OpenAILLMService`, it supports all the same features, including function calling, vision input, and streaming responses.
Event Handlers
AzureLLMService supports the same event handlers as OpenAILLMService, inherited from LLMService:
| Event | Description |
|---|---|
| `on_completion_timeout` | Called when an LLM completion request times out |
| `on_function_calls_started` | Called when function calls are received and execution is about to start |
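Handlers are registered with the service's `event_handler` decorator; a sketch for the second event, with the handler signature assumed (check the LLMService reference for the exact arguments passed to each event):

```python
# llm is an AzureLLMService instance as constructed in Usage above
@llm.event_handler("on_function_calls_started")
async def on_function_calls_started(service, function_calls):
    # Log the function names the LLM is about to execute
    names = [fc.function_name for fc in function_calls]
    print(f"Function calls started: {names}")
```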