Overview
DeepSeekLLMService provides access to DeepSeek’s language models through an OpenAI-compatible interface. It inherits from OpenAILLMService and supports streaming responses, function calling, and context management with advanced reasoning capabilities.
- DeepSeek LLM API Reference: Pipecat's API methods for DeepSeek integration
- Example Implementation: Complete example with function calling
- DeepSeek Documentation: Official DeepSeek API documentation and features
- DeepSeek Platform: Access models and manage API keys
Installation
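Pipecat packages provider integrations as optional extras; assuming the standard extra name for DeepSeek (an assumption to verify against your Pipecat version), installation looks like:

```shell
pip install "pipecat-ai[deepseek]"
```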
To use DeepSeek services, install the required dependency.

Prerequisites
DeepSeek Account Setup
Before using DeepSeek LLM services, you need:

- DeepSeek Account: Sign up at DeepSeek Platform
- API Key: Generate an API key from your account dashboard
- Model Selection: Choose from available DeepSeek models with reasoning capabilities
Required Environment Variables
- `DEEPSEEK_API_KEY`: Your DeepSeek API key for authentication
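A typical way to supply the key is via an environment variable in your shell before starting the application, for example:

```shell
export DEEPSEEK_API_KEY="your-api-key"
```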
Configuration
The constructor accepts the following parameters:

- `api_key`: DeepSeek API key for authentication.
- `base_url`: Base URL for the DeepSeek API endpoint.
- `model`: Model identifier to use. Deprecated in v0.0.105; use `settings=DeepSeekLLMService.Settings(model=...)` instead.
- `settings`: Runtime-configurable settings. See Settings below.
Settings
Runtime-configurable settings are passed via the `settings` constructor argument using `DeepSeekLLMService.Settings(...)`. These can be updated mid-conversation with `LLMUpdateSettingsFrame`. See Service Settings for details.
This service uses the same settings as OpenAILLMService. See OpenAI LLM Settings for the full parameter reference.
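As a sketch of the mid-conversation update path (the pipeline wiring is assumed, and the exact settings keys follow the OpenAI-style parameter set):

```python
from pipecat.frames.frames import LLMUpdateSettingsFrame
from pipecat.pipeline.task import PipelineTask


async def lower_temperature(task: PipelineTask):
    # Queue a settings update onto a running pipeline task.
    # Keys mirror the OpenAI-style parameters; here we reduce sampling
    # temperature for more deterministic responses mid-conversation.
    await task.queue_frame(LLMUpdateSettingsFrame(settings={"temperature": 0.2}))
```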
Usage
Basic Setup
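A minimal construction sketch; the import path reflects current Pipecat layout and `deepseek-chat` is one of DeepSeek's published model names, but treat both as assumptions for your installed version:

```python
import os

from pipecat.services.deepseek.llm import DeepSeekLLMService

# Construct the service with a key from the environment.
# The service defaults to DeepSeek's OpenAI-compatible endpoint,
# so base_url usually does not need to be set.
llm = DeepSeekLLMService(
    api_key=os.getenv("DEEPSEEK_API_KEY"),
    settings=DeepSeekLLMService.Settings(model="deepseek-chat"),
)
```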
With Custom Settings
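Since the settings mirror OpenAILLMService's, a sketch with common sampling parameters (field names assumed from the OpenAI-style set) might look like:

```python
import os

from pipecat.services.deepseek.llm import DeepSeekLLMService

llm = DeepSeekLLMService(
    api_key=os.getenv("DEEPSEEK_API_KEY"),
    settings=DeepSeekLLMService.Settings(
        model="deepseek-chat",
        temperature=0.7,
        # DeepSeek expects max_tokens rather than max_completion_tokens.
        max_tokens=1024,
    ),
)
```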
Notes
- DeepSeek does not support the `seed` and `max_completion_tokens` parameters. Use `max_tokens` instead.
- DeepSeek models offer strong reasoning capabilities, particularly the `deepseek-reasoner` model variant.