Overview
TogetherLLMService provides access to Together AI’s language models, including Meta’s Llama 3.1 and 3.2 models, through an OpenAI-compatible interface. It inherits from OpenAILLMService and supports streaming responses, function calling, and context management with optimized open-source model hosting.
Together AI LLM API Reference
Pipecat’s API methods for Together AI integration
Example Implementation
Complete example with function calling
Together AI Documentation
Official Together AI API documentation and features
Together AI Platform
Access open-source models and manage API keys
Installation
To use Together AI services, install Pipecat with the required Together AI dependencies.
Prerequisites
Together AI Account Setup
Before using Together AI LLM services, you need:
- Together AI Account: Sign up at Together AI
- API Key: Generate an API key from your account dashboard
- Model Selection: Choose from available open-source models (Llama, Mistral, etc.)
Required Environment Variables
TOGETHER_API_KEY: Your Together AI API key for authentication
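A minimal sketch of reading the key from the environment before constructing the service. The fallback-and-warn pattern here is illustrative, not part of Pipecat itself:

```python
import os

# Read the API key from the environment; an empty string means it was not set.
api_key = os.getenv("TOGETHER_API_KEY", "")
if not api_key:
    print("Warning: TOGETHER_API_KEY is not set")
```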
Configuration
Together AI API key for authentication.
Base URL for Together AI API endpoint.
Deprecated in v0.0.105. Use settings=TogetherLLMService.Settings(model=...) instead.
Runtime-configurable settings. See Settings below.
Settings
Runtime-configurable settings passed via the settings constructor argument using TogetherLLMService.Settings(...). These can be updated mid-conversation with LLMUpdateSettingsFrame. See Service Settings for details.
This service uses the same settings as OpenAILLMService. See OpenAI LLM Settings for the full parameter reference.
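A sketch of a mid-conversation settings update using LLMUpdateSettingsFrame, assuming a running PipelineTask (`task`); the import path and the `settings` keyword follow current Pipecat conventions but may vary by version:

```python
from pipecat.frames.frames import LLMUpdateSettingsFrame


async def lower_temperature(task):
    # Push new settings to the LLM service mid-conversation; parameter names
    # follow the OpenAI-compatible set (temperature here is an example).
    await task.queue_frame(LLMUpdateSettingsFrame(settings={"temperature": 0.4}))
```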
Usage
Basic Setup
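A minimal construction sketch. The import path and the model identifier are assumptions; check your Pipecat version for the exact module path and Together AI's model catalog for current identifiers:

```python
import os

# Import path and model name are assumptions; verify against your Pipecat version.
from pipecat.services.together.llm import TogetherLLMService

llm = TogetherLLMService(
    api_key=os.getenv("TOGETHER_API_KEY"),
    settings=TogetherLLMService.Settings(
        model="meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo",
    ),
)
```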
With Custom Settings
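A sketch passing additional sampling parameters through the Settings object. The field names follow the OpenAI-compatible parameter set inherited from OpenAILLMService, but the exact Settings fields are assumptions:

```python
import os

from pipecat.services.together.llm import TogetherLLMService

# temperature and max_tokens follow the OpenAI-compatible parameter set;
# exact field names on Settings are assumptions.
llm = TogetherLLMService(
    api_key=os.getenv("TOGETHER_API_KEY"),
    settings=TogetherLLMService.Settings(
        model="meta-llama/Llama-3.2-3B-Instruct-Turbo",
        temperature=0.7,
        max_tokens=512,
    ),
)
```

These settings can also be changed later in the conversation with LLMUpdateSettingsFrame.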
Notes
- Together AI hosts a wide variety of open-source models. Model identifiers use the organization/model-name format.
- Together AI fully supports the OpenAI-compatible parameter set inherited from OpenAILLMService.