Overview
GrokLLMService provides access to Grok’s language models through an OpenAI-compatible interface. It inherits from OpenAILLMService and supports streaming responses, function calling, and context management with Grok’s unique reasoning capabilities.
Grok LLM API Reference
Pipecat’s API methods for Grok integration
Example Implementation
Complete example with function calling
Grok Documentation
Official Grok API documentation and features
X.AI Platform
Access Grok models and manage API keys
Installation
To use Grok services, install the required dependencies:

Prerequisites
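The dependencies can be installed with pip. The `grok` extra name is taken from Pipecat's optional-dependency layout; check your Pipecat version if the extra is not found:

```shell
# Install Pipecat with the Grok optional dependencies
pip install "pipecat-ai[grok]"
```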
Grok Account Setup
Before using Grok LLM services, you need:

- X.AI Account: Sign up at X.AI Console
- API Key: Generate an API key from your console dashboard
- Model Selection: Choose from available Grok models
Required Environment Variables
XAI_API_KEY: Your X.AI API key for authentication
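Setting the variable in a shell session might look like this (the value shown is a placeholder):

```shell
# Make the X.AI API key available to the Pipecat process
export XAI_API_KEY="your-xai-api-key"
```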
Configuration
X.AI API key for authentication.
Base URL for the Grok API endpoint.
Model identifier to use. Deprecated in v0.0.105; use
settings=GrokLLMService.Settings(model=...) instead.

Settings
Runtime-configurable settings passed via the settings constructor argument using GrokLLMService.Settings(...). These can be updated mid-conversation with LLMUpdateSettingsFrame. See Service Settings for details.
This service uses the same settings as OpenAILLMService. See OpenAI LLM Settings for the full parameter reference.
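A mid-conversation update could be performed by pushing an update frame into the running pipeline. This is a sketch: the exact payload shape of LLMUpdateSettingsFrame and the `task` object (an existing PipelineTask) are assumptions:

```python
from pipecat.frames.frames import LLMUpdateSettingsFrame


async def lower_temperature(task):
    # `task` is assumed to be an existing PipelineTask. Only the keys
    # provided in the settings dict are updated; other settings keep
    # their current values.
    await task.queue_frame(LLMUpdateSettingsFrame(settings={"temperature": 0.3}))
```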
Usage
Basic Setup
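A minimal setup might look like the following. The import path follows Pipecat's service module layout and is an assumption; with no explicit settings, the service falls back to its default model:

```python
import os

from pipecat.services.grok.llm import GrokLLMService

# Create the service; the API key is read from the XAI_API_KEY
# environment variable configured earlier.
llm = GrokLLMService(api_key=os.getenv("XAI_API_KEY"))
```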
With Custom Settings
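To customize behavior at construction time, settings can be passed through GrokLLMService.Settings as described above. The model name and parameter values below are illustrative, and the specific parameters (temperature, max_tokens) are assumed to be inherited from the OpenAI LLM settings:

```python
import os

from pipecat.services.grok.llm import GrokLLMService

# Pass runtime-configurable settings via the Settings dataclass.
# "grok-3" is a placeholder; choose any available Grok model.
llm = GrokLLMService(
    api_key=os.getenv("XAI_API_KEY"),
    settings=GrokLLMService.Settings(
        model="grok-3",
        temperature=0.7,
        max_tokens=1024,
    ),
)
```

Because these settings are runtime-configurable, the same values can later be changed with LLMUpdateSettingsFrame without recreating the service.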
Notes
- Grok uses incremental token reporting. The service accumulates token usage metrics during processing and reports the final totals at the end of each request.
- Grok supports prompt caching and reasoning tokens, which are tracked in usage metrics when available.