Overview

GrokLLMService provides access to Grok’s language models through an OpenAI-compatible interface. It inherits from OpenAILLMService and supports streaming responses, function calling, and context management with Grok’s unique reasoning capabilities.

Installation

To use Grok services, install the required dependencies:
pip install "pipecat-ai[grok]"

Prerequisites

Grok Account Setup

Before using Grok LLM services, you need:
  1. X.AI Account: Sign up at X.AI Console
  2. API Key: Generate an API key from your console dashboard
  3. Model Selection: Choose from available Grok models

Required Environment Variables

  • XAI_API_KEY: Your X.AI API key for authentication
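For local development, the key can be exported in the shell before starting your bot; the examples below read it with os.getenv:

```shell
export XAI_API_KEY="your-xai-api-key"
```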

Configuration

  • api_key (str, required): X.AI API key for authentication.
  • base_url (str, default: "https://api.x.ai/v1"): Base URL for the Grok API endpoint.
  • model (str, default: None, deprecated): Model identifier to use. Deprecated in v0.0.105; use settings=GrokLLMService.Settings(model=...) instead.
  • settings (GrokLLMService.Settings, default: None): Runtime-configurable settings. See Settings below.

Settings

Runtime-configurable settings passed via the settings constructor argument using GrokLLMService.Settings(...). These can be updated mid-conversation with LLMUpdateSettingsFrame. See Service Settings for details. This service uses the same settings as OpenAILLMService. See OpenAI LLM Settings for the full parameter reference.

Usage

Basic Setup

import os
from pipecat.services.grok import GrokLLMService

llm = GrokLLMService(
    api_key=os.getenv("XAI_API_KEY"),
    settings=GrokLLMService.Settings(model="grok-3-beta"),
)

With Custom Settings

from pipecat.services.grok import GrokLLMService

llm = GrokLLMService(
    api_key=os.getenv("XAI_API_KEY"),
    settings=GrokLLMService.Settings(
        model="grok-3-beta",
        temperature=0.7,
        top_p=0.9,
        max_completion_tokens=1024,
    ),
)

Notes

  • Grok uses incremental token reporting. The service accumulates token usage metrics during processing and reports the final totals at the end of each request.
  • Grok supports prompt caching and reasoning tokens, which are tracked in usage metrics when available.

The InputParams / params= pattern is deprecated as of v0.0.105. Use Settings / settings= instead. See the Service Settings guide for migration details.