Overview

PerplexityLLMService provides access to Perplexity’s language models through an OpenAI-compatible interface. It inherits from OpenAILLMService and supports streaming responses and context management, with special handling for Perplexity’s incremental token reporting and built-in internet search capabilities.

Installation

To use Perplexity services, install the required dependencies:
pip install "pipecat-ai[perplexity]"

Prerequisites

Perplexity Account Setup

Before using Perplexity LLM services, you need:
  1. Perplexity Account: Sign up at Perplexity
  2. API Key: Generate an API key from your account dashboard
  3. Model Selection: Choose from available models with built-in search capabilities

Required Environment Variables

  • PERPLEXITY_API_KEY: Your Perplexity API key for authentication
Unlike most other LLM services, Perplexity does not support function calling. Instead, its models provide native internet search, so no special function calls or tool definitions are required.
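Because search is native, the request body the service sends is just an OpenAI-compatible chat payload with no tool definitions. A minimal sketch of that shape (the `build_request` helper is illustrative, not part of pipecat):

```python
def build_request(messages, model="sonar", temperature=0.7):
    """Build an OpenAI-compatible chat request body.

    Note there is no "tools" or "tool_choice" field: Perplexity's
    search happens server-side, so only messages are sent.
    """
    return {
        "model": model,
        "messages": messages,
        "temperature": temperature,
    }

payload = build_request([{"role": "user", "content": "What happened in tech news today?"}])
assert "tools" not in payload and "tool_choice" not in payload
```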

Configuration

  • api_key (str, required): Perplexity API key for authentication.
  • base_url (str, default: "https://api.perplexity.ai"): Base URL for the Perplexity API endpoint.
  • model (str, default: None, deprecated): Deprecated in v0.0.105. Use settings=PerplexityLLMService.Settings(model=...) instead.
  • settings (PerplexityLLMService.Settings, default: None): Runtime-configurable settings. See Settings below.

Settings

Runtime-configurable settings passed via the settings constructor argument using PerplexityLLMService.Settings(...). These can be updated mid-conversation with LLMUpdateSettingsFrame. See Service Settings for details. This service uses the same settings as OpenAILLMService. See OpenAI LLM Settings for the full parameter reference.
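Conceptually, a mid-conversation update merges the frame's partial overrides into the service's current settings, leaving everything else untouched. A hedged pure-Python sketch of that merge (the names below are illustrative, not the actual pipecat implementation):

```python
# Illustrative sketch of a runtime settings update: overrides carried by a
# frame such as LLMUpdateSettingsFrame are merged over the current settings.
current_settings = {"model": "sonar", "temperature": 0.7, "top_p": 0.9}

def apply_settings_update(settings: dict, overrides: dict) -> dict:
    """Return new settings with partial overrides applied on top."""
    return {**settings, **overrides}

# Only temperature changes; model and top_p carry over unchanged.
updated = apply_settings_update(current_settings, {"temperature": 0.3})
```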

Usage

Basic Setup

import os
from pipecat.services.perplexity import PerplexityLLMService

llm = PerplexityLLMService(
    api_key=os.getenv("PERPLEXITY_API_KEY"),
    settings=PerplexityLLMService.Settings(model="sonar"),
)

With Custom Settings

import os

from pipecat.services.perplexity import PerplexityLLMService

llm = PerplexityLLMService(
    api_key=os.getenv("PERPLEXITY_API_KEY"),
    settings=PerplexityLLMService.Settings(
        model="sonar",
        temperature=0.7,
        top_p=0.9,
        max_tokens=1024,
    ),
)

Notes

  • Perplexity does not support function calling or tools. The service only sends messages to the API, without tool definitions.
  • Perplexity uses incremental token reporting. The service accumulates token usage metrics during processing and reports the final totals at the end of each request.
  • Perplexity models have built-in internet search capabilities, providing up-to-date information without requiring additional tool configuration.
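The token-accumulation behavior in the second note can be sketched in plain Python: usage reported on each streamed chunk is added to running totals, and a single combined total is reported once the request finishes. The class below is a hypothetical illustration, not the actual pipecat implementation:

```python
# Rough sketch of accumulating incrementally reported token usage and
# emitting one final total per request (hypothetical helper, not pipecat API).
class TokenUsageAccumulator:
    def __init__(self):
        self.prompt_tokens = 0
        self.completion_tokens = 0

    def record_chunk(self, usage: dict) -> None:
        """Accumulate the usage fields reported on a streamed chunk."""
        self.prompt_tokens += usage.get("prompt_tokens", 0)
        self.completion_tokens += usage.get("completion_tokens", 0)

    def final_totals(self) -> dict:
        """Report the combined totals at the end of the request."""
        return {
            "prompt_tokens": self.prompt_tokens,
            "completion_tokens": self.completion_tokens,
            "total_tokens": self.prompt_tokens + self.completion_tokens,
        }
```

Usage: call record_chunk for each streamed chunk that carries a usage payload, then read final_totals once streaming completes.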
The InputParams / params= pattern is deprecated as of v0.0.105. Use Settings / settings= instead. See the Service Settings guide for migration details.