## Installation

Install the React SDK using your preferred package manager:

```bash
npm install @outspeed/react
```
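If you prefer a different package manager, the standard yarn and pnpm equivalents are:

```bash
# yarn
yarn add @outspeed/react

# pnpm
pnpm add @outspeed/react
```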

## Prerequisites

Before using the React SDK, you’ll need:

1. **Outspeed API Key**: Get your API key from the Outspeed Dashboard.
2. **Backend Token Endpoint**: A server endpoint to generate ephemeral keys for client authentication.

## Backend Setup

Create a token endpoint on your server that exchanges your Outspeed API key for an ephemeral key the client can use. A minimal Express example:

```javascript
import express from "express";

const app = express();
app.use(express.json());

// Exchange your Outspeed API key for an ephemeral client key.
app.post("/token", async (req, res) => {
  try {
    const response = await fetch("https://api.outspeed.com/v1/realtime/sessions", {
      method: "POST",
      headers: {
        Authorization: `Bearer ${process.env.OUTSPEED_API_KEY}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify(req.body),
    });

    if (!response.ok) {
      const error = await response.text();
      console.error("failed to generate ephemeral key:", error);
      res.status(response.status).json({ error: "Failed to generate token" });
      return;
    }

    const data = await response.json();
    res.json(data);
  } catch (error) {
    console.error("failed to generate ephemeral key:", error);
    res.status(500).json({ error: "Internal server error" });
  }
});

app.listen(3000); // use whichever port fits your setup
```
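Once the server is running, you can sanity-check the endpoint before wiring up the client. A quick sketch with curl, assuming the server listens locally on port 3000 and that the request body mirrors the session config fields shown later:

```bash
curl -X POST http://localhost:3000/token \
  -H "Content-Type: application/json" \
  -d '{"model": "outspeed-v1"}'
```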

## Environment Variables

Add your Outspeed API key to your server’s environment variables:

```bash
OUTSPEED_API_KEY=your_outspeed_api_key_here
```

Never expose your Outspeed API key in client-side code. Always generate ephemeral tokens on your backend server.
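One common (but optional) way to load this variable on the server is the `dotenv` package; a minimal sketch, assuming a `.env` file in your project root:

```javascript
// Loads variables from .env into process.env; requires `npm install dotenv`.
import "dotenv/config";

// Fail fast if the key is missing so token requests don't silently break.
if (!process.env.OUTSPEED_API_KEY) {
  throw new Error("OUTSPEED_API_KEY is not set");
}
```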

## Session Configuration

The React SDK uses a `SessionConfig` object to configure voice sessions:

```typescript
import { type SessionConfig } from "@outspeed/client";

const sessionConfig: SessionConfig = {
  model: "outspeed-v1",
  instructions: "You are a helpful assistant.",
  voice: "david", // see the voices page for all available voices
  turn_detection: {
    type: "semantic_vad",
  },
  first_message: "Hello! How can I help you today?", // Optional welcome message
};
```

Want to use a different voice? See the voices page for all available options.

### Configuration Options

| Option | Type | Description |
| --- | --- | --- |
| `model` | string | Must be `"outspeed-v1"` |
| `instructions` | string | System prompt for the AI assistant |
| `voice` | string | Voice ID to use for speech synthesis |
| `turn_detection` | object | Voice activity detection settings |
| `first_message` | string | Optional initial message from the assistant |
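To tie the pieces together, the client typically sends this session config to your backend's `/token` endpoint (from the Backend Setup section above) and receives an ephemeral key in return. A minimal sketch; `fetchEphemeralKey` is a hypothetical helper, and how you hand the result to the SDK depends on your app:

```typescript
import { type SessionConfig } from "@outspeed/client";

// Hypothetical helper: POST the session config to your backend's /token
// endpoint, which exchanges it for an ephemeral Outspeed session key.
async function fetchEphemeralKey(sessionConfig: SessionConfig): Promise<unknown> {
  const response = await fetch("/token", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(sessionConfig),
  });

  if (!response.ok) {
    throw new Error(`Token request failed with status ${response.status}`);
  }

  return response.json();
}
```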

## Next Steps

Now that you have the SDK installed and configured, you can start building voice AI applications: