Version: Beta (v4.x)

Ask AI API Reference

The Ask AI API enables developers to build custom chat interfaces powered by Algolia's AI assistant. Use these endpoints to create tailored conversational experiences that search your Algolia index and generate contextual responses using your own LLM provider.

Key capabilities:

  • Real-time streaming responses for better user experience
  • Advanced facet filtering to control AI context
  • HMAC token authentication for secure API access
  • Full compatibility with popular frameworks like Next.js and Vercel AI SDK
> **Warning:** Ask AI is a private beta feature under the Algolia Terms of Service ("Beta Services"). Use of this feature is subject to Algolia's GenAI Addendum.

> **Info:** This API documentation is primarily for developers building custom Ask AI integrations. If you're using the DocSearch package, you typically won't need this information since DocSearch handles the Ask AI API integration automatically. For standard DocSearch usage, see the DocSearch documentation instead.

Overview​

The Algolia Ask AI API provides endpoints for integrating with an Algolia Ask AI assistant. You can use this API to build custom chat interfaces and integrate Algolia with your LLM.

Base URL: https://askai.algolia.com

All endpoints allow cross-origin requests (CORS) from browser-based apps.


Authentication​

Ask AI uses HMAC tokens for authentication. Tokens expire after 5 minutes, so you'll need to request a new one before each chat request.

Get an HMAC Token​

POST /chat/token

Headers:

  • X-Algolia-Assistant-Id: Your Ask AI assistant configuration ID
  • Origin (optional): Request origin for CORS validation
  • Referer (optional): Full URL of the requesting page

Response:

```json
{
  "success": true,
  "token": "HMAC_TOKEN"
}
```

Endpoints​

Chat with Ask AI​

POST /chat

Start or continue a chat with the AI assistant. The response is streamed in real-time using server-sent events, allowing you to display the AI's response as it's being generated.

Headers:

  • X-Algolia-Application-Id: Your Algolia application ID
  • X-Algolia-API-Key: Your Algolia API key
  • X-Algolia-Index-Name: Algolia index to use
  • X-Algolia-Assistant-Id: Ask AI assistant configuration ID
  • Authorization: HMAC token (get from /chat/token)

Request Body:

```json
{
  "id": "your-conversation-id",
  "messages": [
    {
      "role": "user",
      "content": "What is Algolia?",
      "id": "msg-123",
      "createdAt": "2025-01-01T12:00:00.000Z",
      "parts": [
        {
          "type": "text",
          "text": "What is Algolia?"
        }
      ]
    }
  ],
  "searchParameters": {
    "facetFilters": ["language:en", "version:latest"]
  }
}
```

Request Body Parameters:

  • id (string, required): Unique conversation identifier
  • messages (array, required): Array of conversation messages
    • role (string): "user" or "assistant"
    • content (string): Message content
    • id (string): Unique message ID
    • createdAt (string, optional): ISO timestamp
    • parts (array, optional): Message parts (used by Vercel AI SDK)
  • searchParameters (object, optional): Search configuration
    • facetFilters (array, optional): Filter the context used by Ask AI
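
To make the parameter rules above concrete, here is a minimal client-side validation sketch. The helper name `validateChatRequest` and the specific checks are our own illustration, not part of the API:

```javascript
// Illustrative validation for a /chat request body, following the
// parameter list above. Returns an array of error strings (empty = valid).
function validateChatRequest(body) {
  const errors = [];
  if (typeof body.id !== 'string' || body.id.length === 0) {
    errors.push('id must be a non-empty string');
  }
  if (!Array.isArray(body.messages) || body.messages.length === 0) {
    errors.push('messages must be a non-empty array');
  } else {
    for (const m of body.messages) {
      if (m.role !== 'user' && m.role !== 'assistant') {
        errors.push(`invalid role: ${m.role}`);
      }
      if (typeof m.content !== 'string') {
        errors.push('content must be a string');
      }
    }
  }
  if (body.searchParameters &&
      body.searchParameters.facetFilters !== undefined &&
      !Array.isArray(body.searchParameters.facetFilters)) {
    errors.push('facetFilters must be an array');
  }
  return errors;
}
```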

Using Search Parameters:

Search parameters allow you to control how Ask AI searches your index:

```json
{
  "id": "conversation-1",
  "messages": [
    {
      "role": "user",
      "content": "How do I configure the API?",
      "id": "msg-1"
    }
  ],
  "searchParameters": {
    "facetFilters": [
      "language:en",
      "version:latest",
      "type:content"
    ]
  }
}
```

Advanced Facet Filtering with OR Logic:

You can use nested arrays for OR logic within facet filters:

```json
{
  "searchParameters": {
    "facetFilters": [
      "language:en",
      [
        "docusaurus_tag:default",
        "docusaurus_tag:docs-default-current"
      ]
    ]
  }
}
```

This example filters to:

  • language:en AND
  • (docusaurus_tag:default OR docusaurus_tag:docs-default-current)

Common Use Cases:

  • Multi-language sites: ["language:en"]
  • Versioned documentation: ["version:latest"] or ["version:v2.0"]
  • Content types: ["type:content"] to exclude navigation/metadata
  • Multiple tags: [["tag:api", "tag:tutorial"]] for OR logic
  • Categories with fallbacks: [["category:advanced", "category:intermediate"]]
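
The AND/OR convention above (top-level values are ANDed, nested arrays are ORed) can be captured in a small helper. The function `buildFacetFilters` and its option names are our own illustration, not part of the API:

```javascript
// Illustrative helper that assembles facetFilters from common options.
// Top-level entries are ANDed together; an array of tags becomes an
// OR group, matching the nested-array convention described above.
function buildFacetFilters({ language, version, tags } = {}) {
  const filters = [];
  if (language) filters.push(`language:${language}`);
  if (version) filters.push(`version:${version}`);
  if (tags && tags.length > 0) {
    // Nested array = OR logic across the listed tags
    filters.push(tags.map((t) => `tag:${t}`));
  }
  return filters;
}
```

Passing the result as `searchParameters.facetFilters` yields, for example, `language:en AND version:latest AND (tag:api OR tag:tutorial)`.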

Response:

  • Content-Type: text/event-stream
  • Format: Server-sent events with incremental AI response chunks
  • Benefits: Real-time response display, better user experience, lower perceived latency

Handling Streaming Responses:

```javascript
const response = await fetch('/chat', { /* ... */ });
const reader = response.body.getReader();
const decoder = new TextDecoder();

while (true) {
  const { done, value } = await reader.read();
  if (done) break;

  // Pass { stream: true } so multi-byte characters split across
  // chunk boundaries are decoded correctly
  const chunk = decoder.decode(value, { stream: true });
  // Display the chunk immediately in your UI
  console.log('Received chunk:', chunk);
}
```
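
Since the response is delivered as server-sent events, each decoded chunk may contain `data:`-prefixed lines rather than raw text. A generic sketch for extracting the payloads (the helper name and the `[DONE]` sentinel handling are our assumptions; the exact framing of the Ask AI stream may differ):

```javascript
// Illustrative parser for an SSE-style buffer: keeps only the payload
// of `data:` lines and drops blank lines and a `[DONE]` terminator.
function extractSSEData(buffer) {
  return buffer
    .split('\n')
    .filter((line) => line.startsWith('data:'))
    .map((line) => line.slice('data:'.length).trim())
    .filter((payload) => payload.length > 0 && payload !== '[DONE]');
}
```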

Submit Feedback​

POST /chat/feedback

Submit thumbs up/down feedback for a chat message.

Headers:

  • X-Algolia-Assistant-Id: Your Ask AI assistant configuration ID
  • Authorization: HMAC token

Request Body:

```json
{
  "appId": "YOUR_APP_ID",
  "messageId": "msg-123",
  "thumbs": 1
}
```

  • thumbs: 1 for positive feedback, 0 for negative

Response:

```json
{
  "success": true,
  "message": "Feedback was successfully submitted."
}
```

Health Check​

GET /chat/health

Check the operational status of the Ask AI service.

Response: OK (text/plain)


Custom Integration Examples​

Basic Chat Implementation​

```javascript
class AskAIChat {
  constructor({ appId, apiKey, indexName, assistantId }) {
    this.appId = appId;
    this.apiKey = apiKey;
    this.indexName = indexName;
    this.assistantId = assistantId;
    this.baseUrl = 'https://askai.algolia.com';
  }

  async getToken() {
    const response = await fetch(`${this.baseUrl}/chat/token`, {
      method: 'POST',
      headers: {
        'X-Algolia-Assistant-Id': this.assistantId,
      },
    });
    const data = await response.json();
    return data.token;
  }

  async sendMessage(conversationId, messages, searchParameters = {}) {
    const token = await this.getToken();

    const response = await fetch(`${this.baseUrl}/chat`, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'X-Algolia-Application-Id': this.appId,
        'X-Algolia-API-Key': this.apiKey,
        'X-Algolia-Index-Name': this.indexName,
        'X-Algolia-Assistant-Id': this.assistantId,
        'Authorization': token,
      },
      body: JSON.stringify({
        id: conversationId,
        messages,
        ...(Object.keys(searchParameters).length > 0 && { searchParameters }),
      }),
    });

    if (!response.ok) {
      throw new Error(`HTTP error! status: ${response.status}`);
    }

    // Return a streaming iterator for real-time response handling
    return {
      async *[Symbol.asyncIterator]() {
        const reader = response.body.getReader();
        const decoder = new TextDecoder();

        try {
          while (true) {
            const { done, value } = await reader.read();
            if (done) break;

            // Decode and yield each chunk as it arrives
            const chunk = decoder.decode(value, { stream: true });
            if (chunk.trim()) {
              yield chunk;
            }
          }
        } finally {
          reader.releaseLock();
        }
      },
    };
  }

  async submitFeedback(messageId, thumbs) {
    const token = await this.getToken();

    const response = await fetch(`${this.baseUrl}/chat/feedback`, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'X-Algolia-Assistant-Id': this.assistantId,
        'Authorization': token,
      },
      body: JSON.stringify({
        appId: this.appId,
        messageId,
        thumbs,
      }),
    });

    return response.json();
  }
}

// Usage with streaming
const chat = new AskAIChat({
  appId: 'YOUR_APP_ID',
  apiKey: 'YOUR_API_KEY',
  indexName: 'YOUR_INDEX_NAME',
  assistantId: 'YOUR_ASSISTANT_ID',
});

// Send a message with search parameters and handle the streaming response
const stream = await chat.sendMessage(
  'conversation-1',
  [
    {
      role: 'user',
      content: 'What is Algolia?',
      id: 'msg-1',
    },
  ],
  { facetFilters: ['language:en', 'type:content'] }
);

// Display the response as it streams in
let fullResponse = '';
for await (const chunk of stream) {
  fullResponse += chunk;
  console.log('Received chunk:', chunk);
  // Update your UI immediately with each chunk
  // e.g., appendToMessageUI(chunk);
}
console.log('Complete response:', fullResponse);
```

With Vercel AI SDK​

The Vercel AI SDK (v4) provides automatic handling of the request format and streaming:

Option 1: Direct Integration​

```jsx
import { useChat } from 'ai/react';

function ChatComponent() {
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    api: 'https://askai.algolia.com/chat',
    headers: {
      'X-Algolia-Application-Id': 'YOUR_APP_ID',
      'X-Algolia-API-Key': 'YOUR_API_KEY',
      'X-Algolia-Index-Name': 'YOUR_INDEX_NAME',
      'X-Algolia-Assistant-Id': 'YOUR_ASSISTANT_ID',
    },
  });

  return (
    <div>
      {messages.map(m => (
        <div key={m.id}>
          {m.role === 'user' ? 'User: ' : 'AI: '}
          {m.content}
        </div>
      ))}

      <form onSubmit={handleSubmit}>
        <input
          value={input}
          placeholder="Say something..."
          onChange={handleInputChange}
        />
      </form>
    </div>
  );
}
```

Option 2: Next.js API Route Proxy​

For better security and token management, create a Next.js API route as a proxy:

pages/api/chat.ts (or app/api/chat/route.ts for the App Router):

```typescript
import { StreamingTextResponse } from 'ai';

export const runtime = 'edge';

async function getToken(assistantId: string, origin: string) {
  const tokenRes = await fetch('https://askai.algolia.com/chat/token', {
    method: 'POST',
    headers: {
      'X-Algolia-Assistant-Id': assistantId,
      'Origin': origin,
    },
  });

  const tokenData = await tokenRes.json();
  if (!tokenData.success) {
    throw new Error(tokenData.message || 'Failed to get token');
  }
  return tokenData.token;
}

export default async function handler(req: Request) {
  try {
    const body = await req.json();
    const assistantId = process.env.ALGOLIA_ASSISTANT_ID!;
    const origin = req.headers.get('origin') || '';

    // Fetch a new token before each chat call
    const token = await getToken(assistantId, origin);

    // Prepare headers for Algolia Ask AI
    const headers = {
      'X-Algolia-Application-Id': process.env.ALGOLIA_APP_ID!,
      'X-Algolia-API-Key': process.env.ALGOLIA_API_KEY!,
      'X-Algolia-Index-Name': process.env.ALGOLIA_INDEX_NAME!,
      'X-Algolia-Assistant-Id': assistantId,
      'Authorization': token,
      'Content-Type': 'application/json',
    };

    // Forward the request to Algolia Ask AI
    const response = await fetch('https://askai.algolia.com/chat', {
      method: 'POST',
      headers,
      body: JSON.stringify(body),
    });

    if (!response.ok) {
      throw new Error(`Ask AI API error: ${response.status}`);
    }

    // Stream the response back to the client
    return new StreamingTextResponse(response.body);
  } catch (error) {
    console.error('Chat API error:', error);
    return new Response(
      JSON.stringify({ error: 'Internal server error' }),
      { status: 500, headers: { 'Content-Type': 'application/json' } }
    );
  }
}
```

Environment Variables (.env.local):

```bash
ALGOLIA_APP_ID=your_app_id
ALGOLIA_API_KEY=your_api_key
ALGOLIA_INDEX_NAME=your_index_name
ALGOLIA_ASSISTANT_ID=your_assistant_id
```

Frontend with useChat:

```jsx
import { useChat } from 'ai/react';

function ChatComponent() {
  const { messages, input, handleInputChange, handleSubmit, isLoading } = useChat({
    api: '/api/chat', // Use your Next.js API route
    body: {
      searchParameters: {
        facetFilters: ['language:en', 'type:content'],
      },
    },
  });

  return (
    <div className="chat-container">
      <div className="messages">
        {messages.map(m => (
          <div key={m.id} className={`message ${m.role}`}>
            <strong>{m.role === 'user' ? 'You' : 'AI'}:</strong>
            <div>{m.content}</div>
          </div>
        ))}
        {isLoading && <div className="loading">AI is thinking...</div>}
      </div>

      <form onSubmit={handleSubmit}>
        <input
          value={input}
          placeholder="Ask a question..."
          onChange={handleInputChange}
          disabled={isLoading}
        />
        <button type="submit" disabled={isLoading}>
          {isLoading ? 'Sending...' : 'Send'}
        </button>
      </form>
    </div>
  );
}
```

Benefits of the proxy approach:

  • Security: API keys stay on the server
  • Token management: Automatic token refresh
  • Error handling: Centralized error management
  • CORS: No cross-origin issues
  • Caching: Can add caching logic if needed

Error Handling​

All error responses follow this format:

```json
{
  "success": false,
  "message": "Error description"
}
```

Common error scenarios:

  • Invalid assistant ID: Configuration doesn't exist
  • Expired token: Request a new HMAC token
  • Rate limiting: Too many requests
  • Invalid index: Index name doesn't exist or isn't accessible

Best Practices​

  1. Token Management: Always request a fresh HMAC token before chat requests
  2. Error Handling: Implement retry logic for network failures
  3. Streaming: Handle server-sent events properly for real-time responses
  4. Feedback: Implement thumbs up/down for continuous improvement
  5. CORS: Ensure your domain is allowlisted in your Ask AI configuration
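
The retry advice in point 2 can be sketched as a small wrapper with exponential backoff. The helper name `withRetry` and the delay values are our own illustration:

```javascript
// Illustrative retry wrapper for transient failures (network errors,
// rate limiting). Retries `fn` with exponential backoff before giving up.
async function withRetry(fn, { retries = 3, baseDelayMs = 250 } = {}) {
  let lastError;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt === retries) break;
      // Backoff doubles each attempt: 250ms, 500ms, 1000ms, ...
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
    }
  }
  throw lastError;
}
```

For example, wrapping a token request from the class shown earlier: `const token = await withRetry(() => chat.getToken());`.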

For more information, see the Ask AI documentation.