REST API Reference
RealTimeX provides a set of public REST API endpoints for building custom integrations when an official SDK is not available or not suitable for your environment.
Base URL: http://localhost:3001/
Authentication (v1.0.8+)
The RealTimeX SDK supports two authentication modes: Production Mode (App ID) and Developer Mode (API Key).
1. Production Mode (App ID)
The default mode, used when your application runs within the RealTimeX ecosystem. It uses the x-app-id header to identify your app and verify its manifest-based permissions.
Required Header:
X-App-Id: your-local-app-uuid
The x-app-id is automatically injected via the RTX_APP_ID environment variable when RealTimeX starts your application.
2. Developer Mode (API Key)
Recommended for local development and testing. It uses a standard Bearer token to grant full access to all SDK features without requiring upfront app registration or manifest configuration.
Required Header:
Authorization: Bearer your-api-key
How to generate an API Key:
- Open RealTimeX Desktop
- Go to Settings → Tool → Developer API
- Click Generate New API Key
Identity Warning: In Developer Mode, the system maps your identity directly to the API Key. If you share a single API Key across multiple local apps, they will share the same configuration state. For example, updating vector database settings in App A will also affect App B if both use the same key.
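Under either mode, the header shapes above are all a client needs. A minimal Python sketch (standard library only; the helper names and the use of GET /sdk/ping as a smoke test are illustrative):

```python
import json
import os
import urllib.request

BASE_URL = "http://localhost:3001"  # Base URL from this reference

def auth_headers(api_key=None, app_id=None):
    """Build headers for Developer Mode (Bearer key) or Production Mode (App ID)."""
    if api_key:
        return {"Authorization": f"Bearer {api_key}"}
    if app_id:
        return {"x-app-id": app_id}
    raise ValueError("Provide an API key or an App ID")

def ping(headers):
    """Call GET /sdk/ping to verify connectivity and authentication."""
    req = urllib.request.Request(f"{BASE_URL}/sdk/ping", headers=headers)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # In Production Mode the App ID arrives via the RTX_APP_ID env var.
    print(ping(auth_headers(app_id=os.environ.get("RTX_APP_ID"))))
```

In Production Mode, read the App ID from the RTX_APP_ID environment variable that RealTimeX injects at startup; in Developer Mode, pass the key generated in Settings.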
Error Responses
Missing Header (401):
{
"success": false,
"error": "Missing required header: x-app-id",
"code": "MISSING_APP_ID"
}
Invalid App ID (401):
{
"success": false,
"error": "Invalid app ID: <app-id>",
"code": "INVALID_APP_ID"
}
Permissions
Each API endpoint requires specific permissions. Permissions are declared in the SDK manifest and requested from the user.
Permission States
- Upfront Prompt: The SDK declares needed permissions in the constructor. RealTimeX prompts for all missing permissions on app launch.
- Runtime Prompt: If an API is called without a pre-granted permission, the SDK triggers a native dialog and retries the call automatically.
| Permission | Required For |
|---|---|
| api.agents | GET /agents |
| api.workspaces | GET /workspaces |
| api.threads | GET /workspaces/:slug/threads |
| api.task | GET /task/:uuid |
| webhook.trigger | POST /webhooks/realtimex |
| activities.read | GET /activities |
| activities.write | POST /activities, PATCH /activities/:id, DELETE /activities/:id |
| llm.chat | POST /sdk/llm/chat, POST /sdk/llm/chat/stream |
| llm.embed | POST /sdk/llm/embed |
| llm.providers | GET /sdk/llm/providers, GET /sdk/llm/providers/chat, GET /sdk/llm/providers/embed |
| vectors.read | POST /sdk/llm/vectors/query, GET /sdk/llm/vectors/workspaces, GET /sdk/llm/vectors/config |
| tts.generate | POST /sdk/tts, POST /sdk/tts/stream |
| vectors.write | POST /sdk/llm/vectors/upsert, POST /sdk/llm/vectors/delete, POST /sdk/llm/vectors/register |
| mcp.servers | GET /sdk/mcp/servers |
| mcp.tools | GET /sdk/mcp/servers/:name/tools, POST /sdk/mcp/servers/:name/tools/:tool/execute |
| (None) | GET /sdk/llm/vectors/providers |
SDK Endpoints
These endpoints are used by the official SDKs to coordinate with the RealTimeX Main App.
GET /sdk/ping
Connectivity and authentication check. Returns current mode and app context.
Request Headers:
x-app-id OR Authorization: Bearer <key>
Response (200):
{
"success": true,
"message": "pong",
"mode": "development",
"appId": "your-app-id-if-provided",
"timestamp": "2024-01-15T10:00:00Z"
}
GET /sdk/local-apps/data-dir
Get the absolute path to the app-specific persistent storage directory.
Response (200):
{
"success": true,
"dataDir": "/Users/user/.realtimex.ai/Resources/local-apps/your-app-id"
}
POST /webhooks/realtimex
Unified webhook receiver for the SDK. Supports multiple event types.
Event: trigger-agent
Create a calendar event and optionally trigger an agent.
Request:
{
"app_id": "uuid",
"app_name": "My App",
"event": "trigger-agent",
"payload": {
"raw_data": { "type": "order", "amount": 100 },
"auto_run": true,
"agent_name": "order-processor",
"workspace_slug": "operations",
"thread_slug": "general",
"prompt": "Process this order"
}
}
Response (200):
{
"success": true,
"task_uuid": "abc-123",
"calendar_event_uuid": "def-456",
"auto_run": true,
"message": "Agent triggered"
}
Event: task-start
Mark a task as processing. Used by agents before starting work.
Request:
{
"event": "task-start",
"payload": {
"task_uuid": "abc-123",
"machine_id": "agent-001"
}
}
Response: { "success": true, "status": "processing" }
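The task lifecycle events (task-start, task-complete, task-fail) share the same shape, differing only in the event name and any extra payload fields. A hedged Python sketch, assuming standard-library HTTP and illustrative helper names:

```python
import json
import urllib.request

WEBHOOK_URL = "http://localhost:3001/webhooks/realtimex"

def task_event(event, task_uuid, machine_id, **extra):
    """Build a task lifecycle body: task-start, task-complete, or task-fail.

    Extra keyword arguments (e.g. result=..., error=...) are merged into
    the payload, matching the event-specific fields documented above.
    """
    return {"event": event,
            "payload": {"task_uuid": task_uuid, "machine_id": machine_id, **extra}}

def send_event(body, app_id):
    """POST a webhook event body with Production Mode auth."""
    req = urllib.request.Request(
        WEBHOOK_URL,
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json", "x-app-id": app_id},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# e.g. send_event(task_event("task-start", "abc-123", "agent-001"), app_id="...")
```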
Event: task-complete
Mark a task as completed with result data.
Request:
{
"event": "task-complete",
"payload": {
"task_uuid": "abc-123",
"machine_id": "agent-001",
"result": { "summary": "Order processed", "data": {...} }
}
}
Response: { "success": true, "status": "completed" }
Event: task-fail
Mark a task as failed with error message.
Request:
{
"event": "task-fail",
"payload": {
"task_uuid": "abc-123",
"machine_id": "agent-001",
"error": "Connection timeout to external API"
}
}
Response: { "success": true, "status": "failed" }
| Field | Type | Required | Description |
|---|---|---|---|
| app_id | string | No | Local App UUID (from Main App) |
| app_name | string | No | Display name for calendar events |
| event | string | Yes | Event type (see above) |
| payload | object | Yes | Event-specific payload |
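Putting the field table together for the trigger-agent event, a builder like the following (illustrative names, standard library only) keeps optional payload fields out of the body when unset:

```python
def trigger_agent_body(app_id, app_name, raw_data, agent_name=None,
                       workspace_slug=None, thread_slug=None,
                       prompt=None, auto_run=False):
    """Build a trigger-agent webhook body per the field table above."""
    payload = {"raw_data": raw_data, "auto_run": auto_run}
    # Only include optional routing fields that were actually provided.
    for key, value in (("agent_name", agent_name),
                       ("workspace_slug", workspace_slug),
                       ("thread_slug", thread_slug),
                       ("prompt", prompt)):
        if value is not None:
            payload[key] = value
    return {"app_id": app_id, "app_name": app_name,
            "event": "trigger-agent", "payload": payload}
```

POST the returned body to /webhooks/realtimex with your auth header, as with the task lifecycle events.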
GET /task/:uuid
Get task status and run history.
Response (200):
{
"success": true,
"task": {
"uuid": "abc-123",
"title": "[My App] New Activity",
"status": "completed",
"action_type": "SDK_TRIGGER",
"source_app": "uuid",
"error": null,
"created_at": "2024-01-15T10:00:00Z",
"updated_at": "2024-01-15T10:05:00Z"
},
"runs": [
{
"id": 1,
"agent_name": "processor",
"workspace_slug": "operations",
"thread_slug": "general",
"status": "completed",
"started_at": "2024-01-15T10:01:00Z",
"completed_at": "2024-01-15T10:05:00Z",
"error": null
}
]
}
GET /agents
List available AI agents.
Response (200):
{
"success": true,
"agents": [
{
"name": "order-processor",
"display_name": "Order Processor",
"description": "Processes incoming orders",
"hub_id": null
}
]
}
GET /workspaces
List all workspaces.
Response (200):
{
"success": true,
"workspaces": [
{
"id": 1,
"slug": "operations",
"name": "Operations",
"type": "workspace",
"created_at": "2024-01-01T00:00:00Z"
}
]
}
GET /workspaces/:slug/threads
List threads in a workspace.
Response (200):
{
"success": true,
"workspace": {
"id": 1,
"slug": "operations",
"name": "Operations"
},
"threads": [
{
"id": 1,
"slug": "general",
"name": "General",
"created_at": "2024-01-01T00:00:00Z"
}
]
}
Activities Endpoints (via Proxy)
These endpoints proxy to the Main App which handles Supabase operations. No direct database access is needed.
Important: If your Local App is configured with Require User Authentication enabled, you must call sdk.auth.syncSupabaseToken(token) before making requests to the activities proxy endpoints. Without the synced token, your proxy requests will execute with the anonymous key and will be rejected by Supabase's Row Level Security (RLS) policies.
POST /activities
Create a new activity.
Request Headers:
X-App-Id: Your Local App UUID (required)
Request Body:
{
"raw_data": { "type": "order", "amount": 100 }
}
Response (201):
{
"data": {
"id": "activity-uuid",
"raw_data": { "type": "order", "amount": 100 },
"status": "pending",
"created_at": "2024-01-15T10:00:00Z"
}
}
GET /activities
List activities with filters.
Request Headers:
X-App-Id: Your Local App UUID (required)
Query Parameters:
| Parameter | Type | Description |
|---|---|---|
| status | string | Filter by status (pending, claimed, completed, failed) |
| limit | number | Max results (default: 50) |
| offset | number | Pagination offset |
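The query parameters above can be assembled with urllib; a sketch (helper names illustrative, standard library only):

```python
import json
import urllib.parse
import urllib.request

def activities_url(base="http://localhost:3001", status=None, limit=50, offset=0):
    """Build the GET /activities URL with the documented query parameters."""
    params = {"limit": limit, "offset": offset}
    if status:
        params["status"] = status
    return f"{base}/activities?{urllib.parse.urlencode(params)}"

def list_activities(app_id, **kwargs):
    """Fetch activities for this app; X-App-Id is required on this endpoint."""
    req = urllib.request.Request(activities_url(**kwargs),
                                 headers={"X-App-Id": app_id})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# e.g. list_activities("your-app-uuid", status="pending", limit=10)
```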
PATCH /activities/:id
Update an activity status or result.
Request Body:
{
"status": "processed",
"result": { "summary": "Order processed" }
}
DELETE /activities/:id
Delete an activity.
LLM Endpoints
Powerful AI capabilities including chat completion, streaming, and embeddings.
POST /sdk/llm/chat
Synchronous chat completion from configured LLM providers.
Request:
{
"messages": [
{ "role": "system", "content": "You are a helpful assistant." },
{ "role": "user", "content": "Hello!" }
],
"provider": "openai",
"model": "gpt-4o",
"temperature": 0.7
}
Response (200):
{
"success": true,
"response": {
"content": "Hello! How can I help you today?",
"provider": "openai",
"model": "gpt-4o",
"metrics": {
"prompt_tokens": 10,
"completion_tokens": 8,
"total_tokens": 18
}
}
}
POST /sdk/llm/chat/stream
Streaming chat completion using Server-Sent Events (SSE).
Request Headers:
Accept: text/event-stream
Events:
- data: JSON chunk containing { "textResponse": "...", "uuid": "..." }
- event: error: Occurs if there's a runtime LLM error.
- data: [DONE]: End of stream.
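Consuming the stream amounts to reading lines and dispatching on the data payload. A minimal parser sketch in Python (names illustrative; transport code omitted):

```python
import json

def parse_sse_data(line):
    """Parse one SSE line from POST /sdk/llm/chat/stream.

    Returns None for non-data lines (comments, event: lines, blanks),
    the string "DONE" for the [DONE] terminator, and a dict for JSON chunks.
    """
    if not line.startswith("data:"):
        return None
    data = line[len("data:"):].strip()
    if data == "[DONE]":
        return "DONE"
    return json.loads(data)
```

A client would loop over the response lines, append each chunk's textResponse to a buffer, and stop on "DONE".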
POST /sdk/llm/embed
Generate high-dimensional vectors for text inputs.
Request:
{
"input": ["Hello world", "Artificial Intelligence"],
"provider": "realtimexai",
"model": "text-embedding-ada-002"
}
Response (200):
{
"success": true,
"embeddings": [[0.1, 0.2, ...], [0.3, 0.4, ...]],
"dimensions": 1536,
"provider": "realtimexai",
"model": "text-embedding-ada-002"
}
GET /sdk/llm/providers/chat
List only configured Chat (LLM) providers and their models.
Response (200):
{
"success": true,
"providers": [
{
"provider": "openai",
"models": [ { "id": "gpt-4o", "name": "gpt-4o" }, ... ],
"active": true
},
{
"provider": "anthropic",
"models": [ { "id": "claude-3-5-sonnet", "name": "Claude 3.5 Sonnet" }, ... ],
"active": false
}
]
}
GET /sdk/llm/providers/embed
List only configured Embedding providers and their models.
Response (200):
{
"success": true,
"providers": [
{
"provider": "openai",
"models": [ { "id": "text-embedding-3-small", "name": "text-embedding-3-small" } ],
"active": false
},
{
"provider": "native",
"models": [ { "id": "all-MiniLM-L6-v2", "name": "all-MiniLM-L6-v2" } ],
"active": true
}
]
}
Agent Endpoints
Interact with AI agents for complex tasks, tool usage, and persistent conversations.
POST /sdk/agent/session
Create a new stateful agent session. This allows for multi-turn conversations with context memory.
Request:
{
"workspace_slug": "optional-workspace-slug",
"thread_slug": "optional-existing-thread",
"agent_name": "@agent"
}
Response (200):
{
"success": true,
"session": {
"session_id": "uuid-...",
"agent": {
"name": "@agent",
"provider": "openai",
"model": "gpt-4o"
}
}
}
POST /sdk/agent/session/:id/chat
Send a message to an active session (Synchronous).
Request:
{
"message": "Analyze the sales data"
}
Response (200):
{
"success": true,
"response": {
"id": "msg-uuid",
"text": "Based on the sales data...",
"thoughts": ["Reading file...", "Calculating metrics..."]
}
}
POST /sdk/agent/session/:id/chat/stream
Stream a message response from an active session (SSE).
Response (SSE Events):
- data: JSON object with a type field.
- type: "agentThought": Internal reasoning or tool execution status. Content in the thought field.
- type: "textResponse": The final response content. Content in the textResponse field.
Handling Complex Responses:
If the textResponse field contains a JSON string, it may represent a rich content block (e.g., tool usage, files). Clients should attempt to parse the string so they can render rich UI blocks.
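A tolerant client-side parser might fall back to plain text whenever the string is not a JSON object with a dataType field (sketch; names illustrative):

```python
import json

def parse_text_response(text):
    """Interpret an agent textResponse: either a plain string or a
    JSON-encoded rich content block with dataType/data fields."""
    try:
        block = json.loads(text)
    except (ValueError, TypeError):
        # Not JSON at all: treat as ordinary text.
        return {"dataType": "text", "data": text}
    if isinstance(block, dict) and "dataType" in block:
        return block  # already a rich content block
    # Valid JSON but not a block (e.g. a bare list): keep the raw text.
    return {"dataType": "text", "data": text}
```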
Block Structure (if parsed from textResponse):
{
"dataType": "toolUse" | "files" | "image" | "json" | "text",
"data": { ... }
}
Vector Store Endpoints
Store and query vectors for Retrieval-Augmented Generation (RAG).
POST /sdk/llm/vectors/upsert
Store vectors with metadata into the managed LanceDB/Vector storage.
Request:
{
"vectors": [
{
"id": "doc-1-chunk-1",
"vector": [0.1, 0.2, ...],
"metadata": { "text": "...", "documentId": "doc-1" }
}
],
"workspaceId": "knowledge-base"
}
POST /sdk/llm/vectors/query
Perform semantic similarity search.
Request:
{
"vector": [0.1, 0.2, ...],
"topK": 5,
"workspaceId": "knowledge-base",
"filter": { "documentId": "doc-1" }
}
[!TIP] Isolation & Filtering:
- workspaceId: Use this for top-level data isolation. It creates separate namespaces in the vector database (physical isolation).
- documentId: Use this as a metadata tag for filtering results within a workspace. Filtering is applied after similarity search (post-filter).
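A small builder makes the isolation-versus-filter distinction explicit (illustrative sketch; field names taken from the request example above):

```python
def vector_query(vector, top_k=5, workspace_id="default", document_id=None):
    """Build a POST /sdk/llm/vectors/query body.

    workspace_id selects the namespace (physical isolation); document_id,
    if given, becomes a metadata post-filter within that namespace.
    """
    body = {"vector": vector, "topK": top_k, "workspaceId": workspace_id}
    if document_id is not None:
        body["filter"] = {"documentId": document_id}
    return body
```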
POST /sdk/llm/vectors/delete
Delete vectors from storage. Currently, only deleteAll: true is supported.
Request:
{
"deleteAll": true,
"workspaceId": "knowledge-base"
}
GET /sdk/llm/vectors/workspaces
List all vector namespaces/workspaces stored for this app.
Response (200):
{
"success": true,
"workspaces": ["default", "testws", "ws1"]
}
GET /sdk/llm/vectors/providers
List all supported vector database providers and their metadata.
Response (200):
{
"success": true,
"providers": [
{
"name": "pgvector",
"label": "PGVector",
"description": "PostgreSQL with pgvector extension",
"fields": [
{
"name": "databaseUrl",
"label": "Database URL",
"type": "string",
"placeholder": "postgresql://user:pass@host:5432/dbname"
},
{
"name": "tableName",
"label": "Table Name",
"type": "string",
"placeholder": "vectors"
}
]
}
]
}
POST /sdk/llm/vectors/register
Register a custom vector database configuration for the authenticated app. The system will attempt to connect to the database before saving.
Request Body:
{
"provider": "pgvector",
"config": {
"databaseUrl": "postgresql://user:pass@host:5432/db",
"tableName": "app_vectors"
}
}
Response (200):
{
"success": true,
"message": "Vector configuration registered successfully."
}
Response (400):
{
"success": false,
"error": "Failed to connect to PGVector: connection refused"
}
GET /sdk/llm/vectors/config
Retrieve the current active vector database configuration for this app.
Response (200):
{
"success": true,
"provider": "pgvector",
"config": {
"databaseUrl": "postgresql://user:****@host:5432/db",
"tableName": "app_vectors"
}
}
Secrets like passwords in databaseUrl are automatically masked in the response for security.
Text-to-Speech (TTS) Endpoints
Generate speech from text using local or cloud providers.
GET /sdk/tts/providers
List available TTS providers and their configuration.
Response (200):
{
"success": true,
"providers": [
{
"id": "supertonic_local",
"name": "Supertonic (Local)",
"type": "client",
"configured": true,
"supportsStreaming": true,
"config": {
"voices": ["default"],
"languages": ["en", "vi", "fr", "..."],
"speed": { "min": 0.5, "max": 2.0, "default": 1.0 }
}
}
],
"default": "supertonic_local"
}
POST /sdk/tts
Generate speech from text (returns audio buffer).
Request Body:
{
"text": "Hello world!",
"provider": "supertonic_local",
"voice": "default",
"speed": 1.0,
"language": "en"
}
Response: Binary audio data with Content-Type: audio/wav or audio/mpeg.
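Because the response is raw audio bytes rather than JSON, a client simply writes the body to disk. A Python sketch (standard library only; helper names illustrative, shown with Developer Mode auth):

```python
import json
import urllib.request

def tts_body(text, provider="supertonic_local", voice="default",
             speed=1.0, language="en"):
    """Build the POST /sdk/tts request body."""
    return {"text": text, "provider": provider, "voice": voice,
            "speed": speed, "language": language}

def synthesize(body, api_key, path="out.wav",
               url="http://localhost:3001/sdk/tts"):
    """POST the body and write the returned audio bytes to a file."""
    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {api_key}"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        audio = resp.read()  # binary audio (audio/wav or audio/mpeg)
    with open(path, "wb") as f:
        f.write(audio)
    return path

# e.g. synthesize(tts_body("Hello world!"), api_key="...")
```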
POST /sdk/tts/stream
Generate speech with SSE streaming for progressive playback.
Request Body: Same as /sdk/tts
Response (SSE):
- event: info: Initial info with chunk count.
- event: chunk: Audio chunk in base64.
- event: done: Stream completion.
Chunk Data Format:
{
"index": 0,
"total": 3,
"audio": "base64-string...",
"mimeType": "audio/wav"
}
Speech-to-Text (STT) Endpoints
Convert spoken audio into text using local or cloud providers.
GET /sdk/stt/providers
List available STT providers and their supported models.
Response (200):
{
"success": true,
"providers": [
{
"id": "native",
"name": "Native (System)",
"description": "Uses system Web Speech API",
"models": []
},
{
"id": "whisper",
"name": "Whisper (Local)",
"description": "Runs locally using Transformers.js",
"models": [
{ "id": "onnx-community/whisper-base", "name": "Whisper Base" }
]
}
]
}
POST /sdk/stt/listen
Listen to the microphone and transcribe speech to text.
Request Body:
{
"provider": "whisper",
"model": "onnx-community/whisper-base",
"timeout": 10000,
"language": "en"
}
Response (200):
{
"success": true,
"text": "Hello world, this is a test."
}
MCP Server Endpoints
Interact with MCP (Model Context Protocol) servers configured in RealTimeX. Supports both local servers (managed by the Hypervisor) and remote servers (connected via MCP Proxy).
GET /sdk/mcp/servers
List all configured MCP servers.
Query Parameters:
| Parameter | Type | Default | Description |
|---|---|---|---|
| provider | string | all | Filter: local, remote, or all |
Response (200):
{
"success": true,
"servers": [
{
"name": "fetch",
"display_name": "Web Fetch",
"description": "Make HTTP requests and fetch web content",
"server_type": "stdio",
"enabled": true,
"provider": "local",
"tags": ["web", "http", "api"]
},
{
"name": "github",
"display_name": "GitHub",
"description": "Interact with GitHub repositories",
"server_type": "remote",
"enabled": true,
"provider": "remote",
"tags": ["github", "git"]
}
]
}
GET /sdk/mcp/servers/:serverName/tools
List available tools for a specific MCP server. Local servers are auto-booted if needed.
Query Parameters:
| Parameter | Type | Default | Description |
|---|---|---|---|
| provider | string | local | Server provider: local or remote |
Response (200):
{
"success": true,
"server": "fetch",
"provider": "local",
"tools": [
{
"name": "fetch",
"description": "Fetches a URL from the internet and extracts its contents as markdown.",
"input_schema": {
"type": "object",
"properties": {
"url": { "type": "string", "format": "uri", "description": "URL to fetch" },
"max_length": { "type": "integer", "default": 5000 }
},
"required": ["url"]
}
}
]
}
POST /sdk/mcp/servers/:serverName/tools/:toolName/execute
Execute a tool on an MCP server. Local servers are auto-booted if not running.
Query Parameters:
| Parameter | Type | Default | Description |
|---|---|---|---|
| provider | string | local | Server provider: local or remote |
Request Body:
{
"arguments": {
"url": "https://httpbin.org/get",
"max_length": 500
}
}
Response (200):
{
"success": true,
"server": "fetch",
"tool": "fetch",
"provider": "local",
"result": {
"content": [
{
"type": "text",
"text": "Contents of https://httpbin.org/get:\n{\"args\": {}, ...}"
}
],
"isError": false
}
}
Security: MCP tool execution is essentially arbitrary code execution. Only grant mcp.tools permission to trusted applications.
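As a sketch of the execute call against the fetch example above (standard library only; helper names illustrative, shown with Developer Mode auth):

```python
import json
import urllib.parse
import urllib.request

def mcp_execute_url(server, tool, provider="local",
                    base="http://localhost:3001"):
    """Build the tool-execution URL, escaping the path segments."""
    return (f"{base}/sdk/mcp/servers/{urllib.parse.quote(server)}"
            f"/tools/{urllib.parse.quote(tool)}/execute?provider={provider}")

def execute_tool(server, tool, arguments, api_key, provider="local"):
    """POST {"arguments": ...} to the execute endpoint and return the result."""
    req = urllib.request.Request(
        mcp_execute_url(server, tool, provider),
        data=json.dumps({"arguments": arguments}).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {api_key}"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# e.g. execute_tool("fetch", "fetch",
#                   {"url": "https://httpbin.org/get", "max_length": 500},
#                   api_key="...")
```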