Local Apps Developer Guide
This guide covers how to integrate your applications with RealTimeX. Depending on your project requirements, you can choose between two operating modes.
Prerequisites
Regardless of the mode you choose, you must first configure your Local App in the RealTimeX Main App:
- Open RealTimeX → Settings → Local Apps
- Create or configure your Local App
- Enter Supabase URL and Anon Key
- For Compatible Mode, click Login to Supabase → Auto-Setup Schema.
Operating Modes
RealTimeX SDK supports two modes based on your stage of development.
1. Developer Mode (API Key)
Recommended for local development. Use an API key to gain full access to all features without manifest configuration.
How to get an API key:
- Open RealTimeX Desktop
- Go to Settings → Tool → Developer API
- Click Generate New API Key
In Developer Mode, you don't need to specify permissions in the SDK constructor. All permissions are granted automatically.
Identity & Data Isolation: In Developer Mode, RealTimeX identifies your application by its API key. If multiple Local Apps use the same API key, they share the same identity. This means registering a custom vector database in one app will overwrite the configuration for all other apps using that same key.
2. Production Mode (App ID)
Default mode when your app is launched by RealTimeX. It uses manifest-based permissions and the RTX_APP_ID environment variable.
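The two modes follow a simple precedence rule: an explicit API key selects Developer Mode, otherwise the presence of RTX_APP_ID implies Production Mode. A minimal sketch of that selection logic (illustrative only; the SDK's actual internals may differ):

```typescript
// Illustrative sketch of SDK mode selection; not the SDK's real implementation.
type Mode = 'developer' | 'production';

function detectMode(
  opts: { apiKey?: string },
  env: Record<string, string | undefined>
): Mode {
  if (opts.apiKey) return 'developer';     // explicit API key wins
  if (env.RTX_APP_ID) return 'production'; // launched by the RealTimeX Main App
  throw new Error('No API key and no RTX_APP_ID: configure one of the two modes');
}

// Launched from the Main App, which sets RTX_APP_ID:
console.log(detectMode({}, { RTX_APP_ID: 'app_123' })); // → "production"
```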
Installation
npm install @realtimex/sdk
Demo Examples
Check out our complete demo applications showcasing all SDK features:
👉 GitHub: local-app-examples
| Example | Language | Description |
|---|---|---|
| nodejs-app | TypeScript + Express | Full-featured demo with TailwindCSS UI |
| python-app | Python + NiceGUI | Interactive demo with real-time UI |
Quick Start
When you start your Local App from the RealTimeX Main App, environment variables RTX_APP_ID and RTX_APP_NAME are automatically set. The SDK auto-detects these.
import express from 'express';
import { RealtimeXSDK } from '@realtimex/sdk';
const app = express();
const sdk = new RealtimeXSDK({
permissions: [
'api.agents', // List agents
'api.workspaces', // List workspaces
'api.threads', // List threads
'webhook.trigger', // Trigger agents
'database.config', // Read Database config
'activities.read', // Read activities
'activities.write', // Write activities
'llm.chat', // Chat completion
'llm.embed', // Generate embeddings
'vectors.read', // Query vectors
'vectors.write', // Store vectors
'stt.listen', // Transcribe audio
'mcp.servers', // List MCP servers
'mcp.tools', // Use MCP tools
],
});
// OR Developer Mode (API Key)
const sdkDev = new RealtimeXSDK({
realtimex: {
apiKey: 'sk-your-api-key'
},
permissions: [
'api.agents', // List agents
'api.workspaces', // List workspaces
'api.threads', // List threads
'webhook.trigger', // Trigger agents
'database.config', // Read Database config
'activities.read', // Read activities
'activities.write', // Write activities
'llm.chat', // Chat completion
'llm.embed', // Generate embeddings
'llm.providers', // List LLM providers (chat, embed)
'vectors.read', // Query vectors
'vectors.write', // Store vectors
]
});
// Get available port (auto-detects or finds free port if conflict)
const port = await sdk.port.getPort();
// Insert activity
const activity = await sdk.activities.insert({
type: 'new_lead',
email: 'user@example.com',
});
// Trigger agent
await sdk.webhook.triggerAgent({
raw_data: activity,
auto_run: true,
agent_name: 'processor',
workspace_slug: 'sales',
thread_slug: 'general',
});
app.listen(port, () => console.log(`Running on port ${port}`));
SDK Features
Supabase Auth & Context
If your RealTimeX Local App requires users to authenticate directly with Supabase (i.e. you have enabled the Require User Authentication toggle in your Local App configuration), you must dynamically load the workspace database credentials and sync the user's login session to the Main App. This lets the Main App act under the user's Supabase session, so Realtime updates and Activities operations satisfy Row Level Security (RLS) instead of being blocked by it.
Important: If Require User Authentication is enabled on the app config, sdk.activities.* operations will fail (due to RLS policy) unless you call sdk.auth.syncSupabaseToken() after the user logs in. If it is disabled (Open Access mode), you do not need to sync any token.
Required Permission: You must include 'database.config' in your SDK permissions to fetch the credentials.
import { createClient } from '@supabase/supabase-js';
// 1. Fetch Supabase configuration from Main App
const config = await sdk.database.getConfig();
console.log('Using Supabase URL:', config.url);
// 2. Initialize your Supabase client
const supabase = createClient(config.url, config.anonKey, {
auth: { persistSession: true }
});
// 3. User logs in to your app
const { data, error } = await supabase.auth.signInWithPassword({
email: 'user@example.com',
password: 'password123'
});
// 4. Sync token to Main App for RLS operations
if (data.session?.access_token) {
const syncResult = await sdk.auth.syncSupabaseToken(data.session.access_token);
console.log('Token synced successfully:', syncResult.success);
}
Activities CRUD
Manage your activities table directly through the SDK without needing direct database access. Any data changes (INSERT, UPDATE, or DELETE) will automatically create a new calendar event. If Automation Agent Handlers are configured, they will also trigger an automated agent task.
If you enabled Require User Authentication in your Local App settings, you must call sdk.auth.syncSupabaseToken(token) before using these methods, otherwise Supabase Row Level Security (RLS) will block the operations.
// Insert new activity
const activity = await sdk.activities.insert({
type: 'order',
amount: 100
});
// List activities with filters
const pending = await sdk.activities.list({
status: 'pending',
limit: 10
});
// Get single activity
const item = await sdk.activities.get('activity-uuid');
// Update activity
await sdk.activities.update('activity-uuid', {
status: 'processed'
});
// Delete activity
await sdk.activities.delete('activity-uuid');
Webhook & Agent Triggering
Trigger AI Agents to process data either manually or automatically.
// Manual Mode (Default) - Creates calendar event for review
await sdk.webhook.triggerAgent({
raw_data: { email: 'user@example.com' },
});
// Auto-run Mode - Triggers agent immediately
await sdk.webhook.triggerAgent({
raw_data: activity,
auto_run: true,
agent_name: 'pdf-processor',
workspace_slug: 'operations',
thread_slug: 'general', // or "create_new" to create new thread
prompt: 'Summarize this file' // Optional
});
Agent Chat & Sessions
Interact with AI Agents using stateful sessions, supporting multi-turn conversations and streaming responses.
Required Permissions:
const sdk = new RealtimeXSDK({
permissions: ['agent.chat']
});
1. Session Management
Create a session to maintain conversation history.
// Create a new session
const session = await sdk.agent.createSession({
agent_name: '@agent', // Optional: specific agent
workspace_slug: 'operations' // Optional
});
console.log('Session ID:', session.session_id);
// Get session info
const info = await sdk.agent.getSession(session.session_id);
// Close session
await sdk.agent.closeSession(session.session_id);
2. Chat (Synchronous)
Send a message and wait for the full response.
const response = await sdk.agent.chat(session.session_id, "Analyze this dataset");
console.log(response.text);
console.log('Thoughts:', response.thoughts);
3. Chat (Streaming)
Stream the response token-by-token, including thoughts and rich content blocks.
for await (const event of sdk.agent.streamChat(session.session_id, "Analyze this dataset")) {
if (event.type === 'agentThought') {
console.log('Thinking:', event.thought);
} else if (event.type === 'textResponse') {
process.stdout.write(event.textResponse);
} else if (event.type === 'responseData') {
// Handle rich blocks (tool use, files, etc.)
const block = JSON.parse(event.textResponse);
console.log('Rich Block:', block.dataType);
}
}
4. Stateless Chat (One-off)
If you don't need a session, you can send a one-off message.
const { response } = await sdk.agent.startChat("Quick question", {
agent_name: '@agent'
});
Public Metadata API
Access RealTimeX system metadata like available agents and workspaces.
const agents = await sdk.api.getAgents();
const workspaces = await sdk.api.getWorkspaces();
const threads = await sdk.api.getThreads('workspace-slug');
const taskStatus = await sdk.api.getTask('task-uuid');
Port Management
The SDK includes automatic port management to prevent conflicts when running multiple Local Apps simultaneously.
// Get an available port (auto-detects from RTX_PORT or finds free port)
const port = await sdk.port.getPort();
app.listen(port);
// Or use individual methods
const suggested = sdk.port.getSuggestedPort(); // RTX_PORT env or 8080
const available = await sdk.port.isPortAvailable(3000);
const freePort = await sdk.port.findAvailablePort(8080);
How it works: When launched from RealTimeX, your app receives an RTX_PORT environment variable with the configured port. The SDK uses this as the starting point and automatically finds the next available port if it is occupied.
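The behavior described above can be sketched with Node's built-in net module: try the suggested port, then probe upward until a bind succeeds. This is an illustrative standalone version; the SDK's own implementation may differ:

```typescript
import net from 'node:net';

// Check whether a TCP port can be bound on localhost.
function isPortAvailable(port: number): Promise<boolean> {
  return new Promise((resolve) => {
    const server = net.createServer();
    server.once('error', () => resolve(false)); // e.g. EADDRINUSE
    server.once('listening', () => server.close(() => resolve(true)));
    server.listen(port, '127.0.0.1');
  });
}

// Start from RTX_PORT (or 8080) and walk upward to the first free port.
async function findAvailablePort(start: number, maxTries = 50): Promise<number> {
  for (let port = start; port < start + maxTries; port++) {
    if (await isPortAvailable(port)) return port;
  }
  throw new Error(`No free port found in ${start}-${start + maxTries - 1}`);
}

const suggested = Number(process.env.RTX_PORT ?? 8080);
const port = await findAvailablePort(suggested);
console.log(`Binding to port ${port}`);
```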
App Data Directory
Every Local App has a dedicated persistent storage directory on the user's machine. You can use this to store local databases, logs, or large assets that shouldn't be in the source code.
Path Format: ~/.realtimex.ai/Resources/local-apps/{appId}
const dataDir = await sdk.getAppDataDir();
console.log('My persistent storage is at:', dataDir);
Developer Mode: When using an API key, the directory is derived from the masked API key to prevent data collisions between different development environments.
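For reference, the documented path format can be resolved locally with Node's os and path modules. A sketch, where the appId value is hypothetical (prefer sdk.getAppDataDir(), which also handles the Developer Mode masking):

```typescript
import os from 'node:os';
import path from 'node:path';

// Resolve ~/.realtimex.ai/Resources/local-apps/{appId} for a given app ID.
function appDataDir(appId: string): string {
  return path.join(os.homedir(), '.realtimex.ai', 'Resources', 'local-apps', appId);
}

// 'my-app' is a hypothetical app ID for illustration.
console.log(appDataDir('my-app'));
```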
LLM Proxy & Vector Store
The SDK provides access to RealtimeX's LLM capabilities without needing to manage API keys directly. Perfect for building RAG (Retrieval-Augmented Generation) applications.
Required Permissions:
const sdk = new RealtimeXSDK({
permissions: ['llm.chat', 'llm.embed', 'llm.providers', 'vectors.read', 'vectors.write']
});
List Available Models
// Get chat (LLM) providers
const { providers: chatProviders } = await sdk.llm.chatProviders();
console.log('Chat models:', chatProviders[0].models);
// Get embedding providers
const { providers: embedProviders } = await sdk.llm.embedProviders();
console.log('Embedding models:', embedProviders[0].models);
Chat Completion
// Sync Chat
const response = await sdk.llm.chat(
[
{ role: 'system', content: 'You are a helpful assistant.' },
{ role: 'user', content: 'What is RealtimeX?' }
],
{
model: 'gpt-4o', // Optional: specific model
provider: 'openai', // Optional: specific provider
temperature: 0.7, // Optional: 0.0-2.0
max_tokens: 1000 // Optional: max response tokens
}
);
console.log(response.response?.content);
// Streaming Chat
for await (const chunk of sdk.llm.chatStream(messages, options)) {
process.stdout.write(chunk.textResponse || '');
}
Generate Embeddings
const { embeddings, dimensions, provider, model } = await sdk.llm.embed(
['Hello world', 'Goodbye'],
{ provider: 'openai', model: 'text-embedding-3-small' } // Optional
);
// embeddings: number[][] - vector arrays
// dimensions: number - vector dimension (e.g., 1536)
Vector Store (RAG)
Store and search vectors for building knowledge bases:
// Upsert vectors with metadata
await sdk.llm.vectors.upsert([
{
id: 'chunk-1',
vector: embeddings[0],
metadata: {
text: 'Hello world', // Original text (for retrieval)
documentId: 'doc-1', // Logical grouping
customField: 'any value' // Any custom metadata
}
}
], {
workspaceId: 'ws-123' // Optional: physical namespace isolation
});
// Query similar vectors
const results = await sdk.llm.vectors.query(queryVector, {
topK: 5, // Number of results
workspaceId: 'ws-123', // Optional: search in specific workspace
filter: { documentId: 'doc-1' } // Optional: filter by document
});
// returns: { success, results: [{ id, score, metadata }] }
// List all namespaces/workspaces storage for this app
const { workspaces } = await sdk.llm.vectors.listWorkspaces();
// returns: { success, workspaces: ['ws-123', 'default', ...] }
// Delete all vectors in a workspace
await sdk.llm.vectors.delete({
deleteAll: true,
workspaceId: 'ws-123'
});
High-Level Helpers (Recommended)
For common RAG patterns, use these helpers that combine embedding + storage/search:
// embedAndStore: Text → Embed → Store (one call)
await sdk.llm.embedAndStore(
['Document text 1', 'Document text 2'], // texts to embed
{
documentId: 'doc-123', // Optional: logical grouping
workspaceId: 'ws-456', // Optional: physical isolation
provider: 'openai', // Optional: embedding provider
model: 'text-embedding-3-small' // Optional: embedding model
}
);
// search: Query → Embed → Search (one call)
const searchResults = await sdk.llm.search(
'What is RealtimeX?', // search query (text, not vector)
{
topK: 5, // Number of results
workspaceId: 'ws-123', // Optional: search in workspace
documentId: 'doc-1', // Optional: filter by document
provider: 'openai', // Optional: embedding provider
model: 'text-embedding-3-small' // Optional: embedding model
}
);
// returns: [{ id, score, metadata: { text, documentId, ... } }]
// Use results for RAG
const context = searchResults.map(r => r.metadata.text).join('\n');
const response = await sdk.llm.chat([
{ role: 'system', content: `Context:\n${context}` },
{ role: 'user', content: 'What is RealtimeX?' }
]);
Isolation vs Filtering:
- workspaceId: Creates a physical namespace (sdk_{appId}_{wsId}); data is completely isolated.
- documentId: A metadata tag filtered after search; logical grouping within a workspace.
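The physical namespace format above can be illustrated with a small helper. This is only a sketch of the naming scheme the document describes; the SDK composes the name internally, and the IDs shown are hypothetical:

```typescript
// Illustrative: compose the physical vector namespace from app and workspace IDs.
function vectorNamespace(appId: string, workspaceId = 'default'): string {
  return `sdk_${appId}_${workspaceId}`;
}

console.log(vectorNamespace('app1', 'ws-123')); // → "sdk_app1_ws-123"
console.log(vectorNamespace('app1'));           // → "sdk_app1_default"
```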
Custom Vector Storage
By default, every Local App is assigned an isolated LanceDB instance. However, you can discover other supported providers and register your own production-grade vector database (like PGVector, Chroma, or AstraDB).
1. Discover Supported Providers
Query available vector database engines and their required configuration fields.
const { providers } = await sdk.llm.vectors.listProviders();
// providers: [{ name, label, description, fields: [...] }]
2. Register Custom Configuration
Once you have the configuration for your provider, register it. The SDK will automatically test the connection before saving.
await sdk.llm.vectors.registerConfig('pgvector', {
databaseUrl: 'postgresql://user:pass@host:5432/db',
tableName: 'my_app_vectors'
});
3. Retrieve Current Configuration
You can programmatically check which storage backend your app is currently using.
const { provider, config } = await sdk.llm.vectors.getConfig();
console.log(`Using ${provider} at ${config.uri || config.databaseUrl}`);
Persistence: Once a custom configuration is successfully registered, RealTimeX persists it for your App ID. All subsequent upsert, query, and search calls automatically use this backend.
Text-to-Speech (TTS)
Generate speech from text using local (Supertonic, Piper) or cloud (OpenAI, ElevenLabs) providers.
Required Permissions:
const sdk = new RealtimeXSDK({
permissions: ['tts.generate']
});
List Available Providers
const providers = await sdk.tts.listProviders();
// providers: [{ id, name, type, configured, config: { voices, languages, speed } }]
// Filter to configured providers only
const available = providers.filter(p => p.configured);
Generate Speech (Buffer)
Get complete audio as a buffer - best for short text:
const audioBuffer = await sdk.tts.speak("Hello world!", {
provider: 'supertonic_local', // Optional: defaults to system default
voice: 'default', // Optional: voice/speaker ID
speed: 1.0, // Optional: 0.5-2.0
language: 'en', // Optional: for multilingual providers
num_inference_steps: 10 // Optional: Supertonic quality (1-20)
});
// audioBuffer is ArrayBuffer, save or play it
const blob = new Blob([audioBuffer], { type: 'audio/wav' });
const audio = new Audio(URL.createObjectURL(blob));
audio.play();
Streaming Speech (Chunked)
For long text, use streaming to get progressive audio chunks:
for await (const chunk of sdk.tts.speakStream("Long text content...")) {
console.log(`Chunk ${chunk.index + 1}/${chunk.total}`);
// chunk.audio is ArrayBuffer (already decoded!)
const blob = new Blob([chunk.audio], { type: chunk.mimeType });
const audio = new Audio(URL.createObjectURL(blob));
await audio.play(); // Play each chunk as it arrives
}
Provider Types:
- Local (client): Supertonic, Piper - runs offline, no API costs
- Cloud (server): OpenAI, ElevenLabs, Groq - requires API keys
Speech-to-Text (STT)
Convert spoken audio into text using local or cloud providers.
Required Permissions:
const sdk = new RealtimeXSDK({
permissions: ['stt.listen']
});
List Available Providers & Models
Retrieve a list of available STT providers and their supported models.
const providers = await sdk.stt.listProviders();
// providers: [
// { id: 'native', name: 'Native (System)', models: [] },
// { id: 'whisper', name: 'Whisper (Local)', models: [{ id: 'onnx-base', ... }] },
// { id: 'groq', name: 'Groq Cloud', models: [...] }
// ]
console.log(`Available providers: ${providers.map(p => p.name).join(', ')}`);
Transcribe Audio
Listen to the microphone and transcribe speech to text.
const { text, error } = await sdk.stt.listen({
language: 'en-US', // Optional: 'en-US', 'vi-VN', etc.
timeout: 10000, // Optional: Max duration in ms (default 60s)
provider: 'whisper', // Optional: 'native', 'whisper', 'groq'
model: 'onnx-community/whisper-base' // Optional: specific model ID (if supported)
});
if (error) {
console.error('STT failed:', error);
} else {
console.log('Transcript:', text);
}
Providers:
- native: Uses the system's Web Speech API (Fast, no download required, limited language support).
- whisper: Uses local Transformers.js via ONNX (High accuracy, privacy-focused, requires model download).
- groq: Uses Groq Cloud API (Fastest cloud inference, requires API key).
MCP Server Tools
Interact with MCP (Model Context Protocol) servers configured in RealTimeX. List available servers, discover their tools, and execute them programmatically.
Required Permissions:
const sdk = new RealtimeXSDK({
permissions: ['mcp.servers', 'mcp.tools']
});
List MCP Servers
// List all servers (local + remote)
const servers = await sdk.mcp.getServers();
// Filter by provider
const localOnly = await sdk.mcp.getServers('local');
const remoteOnly = await sdk.mcp.getServers('remote');
for (const server of servers) {
console.log(`${server.name} (${server.provider}) - ${server.display_name}`);
}
List Server Tools
const tools = await sdk.mcp.getTools('fetch');
for (const tool of tools) {
console.log(`Tool: ${tool.name}`);
console.log(` Description: ${tool.description}`);
console.log(` Parameters: ${JSON.stringify(tool.input_schema)}`);
}
Execute a Tool
// Execute the 'fetch' tool on the local 'fetch' MCP server
const result = await sdk.mcp.executeTool('fetch', 'fetch', {
url: 'https://httpbin.org/get',
max_length: 500
});
console.log(result);
// { content: [{ type: 'text', text: '...' }], isError: false }
// Execute on a remote server
const remoteResult = await sdk.mcp.executeTool(
'github', 'list-repos', { user: 'octocat' }, 'remote'
);
Security: MCP tool execution is essentially arbitrary code execution. Only grant the mcp.tools permission to applications you trust.
Auto-boot: Local MCP servers are automatically started when you call getTools() or executeTool() if they are not already running.
REST API Access
If you are not using one of our official SDKs, you can communicate with RealTimeX directly via our REST API.
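As a sketch of what a raw HTTP call might look like, here is a request built with the standard fetch API. The base URL, path, and Authorization scheme shown are assumptions for illustration only; consult the REST API reference for the actual endpoints and authentication headers:

```typescript
// Assumption: the Main App exposes an HTTP API on a local port and accepts a
// bearer-style Developer API key. URL, path, and header scheme are illustrative.
function buildRequest(baseUrl: string, apiKey: string, apiPath: string): Request {
  return new Request(new URL(apiPath, baseUrl), {
    headers: {
      Authorization: `Bearer ${apiKey}`,
      'Content-Type': 'application/json',
    },
  });
}

const req = buildRequest('http://127.0.0.1:8080', 'sk-your-api-key', '/v1/agents');
console.log(req.url); // http://127.0.0.1:8080/v1/agents
// const res = await fetch(req); // send it once the real endpoint is confirmed
```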