Large Language Models
💡 Tip: Multi-modal models (text-to-text and image-to-text) are supported for both System and Workspace use.
RealTimeX lets you chat, generate, and build with leading LLMs, from local models running on your own hardware to global cloud providers.
Select your preferred provider and model; some providers require additional configuration, such as an API key or endpoint URL.
Supported Language Model Providers
Local Language Model Providers
Built-in (default): Fast, private default model available to all users.
Ollama: Modern local LLM runner; supports many open models.
LM Studio: Easy-to-use desktop app for local LLM inference.
LocalAI: Open-source, extensible local inference backend.
Cloud Language Model Providers
OpenAI: GPT-4o, GPT-4, GPT-3.5, and more.
Azure OpenAI: Microsoft's enterprise OpenAI service.
AWS Bedrock: Amazon's managed generative AI platform.
Anthropic: Claude family of models (Claude 3 Opus, Sonnet, Haiku, etc.).
Cohere: Enterprise language and embedding models.
Google Gemini Pro: Google's multi-modal AI models.
Hugging Face: Run models from the Hugging Face Hub.
Together AI: Fast multi-model provider with open APIs.
OpenRouter: Unified API for many top LLMs.
Perplexity AI: Conversational search and AI answers.
Mistral API: Fast, powerful open models (Mistral, Mixtral, etc.).
Groq: Ultra-fast inference for popular LLMs.
KoboldCPP: Run GGUF models on your own hardware.
OpenAI (generic): Any OpenAI-compatible API endpoint.
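Several of the providers above (Ollama, LM Studio, LocalAI, and the generic OpenAI option) expose OpenAI-compatible HTTP endpoints, so the same chat-completion request shape works against any of them. A minimal sketch using only the Python standard library is shown below; the base URL and model name are placeholder assumptions and depend on your local setup:

```python
import json
from urllib import request

# Placeholder values -- substitute your provider's actual endpoint and model.
# (Ollama, for example, typically serves an OpenAI-compatible API locally.)
BASE_URL = "http://localhost:11434/v1"
MODEL = "llama3"

def build_chat_request(prompt: str) -> request.Request:
    """Build (but do not send) an OpenAI-style chat-completion request."""
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }
    return request.Request(
        url=f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("Hello!")
# Actually sending the request requires a running server:
# with request.urlopen(req) as resp:
#     reply = json.load(resp)["choices"][0]["message"]["content"]
```

Because the request format is shared, switching between these providers usually only means changing the base URL, model name, and (for cloud providers) adding an Authorization header with your API key.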