Importing custom models

Importing custom LLMs into RealTimeX

RealTimeX allows you to easily load any valid GGUF file and select it as your LLM with zero setup. Use text-generation LLMs only for this process; embedding models will not function as chat models.
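
For example, you can fetch a GGUF file from Hugging Face with the huggingface-cli tool before selecting it in RealTimeX. The repository and file names below are illustrative examples only, not RealTimeX defaults:

```bash
# Install the Hugging Face CLI (ships with the huggingface_hub package)
pip install -U "huggingface_hub[cli]"

# Download a quantized GGUF chat model; repo and filename are examples only
huggingface-cli download TheBloke/Mistral-7B-Instruct-v0.2-GGUF \
  mistral-7b-instruct-v0.2.Q4_K_M.gguf --local-dir ./models
```

The downloaded .gguf file can then be selected as your LLM in RealTimeX.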

Import a model into RealTimeX

‼️

Desktop only!
The following steps currently apply to RealTimeX Desktop.


Use Ollama to Import and Run Models

Ollama makes it extremely easy to run LLMs locally, and RealTimeX can connect to any Ollama-served model.

Step-by-step:

  1. Install Ollama (download it from ollama.com for your platform; see the commands after this list)

  2. Pull an LLM with Ollama (for example, ollama pull llama3)

  3. Connect RealTimeX to Ollama

    • In RealTimeX Desktop, go to the LLM setup or model selection screen.
    • Choose Ollama as the model provider.
    • RealTimeX will auto-detect running Ollama models. Select the one you want to use.
  4. Start using the model!

    • The model will now be available in RealTimeX just like any built-in option.
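
A minimal sketch of steps 1 and 2 from a terminal, assuming a Linux or macOS shell; the llama3 model name is only an example and any Ollama chat model works:

```bash
# Step 1: install Ollama (Linux install script; on macOS/Windows, download the app from ollama.com)
curl -fsSL https://ollama.com/install.sh | sh

# Step 2: pull a model; llama3 is just an example
ollama pull llama3

# Verify the Ollama server is running and the model is available locally
ollama list
curl http://localhost:11434/api/tags
```

Ollama serves its API on http://localhost:11434 by default; make sure it is running before you open the model selection screen in RealTimeX.
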
💡

You can run multiple models with Ollama. Just pull each one with ollama pull <modelname> and select the active model in RealTimeX.
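
For instance, assuming two example model names (any Ollama chat models work):

```bash
# Pull each model you want available locally; names are examples only
ollama pull llama3
ollama pull mistral

# Show everything Ollama has locally, then pick the active model inside RealTimeX
ollama list
```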


Troubleshooting