branch: elpa/gptel
commit fa83f993fef0a29e544e952aace5260353174bce
Author: Alexis Gallagher <ale...@alexisgallagher.com>
Commit: GitHub <nore...@github.com>

    README: Add setup instructions for Open WebUI (#954)
---
 README.org | 58 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 58 insertions(+)

diff --git a/README.org b/README.org
index 59b89ec33e..349cdaae27 100644
--- a/README.org
+++ b/README.org
@@ -14,6 +14,7 @@ gptel is a simple Large Language Model chat client for Emacs, with support for m
 | Anthropic (Claude)   | ✓        | [[https://www.anthropic.com/api][API key]]                    |
 | Gemini               | ✓        | [[https://makersuite.google.com/app/apikey][API key]]                    |
 | Ollama               | ✓        | [[https://ollama.ai/][Ollama running locally]]     |
+| Open WebUI           | ✓        | [[https://openwebui.com/][Open WebUI running locally]]   |
 | Llama.cpp            | ✓        | [[https://github.com/ggml-org/llama.cpp/tree/master/tools/server#quick-start][Llama.cpp running locally]]  |
 | Llamafile            | ✓        | [[https://github.com/Mozilla-Ocho/llamafile#quickstart][Local Llamafile server]]     |
 | GPT4All              | ✓        | [[https://gpt4all.io/index.html][GPT4All running locally]]    |
@@ -102,6 +103,7 @@ gptel uses Curl if available, but falls back to the built-in url-retrieve to wor
       - [[#azure][Azure]]
       - [[#gpt4all][GPT4All]]
       - [[#ollama][Ollama]]
+      - [[#open-webui][Open WebUI]]
       - [[#gemini][Gemini]]
       - [[#llamacpp-or-llamafile][Llama.cpp or Llamafile]]
       - [[#kagi-fastgpt--summarizer][Kagi (FastGPT & Summarizer)]]
@@ -335,6 +337,62 @@ The above code makes the backend available to select.  If you want it to be the
 
 #+html: </details>
 
+#+html: <details><summary>
+**** Open WebUI
+#+html: </summary>
+
+[[https://openwebui.com/][Open WebUI]] is an open-source, self-hosted system that provides a multi-user web chat interface and an API endpoint for accessing LLMs, especially LLMs running locally on inference servers like Ollama.
+
+Because it presents an OpenAI-compatible endpoint, you use ~gptel-make-openai~ to register it as a backend.
+
+For instance, you can use this form to register a backend for a local instance of Open WebUI served via http on port 3000:
+
+#+begin_src emacs-lisp
+(gptel-make-openai "OpenWebUI"
+  :host "localhost:3000"
+  :protocol "http"
+  :key "KEY_FOR_ACCESSING_OPENWEBUI"
+  :endpoint "/api/chat/completions"
+  :stream t
+  :models '("gemma3n:latest"))
+#+end_src
+
+Or if you are running Open WebUI on another host on your local network (~box.local~), serving via https with self-signed certificates, this will work:
+
+#+begin_src emacs-lisp
+(gptel-make-openai "OpenWebUI"
+  :host "box.local"
+  :curl-args '("--insecure") ; needed for self-signed certs
+  :key "KEY_FOR_ACCESSING_OPENWEBUI"
+  :endpoint "/api/chat/completions"
+  :stream t
+  :models '("gemma3n:latest"))
+#+end_src
+
+To find your API key in Open WebUI, click your user name in the bottom left, go to Settings, then Account, and click Show in the API Keys section.
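+
+If you would rather not hard-code the key in your configuration, you can store it in =~/.authinfo= and pass a function as ~:key~.  Here is a minimal sketch using ~gptel-api-key-from-auth-source~ (the same lookup gptel uses by default for ~gptel-api-key~), assuming the key is filed under the backend's host name:
+
+#+begin_src emacs-lisp
+;; Sketch: read the API key from auth-source instead of hard-coding it.
+;; Assumes an ~/.authinfo entry like:
+;;   machine localhost:3000 password YOUR-OPENWEBUI-KEY
+(gptel-make-openai "OpenWebUI"
+  :host "localhost:3000"
+  :protocol "http"
+  :key #'gptel-api-key-from-auth-source ; looks up the key by the backend's host
+  :endpoint "/api/chat/completions"
+  :stream t
+  :models '(gemma3n:latest))
+#+end_src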
+
+Refer to the documentation of =gptel-make-openai= for more configuration options.
+
+You can pick this backend from the menu when using gptel (see [[#usage][Usage]]).
+
+***** (Optional) Set as the default gptel backend
+
+The above code makes the backend available to select.  If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=.  Use this instead of the above.
+#+begin_src emacs-lisp
+;; OPTIONAL configuration
+(setq
+ gptel-model 'gemma3n:latest ; gptel-model expects a symbol, not a string
+ gptel-backend (gptel-make-openai "OpenWebUI"
+                 :host "localhost:3000"
+                 :protocol "http"
+                 :key "KEY_FOR_ACCESSING_OPENWEBUI"
+                 :endpoint "/api/chat/completions"
+                 :stream t
+                 :models '(gemma3n:latest)))
+#+end_src
+
+#+html: </details>
+
 #+html: <details><summary>
 **** Gemini
 #+html: </summary>
