Creating a “Custom GPT” with Open Web UI

TL;DR:

  1. Configure your RAG settings
  2. Set up a new Workspace with uploaded documents
  3. Set up a new Model, basing it on any model you already have in your models list
  4. Select that new model from your list
  5. Enjoy

Why pay $20 per month to OpenAI when you can do it for “free” using Open WebUI?

People say “free” a lot in the AI space. It’s not really free unless you run OWUI locally, and you have the time and skills to set it up.

Alternatively, it’ll cost you hosting somewhere like Digital Ocean, plus the time to set it up, plus the cost of API calls to whichever LLM you choose.

Anyway… here’s a very quick run-through of how to set it up. I don’t go into the details of RAG settings, or the options that differ depending on whether you’re running locally or paying for something like OpenAI. It’s very much a “quick start”…

Click on your avatar icon in the top-right to get to your settings:

Select “Settings”

Select “Admin Settings”

Click on “Documents”

Select the model you want to use for “Embedding Model Engine”
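The embedding model is the piece that turns your uploaded documents into vectors, so that the chunks most relevant to a question can be retrieved and handed to the LLM. As a rough illustration of the retrieval idea only (toy hand-made vectors, not output from a real embedding model):

```python
import math

def cosine_similarity(a, b):
    """Similarity between two embedding vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings": in reality the embedding model you select here
# produces these from your document chunks and from the user's question.
chunk_vectors = {
    "installing Open WebUI": [0.9, 0.1, 0.0],
    "configuring RAG settings": [0.1, 0.9, 0.2],
    "pricing and hosting": [0.0, 0.2, 0.9],
}
question_vector = [0.2, 0.8, 0.1]  # e.g. "How do I set up RAG?"

# Retrieval = pick the chunk whose vector is closest to the question's.
best = max(chunk_vectors, key=lambda k: cosine_similarity(chunk_vectors[k], question_vector))
print(best)  # → "configuring RAG settings"
```

This is why the choice of embedding model matters: it determines how well “closeness” in vector space matches actual relevance.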

Go back to your Workspace

Click on “Knowledge” and then the “+” icon to create a new knowledge base

Enter the details

Drag and drop, or click on the “+” to upload documents

Congratulations – you now have a knowledge base
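If you’d rather script this step than drag and drop, Open WebUI also exposes a REST API for attaching an already-uploaded file to a knowledge base. A minimal sketch, assuming a local instance on the default port; the ids and API key are placeholders, and you should check the endpoint path against the API docs for your Open WebUI version:

```python
import json
import urllib.request

BASE_URL = "http://localhost:3000"  # assumption: default local Open WebUI address
API_KEY = "sk-..."                  # your Open WebUI API key (placeholder)

def add_file_request(knowledge_id: str, file_id: str) -> urllib.request.Request:
    """Build the POST request that attaches an already-uploaded file
    (by its file id) to a knowledge base."""
    body = json.dumps({"file_id": file_id}).encode()
    return urllib.request.Request(
        f"{BASE_URL}/api/v1/knowledge/{knowledge_id}/file/add",
        data=body,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = add_file_request("your-knowledge-id", "your-file-id")
print(req.full_url)
# Sending it requires a running instance:
#   urllib.request.urlopen(req)
```

The UI is simpler for a one-off, but the API route is handy if you re-sync documents regularly.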

Click on Models and then the “+” icon to add a new model

You’ll be presented with this form. The UX is a little poor; finding the fields to edit takes a bit of squinting.

Give your model a name, select the base model and visibility, and run through the other options, modifying them as you like.

IMPORTANT: Ensure you write an appropriate system prompt that tells the LLM to use the provided documentation when answering questions, and that states whether you also want it to use its general knowledge.
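As an illustration, a system prompt along these lines covers both points (the wording is just a starting point, adjust it to your use case):

```
You are a support assistant for Open WebUI. Answer questions using the
attached documentation. If the documentation does not cover the question,
say so rather than guessing; you may then fall back on your general
knowledge, but clearly label that part of the answer as such.
```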

Select the knowledge base you created earlier

Congratulations – you now have a new model with attached knowledge base.

Click on New Chat

Select your new model

In this case I uploaded all the scraped documentation from the Open WebUI website, so I can ask it questions about Open WebUI.
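You can also query the new model outside the chat UI, via Open WebUI’s OpenAI-compatible chat endpoint. A minimal sketch, again assuming a local instance on the default port; the API key and model id are placeholders (the model id is shown on the Models page):

```python
import json
import urllib.request

BASE_URL = "http://localhost:3000"  # assumption: default local Open WebUI address
API_KEY = "sk-..."                  # your Open WebUI API key (placeholder)
MODEL_ID = "openwebui-docs-helper"  # hypothetical id of the custom model

def build_chat_request(question: str) -> urllib.request.Request:
    """Build a request to Open WebUI's OpenAI-compatible chat endpoint."""
    payload = {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": question}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/api/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("How do I create a knowledge base in Open WebUI?")
print(req.full_url)
# Sending it requires a running instance:
#   with urllib.request.urlopen(req) as resp:
#       print(resp.read().decode())
```

Because the model carries its own system prompt and attached knowledge base, the API caller doesn’t need to resend the documents with every request.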

demodomain

Comments

  • Great write-up! One question: I have project-related documentation containing lots of screenshots and other images. If I upload this documentation and create a custom model based on an image-capable LLM, will the text responses be based on image input from the documentation? If yes, do you have a specific open-source model that would be capable of this?

    • Sure, there are multi-modal models – give one some text and some images and it’ll respond with text. If the position of your images within the text is important, then you’ll need to save the whole page as an image (which is not fun), or save everything as a PDF and use a model capable of parsing PDFs and retaining their structure, so it can “see” the text and images in the right way.

      I send my PDFs to Gemini as I think it’s better at parsing PDFs. I’m not sure which open-source models would be suitable.
