
The Local Ollama GUI


I ran a small Ollama model in a container and have been doing some experiments. One of the things I wanted to do was get a GUI so I wasn't always running docker to connect (with -it) or writing code to interact with the model.
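For context, this is the kind of command the GUI replaces. A minimal sketch, assuming your Ollama container is named ollama and you've already pulled the mistral model:

docker exec -it ollama ollama run mistral

That drops you into an interactive chat session in the terminal, which works, but a browser-based UI is nicer for longer conversations.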

I saw a post somewhere mentioning that there is a web GUI for Ollama, so I decided to try it. This post shows the quick setup process.

This is part of a series of experiments with AI systems.

Run the Container

The command I used was this one:

docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v ollama-webui:/app/backend/data --name ollama-webui --restart always ghcr.io/ollama-webui/ollama-webui:main

This downloaded the image and started the container, which I'd forgotten about until I saw it running in my Docker list.
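If you want to confirm it's running without opening Docker Desktop, a quick check from the terminal:

docker ps --filter name=ollama-webui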

Connecting and Getting Going

The command above maps the web UI to port 3000 on the host, so I went to http://localhost:3000 and saw this:

2025-01_0162
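If you'd rather check from the command line that something is listening on that port, a quick sketch:

curl -s -o /dev/null -w "%{http_code}\n" http://localhost:3000

A 200 response means the web UI is up.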

I clicked Sign Up and added my info, which isn't really checked.

2025-01_0163

The first account is set as the admin, and I see this screen:

2025-01_0164

I need to select a model to query, and when I click the drop-down, I see the two I downloaded as part of my Ollama container setup.

2025-01_0165
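These are the same models you would see by listing them inside the Ollama container itself; a sketch, again assuming the container is named ollama:

docker exec -it ollama ollama list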

I picked mistral and started asking questions, beginning with one of the suggested prompts on the screen, and I saw:

2025-01_0166
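Under the hood, the GUI is calling the Ollama API. If you'd rather script the same kind of question, you can hit the API directly; a sketch, assuming the Ollama container publishes Ollama's default port 11434 (the prompt here is just a sample):

curl http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "Why is the sky blue?",
  "stream": false
}'

Setting "stream": false returns the full answer as a single JSON response instead of streaming tokens.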

If you like playing with different models, give this a try. It's an easy way to run an AI on your local machine and see how well it can help with whatever you do.

Note: be wary, and make sure you read the disclaimer at the bottom (circled by me).

2025-01_0154
