I ran a small ollama model in a container and have been doing some experiments. One of the things I wanted to do was get a GUI so I wasn't always running docker with -it to connect or writing code to interact with the model.
I saw a post somewhere mentioning that there is a web GUI for Ollama, so I decided to try it. This post shows the quick setup process.
This is part of a series of experiments with AI systems.
Run the Container
The command I used was this one:
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v ollama-webui:/app/backend/data --name ollama-webui --restart always ghcr.io/ollama-webui/ollama-webui:main
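For reference, here is the same command broken out across lines, with my own reading of what each option does (the annotations are mine, not from the project's docs):

# -d                 run detached, in the background
# -p 3000:8080       expose the UI (port 8080 inside the container) on host port 3000
# --add-host         make host.docker.internal resolve to the Docker host,
#                    so the web UI can reach the Ollama server running there
# -v                 keep accounts and chat history in a named volume
# --restart always   bring the container back up after reboots or crashes
docker run -d \
  -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v ollama-webui:/app/backend/data \
  --name ollama-webui \
  --restart always \
  ghcr.io/ollama-webui/ollama-webui:main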
This downloaded the image and started the container, which I then forgot about until I noticed it running in my Docker list.
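If you want to confirm it's up without opening Docker Desktop, a quick check from the terminal works too (the filter matches the --name we gave the container):

docker ps --filter "name=ollama-webui"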
Connecting and Getting Going
The command above maps the UI to port 3000, so I went to port 3000 on my local host and saw this:
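If nothing loads, a quick way to check whether the UI is answering at all (assuming curl is available on your machine):

curl -s -o /dev/null -w "%{http_code}\n" http://localhost:3000

This should print 200 if the UI is serving pages.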
I clicked Sign up and added my info, which isn't really validated.
The first account is set as the admin, and I see this screen:
I need to select a model to query, and when I click the drop-down, I see the two I downloaded as part of my ollama container setup.
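The drop-down is populated from whatever models the Ollama server has already pulled. You can list (or add) models from the host; this sketch assumes your Ollama container is named ollama:

docker exec -it ollama ollama list
docker exec -it ollama ollama pull mistral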
I picked mistral and started asking questions. I began by clicking one of the suggested prompts on the screen, and I saw:
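Under the hood, the web UI is calling the same Ollama API you can hit directly. A minimal sketch with curl, assuming Ollama is listening on its default port 11434 on the host:

curl http://localhost:11434/api/generate -d '{"model": "mistral", "prompt": "Why is the sky blue?", "stream": false}'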
If you like to play with different models, give this a try; it's an easy way to run an AI on your local machine and see how well it can help you with anything you do.
Note: be wary, and make sure you read the disclaimer at the bottom (circled by me).