Bitvise
2026-05-07
Cloud Computing

Run a Private AI Image Generator on Your Machine with Docker and Open WebUI

Set up a private, local AI image generator using Docker Model Runner and Open WebUI. No cloud subscription needed.

Many of us know the frustration of using online AI image services. You upload a prompt, worry about privacy, watch your credit balance shrink, or get blocked by a content filter for something as innocent as a dragon in a business suit. What if you could keep everything on your own computer and still enjoy a polished chat interface? That's now possible with Docker Model Runner and Open WebUI.

With just a few commands, you can download an image-generation model, connect it to a familiar chat UI, and start creating images—all locally, privately, and entirely under your control. No cloud subscription, no data leaving your machine, no unexpected costs.

This guide walks you through setting up your own private image generator, step by step.

The Benefits of Local Image Generation

Running an AI image model locally offers several advantages:

Source: www.docker.com
  • Privacy: Your prompts and generated images never leave your machine.
  • No ongoing costs: Once you have the model, generate as many images as you want.
  • No content filters: You control what you generate (within legal boundaries, of course).
  • Offline capability: No internet required after initial model download.

What You'll Need

Before we begin, make sure your system meets these requirements:

  • Docker Desktop (macOS) or Docker Engine (Linux)
  • Approximately 8 GB of free RAM for a small model; more is better
  • GPU (optional but recommended): NVIDIA (CUDA), Apple Silicon (MPS), or CPU fallback

If you can run the following command without errors, you're ready to proceed:

docker model version
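The readiness check can be scripted as a small preflight. This is a minimal sketch; the messages and the suggested fixes are illustrative, not official Docker output:

```shell
# Preflight: confirm the Docker CLI and the Model Runner plugin both respond.
MSG=""
if ! command -v docker >/dev/null 2>&1; then
  MSG="Docker CLI not found -- install Docker Desktop or Docker Engine"
elif ! docker model version >/dev/null 2>&1; then
  MSG="Docker Model Runner not available -- enable it in Docker Desktop settings"
else
  MSG="Ready to proceed"
fi
echo "$MSG"
```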

How Docker Model Runner and Open WebUI Work Together

Docker Model Runner acts as a control plane. It handles downloading models, managing the inference backend, and exposing a fully OpenAI-compatible API—including the /v1/images/generations endpoint. Open WebUI, an open‑source chat interface, can speak that same API natively. The result: you get a clean chat window that sends image generation requests straight to your local model.
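To make the wiring concrete, here is a sketch of the kind of request Open WebUI sends on your behalf. It assumes Model Runner's host-side TCP endpoint is enabled on its default port (12434) with the /engines/v1 base path; both may differ on your installation:

```shell
# Ask the local /v1/images/generations endpoint for one image.
# BASE_URL is an assumption -- adjust to match your Model Runner settings.
BASE_URL="${BASE_URL:-http://localhost:12434/engines/v1}"

PAYLOAD='{"model": "stable-diffusion", "prompt": "a dragon in a business suit", "n": 1, "size": "1024x1024"}'

# Only send the request if curl exists and the server is reachable.
if command -v curl >/dev/null 2>&1 && curl -s -o /dev/null --max-time 2 "$BASE_URL/models"; then
  curl -s "$BASE_URL/images/generations" \
    -H "Content-Type: application/json" \
    -d "$PAYLOAD"
else
  echo "Model Runner not reachable at $BASE_URL -- is a model running?"
fi
```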

Open WebUI as Your Chat Interface

Instead of a command line or a bare API, Open WebUI provides a graphical, conversational experience. You type a prompt like "a dragon in a business suit," and the interface sends it to your local model. The generated image appears directly in the chat. No server fees, no third‑party services.

Step‑by‑Step Setup Guide

Step 1: Pull an Image Generation Model

Docker Model Runner uses a compact packaging format called DDUF (Diffusers Unified Format). Models are distributed as OCI artifacts on Docker Hub, just like container images. To pull the stable-diffusion model, run:

docker model pull stable-diffusion

Verify the download with:

docker model inspect stable-diffusion

The output shows the model's SHA256, tags, creation date, and detailed configuration—including the DDUF file name and size (e.g., 6.94 GB for the XL base version). The model is stored locally, ready to run.

Step 2: Launch Open WebUI

Here comes the elegant part. Docker Model Runner includes a built‑in launch command that automatically wires Open WebUI to your local inference endpoint:

docker model launch openwebui

That single command starts the model server, configures the API connection, and opens the Open WebUI interface in your browser. You're now ready to generate images from the chat box.
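If you prefer to manage the container yourself, you can run the official Open WebUI image and point it at the Model Runner API. This is a sketch: the in-container base URL below is an assumption taken from Docker Model Runner's usual in-container hostname, so verify it against your own setup before relying on it:

```shell
# Manual alternative to `docker model launch openwebui`.
# DMR_URL is an assumption -- check your Docker Model Runner settings.
DMR_URL="http://model-runner.docker.internal/engines/v1"

# Guarded so the command only fires when explicitly requested.
if [ "${RUN_WEBUI:-0}" = "1" ]; then
  docker run -d --name open-webui -p 3000:8080 \
    -e OPENAI_API_BASE_URL="$DMR_URL" \
    -e OPENAI_API_KEY="unused-local-key" \
    -v open-webui:/app/backend/data \
    ghcr.io/open-webui/open-webui:main
else
  echo "Set RUN_WEBUI=1 to launch Open WebUI against $DMR_URL"
fi
```

Once the container is up, Open WebUI is served at http://localhost:3000; the API key is a placeholder, since the local endpoint does not require authentication.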

Under the Hood: The DDUF Format

The DDUF format bundles all diffusion model components—text encoder, VAE, UNet or DiT, and scheduler configuration—into a single portable file. At runtime, Docker Model Runner unpacks this file and uses it to serve inference requests. This packaging makes distribution and versioning as simple as managing any other OCI artifact.
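Because DDUF files are plain ZIP archives with entries stored uncompressed, ordinary zip tools can look inside one. The toy archive below only mimics the component layout described above (a real file comes from docker model pull):

```shell
# Build a toy archive mimicking a DDUF layout, then list its contents.
# -0 stores entries without compression, as DDUF requires.
LISTING=""
if command -v zip >/dev/null 2>&1 && command -v unzip >/dev/null 2>&1; then
  workdir="$(mktemp -d)"
  cd "$workdir"
  mkdir -p text_encoder vae unet scheduler
  echo '{}' > model_index.json
  echo '{}' > scheduler/scheduler_config.json
  zip -0 -r toy.dduf model_index.json text_encoder vae unet scheduler > /dev/null
  LISTING="$(unzip -l toy.dduf)"
  echo "$LISTING"
else
  LISTING="zip/unzip not installed -- skipping demo"
  echo "$LISTING"
fi
```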

Additional Tips for Best Results

  • Model choice: stable-diffusion is a great starting point. Other models may be available on Docker Hub.
  • Performance: A GPU dramatically speeds up generation. With CPU only, expect longer wait times for high‑resolution images.
  • Storage: Models can be several gigabytes. Ensure sufficient disk space before pulling.

Now you have everything you need to create a private, fully local AI image generator. No cloud subscriptions, no privacy worries—just your prompts and your machine.