
WebUI Forge

An optimized Stable Diffusion WebUI with improved performance, reduced VRAM usage, and advanced features

Tags: beginner · open-source · self-hosted · stable-diffusion · performance-optimized

Alternative To

  • Automatic1111 WebUI
  • Fooocus
  • ComfyUI

Difficulty Level

Beginner

Suitable for users with basic technical knowledge. Easy to set up and use.

Overview

WebUI Forge is an optimized version of the popular Automatic1111 Stable Diffusion WebUI, developed by the creator of ControlNet and Fooocus. It offers significant performance improvements, memory optimizations, and advanced features while maintaining the familiar interface of Automatic1111. WebUI Forge is designed to be faster and more efficient, particularly beneficial for users with lower-end hardware, while adding built-in extensions and new features like IP Adapter, Photomaker, and more.

System Requirements

  • CPU: Intel Core i3 or AMD equivalent (multi-core recommended)
  • RAM: 8GB+ (16GB+ recommended for SDXL models)
  • GPU:
    • NVIDIA: 4GB+ VRAM (6GB+ recommended for SDXL models)
    • AMD: Compatible with torch-directml
    • Intel Arc: Compatible with oneAPI
    • Apple Silicon: Via MPS acceleration
  • Storage: 10GB+ for application plus additional space for models (30GB+ recommended)
  • OS: Windows, Linux, or macOS (with some limitations on macOS)

Installation Guide

Option 1: Windows One-Click Installation (Easiest)

  1. Download the latest one-click package from the GitHub repository
  2. Extract the archive using 7-Zip (if Windows blocks the downloaded file, right-click it → Properties → check “Unblock” first)
  3. Run update.bat to ensure you have the latest version
  4. Run run.bat to start WebUI Forge
  5. Access the interface via your web browser at the provided local URL

Option 2: Manual Installation (All Platforms)

For Windows

  1. Install Python 3.10 and Git

  2. Clone the repository:

    git clone https://github.com/lllyasviel/stable-diffusion-webui-forge.git
    
  3. Navigate to the directory and run:

    webui-user.bat
    

For Linux

  1. Install Python 3.10 and Git

  2. Clone the repository:

    git clone https://github.com/lllyasviel/stable-diffusion-webui-forge.git
    
  3. Navigate to the directory and run:

    ./webui.sh
    

For macOS

  1. Install Python 3.10 and Git

  2. Clone the repository:

    git clone https://github.com/lllyasviel/stable-diffusion-webui-forge.git
    
  3. Navigate to the directory and run:

    ./webui.sh
    

    On Apple Silicon Macs, MPS acceleration is used automatically. On Intel Macs without a supported GPU, force CPU mode instead:

    ./webui.sh --always-cpu --skip-torch-cuda-test

Note: If you already have Automatic1111 WebUI installed, you can reuse your models and extensions by creating symbolic links to the appropriate directories.
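
If you want to try the symlink approach safely first, the sketch below rehearses the pattern in a throwaway directory, with two `mkdir`'d trees standing in for your real Automatic1111 and Forge installs; substitute your actual paths when you do it for real:

```shell
# Rehearse the symlink pattern in a temporary directory; the two mkdir'd
# trees stand in for the real Automatic1111 and Forge install locations.
demo=$(mktemp -d)
mkdir -p "$demo/stable-diffusion-webui/models/Stable-diffusion"
mkdir -p "$demo/stable-diffusion-webui-forge/models"

# Point Forge's checkpoint folder at the A1111 one so models are shared:
ln -s "$demo/stable-diffusion-webui/models/Stable-diffusion" \
      "$demo/stable-diffusion-webui-forge/models/Stable-diffusion"

# Anything dropped into either location is now visible to both UIs.
ls -l "$demo/stable-diffusion-webui-forge/models"
```

The same pattern works for other model folders such as `models/Lora` or `embeddings`. Extensions can be linked too, though not every Automatic1111 extension is compatible with Forge.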

Practical Exercise: Getting Started with WebUI Forge

Let’s walk through a basic workflow to help you get familiar with WebUI Forge’s features and performance improvements.

Step 1: Setting Up Your Environment

  1. Launch WebUI Forge using the appropriate method for your platform
  2. Once the web interface loads, select the appropriate UI mode from the dropdown menu at the top left:
    • “sd” for Stable Diffusion 1.5 models
    • “xl” for SDXL models
    • “flux” for Flux models (if available)

Step 2: Text-to-Image Generation

  1. In the “Text-to-Image” tab, enter a detailed prompt in the text field, such as:

    A majestic mountain landscape at sunset, photorealistic, vibrant colors, dramatic lighting, 8k resolution
    
  2. For the negative prompt, enter things you want to avoid in the image:

    blurry, distorted, watermark, low quality, grainy
    
  3. Configure your generation settings:

    • Sampling method: DPM++ 2M Karras
    • Sampling steps: 25-30
    • Width/Height: 512x512 (or 1024x1024 for SDXL)
    • CFG Scale: 7
    • Seed: -1 (random)
  4. Click “Generate” to create your first image

  5. Notice the generation time and compare it with your previous experiences with Automatic1111 (if applicable)
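
The same settings can also be submitted over HTTP: launched with the `--api` flag, Forge exposes the Automatic1111-style REST API. As a sketch, the walkthrough above maps onto a `txt2img` payload like this (the address assumes the default `127.0.0.1:7860`; verify the flag and endpoint against your build):

```shell
# txt2img payload mirroring the UI settings above (steps chosen within 25-30).
payload='{
  "prompt": "A majestic mountain landscape at sunset, photorealistic, vibrant colors, dramatic lighting, 8k resolution",
  "negative_prompt": "blurry, distorted, watermark, low quality, grainy",
  "sampler_name": "DPM++ 2M Karras",
  "steps": 28,
  "width": 512,
  "height": 512,
  "cfg_scale": 7,
  "seed": -1
}'

# Uncomment once Forge is running with --api; generated images come back
# base64-encoded in the "images" field of the JSON response.
# curl -s http://127.0.0.1:7860/sdapi/v1/txt2img \
#      -H "Content-Type: application/json" -d "$payload"
echo "$payload"
```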

Step 3: Using Advanced Features - IP Adapter

IP Adapter allows you to use reference images to guide your generations:

  1. In the “img2img” tab, upload a reference image
  2. Enable “IP Adapter” from the scripts dropdown at the bottom
  3. Configure the IP Adapter settings:
    • Weight: 0.5-0.8
    • Select an appropriate IP Adapter model
  4. Use a prompt that complements your reference image
  5. Click “Generate” to create an image influenced by both your prompt and reference image

Step 4: Using Photomaker

Photomaker helps create images with consistent faces or styles:

  1. In the “ControlNet” section, enable one of the ControlNet units
  2. Select “Photomaker” as the preprocessor
  3. Upload a reference image with a face
  4. Use a prompt that includes “[v]” where “v” represents the face from your reference
  5. Adjust the control weight and guidance as needed
  6. Generate to see how Photomaker influences your results

Step 5: Experimenting with Performance Settings

WebUI Forge offers several ways to optimize performance:

  1. For low VRAM GPUs, try adjusting the “GPU Weight” slider at the top right
  2. Enable “Queue/Async Swap” for better memory management during batch processing
  3. Experiment with different precision settings:
    • “full precision” for highest quality but slower processing
    • “half precision” for faster processing with minimal quality loss
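
These UI toggles also have launch-time counterparts. As an illustrative fragment only (the flags below were announced as experimental in the Forge README; verify the exact names with `./webui.sh --help` on your build, and on Windows put them in `COMMANDLINE_ARGS` inside `webui-user.bat`):

```shell
# Experimental Forge speed flags -- may improve throughput on some GPUs,
# but remove them if you see crashes or out-of-memory errors.
./webui.sh --cuda-malloc --cuda-stream --pin-shared-memory
```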

Resources

Related Projects

  • Fooocus - A simpler image generation interface by the same developer
  • ControlNet - Adds control to diffusion models, also by the same developer
  • Automatic1111 WebUI - The original project that WebUI Forge is based on

Suggested Projects

You might also be interested in these similar projects:

  • Fooocus - A user-friendly image generation platform based on Stable Diffusion XL with Midjourney-like simplicity (Difficulty: Beginner)
  • ComfyUI - A powerful node-based interface for Stable Diffusion image generation workflows (Difficulty: Intermediate)
  • InvokeAI - Self-host InvokeAI for AI experimentation