WebUI Forge
An optimized Stable Diffusion WebUI with improved performance, reduced VRAM usage, and advanced features
Alternative To
- Automatic1111 WebUI
- Fooocus
- ComfyUI
Difficulty Level
Suitable for users with basic technical knowledge. Easy to set up and use.
Overview
WebUI Forge is an optimized version of the popular Automatic1111 Stable Diffusion WebUI, developed by the creator of ControlNet and Fooocus. It offers significant performance improvements, memory optimizations, and advanced features while maintaining the familiar interface of Automatic1111. WebUI Forge is designed to be faster and more efficient, particularly beneficial for users with lower-end hardware, while adding built-in extensions and new features like IP Adapter, Photomaker, and more.
System Requirements
- CPU: Intel Core i3 or AMD equivalent (multi-core recommended)
- RAM: 8GB+ (16GB+ recommended for SDXL models)
- GPU:
- NVIDIA: 4GB+ VRAM (6GB+ recommended for SDXL models)
- AMD: Compatible with torch-directml
- Intel Arc: Compatible with oneAPI
- Apple Silicon: Via MPS acceleration
- Storage: 10GB+ for application plus additional space for models (30GB+ recommended)
- OS: Windows, Linux, or macOS (with some limitations on macOS)
Installation Guide
Option 1: Windows One-Click Installation (Easiest)
- Download the latest one-click package from the GitHub repository
- Extract the ZIP file using 7-Zip (if Windows blocks the downloaded file, right-click it → Properties → Unblock before extracting)
- Run update.bat to ensure you have the latest version
- Run run.bat to start WebUI Forge
- Access the interface via your web browser at the provided local URL
Option 2: Manual Installation (All Platforms)
For Windows
Install Python 3.10 and Git
Clone the repository:
git clone https://github.com/lllyasviel/stable-diffusion-webui-forge.git
Navigate to the directory and run:
webui-user.bat
For Linux
Install Python 3.10 and Git
Clone the repository:
git clone https://github.com/lllyasviel/stable-diffusion-webui-forge.git
Navigate to the directory and run:
./webui.sh
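For the first step, on Debian/Ubuntu-family distributions the prerequisites can usually be installed from the package manager; treat the following as a sketch, since package names vary by distribution and release:

```bash
# Debian/Ubuntu example; other distributions use different package managers and names.
sudo apt update
sudo apt install -y git python3.10 python3.10-venv python3-pip
```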
For macOS
Install Python 3.10 and Git
Clone the repository:
git clone https://github.com/lllyasviel/stable-diffusion-webui-forge.git
Navigate to the directory and run:
./webui.sh --always-cpu --skip-torch-cuda-test
For Apple Silicon Macs, MPS acceleration will be used automatically.
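If you want to confirm that the PyTorch build inside Forge's environment can see the Metal backend, a quick check (assuming the default venv/ directory created on the first launch) is:

```bash
# Prints True when the MPS (Metal) backend is available to PyTorch on Apple Silicon.
./venv/bin/python -c "import torch; print(torch.backends.mps.is_available())"
```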
Note: If you already have Automatic1111 WebUI installed, you can reuse your models and extensions by creating symbolic links to the appropriate directories.
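For example, on Linux or macOS the model folders of an existing Automatic1111 install can be linked in roughly like this; the paths are placeholders, and on Windows mklink /D from an elevated Command Prompt serves the same purpose:

```bash
# Run from the WebUI Forge directory; the source paths are placeholders for your
# existing Automatic1111 install. If Forge has already created these folders,
# remove or rename them first so the symlinks can take their place.
ln -s /path/to/stable-diffusion-webui/models/Stable-diffusion models/Stable-diffusion
ln -s /path/to/stable-diffusion-webui/models/Lora models/Lora
```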
Practical Exercise: Getting Started with WebUI Forge
Let’s walk through a basic workflow to help you get familiar with WebUI Forge’s features and performance improvements.
Step 1: Setting Up Your Environment
- Launch WebUI Forge using the appropriate method for your platform
- Once the web interface loads, select the appropriate UI mode from the dropdown menu at the top left:
- “sd” for Stable Diffusion 1.5 models
- “xl” for SDXL models
- “flux” for Flux models (if available)
Step 2: Text-to-Image Generation
In the “Text-to-Image” tab, enter a detailed prompt in the text field, such as:
A majestic mountain landscape at sunset, photorealistic, vibrant colors, dramatic lighting, 8k resolution
For the negative prompt, enter things you want to avoid in the image:
blurry, distorted, watermark, low quality, grainy
Configure your generation settings:
- Sampling method: DPM++ 2M Karras
- Sampling steps: 25-30
- Width/Height: 512x512 (or 1024x1024 for SDXL)
- CFG Scale: 7
- Seed: -1 (random)
Click “Generate” to create your first image
Notice the generation time and compare it with your previous experiences with Automatic1111 (if applicable)
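The same generation can also be scripted against the HTTP API inherited from Automatic1111. The sketch below assumes Forge was launched with the --api flag (for example, added to COMMANDLINE_ARGS in webui-user.bat), is listening on the default local address, and that curl and jq are installed; sampler names must match what your version of the UI shows:

```bash
# Submit a txt2img request and decode the first base64-encoded image in the response.
curl -s -X POST http://127.0.0.1:7860/sdapi/v1/txt2img \
  -H "Content-Type: application/json" \
  -d '{
        "prompt": "A majestic mountain landscape at sunset, photorealistic, vibrant colors, dramatic lighting, 8k resolution",
        "negative_prompt": "blurry, distorted, watermark, low quality, grainy",
        "sampler_name": "DPM++ 2M Karras",
        "steps": 25,
        "width": 512,
        "height": 512,
        "cfg_scale": 7,
        "seed": -1
      }' \
  | jq -r '.images[0]' \
  | base64 -d > output.png
```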
Step 3: Using Advanced Features - IP Adapter
IP Adapter allows you to use reference images to guide your generations:
- In the “img2img” tab, upload a reference image
- Enable “IP Adapter” from the scripts dropdown at the bottom
- Configure the IP Adapter settings:
- Weight: 0.5-0.8
- Select an appropriate IP Adapter model
- Use a prompt that complements your reference image
- Click “Generate” to create an image influenced by both your prompt and reference image
Step 4: Using Photomaker
Photomaker helps create images with consistent faces or styles:
- In the “ControlNet” section, enable one of the ControlNet units
- Select “Photomaker” as the preprocessor
- Upload a reference image with a face
- Use a prompt that includes “[v]” where “v” represents the face from your reference
- Adjust the control weight and guidance as needed
- Generate to see how Photomaker influences your results
Step 5: Experimenting with Performance Settings
WebUI Forge offers several ways to optimize performance:
- For low VRAM GPUs, try adjusting the “GPU Weight” slider at the top right
- Enable “Queue/Async Swap” for better memory management during batch processing
- Experiment with different precision settings:
- “full precision” for highest quality but slower processing
- “half precision” for faster processing with minimal quality loss
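To see what these settings actually do to memory use, you can watch VRAM consumption while generating; on NVIDIA GPUs, one way is:

```bash
# Report used and total VRAM once per second while you generate images.
nvidia-smi --query-gpu=memory.used,memory.total --format=csv -l 1
```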
Resources
Official Documentation
- GitHub Repository - Main documentation and installation guides
- Development Plan Discussion - Information about the project’s roadmap
Tutorials and Guides
- Stable Diffusion Art: Installing SD Forge - Comprehensive installation guide
- Civitai Education: Generative AI Art with Forge - Quickstart guide for beginners
Community Support
- GitHub Issues - Bug reports and feature requests
- GitHub Discussions - Community discussions and showcases
Related Projects
- Fooocus - A simpler image generation interface by the same developer
- ControlNet - Adds control to diffusion models, also by the same developer
- Automatic1111 WebUI - The original project that WebUI Forge is based on
Suggested Projects
You might also be interested in these similar projects:
- Fooocus - A user-friendly image generation platform based on Stable Diffusion XL with Midjourney-like simplicity
- ComfyUI - A powerful node-based interface for Stable Diffusion image generation workflows