Setup guide / Flux 2 Klein Base 9B
This is the clean install path for ALTERA2ION exterior and interior packages. It is built around the package-ready local baseline plus the browser-first Comfy Cloud option.
4 / Setup paths
22GB+ / Local VRAM minimum
24GB / Practical GPU class
Windows / local install
Official ComfyUI currently recommends the Windows portable build as the fastest local path. For ALTERA2ION, the important part is launching cleanly and placing the model stack in the right folders before you open the package JSON.
| Component | Minimum for ALTERA2ION | Notes |
|---|---|---|
| GPU | 24GB-class NVIDIA GPU | RTX 3090, RTX 4090, or equivalent. |
| VRAM | 22GB minimum | BFL currently lists Base 9B at 21.7GB VRAM, so 24GB cards are the practical local floor. |
| RAM | 32GB system memory | 64GB is the cleaner daily-use target. |
| Storage | 100GB free SSD | High-speed local storage keeps model loads cleaner. |
| OS | Windows 10/11 or Linux | Use the local ComfyUI path that matches your machine. |
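The floors in the table above can be sanity-checked with a small helper. This is a hedged sketch, not official tooling: the function name, labels, and output strings are illustrative assumptions; only the 22GB/24GB figures come from the table.

```shell
# Hedged sketch: compare a reported spec value (in GB) against a floor
# from the requirements table. check_floor and its labels are
# illustrative names, not part of ComfyUI or the package.
check_floor() {
  value_gb="$1"; floor_gb="$2"; label="$3"
  if [ "$value_gb" -ge "$floor_gb" ]; then
    echo "OK: $label ${value_gb}GB meets the ${floor_gb}GB floor"
  else
    echo "LOW: $label ${value_gb}GB is below the ${floor_gb}GB floor"
  fi
}

# Example: a 24GB-class card against the 22GB VRAM floor.
check_floor 24 22 VRAM
```

On a live NVIDIA machine you could feed the VRAM value from `nvidia-smi --query-gpu=memory.total --format=csv,noheader,nounits` (which reports MiB, so divide by 1024 before passing it in).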
Windows / first run
01 / Recommended path
Download the current portable release, extract it, then run it once before adding the ALTERA2ION files.
02 / Required package files
These package workflows do not use a single checkpoints folder. They rely on a diffusion model, text encoder, VAE, and the package LoRA in their matching ComfyUI directories.
ComfyUI\models\diffusion_models\flux-2-klein-base-9b-fp8.safetensors
ComfyUI\models\text_encoders\qwen_3_8b_fp8mixed.safetensors
ComfyUI\models\vae\flux2-vae.safetensors
ComfyUI\models\loras\[your-package-lora].safetensors
03 / Model folders
Place the base model, text encoder, VAE, and package LoRA in the matching folders before you load the supplied workflow.
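A quick placement check can catch a misfiled model before the first workflow load. This is a hedged sketch under stated assumptions: it uses forward-slash paths (Linux style; the Windows portable build uses the same layout under `ComfyUI\models\`), the filenames come from the list above, and because the package LoRA filename varies it only checks that the loras folder exists.

```shell
# Hedged sketch: verify the package model stack under a ComfyUI root.
# verify_stack is an illustrative name, not official tooling.
verify_stack() {
  root="$1"; missing=0
  for f in \
    "models/diffusion_models/flux-2-klein-base-9b-fp8.safetensors" \
    "models/text_encoders/qwen_3_8b_fp8mixed.safetensors" \
    "models/vae/flux2-vae.safetensors"
  do
    if [ ! -f "$root/$f" ]; then
      echo "MISSING: $f"; missing=1
    fi
  done
  # The package LoRA filename differs per package, so only check the folder.
  [ -d "$root/models/loras" ] || { echo "MISSING: models/loras/"; missing=1; }
  [ "$missing" -eq 0 ] && echo "Model stack looks complete."
}
```

Run it as `verify_stack ~/ComfyUI` (or against your portable install root) before opening the package JSON.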
04 / Load workflow
Import the supplied JSON and keep the packaged settings unchanged for the first run.
05 / Launch
Load your base image and your reference image, then queue the graph.
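Before queueing for the first time, it can be worth confirming the supplied JSON at least parses, since a truncated download fails silently in some import paths. This is a hedged sketch: `check_workflow` is an illustrative helper, and the filename you pass it is whatever the package actually ships.

```shell
# Hedged sketch: confirm a workflow file parses as JSON before import.
# Uses Python's stdlib json.tool so no extra dependency is needed.
check_workflow() {
  if python3 -m json.tool "$1" > /dev/null 2>&1; then
    echo "OK: $1 parses as JSON"
  else
    echo "FAIL: $1 is not valid JSON"
  fi
}
```

Usage: `check_workflow path/to/the-supplied-package.json`.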
Alternative install
If you prefer the official CLI route instead of the portable build, ComfyUI documents the following bootstrap path:
pip install comfy-cli
comfy install
Linux / local install
Linux is the cleanest sustained local path if you already run a dedicated workstation. The official ComfyUI guidance currently describes Python 3.13 as very well supported, with 3.12 as the fallback if a custom node is fussy.
01 / Python + repo
sudo apt update
sudo apt install -y python3.13 python3.13-venv python3-pip git
git clone https://github.com/Comfy-Org/ComfyUI ~/ComfyUI
cd ~/ComfyUI
python3.13 -m venv venv
source venv/bin/activate
02 / NVIDIA PyTorch
pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu130
pip install -r requirements.txt
03 / Model stack + manager
After the base install, place the same four files used on Windows into the matching model folders, then run ComfyUI and load the supplied JSON.
python main.py --enable-manager
macOS / recommended path
What is officially true
ComfyUI does support Apple Silicon. The current official install guidance points macOS users to the latest PyTorch nightly plus the manual install path.
What we support for ALTERA2ION
For the ALTERA2ION package workflow, our supported MVP path on Mac is Comfy Cloud. That keeps setup simple without overselling local Apple hardware performance.
Best move today
If you work on a MacBook, iMac, or an iPad as a workflow companion, jump straight to Comfy Cloud. You keep the same package JSON, avoid the unsupported local hardware path, and stay aligned with the browser-first setup we can actually support for delivery.
Comfy Cloud / browser path
This is the cleanest setup if you do not want a dedicated local workstation. The package stays the same. You just move the compute to a browser-hosted ComfyUI environment.
| Requirement | What you need |
|---|---|
| Device | Any Windows, Linux, or Mac laptop, plus iPad or tablet |
| Software | Modern web browser |
| Local hardware | No local GPU required |
| Network | Stable internet connection |
| Package files | Flux 2 Klein Base 9B FP8, qwen text encoder, flux2 VAE, package LoRA, supplied JSON |
Comfy Cloud / first run
01 / Workspace
Create a fresh Comfy Cloud workspace for the package so you keep the model stack and JSON isolated.
02 / Model access
Accept the FLUX 2 Klein Base 9B access terms on Hugging Face first if your workspace needs that model pulled in.
03 / Files
Upload the package files your workspace needs so the LoRA and workflow sit in one clean project space.
04 / Model stack
Confirm the workspace can see the diffusion model, text encoder, VAE, and the package LoRA.
05 / Workflow
Import the supplied JSON and leave the package system prompt untouched.
06 / Queue
Drop in Image 1 and Image 2, confirm the node paths, and run the graph.
Decision guide
| | Local | Comfy Cloud |
|---|---|---|
| Setup time | Longer, but full control | Fastest path to first render |
| Hardware | 22GB minimum / 24GB-class GPU | No local GPU required |
| Best for | Dedicated studio workstation | Lighter devices and rapid onboarding |
| Devices | Windows or Linux workstation | Windows, Linux, Mac, iPad, or tablet |
| Workflow JSON | Same supplied package JSON | Same supplied package JSON |
MVP ready
Once ComfyUI is running, the package install is just the model stack, the LoRA, and the supplied JSON.