Hardware Requirements
CorridorKey was designed and built on a Linux workstation equipped with an NVIDIA RTX Pro 6000 (96 GB VRAM). The community is actively optimising it for consumer GPUs — the most recent build should work on cards with 6–8 GB of VRAM, and it can run on most Mac systems with unified memory.
Core Engine (CorridorKey)
| Spec | Minimum | Recommended |
|---|---|---|
| GPU VRAM | 6 GB | 8 GB+ |
| Compute | CUDA, MPS, or CPU | CUDA (NVIDIA) |
| System RAM | 8 GB | 16 GB+ |
The engine dynamically scales inference to its native 2048×2048 backbone, so more VRAM allows larger plates to be processed without tiling.
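To get a feel for why plate size matters, here is a rough back-of-the-envelope sketch (illustrative only; real VRAM usage is dominated by model weights and intermediate activations, not the input tensor alone). The helper name is ours, not part of CorridorKey:

```python
# Rough memory footprint of a single float32 RGB image tensor at the
# engine's native 2048x2048 resolution. Actual VRAM usage is far
# higher once weights and activations are counted.
def tensor_megabytes(height, width, channels=3, bytes_per_element=4):
    return height * width * channels * bytes_per_element / (1024 ** 2)

print(round(tensor_megabytes(2048, 2048), 1))  # 48.0 (MB for one RGB plate)
```

A single plate is small; the VRAM headroom in the table above goes to the model itself and to processing larger plates without tiling.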
Windows CUDA driver requirement
To run GPU acceleration natively on Windows, your system must have NVIDIA drivers that support CUDA 12.8 or higher. If your drivers only support older CUDA versions, the installer will likely fall back to the CPU.
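To check what your driver supports, look at the `CUDA Version:` field that `nvidia-smi` prints in its header. A minimal sketch of that check (the parsing helper is ours, not part of the installer):

```python
import re
import shutil
import subprocess

def supports_cuda(smi_output, required=(12, 8)):
    """Parse the 'CUDA Version: X.Y' field from nvidia-smi output."""
    m = re.search(r"CUDA Version:\s*(\d+)\.(\d+)", smi_output)
    if not m:
        return False
    return (int(m.group(1)), int(m.group(2))) >= required

# Example header line as printed by nvidia-smi:
sample = "| NVIDIA-SMI 570.86    Driver Version: 570.86    CUDA Version: 12.8 |"
print(supports_cuda(sample))  # True

# On a real system, run nvidia-smi itself (if it is on PATH):
if shutil.which("nvidia-smi"):
    out = subprocess.run(["nvidia-smi"], capture_output=True, text=True).stdout
    print(supports_cuda(out))
```

If this reports `False`, update your NVIDIA driver before expecting GPU acceleration.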
Optional Modules
GVM and VideoMaMa are optional Alpha Hint generators with significantly higher hardware requirements. You do not need them — you can always provide your own Alpha Hints from other software.
Optional — GVM and VideoMaMa weights
These weights are large and demand far more hardware than the core engine, so skip this step if you plan to supply Alpha Hints from other software.
GVM (~80 GB VRAM required):
```shell
uv run hf download geyongtao/gvm --local-dir gvm_core/weights
```
VideoMaMa (originally 80 GB+ VRAM; community optimisations bring it under 24 GB, though not yet fully integrated here):
```shell
# Fine-tuned VideoMaMa weights
uv run hf download SammyLim/VideoMaMa \
  --local-dir VideoMaMaInferenceModule/checkpoints/VideoMaMa

# Stable Video Diffusion base model (VAE + image encoder, ~2.5 GB)
# Accept the licence at stabilityai/stable-video-diffusion-img2vid-xt first
uv run hf download stabilityai/stable-video-diffusion-img2vid-xt \
  --local-dir VideoMaMaInferenceModule/checkpoints/stable-video-diffusion-img2vid-xt \
  --include "feature_extractor/*" "image_encoder/*" "vae/*" "model_index.json"
```
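After downloading, it is worth confirming the checkpoint directories actually landed where the module expects them. A small sketch (the helper and the exact list of paths to check are our assumptions, not CorridorKey's own validation):

```python
from pathlib import Path

# Paths the download commands above should have created.
EXPECTED = [
    "VideoMaMaInferenceModule/checkpoints/VideoMaMa",
    "VideoMaMaInferenceModule/checkpoints/stable-video-diffusion-img2vid-xt/vae",
]

def missing_paths(root, expected=EXPECTED):
    """Return the expected checkpoint directories that do not exist under root."""
    root = Path(root)
    return [p for p in expected if not (root / p).is_dir()]

print(missing_paths("."))  # empty list means everything is in place
```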
| Module | VRAM Required | Notes |
|---|---|---|
| GVM | ~80 GB | Uses massive Stable Video Diffusion models. |
| VideoMaMa | 80 GB+ (native) / <24 GB (community optimised) | Community tweaks reduce VRAM, but extreme optimisations are not yet fully integrated in this repo. |
Apple Silicon (MPS / MLX)
CorridorKey runs on Apple Silicon Macs using unified memory. Two backend options are available:
- MPS — PyTorch's Metal Performance Shaders backend. Works out of the box, but some operators may require the CPU fallback flag:

  ```shell
  export PYTORCH_ENABLE_MPS_FALLBACK=1
  ```

- MLX — Native Apple Silicon acceleration via the MLX framework. Avoids PyTorch's MPS layer entirely and typically runs faster. Requires installing the MLX extras (`uv sync --extra mlx`) and obtaining `.safetensors` weights.
Because Apple Silicon shares memory between the CPU and GPU, the full system RAM is available to the model — no separate VRAM budget applies.
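Putting the backend options together, selection logic typically prefers the fastest available backend and falls back to CPU. This is a hedged sketch of that idea, not CorridorKey's actual code; the function name and preference order (CUDA, then MLX, then MPS) are our assumptions:

```python
# Pick the best available compute backend, falling back to CPU.
# Availability flags stand in for runtime checks such as
# torch.cuda.is_available() or torch.backends.mps.is_available().
def pick_backend(cuda=False, mlx=False, mps=False):
    for name, available in (("cuda", cuda), ("mlx", mlx), ("mps", mps)):
        if available:
            return name
    return "cpu"

print(pick_backend(mps=True))            # 'mps'
print(pick_backend(mlx=True, mps=True))  # 'mlx' (MLX preferred when installed)
print(pick_backend())                    # 'cpu'
```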