Installation
This project uses uv to manage Python and all dependencies. uv is a fast, modern replacement for pip that automatically handles Python versions, virtual environments, and package installation in a single step. You do not need to install Python yourself — uv does it for you.
Install uv if you don't already have it:
```shell
curl -LsSf https://astral.sh/uv/install.sh | sh
```
Tip
On Windows the automated .bat installers handle uv installation for you.
If a terminal opened after installing uv still reports `'uv' is not recognized`,
close and reopen it so the updated PATH takes effect.
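To confirm the install worked, check that `uv` resolves in a fresh shell. A minimal sketch; the fallback message assumes the installer's default location of `~/.local/bin`:

```shell
# Check whether uv is reachable on PATH and capture its version string
if command -v uv >/dev/null 2>&1; then
  uv_status="$(uv --version)"   # version string, e.g. "uv 0.4.30"
else
  uv_status="uv not on PATH - reopen the terminal or add ~/.local/bin to PATH"
fi
echo "$uv_status"
```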
Platform Setup
Windows

- Clone or download this repository to your local machine.
- Double-click `Install_CorridorKey_Windows.bat`. This will automatically install uv (if needed), set up your Python environment, install all dependencies, and download the CorridorKey model.

  CUDA driver requirement

  To run GPU acceleration natively on Windows, your system must have NVIDIA drivers that support CUDA 12.8 or higher. If your drivers only support older CUDA versions, the installer will likely fall back to the CPU.

- (Optional) Double-click `Install_GVM_Windows.bat` and `Install_VideoMaMa_Windows.bat` to download the heavy optional Alpha Hint generator weights.
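Whether your driver meets the CUDA 12.8 requirement above can be checked from any terminal: `nvidia-smi` prints the highest CUDA version the installed driver supports in its header. A sketch; the exact header layout can vary between driver releases:

```shell
# Extract the "CUDA Version: X.Y" field from the nvidia-smi header
driver_cuda="no NVIDIA driver detected - expect the CPU fallback"
if command -v nvidia-smi >/dev/null 2>&1; then
  driver_cuda="$(nvidia-smi | grep -o 'CUDA Version: [0-9.]*' || echo 'unknown')"
fi
echo "$driver_cuda"
```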
macOS / Linux

- Clone or download this repository to your local machine.
- Install uv if you don't have it:

  ```shell
  curl -LsSf https://astral.sh/uv/install.sh | sh
  ```

- Install all dependencies (uv will download Python 3.10+ automatically if needed):

  ```shell
  uv sync              # CPU/MPS (default; works everywhere)
  uv sync --extra cuda # CUDA GPU acceleration (Linux/Windows)
  uv sync --extra mlx  # Apple Silicon MLX acceleration
  ```
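After `uv sync`, a quick way to see which backend PyTorch actually picked up. A sketch, assuming PyTorch is among the synced dependencies (which the CUDA/MLX extras suggest):

```shell
# Ask the synced environment which accelerator torch can see
probe='import torch; print("cuda" if torch.cuda.is_available() else ("mps" if torch.backends.mps.is_available() else "cpu"))'
if command -v uv >/dev/null 2>&1; then
  uv run python -c "$probe"
fi
```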
Download the Model Checkpoint
Download the CorridorKey checkpoint (~300 MB):
Download CorridorKey_v1.0.pth from Hugging Face
Place the file inside `CorridorKeyModule/checkpoints/` and rename it to
`CorridorKey.pth` so the final path is:

```
CorridorKeyModule/checkpoints/CorridorKey.pth
```
Warning
The engine will not start without this checkpoint. Make sure the filename
is exactly CorridorKey.pth (not CorridorKey_v1.0.pth).
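A quick pre-flight check for the rename step. A sketch; run it from the repository root:

```shell
# Verify the checkpoint exists under the exact name the engine expects
ckpt="CorridorKeyModule/checkpoints/CorridorKey.pth"
if [ -f "$ckpt" ]; then
  echo "OK: $ckpt ($(du -h "$ckpt" | cut -f1))"
else
  echo "MISSING: $ckpt - download it and rename CorridorKey_v1.0.pth" >&2
fi
```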
Optional Weights
Optional — GVM and VideoMaMa weights
These modules generate Alpha Hints automatically but have large model files and extreme hardware requirements. Installing them is completely optional; you can always provide your own Alpha Hints from other software.
GVM (~80 GB VRAM required):
```shell
uv run hf download geyongtao/gvm --local-dir gvm_core/weights
```
VideoMaMa (originally 80 GB+ VRAM; community optimisations bring it under 24 GB, though not yet fully integrated here):
```shell
# Fine-tuned VideoMaMa weights
uv run hf download SammyLim/VideoMaMa \
  --local-dir VideoMaMaInferenceModule/checkpoints/VideoMaMa

# Stable Video Diffusion base model (VAE + image encoder, ~2.5 GB)
# Accept the licence at stabilityai/stable-video-diffusion-img2vid-xt first
uv run hf download stabilityai/stable-video-diffusion-img2vid-xt \
  --local-dir VideoMaMaInferenceModule/checkpoints/stable-video-diffusion-img2vid-xt \
  --include "feature_extractor/*" "image_encoder/*" "vae/*" "model_index.json"
```
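Because the SVD download is filtered with `--include`, it is worth confirming the four expected items actually landed. A sketch, with the path taken from the command above:

```shell
# The partial SVD download should contain exactly these four items
svd="VideoMaMaInferenceModule/checkpoints/stable-video-diffusion-img2vid-xt"
for item in feature_extractor image_encoder vae model_index.json; do
  if [ -e "$svd/$item" ]; then
    echo "present: $item"
  else
    echo "missing: $item"
  fi
done
```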
Docker (Linux + NVIDIA GPU)
If you prefer not to install dependencies locally, you can run CorridorKey in Docker.
Prerequisites
- Docker Engine + Docker Compose plugin installed.
- NVIDIA driver installed on the host (Linux), with CUDA compatibility for the PyTorch CUDA 12.6 wheels used by this project.
- NVIDIA Container Toolkit installed and configured for Docker (`nvidia-smi` should work on the host, and `docker run --rm --gpus all nvidia/cuda:12.6.3-runtime-ubuntu22.04 nvidia-smi` should succeed).
Build and Run
- Build the image:

  ```shell
  docker build -t corridorkey:latest .
  ```
- Run an action directly (example: inference):

  ```shell
  docker run --rm -it --gpus all \
    -e OPENCV_IO_ENABLE_OPENEXR=1 \
    -v "$(pwd)/ClipsForInference:/app/ClipsForInference" \
    -v "$(pwd)/Output:/app/Output" \
    -v "$(pwd)/CorridorKeyModule/checkpoints:/app/CorridorKeyModule/checkpoints" \
    -v "$(pwd)/gvm_core/weights:/app/gvm_core/weights" \
    -v "$(pwd)/VideoMaMaInferenceModule/checkpoints:/app/VideoMaMaInferenceModule/checkpoints" \
    corridorkey:latest --action run_inference --device cuda
  ```
- Docker Compose (recommended for repeat runs):

  ```shell
  docker compose build
  docker compose --profile gpu run --rm corridorkey --action run_inference --device cuda
  docker compose --profile gpu run --rm corridorkey --action list
  docker compose --profile cpu run --rm corridorkey-cpu --action run_inference --device cpu
  ```
- (Optional) Pin to specific GPU(s) for multi-GPU workstations:

  ```shell
  NVIDIA_VISIBLE_DEVICES=0 docker compose --profile gpu run --rm corridorkey --action list
  NVIDIA_VISIBLE_DEVICES=1,2 docker compose --profile gpu run --rm corridorkey --action run_inference --device cuda
  ```
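To find the index values `NVIDIA_VISIBLE_DEVICES` expects, list the host GPUs first. A sketch; `nvidia-smi -L` prints one `GPU <index>: <name>` line per device:

```shell
# Enumerate host GPUs so you know which indices to pin
gpu_list="no NVIDIA driver found on this host"
if command -v nvidia-smi >/dev/null 2>&1; then
  gpu_list="$(nvidia-smi -L)"
fi
echo "$gpu_list"
```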
Notes
- You still need to place model weights in the same folders used by native runs (mounted above).
- The container does not include kernel GPU drivers; those always come from the host. The image provides user-space dependencies and relies on Docker's NVIDIA runtime to pass through driver libraries/devices.
- The wizard works too; use a path inside the container, for example:

  ```shell
  docker run --rm -it --gpus all \
    -e OPENCV_IO_ENABLE_OPENEXR=1 \
    -v "$(pwd)/ClipsForInference:/app/ClipsForInference" \
    -v "$(pwd)/Output:/app/Output" \
    -v "$(pwd)/CorridorKeyModule/checkpoints:/app/CorridorKeyModule/checkpoints" \
    -v "$(pwd)/gvm_core/weights:/app/gvm_core/weights" \
    -v "$(pwd)/VideoMaMaInferenceModule/checkpoints:/app/VideoMaMaInferenceModule/checkpoints" \
    corridorkey:latest --action wizard --win_path /app/ClipsForInference
  ```