# Contributing
Thanks for your interest in improving CorridorKey! Whether you're a VFX artist, a pipeline TD, or a machine learning researcher, contributions of all kinds are welcome — bug reports, feature ideas, documentation fixes, and code.
## Legal Agreement
By contributing to this project you agree that your contributions will be licensed under the project's CorridorKey Licence.
By submitting a Pull Request you specifically acknowledge and agree to the terms set forth in Section 6 (CONTRIBUTIONS) of the license. This ensures that Corridor Digital maintains the full right to use, distribute, and sublicense this codebase, including PR contributions.
## Prerequisites
- Python 3.10 or newer
- uv for dependency management
This project uses uv to manage Python and all dependencies. uv is a fast, modern replacement for pip that automatically handles Python versions, virtual environments, and package installation in a single step. You do not need to install Python yourself — uv does it for you.
Install uv if you don't already have it:
```shell
curl -LsSf https://astral.sh/uv/install.sh | sh
```
> **Tip:** On Windows the automated `.bat` installers handle uv installation for you. If you open a new terminal after installing uv and see `'uv' is not recognized`, close and reopen the terminal so the updated PATH takes effect.
## Dev Setup
```shell
git clone https://github.com/nikopueringer/CorridorKey.git
cd CorridorKey
uv sync --group dev    # installs all dependencies + dev tools (pytest, ruff)
```
No manual virtualenv creation, no pip install — uv handles everything.
## Running Tests
```shell
uv run pytest                 # run all tests
uv run pytest -v              # verbose (shows each test name)
uv run pytest -m "not gpu"    # skip tests that need a CUDA GPU
uv run pytest --cov           # show test coverage
```
Most tests run in a few seconds and don't need a GPU or model weights. Tests that require CUDA are marked with `@pytest.mark.gpu` and will be skipped automatically if no GPU is available.
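If you contribute a GPU-dependent test, a minimal sketch of the pattern (the test body and function name below are illustrative, not taken from the repo's test suite):

```python
import pytest

@pytest.mark.gpu  # custom marker; registered in the project's pytest config
def test_inference_runs_on_cuda():
    # importorskip keeps the test collectable on machines without torch
    torch = pytest.importorskip("torch")
    if not torch.cuda.is_available():
        pytest.skip("no CUDA GPU available")
    x = torch.ones(4, device="cuda")
    assert x.sum().item() == 4.0
```

Tests written this way are excluded by `uv run pytest -m "not gpu"` and skip cleanly on CPU-only machines.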
## Apple Silicon (Mac) Notes

### Backends (MPS / MLX)
CorridorKey runs on Apple Silicon Macs using unified memory. Two backend options are available:
- **MPS** — PyTorch's Metal Performance Shaders backend. Works out of the box, but some operators may require the CPU fallback flag: `PYTORCH_ENABLE_MPS_FALLBACK=1`.
- **MLX** — Native Apple Silicon acceleration via the MLX framework. Avoids PyTorch's MPS layer entirely and typically runs faster. Requires installing the MLX extras (`uv sync --extra mlx`) and obtaining `.safetensors` weights.
Because Apple Silicon shares memory between the CPU and GPU, the full system RAM is available to the model — no separate VRAM budget applies.
If you are contributing on an Apple Silicon Mac, there are a few extra things to be aware of.
### Backend Selection
CorridorKey auto-detects MPS on Apple Silicon. To test with the MLX backend or force CPU, set the environment variable before running:
```shell
export CORRIDORKEY_BACKEND=mlx    # use native MLX on Apple Silicon
export CORRIDORKEY_DEVICE=cpu     # force CPU (useful for isolating device bugs)
```
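As a rough sketch of what override-then-auto-detect selection logic can look like (`pick_device` is a hypothetical helper for illustration, not CorridorKey's actual API):

```python
import os

def pick_device() -> str:
    """Illustrative device picker: explicit override wins, then auto-detect."""
    override = os.environ.get("CORRIDORKEY_DEVICE")
    if override:
        return override
    try:
        import torch
        if torch.cuda.is_available():
            return "cuda"
        # the mps backend only exists in torch builds that support it
        mps = getattr(torch.backends, "mps", None)
        if mps is not None and mps.is_available():
            return "mps"
    except ImportError:
        pass
    return "cpu"
```

Setting `CORRIDORKEY_DEVICE=cpu` in the shell makes a picker like this return `"cpu"` regardless of available hardware, which is why it is handy for isolating device-specific bugs.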
### MPS Operator Fallback
If PyTorch raises an error about an unsupported MPS operator, enable CPU fallback for those ops:
```shell
export PYTORCH_ENABLE_MPS_FALLBACK=1
```
## Linting and Formatting
The project uses ruff for both linting and formatting.
```shell
uv run ruff check             # check for lint errors
uv run ruff format --check    # check formatting (no changes)
uv run ruff format            # auto-format your code
```
| Setting | Value |
|---|---|
| Lint rules | `E`, `F`, `W`, `I`, `B` (basic style, unused imports, import sorting, common bug patterns) |
| Line length | 120 characters |
| Excluded dirs | `gvm_core/`, `VideoMaMaInferenceModule/` (third-party research code kept close to upstream) |
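The table corresponds roughly to a `[tool.ruff]` section like the following (a sketch only — the repo's actual `pyproject.toml` may be organized differently):

```toml
[tool.ruff]
line-length = 120
extend-exclude = ["gvm_core", "VideoMaMaInferenceModule"]

[tool.ruff.lint]
select = ["E", "F", "W", "I", "B"]
```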
CI runs both checks on every pull request. Running them locally before pushing saves a round-trip.
## Making Changes
### Pull Request Workflow
- Fork the repo and create a branch for your change.
- Make your changes.
- Run `uv run pytest` and `uv run ruff check` to make sure everything passes.
- Open a pull request against `main`.
In your PR description, focus on why you made the change, not just what changed. If you're fixing a bug, describe the symptoms. If you're adding a feature, explain the use case. A couple of sentences is plenty.
### What Makes a Good Contribution
- Bug fixes — especially for edge cases in EXR/linear workflows, color space handling, or platform-specific issues.
- Tests — more test coverage is always welcome, particularly for `clip_manager.py` and `inference_engine.py`.
- Documentation — better explanations, usage examples, or clarifying comments in tricky code.
- Performance — reducing GPU memory usage, speeding up frame processing, or optimizing I/O.
## Model Weights
The model checkpoint (`CorridorKey_v1.0.pth`) and optional GVM/VideoMaMa weights are not in the git repo. Most tests don't need them. If you're working on inference code and need the weights, follow the download instructions in the Installation guide.
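If you do add a test that needs the checkpoint, one reasonable pattern is to skip it when the file is missing. A sketch (the `checkpoints/` path and helper name are illustrative — point it at wherever your local copy lives):

```python
from pathlib import Path

import pytest

# Illustrative location; adjust to where you downloaded the checkpoint
WEIGHTS = Path("checkpoints/CorridorKey_v1.0.pth")

requires_weights = pytest.mark.skipif(
    not WEIGHTS.exists(),
    reason="CorridorKey_v1.0.pth not downloaded",
)

@requires_weights
def test_checkpoint_is_nonempty():
    # Only runs on machines where the weights file is present
    assert WEIGHTS.stat().st_size > 0
```

This keeps the full suite green for contributors who never download the weights, while still exercising inference code on machines that have them.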
## Questions?
Join the Discord — it's the fastest way to get help or discuss ideas before opening a PR.