My UV Docker workflow
“uv” is a newish Python package installer and resolver. It strikes a nice balance between the simplicity of plain old venv and the complexity of Poetry. The team behind it has so far made the right opinionated choices, and I believe it will continue to grow.
I decided to migrate a few projects to it. Some were super simple, but one was particularly complex, with platform-specific requirements involving CUDA and PyTorch. The documentation gives many options, which can make a migration overwhelming with the paralysis of choice. Additionally, uv does not yet generate a platform-agnostic lockfile, so there are a couple of things to watch out for in complex OS-specific projects.
Here is where I ended up. I feel it is a nice balance: keeping the same simple pip workflows while gaining the speed of uv, with a straightforward two-minute migration process.
FROM python:[preferred image]
# your normal setup up to pip install
# 1. install uv inside of Docker
COPY --from=ghcr.io/astral-sh/uv:latest /uv /bin/uv
# 2. Copy your requirements to Docker
COPY requirements.in .
# 3. compile your requirements
RUN uv pip compile requirements.in -o requirements.txt
# 4. Install the newly compiled requirements, specific to the Docker platform
RUN uv pip sync requirements.txt --no-cache --compile-bytecode --system
# the rest of Dockerfile
The key changes are just four simple lines, plus using a requirements.in file for your dependencies.
Simple and optional optimizations are:
- --compile-bytecode
Compile Python files to bytecode after installation. By default, uv does not compile Python (.py) files to bytecode (__pycache__/*.pyc); instead, compilation is performed lazily the first time a module is imported. For use cases in which start time is critical, such as CLI applications and Docker containers, this option can be enabled to trade longer installation times for faster start times.
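To make the lazy-vs-eager distinction concrete, here is a small standard-library sketch of the same idea. uv's own implementation differs (it is not calling Python's compileall); this just illustrates when the .pyc cache appears:

```python
import compileall
import pathlib
import tempfile

with tempfile.TemporaryDirectory() as d:
    pkg = pathlib.Path(d)
    (pkg / "mymod.py").write_text("VALUE = 42\n")

    # Without eager compilation, no bytecode cache exists yet; it would
    # only appear the first time mymod is imported.
    assert not (pkg / "__pycache__").exists()

    # Eagerly compile, as --compile-bytecode does at install time.
    compileall.compile_dir(d, quiet=1)

    pyc_count = len(list((pkg / "__pycache__").glob("mymod.*.pyc")))

print(pyc_count)  # the cached .pyc now exists, so first import skips compiling
```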
- --no-cache
The --no-cache option (uv's equivalent of pip's --no-cache-dir) tells uv not to save downloaded packages locally, since we are in an ephemeral container. The cache doesn't persist anyway.
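If your builder supports BuildKit, an alternative to disabling the cache is persisting it across builds with a cache mount. A sketch, assuming you run as root (/root/.cache/uv is uv's default cache location in that case):

```dockerfile
# Persist uv's cache between image rebuilds instead of throwing it away.
RUN --mount=type=cache,target=/root/.cache/uv \
    uv pip sync requirements.txt --compile-bytecode --system
```

This speeds up repeated local builds at the cost of a slightly less portable Dockerfile; for one-shot CI builds, --no-cache is the simpler choice.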
- --system
By default, uv installs into the virtual environment in the current working directory or any parent directory. The --system option instructs uv to instead use the first Python found in the system PATH.
Typically, a requirements.in file lists whatever you actually import in your project. For example, in our complex PyTorch and CUDA project, this was the requirements.in:
colpali-engine==0.3.1
runpod==1.7.0
Pillow==10.4.0
Those three “simple” dependencies, when compiled, generate 120+ other packages that are super fragile and OS-dependent. With uv, though, this is all abstracted and taken care of, with all the action happening at Dockerfile build time.
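To give a feel for the shape of the compiled output, here is a tiny parser over a made-up excerpt of a requirements.txt as uv pip compile emits it. The torch pin and the "via" annotations are illustrative, not the real 120-package output:

```python
# Hypothetical excerpt of a compiled requirements.txt (uv normalizes
# package names to lowercase and annotates where each pin came from).
compiled = """\
# This file was autogenerated by uv
colpali-engine==0.3.1
    # via -r requirements.in
pillow==10.4.0
    # via
    #   -r requirements.in
    #   colpali-engine
torch==2.4.0
    # via colpali-engine
"""

# Every non-comment line is a `name==version` pin; the indented
# comment lines record the dependency chain that pulled it in.
pins = [
    line.split("==")[0]
    for line in compiled.splitlines()
    if line and not line.lstrip().startswith("#") and "==" in line
]
print(pins)  # ['colpali-engine', 'pillow', 'torch']
```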
To exactly match everything between development and production, build with --platform linux/amd64 or use the platform option in docker-compose; this is the case with or without uv.

This workflow has been in production for a couple of weeks now, with no issues. It was relatively painless and quick. The biggest gain is probably around developer experience: a simple requirements.in file allows for quick upgrades and confidence that nothing will break accidentally.
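If you use docker-compose, the platform pin can live in the compose file. A minimal sketch (the service name is illustrative):

```yaml
services:
  app:
    build: .
    platform: linux/amd64   # match your production architecture
```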


