In a recent tweet, PyTorch announced that they will no longer publish packages to Anaconda. OK, nightly builds first, but the writing is on the wall.
No worries, below is an alternative:
- Uses Mamba
- Installs cuda + cudatoolkit (nvcc and other tools, useful for building some LLM acceleration libraries)
- Installs PyTorch from pip
- Works on my computer
- Is harder to replicate in an automated fashion
Assume that you have mamba as your favorite Python package manager. If not, well, conda would also work, but I strongly suggest switching ASAP. Something something licensing issues.
Run in a console:
mamba create -n torch python=3.12 cuda=12.4 cudatoolkit -c nvidia
This will install some 4 GB of files. Feel free to rename the env, or drop cudatoolkit if you don’t need it.
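If you want something closer to reproducible, the same env can be written down as an environment.yml. This is a sketch: the channels and pins below are my reading of the one-liner above, not something from the original post.

```yaml
name: torch
channels:
  - nvidia
  - conda-forge
dependencies:
  - python=3.12
  - cuda=12.4
  - cudatoolkit
```

Then `mamba env create -f environment.yml` should land you in roughly the same place.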
Activate it:
mamba activate torch
Let’s do some tests:
$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2024 NVIDIA Corporation
Built on Thu_Mar_28_02:18:24_PDT_2024
Cuda compilation tools, release 12.4, V12.4.131
Build cuda_12.4.r12.4/compiler.34097967_0
$ python --version
Python 3.12.7
Ok, now the new bit (it’s one line):
pip install torch==2.5.1 torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124
Note that I pinned the torch version and the index URL. Justin Case. And a bit of future-proofing.
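The same pin also fits in a requirements file, which softens the “harder to replicate” point from the list above. The filename is my choice, not from the post:

```text
# requirements.txt (a sketch of the pip one-liner above)
--index-url https://download.pytorch.org/whl/cu124
torch==2.5.1
torchvision
torchaudio
```

Running `pip install -r requirements.txt` inside the activated env should fetch the same wheels.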
Let it settle, then let’s test what we have.
$ python -c "import torch;print(torch.cuda.is_available());print(torch.__version__)"
True
2.5.1+cu124
And there we have it: a CUDA-capable torch env installed from pip.
Disclaimer:
It works ON MY COMPUTER
It prints True when asked about CUDA; I did not run any meaningful code in the env. But in my experience, once the CUDA device shows up as available, there have been no surprises in a very long time.
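Since the only check above is is_available(), here is a sketch of a slightly more meaningful smoke test. The matmul and the symmetry check are my additions, not something the original post ran; it falls back to CPU so it still executes on machines without CUDA.

```python
import torch

# Pick the CUDA device if the driver and the wheel line up, else fall back to CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

# A tiny matmul: enough to exercise the kernels without a real workload.
x = torch.randn(256, 256, device=device)
y = x @ x.T

# A @ A.T is symmetric, so this is a cheap sanity check on the math.
assert torch.allclose(y, y.T, atol=1e-3)
print(f"ok on {device}, torch {torch.__version__}")
```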
Maybe we should switch to tinygrad? JAX+Keras? Opinions? Ping me on X!