Recent Issues Encountered When Installing NVIDIA CUDA and Python Torch Locally
Hardware & System
My graphics card is an NVIDIA GeForce RTX 2060, and the operating system is Windows 10.
CUDA Toolkit
Visit this link to install the official CUDA Toolkit: CUDA Toolkit - Free Tools and Training. After installation, I found that the NVIDIA program on the desktop could not be opened. I then updated the NVIDIA driver, and it worked.
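To confirm that the driver and the toolkit are actually visible from the command line, two standard commands are enough; nvidia-smi ships with the driver and nvcc with the CUDA Toolkit:
# Shows the driver version and the highest CUDA version the driver supports
nvidia-smi
# Shows the version of the installed CUDA Toolkit compiler
nvcc --version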
Using UV to Manage Python Environments
There are many Python environment setup tools, such as Anaconda, Miniconda, pyenv, and UV. I recommend using UV. UV is a high-performance Python package and project management tool developed by Astral in Rust. It aims to provide a faster package installation and dependency management experience than traditional pip. UV can not only manage Python packages but also manage multiple Python versions and switch between them quickly.
You can install UV using the following commands:
# On macOS and Linux.
curl -LsSf https://astral.sh/uv/install.sh | sh
# On Windows.
powershell -c "irm https://astral.sh/uv/install.ps1 | iex"
# With pip.
pip install uv
After installation, you can check whether UV was installed successfully by running uv --version.
Python Version
After downloading the 2 GB Torch package, I found that one of the project dependencies required a Python version between 3.7 and 3.11, while my Python version was 3.13. I recommend using Python 3.10, which should be sufficient for most cases.
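If you don't already have a 3.10 interpreter on the machine, UV can download and manage one for you, so no separate installer is needed:
# Download a standalone Python 3.10 managed by UV
uv python install 3.10
# List the Python versions UV can see
uv python list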
Virtual Environment
Create a virtual environment in your project folder by running uv venv --python 3.10.
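For example, from inside the project folder (my-project below is just a placeholder name):
cd my-project
# Creates a .venv directory in the project using Python 3.10
uv venv --python 3.10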
Installing Dependencies
To install general dependencies (excluding torch), run uv pip install xxx1 xxx2 xxx3. There is no need to activate the virtual environment for this; however, you do need to activate it when running your code. If you want to use a mirror to speed things up, for example Tsinghua's mirror, change the command to uv pip install xxx1 xxx2 xxx3 -i https://pypi.tuna.tsinghua.edu.cn/simple. To install all dependencies from requirements.txt, run uv pip install -r requirements.txt.
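As a concrete sketch, with numpy and requests standing in for whatever packages your project actually needs:
# Install individual packages through the Tsinghua mirror
uv pip install numpy requests -i https://pypi.tuna.tsinghua.edu.cn/simple
# Install everything listed in requirements.txt the same way
uv pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple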
Installing Torch
First, visit this link: Start Locally. Select your environment, and it will generate the installation command. Although my CUDA version is 12.8, it was not available, so I chose the highest available version, 12.6. The command given to me was pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu126.
I split it into two parts. The first is uv pip install torchvision torchaudio --index-url https://download.pytorch.org/whl/cu126, which should not be a problem since these packages are relatively small. The key part is the second: uv pip install torch --index-url https://download.pytorch.org/whl/cu126. The Torch package alone is over 2 GB. You can try this command to see your download speed; if the speed is not satisfactory, use the method below.
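For reference, the split written out as two separate commands:
# The smaller packages
uv pip install torchvision torchaudio --index-url https://download.pytorch.org/whl/cu126
# The 2 GB+ torch package on its own
uv pip install torch --index-url https://download.pytorch.org/whl/cu126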
There are methods online for installing torch through a mirror, but if you choose a newer CUDA or Torch version, you may find that the mirror is ineffective. pip install can install directly from a .whl file, so instead you can find the address of the file you need, download it with another download tool, and then install it from disk. This is much faster. Moreover, when the CUDA and Python versions are the same, multiple virtual environments can be installed from this one downloaded file.
Open the URL from the command generated by Start Locally (the value of --index-url). Mine was https://download.pytorch.org/whl/cu126. Find torch and click to open it. There are many files; if you don't know which one to choose, it helps to understand their naming convention. I chose torch-2.6.0+cu126-cp310-cp310-win_amd64.whl. Here, 2.6.0 is the Torch version, cu126 is the CUDA version (CUDA 12.6), cp310 is the Python version (Python 3.10), and win_amd64 is the operating system and architecture.
Save the downloaded file somewhere outside the project. Then, in the project folder, run uv pip install followed by the path to the .whl file to install Torch.
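For example, assuming the wheel was saved to D:\downloads (a path used here purely for illustration):
# Install torch from the locally downloaded wheel
uv pip install D:\downloads\torch-2.6.0+cu126-cp310-cp310-win_amd64.whl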
Testing Whether CUDA and Torch Are Working
Create a check_cuda_torch.py file:
import torch

def test_pytorch_and_cuda():
    print("PyTorch Version:", torch.__version__)

    # Check if CUDA is available
    if torch.cuda.is_available():
        print("Is CUDA Available: Yes")
        print("CUDA Version:", torch.version.cuda)
        print("Number of CUDA Devices Supported:", torch.cuda.device_count())

        # Get the name of the current device
        current_device = torch.cuda.current_device()
        print("Current CUDA Device Name:", torch.cuda.get_device_name(current_device))

        # Test tensor operations by creating tensors directly on the GPU
        x = torch.tensor([1.0, 2.0, 3.0], device="cuda")
        y = torch.tensor([4.0, 5.0, 6.0], device="cuda")
        z = x + y
        print("Result of Calculation on GPU:", z)
    else:
        print("Is CUDA Available: No")

if __name__ == "__main__":
    test_pytorch_and_cuda()
Activate the virtual environment by running .venv/Scripts/activate, and then run python check_cuda_torch.py.
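If everything is set up correctly, the output should look roughly like this (the versions and device name below simply match the configuration described above; yours may differ):
PyTorch Version: 2.6.0+cu126
Is CUDA Available: Yes
CUDA Version: 12.6
Number of CUDA Devices Supported: 1
Current CUDA Device Name: NVIDIA GeForce RTX 2060
Result of Calculation on GPU: tensor([5., 7., 9.], device='cuda:0')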
For other issues, please consult an AI assistant.