Docker images for Hugging Face Transformers.
Available versions:
- 4.43.1 (CUDA 12.1, CUDA 12.1 langchain, CUDA 12.1 langchain/optional RAG, CUDA 12.1 MMS)
- 4.42.4-post (CUDA 12.1, CUDA 12.1 langchain)
- 4.42.3 (CUDA 12.1, CUDA 12.1 langchain)
- 4.40.2 (CUDA 12.1, CUDA 12.1 langchain)
- 4.40.0 (CUDA 11.7, CUDA 11.7 langchain)
- 4.36.0 (CUDA 11.7, CUDA 11.7 Mistral, CUDA 11.7 Text classification)
- 4.35.0 (CUDA 11.7, CUDA 11.7 Mistral)
- 4.31.0 (CUDA 11.7, CUDA 11.7 with falcontune, CUDA 11.7 Llama2 (8bit), CUDA 11.7 QLoRA, CUDA 11.7 RemBERT, CUDA 11.7 translate)
- 4.7.0 (CUDA 11.1, CUDA 11.1 with finetune-gpt2xl)
If models or datasets require you to be logged into Hugging Face, you can give your Docker container access via an access token.
To create an access token:
- Log into https://huggingface.co
- Go to Settings -> Access tokens
- Create a token (read access is sufficient unless you want to push models back to Hugging Face)
- Copy the token to the clipboard
- Save the token in a .env file, using
HF_TOKEN
as the variable name
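The steps above boil down to a one-line file. A minimal sketch (the token value is a placeholder, not a real token):

```shell
# Save the access token in a .env file in the current directory
# (replace the placeholder with the token copied from the settings page)
echo 'HF_TOKEN=hf_your_token_here' > .env

# The file should contain exactly one KEY=VALUE line
cat .env   # → HF_TOKEN=hf_your_token_here
```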
Add the following parameter to make all environment variables stored in the .env file in the current directory available to your Docker container:
--env-file=`pwd`/.env
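Docker parses the file given to --env-file as one KEY=VALUE pair per line and exports each pair into the container. A minimal sketch of how that fits into an invocation (the image name is illustrative, not one of the tags listed above; the docker command itself is commented out, and the parsing step is emulated in plain shell):

```shell
# Create the .env file with a placeholder token
echo 'HF_TOKEN=hf_your_token_here' > .env

# Illustrative invocation; substitute the image you actually pulled:
# docker run --rm --gpus=all --env-file="$(pwd)/.env" my-transformers-image bash

# Emulate what --env-file does: export each KEY=VALUE line
export "$(grep '^HF_TOKEN=' .env)"
echo "$HF_TOKEN"   # → hf_your_token_here
```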
With the HF_TOKEN environment variable set, you can now log into Hugging Face inside your
Docker container using the following command:
huggingface-cli login --token=$HF_TOKEN
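Before logging in, it can be worth a quick sanity check that the variable actually reached the container (this only verifies the variable is set, not that the token is valid):

```shell
# Inside the container, confirm HF_TOKEN arrived via --env-file
python3 -c 'import os; print("HF_TOKEN is", "set" if os.environ.get("HF_TOKEN") else "NOT set")'

# To check the token itself after logging in:
# huggingface-cli whoami
```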