Deploying LLMs in DKubeX¶
Both base and finetuned LLMs can be deployed in DKubeX. The steps to deploy them are given below.
Note
To make an LLM deployment accessible to all users on a particular DKubeX setup, use the --public
flag in the deployment command.
Deploying Base LLMs¶
You can deploy base LLMs that are registered with the DKubeX LLM Registry, as well as base LLMs available from the Huggingface repository.
To list all base LLMs registered with DKubeX, use the following command.
d3x llms list
Information
To see the full list of LLMs registered with the DKubeX LLM Registry, please visit the List of LLMs in DKubeX LLM Catalog page.
To deploy a base LLM registered with the DKubeX LLM registry, use the following command. Replace the parts enclosed within <> with the appropriate details.
Note
If you are using an EKS setup, change the value of the --type flag from a10 to g5.4xlarge in the following command.
d3x llms deploy --name <name of the deployment> --model <LLM Name> --type <GPU Type> --token <access token for the model (if required)>
d3x llms deploy --name llama27b --model meta-llama/Llama-2-7b-chat-hf --type a10 --token hf_Ahq***********jWmO
You can check the status of the deployment from the Deployment page in DKubeX or by running the following command.
d3x serve list
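Once the deployment is listed as running, you can send it a test prompt over HTTP. The following is a minimal smoke-test sketch, assuming the deployment exposes an HTTP endpoint; the URL, route, and request schema shown here are assumptions, so copy the actual endpoint of your deployment from the DKubeX Deployments page.
import requests  # third-party HTTP client: pip install requests

# Hypothetical endpoint; replace with the real endpoint shown for your
# deployment on the DKubeX Deployments page.
ENDPOINT = "http://<dkubex-address>/serve/llama27b"

# The request body below is illustrative; the actual schema depends on
# the serving API exposed by your deployment.
payload = {"prompt": "What is DKubeX?", "max_tokens": 128}

response = requests.post(ENDPOINT, json=payload, timeout=120)
response.raise_for_status()  # fail loudly if the deployment is not reachable
print(response.json())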
You can also deploy base LLMs that are not available in the DKubeX LLM Registry directly from the Huggingface repository by providing a deployment configuration file. To do so, use the following command. Replace the parts enclosed within <> with the appropriate details.
Attention
Make sure the deployment configuration file for the LLM you want to deploy is available in your workspace.
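For reference, the deployment configuration file is a YAML file kept in your workspace. The sketch below is purely illustrative: the keys shown are assumptions, not the authoritative DKubeX schema, so base your file on a sample configuration shipped with your DKubeX setup.
# llama213b.yaml -- illustrative sketch only; the keys below are
# assumptions, not the authoritative DKubeX config schema.
name: llama213b                            # assumed key: deployment model name
model: meta-llama/Llama-2-13b-chat-hf      # assumed key: Huggingface model id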
Note
If you are using an EKS setup, change the value of the --type flag from a10 to g5.4xlarge in the following command.
d3x llms deploy --name <deployment name> --config <path to deployment config file> --type <GPU type> --token <access token for the model (if required)>
d3x llms deploy --name llama213b --config /home/ocdlgit/llama213b.yaml --type a10 --token hf_Ahq***************WmO
You can check the status of the deployment from the Deployment page in DKubeX or by running the following command.
d3x serve list
Deploying Finetuned LLMs¶
You can deploy LLMs that have been finetuned and saved in your workspace, or finetuned LLMs registered in MLFlow in your workspace.
To deploy a finetuned LLM saved in your workspace, use the following command. Replace the parts enclosed within <> with the appropriate details.
Note
If you are using an EKS setup, change the value of the --type flag from a10 to g5.4xlarge in the following command.
d3x llms deploy -n <name of the deployment> --base_model <base LLM name> -m <absolute path to the finetuned model> --type <GPU type> --token <access token for the model (if required)>
d3x llms deploy -n llama27bft --base_model meta-llama/Llama-2-7b-chat-hf -m /home/ocdlgit/finetuned_llama27b --type a10 --token hf_Ahq*********************WmO
You can check the status of the deployment from the Deployment page in DKubeX or by running the following command.
d3x serve list
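The -m flag above points to a directory in your workspace that contains the finetuned model. As an illustration, assuming the deployment expects a model saved in the standard Huggingface format, a directory in that layout can be produced with save_pretrained from the transformers library; the model id and output path below are examples only.
# Sketch: writing a finetuned Huggingface model and its tokenizer to a
# workspace directory that can then be passed to `d3x llms deploy -m`.
# The model id and output path are illustrative examples.
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "meta-llama/Llama-2-7b-chat-hf"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# ... finetuning code would run here ...

out_dir = "/home/ocdlgit/finetuned_llama27b"
model.save_pretrained(out_dir)      # writes config.json and the model weights
tokenizer.save_pretrained(out_dir)  # writes the tokenizer files alongside them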
To deploy a finetuned LLM registered in MLFlow in your workspace, follow the steps below:
To list all LLMs registered in MLFlow, use the following command.
d3x models list
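The deploy command below references the model as <name of registered model>:<model version>. For illustration, a finetuned model logged as run artifacts could be registered in the MLFlow Model Registry as follows; the run id, artifact path, and registered name are hypothetical.
# Sketch: registering finetuning-run artifacts in the MLFlow Model Registry,
# after which the model should appear in `d3x models list`.
# The run id, artifact path, and registered name are hypothetical.
import mlflow

result = mlflow.register_model(
    model_uri="runs:/<run id>/finetuned_llama27b",
    name="llama27b",
)
print(f"{result.name}:{result.version}")  # e.g. llama27b:1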
To deploy a finetuned LLM registered in MLFlow, use the following command. Replace the parts enclosed within <> with the appropriate details.
Note
If you are using an EKS setup, change the value of the --type flag from a10 to g5.4xlarge in the following command.
d3x llms deploy -n <name of the deployment> --base_model <base LLM name> --mlflow <name of registered model>:<model version> --type <GPU type> --token <access token for the model (if required)>
d3x llms deploy -n llama27bft --base_model meta-llama/Llama-2-7b-chat-hf --mlflow llama27b:1 --type a10 --token hf_Ahq****************WmO
You can check the status of the deployment from the Deployment page in DKubeX or by running the following command.
d3x serve list