Deploying LLMs in DKubeX

LLMs can be deployed in DKubeX using KServe. The steps to deploy them are given below.

Note

  • To make the LLM deployment accessible to all users on a particular DKubeX setup, use the --publish flag in the deployment command.

  • To deploy an LLM on DKubeX with SkyPilot and Sky-Serve, visit Deploying LLMs with SkyPilot.

Deploying Base LLMs

You can deploy base LLMs that are registered with the DKubeX LLM catalog, or deploy models from a Hugging Face repository using a custom configuration file.

  • To list all base LLMs registered with DKubeX, use the following command.

    d3x llms list
    

    Information

    To see the full list of LLMs registered with the DKubeX LLM catalog, visit the List of LLMs in DKubeX LLM Catalog page.

To deploy a base LLM registered with the DKubeX LLM Catalog, use the following command. Replace the parts enclosed within <> with the appropriate details.

Note

If you are using an EKS setup, change the value of the --type flag from a10 to g5.4xlarge in the following command.

d3x llms deploy --name <deployment-name> --model <model-name> --type <gpu-type> --token <access-token> --kserve

  • Provide a unique name for the LLM deployment after the --name flag, replacing <deployment-name> in the command.

  • Replace <model-name> after the --model flag with the name of the LLM from the DKubeX LLM catalog.

  • Replace <gpu-type> after the --type flag with the GPU type to deploy on (for example, a10).

  • Provide the Hugging Face access token for the LLM after the --token flag, replacing <access-token> in the command. This is needed only if the model requires one.

  • Use the --publish flag to make the deployment details available to all users on the DKubeX setup.

  • Use the --kserve flag to deploy the LLM using KServe.

  • Use the --min_replicas and --max_replicas flags, each followed by a number, to set the minimum and maximum replica counts for the deployment. For example, --min_replicas 1.

  • You can check the status of the deployment from the Deployment page in DKubeX or by running the following command.

    d3x serve list
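
As a concrete sketch, the flags described above can be assembled into a single deploy command. The deployment name, model name, and token below are hypothetical placeholders, not entries from the DKubeX LLM catalog; the script only echoes the composed command so it can be reviewed before running.

```shell
# Hypothetical values -- substitute your own. The model name must match an
# entry in the DKubeX LLM catalog; the token is your Hugging Face access
# token, needed only for gated models.
NAME="my-llm-deploy"
MODEL="example-llm-7b"        # hypothetical catalog entry
GPU_TYPE="a10"                # use g5.4xlarge on an EKS setup
TOKEN="hf_xxxxxxxxxxxx"       # placeholder token

# Compose and print the command; remove the echo to actually deploy.
echo d3x llms deploy \
  --name "$NAME" \
  --model "$MODEL" \
  --type "$GPU_TYPE" \
  --token "$TOKEN" \
  --min_replicas 1 \
  --max_replicas 2 \
  --kserve
```

After running the real command, `d3x serve list` shows the new deployment and its status.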