Quickstart¶
In this example we will explore the basic features of DKubeX: ingesting a dataset from our workspace, deploying a base LLM, and building a RAG chat application on top of it. The steps are as follows:
Prerequisites¶
You must have the current version of DKubeX installed on your system. For detailed instructions regarding installation and logging in to DKubeX, please refer to Installation.
For this example, you ideally need an a10 GPU (g5.4xlarge) node attached to your cluster.
Attention
In case of an RKE2 setup, please make sure that you have labeled the node as “a10”. If you are using any other type of GPU node, make sure to use the label that you set for that node during the DKubeX installation process.
Make sure you have access to the model you are going to deploy (in this example, Llama3-8B), along with its secret token if one is required.
Note
In case the model you want to deploy is not in the DKubeX LLMs registry, you can also deploy LLMs directly from the HuggingFace model hub using the configuration file for that particular model. Make sure you have access to the model (in case it is a private one).
For instructions on how to deploy a model from the HuggingFace repository, please refer to Deploying Base LLMs.
Export the following variables to your workspace by running the following commands on your DKubeX Terminal. Replace the <username> part with your DKubeX username.
export NAMESPACE="<username>"
export HOMEDIR=/home/${NAMESPACE}
This example uses the ContractNLI dataset wherever a dataset is required. You need to download the dataset to your DKubeX workspace.
Attention
Although the ContractNLI dataset is distributed under the Creative Commons Attribution 4.0 International Public License, it is recommended to go through the terms and conditions of the dataset before using it. You can read the terms and conditions here: https://stanfordnlp.github.io/contract-nli/#download
To download the dataset, open the Terminal application from the DKubeX UI and run the following command:
wget https://stanfordnlp.github.io/contract-nli/resources/contract-nli.zip -P ${HOMEDIR}/
Unzip the downloaded file using the following commands. A folder called contract-nli will be created which contains the entire dataset. At this point, also remove the unnecessary files from the dataset folder along with the downloaded archive.
unzip ${HOMEDIR}/contract-nli.zip -d ${HOMEDIR}/
rm -rf ${HOMEDIR}/contract-nli/dev.json ${HOMEDIR}/contract-nli/LICENSE ${HOMEDIR}/contract-nli/README.md ${HOMEDIR}/contract-nli/TERMS ${HOMEDIR}/contract-nli/test.json ${HOMEDIR}/contract-nli/train.json ${HOMEDIR}/contract-nli.zip
Ingesting Data¶
Note
For detailed information regarding this section, please refer to Data ingestion and creating dataset.
Important
This example uses the BAAI/bge-large-en-v1.5 embeddings model for data ingestion.
Configuring the .yaml file for ingestion¶
You need to provide a configuration .yaml file to be used during the ingestion process.
On the Terminal application in DKubeX UI, run the following command to pull the ingestion configuration file.
wget https://raw.githubusercontent.com/dkubeio/dkubex-examples/refs/tags/v0.8.5.4.1/rag/ingestion/ingest.yaml -P ${HOMEDIR}/
You need to provide proper details in the ingest.yaml file. Run
vim ${HOMEDIR}/ingest.yaml
and make the following changes (a sketch of the edited file is shown after this list):
In the embedding section, select huggingface, as we are going to use the BAAI/bge-large-en-v1.5 embedding model, which is already present in the DKubeX models catalog.
In the reader section, select file, as we are going to use the file reader from LlamaIndex to read the documents for ingestion. For more information regarding the file reader, visit the LlamaIndex documentation.
Uncomment the entire huggingface section under Embedding Model Details. This is where the name of the embedding model to be used is provided.
Make sure the file section under Data Reader Details is uncommented. Under it, in the input_dir field, provide the absolute path to your dataset folder, i.e. in this case, /home/<your username>/contract-nli/ (provide your DKubeX username in place of <your username>).
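For reference, after these edits the relevant portions of ingest.yaml might look roughly like the following. This is a minimal sketch based only on the fields described above (the model: key under the huggingface section is an assumption); the downloaded file carries the authoritative structure and additional options.
embedding: huggingface
reader: file

# Embedding Model Details (assumed key name: model)
huggingface:
  model: BAAI/bge-large-en-v1.5

# Data Reader Details
file:
  input_dir: /home/<your username>/contract-nli/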
You can also modify and customize several other options in the ingest.yaml file according to your needs, including the splitter class, chunk size, embedding model to be used, etc.
Triggering ingestion and creating dataset¶
Open the Terminal application in DKubeX UI.
Use the following command to perform data ingestion and create the dataset. A dataset named contracts will be created.
d3x dataset ingest -d contracts --config ${HOMEDIR}/ingest.yaml --faq
Note
A few documents from the ContractNLI dataset may show errors during the ingestion process. This is expected behaviour, as those documents’ formats are not suitable for ingestion.
The time taken for the ingestion process to complete depends on the size of the dataset. The ContractNLI dataset contains 605 documents, and the ingestion process may take around 30 minutes to complete. Please wait for the process to complete.
In case the terminal shows a timed-out error, the ingestion is still in progress; run the command printed on the CLI after the error message to continue streaming the ingestion logs.
The record of the ingestion and its related artifacts are also stored in the MLFlow application on the DKubeX UI.
To check whether the dataset has been created, stored, and is ready to use, use the following command:
d3x dataset list
To check the list of documents that have been ingested into the dataset, use the following command:
d3x dataset show -d contracts
Deploying LLMs from Model Catalog¶
Note
For detailed information regarding this section, please refer to Deploying LLMs in DKubeX.
Here we will deploy the base Llama3-8B model, which is pre-registered with DKubeX.
Note
This workflow requires an a10 GPU node; make sure your cluster is equipped with one. If you are using any other type of GPU node, make sure to use the label that you set for that node during the DKubeX installation process.
To list all LLM models registered with DKubeX, use the following command.
d3x llms list
Export the access token for the Llama3-8B model. Replace the <Huggingface token for Llama3-8B> part with the token for the Llama3-8B model.
export HF_TOKEN="<Huggingface token for Llama3-8B>"
Deploy the base Llama3-8B model using the following command.
d3x llms deploy --name=llama38bbase --model=meta-llama/Meta-Llama-3-8B-Instruct --token ${HF_TOKEN} --type=a10 --publish
Note
In case you are using an EKS setup, please change the value of the --type flag in the above command from a10 to g5.4xlarge. If you are using any other type of GPU node, use the label that you set for that node during the DKubeX installation process.
You can check the status of the deployment from the Deployments page in DKubeX or by running the following command.
d3x serve list
Wait until the deployment is in the running state.
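Before wiring the deployment into an application, you can optionally send it a quick test request from the Terminal. The sketch below is an assumption, not a documented DKubeX route: it presumes the deployment exposes an OpenAI-compatible chat completions endpoint, and uses placeholders for the endpoint URL and serving token shown on the Deployments page.
# Hypothetical smoke test; adjust the route if your deployment's API differs.
curl -k "<endpoint URL>/v1/chat/completions" \
  -H "Authorization: Bearer <serving token>" \
  -H "Content-Type: application/json" \
  -d '{"model": "meta-llama/Meta-Llama-3-8B-Instruct", "messages": [{"role": "user", "content": "Say hello."}]}'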
Building your first RAG chat application¶
Note
For detailed information regarding this section, please refer to Creating and accessing the chatbot application.
From the DKubeX UI, open and log into the SecureLLM application. Once open, click on the Admin Login button and log in using the admin credentials provided during installation.
Hint
In case you do not have the credentials for logging in to SecureLLM, please contact your administrator.
On the left sidebar, click on the Keys menu and go to the Application Keys tab on that page.
To create a new key for your application, use the following steps:
On the API key name field, provide a unique name for the key to be created.
From the LLM Keys dropdown list, select DKUBEX.
From the Models dropdown list, select your deployed base model.
Click on the Generate Key button.
A pop-up window will appear on your screen containing the application key for your new application. Alternatively, you can also access your application key from the list of keys in the Application Keys tab.
Copy this application key for later use, as it will be required to create the chatbot application. Make sure to copy the entire key, including the sk- part.
From the DKubeX UI, go to the Terminal application.
You will need to configure the query.yaml file from the dkubex-examples repo, which is used in the query process of the Securechat application. Run the following command to put the query.yaml file on your workspace.
wget https://raw.githubusercontent.com/dkubeio/dkubex-examples/refs/tags/v0.8.5.4.1/rag/query/query.yaml -P ${HOMEDIR}/
Run
vim ${HOMEDIR}/query.yaml
and provide the following details in the query.yaml file (a sketch of the completed file is shown after this list). Once provided, save the file.
In the dataset field, provide the name of the dataset you created earlier, i.e. contracts.
In the embedding field, provide the type of the embedding model used for ingestion, i.e. huggingface.
In the synthesizer section, provide the following details:
In the llm field, make sure dkubex is selected.
In the llm_url field, provide the endpoint URL of the deployed model to be used. The endpoint URL can be found on the Deployments page of the DKubeX UI.
In the llm_key field, provide the serving token for the deployed model to be used. To find the serving token, go to the Deployments page of the DKubeX UI and click on the deployed model name. The serving token will be available on the model details page.
Under the Embedding Model config section, uncomment the entire huggingface section. This is where the name of the embedding model to be used is provided.
In the securellm section, provide the following details:
In the appkey field, provide the application key that you created earlier in the SecureLLM application.
In the dkubex_url field, provide the URL used to access DKubeX.
You will need to configure the securechat.yaml file from the dkubex-examples repo to create the chatbot application. Run the following command to put the securechat.yaml file on your workspace.
wget https://raw.githubusercontent.com/dkubeio/dkubex-examples/refs/tags/v0.8.5.4.1/rag/securechat/securechat.yaml -P ${HOMEDIR}/
Run
vim ${HOMEDIR}/securechat.yaml
and provide the following details in the securechat.yaml file (a sketch of the completed file is shown after this list). Once provided, save the file.
In the name: field, provide a unique name to be used for the chatbot application. In this example, we will use the name ndabase.
In the env:SECUREAPP_ACCESS_KEY: field, provide a password which will be used to access the chatbot application.
In the env:FMQUERY_ARGS: field, provide the following details:
Use the argument llm to specify that the chatbot application will use the LLM deployment (llama38bbase) in DKubeX.
Provide the name of the dataset, i.e. contracts, after the --dataset flag.
Provide the absolute path of the query.yaml file that you configured earlier after the --config flag. It should be /home/<your username>/query.yaml for this example. Replace the <your username> part with your DKubeX username.
In the ingressprefix: field, provide a unique prefix to be used for the chatbot application. In this example, we will use the prefix /ndabase. This will be used in the application URL, e.g. https://123.45.67.89:32443/ndabase.
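For orientation, a completed securechat.yaml for this example could look roughly like the following. The name, env, and ingressprefix keys follow the fields described above, but the FMQUERY_ARGS string is illustrative only, assembled from the llm argument, --dataset, and --config pieces; keep the argument layout already present in the downloaded file.
name: ndabase
env:
  SECUREAPP_ACCESS_KEY: <password for accessing the chatbot>
  # Illustrative argument string; follow the downloaded file's layout
  FMQUERY_ARGS: "llm llama38bbase --dataset contracts --config /home/<your username>/query.yaml"
ingressprefix: /ndabase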
Launch the app deployment with the following command:
d3x apps create --config ${HOMEDIR}/securechat.yaml
To check the status of the app deployment, use the following command:
d3x apps list
Once the app deployment status becomes running, you can access the application from the Apps page of the DKubeX UI. Provide the password that you set in the SECUREAPP_ACCESS_KEY field earlier to start using the chat application.
Hint
You can ask the following questions to the chatbot when using the ContractNLI dataset:
How do I frame a confidential information clause?
What is the difference between a unilateral and mutual NDA?
What are some common exceptions to confidential information clauses?
Tutorials and More Information¶
For more examples including how to train and register models and deploy user applications, please visit the following pages and go through the table provided:
Training Fashion MNIST model in DKubeX
Finetuning open-source LLMs
Deploying models registered in MLFlow in DKubeX
Deploying models from Huggingface repo
Deploying LLM registered in DKubeX
Creating a Securechat App using BGE-large Embeddings and Llama3-8b Summarisation Models
Wine Model Finetuning using Skypilot
Llama2 Finetuning using SkyPilot