Creating a SecureChat App using MPNET-v2 Embeddings and the Llama2-7b Summarisation Model

This example will take you through the procedure of deploying an LLM chatbot that can answer questions based on datasets you provide.

Prerequisites

  • You need to create a folder in your workspace. This folder will contain all the documents you are going to use for data ingestion.

    • You can copy the folder containing your documents into the workspace from your local system using the FileBrowser application on the Workspace tab of the DKubeX UI, or you can create a folder from the CLI and download the files from their URLs.
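
    A minimal CLI sketch of this step (the folder name and document URL are hypothetical; substitute your own):

```shell
# Create a folder in the workspace to hold the documents for ingestion.
# "my-docs" is a placeholder name -- substitute your own.
mkdir -p ~/workspace/my-docs

# Download the source documents into it from their URLs, e.g. with wget
# (the URL below is hypothetical):
# wget -P ~/workspace/my-docs https://example.com/handbook.pdf
```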

  • The Llama2-7b model needs to be deployed on DKubeX.

    Note

    This step requires an A10 GPU node. Make sure your cluster is equipped with one.

    Note

    For more information regarding deploying Llama2-7b model on DKubeX, please visit Deploying LLMs in DKubeX.

    • You need to provide the configuration YAML file for the model to be deployed in the workspace. Upload the configuration file from your local system using the FileBrowser application on the Workspace tab of the DKubeX UI.

    • Deploy the model using the following command. Replace the parts enclosed within $ with the appropriate details.

      d3x llms deploy --name=llama2 --config $path to the config yaml in workspace$ --type=g5.4xlarge --token $your huggingface token for llama2-7b$
      

      Note

      In case of setups brought up on a Rancher cluster, the -t or --type option in this command denotes the node or instance type that you provided in the Prerequisites section while installing DKubeX.

    • To check whether the deployment status has become running, run the following command:

      d3x serve list

  • Export the following variables in your workspace by running the following commands.

    export PYTHONWARNINGS="ignore"
    
    export OPENAI_API_KEY=dummy
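
    The two exports above can be run and verified in one go; a quick sanity check (nothing DKubeX-specific is assumed):

```shell
# Suppress Python warnings and set a placeholder OpenAI API key.
export PYTHONWARNINGS="ignore"
export OPENAI_API_KEY=dummy

# Both variables should now be visible in the environment:
env | grep -E '^(PYTHONWARNINGS|OPENAI_API_KEY)='
```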
    

Data ingestion and creating dataset

  • For MPNET-v2 based data ingestion, use the following command. Replace the parts enclosed within $ with the appropriate details, as shown in the example command.

    d3x fm docs add -d $dataset name$ -s $path to document containing folder$ -emb mpnet-v2

  • To check the datasets that have been created, stored, and are ready to use, run the following command:

    d3x fm docs show datasets
    

Note

For additional commands regarding datasets, documents, and backup, please visit Additional Commands Regarding Datasets, Documents & Backup.

Querying the dataset

You can test a dataset by querying it directly, without creating a chatbot, using the following commands.

Note

You need to create a SecureLLM API key before running these commands. For help regarding this, please visit Creating and Deleting Application Keys.

export OPENAI_API_KEY=$securellm API key$
d3x fm query -i llm -d $dataset name$ -emb mpnet-v2 -dep $llama2 deployment name$ --namespace $Namespace$ --securellm

Creating and accessing the chatbot application

  • To create a chatbot application using MPNET-v2 embeddings and Llama2-7b summarisation, use the following command. Make sure to replace the parts enclosed within $ with the appropriate details as shown in the example.

    d3x apps create --name=$app name$ -e OPENAI_API_BASE=http://securellm-be.securellm:3005/v1 -e OPENAI_API_HOST=http://securellm-be.securellm:3005 -e OPENAI_API_KEY=$securellm API key$ -e DATASET=$dataset name$ -e embeddings=mpnet -e DKUBEX_DEPLOYMENT=llama2 -e APP_TITLE="$app description$" -p 3000 --dockeruser=$dockerhub username$ --dockerpsw=$dockerhub access token$ -rt false -ip /$text to be added after DKubeX IP to open the chatbot$ --publish=true --memory=4 --cpu=1 dkubex123/llmapp:securechat

  • To check the status of the app deployment, use the following command:

    d3x apps list
    
  • Once the app deployment status becomes running, you can access the application from the Apps page of DKubeX UI.