Our Ballast Lane development team was tasked with implementing a Chatbot Proof of Concept (POC) for our client, PrescriberPoint, a pharmaceutical startup. It needed to be capable of answering questions about a drug based on the label, give drug-to-drug interaction information, and assist the user with insurance authorization forms.
Given these requirements, the main concern was how to build the chat service. We decided to go with OpenAI’s API to leverage their large language models (LLMs), such as davinci, ada, and the ChatGPT models, together with LangChain, a framework for developing applications powered by language models.
Familiarity with large language models (LLMs), word embeddings, vector databases, and text chunking was required.
AWS SageMaker was used to test and develop the chatbot. We used what is called an “Agent” as our main entity. The core idea of agents is to use an LLM to choose a sequence of actions to take. In chains, a sequence of actions is hardcoded (in code); in agents, a language model is used as a reasoning engine to determine which actions to take and in which order. You can learn more here.
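To make the chain-vs-agent distinction concrete, here is a minimal sketch. The tool names and the stubbed "reasoning" function are hypothetical stand-ins; in the real POC the reasoning step is an OpenAI model orchestrated by LangChain, not a keyword check.

```python
# Hypothetical tools the agent can choose between.
def label_qa(question: str) -> str:
    return f"Answer from drug label for: {question}"

def interaction_check(question: str) -> str:
    return f"Interaction report for: {question}"

TOOLS = {"label_qa": label_qa, "interaction_check": interaction_check}

def stub_llm_choose_tool(user_message: str) -> str:
    """Stand-in for the LLM reasoning step: pick which tool to run."""
    if "interact" in user_message.lower():
        return "interaction_check"
    return "label_qa"

def run_agent(user_message: str) -> str:
    # An agent asks the model which action to take, then executes it;
    # a chain would instead run a fixed, hardcoded sequence of steps.
    tool_name = stub_llm_choose_tool(user_message)
    return TOOLS[tool_name](user_message)
```

The point of the sketch: the sequence of actions is decided at runtime from the user's intent, which is what let one entry point serve all three requirements below.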
As the agent needed to perform different actions based on the user intent, we defined one for each requirement:
- Drug label questions: We fed the scraped drug label data into a vector DB using OpenAI’s embedding model. That allowed us to retrieve the chunks most similar to the user’s question and feed them into OpenAI’s model to formulate an answer. When the model determines that the retrieved information is not enough to formulate an answer, it simply says that it doesn’t know.
- Drug-to-drug interaction: This was quite straightforward to build. We took the names of the two drugs in question and queried a dedicated API, which returned a response describing any interaction between them.
- Prior auth forms: This required more of a workflow in the frontend. When the agent determined that the user wanted help with prior authorization forms, it returned a formatted output that the frontend could parse, using the values to start a workflow.
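The drug-label flow above (chunk the label, embed the chunks, retrieve the most similar ones) can be sketched as follows. The real POC uses OpenAI's embedding model and a vector DB; here a toy bag-of-words "embedding" and an in-memory cosine search stand in, just to make the retrieval flow visible end to end.

```python
import math
from collections import Counter

def chunk(text: str, size: int = 8) -> list[str]:
    """Split label text into fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text: str) -> Counter:
    """Toy embedding: word counts (the real system calls OpenAI embeddings)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def most_similar_chunks(label_text: str, question: str, k: int = 2) -> list[str]:
    """Rank label chunks by similarity to the question."""
    q = embed(question)
    ranked = sorted(chunk(label_text), key=lambda c: cosine(embed(c), q), reverse=True)
    # These top chunks are what gets fed to the LLM as context for the answer.
    return ranked[:k]
```

In the real pipeline the same three steps apply, only with learned embeddings and a vector DB doing the similarity search at scale.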
After testing the agent thoroughly in SageMaker, we needed to make it available to actual users. We used FastAPI to build a simple Python API, exposing a chat endpoint that would be consumed by a client application built with NextJS.
Instead of writing the base Python API ourselves, we had ChatGPT generate the baseline for us and took it from there.
Our Frontend application was a mixture of mocked and real API calls. We built enough to make an impression and showcase the potential of the application and AI usage.
Currently, we are still working on the application, now building it as a real product and adding more complexity, such as chat history, session history, a feedback mechanism, and message sharing.