
Hi Community
In this article, I will introduce my application irisChatGPT, which is built on the LangChain framework.
First of all, let us have a brief overview of the framework.
The entire world is talking about ChatGPT and how Large Language Models (LLMs) have become so powerful, performing beyond expectations and giving human-like conversations. This is just the beginning of how this power can be applied to every enterprise and every domain!
The most important question that remains is how to apply this power to domain-specific data and scenario-specific response behavior suitable to the needs of the enterprise.
LangChain provides a structured and effective answer to this problem! It helps realize the immense potential of LLMs for building astounding applications by providing a layer of abstraction around them, making them easy and effective to use. In short, LangChain is a framework that enables quick and easy development of applications that make use of Large Language Models such as GPT-3.
The framework also introduces additional possibilities, for example easily using external data sources, such as Wikipedia, to amplify the capabilities of the model. I am sure you have all tried ChatGPT and found that it fails to answer questions about events that occurred after a certain date. In such cases, a search on Wikipedia could help GPT answer more questions.
LangChain Structure
The framework is organized into six modules; each module lets you manage a different aspect of the interaction with the LLM. Let’s see what the modules are.
- Models: Allows you to instantiate and use three different types of language models:
- Large Language Models (LLMs): foundational machine learning models that are able to understand natural language. They accept strings as input and generate strings as output.
- Chat Models: models powered by an LLM but specialized for chatting with the user.
- Text Embedding Models: models used to project textual data into a geometric space. They take text as input and return a list of numbers, the embedding of the text.
- Prompts: The prompt is how we interact with the model to obtain an output from it. Knowing how to write an effective prompt is now of critical importance. This module helps us manage prompts better, for example by creating templates that we can reuse.
- Indexes: The best results often come from combining a model with some of your own textual data, in order to add context or explain something to the model. This module helps us do just that.
- Chains: Often a single API call to an LLM is not enough to solve a task. This module lets multiple tools and calls be chained together to solve complex tasks; for example, one chain can fetch information from Wikipedia and then pass that information to the model as input.
- Memory: This module lets us persist state between calls to a model. Being able to use a model that remembers what has been said in the past will surely improve our application.
- Agents: An agent is an LLM that makes a decision, takes an action, makes an observation about what it has done, and continues in this manner until it can complete its task. This module provides a set of agents that can be used.
Now let’s go into a little more detail and see how to implement code that takes advantage of the different modules.
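As a minimal sketch (not code from irisChatGPT itself), the snippet below wires the Models, Prompts, Chains and Memory modules together. It assumes the classic LangChain Python API and an OPENAI_API_KEY environment variable; the prompt text and questions are made up for illustration.

```python
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain.memory import ConversationBufferMemory

# Models: instantiate a plain LLM
llm = OpenAI(temperature=0)

# Prompts: a reusable template; {history} is filled in by the memory module
prompt = PromptTemplate(
    input_variables=["history", "question"],
    template=(
        "You are a helpful assistant.\n"
        "Conversation so far:\n{history}\n"
        "Question: {question}"
    ),
)

# Memory: keeps the conversation state between calls
memory = ConversationBufferMemory(memory_key="history")

# Chains: ties the model, the prompt and the memory together
chain = LLMChain(llm=llm, prompt=prompt, memory=memory)

print(chain.run("What is InterSystems IRIS?"))
print(chain.run("Does it support Python?"))  # the memory carries over the first exchange
```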
How LangChain works
Step 1: The user sends a question to LangChain.
Step 2: LangChain sends the question to the embedding model.
Step 3: The embedding model converts the question text into a vector (the documents in the database are stored as vectors too) and returns it to LangChain.
Step 4: LangChain sends the vector to the vector database (there are multiple vector databases available; we are using Chroma in our application).
Step 5: The vector database returns the top-K (approximate) nearest-neighbor vectors.
Step 6: LangChain sends the question along with the retrieved nearest-neighbor context to the Large Language Model (we are using OpenAI in our application).
Step 7: The LLM returns the answer to LangChain.
Step 8: LangChain returns the answer to the user.
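Purely as an illustration of these steps (not the application's own code), the flow maps onto the classic LangChain API roughly as follows. It assumes an OPENAI_API_KEY environment variable, the chromadb package for the Chroma vector store, and a couple of made-up example documents.

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.llms import OpenAI
from langchain.chains import RetrievalQA

# Steps 2-4: the embedding model turns the text into vectors,
# which are stored in the Chroma vector database
docs = [
    "InterSystems IRIS is a complete data platform.",
    "LangChain is a framework for building LLM applications.",
]
vectordb = Chroma.from_texts(docs, OpenAIEmbeddings())

# Step 5: the retriever returns the top-K nearest-neighbor documents for a question
retriever = vectordb.as_retriever(search_kwargs={"k": 2})

# Steps 6-8: the question plus the retrieved context go to the LLM,
# and the chain returns the answer
qa = RetrievalQA.from_chain_type(llm=OpenAI(temperature=0), retriever=retriever)
print(qa.run("What is InterSystems IRIS?"))
```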
About Application
The irisChatGPT application leverages the functionality of LangChain, one of the hottest Python frameworks, built around Large Language Models (LLMs). The application is written in ObjectScript with the help of InterSystems Embedded Python functionality. It also contains a Streamlit web application; Streamlit is an open-source Python framework for creating beautiful web apps for data science and machine learning.
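For context, a Streamlit page in this style can be very small. The sketch below is a hypothetical example, not the actual irisChatGPT code: the ask_llm helper and the page layout are my own assumptions, and it again expects an OPENAI_API_KEY environment variable.

```python
import streamlit as st
from langchain.llms import OpenAI


def ask_llm(question: str) -> str:
    # In the real application the call goes through LangChain chains;
    # here we call the LLM directly to keep the sketch short.
    llm = OpenAI(temperature=0)
    return llm(question)


st.title("irisChatGPT demo")
question = st.text_input("Ask a question")
if question:
    st.write(ask_llm(question))
```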
Features
Below is the list of application features:
- ChatGPT with FHIR server
- Answer questions over a Caché database by using SQLDatabaseChain (a minimal sketch follows this list)
- Create your own ChatGPT model and chat with it
- OpenAI ChatGPT
- Wikipedia Search
- Search the internet by using the DuckDuckGo (DDG) general search engine (a second sketch follows this list)
- Generate Python code by using the Python REPL LangChain functionality
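The SQLDatabaseChain feature mentioned above can be sketched roughly as follows. This is not the application's code; it assumes the classic LangChain API and the sqlalchemy-iris driver, and the connection string and table name are placeholders.

```python
from langchain import OpenAI, SQLDatabase, SQLDatabaseChain

# Placeholder IRIS/Caché connection string via the sqlalchemy-iris driver
db = SQLDatabase.from_uri("iris://_SYSTEM:SYS@localhost:1972/USER")
llm = OpenAI(temperature=0)

# The chain turns a natural-language question into SQL, runs it,
# and summarizes the result
db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)
db_chain.run("How many rows are in the Patient table?")  # hypothetical table
```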
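Similarly, the DuckDuckGo search feature can be illustrated with LangChain's built-in tool; this assumes the duckduckgo-search package is installed and is only a sketch, not the application's code.

```python
from langchain.tools import DuckDuckGoSearchRun

# The tool wraps the DuckDuckGo general search engine
search = DuckDuckGoSearchRun()
print(search.run("InterSystems Grand Prix Contest 2023"))
```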
Streamlit Web application ONLINE DEMO
- ObjectScript Reference
- Grand Prix Contest 2023
- Personal ChatGPT
- OpenAI ChatGPT
Thanks