
How to Add a Custom Memory Type in LangChain?

LangChain is used to design language models and chatbots that can interact with humans conversationally. As the name suggests, these chat messages are linked through chains, and the user can also store them in memory. LangChain lets developers use its memory libraries either through built-in classes or by customizing a memory of their own.

Quick Outline

This post will show:

How to Add a Custom Memory Type in LangChain

Conclusion

How to Add a Custom Memory Type in LangChain?

Adding a customized memory type in LangChain lets the user get the most out of the memory by configuring it according to their requirements. To add a custom memory type in LangChain, simply go through the following steps:

Step 1: Installing Frameworks

First, install the LangChain framework to get started with the process of adding a custom memory type:

pip install langchain

Running the above command in the Python notebook installs the dependencies for LangChain.

Install the OpenAI module to get the libraries used to configure the LLM:

pip install openai

This guide will use the spaCy framework to design the custom memory type in LangChain, and the following command installs the module:

pip install spacy

The custom memory uses a hash table (a Python dictionary) to store observations, such as previous chat messages, keyed by entity. The following command downloads spaCy's large English model, "en_core_web_lg", whose Named Entity Recognition pipeline is used to extract those entities:

!python -m spacy download en_core_web_lg

Import the “os” and “getpass” libraries to enter the API key from the OpenAI account and set up its environment:

import os
import getpass

os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")

Step 2: Importing Libraries

The next step is to import the required libraries for customizing the memory type according to the chat model:

from langchain.schema import BaseMemory
from langchain.chains import ConversationChain
from pydantic import BaseModel
from langchain.llms import OpenAI
from typing import List, Dict, Any

Import the spaCy library to load the “en_core_web_lg” model and assign it to the “nlp” variable, which serves as the Natural Language Processing pipeline:

import spacy

nlp = spacy.load("en_core_web_lg")
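
As an optional sanity check, you can confirm that the loaded model's named-entity recognizer finds the entities the custom memory will key on; the sample sentence below is illustrative and the exact labels may vary:

doc = nlp("Harrison works on machine learning at Harvard")
print([(ent.text, ent.label_) for ent in doc.ents])
# typically something like [('Harrison', 'PERSON'), ('Harvard', 'ORG')]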

Step 3: Building Custom Memory

After that, build the custom memory as a class that inherits from BaseMemory and BaseModel. The class stores entities (collected from the input text) in a dictionary, so all the information about an entity is kept together as a single unit. Keeping every entity from the conversation in one place optimizes the performance of the memory and the model:

class SpacyEntityMemory(BaseMemory, BaseModel):
    """Memory class for storing information about entities"""
    # Hash table mapping each entity to the observations stored about it
    entities: dict = {}
    memory_key: str = "entities"

    def clear(self):
        self.entities = {}

    @property
    def memory_variables(self) -> List[str]:
        """Define the variables provided to the prompt"""
        return [self.memory_key]

    def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]:
        """Load the stored observations for the entities found in the input"""
        doc = nlp(inputs[list(inputs.keys())[0]])
        # Collect the stored information for each entity mentioned in the input
        entities = [
            self.entities[str(ent)] for ent in doc.ents if str(ent) in self.entities
        ]
        return {self.memory_key: "\n".join(entities)}

    def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
        """Store the observations from this chat in the memory"""
        text = inputs[list(inputs.keys())[0]]
        doc = nlp(text)
        # Save the input text under every entity it mentions
        for ent in doc.ents:
            ent_str = str(ent)
            if ent_str in self.entities:
                self.entities[ent_str] += f"\n{text}"
            else:
                self.entities[ent_str] = text
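
Before wiring the class into a chain, the memory can be exercised on its own as an optional check. This is a minimal sketch that assumes the inputs use an “input” key (as the chain does later); the exact result depends on which entities spaCy recognizes:

memory = SpacyEntityMemory()
# save an observation, then retrieve it through an entity mention
memory.save_context({"input": "Harrison likes machine learning"}, {"output": "Noted"})
print(memory.load_memory_variables({"input": "What does Harrison like?"}))
# expected (if "Harrison" is recognized): {'entities': 'Harrison likes machine learning'}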

Step 4: Configuring Prompt Template

After that, configure the prompt template that defines the structure of the conversation, with a slot for the entity information retrieved from the memory:

from langchain.prompts.prompt import PromptTemplate

template = """The following is an interaction between a human and a machine (AI). The AI provides details from its context, and if it does not know the answer to a question, it simply says that it does not know.

Entity info:
{entities}

Communication:
Human: {input}
AI:"""

prompt = PromptTemplate(input_variables=["entities", "input"], template=template)
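
To preview the structure the template produces, you can optionally render it with sample values before attaching it to the chain:

print(prompt.format(entities="Harrison likes machine learning", input="What does Harrison like?"))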

Step 5: Testing the Model

Before testing the model, configure the LLM using the OpenAI() method and set up the ConversationChain() with its arguments, including an instance of the custom memory:

llm = OpenAI(temperature=0)

conversation = ConversationChain(
    llm=llm, prompt=prompt, verbose=True, memory=SpacyEntityMemory()
)

Give information to the model through the input argument while calling the predict() method on the conversation variable:

conversation.predict(input="Harrison likes machine learning")

Output

The model absorbs the information, stores it in the memory, and also asks a follow-up question to carry on the conversation.

The user can respond to the model's question to add more information to the memory, or test the memory by asking a question about the stored information:

conversation.predict(
    input="What is Harrison's favorite subject?"
)

The model answers based on the previously stored information, confirming that the custom memory is working.
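
To double-check what was saved, you can optionally inspect the raw entity store held by the chain's memory; the output in the comment is illustrative and depends on spaCy's entity recognition:

print(conversation.memory.entities)
# e.g. {'Harrison': 'Harrison likes machine learning'}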

That’s all about adding a custom memory type in LangChain.

Conclusion

To add a custom memory type in LangChain, install the required modules and import the libraries needed to build the custom memory class. spaCy is the key library in this guide, as its NLP model extracts the entities the memory stores. After that, configure the custom memory and the prompt template that gives the chat interface its structure. Once the configuration is done, test the model's memory by asking questions about the stored information.
