The Langchain integration lets you monitor your Langchain agents and chains with a single line of code.

You should create a new instance of the callback handler for each invocation:

import os
from literalai import LiteralClient

from langchain_openai import ChatOpenAI
from langchain.schema.runnable.config import RunnableConfig
from langchain.schema import StrOutputParser
from langchain.prompts import ChatPromptTemplate

literal_client = LiteralClient(api_key=os.getenv("LITERAL_API_KEY"))

cb = literal_client.langchain_callback()

prompt = ChatPromptTemplate.from_messages(
    [("human", "Tell me a short joke about {topic}")]
)

model = ChatOpenAI(streaming=True)
runnable = prompt | model | StrOutputParser()

res = runnable.invoke(
    {"topic": "ice cream"},
    config=RunnableConfig(callbacks=[cb], run_name="joke")
)
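
Once this runs, the invocation should appear in Literal AI as a run named "joke" (the `run_name` passed above), with the steps of the chain traced by the callback handler.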

Multiple Langchain calls in a single thread

You can combine the Langchain callback handler with the Thread concept to monitor multiple Langchain calls in a single thread.

import os
from literalai import LiteralClient

literal_client = LiteralClient(api_key=os.getenv("LITERAL_API_KEY"))

with literal_client.thread(name="Langchain example") as thread:
    cb = literal_client.langchain_callback()
    # Call your Langchain agent here, passing `cb` in its callbacks
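
For example, here is a minimal sketch that fills in the placeholder, reusing the `runnable` and the `RunnableConfig` import from the first snippet; both invocations are logged under the same "Langchain example" thread:

with literal_client.thread(name="Langchain example") as thread:
    # A fresh callback handler for each invocation, per the note above
    cb = literal_client.langchain_callback()
    runnable.invoke(
        {"topic": "ice cream"},
        config=RunnableConfig(callbacks=[cb]),
    )

    cb = literal_client.langchain_callback()
    runnable.invoke(
        {"topic": "pizza"},
        config=RunnableConfig(callbacks=[cb]),
    )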