Let’s say we have a simple LLM application that takes a user input, performs a retrieval step, and generates the final response with an LLM.

The code for this application would look like this:
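A minimal sketch of such an application. The two function names come from the steps described below; their bodies are placeholder assumptions standing in for a real vector search and a real LLM call:

```python
# Sketch of the application before any logging is added.
# The document store and the LLM call are stubbed placeholders.

DOCUMENTS = [
    "Literal logs LLM applications as threads of steps.",
    "A step records the input and output of one unit of work.",
]

def semantic_search(query: str) -> list[str]:
    """Placeholder retrieval: return documents sharing a word with the query."""
    words = set(query.lower().split())
    return [doc for doc in DOCUMENTS if words & set(doc.lower().split())]

def generate_response(query: str, context: list[str]) -> str:
    """Placeholder LLM call: echo the retrieved context instead of calling a model."""
    return f"Based on {len(context)} document(s): {' '.join(context)}"

def app(user_input: str) -> str:
    context = semantic_search(user_input)
    return generate_response(user_input, context)
```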

Logging the conversation with Literal

First, we initialize the Literal client.
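A minimal initialization sketch; reading the API key from the LITERAL_API_KEY environment variable is an assumption:

```python
import os

from literalai import LiteralClient

# Create the client; the environment-variable name used here is an assumption.
client = LiteralClient(api_key=os.environ["LITERAL_API_KEY"])
```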

Logging the steps

In this example, we have two steps: semantic_search and generate_response. We can use the step decorator to log these steps.
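The real step decorator sends its data to the Literal platform; as a rough illustration of the wrapping pattern only (the decorator below is a hypothetical stand-in, not the literalai API), a step decorator records each call's name, type, input, and output:

```python
import functools

# Stand-in step log; the real client sends steps to the Literal platform instead.
STEPS: list[dict] = []

def step(step_type: str):
    """Hypothetical stand-in: record name, type, input, and output of each call."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            output = func(*args, **kwargs)
            STEPS.append({
                "name": func.__name__,
                "type": step_type,
                "input": {"args": args, "kwargs": kwargs},
                "output": output,
            })
            return output
        return wrapper
    return decorator

@step("retrieval")
def semantic_search(query: str) -> list[str]:
    return []  # placeholder retrieval

@step("llm")
def generate_response(query: str, context: list[str]) -> str:
    return "placeholder answer"
```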

Logging the run

Logging the thread

A thread is a sequence of related steps. In our example, we have a single thread. To create a thread, we use the thread decorator.
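Conceptually, a thread decorator opens a thread when the wrapped function starts and attaches every step logged during the call to it. The decorator below is a hypothetical stand-in illustrating that grouping, not the literalai API:

```python
import functools
import uuid

THREADS: list[dict] = []
_open_threads: list[dict] = []  # stack of threads currently being recorded

def thread(func):
    """Hypothetical stand-in: group all steps logged during the call into one thread."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        record = {"id": str(uuid.uuid4()), "name": func.__name__, "steps": []}
        _open_threads.append(record)
        try:
            return func(*args, **kwargs)
        finally:
            _open_threads.pop()
            THREADS.append(record)
    return wrapper

def log_step(name: str, output) -> None:
    """Attach a step to the innermost open thread, if any."""
    if _open_threads:
        _open_threads[-1]["steps"].append({"name": name, "output": output})

@thread
def app(user_input: str) -> str:
    log_step("semantic_search", [])        # placeholder retrieval step
    log_step("generate_response", "placeholder answer")
    return "placeholder answer"
```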

Logging the user question and final answer

Finally, we can log the user question and the final answer using client.message.
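As a rough stand-in for what logging the two messages amounts to (the function below is hypothetical; the real client.message signature is documented in the Literal reference):

```python
# Hypothetical stand-in: a thread's message log as a list of role/content pairs.
MESSAGES: list[dict] = []

def log_message(role: str, content: str) -> None:
    """Record one conversation message; the real client sends it to the platform."""
    MESSAGES.append({"role": role, "content": content})

question = "What is a thread?"
answer = "A thread is a sequence of related steps."

log_message("user", question)       # the user question
log_message("assistant", answer)    # the final answer
```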

Full code
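A complete runnable sketch of the example. The Literal-specific decorator and message call are replaced by hypothetical stand-ins that collect what would be logged; the real SDK sends this data to the Literal platform instead:

```python
import functools

LOG: list[dict] = []  # stand-in for data sent to the Literal platform

def step(step_type: str):
    """Stand-in step decorator: record each call as a step."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            output = func(*args, **kwargs)
            LOG.append({"kind": "step", "name": func.__name__,
                        "type": step_type, "output": output})
            return output
        return wrapper
    return decorator

def message(role: str, content: str) -> None:
    """Stand-in for logging a conversation message."""
    LOG.append({"kind": "message", "role": role, "content": content})

DOCUMENTS = [
    "Literal logs LLM applications as threads of steps.",
    "A step records the input and output of one unit of work.",
]

@step("retrieval")
def semantic_search(query: str) -> list[str]:
    # Placeholder retrieval: match on shared words instead of embeddings.
    words = set(query.lower().split())
    return [doc for doc in DOCUMENTS if words & set(doc.lower().split())]

@step("llm")
def generate_response(query: str, context: list[str]) -> str:
    # Placeholder for the LLM call.
    return f"Based on {len(context)} document(s): {' '.join(context)}"

def app(user_input: str) -> str:
    message("user", user_input)
    context = semantic_search(user_input)
    answer = generate_response(user_input, context)
    message("assistant", answer)
    return answer
```

Calling app(...) once records the user message, the two steps, and the assistant message, in that order.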

Running the example in Python

To run the example, you need to install the Literal client:

pip install literalai

Then, you can run the example:

python example.py

On the Literal platform, you will see the following thread being logged:

[Image: Rendering of the Thread]