This code integrates an asynchronous OpenAI client with a LiteralAI client to create a conversational agent. It uses LiteralAI’s step decorators for structured logging and tool orchestration within the conversational flow. The agent processes user messages, decides whether to call tools, and generates responses from a predefined set of tools, with a maximum iteration limit to prevent infinite tool-call loops.

This example demonstrates thread-based monitoring, allowing for detailed tracking and analysis of conversational threads.
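
The original server file (`thread-fastapi.py`) is not reproduced here, so below is a minimal sketch of what it could look like. The thread context manager, `step` decorators, and `instrument_openai()` follow the LiteralAI Python SDK; the single `get_current_weather` tool, the `MAX_ITER` value, and the `gpt-4o-mini` model are illustrative assumptions, not the original implementation.

```python thread-fastapi.py
# Minimal sketch of the server side, assuming one hypothetical weather tool.
import json

from fastapi import FastAPI
from literalai import LiteralClient
from openai import AsyncOpenAI
from pydantic import BaseModel

app = FastAPI()
openai_client = AsyncOpenAI()
literal_client = LiteralClient()
literal_client.instrument_openai()  # log OpenAI calls as generations in LiteralAI

MAX_ITER = 5  # cap on tool-call round trips to prevent infinite loops

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {"type": "string", "description": "City, e.g. San Francisco"},
                },
                "required": ["location"],
            },
        },
    }
]


@literal_client.step(type="tool")
def get_current_weather(location: str) -> str:
    # Placeholder tool: a real implementation would call a weather API.
    return json.dumps({"location": location, "temperature": "72", "unit": "F"})


class ProcessRequest(BaseModel):
    message_history: list[dict]
    thread_id: str


@literal_client.step(type="run")
async def run_agent(message_history: list[dict]) -> list[dict]:
    # Ask the model; if it requests tools, execute them and feed the
    # results back, for at most MAX_ITER rounds.
    for _ in range(MAX_ITER):
        response = await openai_client.chat.completions.create(
            model="gpt-4o-mini",
            messages=message_history,
            tools=tools,
            tool_choice="auto",
        )
        message = response.choices[0].message
        message_history.append(message.model_dump(exclude_none=True))
        if not message.tool_calls:
            break  # the model answered directly; we are done
        for tool_call in message.tool_calls:
            arguments = json.loads(tool_call.function.arguments)
            result = get_current_weather(**arguments)
            message_history.append(
                {"role": "tool", "tool_call_id": tool_call.id, "content": result}
            )
    return message_history


@app.post("/process/")
async def process(request: ProcessRequest):
    # Wrap the run in a LiteralAI thread so successive requests that share
    # a thread_id are grouped under one conversational thread.
    with literal_client.thread(thread_id=request.thread_id, name="weather-agent"):
        message_history = await run_agent(request.message_history)
    return {"message_history": message_history}
```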

Set your API keys in a `.env` file:

```shell .env
LITERAL_API_KEY=
OPENAI_API_KEY=
```

Install the dependencies, then start the server:

```bash
pip install fastapi uvicorn openai literalai
uvicorn thread-fastapi:app --reload
```

Then, you can send requests from a client:

```python client.py
import uuid

import requests

url = "http://127.0.0.1:8000/process/"

# Reuse one thread_id across requests so both queries are grouped
# under the same thread in LiteralAI.
thread_id = str(uuid.uuid4())

# First query
data1 = {
    "message_history": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "what's the weather in sf"},
    ],
    "thread_id": thread_id,
}

response1 = requests.post(url, json=data1)
response1.raise_for_status()

# Second query: continue the conversation with the message history
# returned by the server.
data2 = {
    "message_history": response1.json()["message_history"]
    + [{"role": "user", "content": "what's the weather in paris"}],
    "thread_id": thread_id,
}

response2 = requests.post(url, json=data2)
if response2.status_code == 200:
    print(response2.json())
else:
    print(f"Error: {response2.status_code}, {response2.text}")
```