This example integrates the asynchronous OpenAI client with the Literal AI client to build a conversational agent. It uses Literal AI's step decorators for structured logging and tool orchestration within a conversational flow. The agent processes user messages, decides which tools to call, and generates responses from a predefined set of tools, with a maximum iteration limit to prevent infinite loops.

This example demonstrates thread-based monitoring, allowing for detailed tracking and analysis of conversational threads.

Create a `.env` file with your API keys:

```shell
LITERAL_API_KEY=
OPENAI_API_KEY=
```

With the Literal AI integration in place, you can visualize threads, runs, and LLM calls directly on the Literal AI platform, improving the transparency and debuggability of your AI-driven applications.