You can draft and iterate on prompts in the Literal Prompt Playground. Once you are happy with a prompt, you can save it and use it in your code.

Moving prompts out of your code has several benefits:

  • Non-technical people can draft and iterate on prompts.
  • You can deploy a new version of your prompt without deploying your code.
  • You can track which prompt version was used for a specific generation.
  • You can compare prompt versions to see which one performs better.

Get a Prompt

Getting a prompt is the first step to using it in your code. You can fetch a prompt by its name.

import os
from literalai import LiteralClient

client = LiteralClient(api_key=os.getenv("LITERAL_API_KEY"))
# Asynchronous version; this fetches the champion version
# (you can also pass a specific version)
prompt = await client.api.get_prompt(name="Default")

# Synchronous version
prompt = client.api.get_prompt_sync(name="Default")
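
Both calls above fetch the champion version by default. To pin a specific version instead, you can pass it explicitly. A minimal sketch, assuming the keyword argument on get_prompt is named version and that versions are numbered from 0:

# Fetch a pinned version instead of the champion version
# (assumes the keyword argument is named `version`)
prompt = client.api.get_prompt_sync(name="Default", version=0)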

Format a Prompt

Once you have your prompt, you can format it to get messages in the OpenAI format.

Combining prompts with integrations (like the OpenAI integration) lets you log generations and track which prompt version was used to produce them.

import os
import asyncio
from literalai import LiteralClient
from openai import AsyncOpenAI

openai_client = AsyncOpenAI(api_key=os.getenv("OPENAI_API_KEY"))

literal_client = LiteralClient(api_key=os.getenv("LITERAL_API_KEY"))
# Optionally instrument the openai client to log generations
literal_client.instrument_openai()

async def main():
    prompt = await literal_client.api.get_prompt(name="Default")

    # Optionally pass variables to the prompt
    variables = {"foo": "bar"}

    messages = prompt.format(variables)
    stream = await openai_client.chat.completions.create(
        messages=messages,
        # Optionally pass the tools defined in the prompt
        tools=prompt.tools,
        # Pass the settings defined in the prompt
        **prompt.settings,
        stream=True
    )

    async for chunk in stream:
        print(chunk)

asyncio.run(main())
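
For reference, prompt.format(variables) returns plain Python dictionaries in the OpenAI chat format, so you can inspect or adjust them before sending. An illustrative sketch (the roles and content below are made up; the actual output depends on your prompt template):

messages = prompt.format({"foo": "bar"})
# Illustrative shape of the result:
# [
#     {"role": "system", "content": "You are a helpful assistant. foo is bar."},
#     {"role": "user", "content": "..."}
# ]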

Langchain Chat Template

Since Langchain has a different format for prompts, you can convert a Literal prompt to a Langchain Chat Template.

You can combine the prompt with the Langchain integration to log generations and track which prompt version was used to produce them.

import os
import asyncio
from literalai import LiteralClient

from langchain_openai import ChatOpenAI
from langchain.schema.runnable.config import RunnableConfig
from langchain.schema import StrOutputParser

literal_client = LiteralClient(api_key=os.getenv("LITERAL_API_KEY"))

async def main():
    prompt = await literal_client.api.get_prompt(name="Default")
    lc_prompt = prompt.to_langchain_chat_prompt_template()

    model = ChatOpenAI(streaming=True)
    runnable = lc_prompt | model | StrOutputParser()
    
    cb = literal_client.langchain_callback()
    variables = {"foo": "bar"}

    res = runnable.invoke(
        variables,
        config=RunnableConfig(callbacks=[cb], run_name="Test run"),
    )

asyncio.run(main())
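
Since to_langchain_chat_prompt_template() returns a standard Langchain ChatPromptTemplate, you can also format it directly without building a chain. A minimal sketch using Langchain's own format_messages, reusing the synchronous fetch from the first example:

prompt = literal_client.api.get_prompt_sync(name="Default")
lc_prompt = prompt.to_langchain_chat_prompt_template()

# format_messages is the standard Langchain ChatPromptTemplate method
lc_messages = lc_prompt.format_messages(foo="bar")
for message in lc_messages:
    print(message.type, message.content)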