The following objects can be passed to a step:

ChatGeneration

@dataclass
class ChatGeneration(BaseGeneration):
    messages: List[GenerationMessage] = Field(default_factory=list)
    type = GenerationType.CHAT
messages (List[Dict]): The list of messages sent to the LLM, following the OpenAI chat format.
provider (str): The provider of the LLM, like openai.
model (str): The model used for the generation, like gpt-4.
error (str): The error message if the generation failed.
settings (Dict): The settings of the LLM, like temperature.
variables (Dict): The variables used to format the prompt.
tags (List[str]): Optional tags to add to the generation.
prompt (str): The prompt used for the generation.
tools (Dict): The tools used to generate the completion, following the OpenAI format.
message_completion (Dict): The message returned by the LLM, following the OpenAI chat format.
token_count (int): The token count of the completion.
input_token_count (int): The token count of the input.
output_token_count (int): The token count of the output.
tt_first_token (float): The time it took to generate the first token. Only available when streaming.
token_throughput_in_s (float): The token throughput, in tokens per second. Only available when streaming.
duration (float): The duration of the generation, in milliseconds.
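
For illustration, here is a minimal sketch of building a ChatGeneration after an LLM call and attaching it to the enclosing step. The import path, the @cl.step decorator, and the step's generation attribute are assumptions about the host SDK (e.g. Chainlit) and may differ in your version; the keyword arguments simply mirror the fields documented above, with placeholder token counts.

```python
import time

import chainlit as cl
# Assumed import path for ChatGeneration; in some versions it may live in a
# separate client package instead. Adjust to match your installation.
from chainlit import ChatGeneration


@cl.step(type="llm")
async def call_llm(question: str) -> str:
    messages = [{"role": "user", "content": question}]
    start = time.time()

    # Call your LLM client here; a canned answer stands in for the real response.
    answer = "Paris is the capital of France."

    # Fill in the documented ChatGeneration fields.
    # Assumption: the current step exposes a `generation` attribute for
    # attaching the object; the exact mechanism depends on the SDK.
    cl.context.current_step.generation = ChatGeneration(
        provider="openai",
        model="gpt-4",
        settings={"temperature": 0.2},
        messages=messages,
        message_completion={"role": "assistant", "content": answer},
        input_token_count=12,  # placeholder counts for illustration
        output_token_count=8,
        token_count=20,
        duration=(time.time() - start) * 1000,  # documented in milliseconds
        tags=["example"],
    )
    return answer
```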

CompletionGeneration

provider (str): The provider of the LLM, like openai.
model (str): The model used for the generation, like gpt-4.
error (str): The error message if the generation failed.
settings (Dict): The settings of the LLM, like temperature.
variables (Dict): The variables used to format the prompt.
tags (List[str]): Optional tags to add to the generation.
prompt (str): The prompt used for the generation.
completion (str): The completion returned by the LLM.
token_count (int): The token count of the completion.
input_token_count (int): The token count of the input.
output_token_count (int): The token count of the output.
tt_first_token (float): The time it took to generate the first token. Only available when streaming.
token_throughput_in_s (float): The token throughput, in tokens per second. Only available when streaming.
duration (float): The duration of the generation, in milliseconds.
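
CompletionGeneration records a single prompt/completion pair rather than a message list. Below is a minimal, hedged sketch under the same import assumption as above; the field values are illustrative placeholders.

```python
# Assumed import path, as above; adjust to your SDK.
from chainlit import CompletionGeneration

# Record a plain text-completion call using the documented fields.
generation = CompletionGeneration(
    provider="openai",
    model="gpt-3.5-turbo-instruct",
    prompt="Q: What is the capital of {country}?\nA:",
    variables={"country": "France"},  # variables used to format the prompt
    completion=" Paris",
    settings={"temperature": 0.0, "max_tokens": 5},
    input_token_count=11,  # placeholder counts for illustration
    output_token_count=1,
    token_count=12,
    duration=120.0,  # documented in milliseconds
)

# The object can then be attached to the current step in the same way as a
# ChatGeneration (the exact mechanism depends on the SDK).
```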