Generation
The following objects can be passed to a step.
ChatGeneration
The list of messages sent to the LLM, following the openai chat format.
The provider of the LLM, like openai.
The model used for the generation, like gpt-4.
The error message if the generation failed.
The settings of the LLM, like temperature…
The variables used to format the prompt.
Optional tags to add to the generation.
The prompt used for the generation.
The tools used to generate the completion, following the openai format.
The message returned by the LLM, following the openai chat format.
The token count of the completion.
The token count of the input.
The token count of the output.
The time it took to generate the first token. Only available when streaming.
The token throughput in tokens per second. Only available when streaming.
The duration of the generation in ms.
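The fields above can be pictured as a simple data structure. The sketch below is a hypothetical Python dataclass, not the SDK's actual API: the class and field names (`ChatGeneration`, `message_completion`, `tt_first_token`, and so on) are illustrative assumptions chosen to mirror the descriptions in this section.

```python
from dataclasses import dataclass, field
from typing import Any, Optional

# Hypothetical sketch of a ChatGeneration; field names are
# illustrative assumptions, not the SDK's actual attribute names.
@dataclass
class ChatGeneration:
    provider: str                                   # e.g. "openai"
    model: str                                      # e.g. "gpt-4"
    messages: list[dict[str, Any]]                  # openai chat format
    settings: dict[str, Any] = field(default_factory=dict)   # e.g. {"temperature": 0.7}
    variables: dict[str, str] = field(default_factory=dict)  # prompt formatting variables
    tags: list[str] = field(default_factory=list)            # optional tags
    prompt: Optional[str] = None                             # prompt used for the generation
    tools: Optional[list[dict[str, Any]]] = None             # openai tool format
    message_completion: Optional[dict[str, Any]] = None      # message returned by the LLM
    error: Optional[str] = None                              # set if the generation failed
    token_count: Optional[int] = None
    input_token_count: Optional[int] = None
    output_token_count: Optional[int] = None
    tt_first_token: Optional[float] = None         # only available when streaming
    token_throughput_in_s: Optional[float] = None  # tokens/s, only available when streaming
    duration: Optional[float] = None               # duration of the generation in ms

generation = ChatGeneration(
    provider="openai",
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}],
    settings={"temperature": 0.7},
)
```

Note that the completion- and timing-related fields are optional: they are only known after the LLM has responded, and the streaming metrics only when streaming is enabled.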
CompletionGeneration
The provider of the LLM, like openai.
The model used for the generation, like gpt-4.
The error message if the generation failed.
The settings of the LLM, like temperature…
The variables used to format the prompt.
Optional tags to add to the generation.
The prompt used for the generation.
The completion returned by the LLM.
The token count of the completion.
The token count of the input.
The token count of the output.
The time it took to generate the first token. Only available when streaming.
The token throughput in tokens per second. Only available when streaming.
The duration of the generation in ms.
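A CompletionGeneration carries a single prompt string and a single completion string instead of a message list. As with the previous sketch, this is a hypothetical dataclass whose names are illustrative assumptions, not the SDK's actual API.

```python
from dataclasses import dataclass, field
from typing import Any, Optional

# Hypothetical sketch of a CompletionGeneration; field names are
# illustrative assumptions, not the SDK's actual attribute names.
@dataclass
class CompletionGeneration:
    provider: str                                   # e.g. "openai"
    model: str                                      # e.g. "gpt-4"
    prompt: str                                     # prompt used for the generation
    completion: Optional[str] = None                # text returned by the LLM
    settings: dict[str, Any] = field(default_factory=dict)
    variables: dict[str, str] = field(default_factory=dict)
    tags: list[str] = field(default_factory=list)
    error: Optional[str] = None                     # set if the generation failed
    token_count: Optional[int] = None
    input_token_count: Optional[int] = None
    output_token_count: Optional[int] = None
    tt_first_token: Optional[float] = None          # only available when streaming
    token_throughput_in_s: Optional[float] = None   # tokens/s, only available when streaming
    duration: Optional[float] = None                # duration of the generation in ms

gen = CompletionGeneration(
    provider="openai",
    model="gpt-4",
    prompt="Say hello",
    completion="Hello!",
    token_count=12,
)
```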