prompt() method. In this method, define the following parameters:
- Unique identifier for the node.
- Display name for the node.
- Initial instructions that guide the LLM's behavior. It also supports expressions. For example: flow.input.variable_1.
- The specific request or task you want the LLM to perform. It also supports expressions. For example: flow.input.variable_1.
- Examples of user prompts. You can configure:
  - input: The example input prompt.
  - expected_output: The expected output for the given input.
  - enabled: A boolean value to enable or disable the example.
- LLM used for content generation.
- Parameters for the LLM. You can configure:
  - temperature: Controls randomness. Higher values produce more diverse outputs.
  - min_new_tokens: Sets the minimum number of tokens to generate.
  - max_new_tokens: Sets the maximum number of tokens to generate.
  - top_k: Limits token selection to the k most likely tokens.
  - top_p: Uses nucleus sampling to select from the smallest set of tokens whose cumulative probability reaches p.
  - stop_sequences: Defines sequences that stop generation when encountered.
- Description of the node.
- Input schema for the LLM.
- Output schema for the LLM.
- Input mappings, defined as a structured collection of Assignment objects.
- Configuration for the retry option, defined as a JSON structure. In this JSON, set the following fields:
  - error_message: An optional string that describes the retry error.
  - max_retries: An optional integer that limits how many times the node retries.
  - retry_interval: An optional integer that sets the interval between retries in milliseconds.
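As a sketch, the retry configuration described above might look like the following structure. The field values here are illustrative examples, not defaults:

```python
# Illustrative retry configuration; values are examples, not defaults.
retry_config = {
    "error_message": "LLM call failed; retrying.",  # optional string describing the retry error
    "max_retries": 3,          # optional integer: maximum number of retry attempts
    "retry_interval": 2000,    # optional integer: milliseconds between retries
}
```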
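To make the parameter list concrete, here is a minimal sketch of a prompt-node configuration assembled as a plain Python dictionary. The key names (name, display_name, system_prompt, user_prompt, prompt_examples, llm, llm_parameters, description) are illustrative assumptions that mirror the descriptions above, not a confirmed method signature:

```python
# Illustrative sketch only: the key names below are assumptions that mirror
# the parameter descriptions above, not a confirmed SDK signature.
prompt_node = {
    "name": "summarize_ticket",            # unique identifier for the node
    "display_name": "Summarize ticket",    # display name for the node
    "system_prompt": "You are a concise support assistant.",
    # The user prompt can reference expressions such as flow.input.variable_1.
    "user_prompt": "Summarize: ${flow.input.variable_1}",
    "prompt_examples": [
        {
            "input": "Customer cannot reset their password.",
            "expected_output": "Password-reset issue; needs account recovery.",
            "enabled": True,               # boolean: enable or disable the example
        }
    ],
    "llm": "example/llm-model",            # LLM used for content generation
    "llm_parameters": {
        "temperature": 0.7,       # higher values produce more diverse outputs
        "min_new_tokens": 1,      # minimum number of tokens to generate
        "max_new_tokens": 256,    # maximum number of tokens to generate
        "top_k": 50,              # sample only from the 50 most likely tokens
        "top_p": 0.9,             # nucleus-sampling cumulative-probability cutoff
        "stop_sequences": ["\n\n"],  # sequences that stop generation
    },
    "description": "Summarizes an incoming support ticket.",
}
```

The same structure maps directly onto the keyword arguments you would pass to the node-builder method; consult the SDK reference for the exact parameter names.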