API Reference

Welcome to the OpenTrace API Reference. This documentation is automatically generated from the source code and provides detailed information about all classes, functions, and modules.

Main Modules

Core Components

opto.trace

GRAPH module-attribute

GRAPH = Graph()

ExecutionError

ExecutionError(exception_node: ExceptionNode)

Bases: Exception

Base class for execution errors raised during code tracing.

exception_node instance-attribute

exception_node = exception_node

Module

Bases: ParameterContainer

Module is a ParameterContainer that has a forward method.

forward

forward(*args, **kwargs)

load

load(file_name)

Load the parameters of the model from a pickle file.

save

save(file_name: str)

Save the parameters of the model to a pickle file.

Node

Node(
    value: Any,
    *,
    name: str = None,
    trainable: bool = False,
    description: str = None,
    info: Union[None, Dict] = None
)

Bases: AbstractNode[T]

A data node in a directed graph; this is the basic data structure of Trace.

Args:
    value (Any): The value to be assigned to the node.
    name (str, optional): The name of the node.
    trainable (bool, optional): Whether the node is trainable or not. Defaults to False.
    description (str, optional): String describing the node, which acts as a soft constraint. Defaults to None.
    info (Union[None, Dict], optional): Dictionary containing additional information about the node. Defaults to None.

Attributes:
    trainable (bool): Whether the node is trainable or not.
    _feedback (dict): Dictionary of feedback from children nodes.
    _description (str): String describing the node. Defaults to "[Node]".
    _backwarded (bool): Whether the backward method has been called.
    _info (dict): Dictionary containing additional information about the node.
    _dependencies (dict): Dictionary of dependencies on parameters and expandable nodes.

Notes: The Node class extends AbstractNode to represent a data node in a directed graph. It includes attributes and methods to handle feedback, description, and dependencies. The node can be marked as trainable and store feedback from children nodes. The feedback mechanism is analogous to gradients in machine learning and propagates information back through the graph. The feedback mechanism supports non-commutative aggregation, so feedback should be handled carefully to maintain correct operation order. The node can track dependencies on parameters and expandable nodes (nodes that depend on parameters not visible in the current graph level).

description property

description

A textual description of the node.

expandable_dependencies property

expandable_dependencies

The expandable nodes that this node depends on.

Notes: Expandable nodes are those that depend on parameters not visible at the current graph level. Ensure that the '_dependencies' attribute is properly initialized and contains an 'expandable' key before accessing expandable_dependencies, to avoid a potential KeyError.

feedback property

feedback

The feedback from children nodes.

info property

info

Additional information about the node.

op_name property

op_name

The operator type of the node, extracted from the description.

parameter_dependencies property

parameter_dependencies

The parameters that this node depends on.

Notes: Ensure that the '_dependencies' attribute is properly initialized and contains a 'parameter' key before accessing parameter_dependencies, to avoid a potential KeyError.

trainable instance-attribute

trainable = trainable

type property

type

The type of the data stored in the node.

append

append(*args, **kwargs)

backward

backward(
    feedback: Any = "",
    propagator=None,
    retain_graph=False,
    visualize=False,
    simple_visualization=True,
    reverse_plot=False,
    print_limit=100,
)

Performs a backward pass in a computational graph.

This function propagates feedback from the current node to its parents, updates the graph visualization if required, and returns the resulting graph.

Args:
    feedback: The feedback given to the current node.
    propagator: A function that takes in a node and a feedback, and returns a dict of {parent: parent_feedback}. If not provided, a default GraphPropagator object is used.
    retain_graph: If True, the graph will be retained after the backward pass.
    visualize: If True, the graph will be visualized using graphviz.
    simple_visualization: If True, identity operators will be skipped in the visualization.
    reverse_plot: If True, plot the graph in reverse order (from child to parent).
    print_limit: The maximum number of characters to print for node descriptions and content.

Returns:
    digraph: The visualization graph object if visualize=True, None otherwise.

Raises:
    AttributeError: If the node has already been backwarded.

Notes: The function checks if the current node has already been backwarded. If it has, an AttributeError is raised. For root nodes (no parents), only visualization is performed if enabled. For non-root nodes, feedback is propagated through the graph using a priority queue to ensure correct ordering. The propagator computes feedback for parent nodes based on the current node's description, data and feedback. Visualization is handled using graphviz if enabled, with options to simplify the graph by skipping identity operators.

call

call(fun: str, *args, **kwargs)

Call the function with the specified arguments and keyword arguments.

Args:
    fun: The name of the function to call.
    *args: The positional arguments to pass to the function.
    **kwargs: The keyword arguments to pass to the function.

Returns: Node: The result of the function call wrapped in a node.

capitalize

capitalize()

clone

clone()

Create and return a duplicate of the current Node object.

Returns: Node: A clone of the current node.

detach

detach()

Create and return a deep copy of the current instance of the Node class.

Returns: Node: A deep copy of the current node.

eq

eq(other)

Check if the node is equal to another value.

Args: other: The value to compare the node to.

Returns: Node: A node containing the comparison result.

Notes: If a logic operator is used in an if-statement, it will return a boolean value. Otherwise, it will return a MessageNode.

format

format(*args, **kwargs)

getattr

getattr(key)

Get the attribute of the node with the specified key.

Args: key: The key of the attribute to get.

Returns: Node: A node containing the requested attribute.

items

items()

join

join(seq)

keys

keys()

len

len()

Return the length of the node.

Returns: Node: A node containing the length value.

Notes: We overload magic methods that return a value. This method returns a MessageNode.

lower

lower()

neq

neq(other)

Check if the node is not equal to another value.

Args: other: The value to compare the node to.

Returns: Node: A node containing the comparison result.

Notes: If a logic operator is used in an if-statement, it will return a boolean value. Otherwise, it will return a MessageNode.

pop

pop(__index=-1)

replace

replace(old, new, count=-1)

split

split(sep=None, maxsplit=-1)

strip

strip(chars=None)

swapcase

swapcase()

title

title()

upper

upper()

values

values()

zero_feedback

zero_feedback()

Zero out the feedback of the node.

Notes: zero_feedback should be used judiciously within the feedback propagation process to avoid unintended loss of feedback data. It is specifically designed to be used after feedback has been successfully propagated to parent nodes.

NodeContainer

An identifier for a container of nodes.

stop_tracing

A contextmanager to disable tracing.

apply_op

apply_op(op, output, *args, **kwargs)

A broadcasting operation that applies an op to container of Nodes.

Args:
    op (callable): The operator to be applied.
    output (Any): The container to be updated.
    *args (Any): The positional inputs of the operator.
    **kwargs (Any): The keyword inputs of the operator.

model

model(cls)

Wrap a class with this decorator to collect its parameters for the optimizer. Note that a decorated class cannot be pickled.

node

node(data, name=None, trainable=False, description=None)

Create a Node object from data.

Args:
    data: The data to create the Node from.
    name (str, optional): The name of the Node.
    trainable (bool, optional): Whether the Node is trainable. Defaults to False.
    description (str, optional): A string describing the data.

Returns: Node: A Node object containing the data.

Notes:
    If trainable=True:
        - If data is already a Node, extracts the underlying data and updates the name.
        - Creates a ParameterNode with the extracted data and name, with trainable=True.

    If trainable=False:
        - If data is already a Node, returns it (with a warning if a name is provided).
        - Otherwise creates a new Node with the data and name.

Optimizers

opto.optimizers

OptoPrime module-attribute

OptoPrime = OptoPrime

OPRO

OPRO(*args, **kwargs)

Bases: OptoPrime

buffer instance-attribute

buffer = []

default_objective class-attribute instance-attribute

default_objective = (
    "Come up with a new variable in accordance to feedback."
)

output_format_prompt class-attribute instance-attribute

output_format_prompt = dedent(
    '\n        Output_format: Your output should be in the following json format, satisfying\n        the json syntax:\n\n        {{\n        "suggestion": {{\n            <variable_1>: <suggested_value_1>,\n            <variable_2>: <suggested_value_2>,\n        }}\n        }}\n\n        When suggestion variables, write down the suggested values in "suggestion".\n        When <type> of a variable is (code), you should write the new definition in the\n        format of python code without syntax errors, and you should not change the\n        function name or the function signature.\n\n        If no changes or answer are needed, just output TERMINATE.\n        '
)

user_prompt_template class-attribute instance-attribute

user_prompt_template = dedent(
    "\n        Below are some example variables and their feedbacks.\n\n        {examples}\n\n        ================================\n\n        {instruction}\n        "
)

construct_prompt

construct_prompt(summary, mask=None, *args, **kwargs)

Construct the system and user prompt.

OptoPrimeMulti

OptoPrimeMulti(
    *args,
    num_responses: int = 3,
    temperature_min_max: Optional[List[float]] = None,
    selector: Optional[callable] = None,
    generation_technique: str = "temperature_variation",
    selection_technique: str = "best_of_n",
    experts_list: Optional[List[str]] = None,
    llm_profiles: Optional[List[str]] = None,
    llm_weights: Optional[List[float]] = None,
    **kwargs
)

Bases: OptoPrime

candidates instance-attribute

candidates = []

experts_list instance-attribute

experts_list = experts_list

generation_technique instance-attribute

generation_technique = generation_technique

llm_profiles instance-attribute

llm_profiles = llm_profiles

llm_weights instance-attribute

llm_weights = (
    llm_weights or [1.0] * len(llm_profiles)
    if llm_profiles
    else None
)

num_responses instance-attribute

num_responses = num_responses

selected_candidate instance-attribute

selected_candidate = None

selection_technique instance-attribute

selection_technique = selection_technique

selector instance-attribute

selector = selector

temperature_min_max instance-attribute

temperature_min_max = (
    temperature_min_max
    if temperature_min_max is not None
    else [0.0, 1.0]
)

call_llm

call_llm(
    system_prompt: str,
    user_prompt: str,
    verbose: Union[bool, str] = False,
    max_tokens: int = 4096,
    num_responses: int = 1,
    temperature: float = 0.0,
    llm=None,
) -> List[str]

Given a prompt, returns multiple candidate answers.

generate_candidates

generate_candidates(
    summary,
    system_prompt: str,
    user_prompt: str,
    verbose: Union[bool, str] = False,
    mask=None,
    max_tokens: int = None,
    num_responses: int = 3,
    generation_technique: str = "temperature_variation",
    temperature_min_max: Optional[List[float]] = None,
    experts_list: Optional[List[str]] = None,
) -> List[str]

Generate multiple candidates using various techniques.

Args:
    summary: The summarized problem instance.
    system_prompt (str): The system-level prompt.
    user_prompt (str): The user-level prompt.
    verbose (bool): Whether to print debug information.
    mask: Mask for the problem instance.
    max_tokens (int, optional): Maximum token limit for the LLM responses.
    num_responses (int): Number of responses to request.
    generation_technique (str): Technique to use for generation:
        - "temperature_variation": Use varying temperatures.
        - "self_refinement": Each solution refines the previous one.
        - "iterative_alternatives": Generate diverse alternatives.
        - "multi_experts": Use different expert personas.
    temperature_min_max (List[float], optional): [min, max] temperature range.
    experts_list (List[str], optional): List of expert personas to use for the multi_experts technique.

Returns:
    List[str]: List of LLM responses as strings.

select_candidate

select_candidate(
    candidates: List,
    selection_technique="moa",
    problem_summary="",
) -> Dict

Select the best response based on the candidates using various techniques.

Args:
    candidates (List): List of candidate responses from generate_candidates.
    selection_technique (str): Technique to select the best response:
        - "moa" or "mixture_of_agents": Use an LLM to mix the best elements of each response.
        - "majority": Use an LLM to choose the most frequent candidate.
        - "lastofn" or "last_of_n": Simply return the last candidate (also used when the selection technique is unknown).

Returns: Dict: The selected candidate or an empty dictionary if no candidates exist.

OptoPrimeV1

OptoPrimeV1(
    parameters: List[ParameterNode],
    llm: AbstractModel = None,
    *args,
    propagator: Propagator = None,
    objective: Union[None, str] = None,
    ignore_extraction_error: bool = True,
    include_example=False,
    memory_size=0,
    max_tokens=4096,
    log=True,
    prompt_symbols=None,
    json_keys=None,
    use_json_object_format=True,
    highlight_variables=False,
    **kwargs
)

Bases: Optimizer

default_json_keys class-attribute instance-attribute

default_json_keys = {
    "reasoning": "reasoning",
    "answer": "answer",
    "suggestion": "suggestion",
}

default_objective class-attribute instance-attribute

default_objective = "You need to change the <value> of the variables in #Variables to improve the output in accordance to #Feedback."

default_prompt_symbols class-attribute instance-attribute

default_prompt_symbols = {
    "variables": "#Variables",
    "constraints": "#Constraints",
    "inputs": "#Inputs",
    "outputs": "#Outputs",
    "others": "#Others",
    "feedback": "#Feedback",
    "instruction": "#Instruction",
    "code": "#Code",
    "documentation": "#Documentation",
}

example_problem instance-attribute

example_problem = format(
    instruction=default_objective,
    code="y = add(x=a,y=b)\nz = subtract(x=y, y=c)",
    documentation="add: add x and y \nsubtract: subtract y from x",
    variables="(int) a = 5",
    constraints="a: a > 0",
    outputs="(int) z = 1",
    others="(int) y = 6",
    inputs="(int) b = 1\n(int) c = 5",
    feedback="The result of the code is not as expected. The result should be 10, but the code returns 1",
    stepsize=1,
)

example_problem_template class-attribute instance-attribute

example_problem_template = dedent(
    "\n        Here is an example of problem instance and response:\n\n        ================================\n        {example_problem}\n        ================================\n\n        Your response:\n        {example_response}\n        "
)

example_prompt class-attribute instance-attribute

example_prompt = dedent(
    "\n\n        Here are some feasible but not optimal solutions for the current problem instance. Consider this as a hint to help you understand the problem better.\n\n        ================================\n\n        {examples}\n\n        ================================\n        "
)

example_response instance-attribute

example_response = dedent(
    '\n            {"reasoning": \'In this case, the desired response would be to change the value of input a to 14, as that would make the code return 10.\',\n             "answer", {},\n             "suggestion": {"a": 10}\n            }\n            '
)

final_prompt class-attribute instance-attribute

final_prompt = dedent('\n        Your response:\n        ')

final_prompt_with_variables class-attribute instance-attribute

final_prompt_with_variables = dedent(
    "\n        What are your suggestions on variables {names}?\n\n        Your response:\n        "
)

highlight_variables instance-attribute

highlight_variables = highlight_variables

ignore_extraction_error instance-attribute

ignore_extraction_error = ignore_extraction_error

include_example instance-attribute

include_example = include_example

llm instance-attribute

llm = llm or LLM()

log instance-attribute

log = [] if log else None

max_tokens instance-attribute

max_tokens = max_tokens

memory instance-attribute

memory = FIFOBuffer(memory_size)

objective instance-attribute

objective = objective or default_objective

output_format_prompt instance-attribute

output_format_prompt = format(**(default_json_keys))

output_format_prompt_no_answer class-attribute instance-attribute

output_format_prompt_no_answer = dedent(
    '\n        Output_format: Your output should be in the following json format, satisfying the json syntax:\n\n        {{\n        "{reasoning}": <Your reasoning>,\n        "{suggestion}": {{\n            <variable_1>: <suggested_value_1>,\n            <variable_2>: <suggested_value_2>,\n        }}\n        }}\n\n        In "{reasoning}", explain the problem: 1. what the #Instruction means 2. what the #Feedback on #Output means to #Variables considering how #Variables are used in #Code and other values in #Documentation, #Inputs, #Others. 3. Reasoning about the suggested changes in #Variables (if needed) and the expected result.\n\n        If you need to suggest a change in the values of #Variables, write down the suggested values in "{suggestion}". Remember you can change only the values in #Variables, not others. When <type> of a variable is (code), you should write the new definition in the format of python code without syntax errors, and you should not change the function name or the function signature.\n\n        If no changes are needed, just output TERMINATE.\n        '
)

output_format_prompt_original class-attribute instance-attribute

output_format_prompt_original = dedent(
    '\n        Output_format: Your output should be in the following json format, satisfying the json syntax:\n\n        {{\n        "{reasoning}": <Your reasoning>,\n        "{answer}": <Your answer>,\n        "{suggestion}": {{\n            <variable_1>: <suggested_value_1>,\n            <variable_2>: <suggested_value_2>,\n        }}\n        }}\n\n        In "{reasoning}", explain the problem: 1. what the #Instruction means 2. what the #Feedback on #Output means to #Variables considering how #Variables are used in #Code and other values in #Documentation, #Inputs, #Others. 3. Reasoning about the suggested changes in #Variables (if needed) and the expected result.\n\n        If #Instruction asks for an answer, write it down in "{answer}".\n\n        If you need to suggest a change in the values of #Variables, write down the suggested values in "{suggestion}". Remember you can change only the values in #Variables, not others. When <type> of a variable is (code), you should write the new definition in the format of python code without syntax errors, and you should not change the function name or the function signature.\n\n        If no changes or answer are needed, just output TERMINATE.\n        '
)

prompt_symbols instance-attribute

prompt_symbols = deepcopy(default_prompt_symbols)

representation_prompt class-attribute instance-attribute

representation_prompt = dedent(
    "\n        You're tasked to solve a coding/algorithm problem. You will see the instruction, the code, the documentation of each function used in the code, and the feedback about the execution result.\n\n        Specifically, a problem will be composed of the following parts:\n        - #Instruction: the instruction which describes the things you need to do or the question you should answer.\n        - #Code: the code defined in the problem.\n        - #Documentation: the documentation of each function used in #Code. The explanation might be incomplete and just contain high-level description. You can use the values in #Others to help infer how those functions work.\n        - #Variables: the input variables that you can change.\n        - #Constraints: the constraints or descriptions of the variables in #Variables.\n        - #Inputs: the values of other inputs to the code, which are not changeable.\n        - #Others: the intermediate values created through the code execution.\n        - #Outputs: the result of the code output.\n        - #Feedback: the feedback about the code's execution result.\n\n        In #Variables, #Inputs, #Outputs, and #Others, the format is:\n\n        <data_type> <variable_name> = <value>\n\n        If <type> is (code), it means <value> is the source code of a python code, which may include docstring and definitions.\n        "
)

summary_log instance-attribute

summary_log = [] if log else None

use_json_object_format instance-attribute

use_json_object_format = use_json_object_format

user_prompt_template class-attribute instance-attribute

user_prompt_template = dedent(
    "\n        Now you see problem instance:\n\n        ================================\n        {problem_instance}\n        ================================\n\n        "
)

call_llm

call_llm(
    system_prompt: str,
    user_prompt: str,
    verbose: Union[bool, str] = False,
    max_tokens: int = 4096,
)

Call the LLM with a prompt and return the response.

construct_prompt

construct_prompt(summary, mask=None, *args, **kwargs)

Construct the system and user prompt.

construct_update_dict

construct_update_dict(
    suggestion: Dict[str, Any],
) -> Dict[ParameterNode, Any]

Convert the suggestion in text into the right data type.

default_propagator

default_propagator()

Return the default Propagator object of the optimizer.

extract_llm_suggestion

extract_llm_suggestion(response: str)

Extract the suggestion from the response.

load

load(path: str)

Load the optimizer state from a file.

problem_instance

problem_instance(summary, mask=None)

replace_symbols

replace_symbols(text: str, symbols: Dict[str, str]) -> str

repr_node_constraint staticmethod

repr_node_constraint(node_dict)

repr_node_value staticmethod

repr_node_value(node_dict)

save

save(path: str)

Save the optimizer state to a file.

summarize

summarize()

OptoPrimeV2

OptoPrimeV2(
    parameters: List[ParameterNode],
    llm: AbstractModel = None,
    *args,
    propagator: Propagator = None,
    objective: Union[None, str] = None,
    ignore_extraction_error: bool = True,
    include_example=False,
    memory_size=0,
    max_tokens=4096,
    log=True,
    initial_var_char_limit=100,
    optimizer_prompt_symbol_set: OptimizerPromptSymbolSet = OptimizerPromptSymbolSet(),
    use_json_object_format=True,
    truncate_expression=truncate_expression,
    **kwargs
)

Bases: OptoPrime

default_objective class-attribute instance-attribute

default_objective = "You need to change the `{value_tag}` of the variables in {variables_section_title} to improve the output in accordance to {feedback_section_title}."

default_prompt_symbols instance-attribute

default_prompt_symbols = default_prompt_symbols

example_problem instance-attribute

example_problem = problem_instance(example_problem_summary)

example_problem_summary instance-attribute

example_problem_summary = FunctionFeedback(
    graph=[
        (1, "y = add(x=a,y=b)"),
        (2, "z = subtract(x=y, y=c)"),
    ],
    documentation={
        "add": "This is an add operator of x and y.",
        "subtract": "subtract y from x",
    },
    others={"y": (6, None)},
    roots={
        "a": (5, "a > 0"),
        "b": (1, None),
        "c": (5, None),
    },
    output={"z": (1, None)},
    user_feedback="The result of the code is not as expected. The result should be 10, but the code returns 1",
)

example_problem_template class-attribute instance-attribute

example_problem_template = dedent(
    "\n        Here is an example of problem instance and response:\n\n        ================================\n        {example_problem}\n        ================================\n\n        Your response:\n        {example_response}\n        "
)

example_prompt class-attribute instance-attribute

example_prompt = dedent(
    "\n\n        Here are some feasible but not optimal solutions for the current problem instance. Consider this as a hint to help you understand the problem better.\n\n        ================================\n\n        {examples}\n\n        ================================\n        "
)

example_response instance-attribute

example_response = example_output(
    reasoning="In this case, the desired response would be to change the value of input a to 14, as that would make the code return 10.",
    variables={"a": 10},
)

final_prompt class-attribute instance-attribute

final_prompt = dedent(
    "\n        What are your suggestions on variables {names}?\n\n        Your response:\n        "
)

ignore_extraction_error instance-attribute

ignore_extraction_error = ignore_extraction_error

include_example instance-attribute

include_example = include_example

initial_var_char_limit instance-attribute

initial_var_char_limit = initial_var_char_limit

llm instance-attribute

llm = llm or LLM()

log instance-attribute

log = [] if log else None

max_tokens instance-attribute

max_tokens = max_tokens

memory instance-attribute

memory = FIFOBuffer(memory_size)

objective instance-attribute

objective = objective or format(
    value_tag=value_tag,
    variables_section_title=variables_section_title,
    feedback_section_title=feedback_section_title,
)

optimizer_prompt_symbol_set instance-attribute

optimizer_prompt_symbol_set = optimizer_prompt_symbol_set

output_format_prompt_template class-attribute instance-attribute

output_format_prompt_template = dedent(
    "\n        Output_format: Your output should be in the following XML/HTML format:\n\n        ```\n        {output_format}\n        ```\n\n        In <{reasoning_tag}>, explain the problem: 1. what the {instruction_section_title} means 2. what the {feedback_section_title} on {outputs_section_title} means to {variables_section_title} considering how {variables_section_title} are used in {code_section_title} and other values in {documentation_section_title}, {inputs_section_title}, {others_section_title}. 3. Reasoning about the suggested changes in {variables_section_title} (if needed) and the expected result.\n\n        If you need to suggest a change in the values of {variables_section_title}, write down the suggested values in <{improved_variable_tag}>. Remember you can change only the values in {variables_section_title}, not others. When `type` of a variable is `code`, you should write the new definition in the format of python code without syntax errors, and you should not change the function name or the function signature.\n\n        If no changes are needed, just output TERMINATE.\n        "
)

prompt_symbols instance-attribute

prompt_symbols = deepcopy(default_prompt_symbols)

representation_prompt class-attribute instance-attribute

representation_prompt = dedent(
    "\n        You're tasked to solve a coding/algorithm problem. You will see the instruction, the code, the documentation of each function used in the code, and the feedback about the execution result.\n\n        Specifically, a problem will be composed of the following parts:\n        - {instruction_section_title}: the instruction which describes the things you need to do or the question you should answer.\n        - {code_section_title}: the code defined in the problem.\n        - {documentation_section_title}: the documentation of each function used in #Code. The explanation might be incomplete and just contain high-level description. You can use the values in #Others to help infer how those functions work.\n        - {variables_section_title}: the input variables that you can change/tweak (trainable).\n        - {inputs_section_title}: the values of fixed inputs to the code, which CANNOT be changed (fixed).\n        - {others_section_title}: the intermediate values created through the code execution.\n        - {outputs_section_title}: the result of the code output.\n        - {feedback_section_title}: the feedback about the code's execution result.\n\n        In `{variables_section_title}`, `{inputs_section_title}`, `{outputs_section_title}`, and `{others_section_title}`, the format is:\n\n        For variables we express as this:\n        {variable_expression_format}\n\n        If `data_type` is `code`, it means `{value_tag}` is the source code of a python code, which may include docstring and definitions.\n        "
)

summary_log instance-attribute

summary_log = [] if log else None

truncate_expression instance-attribute

truncate_expression = truncate_expression

use_json_object_format instance-attribute

use_json_object_format = (
    use_json_object_format
    if expect_json and use_json_object_format
    else False
)

user_prompt_template class-attribute instance-attribute

user_prompt_template = dedent(
    "\n        Now you see problem instance:\n\n        ================================\n        {problem_instance}\n        ================================\n\n        "
)

call_llm

call_llm(
    system_prompt: str,
    user_prompt: str,
    verbose: Union[bool, str] = False,
    max_tokens: int = 4096,
)

Call the LLM with a prompt and return the response.

construct_prompt

construct_prompt(summary, mask=None, *args, **kwargs)

Construct the system and user prompt.

extract_llm_suggestion

extract_llm_suggestion(response: str)

Extract the suggestion from the response.

initialize_prompt

initialize_prompt()

load

load(path: str)

Load the optimizer state from a file.

problem_instance

problem_instance(summary, mask=None)

repr_node_value staticmethod

repr_node_value(node_dict)

repr_node_value_compact

repr_node_value_compact(
    node_dict,
    node_tag="node",
    value_tag="value",
    constraint_tag="constraint",
)

save

save(path: str)

Save the optimizer state to a file.

TextGrad

TextGrad(
    parameters: List[ParameterNode],
    llm: AbstractModel = None,
    *args,
    propagator: Propagator = None,
    objective: Union[None, str] = None,
    max_tokens=4096,
    log=False,
    **kwargs
)

Bases: Optimizer

llm instance-attribute

llm = llm or LLM()

log instance-attribute

log = [] if log else None

max_tokens instance-attribute

max_tokens = max_tokens

new_variable_tags instance-attribute

new_variable_tags = [
    "<IMPROVED_VARIABLE>",
    "</IMPROVED_VARIABLE>",
]

optimizer_system_prompt instance-attribute

optimizer_system_prompt = format(
    new_variable_start_tag=new_variable_tags[0],
    new_variable_end_tag=new_variable_tags[1],
)

print_limit instance-attribute

print_limit = 100

call_llm

call_llm(
    system_prompt: str,
    user_prompt: str,
    verbose: Union[bool, str] = False,
)

Call the LLM with a prompt and return the response.

load

load(path: str)

Load the optimizer state from a file.

save

save(path: str)

Save the optimizer state to a file.

Training Framework

opto.trainer

Utilities

opto.utils