opto.optimizers

OptoPrime module-attribute

OptoPrime = OptoPrime

OptoPrimeV1

OptoPrimeV1(
    parameters: List[ParameterNode],
    llm: AbstractModel = None,
    *args,
    propagator: Propagator = None,
    objective: Union[None, str] = None,
    ignore_extraction_error: bool = True,
    include_example=False,
    memory_size=0,
    max_tokens=4096,
    log=True,
    prompt_symbols=None,
    json_keys=None,
    use_json_object_format=True,
    highlight_variables=False,
    **kwargs
)

Bases: Optimizer

representation_prompt class-attribute instance-attribute

representation_prompt = dedent(
    "\n        You're tasked to solve a coding/algorithm problem. You will see the instruction, the code, the documentation of each function used in the code, and the feedback about the execution result.\n\n        Specifically, a problem will be composed of the following parts:\n        - #Instruction: the instruction which describes the things you need to do or the question you should answer.\n        - #Code: the code defined in the problem.\n        - #Documentation: the documentation of each function used in #Code. The explanation might be incomplete and just contain high-level description. You can use the values in #Others to help infer how those functions work.\n        - #Variables: the input variables that you can change.\n        - #Constraints: the constraints or descriptions of the variables in #Variables.\n        - #Inputs: the values of other inputs to the code, which are not changeable.\n        - #Others: the intermediate values created through the code execution.\n        - #Outputs: the result of the code output.\n        - #Feedback: the feedback about the code's execution result.\n\n        In #Variables, #Inputs, #Outputs, and #Others, the format is:\n\n        <data_type> <variable_name> = <value>\n\n        If <type> is (code), it means <value> is the source code of a python code, which may include docstring and definitions.\n        "
)

default_objective class-attribute instance-attribute

default_objective = "You need to change the <value> of the variables in #Variables to improve the output in accordance to #Feedback."

output_format_prompt_original class-attribute instance-attribute

output_format_prompt_original = dedent(
    '\n        Output_format: Your output should be in the following json format, satisfying the json syntax:\n\n        {{\n        "{reasoning}": <Your reasoning>,\n        "{answer}": <Your answer>,\n        "{suggestion}": {{\n            <variable_1>: <suggested_value_1>,\n            <variable_2>: <suggested_value_2>,\n        }}\n        }}\n\n        In "{reasoning}", explain the problem: 1. what the #Instruction means 2. what the #Feedback on #Output means to #Variables considering how #Variables are used in #Code and other values in #Documentation, #Inputs, #Others. 3. Reasoning about the suggested changes in #Variables (if needed) and the expected result.\n\n        If #Instruction asks for an answer, write it down in "{answer}".\n\n        If you need to suggest a change in the values of #Variables, write down the suggested values in "{suggestion}". Remember you can change only the values in #Variables, not others. When <type> of a variable is (code), you should write the new definition in the format of python code without syntax errors, and you should not change the function name or the function signature.\n\n        If no changes or answer are needed, just output TERMINATE.\n        '
)

output_format_prompt_no_answer class-attribute instance-attribute

output_format_prompt_no_answer = dedent(
    '\n        Output_format: Your output should be in the following json format, satisfying the json syntax:\n\n        {{\n        "{reasoning}": <Your reasoning>,\n        "{suggestion}": {{\n            <variable_1>: <suggested_value_1>,\n            <variable_2>: <suggested_value_2>,\n        }}\n        }}\n\n        In "{reasoning}", explain the problem: 1. what the #Instruction means 2. what the #Feedback on #Output means to #Variables considering how #Variables are used in #Code and other values in #Documentation, #Inputs, #Others. 3. Reasoning about the suggested changes in #Variables (if needed) and the expected result.\n\n        If you need to suggest a change in the values of #Variables, write down the suggested values in "{suggestion}". Remember you can change only the values in #Variables, not others. When <type> of a variable is (code), you should write the new definition in the format of python code without syntax errors, and you should not change the function name or the function signature.\n\n        If no changes are needed, just output TERMINATE.\n        '
)

example_problem_template class-attribute instance-attribute

example_problem_template = dedent(
    "\n        Here is an example of problem instance and response:\n\n        ================================\n        {example_problem}\n        ================================\n\n        Your response:\n        {example_response}\n        "
)

user_prompt_template class-attribute instance-attribute

user_prompt_template = dedent(
    "\n        Now you see problem instance:\n\n        ================================\n        {problem_instance}\n        ================================\n\n        "
)

example_prompt class-attribute instance-attribute

example_prompt = dedent(
    "\n\n        Here are some feasible but not optimal solutions for the current problem instance. Consider this as a hint to help you understand the problem better.\n\n        ================================\n\n        {examples}\n\n        ================================\n        "
)

final_prompt class-attribute instance-attribute

final_prompt = dedent('\n        Your response:\n        ')

final_prompt_with_variables class-attribute instance-attribute

final_prompt_with_variables = dedent(
    "\n        What are your suggestions on variables {names}?\n\n        Your response:\n        "
)

default_prompt_symbols class-attribute instance-attribute

default_prompt_symbols = {
    "variables": "#Variables",
    "constraints": "#Constraints",
    "inputs": "#Inputs",
    "outputs": "#Outputs",
    "others": "#Others",
    "feedback": "#Feedback",
    "instruction": "#Instruction",
    "code": "#Code",
    "documentation": "#Documentation",
}

default_json_keys class-attribute instance-attribute

default_json_keys = {
    "reasoning": "reasoning",
    "answer": "answer",
    "suggestion": "suggestion",
}

ignore_extraction_error instance-attribute

ignore_extraction_error = ignore_extraction_error

llm instance-attribute

llm = llm or LLM()

objective instance-attribute

objective = objective or default_objective

example_problem instance-attribute

example_problem = format(
    instruction=default_objective,
    code="y = add(x=a,y=b)\nz = subtract(x=y, y=c)",
    documentation="add: add x and y \nsubtract: subtract y from x",
    variables="(int) a = 5",
    constraints="a: a > 0",
    outputs="(int) z = 1",
    others="(int) y = 6",
    inputs="(int) b = 1\n(int) c = 5",
    feedback="The result of the code is not as expected. The result should be 10, but the code returns 1",
    stepsize=1,
)
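The example encodes the pipeline `y = add(a, b)`, `z = subtract(y, c)` with `a=5, b=1, c=5`, so `z = 1`; the feedback asks for `z = 10`, which requires `a = 14` since `(a + b) - c = a - 4`. A quick check in plain Python (independent of the library, using stand-in `add`/`subtract` functions that match the example's documentation):

```python
def add(x, y):
    # matches the example's documentation: "add x and y"
    return x + y

def subtract(x, y):
    # matches the example's documentation: "subtract y from x"
    return x - y

# Original variables from the example problem
a, b, c = 5, 1, 5
y = add(x=a, y=b)       # y = 6, matching #Others
z = subtract(x=y, y=c)  # z = 1, matching #Outputs

# The feedback wants z = 10; solving (a + b) - c = 10 gives a = 14
a_suggested = 14
assert subtract(x=add(x=a_suggested, y=b), y=c) == 10
```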

example_response instance-attribute

example_response = dedent(
    '\n            {"reasoning": "In this case, the desired response would be to change the value of input a to 14, as that would make the code return 10.",\n             "answer": {},\n             "suggestion": {"a": 14}\n            }\n            '
)

include_example instance-attribute

include_example = include_example

max_tokens instance-attribute

max_tokens = max_tokens

log instance-attribute

log = [] if log else None

summary_log instance-attribute

summary_log = [] if log else None

memory instance-attribute

memory = FIFOBuffer(memory_size)
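`FIFOBuffer` is defined elsewhere in the library; a minimal stand-in with the behavior its name and the `memory_size=0` default imply (keep only the most recent `size` items, with zero size disabling memory entirely) might look like this. This is an illustrative sketch, not the library's implementation:

```python
from collections import deque

class FIFOBufferSketch:
    """Illustrative stand-in for the library's FIFOBuffer: keeps only the
    most recent `size` items, so a size of 0 disables memory entirely."""

    def __init__(self, size: int):
        self.buffer = deque(maxlen=size)  # deque discards the oldest on overflow

    def add(self, item):
        self.buffer.append(item)  # with maxlen=0 this is silently a no-op

    def __iter__(self):
        return iter(self.buffer)

    def __len__(self):
        return len(self.buffer)
```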

prompt_symbols instance-attribute

prompt_symbols = deepcopy(default_prompt_symbols)

output_format_prompt instance-attribute

output_format_prompt = format(**(default_json_keys))

use_json_object_format instance-attribute

use_json_object_format = use_json_object_format

highlight_variables instance-attribute

highlight_variables = highlight_variables

default_propagator

default_propagator()

Return the default Propagator object of the optimizer.

summarize

summarize()

repr_node_value staticmethod

repr_node_value(node_dict)

repr_node_constraint staticmethod

repr_node_constraint(node_dict)

problem_instance

problem_instance(summary, mask=None)

construct_prompt

construct_prompt(summary, mask=None, *args, **kwargs)

Construct the system and user prompt.

replace_symbols

replace_symbols(text: str, symbols: Dict[str, str]) -> str
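`replace_symbols` swaps the default `#`-prefixed section titles (from `default_prompt_symbols`) for custom ones. A hypothetical function with the same signature, shown here only to illustrate the idea (the defaults dictionary is abridged, and this is an assumption about the behavior, not the library's code):

```python
from typing import Dict

def replace_symbols(text: str, symbols: Dict[str, str]) -> str:
    # Replace each default section symbol (e.g. "#Variables") with its
    # configured replacement; a plain sequential-substitution sketch.
    defaults = {
        "variables": "#Variables",
        "feedback": "#Feedback",
        "instruction": "#Instruction",
    }
    for key, default in defaults.items():
        if key in symbols:
            text = text.replace(default, symbols[key])
    return text
```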

construct_update_dict

construct_update_dict(
    suggestion: Dict[str, Any],
) -> Dict[ParameterNode, Any]

Convert the suggestion in text into the right data type.

extract_llm_suggestion

extract_llm_suggestion(response: str)

Extract the suggestion from the response.
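Since the output format asks the LLM for JSON, extraction amounts to locating the JSON object in a response that may wrap it in prose or code fences. A sketch of that pattern (illustrative only; the actual method may parse more leniently, which is what `ignore_extraction_error` controls):

```python
import json
import re

def extract_suggestion_sketch(response: str) -> dict:
    """Illustrative only: find the first {...} block in an LLM response
    and return its "suggestion" entry, tolerating surrounding prose."""
    match = re.search(r"\{.*\}", response, re.DOTALL)
    if match is None:
        return {}
    try:
        parsed = json.loads(match.group(0))
    except json.JSONDecodeError:
        return {}  # mirrors the spirit of ignore_extraction_error=True
    return parsed.get("suggestion", {})
```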

call_llm

call_llm(
    system_prompt: str,
    user_prompt: str,
    verbose: Union[bool, str] = False,
    max_tokens: int = 4096,
)

Call the LLM with a prompt and return the response.

save

save(path: str)

Save the optimizer state to a file.

load

load(path: str)

Load the optimizer state from a file.

OptoPrimeMulti

OptoPrimeMulti(
    *args,
    num_responses: int = 3,
    temperature_min_max: Optional[List[float]] = None,
    selector: Optional[callable] = None,
    generation_technique: str = "temperature_variation",
    selection_technique: str = "best_of_n",
    experts_list: Optional[List[str]] = None,
    llm_profiles: Optional[List[str]] = None,
    llm_weights: Optional[List[float]] = None,
    **kwargs
)

Bases: OptoPrime

temperature_min_max instance-attribute

temperature_min_max = (
    temperature_min_max
    if temperature_min_max is not None
    else [0.0, 1.0]
)
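The `"temperature_variation"` generation technique presumably samples candidates at different temperatures spanning `temperature_min_max`. One plausible way to derive such a schedule, evenly spaced from the hot end down to the cold end (an assumption about the technique, not the library's code):

```python
def temperature_schedule(t_min: float, t_max: float, n: int) -> list:
    # Evenly spaced temperatures from t_max down to t_min, one per
    # response; a single response just uses the minimum temperature.
    if n <= 1:
        return [t_min]
    step = (t_max - t_min) / (n - 1)
    return [t_max - i * step for i in range(n)]
```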

candidates instance-attribute

candidates = []

selected_candidate instance-attribute

selected_candidate = None

num_responses instance-attribute

num_responses = num_responses

selector instance-attribute

selector = selector

generation_technique instance-attribute

generation_technique = generation_technique

selection_technique instance-attribute

selection_technique = selection_technique

experts_list instance-attribute

experts_list = experts_list

llm_profiles instance-attribute

llm_profiles = llm_profiles

llm_weights instance-attribute

llm_weights = (
    llm_weights or [1.0] * len(llm_profiles)
    if llm_profiles
    else None
)
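Note how this expression parses: the conditional expression binds looser than `or`, so it groups as `(llm_weights or [1.0] * len(llm_profiles)) if llm_profiles else None`. In other words, without `llm_profiles` the weights resolve to `None` even if explicitly supplied. A self-contained function mirroring the same expression makes the behavior testable:

```python
def resolve_weights(llm_weights, llm_profiles):
    # Mirrors the attribute expression above: the ternary binds looser
    # than `or`, so without profiles the result is None regardless of input.
    return (
        llm_weights or [1.0] * len(llm_profiles)
        if llm_profiles
        else None
    )
```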

call_llm

call_llm(
    system_prompt: str,
    user_prompt: str,
    verbose: Union[bool, str] = False,
    max_tokens: int = 4096,
    num_responses: int = 1,
    temperature: float = 0.0,
    llm=None,
) -> List[str]

Given a prompt, returns multiple candidate answers.

generate_candidates

generate_candidates(
    summary,
    system_prompt: str,
    user_prompt: str,
    verbose: Union[bool, str] = False,
    mask=None,
    max_tokens: int = None,
    num_responses: int = 3,
    generation_technique: str = "temperature_variation",
    temperature_min_max: Optional[List[float]] = None,
    experts_list: Optional[List[str]] = None,
) -> List[str]

Generate multiple candidates using various techniques.

Args:
    summary: The summarized problem instance.
    system_prompt (str): The system-level prompt.
    user_prompt (str): The user-level prompt.
    verbose (bool): Whether to print debug information.
    mask: Mask for the problem instance.
    max_tokens (int, optional): Maximum token limit for the LLM responses.
    num_responses (int): Number of responses to request.
    generation_technique (str): Technique to use for generation:
        - "temperature_variation": Use varying temperatures
        - "self_refinement": Each solution refines the previous one
        - "iterative_alternatives": Generate diverse alternatives
        - "multi_experts": Use different expert personas
    temperature_min_max (List[float], optional): [min, max] temperature range.
    experts_list (List[str], optional): List of expert personas to use for the multi_experts technique.

Returns:
    List[str]: List of LLM responses as strings.

select_candidate

select_candidate(
    candidates: List,
    selection_technique="moa",
    problem_summary="",
) -> Dict

Select the best response based on the candidates using various techniques.

Args:
    candidates (List): List of candidate responses from generate_candidates.
    selection_technique (str): Technique to select the best response:
        - "moa" or "mixture_of_agents": Use an LLM to mix the best elements of each response
        - "majority": Use an LLM to choose the most frequent candidate
        - "lastofn" or "last_of_n": Simply return the last candidate (also the fallback when the selection technique is unknown)

Returns:
    Dict: The selected candidate, or an empty dictionary if no candidates exist.
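The "moa" and "majority" techniques route through an LLM, but the last-of-n fallback and a plain-count version of majority voting are easy to sketch without one. This is an illustrative approximation only, not the library's implementation:

```python
from collections import Counter

def select_candidate_sketch(candidates, selection_technique="last_of_n"):
    """Illustrative selection only: majority is a plain frequency count
    here, and any unknown technique falls back to the last candidate."""
    if not candidates:
        return {}
    if selection_technique == "majority":
        counted = Counter(str(c) for c in candidates)
        winner = counted.most_common(1)[0][0]
        return next(c for c in candidates if str(c) == winner)
    # "last_of_n" -- also the fallback for unknown techniques
    return candidates[-1]
```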

OPRO

OPRO(*args, **kwargs)

Bases: OptoPrime

user_prompt_template class-attribute instance-attribute

user_prompt_template = dedent(
    "\n        Below are some example variables and their feedbacks.\n\n        {examples}\n\n        ================================\n\n        {instruction}\n        "
)

output_format_prompt class-attribute instance-attribute

output_format_prompt = dedent(
    '\n        Output_format: Your output should be in the following json format, satisfying\n        the json syntax:\n\n        {{\n        "suggestion": {{\n            <variable_1>: <suggested_value_1>,\n            <variable_2>: <suggested_value_2>,\n        }}\n        }}\n\n        When suggesting variables, write down the suggested values in "suggestion".\n        When <type> of a variable is (code), you should write the new definition in the\n        format of python code without syntax errors, and you should not change the\n        function name or the function signature.\n\n        If no changes or answer are needed, just output TERMINATE.\n        '
)

default_objective class-attribute instance-attribute

default_objective = (
    "Come up with a new variable in accordance to feedback."
)

buffer instance-attribute

buffer = []

construct_prompt

construct_prompt(summary, mask=None, *args, **kwargs)

Construct the system and user prompt.

TextGrad

TextGrad(
    parameters: List[ParameterNode],
    llm: AbstractModel = None,
    *args,
    propagator: Propagator = None,
    objective: Union[None, str] = None,
    max_tokens=4096,
    log=False,
    **kwargs
)

Bases: Optimizer

llm instance-attribute

llm = llm or LLM()

print_limit instance-attribute

print_limit = 100

max_tokens instance-attribute

max_tokens = max_tokens

new_variable_tags instance-attribute

new_variable_tags = [
    "<IMPROVED_VARIABLE>",
    "</IMPROVED_VARIABLE>",
]
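TextGrad asks the LLM to wrap the improved value in these tags, so extraction reduces to pulling the text between them. A hypothetical helper sketching that step (an assumption about the pattern, not the library's code):

```python
import re

def extract_improved_variable(response: str,
                              tags=("<IMPROVED_VARIABLE>", "</IMPROVED_VARIABLE>")):
    # Hypothetical helper: pull the text between the start/end tags
    # that the optimizer's system prompt asks the LLM to emit.
    start, end = map(re.escape, tags)
    match = re.search(f"{start}(.*?){end}", response, re.DOTALL)
    return match.group(1).strip() if match else None
```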

optimizer_system_prompt instance-attribute

optimizer_system_prompt = format(
    new_variable_start_tag=new_variable_tags[0],
    new_variable_end_tag=new_variable_tags[1],
)

log instance-attribute

log = [] if log else None

call_llm

call_llm(
    system_prompt: str,
    user_prompt: str,
    verbose: Union[bool, str] = False,
)

Call the LLM with a prompt and return the response.

save

save(path: str)

Save the optimizer state to a file.

load

load(path: str)

Load the optimizer state from a file.

OptoPrimeV2

OptoPrimeV2(
    parameters: List[ParameterNode],
    llm: AbstractModel = None,
    *args,
    propagator: Propagator = None,
    objective: Union[None, str] = None,
    ignore_extraction_error: bool = True,
    include_example=False,
    memory_size=0,
    max_tokens=4096,
    log=True,
    initial_var_char_limit=100,
    optimizer_prompt_symbol_set: OptimizerPromptSymbolSet = OptimizerPromptSymbolSet(),
    use_json_object_format=True,
    truncate_expression=truncate_expression,
    **kwargs
)

Bases: OptoPrime

representation_prompt class-attribute instance-attribute

representation_prompt = dedent(
    "\n        You're tasked to solve a coding/algorithm problem. You will see the instruction, the code, the documentation of each function used in the code, and the feedback about the execution result.\n\n        Specifically, a problem will be composed of the following parts:\n        - {instruction_section_title}: the instruction which describes the things you need to do or the question you should answer.\n        - {code_section_title}: the code defined in the problem.\n        - {documentation_section_title}: the documentation of each function used in #Code. The explanation might be incomplete and just contain high-level description. You can use the values in #Others to help infer how those functions work.\n        - {variables_section_title}: the input variables that you can change/tweak (trainable).\n        - {inputs_section_title}: the values of fixed inputs to the code, which CANNOT be changed (fixed).\n        - {others_section_title}: the intermediate values created through the code execution.\n        - {outputs_section_title}: the result of the code output.\n        - {feedback_section_title}: the feedback about the code's execution result.\n\n        In `{variables_section_title}`, `{inputs_section_title}`, `{outputs_section_title}`, and `{others_section_title}`, the format is:\n\n        For variables we express as this:\n        {variable_expression_format}\n\n        If `data_type` is `code`, it means `{value_tag}` is the source code of a python code, which may include docstring and definitions.\n        "
)

default_objective class-attribute instance-attribute

default_objective = "You need to change the `{value_tag}` of the variables in {variables_section_title} to improve the output in accordance to {feedback_section_title}."

output_format_prompt_template class-attribute instance-attribute

output_format_prompt_template = dedent(
    "\n        Output_format: Your output should be in the following XML/HTML format:\n\n        ```\n        {output_format}\n        ```\n\n        In <{reasoning_tag}>, explain the problem: 1. what the {instruction_section_title} means 2. what the {feedback_section_title} on {outputs_section_title} means to {variables_section_title} considering how {variables_section_title} are used in {code_section_title} and other values in {documentation_section_title}, {inputs_section_title}, {others_section_title}. 3. Reasoning about the suggested changes in {variables_section_title} (if needed) and the expected result.\n\n        If you need to suggest a change in the values of {variables_section_title}, write down the suggested values in <{improved_variable_tag}>. Remember you can change only the values in {variables_section_title}, not others. When `type` of a variable is `code`, you should write the new definition in the format of python code without syntax errors, and you should not change the function name or the function signature.\n\n        If no changes are needed, just output TERMINATE.\n        "
)

example_problem_template class-attribute instance-attribute

example_problem_template = dedent(
    "\n        Here is an example of problem instance and response:\n\n        ================================\n        {example_problem}\n        ================================\n\n        Your response:\n        {example_response}\n        "
)

user_prompt_template class-attribute instance-attribute

user_prompt_template = dedent(
    "\n        Now you see problem instance:\n\n        ================================\n        {problem_instance}\n        ================================\n\n        "
)

example_prompt class-attribute instance-attribute

example_prompt = dedent(
    "\n\n        Here are some feasible but not optimal solutions for the current problem instance. Consider this as a hint to help you understand the problem better.\n\n        ================================\n\n        {examples}\n\n        ================================\n        "
)

final_prompt class-attribute instance-attribute

final_prompt = dedent(
    "\n        What are your suggestions on variables {names}?\n\n        Your response:\n        "
)

truncate_expression instance-attribute

truncate_expression = truncate_expression

use_json_object_format instance-attribute

use_json_object_format = (
    use_json_object_format
    if expect_json and use_json_object_format
    else False
)

ignore_extraction_error instance-attribute

ignore_extraction_error = ignore_extraction_error

llm instance-attribute

llm = llm or LLM()

objective instance-attribute

objective = objective or format(
    value_tag=value_tag,
    variables_section_title=variables_section_title,
    feedback_section_title=feedback_section_title,
)

initial_var_char_limit instance-attribute

initial_var_char_limit = initial_var_char_limit

optimizer_prompt_symbol_set instance-attribute

optimizer_prompt_symbol_set = optimizer_prompt_symbol_set

example_problem_summary instance-attribute

example_problem_summary = FunctionFeedback(
    graph=[
        (1, "y = add(x=a,y=b)"),
        (2, "z = subtract(x=y, y=c)"),
    ],
    documentation={
        "add": "This is an add operator of x and y.",
        "subtract": "subtract y from x",
    },
    others={"y": (6, None)},
    roots={
        "a": (5, "a > 0"),
        "b": (1, None),
        "c": (5, None),
    },
    output={"z": (1, None)},
    user_feedback="The result of the code is not as expected. The result should be 10, but the code returns 1",
)

example_problem instance-attribute

example_problem = problem_instance(example_problem_summary)

example_response instance-attribute

example_response = example_output(
    reasoning="In this case, the desired response would be to change the value of input a to 14, as that would make the code return 10.",
    variables={"a": 10},
)

include_example instance-attribute

include_example = include_example

max_tokens instance-attribute

max_tokens = max_tokens

log instance-attribute

log = [] if log else None

summary_log instance-attribute

summary_log = [] if log else None

memory instance-attribute

memory = FIFOBuffer(memory_size)

default_prompt_symbols instance-attribute

default_prompt_symbols = default_prompt_symbols

prompt_symbols instance-attribute

prompt_symbols = deepcopy(default_prompt_symbols)

initialize_prompt

initialize_prompt()

repr_node_value staticmethod

repr_node_value(node_dict)

repr_node_value_compact

repr_node_value_compact(
    node_dict,
    node_tag="node",
    value_tag="value",
    constraint_tag="constraint",
)

construct_prompt

construct_prompt(summary, mask=None, *args, **kwargs)

Construct the system and user prompt.

problem_instance

problem_instance(summary, mask=None)

extract_llm_suggestion

extract_llm_suggestion(response: str)

Extract the suggestion from the response.

call_llm

call_llm(
    system_prompt: str,
    user_prompt: str,
    verbose: Union[bool, str] = False,
    max_tokens: int = 4096,
)

Call the LLM with a prompt and return the response.

save

save(path: str)

Save the optimizer state to a file.

load

load(path: str)

Load the optimizer state from a file.