
opto.optimizers.textgrad

GLOSSARY_TEXT module-attribute

GLOSSARY_TEXT = "\n### Glossary of tags that will be sent to you:\n# - <LM_INPUT>: The input to the language model.\n# - <LM_OUTPUT>: The output of the language model.\n# - <FEEDBACK>: The feedback to the variable.\n# - <CONVERSATION>: The conversation history.\n# - <FOCUS>: The focus of the optimization.\n# - <ROLE>: The role description of the variable."

OPTIMIZER_SYSTEM_PROMPT module-attribute

OPTIMIZER_SYSTEM_PROMPT = f"You are part of an optimization system that improves text (i.e., variable). You will be asked to creatively and critically improve prompts, solutions to problems, code, or any other text-based variable. You will receive some feedback, and use the feedback to improve the variable. The feedback may be noisy, identify what is important and what is correct. Pay attention to the role description of the variable, and the context in which it is used. This is very important: You MUST give your response by sending the improved variable between {new_variable_start_tag} {{improved variable}} {new_variable_end_tag} tags. The text you send between the tags will directly replace the variable.

{GLOSSARY_TEXT}"

TGD_PROMPT_PREFIX module-attribute

TGD_PROMPT_PREFIX = "Here is the role of the variable you will improve: <ROLE>{variable_desc}</ROLE>.\n\nThe variable is the text within the following span: <VARIABLE> {variable_short} </VARIABLE>\n\nHere is the context and feedback we got for the variable:\n\n<CONTEXT>{variable_grad}</CONTEXT>\n\nImprove the variable ({variable_desc}) using the feedback provided in <FEEDBACK> tags.\n"

TGD_MULTIPART_PROMPT_INIT module-attribute

TGD_MULTIPART_PROMPT_INIT = "Here is the role of the variable you will improve: <ROLE>{variable_desc}</ROLE>.\n\nThe variable is the text within the following span: <VARIABLE> {variable_short} </VARIABLE>\n\nHere is the context and feedback we got for the variable:\n\n"

TGD_MULTIPART_PROMPT_PREFIX module-attribute

TGD_MULTIPART_PROMPT_PREFIX = "Improve the variable ({variable_desc}) using the feedback provided in <FEEDBACK> tags.\n"

TGD_PROMPT_SUFFIX module-attribute

TGD_PROMPT_SUFFIX = "Send the improved variable in the following format:\n\n{new_variable_start_tag}{{the improved variable}}{new_variable_end_tag}\n\nSend ONLY the improved variable between the <IMPROVED_VARIABLE> tags, and nothing else."

MOMENTUM_PROMPT_ADDITION module-attribute

MOMENTUM_PROMPT_ADDITION = "Here are the past iterations of this variable:\n\n<PAST_ITERATIONS>{past_values}</PAST_ITERATIONS>\n\nSimilar feedbacks across different steps suggests that the modifications to the variable are insufficient.If this is the case, please make more significant changes to the variable.\n\n"

CONSTRAINT_PROMPT_ADDITION module-attribute

CONSTRAINT_PROMPT_ADDITION = "You must follow the following constraints:\n\n<CONSTRAINTS>{constraint_text}</CONSTRAINTS>\n\n"

IN_CONTEXT_EXAMPLE_PROMPT_ADDITION module-attribute

IN_CONTEXT_EXAMPLE_PROMPT_ADDITION = "You must base on the following examples when modifying the {variable_desc}:\n\n<EXAMPLES>{in_context_examples}</EXAMPLES>\n\n"

GRADIENT_TEMPLATE module-attribute

GRADIENT_TEMPLATE = "Here is a conversation:\n\n<CONVERSATION>{context}</CONVERSATION>\n\nThis conversation is potentially part of a larger system. The output is used as {response_desc}\n\nHere is the feedback we got for {variable_desc} in the conversation:\n\n<FEEDBACK>{feedback}</FEEDBACK>\n\n"

GRADIENT_MULTIPART_TEMPLATE module-attribute

GRADIENT_MULTIPART_TEMPLATE = "Above is a conversation with a language model.\nThis conversation is potentially part of a larger system. The output is used as {response_desc}\n\nHere is the feedback we got for {variable_desc} in the conversation:\n\n<FEEDBACK>{feedback}</FEEDBACK>\n\n"

See the original TextGrad implementation for these backward prompts:
https://github.com/zou-group/textgrad/blob/main/textgrad/autograd/llm_ops.py
https://github.com/zou-group/textgrad/blob/main/textgrad/autograd/llm_backward_prompts.py

GLOSSARY_TEXT_BACKWARD module-attribute

GLOSSARY_TEXT_BACKWARD = "\n### Glossary of tags that will be sent to you:\n# - <LM_INPUT>: The input to the language model.\n# - <LM_OUTPUT>: The output of the language model.\n# - <OBJECTIVE_FUNCTION>: The objective of the optimization task.\n# - <VARIABLE>: Specifies the span of the variable.\n# - <ROLE>: The role description of the variable."

BACKWARD_SYSTEM_PROMPT module-attribute

BACKWARD_SYSTEM_PROMPT = f'You are part of an optimization system that improves a given text (i.e. the variable). You are the gradient (feedback) engine. Your only responsibility is to give intelligent and creative feedback and constructive criticism to variables, given an objective specified in <OBJECTIVE_FUNCTION> </OBJECTIVE_FUNCTION> tags. The variables may be solutions to problems, prompts to language models, code, or any other text-based variable. Pay attention to the role description of the variable, and the context in which it is used. You should assume that the variable will be used in a similar context in the future. Only provide strategies, explanations, and methods to change in the variable. DO NOT propose a new version of the variable, that will be the job of the optimizer. Your only job is to send feedback and criticism (compute 'gradients'). For instance, feedback can be in the form of 'Since language models have the X failure mode...', 'Adding X can fix this error because...', 'Removing X can improve the objective function because...', 'Changing X to Y would fix the mistake ...', that gets at the downstream objective.
If a variable is already working well (e.g. the objective function is perfect, an evaluation shows the response is accurate), you should not give feedback.
{GLOSSARY_TEXT_BACKWARD}'

CONVERSATION_TEMPLATE module-attribute

CONVERSATION_TEMPLATE = "<LM_INPUT> {prompt} </LM_INPUT>\n\n<LM_OUTPUT> {response_value} </LM_OUTPUT>\n\n"

CONVERSATION_START_INSTRUCTION_CHAIN module-attribute

CONVERSATION_START_INSTRUCTION_CHAIN = "You will give feedback to a variable with the following role: <ROLE> {variable_desc} </ROLE>. Here is a conversation with a language model (LM):\n\n{conversation}"

OBJECTIVE_INSTRUCTION_CHAIN module-attribute

OBJECTIVE_INSTRUCTION_CHAIN = "This conversation is part of a larger system. The <OP_OUTPUT> was later used as {response_desc}.\n\n<OBJECTIVE_FUNCTION>Your goal is to give feedback to the variable to address the following feedback on the LM_OUTPUT: {response_gradient} </OBJECTIVE_FUNCTION>\n\n"

CONVERSATION_START_INSTRUCTION_BASE module-attribute

CONVERSATION_START_INSTRUCTION_BASE = "You will give feedback to a variable with the following role: <ROLE> {variable_desc} </ROLE>. Here is an evaluation of the variable using a language model:\n\n{conversation}"

OBJECTIVE_INSTRUCTION_BASE module-attribute

OBJECTIVE_INSTRUCTION_BASE = "<OBJECTIVE_FUNCTION>Your goal is to give feedback and criticism to the variable given the above evaluation output. Our only goal is to improve the above metric, and nothing else. </OBJECTIVE_FUNCTION>\n\n"

EVALUATE_VARIABLE_INSTRUCTION module-attribute

EVALUATE_VARIABLE_INSTRUCTION = "We are interested in giving feedback to the {variable_desc} for this conversation. Specifically, give feedback to the following span of text:\n\n<VARIABLE> {variable_short} </VARIABLE>\n\nGiven the above history, describe how the {variable_desc} could be improved to improve the <OBJECTIVE_FUNCTION>. Be very creative, critical, and intelligent.\n\n"

SEARCH_QUERY_BACKWARD_INSTRUCTION module-attribute

SEARCH_QUERY_BACKWARD_INSTRUCTION = "Here is a query and a response from searching with {engine_name}:\n<QUERY> {query} </QUERY>\n<RESULTS> {results} </RESULTS>\n\n"

GRADIENT_OF_RESULTS_INSTRUCTION module-attribute

GRADIENT_OF_RESULTS_INSTRUCTION = "For the search results from {engine_name} we got the following feedback:\n\n<FEEDBACK>{results_gradient}</FEEDBACK>\n\n"

Gradient accumulation: reduce / sum

REDUCE_MEAN_SYSTEM_PROMPT module-attribute

REDUCE_MEAN_SYSTEM_PROMPT = "You are part of an optimization system that improves a given text (i.e. the variable). Your only responsibility is to critically aggregate and summarize the feedback from sources. The variables may be solutions to problems, prompts to language models, code, or any other text-based variable. The multiple sources of feedback will be given to you in <FEEDBACK> </FEEDBACK> tags. When giving a response, only provide the core summary of the feedback. Do not recommend a new version for the variable -- only summarize the feedback critically. "

GradientInfo dataclass

GradientInfo(
    gradient: str,
    gradient_context: Optional[Dict[str, str]],
)

gradient instance-attribute

gradient: str

gradient_context instance-attribute

gradient_context: Optional[Dict[str, str]]
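
GradientInfo is a small container pairing one feedback string with optional context about where it came from. For example (the context keys here are hypothetical):

# Sketch: a single accumulated gradient with optional context (keys are hypothetical).
from opto.optimizers.textgrad import GradientInfo

g = GradientInfo(
    gradient="The prompt should ask for step-by-step reasoning.",
    gradient_context={"conversation": "<LM_INPUT> ... </LM_INPUT>"},
)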

TextGrad

TextGrad(
    parameters: List[ParameterNode],
    llm: AbstractModel = None,
    *args,
    propagator: Propagator = None,
    objective: Union[None, str] = None,
    max_tokens=4096,
    log=False,
    **kwargs
)

Bases: Optimizer
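
A rough usage sketch in the PyTorch-style loop used elsewhere in Trace; the traced workflow and the exact backward/step wiring are assumptions, not taken from this page:

# Sketch of a typical optimization loop (Trace-style; details assumed).
from opto import trace
from opto.optimizers import TextGrad

prompt = trace.node("You are a concise assistant.", trainable=True)
optimizer = TextGrad(parameters=[prompt])

output = some_traced_workflow(prompt)   # hypothetical traced computation
optimizer.zero_feedback()
output.backward("The answers are too terse; add one worked example.")
optimizer.step()                        # rewrites `prompt` via the LLM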

llm instance-attribute

llm = llm or LLM()

print_limit instance-attribute

print_limit = 100

max_tokens instance-attribute

max_tokens = max_tokens

new_variable_tags instance-attribute

new_variable_tags = [
    "<IMPROVED_VARIABLE>",
    "</IMPROVED_VARIABLE>",
]
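
The improved variable is read back from the LLM response as the text between these tags. A minimal extraction sketch (not the library's exact parsing code):

# Sketch: pull the improved variable out of an LLM response.
def extract_improved_variable(response: str) -> str:
    start, end = "<IMPROVED_VARIABLE>", "</IMPROVED_VARIABLE>"
    return response.split(start, 1)[1].split(end, 1)[0].strip()

extract_improved_variable(
    "<IMPROVED_VARIABLE>You are a concise, accurate assistant.</IMPROVED_VARIABLE>"
)  # -> 'You are a concise, accurate assistant.'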

optimizer_system_prompt instance-attribute

optimizer_system_prompt = OPTIMIZER_SYSTEM_PROMPT.format(
    new_variable_start_tag=new_variable_tags[0],
    new_variable_end_tag=new_variable_tags[1],
)

log instance-attribute

log = [] if log else None

call_llm

call_llm(
    system_prompt: str,
    user_prompt: str,
    verbose: Union[bool, str] = False,
)

Call the LLM with a prompt and return the response.

save

save(path: str)

Save the optimizer state to a file.

load

load(path: str)

Load the optimizer state from a file.
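
For example (the path and file format are illustrative, not prescribed by this page):

# Sketch: persist and later restore the optimizer state.
optimizer.save("textgrad_state.pkl")
optimizer.load("textgrad_state.pkl")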

construct_tgd_prompt

construct_tgd_prompt(
    do_momentum: bool = False,
    do_constrained: bool = False,
    do_in_context_examples: bool = False,
    **optimizer_kwargs
)

Construct the textual gradient descent prompt.

:param do_momentum: Whether to include momentum in the prompt.
:type do_momentum: bool, optional
:param do_constrained: Whether to include constraints in the prompt.
:type do_constrained: bool, optional
:param do_in_context_examples: Whether to include in-context examples in the prompt.
:type do_in_context_examples: bool, optional
:param optimizer_kwargs: Additional keyword arguments for formatting the prompt, such as the variable description, gradient, past values, constraints, and in-context examples.
:return: The TGD update prompt.
:rtype: str
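
An illustrative call, continuing the usage sketch above and assuming the keyword names pass straight through to the template fields documented earlier (variable_desc, variable_short, variable_grad, constraint_text):

# Sketch: build a constrained update prompt; keyword names are assumed to
# match the template placeholders shown above.
update_prompt = optimizer.construct_tgd_prompt(
    do_constrained=True,
    variable_desc="a system prompt for a QA agent",
    variable_short="You are a helpful assistant...",
    variable_grad="<FEEDBACK>The answers are too verbose.</FEEDBACK>",
    constraint_text="Keep the prompt under 100 words.",
)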

construct_reduce_prompt

construct_reduce_prompt(gradients: List[GradientInfo])

Construct a prompt that reduces the gradients.
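
When a variable receives feedback from several sources, the individual gradients can be summarized under REDUCE_MEAN_SYSTEM_PROMPT. An illustrative sketch, continuing the usage example above (whether the optimizer wires these exact calls together internally is an assumption):

# Sketch: aggregate several gradients into one summary via the LLM.
from opto.optimizers.textgrad import GradientInfo, REDUCE_MEAN_SYSTEM_PROMPT

gradients = [
    GradientInfo("Add an explicit output format.", None),
    GradientInfo("The tone should be more formal.", None),
]
reduce_prompt = optimizer.construct_reduce_prompt(gradients)
summary = optimizer.call_llm(
    system_prompt=REDUCE_MEAN_SYSTEM_PROMPT,
    user_prompt=reduce_prompt,
)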

rm_node_attrs

rm_node_attrs(text: str) -> str

Removes trace node attributes (text inside square brackets) from a string.

Args:
    text: Input string that may contain trace node attributes like [ParameterNode]

Returns:
    String with trace node attributes removed
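
A behaviorally equivalent sketch of what this helper does (the exact regex is an assumption):

# Sketch of the described behavior: strip bracketed node attributes.
import re

def rm_node_attrs_sketch(text: str) -> str:
    return re.sub(r"\[[^\]]*\]", "", text).strip()

rm_node_attrs_sketch("prompt0 [ParameterNode]")  # -> 'prompt0'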

get_short_value

get_short_value(text, n_words_offset: int = 10) -> str

Returns a short version of the value of the variable. We sometimes use it during optimization when we want to see the value of the variable but don't want to see the entire value, either to save tokens or to avoid repeating very long variables such as code or solutions to hard problems.

:param n_words_offset: The number of words to show from the beginning and the end of the value.
:type n_words_offset: int
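
A sketch of the described truncation (the middle ellipsis marker is an assumption):

# Sketch: keep the first and last n_words_offset words of a long value.
def get_short_value_sketch(text: str, n_words_offset: int = 10) -> str:
    words = text.split(" ")
    if len(words) <= 2 * n_words_offset:
        return text
    return " ".join(words[:n_words_offset]) + " (...) " + " ".join(words[-n_words_offset:])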