opto.trainer.suggest

Suggest

get_feedback

get_feedback(
    query: str,
    content: str,
    reference: Optional[str] = None,
    **kwargs
) -> str

Generate feedback for the provided content.

Args:
    query: The query to analyze (e.g., user query, task, etc.)
    content: The content to evaluate (e.g., student answer, code, etc.)
    reference: The expected information or correct answer
    **kwargs: Additional context or parameters for specialized guide implementations (e.g., expected answer, execution logs)

Returns: The feedback from the teacher
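
A minimal sketch of a custom guide, assuming Suggest can be subclassed by overriding get_feedback and is importable from opto.trainer.suggest; the class name and heuristic below are illustrative, not part of the library:

from typing import Optional

from opto.trainer.suggest import Suggest  # import path assumed from this page

class LengthSuggest(Suggest):
    """Hypothetical guide that flags answers much shorter than the reference."""

    def get_feedback(self, query: str, content: str, reference: Optional[str] = None, **kwargs) -> str:
        if reference is not None and len(content) < len(reference) // 2:
            return "The answer looks too short compared to the reference; add more detail."
        return "Correct"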

SimpleReferenceSuggest

SimpleReferenceSuggest(*args, **kwargs)

Bases: Suggest

This guide only returns a templated response based on the correctness of the content.

get_feedback

get_feedback(
    query: str,
    content: str,
    reference: Optional[str] = None,
    score: Optional[float] = None,
    **kwargs
) -> str
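
A usage sketch, assuming SimpleReferenceSuggest is importable from opto.trainer.suggest; the exact wording of the templated responses is defined by the class and not shown on this page:

from opto.trainer.suggest import SimpleReferenceSuggest  # import path assumed

guide = SimpleReferenceSuggest()
# score signals whether the content is correct; the guide returns a templated
# response accordingly (no LLM call is involved).
feedback = guide.get_feedback(
    query="What is the derivative of x^2?",
    content="2x",
    reference="2x",
    score=1.0,
)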

ReferenceSuggest

ReferenceSuggest(
    model: Optional[str] = None,
    llm: Optional[AbstractModel] = None,
    prompt_template: Optional[str] = None,
    system_prompt: Optional[str] = None,
    correctness_template: Optional[str] = None,
)

Bases: Suggest

A guide that uses an LLM to generate feedback by comparing content with expected information.

This guide sends prompts to an LLM asking it to evaluate content and provide feedback. Users can customize the prompt template to fit different feedback scenarios.

Example usage:

# Create a guide with default settings
guide = ReferenceSuggest(model="gpt-4o")

# Get feedback on a student answer
feedback = guide.get_feedback(query="Differentiate x^2",
                              content="The derivative of x^2 is 2x",
                              reference="The derivative of x^2 is 2x")

# Create a guide with a custom prompt template
custom_guide = ReferenceSuggest(
    model="gpt-4o",
    prompt_template="Review this code: {content}. Expected behavior: {reference}. Provide specific feedback."
)

Initialize the ReferenceSuggest guide with an LLM and prompt templates.

Args:
    model: The name of the LLM model to use (if llm is not provided)
    llm: An instance of AbstractModel to use for generating feedback
    prompt_template: Custom prompt template with {content} and {reference} placeholders
    system_prompt: Custom system prompt for the LLM
    correctness_template: Template to use when the content is deemed correct by a metric

DEFAULT_PROMPT_TEMPLATE class-attribute instance-attribute

DEFAULT_PROMPT_TEMPLATE = "The query is: {query}. The student answered: {content}. The correct answer is: {reference}. If the student answer is correct, please say 'Correct'. Otherwise, if the student answer is incorrect, please provide feedback to the student. The feedback should be specific and actionable."

DEFAULT_SYSTEM_PROMPT class-attribute instance-attribute

DEFAULT_SYSTEM_PROMPT = "You're a helpful teacher who provides clear and constructive feedback."

DEFAULT_CORRECTNESS_TEMPLATE class-attribute instance-attribute

DEFAULT_CORRECTNESS_TEMPLATE = 'Correct [TERMINATE]'

model instance-attribute

model = model

llm instance-attribute

llm = llm or LLM(model=model)

prompt_template instance-attribute

prompt_template = prompt_template or DEFAULT_PROMPT_TEMPLATE

system_prompt instance-attribute

system_prompt = system_prompt or DEFAULT_SYSTEM_PROMPT

correctness_template instance-attribute

correctness_template = (
    correctness_template or DEFAULT_CORRECTNESS_TEMPLATE
)

get_feedback

get_feedback(
    query: str,
    content: str,
    reference: Optional[str] = None,
    score: Optional[float] = None,
    **kwargs
) -> str

Get LLM-generated feedback by comparing content with reference information.

Args:
    query: The query to analyze (e.g., user query, task, etc.)
    content: The content to evaluate (e.g., student answer, code, etc.)
    reference: The expected information or correct answer
    score: Optional score between 0 and 1 indicating how well the content matches the reference
    **kwargs: Additional parameters (unused in this implementation)

Returns: A string containing the LLM-generated feedback
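
A sketch of the two paths, assuming a score of 1.0 marks the content as correct and triggers the correctness template rather than an LLM call (this behavior is implied by the correctness_template description above, not stated explicitly):

# Using the guide created in the example above
feedback = guide.get_feedback(
    query="Differentiate x^2",
    content="The derivative of x^2 is 2x",
    reference="The derivative of x^2 is 2x",
    score=1.0,          # content already judged correct by a metric
)

# With no score (or a low score), the LLM compares content and reference
# via the prompt template and returns its feedback.
feedback = guide.get_feedback(
    query="Differentiate x^2",
    content="The derivative of x^2 is x",
    reference="The derivative of x^2 is 2x",
)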

KeywordSuggest

KeywordSuggest(
    json_file: Optional[str] = None,
    keyword_response: Optional[Dict[str, str]] = None,
    custom_analyzers: Optional[
        List[Callable[[str, str], str]]
    ] = None,
    **kwargs
)

Bases: Suggest

A guide that matches keywords in execution log and returns corresponding responses.

The guide can be initialized with either a JSON file path or a dictionary mapping keywords to responses. When provided with content, it will return all responses for keywords that appear in the content.

Expected format for the keyword-response dictionary:

{
    "keyword1": "Response message for when keyword1 is found in content",
    "keyword2": "Response message for when keyword2 is found in content",
    ...
}

Example:

{
    "stride does not match": "The layout constraints are not satisfied. Please try to adjust the layout constraints...",
    "Execution time": "Now the mapping is valid. Please try to reduce the execution time...",
    "syntax error": "Typically the syntax error happens in how you define functions. Reminder about syntax: generally code comments start with #..."
}

When analyzing content, all keywords that appear in the content will have their corresponding responses included in the output, joined by newlines.

The guide can also be extended with custom analysis functions to provide more detailed feedback beyond simple keyword matching.

Initialize the KeywordSuggest guide with either a JSON file or a keyword-response dictionary.

Args:
    json_file: Path to a JSON file containing keyword-response mappings
    keyword_response: Dictionary mapping keywords to responses
    custom_analyzers: List of custom analysis functions that take (content, reference_log) as input and return a string with additional feedback
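
A construction sketch, assuming KeywordSuggest is importable from opto.trainer.suggest; the keyword mapping and file name are illustrative:

from opto.trainer.suggest import KeywordSuggest  # import path assumed

guide = KeywordSuggest(
    keyword_response={
        "syntax error": "Check how the functions are defined; code comments start with #.",
        "Execution time": "The mapping is valid. Try to reduce the execution time.",
    }
)

# Equivalent construction from a JSON file holding the same mapping:
# guide = KeywordSuggest(json_file="keyword_responses.json")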

keyword_response instance-attribute

keyword_response = load(f)

custom_analyzers instance-attribute

custom_analyzers = custom_analyzers or []

add_analyzer

add_analyzer(
    analyzer_func: Callable[[str, str], str],
) -> None

Add a custom analyzer function to the guide.

Args:
    analyzer_func: A function that takes (content, reference_log) as input and returns a string with feedback
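
A registration sketch; the analyzer below and its heuristic are illustrative only:

def flag_todo_markers(content: str, reference_log: str) -> str:
    # Hypothetical analyzer: complain about leftover TODO markers in the code.
    if "TODO" in content:
        return "The code still contains TODO markers; replace them with working logic."
    return ""

guide.add_analyzer(flag_todo_markers)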

match

match(log_content: str) -> str

Match keywords in the log content and return concatenated responses.

Args:
    log_content: The content to search for keywords

Returns: A string containing all matched responses, joined by newlines
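
For example, with the mapping sketched above, a log line containing a registered keyword returns that keyword's response (multiple matches are joined by newlines):

responses = guide.match("Compilation failed: syntax error on line 3")
# -> the response registered for "syntax error"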

run_custom_analyzers

run_custom_analyzers(
    content: str, reference_log: str
) -> List[str]

Run all custom analyzers on the content.

Args:
    content: The content to analyze (e.g., generated code)
    reference_log: The log content to analyze

Returns: A list of feedback strings from custom analyzers
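
A direct-call sketch; the content and log strings are illustrative:

extra_feedback = guide.run_custom_analyzers(
    content="def kernel(): ...",
    reference_log="Execution time: 12 ms",
)
# One feedback string per registered custom analyzer.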

get_feedback

get_feedback(
    task: str,
    content: str,
    info: Optional[str] = None,
    reward: Optional[float] = None,
    **kwargs
) -> str

Get feedback based on content and reference log.

Args:
    task: The task to analyze (e.g., user query, task, etc.)
    content: The content to analyze (e.g., generated code)
    info: The reference log containing execution information
    reward: The reward score for the content
    **kwargs: Additional parameters (unused in this implementation)

Returns: A string containing feedback based on keyword matches and all analyzers
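
A usage sketch combining keyword matching with the custom analyzer registered above; all argument values are illustrative:

feedback = guide.get_feedback(
    task="Generate a valid kernel mapping",
    content="def kernel(): ...  # TODO fill in",
    info="Compilation failed: syntax error on line 3",
    reward=0.0,
)
# The returned string combines the responses for every keyword matched in the
# reference log with the output of each registered custom analyzer.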