opto.optimizers.optoprimemulti

OptoPrimeMulti

OptoPrimeMulti(
    *args,
    num_responses: int = 3,
    temperature_min_max: Optional[List[float]] = None,
    selector: Optional[callable] = None,
    generation_technique: str = "temperature_variation",
    selection_technique: str = "best_of_n",
    experts_list: Optional[List[str]] = None,
    llm_profiles: Optional[List[str]] = None,
    llm_weights: Optional[List[float]] = None,
    **kwargs
)

Bases: OptoPrime

temperature_min_max instance-attribute

temperature_min_max = (
    temperature_min_max
    if temperature_min_max is not None
    else [0.0, 1.0]
)

candidates instance-attribute

candidates = []

selected_candidate instance-attribute

selected_candidate = None

num_responses instance-attribute

num_responses = num_responses

selector instance-attribute

selector = selector

generation_technique instance-attribute

generation_technique = generation_technique

selection_technique instance-attribute

selection_technique = selection_technique

experts_list instance-attribute

experts_list = experts_list

llm_profiles instance-attribute

llm_profiles = llm_profiles

llm_weights instance-attribute

llm_weights = (
    llm_weights or [1.0] * len(llm_profiles)
    if llm_profiles
    else None
)
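Because of Python's operator precedence, the default expression above parses as `(llm_weights or [1.0] * len(llm_profiles)) if llm_profiles else None`: weights default to uniform only when `llm_profiles` is provided, and stay `None` otherwise. A minimal standalone sketch of that default logic:

```python
from typing import List, Optional


def default_llm_weights(
    llm_weights: Optional[List[float]],
    llm_profiles: Optional[List[str]],
) -> Optional[List[float]]:
    """Mirror the attribute default: uniform weights when profiles are
    given without explicit weights; None when no profiles are configured."""
    return (
        llm_weights or [1.0] * len(llm_profiles)
        if llm_profiles
        else None
    )
```

For example, `default_llm_weights(None, ["model-a", "model-b"])` yields `[1.0, 1.0]`, while any call with `llm_profiles=None` yields `None`.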

call_llm

call_llm(
    system_prompt: str,
    user_prompt: str,
    verbose: Union[bool, str] = False,
    max_tokens: int = 4096,
    num_responses: int = 1,
    temperature: float = 0.0,
    llm=None,
) -> List[str]

Given a prompt, returns multiple candidate answers.

generate_candidates

generate_candidates(
    summary,
    system_prompt: str,
    user_prompt: str,
    verbose: Union[bool, str] = False,
    mask=None,
    max_tokens: int = None,
    num_responses: int = 3,
    generation_technique: str = "temperature_variation",
    temperature_min_max: Optional[List[float]] = None,
    experts_list: Optional[List[str]] = None,
) -> List[str]

Generate multiple candidates using various techniques.

Args:
    summary: The summarized problem instance.
    system_prompt (str): The system-level prompt.
    user_prompt (str): The user-level prompt.
    verbose (bool): Whether to print debug information.
    mask: Mask for the problem instance.
    max_tokens (int, optional): Maximum token limit for the LLM responses.
    num_responses (int): Number of responses to request.
    generation_technique (str): Technique to use for generation:
        - "temperature_variation": Use varying temperatures
        - "self_refinement": Each solution refines the previous one
        - "iterative_alternatives": Generate diverse alternatives
        - "multi_experts": Use different expert personas
    temperature_min_max (List[float], optional): [min, max] temperature range.
    experts_list (List[str], optional): List of expert personas to use for the
        multi_experts technique.

Returns:
    List[str]: List of LLM responses as strings.
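For the "temperature_variation" technique, one plausible schedule (an assumption — the library's exact spacing is not documented here) spreads temperatures evenly across the `temperature_min_max` range, one per requested response:

```python
from typing import List


def temperature_schedule(
    num_responses: int, t_min: float = 0.0, t_max: float = 1.0
) -> List[float]:
    """Evenly spaced temperatures, one per candidate; a single
    response simply uses the minimum temperature."""
    if num_responses <= 1:
        return [t_min]
    step = (t_max - t_min) / (num_responses - 1)
    return [t_min + i * step for i in range(num_responses)]
```

With the class defaults (`num_responses=3`, range `[0.0, 1.0]`), this produces `[0.0, 0.5, 1.0]`: one conservative, one moderate, and one exploratory candidate.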

select_candidate

select_candidate(
    candidates: List,
    selection_technique="moa",
    problem_summary="",
) -> Dict

Select the best response based on the candidates using various techniques.

Args:
    candidates (List): List of candidate responses from generate_candidates.
    selection_technique (str): Technique to select the best response:
        - "moa" or "mixture_of_agents": Use an LLM to mix the best elements of each response
        - "majority": Use an LLM to choose the most frequent candidate
        - "lastofn" or "last_of_n": Simply return the last candidate (also used as the
          fallback when the selection technique is unrecognized)

Returns: Dict: The selected candidate or an empty dictionary if no candidates exist.
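The dispatch described above can be sketched as follows. This is an illustration, not the library's implementation: the real "moa" and "majority" paths delegate to an LLM, which is stubbed here with simple heuristics, and the `"response"` dict key is a hypothetical placeholder.

```python
from collections import Counter
from typing import Dict, List


def select_candidate_sketch(
    candidates: List[str], selection_technique: str = "moa"
) -> Dict:
    """Selection dispatch with the documented fallbacks: an empty dict
    when there are no candidates, and last_of_n for unknown techniques."""
    if not candidates:
        return {}  # no candidates: empty dictionary, as documented
    if selection_technique in ("moa", "mixture_of_agents"):
        # stand-in for the LLM mixing step: pick the longest candidate
        return {"response": max(candidates, key=len)}
    if selection_technique == "majority":
        # exact-string frequency as a stand-in for LLM majority voting
        return {"response": Counter(candidates).most_common(1)[0][0]}
    # "lastofn" / "last_of_n" and any unrecognized technique
    return {"response": candidates[-1]}
```

The unconditional fallthrough to the last candidate matches the documented behavior of choosing last_of_n when the selection technique is unknown.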