opto.optimizers.optoprimemulti¶
OptoPrimeMulti ¶
OptoPrimeMulti(
    *args,
    num_responses: int = 3,
    temperature_min_max: Optional[List[float]] = None,
    selector: Optional[callable] = None,
    generation_technique: str = "temperature_variation",
    selection_technique: str = "best_of_n",
    experts_list: Optional[List[str]] = None,
    llm_profiles: Optional[List[str]] = None,
    llm_weights: Optional[List[float]] = None,
    **kwargs
)
Bases: OptoPrime

temperature_min_max (instance-attribute) ¶

llm_weights (instance-attribute) ¶
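OptoPrimeMulti extends OptoPrime with a generate-then-select loop: it samples several candidate updates and then picks one, optionally via the `selector` callable passed to the constructor. The sketch below illustrates that pattern with stubbed candidates; `generate_then_select` and the string candidates are hypothetical stand-ins, not part of the opto API.

```python
from typing import Callable, List, Optional

def generate_then_select(
    prompt: str,
    num_responses: int = 3,
    selector: Optional[Callable[[List[str]], str]] = None,
) -> str:
    # Generate num_responses candidates; a real run would query an LLM here.
    candidates = [f"{prompt} -> candidate {i}" for i in range(num_responses)]
    # With no selector, fall back to "last of n", mirroring the documented
    # fallback when the selection technique is unknown.
    if selector is None:
        return candidates[-1]
    return selector(candidates)

best = generate_then_select("update x", num_responses=3)
```

A custom selector, e.g. `lambda cs: cs[0]`, can replace the built-in selection techniques.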
    call_llm ¶
call_llm(
    system_prompt: str,
    user_prompt: str,
    verbose: Union[bool, str] = False,
    max_tokens: int = 4096,
    num_responses: int = 1,
    temperature: float = 0.0,
    llm=None,
) -> List[str]
Given a system and user prompt, query the LLM and return a list of candidate answers, one per requested response.
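The contract of `call_llm` is to return a `List[str]` of completions. The stand-in below illustrates that shape without a real backend; `fake_call_llm` and its fabricated outputs are purely illustrative.

```python
from typing import List

def fake_call_llm(
    system_prompt: str,
    user_prompt: str,
    num_responses: int = 1,
    temperature: float = 0.0,
) -> List[str]:
    # A real implementation would send both prompts to an LLM; here we
    # fabricate deterministic strings just to show the return shape.
    return [f"candidate {i} (T={temperature})" for i in range(num_responses)]

answers = fake_call_llm("You are an optimizer.", "Suggest an update.", num_responses=3)
```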
generate_candidates ¶
generate_candidates(
    summary,
    system_prompt: str,
    user_prompt: str,
    verbose: Union[bool, str] = False,
    mask=None,
    max_tokens: Optional[int] = None,
    num_responses: int = 3,
    generation_technique: str = "temperature_variation",
    temperature_min_max: Optional[List[float]] = None,
    experts_list: Optional[List[str]] = None,
) -> List[str]
Generate multiple candidates using various techniques.

Args:

- summary: The summarized problem instance.
- system_prompt (str): The system-level prompt.
- user_prompt (str): The user-level prompt.
- verbose (bool): Whether to print debug information.
- mask: Mask for the problem instance.
- max_tokens (int, optional): Maximum token limit for the LLM responses.
- num_responses (int): Number of responses to request.
- generation_technique (str): Technique to use for generation:
    - "temperature_variation": Use varying temperatures
    - "self_refinement": Each solution refines the previous one
    - "iterative_alternatives": Generate diverse alternatives
    - "multi_experts": Use different expert personas
- temperature_min_max (List[float], optional): [min, max] temperature range.
- experts_list (List[str], optional): List of expert personas to use for the multi_experts technique.

Returns:

- List[str]: List of LLM responses as strings.
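For "temperature_variation", candidates are sampled at temperatures spread across the `temperature_min_max` range. The sketch below derives one plausible schedule; the even spacing and the `[0.0, 1.0]` default are assumptions for illustration, not confirmed by this page.

```python
from typing import List, Optional

def temperature_schedule(
    num_responses: int,
    temperature_min_max: Optional[List[float]] = None,
) -> List[float]:
    # Assumed default range when none is supplied.
    t_min, t_max = temperature_min_max or [0.0, 1.0]
    if num_responses == 1:
        return [t_min]
    # Evenly space num_responses temperatures from t_min to t_max inclusive.
    step = (t_max - t_min) / (num_responses - 1)
    return [t_min + i * step for i in range(num_responses)]
```

Each temperature would then be passed to one `call_llm` request to diversify the candidates.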
select_candidate ¶
Select the best response from the candidates using various techniques.
Args:

- candidates (List): List of candidate responses from generate_candidates.
- selection_technique (str): Technique to select the best response:
    - "moa" or "mixture_of_agents": Use an LLM to mix the best elements of each response
    - "majority": Use an LLM to choose the most frequent candidate
    - "lastofn" or "last_of_n" (also used when the selection technique is unknown): Simply return the last candidate

Returns:

- Dict: The selected candidate, or an empty dictionary if no candidates exist.
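The "majority" technique favors the most frequent candidate. Per the docs above this is delegated to an LLM; the sketch below is a simplified local frequency count that stands in for that step, with a hypothetical return shape.

```python
from collections import Counter
from typing import Dict, List

def select_majority(candidates: List[str]) -> Dict:
    # Empty input returns an empty dict, matching the documented
    # return contract of select_candidate.
    if not candidates:
        return {}
    # most_common(1) yields the single most frequent candidate;
    # ties resolve to the first-seen candidate.
    value, count = Counter(candidates).most_common(1)[0]
    return {"selected": value, "votes": count}
```

A real "moa" selection would instead prompt an LLM to merge the strongest elements of all candidates into one response.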