BaseBackendLLM Interface Class

This interface defines the contract that all LLM implementations used by Memora's backend must follow.

memora.llm_backends.base.BaseBackendLLM

Bases: ABC

Abstract base class for LLMs used in the backend by Memora.

Attributes

get_model_kwargs abstractmethod property

get_model_kwargs: Dict[str, Any]

Returns dictionary of model configuration parameters

Example

return {
    "model": self.model,              # model_name: gpt-4o
    "temperature": self.temperature,  # 1
    "top_p": self.top_p,              # 1
    "max_tokens": self.max_tokens,    # 1024
    "stream": False,
}
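A concrete backend would typically store these parameters on the instance and expose them through the read-only property. The sketch below illustrates this (the `SketchBackendLLM` class name and its constructor defaults are illustrative assumptions, not part of Memora):

```python
from typing import Any, Dict


class SketchBackendLLM:
    """Illustrative backend holding typical model configuration."""

    def __init__(
        self,
        model: str = "gpt-4o",
        temperature: float = 1.0,
        top_p: float = 1.0,
        max_tokens: int = 1024,
    ):
        self.model = model
        self.temperature = temperature
        self.top_p = top_p
        self.max_tokens = max_tokens

    @property
    def get_model_kwargs(self) -> Dict[str, Any]:
        # Streaming stays off: the interface requires the full response at once.
        return {
            "model": self.model,
            "temperature": self.temperature,
            "top_p": self.top_p,
            "max_tokens": self.max_tokens,
            "stream": False,
        }
```

Keeping the configuration in one dictionary lets a subclass pass it directly to a provider SDK call (e.g. as `**self.get_model_kwargs`).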

Functions

__call__ abstractmethod async

__call__(
    messages: List[Dict[str, str]],
    output_schema_model: Type[BaseModel] | None = None,
) -> Union[str, BaseModel]

Process messages and generate a response (📌 Streaming is not supported, as the full response is required at once)

PARAMETER DESCRIPTION
messages

List of message dicts with role and content, e.g. [{"role": "user", "content": "Hello!"}, ...]

TYPE: List[Dict[str, str]]

output_schema_model

Optional Pydantic base model for structured output (📌 Ensure your model provider supports this for the chosen model)

TYPE: Type[BaseModel] | None DEFAULT: None

RETURNS DESCRIPTION
Union[str, BaseModel]

Union[str, BaseModel]: Generated text response as a string, or an instance of the output schema model if specified

Source code in memora/llm_backends/base.py
@abstractmethod
async def __call__(
    self,
    messages: List[Dict[str, str]],
    output_schema_model: Type[BaseModel] | None = None,
) -> Union[str, BaseModel]:
    """
    Process messages and generate response (📌 Streaming is not supported, as full response is required at once)

    Args:
        messages (List[Dict[str, str]]): List of message dicts with role and content e.g [{"role": "user", "content": "Hello!"}, ...]
        output_schema_model (Type[BaseModel] | None): Optional Pydantic base model for structured output (📌 Ensure your model provider supports this for the chosen model)

    Returns:
        Union[str, BaseModel]: Generated text response as a string, or an instance of the output schema model if specified
    """
    pass
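To make the contract concrete, here is a toy implementation of the plain-string path. The `EchoLLM` class is hypothetical and does not call a real provider; the structured-output branch is stubbed, since real backends would request an instance of `output_schema_model` from the provider (the real interface types that parameter as `Type[BaseModel] | None` from Pydantic, loosened here to keep the sketch stdlib-only):

```python
import asyncio
from typing import Dict, List, Optional


class EchoLLM:
    """Hypothetical stand-in satisfying the BaseBackendLLM call contract."""

    async def __call__(
        self,
        messages: List[Dict[str, str]],
        output_schema_model: Optional[type] = None,  # Type[BaseModel] in the real interface
    ) -> str:
        if output_schema_model is not None:
            # A real backend would ask the provider for structured output
            # and return an instance of `output_schema_model` here.
            raise NotImplementedError("structured output not sketched")
        # A real backend would send `messages` to the provider and await
        # the complete (non-streamed) response; we just echo the last user turn.
        last_user = next(
            m["content"] for m in reversed(messages) if m["role"] == "user"
        )
        return f"echo: {last_user}"

    async def close(self) -> None:
        # Nothing to release in this sketch; real backends close HTTP clients.
        pass


result = asyncio.run(EchoLLM()([{"role": "user", "content": "Hello!"}]))
```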

close abstractmethod async

close() -> None

Closes the LLM connection.

Source code in memora/llm_backends/base.py
@abstractmethod
async def close(self) -> None:
    """Closes the LLM connection."""
    pass
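Since `close()` is async, callers should release the backend explicitly when done. A typical lifecycle, sketched with a hypothetical stub backend (`StubLLM` is not part of Memora):

```python
import asyncio


class StubLLM:
    """Hypothetical backend used to illustrate the call-then-close lifecycle."""

    def __init__(self) -> None:
        self.closed = False

    async def __call__(self, messages, output_schema_model=None) -> str:
        return "ok"

    async def close(self) -> None:
        # A real backend would close its underlying HTTP client here.
        self.closed = True


async def main() -> str:
    llm = StubLLM()
    try:
        return await llm([{"role": "user", "content": "Hello!"}])
    finally:
        await llm.close()  # always release the connection, even on error


reply = asyncio.run(main())
```

Wrapping the call in `try`/`finally` ensures the connection is closed even when the provider call raises.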