BaseLLMBackend
Interface Class
This interface defines the contract that all LLM implementations used by Memora's backend must follow.
memora.llm_backends.base.BaseBackendLLM
Bases: ABC
Abstract base class for LLMs used in the backend by Memora.
Attributes
get_model_kwargs
abstractmethod
property
Returns a dictionary of model configuration parameters.
Example
```python
return {
    "model": self.model,              # model_name: gpt-4o
    "temperature": self.temperature,  # 1
    "top_p": self.top_p,              # 1
    "max_tokens": self.max_tokens,    # 1024
    "stream": False,
}
```
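As a sketch, a concrete backend could expose its constructor settings through this property. The `SketchBackend` class name and its attribute set are hypothetical illustrations, not part of Memora's API:

```python
# Hypothetical backend illustrating the get_model_kwargs property;
# the class name and attributes are assumptions for this sketch only.
class SketchBackend:
    def __init__(self, model: str = "gpt-4o", temperature: float = 1.0,
                 top_p: float = 1.0, max_tokens: int = 1024):
        self.model = model
        self.temperature = temperature
        self.top_p = top_p
        self.max_tokens = max_tokens

    @property
    def get_model_kwargs(self) -> dict:
        # Mirrors the example above; "stream" stays False because
        # streaming is not supported by this interface.
        return {
            "model": self.model,
            "temperature": self.temperature,
            "top_p": self.top_p,
            "max_tokens": self.max_tokens,
            "stream": False,
        }

print(SketchBackend().get_model_kwargs["model"])  # -> gpt-4o
```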
Functions
__call__
abstractmethod
async
```python
__call__(
    messages: List[Dict[str, str]],
    output_schema_model: Type[BaseModel] | None = None,
) -> Union[str, BaseModel]
```
Process messages and generate a response. (📌 Streaming is not supported, as the full response is required at once.)
PARAMETER | TYPE | DESCRIPTION
---|---|---
`messages` | `List[Dict[str, str]]` | List of message dicts with `role` and `content`, e.g. `[{"role": "user", "content": "Hello!"}, ...]`
`output_schema_model` | `Type[BaseModel] \| None` | Optional Pydantic base model for structured output (📌 Ensure your model provider supports this for the chosen model). Defaults to `None`.

RETURNS | DESCRIPTION
---|---
`Union[str, BaseModel]` | Generated text response as a string, or an instance of the output schema model if specified
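Putting the contract together, a minimal concrete subclass might look like the sketch below. The abstract base is re-declared locally so the snippet is self-contained (it mirrors, but is not, `memora.llm_backends.base.BaseBackendLLM`), and `EchoBackend` with its canned reply is purely hypothetical; a real backend would call a provider SDK inside `__call__`:

```python
# Sketch of a concrete backend under the documented contract.
# EchoBackend and its echoed reply are hypothetical stand-ins for a
# real provider call (OpenAI, Anthropic, etc.).
from __future__ import annotations  # keep annotations lazy; no pydantic needed

import abc
import asyncio
from typing import Any, Dict, List


class BaseBackendLLM(abc.ABC):
    """Local stand-in mirroring memora's BaseBackendLLM interface."""

    @property
    @abc.abstractmethod
    def get_model_kwargs(self) -> Dict[str, Any]: ...

    @abc.abstractmethod
    async def __call__(self, messages: List[Dict[str, str]],
                       output_schema_model=None): ...


class EchoBackend(BaseBackendLLM):
    def __init__(self, model: str = "gpt-4o"):
        self.model = model

    @property
    def get_model_kwargs(self) -> Dict[str, Any]:
        return {"model": self.model, "stream": False}

    async def __call__(self, messages, output_schema_model=None):
        # A real implementation would forward messages (and, when given,
        # output_schema_model for structured output) to the provider API
        # and return the full, non-streamed response.
        return f"echo: {messages[-1]['content']}"


backend = EchoBackend()
reply = asyncio.run(backend([{"role": "user", "content": "Hello!"}]))
print(reply)  # -> echo: Hello!
```

Because `__call__` is async, callers await the backend instance directly; a string comes back by default, or an instance of the schema model when one is supplied and supported.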