📦 Models#
- class text_machina.src.models.base.TextGenerationModel(model_config)[source]#
Bases:
ABC
Base class for LLMs.
- abstract generate_completion(prompt, generation_config)[source]#
Generates a completion for a prompt by decoding from a model parameterized by generation_config. Subclasses must override this method to implement the completion logic.
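A new backend plugs in by subclassing the base class and overriding generate_completion. The sketch below is illustrative only: it mirrors the base class shape locally (it does not import text_machina), and the EchoModel backend and the model_config/generation_config dict shapes are hypothetical.

```python
from abc import ABC, abstractmethod
from typing import Any, Dict


class TextGenerationModel(ABC):
    """Minimal stand-in mirroring the base class shape."""

    def __init__(self, model_config: Dict[str, Any]):
        self.model_config = model_config

    @abstractmethod
    def generate_completion(self, prompt: str, generation_config: Dict[str, Any]) -> str:
        ...


class EchoModel(TextGenerationModel):
    """Toy backend: echoes the prompt truncated to max_tokens words."""

    def generate_completion(self, prompt: str, generation_config: Dict[str, Any]) -> str:
        max_tokens = generation_config.get("max_tokens", 16)
        return " ".join(prompt.split()[:max_tokens])


model = EchoModel(model_config={"name": "echo"})
print(model.generate_completion("one two three four", {"max_tokens": 2}))  # one two
```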
- class text_machina.src.models.ai21.AI21Model(model_config)[source]#
Bases:
TextGenerationModel
Generates completions using AI21 models.
Requires the definition of the AI21_API_KEY=<api_key> environment variable.
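Each API-backed model on this page reads its credentials from an environment variable. One way to set them from Python before constructing the models (the variable names are the ones stated on this page; the placeholder values are illustrative):

```python
import os

# Set provider credentials before constructing the corresponding models.
os.environ["AI21_API_KEY"] = "<api_key>"    # AI21Model
os.environ["ANTHROPIC_API_KEY"] = "<key>"   # AnthropicModel
os.environ["OPENAI_API_KEY"] = "<key>"      # OpenAIModel
```

Exporting the same variables in the shell before launching the process works equally well.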
- class text_machina.src.models.anthropic.AnthropicModel(model_config)[source]#
Bases:
TextGenerationModel
Generates completions using Anthropic models.
Requires the definition of the ANTHROPIC_API_KEY=<key> environment variable.
- class text_machina.src.models.azure_openai.AzureOpenAIModel(model_config)[source]#
Bases:
OpenAIModel
Generates completions using Azure OpenAI models.
Requires the definition of the AZURE_OPENAI_API_KEY=<key> env variable.
- class text_machina.src.models.bedrock.BedrockModel(model_config)[source]#
Bases:
TextGenerationModel
Generates completions using AWS Bedrock models.
Requires the definition of the AWS_ACCESS_KEY_ID=<key> and AWS_SECRET_ACCESS_KEY=<key> environment variables.
- generate_completion(prompt, generation_config)[source]#
Generates a completion for a prompt by decoding from a model parameterized by generation_config.
- get_completion_from_response_body(response_body)[source]#
Obtains the completions from a response body returned by a bedrock model.
Considers the different API schemas that each model provider uses.
- Parameters:
response_body (Dict) – the body returned by models in Bedrock.
- Returns:
the completion of the model extracted from the body.
- Return type:
str
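Because each Bedrock model provider returns a differently shaped response body, extracting the completion means dispatching on the schema. The sketch below is a hypothetical illustration of that dispatch, not the library's implementation; the body shapes shown are assumptions modeled on common Bedrock providers.

```python
from typing import Dict


def get_completion_from_response_body(response_body: Dict) -> str:
    """Illustrative dispatch over assumed per-provider Bedrock body shapes."""
    if "completion" in response_body:
        # Anthropic-style body (assumed shape)
        return response_body["completion"]
    if "results" in response_body:
        # Amazon Titan-style body (assumed shape)
        return response_body["results"][0]["outputText"]
    if "generations" in response_body:
        # Cohere-style body (assumed shape)
        return response_body["generations"][0]["text"]
    raise ValueError("Unrecognized response body schema")
```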
- class text_machina.src.models.cohere.CohereModel(model_config)[source]#
Bases:
TextGenerationModel
Generates completions using Cohere models.
Requires the definition of the COHERE_API_KEY=<key> environment variable.
- class text_machina.src.models.hf_local.HuggingFaceLocalModel(model_config)[source]#
Bases:
TextGenerationModel
Generates completions using locally deployed HuggingFace models.
- class text_machina.src.models.hf_remote.HuggingFaceRemoteModel(model_config)[source]#
Bases:
TextGenerationModel
Generates completions using remotely deployed HuggingFace models (HuggingFace's Inference API or Inference Endpoints).
Requires the definition of the HF_TOKEN=<token> environment variable.
- class text_machina.src.models.inference_server.InferenceServerModel(model_config)[source]#
Bases:
TextGenerationModel
Generates completions using models deployed on inference servers such as TensorRT-LLM or vLLM. This model assumes the servers' default APIs are used (e.g., not vLLM's OpenAI-compatible API).
- generate_completion(prompt, generation_config)[source]#
Generates a completion for a prompt by decoding from a model parameterized by generation_config.
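Talking to an inference server's default API typically means building a JSON payload from the prompt and generation parameters, then reading the generated text out of the JSON response. The helpers below are a minimal sketch assuming a vLLM-style default /generate endpoint; the field names ("prompt", "text") are assumptions for illustration, not a documented contract.

```python
from typing import Any, Dict


def build_generate_payload(prompt: str, generation_config: Dict[str, Any]) -> Dict[str, Any]:
    """Build a request body for an assumed vLLM-style default /generate endpoint."""
    return {"prompt": prompt, **generation_config}


def parse_generate_response(response_json: Dict[str, Any]) -> str:
    """Extract the first generated text from an assumed {"text": [...]} response."""
    return response_json["text"][0]
```

In practice these would wrap an HTTP POST to the server; keeping payload construction and response parsing as pure functions makes each server's schema easy to swap and test.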
- class text_machina.src.models.openai.OpenAIModel(model_config)[source]#
Bases:
TextGenerationModel
Generates completions using OpenAI models.
Requires the definition of the OPENAI_API_KEY=<key> environment variable.
- class text_machina.src.models.vertex.VertexModel(model_config)[source]#
Bases:
TextGenerationModel
Generates completions using VertexAI models.
Requires the definition of the VERTEX_AI_CREDENTIALS_FILE=<path> environment variable.