Introduction

⚠️ Warning: The documentation is still in development. Some sections and examples might change as the library evolves.

LatchLM is a provider-agnostic client library for AI models. Its goal is to provide a uniform, modular interface for interacting with different providers. By abstracting away provider-specific details, LatchLM lets you integrate with multiple AI APIs seamlessly.

At the core of LatchLM is the AiProvider trait, which defines an asynchronous interface to send requests to an LLM. This trait simplifies the development of non-blocking applications that interact with AI models.

Key components include:

  • AiProvider Trait: Abstracts the underlying API call, enabling provider-agnostic interactions.
  • AiModel Marker Trait: Represents specific AI model variants. Implementing this trait allows you to convert model variants into string identifiers (via AsRef<str>), ensuring reliable model referencing.
  • ModelId: Encapsulates metadata about an AI model, such as a unique identifier and a human-readable name.
  • AiResponse: Wraps the response from a model, allowing you to handle responses in a uniform way.

LatchLM is built with extensibility in mind, meaning that adding support for a new provider often requires only implementing the core traits. This design allows you to focus on building AI-driven applications without being locked into a single vendor.

Why LatchLM?

There are plenty of Rust crates out there that wrap LLM APIs, so why create yet another one?

Of the ones I tried, none suited my use case: swapping the model at runtime. I needed a way to simply select a model and treat it for what it is, an AI model. I needed dynamic dispatch across different providers, and installing a separate crate for each of them was not going to cut it. I could have used a crate that supports multiple providers and layered dynamic dispatch on top in my own app, but that would have been less fun and I wouldn't have learned as much.

Overview

LatchLM is designed with a focus on flexibility, extensibility, and type safety. This section explains key architectural decisions and patterns used throughout the library.

Core Design Patterns

Dynamic Dispatch

LatchLM is built around dynamic dispatch to enable runtime model selection. The AiProvider trait uses &dyn AiModel parameters, allowing any implementor of the AiModel trait to be passed to provider methods. This design enables you to:

  • Switch between different AI models at runtime
  • Use different providers interchangeably with the same interface
  • Create provider-agnostic abstraction layers in your application

Example of dynamic model selection:

async fn get_response(
    provider: &dyn AiProvider,
    model: &dyn AiModel,
    prompt: &str,
) -> Result<String> {
    let response = provider.send_request(model, prompt).await?;
    Ok(response.text)
}
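To illustrate the runtime-selection use case concretely, here is a self-contained sketch (stub types, not LatchLM's actual API) that maps a user-supplied configuration string to a model trait object at runtime:

```rust
// Stub of a marker trait in the style described above (assumption for
// illustration; LatchLM's real trait is AiModel).
trait Model: AsRef<str> + Send + Sync {}

// A hypothetical model family with two variants.
enum Gemini {
    Flash,
    Pro,
}

impl AsRef<str> for Gemini {
    fn as_ref(&self) -> &str {
        match self {
            Gemini::Flash => "gemini-flash",
            Gemini::Pro => "gemini-pro",
        }
    }
}
impl Model for Gemini {}

// Map a user-supplied name to a trait object at runtime. Because the
// return type is a trait object, the caller never needs to know which
// concrete model family is behind it.
fn select_model(name: &str) -> Option<&'static dyn Model> {
    match name {
        "flash" => Some(&Gemini::Flash),
        "pro" => Some(&Gemini::Pro),
        _ => None,
    }
}

fn main() {
    let model = select_model("pro").expect("unknown model");
    println!("{}", model.as_ref());
}
```

The same lookup could return `Box<dyn Model>` instead if the variants were constructed dynamically; `&'static` works here only because the variants are constants.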

Asynchronous Interface

LatchLM uses Rust's async/await system to provide non-blocking operations. The BoxFuture type alias simplifies returning futures from trait methods:

pub type BoxFuture<T> = Pin<Box<dyn Future<Output = T> + Send + 'static>>;
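To make the pattern concrete, here is a self-contained sketch (not LatchLM code) of a trait method returning a BoxFuture by pinning an async block, plus a toy `block_on` executor so the snippet runs without an async runtime:

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};

// The alias from the text: a heap-allocated, pinned, Send future.
pub type BoxFuture<T> = Pin<Box<dyn Future<Output = T> + Send + 'static>>;

// A trait method cannot be `async fn` and remain object-safe, so it
// returns a BoxFuture built with Box::pin instead.
trait Greeter {
    fn greet(&self, name: &str) -> BoxFuture<String>;
}

struct Hello;

impl Greeter for Hello {
    fn greet(&self, name: &str) -> BoxFuture<String> {
        // Clone what the future needs: it must be 'static, so it
        // cannot borrow from `self` or from `name`.
        let name = name.to_owned();
        Box::pin(async move { format!("hello, {name}") })
    }
}

// Minimal busy-poll executor, for demonstration only; real code would
// use tokio or another runtime.
struct NoopWaker;
impl Wake for NoopWaker {
    fn wake(self: Arc<Self>) {}
}

fn block_on<T>(mut fut: BoxFuture<T>) -> T {
    let waker = Waker::from(Arc::new(NoopWaker));
    let mut cx = Context::from_waker(&waker);
    loop {
        if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
            return v;
        }
    }
}

fn main() {
    println!("{}", block_on(Hello.greet("world")));
}
```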

Thread Safety

Key traits in LatchLM include Send + Sync bounds to ensure thread safety:

pub trait AiModel: AsRef<str> + Send + Sync {}

pub trait AiProvider: Send + Sync { /* ... */ }

Smart Pointers Support

LatchLM provides blanket implementations of the AiProvider trait for common pointer types (&T, &mut T, Box<T>, and Arc<T>), allowing flexible ownership and sharing patterns.
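As an illustration of what such blanket implementations look like in general (using a stub `Provider` trait rather than LatchLM's actual code), the following sketch forwards each pointer type to the underlying implementor:

```rust
use std::sync::Arc;

// Stub trait standing in for AiProvider (assumption for illustration).
trait Provider {
    fn name(&self) -> String;
}

struct Gemini;
impl Provider for Gemini {
    fn name(&self) -> String {
        "gemini".into()
    }
}

// Blanket implementations like these let &T, &mut T, Box<T> and Arc<T>
// be used wherever a Provider is expected. `?Sized` also admits trait
// objects such as Arc<dyn Provider>.
impl<T: Provider + ?Sized> Provider for &T {
    fn name(&self) -> String {
        (**self).name()
    }
}
impl<T: Provider + ?Sized> Provider for &mut T {
    fn name(&self) -> String {
        (**self).name()
    }
}
impl<T: Provider + ?Sized> Provider for Box<T> {
    fn name(&self) -> String {
        (**self).name()
    }
}
impl<T: Provider + ?Sized> Provider for Arc<T> {
    fn name(&self) -> String {
        (**self).name()
    }
}

fn describe(p: impl Provider) -> String {
    p.name()
}

fn main() {
    // A shared provider and a plain reference both satisfy Provider.
    let shared: Arc<dyn Provider> = Arc::new(Gemini);
    println!("{}", describe(shared.clone()));
    println!("{}", describe(&Gemini));
}
```

This is the standard Rust idiom for making a trait usable through shared ownership without forcing callers into one pointer type.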

Providers

In LatchLM a provider is a struct that implements the AiProvider trait, encapsulating the logic to interact with a given API.

First-party Providers

LatchLM currently ships one first-party provider, with experimental support, for the Gemini API. More providers will follow.

Implementing a provider

Below is an example of how you can implement a custom provider.

use latchlm::{AiModel, AiProvider, AiResponse, BoxFuture, Result};
use reqwest::Client;
use secrecy::SecretString;

pub struct MyProvider {
    client: Client,
    api_key: SecretString,
}

impl AiProvider for MyProvider {
    fn send_request(&self, model: &dyn AiModel, message: &str) -> BoxFuture<Result<AiResponse>> {
        // Your implementation goes here. Clone whatever you need from
        // `self` before the async block, since the returned future must
        // be 'static and cannot borrow from `self`.
        Box::pin(async move {
            // Build and send an HTTP request with the cloned client.
            // Use model.as_ref() to obtain the model identifier.
            // Process the response and construct an AiResponse.
            Ok(AiResponse { text: "Example response text".into() })
        })
    }
}

Models

In LatchLM, a model is an implementor of the marker trait AiModel. Models serve as unique identifiers for the different variants supported by AI providers.

Implementing a Model

Below is an example of how you can implement a custom model family.

use latchlm::AiModel;

// Custom AI model variants
pub enum MyModel {
    Fast,
    Advanced,
}

impl AsRef<str> for MyModel {
    fn as_ref(&self) -> &str {
        match self {
            MyModel::Fast => "mymodel-fast",
            MyModel::Advanced => "mymodel-advanced",
        }
    }
}

impl AiModel for MyModel {}
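To see the marker trait in action without pulling in the crate, the following self-contained sketch stubs out AiModel locally (an assumption matching the signature shown earlier) and passes the custom variants through a `&dyn AiModel` parameter:

```rust
// Local stub matching the AiModel signature shown earlier, so this
// snippet runs without the latchlm crate.
trait AiModel: AsRef<str> + Send + Sync {}

enum MyModel {
    Fast,
    Advanced,
}

impl AsRef<str> for MyModel {
    fn as_ref(&self) -> &str {
        match self {
            MyModel::Fast => "mymodel-fast",
            MyModel::Advanced => "mymodel-advanced",
        }
    }
}

impl AiModel for MyModel {}

// A provider-style function only sees the trait object; the AsRef<str>
// supertrait gives it the string identifier to put in the API request.
fn model_id(model: &dyn AiModel) -> &str {
    model.as_ref()
}

fn main() {
    println!("{}", model_id(&MyModel::Fast));
    println!("{}", model_id(&MyModel::Advanced));
}
```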

Errors

LatchLM uses a unified error type to simplify error handling across all providers and models.

Error Type

All fallible operations in LatchLM return a Result<T, Error>, where Error is an enum defined in the core crate. This makes it easy to handle errors in a consistent way, regardless of the provider or model you are using.

Error Variants

The main error variants are:

  • RequestError: Occurs when an HTTP request fails (e.g., network issues, timeouts, invalid URLs). Wraps a reqwest::Error.

  • ApiError: Represents an error returned by the API provider itself, such as invalid API keys, quota exceeded, or unsupported operations. Contains the HTTP status code and a message.

  • ParseError: Indicates a failure to parse the response from the provider (e.g., invalid JSON). Wraps a serde_json::Error.

  • InvalidModelError: Returned when an invalid or unsupported model name is used.

Example

use latchlm::{AiProvider, AiModel, AiRequest, Error};

async fn call_model(
    provider: &dyn AiProvider,
    model: &dyn AiModel,
    prompt: &str,
) -> Result<String, Error> {
    let request = AiRequest { text: prompt.to_string() };
    let response = provider.send_request(model, request).await?;
    Ok(response.text)
}

Handling Errors

You can match on the Error type to handle different error cases:

match result {
    Ok(response) => println!("AI response: {}", response.text),
    Err(Error::RequestError(e)) => eprintln!("Network error: {e}"),
    Err(Error::ApiError { status, message }) => {
        eprintln!("API error (status {status}): {message}")
    }
    Err(Error::ParseError(e)) => eprintln!("Failed to parse response: {e}"),
    Err(Error::InvalidModelError(name)) => eprintln!("Invalid model: {name}"),
    // Error is #[non_exhaustive], so a catch-all arm is required when
    // matching from outside the crate.
    Err(e) => eprintln!("Other error: {e}"),
}

Extensibility

The Error enum is marked as #[non_exhaustive], which means additional variants may be added in the future without a breaking change. Matches on Error outside the crate must therefore include a wildcard arm.
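For illustration, here is a self-contained sketch of what a #[non_exhaustive] unified error enum can look like (variant shapes assumed from the descriptions above, not taken from LatchLM's source):

```rust
use std::fmt;

// Sketch of a unified error enum; field names are assumptions for the
// demo, not LatchLM's actual definitions.
#[non_exhaustive]
#[derive(Debug)]
enum Error {
    ApiError { status: u16, message: String },
    InvalidModelError(String),
}

impl fmt::Display for Error {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            Error::ApiError { status, message } => {
                write!(f, "API error (status {status}): {message}")
            }
            Error::InvalidModelError(name) => write!(f, "invalid model: {name}"),
        }
    }
}

// Display + Debug are all std::error::Error needs here.
impl std::error::Error for Error {}

fn main() {
    let e = Error::ApiError {
        status: 429,
        message: "quota exceeded".into(),
    };
    // Within the defining crate an exhaustive match still compiles;
    // downstream crates must add a `_ => ...` arm.
    println!("{e}");
}
```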

License

LatchLM is distributed under the terms of the Mozilla Public License, Version 2.0 (MPL-2.0).

Summary

  • You are free to:

    • Use, modify, and distribute this software, including for commercial purposes.
    • Combine this software with other code, including proprietary code, as long as MPL-covered files remain under MPL.
  • Conditions:

    • If you modify MPL-licensed files, you must make those files (and their modifications) available under the MPL.
    • You must include a copy of the MPL license in your distribution.
    • Larger works that combine MPL code with other code may be distributed under different terms, but MPL-covered files must remain under MPL.

For full details, please refer to the official MPL-2.0 text.


This is a summary for convenience only. The actual license text is legally binding.