- Google Search (SerpApi)
- Perplexity
- Exa (Metaphor Systems) (coming soon)

For example, chaining two search backends together:
```rust
let perplexity_backend = PerplexityAI::new("api_key".to_string());
let google_backend = Google::new("api_key".to_string());

let backends: Vec<&dyn Chainable> = vec![&google_backend, &perplexity_backend];
let initial_input = "Where is France?";

match chain_backends(backends, initial_input).await {
    Ok(output) => println!("Chained Output: {}", output),
    Err(e) => eprintln!("Error: {}", e),
}
```
We use Tera templates for prompt templating. We have implemented `chain_backends`, a function that lets you chain various LLMs, functions, and search backends together for your application.
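As a quick illustration of the templating itself, here is a minimal sketch using Tera's one-off rendering (the template string is just an example, not part of our API):

```rust
use tera::{Context, Tera};

// Build a context and render a standalone prompt template against it.
let mut context = Context::new();
context.insert("input", "Rust programming");

// `Tera::one_off` renders a single template string in one shot;
// autoescaping is disabled since prompts are plain text, not HTML.
let prompt = Tera::one_off("what is {{ input }}", &context, false).unwrap();
assert_eq!(prompt, "what is Rust programming");
```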
Function backends currently implemented:

- Bash

For example, chaining Perplexity with a Bash command, rendering each prompt template against a shared context:
```rust
use tera::Context;

let perplexity_backend = PerplexityAI::new("api_key".to_string());
let bash_backend = Bash { capture_stderr: false };

let mut context = Context::new();
context.insert("input", "Rust programming");

// Each backend is paired with the prompt template it should render.
let backends = vec![
    (&perplexity_backend as &dyn Chainable, "what is {{ input }}"),
    (&bash_backend as &dyn Chainable, "echo {{ input }}"),
];

match chain_backends(backends, &mut context).await {
    Ok(result) => println!("Chained Result: {}", result),
    Err(e) => eprintln!("Error: {}", e),
}
```
Loaders can be used to load various types of input from files. These are the loaders currently implemented:

- PDF loader
- HTML loader
- Text loader

For example, loading a PDF:
```rust
use PdfLoader::PdfLoader;

let loader = PdfLoader::new("test.pdf");

// `load` reads and parses the file asynchronously.
match loader.load().await {
    Ok(data) => println!("Loaded data: {:?}", data),
    Err(e) => eprintln!("Error loading data: {}", e),
}
```
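The other loaders can be used the same way. A minimal sketch, assuming the HTML loader mirrors the PDF loader's constructor and `load` method (the `HtmlLoader` name here is an assumption, not a confirmed API):

```rust
// Hypothetical usage, assuming HtmlLoader follows the same pattern as PdfLoader.
let loader = HtmlLoader::new("page.html");

match loader.load().await {
    Ok(data) => println!("Loaded data: {:?}", data),
    Err(e) => eprintln!("Error loading data: {}", e),
}
```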
We provide access to various LLMs by implementing their APIs; these backends can be used in chains and alongside other models, as sketched below:

- OpenAI
- Ollama
- Anthropic (coming soon)
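A minimal sketch of plugging an LLM backend into a chain, assuming the OpenAI backend follows the same constructor pattern as the other backends above (the `OpenAI::new` name is an assumption):

```rust
use tera::Context;

// Hypothetical constructor, assumed to mirror PerplexityAI::new above.
let openai_backend = OpenAI::new("api_key".to_string());

let mut context = Context::new();
context.insert("input", "the capital of France");

// A single-step chain: one backend with one prompt template.
let backends = vec![
    (&openai_backend as &dyn Chainable, "What is {{ input }}?"),
];

match chain_backends(backends, &mut context).await {
    Ok(result) => println!("Result: {}", result),
    Err(e) => eprintln!("Error: {}", e),
}
```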
Embeddings can be used to create numerical vector representations of text; they are useful for computing similarity between sentences, text classification, and more. These are the models we currently support:

- OpenAI
- Ollama
- Anthropic (coming soon)
You can compute cosine similarity between two texts using any of the above embedding backends:
```rust
use similarity::compute_cosine_similarity;

let openai_backend = OpenAIEmbeddings::new("api_key".to_string());
let model = "text-embedding-ada-002";

let text1 = "The quick brown fox jumps over the lazy dog.";
let text2 = "A fast, dark-colored fox leaps above a sleepy canine.";

match compute_cosine_similarity(&openai_backend, text1, text2, model).await {
    Ok(similarity) => println!("Cosine Similarity: {}", similarity),
    Err(e) => eprintln!("Error: {}", e),
}
```
The same works with a local Ollama instance:

```rust
// Make sure Ollama is running on your machine first.
let emb: OllamaEmbeddings = OllamaEmbeddings::new("http://localhost", 11434);
let model = "dolphin-phi";

let text1 = "The quick brown fox jumps over the lazy dog.";
let text2 = "A fast, dark-colored fox leaps above a sleepy canine.";

match compute_cosine_similarity(&emb, text1, text2, model).await {
    Ok(similarity) => println!("Cosine Similarity: {}", similarity),
    Err(e) => eprintln!("Error: {}", e),
}
```
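Under the hood, cosine similarity is the dot product of the two embedding vectors divided by the product of their magnitudes, giving a value in [-1, 1] (1 means the vectors point in the same direction). A minimal sketch of the formula itself in Rust (for reference, not our internal implementation):

```rust
// Cosine similarity: dot(a, b) / (|a| * |b|).
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    assert_eq!(a.len(), b.len(), "vectors must have the same dimension");

    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();

    dot / (norm_a * norm_b)
}
```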
Vector stores can be used to persist and query embedding vectors:

- Qdrant (coming soon)
Our framework also provides a simple logger to log prompts, templates, and any other metric you like!
```rust
use crate::logging::{Experiment, LlmLogger};

// Create a new LlmLogger instance backed by a JSON file.
let mut logger = LlmLogger::new("experiments.json".to_string());

// Log a new experiment.
let prompt = "What is the capital of France?".to_string();
let experiment_id = logger.log_experiment(prompt);
println!("Logged experiment with ID: {}", experiment_id);

// Log data for the experiment.
logger.log_data(&experiment_id, "answer", "Paris");
logger.log_data(&experiment_id, "accuracy", "0.95");

// Log another experiment.
let prompt = "What is the largest planet in our solar system?".to_string();
let experiment_id = logger.log_experiment(prompt);
println!("Logged experiment with ID: {}", experiment_id);

// Log data for the second experiment.
logger.log_data(&experiment_id, "answer", "Jupiter");
logger.log_data(&experiment_id, "accuracy", "0.98");
logger.log_data(&experiment_id, "model", "GPT-3");

// Display the experiments table.
logger.display_experiments_table();
```
Heavily inspired by https://github.com/srush/MiniChain/tree/main