- Quebec City, Canada (UTC -04:00)
Highlights
- Pro
Starred repositories
The code used to train and run inference with the ColPali architecture.
EPFL Course - Optimization for Machine Learning - CS-439
Shortest solutions for CS231n 2021-2024
🌐 Jekyll is a blog-aware static site generator in Ruby
Open-source low-code data preparation library in Python. Collect, clean, and visualize your data in Python with a few lines of code.
Open-source scientific and technical publishing system built on Pandoc.
A better dotenv, from the creator of `dotenv`
skimpy is a lightweight tool that provides summary statistics about variables in data frames within the console.
Create markdown-backed Kanban boards in Obsidian.
QLoRA: Efficient Finetuning of Quantized LLMs
The SpeechBrain project aims to build a novel speech toolkit fully based on PyTorch. With SpeechBrain users can easily create speech processing systems, ranging from speech recognition (both HMM/DN…
Helpers and such for working with Lambda Cloud
A Gradio web UI for Large Language Models.
Training open neural machine translation models
Notes, programming assignments and quizzes from all courses within the Coursera Deep Learning specialization offered by deeplearning.ai: (i) Neural Networks and Deep Learning; (ii) Improving Deep N…
Public facing notes page
Scripts for fine-tuning Meta Llama3 with composable FSDP & PEFT methods to cover single/multi-node GPUs. Supports default & custom datasets for applications such as summarization and Q&A. Supportin…
Efficient few-shot learning with cross-encoders.
Course repository for the session "Hands-on Transformers: Fine-Tune your own BERT and GPT" of the Data Science Summer School 2023
Easy to use, state-of-the-art Neural Machine Translation for 100+ languages
An autoregressive character-level language model for making more things
A Native-PyTorch Library for LLM Fine-tuning
Contains solutions and notes for the Machine Learning Specialization by Stanford University and DeepLearning.AI - Coursera (2022) by Prof. Andrew Ng
Finetune Llama 3.1, Mistral, Phi & Gemma LLMs 2-5x faster with 80% less memory