- Université de Montréal
- Canada
- martin-wey.github.io
- @MWeyssow
Stars
Replication package of the paper "Exploring Parameter-Efficient Fine-Tuning Techniques for Code Generation with Large Language Models".
BigCodeBench: Benchmarking Code Generation Towards AGI
[TMLR] A curated list of language modeling research for code and related datasets.
Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs
Loads multiple LoRA modules simultaneously and automatically switches to the appropriate combination of LoRA modules to generate the best answer for each user query.
A complete computer science study plan to become a software engineer.
[EACL 2024] ICE-Score: Instructing Large Language Models to Evaluate Code
CodeUltraFeedback: aligning large language models to coding preferences
The official implementation of Self-Play Fine-Tuning (SPIN)
Must-read papers on machine learning, deep learning, reinforcement learning and other learning methods for brain-computer interfaces.
Rigorous evaluation of LLM-synthesized code - NeurIPS 2023
Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
✨ RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems - ICLR 2024
Code and documents of LongLoRA and LongAlpaca (ICLR 2024 Oral)
A curated list of papers, theses, datasets, and tools related to the application of Machine Learning for Software Engineering
A playbook for systematically maximizing the performance of deep learning models.
Continual Learning papers list, curated by ContinualAI
Continual Learning tutorials and demo running on Google Colaboratory.
Source code of the paper "Do Syntax Trees Help Pre-trained Transformers Extract Information?" (EACL 2021)
📘 The experiment tracker for foundation model training
PyTorch implementation of Tensor Fusion Networks for multimodal sentiment analysis.
[NAACL 2021] QAGNN: Question Answering using Language Models and Knowledge Graphs 🤖