- University College Dublin
- https://www.linkedin.com/in/joana-tirana-85277919b/
Stars
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
- Training and serving large-scale neural networks with auto parallelization.
- Make huge neural nets fit in memory.
- Proper implementation of ResNets for CIFAR-10/100 in PyTorch that matches the description of the original paper.
- Federated Learning Benchmark: Federated Learning on Non-IID Data Silos, An Experimental Study (ICDE 2022).
- The first open Federated Learning framework implemented in C++ and Python.
- Python-MIP: a collection of Python tools for the modeling and solution of Mixed-Integer Linear Programs.
- A library for federated learning (a distributed machine learning process) in an enterprise environment.
- Wide Residual Networks (WideResNets) in PyTorch.
- Implementation of a DP-based federated learning framework using PyTorch.
- Training neural networks in TensorFlow 2.0 with 5x less memory.
- Releasing the source code, Version 1.
- EfficientNetV2 implementation using PyTorch.
- Investigating Split Learning and Federated Learning.
- A PyTorch implementation of MobileNetV2 on CIFAR-10.
- EfficientNetV2 PyTorch (PyTorch Lightning) implementation with a pretrained model.
- PipeTransformer: Automated Elastic Pipelining for Distributed Training of Large-scale Models (ICML 2021).
- [ICLR 2022] Efficient Split-Mix federated learning for in-situ model customization during both training and testing time.
- FedDCT: A Novel Federated Learning Approach for Training Large Convolutional Neural Networks.
- Reveals the vulnerabilities of SplitNN.
- LuckMonkeys/arxiv-daily (forked from Vincentqyw/cv-arxiv-daily): 🎓 Automatically update distributed learning papers daily using GitHub Actions (updates every 12 hours).
- Applied Split Learning in PyTorch with torch.distributed.rpc and torch.distributed.autograd.
- Adaptive Resource-Aware Split-Learning, a framework for efficient model training in IoT systems.