A flexible, modular, and easy-to-use library to facilitate federated learning research and development in healthcare settings
Updated Jun 21, 2024 - Python
An open framework for Federated Learning.
Auto-Multilift is a novel learning framework for cooperative load transportation with quadrotors. It can automatically tune various MPC hyperparameters, which are modeled by DNNs and difficult to tune manually, via reinforcement learning in a distributed and closed-loop manner.
FedStream: Prototype-Based Federated Learning on Distributed Concept-drifting Data Streams
CycleSL: Server-Client Cyclical Update Driven Scalable Split Learning
Extremely Randomized Trees with Privacy Preservation for Distributed Data (k-PPD-ERT)
Docker CLI package for the vantage6 infrastructure
subMFL: Compatible subModel Generation for Federated Learning in Device Heterogeneous Environment
[TMLR] CoDeC: Communication-Efficient Decentralized Continual Learning
Distributed machine learning using processes
A Federated Learning based Android Malware Classification System
Sparse Convex Optimization Toolkit (SCOT)
Federated Learning Utilities and Tools for Experimentation
Federated Learning experiments using the Intel OpenFL framework with diverse machine learning models on image and tabular datasets, applicable to different domains such as medicine and banking.
Dist-DGL running on WSL2 and minikube on a single machine
FedGraphNN: A Federated Learning Platform for Graph Neural Networks with MLOps Support. An earlier research version was accepted at the ICLR 2021 DPML and MLSys 2021 GNNSys workshops.
This repository contains the codebase for the paper entitled "The learning costs of Federated Learning in constrained scenarios"
A script for training ConvNextV2 on the CIFAR10 dataset using FSDP (Fully Sharded Data Parallel) for distributed training.
CD-GraB is a distributed gradient balancing framework that aims to find distributed data permutations with provably better convergence guarantees than Distributed Random Reshuffling (D-RR). https://arxiv.org/pdf/2302.00845.pdf
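Many of the repositories listed above build on federated averaging (FedAvg), in which a server combines client model updates weighted by local dataset size. As a minimal illustration (the function and variable names here are hypothetical and not taken from any of these projects), the server-side aggregation step can be sketched in pure Python:

```python
# Hypothetical minimal sketch of the FedAvg aggregation step.
# Model "weights" are plain lists of floats; client_sizes holds the
# number of local training samples per client, used to weight each
# client's contribution to the global model.

def fedavg(client_weights, client_sizes):
    """Return the sample-size-weighted average of client parameters."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    averaged = [0.0] * n_params
    for weights, size in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            averaged[i] += w * size / total
    return averaged

# Two clients with unequal data: the client holding more samples
# pulls the global model toward its own parameters.
global_model = fedavg([[1.0, 2.0], [3.0, 4.0]], client_sizes=[1, 3])
print(global_model)  # [2.5, 3.5]
```

Real frameworks operate on tensors and add compression, privacy, or drift handling on top, but the weighting logic is the same.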