Stars
X-raying Models with A Matrix to Reveal Backdoor Attacks for Federated Learning.
Predicting future stock returns with an LSTM over stock-factor data, using LRP (layer-wise relevance propagation) to improve network interpretability
Differentially private machine learning
A PyTorch implementation of the explainable-AI work "Contrastive layerwise relevance propagation (CLRP)"
A PyTorch 1.6 implementation of Layer-Wise Relevance Propagation (LRP).
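Several of the entries above implement layer-wise relevance propagation. As a rough illustration (a minimal NumPy sketch, not taken from any of these repos), the epsilon-rule for a single linear layer redistributes the relevance at the layer's output back to its inputs in proportion to each input's contribution to the pre-activation:

```python
import numpy as np

def lrp_epsilon(a, W, R_out, eps=1e-6):
    """Epsilon-rule LRP for one linear layer (illustrative sketch).

    a     : (d_in,)  input activations
    W     : (d_in, d_out) weight matrix
    R_out : (d_out,) relevance at the layer output
    returns (d_in,) relevance redistributed to the inputs.
    """
    z = a @ W                    # pre-activations z_k = sum_j a_j * W_jk
    z = z + eps * np.sign(z)     # stabilizer to avoid division by ~0
    s = R_out / z                # per-output relevance-to-activation ratio
    return a * (W @ s)           # R_j = a_j * sum_k W_jk * s_k

# Toy check: with a small eps, total relevance is conserved
a = np.array([1.0, 2.0])
W = np.array([[0.5, -1.0],
              [1.0,  0.6]])
R_in = lrp_epsilon(a, W, np.array([1.0, 0.0]))
```

Up to the stabilizer term, `R_in.sum()` equals `R_out.sum()` — the conservation property that LRP libraries verify layer by layer.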
torch-optimizer -- a collection of optimizers for PyTorch
Interactive visualization of 5 popular gradient-descent methods, with step-by-step illustrations and a hyperparameter-tuning UI
Repository for NeurIPS 2020 Spotlight "AdaBelief Optimizer: Adapting stepsizes by the belief in observed gradients"
jkhlot / AMSGradpytorch
Forked from wikaiqi/AMSGradpytorch
Experiments on AMSGrad -- PyTorch version
zzwlinux / AdaBound
Forked from Luolc/AdaBound
An optimizer that trains as fast as Adam and as good as SGD.
Some improvements on Adam
Codes accompanying the paper "LaProp: a Better Way to Combine Momentum with Adaptive Gradient"
Simple TensorFlow implementation of "On the Convergence of Adam and Beyond" (ICLR 2018)
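The AMSGrad variant studied in "On the Convergence of Adam and Beyond" differs from Adam in a single line: the second-moment estimate used in the denominator is the running *maximum* of the usual exponential moving average, so the effective step size can never grow. A minimal NumPy sketch of one update (illustrative, not code from the repos above):

```python
import numpy as np

def amsgrad_step(theta, grad, state, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    """One AMSGrad update. state = (m, v, vhat)."""
    m, v, vhat = state
    m = b1 * m + (1 - b1) * grad          # first moment (momentum)
    v = b2 * v + (1 - b2) * grad**2       # second moment, as in Adam
    vhat = np.maximum(vhat, v)            # AMSGrad: monotone max of v
    theta = theta - lr * m / (np.sqrt(vhat) + eps)
    return theta, (m, v, vhat)

# Sanity check: minimize f(x) = x^2 (gradient 2x) from x = 1
theta = np.array([1.0])
state = (np.zeros(1), np.zeros(1), np.zeros(1))
for _ in range(2000):
    theta, state = amsgrad_step(theta, 2 * theta, state)
```

Because `vhat` never decreases, the per-coordinate learning rate is non-increasing, which is exactly the property the paper uses to repair Adam's convergence proof.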
The code for the paper: https://arxiv.org/abs/1806.06317
Algorithms to recover input data from their gradient signal through a neural network
Implementation of a DP-based federated learning framework in PyTorch
Concentrated Differentially Private Gradient Descent with Adaptive per-iteration Privacy Budget
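The DP training repos above all build on the same primitive: clip each per-example gradient to a fixed L2 norm, average, and add Gaussian noise calibrated to that clipping bound. A minimal sketch of one such noisy step (the function name and parameters are hypothetical, not from any repo listed here):

```python
import numpy as np

def dp_sgd_step(theta, per_example_grads, lr=0.1, clip=1.0,
                noise_mult=1.0, rng=None):
    """One DP-SGD-style update (illustrative sketch).

    Clips each per-example gradient to L2 norm `clip`, averages,
    and adds Gaussian noise with std noise_mult * clip / n.
    """
    rng = rng or np.random.default_rng(0)
    clipped = [g * min(1.0, clip / max(np.linalg.norm(g), 1e-12))
               for g in per_example_grads]
    mean = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_mult * clip / len(per_example_grads),
                       size=mean.shape)
    return theta - lr * (mean + noise)

# With noise_mult=0 the clipping alone is visible: the [3, 4] gradient
# (norm 5) is rescaled to [0.6, 0.8] before averaging
grads = [np.array([3.0, 4.0]), np.array([0.1, 0.0])]
theta = dp_sgd_step(np.zeros(2), grads, lr=1.0, noise_mult=0.0)
```

The adaptive-budget work above varies how much of the privacy budget (and hence how much noise) each iteration spends; the clip-average-noise skeleton stays the same.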
Diffprivlib: The IBM Differential Privacy Library
Differential Privacy Preservation in Deep Learning under Model Attacks
Analytic calibration for differential privacy with Gaussian perturbations
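For context on the last entry: the classical Gaussian mechanism calibrates the noise scale as sigma = Delta * sqrt(2 ln(1.25/delta)) / epsilon, which is valid only for epsilon < 1 and is not tight; the "analytic" calibration (Balle & Wang, 2018) that the repo implements numerically inverts the exact Gaussian-mechanism privacy curve to get a smaller sigma. A sketch of the classical baseline for comparison (the function names are illustrative, not the repo's API):

```python
import math

def classical_gaussian_sigma(sensitivity, epsilon, delta):
    """Classical Gaussian-mechanism calibration (requires epsilon < 1):
    sigma = Delta * sqrt(2 * ln(1.25 / delta)) / epsilon.
    The analytic calibration yields a strictly smaller sigma."""
    return sensitivity * math.sqrt(2 * math.log(1.25 / delta)) / epsilon

# Example: sensitivity 1, epsilon 0.5, delta 1e-5
sigma = classical_gaussian_sigma(1.0, 0.5, 1e-5)
```

Adding `N(0, sigma^2)` noise to a query with L2 sensitivity `Delta` then satisfies (epsilon, delta)-differential privacy under the classical analysis.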