Federated Machine Learning: Concept and Applications

Published: 28 January 2019

Abstract

Today’s artificial intelligence still faces two major challenges. One is that, in most industries, data exists in the form of isolated islands. The other is the tightening of requirements on data privacy and security. We propose a possible solution to these challenges: secure federated learning. Going beyond the federated-learning framework first proposed by Google in 2016, we introduce a comprehensive secure federated-learning framework that includes horizontal federated learning, vertical federated learning, and federated transfer learning. We provide definitions, architectures, and applications for this framework, and we give a comprehensive survey of existing work on the subject. In addition, we propose building data networks among organizations on top of federated mechanisms as an effective way to share knowledge without compromising user privacy.
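To make the horizontal setting concrete, the sketch below walks through one federated-averaging training loop of the kind such a framework builds on: each client fits a logistic-regression model on its own horizontally partitioned data (same feature space, different users), and a coordinator averages the returned weights in proportion to each client's sample count, so raw data never leaves the clients. This is a minimal illustrative sketch under those assumptions, not the paper's implementation; the names (local_step, fed_avg) and the hyperparameters are ours.

```python
import numpy as np

def local_step(weights, X, y, lr=0.1, epochs=1):
    """One client's local update: logistic-regression SGD on its private data."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid predictions
        grad = X.T @ (preds - y) / len(y)      # gradient of the logistic loss
        w -= lr * grad
    return w

def fed_avg(clients, dim, rounds=20):
    """Coordinator loop: broadcast global weights, collect local updates,
    and average them weighted by each client's sample count."""
    global_w = np.zeros(dim)
    for _ in range(rounds):
        updates, sizes = [], []
        for X, y in clients:                   # raw (X, y) stays on the client
            updates.append(local_step(global_w, X, y))
            sizes.append(len(y))
        total = sum(sizes)
        global_w = sum(w * (n / total) for w, n in zip(updates, sizes))
    return global_w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_w = np.array([1.0, -2.0, 0.5])
    # Horizontal partition: every client sees the same three features
    # but a disjoint set of users.
    clients = []
    for _ in range(3):
        X = rng.normal(size=(200, 3))
        y = (X @ true_w > 0).astype(float)
        clients.append((X, y))
    print("learned weights:", np.round(fed_avg(clients, dim=3), 2))
```

In the vertical setting, by contrast, parties hold different feature columns for the same users, so plain weight averaging no longer applies; the surveyed approaches instead align entities across parties and exchange encrypted intermediate results rather than raw features.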

Published In

ACM Transactions on Intelligent Systems and Technology, Volume 10, Issue 2
Survey Papers and Regular Papers
March 2019
214 pages
ISSN:2157-6904
EISSN:2157-6912
DOI:10.1145/3306498
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 28 January 2019
Accepted: 01 November 2018
Received: 01 September 2018
Published in TIST Volume 10, Issue 2

Author Tags

  1. Federated learning
  2. GDPR
  3. Transfer learning

Qualifiers

  • Survey
  • Research
  • Refereed
