DOI: 10.1145/3308560.3317586

Collaborative Explanation of Deep Models with Limited Interaction for Trade Secret and Privacy Preservation

Published: 13 May 2019

Abstract

An ever-increasing number of decisions affecting our lives are made by algorithms. For this reason, algorithmic transparency is becoming a pressing need: automated decisions should be explainable and unbiased. A straightforward solution is to make the decision algorithms open-source, so that everyone can verify them and reproduce their outcomes. However, in many situations the source code or the training data of an algorithm cannot be published for industrial or intellectual property reasons, because it is the result of long and costly experience (e.g., in banking or insurance). We present an approach whereby the individual subjects on whom automated decisions are made can elicit, in a collaborative and privacy-preserving manner, a rule-based approximation of the model underlying the decision algorithm, based on limited interaction with the algorithm or even only on how they have been classified. Furthermore, being rule-based, the approximation thus obtained can be used to detect potential discrimination. We present empirical work to demonstrate the practicality of our ideas.
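The paper's actual protocol is not reproduced here; the following is only a minimal sketch of the two building blocks the abstract describes, under stated assumptions: each subject knows only their own record and how they were classified, subjects report their decisions under Warner-style randomized response to preserve privacy, and a rule-based approximation of the black-box model is estimated from the debiased aggregates. The black-box logic and the attribute names (`black_box`, `income_high`, `prior_default`) are illustrative assumptions, not taken from the paper.

```python
import random

random.seed(0)

def randomized_response(true_bit, p=0.75):
    # Warner's randomized response: report the truth with probability p,
    # flip the bit otherwise. No single report reveals the true decision.
    return true_bit if random.random() < p else 1 - true_bit

def unbias(reported_mean, p=0.75):
    # E[reported] = p*pi + (1-p)*(1-pi), so the true proportion pi can be
    # recovered from the aggregate: pi = (mean - (1-p)) / (2p - 1).
    return (reported_mean - (1 - p)) / (2 * p - 1)

# Hypothetical black-box decision model (unknown to the subjects):
# approve iff income is high or there is no prior default.
def black_box(subject):
    return 1 if subject["income_high"] or not subject["prior_default"] else 0

# Simulated subjects; each holds only their own attributes and decision.
subjects = [{"income_high": random.random() < 0.5,
             "prior_default": random.random() < 0.3} for _ in range(5000)]
for s in subjects:
    s["decision"] = black_box(s)

# Each subject contributes a randomized report, grouped by attribute values.
p = 0.75
reports = {}
for s in subjects:
    key = (s["income_high"], s["prior_default"])
    reports.setdefault(key, []).append(randomized_response(s["decision"], p))

# Collaborative rule extraction: estimate P(approve | attributes) per group
# from the debiased aggregate, clamped to [0, 1].
rules = {}
for key, rs in reports.items():
    est = unbias(sum(rs) / len(rs), p)
    rules[key] = max(0.0, min(1.0, est))

for (inc, dflt), prob in sorted(rules.items()):
    print(f"IF income_high={inc} AND prior_default={dflt} "
          f"THEN P(approve) ~ {prob:.2f}")
```

With enough participants, the debiased group estimates converge to the black box's true approval rates, so the printed rules approximate its decision boundary; a disparity in the estimated approval probabilities between otherwise-similar groups would be the kind of signal the abstract proposes for detecting potential discrimination.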


Cited By

  • (2022) Recent Advances in Trustworthy Explainable Artificial Intelligence: Status, Challenges, and Perspectives. IEEE Transactions on Artificial Intelligence 3(6):852-866. DOI: 10.1109/TAI.2021.3133846. Online publication date: Dec 2022.

    Published In

    WWW '19: Companion Proceedings of The 2019 World Wide Web Conference
    May 2019
    1331 pages
    ISBN:9781450366755
    DOI:10.1145/3308560
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

    In-Cooperation

    • IW3C2: International World Wide Web Conference Committee

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Author Tags

    1. Auditing
    2. Explainability
    3. Machine Learning
    4. Privacy
    5. Transparency

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Conference

    WWW '19
    WWW '19: The Web Conference
    May 13 - 17, 2019
    San Francisco, USA

    Acceptance Rates

    Overall Acceptance Rate 1,899 of 8,196 submissions, 23%
