
Citations of:

Ethics of Artificial Intelligence and Robotics

In Edward N. Zalta (ed.), Stanford Encyclopedia of Philosophy. pp. 1-70 (2020)

  • AI-Testimony, Conversational AIs and Our Anthropocentric Theory of Testimony.Ori Freiman - 2024 - Social Epistemology 38 (4):476-490.
    The ability to interact in a natural language profoundly changes devices’ interfaces and potential applications of speaking technologies. Concurrently, this phenomenon challenges our mainstream theories of knowledge, such as how to analyze linguistic outputs of devices under existing anthropocentric theoretical assumptions. In section 1, I present the topic of machines that speak, connecting between Descartes and Generative AI. In section 2, I argue that accepted testimonial theories of knowledge and justification commonly reject the possibility that a speaking technological artifact can (...)
  • Nonhuman Moral Agency: A Practice-Focused Exploration of Moral Agency in Nonhuman Animals and Artificial Intelligence.Dorna Behdadi - 2023 - Dissertation, University of Gothenburg
    Can nonhuman animals and artificial intelligence (AI) entities be attributed moral agency? The general assumption in the philosophical literature is that moral agency applies exclusively to humans since they alone possess free will or capacities required for deliberate reflection. Consequently, only humans have been taken to be eligible for ascriptions of moral responsibility in terms of, for instance, blame or praise, moral criticism, or attributions of vice and virtue. Animals and machines may cause harm, but they cannot be appropriately ascribed (...)
  • Kantian Ethics and the Attention Economy.Timothy Aylsworth & Clinton Castro - 2024 - Palgrave Macmillan.
    In this open access book, Timothy Aylsworth and Clinton Castro draw on the deep well of Kantian ethics to argue that we have moral duties, both to ourselves and to others, to protect our autonomy from the threat posed by the problematic use of technology. The problematic use of technologies like smartphones threatens our autonomy in a variety of ways, and critics have only begun to appreciate the vast scope of this problem. In the last decade, we have seen a (...)
  • Responsibility Internalism and Responsibility for AI.Huzeyfe Demirtas - 2023 - Dissertation, Syracuse University
    I argue for responsibility internalism. That is, moral responsibility (i.e., accountability, or being apt for praise or blame) depends only on factors internal to agents. Employing this view, I also argue that no one is responsible for what AI does but this isn’t morally problematic in a way that counts against developing or using AI. Responsibility is grounded in three potential conditions: the control (or freedom) condition, the epistemic (or awareness) condition, and the causal responsibility condition (or consequences). I argue (...)
  • AI as IA: The use and abuse of artificial intelligence (AI) for human enhancement through intellectual augmentation (IA).Alexandre Erler & Vincent C. Müller - 2023 - In Fabrice Jotterand & Marcello Ienca (eds.), The Routledge Handbook of the Ethics of Human Enhancement. Routledge. pp. 187-199.
    This paper offers an overview of the prospects and ethics of using AI to achieve human enhancement, and more broadly what we call intellectual augmentation (IA). After explaining the central notions of human enhancement, IA, and AI, we discuss the state of the art in terms of the main technologies for IA, with or without brain-computer interfaces. Given this picture, we discuss potential ethical problems, namely inadequate performance, safety, coercion and manipulation, privacy, cognitive liberty, authenticity, and fairness in more detail. (...)
  • The history of digital ethics.Vincent C. Müller - 2023 - In Carissa Véliz (ed.), The Oxford Handbook of Digital Ethics. Oxford University Press. pp. 1-18.
    Digital ethics, also known as computer ethics or information ethics, is now a lively field that draws a lot of attention, but how did it come about and what were the developments that led to its existence? What are the traditions, the concerns, the technological and social developments that pushed digital ethics? How did ethical issues change with the digitalisation of human life? How did the traditional discipline of philosophy respond? The article provides an overview, proposing historical epochs: ‘pre-modernity’ prior to (...)
  • Artificial Intelligence Systems, Responsibility and Agential Self-Awareness.Lydia Farina - 2022 - In Vincent C. Müller (ed.), Philosophy and Theory of Artificial Intelligence 2021. Berlin: Springer. pp. 15-25.
    This paper investigates the claim that Artificial Intelligence Systems cannot be held morally responsible because they do not have an ability for agential self-awareness, e.g. they cannot be aware that they are the agents of an action. The main suggestion is that if agential self-awareness and related first-person representations presuppose an awareness of a self, the possibility of responsible artificial intelligence systems cannot be evaluated independently of research conducted on the nature of the self. Focusing on a specific account (...)
  • Karl Jaspers and artificial neural nets: on the relation of explaining and understanding artificial intelligence in medicine.Christopher Poppe & Georg Starke - 2022 - Ethics and Information Technology 24 (3):1-10.
    Assistive systems based on Artificial Intelligence (AI) are bound to reshape decision-making in all areas of society. One of the most intricate challenges arising from their implementation in high-stakes environments such as medicine concerns their frequently unsatisfying levels of explainability, especially in the guise of the so-called black-box problem: highly successful models based on deep learning seem to be inherently opaque, resisting comprehensive explanations. This may explain why some scholars claim that research should focus on rendering AI systems understandable, rather (...)
  • Basic issues in AI policy.Vincent C. Müller - 2022 - In Maria Amparo Grau-Ruiz (ed.), Interactive robotics: Legal, ethical, social and economic aspects. Springer. pp. 3-9.
    This extended abstract summarises some of the basic points of AI ethics and policy as they present themselves now. We explain the notion of AI, the main ethical issues in AI and the main policy aims and means.
  • Environmental and Biosafety Research Ethics Committees: Guidelines and Principles for Ethics Reviewers in the South African Context.Maricel Van Rooyen - 2021 - Dissertation, Stellenbosch University
    Over the last two decades, there has been an upsurge of research and innovation in biotechnology and related fields, leading to exciting new discoveries in areas such as the engineering of biological processes, gene editing, stem cell research, CRISPR-Cas9 technology, Synthetic Biology, recombinant DNA, LMOs and GMOs, to mention only a few. At the same time, these advances generated concerns about biosafety, biosecurity and adverse impacts on biodiversity and the environment, leading to the establishment of Research Ethics Committees (RECs) at Higher (...)
  • (1 other version)In search of the moral status of AI: why sentience is a strong argument.Martin Gibert & Dominic Martin - 2022 - AI and Society 37 (1):319-330.
    Is it OK to lie to Siri? Is it bad to mistreat a robot for our own pleasure? Under what condition should we grant a moral status to an artificial intelligence (AI) system? This paper looks at different arguments for granting moral status to an AI system: the idea of indirect duties, the relational argument, the argument from intelligence, the arguments from life and information, and the argument from sentience. In each but the last case, we find unresolved issues with (...)
  • Vivir con robots. Reflexiones éticas, jurídicas, sociales y culturales [Living with robots: ethical, legal, social and cultural reflections].Mario Toboso Martín & María Amparo Grau Ruiz - 2021 - Arbor 197 (802):a623.
  • The Future Ethics of Artificial Intelligence in Medicine: Making Sense of Collaborative Models.Torbjørn Gundersen & Kristine Bærøe - 2022 - Science and Engineering Ethics 28 (2):1-16.
    This article examines the role of medical doctors, AI designers, and other stakeholders in making applied AI and machine learning ethically acceptable on the general premises of shared decision-making in medicine. Recent policy documents such as the EU strategy on trustworthy AI and the research literature have often suggested that AI could be made ethically acceptable by increased collaboration between developers and other stakeholders. The article articulates and examines four central alternative models of how AI can be designed and applied (...)
  • Why AI Ethics Is a Critical Theory.Rosalie Waelen - 2022 - Philosophy and Technology 35 (1):1-16.
    The ethics of artificial intelligence is an upcoming field of research that deals with the ethical assessment of emerging AI applications and addresses the new kinds of moral questions that the advent of AI raises. The argument presented in this article is that, even though there exist different approaches and subfields within the ethics of AI, the field resembles a critical theory. Just like a critical theory, the ethics of AI aims to diagnose as well as change society and is (...)
  • Varieties of transparency: exploring agency within AI systems.Gloria Andrada, Robert William Clowes & Paul Smart - 2023 - AI and Society 38 (4):1321-1331.
    AI systems play an increasingly important role in shaping and regulating the lives of millions of human beings across the world. Calls for greater _transparency_ from such systems have been widespread. However, there is considerable ambiguity concerning what “transparency” actually means, and therefore, what greater transparency might entail. While, according to some debates, transparency requires _seeing through_ the artefact or device, widespread calls for transparency imply _seeing into_ different aspects of AI systems. These two notions are in apparent tension with (...)
  • Existential risk from AI and orthogonality: Can we have it both ways?Vincent C. Müller & Michael Cannon - 2021 - Ratio 35 (1):25-36.
    The standard argument to the conclusion that artificial intelligence (AI) constitutes an existential risk for the human species uses two premises: (1) AI may reach superintelligent levels, at which point we humans lose control (the ‘singularity claim’); (2) Any level of intelligence can go along with any goal (the ‘orthogonality thesis’). We find that the singularity claim requires a notion of ‘general intelligence’, while the orthogonality thesis requires a notion of ‘instrumental intelligence’. If this interpretation is correct, they cannot be (...)
  • The Missing Ingredient in the Case for Regulating Big Tech.Bartlomiej Chomanski - 2021 - Minds and Machines 31 (2):257-275.
    Having been involved in a slew of recent scandals, many of the world’s largest technology companies embarked on devising numerous codes of ethics, intended to promote improved standards in the conduct of their business. These efforts have attracted largely critical interdisciplinary academic attention. The critics have identified the voluntary character of the industry ethics codes as among the main obstacles to their efficacy. This is because individual industry leaders and employees, flawed human beings that they are, cannot be relied on (...)
  • Is it time for robot rights? Moral status in artificial entities.Vincent C. Müller - 2021 - Ethics and Information Technology 23 (3):579–587.
    Some authors have recently suggested that it is time to consider rights for robots. These suggestions are based on the claim that the question of robot rights should not depend on a standard set of conditions for ‘moral status’; but instead, the question is to be framed in a new way, by rejecting the is/ought distinction, making a relational turn, or assuming a methodological behaviourism. We try to clarify these suggestions and to show their highly problematic consequences. While we find (...)
  • (1 other version)In search of the moral status of AI: why sentience is a strong argument.Martin Gibert & Dominic Martin - 2021 - AI and Society 1:1-12.
    Is it OK to lie to Siri? Is it bad to mistreat a robot for our own pleasure? Under what condition should we grant a moral status to an artificial intelligence system? This paper looks at different arguments for granting moral status to an AI system: the idea of indirect duties, the relational argument, the argument from intelligence, the arguments from life and information, and the argument from sentience. In each but the last case, we find unresolved issues with the (...)
  • First-person representations and responsible agency in AI.Miguel Ángel Sebastián & Fernando Rudy-Hiller - 2021 - Synthese 199 (3-4):7061-7079.
    In this paper I investigate which of the main conditions proposed in the moral responsibility literature are the ones that spell trouble for the idea that Artificial Intelligence Systems could ever be full-fledged responsible agents. After arguing that the standard construals of the control and epistemic conditions don’t impose any in-principle barrier to AISs being responsible agents, I identify the requirement that responsible agents must be aware of their own actions as the main locus of resistance to attribute that kind (...)
  • Getting into the engine room: a blueprint to investigate the shadowy steps of AI ethics.Johan Rochel & Florian Evéquoz - 2021 - AI and Society 36 (2):609-622.
    Enacting an AI system typically requires three iterative phases where AI engineers are in command: selection and preparation of the data, selection and configuration of algorithmic tools, and fine-tuning of the different parameters on the basis of intermediate results. Our main hypothesis is that these phases involve practices with ethical questions. This paper maps these ethical questions and proposes a way to address them in light of a neo-republican understanding of freedom, defined as absence of domination. We thereby identify different (...)
  • From machine ethics to computational ethics.Samuel T. Segun - 2021 - AI and Society 36 (1):263-276.
    Research into the ethics of artificial intelligence is often categorized into two subareas—robot ethics and machine ethics. Many of the definitions and classifications of the subject matter of these subfields, as found in the literature, are conflated, which I seek to rectify. In this essay, I infer that using the term ‘machine ethics’ is too broad and glosses over issues that the term computational ethics best describes. I show that the subject of inquiry of computational ethics is of great value (...)
  • The Ethics of Artificial Intelligence and Robotization in Tourism and Hospitality – A Conceptual Framework and Research Agenda.Stanislav Ivanov & Steven Umbrello - 2021 - Journal of Smart Tourism 1 (2):9-18.
    The impacts that AI and robotics systems can and will have on our everyday lives are already making themselves manifest. However, there is a lack of research on the ethical impacts and means for amelioration regarding AI and robotics within tourism and hospitality. Given the importance of designing technologies that cross national boundaries, and given that the tourism and hospitality industry is fundamentally predicated on multicultural interactions, this is an area of research and application that requires particular attention. Specifically, tourism (...)
  • Images of Artificial Intelligence: a Blind Spot in AI Ethics.Alberto Romele - 2022 - Philosophy and Technology 35 (1):1-19.
    This paper argues that AI ethics has generally neglected the issues related to the science communication of AI. In particular, the article focuses on visual communication about AI and, more specifically, on the use of certain stock images in science communication about AI — in particular, those characterized by an excessive use of blue color and recurrent subjects, such as androgyne faces, half-flesh and half-circuit brains, and variations on Michelangelo’s The Creation of Adam. In the first section, the author (...)
  • Ethical concerns in rescue robotics: a scoping review.Linda Battistuzzi, Carmine Tommaso Recchiuto & Antonio Sgorbissa - 2021 - Ethics and Information Technology 23 (4):863-875.
    Rescue operations taking place in disaster settings can be fraught with ethical challenges. Further ethical challenges will likely be introduced by the use of robots, which are expected to soon become commonplace in search and rescue missions and disaster recovery efforts. To help focus timely reflection on the ethical considerations associated with the deployment of rescue robots, we have conducted a scoping review exploring the relevant academic literature following a widely recognized scoping review framework. Of the 429 papers identified by (...)
  • Actionable Principles for Artificial Intelligence Policy: Three Pathways.Charlotte Stix - 2021 - Science and Engineering Ethics 27 (1):1-17.
    In the development of governmental policy for artificial intelligence that is informed by ethics, one avenue currently pursued is that of drawing on “AI Ethics Principles”. However, these AI Ethics Principles often fail to be actioned in governmental policy. This paper proposes a novel framework for the development of ‘Actionable Principles for AI’. The approach acknowledges the relevance of AI Ethics Principles and homes in on methodological elements to increase their practical implementability in policy processes. As a case study, elements (...)
  • Embedding Values in Artificial Intelligence (AI) Systems.Ibo van de Poel - 2020 - Minds and Machines 30 (3):385-409.
    Organizations such as the EU High-Level Expert Group on AI and the IEEE have recently formulated ethical principles and (moral) values that should be adhered to in the design and deployment of artificial intelligence (AI). These include respect for autonomy, non-maleficence, fairness, transparency, explainability, and accountability. But how can we ensure and verify that an AI system actually respects these values? To help answer this question, I propose an account for determining when an AI system can be said to embody (...)
  • To Each Technology Its Own Ethics: The Problem of Ethical Proliferation.Henrik Skaug Sætra & John Danaher - 2022 - Philosophy and Technology 35 (4):1-26.
    Ethics plays a key role in the normative analysis of the impacts of technology. We know that computers in general and the processing of data, the use of artificial intelligence, and the combination of computers and/or artificial intelligence with robotics are all associated with ethically relevant implications for individuals, groups, and society. In this article, we argue that while all technologies are ethically relevant, there is no need to create a separate ‘ethics of X’ or ‘X ethics’ for each and (...)
  • Social Robotics and the Good Life: The Normative Side of Forming Emotional Bonds with Robots.Janina Loh & Wulf Loh (eds.) - 2022 - Transcript Verlag.
    Robots as social companions in close proximity to humans have a strong potential of becoming more and more prevalent in the coming years, especially in the realms of elder day care, child rearing, and education. As human beings, we have the fascinating ability to emotionally bond with various counterparts, not exclusively with other human beings, but also with animals, plants, and sometimes even objects. Therefore, we need to answer the fundamental ethical questions that concern human-robot-interactions per se, and we need (...)
  • Perspectives on computing ethics: a multi-stakeholder analysis.Damian Gordon, Ioannis Stavrakakis, J. Paul Gibson, Brendan Tierney, Anna Becevel, Andrea Curley, Michael Collins, William O’Mahony & Dympna O’Sullivan - 2022 - Journal of Information, Communication and Ethics in Society 20 (1):72-90.
    Purpose: Computing ethics represents a long established, yet rapidly evolving, discipline that grows in complexity and scope on a near-daily basis. Therefore, to help understand some of that scope it is essential to incorporate a range of perspectives, from a range of stakeholders, on current and emerging ethical challenges associated with computer technology. This study aims to achieve this by means of a three-pronged stakeholder analysis of Computer Science academics, ICT industry professionals, and citizen groups, undertaken to explore what they (...)