  • Implementing artificial consciousness. Leonard Dung & Luke Kersten - 2024 - Mind and Language 40 (1):1-21.
    Implementationalism maintains that conventional, silicon-based artificial systems are not conscious because they fail to satisfy certain substantive constraints on computational implementation. In this article, we argue that several recently proposed substantive constraints are implausible, or at least are not well-supported, insofar as they conflate intuitions about computational implementation generally and consciousness specifically. We argue instead that the mechanistic account of computation can explain several of the intuitions driving implementationalism and noncomputationalism in a manner which is consistent with artificial consciousness. Our (...)
  • Chatting with Bots: AI, Speech-Acts, and the Edge of Assertion. Iwan Williams & Tim Bayne - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    This paper addresses the question of whether large language model-powered chatbots are capable of assertion. According to what we call the Thesis of Chatbot Assertion (TCA), chatbots are the kinds of things that can assert, and at least some of the output produced by current-generation chatbots qualifies as assertion. We provide some motivation for TCA, arguing that it ought to be taken seriously and not simply dismissed. We also review recent objections to TCA, arguing that these objections are weighty. We (...)
  • Still no lie detector for language models: probing empirical and conceptual roadblocks. Benjamin A. Levinstein & Daniel A. Herrmann - forthcoming - Philosophical Studies:1-27.
    We consider the questions of whether or not large language models (LLMs) have beliefs, and, if they do, how we might measure them. First, we consider whether or not we should expect LLMs to have something like beliefs in the first place. We consider some recent arguments aiming to show that LLMs cannot have beliefs. We show that these arguments are misguided. We provide a more productive framing of questions surrounding the status of beliefs in LLMs, and highlight the empirical (...)