  • Automated Influence and Value Collapse. Dylan J. White - 2024 - American Philosophical Quarterly 61 (4):369-386.
    Automated influence is one of the most pervasive applications of artificial intelligence in our day-to-day lives, yet a thoroughgoing account of its associated individual and societal harms is lacking. By far the most widespread, compelling, and intuitive account of the harms associated with automated influence follows what I call the control argument. This argument suggests that users are persuaded, manipulated, and influenced by automated influence in a way that they have little or no control over. Based on evidence about the (...)
  • Attention, Moral Skill, and Algorithmic Recommendation. Nick Schuster & Seth Lazar - 2024 - Philosophical Studies.
    Recommender systems are artificial intelligence technologies, deployed by online platforms, that model our individual preferences and direct our attention to content we’re likely to engage with. As the digital world has become increasingly saturated with information, we’ve become ever more reliant on these tools to efficiently allocate our attention. And our reliance on algorithmic recommendation may, in turn, reshape us as moral agents. While recommender systems could in principle enhance our moral agency by enabling us to cut through the information (...)
  • Tightlacing and Abusive Normative Address. Alexander Edlich & Alfred Archer - 2023 - Ergo: An Open Access Journal of Philosophy 10.
    In this paper, we introduce a distinctive kind of psychological abuse we call Tightlacing. We begin by presenting four examples and argue that there is a distinctive form of abuse in these examples that cannot be captured by our existing moral categories. We then outline our diagnosis of this distinctive form of abuse. Tightlacing consists in inducing a mistaken self-conception in others that licenses overburdening demands on them such that victims apply those demands to themselves. We discuss typical Tightlacing strategies (...)
  • Hiring, Algorithms, and Choice: Why Interviews Still Matter. Vikram R. Bhargava & Pooria Assadi - 2024 - Business Ethics Quarterly 34 (2):201-230.
    Why do organizations conduct job interviews? The traditional view of interviewing holds that interviews are conducted, despite their steep costs, to predict a candidate’s future performance and fit. This view faces a twofold threat: the behavioral and algorithmic threats. Specifically, an overwhelming body of behavioral research suggests that we are bad at predicting performance and fit; furthermore, algorithms are already better than us at making these predictions in various domains. If the traditional view captures the whole story, then interviews seem (...)
  • In Defense of ‘Surveillance Capitalism’. Peter Königs - 2024 - Philosophy and Technology 37 (4):1-33.
    Critics of Big Tech often describe ‘surveillance capitalism’ in grim terms, blaming it for all kinds of political and social ills. This article counters this pessimistic narrative, offering a more favorable take on companies like Google, YouTube, and Twitter/X. It argues that the downsides of surveillance capitalism are overstated, while the benefits are largely overlooked. Specifically, the article examines six critical areas: i) targeted advertising, ii) the influence of surveillance capitalism on politics, iii) its impact on mental health, iv) its (...)
  • Institutions, Automation, and Legitimate Expectations. Jelena Belic - 2024 - The Journal of Ethics 28 (3):505-525.
    Debates concerning digital automation are mostly focused on the question of the availability of jobs in the short and long term. To counteract the possible negative effects of automation, it is often suggested that those at risk of technological unemployment should have access to retraining and reskilling opportunities. What is often missing from these debates is consideration of the implications that all of this may have for individual autonomy, understood as the ability to make and develop long-term plans. In this paper, I argue (...)