Browsing by Author "Hasselberger, William"
Now showing 1–2 of 2
- “Where lies the grail? AI, common sense, and human practical intelligence”
  Publication. Hasselberger, William; Lott, Micah
  The creation of machines with intelligence comparable to human beings—so-called “human-level” and “general” intelligence—is often regarded as the Holy Grail of Artificial Intelligence (AI) research. Many prominent discussions of AI lean heavily on the notion of human-level intelligence to frame AI research, yet rely on conceptions of human cognitive capacities, including “common sense,” that are sketchy, one-sided, philosophically loaded, and highly contestable. Our goal in this essay is to bring into view some underappreciated features of the practical intelligence involved in ordinary human agency. These features of practical intelligence are implicit in the structure of our first-person experience of embodied and situated agency, deliberation, and human interaction. We argue that spelling out these features and their implications reveals a fundamental distinction between two forms of intelligence in action, or what we call “efficient task-completion” versus “intelligent engagement in activity.” This distinction helps us to see what is missing from some widely accepted ways of thinking about human-level intelligence in AI, and how human common sense is actually tied, conceptually, to the ideal of practical wisdom, or good (normative) judgment about how to act and live well. Finally, our analysis, if sound, also has implications for the important ethical question of what it means to have AI systems that are aligned with human values, or the so-called “value alignment” problem for artificial intelligence.
- “With friends like these: love and friendship with AI agents”
  Publication. Lott, Micah; Hasselberger, William
  This paper focuses on a central question for Human-AI interaction: Can you be friends with an AI agent? If not, why not? Some have argued that friendship with AI agents is impossible because software artifacts do not, and cannot, care about you. Proponents of human–machine friendships have responded that such relationships may indeed be one-sided, but still count as relationships of genuine love and affection—perhaps constituting a whole new category of friendship. Our paper takes a different path. We argue that you cannot be friends with an AI agent because you cannot sensibly be a friend to an AI agent. Being a friend to an AI would require caring about the good of the AI agent for its own sake, and it does not make sense to care about an AI agent in that way, since these agents lack a good of their own. After spelling out this argument, and responding to several objections, we highlight some initial implications of our argument, the most important of which is that the very idea of a tool, or technological fix, to address social isolation and loneliness is misguided.
