On AI Anthropomorphism

Highlights

  • Anthropomorphism is the act of projecting human-like qualities or behavior onto non-human entities, such as when people give animals, objects, or natural phenomena human-like characteristics or emotions. (View Highlight)
  • “My apologies, but I won’t be able to help you with that request.” The simple alternative of “GPT-4 has been designed by OpenAI so that it does not respond to requests like this one” would clarify responsibility and avoid the deceptive use of first person pronouns. In my world, machines are not an “I” and shouldn’t pretend to be human. (View Highlight)
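
A minimal sketch of the rewording Ben describes, assuming a hypothetical UI layer between the model and the user; the function name, the fixed refusal strings, and the model/organization names are illustrative placeholders, not from the source:

```python
# Sketch: rewrite a first-person refusal into a statement that names who is
# responsible. MODEL_NAME, RESPONSIBLE_PARTY, and the refusal strings below
# are illustrative assumptions, not an actual API.

MODEL_NAME = "GPT-4"
RESPONSIBLE_PARTY = "OpenAI"

FIRST_PERSON_REFUSALS = {
    "My apologies, but I won't be able to help you with that request.",
    "I'm sorry, but I can't help with that.",
}

def attribute_refusal(raw_reply: str) -> str:
    """Replace a first-person refusal with a responsibility-attributing message."""
    if raw_reply.strip() in FIRST_PERSON_REFUSALS:
        return (f"{MODEL_NAME} has been designed by {RESPONSIBLE_PARTY} so that "
                "it does not respond to requests like this one.")
    return raw_reply  # non-refusal replies pass through unchanged

print(attribute_refusal("My apologies, but I won't be able to help you with that request."))
```

The point is only that responsibility-attributing phrasing can be a deliberate choice in the UI layer rather than a property of the underlying model.
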
  • There are many accounts of created beings that take on a form of awareness or agency, from the historical golem (not the Tolkien one, called “Gollum”) to Mary Wollstonecraft Shelley and all the way back to the myth of Pygmalion and Galatea. (View Highlight)
  • Second, we are just now exploring what it means for an algorithm to be a non-determinate conversational partner. The LLMs are, I agree, stochastic parrots, and hence “mind”-less. Nonetheless, they have a convincing social presence (View Highlight)
  • I suspect that we will need to break down our human / non-human binary into a dimension, or into multiple dimensions (View Highlight)
  • Elizabeth Phillips and colleagues (2016) have explored the deeper relationships that we have with some animals, with dogs being the primary example of a social presence. See also Haraway’s (2003) concept of companion species, and intriguingly Fijn’s (2011) work on human relationships with Mongolian lasso-pole ponies. (View Highlight)
  • I think that, as with animals, there are degrees of sociality, or degrees of social presence, that may be applicable to computational things (View Highlight)
  • I would support strict regulations to protect people from AIs that are feigning human-ness in order to fool people. To me, that is a separate set of concerns from how we explore new technologies. (View Highlight)
  • It’s one thing for an ordinary artifact user to make human-like references for boats, cars, or Roombas, but I see it as a problem when designers use that language, resulting in poor products. (View Highlight)
  • A deadly design mistake was Elon Musk’s insistence that since human drivers used only eyes, his Teslas would use only video. By preventing the use of radar or LIDAR, over the objections of his engineers, he has designed a suboptimal system that produces deadly results. (View Highlight)
  • Metaphors matter (Lakoff & Johnson, 2006), so designers should be alert to how their belief that computers should communicate in natural language, just like people do, leads to their failure to use computer capabilities such as information abundant displays of visual information. (View Highlight)
  • Mumford describes how initial designs based on human or animal models are an obstacle that needs to be overcome in developing new technologies: “the most ineffective kind of machine is the realistic mechanical imitation of a man[/woman] or another animal.” (View Highlight)
  • Mumford’s point is that the distinctive capabilities of technology, such as wheels, jet engines, or high-resolution computer displays, may be overlooked if designers stick with the “bio-inspired” notions, such as conversational interfaces. (View Highlight)
  • it is understandable that anthropomorphic phrasing would be offered as an initial design for AI systems, but getting beyond this stage will enable designers to take better advantage of sophisticated algorithms, huge databases, superhuman sensors, information abundant displays, and superior user interfaces. (View Highlight)
  • “How Can Humans Relate to Non-Human Intelligences?” For me, that is closer to the core issue, and anthropomorphism is a sub-question among many possible explorations of strange non-human entities. (View Highlight)
  • Anthropomorphism is one metaphorical approach to new ideas and new entities. In my view, metaphors become figures of thought, through which we can articulate some of that strangeness. (View Highlight)
  • While everyone is talking about LLMs and FMs, some of us (including you and me) are thinking hard about the UIs to those LLMs. (View Highlight)
  • The LLM layer is probably a “we” — after all, it contains the non-consensually harvested materials from hundreds of thousands of humans. Or maybe I should have said “captured materials.” Or “stolen voices.” (View Highlight)
  • The UI layer may be an “I”, because that’s the style of interaction that seems to work for us humans. I think you would prefer that the UI layer is an “it” (View Highlight)
  • Our experiments with a personified UI to an LLM have been quite successful. No one who uses our Programmer’s Assistant prototype (Ross et al. 2023) is confused about its ontological status. No one mistakes it as anything other than a smart toaster, but it turns out to be a transformatively helpful smart toaster. (Not “transformatively smart,” just “transformatively helpful.”) So now we have Clippy and BOB as examples of failures, but we also have our Programmer’s Assistant as an example of a success. (View Highlight)
  • Michael suggests that our shared question is “How Can Humans Relate to Non-Human Intelligences?” But I disagree that machines should be described as intelligent. I reserve certain words such as think, know, understand, intelligence, knowledge, wisdom, etc. for people, and find other words for describing what machines do. I have done this in all six editions of Designing the User Interface (2016) and I think it was an important productive decision. (View Highlight)
  • “anthropomorphising systems can lead to overreliance or unsafe use” (Weidinger et al., 2022). (View Highlight)
  • By elevating machines to human capabilities, we diminish the specialness of people. I’m eager to preserve the distinction and clarify responsibility. So I do not think machines should use first-person pronouns, but should describe who is responsible for the system or simply respond in a machine-like way. (View Highlight)
  • I think sheepdogs and hunting dogs present interesting cases. We (humans, not Michael) send them out to do things (Kaminski and Nitzschner, 2013). We coordinate our actions with them — sometimes over distances. They coordinate their actions with us. (View Highlight)
  • But dogs are an intermediate case. They have a social presence. They have something like a mind. They have their own goals, and sometimes their goals and our goals may be aligned, and sometimes not. Sometimes we can change their minds. Sometimes they can change our minds. I think that makes them seem like non-human intelligences (View Highlight)
  • I don’t see LLMs (or, more properly, the UIs to LLMs) as having goals, intentions, and certainly not minds. The UIs that we have built do have social presence. We can design them so that they seem to have distinct personalities — even though we know that smart toasters don’t have personalities. Parrots have something like personalities, but not stochastic parrots (Bender et al., 2021). But stochastic parrots can have a kind of social presence (View Highlight)
  • So you could say intelligence is a continuum, but responsibility is more binary and is an important factor in design. I think the discussion cannot be limited to intelligence, but must include memory, perceptual, cognitive, and motor abilities. (View Highlight)
  • I’m interested in stressing design which clarifies that AI tools are designed by humans and organizations, which are legally responsible for what they do and for what the tools do, although tools can be misused, etc. (View Highlight)
  • I agree that there is a history of people rejecting chatbots. In our experience, the acceptance issue is about poor match of the user’s request to the chatbot’s set of intents (i.e., mapping request to response). We’ve been seeing that the current LLMs seem to provide more appropriate responses, perhaps exactly because they do not use the previous generation’s mapping of utterances-to-intents. I’m not sure that people reject chatbots that use a pronoun. I think they reject chatbots that provide poor service. We may need to do more systematic analyses of the factors that lead to acceptance and the factors that lead to failure. (View Highlight)
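
An illustrative sketch of the previous generation’s utterance-to-intent mapping that this highlight contrasts with LLMs; the intent names, trigger phrases, and fallback message are invented for the example:

```python
# Sketch: a fixed utterance-to-intent router. Any request that does not match a
# known intent falls through to a canned fallback reply, regardless of how
# reasonable the request is. All names and phrases here are hypothetical.

INTENTS = {
    "reset_password": ["reset my password", "forgot my password"],
    "check_balance": ["what is my balance", "account balance"],
}

FALLBACK = "Sorry, that request is not supported."

def route_by_intent(utterance: str) -> str:
    """Return a response for a matched intent, or the canned fallback."""
    text = utterance.lower()
    for intent, phrases in INTENTS.items():
        if any(phrase in text for phrase in phrases):
            return f"[handled by intent: {intent}]"
    return FALLBACK

print(route_by_intent("I forgot my password and can't log in"))        # matched
print(route_by_intent("Can you explain this charge from last week?"))  # fallback
```

Requests outside the fixed intent set get the canned reply, which is the kind of poor service that drives rejection; an LLM-backed UI can respond to requests it was never explicitly mapped to, which may explain the better acceptance the highlight reports.
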
  • A further example in your favor is the success of Alexa and Siri, which are voice-based user interfaces (VUIs) that use “I” pronouns. (View Highlight)
  • Most users do not notice if the interface is “I” or “you”, but some users strongly dislike the deception of “I” while some users strongly like the sense of empowerment they gain with a “you” design. (View Highlight)
  • Both Michael and Ben agree that the choice of using “I” (i.e., anthropomorphism) can have a significant impact on users.
  • Ben takes a clear stance on a binary distinction between human and non-human intelligence and highlights the importance of responsibility: designers & developers should take responsibility for AI-infused tools.
  • Michael, in comparison, embraces a more fluid attitude towards intelligence as a continuum by presenting numerous analogies with human-animal relationships. He argues that there is a “murky region in-between” human intelligence and the intelligence exhibited by amoebas, which is interesting and underexplored as a design space. (View Highlight)