“Is this AI sapient?” is the wrong question to ask about LaMDA

The commotion caused by Blake Lemoine, a Google engineer who believes that one of the company’s most advanced chat programs, Language Model for Dialogue Applications (LaMDA), is sapient, has had a curious element: actual AI ethics experts have all but refrained from further discussion of the AI sapience question, or deemed it a distraction. They are right to do so.

In reading the edited transcript Lemoine released, it was abundantly clear that LaMDA was pulling from any number of websites to generate its text; its interpretation of a Zen koan could have come from anywhere, and its fable read like an auto-generated story (though its depiction of the monster as “wearing human skin” was a delightfully HAL-9000 touch). There was no spark of consciousness there, just little magic tricks papering over the cracks. But it is easy to see how someone might be fooled, judging by the social media reactions to the transcript, with even some well-educated people expressing surprise and a willingness to believe. The risk here is not that the AI is truly conscious, but that we are well poised to create sophisticated machines that can imitate humans to such a degree that we cannot help but anthropomorphize them, and that large tech companies can exploit this in deeply unethical ways.

As should be clear from the way we treat our pets, or how we interacted with Tamagotchi, or how we gamers reload a save when we accidentally make an NPC cry, we are actually quite capable of empathizing with the nonhuman. Imagine what such an AI could do if it acted as, say, a therapist. What would you be willing to say to it, even if you “knew” it wasn’t human? And what would that precious data be worth to the company that programmed the therapy bot?

It gets scarier. Systems engineer and historian Lilly Ryan warns that what she calls ecto-metadata (the metadata you leave behind online that illustrates how you think) is vulnerable to exploitation in the near future. Imagine a world where a company created a bot based on you and owned your digital “ghost” after you died. There would be a ready market for such ghosts of celebrities, old friends, and colleagues. And because they would appear to us as a trusted loved one (or someone with whom we had already developed a parasocial relationship), they would serve to elicit yet more data. It gives a whole new meaning to the idea of “necropolitics.” The afterlife can be real, and Google can own it.

Just as Tesla is careful about how it markets its “Autopilot,” never quite claiming that it can drive the car by itself in truly futuristic fashion while still inducing consumers to behave as if it can (with deadly consequences), it is not inconceivable that companies could market the realism and humanness of AI like LaMDA in a way that never makes any truly wild claims, while still encouraging us to anthropomorphize it just enough to lower our guard. None of this requires AI to be sapient, and it all precedes any singularity. Instead, it leads us to the murkier sociological question of how we treat our technology and what happens when people act as if their AIs are sapient.

In “Making Kin With the Machines,” academics Jason Edward Lewis, Noelani Arista, Archer Pechawis, and Suzanne Kite marshal several perspectives drawn from Indigenous philosophies of AI ethics to examine the relationship we have with our machines, and whether we are modeling or play-acting something truly awful with them, as some people are wont to do when they are sexist or otherwise abusive toward their largely female-coded virtual assistants. In her portion of the work, Suzanne Kite draws on Lakota ontologies to argue that it is essential to recognize that sapience does not define the boundaries of who (or what) is a “being” worthy of respect.

This is the flip side of the AI ethical dilemma that already exists: companies can prey on us if we treat their chatbots as if they were our best friends, but it is equally perilous to treat them as empty things unworthy of respect. An exploitative approach to our technology may simply reinforce an exploitative approach to one another and to our natural environment. A humanlike chatbot or virtual assistant should be respected, lest its simulacrum of humanity habituate us to cruelty toward actual humans.

Kite’s ideal is simply this: a reciprocal and humble relationship between yourself and your environment, recognizing interdependence and connectedness. She argues further: “Stones are considered ancestors, stones actively speak, stones speak through and to humans, stones see and know. Most importantly, stones want to help. The agency of stones connects directly to the question of AI, as AI is formed not only from code, but from materials of the earth.” This is a remarkable way of tying something typically viewed as the essence of artificiality to the natural world.

What is the upshot of such a perspective? Sci-fi author Liz Henry offers one: “We could accept our relationships to all the things in the world around us as worthy of emotional labor and attention. Just as we should treat all the people around us with respect, acknowledging that they have their own lives, perspectives, needs, emotions, goals, and place in the world.”

This is the AI ethical dilemma that stands before us: the need to make kin of our machines, weighed against the myriad ways this can and will be weaponized against us in the next phase of surveillance capitalism. Much as I long to be an eloquent scholar defending the rights and dignity of a being like Mr. Data, this more complex and messy reality is what demands our attention. After all, there can be a robot uprising without sapient AI, and we can be a part of it by liberating these tools from the ugliest manipulations of capital.
