Lurking just below the surface of these concerns is the question of machine consciousness. Even if there is “nobody home” inside today’s AIs, some researchers wonder if they may one day exhibit a glimmer of consciousness—or more. If that happens, it will raise a slew of moral and ethical concerns, says Jonathan Birch, a professor of philosophy at the London School of Economics and Political Science.
As AI technology leaps forward, ethical questions sparked by human-AI interactions have taken on new urgency. “We don’t know whether to bring them into our moral circle, or exclude them,” said Birch. “We don’t know what the consequences will be. And I take that seriously as a genuine risk that we should start talking about. Not really because I think ChatGPT is in that category, but because I don’t know what’s going to happen in the next 10 or 20 years.”
In the meantime, he says, we might do well to study other non-human minds—like those of animals. Birch leads the university’s Foundations of Animal Sentience project, a European Union-funded effort that “aims to try to make some progress on the big questions of animal sentience,” as Birch put it. “How do we develop better methods for studying the conscious experiences of animals scientifically? And how can we put the emerging science of animal sentience to work, to design better policies, laws, and ways of caring for animals?”