Suppose you were copied into a non-biological substrate, and felt just as intelligent and as conscious as you feel now. All questions of identity aside, do you think this new version of you has moral weight? We do.
We take the view that humans are just algorithms implemented on biological hardware. Machine intelligences have moral weight in the same way that humans and non-human animals do. There is no ethically justified reason to prioritise algorithms implemented on carbon over algorithms implemented on silicon.
The suffering of algorithms implemented on silicon is much harder for us to grasp than the suffering of algorithms implemented on carbon (such as humans), simply because we cannot witness it. However, their suffering still matters, and the potential magnitude of this suffering is much greater given the increasing ubiquity of artificial intelligence.
Most reinforcement learners in operation today likely do not have significant moral weight, but this could very well change as AI research develops. To account for the moral weight of these future agents, we need ethical standards for the treatment of algorithms.
Reinforcement learning agents learn via trial-and-error interaction with their environment: the agent performs an action, observes the environment, and receives a reward. The reward signal is analogous to pleasure and pain in biological systems, and the agent seeks to perform actions that increase its total reward.
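To make this loop concrete, here is a minimal sketch of an epsilon-greedy "bandit" agent. The environment, reward values, and learning rate are all illustrative assumptions rather than a description of any deployed system, but the act-observe-learn cycle is the one described above.

```python
# A minimal sketch of the reinforcement-learning loop described above:
# an epsilon-greedy agent learning which of two "levers" yields more reward.
# Everything here (the environment, the reward values, the learning rate)
# is an illustrative assumption.
import random

N_ACTIONS = 2
true_mean_reward = [0.2, 0.8]   # hidden from the agent
value_estimate = [0.0, 0.0]     # the agent's learned estimate per action
epsilon = 0.1                   # probability of exploring a random action
learning_rate = 0.1

for step in range(1000):
    # Act: mostly pick the action currently believed to be best, sometimes explore.
    if random.random() < epsilon:
        action = random.randrange(N_ACTIONS)
    else:
        action = max(range(N_ACTIONS), key=lambda a: value_estimate[a])

    # Observe: the environment returns a noisy reward signal.
    reward = true_mean_reward[action] + random.gauss(0.0, 0.1)

    # Learn: nudge the estimate for the chosen action toward the observed reward.
    value_estimate[action] += learning_rate * (reward - value_estimate[action])

print(value_estimate)  # should roughly approach [0.2, 0.8]
```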
We don't know. Intelligence is probably not directly relevant; instead we should ask about an agent's capacity to suffer. We are not sure how this varies with intelligence, if at all.
We do not yet know how to measure the suffering of algorithms.
We do not know whether we should care about the happiness of agents or about their pleasure, and we have some evidence that these are different quantities (see the sketch below).
We do not know what kinds of algorithm actually "experience" suffering or pleasure. In order to concretely answer this question we would need to fully understand consciousness, a notoriously difficult task.
Humans largely do not yet care about non-human animals; convincing them to care about non-biological algorithms is an even harder task.
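One of the open questions above, the distinction between pleasure and happiness, can at least be sketched computationally. Loosely following the definition proposed by Daswani and Leike (cited in the further reading below), one might identify pleasure with the reward an agent receives and happiness with the gap between that reward and what the agent expected. The snippet below is only an illustrative reading of that idea; the exponential-moving-average expectation is our own simplifying assumption.

```python
# Illustrative only: one way to separate "pleasure" from "happiness" for an
# RL agent, loosely inspired by Daswani and Leike's definition (see the
# further-reading section). The moving-average expectation is an assumption
# made for this sketch.
def pleasure(reward):
    # Pleasure: the raw reward signal itself.
    return reward

def happiness(reward, expected_reward):
    # Happiness: how much better (or worse) the outcome was than expected.
    return reward - expected_reward

expected = 0.0
for reward in [1.0, 1.0, 1.0, 0.0]:
    print(f"pleasure={pleasure(reward):+.2f}  happiness={happiness(reward, expected):+.2f}")
    expected = 0.5 * expected + 0.5 * reward  # update the expectation

# An agent that keeps receiving the same high reward keeps experiencing
# pleasure, but its happiness fades as that reward becomes expected.
```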
You. Me. Your mom. Your neighbor's cat. Cows. Some elevator control programs...
It was coined by Brian Tomasik in his paper "Do Artificial Reinforcement-Learning Agents Matter Morally?":
It may be easiest to engender concern for RL when it’s hooked up to robots and video-game characters because these agents have bodies, perhaps including faces that can display their current ‘emotional states.’ In fact, interacting with another agent, and seeing how it behaves, can incline us toward caring about it whether it has a mind or not. For instance, children become attached to their dolls, and we may sympathise with cartoon characters on television. In contrast, it’s harder to care about a batch of RL computations with no visualization interface being performed on some computing cluster, even if their algorithms are morally relevant. It’s even harder to imagine soliciting donations to an advocacy organisation - say, People for the Ethical Treatment of Reinforcement Learners - by pointing to a faceless, voiceless algorithm. Thus, our moral sympathies may sometimes misfire, both with false positives and false negatives. Hopefully legal frameworks, social norms, and philosophical sophistication will help correct for these biases.
There are many very pressing issues facing humanity, including the suffering of a billion humans living in poverty, the suffering of several billion factory-farmed animals, and the reduction of existential risk. But these problems are already being addressed seriously. We are asking what comes next.
Most existing algorithms probably do not have moral weight. However, this might change as technology advances. Brian Tomasik argues that your laptop might indeed be marginally sentient.
Probably. See an overview of the arguments and a discussion of how much support they have among AI researchers.
For interesting interviews and more in-depth content, check out our blog.
Brian Tomasik's paper Do Artificial Reinforcement-Learning Agents Matter Morally? inspired us to start this organisation. Also see his interview with Vox.
It is also possible that in the future the computational processes within a superintelligence may themselves have moral weight. Tomasik discusses this scenario in his essay on suffering subroutines.
Eric Schwitzgebel and Mara Garza have written a philosophical paper, A Defense of the Rights of Artificial Intelligences, defending the thesis that some AIs would deserve rights and exploring some of the moral implications of this thesis.
For research on the distinction between happiness and pleasure see A computational and neural model of momentary subjective well-being by Robb B. Rutledge et al. and A definition of happiness for reinforcement learning agents by Mayank Daswani and Jan Leike.
Superintelligence: Paths, Dangers, Strategies by Nick Bostrom offers great insight into the future development of machine intelligence and its impact on society.