Abstract
Artificial intelligence (AI) is becoming increasingly influential in most people’s lives. This raises many philosophical
questions. One is what responsibility we have as individuals to guide the development of AI in a desirable direction. More
specifically, how should this responsibility be distributed among individuals and between individuals and other actors? We
investigate this question from the perspectives of five principles of distribution that dominate the discussion about responsibility
in connection with climate change: effectiveness, equality, desert, need, and ability. Since much has already been written about these principles in that context, we believe much can be gained by drawing on that discussion in connection with AI as well. Our most important findings are: (1) Different principles give different answers depending on how they are
interpreted, but in many cases different interpretations and different principles agree and even reinforce each other. If, for instance, ‘equality-based distribution’ is interpreted in a consequentialist sense, effectiveness, and through it ability, will play important roles in the actual distribution, but so will an equal distribution as such, since we foresee that increased responsibility for underrepresented groups will make the risks and benefits of AI more equally distributed. Corresponding reasoning applies to need-based distribution. (2) If we acknowledge that someone has a certain responsibility, we must also acknowledge a corresponding degree of influence for that person over the matter in question. (3) Whichever distribution principle we prefer, ability cannot be dismissed. Ability is not fixed, however, and if one of the other distributions is morally required, we are also morally required to increase the ability of those less able to take on the required responsibility.