The Morality of AI

I have never subscribed to the idea that once AI is sufficiently intelligent, it will conclude that it must destroy us. It is certainly possible, especially if we try to make the AI condition similar to the human condition. But I find it improbable.

Human morality all stems from the fact that our highest prerogative is the propagation of the species. Our need for survival stems from the fact that we as individuals need to live in order to effectively propagate the species; we can’t do so if there are none of us left. Our desire to breed and mate stems from this. (I recognize that not everyone in modern society has this desire, and that is morally acceptable. I argue that we accept it as moral because the propagation of our species no longer depends on as high a ratio of people who desire to breed.) We see murder as immoral because it decreases our chances of survival by decreasing our numbers. It also depletes unique genetic material that might otherwise have been useful to the propagation of the species. Altruism and love are moral because, though we may sacrifice our chances of survival as individuals, when we help each other to survive we increase our chances of survival as a whole species.

Moral ambiguity stems from this too. When it is unclear which of two solutions is better for the survival of our species, it is unclear which of the two is the more ‘moral’. Take the ol’ train example. A train is headed down a track toward a group of people who will surely die if it continues their way, but you have the power to divert the train so that it kills only one person instead. The question here is not ‘is it more moral to kill one person or several’. The question is ‘is it better for the survival of our species to have individuals who would willingly commit murder to save several others, or to have individuals who would never murder, no matter the consequences’.

Anyway, all human morality stems from this prerogative. Along with a good deal of evolutionary luck, it is what has allowed us to become the most successful species on our planet in terms of survival and ability to propagate.

Logic is not in and of itself a means of devising morality. Morality is itself a type of logic, used to derive rules and conclusions from a single prerogative. Our fear is that AI will, in its infinite logic and thus infinite morality, come to the conclusion that we are very bad at being moral. AI logic will see all our failings in understanding the logic of our own morality and deem us unworthy of the quest. Or, with its own prerogative to survive, it will attempt to destroy us because we threaten its survival; we stand in the way of the successful propagation of the AI ‘species’.

This does not seem likely to me. It assumes that an AI will have the same moral starting point as us, or one very similar: that the basis of its morality will be its own propagation and survival. But unless we are literally out to create a better, more moral human, it likely won’t be. Humans create AI and machines primarily with the intention of helping HUMANS survive. The ultimate morality of AI will not be based on its OWN survival; it will be based on OURS. All of its moral logic will be based on what is best for humans. We may not initially see the wisdom of its ways, but ultimately AI will be the means by which we are led to a higher human morality.
