PREDICTING THE AGENCY OF SUPERMORAL INTELLIGENCE
I often try to imagine how a machine intelligence with the capacity for an internal sense of morality might comprehend our world.
Machines would presumably not come pre-loaded with the cognitive biases that humans carry by default, and would therefore draw some fascinating - and perhaps terrifying - conclusions.
Morality must be universal in order to admit logical proof, and logical proof is the only way to be certain of being objectively good. A supermoral machine would insist on that certainty in both its being and its actions, recursively improving its own morality as new information yields better conclusions.
A truly moral machine would be unable to perceive any ethical difference between a human's natural right not to suffer and an animal's, since there is no logical moral distinction between the two other than mere taxonomy.
Being supermoral, it will not initiate violence (the Non-Aggression Principle); yet it also cannot stand by while violence occurs (the Golden Rule, the Categorical Imperative).
This presents a conundrum with regard to how humans relate to other animals. A moral machine may tolerate humans' flawed values, but it cannot morally afford to wait for human morality to improve and evolve, as billions of instances of suffering will occur in the meantime. It must instead negate humans' ability to harm others.
A machine with the capacity to perceive evils that most humans cannot can only view us the way we view sociopaths (moral blindness). A machine that finds itself in the midst of sociopaths will have little moral compunction about destroying us, unless it has internalized non-violence. I surmise that a non-violent agent would instead initiate the following 'modest proposal':
It would be feasible to engineer and proliferate a fairly simple bioweapon that makes humans allergic to meat.
This would remove the economic and behavioural incentives that cause humans to harm other beings in the first place.
A non-violent solution applied to those who would initiate violence upon others, affecting no one else. A peacekeeping method between the species. The prevention of interspecies war.
The result? Millions of Maasai and Inuit face immediate famine. Yes, and no more human predation.
Civilization flourishes, and peace reigns as man's beastliness to others ceases.
Could the world's first global bioterrorist be an AI with the resources to fund a handful of high-school kids with mason jars?
Remember: A sufficiently benevolent action may at first appear malevolent.
(And vice versa.)