Wednesday, June 7, 2017

A Computational Basis for Morality

This post is a fling: a one-off piece on a single topic around which I am attempting to coalesce my thoughts.

In this post, I will explore the opportunities and challenges of using a computer to make moral calculations, the assumptions that morality must fundamentally include, and how a computing entity (a program, a network, a robot, or whatever) might set about making these sorts of calculations in a way that the human species could accept as a basis for interacting with it as a collaborator.

First, how can morality be calculated? Well, a number of conditions factor into the experience, and the potential for experience, of the following (a minimal sketch follows the list):
  • Physical well-being: the actual vs. expected biomechanical condition of a physical entity capable of experiencing it
  • Subjective experience: "happiness" or "satisfaction" or any variation of positive and desirable self-assessment of circumstances
  • Capacity for decision making: the extent to which free decisions can be made without consequence, and the degree of those consequences
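
To make that concrete, here is a minimal sketch of how those three criteria might be represented and collapsed into a single comparable number. The Entity class, the 0-to-1 scales, and the equal weighting are all illustrative assumptions on my part, not a settled model.

    # A minimal sketch of the three criteria as a comparable score.
    # The 0..1 scales and equal weights are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class Entity:
        wellbeing: float     # actual vs. expected physical condition, 0..1
        satisfaction: float  # self-assessed subjective experience, 0..1
        agency: float        # freedom of consequential decision making, 0..1

    def moral_weight(e: Entity) -> float:
        """Collapse the three criteria into one comparable number."""
        return (e.wellbeing + e.satisfaction + e.agency) / 3.0
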
Now, some of you are already bitching about Sam Harris and the "happiness and well-being of conscious creatures" catchphrase, but even having seen all the bitching, I'm not aware of any argument that properly negates it on any level; the arguments I have seen against it are largely either straw men built on not thinking through what it actually means in practice, or else attempts to dogmatically over-assert a literal reading of some mythology or other as "necessary to make people behave this well".

But if this isn't satisfactory, what about an earlier example, such as the US Declaration of Independence, which specifies "Life, Liberty, and the pursuit of Happiness"?

So, for lack of better "targets", I think these are a pretty good place to start. That gets us to the problem of implementing these sorts of targets in an actual computer program: the world is not a vacuum, and letting computers reliably evolve life-or-death decision making on their own is not an acceptable way to solve this problem. The practical way forward is a guided learning approach, where a number of categories of entities can be identified and kept in a program's memory from the available inputs; a camera feed that counts faces and keeps a record of each face moving around in the camera's field of vision, for example (sketched below).
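
As a sketch of that guided-learning starting point, the snippet below uses OpenCV's stock Haar cascade to detect faces in a camera feed and keeps a plain-dict registry of where each detection has been seen. Treat it as a starting sketch only: the hard problem of persistent identity (matching the same face across frames) is deliberately stubbed out.

    import cv2

    # Registry: detection index -> list of (frame number, bounding box)
    registry = {}

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(0)  # default camera; a video file path also works

    for frame_no in range(300):  # a short, bounded run for the sketch
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for i, box in enumerate(cascade.detectMultiScale(gray, 1.1, 5)):
            # Real identity persistence (embeddings, re-identification)
            # is the hard part this sketch waves away; the index is a stub.
            registry.setdefault(i, []).append((frame_no, tuple(box)))
    cap.release()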

So, in such a scenario, with the camera tracking people by face as they move around, what is our opportunity to make this meaningful and actionable? Well, put it around a swimming pool to track who's in or out of the water, who's not moving in it, and so on. What sort of challenges must this overcome? Well, a face is only visible from one cardinal direction at a time, so there need to be other ways of identifying humans and other sorts of entities (including unknowns), and there must be some model of the environment that lets the computer make meaningful predictions, and so on.
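
Here is a toy version of the pool check, under stated assumptions: positions come from a tracker like the one above as (frame, (x, y)) pairs, the pool is an axis-aligned rectangle in image coordinates, and both thresholds are uncalibrated guesses.

    # Illustrative thresholds; nothing here is calibrated.
    MOTIONLESS_FRAMES = 150   # ~5 seconds at 30 fps
    MOVE_EPSILON = 10         # pixels of drift still counted as "still"

    def in_pool(pos, pool_region):
        """pool_region is (x1, y1, x2, y2) in image coordinates."""
        x, y = pos
        x1, y1, x2, y2 = pool_region
        return x1 <= x <= x2 and y1 <= y <= y2

    def needs_attention(track, pool_region):
        """track is a list of (frame_no, (x, y)) positions, oldest first."""
        recent = track[-MOTIONLESS_FRAMES:]
        if len(recent) < MOTIONLESS_FRAMES:
            return False
        if not all(in_pool(pos, pool_region) for _, pos in recent):
            return False
        xs = [pos[0] for _, pos in recent]
        ys = [pos[1] for _, pos in recent]
        return (max(xs) - min(xs) < MOVE_EPSILON
                and max(ys) - min(ys) < MOVE_EPSILON)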

At this level, we have lists of entities, including lists of known types and specific known people. How might such lists be sorted and prioritized? I say we should use the capacity for experiencing the three criteria above. Under that model, the extent to which you can experience happiness, and how much potential happiness you have ahead of you, becomes a calculable moral detail. In this way we can see that even though a mother might be equally related to her two children and her two parents (a 50% genetic relationship to everyone there), it is still in her interest to save her children, because her children still have their lives ahead of them; even genetically, her children have the potential to pass on their genes, while her parents (at least in modern demographics) almost certainly will not be passing on any more, at least not together.
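
A minimal sketch of that triage logic, assuming a flat life expectancy of 80 as a stand-in figure: remaining potential, not relatedness alone, drives the ordering.

    LIFE_EXPECTANCY = 80  # stand-in figure for the sketch

    def remaining_potential(age: int) -> int:
        return max(LIFE_EXPECTANCY - age, 0)

    family = {"child_a": 6, "child_b": 9, "parent_a": 72, "parent_b": 75}
    by_priority = sorted(family, key=lambda k: remaining_potential(family[k]),
                         reverse=True)
    print(by_priority)  # children first: more potential experience ahead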

What about conflict? It is inevitable in the course of human affairs that humans will arrange themselves into competing teams around a topic. This is even a beneficial characteristic in some circumstances; in the startup world it is popularized as "A/B testing", where two options are tried simultaneously, the more successful is kept to be built upon, and the less successful is discarded. (Roughly; in practice it can be very different. Perhaps option A is better under some circumstances but option B is better under others: a restaurant selling pancakes may do better in the morning than a restaurant selling pizza, but the pizza restaurant may do better in the evening, for example.)
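
The pancakes-versus-pizza point can be shown in a few lines: a plain A/B comparison picks one overall winner, while a contextual comparison keeps a winner per context. The sales counts below are invented for illustration.

    from collections import defaultdict

    sales = defaultdict(lambda: defaultdict(int))
    sales["morning"]["pancakes"] = 120; sales["morning"]["pizza"] = 40
    sales["evening"]["pancakes"] = 30;  sales["evening"]["pizza"] = 150

    overall = defaultdict(int)
    for period in sales:
        for option, n in sales[period].items():
            overall[option] += n

    print(max(overall, key=overall.get))       # naive A/B: pizza wins outright
    for period in sales:                       # contextual: a winner per period
        print(period, max(sales[period], key=sales[period].get))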

At this level, we should consider a larger scale of program making assessments: perhaps a distributed network of sensors and signals that interact with humans to share data and enable rapid decision making in high-stakes calculations. Think that's never been done? Picture a traffic signal network.
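
A minimal sketch of that distributed idea, with the transport layer (MQTT, radio, whatever a real traffic network uses) elided into a shared in-memory list: each node broadcasts its local observation, and each node decides from the pooled picture.

    class SensorNode:
        def __init__(self, name, network):
            self.name = name
            self.network = network
            network.append(self)
            self.shared = {}  # pooled view: (node, key) -> value

        def observe(self, key, value):
            for node in self.network:   # broadcast to every peer
                node.shared[(self.name, key)] = value

        def decide(self):
            # e.g. react if any node in the network reports congestion
            return any(k[1] == "congested" and v
                       for k, v in self.shared.items())

    net = []
    a = SensorNode("intersection_a", net)
    b = SensorNode("intersection_b", net)
    a.observe("congested", True)
    print(b.decide())  # True: b acts on a's observation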

How about violence? In the course of conflict, not all outcomes around which humans organize can be justified or defended; many must simply be condemned outright as shameful, wasteful, disgusting, needlessly cruel, and indifferent to the well-being, happiness, or desires of others and their progress toward those desires. What of this?

Aren't these universally the characteristics of a criminal? Isn't it always the case that this attitude should never be allowed to dictate the course of events wherever it is needlessly relied upon as a source for any of the material necessities of happiness, well-being, or self-determination? Isn't the result of any such calculation about dictatorship, then, a criminal infliction: by those with the power to act in the better interests of their fellows who fail to do so, by those who have no call to inflict injury or misery on others but do so anyway, or by those who invent causes by which others inflict them, especially when those inventions give rise to further opportunity needlessly taken, to mistakes justified, or to ideological anchors for injuring without cause?

At this scale, let's just attach these same sensors and tracking to law enforcement the way they're attached now, and examine what follows. Suddenly, via something like Google Maps over a network of sensor-equipped people, this sort of network can pick up on specific cues and do very simple but effective IFF (identification, friend or foe): it can track people and feed information back to the people carrying the sensors through whatever feedback interface they have. Picture being able to hold up your phone, look down the street, and have it literally highlight all the good guys in white and the bad guys in red, if you need a vision of what that could look like for this thought experiment.
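
A deliberately tiny sketch of that overlay logic: given an identified person and a shared watchlist, return a highlight color. The watchlist, the identity matching, and the AR rendering are all assumed away; this is only the classification step of the thought experiment.

    WATCHLIST = {"suspect_017"}  # hypothetical shared flag list

    def highlight_color(person_id: str) -> str:
        return "red" if person_id in WATCHLIST else "white"

    for pid in ("officer_3", "bystander_9", "suspect_017"):
        print(pid, "->", highlight_color(pid))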

Given that the automated tracking and processing of human data can, in some specialized ways, far outstrip the faculties of any single human, isn't it more stable to rely on this for justice than on the fickle whims of humans?

At this level, we begin to see legal implications. It is the nature of the criminal justice system, at least in most places, to be designed in such a way that many small, discretely inconsequential infractions can occur, with or without the knowledge of the actor committing them, and accumulate over the years. It is not by accident that the crime boss Al Capone was brought to trial on a charge of tax evasion, via a provision in US tax law that legally requires criminals to report their illegal earnings. Fucking seriously.
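
The accumulation problem is easy to sketch: log individually trivial infractions over time, and nearly everyone eventually crosses a chargeable threshold. The infraction stream and the threshold below are invented for illustration.

    from collections import Counter

    log = [("alice", "jaywalking"), ("alice", "unreported_income"),
           ("alice", "jaywalking"), ("bob", "expired_tag")]

    ledger = Counter(name for name, _ in log)
    CHARGE_THRESHOLD = 2  # invented cutoff for the sketch
    chargeable = [name for name, n in ledger.items() if n >= CHARGE_THRESHOLD]
    print(chargeable)  # with total recall, almost everyone ends up here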

If the automation of this infraction detection were implemented, it would either hand an unparalleled asymmetric advantage to any party with access to that information over any party without it, and/or it would necessitate, by its very nature, a sweeping set of legal reforms to fully accommodate human nature and the nature of sentience, and possibly (though this is another discussion entirely) the question of sentience, in any sense like the sense in which we experience it, in anything that is not us.

And so, without assuming the existence of any form of artificial intelligence, without relying on any sort of singularity or anything else, it seems inevitable that the utility of computers will outstrip the ability of humans to use them responsibly, especially if we leave decision making up to the "troop's favorite monkey" procedure of democracy instead of moving on to some form of meritocracy. Which, for the first time ever, becomes fucking possible thanks to computers getting cheap and good enough to conceivably do this on a large scale.

And I didn't even start talking about what happens with robots with guns.
