Philosophy and Morals of Technology – Digital Ethics


I recently read a few articles raising important questions about digital ethics and morals in technology, and I believe that credible organizations such as MIT, which are discussing the subject, are headed in the wrong direction. The discussion of morality dates back to the very beginning of recorded history, to the famous debates between philosophers such as Plato, Aristotle, and others. Today our society, in the broader sense, is working to put structures in place to govern the world of technology, and miscalculating the consequences of those structures could prove dire. We should never put in place "a non-static value on human life," as the long-run implications are quite daunting. I propose, looking through a technological lens, that human life should indubitably be valued at the highest level, and I believe we should move forward with that criterion, with no exceptions, so that all of humanity is included.

In saying this, I am taking a utilitarian approach to how technology should work. A simple example of this strategy would be to say that robots and AI are not people and should not act like them. People have moral judgment and ethics; AI does not, and should not move in that direction. The dilemma is which ethics we should put in place concerning technology and how we should apply them. There are three strategies I would propose we consider: contractarianism, act utilitarianism, and rule utilitarianism. Since technology is really a utility, I believe a rule-based utilitarian approach is appropriate, given a proper overarching structure. Many people have written about this subject in the past.
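To make the distinction between the two utilitarian strategies concrete, here is a minimal sketch; the `Outcome` type, the rule itself, and the utility numbers are my own assumptions for illustration, not a proposed policy. An act-utilitarian agent re-scores every situation from scratch, while a rule-utilitarian agent follows a fixed rule adopted in advance because the rule, applied consistently, tends to produce the best outcomes.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Outcome:
    description: str
    humans_harmed: int   # hypothetical count of people harmed
    utility: float       # hypothetical aggregate benefit score

def act_utilitarian_choice(outcomes: List[Outcome]) -> Outcome:
    """Act utilitarianism: evaluate each individual outcome and pick
    whichever scores highest in this one situation."""
    return max(outcomes, key=lambda o: o.utility)

def rule_utilitarian_choice(outcomes: List[Outcome]) -> Outcome:
    """Rule utilitarianism: apply a fixed, pre-agreed rule (here,
    'never accept harm to a human') before any optimization."""
    permitted = [o for o in outcomes if o.humans_harmed == 0]
    if not permitted:
        # If every option harms someone, fall back to minimizing harm
        # rather than re-valuing the people involved.
        return min(outcomes, key=lambda o: o.humans_harmed)
    return max(permitted, key=lambda o: o.utility)
```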

Unsurprisingly, and perhaps most interestingly, science-fiction writers of days past dreamt of a future in which technology would be put in the position of making decisions that could have negative consequences for human life. Asimov's Three Laws of Robotics are a perfect example:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Later, Asimov added a fourth, "Zeroth" Law that precedes the others in priority:

0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
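The interesting property of the Laws is that they form a strict priority ordering: a lower law only applies when the higher ones are not at stake. As a purely illustrative sketch (the `Action` class and its flags are my own assumptions, not Asimov's), the ordering can be expressed as a lexicographic preference:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Action:
    harms_humanity: bool   # would violate the Zeroth Law
    harms_human: bool      # would violate the First Law
    disobeys_order: bool   # would violate the Second Law
    endangers_robot: bool  # would violate the Third Law

def choose(actions: List[Action]) -> Action:
    """Pick an action lexicographically: first avoid harming humanity,
    then avoid harming any human, then obey orders, then preserve self.
    In Python, False sorts before True, so min() prefers compliance
    with the higher-priority laws."""
    return min(
        actions,
        key=lambda a: (a.harms_humanity, a.harms_human,
                       a.disobeys_order, a.endangers_robot),
    )
```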

Read more about the problems with a four-law society here:

So why talk about all this, and why care enough to write an article? Because the world of AI and driverless cars is coming fast, and organizations (such as MIT) are trying to create systems that mimic human morality, which itself is not 100% defined. I believe that if we try to make machines just like humans, we invite many possible negative consequences. We will eventually be pushed into a system that assigns a unique, calculated value to each human life so that it can pick the person deemed more valuable to save. My call is for the philosophers of our day to create the ethos needed to sit above the noise and provide clarity to secure our future, one in which human life is valued at the highest level at all times. I plan to come back to this topic with some thought experiments I have been working on that clearly show the flaws in my model and in others. The answer will need to come from someone much smarter than I am.
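To show why that worries me, here is a deliberately crude sketch of the two policies; the `Person` fields and the scoring formula are invented for illustration and describe no real system. Under a per-person valuation, the machine can always rank one group above another; under a fixed, maximal valuation of every life, no such ranking is possible.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Person:
    age: int
    occupation: str

def per_person_value(p: Person) -> float:
    """The approach I argue against: each life gets its own calculated
    value, so the machine can rank people against one another."""
    return 100.0 - p.age  # an arbitrary, and troubling, formula

def fixed_value(p: Person) -> float:
    """The approach I argue for: every life carries the same maximal
    value, so no ranking between people is ever possible."""
    return float("inf")

def who_to_save(group_a: List[Person], group_b: List[Person],
                value: Callable[[Person], float]) -> str:
    """Compare two non-empty groups under a given valuation."""
    score_a = sum(value(p) for p in group_a)
    score_b = sum(value(p) for p in group_b)
    if score_a > score_b:
        return "save group A"
    if score_b > score_a:
        return "save group B"
    return "no ranking: the choice cannot rest on the worth of the people"
```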

If you haven’t visited MIT’s Moral Machine, please take a look here:

Author – Jon Salisbury – CEO @ smartLINK
