
Artificial Intelligence Reference Materials


Ethical and moral issues of artificial intelligence

Debate about the social impact of creating intelligent machines has occupied many organizations and individuals over the past decades. Since many of the early science fiction speculations and predictions, from the late 19th century through the 1960s, have already become reality, there is no reason to assume that robots and intelligent machines will not happen. We are already living in that era's future, experiencing a golden age of technology with no end, or limit, in sight.

The moral and ethical implications of artificial intelligence are obvious, and there are three sides to the argument. One party argues that, with so many of us already living in poverty and without work, there is little or no reason to create mechanical laborers that can think independently, and that we certainly should not create machines that can argue with us about such issues.

A second party argues that society cannot develop, or take full advantage of its resources, without the help of machines that can think for themselves at least a little. And the third party simply doesn't care about the issue at all, as is typical of human society.

On a more detailed level, opinions also differ about the extent to which we should make machines intelligent and what these machines should look like.

Are we talking about autonomous devices like space explorers, or robots that mimic human form, thought and behavior? As more and more of society gets automated, will we entrust our children, educational institutions, businesses, and governments to reasoning machines as well?


There are no clear answers here. Research is widespread and diverse, covering all of the aspects of artificial intelligence. We don't even agree on what exactly defines intelligence and already we are creating artificial ones. So who is to say what is right?

But if we do build android machines with a designed intelligence that think and behave like humans, shouldn't they be made absolutely subservient to us?

Isaac Asimov, the science fiction author well known for his robot novels (among myriad others), formulated the Three Laws of Robotics in 1942. They were incorporated into the "positronic" brains of his robots in order to protect humans from a "robot revolution", and to prevent other humans from abusing the robots:

The Three Laws of Robotics
  • A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

  • A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.



The above three principles are a good example of the difficulty of programming an artificial brain. The human brain evolved through millions of years of survival and social behavior, and we are still undergoing this process.
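To see why even these three tidy rules resist programming, consider a toy sketch of the Laws as a strict priority ordering. Every name here is hypothetical, and reducing "harm" or "obedience" to boolean flags is exactly the impossible part — which is the point the text is making:

```python
# Toy sketch: Asimov's Three Laws as a strict priority check.
# The hard problem is hidden inside the flags themselves: deciding
# whether an action "harms a human" is the unsolved part.

def permitted(action):
    """Return True only if the action passes all three Laws, in order."""
    # First Law: never harm a human, by act or by inaction.
    if action.get("harms_human"):
        return False
    # Second Law: obey human orders, unless obeying would harm a human.
    if action.get("disobeys_order") and not action.get("obeying_would_harm_human"):
        return False
    # Third Law: avoid self-destruction, unless a higher Law requires it.
    if action.get("self_destructive") and not action.get("required_by_higher_law"):
        return False
    return True
```

The priority ordering falls out of the order of the checks: a First Law violation is rejected before the Second Law is even consulted, and so on.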

Imitating the brain's workings is a tremendous challenge and, judging by the advance of current processor power and complexity, will take at least several decades more to reach even the most rudimentary levels. 

And once we have decided that we do want android robots and other machines with an artificially created intelligence sophisticated enough to rival our own, the question still remains: which ethical and moral values do we instill in them?

Looking at human civilization with its diverse cultural, religious, ethical and moral values, what exactly are we trying to create here and to what purpose?

Do we need robots, for example, that are religiously biased? Is that what human society needs: the perfect Catholic, Muslim, or Buddhist mind? Or do we want a mind that is ruthlessly calculating, say the perfect Capitalist or Efficiency Expert? A law enforcer, perhaps?

Just defining those values would prove impossible, as they are similar in many ways yet radically different in others. So suppose instead we design the perfect ascetic mind, and then what? That certainly won't please a lot of people.

And what about practical applications of these values? If one set of ethical or religious values dictates that we cannot assist in euthanasia, for example, and another dictates that it is imperative that we do, aren't we just duplicating current issues without any real answers? What would be the point?

Perhaps artificial intelligences will show the same diversity as humans. So what would be the point then of creating artificial humans? Don't we have problems enough with the biological ones? Or are we simply looking to design a perfect human? Would that be a god then?


On a more practical level, we could create an artificial intelligence, in android or machine form, that functions as a neutral entity (if that is definable at all, since it would still have to have a set of values) and whose sole purpose, for example, is to teach.

It would teach topics that do not involve any moral, ethical or religious values, such as geography or technical skills. Inevitably, certainly if there are children involved, it would get questions such as "Yes, but, why?"

If the question related to the topic, it would answer appropriately, but inevitably it would come to a point of no return. How then would it answer such a simple question, except with a "Does not compute" or some similar non-committal reply? Perhaps it could say "Ask a human teacher", "This question is not allowed", or "'Why' is not a valid question, please restate".
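The teaching machine described above can be sketched as a minimal rule-based responder: a lookup table of factual answers, plus exactly the kind of non-committal fallback the text predicts for open-ended questions. The facts and phrasings are illustrative, not from any real system:

```python
# Hypothetical sketch of the "neutral" teaching machine: factual
# lookup works fine, but a values-laden "Why?" hits the fallback.

FACTS = {
    "what is the capital of france": "Paris.",
    "how many continents are there": "Seven.",
}

def answer(question):
    # Normalize the question into a lookup key.
    key = question.lower().strip(" ?")
    if key in FACTS:
        return FACTS[key]
    if key.startswith("why"):
        # The point of no return: the machine deflects.
        return "'Why' is not a valid question, please restate."
    return "Does not compute."
```

The deflection branch is the whole argument in miniature: no amount of extra entries in the table answers "Why?", because the answer depends on values the machine was deliberately not given.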

Not really good enough, is it? Asking why is the most fundamental question of all, isn't it? Without it we'd be animals with only instinct and reflex to guide us. We'd be automatons...

So the issue of which ethical, moral and cultural values to instill in our artificially created intelligence goes on. If it can't even answer a simple "Why?", then perhaps we should make sure these machines aren't intelligent at all: not capable of making any decision beyond mechanical, programmed movement, certainly not capable of any deductive reasoning, and not in any position to influence or control humans or human society.

See also Ethical Issues Concerning Robots And Android Humanoids in Robots and join our debate.

We have selected the following articles that discuss the ethical and moral pros and cons of artificially intelligent machines. Decide for yourself. 

Ethical and moral issues of Artificial Intelligence

International Association of Science and Technology for Development (IASTED) - Conferences 2005
Committee for the Scientific Investigation of Claims for the Paranormal - "Darwin in Mind - Intelligent Design meets Artificial Intelligence"
Hollywood Jesus, "AI Artificial Intelligence" - the movie
American Library Association - Intelligent computers

