AI Ethics
By: Hp Creative Space On: January 6, 2018

Can robots be better than humans?

That is the goal of AI developers. In their minds, AI machines will be able to out-think humans in the near future. The next step, they believe, is that these machines will become so powerful that they will take over their own development.

The question in this issue is: "Can robots be ethical and moral?"

Can robots make independent decisions?

This fear arose when some robots developed their own language and others were accused of influencing an election. It is safe to say that robots are not capable of independent thought, even though it may look like they are.

When those robots developed a new language, they were merely responding to their programming and data analysis. They were not consciously deciding to be cryptic and hide their communication from others.

Will robots be ethical?

Even with specially developed programs, and given a specific definition of ethics, the ethical behavior of robots will be limited to the idea of ethics held by the AI’s programmer.
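To make that limit concrete, here is a minimal, purely hypothetical Python sketch. The rule table, the action names, and the default verdict are all invented for illustration; the point is that the machine's "ethics" is just whatever one programmer wrote down, and anything the programmer never anticipated falls through to an arbitrary default.

```python
# Hypothetical sketch: a robot's "ethics" reduces to a lookup table
# that one programmer chose to write down. Nothing here is learned
# or reasoned; the machine can only be as ethical as this table.

PROGRAMMER_ETHICS = {
    "lie_to_user": False,        # the programmer decided lying is wrong
    "share_user_data": False,    # ...and that sharing data is wrong
    "refuse_unsafe_task": True,  # ...and that refusing is permitted
}

def is_action_ethical(action: str) -> bool:
    """Return the programmer's verdict; unknown actions default to 'no'."""
    return PROGRAMMER_ETHICS.get(action, False)

print(is_action_ethical("lie_to_user"))      # False, as programmed
print(is_action_ethical("comfort_a_child"))  # False -- never defined!
```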

The real question in this issue would be: if not the programmers' ethics, then whose standards of ethics should be used? A businessman's idea of ethics is vastly different from a lawyer's.

There would certainly be a fight over which ethical standard should be used. There has already been one such fight, and it caused a company to stop developing an AI babysitter: too many child advocates complained that children should not be used as test subjects.

But why not use children? How is it unethical when the robot babysitter is designed to watch children? To expose all the mistakes programmed into an AI babysitter, real situations are needed to surface the bugs.

Will robots be moral?

There has been a lot of discussion about what options are available to AI in tough moral situations. The problem is that the moral dilemmas tested provide a limited number of options.

One choice is to allow the driverless AI car to kill one person instead of five. Other options are never included in the conversation. This limit on moral choices undermines any chance of AI developing any morality.
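As a rough illustration of that framing, here is a hypothetical Python sketch. The Outcome class, the option list, and the casualty counts are all invented; they simply show how the "dilemma" is really a menu with only two items on it.

```python
# Hypothetical sketch of the trolley-style test as it is usually posed:
# the "moral" choice is forced into exactly two hard-coded outcomes.
# Braking hard or swerving to an empty shoulder are simply absent
# from the option set, so the car cannot "choose" them.

from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    casualties: int

def trolley_decision(options: list[Outcome]) -> Outcome:
    """Pick the option with the fewest casualties -- a bare utilitarian rule."""
    return min(options, key=lambda o: o.casualties)

# The dilemma as framed: only two outcomes are ever offered.
framed_options = [
    Outcome("stay in lane, hit five pedestrians", 5),
    Outcome("swerve, hit one pedestrian", 1),
]

print(trolley_decision(framed_options).description)
# -> "swerve, hit one pedestrian"; a third option such as
# Outcome("brake hard, hit no one", 0) was never on the list.
```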

But Algorithms Guide AI’s Decision Making

This is true, and scientists are developing these algorithms as we write. But as everyone who has dealt with banks or financial companies knows, algorithms cannot adjust when humans need an exception to the rule.

They create too many problems for innocent people.
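To see why, consider this hypothetical sketch of a rigid banking rule in Python. The limit, the function, and the customer explanation are invented for illustration; the point is that the rule has no branch at all for granting an exception.

```python
# Hypothetical sketch of a rigid rule of the kind banks run: a fixed
# threshold with no exception path. A customer with a legitimate
# one-off large payment is frozen anyway, because the rule has no
# way to take their explanation into account.

DAILY_LIMIT = 10_000  # hypothetical hard limit, in dollars

def review_transaction(amount: float, customer_explanation: str | None = None) -> str:
    # The explanation parameter exists, but the rule never reads it:
    # there is no branch for "this human needs an exception".
    if amount > DAILY_LIMIT:
        return "FROZEN: over daily limit"
    return "APPROVED"

print(review_transaction(25_000, "down payment on my first home"))
# -> "FROZEN: over daily limit" -- the innocent customer's context is ignored
```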

Some final words

In this issue, it seems that scientists are trying to create humanity and put it into machines. They want machines to handle difficult problems because they cannot trust humans to make the right decisions.

These scientists think that machines can do better and make better decisions than humans. Yet how can imperfect people create perfection? They are using imperfect thinking, imperfect algorithms, and imperfect choices to improve upon imperfection.

Sometimes, it is just best to program the robots to do simple tasks and let humans deal with humans.

