Ethical issues concerning Artificial Intelligence (AI) in decision-making processes

[Image from Wikipedia]

Artificial intelligence refers to “the idea of making computers and machinery think, learn and even correct itself from its own mistakes”1. From this definition alone, we can immediately grasp the potential benefits of involving AI in the decision-making process. While AI might not be infallible, its ability to learn coupled with its permanent memory means that the decisions it makes will improve over time. For extremely complex decisions with many factors and repercussions to consider, a computer is capable of making a far more rational and calculated decision than a human. However, is it ethical to allow AI to make decisions for us? Would you accept national policies formulated by a computer?

In this post, I wish to explore whether it is ethical to involve current AI technology in decision-making processes, particularly in the fields of medicine and law. As such, I am not interested in a fantasy world where robots have colonized the earth and enslaved humans. Nor am I interested in future AI technology that has far surpassed the human brain and can formulate perfect ethical theories in which the human brain is incapable of finding any fault. Therefore, I will be using existing ethical theories to evaluate the use of current AI technology.

Medicine is an extremely complicated field of study, largely due to the complexity of the human body. Among doctors, there are specialists who focus on the treatment of certain conditions, such as cardiologists, who specialize in conditions of the heart, and pediatricians, who specialize in ailments affecting children. Even among specialists, some doctors specialize further, e.g. pediatric cardiologists, and among them, a few specialize in rare diseases that affect 1 in a million people. Making their job harder, symptoms such as nausea and dizziness can result from many diseases across all specializations; they may not be the typical symptoms of a particular disease, and they might even be caused by two unrelated diseases at once. Suffice it to say, misdiagnosis is unavoidable, as it is impossible for doctors to know everything in the medical field.

In such a situation, AI has the advantage due to its practically unlimited and perfect memory. It can store information on every disease known to mankind, provide statistics on how common each disease is, and record all the various symptoms that might be exhibited. Using AI to diagnose a disease, the computer can rank the various diseases according to likelihood based on the symptoms described. It can even personalise the diagnosis based on factors such as race, age, weight, family history and genetics to determine whether a certain disease is more likely. Lastly and most importantly, AI has the ability to learn from past diagnoses, keeping accurate statistics which help it provide even more accurate diagnoses in the future. In short, it is leaps and bounds better than the current system. However, we do face certain ethical issues when AI is used in the diagnosis of illnesses.
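To make the idea concrete, the kind of likelihood ranking described above could be sketched as a toy naive-Bayes scorer. Everything below is invented for illustration: the disease names, prevalences and symptom probabilities are placeholders, not real medical data.

```python
# Toy sketch of likelihood-based diagnosis ranking. All priors and
# symptom probabilities are invented placeholders, not medical data.

# Prior prevalence of each hypothetical disease in the population.
PRIORS = {"flu": 0.05, "migraine": 0.02, "rare_disease": 0.000001}

# P(symptom | disease), illustrative values only.
LIKELIHOODS = {
    "flu":          {"nausea": 0.3, "dizziness": 0.2, "fever": 0.8},
    "migraine":     {"nausea": 0.6, "dizziness": 0.7, "fever": 0.05},
    "rare_disease": {"nausea": 0.9, "dizziness": 0.9, "fever": 0.1},
}

def rank_diagnoses(symptoms):
    """Rank diseases by a naive-Bayes score: prior times the product
    of symptom likelihoods, highest score first."""
    scores = {}
    for disease, prior in PRIORS.items():
        score = prior
        for symptom in symptoms:
            # Small default probability for symptoms not on record.
            score *= LIKELIHOODS[disease].get(symptom, 0.01)
        scores[disease] = score
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

def record_confirmed_case(disease, weight=0.001):
    """The 'learning' step: a confirmed diagnosis nudges that
    disease's prior upward, then the priors are renormalised."""
    PRIORS[disease] += weight
    total = sum(PRIORS.values())
    for d in PRIORS:
        PRIORS[d] /= total

ranking = rank_diagnoses(["nausea", "dizziness"])
print(ranking[0][0])  # "migraine" scores highest for this symptom pair
```

Note how the prior captures the "1 in a million" problem: the rare disease fits the symptoms best, but its tiny prevalence keeps it at the bottom of the ranking until confirmed cases accumulate.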

The first such issue is doctor-patient confidentiality. This arrangement is crucial because it allows for trust between the doctor and patient. The patient can divulge personal information without fear of being judged, allowing for more accurate diagnoses. If AI is used in the diagnosis process, patient information has to be shared in order for the AI to learn from past diagnoses, thereby compromising the arrangement. Information can be anonymised, but doing so would reduce the amount the system can learn from it. There is also no guarantee that anonymised data cannot be traced back to the patient. Lastly, if a diagnosis were later discovered to be wrong, it might not be possible to trace the record back to the patient to rectify the mistake.

The next issue facing AI technology is the problem of misdiagnosis. Especially in the early trials of the system, when statistical information is still lacking, the system might not be very accurate. Which party should then be deemed responsible for a misdiagnosis? All fingers might point to the programmer first; however, we must realize that the system is designed to learn from past diagnoses, and as such it is not designed to be correct 100% of the time. Especially when the outcome might be a matter of life and death, is it ethical to let patients pay the price of a mistake made by a machine? Of course, we can allow doctors to override the diagnosis made by the AI. However, doctors are also not right 100% of the time; if the doctor were wrong, we would be “teaching” the AI the wrong thing, which increases the likelihood of future diagnoses being wrong.

Using utilitarian methods to examine the issue of using AI in medical diagnosis, the benefits would include much more accurate diagnosis of diseases in the long run. The costs would include the potential loss of confidentiality regarding one's medical condition, as well as the possibility of misdiagnosis, especially in the short run. I believe that the benefits outweigh the costs, but most of the benefits can only be realised in the long term while most of the costs are borne in the short term. It is also difficult to quantify the costs, since a misdiagnosis could result in the aggravation of a condition or even death. Therefore, there is a huge disparity in the distribution of benefits and costs: while everyone wants to enjoy better diagnoses in the future, no one is willing to be the guinea pig the AI practices on.

Using the second formulation of Kantian ethics to analyse the situation, by using patients' data to improve its diagnoses, the AI is using the patients merely as a means to an end. Kantian ethics does not take into account the fact that the AI is doing so to provide even greater accuracy for future diagnoses, which would ultimately benefit patients. Using the first formulation, if the AI gave out wrong diagnoses, people would gradually stop trusting it and revert to doctors. Eventually, nobody would go to the AI for diagnoses and the AI would be unable to give out wrong diagnoses at all; the practice cannot be universalised without undermining itself. Therefore, using AI for diagnoses is unethical when analysed from the Kantian perspective.

In order to move towards using AI to increase the accuracy of diagnoses, certain societal and legal changes might have to be made. Doctor-patient confidentiality might have to be sacrificed. Society also has to accept the fact that the AI is not perfect and learns from its mistakes. Perhaps the change has to be implemented incrementally, with AI assisting doctors in diagnoses during the early years and slowly transitioning towards fully automating the diagnosis process.

Next, I would like to consider the ethics of using AI in the decision-making process of our justice system. I will consider the use of AI only for deciding the length of a sentence, not the entire judicial process. Our current justice system requires judges to take into consideration many factors, such as the circumstances in which the crime was committed, the motive for committing it, and the attributes of the crime, before passing a sentence using prevailing laws as a guideline. Even then, the legal system is often caught up in endless appeal cases which consume significant time and resources.

As with any other human involved in a decision-making process, it is often difficult for a judge to be impartial. One tends to bring in personal morals and values when making such decisions. This might ultimately lead to unfairness during sentencing, when certain judges dish out heavier punishments than others for specific types of crimes. An AI system could solve this problem using a utilitarian approach. By assigning values to the various factors involved, the system would be able to calculate the degree to which the accused is guilty and sentence them accordingly. This system would also be much fairer, as another person who committed the exact same crime in the exact same circumstances would get the same punishment. Lastly, since the outcome is always the same, the entire appeal process can be done away with.
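A deliberately simplified sketch of this value-assignment might look like the following. The base term and every factor weight are hypothetical numbers chosen purely for illustration; the point is only that identical inputs always produce identical sentences.

```python
# Toy sketch of utilitarian sentence calculation. The base term and
# all factor weights (in months) are hypothetical illustrative values.
BASE_MONTHS = 12
WEIGHTS = {
    "premeditation":  24,   # planned crimes weighted more heavily
    "violence":       36,
    "prior_offences":  6,   # per previous conviction
    "remorse":       -12,   # mitigating factor reduces the sentence
}

def sentence_months(factors):
    """Deterministic sentencing: base term plus a weighted sum of case
    factors. `factors` maps name -> degree (0 = absent, 1 = fully present)."""
    total = BASE_MONTHS + sum(WEIGHTS[f] * degree for f, degree in factors.items())
    return max(0, total)  # a sentence can never be negative

case = {"premeditation": 1, "violence": 1, "remorse": 1}
print(sentence_months(case))  # 12 + 24 + 36 - 12 = 60 months

# Identical cases always yield identical sentences, which is where the
# claimed impartiality (and the redundancy of sentencing appeals) comes from.
print(sentence_months(case) == sentence_months(dict(case)))  # True
```

Even in this toy form, the hard ethical work is visible: someone still has to choose `BASE_MONTHS` and the numbers in `WEIGHTS`, which is exactly the weighing problem discussed below.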

Such a system definitely sounds good in theory. It does indeed provide a greater degree of fairness during sentencing. However, careful planning is necessary, as the system's designers will need to make important decisions such as the value of each factor involved in a case. In short, the task of weighing the factors has never been taken away; it has just been transferred from the judge to the system. The AI system will then use these exact same values to judge all other cases, thus ensuring impartial sentencing. Nevertheless, it might be difficult to assign values to certain factors, especially when a human life might be at stake.

The next potential issue the system might face is the ethics of allowing a machine to play the role of god. Currently, there are already people who feel that a judge, being human, should not be given the power to decide whether someone lives or dies. By handing the power of decision down to a mere machine, we further mar the sanctity of life itself, demoting it to a value decided by a machine. Has human life become such a commodity?

As technology advances, machines can automate increasingly complex tasks. From simple tasks such as assembly-line automation to complicated tasks such as market analysis, computers have largely either replaced humans or greatly assisted them. To date, however, some of the most complex tasks, such as diagnosis and sentencing, are still largely performed by humans. Nevertheless, it has been shown that AI has the ability to perform these tasks with greater efficiency and impartiality than humans. We have already stepped onto the slippery slope of allowing computers to take over our daily tasks. If we do not set boundaries, we will soon slide further and further down, and might one day wake up to machines controlling every aspect of our lives.

  1. Attila Narin. The Myths of Artificial Intelligence. Retrieved from http://www.narin.com/attila/ai.html
  2. Peter Szolovits, Ph.D., Ramesh S. Patil, Ph.D., and William B. Schwartz, M.D. Artificial Intelligence in Medical Diagnosis. Annals of Internal Medicine, Vol. 108, No. 1, pages 80-87, January 1988.
  3. Hughes, Ph.D. Doctors Are Dinosaurs in a High Tech World. New York Newsday, July 1, 1995.
  4. Legal Applications of Artificial Intelligence. Retrieved from http://www.gslis.utexas.edu/~palmquis/courses/project98/ailaw/ailaw.htm