RESEARCH INTEGRITY INTERVIEWS SERIES: CAN ARTIFICIAL INTELLIGENCE REDUCE BIAS IN HUMAN DECISION-MAKING?

DOSI launches a series of interviews with renowned scholars, scientists, and influential leaders invited to share their outstanding work and views on topics related to research integrity.

Walter Sinnott-Armstrong is Chauncey Stillman Professor of Practical Ethics in the Duke University Philosophy Department and Kenan Institute for Ethics. He is well-known for his work on moral psychology and neuroscience, and he is currently writing a book on moral questions surrounding Artificial Intelligence.

In his most recent book, “Think Again: How to Reason and Argue” (Penguin and Oxford University Press, 2018), Professor Sinnott-Armstrong analyzes contemporary social discourse and teaches the art of arguing as a means toward compromise and cooperation. Sinnott-Armstrong is also co-instructor of the online Coursera course “Think Again,” which teaches strategies for engaging in effective, respectful disagreement.

Prof. Sinnott-Armstrong leads the Moral Attitudes and Decisions Lab (MADLAB) at the Kenan Institute for Ethics, “which explores why and how people think and behave through the lenses of psychology, neuroscience, philosophy, and sociology. Through interdisciplinary projects as well as a vertically-integrated lab, students and faculty work together to better understand aspects of human motivation and human behavior, and to disseminate findings on these normative, ethical issues.”

Emilia Chiscop-Head: Professor Walter Sinnott-Armstrong, you are a renowned ethicist of Artificial Intelligence and Professor in the Duke Philosophy Department and the Kenan Institute for Ethics, with secondary appointments at Duke Law School and the Department of Psychology and Neuroscience. Do you consider yourself a humanist, a neuroscientist, or a social scientist?

Walter Sinnott-Armstrong: All of the above. The issues of ethics are much too complex to be understood through any single method.

E.C.H.: In February 2019, you and Professors Nita Farahany and Vincent Conitzer delivered a speech about the ethics of AI to Congressional staff in Washington, DC. You mentioned that “lethal autonomous weapons systems may be more moral than traditional human-to-human engagement” because they might be able to increase effectiveness and deterrence while reducing civilian deaths. Can machine precision be defined as morality?

W.S.A.: No, precision is different from morality. Precisely targeted weapons, including autonomous weapons, can be used for good or bad ends; but precise weapons do have moral advantages when they are used for good. Weapons without precise targeting cannot stop enemies without also killing lots of nearby innocent civilians, whereas more precise weapons can be used to prevent aggression without causing as much “collateral damage.” Artificial intelligence and precision in military weapons do not ensure that humans will do the right thing, but they can enable good people to stop bad people with more certainty and less cost.

E.C.H.: Why do people make bad decisions? How is the process of making bad decisions different in humans than in intelligent machines?

W.S.A.: People make bad decisions for many different reasons. I study psychopaths who sometimes commit horrendous acts for very little reason. Cultural forces, including religion and political tribes, also lead people to make many bad decisions. But let’s focus on why good people who try to think for themselves still make mistakes in judging which acts are morally right or wrong. The main sources of such moral errors seem to be ignorance of relevant facts, forgetting important aspects of a problem, getting confused by complexity, being overcome or misled by emotion, and bias and partiality. Artificial intelligence in principle can avoid these kinds of errors. AI can store all available information. AI never forgets. It never gets overly emotional. AI can reduce bias and partiality by not including information (such as race and gender) that should not affect our moral judgments. Of course, AI cannot avoid all errors. Nothing is perfect. But the ability of AI to avoid the main sources of error in human moral judgments suggests that AI might be able to help us humans avoid many bad decisions.

“Today it seems as if more professors are chasing large grants, registering patents, creating start-up companies, and working for businesses”

E.C.H.: How will AI change or challenge the field of ethics in general, and the area of research ethics?

W.S.A.: I doubt that AI will change the basic principles of morality. Most ethical issues about AI concern safety, freedom, privacy, responsibility, and other values that are central to moral issues that we have been discussing for centuries.

One partially new issue is that humans often cannot understand the basis for AI decisions. This opacity becomes important, for example, if AI is used to decide bail, sentencing, or parole in criminal justice systems, because then people who are accused cannot defend themselves against adverse decisions by AI. But, of course, human decisions are also opaque in other ways, so again this issue is not completely new.

The only really new issue is whether AI can have moral or legal rights of its own. That issue came up recently when Saudi Arabia granted citizenship to a robot, but that kind of stunt is not and should not be taken too seriously at present. It might become more pressing in the future.

E.C.H.: What do you think are the main integrity challenges for academic researchers today?

W.S.A.: When I first entered academia, money was the last thing on my mind. Today it seems as if more professors are chasing large grants, registering patents, creating start-up companies, and working for businesses on the side. Financial motives, as well as time pressures from employers and investors, can reduce the quality of academic research by distorting results and lowering standards.

“To communicate your research results effectively, you should always try to explain the reasons for your results”

E.C.H.: You are the author of the very well-received book “Think Again: How to Reason and Argue” (Oxford University Press, 2018), reviewed by top national media outlets such as NPR, Forbes, and Time. What is an important lesson you would give to a young researcher who asks how to communicate research results to the media effectively and responsibly?

W.S.A.: I would say that you should always try to explain the reasons for your results and never simply announce a result as proven without saying why others should believe you. Your reasons are given in arguments, which is why it is so important to understand arguments.

