THE ETHICS OF BEING AI-FLUENT
From self-driving cars and drones to robots and predictive technology, professors at top business schools say AI has already touched most aspects of our lives, whether we like it or not. To build a generation that is AI-fluent and ready for the AI workforce, Carvalho says students of every major have to come together to share their insights and experiences to develop the best technologies.
Carvalho, who studied computer science as an undergraduate in Brazil before earning his master’s and PhD in the same field at the University of Waterloo in Ontario, Canada, has been exploring AI for almost 14 years. He joined an AI lab while in Canada, and says the field was then just a branch of computer science, like software engineering. Today, he says, it stands alone.
In May this year, Carnegie Mellon University announced that it would launch an undergraduate degree program in artificial intelligence this fall through its School of Computer Science. The program is the first of its kind offered by a U.S. university, and aims to help meet employer demand for artificial intelligence specialists.
However, becoming trained in AI means more than just mastering the technical aspects. “It’s not just about computer science,” Carvalho said. “We have to keep society in the picture, and with it comes ethics. Every AI curriculum needs to go over the ethical implications of what it means to create and live in an AI world.”
Robert Strand, executive director and lecturer of the Center for Responsible Business at the University of California, Berkeley, Haas School of Business, says students, driven by the promise of technology, can become so focused on the incredible power of machine learning that they neglect to think about how society can be negatively impacted. Examples include the intrusion of machine learning and data collection into personal life; the way to reduce this risk, he says, is to bring together more voices and perspectives to keep things ethical.
“I have such great hope and simultaneously such great concerns about AI,” Strand says. “My hope involves such things as the potential AI presents to help us defeat cancer and other terrible ailments that plague the human condition. My concern involves the unintended consequences of AI – the aspects that were not by design but nevertheless are an outcome in a world where AI is everywhere. The pace of application will only continue to increase with growing data volumes, even more sophisticated algorithms, developments in computing power and storage, and the overall widespread understanding of the power of AI.”
Carvalho agrees that the ethical concerns surrounding AI are considerable, going far beyond the simple decisions that robotic vacuum cleaners make. From the use of AI in warfare to disaster rescue missions, Carvalho says students need to explore the loopholes in current regulations and the institution of new ones, and understand how our brains work while teaching machines to learn. Bringing students from different disciplines together is therefore necessary to teach them about AI.
“If things don’t go as well as planned, who should we blame? The machine, the person who developed the machine, or the person who was using the machine? We need to think of the legal and ethical implications as well,” he says.
One area of current debate in AI is the human bias introduced by the people creating machines and systems. Carvalho says a machine may tell police to patrol one neighborhood more than another because it has higher crime, and the police may find higher crime rates there simply because they are present more often, leading to a self-fulfilling prophecy.
Strand says this can have the unintended consequence of further institutionalizing bias, even though machines are often assumed to be more ‘objective’ than humans.
Other implications Carvalho points to include the elimination of jobs, such as truck and cab driving, as AI grows. With skill sets becoming redundant, he says, the topic is a prickly one, hence the need to engage budding and leading sociologists, psychologists, and other area experts in discussion.
“The thrill to create something new and the potential fortunes to follow are a strong driver for rapid technological advancement. But just because you can build something doesn’t mean it should be built,” Strand says. “Here we need our friends in philosophy, sociology, history, anthropology and disciplines beyond to be central to the development of AI. And we of course need our business students and business leaders to embrace the perspectives represented by these various disciplines. Ultimately, we need to encourage greater critical reflection and consideration of the potential unintended consequences whereby we can best ensure the advancements in AI are in service of society, and not the other way around.”