The Ethics Of Responsible AI: Inside The New Course At Berkeley Haas

Lecturer Genevieve Smith, founding co-director of Berkeley’s Responsible & Equitable AI Initiative, leads a discussion in the new class she created for Haas School of Business: Responsible AI Innovation Management. Photo Copyright Noah Berger

As a senior at UC Berkeley’s Haas School of Business, Kate Ye will enter the workforce in just a couple of months. She felt like she knew little about this “AI craze” and worried she might be falling behind. Companies are incorporating AI tools into recruiting, onboarding, interviewing, and everyday operations, and she wanted to at least speak the language.

As she chose her last semester of classes, a brand new course caught her eye: Responsible AI Innovation Management.

While many business schools scramble to incorporate AI into courses and curricula in one form or another, Haas’ new course is one of the few so far to confront the ethical questions so directly. It was created by Genevieve Smith, the founding co-director of Berkeley’s Responsible & Equitable AI Initiative through the Berkeley AI Research Lab.

The hyper-fast pace of AI innovation over the course of the semester astonished Ye. Every class, students discussed new developments in the space, and it seemed like every week brought a brand new game changer – from Sora, OpenAI’s text-to-video generative AI model, to reports that Israel used an AI-based targeting system called Lavender to select people for airstrikes in its war with Hamas.

“It’s fascinating to learn about how AI boundaries are being pushed by major companies every day,” says Ye, a business administration major at Haas.

“I honestly think everyone who plans to become a leader in the business world should take this class or take the time to learn about responsible AI applications. It’s come to a point where AI can no longer be ignored – it is coming to the workforce, and it is coming soon.

“I would say the best way to soothe any concerns about AI taking over jobs or AI having a negative effect in the workforce is to simply learn about it and educate others about it too.”

Haas students in the new class, Responsible AI Innovation Management. Throughout the course, students engage in formal debates about different ethical AI positions. Photo Copyright Noah Berger

DOES REGULATION STIFLE AI INNOVATION?

Smith’s new course has shown that business majors have a clear appetite for AI ethics. It enrolled 54 undergrads and had a full waitlist. Several MBAs asked if they could take the course for MBA credit (they could not); some, along with other students, audited it instead. Moving forward, the class will be offered during fall semesters.

The course starts with discussions of core issues around ethical and responsible AI, including bias and discrimination, data privacy, environmental implications, and the future of work. Students were assigned to have a conversation with ChatGPT about the biases it has and why it has them, then critique ChatGPT’s understanding of itself. For their final assignment, students created a personal strategy for AI use as well as a group strategy for an existing or fictitious company.

A large part of the course is conducted through debates. Students are assigned a position on a particular topic, and must research and defend their position before the class.

Up until one particular class debate – “Government regulation of AI stifles innovation and should be limited” – junior Hunter Esqueda believed AI companies should be heavily regulated because of the threat of bad actors using the tech to do harm. He believed it was a good thing that a few select companies were granted licenses to develop advanced AI (closed foundation models versus open source) because it controlled who could play with the technology.

“However, a classmate pointed out that specifically limiting access to development to a few companies may limit access to underrepresented communities,” he says.

Esqueda now recognizes both pros and cons to AI regulation, but does believe that there should be explicitly defined limits for what generative AI models can be used for.

“With the potential for bias to be reflected in AI-based models, use cases that are particularly sensitive – such as evaluating an individual’s likelihood to commit a crime or credit scoring – should not rely on AI, or at the very least should be required to have appeals processes that turn to human oversight,” he says.

A LESSON IN TETHICS

Throughout the semester, Smith used case studies to explore how companies are implementing AI and confronting its ethical challenges.

During their discussion of a case study Smith co-authored on Google’s self-developed AI guiding principles, she showed students a clip from HBO’s “Silicon Valley.” In the scene, Hooli’s ousted CEO Gavin Belson creates a voluntary pledge of ethics for tech companies to sign as a way to evade government regulation. The pledge, which Belson names “Tethics,” has all the platitudes you’d expect and none of the teeth you’d hope for.

UC-Berkeley’s Haas School of Business

While the episode debuted nearly five years ago, it is eerily relevant in the age of AI. Students had some of the same skeptical questions about Google that show protagonist Richard Hendricks had about Tethics: Does this actually mean anything? Are there any teeth to the principles, and what do they look like? How do the principles hold up when business priorities and tensions really come into play?

For Sohan Dhanesh, a senior majoring in business and economics, one of the most interesting cases examined AI systems in credit decisions. He has identical twin brothers with nearly identical backgrounds, but when they applied for the same credit card, one was approved and one was denied. It made Dhanesh question the apparent objectivity of AI for the first time.

He believes any regulation around AI should be built on the tenets of accountability and transparency.

“Accountability means having checks and balances for AI systems, and I think this is especially important as AI penetrates higher stakes verticals such as healthcare and defense,” he says.

“Transparency around the AI development process is also key, as I feel there are still gaps in our understanding of how these systems precisely work (i.e., the black box problem). Uncovering these mysteries would not only help us root out any biases in AI systems, but also give us a more fundamental understanding of AI technology itself.”
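
Dhanesh’s “checks and balances” can be made concrete. Below is a minimal, hypothetical sketch in Python (not from the course or the article) of one such check: a disparate-impact audit that compares approval rates across two groups and flags the model’s decisions for human review when the gap is too wide. The data, group labels, and the 0.8 threshold (the “four-fifths” rule of thumb from US employment guidelines) are all illustrative assumptions; it requires only NumPy.

import numpy as np

def disparate_impact_ratio(approved, group):
    # Ratio of approval rates between the two groups, lower over higher.
    # The "four-fifths rule" flags ratios below 0.8 for closer scrutiny.
    rate_a = approved[group == 0].mean()
    rate_b = approved[group == 1].mean()
    low, high = sorted([rate_a, rate_b])
    return low / high

# Toy decisions from some upstream scoring model: 1 = approved, 0 = denied.
# Both the group labels and the approval pattern are synthetic assumptions.
rng = np.random.default_rng(1)
group = rng.integers(0, 2, size=1000)
approved = (rng.random(1000) < np.where(group == 1, 0.7, 0.5)).astype(int)

ratio = disparate_impact_ratio(approved, group)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Below four-fifths threshold: escalate decisions to human review.")

An audit like this doesn’t explain why a model discriminates, but it gives an organization a routine trigger for exactly the kind of human-oversight appeals process Esqueda describes above.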

A MULTIDISCIPLINARY APPROACH TO AI

Within business schools in particular, there’s a sense that responsible and ethical AI is a technical problem requiring technical solutions, Smith tells Poets&Quants. The work has largely been left to data science and computer science departments, which confront bias as a dataset problem: training data that overrepresents certain demographics (like white people) while underrepresenting others.

But skewed representation is not the only way bias enters through data.

“For example, if you’re creating an AI system that is reviewing applications for a large tech firm and if it’s trained on existing data from that firm, it will pick out and learn gender biases,” says Smith. “In tech, there are a lot of men that are in leadership positions. The AI learns from that and applies that to future resumes and applications. So, in that sense, bias is not just over or under representing a certain demographic, it’s also carrying with it these histories of exclusion and current realities of exclusion that become reinforced in the technology.”
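
To make that mechanism concrete, here is a minimal, hypothetical sketch (not from the course) assuming Python with NumPy and scikit-learn. The data is synthetic: past hiring decisions favor men regardless of qualification, and a logistic regression screening model trained on those outcomes dutifully learns the gender signal.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic applicants: one "qualification" score plus a binary gender flag.
qualification = rng.normal(size=n)
is_male = rng.integers(0, 2, size=n)

# Simulated *historical* hiring decisions: past outcomes favored men
# independent of qualification -- the exclusionary history baked into the data.
past_hired = (qualification + 1.5 * is_male + rng.normal(scale=0.5, size=n)) > 1.0

# A screening model trained on those outcomes absorbs the bias.
X = np.column_stack([qualification, is_male])
model = LogisticRegression().fit(X, past_hired)

# The learned weight on the gender feature is large and positive: the model
# has picked up the historical pattern of exclusion, not just merit.
print(dict(zip(["qualification", "is_male"], model.coef_[0].round(2))))

Running it prints a large positive weight on the is_male flag – the “history of exclusion” Smith describes, reinforced by a model that was never explicitly told to discriminate.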

Smith is a leading researcher in responsible AI. Beyond her work at Berkeley’s Responsible & Equitable AI Initiative, she is the Gender and AI fellow for the US Agency for International Development (USAID) and a research affiliate at the Minderoo Centre for Technology & Democracy at the University of Cambridge. She was also an affiliate at the Technology & Management Centre for Development at Oxford University, where she is currently pursuing doctoral research on machine learning tools that assess creditworthiness and how they impact gender equity.

She created the Responsible & Equitable AI Initiative in October to focus on supporting multidisciplinary research in responsible AI.

“A lot of the responsible AI research conversations tend to be focused on policy, which is important, but that’s not the only space that conversation should be occurring,” she says.

“In this space, if you’re really in it, you notice that people can get really segmented in their different departments. But really, responsible AI is multidisciplinary, both sociotechnical and technical. There’s a lot of need for more multidisciplinary research and collaboration.”

This course, created for a business school, extends the initiative’s multidisciplinary focus. Its lessons are particularly important for business students, as many of them will someday lead the companies and organizations that develop or implement AI tools. They will be setting the incentives, the speed to market, and the other parameters of their organizations’ AI strategies.

“Ethics can require slowing down; there can be a need for more pause points. These are all business decisions and business opportunities to build in more responsibility,” says Smith.

AI development is now happening incredibly fast, and of course there are huge opportunities. But there are also huge risks, and business leaders have a critical role to play.

“I really hope that this is something that other business schools start to implement because we can’t leave responsible AI to technical departments alone,” Smith says.

“If we’ve learned anything over the last year, it’s that business leaders are the ones that are really dictating the moves when it comes to how AI is developed and implemented in organizations. If future business leaders aren’t prepared for these conversations, we’re doing a disservice to them, to our communities, and to society more broadly.”

DON’T MISS: INSIDE NYU STERN’S NEW SENIOR CAPSTONE: EXPERIENTIAL LEARNING AT SCALE AND 10 UNDERGRAD BUSINESS SCHOOLS TO WATCH IN 2024
