
Course Work: Ethics & ChatGPT

Getting the Digital World We Want

Students in a philosophy course put AI tools to an ethics test.

By Karen McCally ’02 (PhD)

Course: Data, Algorithms, and Justice (PHL 235/435)

Spring 2023

Instructor: Jonathan Herington, Assistant Professor of Philosophy

Overview

Last November, the release of the artificial intelligence (AI) application ChatGPT was making headlines as Jonathan Herington, an assistant professor of philosophy and expert on ethics in science, health, and technology, was gearing up to teach Data, Algorithms, and Justice, a course he has offered to both undergraduate and graduate students.

Spring 2023 marked the fourth time Herington has taught the course, but it’s never the same twice. It addresses the nascent, fast-developing field of digital justice, which identifies biases in machine-learning algorithms and points the way toward solutions. Algorithms, after all, help determine such things as “who gets bail, parole, a job, or a loan,” Herington notes. They’re systems designed to achieve fairness and objectivity, yet developed by fallible human beings who come at problems with assumptions and values that are rarely universal.

Who takes the course?

There were 38 undergraduates and two graduate students in the course this spring. About half of the undergraduates were computer science majors; the other half was split among the humanities and social sciences, clustered around philosophy, political science, and digital media studies. Several students were double majors in computer science and philosophy, “which is becoming a progressively more popular option,” Herington says.

What do students learn?

Herington requires students to have taken at least one prior philosophy course, but that still means some students may be unfamiliar with the field of AI, which has a language of its own, including many acronyms that have yet to enter common parlance. Fortunately, the uninitiated have a machine-learning algorithm (it goes by the name of “Google”) to help them school themselves in such details so that lessons can focus exclusively on larger learning objectives, such as:

  • identifying ethical challenges posed by machine learning
  • describing major theories of justice and applying them to the technical framework of machine learning
  • assessing various solutions to ethical challenges
  • constructing an argument for a solution and communicating it clearly and persuasively.

Herington guides students through the objectives in a series of units organized around specific ethical questions and challenges. They include how values are embedded in algorithms; how programmers strive for accuracy and fairness (and how and where they fall short); and how a digital world based on algorithms can limit speech, encourage echo chambers, and spread false information.

Assignment: ChatGPT encouraged

This spring, Herington used ChatGPT as a teaching tool. ChatGPT relies on a type of algorithm called a large language model (LLM). One assignment required students to use it or a similar LLM to help them write a short paper, after which they analyzed the algorithm’s performance. The topic: Given that we disagree about the limits of free speech, how should we design content filters for large language models?

Herington offered a few highly specific prompts for students to provide the LLM—for example, “summarize either the welfarist or egalitarian approach to the ethics of algorithmic systems in the context of moral disagreement.” In assessing the LLM’s performance, the students gained a sense of where it excelled and where it struggled.

Their verdict? Says Herington: “LLMs made the mundane parts of writing much less painful, from summarizing general knowledge topics to editing their prose for clarity and concision. It isn’t the lazy writer’s one-click solution, though. It fabricated quotations, cited articles inaccurately, and struggled to explain—much less critique—the details of authors’ arguments.”

Reading and discussion: An algorithm helps evaluate

Instructors who assign a lot of reading rarely escape the problem of students who come unprepared. Herington’s remedy is an online annotation platform that students use to access the readings, enter comments, respond to the comments of others, and pose questions. An algorithm evaluates their contributions based on criteria that Herington sets. For example, if you don’t understand something, clearly articulate the point of confusion. If you disagree with the author or with a classmate’s comments, offer reasons. Make at least four high-quality contributions.

And make sure they’re not all on the first page.