Epistemologist Jens Kipper has joined the University’s Department of Philosophy, bringing with him a focus on the nature of intelligence that spans the fields of philosophy, computer science, and artificial intelligence. (University of Rochester photo / J. Adam Fenster)

Artificial intelligence may be the province of machines, but it also reveals much about the human intelligence it mimics. Looking to replicate and expand on our mental abilities in artificial intelligence systems, we turn an engineer’s eye on our own intellects, says philosopher Jens Kipper.

He joined Rochester’s faculty this year as an assistant professor of philosophy. He specializes in the philosophy of mind and the philosophy of language.

Kipper bolsters Rochester’s research prowess in fundamental philosophical questions related to thought and language, says Randall Curren, chair of the Department of Philosophy, while also strengthening opportunities for collaboration with the departments of linguistics and computer science, and the Goergen Institute for Data Science. “At one end of the philosophy of artificial intelligence spectrum are questions about the nature of intelligence, and at the other end of the spectrum are immensely important questions about the social and ethical aspects of artificial intelligence,” Curren says. Kipper spans both.

He earned his PhD in philosophy at the University of Cologne in 2012 and later joined the faculty at the University of Bielefeld in Germany. At Cologne and Bielefeld, he taught courses on such topics as perception, scientific explanation, consciousness, and mental content. He’s the coauthor of Research Ethics: An Introduction, with Thomas Fuchs and others (Metzler, 2010) and the author of A Two-Dimensionalist Guide to Conceptual Analysis (Ontos, 2012).

Answers with Jens Kipper

There’s a kind of information that we call objective, such as the city of Rochester’s geographic coordinates. It’s true no matter where you are or what your point of view is.

There’s also another kind, which we can call “self-locating” information. It locates you in relation to something else. For example, if I say you and I are two meters apart, that tells us something about our relation to each other. The truth of that statement depends on your location, and when you move through time and space, it changes. Much of the information we get is self-locating.

Two-dimensional semantics is a theory of meaning and information content that is useful in characterizing how the objective and the self-locating information we receive are coordinated. A big part of my work involves understanding exactly how this works.

It’s a topic that is important for artificial intelligence. Think of a self-driving car. It has to navigate its environment. It’s receiving GPS information, and that’s objective. But it also needs information about its relation to the cars around it—that’s self-locating information. And the car has to integrate all of that.
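A minimal sketch of that integration step, in Python, can make the idea concrete. Everything here is hypothetical and simplified (a flat local map, invented function and parameter names): the car's own GPS fix is objective information, the sensed offset to a neighboring car is self-locating, and combining the two yields an objective estimate of where that neighbor is.

```python
import math

def locate_neighbor(own_east_m, own_north_m, heading_deg, rel_forward_m, rel_right_m):
    """Combine an objective position fix with self-locating sensor data.

    own_east_m / own_north_m: the car's own position in a local map frame (objective).
    heading_deg: which way the car is pointing, measured clockwise from north.
    rel_forward_m / rel_right_m: where a neighboring car appears relative to us
    (self-locating: these numbers change whenever we move or turn).
    Returns the neighbor's position in the same objective map frame.
    """
    heading = math.radians(heading_deg)
    # Rotate the body-relative offset into the shared map frame, then add our own position.
    east = own_east_m + rel_forward_m * math.sin(heading) + rel_right_m * math.cos(heading)
    north = own_north_m + rel_forward_m * math.cos(heading) - rel_right_m * math.sin(heading)
    return east, north

# A car at (100 m E, 50 m N), pointing due east, sees another car 20 m ahead and 3 m to its right.
print(locate_neighbor(100.0, 50.0, 90.0, 20.0, 3.0))  # roughly (120.0, 47.0) in the map frame
```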


One thing that I think is interesting about artificial intelligence is that it gives us a different perspective on ourselves. When we’re trying to understand how we should build something that solves a certain problem, we are also considering ourselves from an engineer’s point of view.

Intelligence involves data compression. It’s a term from computer science. For example, you take a picture and then you compress it so that you can keep it using less storage space. Human cognition is all about compressing data. Every second, our senses get huge amounts of information from our environment. We can’t attend to all of it, and much of it isn’t very important to us.
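As a purely illustrative sketch (not anything from Kipper's own work), here is lossy compression in miniature in Python: a stream of sensor readings is boiled down to block averages, throwing away most of the raw numbers while keeping the coarse pattern.

```python
def compress(readings, block_size=4):
    """Lossy compression in miniature: keep one average per block of readings.

    Most of the raw numbers are discarded; what survives is the coarse pattern,
    which is often all that matters to a perceiver or a machine.
    """
    return [
        sum(readings[i:i + block_size]) / len(readings[i:i + block_size])
        for i in range(0, len(readings), block_size)
    ]

raw = [10, 11, 9, 10,   52, 50, 49, 51,   10, 9, 11, 10]  # a brief spike in the middle
print(compress(raw))  # -> [10.0, 50.5, 10.0]: 12 numbers reduced to 3, the spike still visible
```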

But seeing important patterns is absolutely crucial in cognition, and it’s a central reason why artificial systems have struggled with these kinds of tasks. Here’s an example: the first time a computer beat the world chess champion was in 1997. At that time, computers struggled to recognize faces. Intuitively, you’d probably think playing chess was the more sophisticated problem for a computer to solve. After all, small children recognize faces, but they don’t play chess. But a major problem in facial recognition is that a face has so many different features—and the crucial part is focusing on the right kind of features. We’ve evolved to be particularly good at compressing data. We’re good at picking out the patterns that matter to us.

Artificial intelligence involves many ethical issues. I have a background in applied ethics, and I’ve introduced a course on the philosophy of artificial intelligence. I’m talking with students about the future of jobs, and privacy, and biases. For example, when you train an artificial neural network, you need to provide a lot of data. But our society has racial and gender biases, and that’s reflected in the data we’re putting in machines. They’re going to perpetuate our biases; it’s a big issue. You’d think machines are neutral, or even beautifully objective. But they’re also the product of the information they get. If the data is biased, that will be reflected in the judgments that machines make—and we might not even notice.
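A deliberately tiny, hypothetical illustration of that point in Python (invented data and names, not any real system): a model that simply learns decision rates from skewed historical records reproduces that skew, even though no one programmed bias into it.

```python
from collections import Counter

# Hypothetical historical hiring decisions, skewed toward group "A".
training_data = [("A", "hire")] * 80 + [("A", "reject")] * 20 + \
                [("B", "hire")] * 30 + [("B", "reject")] * 70

def learn_rates(data):
    """'Train' by recording, for each group, how often past decisions said 'hire'."""
    counts = Counter(data)
    rates = {}
    for group in {g for g, _ in data}:
        hires = counts[(group, "hire")]
        total = hires + counts[(group, "reject")]
        rates[group] = hires / total
    return rates

model = learn_rates(training_data)
print(model)  # group "A" gets a hire rate of 0.8, group "B" only 0.3: the data's skew, replayed
```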

One thing I immediately noticed when I came to Rochester, and to the philosophy department, is that people are very open to interdisciplinary collaboration. There are universities where people in the sciences or engineering, for example, say, “Oh, philosophers….” And I haven’t had that experience at all. Much of my work closely connects with other fields. But I’m not a computer scientist, and when I work on artificial intelligence, it’s really important to connect with people who have a certain expertise and exchange information with them. In Germany, I taught an artificial intelligence course, but it was for philosophy majors. Here, I have a mix of students—computer science majors, philosophy majors, engineering majors, economics majors. Artificial intelligence connects to so many different things, and it’s nice not to have a uniform cohort of students.

I’m also going to be teaching a new course on philosophy and science fiction. When the topic you’re teaching is very theoretical, it’s not always easy for students to find a way in. But if you have a fictional text where the characters face certain kinds of problems that are philosophically relevant, it’s more vivid. It’s a version of the “thought experiments” that philosophers often use, and I’ve found it’s a great way of teaching philosophy.
