
In Review

DATA SCIENCE

Who’s Telling the Truth?

Can a data-informed system help identify those with something to hide? By Bob Marcotte
LIAR, LIAR? Can you tell who’s being deceptive? A Rochester research group is using data science to improve screening systems so that security officers and others can better identify people who may be trying to hide important, or even sinister, information. One of these student models is deliberately not telling the truth. Can you spot the liar? The answer is C. (Photo: J. Adam Fenster)

Imagine someone is fidgeting in a long line at an airport security gate. Is that person simply nervous about the wait? Or does the passenger have something to hide?

Even highly trained Transportation Security Administration (TSA) airport security officers have a difficult time telling whether someone is lying or telling the truth—despite the billions of dollars and years of study that have been devoted to the subject.

In a project led by Tay Sen and Kamrul Hasan, PhD students in the lab of Ehsan Hoque, the Asaro-Biggar ’92 Family Fellow in Data Science and an assistant professor of computer science, researchers are exploring a screening system that they say may be able to detect deception more accurately based on facial and verbal cues.

In a report this spring, the team used data science and an online crowdsourcing game to assemble a database of more than 1.3 million frames of facial expressions. Crunching the data further, they identified five distinct smile-related expressions. The one most frequently associated with lying was a high-intensity version of the so-called Duchenne smile, a facial expression that involves involuntary movement of muscles along the cheekbone.
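The article doesn’t detail the team’s analysis, but grouping millions of frames into a handful of recurring expression types is the kind of task typically handled by clustering. The sketch below is purely illustrative—not the researchers’ actual method—and uses made-up, synthetic “smile feature” vectors (stand-ins for facial action-unit intensities) grouped into five clusters with a plain k-means loop:

```python
import math
import random

random.seed(0)

# Hypothetical data: each frame summarized by two smile-related feature
# intensities (e.g., lip-corner pull, cheek raiser). These are synthetic
# stand-ins, not the study's real facial-expression measurements.
true_centers = [(0, 0), (2, 0), (0, 2), (2, 2), (4, 1)]
frames = [(random.gauss(cx, 0.3), random.gauss(cy, 0.3))
          for cx, cy in true_centers
          for _ in range(40)]

def kmeans(points, k=5, iters=50):
    """Plain k-means on 2-D tuples, standard library only."""
    centers = random.sample(points, k)
    for _ in range(iters):
        # Assign each frame to its nearest cluster center.
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda j: math.dist(p, centers[j]))
            groups[nearest].append(p)
        # Recompute each center as the mean of its assigned frames.
        new_centers = []
        for i, g in enumerate(groups):
            if g:
                new_centers.append(tuple(sum(v) / len(g) for v in zip(*g)))
            else:
                new_centers.append(centers[i])  # keep an empty cluster's center
        centers = new_centers
    return centers, groups

centers, groups = kmeans(frames, k=5)
print(len(centers), "clusters over", sum(len(g) for g in groups), "frames")
```

In a real pipeline, each frame would be described by many more features (full sets of facial action units rather than two numbers), and a library implementation such as scikit-learn’s `KMeans` would replace this hand-rolled loop.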

The team plans to further refine the system, but they think they’ve only scratched the surface of potential findings from the data they’ve collected, work that could have implications for how TSA officers are trained.

“In the end, we still want humans to make the final decision,” Hoque says. “But as they are interrogating, it is important to provide them with some objective metrics that they could use to further inform their decisions.”