Researchers from Cornell University have created an earphone system that can track a wearer’s facial expressions even when they’re wearing a mask. C-Face can monitor cheek contours and convert the wearer’s expression into an emoji. That could allow people to, for instance, convey their emotions during group calls without having to turn on their webcam.
“This device is simpler, less obtrusive and more capable than any existing ear-mounted wearable technologies for tracking facial expressions,” Cheng Zhang, director of Cornell’s SciFi Lab and senior author of a paper on C-Face, said in a statement. “In previous wearable technology aiming to recognize facial expressions, most solutions needed to attach sensors on the face and even with so much instrumentation, they could only recognize a limited set of discrete facial expressions.”
The earphone system uses two RGB cameras, one positioned below each ear, to record changes in cheek contour as the wearer's facial muscles move.
Computer vision and a deep learning model then reconstruct the facial expression from those images: a convolutional neural network analyzes the 2D cheek contours and translates them into 42 facial feature points representing the position and shape of the wearer's mouth, eyes and eyebrows.
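To make the pipeline concrete, here is a minimal toy sketch, not the researchers' actual model, of how a convolutional network can map a 2D cheek image to 42 (x, y) feature points. All layer sizes, the 32x32 input, and the random weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(image, kernel):
    """Valid 2D cross-correlation of a single-channel image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def predict_landmarks(cheek_image):
    """Toy CNN: one conv layer -> ReLU -> flatten -> linear regression head."""
    kernel = rng.standard_normal((3, 3))
    features = np.maximum(conv2d(cheek_image, kernel), 0.0)  # ReLU activation
    flat = features.ravel()
    # Dense head regressing 42 landmarks, each an (x, y) pair.
    w = rng.standard_normal((42 * 2, flat.size)) * 0.01
    return (w @ flat).reshape(42, 2)

# A fake 32x32 grayscale frame stands in for a cheek-camera image.
landmarks = predict_landmarks(rng.standard_normal((32, 32)))
print(landmarks.shape)  # (42, 2)
```

A real system would train the convolutional filters and regression weights on labeled face data; the point here is only the shape of the computation, image in, 42 landmark coordinates out.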
C-Face can translate those expressions into eight emoji, including ones representing neutral or angry faces. The system can also use facial cues to control playback options on a music app. Other possible uses include having avatars in games or other virtual settings express a person’s actual emotions. Teachers might be able to track how engaged their students are during remote classes too.
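One simple way to turn 42 landmark points into one of eight emoji is nearest-centroid classification. The sketch below is purely illustrative: the label names and centroids are made-up placeholders, not C-Face's trained classifier.

```python
import numpy as np

# Assumed label set; the paper only names neutral and angry explicitly.
EMOJI = ["neutral", "angry", "happy", "sad", "surprised",
         "open_mouth", "squint", "pucker"]

rng = np.random.default_rng(1)
# One made-up "average landmark layout" (42 points, flattened) per emoji.
centroids = {label: rng.standard_normal(84) for label in EMOJI}

def classify(landmarks):
    """Return the emoji whose centroid is nearest to the flattened landmarks."""
    flat = np.asarray(landmarks).ravel()
    return min(EMOJI, key=lambda e: np.linalg.norm(flat - centroids[e]))

sample = rng.standard_normal((42, 2))  # stand-in for predicted landmarks
print(classify(sample) in EMOJI)  # True
```

In practice the centroids would come from averaging landmark layouts collected for each expression, and a learned classifier would likely outperform raw Euclidean distance.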
Due to the impact of COVID-19, the researchers could only test C-Face with nine participants. Still, emoji recognition was more than 88 percent accurate, and recognition of facial cues more than 85 percent accurate. The researchers found that the earphones' battery capacity limited the system, so they plan to develop less power-intensive sensing technology.