Our faces reveal a great deal about our state of mind and emotions. Facial expressions are an essential part of nonverbal communication, and even if we cannot explain how we do it, we can usually see in someone's face how they feel. In many settings, reading facial expressions matters a great deal: a teacher may watch a student's face to gauge whether the student is engaged, and a nurse may check a patient's face for signs that their condition is improving or worsening.
Thanks to technological advances, computers can already do an excellent job of recognizing faces. Recognizing facial expressions, however, is an entirely different matter. Many researchers in the field of Artificial Intelligence (AI) have tried to tackle this problem with modeling and classification techniques, including convolutional neural networks (CNNs). But facial expressions are complex and subtle, and recognizing them typically requires large neural networks that take a long time to train and are computationally expensive.
To address these issues, a team of researchers led by Dr. Jia Tian of Jilin Engineering Normal University in China recently developed a new CNN model for facial expression recognition. As described in an article published in the Journal of Electronic Imaging, the team focused on striking the right balance between training speed, memory usage, and model accuracy.
One significant difference between standard CNN models and the one proposed by the team was the use of depthwise separable convolutions. This type of convolution, the core operation of each CNN layer, differs from the usual kind in that it processes the different channels of the input image (such as RGB) independently and then integrates the results at the end.
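To make the idea concrete, here is a minimal sketch of a depthwise separable convolution in plain NumPy (a toy illustration, not the authors' implementation): each input channel is filtered independently with its own spatial kernel, and a 1x1 "pointwise" step then mixes the channel results. The parameter comparison at the end shows why this factorization is cheaper than a standard convolution.

```python
import numpy as np

def depthwise_separable_conv(image, depthwise_kernels, pointwise_weights):
    """Toy depthwise separable convolution (stride 1, no padding).

    image: (H, W, C) input, e.g. C=3 for an RGB image
    depthwise_kernels: (k, k, C) - one spatial filter per input channel
    pointwise_weights: (C, M) - 1x1 convolution mixing C channels into M outputs
    """
    H, W, C = image.shape
    k = depthwise_kernels.shape[0]
    out_h, out_w = H - k + 1, W - k + 1

    # Step 1 (depthwise): each channel is filtered independently.
    depthwise = np.zeros((out_h, out_w, C))
    for c in range(C):
        for i in range(out_h):
            for j in range(out_w):
                patch = image[i:i + k, j:j + k, c]
                depthwise[i, j, c] = np.sum(patch * depthwise_kernels[:, :, c])

    # Step 2 (pointwise): a 1x1 convolution integrates the per-channel results.
    return depthwise @ pointwise_weights  # shape (out_h, out_w, M)

# Parameter count for a 3x3 convolution, 3 input channels, 16 output channels:
k, C, M = 3, 3, 16
standard_params = k * k * C * M        # 432 weights for a standard convolution
separable_params = k * k * C + C * M   # 27 + 48 = 75 weights when factorized
```

The factorization trades a single large filter bank for two much smaller ones, which is the source of the memory and speed savings the article describes.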
By combining this type of convolution with so-called residual blocks, the proposed model could extract facial features progressively, from coarse to fine. In this way, the team significantly reduced the computational cost and the number of parameters the system must learn in order to classify expressions accurately. "We were able to find a model with a good generalization capacity with only 58,000 parameters," said Tian.
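The residual blocks mentioned above can be sketched in a few lines. The key idea is a skip connection that adds the block's input back to its transformed output, so the block only has to learn a small correction. This is an illustrative sketch using simple dense transforms for the inner function; the paper's actual blocks are built from convolutions, and the weight names here are hypothetical.

```python
import numpy as np

def relu(x):
    """Standard rectified-linear activation."""
    return np.maximum(0.0, x)

def residual_block(x, w1, w2):
    """Minimal residual block: output = activation(x + F(x)).

    The skip connection (the `x +` term) lets information and gradients
    bypass the learned transformation F, which is what allows deep models
    to stay trainable while keeping the parameter count low.
    Here F is two small dense transforms purely for illustration; in the
    team's model F would be built from depthwise separable convolutions.
    """
    f = relu(x @ w1) @ w2   # the learned "correction" F(x)
    return relu(x + f)      # skip connection adds the input back
```

Because each block refines rather than replaces its input, stacking such blocks lets the network move from coarse features in early layers to fine ones in later layers with relatively few weights.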
The researchers tested their model by comparing its facial expression recognition performance with that of other models reported in the literature. All models were trained and tested on the popular Extended Cohn-Kanade dataset, which contains more than 35,000 labeled images of faces expressing common emotions. The results were encouraging, with the Tian team's model achieving the highest accuracy (72.4%) while using a small number of parameters.
"The model we have developed is very effective at detecting facial expressions even with smaller sample datasets. The next step in our research is to further improve the model architecture and achieve better classification performance," said Tian.
Since facial expression recognition could find extensive use in human-computer interaction, safe driving, intelligent monitoring, surveillance, and medicine, let us hope the team realizes its vision soon.