How to Develop a Face Recognition System Using Neural Networks


There is a lot of noise about artificial intelligence lately - plenty of controversy regarding what it is and, especially, what it isn’t. From a researcher’s perspective, neural networks are a powerful tool that automates tedious big-data tasks and also encourages experiments in other areas of research.

Together with Cristian Minea, Technical Lead in Bitdefender’s Content Filtering department, we’ve started a project in the field of facial recognition technologies.

We used artificial neural networks to construct a competitive facial detection and recognition system, with promising results.

How it all started

Neural networks are commonly used for text recognition and automated email spam detection. Starting in 2005, “image spam” used “noising” techniques to bypass filters (adding random pixels, inserting legitimate content such as logos, or splitting images into smaller parts), and this was becoming a growing problem.

We created a version of Optical Character Recognition (OCR), a tool to help extract texts from difficult and atypical contexts, based on two techniques - adaptive image thresholding and scale and rotation invariant features. With impressive initial results and first-hand experience, we took things further and, two years ago, we developed several computer vision technologies.
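The adaptive-thresholding step can be illustrated with a short sketch. This is not our OCR code itself, just a minimal numpy version of the idea: binarize each pixel against the statistics of its local neighborhood instead of a single global cutoff, so text survives uneven lighting and noisy backgrounds.

```python
import numpy as np

def adaptive_threshold(img, block=16, offset=10):
    """Binarize a grayscale image by comparing each pixel to the
    mean of its local block rather than one global threshold.
    `img` is a 2-D uint8 array; returns a 0/255 array."""
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = img[y:y + block, x:x + block]
            # a pixel is "ink" if it is darker than the local mean minus an offset
            out[y:y + block, x:x + block] = np.where(
                tile < tile.mean() - offset, 0, 255)
    return out
```

Production OCR pipelines typically use a sliding local window instead of disjoint blocks, but the principle is the same: the threshold adapts to each region of the image.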

Face detection

This technology can detect where human faces are positioned inside a photograph. To test it, we recorded 1,200 images with different facial expressions and head positions.


In the training phase, we relied on a machine learning algorithm called AdaBoost.            

AdaBoost is used in conjunction with other types of learning algorithms to improve their performance. Many machine learning problems suffer from the curse of dimensionality: each sample can expose thousands of candidate features, most of them irrelevant. The AdaBoost training process selects only features known to improve the predictive power of the model, reducing dimensionality and accelerating execution, since irrelevant features never need to be computed.
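That selection mechanism can be shown with a toy sketch (not our detector - a real face detector boosts over image features, not raw columns): a minimal AdaBoost built from one-feature decision stumps. Each boosting round commits to a single feature, so only a handful of the available features are ever evaluated at prediction time.

```python
import numpy as np

def train_adaboost(X, y, rounds=5):
    """Minimal AdaBoost over one-feature decision stumps.
    Each round picks the single (feature, threshold, polarity)
    that best separates the reweighted samples, so at most
    `rounds` of X's columns are used at test time. y in {-1, +1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)           # per-sample weights
    ensemble = []                     # (feature, threshold, polarity, alpha)
    for _ in range(rounds):
        best = None
        for f in range(d):
            for t in np.unique(X[:, f]):
                for pol in (1, -1):
                    pred = np.where(pol * (X[:, f] - t) > 0, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, f, t, pol, pred)
        err, f, t, pol, pred = best
        err = max(err, 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)   # stump's vote weight
        w *= np.exp(-alpha * y * pred)          # upweight the mistakes
        w /= w.sum()
        ensemble.append((f, t, pol, alpha))
    return ensemble

def predict(ensemble, X):
    score = sum(a * np.where(p * (X[:, f] - t) > 0, 1, -1)
                for f, t, p, a in ensemble)
    return np.sign(score)
```

The exhaustive threshold search is quadratic and only suitable for small data; the point is that the returned ensemble names just a few selected features.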

We performed several internal tests to measure detection accuracy and were quite satisfied with the results.

In 2015, to further validate the effectiveness of the system, we submitted it to an independent, external comparative test called Multi-Attribute Labelled Faces (MALF). MALF tested both independent academic algorithms and commercial systems on a data set of 5,000 never-before-seen images containing 11,300 faces.


Fig. BdFD reaches a 90% detection rate in the MALF independent tests

Overall, our detection rate was roughly 90%, higher than most of our competitors’ and 15% better than open-source technologies (OpenCV). Scanning was also fast, at 2.8 seconds per image; our competitors (other than Microsoft) performed the same task in about 5–6 seconds.

“This exercise showed us that it’s not the corpus of images that matters, but rather its diversity and the technique of accurately pinpointing faces in the training phase”, Minea says.

Face recognition

For the second stage, the face recognition process, we used an implementation of a Convolutional Neural Network (CNN), one of the best options available today, with a proven track record in the field of image recognition and classification. Inspired by the organization of the animal visual cortex, CNN has been successful in a myriad of scenarios, from identifying faces, objects and traffic signs to powering vision in robots and self-driving cars.
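The core operation a CNN learns is the convolution itself: sliding a small filter across the image and recording how strongly each patch responds. A minimal numpy sketch, with a hand-crafted edge filter standing in for the learned weights:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D convolution: slide `kernel` over `image`
    and record the dot product at each position. Stacks of such
    filters, with weights learned from data, form a CNN layer."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (image[i:i + kh, j:j + kw] * kernel).sum()
    return out

# A hand-made vertical-edge filter; a trained CNN discovers kernels
# like this (and far more abstract ones) on its own.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
```

The resulting feature map responds strongly where the image changes horizontally (an edge) and stays near zero in flat regions, which is exactly the kind of low-level cue early CNN layers pick up before later layers combine them into eyes, noses and whole faces.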

“Convolutional Neural Networks were a good fit for our project because they hold up to modifications and we can easily change the architecture of the convolutional network to fit our needs”, Minea says.

To put it to the test, we created an internal testing platform called Celebrity identification. For training, we inserted pictures of celebrities, roughly 200 for each person. We also began inserting our own pictures to see whether we have a celebrity doppelganger somewhere in the world, and the system managed to match similar faces with 80% accuracy.

However, the neural network is still sensitive to changes in light direction and overall low-light conditions. So we proceeded to work on face verification: the ability to identify whether two faces belong to the same person across a variety of poses and lighting conditions.
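A common way to frame verification (sketched below as an illustration, not our exact pipeline) is to map each face to an embedding vector with the network, then compare the two vectors; a similarity above a tuned threshold means “same person”.

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity of two face-embedding vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_person(emb1, emb2, threshold=0.7):
    """Verification decision: embeddings more similar than `threshold`
    are treated as the same identity. The 0.7 here is illustrative;
    in practice the threshold is tuned on a validation set to trade
    false accepts against false rejects."""
    return cosine_similarity(emb1, emb2) >= threshold
```

Because the comparison happens in embedding space rather than pixel space, a well-trained embedding network can absorb much of the pose and lighting variation before this simple decision rule is applied.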

We are still refining the technique to drive accuracy up to 90%, while also working on other features, such as logo recognition. This lets brands see whether their names are used in phishing schemes, mentioned in the media or appear anywhere online.

It is a work in progress, yet we are confident these technologies will become mainstream in all sorts of industries and areas, such as banking (for credit card scanning) and smart city video surveillance systems. When this happens, we will be ready.

