Facial recognition
www.criticalcomms.com March 2020
NXP is involved in the production of facial recognition
technologies in three different ways: providing its
processors to developers; providing tools, software libraries
and a software development environment that enable developers
to more easily build facial recognition applications; and
providing complete facial recognition solutions (including
software and hardware).
Tateosian adds: “In some of our newer devices, there
are hardware accelerators that are specifically designed to
accelerate neural networks and our software development
environment, called eIQ, enables developers to take a wide
variety of models and inference engines and port those in an
efficient way to NXP processors.”
NXP has just launched a new ‘vision solution’ that
includes a hardware module design and associated software
to facilitate offline face and expression recognition. This
means, Tateosian explains, “a customer can buy a module
with a camera on it and the second they plug it in, they
can see their face through the camera and see that it’s not
recognising their face. Then we have a set of different ways
that they can train faces on the device, instantly.”
This simplification of the development cycles behind
facial recognition technologies opens the doors to more
widescale use in the coming years. Tateosian says: “What
used to require very high-performance processors and
cloud-based support for processing can now be done on the
edge without cloud intervention. The devices themselves are
changing, of course, but really the bigger breakthrough is
on the software side. The software is getting more and more
sophisticated and streamlined, and that is enabling face
recognition to become more prevalent in the future.
“There’s always – or maybe I’m overstepping to say
‘always’ – but there’s always going to be this use-case for
really sophisticated, cloud-based facial recognition for
public security or through customs or immigration and
other areas. There may be these powerful engines that can
recognise multiple faces at a single time and process those in
the cloud.”
He adds: “Being able to do all the processing and learning
on the edge means the devices themselves don’t need to be
connected to the cloud and any faces that are registered on
the device can simply be erased by the user.” This capability
might help lower citizen discomfort with facial recognition
technologies being used by law enforcement in public
spaces. Products like those developed by NXP mean the
device “doesn’t ever need to send the face ID information to
the cloud. All the facial data and the camera feed remains
local on the device itself.”
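The on-device workflow Tateosian describes — enrol a face locally, match against it, and let the user erase it — can be sketched as a small in-memory registry. This is an illustrative sketch only, not NXP's API; the embedding vectors stand in for whatever face template the device actually computes.

```python
class LocalFaceStore:
    """Minimal on-device face registry: all data stays local and is erasable."""

    def __init__(self, threshold=0.8):
        self.threshold = threshold  # minimum similarity to count as a match
        self._templates = {}        # name -> embedding (list of floats)

    def enroll(self, name, embedding):
        """Register a face template under a user-chosen name."""
        self._templates[name] = embedding

    def match(self, embedding):
        """Return the best-matching enrolled name, or None if below threshold."""
        best_name, best_score = None, self.threshold
        for name, template in self._templates.items():
            score = _cosine(embedding, template)
            if score >= best_score:
                best_name, best_score = name, score
        return best_name

    def erase(self, name=None):
        """Delete one registered face, or all of them (no cloud copy exists)."""
        if name is None:
            self._templates.clear()
        else:
            self._templates.pop(name, None)


def _cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0
```

Because the dictionary lives only in device memory, `erase()` really does destroy the face data — there is no cloud copy to worry about.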
NXP says its facial recognition portfolio is powered by
an ‘inference engine’. Tateosian says this term is used to
explain the processing that takes place on the device itself.
He explains that these engines are created from a large
training dataset and, for vision, the first thing the camera
needs to do is find the head in the frame. The inference
engine locates the face within the image frame, then focuses
on the face and creates a model that can be pushed through
the engine to see if there is a match on the other side.
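The pipeline Tateosian outlines — find the face in the frame, compress it into a compact model, then push that model through the engine to look for a match — can be sketched in three stages. The function bodies below are toy stand-ins, not NXP's eIQ API; a real system would run detection and embedding networks on the device's accelerators.

```python
def detect_face(frame):
    """Stage 1: locate the face. Toy stand-in for a detection network:
    returns the bounding box (r0, c0, r1, c1) of non-zero pixels, or None."""
    coords = [(r, c) for r, row in enumerate(frame)
              for c, v in enumerate(row) if v > 0]
    if not coords:
        return None
    rows = [r for r, _ in coords]
    cols = [c for _, c in coords]
    return (min(rows), min(cols), max(rows), max(cols))


def embed_face(frame, box):
    """Stage 2: compress the face crop into a small, fixed-length
    feature vector. Toy stand-in for an embedding network."""
    r0, c0, r1, c1 = box
    pixels = [frame[r][c] for r in range(r0, r1 + 1)
                          for c in range(c0, c1 + 1)]
    return [sum(pixels) / len(pixels), max(pixels), min(pixels)]


def match_face(vec, templates, tol=10.0):
    """Stage 3: push the vector 'through the engine' — here, simply the
    nearest enrolled template by Euclidean distance, or None if too far."""
    best, best_d = None, tol
    for name, template in templates.items():
        d = sum((a - b) ** 2 for a, b in zip(vec, template)) ** 0.5
        if d <= best_d:
            best, best_d = name, d
    return best
```

The three stages compose directly: `match_face(embed_face(frame, detect_face(frame)), enrolled)` is the whole recognition pass.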
Understanding the limits
Facial recognition falls under a broader category of
biometric data. Yet other types of biometric data do not have
a similar perception of inaccuracy. Merritt Maxim,
vice-president and research director at Forrester, explains that
fingerprints have a long history of research, and fingerprint
analysis is supported by intellectual property that is accepted
as highly accurate. He says: “Fingerprints have been used for
decades, so there is a good understanding of how it works
and how fingerprints can change. Facial recognition is much
newer and therefore hasn’t had the same level of usage in the
field and that’s why there continues to be questions about
how well it adapts to behavioural changes. You get older,
your hair gets gray, you grow a beard, or you get wrinkles
or other things… does a facial recognition technology have
the ability to adapt to those physiological changes and still
maintain a high level of accuracy?”
That question, Maxim says, will not be answered in the
near term. He says: “We won’t really know until this has
been used for an extended period of time. Right now, we’re
still in the early stages. What I looked like 10 years ago
is a little bit different to what I look like now. Can facial
recognition adjust? Until we see that in reality, I think that’s
still an open question.”
Maxim adds: “The bigger issue is some of the inherent
biases that seem to exist in some facial recognition
algorithms, with strong racial or gender biases that mean
people are misidentified. Some of that can be based on
the data that’s used to train the model, and that can
potentially lead to the apprehension of the wrong individual
because the facial recognition technology didn’t make the
correct match.”
This does not sound like an easy issue to iron out.
When asked whether technology companies or public
safety agencies will need to build new datasets that are
representative of the populations being observed, Maxim said
how this will play out is still to be determined. More
diverse training data would of course help, but other
functionalities or capabilities might emerge that also help
address the issue of bias.
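One concrete way to surface the biases Maxim describes is to break recognition error rates out by demographic group rather than reporting a single aggregate figure. A minimal sketch, assuming each test record carries a group label, a true identity and the system's predicted identity:

```python
def error_rates_by_group(records):
    """Compute the misidentification rate per demographic group.
    Each record is (group, true_identity, predicted_identity);
    any mismatch counts as an error. A system that looks accurate
    overall can still show a much higher rate for one group."""
    totals, errors = {}, {}
    for group, true_id, predicted_id in records:
        totals[group] = totals.get(group, 0) + 1
        if predicted_id != true_id:
            errors[group] = errors.get(group, 0) + 1
    return {g: errors.get(g, 0) / n for g, n in totals.items()}
```

Reporting per-group rates like this is what makes a skewed training set visible before it leads to a wrongful apprehension, rather than after.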
A need to improve the accuracy of these technologies is
also an area identified by NXP’s Tateosian. He says: “I think
there’s going to be more and more improvement both in
the fundamental technology and doing so under different
conditions. Lighting conditions, for example, have a large
role to play in this. Being able to maintain accuracy across a
wide range of lighting conditions is important, and I think
that’s something that’s going to happen.”
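One standard technique for reducing sensitivity to lighting — offered here as an illustration, not as anything NXP has described — is to normalise image brightness before detection, for example with histogram equalisation on a greyscale frame:

```python
def equalise_histogram(image, levels=256):
    """Spread a greyscale image's intensities across the full range,
    so under- and over-lit frames end up with comparable contrast.
    `image` is a list of rows of integer pixel values in [0, levels)."""
    # Count how often each intensity occurs.
    hist = [0] * levels
    for row in image:
        for v in row:
            hist[v] += 1
    total = sum(hist)
    # Build the cumulative distribution of intensities.
    cdf, running = [0] * levels, 0
    for i, count in enumerate(hist):
        running += count
        cdf[i] = running
    cdf_min = next(c for c in cdf if c > 0)

    def remap(v):
        if total == cdf_min:
            return v  # flat image: nothing to equalise
        return round((cdf[v] - cdf_min) / (total - cdf_min) * (levels - 1))

    return [[remap(v) for v in row] for row in image]
```

A dark, low-contrast frame comes out stretched across the full 0–255 range, giving a downstream face detector a more consistent input regardless of the lighting it was captured under.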
Tateosian also anticipates that the systems running facial
recognition algorithms will develop to require less power,
which “means you’re going to start to see these things in
battery-operated devices as well”.
Existing functionalities will also gradually fall in cost,
making them accessible to a wider range of applications.
These include emotion detection, age prediction and gender
prediction, which Tateosian says are “really on the high end
but I think you’re going to see them become more mainstream”.