Interview
Mahesh Saptharishi CV
Mahesh Saptharishi, Motorola Solutions’ CTO, has more than 20 years of technology
leadership experience and leads innovation across Motorola Solutions’ platforms
in mission-critical communications, video and command centre software. His areas
of focus include applications that bring together artificial intelligence (AI) and human
intelligence to rapidly interpret vast quantities of data, as well as new user interfaces
for delivering information efficiently. Previously, Saptharishi spent five years as CTO for
Avigilon, a Motorola Solutions company and a provider of video and analytics solutions.
Under his leadership, Avigilon became a market leader in video analytics and AI with
capabilities such as self-learning video analytics, appearance search and unusual
motion detection.
Saptharishi earned a doctorate in machine learning from Carnegie Mellon
University and has authored numerous scientific publications, articles and patents.
www.criticalcomms.com September 2019
insights in visual and adaptive ways. This can help get less
experienced operators up to speed quickly.
CCT: What are your thoughts on the use of AI
and machine learning for video- and other forms
of analytics?
MS: When we started our work on video analytics, it took
almost an hour to calibrate and tune the analytics to deliver
accurate results. At sites where two or three cameras were
deployed it was not that big a problem. However, think
about a city security operation where you need to deploy
thousands of cameras; spending half an hour to an hour
trying to configure every camera takes considerable time. So,
we pioneered this concept of self-learning analytics, where
the camera watches and provides accurate results for human
verification within 20-30 seconds.
Every time an operator saw an event, they could very quickly
confirm the accuracy of the results and give feedback that
makes every subsequent result even better. That paradigm of
keeping a “human in the loop” who provides feedback is core
to our approach, and it is something we will embed into every AI
solution we make.
It’s worth noting that there is quite a lot of work going
on in quite a few non-public safety domains to use machine
learning to measure the tone of people’s voices and perform
emotional analysis, allowing AI to detect if someone is happy,
angry, aggressive or experiencing other emotions. This isn’t
something Motorola Solutions is developing per se, but it
shows that additional capabilities are being developed outside
of our industry that can be applied to enable better outcomes
for public safety and enterprise industries over time.
CCT: What are your thoughts on the privacy
concerns that some people and groups have
about AI?
MS: The concerns around the use of data and privacy are all
valid. We realise that with the great power of AI comes
great responsibility, and there need to be checks and balances.
We want AI to help people make better decisions and to
provide them with historical data that helps them see what
kinds of decisions led to a successful outcome in
similar situations.
When AI is deployed thoughtfully and incrementally, it has
great potential to help agencies improve safety and
efficiency for the public and their own personnel.
CCT: Do you think control room operators could
eventually be replaced with AI?
MS: A natural consequence of automating more tasks with
AI is that costs can be saved and some roles may no
longer be required – that’s just reality. However, we’re quite
far away from AI being able to replace critical roles such as
public safety dispatchers. We’ve spent lots of time working
with dispatchers and examining their workflows. Even the
simplest uses for AI-enhanced decision-making require people
to recognise exceptional circumstances and to determine
if the recommendations that the system puts forward are
appropriate. Human decision-making is currently irreplaceable
because a single mistake could put lives at risk. However,
AI will be very good at automating the mechanical and
monotonous tasks.
CCT: Paul Steinberg, what do you think will be
the benefits of 5G for mission-critical
communications users?
Paul Steinberg (PS): At a high level, 5G will basically
mean three things: increased capacity, reduced latency, and
increased connectivity of devices.
First, 5G represents an increase in capacity by giving access
to large swathes of new spectrum. The size of the spectrum
essentially defines the ‘size’ of the wireless pipe. While this
new spectrum comes in large chunks, it is typically very high-frequency
spectrum (3.5 GHz, 5–6 GHz and even millimetre wave).
The flipside is that these high bands don’t propagate nearly as far
as current cellular bands, so cell sites will have much smaller
coverage areas. The simple way to think about it in terms of
capacity and coverage is like a Wi-Fi network on steroids.
The second thing is how low latency opens the
door for more ‘mission-critical services’. The public
protection and disaster relief (PPDR) definition of ‘mission-critical’
is somewhat different from the one 3GPP uses, which is invariably
framed in relation to augmented and virtual reality, automated
vehicles, remote robotics being used in surgery and so on.
The third major component of 5G is to do with increased
connectivity of devices.