NOVEMBER 2019
Natural selection on the unmanned battlefield
Forget the Terminator scenario. The future of AI-based warfare could be far weirder than that
Two articles in last month’s issue caught my eye. The first was about the Royal Navy’s decision to test extra-large autonomous submarines with a view to incorporating them in its fleet, and the second concerned the MOD’s acquisition of five unmanned ground vehicles for battlefield resupply missions (Qinetiq’s Titan UGV is pictured below).
Now, as I’m a science fiction author, you might be expecting me to leap straight to the conclusion that these automated vehicles will somehow rise up against us and destroy the world in a Terminator-style apocalypse. And while that may be a fun scenario for a Hollywood blockbuster, frankly any species dumb enough to place its entire offensive capability in the charge of a single artificial intelligence deserves everything it gets.
No, in this month’s column, I
want to look at some of the stranger
implications of this technology.
To start with, let me state the
obvious: war produces casualties,
and if we’re deploying autonomous
vehicles into active theatres, they
are going to get damaged. It’s easy
to imagine automated ambulances
ferrying human casualties away
from the front line, but what about
unmanned tow trucks and drones
equipped to repair autonomous
vehicles? Machines repairing
other machines without human
intervention.
If those machines can be repaired on the battlefield, perhaps they can also be improved and modified in situ to cope with unexpected changes in terrain, mission requirements, or threat level? Throw in some simple learning algorithms for the tow trucks, and that sounds like something I could write a story about: a fleet of war machines that are turned loose and adapt to the needs of the battle as it happens, undergoing a rapid Darwinian machine evolution dictated by the circumstances in which they are operating.
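For the curious, that kind of "Darwinian machine evolution" can be sketched as a toy evolutionary loop. Everything here is invented for illustration: the trait names, the fitness weights, and the idea that field repairs double as random modifications are all assumptions of the sketch, not a description of any real system.

```python
import random

# Each "vehicle" is just a bundle of trait values. Each generation, the
# designs that score best against the current environment survive, and the
# replacements are mutated copies of survivors.
TRAITS = ["armour", "speed", "stealth", "sensor_range"]

def fitness(vehicle, threat_level):
    # A harsher theatre rewards armour and stealth over raw speed
    # (weights are arbitrary, chosen only to make the toy model run).
    return (vehicle["armour"] * threat_level
            + vehicle["stealth"] * threat_level
            + vehicle["speed"] * (1.0 - threat_level)
            + vehicle["sensor_range"] * 0.5)

def mutate(vehicle):
    # A field repair that nudges one trait up or down at random.
    child = dict(vehicle)
    trait = random.choice(TRAITS)
    child[trait] = max(0.0, child[trait] + random.uniform(-0.1, 0.1))
    return child

def evolve(population, threat_level, generations=50):
    for _ in range(generations):
        population.sort(key=lambda v: fitness(v, threat_level), reverse=True)
        survivors = population[: len(population) // 2]   # top half persists
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(len(population) - len(survivors))]
    return population

random.seed(0)
fleet = [{t: random.random() for t in TRAITS} for _ in range(20)]
evolved = evolve(fleet, threat_level=0.9)  # a high-threat environment
```

Because the top half of each generation survives unchanged, the best design is never lost, so fitness can only ratchet upwards; in a high-threat run the surviving designs drift towards armour and stealth, just as the supply truck in the column does.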
What might such machines
look like by the end of a protracted
conflict? If the other side also uses
similar technology, would the
evolution be accelerated as each
side became involved in a race
to outclass the other? A simple
unmanned supply truck might
evolve into a heavily armoured
stealth vehicle with fat mesh tyres
that allow it to traverse any kind of
rough terrain, while being almost
immune to IEDs and other hazards.
Earlier, I mentioned how
unwise it would be to place your
entire military capability under
the command of a single artificial
intelligence. However, the ‘smarter’
an unmanned vehicle is, the more
chance it has to survive, so an
ongoing upgrade of its onboard
processing power wouldn’t be
unreasonable. But how smart do
you want a drone to be? At what
point will it assess its situation and
realise its best chance of survival is
to refuse to follow orders or defect
to the enemy?
Assuming we somehow
manage to avoid insurrection in the
ranks, we face another potential
problem when machines start
upgrading machines on an ad hoc
basis. We run the risk that sooner
or later, they might become too
complex for us to understand.
We’ll lose the ability to repair our
own creations, as they diverge into
a multitude of sub-species, each
with its particular specialisms and
evolutionary history. What started
out as a tank might come back to
us as a swarm of complex drones or
a slick of nanotechnological goop.
At that point, even if they don’t
evolve the intelligence to become
disloyal, could we still really claim
to be in control of them? If we can’t
understand how they work, can
we trust them to make the life-or-death
decisions that are necessary
on a battlefield? If an unmanned
vehicle decides the success of its
mission would be increased by the
neutralisation of civilian targets,
would we be able to convince it
otherwise?
Some of you may remember
the talking bomb in the movie Dark
Star, which discovers philosophy,
decides it’s god, and with the words,
“Let there be light,” detonates
while still attached to the ship that
should have dropped it. That is
something we definitely want to
avoid.
We also want to avoid the
situation described in Philip K.
Dick’s story ‘Second Variety’, where
the few remaining human soldiers
on both sides of a conflict discover
that their automated weapons have
gained sentience and joined forces,
and are now lying to their former
masters about the progress of a war
that’s no longer happening.
Leon Trotsky claimed that,
“War is the locomotive of history.”
If our unmanned vehicles go on to
evolve beyond us, then perhaps war
will also provide the future of the
locomotive.
Sci-fi eye
Gareth L. Powell
www.theengineer.co.uk