Last week I attended the CIPD Northern Area Partnership (NAP) conference. A strong theme throughout the event was how we, as professional developers, can adjust to a world that is becoming increasingly led by technology. How can we put the ‘human’ back into human resources, when the focus is on using machine learning to automate so much of our work?
This is the real world
The 1999 film The Matrix presents a dystopian future in which the human race is enslaved by computers inside a simulated reality. The characters who are liberated from the simulation eventually take back control from the machines and reclaim their humanity, using guns, belief and kung fu.
Back in 1999, this was science fiction, yet we’ve now reached a point where we are baking artificial intelligence (AI) into machines, allowing them to think for themselves and, in some cases (driverless cars), to act for themselves.
In the words of Laurence Fishburne’s character Morpheus:
"Welcome to the real world."
A guideline for ethical design
In September 2016, the British Standards Institution (BSI) published BS 8611, a world first: a guide to the ethical design of robots and robotic systems. Developed by a team of scientists, academics and philosophers, it recognises the potential ethical issues arising from the increasing number of automated systems being introduced into our environments, both in industry and in consumer products. BS 8611 also emphasises that there must always be transparency about who is responsible for a robot’s behaviour, even when it is behaving autonomously.
The standard begins in a similar way to Isaac Asimov’s three laws of robotics, proposed in his 1942 science fiction short story Runaround:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm
- A robot must obey all instructions given by humans, except those that conflict with the first law
- A robot must protect its own existence as long as this does not conflict with the first two laws
The BSI guidelines also explore issues such as robot deception, robot addiction and the potential for a learning system to exceed its remit.
These types of guidelines will prove useful, but they will also be fraught with our own (human) ethical dilemmas. What would happen, for example, if a driverless car encountered an impossible choice: swerve to avoid a hazard and protect its own occupants, or risk hitting another vehicle and injuring the people inside it? In such a scenario, who would even be responsible: the occupants, the designers of the AI, or (somehow) the car itself?
The Anthropocentric Delusion
Part of our fear of making machines more intelligent stems from the belief that intelligence is embodied: that it is somehow connected to our human-ness.
This is known as the Anthropocentric Delusion: the belief that everything else should be measured against ourselves, on the assumption that being human is necessarily best.
But consciousness and intelligence are not always linked. We’ve assumed that the brain is like a computer, which leads to a further assumption: that if we can build a similar computer to complete tasks ‘intelligently’, then there is an automatic danger it will also become conscious.
These are huge leaps, and we need to tread carefully in inferring one from the other, including in the language we use. For example, robots are unlikely to ‘misbehave’; they are more likely to be ‘misprogrammed’.
What cannot be replicated
Peter Cheese, CEO of the CIPD, spoke at the #cipdnap17 conference on the impact of technology on the future of work. During his talk, he cited Moravec’s Paradox: the finding that the most difficult human skills to reverse engineer are those that are unconscious. As Hans Moravec writes:
“it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.”
This, says Peter, is where we should focus our efforts: on the things that computers cannot replace, the things that make us deeply human, the things that humans have held the longest. Things like emotion. We can then utilise technology to enhance what we do, not replace it.
This paradox has given rise to the term ‘cobots’: robots that collaborate with us rather than exist to be subservient. It is far more likely that we will end up in a range of symbiotic relationships with machines, with specialised AI systems that complement us and serve as cognitive prosthetics rather than human replacements.
Machines can understand what we show, but not what we feel
Research backs this up: machines (such as fingerprint readers and eye-detection systems) can understand what people show, but not what they feel.
Maja Pantic, Professor of Affective and Behavioural Computing at Imperial College London, has been working on machine analysis of human behaviour. Maja’s work has shown that AI can be taught to understand smiles, frowns, intonation of the voice and gestures, all indicative of different emotions that people may be experiencing. However, she has found that if someone wants to mask their emotions, it is near-impossible for AI to detect this in a non-intrusive way (i.e. without putting physiological sensors on the body).
On the other hand, researchers in Finland have recently created an algorithm to detect micro-expressions when participants were invited to demonstrate an emotion. Human observers identified the emotion with 72% accuracy; the computers achieved 82%.
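To give a flavour of what such a system involves (the Finnish team’s actual method isn’t described here, so everything below is an illustrative assumption rather than the published approach), here is a minimal sketch in Python: reduce faces to numeric features, then train a standard classifier to map those features to emotion labels.

```python
# Illustrative sketch only: stand-in data and a generic classifier,
# NOT the micro-expression method from the research described above.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(42)

# Hypothetical data: 200 samples, each a vector of 10 facial measurements
# (a real system would extract these from video frames), labelled with an emotion.
X = rng.normal(size=(200, 10))
y = rng.integers(0, 3, size=200)   # 0 = neutral, 1 = happy, 2 = surprised
X += y[:, None] * 0.75             # nudge features so the classes are separable

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = SVC(kernel="rbf")            # a standard support-vector classifier
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.0%}")
```

In practice the hard part is the feature extraction and the labelled data, not the classifier itself, which is one reason masked emotions remain so difficult for non-intrusive systems to read.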
Fundamentally, we are designing computers, artificial intelligence and machine learning in our own image. Perhaps we shouldn’t be surprised when we get scared by what we see?
Or maybe, we’d better start brushing up on our kung fu skills, just in case.