"Discuss how you dealt with an ethical dilemma" goes the essay topic for a college application. A question commonly posed to comprehend the dynamics that force us to make those oh-so-difficult judgment calls. These are dilemmas for which there is no "right" answer varying from scenario to scenario, falling into the "gray area" – with no black-and-white answer. Humans are wired to make tough decisions bringing all the context and principles to bear. Similarly, can devices apply the available information to make the right judgment calls? To some extent, I would say. Will the IoT face ethical dilemmas? Absolutely! Question is will the IoT realize it? That is the real dilemma.
It is those "little gray cells" – one of the favorite phrases of the fictional character Hercule Poirot created by Agatha Christie. Perhaps, they are gray having worked with a world of information -- and emotions – to make those difficult, but real, decisions.
Take the case of the Trolley Problem, introduced by Philippa Foot in 1967 and referenced in this Aeon magazine article by Tom Chatfield. Imagine you are the driver of a runaway tram heading down a track where five men are working. All are certain to die when the trolley reaches them. You can switch the trolley onto an alternative spur. Unfortunately, one man is working on that spur and will be killed if the switch is made. As the article explains, conventional wisdom would be to minimize the losses (one fatality is better than five). We could build such rules into devices to re-route the trolley accordingly.
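A "minimize the losses" rule is trivial to state in code, which is precisely what makes it feel inadequate. Here is a minimal sketch of such a rule; the track names, casualty estimates, and function name are my own illustrative assumptions, not anything from the article:

```python
# Hypothetical sketch: a naive "minimize fatalities" rule a device might apply.
# The track names and casualty counts are illustrative assumptions only.

def choose_track(casualties_by_track):
    """Return the track with the lowest estimated casualty count."""
    return min(casualties_by_track, key=casualties_by_track.get)

# The trolley scenario: five workers on the main track, one on the spur.
decision = choose_track({"main": 5, "spur": 1})
print(decision)  # -> spur
```

The rule is complete in five lines, and that is the point: everything that makes the dilemma hard lives outside the function.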
Enter emotion. As the driver, you recognize the individual on the other track as someone you care about -- or, at the other extreme -- someone you perceive to be an adversary. Suddenly, the rules discussed above undergo some distortion, and you make a split-second decision -- with the ethical post-mortem to follow later as needed.
Despite the ability to continuously collect data, is it really possible to build that "gray thinking matter" into the IoT? I say "No." Only humans can think like humans. Moreover, the manner and extent to which we apply ethics defines our character and varies from person to person. Perhaps this is why computers can never be Data Scientists.
So, what could we do to build ethics into the device that drives the trolley? We can never really anticipate all possible scenarios and program appropriate responses. Avoiding the collision altogether through early-warning detection may be the most effective strategy, even though it steps outside the constraints of the problem as posed.
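The avoidance strategy can also be sketched simply. This is a hypothetical illustration, assuming a sensed obstacle distance and a fixed deceleration; the physics is just the standard stopping-distance formula (v²/2a), and every parameter name and threshold here is my own assumption:

```python
# Hypothetical sketch: avoid the dilemma by braking while stopping is still possible.
# Deceleration rate and safety margin are illustrative assumptions only.

def braking_distance(speed_mps, deceleration_mps2=3.0):
    """Distance needed to stop from the current speed: v^2 / (2a)."""
    return speed_mps ** 2 / (2 * deceleration_mps2)

def should_brake(speed_mps, obstacle_distance_m, safety_margin_m=10.0):
    """Trigger braking early, long before a choice of victims ever arises."""
    return obstacle_distance_m <= braking_distance(speed_mps) + safety_margin_m
```

The design choice worth noting: the device never weighs one life against five. It only asks whether stopping is still possible, which is a question it can actually answer.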
The debate continues over whether "business ethics" is an oxymoron. "IoT and ethics" may prove a challenging pairing too -- because the IoT can never be fully automated into being human.
And that is not exactly a dilemma – it is just plain and simple reality.