so there's a lot of guff going about in recent tech blogs, droning on about robots (drones) and ethics
here's a very simple thought experiment which doesn't need Terminator/Skynet to present a dilemma Real Soon Now
Cars are being fitted with devices that detect if they are heading for an obstacle and activate the brakes automagically to stop safely...
However, if not all cars have such tech, the car behind might rear-end you
the front and rear impacts represent different risks (the crumple zones in a car are designed more to absorb impact at the front than at the rear)
so if you detect an obstacle ahead, and a car behind (with or without a robot safety braker of its own, and assuming cooperation if it has one)... do you choose to brake less so that you amortise the impact over two cars?
so do you want a robot to act in the interests of ALL passengers in all vehicles, or "selfishly" on behalf of "this car only"?
i.e. if we design robot brakers according to Asimov's laws, do we want the 4th law as well as the usual first three?
[see the 4th law for ethical synthetic humanism, #101]
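here's a minimal sketch, in Python, of the two policies. everything in it is hypothetical: the injury_risk scoring, the 0.4/0.2 speed factors and the function names are made up purely to make the "selfish vs collective" choice concrete; no real automatic emergency braking system is claimed to work this way.

```python
def injury_risk(impact_speed_kmh: float, frontal: bool) -> float:
    # Crude placeholder score: frontal crumple zones are assumed to absorb more
    # energy, so a frontal hit at a given speed scores lower than a rear one.
    factor = 0.7 if frontal else 1.0
    return factor * impact_speed_kmh ** 2 / 1000.0


def choose_brake_force(own_speed_kmh: float,
                       closing_speed_kmh: float,
                       follower_has_auto_brake: bool,
                       selfish: bool) -> str:
    """Return 'full' or 'partial' braking.

    selfish=True  -> minimise risk to this car's occupants only
    selfish=False -> minimise the summed risk across both cars
                     (the "amortise the impact over two cars" option)
    """
    # Option A: brake hard. We stop short of the obstacle, but an unequipped
    # follower hits our rear at roughly its closing speed.
    rear_hit = 0.0 if follower_has_auto_brake else closing_speed_kmh
    full_self = injury_risk(rear_hit, frontal=False)
    full_follower = injury_risk(rear_hit, frontal=True)

    # Option B: brake partially. We take a moderate frontal hit on the obstacle,
    # but the follower has more room to slow, so any rear impact is small.
    residual = own_speed_kmh * 0.4          # hypothetical residual speed at the obstacle
    rear_soft = 0.0 if follower_has_auto_brake else closing_speed_kmh * 0.2
    partial_self = injury_risk(residual, frontal=True) + injury_risk(rear_soft, frontal=False)
    partial_follower = injury_risk(rear_soft, frontal=True)

    if selfish:
        return "full" if full_self <= partial_self else "partial"
    return ("full"
            if full_self + full_follower <= partial_self + partial_follower
            else "partial")


if __name__ == "__main__":
    for selfish in (True, False):
        decision = choose_brake_force(own_speed_kmh=100, closing_speed_kmh=30,
                                      follower_has_auto_brake=False, selfish=selfish)
        print("selfish" if selfish else "collective", decision)
```

with a follower that lacks automatic braking, the two policies can disagree: the selfish policy brakes hard and leaves the follower to plough into the back of us, while the collective one brakes less and shares the impact; give the follower a robot braker and the conflict disappears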