If you read Arthur C. Clarke’s 2001: A Space Odyssey or saw the film, you know who HAL is. Good. Keep him in mind while reading this. So… South Korea “deploys robots to detect and kill intruders.” I don’t want to scaremonger by simply waving HAL in your face. Instead, let me give you two reasons why robocops, battle robots, or judge robots, for that matter, are a bad idea from a legal standpoint.
First, robots follow programs, and no program can predict every real-life possibility. Robots lack that uniquely human faculty of discretion. The best a machine can do to emulate discretion is generate a random number. A grenade-launching machine exercising “discretion” would be like loading one round into a revolver, spinning the cylinder, and pulling the trigger. Yes, that is called Russian roulette, especially if you point the gun at your own head, or at an “intruder.”
Second, a robot is not accountable. It doesn’t care if you appeal and have its decision overturned. And if the reviewing body must send the case back to a human for reconsideration, why use the machine in the first place? Sometimes the case will simply be moot, especially if the robot’s decision involved live fire.
Law assumes human actors; our entire legal system and tradition rests on this premise. Law doesn’t micromanage because it routinely delegates to human discretion. Sometimes it doesn’t strike the right balance, as with the law of street protest in Canada, but I’ll take unsophisticated humans in uniform over armed robots any day. Human discretion rests on a thick layer of experience, learning, feelings, values, and responsibility. If the state is to make decisions affecting our fundamental rights and freedoms, only its human agents should have that power. No robocops, please.