The NYPD's first robo-dog, built by Boston Dynamics, on patrol last week
Isaac Asimov’s 1950 story collection I, Robot is most famous today for introducing the world to his influential Three Laws of Robotics. Less famously, it also introduced Asimov’s early justification for robotics: he believed technology was a manifestation of science, and science, unlike humans, is wholly rational. The more of society we could entrust to robots, which Asimov saw as more rational forms of humanity, the better he believed society would be.
I couldn’t help remembering Asimov’s principles last week when a video began circulating on social media. The NYPD, a bureaucratic agency not known for restraint and forbearance, had purchased a robo-dog to patrol New York streets. The video portrays this as cute and upbeat. It’s surely mere coincidence that the NYPD has deployed its robo-dog in the Bronx, Queens, and Brooklyn, the boroughs with the city’s largest Black, Hispanic, and immigrant populations.
Dr. Asimov believed technology was morally neutral because it excluded human prejudice. Relying on technological algorithms would keep human weakness out of the running of common society. Freed from the burdens of morally fraught decision-making, humans could relax and let robots automate everything. At least, that’s what he believed in 1950. Over the course of his career, his writings dealt increasingly with the moral conundrums created by his purportedly neutral laws, which he constantly rewrote and tweaked.
As often happens with myth, from the Laws of Moses to Star Wars, early contributions obscure later revisions. Asimov’s early robot stories, written in the afterglow of post-WWII optimism, often get remembered more fondly (if vaguely) than his more nuanced later ones. I know I often forget the later stories. Boston Dynamics has learned from this; its early mythology foregrounds robots dancing to Motown, hoping we’ll forget the company is primarily a defense contractor.
Machines, Asimov ultimately concluded, can never be morally neutral. They reflect the desires and prejudices of those who built them, and, with Boston Dynamics, we might add those who pay to build them. Their limits always reflect their programmers’ ideals. The robo-dog, a crude prototype of Asimov’s robot detective R. Daneel Olivaw, was programmed to enforce NYPD goals, which, we now know, often privilege order over justice. The robo-dog will do likewise.
Isaac Asimov
Artificial intelligence algorithms, written by an industry that’s remarkably White, famously have difficulty recognizing Black faces. This has already created problems with deploying facial recognition software. Algorithms inevitably reflect the biases of those who wrote them. Technology enthusiasts might counter that we could write learning heuristics so robot police could overcome their limitations. But the machine will only ever learn within that heuristic. What if the heuristic is also wrong?
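To see why a “learning” machine can’t escape a bad heuristic, consider a toy sketch in Python. Everything here is invented for illustration: the neighborhoods, the numbers, and the bandit-style learner are hypothetical, not anything the NYPD or Boston Dynamics actually runs. The point is only that a learner told to maximize a biased proxy (here, historical arrest counts) will faithfully reproduce the bias baked into that proxy.

```python
# Toy illustration only: a "learning" patrol policy that can only optimize
# the objective it was handed. The arrest counts below are invented, and
# they stand in for data that reflects where police patrolled in the past,
# not where crime actually occurs.

import random

# Hypothetical neighborhoods and invented historical arrest counts.
historical_arrests = {"Bronx": 900, "Queens": 700, "Brooklyn": 800, "Manhattan": 200}

def reward(neighborhood):
    """The heuristic the machine is told to maximize."""
    return historical_arrests[neighborhood]

def learn_patrol_policy(rounds=10_000, epsilon=0.1):
    """Simple epsilon-greedy learner: try neighborhoods, keep what scores best."""
    estimates = {n: 0.0 for n in historical_arrests}
    counts = {n: 0 for n in historical_arrests}
    for _ in range(rounds):
        if random.random() < epsilon:
            choice = random.choice(list(historical_arrests))  # explore
        else:
            choice = max(estimates, key=estimates.get)        # exploit
        counts[choice] += 1
        # incremental average of the observed reward
        estimates[choice] += (reward(choice) - estimates[choice]) / counts[choice]
    return max(estimates, key=estimates.get)

if __name__ == "__main__":
    # Converges on the most-policed borough, because that's what the
    # reward function defines as "success."
    print(learn_patrol_policy())
```

However long the loop runs, it never questions the reward function itself. That choice was made before the first line of “learning” ever executed, which is exactly the problem with a wrong heuristic.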
Put another way, what makes humans capable of changing our minds? What makes it possible for us to decide that long-held beliefs in politics, religion, national identity, and other deep-seated domains are wrong, and to abandon them? Philosophers and theologians argue this extensively, with no definitive answers. We fall back on what philosophers call epistemology, the study of the nature and origins of knowledge.
Though epistemologists agree humans know things, they cannot agree how. Recent developments in the discipline suggest our knowledge is always positional: we know from our origins, which is why people who move to other countries, or rise or fall on the economic ladder, go through difficult periods of transition. Knowledge, and the metaphysical ideas which arise from it, is conditioned by race, religion, language, economics, and the kitchen sink.
We also wander perilously close to religion. Reading the Hebrew Tanakh, I’ve been struck by something uncomfortably familiar: Moses wrote a law to govern a poor mountain-dwelling nation during the Late Bronze Age. Then kings and priests began enforcing that law inflexibly, dare I say robotically, without consideration of circumstance. The prophets, in turn, inveighed against such mindless obedience. The Law of Moses demands compliance; the prophets demand mercy and justice.
Sound familiar?
The NYPD robo-dog’s algorithms reflect the NYPD’s enforcement priorities, which we’ve seen automatically presume Black and Brown people are incipient criminals. Because the machine enforces its algorithms without ill feeling, courts could construe this enforcement as morally neutral. But clearly it isn’t. That the programmer isn’t present on the street doesn’t make that person any less culpable for the malicious enforcement of pre-programmed laws.
Though an atheist himself, Dr. Asimov turned in his later novels toward a theological concept: eschatology. Laws written as geometric algorithms can never encompass the complexity of human interaction. His virtually immortal robots struggled with this limit and never reached a satisfactory answer. That struggle reflected a conflict within Asimov himself: he considered science trustworthy, but ultimately realized it wasn’t sufficient. Morals, with all their ambiguity, persist.
That’s something the robo-dog, programmed to enforce the law mechanically, can never adequately handle.