
Cyborg Justice and the Risk of Technological-Legal Lock-In

Abstract

Although Artificial Intelligence (AI) is already of use to litigants and legal practitioners, we must be cautious and deliberate in incorporating AI into the common law judicial process. Human beings and machine systems process information and reach conclusions in fundamentally different ways, with AI being particularly ill-suited for the rule application and value balancing required of human judges. Nor will “cyborg justice”—hybrid human/AI judicial systems that attempt to marry the best of human and machine decisionmaking and minimize the drawbacks of both—be a panacea. While such systems would ideally maximize the strengths of human and machine intelligence, they might also magnify the drawbacks of both. They also raise distinct teaming risks associated with overtrust, undertrust, and interface design errors, as well as second-order structural side effects. One such side effect is “technological–legal lock-in.” Translating rules and decisionmaking procedures into algorithms grants them a new kind of permanency, which creates an additional barrier to legal evolution. In augmenting the common law’s extant conservative bent, hybrid human/AI judicial systems risk fostering legal stagnation and an attendant loss of judicial legitimacy.


This paper was published by the University of Richmond.
