I remember Lessig talking about this. What if we treated the law (as regards technology products, anyway) as a set of regression tests that all new code had to pass? You express the legal constraint in some testable form that the authors of the law agree constitutes a valid expression. For example, “all users of the software must be able to easily erase any record of the history of their usage of the software” becomes a test spec: sign up as a user, do some things, find the mechanism to erase history (perhaps bounded by number of clicks or time involved or … to provide a metric of “easily”), execute it, and verify that the history is in fact unobtainable.
As the law evolves, the test suite evolves. As the software evolves, it is constantly re-tested against the latest version of the test suite.
Legal terms such as “reasonable”, “easily”, … tend to be well defined within the context of the law. As such, they’re like shortcuts, and could perhaps be implemented using something like Prolog.
no, wait: Lessig’s formulation was “code is law”, precisely the opposite: the embedding of social values, etc., into technological artifacts (consciously or not).
cf: “smart contracts” in blockchain. otoh, witness the lack of transparency in machine learning, which can result in seemingly arbitrary decisions and classifications with no explanation of their rationale. Interesting reading in “The Expansion of Algorithmic Governance: From Code is Law to Law is Code”, Hassan and de Filippi, https://journals.openedition.org/factsreports/4518