The difficulty of authoritatively coding “all law” does not mean that some law cannot be reliably represented in a computational model.
Jason Morris makes this point in his LLM thesis.113 We adopt Morris’ broader argument: that a computational model being only an interpretation of the law does not mean that coded models cannot be useful.
An important point of difference is that Morris’ thesis relates to the use of automated legal reasoning tools by lawyers in order to provide legal advice to clients, and different considerations may apply in a government-to-citizen context (the latter being the focus of this report).114
The use of [declarative logic programming] tools should be understood to involve, as Susskind suggests, not an encoding of a categorically correct representation of the meaning of the relevant law, but an encoding of the internally coherent understanding of the law that a responsible legal professional believes would be appropriate for people receiving automated legal services to rely upon, in all the relevant context. If what we are encoding is not “the law”, but one person’s understanding of it, all the abstract concerns about whether expert systems can accurately represent the “true” meaning of a law disappear. … Difficulties involved in determining what a law means must be overcome before either the lawyer gives advice, or they encode their understanding. Interpretation is necessary in both circumstances, and so the need for interpretation is not a critique of expert systems at all, but a critique of laws.
Morris expresses confidence that changes in the law that affect whether an interpretation is correct can be dealt with by changing the encoded model: this would be much more challenging if code were to be given the status of legislation.115
… if a lawyer is aware of a statutory error, they can encode their corrected understanding of the law as easily as they can advise using it. If they are not aware of the error, they would provide incorrect advice in any case. The encoded version will still meet the “no worse than a lawyer” standard. With regard to changes to the meaning of laws that arise due to changed circumstances, the same thing is true. A lawyer may anticipate that the change in general circumstances will require a change in the meaning of a provision, or they won’t. If they do, they can encode that understanding. Legislative error and changes in circumstances are difficulties with statutory interpretation that have no particular impact on the use of expert systems. The fact that the meaning of statutes can change over time, even in the absence of explicit amendment, suggests that the maintenance of automated systems will be an important factor in whether they continue to be reasonable.
By contrast with the situation anticipated by Morris (legal advice in a lawyer client relationship), updating a model is likely to be a much more complex exercise when the model is being used by an Executive government department. The fact that this may be more complex – requiring various levels of sign-off or accountability procedures – is an indication of the importance of having a coded model that is legally correct when in active operational use by a government agency, even if we accept the model is simply an interpretation. Those sign-off procedures exist because of the significant consequences tweaks to a model may entail for the relevant agency and for people subject to coded systems.
Morris acknowledges that there are some situations where automated systems should not be used because of specific legal ambiguities, but he persuasively argues that:116
We cannot justify refusing to use DLP tools for what they can do because there remain things they cannot do. Responsible use of these tools will always include deciding when not to use them, and issues of open-texture, vagueness, or uncertainty may remain good reasons to come to that conclusion.
In a footnote to that statement, Morris notes the potential of a better rules approach, primarily because of its ability to improve the quality of the underlying policy, and its ability to be encoded in computational systems:117
The Better Rules conversation (Better Rules for Government Discovery Report […]) proposes a fascinating possible resolution to this problem: that public rules ought to be drafted with an eye to how easily they could be automated, encouraging the avoidance of open-textured terminology except where the vagueness serves an explicit policy objective. Such a change in how legal rules are drafted would be a sea change for the applicability of DLP tools, and statutory interpretation itself.
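To make concrete what a declarative encoding of a legal rule looks like — the kind of artefact DLP tools are used to produce — the following minimal Python sketch encodes an entirely hypothetical eligibility rule. The rule, the field names, and the age threshold are all invented for illustration; no actual statute is being modelled.

```python
from dataclasses import dataclass

# Entirely hypothetical rule, for illustration only: a person qualifies
# for a (fictional) entitlement if they are 65 or older AND ordinarily
# resident. The point is the form of the encoding, not its content.

@dataclass
class Person:
    age: int
    ordinarily_resident: bool

def qualifies(p: Person) -> bool:
    # Each conjunct corresponds to one limb of the (hypothetical)
    # provision, so the encoding can be reviewed limb by limb
    # against the text it purports to represent.
    return p.age >= 65 and p.ordinarily_resident
```

The value of this declarative form is that a legally trained reviewer can compare each condition in the code against the corresponding limb of the drafted rule, which is precisely the kind of checkable correspondence the better rules literature emphasises.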
Morris acknowledges that “the real challenge” is the knowledge acquisition bottleneck:118
The knowledge acquisition bottleneck is the only common criticism left unaddressed. It applies to all possible uses of expert systems. It is not resolved by using modern tools. It cannot be resolved by merely avoiding the standard of perfection. The viability of expert systems as a tool for increasing the supply of legal services may legitimately turn on whether there is a realistic and appropriate solution to this problem. To reiterate, the knowledge acquisition bottleneck refers to the high cost and low reliability of the method of having a legal subject matter expert and a programmer work side by side to develop expert systems.
The knowledge acquisition problem disappears entirely when the person who holds the subject expertise and the person who understands the programming language are the same person. There is no risk of anything being lost in translation, missed, or misunderstood when the process of legal encoding involves only one person.
By contrast with the approach proposed by Morris (user-friendly coding tools that legally trained people can use directly), the better rules approach incorporates this multidisciplinary knowledge through teams and business process modelling. This ameliorates some, but not all, of the knowledge acquisition bottleneck. It is also important to note that a better rules approach can produce policies, regulatory systems and legislation that are more easily implemented in digital systems, thereby reducing the risk of incorrect computational modelling by those subsequently responsible for implementing the law in computer systems.
A persuasive point made by better rules and rules as code advocates is that government departments and other users of legislative and other rules are already engaged in preparing and encoding their own interpretations of the law. If an authoritative interpretation can be made available, then it would limit the variability of interpretations available in the “marketplace for interpretation”. So long as the “authoritative interpretation” is understood as being of lesser authority than a legal instrument itself, and still remains open to judicial and legal contest, this raises far fewer fundamental constitutional considerations.
Given our conclusions, what examples might we emulate where highly authoritative interpretations of legal instruments – whether or not they are in code – have generated public or private benefit?
One useful example we have identified is the Agreement for Sale and Purchase of Real Estate, produced by the Auckland District Law Society. “The Agreement” is now in its Tenth Edition.
We explain notable insights about this agreement in greater detail in an appendix.
The agreement, and other standard form agreements like it:
Our primary interest in the ADLS Agreement is that it illustrates the wider value of exceptionally reliable and reproducible legal instruments which nevertheless are only non-authoritative interpretations of how the law works. Parties attempting to create law as code models should pay particular attention to the following points, which we believe are integral to the success of the ADLS Agreement:
The Agreement creates a reliable and dependable legal environment within which parties can transact. It does not exhaustively state the law, nor is it held up as having greater authority than the primary legal sources (or even secondary sources in the form of academic commentary) that inform its drafting. It is reproducible and scalable: many copies of it can be produced and used rapidly. It avoids the need for bespoke individual agreements to be drafted and negotiated for every new property transaction, which would generate massive cost and legal uncertainty.
The Agreement is one of several legal instruments produced by the ADLS. It is drafted and revised by a committee of the ADLS convened for that purpose. The committee comprises legal practitioners and academics with significant authority in New Zealand land law, including the author of the leading academic text.
In a panel discussion on access to civil justice, a justice of the Court of Appeal noted that very few disputes about the sale and purchase of land are heard in appellate courts today.120 The role of the Agreement in this outcome cannot be overlooked. Interviewees we spoke to estimated more than 90% of sale and purchase of land transactions are conducted using the ADLS agreement.
As a result of its ubiquity (which is itself a testament to its effectiveness), the Agreement has become remarkably embedded within the legal system. Notably, some providers of professional legal training teach their property law modules predominantly by reference to the Agreement itself, as much as to the original legislation it reflects.
As a natural language legal instrument, the Agreement is interpreted using legal interpretive practices. This is one limitation of using it as an illustration of how coded law might work. However, because the legal instrument is drafted by a non-Parliamentary body, the same kinds of constitutional issues that are raised by authoritative coded models do not arise.
We also note the way the Agreement highlights the role of the courts in developing a reliable legal instrument. The New Zealand Supreme Court has commented on the drafting of the Agreement; indeed, judicial comment has led to amendments to the Agreement in revised editions.121 While making it easier for lawyers to practise in this area of law, the Agreement still benefits from authoritative interpretations provided by the courts in the course of legal disputes.
In a recent book, Richard Susskind proposes what we say is substantively similar to a better rules approach to the design of legal, operational, and digital procedures for online courts. Susskind’s immediate concern is how to create procedures for online courts in ways that do not limit what someone can or cannot do within a digital system in a way that lacks any legal foundation. His proposed solution looks substantially similar to the way we suggest a better rules approach could be combined with the authority of a body such as the Auckland District Law Society committees to produce dependable and reliable computational models of legal interpretations. Susskind’s process is set out at p 163:122
… (1) A rules committee should lay down general rules … that conform with an agreed high-level specification of the functionality of the system (agreed amongst politicians, policy-makers, and judges).
(2) The committee should delegate rule-making/code-cutting responsibility and discretion to a formally established smaller group that can work out the detail and proceed in an ‘agile’ way.
(3) the rules and code that this group create would need to be formally articulated and made explicit, partly for public scrutiny and partly for a periodic, formal review by the main rules committee.
(4) The committee and group should be encouraged to approach the task in the spirit of proportionality and resist the temptation to generate an over-complicated set of rules.
In this way, code is law but it is law whose creation has been formally sanctioned through some kind of delegated authority. This may seem heavy-handed but I do not think we can simply leave the rule-making and code-cutting to a group of developers and judges, no matter how senior and well-motivated. We cannot allow coding to become law-making.
There is already an established pattern of legislative drafting for the implementation of automated electronic systems (AES) to exercise legal powers. These legal powers are not limited to powers of decision: they also include complying with obligations, exercising powers, and performing legal functions.
Below we outline the legislative provisions that shape this authority to use AES for such purposes. We do so to illustrate the way that a coded interpretation of the law produced using better rules or rules as code methods could be operationally deployed within legislative boundaries set by Parliament, and in a way that still permits scrutiny by an identifiable person responsible for the system, by anyone subject to the use of the system, or by judicial or regulatory oversight institutions.
The simplified pattern of drafting generally includes:
A power to arrange for a system to be used.
Stating the effect of the system and any dispute resolution mechanisms.
Sometimes, a criminal offence for interfering with the system’s operation.
The phrase “automated electronic system” is not defined.
We use the Biosecurity Act 1993 as an example. Below are the statutory provisions empowering a person to arrange for an AES:
142F Arrangement for system
(1) The Director-General may arrange for the use of an automated electronic system to do the actions described in subsection (2) that this Act or another enactment allows or requires the persons described in subsection (3) to do.
(2) The actions are—
(a) exercising a power:
(b) carrying out a function:
(c) carrying out a duty:
(d) making a decision, including making a decision by—
(i) analysing information that the Director-General holds or has access to about a person, goods, or craft; and
(ii) applying criteria predetermined by the Director-General to the analysis:
(e) doing an action for the purpose of exercising a power, carrying out a function or duty, or making a decision:
(f) communicating the exercising of a power, carrying out of a function or duty, or making of a decision.
(3) The persons are—
(a) the Director-General:
(b) chief technical officers:
(c) authorised persons:
(d) accredited persons:
(e) assistants of inspectors or authorised persons.
(4) The Director-General may make an arrangement only if satisfied that—
(a) the system has the capacity to do the action with reasonable reliability; and
(b) a process is available under which a person affected by an action done by the system can have the action reviewed by a person described in subsection (3) without undue delay.
(5) A system used in accordance with an arrangement may include components outside New Zealand.
(6) The Director-General must consult the Privacy Commissioner about including in an arrangement actions that involve the collection or use of personal information.
Below are the statutory provisions concerned with the effect of the use of the electronic system:
142G Effect of use of system
(1) This section applies to an action done by an automated electronic system.
(2) An action allowed or required by this Act done by the system—
(a) is treated as an action done properly by the appropriate person referred to in section 142F(3); and
(b) is not invalid by virtue only of the fact that it is done by the system.
(3) If an action allowed or required by another enactment done by the system is done in accordance with any applicable provisions in the enactment on the use of an automated electronic system, the action—
(a) is treated as an action done properly by the appropriate person referred to in section 142F(3); and
(b) is not invalid by virtue only of the fact that it is done by the system.
(4) If the system operates in such a way as to render the action done or partly done by the system clearly wrong, the action may be done by the appropriate person referred to in section 142F(3).
An example of a criminal offence related to an AES is s 133A of the Animal Products Act 1999:
133A Offences involving automated electronic system
(1) A person commits an offence who intentionally obstructs or hinders an automated electronic system that is doing an action under section 165B.
(2) A person commits an offence who knowingly damages or impairs an automated electronic system.
(3) A person who commits an offence against this section is liable on conviction,—
(a) for a body corporate, to a fine not exceeding $250,000:
(b) for an individual, to imprisonment for a term not exceeding 3 months and a fine not exceeding $50,000.
The Customs and Excise Act 2018 contains a more comprehensive statutory regime, and one that updates AES provisions originally implemented in 2009.
Section 296 of the Customs and Excise Act 2018 authorises the Chief Executive to approve the use of AES for an expansive range of activities:
296 Use of automated electronic systems by Customs to make decisions, exercise powers, comply with obligations, and take related actions
(1) The chief executive may approve the use of automated electronic systems by a specified person to make any decision, exercise any power, comply with any obligation, or carry out any other related action under any specified provision.
(2) The chief executive may approve the use of an automated electronic system only if—
(a) the system is under the chief executive’s control; and
(b) the chief executive is satisfied that the system has the capacity to make the decision, exercise the power, comply with the obligation, or take the related action with reasonable reliability; and
(c) 1 or more persons are always available, as an alternative, to make the decision, exercise the power, comply with the obligation, or take the related action.
(3) An automated electronic system approved under subsection (1)—
(a) may include components that are outside New Zealand; and
(b) may also be used for making decisions, exercising powers, complying with obligations, or taking related actions under other enactments.
(4) The chief executive must consult the Privacy Commissioner on the terms and the privacy implications of any arrangements to use an automated electronic system under subsection (1) before—
(a) finalising the arrangements; or
(b) making any significant variation to the arrangements.
(5) A decision that is made, a power that is exercised, an obligation that is complied with, or a related action that is taken using an automated electronic system under this section must be treated for all purposes as if it were made, exercised, complied with, or taken (as the case may be) by a specified person authorised by the specified provision to make the decision, exercise the power, comply with the obligation, or take the related action.
A “specified person” for the purposes of the Customs and Excise Act 2018 is defined to mean “the chief executive, Customs, or a Customs officer (as the case may be) carrying out a function under a specified provision”.
Where the Chief Executive is using an AES, s 297 requires them to publicly identify the legal power being delegated to an AES, as well as to “identify” the AES. It is not clear how this requirement can be complied with, especially given the fact that s 296(3) acknowledges that the “components” of a system may be “outside New Zealand”. Publication must be effected “as soon as practicable” but the use of a system is not rendered invalid only by failure to publish those details “as soon as practicable”.
A variation or substitution of a decision made by an AES can be made by a specified person (s 298). The person may:
The Customs and Excise Act states, for the avoidance of doubt, that a decision made through an AES does not deprive a person of rights of appeal, or administrative or judicial review:
299 Appeals and reviews unaffected
To avoid doubt, a person has the same rights of appeal or right to apply for administrative or judicial review (if any) in relation to a decision made, power exercised, obligation complied with, or other action taken by an automated electronic system as the person would have had if the decision, power, obligation, or other action had been made, exercised, complied with, or taken by a specified person.
We have identified a number of statutes where some or all of this pattern of drafting is replicated, and where the phrase “automated electronic system” is used.123 While some of these legislative instruments are older, the relevant provisions were generally introduced more recently, through amendment legislation from 2010 onward. This repeated pattern of statutory drafting suggests a broader legislative attitude toward how AES should be governed by legislation. Relevant statutes include:
There are more references to “electronic systems” across the statute book (including in the internet filters Bill we discuss later) and we cannot identify any reason why drafting referring to “automated” electronic systems has been adopted in some situations and avoided in others. “Electronic systems” are mentioned in:
There are restrictions and safeguards on delegating to an automated electronic system. The most significant safeguard is that the relevant individual authorising the system (usually the chief executive of a government agency) must be “satisfied” of the system’s “reasonable reliability”.
There is specific legislative clarification that decisions made using an AES must be capable of appeal, but otherwise can be treated as if they were made by a relevant decision-maker.
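Statutory preconditions of this kind can themselves be made explicit in code. The sketch below is a hypothetical illustration only — the field and function names are our own, not the statute’s — of how the two preconditions in s 142F(4) of the Biosecurity Act 1993 (satisfaction as to reasonable reliability, and the availability of a review process) might be expressed as an auditable check before an arrangement is made.

```python
from dataclasses import dataclass

# Hypothetical sketch: identifiers are invented for illustration.
# The two checks mirror the preconditions in s 142F(4) of the
# Biosecurity Act 1993 (reasonable reliability; review process).

@dataclass
class ProposedArrangement:
    satisfied_of_reasonable_reliability: bool  # cf s 142F(4)(a)
    review_process_available: bool             # cf s 142F(4)(b)

def may_make_arrangement(a: ProposedArrangement) -> bool:
    # Both preconditions must hold before an arrangement is made.
    return (a.satisfied_of_reasonable_reliability
            and a.review_process_available)
```

Recording the preconditions explicitly in this way is one means by which an agency could evidence, and an auditor or court could later test, the basis on which a delegation to an AES was approved.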
The Official Information Act 1982 provides one possible tool for requesting the details of an AES being used to perform legal tasks.124
22 Right of access to internal rules affecting decisions
(1) On a request made under this section, a person has a right to, and must be given access to, any document (including a manual) that—
(a) is held by a public service agency, a Minister of the Crown, or an organisation; and
(b) contains policies, principles, rules, or guidelines in accordance with which decisions or recommendations are made in respect of any person or body of persons in their personal capacity. …
“Document” is defined widely under the Official Information Act, as is “information” which a requester is entitled to seek. Despite the apparent utility of s 22 for this purpose, there are extensive exceptions to this provision which require further investigation to assess the section’s suitability for requesting the specifics of an AES. At a minimum, s 22 provides a sound principled basis for seeking the details of rules (including algorithmic systems) affecting decisions.
If AES are to be adopted as a matter of wider government policy, then Official Information access regimes should be bolstered in support.
Government agencies using AES to exercise powers of decision or other statutory powers are not required to have specific legislative authorisation to do so. Such agencies are Crown entities with powers equivalent to those of a legal person.
Despite this, we think that it would be desirable for agencies intending to automate legally significant operations through the use of digital systems to receive greater legislative guidance as to what is required to make that system “reasonably reliable”, and what level of “satisfaction” is required.
This could be achieved through the inclusion of more specific legislative guidance as to what makes an AES “reasonably reliable”, which is currently left (perhaps ironically) to statutory interpretation.
We think the “reasonable reliability” of AES can be understood in various ways.
It is possible to leave these requirements unstated and implicit in relevant legislation. One benefit of this approach is that it preserves greater flexibility for the agency to decide how it achieves compliance with the law and for the “reasonable reliability” standard of a system to change according to context, over time, and as technological methods develop. A related shortcoming is that people with any concerns about the lawfulness of an AES are not able to point to a specific list of legislative criteria against which the system can be measured.
If the specifics of what makes an AES “reasonably reliable” are left as a matter of implicit interpretation, these criteria may ultimately have to be tested through litigation. Such a challenge could be brought at any time by anyone seeking to test the lawfulness of an AES. Relevant arguments would include that Parliament did not intend that the power to delegate a legal task to an AES would be used to perform that task unlawfully.
An AES that is used to exercise a decision-making power, or a power of similar legal effect, should indisputably be categorised as an “automated decision-making system”. Such systems are the subject of significant attention and investigation, primarily because of the increasing use of machine learning techniques, usually driven by statistical modelling, to assist (or entirely automate) aspects of decision-making. Bias in datasets or in algorithmic training can lead to perverse or discriminatory outcomes. But that does not mean that simpler rule-based systems cannot also have substantial negative outcomes, and they should be treated with similar care.
At the point where coded models of the law (“rules as code”) are operationalised in digital systems, advocates must engage with the wider academic and policy discussions about the impact of algorithmic decision-making. In New Zealand, key documents and investigations in this area include:
In summary, coded models of an agency’s interpretation of the law may be useful in some operational situations. Where Parliament intends that these systems be used with greater public and Executive government confidence, Parliament should include better guidance around system requirements. We think the use of better rules approaches and adherence to isomorphic development practices will help make AES easier to assess for their compliance with the law.
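The “isomorphic” development practices just mentioned — structuring code so that each unit of code corresponds to, and cites, one statutory unit — can be illustrated with a deliberately trivial Python sketch. The list of persons is taken from s 142F(3) of the Biosecurity Act 1993 quoted earlier; everything else (the identifiers, the checking function) is our own illustration, not an operational model.

```python
# Illustrative sketch of isomorphic structure: each data structure or
# function corresponds to one statutory unit and carries a citation,
# so an auditor can compare code and text provision by provision.
# Identifiers are invented; this is not an operational model.

PERMITTED_PERSONS = {                # s 142F(3), Biosecurity Act 1993
    "Director-General",              # s 142F(3)(a)
    "chief technical officer",       # s 142F(3)(b)
    "authorised person",             # s 142F(3)(c)
    "accredited person",             # s 142F(3)(d)
    "assistant",                     # s 142F(3)(e)
}

def system_may_act_for(person: str) -> bool:
    # s 142F(1): the system may only do actions that the persons
    # described in s 142F(3) are allowed or required to do.
    return person in PERMITTED_PERSONS
```

Because every element of the code is traceable to a numbered provision, assessing the system’s compliance with the law becomes a provision-by-provision comparison rather than a reverse-engineering exercise.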
One concern we have is that automated decisions could be subject to appeal, but that the jurisdiction of the Court on appeal may not deal directly with the system’s lawfulness. Instead, the system’s output may be simply put aside, and the decision made again based on evidence at the time of the decision, or available on appeal. In that situation, there would be no judicial scrutiny of the accuracy of the coded interpretation of the law being used within the AES. It could nevertheless be treated as having been judicially approved.
We also note the risk of strategic litigation practices by government departments to avoid judicial scrutiny of a computational model used in an AES. Where a government agency is using an AES, that system will be giving effect to a particular interpretation of the law. Government agencies may wish to preserve their ability to operationalise that interpretation at scale, even where there is a risk that it is wrong in law. As a result, agencies may choose to settle individual cases rather than risk that judicial scrutiny of an AES’s coded model determines that it is wrong in law. It will therefore be important for both judicial and non-judicial auditing and scrutiny processes to be incorporated into the use of coded models in AES.
We also note there is a risk that, because of the jurisdiction of the court on appeal, the judiciary declines to consider a computational model of the law as a whole, instead only considering the relevant statutory provisions to the dispute at hand. This would be consistent with the Court’s general reluctance to comment on academic matters or matters of general interpretation without the benefit of full argument. This is a risk because the model as a whole may be treated by government agencies or by others as having received judicial approval, when only the provisions relevant to the facts of that individual case have been considered.
Code is frequently given legal status without making the code itself “the law”. Code can also be given delegated authority to perform tasks which have legal status.
This is one reason why better rules and rules as code advocates, as well as scholars such as Hildebrandt, Brownsword, Lessig, Susskind and Diver argue that it must be clear when a coded system performing legal tasks is acting with the force of law, and when it is imposing restrictions that have no legal foundation.
This is a core aim of some advocates of better rules and rules as code approaches. We briefly indicate the way these scholars have considered this topic below.
All of these authors have considered this issue because of a perception that digital systems will be used in more and more situations by governments and non-government actors seeking to perform legal tasks or to influence behaviour.
In support of this conclusion, it is useful to briefly name some examples of the way that digital systems are already given legal status, legal authority, are used for legal tasks, or are intended to have legal effects.
Finally, we note that there are strong indications that the New Zealand government too intends to make greater use of digital systems to achieve highly sensitive legal and regulatory outcomes. The AES drafting pattern, as well as policy initiatives such as the algorithm charter, show this is already the case, and we deal with a specific example related to proposed internet filters in more detail next.
As more and more policy issues take on a digital dimension, this tendency toward the use of code-as-law systems will only increase.
During its previous Parliamentary term, the New Zealand Government introduced the Films, Videos, and Publications Classification (Urgent Interim Classification of Publications and Prevention of Online Harm) Amendment Bill (268—1). The Bill had its first reading on 11 February 2021.
The Bill is part of a suite of reforms following the 15 March 2019 terror attacks in Christchurch, New Zealand. Among other things, it creates a statutory regime that authorises the use of “electronic systems” to prevent access to “objectionable material” (a defined term).
It is noticeable that the Bill lays the foundation for a future framework without ever making the case that a framework is needed now. The Bill’s explanatory statement includes the following explanation:
In New Zealand, the only current government-backed web filter is designed to block child sexual exploitation material (the Digital Child Exploitation Filtering System). This filter is voluntary and operates at the Internet service provider (ISP) level. It currently applies to about 85% of New Zealand’s ISP connections.
The Bill facilitates the establishment of a government-backed (either mandatory or voluntary) web filter if one is desired in the future. It provides the Government with explicit statutory authority to explore and implement such mechanisms through regulations, following consultation.
We deal with this Bill in some detail here for a number of reasons:
We believe this Bill to be part of a broader trend across different jurisdictions that aims to expand the scope of what kinds of information may not be published or accessed on the internet. For example:
The Bill delegates all the specifics for how the web filter will work to secondary legislation (regulations). While there are consultation obligations imposed on Executive agencies before regulations are made, the incorporation of insights from consultation is left largely to the judgment of the relevant Executive government actor.
Matters to be dealt with in regulations also include mechanisms of review and appeal, which are separated from the existing appeal mechanisms under the Films, Videos, and Publications Classification Act 1993. As it stands, the Bill provides for no appeal process.
The explanatory statement to the Bill repeatedly uses the word “clarify” to describe what regulations will do. It would be more accurate to say that regulations will “create” the regime, given the way that the principal Act provides little guidance as to how such a filter should operate. Regulations would, apparently, do the following:
clarify the criteria for identifying and preventing access to objectionable content that the filter would block
clarify governance arrangements for the system
specify reporting arrangements for the system
clarify the review process and right of appeal should an ISP, online content host, or other individual or entity dispute a decision to prevent access to a website, part of a website, or an online application
clarify the obligations of ISPs in relation to the operation of the system
provide detail of how data security and privacy provisions would be addressed.
Clause 119M of the Bill provides for the establishment of the system. However, it leaves the overall “design and form” of the system entirely up to regulations.
119M Establishment of electronic system
- (1) When establishing the electronic system to be approved for operation under section 119N, the Secretary must consult the following on the design and the final form of the system:
  - (a) service providers; and
  - (b) technical experts and online content hosts to the extent the Secretary thinks necessary; and
  - (c) the public.
- (2) When deciding on the design and form of the system, the Secretary must consider—
  - (a) the need to balance—
    - (i) any likely impact on public access to non-objectionable online publications; and
    - (ii) the protection of the public from harm from objectionable online publications; and
  - (b) any likely impact on performance for all other network traffic; and
  - (c) departmental and technical capacity to operate the system; and
  - (d) likely compliance costs.
- (3) However, each of the factors in subsection (2) need be considered only to the extent that it is relevant in the Secretary’s view.
- (4) The system—
  - (a) must have the capacity to both identify and prevent access to a particular online publication with reasonable reliability, based on criteria set out in regulations made under section 149; and
  - (b) is subject to governance arrangements required by regulations made under section 149; and
  - (c) is subject to requirements for administration and technical oversight prescribed by regulations made under section 149, including relating to data security and privacy; and
  - (d) is subject to reporting requirements required by regulations made under section 149.
- (5) Obligations of service providers relating to the operation of the system may be prescribed by regulations made under section 149.
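To make this concrete from a law-as-code perspective, the discretion in subsections (2) and (3) of clause 119M can be sketched as an executable rule. This is a hypothetical illustration only: the factor names and the `decision_complies` function are our own labels, not anything in the Bill. The point the encoding exposes is that, because subsection (3) lets the Secretary treat any factor as irrelevant, a decision that considers nothing at all can still formally comply.

```python
# Hypothetical sketch: clause 119M(2)-(3) encoded as a checkable rule.
# Factor names are illustrative labels, not text from the Bill.

MANDATORY_FACTORS = [
    "impact_on_access_to_non_objectionable_content",
    "protection_from_objectionable_content",
    "impact_on_other_network_traffic",
    "capacity_to_operate_the_system",
    "compliance_costs",
]

def decision_complies(considered: set, deemed_irrelevant: set) -> bool:
    """Each mandatory factor must be considered, unless the Secretary
    deems it irrelevant (the subsection (3) carve-out)."""
    return all(
        factor in considered or factor in deemed_irrelevant
        for factor in MANDATORY_FACTORS
    )

# Considering every factor complies, as expected:
print(decision_complies(set(MANDATORY_FACTORS), set()))   # True

# But deeming every factor irrelevant also complies, with nothing considered:
print(decision_complies(set(), set(MANDATORY_FACTORS)))   # True
```

Encoding the clause in this way does not resolve the policy question; it simply makes visible how little the mandatory considerations constrain the decision-maker, which is the kind of insight a better rules process would surface before enactment.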
Leaving aside the question of whether a State-enforced internet filter is desirable from a policy and human rights perspective, we make the following observations from a law-as-code perspective. In our view, they make the Bill an essential candidate for a transparent and open application of the better rules approach before it is enacted.
If there is one thing that might be universally agreed about a legislative proposal to implement a digital censorship system, it is that the proposal should have sufficient detail to be scrutinised by members of the public and Parliament before it becomes law.
As drafted, the Bill defers all of the important detail about how the system would operate to regulations, meaning Members of Parliament are not required to take responsibility for how the system would operate. Equally, in more than one of the speeches in support of the Bill at first reading, it was suggested that the Select Committee is the appropriate place to work out any extra detail. We think this approach of consistently deferring the detail of the internet censorship system is suboptimal, and that it can be avoided by adopting a better rules approach, which develops the policy more holistically at the outset from a multidisciplinary perspective.
There was some parliamentary support for this proposition from Green MP Chlöe Swarbrick:143
[L]eaving all of this stuff to the regulations, is the equivalent of me handing you a piece of paper and saying, “Please draw the rules,” and then enforcing those rules without having had any parliamentary oversight of what those rules actually are. … We are centralising far too much control with the progression of this legislation.
Despite the decision not to use the word “automated” in relation to the electronic system, this filter will be a self-executing system acting with legal force and legal consequences. It is an example of the self-executing code-as-law that scholars indicate should be approached with extreme caution, especially because such a filter will deprive citizens of the shield and tools provided by the ambiguity of natural language. The filter collapses the constitutional space between the written language used by Parliament and the judicial interpretation of that language in specific cases. This makes the absence of any legislatively provided dispute resolution mechanism even more concerning: by omission, the judiciary’s role in relation to this filter has been removed almost entirely, leaving only judicial review and the courts’ other inherent powers.
If passed, the Bill would confer the power of algorithmic regulation (discussed by Hildebrandt) on the New Zealand Government in relation to matters impinging, to some degree, on freedom of expression and the right to privacy. There may be an argument that such limitations can be demonstrably justified, but where is that argument to be made? The Bill delegates review and appeal mechanisms to secondary legislation to be devised by the same agency responsible for operating the algorithmic system.
The algorithmic system will generally act solely on data inputs, with little opportunity for human intervention once the initial parameters of the system are set. Presumably a register of banned content will be created, but once it has been set there is no clear mechanism for challenging the operation of the system.
Where someone believes the electronic system has strayed beyond the bounds of its legal authority, there are no clear standards against which the system can be assessed. We urge extreme caution in the progress of this Bill through the House and advise that the electronic filtering system be developed in close consultation with non-government actors, using a better rules approach, before it is enacted.
Morris, J “Spreadsheets for Legal Reasoning: The Continued Promise of Declarative Logic Programming in Law” LLM Thesis, 2020, University of Alberta. ↩︎
Ibid at 47-48. ↩︎
Ibid at 49. ↩︎
Ibid at 51. ↩︎
Ibid at 51, n 63. ↩︎
Ibid at 52. ↩︎
Ibid at 53. ↩︎
“How to make the civil justice system more accessible, discussed by a panel of experts”, RNZ (6 October 2019): <https://www.rnz.co.nz/programmes/otago-university-panel-discussions/story/2018714651/how-to-make-the-civil-justice-system-more-accessible-discussed-by-a-panel-of-experts>. ↩︎
Property Ventures Investments Limited v Regalwood Holdings Limited NZSC 47. ↩︎
Susskind, R “Online Courts and the Future of Justice” (2019, Oxford University Press, Oxford, United Kingdom) at 163. ↩︎
Drafting around “automated electronic systems” in the Courts Matters Act 2018 was noted in Colin Gavaghan, Alistair Knott, James Maclaurin, John Zerilli, Joy Liddicoat “Government Use Of Artificial Intelligence In New Zealand: Final Report on Phase 1 of the New Zealand Law Foundation’s Artificial Intelligence and Law in New Zealand Project” (New Zealand Law Foundation, Wellington, 2019). ↩︎
This has also been noted by the authors of Gavaghan et al (2019). ↩︎
See the Principles of Safe and Effective Data and Analytics (May 2018) prepared by the Office of the Privacy Commissioner and Stats NZ: <https://www.stats.govt.nz/assets/Uploads/Data-leadership-fact-sheets/Principles-safe-and-effective-data-and-analytics-May-2018.pdf>. ↩︎
See the Algorithm charter for Aotearoa New Zealand: <https://www.data.govt.nz/use-data/data-ethics/government-algorithm-transparency-and-accountability/algorithm-charter/>. ↩︎
See Algorithm Assessment Report, Department of Internal Affairs and Stats NZ, October 2018: <https://www.data.govt.nz/assets/Uploads/Algorithm-Assessment-Report-Oct-2018.pdf>. ↩︎
Roger Brownsword “In the year 2061: from law to technological management” (2015) 7(1) Law, Innovation and Technology at 1-51. ↩︎
Hildebrandt, M “Legal protection by design: objections and refutations” (2011) 5(2) Legisprudence 223 at 234. ↩︎
Lawrence Lessig “Code 2.0” (2006, Basic Books, New York, USA). ↩︎
Richard Susskind “The End of Lawyers?: Rethinking the nature of legal services” (2008, Oxford University Press, New York). ↩︎
Susskind, R “Online Courts and the Future of Justice” (2019, Oxford University Press, Oxford, United Kingdom). ↩︎
See Diver “Digisprudence” (2019), above. See also Laurence Diver “Law as a User: Design, Affordance, and the Technological Mediation of Norms” (2018) 15(1) Scripted 4. ↩︎
See for example: Nataliia Filatova “Smart contracts from the contract law perspective: outlining new regulative strategies” (2020) 28(3) International Journal of Law and Information Technology 217. ↩︎
Ruscoe v Cryptopia Limited (in liquidation) [2020] NZHC 728 (8 April 2020). ↩︎
Crimes Act 1961, s 249: accessing computer system for a dishonest purpose. ↩︎
Ibid, s 252: accessing computer system without authorisation. The mens rea of the offence is knowledge or recklessness as to the absence of authorisation. ↩︎
For example, see Auckland Council Unitary Plan Geomaps: <https://unitaryplanmaps.aucklandcouncil.govt.nz/upviewer/>. ↩︎
See “Find out if you need a resource consent”, Wellington City Council: <https://wellington.govt.nz/property-rates-and-building/building-and-resource-consents/resource-consents/find-out-if-you-need-a-resource-consent>. For transparency, we note one of the authors was involved in the production of this tool. ↩︎
See: <https://www.gov.uk/government/consultations/online-harms-white-paper/outcome/online-harms-white-paper-full-government-response#executive-summary>. ↩︎
See: <https://ec.europa.eu/digital-single-market/en/digital-services-act-package>. ↩︎
See: <https://www.ag.gov.au/crime/abhorrent-violent-material>. ↩︎
First reading, Films, Videos, and Publications Classification (Urgent Interim Classification of Publications and Prevention of Online Harm) Amendment Bill (10 February 2021) Volume 749 NZPD. ↩︎