Blog

Critical Reflections on ISED’s Proposed Canadian Guardrails for Generative AI

August 25, 2023
Critical Reflections on ISED’s Proposed Canadian Guardrails for Generative AI

Introduction

Innovation, Science and Economic Development Canada's proposed code of practice for generative AI highlights a number of areas of consideration for generative AI developers, deployers, and operators. The code is divided into six areas that concern generative AI:

  • Safety,
  • Fairness,
  • Transparency,
  • Human oversight and monitoring,
  • Validity and robustness, and
  • Accountability

The first aim of this piece is to examine the code in terms of what it misses. Then, this piece focuses on three of the code’s sections individually: safety, fairness, and human oversight and monitoring.

What’s absent

The overarching theme of what is missing from the code are the secondary effects of AI development, deployment, and operation. A secondary effect is a result that is often an undesirable effect of developing, deploying, or operating a generative AI system. Just because we describe an effect as ‘secondary’ does not mean it is not important or harmful. Secondary effects are simply sidelined when compared to other areas of AI risk, including (but not limited to) transparency or bias. The code is silent on the secondary effects in the following areas:

Environmental impacts

Energy consumption: training foundation models requires a tremendous amount of energy and the current arms race to build bigger models does not incentivize developer efficiency. Moreover, according to researchers at UC Riverside and UT Arlington, “...training GPT-3 in Microsoft’s state-of-the-art U.S. data centers can directly consume 700,000 liters of clean freshwater.” One consideration here is how to incentivize developers to be more efficient when it comes to AI model environmental considerations.

Social impacts

Generative AI has the potential to mass-produce low-quality, malicious, or misinformative content that unsuspecting readers quickly spread on the internet. How will developers, deployers, and operators mitigate ‘data pollution’?

Generative AI systems (such as AutoGPT) can now traverse the internet by integrating with search engines such as Bing or Google and iterating through search results. As a result, there is potential for their capabilities to extend beyond simply reading web content to instead interact with users (such as via social media or email) by themselves. This poses a unique challenge for AI developers, deployers, and operators on how AI agent behaviour ought to be regulated.

Automation Risk

As generative systems scale up, so will safety testing. Safety testing in the form of red-teaming means humans (often employed in developing countries whose labour laws permit these activities) will poke and prod AI systems to see if they produce toxic output. As a result, red teamers are exposed to toxic content generated by AI systems and the question remains–have AI safety labs adequately prepared and supported red-teamers who are exposed to such content?

Privacy Risk

When considering privacy risk, note that when a generative system is trained on private data there is the risk of it resurfacing in model use by users. In an attempt to mitigate data leaks, companies may consider data anonymization. However, there is a conceptual distinction between anonymizing data and de-identifying it. As Lea Kissner, Chief Privacy Officer at Humu notes:

“Deidentification doesn’t tend to successfully anonymize data because there are so many sources of data in the world that still have identifying information in them; figure out where some identified dataset and the deidentified data align, and you’ve re-identified the dataset.” 

The difference between deidentification and true anonymization is that anonymized datasets cannot be correlated with other data to uncover individuals’ identities whereas some deidentified datasets can be re-identified.

Synthetic data is another measure to navigate the privacy issues with using real people’s data. Unlike de-identified or anonymized data, synthetic data is not derived from actual user data. Instead, synthetic data mimics user data in a way that is still useful to developers without compromising privacy. A question that then arises is what considerations should developers be aware of when building synthetic datasets?

Safety

Malicious actors can now target AI systems that act as automated agents. Attackers can accomplish these adversarial attacks against automated agents by embedding toxic content in conspicuous places (e.g. poisoning search engine results). As a result, entities (e.g. Google) beyond generative AI developers, deployers, and operators carry blame for enabling toxic content that may harm automated agents and their users. This harm may manifest in the form of adversarial attacks where attackers misdirect automated agents to leak information or act in a way the user did not intend, leading users to cybersecurity threats like attack sites.

Fairness

Measuring and mitigating bias is one aspect of fairness. As the documentation for the popular open source project for fairness in AI systems Fairlearn notes

“Machine learning systems used in the real world are inherently sociotechnical systems, which include both technologies and social actors. Designers of machine learning systems typically translate a real-world context into a machine learning model through abstraction: focusing only on ‘relevant’ aspects of that context, which are typically described by inputs, outputs, and the relationship between them. However, by abstracting away the social context they are at risk of falling into ‘abstraction traps’: a failure to consider how social context and technology are interrelated.”

Imagine an AI developer tests its model on a bias metric without giving consideration to the broader context in which the model exists. As a result, the developer might overlook the assumptions underlying a chosen bias metric and consequently might not consider whether those assumptions are actually true about the model’s social context. The consequence of falling into an abstraction trap is that an AI developer may be under the impression that they mitigated bias in their model when in reality they did no such thing. Fairness oversights can happen when a developer fails to understand the social context of their sociotechnical system.

Human Oversight and Monitoring

Generative AI systems produce outputs at scale. What this means is that as generative AI systems grow, so will the need for oversight and monitoring. The question in this case is how can AI system operators scale oversight and monitoring? Furthermore, even if operators introduce measures like chain-of-thought prompting (which AI researchers Wei et al. note are “a series of intermediate reasoning steps”) to explain outputs, scaling systems up will mean that there will be even more ‘chains of thought’ and therefore outputs to monitor. In essence, having a generative AI system produce an audit log means as the system scales, so will the volume of logs.

Conclusion

This piece provides a concise overview regarding gaps and additional issues in ISED’s code of practice from its proposed guardrails for generative AI. There could be lengthy discussion on each section. However, the goal of this piece was not to provide an in-depth policy analysis but rather provide a preliminary analysis on areas that require further consideration. 

As AI systems develop further, there will likely be a new taxonomy that extends existing AI risk categories like oversight and monitoring or transparency, while potentially introducing new categories in AI risk as well.

You may be interested in

Want to get started with safe & compliant AI adoption?

Schedule a call with one of our experts to see how Fairly AI can help