The Government of Canada recently published a guide for federal institutions on the use of Generative AI (GenAI) tools. The guide provides an overview of GenAI, describes the risks and challenges associated with adopting such tools, identifies principles for responsible use, and offers policy considerations that organizations can embed in their own practices. The guide also aligns with existing laws and policies on privacy, security, intellectual property, and human rights.
Continue reading this article for a summary and main takeaways from the Guide on the Use of Generative AI.
What is Generative AI?
GenAI is a type of AI that generates content such as text, video, and images based on a user's input, usually a short instruction or question.
Examples of GenAI tools include:
- Chatbots powered by large language models (LLMs), such as ChatGPT and Copilot
- GitHub Copilot and FauxPilot, which produce code based on text prompts
- DALL-E, Midjourney and Stable Diffusion, which produce images from text or image prompts
Many GenAI models have been trained on large volumes of data. Feedback from users, applied through reinforcement, can lead the model to adjust its responses.
Challenges and Opportunities
It is critical for federal institutions to assess and mitigate the ethical and legal risks associated with using GenAI and ensure that personal information and sensitive data are protected.
There are many challenges in using GenAI tools, such as limited transparency and the outdatedness of training data. Moreover, training data drawn from the Internet can embed biases because it lacks a "diversity of views." GenAI can also pose risks to the "integrity and security of federal institutions" through potential misuse by various actors.
Recommended Approach
Although GenAI tools present concerns, federal institutions can also use them to support their operations and efficiency. The Government of Canada distinguishes low-risk uses, such as drafting an email to a colleague, from higher-risk uses, such as deploying a tool for use by the public. Accordingly, it suggests that federal institutions experiment with low-risk uses before considering higher-risk ones.
To ensure the responsible use of GenAI tools, the Treasury Board Secretariat has developed the acronym “FASTER”, defined below:
Fair: ensure that content from these tools does not include or amplify biases and that it complies with human rights, accessibility, and procedural and substantive fairness obligations; engage with affected stakeholders before deployment
Accountable: take responsibility for the content generated by these tools and the impacts of their use. This includes making sure generated content is accurate, legal, ethical, and compliant with the terms of use; establish monitoring and oversight mechanisms
Secure: ensure that the infrastructure and tools are appropriate for the security classification of the information and that privacy and personal information are protected; assess and manage cyber security risks and robustness when deploying a system
Transparent: identify content that has been produced using generative AI; notify users that they are interacting with an AI tool; provide information on institutional policies, appropriate use, training data and the model when deploying these tools; document decisions and be able to provide explanations if tools are used to support decision-making
Educated: learn about the strengths, limitations and responsible use of the tools; learn how to create effective prompts and to identify potential weaknesses in the outputs
Relevant: make sure the use of generative AI tools supports user and organizational needs and contributes to better outcomes for clients; consider the environmental impacts when choosing to use a tool; identify appropriate tools for the task; AI tools aren’t the best choice in every situation
Other arms of the Canadian Government, including the Canadian Centre for Cyber Security, Statistics Canada, and the Office of the Chief Information Officer of Canada, can also provide support.
Responsibilities for Federal Institutions
Federal institutions are encouraged to explore the uses of GenAI tools and to enable employees to optimize their work while ensuring alignment with the FASTER principles and regulatory policies.
Employees should also be provided with access to effective training tools and be supported in learning about detecting biases and inaccurate content. Furthermore, employees should be engaged in effective change management programs to ensure skill progression and alignment.
Policy Considerations and Best Practices
The Directive on Automated Decision-Making applies to automated systems that are used to make administrative decisions, including those that rely on AI. Organizations should ensure that they meet the requirements of the directive including completing the Algorithmic Impact Assessment.
It is also important to recognize that GenAI may not be suited for use in administrative decision-making because of its limitations around transparency, accountability, and fairness. Moreover, the terms of use of some tools may prohibit using their models for certain types of decisions.
Some best practices include:
- Conducting regular system testing before and during the operation of a system to ensure that risks of potential adverse impacts are identified and mitigated.
- Applying more in-depth testing methods to identify potential risks in instances where systems will be made publicly available.
- Planning independent audits for assessing GenAI systems against risk and impact frameworks. See the guidance on the Risk management page.
- Developing a plan to document and respond to cyber security events and incidents, aligned with the Government of Canada Cyber Security Event Management Plan (GC CSEMP).
Privacy Considerations
Personal information handled by federal institutions is subject to the requirements of the Privacy Act. Privacy risks associated with GenAI will depend on how the system or tool is being used, how it processes information, and whether it is publicly accessible or deployed on the Government's secure network.
Public servants should not input personal information into publicly available online GenAI tools. However, when using a GenAI tool controlled by the Government, employees are responsible for following privacy requirements but may be able to input personal information depending on the security controls in place.
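As an illustrative sketch only (not from the guide), one simple safeguard is a pre-processing step that strips obvious personal identifiers from a prompt before it is sent to a publicly available tool. The patterns below are assumptions and far from exhaustive; real deployments would need much more robust detection and, above all, policy controls:

```python
import re

# Hypothetical, non-exhaustive patterns for common personal identifiers.
# Production use would require stronger detection (e.g., named-entity
# recognition) and institutional policy controls; this is only a sketch.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SIN": re.compile(r"\b\d{3}[-\s]?\d{3}[-\s]?\d{3}\b"),  # Canadian SIN format
}

def redact(prompt: str) -> str:
    """Replace matches of each pattern with a [REDACTED:<label>] placeholder."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED:{label}]", prompt)
    return prompt

print(redact("Contact Jane at jane.doe@example.com or 613-555-0199."))
# → Contact Jane at [REDACTED:EMAIL] or [REDACTED:PHONE].
```

Redaction like this reduces, but does not eliminate, the risk of disclosure; the guide's underlying rule remains to keep personal information out of public tools entirely.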
Support and Resources
Federal institutions are encouraged to use the Guide on the Use of Generative AI to help develop their own guidance. The guide will continue to evolve. For further support, contact the Canadian Centre for Cyber Security, TBS's Responsible Data and AI team ([email protected]), Statistics Canada, or the Office of the Chief Information Officer of Canada.
Click here to read the full guide.