In a recent CBC opinion piece, Toronto-based management consultant Jonah Prousky shares his thoughts on the legislative process surrounding the regulation of artificial intelligence. He suggests that OpenAI’s release of ChatGPT has reminded society of AI’s potential to “disrupt many aspects of the human experience,” arriving at a moment when we have grown complacent and reliant on such tools to perform work for us. As a result, Prousky deems it critical to manage the future of AI through increased legislation.
The Digital Charter Implementation Act, Bill C-27, contains Canada’s first piece of AI legislation through the Artificial Intelligence and Data Act (AIDA). This law would place guidelines on the uses of AI and impose penalties for non-compliance of up to $25 million. However, several challenges may ensue.
Given the nature of the legislative process and the many phases through which a bill must pass, it may be years until anything is enacted. Technology, however, will not slow down and wait for this process to take its course. In fact, as Prousky puts it, “technology develops exponentially,” making it difficult to predict or fathom what AI may be capable of in the future. He compares it to the exponential growth of Covid-19 and how challenging it proved to catch up with the virus in its early stages.
As well, AIDA focuses mainly on intentionally harmful misuses of AI, such as financial crime. There are countless less threatening uses of AI that will still shape the future of the “human experience,” such as school children using AI to complete their homework assignments. Moreover, the net effect of such uses on society remains largely unknown at this point.
Government regulation of AI will also mean that, ultimately, corporations will own this technology, which has its pros and cons. If corporate and social interests compete, the way forward is blurry. It will be interesting for Canadians to see how regulators navigate the ethical boundaries associated with AI.
Read the full article from the CBC.