Canadian privacy czars release principles for responsible development of AI

On the heels of cybersecurity guidance for generative AI systems issued by the federal government, Canada’s federal, provincial, and territorial privacy regulators have released their own set of privacy principles for organizations to follow.

Announced Thursday, the principles are aimed at advancing the responsible, trustworthy and privacy-protective development and use of generative artificial intelligence (AI) technologies in this country.

Parliament is debating the proposed Artificial Intelligence and Data Act (AIDA), which would impose mandatory rules on high-risk AI systems, but the law likely won’t come into effect for several years. In the meantime, governments and regulators hope the guidelines will give application developers, businesses, and government departments some idea of how far they should — or shouldn’t — go.

And though laws regulating AI aren’t on the books yet, the regulators note that organizations developing, providing, or using generative AI have to follow existing privacy laws and regulations in Canada.

Also on Thursday, the government announced that eight more companies have signed on to its voluntary AI Code of Conduct. They include AltaML, which helps firms with AI solutions; BlueDot, which uses AI to track infectious diseases; solutions provider CGI; Kama.ai, which uses AI to build marketing and customer relationship applications; IBM; Protexxa, which offers a SaaS cybersecurity platform; Resemble AI, which lets organizations create human-like voices for answering queries in call centres; and Scale AI, which helps firms create AI models. The voluntary code identifies measures that organizations are encouraged to apply when developing and managing advanced generative AI systems.

Federal Privacy Commissioner Philippe Dufresne announced the new principles document Thursday at the opening of an international Privacy and Generative AI Symposium organized by his office.

The document lays out how key privacy principles apply when developing, providing, or using generative AI models, tools, products and services. These include:

  • establishing legal authority for collecting and using personal information in AI systems and, when relying on consent, ensuring that it is valid and meaningful;
  • being open and transparent about the way information is used by AI systems and the privacy risks involved;
  • making AI tools explainable to users;
  • developing safeguards for privacy rights; and
  • limiting the sharing of personal, sensitive or confidential information.

Developers are also urged to take into consideration the unique impact that these tools could have on vulnerable groups, including children.

The document also provides examples of best practices, including incorporating “privacy by design” into the development of these tools and labelling content created by generative AI.


Howard Solomon
Currently a freelance writer, I'm the former editor of ITWorldCanada.com and Computing Canada. An IT journalist since 1997, I've written for several of ITWC's sister publications including ITBusiness.ca and Computer Dealer News. Before that I was a staff reporter at the Calgary Herald and the Brampton (Ont.) Daily Times. I can be reached at hsolomon [@] soloreporter.com
