Responsible AI headlines at ALL IN as Minister Champagne announces new AI code of conduct

Innovation Minister François-Philippe Champagne this morning announced a new voluntary code of conduct that identifies measures pertaining to the responsible development and management of advanced generative AI systems.

He made the announcement at ALL IN, a two-day conference in Montreal organized by Scale AI, which is convening industry heavyweights from over 20 countries to discuss Canadian AI.

“Generative AI breakthroughs have important impacts for society,” Champagne said. “We’re at the point where we must take action. Clear frameworks are necessary to make sure that we’re building trust.”

The code outlines measures around the following principles:

  1. Accountability – Implement a clear risk-management framework proportionate to the scale and impact of activities, share information on risk-management best practices, and employ multiple lines of defence, including third-party audits.
  2. Safety – Perform impact assessments and take steps to mitigate risks, including malicious or inappropriate uses.
  3. Fairness and Equity – Test systems for biases throughout their lifecycle and implement diverse training methods.
  4. Transparency – Publish information on the capabilities and limitations of AI systems, develop methods to identify AI-generated output, disclose the types of training data used, and ensure that systems that could be mistaken for humans are clearly identified as AI.
  5. Human Oversight – Ensure systems are monitored and that incidents are reported and acted on.
  6. Validity and Robustness – Conduct testing, red teaming, and benchmarking against recognized standards to ensure systems operate effectively and are secured against attacks.

These measures, Innovation, Science and Economic Development Canada (ISED) said, will provide a critical bridge between now and when Bill C-27, which contains the government's proposed Artificial Intelligence and Data Act (AIDA), comes into force.

Proposed over a year ago, Bill C-27 aims to promote the responsible design, development, and use of AI systems in Canada's private sector, with a focus on high-impact systems affecting health, safety, and human rights.

The legislation has faced extensive scrutiny, including yesterday at the House of Commons Standing Committee on Industry and Technology, where critics urged the minister to address poorly defined language in AIDA, commit to more active consultation with stakeholders beyond industry insiders, and extend AI regulation to the public sector as well.

“After meeting with experts, we realized that while we are developing a law here in Canada, it will take time,” Champagne explained at ALL IN. “If you ask people on the street, they want us to take action now to make sure that we have specific measures that companies can take now to build trust in their AI products.”

Companies including Cohere, OpenText, Appen, and BlackBerry have signed on to the code of conduct.

In the coming days, the government will publish a summary of feedback it received during the consultation it carried out with stakeholders for the development of the code of conduct.

Ashee Pamma
Ashee is a writer for ITWC. She completed her degree in Communication and Media Studies at Carleton University in Ottawa. She hopes to become a columnist after further studies in Journalism. You can email her at [email protected]