
Google, Microsoft and others agree to voluntary AI safety action


Seven major American artificial intelligence companies, including Google and Microsoft, have promised that new AI systems will go through outside testing before they are publicly released, and that they will clearly label AI-generated content, U.S. President Joe Biden announced Friday.

“These commitments, which the companies will implement immediately, underscore three fundamental principles: safety, security, and trust,” Biden told reporters.

— the companies have an obligation to make sure their technology is safe before releasing it to the public. "That means testing the capabilities of their systems, assessing their potential risks, and making the results of those assessments public," Biden said;

— companies promised to prioritize the security of their systems by safeguarding their models against cyber threats, managing the risks to U.S. national security, and sharing best practices and industry standards;

— companies agreed they have a duty to earn the public's trust and empower users to make informed decisions by labeling content that has been altered or AI-generated, rooting out bias and discrimination, strengthening privacy protections, and shielding children from harm;

— companies agreed to find ways for AI to help meet society's greatest challenges, from cancer to climate change, and to invest in education and new jobs so that students and workers can benefit from the enormous opportunities AI presents.

The five other companies making the pledge are Amazon, Meta, OpenAI, Anthropic and Inflection.

These voluntary commitments are only a first step toward binding obligations, which would have to be adopted by Congress. Realizing the promise and minimizing the risk of AI will require new laws, rules, oversight, and enforcement, a White House background paper says. The administration will continue to take executive action and pursue bipartisan legislation to help America lead the way in responsible innovation and protection.

“As we advance this agenda at home, we will work with allies and partners on a strong international code of conduct to govern the development and use of AI worldwide,” the statement adds.

The agreement says the companies making this commitment recognize that AI systems may continue to have weaknesses and vulnerabilities even after robust red-teaming. They commit to establishing bounty systems, contests, or prizes to encourage the responsible disclosure of weaknesses, such as unsafe behaviors, in systems within scope, or to including AI systems in their existing bug bounty programs.

There was some skepticism after the announcement. PBS quoted James Steyer, founder and CEO of the nonprofit Common Sense Media, who said, “History would indicate that many tech companies do not actually walk the walk on a voluntary pledge to act responsibly and support strong regulations.”
