Should the CIO be solely responsible for keeping AI in check? Info-Tech weighs in

In a recent webinar, Brian Jackson, research director at Info-Tech Research Group, said he found it surprising that IT workers think the CIO should be solely responsible for AI.

The next most popular answer, he added, was “well, nobody.”

The research company surveyed 894 respondents who either work in IT or direct IT for its 2024 Tech Trends report.

“It’s early days for many organizations who are deploying AI, and that’s probably why we’re seeing these types of responses. But making the CIO solely accountable is likely not what you want to do,” said Jackson. “If AI is being deployed to drive business outcomes, then you have to get the business leaders involved.”

Info-Tech also examined how organizations that have already invested in AI or plan to invest in it, which it refers to as “AI adopters”, compare to organizations that either do not plan to invest in AI or won’t do so until after 2024, referred to as “AI skeptics”.

Only one in six AI adopters plan to create a committee that will be accountable for AI, and one in 10 will share accountability across two or more executives.

Jackson advises organizations to think about three key concepts when implementing a responsible AI model:

  1. Trustworthy AI – Do people understand how it works, how it generates output, and what data goes into its training?
  2. Explainable AI – Can the organization explain how an AI model makes its predictions, its anticipated impact, and its potential biases?
  3. Transparent AI – Can the organization communicate the impacts of the decisions being made about AI, monitor and report on the results, show people the negative aspects, and adjust accordingly?

Having guardrails in place would be even more critical as AI starts creating customer value directly, Info-Tech said.

AI will no longer be just complementary to the core value of an e-commerce or entertainment business, such as when Netflix predicts what you’re going to watch next, explained Jackson.

“We’re seeing business models created where AI is the value that the customer gets out of the service,” he affirmed.

OpenAI is a perfect example of that, but we also see firms like Intuit, which is retooling its whole platform around generative AI. Specifically, it released GenOS, a generative AI operating system built on custom-trained financial large language models, which sits at the center of the company’s platform and tackles tax, accounting, marketing, cash flow, and many other personal finance challenges.

However, as much as it is valid to hold executives accountable for regulating AI, security by design would be equally critical, explained Jackson.

Every year, organizations are investing more in cybersecurity, and yet they continue to face more attacks than ever before.

“Somehow, we’ve created this industry where software vendors create the risk, yet the customers bear the cost of mitigating it,” Jackson noted.

He added, “It’s becoming everybody’s job. I bet you’ve been through some sort of phishing email testing or cybersecurity training from your own organization, no matter what your job title is. So how do we get out of the cycle of always spending more on cybersecurity? How do we start to shift the accountability for security back away from the users to the builders?”

In 2024, he pointed out, we will see the White House’s new National Cybersecurity Strategy put the onus on technology makers to prioritize security, mandating, for instance, internal and external testing of AI systems before release.

“The bottom line is – if you’re making new AI models, you don’t have a choice,” said Jackson. “We can’t afford to build fast and cheap today and pay the cost of vulnerability later. We need to build with security by design now. And if you’re on the other side of it – you’re a customer of these AI providers, you have more leverage.”

Another key trend that organizations looking to mitigate AI risks need to think about is their digital sovereignty, Jackson said.

Organizations can, for instance, update their robots.txt file to signal that they do not want their website data used to train an AI model.
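As a minimal sketch, a site’s robots.txt could include entries like the following. The user-agent tokens shown here, such as OpenAI’s GPTBot and Common Crawl’s CCBot, are published by the crawler operators and may change over time, and robots.txt compliance is ultimately voluntary on the crawler’s part:

    # Opt out of known AI training crawlers
    # (tokens subject to change; check each vendor's documentation)
    User-agent: GPTBot
    Disallow: /

    User-agent: CCBot
    Disallow: /

    User-agent: Google-Extended
    Disallow: /

    # All other crawlers may still index the site
    User-agent: *
    Allow: /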

However, he added, keeping your data locked down will take a lot more than that, given how widely data is being scraped to train open source models. Artists in particular have been on the receiving end of widespread mimicry by AI.

Many artists and organizations have already sought to protect their intellectual property with tools like Glaze from the University of Chicago, which puts images through a filter designed to keep their style from being interpreted by an AI algorithm.

The university is also developing another project, called Nightshade, which “poisons” the training data, rendering the outputs useless – dogs become cats, cars become cows, and so forth.

“While we wait for the courts to make the rulings, maybe the lawmakers will catch up and introduce new laws that redefine copyright in this AI age,” said Jackson. “But for now, it seems like it’s open season on scraping your data and imitating your intellectual property. So to defend our digital sovereignty, we have to use technology against technology.”


