Making AI explainable to bridge trust gaps: Forrester weighs in

Artificial intelligence has invaded industries and companies of all sizes, but what makes these tools both powerful and erratic remains somewhat obscure.

Understanding how and why AI systems arrive at their outputs, a transparency mechanism called explainable AI, is critical, Forrester explained in a new report, and it is key for enterprises to minimize the trust gap in AI systems across all stakeholders.

Companies that have reached a higher level of AI maturity, where they begin to leverage opaque methods such as neural networks for additional predictive power, are the most concerned with explainability challenges, the report explained.

These neural networks are the only way to analyze text, images, and video at scale, so industries with use cases involving unstructured data will be more inclined to invest in explainability. At the same time, these companies will have even more regulatory exposure.

However, with the explosion of generative AI-enabled natural language interactions, eventually all companies will need to invest in explainability, the report noted.

Regulatory compliance is one driver, but explainability can also help companies unlock the business value of their AI algorithms. In a use case like credit determination, for example, explainability can inform future credit risk models. Customer insight is another area enterprises are eyeing to drive business value.

Furthermore, explainability can build trust among employees who use AI systems in their daily work. AI adoption suffers significantly when employees do not have at least a minimal understanding of how the system produces results. In fact, Forrester’s 2023 Data and Analytics Survey found that 25 per cent of data and analytics decision-makers cite lack of trust in AI systems as a major concern in using AI.

To achieve these outcomes with explainability, researchers have developed interpretability techniques such as SHAP and LIME, which are open source and widely used by data scientists. Many larger machine learning platform vendors offer explainable AI capabilities on top of their existing model development functionality, and they also serve other responsible AI needs such as model interpretability, bias detection, model lineage and more.
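To illustrate how these techniques are typically applied, the sketch below shows SHAP attributing a model's predictions to individual input features. It assumes the open-source shap and scikit-learn packages; the synthetic dataset and model choice are illustrative, not drawn from the report.

```python
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a tabular credit-risk dataset (illustrative only).
X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train an opaque gradient-boosted model of the kind the report calls hard to explain.
model = GradientBoostingClassifier().fit(X_train, y_train)

# SHAP attributes each prediction to individual input features, producing
# a per-decision explanation that can be reviewed alongside the output.
explainer = shap.Explainer(model, X_train)
shap_values = explainer(X_test[:5])

# Print the feature attributions for the first few test rows.
for i, sv in enumerate(shap_values):
    print(f"row {i}: feature contributions = {sv.values}")
```

In practice, these per-decision attributions are what get surfaced to risk reviewers or customers, rather than the raw model internals.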

But data scientists are not the only ones who will need explainability. AI governance teams need model intelligence platforms that provide responsible AI assessments to oversee an enterprise’s use of AI.

Business users can also use these model intelligence platforms, or they can use machine learning engines with explainable AI techniques, especially for high-risk or highly regulated use cases such as credit determination and hiring.

Forrester recommends that enterprises seeking explainability do the following:

  1. Look at the different AI use cases, classify risk accordingly, and then define explainability requirements for each tier (see the sketch after this list). Companies, for instance, have been borrowing from the EU’s AI Act, which classifies AI systems into four categories: unacceptable risk, high risk, limited risk, and minimal risk. Under such a scheme, high-risk use cases may require complete transparency, while interpretability may suffice for moderate-risk use cases.
  2. Demand explainability from AI vendors and beware of the black box, as you may be held accountable for any vulnerabilities or flaws. 
  3. Ensure that explainability goes beyond individual models and covers how the entire system works — the interoperability of all the pieces — and measure business outcomes and customer satisfaction as well as model performance to ensure the system is delivering as expected.
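
As a rough illustration of the first recommendation, the sketch below maps risk tiers to explainability requirements. The tier names follow the EU AI Act's categories, but the use cases, requirement wording, and function names are assumptions for illustration, not Forrester's framework.

```python
# Hypothetical tier-to-requirement mapping; the strings below are illustrative.
RISK_TIERS = {
    "unacceptable": {"allowed": False, "explainability": None},
    "high": {"allowed": True, "explainability": "full transparency: per-decision explanations, model documentation, audit trail"},
    "limited": {"allowed": True, "explainability": "interpretability: global feature importance and user disclosure"},
    "minimal": {"allowed": True, "explainability": "basic model documentation"},
}

# Hypothetical classification of a few use cases into tiers.
USE_CASE_RISK = {
    "credit determination": "high",
    "hiring screening": "high",
    "marketing content generation": "limited",
    "internal spell checking": "minimal",
}

def explainability_requirement(use_case: str) -> str:
    """Return the explainability requirement for a given use case."""
    tier = USE_CASE_RISK.get(use_case, "high")  # default to the strictest reviewable tier
    policy = RISK_TIERS[tier]
    if not policy["allowed"]:
        return f"{use_case}: prohibited"
    return f"{use_case} ({tier} risk): {policy['explainability']}"

if __name__ == "__main__":
    for uc in USE_CASE_RISK:
        print(explainability_requirement(uc))
```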

The full Forrester report is available for purchase here.

Ashee Pamma
Ashee is a writer for ITWC. She completed her degree in Communication and Media Studies at Carleton University in Ottawa. She hopes to become a columnist after further studies in Journalism. You can email her at [email protected]
