More work needed to blunt public’s AI privacy concerns: Report

Organizations aren’t making much progress in convincing the public their data is being used responsibly in artificial intelligence applications, a new survey suggests.

The report, Cisco Systems’ seventh annual data privacy benchmark study, was released Thursday in conjunction with Data Privacy Week.

It includes responses from 2,600 security and privacy professionals in Australia, Brazil, China, France, Germany, India, Italy, Japan, Mexico, Spain, the United Kingdom, and the United States. The survey was conducted in the summer of 2023.

Among the findings, 91 per cent of respondents agreed they need to do more to reassure customers that their data is being used only for intended and legitimate purposes in AI.

“This is similar to last year’s levels,” Cisco said in a news release accompanying the report, “suggesting not much progress has been achieved.”

Most respondents said their organizations were limiting the use of generative AI (GenAI) over data privacy and security issues. Twenty-seven per cent said their firm had banned its use, at least temporarily.

Customers increasingly want to buy from organizations they can trust with their data, the report says, with 94 per cent of respondents agreeing their customers would not buy from them if they did not adequately protect customer data.

Many of the survey responses show organizations recognize privacy is a critical enabler of customer trust. Eighty per cent of respondents said their organizations were getting significant benefits in loyalty and trust from their privacy investment. That’s up from 75 per cent in the 2022 survey and 71 per cent in the 2021 survey.

Graphic from Cisco Systems 2024 Privacy Benchmark report

Nearly all (98 per cent) of this year’s respondents said they report one or more privacy metrics to the board, and over half are reporting three or more. Many of the top privacy metrics tie very closely to issues of customer trust, says the report, including audit results (44 per cent), data breaches (43 per cent), data subject requests (31 per cent), and incident response (29 per cent).

However, only 17 per cent said they report progress to their boards on meeting an industry-standard privacy maturity model, and only 27 per cent report any privacy gaps that were found.

Respondents in this year’s report estimated the financial benefits of privacy remain higher than when Cisco started tracking them four years ago, but with a notable difference. On average, they estimated benefits in 2023 of US$2.9 million. This is lower than last year’s peak of US$3.4 million, with similar reductions in large and small organizations.

“The causes of this are unclear,” says the report, “since most of the other financial-oriented metrics, such as respondents saying privacy benefits exceed costs, respondents getting significant financial benefits from privacy investment, and ROI (return on investment) calculations, all point to more positive economics. We will continue to track this in future research to identify if this is an aberration or a longer-term trend.”

One challenge facing organizations when it comes to building trust with data is that their priorities may differ somewhat from those of their customers, says the report. Consumers surveyed said their top privacy priorities are getting clear information on exactly how their data is being used (37 per cent), and not having their data sold for marketing purposes (24 per cent). Privacy pros said their top priorities are complying with privacy laws (25 per cent) and avoiding data breaches (23 per cent).

“While these are all important objectives [for firms], it does suggest additional attention on transparency would be helpful to customers — especially with AI applications where it may be difficult to understand how the AI algorithms make their decisions,” says the report.

The report recommends organizations:

— be more transparent in how they apply, manage, and use personal data, because this will go a long way towards building and maintaining customer trust;
— establish protections when using AI for automated decision-making involving customer data, such as AI ethics management programs, keeping humans involved in the process, and working to remove any biases in the algorithms;
— apply appropriate control mechanisms and educate employees on the risks associated with generative AI applications;
— continue investing in privacy to realize the significant business and economic benefits.


Howard Solomon
Currently a freelance writer, I'm the former editor of ITWorldCanada.com and Computing Canada. An IT journalist since 1997, I've written for several of ITWC's sister publications including ITBusiness.ca and Computer Dealer News. Before that I was a staff reporter at the Calgary Herald and the Brampton (Ont.) Daily Times. I can be reached at hsolomon [@] soloreporter.com
