Salesforce customers receive generative AI boost with new launch

Salesforce on Monday launched AI Cloud, which it described as a means for its customer base to experience the benefits of generative artificial intelligence (AI) safely and securely.

According to the San Francisco-based CRM giant, the offering is a “suite of capabilities optimized for delivering trusted, open, and real-time generative experiences across all applications and workflows.”

The heart of AI Cloud, it added, is Einstein, the company’s AI engine that was first launched in September 2016 and “now powers over one trillion predictions per week.”

Salesforce stated in a backgrounder document that “unlike consumer AI, like Apple’s Siri and Amazon Alexa, enterprise customers require higher levels of trust and security, especially in regulated industries.

“Salesforce has thousands of customers each using their own models. Einstein needs to ensure that the models are trusted, that customer data remains safe and secure, delivers accurate, unbiased results, and adheres to compliance requirements for customers dealing with more sensitive data, as in the finance, healthcare, or government sectors. And once a new model is shipped, it has to be re-trained on their freshest data.”

The core piece of the security strategy is the Einstein Trust Layer, a standard that “helps resolve enterprise concerns of risks associated with adopting generative AI by meeting enterprise data security and compliance demands. The Einstein Trust Layer prevents LLMs from retaining sensitive customer data, ensuring customers can maintain data governance controls, while still leveraging the immense potential of generative AI.”
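To make the idea concrete, the pattern the Trust Layer describes — keeping sensitive values out of what an external model sees or retains — can be sketched in a few lines. The snippet below is a simplified illustration only, not Salesforce’s implementation; the function names, the regex patterns, and the call_llm() stub are all hypothetical.

```python
# Hypothetical sketch of the data-masking idea described above: sensitive
# values are swapped for placeholders before a prompt leaves the organization,
# then restored in the model's response. Not Salesforce's implementation;
# names, patterns, and the call_llm() stub are illustrative assumptions.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def mask_pii(text):
    """Replace detected PII with numbered placeholders; return masked text plus a lookup table."""
    lookup = {}
    for label, pattern in PII_PATTERNS.items():
        for i, match in enumerate(pattern.findall(text)):
            token = f"<{label}_{i}>"
            lookup[token] = match
            text = text.replace(match, token)
    return text, lookup

def unmask(text, lookup):
    """Restore the original values in the model's response."""
    for token, value in lookup.items():
        text = text.replace(token, value)
    return text

def call_llm(prompt):
    # Stand-in for a request to an external model operated with no data retention.
    return f"Draft reply based on: {prompt}"

masked_prompt, table = mask_pii("Follow up with jane.doe@example.com at 555-123-4567")
response = unmask(call_llm(masked_prompt), table)
print(response)
```

The external model only ever sees the placeholder tokens; the mapping back to real values stays inside the organization’s own systems.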

That is critical, according to new findings from a Salesforce research study released this month. A survey of more than 4,000 full-time employees found that while a majority are using or plan to use the technology, nearly three-quarters (73 per cent) admit that generative AI poses new security risks.

The survey revealed that:

  • 61 per cent are using or plan to use generative AI very soon; 68 per cent said it will help them better serve their customers; and 67 per cent said it will help them get “more out of their other technology investments, such as other AI or machine learning (ML) tools.”
  • Nearly 60 per cent of those who plan to use the technology do not know how to do so using trusted data sources or while ensuring sensitive data is kept secure.
  • A trust gap exists: 83 per cent of C-suite executives are confident they can use generative AI safely and securely, compared with just 29 per cent of front-line employees.

Paula Goldman, chief ethical and humane use officer at Salesforce, said that generative AI has the potential to help businesses connect with their audiences in new, more personalized ways. She added that as companies embrace the technology, they need to ensure there are ethical guidelines and guardrails in place for its safe and secure development and use.

The Einstein Trust Layer, the company added, will also provide deployment capabilities for any relevant Large Language Model (LLM), while helping organizations maintain their data privacy, security, residency, and compliance goals.
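In practice, supporting “any relevant” LLM amounts to a pluggable model layer: applications code against one interface while the underlying model can be swapped for a hosted, in-region, or customer-managed deployment. The sketch below illustrates that pattern under stated assumptions; the class names and endpoint are hypothetical and do not reflect Salesforce’s actual APIs.

```python
# Minimal "bring your own model" gateway sketch. Class and provider names are
# hypothetical assumptions, not Salesforce APIs; the HTTP call is stubbed out.
from abc import ABC, abstractmethod

class LLMProvider(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str:
        """Return a completion for the prompt."""

class HostedModel(LLMProvider):
    def __init__(self, endpoint: str):
        self.endpoint = endpoint  # e.g. a customer-managed, in-region deployment

    def complete(self, prompt: str) -> str:
        # A real implementation would POST to self.endpoint; stubbed for illustration.
        return f"[{self.endpoint}] completion for: {prompt[:40]}..."

class Gateway:
    """Routes requests to whichever provider the tenant has configured."""
    def __init__(self, provider: LLMProvider):
        self.provider = provider

    def generate(self, prompt: str) -> str:
        # Policy checks (masking, audit logging, residency) would sit here.
        return self.provider.complete(prompt)

gateway = Gateway(HostedModel("https://llm.internal.example.com"))
print(gateway.generate("Summarize the latest support case for this account"))
```

Swapping models then becomes a configuration change rather than an application rewrite, which is the flexibility the company is pointing to.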

One such organization is RBC Wealth Management USA. Greg Beltzer, its head of technology, said, “Embedding AI into our CRM has delivered huge operational efficiencies for our advisors and clients.

“We believe that this technology has the potential to transform the way businesses interact with their customers, deliver personalized experiences and drive customer loyalty.”

Also Monday, at an event in New York City called Salesforce AI Day, the company announced an expansion of its Generative AI Fund from US$250 million to US$500 million.

Paul Drews, managing partner of Salesforce Ventures, said that the expansion “enables us to work with even more entrepreneurs who are accelerating the development of transformative AI solutions for the enterprise.”

The fund has already invested in several AI firms, including Hearth, You.com, Anthropic and Cohere.

During a keynote address at the NYC event, Salesforce chief executive officer (CEO) Marc Benioff said that with the AI Cloud launch, the company’s customer base now has the “ability to use generative AI without sacrificing their data privacy and data security. This is critical for each and every one of our customers all over the world, for every transaction and every conversation in Salesforce begins and ends with the word ‘Trust’. We understand that well.

“And there’s one other critical part of all of this. It’s not just about trusted AI, and delivering the technology to the right person at the right time, but it’s also about responsibility.

“As we’re all going to learn – because we’re now on a societal AI journey – there is going to be a lot about responsibility in this technology. We’ve all seen the movies, and we’ve all seen where this can go, haven’t we? We all have these crazy ideas in our head of what could happen. There are many different possible scenarios, so that’s why responsible AI use is so critical.”


Paul Barker
Paul Barker is the founder of PBC Communications, an independent writing firm that specializes in freelance journalism. He has extensive experience as a reporter, feature writer and editor and has been covering technology-related issues for more than 30 years.
