Hashtag Trending Feb. 13th - Google employees criticize “botched” announcement of Bard, AI learns without being trained, and coping with the biggest issue in hybrid and remote work

Google employees have choice words for their company’s mismanagement of its AI rollout last week. Also: AI that might be able to learn without being trained, and ways to cope with what some are calling the “biggest issue in hybrid and remote work.”

It’s Monday, February 13th. These stories and more on Hashtag Trending – today’s top technology news stories. I’m your host, Jim Love.

Employees at Google had choice words for their company’s performance last week in the rollout of Google’s AI offerings. According to a story on CNBC, staffers took to the internal message forum Memegen to express their displeasure, calling the rollout of Bard “rushed,” “botched” and even “un-Googley.”

They were, of course, referring to the fact that Google’s AI – called Bard – had made a very simple factual error, one that went undetected and found its way into the corporate presentation. That caused an uproar on Twitter and sent the company’s stock plummeting.

It also raised the ire of Google employees, already unhappy in the face of recent layoffs.

“Dear Sundar, the Bard launch and the layoffs were rushed, botched, and myopic,” read one meme that included a serious picture of Google CEO Sundar Pichai. “Please return to taking a long-term outlook.” The post received many upvotes from employees.

Another popular post stated: “Sundar, and leadership, deserve a Perf NI,” which is the lowest rating in the company’s employee performance review system. The author goes on to say, “They are being comically short sighted and un-Googlely in their pursuit of ‘sharpening focus.’”

Source: CNBC

Popular user forum Reddit was the victim of a cybersecurity breach. According to Bleeping Computer, hackers managed to gain access to the company’s internal business systems and to steal internal documents and even source code.

Hackers gained access using a phishing attack that targeted Reddit employees and sent them to a landing page impersonating Reddit’s internal intranet site. Using that now-common technique, the attackers were able to steal employees’ credentials and even two-factor authentication tokens.

As Reddit explained in its security incident notice, “After successfully obtaining a single employee’s credentials, the attacker gained access to some internal docs, code, as well as some internal dashboards and business systems.” But the report further notes:

“We show no indications of a breach of our primary production systems (the parts of our stack that run Reddit and store the majority of our data).”

Reddit said that the employee self-reported the incident to the company’s security team.

According to Reddit, the stolen data includes limited contact information for company contacts and current and former employees, as well as some details about the company’s advertisers. Reddit also noted that credit card information and passwords were not affected.

Source: Bleeping Computer

The Cisco 2023 Data Privacy Benchmark Report had some very interesting insights that point to a real divide between consumer and corporate perceptions of how personal data should be handled, particularly when it comes to its use by artificial intelligence-based systems.

In the study, 95 per cent of companies called privacy a “business imperative” and said that it was an “integral” part of company culture. 94 per cent believe customers will not buy from them if they are not protecting personal data.

That belief has shown itself in corporate spending on privacy, which, according to the report, increased in 2019 and “at least held steady” in 2022 as companies saw, or believed there were, financial benefits from these investments.

But the report noted some strong negative responses in terms of customer trust, particularly when it comes to the use of AI. Only 43 per cent of consumers believe AI will prove useful in improving people’s lives. Only just over half (54 per cent) are willing to share even anonymized personal data to improve AI products. 60 per cent say they are concerned about how businesses will make use of AI and, most punishing, 65 per cent say that current uses have eroded their trust.

Companies seem to recognize this, with 92 per cent saying that they need to do a better job of reassuring customers that “AI solutions will only use data for intended and legitimate purposes.”

But the study showed a real disconnect in terms of what is important to consumers versus what is important to the companies surveyed. Consumers said that their priority was clear transparency into how their data is being used, while organizations were focused on compliance with privacy laws, with transparency taking second place.

The big surprise came when 90 per cent of consumers seemed to prefer global data storage providers in terms of security. In prior studies, consumers had strongly preferred the idea of local storage, or what has come to be termed data sovereignty.

Source: Cisco

According to an article in Vice, researchers at the Massachusetts Institute of Technology, Stanford University, and Google have discovered an “apparently mysterious” phenomenon in which AI systems appear to learn new tasks they have not been trained for – what the researchers refer to as “in-context” learning.

Normally, to learn how to perform a new task, machine learning models need to be retrained with new data – a tedious and time-consuming process. But what if these systems could learn new tasks from only a few examples, essentially picking up skills they haven’t been explicitly trained for?

That’s exactly what these researchers appear to have observed. If true, it means their model isn’t just copying training data; it’s building on previous knowledge, doing just what humans would do.
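To make that idea concrete, here is a small, purely hypothetical illustration of in-context learning: the task below is never part of any training run, and the handful of worked examples embedded in the prompt are the only “training” the model ever sees. The prompt text and the placeholder model call are assumptions for illustration, not the researchers’ actual setup.

```python
# A hypothetical few-shot prompt. The word-reversal task is never trained on;
# the three worked examples inside the prompt are all the model gets.
prompt = """Reverse each made-up word.
Input: blorp   -> Output: prolb
Input: tasker  -> Output: reksat
Input: quandle -> Output:"""

# With any text-completion model (the call below is a placeholder, not a
# specific library's API), the model is simply asked to continue the pattern:
#
#   completion = some_language_model.complete(prompt)
#
# A model that exhibits in-context learning would answer "eldnauq" without
# any retraining or gradient updates; it infers the rule from the examples.
print(prompt)
```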

The researchers weren’t using ChatGPT or other popular tools, but they were working with smaller versions of the same types of models, so their work does offer insights into these larger tools and data sets.

The researchers fed their model synthetic data and gave it prompts that the program could never have seen before. “Despite this, the language model was able to generalize and then extrapolate knowledge from them,” according to one of the researchers.

The team hypothesized that AI models that exhibit in-context learning actually create smaller models inside themselves to achieve new tasks. This may be possible due to a concept called “self-attention,” which transformer models like ChatGPT use to track relationships in sequential data, such as words in a sentence.
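For listeners curious about what “self-attention” actually computes, here is a minimal sketch of scaled dot-product self-attention written with NumPy. It is not the researchers’ code; the toy dimensions and random projection matrices are purely illustrative.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of token embeddings.

    X:          (seq_len, d_model) token embeddings
    Wq, Wk, Wv: projections producing queries, keys and values
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = K.shape[-1]
    # Each token's query is scored against every token's key ...
    scores = Q @ K.T / np.sqrt(d_k)
    # ... and a softmax turns those scores into attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each token's output is a weighted mix of every token's value vector,
    # which is how the model tracks relationships across the sequence.
    return weights @ V

# Toy example: 4 "words", embedding size 8, attention head size 4.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 4)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # -> (4, 4)
```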

The paper has only just been posted and will undoubtedly be the subject of further research, but unraveling how and why this occurs could be a huge step forward in understanding how large language models learn and store information.

Source: Vice

Many companies are reporting that one of the biggest challenges in hybrid and remote work is the on-the-job training and integration of junior employees. A report in Forbes claims that remote and hybrid mentoring may be the solution to this problem.

The article notes that in an effective structured mentoring program, companies need to pair senior staff members with junior staff members for virtual mentoring sessions. The mentoring team should also include two members from outside the employee’s regular work group: one from the junior staff member’s business unit and another from a different unit, with at least one of them based in a different geographical area.

This team composition addresses one of the key problems in remote and hybrid work – not just helping junior staff build their network, but also increasing cross-functional connections for staff.

The mentor from the junior employee’s own team should meet with their mentee monthly for a brief 20-30 minute session and go through a checklist: a check-in on the mentee’s progress, plus a list of questions to gauge how the employee is feeling and how confident they are in their role. It should also look at what obstacles they are facing and what else they need for their progress and growth.

The article has a list of questions, but suggests that each company or even team should customize these for their own use.

Source: Forbes

Those are the top tech news stories for today.

Links to these stories can be found in the article posted on itworldcanada.com/podcasts. You can also find more great stories and more in-depth coverage on itworldcanada.com or, in the US, on technewsday.com.

If you’re trying to keep up on cybersecurity, you might want to follow our sister podcast, Cyber Security Today.

Hashtag Trending goes to air five days a week with a daily newscast and we have a special weekend edition with an interview featuring an expert in some aspect of technology that is making the news.

I always love to hear from you. You can find me on LinkedIn, Mastodon or Twitter, or just leave a comment under the article for this podcast at ITWorldCanada.com.

I’m Jim Love – have a great Monday.
