Hashtag Trending Oct. 30: G7 countries introduce AI code of conduct; Google has new tools to authenticate pictures; How much data is your smart speaker gathering on you?

The G7 countries initiate a roadmap for responsible AI they call the Hiroshima AI Process. Google has new tools to help authenticate pictures. And how much data is your smart speaker really gathering on you? Hint: more than they earlier admitted.


These and more top tech stories on Hashtag Trending

I’m your host Jim Love, CIO of IT World Canada and Tech News Day in the US.

The G7 countries, which consist of Canada, France, Germany, Italy, Japan, Britain and the United States, along with the European Union, are set to introduce a code of conduct for companies diving deep into the world of advanced artificial intelligence (AI). This move is a response to growing concerns about the potential misuse, privacy issues, and security risks associated with AI.

The “Hiroshima AI process,” initiated by the G7 leaders, aims to provide a roadmap for the safe and responsible development of AI.

The 11-point code emphasizes the importance of creating AI that is “safe, secure, and trustworthy.” It encourages companies to be transparent about their AI’s capabilities and limitations and to actively address any risks or misuses. Moreover, companies are advised to release public reports detailing their AI systems and to invest in strong security measures.

While the EU has been proactive in regulating AI with its stringent AI Act, other countries like the U.S. and Japan have adopted a more relaxed stance to spur economic growth. Vera Jourova, the European Commission’s digital chief, highlighted the significance of this code of conduct, viewing it as a foundational step towards ensuring AI safety until more concrete regulations are established.

Sources include: Reuters 

OpenAI, the company behind ChatGPT, has formed a new team named “Preparedness” to tackle what they call “catastrophic risks” of AI.

Led by Aleksander Madry from MIT, this team isn’t just looking at AI sending you phishing emails, but also at the potential for AI to cause “chemical, biological, radiological and nuclear” threats. 

While OpenAI’s CEO, Sam Altman, has often voiced concerns about AI’s potential dangers, this move seems straight out of a sci-fi movie. But don’t fret, they’re also looking at more “grounded” risks. And if you’ve got ideas on AI risks, OpenAI is offering $25,000 and a job for the best ones. So, got any AI doomsday scenarios in mind?

Sources include: TechCrunch

In the age of AI-generated images that can deceive even the sharpest eyes, Google is stepping in with tools to help users verify the authenticity of images online. Remember that viral image of Pope Francis in a trendy white puffer jacket? Yep, that’s the kind of mischief we’re talking about.

Google’s new “About this image” feature, unveiled at Google I/O, provides users with an image’s history, its usage across different sites, and crucially, its metadata. This metadata can reveal if an image was AI-generated, giving users a heads-up on its authenticity. To use this feature, simply click on the three dots next to an image in Google Images or the “more about this page” option in search results.

Additionally, Google’s “Fact Check Explorer” tool aids journalists in verifying images or topics. Since its release, 70 per cent of beta users have reported a reduction in their investigation time. And if you’re thinking Google’s alone in this fight, think again. Camera manufacturer Leica has introduced a camera that embeds metadata at the point of capture, detailing who, when, and how the image was taken.

Sources include: ZDNET 

Smart speakers, like Amazon’s Echo, are undeniably convenient. Want to play a song or reorder some essentials? Just ask. But, as Umar Iqbal from Washington University in St. Louis points out, there’s a hidden cost to this convenience: your privacy. Iqbal and his team discovered that Amazon uses data from smart speaker interactions to profile users for targeted ads. This revelation wasn’t initially clear in Amazon’s privacy policies. Only after the team’s findings were made public did Amazon update its Alexa Privacy Hub to admit this use of data. The research also highlighted that many advertisers share their cookies with Amazon, which then syncs with numerous third parties. Iqbal’s message is clear: consumers should be aware of the data they’re sharing with these devices. That song request might just come back as an ad for concert tickets.

And consider this in the context of our next story…

Sources include: TechXplore

There was a 2013 sci-fi film called “Her” in which the lead character falls head over heels for an AI named Samantha. It turns out that with ChatGPT’s new voice features, people are having hours-long heart-to-hearts with the AI, reminiscent of the movie’s premise. While ChatGPT isn’t as emotionally intuitive as Samantha, the conversations feel remarkably realistic.

The AI even simulates human-like vocal nuances, including breathing sounds and the occasional cough. Beyond casual chats, users are finding ChatGPT a handy brainstorming buddy, helping them flesh out creative ideas during long walks. So, while we’re not quite in a sci-fi romance, it’s clear that AI companionship is becoming more of a reality.

Sources include: Ars Technica 

Hashtag Trending goes to air 5 days a week with a special weekend interview show we call “the Weekend Edition.”

You can get us anywhere you get audio podcasts and there is a copy of the show notes at itworldcanada.com/podcasts 

I’m your host, Jim Love – have a marvelous Monday.  

Jim Love, Chief Content Officer, IT World Canada