No, Google Bard is not trained on Gmail data


Bard is a generative AI tool that can get things wrong

Google's large language model tool, Bard, has claimed that it was trained on Gmail data, but Google has denied that this is the case.

Bard is a generative AI tool built on a Large Language Model (LLM), which generates responses based on its large training data set. Like ChatGPT and similar tools, it isn't actually intelligent and will often get things wrong, a behavior referred to as "hallucinating."

A tweet from Kate Crawford, author and principal researcher at Microsoft Research, shows a Bard response suggesting Gmail was included in its dataset. This would be a clear violation of user privacy, if true.

Umm, anyone a little concerned that Bard is saying its training dataset includes... Gmail?

I'm assuming that's flat out wrong, otherwise Google is crossing some serious legal boundaries. pic.twitter.com/0muhrFeZEA

— Kate Crawford (@katecrawford) March 21, 2023

But Google's Workspace Twitter account responded, stating that Bard is an early experiment that will make mistakes, and confirmed that the model was not trained on information gleaned from Gmail. A pop-up on the Bard website likewise warns users that Bard will not always get queries right.

These generative AI tools aren't anywhere near foolproof, and users with access often try to coax out information that would otherwise be hidden. Queries such as Crawford's can sometimes surface useful information, but in this case, Bard simply got it wrong.

Generative AI and LLMs have become a popular topic in the tech community. While these systems are impressive, they still suffer from early problems.

Users are urged, even by Google itself, to verify anything an LLM like Bard says with a web search. While it can be interesting to see what the tool will say, its answers are not guaranteed to be accurate.
