Google’s answer to ChatGPT, Google Bard, is out

A large Google logo displayed amid foliage. (Credit: Sean Gallup | Getty Images)

Google Bard is out—sort of. Google says you can now join the waitlist to try the company's generative AI chatbot at the newly launched bard.google.com site. The company is going with "Bard" and not the "Google Assistant" chatbot branding it was previously using. Other than a sign-up link and an FAQ, there isn't much there right now.

Google's blog post calls Bard "an early experiment," and the project is covered in warning labels. The Bard site has a bright blue "Experiment" label right on the logo, and the blog post warns, "Large language models will not always get it right. Feedback from a wide range of experts and users will help Bard improve." A disclaimer below the demo input box warns, "Bard may display inaccurate or offensive information that doesn't represent Google's views."

Microsoft has been criticized for taking a very aggressive stance toward rolling out AI, even cutting its AI ethics team. Google is trying to paint itself as more cautious, saying, "Our work on Bard is guided by our AI Principles, and we continue to focus on quality and safety. We’re using human feedback and evaluation to improve our systems, and we’ve also built in guardrails, like capping the number of exchanges in a dialogue, to try to keep interactions helpful and on topic."


