Google's AI chatbot Bard makes a factual mistake in its first demo.
Google's AI chatbot Bard, which competes with OpenAI's ChatGPT, was revealed on Monday and will be "more freely available to the public in the coming weeks." Experts have pointed out, though, that Bard got a fact wrong in its first demo, so it's not off to a great start.
In the demo, Bard is asked, "What new discoveries from the James Webb Space Telescope can I tell my 9-year-old about?" and Google shared a GIF of its answer. One of Bard's three bullet points claims that the telescope "took the very first photos of a planet outside of our own solar system."
Astronomers said on Twitter that this isn't true and that, according to NASA's website, the first picture of an extrasolar planet was taken in 2004.
Astrophysicist Grant Tremblay tweeted, "For the record: JWST did not take 'the very first picture of a planet outside our solar system,'" adding that he was sure Bard would nonetheless be impressive.
Bruce Macintosh, director of the UC Santa Cruz Observatories, also spotted the error, tweeting that Google might want to find a better example, given that he had imaged an exoplanet 14 years before JWST launched.
In a later tweet, Tremblay said, "I do enjoy and appreciate that one of the most powerful companies in the world is using a JWST search to promote their LLM." He added that while ChatGPT and similar programs seem very impressive, they are often wrong, and that it will be interesting to see whether LLMs self-correct over time.
As Tremblay notes, one of the biggest problems with AI chatbots like ChatGPT and Bard is their tendency to confidently present false information as fact. Because the systems are essentially sophisticated autocomplete, they frequently "hallucinate," or invent information.
They are trained on huge amounts of text and learn patterns that predict which word is most likely to come next in a given sentence, rather than querying a database of verified facts. One well-known AI professor has called them "bullshit generators" because their answers are driven by probabilities rather than rules.
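To illustrate the idea (this is a toy sketch, nothing like a production LLM), here is a minimal bigram predictor that always picks the statistically most frequent continuation from its training text. Note that it has no notion of truth, only of frequency, which is the root of the "hallucination" problem described above:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which word follows which in the training text."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent continuation -- probable, not necessarily true."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

# A tiny, made-up training corpus: frequent phrasing wins regardless of facts.
corpus = (
    "the telescope took the first photo "
    "the telescope took the first photo "
    "the telescope launched in 2021"
)
model = train_bigrams(corpus)
print(predict_next(model, "took"))   # "the"
print(predict_next(model, "first"))  # "photo" -- chosen by frequency alone
```

Real models use vastly larger contexts and neural networks instead of raw counts, but the core mechanism is the same: continue the text plausibly, with no built-in fact check.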
There is already plenty of false and misleading information on the internet, but Microsoft's and Google's plans to use these chatbots as search engines make the problem worse: there, the chatbots' answers carry the authority of a machine that claims to know everything.
Microsoft, which showed off its new AI-driven Bing search engine yesterday, has tried to preempt these worries by putting the onus on the user. "Bing is powered by AI, so surprises and mistakes are possible," reads the company's disclaimer, which asks users to check the facts and share feedback so the system can learn and improve.
A Google spokesperson told The Verge, "This shows how important a thorough testing process is, something we're kicking off this week with our Trusted Tester program. We'll combine external feedback with our own internal testing to make sure Bard's responses meet a high bar for quality, safety, and groundedness in real-world information."