WHEN YOU GOOGLE SOMETHING – YOU EXPECT FACTS… NOT FICTION.
BUT GOOGLE’S A-I FEATURE MIGHT TELL YOU TO PUT GLUE ON YOUR PIZZA… OR EXPLAIN THE MEANING OF “SLAP A GOOSE.”
THIS IS ALL THANKS TO “A-I OVERVIEW” – A TOOL THAT’S SUPPOSED TO SAVE YOU TIME… BUT MIGHT JUST LEAVE YOU QUESTIONING REALITY.
YOU MAY HAVE NOTICED THE NEW FEATURE POP UP ABOVE CERTAIN GOOGLE SEARCHES LAST MAY.
ACCORDING TO GOOGLE… THE FEATURE WILL “appear in Google Search results when our systems determine that generative responses can be especially helpful — for example, when you want to quickly understand information from a range of sources, including information from across the web and Google’s Knowledge Graph.”
BUT AS SOCIAL MEDIA USERS ARE POINTING OUT… IT SEEMS TO BE MAKING UP DEFINITIONS TO RANDOM PHRASES OR TERMS.
TRY A SIMPLE SEARCH OF A RANDOM STRING OF WORDS AND ADD THE WORD “MEANING” AT THE END. WE TRIED “MILK THE THUNDER MEANING”… WHICH – ACCORDING TO A-I OVERVIEW – IS A METAPHOR THAT SUGGESTS USING OR EXPLOITING A SITUATION TO ONE’S ADVANTAGE.
BUT WHEN YOU CLICK THE HYPERLINK CITED AS THE SOURCE… THE LINKED ARTICLE MENTIONS NOTHING ABOUT MILKING THUNDER – AND INSTEAD LISTS TWO SEPARATE PHRASES.
THE ARTICLE – TITLED “WEIRD ENGLISH PHRASES AND THEIR MEANING” BY ‘E-F ENGLISH LIVE’ – LISTS “Steal someone’s thunder” AND “Crying over spilt milk.”
OR TAKE AN EXAMPLE FROM WIRED THAT HIGHLIGHTS A SOCIAL MEDIA POST – “YOU CAN’T LICK A BADGER TWICE MEANING” … WHICH – ACCORDING TO A-I OVERVIEW – IS AN IDIOM MEANING YOU CAN’T TRICK OR DECEIVE SOMEONE A SECOND TIME AFTER THEY’VE BEEN TRICKED ONCE.
IF THIS ALL SOUNDS FAMILIAR… IT’S BECAUSE GOOGLE FACED A SIMILAR ISSUE DURING THE SUPER BOWL.
TRAVEL BLOGGER NATE HAKE POINTED OUT ERRORS IN GOOGLE’S FIFTY SHORT ADS HIGHLIGHTING SMALL BUSINESSES – ONE FROM EVERY STATE.
IN WISCONSIN’S AD – FITTINGLY SET IN AMERICA’S DAIRYLAND – GOOGLE’S GEMINI CHATBOT HELPED A CHEESEMONGER WRITE A PRODUCT DESCRIPTION… CLAIMING GOUDA ACCOUNTS FOR FIFTY TO SIXTY PERCENT OF THE WORLD’S CHEESE CONSUMPTION.
HAKE FACT-CHECKED ON X SAYING – “Gemini provides no source, but that is just unequivocally false. Cheddar & mozzarella would like a word.”
ACCORDING TO GOOGLE – AN A-I HALLUCINATION IS AN INCORRECT OR MISLEADING RESULT THAT AN A-I MODEL CAN GENERATE.
SINCE MODELS ARE TRAINED ON DATA – THEY LEARN TO MAKE PREDICTIONS BASED ON PATTERNS… BUT THEIR ACCURACY DEPENDS ON THE QUALITY OF THAT DATA.
A GOOGLE EXEC REPLIED TO HAKE… SAYING – “not a hallucination, Gemini is grounded in the Web – and users can always check the results and references. In this case, multiple sites across the web include the 50-60% stat.” BUT… THEY DID QUIETLY RE-EDIT THE ADS.
MANY AMERICANS REMAIN SKEPTICAL OF A-I… AND THESE LATEST GLITCHES COULD FUEL THEIR DOUBTS.
EDELMAN’S 20-25 TRUST BAROMETER STUDY SHOWS WHILE 72-PERCENT OF PEOPLE IN CHINA TRUST A-I… ONLY 32-PERCENT OF AMERICANS DO.
EDELMAN SAYS SOME SEE A-I AS A FORCE FOR PROGRESS – WHILE OTHERS WORRY ABOUT ITS UNINTENDED CONSEQUENCES.
AND ACCORDING TO TECH TIMES… GOOGLE SPOKESPERSON MEGHANN FARNSWORTH SAYS THEIR SYSTEM ATTEMPTS TO OFFER CONTEXT WHENEVER IT CAN… BUT NONSENSICAL PROMPTS CAN STILL TRIGGER A-I OVERVIEWS.