AI Driving Google Search to the Junkyard
Ignore the Hype: Garbage in, Garbage Out Is Still True
Once upon a time a couple of decades ago I was considered an internet savant of sorts. I worked for a resort in the US Virgin Islands, and nearly everything we purchased had to be imported. Lobsters, mangos and marijuana were the only locally grown items available.
Too many of our needs, from bed sheets to tableware to basic food items, came in only after weeks of delay as brokers sought space on freighters headed to the Caribbean. Competition on price was a rarity; finding distributors (yes, the USVI is an American territory, but as far as commerce is concerned it’s no different from outer Slobobia) willing to do business overseas was a tedious chore.
What made me good at what I did as purchasing manager was my ability to coax Google into finding what couldn’t be found in a trade magazine or the Miami Yellow Pages. It was, I discovered, simply a matter of knowing how to ask the right questions. “Wholesale Bed Sheets” as a query wasn’t productive. Asking with keywords more relevant to our situation, like “hotel” or “export,” and drilling down on those results opened up a whole new world of possibilities.
Over the years, in different situations, I’ve used the search engine for a wide variety of tasks. I’d estimate that in the 12 years I’ve been writing a column, I’ve used Google search an average of five times per essay, for things as simple as checking the spelling of a name.
What I’ve noticed over the past few years is a steady decline in the usefulness of my queries. Part of what made Google useful in the past was its sense of my habits and interests. The downside to letting the search beast into your inner circle is that it sells or shares the information it gathers with other entities willing to pay for a shot at my attention. I have been willing to tolerate that.
I looked for a shoe store selling clogs during my recent East Coast journey, and eventually made a purchase. Now Crocs (which wasn’t under consideration) is popping up with ads everywhere.
If you’ve tried to use Google to get somewhere recently, odds are it wasn’t much help. Getting from New Haven to La Guardia airport follows a well-worn path used by Yale students, a multi-step journey involving trains and buses. I just wanted a guesstimate of how much time was involved. It’s a good thing my recently graduated niece was traveling with us.
The company that owns Google, Alphabet, has reached a mature stage of enshittification, meaning its monopoly on search allows it to degrade its usefulness to consumers. Throwing garbage (spam, inferior products paying for recognition, and scams) at us is more profitable than providing actual answers.
Gradually, assorted “search upgrades” have meant you must go several pages into the results before finding much genuinely connected to your query. Most people don’t have the patience to wade through results where a bunch of advertisers have paid to be at the front of the line.
Now, Google has -- ta-da! -- added an artificial intelligence component to its product. And, gee whiz, kids, Google Gemini does the searching for you and provides a short overview answer right at the top of the page.
Computers are capable of rapid-fire research along many avenues. All the knowledge used to answer that prompt comes from machine learning, namely remembering both data and its relationships to other data.
This is all fine and wonderful until you remember the old computing saw: “garbage in, garbage out.”
Google’s (and many other platforms’) practice of garbaging up search results for profit means that trash is now being treated as a component of truth. I mean, if it was that high up in the search results, it must be relevant, right?
It’s not just Google throwing garbage into the abyss; gaming search engine optimization with artificial editorial content has become the corporate game of choice. That’s why a simple search inevitably turns up “top ten” type lists.
When it comes to recipes -- one of my favorite search uses involved listing two or three ingredients followed by the word ‘recipe’ -- getting beyond the knock-offs that inevitably fail to list a critical ingredient (cream for a cream sauce, anyone?) has become so challenging that I’m going back to the indexes of dead-tree cookbooks.
And when things fall outside the parameters of commerce or cultural popularity, Gemini is as dumb as a box of rocks.
On Wednesday I wrote a short column announcing my return from vacation (one paragraph) followed by an analysis (17 paragraphs) of how the term Antifa is being used to generate fear and distract from the real issues of the day.
The headline was “Antifa Angst Is Authoritarian Brainwashing.” The subhead was “From Dr. Phil to UCSD, a Big Lie to Distract Dissent.”
I happened upon a Google Gemini AI synopsis of the story, which said I was writing about another Substack project, The Jumping Off Place, and discussing challenges and opportunities with co-editor Jim Miller. I regret not screenshotting it, because it’s gone now.
Neither The Jumping Off Place nor Jim Miller is mentioned or alluded to anywhere in the article, directly or indirectly.
But I did have a recent email conversation with Jim Miller, catching up on the goings-on at The Jumping Off Place while I was away.
Cue the Twilight Zone music…
***
Parker Molloy at The Present Age has made other astounding discoveries about the power of artificial intelligence in search at Google.
Google's AI-Generated Search Results Keep Citing The Onion
A Google search for the phrase “how many rocks should I eat each day” returned an AI-generated result citing “UC Berkeley geologists” who suggest people eat “at least one small rock a day.” It turns out that the actual source of this information was a 2021 article from the satirical website The Onion.
Onion CEO Ben Collins noticed this and another example, posting them on Bluesky. The second example, a search for “what color highlighters do the CIA use,” brings up a result that indirectly cites The Onion’s 2005 article, “CIA Realizes It’s Been Using Black Highlighters All These Years.”
I tested this with a number of other prompts and found that the search engine simply couldn’t tell the difference between news and satire.
Over at The Verge, Kylie Robison discovered another doozy.
Imagine this: you’ve carved out an evening to unwind and decide to make a homemade pizza. You assemble your pie, throw it in the oven, and are excited to start eating. But once you get ready to take a bite of your oily creation, you run into a problem — the cheese falls right off. Frustrated, you turn to Google for a solution.
“Add some glue,” Google answers. “Mix about 1/8 cup of Elmer’s glue in with the sauce. Non-toxic glue will work.”
So, yeah, don’t do that. As of writing this, though, that’s what Google’s new AI Overviews feature will tell you to do. The feature, while not triggered for every query, scans the web and drums up an AI-generated response. The answer received for the pizza glue query appears to be based on a comment from a user named “fucksmith” in a more than decade-old Reddit thread, and they’re clearly joking.
It’s not just Google having issues with turbocharged, super-duper search. Apparently Microsoft is hoping to save Bing, its failed search engine.
On Thursday morning Bing went badabing, shutting down search for Microsoft users along with other search platforms that use its technology. DuckDuckGo, Ecosia, ChatGPT Plus, and Microsoft’s Copilot were all down for most of the morning.
Ultimately, lousy search results are just one problematic area in the field of artificial intelligence.
Here’s Kyle Orland at Ars Technica:
While seeing a bunch of AI search errors like this can be striking, it's worth remembering that social media posters are less likely to call attention to the frequent examples where Google's AI Overview worked as intended by providing concise and accurate information culled from the web. Still, when a new system threatens to alter something as fundamental to the Internet as Google search, it's worth examining just where that system seems to be failing.
***
The fundamental grift underlying the deployment of this new technology -- it hoovered up much of the internet for free before anybody knew what was happening -- is what really needs to be understood.
Here’s Hamilton Nolan, from a much longer essay entitled Selling Your House for Firewood:
The conceit that any new technology renders all preexisting laws and regulations inapplicable is a profitable one. Hundreds of billions of dollars can be made with the “ask forgiveness, not permission” philosophy. That is what OpenAI is doing now.
“Copyright laws? Well you see, the people who wrote those laws never really said they applied to AI models that did not exist when they were writing the laws, so we figured, hey, it should be fine to just use the whole internet to train our models. Right? Did we make a gaffe? Well, geez, dang, let’s sit down and work out a nice little payment to you to make up for it.”
The amounts of money that media companies are getting in these deals sound nice up front, but they are peanuts for OpenAI, which is probably worth more than $100 billion already. Assuming they are not stupid enough to just think that this is lucky free money, the media companies themselves are already making the calculation that OpenAI is so big and established that fighting its fundamental business model is hopeless.
The companies involved with AI are undergoing a boom in the stock market, and Nvidia is the most favored of those leading the charge. It announced a 10-for-1 stock split after its shares breached the thousand-dollar mark, along with data center revenue surging from $7.2 billion to $26 billion year over year.
I’m sensing a whiff of too good to be true here, but maybe my neighbor’s got some exotic bud to smoke.
There are underlying issues with generative Artificial Intelligence beyond its lure as a profit center. Its never-ending reliance on massive amounts of electricity gives the dirty energy industry a future the planet doesn’t need.
Conversations around AI inevitably wander into speculation about sentience, creating an entity capable of harm in a quest for self-preservation. This line of thought has its origins in science fiction, a literary category that’s foundational to the tech-bro perspective. And you have to wonder if some executives in computing (see: Silicon Valley Trump donors) see a dystopian future as a desirable consequence.
Right now the AI boogeyman is deepfakes, artificially produced images and/or sounds that manipulate people’s choices. The first thing in this arena that comes to mind is political ads, in which candidates are portrayed doing horrible (or miraculous) things.
I could totally imagine Trump on a hill in the Bronx tossing out baguettes and cans of tuna to feed the hungry masses, or Joe Biden supposedly being indicted for child trafficking, with some sheriff standing before the assembled media describing the bloodletting discovered in the investigation.
Rest assured, the New York Times says AI poses no threat to the integrity of the 2024 elections.
“This is the dog that didn’t bark,” said Dmitri Mehlhorn, a political adviser to one of the Democratic Party’s most generous donors, Reid Hoffman. “We haven’t found a cool thing that uses generative A.I. to invest in to actually win elections this year.”
Mr. Hoffman is hardly an A.I. skeptic. He was previously on the board of OpenAI, and recently sat for an “interview” with an A.I. version of himself. For now, though, the only political applications of the technology that merit Mr. Hoffman’s money and attention are what Mr. Mehlhorn called “unsexy productivity tools.”
Eric Wilson, a Republican digital strategist who runs an investment fund for campaign technology, agreed. “A.I. is changing the way campaigns are run but in the most boring and mundane ways you can imagine,” he said.
While legislators are trying to define the parameters for artificial intelligence, the White House is urging industry to prevent porn makers from replacing the faces of performers with those of non-consenting adults and children.
Meanwhile, our security establishment has wholeheartedly embraced the use of AI to digest large amounts of open source data and spot trends or predict illegality.
Via the Associated Press:
Thousands of analysts across the 18 U.S. intelligence agencies now use a CIA-developed gen AI called Osiris. It runs on unclassified and publicly or commercially available data — what’s known as open-source. It writes annotated summaries and its chatbot function lets analysts go deeper with queries.
Nobody knows for sure what’s happening in the world of data gathered by classified means, but you can rest assured AI has become a big player in the field of predictive analysis. There have been news reports claiming Israeli forces are using an AI system for targeted assassinations in Gaza.
There’s no doubt that generative AI (the process of analysis) is a fundamental step in humankind’s development. It’s certainly not going away -- it’s a cumulative technology that has evolved over three decades. Nor will it be stopped by human idiots like the Senator from Louisiana.
The choices we have to make in the near future are the same ones we’ll have to make at the ballot box. Will the benefits accrue primarily to the wealthy, or will AI be a force for lifting up society in general? Of course, those choices won’t be listed on your ballot, and there are certainly many shades of gray to be pondered; you’ll be placing your faith in leaders and the direction they’ll take us.
***
Friday’s Other Stories of Note
***
Red Lobster was killed by private equity, not Endless Shrimp by Cory Doctorow at Pluralistic.net
Golden Gate bought Red Lobster in the midst of these fish wars, promising to right its ship. As Goldstein points out, that's the same promise they made when they bought Payless shoes, just before they destroyed the company and flogged it off to Alden Capital, the hedge fund that bought and destroyed dozens of America's most beloved newspapers:
Under Golden Gate's management, Red Lobster saw its staffing levels slashed, so diners endured longer wait times to be seated and served. Then, in 2020, they sold the company to Thai Union, the company's largest supplier (a transaction Goldstein likens to a Walmart buyout of Procter and Gamble).
Thai Union continued to bleed Red Lobster, imposing more cuts and loading it up with more debts financed by yet another private equity giant, Fortress Investment Group. That brings us to today, with Thai Union having moved a gigantic amount of its own product through a failing, debt-loaded subsidiary, even as it lobbies for deregulation of American fisheries, which would let it and its lobbying partners drain American waters of the last of its depleted fish stocks.
***
County sheriffs wield lethal power, face little accountability: "A failure of democracy" via CBS News
County sheriff's officers are three times more lethal than city police, a CBS News investigation has found.
More people were killed by U.S. law enforcement in 2023 than any other year in the past decade, outpacing population growth eightfold. But despite a focus on urban areas, fatal police violence is increasingly happening in small town America at the hands of sheriffs, the top law enforcement officials in counties nationwide.
The revelation is part of the findings of a yearlong reporting effort that documented chronic misconduct in sheriff's offices and oversight failures that can enable abuses to go unchecked. The consequences can be fatal. But the majority of those cases go unreported, in violation of state and federal laws, making patterns of abuse harder to detect and stop.
CBS News gathered and analyzed federal law enforcement data that showed while more people died overall in encounters with city police, deaths in encounters with county sheriffs occurred at a significantly higher rate. For every 100,000 people arrested, more than 27 people died in the custody of sheriffs, while that number was fewer than 10 for police officers in 2022, the most recent year of available data.
***
America’s Priest-Kings by Nina Burleigh at American Freak Show
The real emergency in our legal system is the militant Catholic takeover of the federal court system, led by Leonard Leo, K Street’s Knight of Malta. Leo has been beavering away on this project for decades but is now emboldened to new heights or depths of influence-buying and judge-seating by an outrageous $1.6 billion dark money bequest I wrote about for The New Republic last year. Millions from that bottomless pile have been funneled into the Project 2025 plan to use a second Trump term to institute a full-scale assault on American civil rights, including privacy, and on democracy in general.
For years, Christian nationalists have been caricatured as Protestant hayseeds from below the Mason-Dixon line who move in electoral blocs via their megachurches and blow their Saturday paychecks frolicking in Biblical theme parks where dinosaurs co-existed with humans.
But the intellectual underpinning to the current enterprise, the one that hierophant Alito is working at, has little to do with Billy Graham or Jerry Falwell. It’s Catholic integralism, a radically regressive, anti-science, pro-monarchist concept of re-ordering society, advocated by serious people like Harvard’s Adrian Vermeule. They write about America as the new Rome, in need of a similarly Papal, patriarchal spiritual leadership. As ancient Rome fell, the Catholic church stepped in to run the state, and for centuries, Popes chose kings across Europe where the Church was the higher authority.