Google AI Falsely Says YouTuber Visited Israel, Forcing Him to Deal With Backlash
Science and music YouTuber Benn Jordan had a rough few days earlier this week after Google’s AI Summary falsely said he recently visited Israel and caused people to believe he supported the country during its war on Gaza. Jordan does not support Israel and has previously donated to Palestinian charities.
Jordan told 404 Media that when people type his name into Google it’s often followed by “eyebrows” or “wife.” That changed when popular political Twitch streamer Hasan Piker decided to react to his video about Flock, an AI-powered camera company that 404 Media has covered extensively. Jordan’s videos have appeared on Piker’s stream before, so he knew he was in for a bit of a ride. “Anytime that he has reacted to my content I’m always like ‘Oh no, I’m going to get eviscerated in front of millions of people for being a libertarian without being able to explain my views,’” he said.
This time it was a little different, however. “I looked at it and in the middle of it, his chat was kind of going crazy, saying that I support Israel’s genocidal behavior,” Jordan said. “And then I started getting a bunch of messages from people asking me why I don’t make myself clear about Israel, or why I support Israel and I’ve donated plenty of money to the Palestinian Children’s Relief Fund. I’ve been pretty vocal in the past about not supporting Israel and supporting a free Palestinian state.”
Then someone sent him a screenshot of the AI generated summary of a Google search result that explained the deluge of messages. If you typed “Benn Jordan Israel” into Google and looked only at its AI summary, this is what it told you:
“Electronic musician and science YouTuber Benn Jordan has recently become involved in the Israeli-Palestinian conflict, leading to significant controversy and discussion online. He has shared his experiences from a trip to Israel, during which he interviewed people from kibbutzim near the Gaza border,” the AI summary said, according to a screenshot Jordan shared on Bluesky. “On August 18, 2025, Benn Jordan uploaded a YouTube video titled I Was Wrong About Israel: What I Learned On the Ground, which detailed his recent trip to Israel.”
Jordan had never been to Israel and he doesn’t make content about war. His videos live at the intersection of science and sound, and he went viral earlier this year when he converted a PNG sketch into an audio waveform and taught the song to a young starling, effectively saving a digital image in the memory of a bird. He’s also covered the death of Spotify, crumbling American capitalism, and the unique dangers AI poses to musicians.
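The article doesn’t describe how Jordan’s image-to-audio conversion actually works, but the general spectrogram-encoding trick it gestures at is straightforward: treat each column of a grayscale image as a slice of time and each row as a sine frequency, so that the resulting audio’s spectrogram redraws the picture. Below is a minimal, hypothetical Python sketch of that idea; the filenames, frequency band, and timing are placeholder assumptions, not Jordan’s actual pipeline.

```python
# Minimal sketch of the "image in a spectrogram" trick (hypothetical; not
# Jordan's actual pipeline). Each image column becomes a short time slice,
# each image row a sine frequency, and pixel brightness sets amplitude.
import numpy as np
from PIL import Image
from scipy.io import wavfile

SAMPLE_RATE = 44100
SLICE_SECONDS = 0.05          # audio duration per image column (placeholder)
F_MIN, F_MAX = 500.0, 8000.0  # frequency band the image is drawn into (placeholder)

# Load the sketch as grayscale, brightness in [0, 1]; "sketch.png" is a placeholder.
img = np.asarray(Image.open("sketch.png").convert("L"), dtype=np.float64) / 255.0
img = np.flipud(img)          # bottom of the image should map to the lowest frequency

n_rows, n_cols = img.shape
freqs = np.linspace(F_MIN, F_MAX, n_rows)                # one sine per image row
slice_t = np.arange(int(SAMPLE_RATE * SLICE_SECONDS)) / SAMPLE_RATE

audio = []
for col in range(n_cols):
    amps = img[:, col]                                   # brightness -> sine amplitude
    # Sum the weighted sines for this time slice. Phase restarts each slice,
    # which adds faint clicks; fine for a rough sketch.
    tone = (amps[:, None] * np.sin(2 * np.pi * freqs[:, None] * slice_t)).sum(axis=0)
    audio.append(tone)

audio = np.concatenate(audio)
audio /= np.max(np.abs(audio)) + 1e-9                    # normalize to [-1, 1]
wavfile.write("sketch.wav", SAMPLE_RATE, (audio * 32767).astype(np.int16))
```

Viewing that audio in any spectrogram tool redraws an approximation of the original sketch, and in Jordan’s video the starling then learns and repeats the sound, so re-recording the bird and viewing the recording as a spectrogram roughly recovers the drawing — the sense in which an image was “saved” in the bird’s memory.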
It seemed that Google’s AI had confused Jordan with the YouTuber Ryan McBeth, a guy who does make videos about war. McBeth is a chain-smoking NEWSMAX commentator who has a video titled “I Was Wrong About Israel: What I Learned on the Ground,” the exact same title Google thought Jordan was responsible for.
It’s a weird mistake for AI to make, but AI makes a lot of mistakes. AI-generated songs are worse than real ones, and AI-generated search results funnel traffic away from the sites Google gets its information from while often getting that information wrong. Jordan’s experience is just one small sample of what happens when people take AI at face value without doing five minutes of extra research.
When Jordan learned he was being misrepresented by the AI summary, he started sharing the story on Bluesky and Threads. He told 404 Media that the AI summary updated itself about 24 hours later. “Eventually the AI picked up me posting about it and then said that there was a rumor about me, a false rumor, spread about me going to Israel. And then I was just kind of ripping the hair out of my head. I was like, ‘you don’t even know that you created the rumor!’”
He told 404 Media that he thought it might be possible that Google’s AI had defamed him and he reached out to lawyers for an opinion, not as a prelude to a lawsuit but more out of curiosity. One told him he may have a case. “I’m going to Yellowstone next week for 10 days. I’m going to be completely off the grid,” Jordan said. “Had this happened, and had this continued to spread around and become a giant controversy, I would probably lose YouTube subscribers, I would lose Patreon members.”
Jordan has covered AI in the past and said he wasn’t shocked by the system breaking down. “Everybody’s rushing LLMs to be part of our daily lives [...] But the actual LLM itself is not good. It’s just not what they claim it is. It may never be what they claim it is due to the limitations of how LLMs work and AI works, and despite the promises that are made. It’s just a really bad algorithm for gaining any sort of useful information that you can trust and it’s prioritizing that above journalists to keep the money.”
In the aftermath of the whole thing, Jordan clarified his position on the Israel-Palestine conflict. In a thread on Bluesky he said he does believe Israel is committing a genocide in Gaza and explained why. “Hopefully, somebody sees that before they waste their time to message me to lecture me about genocide,” he said. “Although, now I’m being lectured about genocide from the other side. Now I have skin in it. Now I’m dealing with messages from people defending Israel, telling me that I’m antisemitic.”
This isn’t the first time Google’s AI summary has screwed up the basic facts about someone with a public profile. In July, humorist Dave Barry discovered that Google’s AI summary thought he had died last year after a battle with cancer. Barry is very much alive and detailed his fight to correct the record of his demise in his newsletter. As in Jordan’s case, Google’s AI Overview eventually shifted. Unlike in Jordan’s case, it only changed after Barry fought with Google’s various automated complaint systems.
When an AI makes mistakes like this we tend to call it a hallucination. Jordan used the word when he posted the updated summary of his life. “I’ve thought about it the last few days, and that’s giving it so much credit, that it could hallucinate something,” Jordan said. “Generally, it’s not great at scraping data and retrieving it in a way that’s reputable.”
“The vast majority of AI Overviews are factual and we’ve continued to make improvements to both the helpfulness and quality of responses,” a Google spokesperson told 404 Media. “When issues arise—like if our features misinterpret web content or miss some context—we use those examples to improve our systems, and we take action as appropriate under our policies.”
Update: This story has been updated with a statement from Google.
Did... did a guy just save a picture of a bird to a bird’s brain?
YouTube acoustic explorer Benn Jordan appears to have gotten a starling — a bird arguably better at mimicry than a parrot — to do that! He turns a drawing into sound, the bird repeats the sound, and a similar drawing shows up on the computer. — Sean Hollister (The Verge)
Google’s AI Is Destroying Search, the Internet, and Your Brain
Yesterday the Pew Research Center released a report based on the internet browsing activity of 900 U.S. adults which found that Google users who encounter an AI summary are less likely to click on links to other websites than users who don’t. To be precise, only 1 percent of users who encountered an AI summary clicked the link to the page Google is summarizing.

Essentially, the data shows that Google’s AI Overview feature, which was introduced in 2023 and is replacing the “10 blue links” format that turned Google into the internet’s de facto traffic controller, will end the flow of all that traffic almost completely and destroy the business of countless blogs and news sites in the process. Instead, Google will feed people into a faulty AI-powered alternative that is prone to errors it presents with so much confidence, we won’t even be able to tell that they are errors.
Here’s what this looks like from the perspective of someone who makes a living finding, producing, and publishing what I hope is valuable information on the internet. On Monday I published a story about Spotify publishing AI-generated songs from dead artists without permission. I spent most of my day verifying that this was happening, finding examples, contacting Spotify and other companies responsible, and talking to the owner of a record label who was impacted by this. After the story was published, Spotify removed all the tracks I flagged and removed the user who was behind this malicious activity, which resulted in many more offending, AI-generated tracks falsely attributed to human artists being removed from Spotify and other streaming services.
Many thousands of people think this information is interesting or useful, so they read the story, and then we hopefully convert their attention to money via ads, but primarily by convincing them to pay for a subscription. Cynically aiming only to get as much traffic as we can isn’t a viable business strategy because it compromises the very credibility and trustworthiness that we think convinces people to pay for a subscription, but what traffic we do get is valuable because every person who comes to our website gives us the opportunity to make our case.
The Spotify story got decent traffic by our standards, and the number one traffic source for it so far has been Google, followed by Reddit, “direct” traffic (meaning people who come directly to our site), and Bluesky. It’s great that Google sent us a bunch of traffic for that, but we also know that it should have sent us a lot more, and that it did a disservice to its own users by not doing that.
We know it should have sent us more traffic because when I search for “AI music spotify” on Google, the first thing I see is a Google Snippet summarizing my article. But that summary isn’t from, nor does it link to, 404 Media; it’s a summary of and a link to a blog on a website called dig.watch that reads like it was generated by ChatGPT. The blog doesn’t have a byline and reads like the endless stream of AI-generated summaries we saw when we created a fully automated AI site that aggregated 404 Media’s articles. Dig.watch itself links to another music blog, MusicTech, which is an aggregation of my story that links to it in the lede.
When I use Google’s “AI mode,” Google provides a bullet-pointed summary of my story, but instead of linking to it, it links to three other sites that aggregated it: TechRadar, Mixmag, and RouteNote.
Gaming search engine optimization in order to come up as the first result on Google regardless of merit has been a problem for as long as Google has been around. As the Pew research makes clear, AI Overview just ensures people will never click the link where the information they are looking for originates.
We reserve the right to whine about Google rewarding aggregation of our stories instead of sending the traffic to us, but the problem here is not what is happening to 404 Media, which we’ve built with the explicit goal of not living or dying by the whims of any internet platform we can’t control. The problem is that this is happening to every website on the internet, and if the people who actually produce the information that people are looking for are not getting traffic they will no longer be able to produce that information.
This ongoing “traffic apocalypse” has been the subject of many articles and opinion pieces saying that SEO strategies are dead because AI will take the ad dollar scraps media companies were fighting over. Tragically, what Google is doing to search is not only going to kill big media companies, but tons of small businesses as well.
Luckily for Google and the untold number of people who are being fed Snippets and AI summaries of our Spotify story, so far that information is at least correct. That is not guaranteed to be the case with other AI summaries. We love to mention that Google’s AI summaries told its users to eat glue whenever this subject comes up because it’s hilarious and perfectly encapsulates the problem, but it’s also an important example because it reveals an inherently faulty technology. More recently, AI Overview insisted that Dave Barry, a journalist who is very much alive, was dead.
The glue situation went viral and was embarrassing for Google, but the company still dominates search, and it’s very hard for people to meaningfully resist its dominance given our limited attention spans and the fact that it is the default search option in most cases. AI Overviews are still a problem, but it’s impossible to keep this story in the news forever. Eventually Google shoves it down users’ throats and there’s not much they can do about it.
Google AI summaries told users to eat glue because the AI was pulling from a Reddit post in which one user jokingly told another to put glue on their pizza so the cheese doesn’t slide off. Google’s AI didn’t understand the context and served that answer up deadpan. This mechanism doesn’t only produce similar errors; it also appears vulnerable to deliberate abuse.
In May, an artist named Eduardo Valdés-Hevia reached out to me when he discovered he had accidentally fooled Google’s AI Overview into presenting a fictional theory he wrote for a creative project as if it were real.
“I work mostly in horror, and my art often plays around with unreality and uses scientific and medical terms I make up to heighten the realism along with the photoshopped images,” Valdés-Hevia told me. “Which makes a lot of people briefly think what I talk about might be real, and will lead some of them to google my made-up terms to make sure.”
In early May, Valdés-Hevia posted a creepy image and short blurb about “The fringe Parasitic Encephalization Theory,” which “claims our nervous system is a parasite that took over the body of the earliest vertebrate ancestor. It captures 20% of the body's resources, while staying separate from the blood and being considered unique by the immune system.”
Someone who saw Valdés-Hevia’s post Googled “Parasitic Encephalization” and showed him that AI Overview presented it as if it were real.
Valdés-Hevia then decided to check whether he could get Google’s AI Overview to similarly present other made-up concepts as if they were real, and found that it was easy and fast. For example, Valdés-Hevia said it took only two hours of him and members of his Discord posting about “AI Engorgement,” a fake “phenomenon where an AI model absorbs too much misinformation in its training data,” for Google’s AI Overview to start presenting it uncritically. It still does so at the time of writing, months later.
Other recent examples Valdés-Hevia flagged to me, like the fictional “Seraphim Shark,” were at first presented as real by AI Overview, but the summaries have since been updated to say they are “likely” fictional. In some cases, Valdés-Hevia even managed to get AI Overview to conflate a real condition—Dracunculiasis, or guinea worm disease—with a fictional condition he invented, Dracunculus graviditatis, “a specialized parasite of the uterus.”
Valdés-Hevia told me he wanted to “test out the limits and how exploitable Google search has become. It's also a natural extension of the message of my art, which is made to convince people briefly that my unreality is real as a vehicle for horror. Except in this case, I was trying to intentionally ‘trick’ the machine. And I thought it would be much, much harder than just some scattered social media posts and a couple hours.”

“Let's say an antivaxx group organizes to spread some disinformation,” he said. “They just need to create a new term (let's say a disease name caused by vaccines) that doesn't have many hits on Google, coordinate to post about it in a few different places using scientific terms to make it feel real, and within a few hours, they could have Google itself laundering this misinformation into a ‘credible’ statement through their AI overview. Then, a good percentage of people looking for the term would come out thinking this is credible information. What you have is, in essence, a very grassroots and cheap approach to launder misinformation to the public.”
I wish I could say this is not a sustainable model for the internet, but honestly there’s no indication in Pew’s research that people understand how faulty the technology that powers Google’s AI Overview is, or how it is quietly devastating the entire human online information economy that they want and need, even if they don’t realize it.
The optimistic take is that Google Search, which has been the undisputed king of search for more than two decades, is now extremely vulnerable to disruption, as people in the tech world love to say. Predictably, most of that competition is now coming from other AI companies that think they can build better products than AI Overview and become the new, default, AI-powered search engine for the AI age. Alternatively, as people get tired of being fed AI-powered trash, perhaps there is room for a human-centered and human-powered search alternative: products that let people filter out AI results or that don’t have an ads-based business model.
But it is also entirely possible, and maybe predictable, that we’ll continue to knowingly march towards an internet where drawing the line between what is and isn’t real is not profitable “at scale” and therefore not a consideration for most internet companies and users. Which doesn’t mean it’s inconsequential. It is very, very consequential, and we are already knee-deep in those consequences.
Friendship Ended With GOOGLE Now KAGI Is My Best Friend
I replaced Google with Kagi, a search engine that has no ads and costs $10 per month. — Jason Koebler (404 Media)