Heavy Gemini user here, another observation: Gemini cites lots of AI-generated videos as its primary sources, which creates a closed loop and has the potential to debase shared reality.
A few days ago, I asked it some questions on Russia's industrial base and military hardware manufacturing capability, and it wrote a very convincing response, except the video embedded at the end of the response was an AI-generated one. It might have contained actual facts, but overall, my trust in Gemini's response to my query went DOWN after I noticed the AI-generated video attached as the source.
Countering debasement of shared reality and NOT using AI generated videos as sources should be a HUGE priority for Google.
YouTube channels with AI-generated videos have exploded in sheer quantity, and I think the majority of new channels and videos uploaded to YouTube might actually be AI: "dead internet theory" and all that.
All of that and you're still a heavy user? Why would google change how Gemini works if you keep using it despite those issues?
Just wait until you get a group of nerds talking about keyboards - suddenly it'll sound like there is no such thing as a keyboard worth buying either.
I think the main problems for Google (and others) from this type of issue will be "down the road" problems, not a large and immediately apparent change in user behavior at the onset.
> Gemini cites lots of "AI generated" videos as its primary source
Almost every time for me... an AI-generated video, with an AI voiceover and AI-generated images, always with < 300 views
Try Kagi’s Research agent if you get a chance. It seems to have been given the instruction to tunnel through to primary sources, something you can see it do on reasoning iterations, often in ways that force a modification of its working hypothesis.
>Countering debasement of shared reality and NOT using AI generated videos as sources should be a HUGE priority for Google.
This itself seems pretty damning of these AI systems from a narrative point of view, if we take it at face value.
You can't trust AI to generate things that are sufficiently grounded in facts that you can't even use it as a reference point. Why should end users believe the narrative that these things are as capable as they're being told they are, by extension?
Ouroboros - the mythical snake that eats its own tail (and ingests its own excrement)
Google is in a much better spot than others to filter out AI-generated content.
It's not like ChatGPT isn't going to cite AI videos/articles either.
I have permanent prompts in Gemini settings to tell it to never include videos in its answers. Never ever for any reason. Yet of course it always does. Even if I trusted any of the video authors or material - and I don't know them so how can I trust them? - I still don't watch a video that could be text I could read in one-tenth of the time. Text is superior to video 99% of the time in my experience.
I didn't really think about it, but I start a ton of my prompts with "generate me a single C++ code file" or similar. There are always 2-3 paragraphs of prose in the response. Why is it consuming output tokens on generating prose? I just wanted the code.
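One workaround, if you're hitting the API rather than the web UI, is to push the "code only" request into a standing system instruction instead of repeating it per prompt. A minimal sketch, assuming the google-generativeai Python SDK (the model name, API key, and prompt text here are just placeholders), and with no guarantee the model honors it, as the comment above about video links shows:

    import google.generativeai as genai

    genai.configure(api_key="YOUR_API_KEY")  # placeholder key

    # A standing system instruction so every turn asks for bare code,
    # rather than prepending "no prose, just code" to each prompt.
    model = genai.GenerativeModel(
        model_name="gemini-1.5-pro",  # placeholder; use whichever tier you have
        system_instruction=(
            "Reply with a single self-contained code file and nothing else: "
            "no prose, no explanations, no links, no videos."
        ),
    )

    response = model.generate_content(
        "Generate a single C++ code file that reads CSV from stdin "
        "and prints the row count."
    )
    print(response.text)  # ideally just the code, if the instruction sticks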
That's interesting ... why would you want to wall off and ignore what is undoubtedly one of the largest repositories of knowledge (and trivia and ignorance, but also knowledge) ever assembled? The idea that a person can read and understand an article faster than they can watch a video with the same level of comprehension does not, to me, seem obviously true. If it were true there would be no role for things like university lecturers. Everyone would just read the text.
YouTube has almost no original knowledge.
Most of the "educational" and documentation style content there is usually "just" gathered together from other sources, occasionally with links back to the original sources in the descriptions.
I'm not trying to be dismissive of the platform, it's just inherently catered towards summarizing results for entertainment, not for clarity or correctness.
YouTube has a lot of junk, but there are also a lot of useful videos that demonstrate practical skills, the experience of using certain products, or recordings of natural environments. These are original in the sense that before YouTube you could not find equivalent content anywhere, except by personally knowing people who could show you such things, and the chances of finding such a person near you were very small. Through YouTube you can find someone who happens to live on the opposite side of the world and who can share the experience you are interested in.
I've noticed that the YouTubers I enjoy the most are the ones that are good presenters, good editors, and have a traditional text blog as well.
You are being dismissive, though. There is no "original knowledge" anywhere. If the videos are the best presentation of the information, best suited to convey the topic to the audience, then that is valuable. Humans learn better from visual information conveyed at the same time as spoken language, because that exploits multiple independent brain functions at the same time. Reading does not have this property. Particularly for novices to a topic, videos can more easily convey the mental framework necessary for deeper understanding than text can. Experts will prefer the text, but they are rarer.
> If the videos are the best presentation of the information, best suited to convey the topic to the audience, then that is valuable
Still doesn’t make them a primary source. A good research agent should be able to jump off the video to a good source.
I think you've never read real investigative journalism before
We live in an era where people lack the ability to read and digest written content and rely on someone speaking to them about it instead.
It's a step beyond that. Where people who only consume the easily digestible content don't believe there is a source to any of it
I read at a speed that YouTube would consider about 2x-4x, and I can text-search or even just skim articles faster still if I just want a pre-check on whether something is likely to be good.
Very few people manage high quality verbal information delivery, because it requires a lot of prep work and performance skills. Many of my university lectures were worse than simply reading the notes.
Furthermore, video is persuasive through the power of the voice. This is not good if you're trying to check it for accuracy.
There are obviously many things that are better shown than told, e.g. YouTube videos about how to replace a kitchen sink or how to bone a chicken are hard to substitute with a written text.
Despite this, there also exist a huge number of YouTube videos that waste far more of your time than, e.g., an HTML web page would, without adding anything useful.
As someone who used to do instructional writing, I'm not sure that's true for those specific examples, but I acknowledge that making a video is vastly cheaper and easier than producing good diagrams, illustrations, or photography with clear steps to follow.
Or to put it another way: if you were building a Lego set, would you rather follow the instruction booklet, or follow along with a video? I fully acknowledge video is better for some things (try explaining weightlifting in text, for example; it's not easy), but a lot of YouTube is covering gaps in documentation we used to have in abundance.
Sounds very misleading. Web pages come from many sources, but most video is hosted on YouTube. Those YouTube videos may still be from the Mayo Clinic. It's like saying most medical information comes from Apache, Nginx, or IIS.
> Google’s search feature AI Overviews cites YouTube more than any medical website when answering queries about health conditions
It matters in the context of health related queries.
> Researchers at SE Ranking, a search engine optimisation platform, found YouTube made up 4.43% of all AI Overview citations. No hospital network, government health portal, medical association or academic institution came close to that number, they said.
> “This matters because YouTube is not a medical publisher,” the researchers wrote. “It is a general-purpose video platform. Anyone can upload content there (eg board-certified physicians, hospital channels, but also wellness influencers, life coaches, and creators with no medical training at all).”
To the Guardian's credit, at the bottom they explicitly quoted the researchers walking back their own claims.
> However, the researchers cautioned that these videos represented fewer than 1% of all the YouTube links cited by AI Overviews on health.
> “Most of them (24 out of 25) come from medical-related channels like hospitals, clinics and health organisations,” the researchers wrote. “On top of that, 21 of the 25 videos clearly note that the content was created by a licensed or trusted source.
> “So at first glance it looks pretty reassuring. But it’s important to remember that these 25 videos are just a tiny slice (less than 1% of all YouTube links AI Overviews actually cite). With the rest of the videos, the situation could be very different.”
Might be but aren't. They're inevitably someone I've never heard of from no recognizable organization. If they have credentials, they are invisible to me.
I've also noticed lately that it parrots a lot of content straight from Reddit; usually the answer it gives sits directly above a Reddit link leading to the same discussion.
If you click through to the study that the Guardian based this article on [1], it looks like it was done by an SEO firm, by a Content Marketing Manager. Kind of ironic, given that it's about the quality of cited sources.
[1] https://seranking.com/blog/health-ai-overviews-youtube-vs-me...
Further context: https://health.youtube/ and https://support.google.com/youtube/answer/12796915?hl=en and https://www.theverge.com/2022/10/27/23426353/youtube-doctors... (2022)
> High-quality health information
Oh, you mean like removing scores of covid videos from real doctors and scientists which were deemed to be misinformation
I'm glad that we've decided Youtube is the oracle for everything
> Oh, you mean like removing scores of covid videos from real doctors and scientists which were deemed to be misinformation
The credentials don't matter, the actual content does. And if it's misinformation, then yes, you can be a quadruple doctor, it's still misinformation.
In France, there was a real doctor, an epidemiologist, who became famous because he was pushing a cure for Covid. He ran some underground, barely legal medical trials on his own, proclaimed victory, and declared that "the big bad government doesn't want you to know!". Well, the actual proper study finished, found there was basically no difference, and his treatment wasn't adopted. He didn't get fully deplatformed, but he was definitely marginalised and fell into the "disinformation" category. Nonetheless, he continued spouting his version that had been proven wrong. And years later, he's still wrong.
Fun fact about him: he's in the top 10 of scientists with the most retracted papers, for inaccuracies.
How is any non-expert supposed to judge the content without some kind of guide like say credentials? Credentials do matter when the author is unknown.
A good first step would be to distrust each and every individual. This excludes every blog, every non-peer-reviewed paper, every self-published book, pretty much every YouTube channel and so on. This isn't to say you can't find a nugget of truth somewhere in there, but you shouldn't trust yourself to be able to differentiate between that nugget of truth and everything surrounding it.
Even the most well-intentioned and best-credentialed individuals have blind spots that only a different pair of eyes can catch through rigorous editing. Rigorous editing only happens in serious organizations, so a good first step would be to ignore every publication that doesn't at the very least have an easy-to-find impressum with a publicly listed editor-in-chief.
The next step would be to never blame the people listed as writers, but their editors. For example, if a shitty article makes its way into a Nature journal, it's the editor who is responsible for letting it through. A good editorial team is what builds up the reputation of a publication; the people below them (who do most of the work) are largely irrelevant.
To go back to this example, you should ignore this guy's shitty study until it's published by a professional journal. Even if it got published in a serious journal, that wouldn't guarantee it's The Truth, only that it has passed some level of scrutiny it wouldn't have otherwise.
As with, for example, website uptime: no editorial team can claim that 100% of the work that passed through their hands is The Truth, so you then need to look at how transparently they deal with mistakes (i.e. retractions), and so on.
Separating credentialed but bad faith covid grift from evolving legitimate medical advice based on the best information available at the time did not require anything but common sense and freedom from control by demagoguery.
And when I'm nice and relaxed, my common sense is fully operational. I'm pretty good at researching medical topics that do not affect me! However, as soon as it's both relevant to me, and urgent, I become extremely incapable of distinguishing truthful information from blatant malpractice. At this point, I default to extreme scepticism, and generally do nothing about the urgent medical problem.
I'm talking about people like this https://rumble.com/vt62y6-covid-19-a-second-opinion.html who at one point or another have all been censored
You mean people like this - The COVID vaccine “has been proven to have negative efficacy.”
https://www.politifact.com/factchecks/2023/jun/07/ron-johnso...
This is called disinformation that will get you killed, so yeah, probably not good to have on youtube.
> After saying he was attacked for claiming that natural immunity from infection would be "stronger" than the vaccine, Johnson threw in a new argument. The vaccine "has been proven to have negative efficacy," he said.
Unfortunately it's not disinformation; it's going to be a while before people discover how many things they were lied to about.
https://www.wpr.org/health/health-experts-officials-slam-ron...
Extraordinary claims require extraordinary evidence instead of just posting bs on rumble.
Which sources of medical information are authoritative is debatable in general. Chatting with the initial results to ask for a breakdown of sources, with classified recommendations, is a logical second step for context.
It's tough convincing people that Google AI overviews are often very wrong. People think that if it's displayed so prominently on Google, it must be factually accurate, right?
"AI responses may include mistakes. Learn more"
It's not just mistakes; half the time it's completely wrong, total bullshit information. Even compared with other AIs: if you put the same question into GPT 5.2 or Gemini, you get much more accurate answers.
It absolutely baffles me that they didn't do more work or testing on this. Search is literally what they're known for. The fact that it's trash is an unbelievably damning indictment of what they are.
Testing what, every possible combination of words? Did they test their search results this way before AI?
That's because decent (but still flawed) GenAI is expensive. The AI Overview model is even cheaper than the AI Mode model, which is cheaper than the free Gemini model, which is cheaper than the Gemini Thinking model, which is cheaper than the Gemini Pro model, which is still very misleading when working on human-language source content. (It's much better at math and code.)
Google AI cannot be trusted for medical advice. It has killed before and it will kill again.
What's surprising is how poor Google Search's access to YouTube transcripts is. Like, I'll Google search for statements that I know I heard on YouTube, but they just don't appear as results even though the video has automated transcription on it.
I'd assumed they simply didn't feed the transcripts properly into Google Search... but they did for Gemini? Maybe the Search transcripts are just heavily downranked or something.
Basic problem with Google's AI is that it never says "you can't" or "I don't know". So many times it comes up with plausible-sounding incorrect BS to "how to" questions. E.g., "in a facebook group how do you whitelist posts from certain users?" The answer is "you can't", but AI won't tell you.
Don't all real/respectable medical websites basically just say "Go talk to a real doctor, dummy."?
...and then there's WebMD, "oh you've had a cough since yesterday? It's probably terminal lung cancer."
WebMD is a real doctor, I guess. It's got an MD right in the name!
Google AI overviews are often bad, yes, but why is YouTube as a source necessarily a bad thing? Are these researchers doctors? A close relative is a practicing surgeon and a professor in his field. He watches YouTube videos of surgeries practically every day. Doctors in every field understand well that YT is a great way to share their work and discuss it with others.
Before we get too worked up about the results, just look at the source. It's a SERP ranking aggregator (not linking to them to give them free marketing) that's analyzing only the domains, not the credibility of the content itself.
This report is a nothingburger.
> A close relative is a practicing surgeon and a professor in his field. He watches youtube videos of surgeries practically every day.
A professor in the field can probably go "ok, this video is bullshit" a couple of minutes in if it's wrong. They can identify a bad surgeon, a dangerous technique, or an edge case that may not be covered.
You and I cannot. Basically, the same problem the general public has with phishing, but even more devastating potential consequences.
The same can be said for the average "medical sites" that a Google search gives you anyway.
It's a lot easier for me to assess the Mayo Clinic's website being legitimate than an individual YouTuber's channel.
I don't think anyone is talking about "medical sites" but rather medical sites. Indeed "medical sites" are no better than unvetted youtube videos created by "experts".
That said, if (hypothetically) Gemini were citing only videos posted by professional physicians, or perhaps videos uploaded to the channel of a medical school, that would be fine. The present situation is similar to an LLM generating lots of citations to viXra.
Your comment doesn't address my point. The same criticism applies to any medium.
The point is you can't say "an expert finds x useful in their field y" and expect it to always mean "any random idiot will find x useful in field y".
It's crazy to me that somewhere along the way we lost physical media as a reference point. Journals and YouTube can be good sources of information, but unless it is heavily confined to high-quality sources, current AI is not able to judge citation quality well enough to come up with good recommendations. The synthesis of real-world medical experience is often collated in medical textbooks, and yet AI doesn't cite them nearly as much as it should.
The vast majority of journal articles are not available freely to the public. A second problem is that the business of scientific journals has destroyed itself by massive proliferation of lower quality journals with misleading names, slapdash peer review, and the crisis of quiet retractions.
There are actually a lot of freely available medical articles on PubMed. Agree about the proliferation of lower quality journals and articles necessitating manual restrictions on citations.
Related:
Google AI Overviews put people at risk of harm with misleading health advice
https://news.ycombinator.com/item?id=46471527
It’s slop all the way down. Garbage In Garbage Out.
Ohhh, I would make one wild guess: in the upcoming LLM world, the highest bidder will have a higher chance of appearing as a citation or suggestion! Welcome to gas town, so much productivity ahead!! For you, and for the high-bidding players interested in taking advantage of you.
The assumption appears to be that the linked videos are less informative than "netdoktor" but that point is left unproven.
I'm getting fucking sick of it. this bubble can go ahead and burst
Same energy as “lol you really used Wikipedia you dumba—“
How long will it be before somebody seeks to change AI answers by simply botting Youtube and/or Reddit?
Example: it is the official position of the Turkish government that the Armenian genocide [1] didn't happen. It did. Yet for years they have seemingly spent resources gaming Google rankings. Here's an article from 2015 [2]. I personally reported such government propaganda results in Google in 2024 and 2025.
Current LLMs really seem to come down to regurgitating Reddit, Wikipedia and, I guess for Gemini, YouTube. How difficult would it be to create enough content to change an LLM's answers? I honestly don't know, but I suspect that for certain more niche topics this is going to be easier than people think.
And this is totally separate from the threat of an AI's owners deciding what biases the AI should have, a notable example being Grok's sudden interest in promoting the myth of a "white genocide" in South Africa [3].
Antivaxxer conspiracy theories have done well on YouTube (eg [4]). If Gemini weights heavily towards YouTube (as claimed), how do you defend against this sort of content producing bogus medical results and advice?
[1]: https://en.wikipedia.org/wiki/Armenian_genocide
[2]: https://www.vice.com/en/article/how-google-searches-are-prom...
[3]: https://www.theguardian.com/technology/2025/may/14/elon-musk...
[4]: https://misinforeview.hks.harvard.edu/article/where-conspira...
> Google AI Overviews cite YouTube more than any medical site for health queries
Whaaaa? No way /s
Like, do you people not understand the business model?
Google AI (owned by Meta) favoring YouTube (also owned by Meta) should be unsurprising.
> Google AI (owned by Meta) favoring YouTube (also owned by Meta) should be unsurprising.
...what?
This is absolute nonsense. Neither Google AI or YouTube are owned by Meta. What gave you the idea that they were?
Probably asked an LLM
I imagine that it is rare for companies to not preferentially reference content on their own sites. Does anyone know of one? The opposite would be newsworthy. If you have an expectation that Google is somehow neutral with respect to search results, I wonder how you came by it.
How do I respond to this nicely without getting my comment flagged
People don't flag comments because of tone, they flag (and downvote) comments that violate the HN guidelines (https://news.ycombinator.com/newsguidelines.html). I skimmed your comment history and a ton of your recent comments violate a number of these guidelines.
Follow them and you should be able to comment without further issue. Hope this helps.
I feel like you completely missed the point of the rhetorical question.