Why do search engines and AI select ‘dubious medical articles’? — The limitations of algorithms and SEO —

📖 Author: Nao

[Image: illustration of algorithmic bias]
📖 Estimated reading time: approximately 2 minutes.
※ Note: Parts of this article were created with the help of AI-generated text.

In this article, we look at the technical reasons why misinformation that exploits gaps in human psychology ends up being rated highly and amplified by AI and search algorithms.

Conclusion: AI and search algorithms judge ‘correct information’ by different criteria than humans do

Specifically, AI and search algorithms judge information not by whether it is true, but by criteria such as ‘structured signals’ and ‘features that are easy to evaluate’.

For example, Google treats ‘E-E-A-T’ as a key benchmark for assessing content quality in search, and therefore in SEO.

E-E-A-T (read as ‘double-E-A-T’) is an acronym for four elements defined in Google’s Search Quality Rater Guidelines:

  • Experience: Is it based on real-world experience?
  • Expertise: Does it demonstrate specialised knowledge?
  • Authoritativeness: Is it recognised as an authority in the field?
  • Trustworthiness: Can the operator or content be trusted?

Normally, this standard works to protect users: statistically speaking, it makes sense to rank an article by a ‘doctor with years of proven experience’ above health advice from ‘some unknown person’.

When the “assumption of good faith” breaks down, the system develops bugs

However, this system has a significant ‘flaw’: it is built on the good-faith assumption that ‘authoritative experts (such as doctors) will always publish correct information’.

A search engine cannot experimentally verify whether ‘the content of a paper is scientifically correct’. All it can do is evaluate attributes (metadata) such as ‘Does this author hold a medical licence?’ or ‘Are they affiliated with a renowned hospital?’
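To make this concrete, here is a minimal sketch, in Python, of how a ranker that sees only metadata-style signals behaves. The signal names and weights are hypothetical, chosen purely for illustration; the point is that whether a claim is actually true never enters the score, because the ranker has no field for it to read.

```python
from dataclasses import dataclass

@dataclass
class Article:
    title: str
    author_has_medical_licence: bool   # attribute a crawler can plausibly check
    affiliated_with_known_hospital: bool
    inbound_links: int                 # rough popularity/authority proxy
    claim_is_true: bool                # invisible to the ranker below

def authority_score(a: Article) -> float:
    """Toy ranking score built only from metadata-style signals.

    Note that the truth of the claim never enters the calculation.
    """
    score = 0.0
    if a.author_has_medical_licence:
        score += 3.0
    if a.affiliated_with_known_hospital:
        score += 2.0
    score += min(a.inbound_links, 100) * 0.05  # cap the popularity bonus
    return score

articles = [
    Article("Personal theory by a licensed doctor", True, True, 80, False),
    Article("Accurate summary by an anonymous writer", False, False, 5, True),
]

# The inaccurate but 'authoritative' article wins the ranking.
for a in sorted(articles, key=authority_score, reverse=True):
    print(f"{authority_score(a):5.2f}  {a.title}")
```

Real ranking systems are of course far more elaborate, but the structural point is the same: every input is a proxy for trustworthiness, and none of them is the truth of the claim itself.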

What consequences does this produce?

  1. Misattribution of Authority: Even when doctors publish personal opinions or information that lacks scientific consensus, search engines judge it to have ‘high authority’ and display it prominently in search results.
  2. AI Learning Contamination: When AI systems (large language models) then learn from online information, they absorb the ‘authoritative information’ at the top of search results as correct answers.
  3. Misinformation Entrenchment: When users ask the AI questions, it innocently repeats the misinformation, saying things like ‘According to Dr. XX, this is effective.’

Self-Defence Measures We Can Take

The structural flaw of the modern web, then, is that information is not necessarily correct simply because responsibility for it is clearly assigned (to a doctor, for example).

Eliminating this harm entirely through technology alone is, at present, nearly impossible.

This is precisely why we must now cultivate information literacy, refusing to accept search rankings or AI responses at face value, together with basic medical knowledge (such as knowing what the standard treatments are), in order to protect ourselves.

Algorithms may filter ‘popularity’ or ‘authority,’ but they lack the capacity to filter ‘truth.’

Recommended reading alongside this article

Why do we trust articles by doctors? – How well-meaning medical information can harm patients –

The article above explains why we tend to believe information from medical professionals even when it is incorrect, detailing the psychological mechanisms behind this and the actual harm that has occurred.


If you found this article helpful, please share it.