Why do search engines and AI select “dubious medical articles”? — The limitations of algorithms and SEO —
📖Author: Nao

Note: Given the nature of this article, which deals with search algorithms and AI evaluation structures, its composition and terminology have undergone technical review by an AI.
This time, we look at the technical limitations that allow misinformation, which exploits gaps in human psychology, to be amplified and rated highly by AI and search algorithms.
Conclusion: AI and search algorithms evaluate the accuracy of information by different criteria than humans do
Specifically, AI and search algorithms judge information not by its inherent truthfulness, but by “structured signals” and “features that are easy to evaluate”. For example, Google treats “E-E-A-T” as a central concept in assessing content quality, and it has become a key benchmark in SEO. E-E-A-T (which Google also writes as “Double-E-A-T”) is an acronym for the four elements defined in Google’s Search Quality Rater Guidelines:
- Experience: Is it based on real-world experience?
- Expertise: Does it demonstrate specialised knowledge?
- Authoritativeness: Is it recognised within the field?
- Trustworthiness: Are the operator and content trustworthy?
Ordinarily, this standard works to protect users. Probabilistically, it is the right call to rank articles by ‘doctors with years of proven experience’ above health advice from ‘unknown individuals’.
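To make this concrete, here is a minimal sketch in Python of what purely signal-based scoring can look like. Every field name, weight, and example page below is an illustrative assumption, not Google’s actual ranking system; the structural point is simply that every input is metadata about the author, and none is about whether the claims are true.

```python
# A minimal, hypothetical sketch of signal-based ranking.
# All field names, weights, and example pages are illustrative
# assumptions; this is not Google's actual algorithm.
from dataclasses import dataclass


@dataclass
class Page:
    title: str
    author_has_medical_licence: bool      # expertise signal (metadata)
    affiliated_with_known_hospital: bool  # authority signal (metadata)
    years_of_experience: int              # experience signal (metadata)
    inbound_links: int                    # popularity/authority proxy


def signal_score(page: Page) -> float:
    """Score a page from author/site metadata alone.

    Note what is absent: no input tells the ranker whether the
    page's medical claims are actually true.
    """
    score = 0.0
    if page.author_has_medical_licence:
        score += 3.0
    if page.affiliated_with_known_hospital:
        score += 2.0
    score += 0.05 * min(page.years_of_experience, 20)
    score += 0.10 * min(page.inbound_links, 50)
    return score


pages = [
    Page("Unproven therapy X cures cancer", True, True, 15, 40),
    Page("Cautious summary of standard treatment", False, False, 2, 5),
]

# The licensed, well-connected author ranks first regardless of accuracy.
for p in sorted(pages, key=signal_score, reverse=True):
    print(f"{signal_score(p):5.2f}  {p.title}")
```

The design choice worth noticing is that “truth” never appears as a feature: there is no scalable signal for it, so the scorer optimises what it can measure instead.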
When the “assumption of benevolence” breaks down, the system develops bugs
However, this system has a significant “flaw”. It is built on an assumption of benevolence (an assumption of good faith): that ‘authoritative experts (such as doctors) will always disseminate correct information’. Search engines cannot experimentally verify whether the content of a paper is scientifically correct. All they can do is evaluate attributes (metadata) such as ‘does this author hold a medical licence?’ or ‘are they affiliated with a renowned hospital?’. What happens as a result?
- Misattribution of Authority: Even if a doctor writes personal opinions or information lacking scientific consensus, the search engine judges it as ‘High Authority’ and displays it prominently in search results.
- AI Learning Contamination: Subsequently, when AI systems (large language models) are trained on online information, they absorb this ‘authoritative’ content from top search results as if it were the correct answer (see the sketch after this list).
- Misinformation Entrenchment: When users query AI, it innocently responds with misinformation like ‘According to Dr. XX, this is effective.’
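None of the actual training pipelines behind large language models is public, so the following Python sketch is purely an assumption for illustration: a data-collection filter that keeps pages by the same kind of authority score, and therefore passes a false but ‘authoritative’ claim straight into the training corpus.

```python
# A hypothetical sketch of the contamination chain described above.
# Function names, thresholds, and example pages are assumptions for
# illustration; real LLM data pipelines are more complex and not public.

def collect_training_text(crawled_pages, authority_threshold=4.0):
    """Keep high-authority pages as training text.

    The filter reuses the metadata-style authority score, so a false
    claim from a high-authority source sails through, while a more
    accurate but obscure page is dropped.
    """
    return [
        page["text"]
        for page in crawled_pages
        if page["authority_score"] >= authority_threshold
    ]


crawled_pages = [
    {"text": "Dr. XX: therapy X is effective.", "authority_score": 5.5},
    {"text": "Obscure blog: trials show therapy X fails.", "authority_score": 1.2},
]

corpus = collect_training_text(crawled_pages)
print(corpus)  # only the high-authority (but false) claim remains
```

Once such text is in the corpus, the third step follows without any malice on the model’s part: asked about therapy X, it simply reproduces the best-attested pattern it saw, ‘According to Dr. XX, this is effective.’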
Self-Defence Measures We Can Take
Thus, the structural flaw of the modern web is that ‘information is not necessarily correct simply because responsibility is clearly assigned (to a doctor)’.
Eliminating this harm entirely through technology alone is, at present, all but impossible.
That is precisely why what we humans now need is information literacy, meaning not swallowing search rankings or AI responses whole, and basic medical knowledge (such as understanding standard treatments) to protect ourselves.
Algorithms may be able to filter for “popularity” or “authority”, but they have no function for filtering “truth”.
Recommended reading alongside this article
“Why do we trust medical articles? ― Harmless medical information that hurts patients ―”
The previous article explains why we tend to believe information from medical professionals, even when it is incorrect. It details the psychological mechanisms behind this and the actual harm that has occurred.