The videos pop up between wedding reels and cricket highlights: a calm, bespectacled “doctor” speaks with reassuring authority, promising that a herbal syrup or a “detox protocol” can normalise blood sugar in weeks.
For millions of Indians living with diabetes, such clips offer hope and certainty. But there is a problem: they are fictitious. Artificial intelligence has made it cheap to fabricate experts, clone trusted faces and generate a persuasive patter of biomedical jargon. The result is a fast-growing shadow industry of misinformation in which chronic disease becomes content, and content becomes commerce.
India is fertile ground for such distortions. The country has an estimated 100 million people with diabetes. In this environment, social media platforms double as informal clinics.
Academic research analysing health content online suggests the scale of the problem: a 2025 study of diabetes videos on TikTok found large variations in quality; many clips scored poorly on measures of reliability and clarity.
Where information is uneven, misinformation thrives.
Of particular concern is the use of generative AI to industrialise the process of spreading misinformation. Deepfake videos—synthetic media in which a real person’s likeness is manipulated—have begun to mimic doctors and television personalities with unsettling realism.
Reports from multiple countries, including India, describe fabricated clips in which well-known clinicians appear to endorse untested “natural” remedies or denounce established drugs such as metformin. The technology exploits a simple asymmetry: trust is slow to build but easy to counterfeit. A white coat, a confident tone and a familiar face are often enough.
The claims follow familiar patterns. Some assert that diabetes can be “reversed” through proprietary programmes sold via subscription. Others promise dramatic glucose reductions from spices, seeds or supplements. Still others attack mainstream medicine, warning that standard drugs are “toxic” or unnecessary.
These narratives are not new; what has changed is their scale and sophistication. Paid advertisements embedded in interviews or influencer feeds funnel viewers into paid courses or product sales, often with guarantees of stopping medication altogether. The language of empowerment—“take control”, “heal naturally”—masks a commercial funnel.
Medical evidence offers a prosaic story. Type 2 diabetes is a complex metabolic disorder influenced by genetics, body weight, diet and physical activity. While lifestyle changes can improve glycaemic control and, in some cases, induce remission, they do not constitute a universal cure. Nor can single foods or supplements reliably “flush out” the disease.
Claims to the contrary are unsupported by rigorous clinical trials. Even seemingly benign advice can mislead. Social-media demonstrations using continuous glucose monitors, for example, often label foods as “good” or “bad” based on short-term glucose spikes, ignoring broader metabolic effects and individual variation. Such over-simplification is dangerous.
The economic incentives are obvious. Diabetes, a chronic condition that induces anxiety, creates a perfect market. Influencers monetise attention through affiliate links, branded products and subscription programmes.
AI lowers the cost of entry: one need not be a doctor, or even a person, to appear as one. A single fabricated video can be localised into multiple languages, tailored to regional diets and distributed across platforms at negligible cost. In effect, misinformation has become scalable.
The social consequences are harder to quantify, but real. Health experts warn that deepfake endorsements can lead patients to abandon effective treatments or delay seeking care. Even when individuals do not act on false claims, the cumulative effect is erosion of trust.
If every authority can be faked, who should one believe? In India, where out-of-pocket spending remains high and doctor–patient ratios are stretched, such uncertainty can push people further toward informal advice networks.
It also reshapes how disease is understood. Influencer narratives tend to “moralise” diabetes, framing it as the result of dietary “mistakes” that can be corrected with discipline or the right product. This dovetails with the logic of social media, which rewards simple messages over nuance. However, it obscures structural drivers such as urbanisation, food environments and genetic predisposition. The result is a distorted picture in which responsibility is individualised, and solutions commodified.
India’s experience with misinformation offers clues about why this ecosystem has taken root. Studies of digital communication have found that the country is a hotspot for false health information, fuelled by rapid internet adoption and uneven digital literacy.
Messaging platforms and short-video apps amplify emotionally engaging content, regardless of accuracy. Algorithms favour virality; virality favours certainty. A claim that “this drink cures diabetes” will travel further than a careful explanation of insulin resistance.
Regulation has struggled to keep pace. Platforms have policies against misleading health claims, but enforcement is weak, especially across languages and formats. Deepfakes add a further layer of complexity: detecting them requires technical tools that are imperfect in their own right.
Some companies are experimenting with facial recognition systems to flag manipulated content, but these remain in early stages. Meanwhile, advertisers and content creators adapt quickly, shifting accounts, formats and messaging.
Public-health authorities face a dilemma. Heavy-handed censorship risks driving misinformation into less visible channels, while laissez-faire approaches allow harmful content to proliferate.
A promising path may lie in counter-programming: producing credible, engaging content that competes on the same platforms. The TikTok study’s recommendation—that clinicians actively participate online—reflects a broader shift in thinking.
Education is another lever. Improving health literacy—understanding what constitutes evidence, how treatments are evaluated and why individual anecdotes are unreliable—can inoculate against some forms of deception. So can transparency about uncertainty. Medicine rarely offers absolutes; acknowledging that may paradoxically build trust. By contrast, the false certainty of influencer claims is precisely what makes them appealing.
None of this is uniquely Indian. Deepfake doctors have appeared in Europe and Australia, and the global supplement industry has long thrived on regulatory grey zones. However, India’s scale magnifies the stakes. A small shift in behaviour, multiplied across tens of millions of people, becomes a public-health issue. If even a fraction of patients delay treatment or abandon medication, the consequences will be felt in clinics and hospitals for years.
There is a temptation to treat this as a technological problem with a technological fix: better detection algorithms, stricter platform policies, more sophisticated verification. These are necessary but insufficient.
The deeper issue is one of incentives. As long as misinformation is profitable, it will persist. Changing that calculus requires coordinated action from platforms, regulators, advertisers and, not least, audiences.
In the meantime, the videos keep coming. They promise control in a condition defined by its chronicity, simplicity in a system defined by its complexity. They speak the language of science while discarding its method. And they remind us that in the contest between evidence and assertion, the latter often has the advantage of being easier to understand. The task for medicine is not merely to be right, but to be heard—and, increasingly, to be seen as real.
-30-
Copyright © Madras Courier, All Rights Reserved. You may share using our article tools. Please don't cut articles from madrascourier.com and redistribute by email, post to the web, mobile phone or social media. Please send in your feedback and comments to [email protected]
