Following COVID’s Ghost Smells

When a 3-year study on COVID’s lasting effects forgets to ask if participants were vaccinated—and why this matters more than you think

How Integrated Next Generation Analysis (INGA314) Reveals Critical Flaws in COVID Research



Picture this: You’re reading a study about how COVID-19 affects smell and cognition over three years. It tracks 202 Armenian patients from 2020 to 2023, meticulously documenting their phantom smells, depression scores, and cognitive changes. The researchers interview participants, run statistical analyses, and publish their findings in a peer-reviewed journal.

There’s just one problem—they never asked if anyone was vaccinated.

This oversight, discovered through a systematic logical analysis, reveals how even well-intentioned research can contain critical blind spots that fundamentally undermine its conclusions. What’s more troubling? This isn’t an isolated incident. When we cross-validated these concerns against the broader scientific literature, we found these same flaws plague COVID research worldwide, leading to what experts call “overly optimistic interpretations” of recovery rates.

The Study: Following COVID’s Ghost Smells

The research in question followed Armenian COVID patients who developed smell problems. Some lost their sense of smell entirely (anosmia), others smelled things that weren’t there (phantosmia—imagine constantly smelling burning rubber), and many experienced distorted smells (parosmia—your coffee smells like gasoline).

The researchers made several key claims:

  1. People with phantom smells (phantosmia) had worse depression
  2. Recovery held steady at roughly 20% per year
  3. If you still couldn’t smell after 4 months, you’d likely develop worse problems
  4. About 14% never recovered after 3 years

Sounds comprehensive, right? Let’s see what INGA314 revealed—and what the scientific literature says about these issues.

The Vaccine-Shaped Hole: A Validated Concern

The study ran from October 2020 to March 2023. COVID vaccines became widely available in early 2021. That means for roughly two-thirds of the study period, vaccination was possible—yet the word “vaccine” appears exactly zero times in the paper.

Why does this matter? While vaccines don’t prevent infection entirely and their effectiveness varies, vaccination status remains a crucial variable that could affect:

  • Initial infection severity
  • Viral load and clearance time
  • Inflammatory response intensity
  • Recovery trajectories
  • Risk of reinfection during the study period

The scientific literature confirms this oversight is critical:

  • Tervo et al. (2024) conducted a similar longitudinal olfactory study but also completely omitted vaccination status, which peer reviewers called “a significant methodological oversight”
  • Boscolo-Rizzo et al. (2023) found vaccination status influenced olfactory recovery patterns, even though vaccines didn’t prevent infection
  • Multiple studies show vaccinated individuals who do get infected often have different symptom profiles and recovery patterns compared to unvaccinated individuals

The issue isn’t whether vaccines prevent infection (we know they don’t completely). The issue is that vaccination status could be a confounding variable affecting outcomes. It’s like studying recovery from car accidents without noting whether airbags deployed—even if airbags don’t prevent all injuries, they change injury patterns and recovery trajectories.

Without this data, we can’t know if the study tracked:

  • Only unvaccinated people?
  • A mix of vaccinated and unvaccinated?
  • People with different numbers of doses?
  • People who got COVID before and after vaccination?

Each scenario would mean completely different things for interpreting the results.
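
To make the confounding concrete, here is a minimal Python sketch. Every number in it is invented for illustration; none comes from the study.

```python
# Hypothetical illustration: an unrecorded vaccination mix lets two very
# different clinical realities produce the same pooled recovery number.
# All figures below are invented; none come from the Armenian study.

def pooled_recovery(vax_fraction, recovery_vax, recovery_unvax):
    """Overall recovery rate for a cohort with a given vaccinated share."""
    return vax_fraction * recovery_vax + (1 - vax_fraction) * recovery_unvax

# Scenario A: 10% vaccinated, both groups recover fairly well
print(round(pooled_recovery(0.10, recovery_vax=0.90, recovery_unvax=0.80), 2))  # 0.81

# Scenario B: 80% vaccinated, unvaccinated patients recover poorly
print(round(pooled_recovery(0.80, recovery_vax=0.95, recovery_unvax=0.25), 2))  # 0.81
```

Both scenarios print the same 81% pooled recovery, yet they tell opposite stories about who recovers and why. Without the vaccination variable, the pooled figure cannot be decomposed.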

The Survivorship Bias Trap: A Known Problem

Here’s where it gets more problematic. The study notes: “only the participants who encountered olfactory distortion in the previous visit came for the follow-up visits.”

Translation: If you recovered, you disappeared from the study.

The scientific literature extensively documents this exact problem:

  • Czeisler et al. (2021) in Epidemiology and Psychiatric Sciences studied 4,039 COVID patients and found survivorship bias led to “overly optimistic interpretations” by excluding individuals with worse outcomes
  • They explicitly recommend: “Survivorship bias assessment should therefore be among bias assessments applied before conclusions based on repeated assessments from participants in a longitudinal study are generalised”
  • A 2024 study in Frontiers in Medicine showed that improper handling of patients who dropped out led to overestimation of recovery by 14-15 percentage points

This creates what statisticians call survivorship bias—the same error that led World War II analysts to armor the wrong parts of planes (they only studied the planes that made it back, not the ones that got shot down).

By only tracking people who stayed sick, the study systematically excluded success stories. That “14% never recovered” finding? It might actually mean “14% of the people who were still sick enough to keep coming back never recovered”—a very different claim.
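
A toy simulation makes the distortion concrete. The 202 baseline matches the paper; the per-visit recovery probability is an assumption chosen purely for illustration.

```python
import random

# Toy model of the study's follow-up rule: recovered patients stop
# attending, so each visit's sample is conditioned on staying sick.
# Cohort size is from the paper; the recovery probability is assumed.

random.seed(0)
N = 202                     # baseline cohort (from the paper)
p_recover_per_year = 0.45   # assumed chance of recovering between visits

cohort = ["sick"] * N
final_visit_attendees = N
for visit in range(3):      # three annual follow-ups
    # Only still-sick patients attend, mirroring the study's design.
    final_visit_attendees = cohort.count("sick")
    for i, status in enumerate(cohort):
        if status == "sick" and random.random() < p_recover_per_year:
            cohort[i] = "recovered"

chronic = cohort.count("sick")
print(f"Chronic in the full cohort:    {chronic / N:.1%}")
print(f"Chronic among final attendees: {chronic / final_visit_attendees:.1%}")
```

Under these assumed numbers, the same group of chronic patients reads as well under 20% of the original cohort but around half of the people still attending at the final visit. Which denominator a paper uses changes the headline entirely.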

The Paradox of Predicting the Past

The study claims that “persistent anosmia up to four months post-COVID predicts parosmia development.” But here’s the logical problem: they discovered this by looking backward at their data, not by making predictions and testing them.

The literature is clear on this distinction:

  • Proper prediction requires stating hypotheses before data collection
  • Post-hoc pattern finding requires different statistical corrections
  • The STROBE guidelines specifically warn against presenting retrospective associations as predictive models

It’s like claiming you can predict yesterday’s weather by looking at today’s puddles. True prediction requires stating the rule first, then seeing if future cases follow it. This study did the reverse—found a pattern in past data and called it “predictive.”
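
A minimal sketch shows what a genuinely predictive check would look like; all patient records below are hypothetical.

```python
# Sketch of a genuinely predictive test (all records are hypothetical).
# The rule is fixed in advance: anosmia persisting past 4 months will be
# followed by parosmia. It is then scored on patients enrolled AFTER the
# rule was stated, instead of being fitted to old data retrospectively.

new_patients = [
    # (months of persistent anosmia, later developed parosmia?)
    (6, True), (2, False), (5, True), (7, False), (1, False), (9, True),
]

hits = sum((months > 4) == parosmia for months, parosmia in new_patients)
print(f"Pre-stated rule accuracy on unseen cases: {hits}/{len(new_patients)}")
```

The Armenian study ran this logic in reverse: the pattern was extracted from the same data used to evaluate it, which guarantees a flattering fit.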

The 75% Problem Nobody Mentions

The study was 75% women. The authors mention this briefly as a “limitation,” then proceed to make universal claims about COVID’s effects. But we know smell perception, depression rates, and even COVID outcomes differ between sexes.

Research validates this concern:

  • Multiple studies show sex-based differences in olfactory function
  • Women have higher rates of phantosmia even without COVID
  • COVID severity and long COVID prevalence differ by sex

INGA314’s scope analysis flags this immediately: you can’t generalize from a predominantly female Armenian sample to all humans. The honest conclusion would be: “In mostly female Armenian COVID patients, we found…”

The Math That Doesn’t Add Up: A Statistical Red Flag

Here’s a puzzling claim: “olfactory perception was assessed to be consistently recovering at approximately 20.0% for 3 years.”

The problem, validated by statistical guidelines:

  • WHO guidance specifies clear formulas for recovery rates: Recovery Rate = Recovered/(Recovered + Deaths + Active Cases) at time T
  • The STROBE Item 13c explicitly requires flow diagrams showing “numbers of individuals at each stage” with clear denominators
  • Without knowing how many people were assessed at each timepoint, the “20%” is meaningless

The study also reports “13.9% showed no improvement after three years,” but never clarifies if this is 13.9% of the original 202 or 13.9% of whoever showed up for the final visit. This ambiguity violates basic statistical reporting standards.
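
A quick worked example shows how much turns on the denominator. The 202 baseline is from the paper; the final-visit attendance below is a hypothetical placeholder, because the paper never reports it.

```python
# Denominator ambiguity in "13.9% showed no improvement".
# The 202 baseline is from the paper; the final-visit count is invented,
# since the paper never states how many people were assessed at year 3.

no_improvement = 28         # 28 / 202 = 13.9%, matching the reported figure

print(f"Of the original cohort: {no_improvement / 202:.1%}")                    # 13.9%

final_visit_attendees = 60  # hypothetical year-3 attendance
print(f"Of final attendees:     {no_improvement / final_visit_attendees:.1%}")  # 46.7%
```

Same numerator, radically different claims. STROBE’s flow-diagram requirement exists precisely to pin this down.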

The Discussion Section: Where Logic Goes to Die

Perhaps the most revealing part of any research paper is its discussion section—where authors interpret their findings and acknowledge limitations. Under INGA314 analysis, this study’s discussion reveals a masterclass in minimizing fatal flaws while maximizing impact.

The Circular Logic Trap

The discussion opens with: “The research indicates that COVID-19-related olfactory disturbances may persist for years, with a clear trend showing that symptoms which do not improve within the first 10 days tend to become chronic.”

INGA314 flags multiple issues:

  • They only tracked people who didn’t improve, then concluded that not improving predicts… not improving
  • What percentage improved within 10 days? We don’t know because those people were excluded
  • It’s a self-fulfilling prophecy dressed up as a finding

Speculation Masquerading as Mechanism

Perhaps most egregious is this claim: “Memory and attention were the two cognitive functions that were most negatively impacted. Furthermore, the high correlation of memory deterioration with phantosmic group allows us to assume that central lesions play a particularly crucial role in the development of phantosmia.”

INGA314 reveals the logical leaps:

  • They jump from correlation to “central lesions” without any brain imaging
  • No baseline cognitive testing means they can’t prove “deterioration”
  • The phrase “allows us to assume” is doing heavy lifting—correlation never “allows” assumption of mechanism

They cite brain imaging studies to support their point, but those studies actually did brain scans. This study did not.

The Missing Variant Discussion

The discussion completely ignores that their study spanned multiple COVID variants:

  • Ancestral and Alpha/Beta strains (2020 to mid-2021)
  • Delta wave (mid-to-late 2021)
  • Omicron era (late 2021-2023)

We know different variants have different symptom profiles. Omicron, for example, causes less olfactory dysfunction than earlier variants. By lumping all variants together, they’re comparing apples to oranges to bananas—then claiming to understand fruit.

The “Limitations” Lip Service

Near the end, they acknowledge limitations but immediately minimize them:

  • “High proportion of female participants… requires a study with balanced gender proportion”—yet they still generalize to all populations
  • Selection bias is mentioned but not integrated into their conclusions
  • They note “changing healthcare practices… and variants” but don’t discuss how this invalidates their unified analysis

INGA314 principle: If a limitation fundamentally undermines your conclusions, it’s not a limitation—it’s a fatal flaw.

What’s Conspicuously Absent

The discussion never addresses:

  1. Why vaccination status wasn’t collected (or if it was, why it’s not reported)
  2. How reinfection might affect outcomes (many participants likely had COVID multiple times during 3 years)
  3. Whether treatments were standardized (did some take steroids? antivirals?)
  4. Cultural factors in an Armenian population that might affect reporting

The discussion doesn’t critically examine their findings—it packages them for maximum impact while minimizing fundamental flaws.

Published Critiques Echo These Concerns

Our analysis isn’t happening in a vacuum. The scientific community has been raising these exact issues:

  • 2023 Rhinology position paper stated: “Translational research in this field is limited by heterogeneity in methodological approach, including definitions of impairment, improvement and appropriate assessment techniques”
  • A Chemical Senses meta-analysis found that subjective self-report methods detected olfactory dysfunction at 44% prevalence versus 77% with objective testing, revealing massive underreporting of olfactory problems
  • Quinn et al. (2021) found COVID-19 research was 6.32 times more likely to have high risk of bias compared to non-COVID research
  • Griffith et al. (2020) in Nature Communications warned: “Collider bias undermines our understanding of COVID-19 disease risk and severity”

What This Means for You

Beyond this specific study, the INGA314 analysis and literature review reveal crucial lessons about reading research:

1. Always Ask What’s Missing

The most critical flaw wasn’t what the study measured; it was what the researchers never asked. Regardless of one’s views on vaccine effectiveness, vaccination status is a variable that could affect outcomes and should be recorded. Major medical organizations now list vaccination status as mandatory for COVID studies post-2021.

2. Beware Selection Bias

When someone says “we had a 100% response rate,” ask: “100% of whom?” Research shows COVID studies with this type of selection bias overestimate recovery rates by 14-15 percentage points. Millard et al. (2023) demonstrated that “selection into analytical subsamples can induce non-negligible bias.”

3. Question Predictive Claims

Real prediction means stating rules before testing them. The STROBE guidelines specifically warn against retrospective “predictions.” If a study “predicts” based on looking backward, it’s pattern recognition, not prediction.

4. Check the Math

Even published research contains arithmetic errors. WHO and STROBE guidelines provide clear formulas—if a study’s math seems vague or inconsistent, it probably is.

5. Scope Matters

A study of Armenian women isn’t a study of humanity. Good research clearly states its boundaries; suspicious research pretends they don’t exist.

6. Read the Discussion Critically

The discussion section often reveals more through what it doesn’t say than what it does. Watch for minimized limitations, speculation presented as fact, and conclusions that ignore the study’s own constraints.

The Bigger Picture: A Systematic Problem

This analysis isn’t meant to trash earnest researchers trying to understand COVID’s long-term effects. Their work provides valuable data about a subset of patients. The problem arises when limited findings are packaged as universal truths.

The complete absence of vaccine data transforms what could have been groundbreaking research into a historical curiosity—a snapshot of some unknown mix of vaccinated and unvaccinated Armenians who stayed sick enough to keep returning for tests.

The literature consensus is clear: Studies conducted during the vaccine era that fail to include vaccination status violate established methodological standards and likely produce biased results.

Moving Forward: What Good Research Looks Like

Based on published guidelines and critiques, future COVID studies should:

  1. Always record vaccination status (type, dates, doses)—essential for understanding confounding variables
  2. Track all participants, not just those who remain symptomatic
  3. Make predictions then test them, not find patterns and call them predictive
  4. State scope clearly: “In our specific population…” not “COVID causes…”
  5. Address confounders directly rather than hiding them in limitations sections
  6. Follow STROBE guidelines for clear reporting of participant flow and denominators (see the accounting sketch after this list)
  7. Write honest discussion sections that integrate limitations into conclusions
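
On point 6, here is a minimal sketch of STROBE-style participant accounting. The enrollment figure is from the paper; the per-year attendance numbers are hypothetical, since the paper never reports them.

```python
# Minimal sketch of STROBE-style participant accounting. Enrollment is
# from the paper; the yearly attendance figures are hypothetical.
# Stating the denominator at every stage makes every reported
# percentage interpretable.

flow = {
    "enrolled":        202,  # from the paper
    "year 1 assessed": 150,  # hypothetical
    "year 2 assessed":  95,  # hypothetical
    "year 3 assessed":  60,  # hypothetical
}

for stage, n in flow.items():
    print(f"{stage:>16}: n = {n:>3} ({n / flow['enrolled']:.0%} of enrolled)")
```

With a table like this in the paper, the “13.9%” claim would be unambiguous.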

The INGA314 Bottom Line

The Integrated Next Generation Analysis, validated by extensive scientific literature, revealed that this well-meaning study contains:

  • A critical vaccine-shaped hole in the data (confirmed as a major methodological flaw)
  • Survivorship bias that inflates negative outcomes (shown to cause 14-15 percentage point overestimation of recovery)
  • Mathematical inconsistencies (violating WHO and STROBE guidelines)
  • Overgeneralized claims from a narrow sample
  • Retrospective “predictions” that aren’t predictions at all
  • A discussion section that minimizes fatal flaws while maximizing claims

These aren’t minor quibbles—they’re fundamental flaws recognized across high-impact journals (BMJ, The Lancet, JAMA, Nature) as systematic failures that have undermined COVID research globally.

Instead of “COVID causes long-term smell problems in 14% of people,” the accurate conclusion might be: “Among predominantly female Armenian COVID patients who remained symptomatic enough to attend follow-ups, with unknown vaccination status, 14% still reported smell problems after 3 years.”

That’s a much narrower claim—but it’s honest about what the data actually shows.

Why This Matters Now More Than Ever

In an era of information overload, we need tools like INGA314 more than ever. Not to cynically dismiss all research, but to separate solid findings from overstated claims. The Armenian olfactory study provides useful data about a specific population, but packaging it as universal truth does a disservice to both science and patients seeking answers.

The scientific community agrees: These methodological issues aren’t just academic nitpicking. They lead to real-world consequences—overestimated recovery rates give false hope, while missing crucial variables like vaccination status makes findings clinically useless.

The next time you read a breakthrough study, channel your inner INGA314 analyst:

  • What questions didn’t they ask?
  • Who’s missing from their sample?
  • Do their predictions actually predict?
  • Are they claiming more than their data supports?
  • Does the discussion acknowledge or minimize fatal flaws?

Good science welcomes such scrutiny. After all, the goal isn’t to be right—it’s to get closer to the truth, one integrated analysis at a time.


INGA314 (Integrated Next Generation Analysis) is a systematic approach to evaluating scientific claims, checking for scope violations, temporal paradoxes, hidden biases, and logical inconsistencies. This analysis, cross-validated against published scientific literature, demonstrates how even peer-reviewed studies can contain critical oversights that fundamentally alter their conclusions.

Published by:


Dan D. Aridor

I hold an MBA from Columbia Business School (1994) and a BA in Economics and Business Management from Bar-Ilan University (1991). Previously, I served as a Lieutenant Colonel (reserve) in the Israeli Intelligence Corps. Additionally, I have extensive experience managing various R&D projects across diverse technological fields. In 2024, I founded INGA314.com, a platform dedicated to providing professional scientific consultations and analytical insights. I am passionate about history and science fiction, and I occasionally write about these topics.
