The 150-Minute Chronic Pain Treatment

How a single-dose pharmacokinetic study became a clinical endorsement

An INGA314 Case Study — Almog et al., European Journal of Pain, 2020 (PMID 32445190)


BOTTOM LINE. 27 patients. One inhalation per session. 150 minutes of follow-up. The conclusion: a treatment that lets chronic pain patients “regain their quality of life.” Quality of life was never measured. Four of eight authors are employees of the device manufacturer. The trial was never registered. The study’s 0.5 mg arm did not beat placebo — a fact the abstract obscures through framing. CADTH’s living systematic review excluded this paper for “inadequate duration.”


https://pubmed.ncbi.nlm.nih.gov/32445190/

The Three Numbers That Don’t Reconcile

Read this carefully: twenty-seven patients. Three sessions each. One inhalation per session. A total of about seven and a half minutes of cumulative drug exposure across the entire trial, distributed across three visits separated by at least two days. Each session was followed for one hundred and fifty minutes. That is the entire empirical foundation of the paper.

The conclusion the authors draw from that empirical foundation is the following:

“It is the first time that the delivery of selective, significantly low, and precise therapeutic single doses of inhaled THC demonstrates an analgesic effect. It allows patients to reach the optimum balance between symptom relief and controlled side effects, enabling patients to regain their quality of life.”

Read those two passages back to back. Hold them in mind simultaneously. Quality of life was not measured. The longest a single patient was observed in this trial was 150 minutes. The disease the device proposes to treat — chronic neuropathic pain — is by definition a condition lasting months or years. The empirical window the trial measured does not overlap with the temporal scope of the disease it claims to treat.

This paper is not bad. It is, in fact, a competent Phase 1b pharmacokinetic feasibility study. The bad part is the gap between what the methods can support and what the conclusions assert. That gap is the subject of this case study.


What the Paper Actually Did

Almog et al. enrolled 27 patients with neuropathic pain or complex regional pain syndrome at the Rambam Health Care Campus in Haifa between March 2016 and July 2017. Every patient was already a licensed medical-cannabis user under the Israeli Ministry of Health. Most — 77.8% — had been smoking cannabis at 16 to 30 grams per month. Two-thirds were also taking opioids.

Each patient came to the clinic three times. On each visit, they took one inhalation from the Syqe Medical inhaler, which delivered either 0.5 mg of THC, 1.0 mg of THC, or a placebo, in random order. After the inhalation, the team measured plasma THC levels at twelve time-points across 150 minutes, recorded pain on a visual analog scale at eight time-points, ran selected subtests of the CANTAB cognitive battery at 15 and 75 minutes, and checked for adverse events. After 150 minutes, the patient went home. Two days later — at minimum — they came back for the next session.

That is the trial. That is the entire trial. There is no chronic dosing arm. There is no follow-up beyond 150 minutes. There is no measurement of function, mood, sleep, return-to-work, or any of the outcomes a chronic-pain patient or their physician would actually care about. There is no quality-of-life instrument.

It is a single-dose pharmacokinetic-and-acute-pharmacodynamic study, and it is well-designed for that purpose. It establishes that the Syqe device produces reproducible plasma THC curves across two doses with reasonable consistency. That is a useful contribution. The problem is that the paper does not stop there.


The Inflation Map

INGA314 treats the gap between what evidence can support and what a paper claims as a measurable quantity. We call it the confidence inflation factor: how much more confident the conclusion is than the evidence justifies. For Almog et al., the table below maps the principal claims against their empirical foundation.

| Discussion / Conclusion Claim | What the Methods Tested | Justified Confidence | Inflation |
|---|---|---|---|
| "Effective treatment for chronic pain" | Single-session VAS over 150 min; only 1 of 2 active doses beat placebo | 0.25 | 3.4× |
| "Safe analgesic effect" | 27 patients, ~7.5 min cumulative drug exposure, 150 min monitoring | 0.20 | 4.3× |
| "No consistent cognitive impairment" | CANTAB at 15 and 75 min; Tmax = 4 min, so the peak window was not tested | 0.20 | 4.0× |
| "Most effective delivery method for cannabis-based medicine" | Cross-study PK comparison; no head-to-head trial | 0.15 | 5.3× |
| "Patients regain their quality of life" | Quality of life was never measured | 0.00 | ∞ |
| "Pharmaceutical-standard precise dosing" | 7–10% imprecision, +7–8% inaccuracy, 11–12% uncertainty | 0.55 | 1.6× |

The aggregate inflation across primary claims is in the 3× to 5× range. Two of the most prominent claims — quality-of-life restoration and “most effective delivery method” — have inflation factors approaching infinity, because they make assertions for which no measurement was performed in this study at all.
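The inflation factor itself is simple arithmetic: the confidence a claim asserts, divided by the confidence the evidence justifies. A minimal sketch of that calculation follows; note that the `claimed` values below are illustrative back-calculations from the table (justified confidence × inflation), not numbers reported by the paper:

```python
def inflation_factor(claimed: float, justified: float) -> float:
    """How much more confident a conclusion is than its evidence justifies.
    A claim with zero evidentiary support inflates without bound."""
    if justified == 0:
        return float("inf")
    return claimed / justified

# Claimed-confidence values are hypothetical back-calculations
# (justified confidence x inflation factor) from the table above.
claims = [
    ("effective treatment for chronic pain",  0.85, 0.25),  # -> 3.4x
    ("safe analgesic effect",                 0.86, 0.20),  # -> 4.3x
    ("patients regain their quality of life", 0.80, 0.00),  # -> infx
]
for label, claimed, justified in claims:
    print(f"{label}: {inflation_factor(claimed, justified):.1f}x")
```

The division-by-zero branch is the interesting one: a claim with no supporting measurement at all, like quality-of-life restoration here, has an inflation factor that is formally unbounded.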

Quality of life was never measured. The conclusion claims patients can regain it.


Five Critical Defects

1. The Acute-to-Chronic Temporal Leap

The most important methodological feature of this paper is that its longest observation window is 150 minutes per patient, and the disease it claims to address is chronic. The temporal mismatch is not subtle. Chronic pain is defined as pain persisting beyond three months. Treatment trials for chronic pain conventionally follow patients for weeks, months, or longer to detect tolerance, dependence, withdrawal, sustained efficacy, and the cognitive and physical effects of repeated dosing.

None of these can be assessed in 150 minutes. None of them are assessed in this paper. The Discussion nevertheless concludes that the device is “an effective treatment for chronic pain in adults.” This is not a Discussion-section overstatement; it is a structural mismatch between what the methods can speak to and what the conclusion claims.

The Canadian Agency for Drugs and Technologies in Health, in its living systematic review on cannabis for chronic pain, has already adjudicated this gap. They excluded the paper from their evidence synthesis under the explicit reason: “inadequate duration.” Other systematic reviewers have done the same.

2. The Author-Sponsor Concentration

Of the eight authors, four are employees of Syqe Medical, the company manufacturing the device under evaluation. One additional author is a paid consultant for Syqe. Two more received research support from Syqe. The remaining author is the only one without disclosed financial ties to the sponsor. The study itself was “supported by Syqe Medical.”

| Author | Listed Affiliation | Disclosed Relationship to Syqe Medical |
|---|---|---|
| Almog | Sheba / Tel-Aviv University | Paid consultant for Syqe |
| Aharon-Peretz | Rambam | None disclosed |
| Vulfsons | Rambam / Technion | Research support from Syqe |
| Ogintz | Rambam / Syqe | Syqe employee |
| Abalia | Syqe | Syqe employee |
| Lupo | Syqe | Syqe employee |
| Hayon | Syqe | Syqe employee |
| Eisenberg | Rambam / Technion | Research support from Syqe |

This is not a third-party-evaluated trial. It is an internal evaluation of a device by the company building the device, published in a peer-reviewed journal. There is nothing categorically wrong with industry-funded research, but the conflict density here is high enough to warrant explicit weighting in any meta-analysis or regulatory consideration. The Discussion does not adjust its tone to reflect this.

3. The Unregistered Trial

Buried in the Methods section is the following sentence:

“Due to an administrative error, the study was not registered at the NIH ClinicalTrail.gov website.”

The misspelling of ClinicalTrials.gov in the disclosure is itself instructive. Trial pre-registration is the principal mechanism by which the scientific community guards against outcome switching, selective reporting, and post-hoc hypothesis revision. Pre-registration is a one-page online form. The trial ran for sixteen months. “Administrative error” is doing a lot of work in that sentence.

The practical consequence: the reader has no independent record of which outcome was prespecified as primary, what statistical analysis was planned in advance, or whether the dose comparisons reported were the comparisons originally intended. The paper reports a pharmacokinetic primary outcome and an efficacy primary outcome — both share the “primary” label. The reader cannot tell which was prespecified or whether the framing was finalized post hoc.

4. The Buried Negative Result

The abstract reads:

“Both doses, but not the placebo, demonstrated a significant reduction in pain intensity compared with baseline. The 1-mg dose showed a significant pain decrease compared to the placebo.”

Read it twice. The first sentence describes within-arm change against baseline. In a placebo-controlled trial, within-arm change against baseline is not the appropriate efficacy test — that is what the placebo arm is for. The relevant comparison is between arms.

The between-arm comparison appears only later, and quietly: the 0.5 mg dose was not statistically significantly different from placebo. Only the 1.0 mg dose beat placebo. So one of the two active arms — the lower-dose arm the paper celebrates as a precision-dosing achievement — failed the actual efficacy test.

The abstract framing reverses this. By leading with the within-arm comparison and listing both active doses together as having “demonstrated a significant reduction,” the paper creates the impression that both doses worked. They did not. One did. A reader who reads only the abstract — and most readers do read only the abstract — will leave with a materially different impression than the data support.

5. The Cognitive Window That Missed the Peak

The paper claims the device produces “no evidence of consistent impairments in cognitive performance.” The basis for this claim is CANTAB testing at two post-dose time-points: 15 minutes and 75 minutes.

The peak plasma THC concentration occurred at approximately 4 minutes after inhalation. By 15 minutes, plasma THC has already declined substantially from its peak. The interval during which cognitive impairment is most likely to be detectable was not measured. Then the absence of detected impairment is presented as positive evidence of safety.

This is the classic absence-of-evidence-as-evidence-of-absence pattern. When you do not test in the window where an effect is most likely to occur, your null finding is uninformative. The paper does not flag this; it presents the negative finding as a positive safety signal.

Add to this the issue of test selection: only four CANTAB subtests were used. Driving-relevant tasks, executive function under cognitive load, judgment, and divided attention — the cognitive domains most relevant to real-world cannabis safety — were not tested. The cognitive safety claim rests on a narrow battery applied at the wrong times.


The Unblinding the Authors Almost Acknowledge

Among the limitations, the authors concede:

“Patients were not asked at the end of each session what treatment they felt they had received, so we cannot be sure that patients were fully blinded to treatment arms.”

The paper then dismisses this concern by noting that the cartridge contents were not visible. That is true and irrelevant. The blinding-relevant fact is not what the patient sees. It is what the patient feels.

Every patient in this study had previously used cannabis. 77.8% had previously smoked cannabis at 16 to 30 grams per month. These are experienced users. They can reliably distinguish the subjective effect of THC from inert placebo. And the data confirms this: the most commonly reported adverse event was “drug high,” and its incidence was strictly dose-graded. Nine patients reported it on placebo. Twelve on 0.5 mg. Sixteen on 1.0 mg.

That dose-graded “high” signal is a functional unblinding signature. Patients in the 1.0 mg arm knew, or correctly inferred, that they had received active drug. The expectancy effect on subjective pain reporting in functionally unblinded patients is well-established and substantial. Whatever pain reduction was measured in the 1.0 mg arm is contaminated by an unmeasurable expectancy contribution. The paper does not adjust for this. It cannot, given the design.
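The strength of that dose-graded signal can be gauged with a standard Cochran-Armitage test for trend in proportions. The sketch below is illustrative only: it treats the 27 observations per arm as independent, which the crossover design violates, so the z-value is an approximation rather than a valid inferential statistic:

```python
import math

def cochran_armitage_z(successes, totals, scores):
    """Cochran-Armitage test for trend in proportions across ordered groups.
    Assumes independent observations (not strictly true in a crossover design)."""
    n_total = sum(totals)
    p_bar = sum(successes) / n_total
    # t: covariance between the dose score and excess "drug high" reports
    t = sum(s * (r - n * p_bar) for r, n, s in zip(successes, totals, scores))
    var = p_bar * (1 - p_bar) * (
        sum(n * s * s for n, s in zip(totals, scores))
        - sum(n * s for n, s in zip(totals, scores)) ** 2 / n_total
    )
    return t / math.sqrt(var)

# "Drug high" reports: 9/27 on placebo, 12/27 on 0.5 mg, 16/27 on 1.0 mg
z = cochran_armitage_z([9, 12, 16], [27, 27, 27], [0.0, 0.5, 1.0])
print(f"z = {z:.2f}")  # ~1.9: a monotone dose trend in subjective "high"
```

Even under the generous independence assumption the trend is only borderline by conventional thresholds; the point is not its p-value but that "high" incidence climbs monotonically with dose, which is exactly what functional unblinding looks like.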


The Proxy-Elevation Chain

INGA314 identifies a recurring failure mode in clinical research called proxy elevation: the conversion of a narrowly-measured surrogate into a broadly-claimed clinical outcome through a series of micro-steps that each look defensible in isolation. Almog et al. executes a textbook five-step chain:

  1. A change in VAS pain score at one or more time-points is observed.
  2. That acute change is described as “significant analgesia.”
  3. Significant analgesia is described as “effective treatment for chronic pain.”
  4. Effective treatment is described as “safe and effective” — bringing in safety claims unsupported by the duration of observation.
  5. Safety and efficacy in chronic pain is described as “regaining quality of life,” bringing in an outcome that was never measured.

Each individual step looks defensible if you read it in isolation. Stacked, the chain converts a 27-patient acute pharmacokinetic study into a flagship pain-management endorsement. This is not a one-off failure. It is the most common pattern of inflation in industry-sponsored medical-device research, and it survives peer review because reviewers tend to evaluate each sentence on its own rather than the cumulative claim trajectory across the manuscript.
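The compounding is multiplicative, which is why individually modest steps can produce the 3× to 5× aggregate gap. A toy calculation, with a hypothetical per-step factor chosen only to show the mechanism:

```python
# Hypothetical: each of the five chain steps inflates confidence by a
# modest, individually defensible-looking factor (1.3x is illustrative,
# not a value measured from the paper).
per_step = 1.3
steps = 5

aggregate = per_step ** steps
print(f"{aggregate:.1f}x")  # five ~1.3x steps compound to ~3.7x
```

No single step looks alarming at 1.3×; the aggregate lands squarely in the inflation range measured for this paper.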

Each step looks defensible in isolation. Stacked, the chain converts a feasibility study into a flagship clinical endorsement.


What the Paper Should Have Said

There is a defensible scientific paper inside this manuscript. It looks something like this:

In 27 cannabis-experienced Israeli patients with neuropathic pain or CRPS, on stable concomitant analgesics, a single inhalation of 0.5 or 1.0 mg THC delivered via the Syqe device produced a reproducible PK profile with peak plasma THC at approximately 4 minutes. The 1.0 mg dose, but not the 0.5 mg dose, produced a statistically detectable VAS pain reduction versus placebo over 150 minutes; the magnitude of this effect may include an unmeasurable contribution from functional unblinding given the dose-graded “high” reports. Adverse events were dose-dependent and acute. Inferences about chronic-pain management, long-term safety, cognitive safety, quality of life, and comparative efficacy versus other delivery modalities cannot be drawn from this design and require dedicated trials.

That paper is approximately three to five times less confident than the published version. It is also approximately three to five times more useful to the reader, because the published version invites them to make decisions the evidence cannot support.


Why This Pattern Matters Beyond One Paper

Almog et al. is not unusual. It is a clean instance of a much broader pattern in clinical literature, and especially in industry-sponsored medical-device research: a methodologically defensible early-phase study is reframed in its Discussion and Conclusion sections as evidence for a clinical claim the study cannot actually support.

The mechanism is reliable. Discussion sections face the lowest peer-review scrutiny of any part of a manuscript. Reviewers focus on Methods and Results. Discussions are read as commentary. By the time a claim escalates from “significant VAS reduction over 150 minutes” to “patients regain their quality of life,” the inflation has compounded across three or four sentences, each of which looked reasonable to the reader who had just read the previous one.

This pattern matters because Discussion-section claims are what get cited. They flow into systematic reviews, into clinical guidelines, into regulatory submissions, into payer decisions, and into the marketing collateral that physicians and patients eventually encounter. The original 150-minute window of empirical observation is several layers of citation away by the time it reaches the people making clinical and financial decisions.

INGA314 was built to detect this pattern systematically — not by adjudicating whether a paper is “good” or “bad,” but by quantifying the gap between what its methods can support and what its conclusions claim. That gap, the confidence inflation factor, is measurable. It can be reported. It can be required of authors. It can flow into citation-weighting algorithms. And it gives reviewers, regulators, and investors a tool that they currently lack: a structured way to discount a published claim by the amount its own evidence does not justify.


About This Analysis

This case study was produced using INGA314, a methodology for detecting and quantifying logical failures in high-stakes documents — including scientific papers, regulatory submissions, investor materials, and contracts. INGA314 identifies four core failure modes: scope violations, proxy elevation, causal inflation, and confidence inflation. It produces quantified outputs (inflation factors, validity scores) that can be cited and replicated.

INGA314 is built and commercialized at inga314.ai. The tagline: “They find papers. We find flaws.” If you would like to apply INGA314 to a paper, pitch deck, regulatory filing, or contract — or if you would like to read more case studies — visit inga314.ai or daridor.blog.


Reference: Almog S, Aharon-Peretz J, Vulfsons S, Ogintz M, Abalia H, Lupo T, Hayon Y, Eisenberg E. The pharmacokinetics, efficacy, and safety of a novel selective-dose cannabis inhaler in patients with chronic pain: A randomized, double-blinded, placebo-controlled trial. Eur J Pain. 2020;24(8):1505–1516. doi: 10.1002/ejp.1605. PMID: 32445190. PMCID: PMC7496774.


Published by:

Dan D. Aridor

I hold an MBA from Columbia Business School (1994) and a BA in Economics and Business Management from Bar-Ilan University (1991). Previously, I served as a Lieutenant Colonel (reserve) in the Israeli Intelligence Corps. Additionally, I have extensive experience managing various R&D projects across diverse technological fields. In 2024, I founded INGA314.com, a platform dedicated to providing professional scientific consultations and analytical insights. I am passionate about history and science fiction, and I occasionally write about these topics.
