for faster, safer clinical decisions
Strengthening clinical trials with regulatory-grade external controls — supporting investigators, research institutions, and sponsors to generate stronger evidence in data- and resource-constrained settings.
Regulatory confidence
Auditability, traceability, defensible evidence
Clinical insight
Bias-controlled evidence from trials + RWD
Economic impact
Earlier decisions, reduced recruitment burden, lower risk




Trial costs are high and vary widely across programs
Success probabilities remain low to moderate across phases
Recruitment bottlenecks frequently delay timelines
Evidence assembled post-hoc often faces skepticism
As a result, promising studies may become underpowered, delayed, or infeasible.
Our methodology allows investigators to:
Fragmented reasoning can lead to failure even when the biology is sound.
FDA
Increasing clarity on prespecification, data provenance, and sensitivity analyses for externally controlled trials.

MHRA
Acknowledges that RWD external controls may be acceptable when RCTs are infeasible or would cause major delays.

EMA
Moving toward a Europe-wide reflection framework to harmonize approaches.
External controls are discussable — but only with strong governance and justification.
Regulators historically said "No" to external controls because key identification assumptions were violated. Today, global guidance increasingly calls for hybrid designs.
European Medicines Agency (EMA)
The EMA's latest reflection paper on external controls demands rigor. It explicitly scrutinizes the "methodological constraints" required to turn real-world data into pivotal evidence. Variacle provides the exact answer to this scrutiny: an auditable causal framework that validates exchangeability assumptions instead of ignoring them, aligning with the EU's demand for defensible causal conclusions.
Medicines and Healthcare products Regulatory Agency (MHRA)
The MHRA has released draft guidance on External Control Arms (ECAs). While RCTs remain the gold standard, they acknowledge ECAs can provide credible evidence when RCTs are unethical or infeasible. Key requirement: hybrid designs over single-arm studies.
U.S. Food and Drug Administration (FDA)
Variacle is the operational engine for the FDA's vision. We align with the 'Framework for FDA's RWE Program' and 'Assessing EHR Data'. We enable a distributed architecture in the spirit of FDA's Sentinel, but replace rigid data harmonization with causal estimands computed without moving the data.
Europe (DARWIN EU®), Japan (MID-NET®), and China (Hainan Pilot) are all aligning. The winners won't be the biggest datasets—they'll be the clearest protocols and auditable code.
RWE supported a new loading dose in children.
Chart review provided effectiveness evidence where RCTs were not feasible.
Registry data served as pivotal evidence for transplant patients.
National death records used to assess mortality.
Not proposing a new trial design
Not asking teams to change regulations
We structure how evidence is generated and combined across development environments
In randomized trials, randomization identifies treatment effects for the patients and outcomes defined in the protocol—the classical efficacy question in the trial population. Real-world data is widely used afterward for effectiveness: how medicines perform across routine care, broader cohorts, and long horizons. That split is familiar—and many analytics stacks stop there, optimizing for descriptive or population-level RWE.
Variacle is built for a different regulatory moment. When single-arm, open-label, or ethically constrained designs make a concurrent control infeasible, sponsors still need a defensible answer to an efficacy-style question. We use governed real-world sources to complete the missing counterfactuals for the trial estimand—not to substitute a new target population or to blur the line with post hoc effectiveness narratives.
That posture keeps you closer to what agencies emphasize—the effect in the trial population and the credibility of identification under explicit assumptions—rather than leaning on the hardest global transportability claims. Where competitors sell federated queries, dashboards, or loose real-world comparisons, Variacle sells protocol-aligned causal evidence production.
Trial estimand first
RWD is anchored to the trial question and population, not repurposed as a generic effectiveness study after the fact.
Hospital / registry
Local compute
Sponsor
RWD source
RWD
Federated posture
Data remains with each institution (federated approach).
Analyses run locally under a governed framework.
Evidence is aggregated without sharing raw patient data.
Result: credible external evidence without compromising data ownership.
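The federated posture above can be sketched in code. This is a minimal illustration, not Variacle's implementation: each site computes local summary statistics (a hypothetical `SiteSummary`), and only those aggregates, never patient-level rows, leave the institution.

```python
# Hypothetical sketch of the federated posture: sites share only
# summary statistics, which the sponsor pools centrally.
from dataclasses import dataclass

@dataclass
class SiteSummary:
    n: int        # number of patients at the site
    mean: float   # local outcome mean
    ss: float     # local sum of squared deviations

def summarize_site(outcomes: list) -> SiteSummary:
    """Runs locally at each hospital/registry; raw data never leaves."""
    n = len(outcomes)
    mean = sum(outcomes) / n
    ss = sum((y - mean) ** 2 for y in outcomes)
    return SiteSummary(n, mean, ss)

def pool(summaries: list) -> tuple:
    """Sponsor-side aggregation into a pooled mean and variance."""
    n_total = sum(s.n for s in summaries)
    pooled_mean = sum(s.n * s.mean for s in summaries) / n_total
    # combine within-site and between-site variation
    pooled_ss = sum(s.ss + s.n * (s.mean - pooled_mean) ** 2
                    for s in summaries)
    return pooled_mean, pooled_ss / (n_total - 1)
```

The pooled mean and variance match what a centralized analysis of the combined rows would give, which is the point: evidence aggregates, data stays put.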
Reducing bias when borrowing external evidence
External datasets often differ in baseline severity and selection mechanisms
These differences create unmeasured confounding across sources
A baseline link variable (W) bridges populations across datasets
Under explicit identification conditions, we recalibrate the external control
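As a toy illustration of the recalibration idea (not the actual Variacle method), suppose the link variable W is a discrete severity stratum. Reweighting external-control outcomes so W's distribution matches the trial population removes the imbalance on W; the function name `recalibrated_mean` is ours, for illustration only.

```python
# Illustrative recalibration: reweight external-control outcomes so the
# distribution of the baseline link variable W matches the trial's.
from collections import Counter

def recalibrated_mean(trial_w, external_w, external_y):
    """Weighted external-control mean, standardized to the trial's W."""
    trial_dist = Counter(trial_w)      # target W distribution (trial)
    ext_dist = Counter(external_w)     # source W distribution (external)
    n_trial, n_ext = len(trial_w), len(external_w)
    num = den = 0.0
    for w, y in zip(external_w, external_y):
        # weight = P_trial(W = w) / P_external(W = w)
        weight = (trial_dist[w] / n_trial) / (ext_dist[w] / n_ext)
        num += weight * y
        den += weight
    return num / den
```

Under the explicit identification conditions noted above, this weighted mean targets the control outcome in the trial population rather than in the external source.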
The cards above summarize how we borrow strength across sources. The section below is the technical backbone: why naive pooling fails under scrutiny, what regulators worry about, and how Variacle's framing relates to identification—not slogans.
From method credibility
Why standard data integration often fails to convince regulators—and how that connects to explicit assumptions, intervals, and the link-variable view you saw above.
Why standard data integration fails to convince regulators.
It is mathematically impossible to shorten confidence intervals using observational data when the magnitude of the bias is unknown. As Chen, Zhang, and Ye (2021) prove, any valid confidence interval must cover the worst-case bias, so hybrid estimators that combine observational and experimental data cannot produce shorter confidence intervals than estimators using the experimental data alone.
If you cannot quantify the bias (h), you cannot shrink the interval below what the RCT alone provides.
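A toy numeric sketch of this point, with made-up numbers: if the external estimate carries an unknown bias bounded only by h, a valid interval must be widened by h, and unless h is small the worst-case hybrid interval is no shorter than the RCT-only one.

```python
# Worst-case interval arithmetic: h bounds the unknown confounding bias
# of the hybrid estimate (illustrative numbers, z = 1.96 for 95% CIs).
def rct_ci(mean, se, z=1.96):
    return (mean - z * se, mean + z * se)

def hybrid_worst_case_ci(mean, se, h, z=1.96):
    """A valid interval must cover every bias in [-h, h]:
    widen the sampling interval by the bias bound."""
    return (mean - z * se - h, mean + z * se + h)

def width(ci):
    return ci[1] - ci[0]

rct = rct_ci(0.30, 0.10)                            # RCT-only interval
hybrid = hybrid_worst_case_ci(0.30, 0.07, h=0.12)   # smaller se, unknown bias
# Despite the smaller standard error, the hybrid interval is wider.
```

With se dropping from 0.10 to 0.07 but h = 0.12, the worst-case hybrid interval (width 0.514) exceeds the RCT-only interval (width 0.392): precision "gains" from borrowing evaporate once the bias bound is honest.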
Regulators routinely reject external controls because current methodology relies on a fragile assumption: Mean exchangeability over studies (S).
This assumes that enrolling in the RCT (S = 0) does not affect a patient's outcome compared to the real world (S = 1). Existing methods rely on this conditional independence, but in reality, it is almost always violated.
Notation:
• a ∈ {0,1}: treatment indicator
• Ya: potential outcome under treatment a
• S ∈ {0,1}: study indicator (0=RCT, 1=RWD)
• W: observed covariates / confounders
Subjects often adhere more strictly to treatment regimens in an RCT (S = 0) than in the real world. The "Trial Effect" modifies the outcome, violating exchangeability.
Digital health endpoints measured via different devices introduce batch effects. Manufacturer discrepancies cause potential outcomes in RWD to differ from RCT, even for the same patient.
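The "trial effect" violation above can be made concrete with a minimal simulation (hypothetical numbers): identical patients have better outcomes under RCT adherence (S = 0) than in routine care (S = 1), so E[Ya | S = 0] differs from E[Ya | S = 1] and mean exchangeability over S fails.

```python
# Minimal simulation of the trial effect: stricter adherence in the
# RCT (S = 0) shifts outcomes relative to routine care (S = 1),
# violating mean exchangeability over the study indicator S.
import random

random.seed(0)

def outcome(s: int) -> float:
    adherence_boost = 0.4 if s == 0 else 0.0  # assumed trial effect
    return 1.0 + adherence_boost + random.gauss(0, 0.1)

rct_mean = sum(outcome(0) for _ in range(10_000)) / 10_000
rwd_mean = sum(outcome(1) for _ in range(10_000)) / 10_000
# Same patients, different settings: naive borrowing of the RWD control
# would bias the estimated treatment effect by roughly the 0.4 boost.
```

This is why the methods cited below condition on explicit identification assumptions rather than asserting exchangeability by fiat.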
[1] Chen, S., Zhang, B., & Ye, T. (2021). Minimax rates and adaptivity in combining experimental and observational data. arXiv:2109.10522
[2] Valancius, M., Pang, H., Zhu, J., Cole, S. R., Funk, M. J., & Kosorok, M. R. (2024). A causal inference framework for leveraging external controls. Biometrics, 80(4), ujae095.
[3] Colnet, B., Mayer, I., Chen, G., et al. (2020). Causal inference methods for combining randomized and observational data: a review. Statistical Science.
Use external evidence to estimate treatment effects when no internal control exists.
Increase effective sample size and reduce uncertainty.
Reduce control arm burden when enrollment is slow.
Support evidence generation when randomization is infeasible or ethically complex.
Designed for scenarios where traditional controls are difficult, not to replace RCTs when feasible.
Defined before analysis and aligned with trial objectives.
Documented rationale for dataset inclusion/exclusion.
Robustness checks clearly reported.
Reproducible transformations and decision documentation.
Documentation aligned with lifecycle governance obligations.
In regulated settings, the artifacts are as important as the method.
Lower control-arm recruitment pressure in constrained settings.
Borrowing external evidence to reduce uncertainty.
Earlier signal clarity → earlier go/no-go decisions.
Value is modeled conservatively — not promised.
Typical pilot duration: 8–12 weeks
A structured pilot to de-risk evidence before committing at scale.
We are currently collaborating with clinical investigators, research institutions, and industry partners interested in strengthening the methodological foundation of clinical studies in data-constrained settings.
If you are preparing a proposal or exploring new study designs, we would be happy to discuss how external controls could support your project.
Start a conversation
When evidence compounds, uncertainty shrinks.