Variacle

Regulatory-grade external controls

for faster, safer clinical decisions

Strengthening clinical trials with regulatory-grade external controls — supporting investigators, research institutions, and sponsors to generate stronger evidence in data- and resource-constrained settings.

Regulatory confidence

Auditability, traceability, defensible evidence

Clinical insight

Bias-controlled evidence from trials + RWD

Economic impact

Earlier decisions, reduced recruitment burden, lower risk

  • Supporting foundation
  • Healthstart
  • Celera
  • ENISA — Empresa Nacional de Innovación

Clinical studies often face structural constraints

  • Limited access to sufficiently large patient cohorts
  • Difficulty recruiting control arms in clinical trials
  • Fragmented clinical datasets across institutions
  • Budget constraints in publicly funded research

As a result, promising studies may become underpowered, delayed, or infeasible.

Variacle enables the use of regulatory-grade external controls to strengthen clinical evidence.

Our methodology allows investigators to:

  • Contextualize single-arm studies
  • Augment underpowered randomized trials
  • Reduce control-arm recruitment burden
  • Generate stronger evidence using existing data sources

Stalled trials and fragmented evidence slow development

Trial costs are high and vary widely across programs

Success probabilities remain low to moderate across phases

Recruitment bottlenecks frequently delay timelines

Evidence assembled post-hoc often faces skepticism

Fragmented reasoning can lead to failure even when the biology is sound.

Why now: regulators are formalizing expectations for external controls

U.S. Food and Drug Administration

FDA

Increasing clarity on prespecification, data provenance, and sensitivity analyses for externally controlled trials.

Medicines and Healthcare products Regulatory Agency

MHRA

Acknowledges RWD external controls may be acceptable when RCTs are infeasible or would cause major delay.

European Medicines Agency

EMA

Moving toward a Europe-wide reflection framework to harmonize approaches.

External controls are discussable — but only with strong governance and justification.

Deeper regulatory landscape and commercial proof ↓

The tides have turned

Regulators historically said "No" to external controls because key identifying assumptions were violated. Today, global guidance is demanding hybrid designs.

Key regulatory shifts

European Medicines Agency (EMA)

The "methodological constraints" era

The EMA's latest reflection paper on external controls demands rigor. It explicitly scrutinizes the "methodological constraints" required to turn real-world data into pivotal evidence. Variacle provides the exact answer to this scrutiny: an auditable causal framework that validates exchangeability assumptions instead of ignoring them, aligning with the EU's demand for defensible causal conclusions.

Medicines and Healthcare products Regulatory Agency (MHRA)

A pragmatic step toward RWE

The MHRA has released draft guidance on External Control Arms (ECAs). While RCTs remain the gold standard, the guidance acknowledges that ECAs can provide credible evidence when RCTs are unethical or infeasible. Key requirement: hybrid designs over single-arm studies.

U.S. Food and Drug Administration (FDA)

The 21st Century Cures Act era

Variacle is the operational engine for the FDA's vision. We align with the 'Framework for FDA's RWE Program' and 'Assessing EHR Data'. We enable a distributed architecture in the spirit of FDA's Sentinel, but replace rigid data harmonization with causal-inference estimands computed without moving data.

Global

Same principles, different railways

Europe (DARWIN EU®), Japan (MID-NET®), and China (Hainan Pilot) are all aligning. The winners won't be the biggest datasets—they'll be the clearest protocols and auditable code.

The evidence: RWE success stories

Vimpat® (UCB)

RWE supported new loading dose in children.

Vijoice® (Novartis)

Chart review provided effectiveness evidence where RCTs were not feasible.

Orencia® (BMS)

Registry data served as pivotal evidence for transplant patients.

Actemra® (Genentech)

National death records used to assess mortality.

A governed evidence engine for development decisions

Not proposing a new trial design

Not asking teams to change regulation

We structure how evidence is generated and combined across development environments

From fragmented analyses to governed evidence production

Real-world data, reframed for efficacy—not only effectiveness

In randomized trials, randomization identifies treatment effects for the patients and outcomes defined in the protocol—the classical efficacy question in the trial population. Real-world data is widely used afterward for effectiveness: how medicines perform across routine care, broader cohorts, and long horizons. That split is familiar—and many analytics stacks stop there, optimizing for descriptive or population-level RWE.

Variacle is built for a different regulatory moment. When single-arm, open-label, or ethically constrained designs make a concurrent control infeasible, sponsors still need a defensible answer to an efficacy-style question. We use governed real-world sources to complete the missing counterfactuals for the trial estimand—not to substitute a new target population or to blur the line with post hoc effectiveness narratives.

That posture keeps you closer to what agencies emphasize—the effect in the trial population and the credibility of identification under explicit assumptions—rather than leaning on the hardest global transportability claims. Where competitors sell federated queries, dashboards, or loose real-world comparisons, Variacle sells protocol-aligned causal evidence production.

Trial estimand first

RWD is anchored to the trial question and population, not repurposed as a generic effectiveness study after the fact.

External controls without moving raw data

  • Data remains with each owner or institution
  • Analyses run locally using a common governed framework
  • Evidence is aggregated across sources, including public/external datasets
  • No raw patient-level data needs to be shared between sponsors or partners
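As a minimal sketch of this posture (illustrative only: the function names, the toy data, and the choice of fixed-effect inverse-variance pooling are assumptions, not Variacle's actual pipeline), each site computes only a summary statistic and its variance locally, and the coordinator pools those summaries:

```python
import math

def local_summary(outcomes_treated, outcomes_control):
    """Runs at each institution: returns only an effect estimate
    and its variance -- no patient-level data leaves the site."""
    def mean_and_se2(xs):
        m = sum(xs) / len(xs)
        v = sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
        return m, v / len(xs)  # mean and squared standard error
    mt, vt = mean_and_se2(outcomes_treated)
    mc, vc = mean_and_se2(outcomes_control)
    return mt - mc, vt + vc  # difference in means and its variance

def aggregate(summaries):
    """Runs at the coordinating site: fixed-effect
    inverse-variance pooling of the local estimates."""
    weights = [1.0 / v for _, v in summaries]
    est = sum(w * e for (e, _), w in zip(summaries, weights)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return est, se

# Example: two sites share only (estimate, variance) pairs.
site_a = local_summary([2.1, 1.9, 2.4, 2.0], [1.0, 1.2, 0.8, 1.1])
site_b = local_summary([2.3, 2.2, 1.8], [0.9, 1.3, 1.0])
pooled_effect, pooled_se = aggregate([site_a, site_b])
```

Only the two (estimate, variance) pairs cross institutional boundaries; the raw outcome lists never leave their sites.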


[Diagram: federated data flow. Hospital/registry and external RWD sources run local compute; only aggregated evidence reaches the sponsor.]

Federated posture

Variacle integrates evidence across distributed clinical datasets while respecting institutional data governance and data access constraints.

Data stays local

Data remains with each institution (federated approach).

Governed analyses

Analyses run locally under a governed framework.

Evidence without sharing PHI

Evidence is aggregated without sharing raw patient data.

Result: credible external evidence without compromising data ownership.

Particularly relevant for

  • Rare disease trials
  • Orphan drug research
  • Drug repositioning studies
  • Publicly funded trials with recruitment constraints
  • Pediatric or underrepresented populations

Method credibility: controlling confounding with a link variable

Reducing bias when borrowing external evidence

External datasets often differ in baseline severity and selection mechanisms

These differences create unmeasured confounding across sources

A baseline link variable (W) bridges populations across datasets

Under explicit identification conditions, we recalibrate the external control

The cards above summarize how we borrow strength across sources. The section below is the technical backbone: why naive pooling fails under scrutiny, what regulators worry about, and how Variacle's framing relates to identification—not slogans.
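One deliberately simplified reading of "recalibrate the external control" is post-stratification on a discrete link variable W. Everything in this sketch (names, toy data, and the method choice itself) is an illustrative assumption, not Variacle's actual methodology:

```python
from collections import Counter

def recalibrate_external_control(trial_W, external):
    """Post-stratify external-control outcomes so the distribution of a
    baseline link variable W matches the trial population.
    trial_W:  list of W values observed in the trial population.
    external: list of (W, outcome) pairs from the external source.
    Returns the reweighted mean outcome: an external-control estimate
    targeted at the trial population, not the source population."""
    target = Counter(trial_W)  # desired W distribution (from the trial)
    n_trial = len(trial_W)
    by_stratum = {}
    for w, y in external:
        by_stratum.setdefault(w, []).append(y)
    est = 0.0
    for w, count in target.items():
        if w not in by_stratum:
            raise ValueError(f"no external support for stratum W={w}")
        stratum_mean = sum(by_stratum[w]) / len(by_stratum[w])
        est += (count / n_trial) * stratum_mean
    return est

# Trial is 70% mild (W=0) / 30% severe (W=1); external source is 50/50.
trial_W = [0] * 7 + [1] * 3
external = [(0, 1.0), (0, 1.2), (1, 2.0), (1, 2.2)]
adjusted = recalibrate_external_control(trial_W, external)  # 0.7*1.1 + 0.3*2.1
```

The raised error when a trial stratum has no external support mirrors the positivity condition that any identification argument for borrowing must make explicit.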

From method credibility

Technical foundation: beyond naive borrowing

Why standard data integration often fails to convince regulators—and how that connects to explicit assumptions, intervals, and the link-variable view you saw above.

Beyond the impossibility theorem

Why standard data integration fails to convince regulators.

Problem 01: the math

The limit of adaptation

It is mathematically impossible to shorten confidence intervals using observational data if the magnitude of the confounding bias is unknown. As proven by Chen, Zhang, and Ye (2021), any valid confidence interval must cover the worst-case bias, so hybrid estimators that combine observational and experimental data cannot produce shorter intervals than estimators using the experimental data alone.

The impossibility theorem

  • CI: any valid confidence interval
  • τ: true treatment effect
  • h: unknown confounding bias
  • n: sample size of the RCT

If you can't quantify the bias (h), you can't shrink the interval below what the RCT alone provides.
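A schematic rendering of the result (the notation is assumed for illustration, not a verbatim statement of the theorem): if the observational estimator's bias b is only known to lie in [-h, h], then

```latex
\inf_{\mathrm{CI}} \; \sup_{|b| \le h} \; \mathbb{E}\big[\operatorname{length}(\mathrm{CI})\big]
\;\gtrsim\; h \;+\; \frac{c}{\sqrt{n}} .
```

When h is unknown (effectively unbounded a priori), the worst-case term dominates, so no valid interval built on the combined data can beat the RCT-only rate of order 1/√n.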

Problem 02: the assumption

The "mean exchangeability" trap

Regulators routinely reject external controls because current methodology relies on a fragile assumption: Mean exchangeability over studies (S).

This assumes that enrolling in the RCT (S = 0) does not affect a patient's outcome compared to the real world (S = 1). Existing methods rely on this conditional independence, but in reality, it is almost always violated.

Notation:

• a ∈ {0,1}: treatment indicator

• Y^a: potential outcome under treatment a

• S ∈ {0,1}: study indicator (0=RCT, 1=RWD)

• W: observed covariates / confounders
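In this notation, mean exchangeability over studies can be written out explicitly (this display is a standard formulation in the external-controls literature, e.g. Valancius et al. 2024; the exact typesetting is ours):

```latex
\mathbb{E}\big[\,Y^{a} \mid W,\; S=0\,\big]
\;=\;
\mathbb{E}\big[\,Y^{a} \mid W,\; S=1\,\big],
\qquad a \in \{0,1\}.
```

In words: given the observed covariates W, whether an outcome was observed in the trial or in routine care carries no additional information about the potential outcome. The two failure modes below are exactly the ways this equality breaks.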

Protocol-driven adherence

Subjects often adhere more strictly to treatment regimens in an RCT (S = 0) than in the real world. The "Trial Effect" modifies the outcome, violating exchangeability.

Measurement inconsistency

Digital health endpoints measured via different devices introduce batch effects. Manufacturer discrepancies can cause potential outcomes in the RWD to differ from those in the RCT, even for the same patient.

Core literature

[1] Chen, S., Zhang, B., & Ye, T. (2021). Minimax rates and adaptivity in combining experimental and observational data. arXiv:2109.10522

[2] Valancius M, Pang H, Zhu J, Cole SR, Funk MJ, Kosorok MR. (2024). A causal inference framework for leveraging external controls. Biometrics, 80(4), ujae095.

[3] Colnet B, Mayer I, Chen G, et al. (2020). Causal inference methods for combining randomized and observational data: a review. Statistical Science.

Where external controls add value in real development settings

Contextualize single-arm trials

Use external evidence to interpret treatment effect when no internal control exists.

Augment underpowered RCT controls

Increase effective sample size and reduce uncertainty.

Rescue recruitment-limited studies

Reduce control arm burden when enrollment is slow.

Rare disease or ethical placebo constraints

Support evidence generation when randomization is infeasible or ethically complex.

Designed for scenarios where traditional controls are difficult, not to replace RCTs when feasible.

Regulatory-grade = method + artifacts + traceability

Prespecified estimand & SAP

Defined before analysis and aligned with trial objectives.

Transparent data selection & provenance

Documented rationale for dataset inclusion/exclusion.

Diagnostics & sensitivity analyses

Robustness checks clearly reported.

Full traceability & audit trail

Reproducible transformations and decision documentation.

Sponsor oversight support

Documentation aligned with lifecycle governance obligations.

In regulated settings, the artifacts are as important as the method.

Impact and ROI: value from scale, speed, and reduced uncertainty

Reduced accrual burden

Lower control-arm recruitment pressure in constrained settings.

Increased effective sample size (ESS)

Borrowing external evidence to reduce uncertainty.

Shorter decision timelines

Earlier signal clarity → earlier go/no-go decisions.

Value is modeled conservatively — not promised.
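The "effective sample size" in the card above has a standard concrete form, the Kish ESS, shown here as an illustrative proxy (Variacle's internal accounting may differ):

```python
def effective_sample_size(weights):
    """Kish effective sample size: how many equally weighted external
    controls the weighted set is 'worth'. Equal weights give ESS = n;
    highly uneven weights shrink ESS toward 1."""
    s1 = sum(weights)
    s2 = sum(w * w for w in weights)
    return s1 * s1 / s2

# 100 external controls, but the calibration weights are uneven,
# so the borrowed information is worth far fewer than 100 patients.
weights = [1.0] * 80 + [5.0] * 20
ess = effective_sample_size(weights)
```

Reporting ESS alongside the nominal external-control count keeps the "reduced uncertainty" claim honest: heavy reweighting buys comparability at the cost of effective information.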

Start small. Generate evidence. Then scale.

Step 1

Feasibility & data audit (2–4 weeks)

  • Endpoint availability
  • Covariate capture
  • Data quality & comparability
  • Regulatory context assessment
Step 2

Prespecified ECA build (4–6 weeks)

  • Estimand definition
  • SAP development
  • Federated execution
  • Diagnostics & sensitivity pack
Step 3

Decision memo & scale plan (2 weeks)

  • Executive summary
  • Risk assessment
  • Program applicability review
  • Scale roadmap across portfolio

Typical pilot duration: 8–12 weeks

A structured pilot to de-risk evidence before committing at scale.

Variacle can support research teams through

  • Study design and external control strategy
  • Statistical analysis plans and methodological framework
  • Robustness and sensitivity analyses
  • Evidence packages suitable for regulatory or public health evaluation

We are currently collaborating with clinical investigators, research institutions, and industry partners interested in strengthening the methodological foundation of clinical studies in data-constrained settings.

If you are preparing a proposal or exploring new study designs, we would be happy to discuss how external controls could support your project.

Start a conversation

Thank you

When evidence compounds, uncertainty shrinks.