AI reads DNA for disease clues

Instead of a vague “harmful” label, this new AI can link specific mutations to likely diseases, helping doctors get to answers—and tailored care—faster.


Genetic testing has become routine in many areas of medicine, but interpreting the results is still a bottleneck. Labs can often say a DNA change is “pathogenic” (disease-causing), “benign”, or—frequently—declare a variant of uncertain significance (VUS), which can be clinically frustrating for patients and clinicians alike. The five-tier framework most services rely on is set out in the widely adopted ACMG/AMP standards for sequence variant interpretation, which focuses on how likely a variant is to be harmful, not necessarily what it does in the body or which condition it best matches.

Mount Sinai says its newly developed model is designed to fill that gap by predicting the most likely phenotypic outcomes or disease categories linked to specific mutations (beyond pathogenicity alone), rather than stopping at a generic “harmful” label. In its December 15, 2025 announcement (with wider coverage on December 16), the health system described the work as a step towards faster, more precise diagnosis and more targeted care planning, particularly when a variant’s impact is unclear or when the clinical picture spans multiple organ systems (Mount Sinai’s newsroom release).

This reframing may be clinically useful if it helps clinicians prioritise which diagnoses to investigate first—something that can affect how quickly families move from uncertainty to answers.

What Mount Sinai says the new model does

According to Mount Sinai, the system was trained to learn patterns that connect specific variants with specific diseases, using large-scale genetic and clinical information. The headline claim is not that the AI merely separates “bad” variants from “harmless” ones, but that it can prioritise likely disease outcomes tied to a mutation—information that could help clinicians decide which symptoms to monitor, which confirmatory tests to order, and which specialists to involve.
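To make the ranking idea concrete, here is a minimal, purely illustrative sketch of what "prioritising likely disease outcomes for a variant" could look like as decision support. This is not Mount Sinai's model or its API; the disease names and scores are invented for demonstration.

```python
# Purely illustrative sketch: ranking hypothetical disease links for one variant.
# None of these scores or disease categories come from Mount Sinai's model.

def rank_disease_hypotheses(variant_scores: dict[str, float], top_k: int = 3) -> list[tuple[str, float]]:
    """Return the top_k candidate diseases for a variant, highest score first."""
    ranked = sorted(variant_scores.items(), key=lambda item: item[1], reverse=True)
    return ranked[:top_k]

# Invented example: model-style scores linking a variant to disease categories.
scores = {
    "hypertrophic cardiomyopathy": 0.62,
    "dilated cardiomyopathy": 0.21,
    "skeletal myopathy": 0.09,
    "no known association": 0.08,
}

for disease, score in rank_disease_hypotheses(scores):
    print(f"{disease}: {score:.2f}")
```

Even this toy version shows the shift in framing: the output is an ordered shortlist of clinical hypotheses to investigate, not a single pathogenic/benign verdict.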

Independent coverage echoed that framing, describing the approach as disease-specific variant prediction that goes beyond traditional variant-effect tools (Medical Xpress report) and as a potential accelerator for rare disease work-ups where sequencing is available but interpretation lags (News-Medical coverage). Those reports broadly align with Mount Sinai's messaging, but performance claims should still be treated cautiously: although the work is now published in Nature Communications, the key tests will be external validation, calibration, and clearly characterised failure modes.

More broadly, this direction aligns with a wider trend in clinical genomics: using machine learning to turn genomic findings into more actionable clinical hypotheses.

Why this matters for the diagnostic odyssey

For people with suspected genetic or rare conditions, the main challenge is often not obtaining sequencing, but translating results into an answer. Studies document multi-year diagnostic delays across many rare diseases; a systematic review in the Orphanet Journal of Rare Diseases summarised how diagnostic delay remains common and varies by condition, healthcare setting and symptom onset (systematic review of diagnostic delay). Patient organisations also report long and costly “diagnostic odysseys”, shaped by misdiagnoses, limited specialist access and overlapping symptoms (EURORDIS on the diagnostic odyssey).

A disease-predictive AI could help in several ways:

  • Triage and prioritisation: When sequencing identifies multiple plausible variants, clinicians need to decide which ones best match the patient’s features. A model that suggests disease links might push the most compatible diagnoses to the top of the list.
  • Better use of scarce expertise: Clinical geneticists and variant scientists are limited resources. Tools that focus attention on the most likely disease–variant pairs could reduce time spent exploring low-yield leads.
  • Earlier surveillance and intervention: Even before a definitive diagnosis, clinicians can sometimes monitor for known complications of the most likely conditions—particularly in paediatrics, cardiology and neurology.

None of this replaces clinical judgement, and it would not remove the need for follow-up testing (for example, biochemical assays, imaging, family segregation studies or functional experiments). However, it could reduce the “search space” in a way that makes the diagnostic pathway more efficient and, for patients, less burdensome.

The data challenge: you’re only as good as your reference

Genetic interpretation depends heavily on reference databases and curated evidence. One major public resource is ClinVar, which aggregates submitted interpretations of variants and their relationship to phenotypes. ClinVar’s strengths—scale and openness—also highlight known challenges in the field, including conflicting assertions, incomplete phenotype detail, uneven representation across ancestries, and the reality that many variants remain VUS for years.

Disease-specific prediction adds another layer of difficulty. It is not just “is this variant damaging?”, but “damaging in what way, in which tissue, along which pathway, and does that pattern resemble Disease A more than Disease B?” That kind of mapping is complicated by:

  • Pleiotropy: the same gene (or even the same variant) can be associated with more than one clinical phenotype, depending on context.
  • Variable expressivity and penetrance: a variant’s effects can range from mild to severe, and some carriers may never develop symptoms.
  • Phenotyping noise: medical records do not always capture nuanced signs, and diagnosis codes can be broad or inconsistent.

Mount Sinai’s promise, in essence, is that AI can learn these relationships at scale. The risk is that models can also learn biases at scale—overweighting well-studied populations and conditions, and underperforming for underrepresented groups. Any clinical deployment would need clear reporting on population diversity, fairness-related analyses, and safeguards against overconfident outputs.
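One concrete safeguard implied here is stratified performance reporting. The sketch below, using entirely invented data, shows the simplest version: computing accuracy separately per group (for example, per ancestry) so that underperformance for underrepresented groups stays visible instead of being averaged away.

```python
# Illustrative sketch: per-group accuracy to surface uneven model performance.
# Groups, labels, and predictions are invented for demonstration only.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, true_label, predicted_label) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, truth, pred in records:
        total[group] += 1
        correct[group] += int(truth == pred)
    return {g: correct[g] / total[g] for g in total}

records = [
    ("group_A", "disease_X", "disease_X"),
    ("group_A", "disease_Y", "disease_Y"),
    ("group_A", "disease_X", "disease_X"),
    ("group_B", "disease_X", "disease_Y"),  # errors concentrated in group_B
    ("group_B", "disease_Y", "disease_Y"),
]

print(accuracy_by_group(records))
# group_A scores 1.0 and group_B scores 0.5, while the aggregate
# accuracy of 0.8 would have hidden the gap entirely.
```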

Where it could fit in the clinic—and where it can’t (yet)

If the model performs as described, an early use case is as a decision-support layer in existing genetics workflows. It might sit alongside standard variant classification, family history, and phenotype matching, offering ranked disease hypotheses linked to known or suspected variant associations.

That could be especially relevant for variants that are “probably harmful” but diagnostically non-specific. As MedlinePlus notes, VUS results can’t confirm or rule out a diagnosis on their own (MedlinePlus), and in practice are typically not used as the sole basis for major health-care decisions (NCI guidance in cancer genetics). A tool that suggests “this VUS resembles variants previously associated with Condition X” could help guide what additional evidence to seek—without prematurely labelling a patient.

However, there are clear boundaries:

  • It won’t replace confirmatory evidence. Predictions still need to be checked against clinical presentation, inheritance patterns and—where feasible—functional validation.
  • It may struggle with ultra-rare disorders. If the training data contains few examples of a condition, performance may be limited. This is a common constraint in rare-disease machine learning.
  • It must be interpretable enough for medicine. Clinicians and laboratories need to understand why a model ranks certain diseases highly, and what uncertainty looks like. Overconfident “black box” outputs can increase risk.
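As a sketch of what surfacing uncertainty might mean in practice, the hypothetical helper below refuses to return a ranked diagnosis when the top probability falls below a threshold, routing the case to clinician review instead. The threshold and probability values are invented, not drawn from any real system.

```python
# Illustrative sketch: abstaining on low-confidence predictions.
# Threshold and probabilities are invented, not from any real model.

def top_prediction_or_abstain(probs: dict[str, float], threshold: float = 0.5):
    """Return (disease, probability) if confident enough, else None (defer)."""
    disease, p = max(probs.items(), key=lambda item: item[1])
    if p < threshold:
        return None  # defer to clinician review rather than emit a weak guess
    return disease, p

confident = {"condition_X": 0.81, "condition_Y": 0.19}
uncertain = {"condition_X": 0.34, "condition_Y": 0.33, "condition_Z": 0.33}

print(top_prediction_or_abstain(confident))  # ('condition_X', 0.81)
print(top_prediction_or_abstain(uncertain))  # None → route to human review
```

An explicit abstain path is one simple way a decision-support tool can avoid the overconfident "black box" outputs the list above warns about.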

There is also a regulatory and governance question. Decision-support tools that influence diagnosis may fall into medical-device territory, which can trigger requirements around validation, monitoring and post-market surveillance. Mount Sinai has not, in its public materials, outlined a clinical certification pathway; that will be relevant if the model moves towards deployment.

Personalised treatment: promise, but not automatic

Mount Sinai’s release emphasises personalised treatment opportunities. That is plausible—if a disease prediction leads to an accurate diagnosis early enough to change management. In some conditions, targeted therapies exist; in others, supportive care and surveillance are the main benefits. Either way, earlier diagnosis can still matter: it may avoid unnecessary procedures, connect families to specialist clinics and inform reproductive planning.

It is also worth separating what AI can predict from what the healthcare system can deliver. Even with strong predictions:

  • access to specialist services may be limited;
  • treatments may be expensive or unavailable; and
  • evidence for genotype-guided therapy is strong in some areas and more limited in others.

Disease-linked variant prediction could be a valuable entry point to precision medicine, but it does not guarantee that precision medicine is accessible or appropriate in every case.

The bottom line for 2025: a shift towards actionable genomics

As of today, Mount Sinai’s announcement signals a shift in how some clinical AI tools in genomics aim to operate: moving beyond “is this variant bad?” to “what disease is this variant most likely pointing to?”. If the approach is validated across diverse populations and integrated carefully into clinical workflows, it could help reduce uncertainty around ambiguous findings and support more targeted follow-up testing.

For now, a measured view is warranted. The concept aligns with real clinical needs; the key test will be transparent performance reporting, external validation, and evidence that the tool improves patient-relevant outcomes—not only prediction metrics—when used in everyday care.
