AI can detect cancer, but it can also recognize patient characteristics—an ability that affects diagnostic accuracy. A new study shows that AI systems used in pathology perform unevenly across race, gender, and age. Researchers identified three main causes of this bias and developed a framework that significantly reduces disparities. The findings highlight the need to test medical AI for fairness to ensure reliable cancer care for all patients.
How Pathology Guides Cancer Diagnosis
Pathologists traditionally examine thin slices of tissue under a microscope to detect cancer. They assess cell patterns, shapes, and tissue architecture to determine cancer presence, type, and stage. To the human eye, these slides reveal no demographic information about the patient.
When AI Sees More Than Disease
A study led by Harvard Medical School found that pathology AI models can infer demographic information from tissue slides. This capability introduces bias because models may rely on patterns associated with race, age, or gender when diagnosing cancer.
The researchers tested multiple AI models and discovered that performance varied across patient populations. To address this, they developed FAIR-Path, a framework that dramatically reduces bias.
Why Bias Occurs in Pathology AI
Three main factors drive disparities in AI performance:
- Uneven Training Data: AI models are often trained on datasets that overrepresent some groups and underrepresent others, making diagnoses less accurate for underrepresented populations.
- Disease Incidence Differences: Some cancers occur more frequently in specific populations, so models perform better for those groups but struggle with rarer presentations.
- Subtle Molecular Signals: AI can detect mutations and molecular patterns linked to demographics. Models may use these patterns as shortcuts, which reduces accuracy for populations with less common mutations.
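The fairness testing the study calls for starts with a simple step: breaking a model's performance down by demographic subgroup instead of reporting one overall number. A minimal sketch of such an audit is below; the predictions, labels, and group names are illustrative stand-ins, not data from the study.

```python
# Hypothetical fairness audit: compare a model's accuracy across
# demographic subgroups and report the largest gap between any two groups.
# All inputs here are toy values for illustration.
from collections import defaultdict

def subgroup_accuracy(predictions, labels, groups):
    """Return accuracy per demographic subgroup."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        if pred == label:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

def disparity_gap(accuracies):
    """Largest accuracy gap between any two subgroups."""
    return max(accuracies.values()) - min(accuracies.values())

# Toy example: the model is right 4/5 times for group A, 2/4 for group B.
preds  = [1, 0, 1, 1, 0, 1, 1, 0, 0]
labels = [1, 0, 1, 1, 1, 0, 1, 0, 1]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B"]

acc = subgroup_accuracy(preds, labels, groups)
print(acc)                 # per-group accuracy
print(disparity_gap(acc))  # the gap a debiasing framework aims to shrink
```

A model can score well on aggregate accuracy while hiding exactly this kind of gap, which is why the researchers argue subgroup-level evaluation should be standard for medical AI.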
Yu, a lead researcher, explained, “AI is so powerful that it can detect obscure biological signals beyond human observation. This sometimes causes models to focus more on demographic features than the disease itself.”
Reducing Bias With FAIR-Path
FAIR-Path applies contrastive learning, teaching AI to focus on meaningful differences between cancer types while ignoring less relevant demographic distinctions. After applying FAIR-Path, diagnostic disparities dropped by 88 percent.
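The general idea behind contrastive learning can be sketched in a few lines. The study does not publish FAIR-Path's exact loss, so the code below shows a standard supervised contrastive loss as an assumed stand-in: embeddings that share a cancer-type label are treated as positives regardless of demographic group, so the loss rewards features that track the disease rather than the patient.

```python
# A minimal sketch of a supervised contrastive loss, the general technique
# FAIR-Path builds on. This is NOT the framework's actual implementation;
# the loss form and all inputs here are illustrative assumptions.
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return [x / n for x in v]

def supcon_loss(embeddings, labels, temperature=0.1):
    """Average contrastive loss: pull same-label embeddings together,
    push different-label embeddings apart."""
    z = [normalize(e) for e in embeddings]
    n = len(z)
    sim = [[dot(z[i], z[j]) / temperature for j in range(n)] for i in range(n)]
    total = 0.0
    for i in range(n):
        positives = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not positives:
            continue
        log_denom = math.log(sum(math.exp(sim[i][j]) for j in range(n) if j != i))
        total += -sum(sim[i][j] - log_denom for j in positives) / len(positives)
    return total / n

# Labels are cancer types; demographics are deliberately ignored.
cancer = [0, 0, 1, 1]
# Embeddings clustered by cancer type score a low loss...
by_disease = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]]
# ...while embeddings clustered by a demographic shortcut score a high one.
by_shortcut = [[1.0, 0.0], [0.0, 1.0], [0.9, 0.1], [0.1, 0.9]]
print(supcon_loss(by_disease, cancer) < supcon_loss(by_shortcut, cancer))  # prints True
```

Because the label driving the loss is the diagnosis rather than a demographic attribute, a model trained this way is penalized for organizing its features around race, age, or gender, which is consistent with the study's finding that bias can be reduced without perfectly balanced datasets.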
Yu said, “This adjustment allows models to learn robust features, making them fairer and more generalizable across populations. It shows that we can reduce bias even without perfectly balanced datasets.”
Looking Ahead
Researchers plan to test pathology AI in different regions, clinical settings, and demographic contexts. They are also exploring FAIR-Path for limited-data situations. Ultimately, the goal is to create AI tools that assist human experts with accurate, fair, and fast cancer diagnoses for all patients.
Yu concluded, “By carefully designing AI systems, we can ensure they perform well across every population, improving outcomes and equity in healthcare.”
