Explainable AI for computational pathology identifies model limitations and tissue biomarkers

Kaczmarzyk, Jakub, Koo, Peter, Saltz, Joel (September 2024) Explainable AI for computational pathology identifies model limitations and tissue biomarkers. arXiv. ISSN 2331-8422 (Submitted)

PDF: 10.48550.arXiv.2409.03080.pdf - Submitted Version
Available under License Creative Commons Attribution Non-commercial Share Alike.


Abstract

Introduction: Deep learning models hold great promise for digital pathology, but their opaque decision-making processes undermine trust and hinder clinical adoption. Explainable AI methods are essential to enhance model transparency and reliability.

Methods: We developed HIPPO, an explainable AI framework that systematically modifies tissue regions in whole slide images to generate image counterfactuals, enabling quantitative hypothesis testing, bias detection, and model evaluation beyond traditional performance metrics. HIPPO was applied to a variety of clinically important tasks, including breast metastasis detection in axillary lymph nodes, prognostication in breast cancer and melanoma, and IDH mutation classification in gliomas. In computational experiments, HIPPO was compared against traditional metrics and attention-based approaches to assess its ability to identify key tissue elements driving model predictions.

Results: In metastasis detection, HIPPO uncovered critical model limitations that were undetectable by standard performance metrics or attention-based methods. For prognostic prediction, HIPPO outperformed attention by providing more nuanced insights into tissue elements influencing outcomes. In a proof-of-concept study, HIPPO facilitated hypothesis generation for identifying melanoma patients who may benefit from immunotherapy. In IDH mutation classification, HIPPO more robustly identified the pathology regions responsible for false negatives compared to attention, suggesting its potential to outperform attention in explaining model decisions.

Conclusions: HIPPO expands the explainable AI toolkit for computational pathology by enabling deeper insights into model behavior. This framework supports the trustworthy development, deployment, and regulation of weakly supervised models in clinical and research settings, promoting their broader adoption in digital pathology.
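The counterfactual idea described in the Methods section can be illustrated with a minimal sketch: remove the patch embeddings corresponding to a tissue region from a whole-slide bag and measure how a weakly supervised model's prediction changes (a "necessity" test). Everything below is a toy stand-in, not HIPPO's actual implementation — the attention-MIL head, embedding dimensions, and region indices are hypothetical placeholders chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def mil_predict(patch_embeddings, w_attn, w_cls):
    """Toy attention-based MIL head: attention-weighted average of patch
    embeddings followed by a logistic classifier on the slide embedding."""
    scores = patch_embeddings @ w_attn            # one attention score per patch
    attn = np.exp(scores - scores.max())          # softmax over patches
    attn /= attn.sum()
    slide_embedding = attn @ patch_embeddings     # weighted slide-level embedding
    logit = slide_embedding @ w_cls
    return 1.0 / (1.0 + np.exp(-logit))           # predicted probability

# Hypothetical slide: 100 patch embeddings; indices 10-19 stand in for a
# region of interest (e.g. a putative metastatic focus).
dim = 16
patches = rng.normal(size=(100, dim))
region = np.arange(10, 20)
w_attn = rng.normal(size=dim)
w_cls = rng.normal(size=dim)

# Necessity test: compare the prediction on the full slide against the
# counterfactual slide with the region's patches deleted.
p_full = mil_predict(patches, w_attn, w_cls)
counterfactual = np.delete(patches, region, axis=0)
p_without = mil_predict(counterfactual, w_attn, w_cls)
print(f"p(full)={p_full:.3f}  p(region removed)={p_without:.3f}  "
      f"effect={p_full - p_without:+.3f}")
```

A large drop in the predicted probability after deletion would indicate the region is necessary for the prediction; repeating the intervention across many regions supports the kind of quantitative hypothesis testing the abstract describes.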

Item Type: Paper
Subjects: bioinformatics
bioinformatics > computational biology
CSHL Authors:
Communities: CSHL labs > Koo Lab
SWORD Depositor: CSHL Elements
Depositing User: CSHL Elements
Date: 4 September 2024
Date Deposited: 26 Nov 2024 16:02
Last Modified: 26 Nov 2024 16:02
URI: https://repository.cshl.edu/id/eprint/41745
