Koo, Peter K. and Ploenzke, Matt (February 2020) Interpreting Deep Neural Networks Beyond Attribution Methods: Quantifying Global Importance of Genomic Features. bioRxiv. (Submitted)
PDF: 2020.02.19.956896v1.full.pdf (Submitted Version, 229kB). Available under License Creative Commons Attribution Non-commercial No Derivatives.
Abstract
Although deep neural networks (DNNs) have found great success at improving performance on various prediction tasks in computational genomics, it remains difficult to understand why they make any given prediction. In genomics, the main approaches to interpreting a high-performing DNN are visualizing learned representations via weight visualizations and attribution methods. While these methods can be informative, each has strong limitations. For instance, attribution methods only uncover the independent contribution of single nucleotide variants in a given sequence. Here we discuss and argue for global importance analysis, which can quantify the population-level importance of putative features and their interactions learned by a DNN. We highlight recent work that has benefited from this interpretability approach and then discuss connections between global importance analysis and causality.
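To make the idea concrete: global importance analysis estimates a feature's population-level effect by embedding a putative feature (e.g. a sequence motif) into many background sequences and measuring the average change in the model's predictions. The sketch below is a minimal illustration of that recipe, not the authors' implementation; the function names, the Keras-style `model.predict` interface, and the i.i.d. uniform background distribution are all assumptions made here for illustration.

```python
import numpy as np

def random_onehot_sequences(n, length, rng):
    """Sample n background sequences, one-hot encoded with shape (n, length, 4),
    from an i.i.d. uniform nucleotide distribution (an assumed background model)."""
    idx = rng.integers(0, 4, size=(n, length))
    return np.eye(4)[idx].astype(np.float32)

def embed_motif(x, motif_onehot, position):
    """Return a copy of x with the one-hot motif written in at a fixed position."""
    x = x.copy()
    x[:, position:position + motif_onehot.shape[0], :] = motif_onehot
    return x

def global_importance(model, motif_onehot, n=1000, length=200, position=100, seed=0):
    """Estimate population-level importance: the mean difference in model
    predictions with vs. without the motif embedded in background sequences."""
    rng = np.random.default_rng(seed)
    background = random_onehot_sequences(n, length, rng)
    with_motif = embed_motif(background, motif_onehot, position)
    return (model.predict(with_motif) - model.predict(background)).mean(axis=0)

# Hypothetical usage: effect of embedding 'GATAAG' (A=0, C=1, G=2, T=3).
# motif = np.eye(4)[[2, 0, 3, 0, 0, 2]].astype(np.float32)
# effect = global_importance(trained_model, motif)
```

Interactions between features can be probed the same way, e.g. by embedding two motifs at varying spacings and comparing the joint effect against the sum of their individual effects.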
Item Type: Paper
Subjects: bioinformatics; bioinformatics > genomics and proteomics; organs, tissues, organelles, cell types and functions > tissues types and functions > neural networks
Communities: CSHL labs > Koo Lab
SWORD Depositor: CSHL Elements
Depositing User: CSHL Elements
Date: 20 February 2020
Date Deposited: 20 Dec 2023 20:00
Last Modified: 20 Dec 2023 20:00
URI: https://repository.cshl.edu/id/eprint/41349