Robust Neural Networks are More Interpretable for Genomics

Koo, Peter K., Qian, Sharon, Volf, Verena, Kalimeris, Dimitris (June 2019) Robust Neural Networks are More Interpretable for Genomics. In: ICML Workshop for Computational Biology, Long Beach, CA.

URL: https://sites.google.com/view/icml-compbio-2019/ho...

Abstract

Deep neural networks (DNNs) have been applied to a variety of regulatory genomics tasks. For interpretability, attribution methods are employed to provide importance scores for each nucleotide in a given sequence. However, even with state-of-the-art DNNs, there is no guarantee that these methods recover interpretable, biological representations. Here we perform systematic experiments on synthetic genomic data to raise awareness of this issue. We find that deeper networks generalize better, but that attribution methods recover less interpretable representations from them. We then show that training methods that promote robustness, including regularization, injecting random noise into the data, and adversarial training, significantly improve the interpretability of DNNs, especially for smaller datasets.
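For concreteness, below is a minimal PyTorch sketch of the two ingredients the abstract refers to: per-nucleotide attribution via input gradients (a simple saliency map), and robustness-promoting training through Gaussian noise injection and one-step (FGSM) adversarial examples. The architecture, the function names (SimpleCNN, saliency, fgsm_example, train_step), and the noise/perturbation scales are illustrative assumptions, not the authors' code.

import torch
import torch.nn as nn

class SimpleCNN(nn.Module):
    """Toy 1-D CNN for one-hot DNA sequences, shape (batch, 4, length)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(4, 32, kernel_size=19, padding=9),
            nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),  # length-agnostic global max pool
            nn.Flatten(),
            nn.Linear(32, 1),         # single logit for binary classification
        )

    def forward(self, x):
        return self.net(x)

def saliency(model, x):
    """Per-nucleotide importance scores via input gradients (grad * input)."""
    x = x.clone().requires_grad_(True)
    model(x).sum().backward()
    return (x.grad * x).detach()  # (batch, 4, length)

def fgsm_example(model, x, y, loss_fn, eps=0.1):
    """One-step adversarial perturbation (FGSM); note the perturbed input
    leaves the one-hot simplex, which is typical for this kind of probe."""
    x_adv = x.clone().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

model = SimpleCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(x, y, noise_std=0.1, eps=0.1):
    """One robustness-promoting update: train on a noisy view and an
    adversarial view of the same batch."""
    model.train()
    x_noisy = x + noise_std * torch.randn_like(x)        # random-noise injection
    x_adv = fgsm_example(model, x, y, loss_fn, eps)      # adversarial training
    opt.zero_grad()
    loss = loss_fn(model(x_noisy), y) + loss_fn(model(x_adv), y)
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage: 8 random one-hot sequences of length 200, binary labels.
x = torch.eye(4)[torch.randint(0, 4, (8, 200))].permute(0, 2, 1)
y = torch.randint(0, 2, (8, 1)).float()
print(train_step(x, y), saliency(model, x).shape)

Combining the noisy and adversarial losses in a single update, as above, is just one possible configuration; the abstract lists regularization, noise injection, and adversarial training as separate robustness-promoting options.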

Item Type: Conference or Workshop Item (Paper)
Subjects: bioinformatics > computational biology
organs, tissues, organelles, cell types and functions > tissue types and functions > neural networks
CSHL Authors:
Communities: CSHL labs > Koo Lab
Depositing User: Matthew Dunn
Date: 14 June 2019
Date Deposited: 17 Sep 2019 19:59
Last Modified: 17 Sep 2019 19:59
URI: https://repository.cshl.edu/id/eprint/38417
