Improving Convolutional Network Interpretability with Exponential Activations

Koo, P. K. and Ploenzke, M. (June 2019) Improving Convolutional Network Interpretability with Exponential Activations. In: ICML Workshop for Computational Biology, Long Beach, CA.

Abstract

Deep convolutional networks trained on regulatory genomic sequences tend to learn distributed representations of sequence motifs across many first layer filters. This makes it challenging to decipher which features are biologically meaningful. Here we introduce the exponential activation that – when applied to first layer filters – leads to more interpretable representations of motifs, both visually and quantitatively, compared to rectified linear units. We demonstrate this on synthetic DNA sequences that have ground truth, using various convolutional networks, and then show that this phenomenon holds on in vivo DNA sequences.
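The sketch below illustrates the core idea described in the abstract: a convolutional network over one-hot-encoded DNA whose first layer uses an exponential activation while deeper layers use ReLU. It is not the authors' released code; the sequence length, filter count, kernel size, and pooling/dense layer sizes are illustrative assumptions, written with the standard tf.keras API.

```python
# Minimal sketch (assumptions noted above): exponential activation on the
# first convolutional layer, ReLU elsewhere, for one-hot DNA input.
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers


def build_cnn(seq_len=200, num_filters=32, kernel_size=19):
    # One-hot DNA input: sequence length x 4 channels (A, C, G, T).
    inputs = keras.Input(shape=(seq_len, 4))
    # First layer filters use the exponential activation, the modification
    # the paper argues yields more interpretable motif representations.
    x = layers.Conv1D(num_filters, kernel_size, padding="same",
                      activation="exponential", name="first_layer")(inputs)
    x = layers.MaxPooling1D(pool_size=25)(x)
    x = layers.Flatten()(x)
    # Deeper layers keep rectified linear units.
    x = layers.Dense(64, activation="relu")(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)
    return keras.Model(inputs, outputs)


model = build_cnn()
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=[keras.metrics.AUC()])
```

After training, the first-layer filter weights (or their activation-weighted alignments) can be inspected as candidate motifs; the paper's claim is that the exponential first-layer activation concentrates motif information in individual filters rather than distributing it across many.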

Item Type: Conference or Workshop Item (Paper)
Subjects: bioinformatics > computational biology
organs, tissues, organelles, cell types and functions > tissues types and functions > neural networks
CSHL Authors:
Communities: CSHL labs > Koo Lab
Depositing User: Matthew Dunn
Date: 14 June 2019
Date Deposited: 17 Sep 2019 19:50
Last Modified: 17 Sep 2019 19:50
URI: https://repository.cshl.edu/id/eprint/38418
