
Interpreting Deep Visual Representations via Network Dissection

Abstract

The success of recent deep convolutional neural networks (CNNs) depends on learning hidden representations that can summarize the important factors of variation behind the data. In this work, we describe Network Dissection, a method that interprets networks by providing meaningful labels to their individual units. The proposed method quantifies the interpretability of CNN representations by evaluating the alignment between individual hidden units and visual semantic concepts. By identifying the best alignments, units are given interpretable labels ranging across colors, materials, textures, parts, objects, and scenes. The method reveals that deep representations are more transparent and interpretable than they would be under a random equivalently powerful basis. We apply our approach to interpret and compare the latent representations of several network architectures trained to solve a wide range of supervised and self-supervised tasks. We then examine factors affecting network interpretability, such as the number of training iterations, regularizations, different initialization parameters, as well as network depth and width. Finally, we show that the interpreted units can be used to provide explicit explanations of a given CNN prediction for an image. Our results highlight that interpretability is an important property of deep neural networks that provides new insights into what hierarchical structures can learn.

Keywords: Convolutional neural networks; Network interpretability; Visual recognition; Interpretable machine learning; Visualization; Detectors; Training; Image color analysis; Task analysis; Image segmentation; Semantics

Funding: United States. Defense Advanced Research Projects Agency (FA8750-18-C-0004); National Science Foundation (U.S.) (Grant 1524817); National Science Foundation (U.S.) (Grant 1532591); United States. Office of Naval Research (Grant N00014-16-1-3116); Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory; Google (Firm); Amazon.com (Firm); NVIDIA Corporation; Facebook (Firm)
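The alignment score described in the abstract can be illustrated with a minimal sketch: a unit's activation map is binarized at a threshold and compared against a concept's segmentation mask using intersection-over-union (IoU), and the unit is labeled with whichever concept scores highest. The function name, the toy arrays, and the fixed threshold below are illustrative assumptions, not the authors' implementation, which operates over a full concept dataset.

```python
import numpy as np

def unit_concept_iou(activation, concept_mask, threshold):
    """IoU between a unit's thresholded activation map and a concept mask.

    Hypothetical sketch of the unit-concept alignment score: high overlap
    suggests the unit acts as a detector for that visual concept.
    """
    unit_mask = activation > threshold                      # binarize the activation map
    intersection = np.logical_and(unit_mask, concept_mask).sum()
    union = np.logical_or(unit_mask, concept_mask).sum()
    return intersection / union if union else 0.0

# Toy example: a unit that fires on the left half of a 6x6 map,
# compared against a concept mask covering the left third.
act = np.zeros((6, 6))
act[:, :3] = 1.0
concept = np.zeros((6, 6), dtype=bool)
concept[:, :2] = True
score = unit_concept_iou(act, concept, threshold=0.5)       # 12 / 18 ≈ 0.667
```

In practice the threshold is chosen per unit (e.g., from the upper quantile of its activation distribution), and the score is averaged over many annotated images before the best-matching concept is assigned as the unit's label.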

This paper was published in DSpace@MIT.
