A Roundup of Interpretability Methods for Deep Neural Networks, with TensorFlow Implementations
Understanding neural networks: deep learning has long been regarded as weakly interpretable, yet research on understanding neural networks has never stopped. This article introduces several interpretability methods for neural networks, each accompanied by a link to code that runs under Jupyter.
1. Activation Maximization
There are two methods for explaining a deep neural network through activation maximization:
1.1 Activation Maximization (AM)
Notebook: see the matching notebook in the 1202kbs/Understanding-NN repository.
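The notebook carries the full implementation. As an independent illustration (not the notebook's code), here is a minimal TensorFlow 2 sketch of AM by gradient ascent on the input; `model` stands for any Keras classifier that outputs logits, and all names are placeholders:

import tensorflow as tf

def activation_maximization(model, class_index, input_shape,
                            steps=256, lr=0.05, l2=1e-4):
    # Gradient ascent on the input to maximize one class logit.
    x = tf.Variable(tf.random.normal([1, *input_shape], stddev=0.01))
    for _ in range(steps):
        with tf.GradientTape() as tape:
            logit = model(x, training=False)[0, class_index]
            # An L2 penalty keeps the synthesized input in a plausible range.
            objective = logit - l2 * tf.reduce_sum(tf.square(x))
        x.assign_add(lr * tape.gradient(objective, x))
    return x.numpy()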
1.2 Performing AM in Code Space
Notebook: see the matching notebook in the 1202kbs/Understanding-NN repository.
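The idea of [2, 3] is to run the same optimization in the latent space of a generative model rather than in pixel space, which yields more natural-looking preferred inputs. A hedged sketch, assuming a hypothetical pretrained `generator` that decodes a latent vector into an image:

import tensorflow as tf

def am_in_code_space(generator, classifier, class_index,
                     code_dim=128, steps=256, lr=0.05):
    # Optimize a latent code z so that generator(z) excites one class.
    z = tf.Variable(tf.random.normal([1, code_dim]))
    for _ in range(steps):
        with tf.GradientTape() as tape:
            image = generator(z, training=False)     # decode z into an image
            logit = classifier(image, training=False)[0, class_index]
        z.assign_add(lr * tape.gradient(logit, z))   # gradient ascent on z
    return generator(z, training=False).numpy()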
2. Layer-wise Relevance Propagation
This family covers five explanation methods: Sensitivity Analysis, Simple Taylor Decomposition, Layer-wise Relevance Propagation, Deep Taylor Decomposition, and DeepLIFT. The progression is as follows: sensitivity analysis first introduces the notion of a relevance score, a simple Taylor decomposition then explores basic relevance decomposition, and on that basis the various layer-wise relevance propagation methods are built. Details:
2.1 Sensitivity Analysis
Notebook: see the matching notebook in the 1202kbs/Understanding-NN repository.
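Sensitivity analysis assigns each input variable the squared partial derivative of the class score, R_i = (df/dx_i)^2. A minimal TensorFlow 2 sketch (placeholder names, not the notebook's code):

import tensorflow as tf

def sensitivity_map(model, x, class_index):
    # Relevance = squared gradient of the class score w.r.t. the input.
    x = tf.convert_to_tensor(x)
    with tf.GradientTape() as tape:
        tape.watch(x)
        score = model(x, training=False)[0, class_index]
    grad = tape.gradient(score, x)
    return tf.square(grad).numpy()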
2.2 Simple Taylor Decomposition
Notebook: see the matching notebook in the 1202kbs/Understanding-NN repository.
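Simple Taylor decomposition expands f around a root point with f = 0 there and assigns each input the product of its gradient and its distance from the root point; for ReLU networks without biases, choosing the root point 0 reduces this to gradient x input. A sketch under that assumption:

import tensorflow as tf

def simple_taylor(model, x, class_index):
    # Gradient x input: first-order Taylor decomposition around root point 0.
    x = tf.convert_to_tensor(x)
    with tf.GradientTape() as tape:
        tape.watch(x)
        score = model(x, training=False)[0, class_index]
    grad = tape.gradient(score, x)
    return (grad * x).numpy()   # R_i = (df/dx_i) * x_i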
2.3 Layer-wise Relevance Propagation
Notebook: see the matching notebook in the 1202kbs/Understanding-NN repository.
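LRP redistributes the output score backward layer by layer while conserving the total relevance [4]. As a sketch, one backward step through a dense layer with the epsilon rule (the notebook handles complete networks):

import tensorflow as tf

def lrp_epsilon_dense(a, W, b, R, eps=1e-6):
    # a: layer input [batch, n_in]; W, b: weights [n_in, n_out], bias [n_out]
    # R: relevance of the layer output [batch, n_out]
    z = tf.matmul(a, W) + b                              # pre-activations
    s = R / (z + eps * tf.where(z >= 0, 1.0, -1.0))      # stabilized ratio
    c = tf.matmul(s, W, transpose_b=True)                # redistribute
    return a * c                                         # input relevance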
2.4 Deep Taylor Decomposition
Notebook: see the matching notebook in the 1202kbs/Understanding-NN repository.
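Deep Taylor decomposition [5] performs the Taylor expansion at layer-specific root points; for ReLU layers with non-negative inputs this yields the z+ rule, which redistributes relevance through the positive weights only. A one-layer sketch:

import tensorflow as tf

def deep_taylor_zplus(a, W, R, eps=1e-9):
    # z+ rule: only excitatory (positive) weights carry relevance back.
    W_plus = tf.maximum(W, 0.0)
    z = tf.matmul(a, W_plus) + eps        # strictly positive denominator
    s = R / z
    return a * tf.matmul(s, W_plus, transpose_b=True)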
2.5 DeepLIFT
Notebook:
https://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/2.5%20DeepLIFT.ipynb
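DeepLIFT [6] compares each activation with its value on a reference input and propagates contribution scores through multipliers: the weights for linear layers, and delta-output over delta-input (the rescale rule) for nonlinearities. A sketch for a dense-ReLU-dense network with hypothetical weights W1, b1, W2, b2:

import tensorflow as tf

def deeplift_rescale(W1, b1, W2, b2, x, x_ref, class_index, eps=1e-9):
    # Forward passes for the input and the reference.
    z1, z1r = tf.matmul(x, W1) + b1, tf.matmul(x_ref, W1) + b1
    a1, a1r = tf.nn.relu(z1), tf.nn.relu(z1r)
    # Rescale multiplier of the ReLU: delta-output / delta-input.
    m_relu = (a1 - a1r) / (z1 - z1r + eps)
    # Chain the multipliers from the input to the chosen logit:
    # linear layers contribute their weights, the ReLU its rescale factor.
    m_hidden = m_relu * tf.transpose(W2[:, class_index:class_index + 1])
    m_input = tf.matmul(m_hidden, W1, transpose_b=True)
    return (x - x_ref) * m_input   # per-feature contribution scores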
3. Gradient-Based Methods
The gradient-based methods are deconvolution, backpropagation, guided backpropagation, integrated gradients, and SmoothGrad. A shared implementation is available at:
https://github.com/1202kbs/Understanding-NN/blob/master/models/grad.py
Details for each method:
3.1 Deconvolution
Notebook:
https://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/3.1%20Deconvolution.ipynb
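The deconvnet [7] reuses the backward pass but changes the ReLU rule: instead of gating the gradient by the forward activation, it rectifies the incoming backward signal. A TensorFlow 2 sketch via a custom-gradient ReLU:

import tensorflow as tf

@tf.custom_gradient
def deconv_relu(x):
    # Backward pass: rectify the incoming signal, ignore the forward mask.
    def grad(dy):
        return tf.nn.relu(dy)
    return tf.nn.relu(x), grad

Rebuilding the network with deconv_relu in place of the standard ReLU and taking the input gradient of a class score then produces the deconvnet visualization.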
3.2 Backpropagation
Notebook:
https://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/3.2%20Backpropagation.ipynb
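Vanilla backpropagation [8] visualizes the raw gradient of the class score with respect to the input pixels; it differs from the sensitivity analysis of Section 2.1 only in taking the absolute value rather than the square. A sketch:

import tensorflow as tf

def saliency_map(model, x, class_index):
    # |gradient| of the class score w.r.t. the input (vanilla saliency).
    x = tf.convert_to_tensor(x)
    with tf.GradientTape() as tape:
        tape.watch(x)
        score = model(x, training=False)[0, class_index]
    return tf.abs(tape.gradient(score, x)).numpy()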
3.3 Guided Backpropagation
Notebook: see the matching notebook in the 1202kbs/Understanding-NN repository.
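Guided backpropagation [9] combines the two previous ReLU rules: the backward signal is gated by the forward mask and rectified. A sketch of the modified ReLU:

import tensorflow as tf

@tf.custom_gradient
def guided_relu(x):
    # Backward pass: keep the gradient only where the unit was active
    # in the forward pass AND the incoming gradient is positive.
    def grad(dy):
        return tf.cast(x > 0, dy.dtype) * tf.nn.relu(dy)
    return tf.nn.relu(x), grad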
3.4 Integrated Gradients
Notebook: see the matching notebook in the 1202kbs/Understanding-NN repository.
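Integrated gradients [10] averages the input gradient along a straight path from a baseline (often a black image) to the input and multiplies by the input difference, which makes the attributions sum to the score difference. A sketch assuming x has shape [1, H, W, C]:

import tensorflow as tf

def integrated_gradients(model, x, class_index, baseline=None, steps=50):
    x = tf.convert_to_tensor(x)
    baseline = tf.zeros_like(x) if baseline is None else baseline
    # Interpolation coefficients and the straight path from baseline to x.
    alphas = tf.reshape(tf.linspace(0.0, 1.0, steps), [steps, 1, 1, 1])
    path = baseline + alphas * (x - baseline)        # [steps, H, W, C]
    with tf.GradientTape() as tape:
        tape.watch(path)
        scores = model(path, training=False)[:, class_index]
    grads = tape.gradient(scores, path)
    avg_grad = tf.reduce_mean(grads, axis=0)         # Riemann approximation
    return ((x - baseline) * avg_grad).numpy()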
3.5 SmoothGrad
Notebook:
https://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/3.5%20SmoothGrad.ipynb
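SmoothGrad [11] averages saliency maps over noisy copies of the input, which suppresses the high-frequency noise of raw gradients. A sketch (x again of shape [1, H, W, C]):

import tensorflow as tf

def smoothgrad(model, x, class_index, n_samples=32, noise_std=0.15):
    x = tf.convert_to_tensor(x)
    # A batch of noisy copies of the same input.
    noisy = x + tf.random.normal([n_samples, *x.shape[1:]], stddev=noise_std)
    with tf.GradientTape() as tape:
        tape.watch(noisy)
        scores = model(noisy, training=False)[:, class_index]
    grads = tape.gradient(scores, noisy)
    return tf.reduce_mean(tf.abs(grads), axis=0).numpy()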
4. Class Activation Map
There are three class activation mapping methods: Class Activation Map (CAM), Grad-CAM, and Grad-CAM++. The cluttered MNIST data used in the demos comes from:
https://github.com/deepmind/mnist-cluttered
Details for each method:
4.1 Class Activation Map
Notebook:
https://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/4.1%20CAM.ipynb
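CAM [12] applies only to networks that end in global average pooling followed by a single dense layer: the map for class c is the channel-wise weighted sum of the last convolutional feature maps, with the dense weights for c as coefficients. A sketch assuming such an architecture; the layer names are placeholders:

import tensorflow as tf

def class_activation_map(model, x, class_index, last_conv_name, dense_name):
    # Feature maps of the last convolutional layer, shape [b, h, w, c].
    conv_out = tf.keras.Model(
        model.input, model.get_layer(last_conv_name).output)(x)
    # Dense kernel after global average pooling: [channels, classes].
    w = model.get_layer(dense_name).get_weights()[0]
    return tf.einsum('bhwc,c->bhw', conv_out, w[:, class_index]).numpy()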
4.2 Grad-CAM
Notebook:
https://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/4.2%20Grad-CAM.ipynb
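Grad-CAM [13] generalizes CAM to arbitrary architectures by using the spatially averaged gradient of the class score as the weight of each feature-map channel. A sketch:

import tensorflow as tf

def grad_cam(model, x, class_index, last_conv_name):
    grad_model = tf.keras.Model(
        model.input,
        [model.get_layer(last_conv_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(x, training=False)
        score = preds[:, class_index]
    grads = tape.gradient(score, conv_out)           # [b, h, w, c]
    weights = tf.reduce_mean(grads, axis=(1, 2))     # GAP over space
    cam = tf.einsum('bhwc,bc->bhw', conv_out, weights)
    return tf.nn.relu(cam).numpy()                   # keep positive evidence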
4.3 Grad-CAM++
Notebook:
https://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/4.3%20Grad-CAM-PP.ipynb
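Grad-CAM++ [14] replaces the uniform spatial average with pixel-wise alpha weights; under the usual exponential-score simplification, the required higher-order derivatives reduce to powers of the first gradient. A sketch:

import tensorflow as tf

def grad_cam_pp(model, x, class_index, last_conv_name, eps=1e-8):
    grad_model = tf.keras.Model(
        model.input,
        [model.get_layer(last_conv_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(x, training=False)
        score = preds[:, class_index]
    g = tape.gradient(score, conv_out)               # [b, h, w, c]
    g2, g3 = g ** 2, g ** 3
    # Pixel-wise alpha weights (exp-score simplification of the paper).
    denom = 2.0 * g2 + tf.reduce_sum(conv_out * g3, axis=(1, 2), keepdims=True)
    alpha = g2 / (denom + eps)
    weights = tf.reduce_sum(alpha * tf.nn.relu(g), axis=(1, 2))   # [b, c]
    cam = tf.einsum('bhwc,bc->bhw', conv_out, weights)
    return tf.nn.relu(cam).numpy()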
5. Quantifying Explanation Quality
Although each explanation technique rests on its own intuition or mathematical principle, it is equally important to characterize, at a more abstract level, what makes a good explanation, and to be able to test those properties quantitatively. Two quality- and evaluation-oriented criteria are recommended here:
5.1 Explanation Continuity
Notebook: see the matching notebook in the 1202kbs/Understanding-NN repository.
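Continuity demands that nearly identical inputs receive nearly identical explanations [1]; it can be estimated by the worst-case ratio of explanation change to input change over perturbed copies of x. A sketch, where `explain` stands for any attribution function from the sections above (a placeholder, not a fixed API):

import tensorflow as tf

def explanation_continuity(explain, x, n_samples=16, radius=0.05):
    # Worst observed ||R(x) - R(x')||_1 / ||x - x'||_2 near x.
    r = tf.convert_to_tensor(explain(x))
    worst = tf.constant(0.0)
    for _ in range(n_samples):
        x_p = x + tf.random.normal(tf.shape(x), stddev=radius)
        num = tf.reduce_sum(tf.abs(r - tf.convert_to_tensor(explain(x_p))))
        worst = tf.maximum(worst, num / tf.norm(x - x_p))
    return float(worst)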
5.2 Explanation Selectivity
Notebook: see the matching notebook in the 1202kbs/Understanding-NN repository.
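Selectivity measures how quickly the class score drops when input features are removed in order of decreasing relevance ("pixel flipping" [1]); a steep drop indicates that the explanation identified features the network actually relies on. A sketch:

import tensorflow as tf

def explanation_selectivity(model, x, relevance, class_index, n_steps=50):
    # Zero out the most relevant features first, track the class score.
    flat_x = tf.reshape(x, [-1]).numpy().copy()
    order = tf.argsort(tf.reshape(relevance, [-1]),
                       direction='DESCENDING').numpy()
    scores = []
    for i in range(n_steps):
        flat_x[order[i]] = 0.0                      # remove next feature
        x_cur = tf.reshape(flat_x, tf.shape(x))
        scores.append(float(model(x_cur, training=False)[0, class_index]))
    return scores   # plot against #features removed (AOPC-style curve)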
References
Sections 1.1-2.2 and 5.1-5.2
[1] Montavon, G., Samek, W., Müller, K.-R. Methods for interpreting and understanding deep neural networks. arXiv preprint arXiv:1706.07979, 2017.
Section 1.2
[2] Nguyen, A., Dosovitskiy, A., Yosinski, J., Brox, T., Clune, J., 2016. Synthesizing the preferred inputs for neurons in neural networks via deep generator networks. In: Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain. pp. 3387-3395.
[3] A. Dosovitskiy and T. Brox. Generating images with perceptual similarity metrics based on deep networks. In NIPS, 2016.
Section 2.3
[4] Bach, S., Binder, A., Montavon, G., Klauschen, F., Müller, K.-R., Samek, W., 2015. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLOS ONE 10 (7), 1-46.
Section 2.4
[5] Montavon, G., Lapuschkin, S., Binder, A., Samek, W., Müller, K.-R., 2017. Explaining nonlinear classification decisions with deep Taylor decomposition. Pattern Recognition 65, 211-222.
Section 2.5
[6] Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje. Learning Important Features Through Propagating Activation Differences. arXiv preprint arXiv:1704.02685, 2017.
Section 3.1
[7] Zeiler, M. D., Fergus, R., 2014. Visualizing and understanding convolutional networks. In: Computer Vision - ECCV 2014 - 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part I. pp. 818-833.
Section 3.2
[8] K. Simonyan, A. Vedaldi, and A. Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. In Workshop at International Conference on Learning Representations, 2014.
Section 3.3
[9] Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin Riedmiller. Striving for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806, 2014.
Section 3.4
[10] Mukund Sundararajan, Ankur Taly, and Qiqi Yan. Axiomatic attribution for deep networks. arXiv preprint arXiv:1703.01365, 2017.
Section 3.5
[11] Daniel Smilkov, Nikhil Thorat, Been Kim, Fernanda Viégas, and Martin Wattenberg. SmoothGrad: removing noise by adding noise. arXiv preprint arXiv:1706.03825, 2017.
Section 4.1
[12] Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. Learning deep features for discriminative localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2921–2929, 2016.
Section 4.2
[13] R. R. Selvaraju, A. Das, R. Vedantam, M. Cogswell, D. Parikh, and D. Batra. Grad-CAM: Why did you say that? Visual explanations from deep networks via gradient-based localization. arXiv:1611.01646, 2016.
Section 4.3
[14] A. Chattopadhyay, A. Sarkar, P. Howlader, and V. N. Balasubramanian. Grad-CAM++: Generalized gradient-based visual explanations for deep convolutional networks. CoRR, abs/1710.11063, 2017.
Author: 代码医生
Source: 相约机器人 (WeChat public account)
Original post: https://www.cnblogs.com/cx2016/p/13738899.html