CNN

Convolutional Filter Visualization, 27 Jul. 2022 (posts)
Deep Neural Networks are black boxes: they map some input to some output, and we can make them do this surprisingly well. However, we usually have no idea how this mapping works. Convolutional Neural Networks (CNNs) in particular, which employ convolutions as filters, achieved some impressive results (before Vision Transformers came along). Filter Visualization can help us understand what kinds of patterns the convolutional filters in a CNN detect. Why would we want to do that? …
Categories: Deep Learning
472 Words, Tagged with: Deep Learning · Explainability · CNN
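Since the post is only excerpted above, here is a minimal sketch of one common way to do this, activation maximization: start from noise and run gradient ascent on the input so that a chosen filter fires as strongly as possible. The model, layer index, and filter index below are arbitrary assumptions for illustration, not taken from the post.

```python
import torch
import torchvision.models as models

# Pretrained CNN backbone; any convolutional network would do.
# (Assumption: VGG16 and the indices below are arbitrary choices.)
model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
for p in model.parameters():
    p.requires_grad_(False)  # we optimize the input, never the weights

LAYER_IDX = 10  # a Conv2d layer inside the feature extractor
FILTER_IDX = 3  # which filter in that layer to visualize

# Capture the chosen layer's output with a forward hook.
activation = {}
model[LAYER_IDX].register_forward_hook(
    lambda module, inputs, output: activation.update(value=output)
)

# Gradient ascent on the input: maximizing one filter's mean
# activation reveals the pattern that filter detects.
img = torch.randn(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([img], lr=0.1)

for _ in range(100):
    optimizer.zero_grad()
    model(img)
    loss = -activation["value"][0, FILTER_IDX].mean()  # negate to maximize
    loss.backward()
    optimizer.step()

# `img` now approximates the input pattern this filter responds to.
```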
Explanation-based Anomaly Detection in Deep Neural Networks, 01 Feb. 2020 (posts)
Master's Thesis (PDF). If an AI gives you a weird explanation for its prediction, you should remain skeptical about the accuracy of that prediction. Sounds reasonable? This was the general idea of my master's thesis, which was originally titled Self-Assessment of Visual Recognition Systems based on Attribution. Today, I would call it Explanation-based Anomaly Detection in Deep Neural Networks. The approach was to use attribution-based explanation methods to detect anomalies (such as unusual …
Categories: Anomaly Detection
340 Words, Tagged with: Deep Learning · Anomaly Detection · CNN · Explainability
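The excerpt ends mid-sentence, but the core idea of scoring how "unusual" an explanation is can be sketched. Below, a plain gradient saliency map stands in for the attribution method, and the anomaly score is a hypothetical per-pixel z-score against statistics collected on known-normal data; the thesis' actual choices may well differ.

```python
import torch

def attribution_map(model, x, target_class):
    """Simple gradient saliency: |d score / d input|.

    Stands in for any attribution method; a placeholder choice here.
    """
    x = x.clone().requires_grad_(True)
    score = model(x)[0, target_class]
    score.backward()
    return x.grad.abs().squeeze(0)

def anomaly_score(attr, normal_mean, normal_std, eps=1e-8):
    """Hypothetical score: mean |z| of an attribution map against
    per-pixel mean/std estimated from attributions of normal inputs."""
    z = (attr - normal_mean) / (normal_std + eps)
    return z.abs().mean().item()  # high value -> unusual explanation
```

In use, `normal_mean` and `normal_std` would be estimated from attribution maps of in-distribution samples; a high score then flags predictions whose explanations look unlike anything seen during normal operation.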