Anomaly Detection

Language Models as Reasoners for Out-of-Distribution Detection, 17 Sep. 2024 (papers)
Our paper Language Models as Reasoners for Out-of-Distribution Detection has been presented at the Workshop on AI Safety Engineering (WAISE) 2024 and received the best paper award by popular vote. It extends our idea from Out-of-Distribution Detection with Logical Reasoning, replacing the Prolog-based reasoning component with an LLM. Abstract Deep neural networks (DNNs) are prone to making wrong predictions with high confidence for data that does not stem from their …
Categories: Anomaly Detection · Neuro-Symbolic
195 Words, Tagged with: WAISE · Anomaly Detection · Large Language Models · Neuro-Symbolic
Out-of-Distribution Detection with Logical Reasoning, 04 Jan. 2024 (papers)
Our paper Out-of-Distribution Detection with Logical Reasoning has been accepted at WACV 2024. Abstract Machine Learning models often only generalize reliably to samples from the training distribution. Consequently, detecting when input data is out-of-distribution (OOD) is crucial, especially in safety-critical applications. Current OOD detection methods, however, tend to be domain agnostic and often fail to incorporate valuable prior knowledge about the structure of the training …
Categories: Anomaly Detection · Neuro-Symbolic
224 Words, Tagged with: WACV · Anomaly Detection · Neuro-Symbolic
Towards Deep Anomaly Detection with Structured Knowledge Representations, 15 Jun. 2023 (papers)
My paper Towards Deep Anomaly Detection with Structured Knowledge Representations has been accepted at the Workshop on AI Safety Engineering (WAISE) at SafeComp. Abstract Machine learning models tend to only make reliable predictions for inputs that are similar to the training data. Consequently, anomaly detection, which can be used to detect unusual inputs, is critical for ensuring the safety of machine learning agents operating in open environments. In this work, we identify and discuss several …
Categories: Anomaly Detection · Neuro-Symbolic
180 Words, Tagged with: WAISE · Anomaly Detection · Neuro-Symbolic
On Outlier Exposure with Generative Models, 23 Nov. 2022 (papers)
Our paper On Outlier Exposure with Generative Models has been accepted at the NeurIPS Machine Learning Safety Workshop. Abstract While Outlier Exposure reliably increases the performance of Out-of-Distribution detectors, it requires a set of available outliers during training. In this paper, we propose Generative Outlier Exposure (GOE), which alleviates the need for available outliers by using generative models to sample synthetic outliers from low-density regions of the data distribution. The …
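The teaser already names the core mechanism: draw candidates from a generative model and keep only those the model assigns low density. Below is a minimal sketch of that sampling step, assuming a hypothetical model object `g` with `sample()` and `log_prob()` methods; the paper itself works with GANs and its own sampling procedure:

```python
import torch


@torch.no_grad()
def sample_synthetic_outliers(g, n: int, log_p_max: float, batch: int = 256) -> torch.Tensor:
    """Rejection-sample points to which the generative model g assigns low density.

    Assumes (hypothetically) g.sample(k) -> (k, ...) tensor and
    g.log_prob(x) -> (k,) tensor of log-densities.
    """
    outliers, collected = [], 0
    while collected < n:
        x = g.sample(batch)
        x = x[g.log_prob(x) < log_p_max]  # keep only low-density samples
        outliers.append(x)
        collected += x.shape[0]
    return torch.cat(outliers)[:n]
```

The resulting samples can then stand in for the real outlier set of a standard Outlier Exposure training objective.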
Categories: Anomaly Detection
110 Words, Tagged with: MLSW · Generative Models · Anomaly Detection
Multi-Class Hypersphere Anomaly Detection (MCHAD), 13 Jul. 2022 (papers)
Our paper Multi-Class Hypersphere Anomaly Detection (MCHAD) has been accepted for presentation at ICPR 2022. In summary, we propose a new loss function for learning neural networks that are able to detect anomalies in their inputs. Poster for MCHAD (PDF). How does it work? Omitting some details, the loss we propose has three different components, each of which we will explain in the following. Intra-Class Variance We want the embeddings $f(x)$ of one class to cluster as tightly around a class center …
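To give a rough feel for this first component, here is what an intra-class variance term can look like in PyTorch; this is a sketch with illustrative names, not the paper's exact formulation:

```python
import torch


def intra_class_variance(z: torch.Tensor, y: torch.Tensor, centers: torch.Tensor) -> torch.Tensor:
    """Pull embeddings towards the center of their respective class.

    z:       (batch, dim) embeddings f(x)
    y:       (batch,) integer class labels
    centers: (num_classes, dim) learnable class centers
    """
    # squared Euclidean distance of each embedding to its own class center
    return ((z - centers[y]) ** 2).sum(dim=1).mean()
```

On its own, this term says nothing about keeping different classes apart; that is where the remaining two components, which the teaser cuts off before, come in.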
Categories: Anomaly Detection
366 Words, Tagged with: ICPR · Anomaly Detection
PyTorch-OOD: A library for Out-of-Distribution Detection based on PyTorch, 13 Jul. 2022 (papers)
Our paper PyTorch-OOD: A library for Out-of-Distribution Detection based on PyTorch has been presented at the CVPR 2022 Workshops. You can find the most recent version of the Python source code on GitHub. Abstract Machine Learning models based on Deep Neural Networks behave unpredictably when presented with inputs that do not stem from the training distribution and sometimes make egregiously wrong predictions with high confidence. This property undermines the trustworthiness of systems depending …
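To give a flavor of the library, the snippet below wraps a toy classifier in one of its detectors. This is a sketch based on the API at the time of writing; check the GitHub repository for current class names:

```python
import torch
from torch import nn
from pytorch_ood.detector import MaxSoftmax  # pip install pytorch-ood

# any classifier that returns logits works; a toy model for illustration
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))

detector = MaxSoftmax(model)   # maximum-softmax-probability baseline
x = torch.randn(8, 3, 32, 32)  # dummy input batch
scores = detector(x)           # one outlier score per input (larger = more anomalous, per the library's convention)
```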
Categories: Anomaly Detection
214 Words, Tagged with: CVPR Workshops · Anomaly Detection
On Challenging Aspects of Reproducibility in Deep Anomaly Detection, 13 Jul. 2022 (papers)
Our companion paper On Challenging Aspects of Reproducibility in Deep Anomaly Detection has been accepted for presentation at the Fourth Workshop on Reproducible Research in Pattern Recognition (a satellite event of ICPR 2022). In it, we discuss aspects of reproducibility for our anomaly detection algorithm MCHAD, as well as anomaly detection with deep neural networks in general. In particular, we discuss the following challenges for reproducibility: Nondeterminism: conducting the same …
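Nondeterminism alone takes real effort to rule out in practice: in PyTorch, for example, several independent knobs have to be set before repeated runs become comparable. A sketch (flag availability varies across versions):

```python
import random

import numpy as np
import torch


def seed_everything(seed: int = 42) -> None:
    """Make repeated runs as deterministic as PyTorch allows."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)                    # seeds CPU and CUDA RNGs
    torch.cuda.manual_seed_all(seed)
    torch.backends.cudnn.deterministic = True  # force deterministic cuDNN kernels
    torch.backends.cudnn.benchmark = False     # disable nondeterministic autotuning
    torch.use_deterministic_algorithms(True)   # raise on ops without deterministic impl.
```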
Categories: Anomaly Detection · Reproducibility
212 Words, Tagged with: RRPR · Anomaly Detection · Reproducibility
Addressing Randomness in Evaluation Protocols for Out-of-Distribution Detection, 13 Jul. 2021 (papers)
Our paper Addressing Randomness in Evaluation Protocols for Out-of-Distribution Detection has been accepted at the IJCAI 2021 Workshop on Artificial Intelligence for Anomalies and Novelties. In summary, we investigated the following phenomenon: when you train neural networks several times and then measure their performance on some task, there is a certain variance in the performance measurements, since the results of experiments may vary based on several factors (that are effectively …
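The practical upshot is that a single training run is a weak basis for comparing methods; at a minimum, results should be aggregated over several seeds, as in this illustrative sketch (the numbers below are made up):

```python
import numpy as np

# AUROC of the same detector trained with five different seeds (made-up values)
runs = np.array([0.91, 0.88, 0.93, 0.87, 0.90])

mean, std = runs.mean(), runs.std(ddof=1)
sem = std / np.sqrt(len(runs))  # standard error of the mean
print(f"AUROC: {mean:.3f} +/- {1.96 * sem:.3f} (approx. 95% CI, n={len(runs)})")
```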
Categories: Anomaly Detection · Reproducibility
252 Words, Tagged with: AI4AN · Anomaly Detection · Reproducibility
Explanation-based Anomaly Detection in Deep Neural Networks, 01 Feb. 2020 (posts)
Master's Thesis (PDF). If an AI gives you a weird explanation for its prediction, you should remain skeptical about the accuracy of the prediction. Sounds reasonable? This was the general idea of my master's thesis, which was originally titled Self-Assessment of Visual Recognition Systems based on Attribution. Today, I would call it Explanation-based Anomaly Detection in Deep Neural Networks. The general idea was to use attribution-based explanation methods to detect anomalies (such as unusual …
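As a toy illustration of the idea (using plain gradient saliency as the attribution method, which is not necessarily what the thesis used), one could score inputs by summary statistics of their attribution maps:

```python
import torch
from torch import nn


def saliency_score(model: nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Toy anomaly score: gradient magnitude of the top logit w.r.t. the input.

    The premise: anomalous inputs tend to produce atypical attribution maps,
    so statistics of those maps can serve as anomaly scores.
    """
    x = x.clone().requires_grad_(True)
    top_logit = model(x).max(dim=1).values.sum()  # sum over batch for one backward pass
    (grad,) = torch.autograd.grad(top_logit, x)
    return grad.abs().flatten(1).sum(dim=1)  # one score per input
```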
Categories: Anomaly Detection
340 Words, Tagged with: Deep Learning · Anomaly Detection · CNN · Explainability