Our paper Language Models as Reasoners for Out-of-Distribution Detection has been presented at the Workshop on AI Safety Engineering (WAISE) 2024 and received the best paper award by popular vote.
It is a direct extension of our idea of Out-of-Distribution Detection with Logical Reasoning, in which we replace the Prolog-based reasoning component with an LLM.
Abstract Deep neural networks (DNNs) are prone to making wrong predictions with high confidence for data that does not stem from their …
Our paper Deep learning-based harmonization and super-resolution of Landsat-8 and Sentinel-2 images, which is based on the master's thesis of my colleague Venkatesh Thirugnana Sambandham, has been published in the ISPRS Journal of Photogrammetry and Remote Sensing. This work is an extension of our previous workshop paper on transformers for satellite homogenization. In summary, we find that a simple UNet model provides surprisingly good performance on the satellite homogenization task. …
Our paper Out-of-Distribution Detection with Logical Reasoning has been accepted at WACV 2024.
Abstract Machine Learning models often only generalize reliably to samples from the training distribution. Consequentially, detecting when input data is out-of-distribution (OOD) is crucial, especially in safety-critical applications. Current OOD detection methods, however, tend to be domain agnostic and often fail to incorporate valuable prior knowledge about the structure of the training …
My paper Towards Deep Anomaly Detection with Structured Knowledge Representations has been accepted at the Workshop on AI Safety Engineering at SafeComp.
Abstract Machine learning models tend to only make reliable predictions for inputs that are similar to the training data. Consequentially, anomaly detection, which can be used to detect unusual inputs, is critical for ensuring the safety of machine learning agents operating in open environments. In this work, we identify and discuss several …
Our paper On Outlier Exposure with Generative Models has been accepted at the NeurIPS Machine Learning Safety Workshop.
Abstract While Outlier Exposure reliably increases the performance of Out-of-Distribution detectors, it requires a set of available outliers during training. In this paper, we propose Generative Outlier Exposure (GOE), which alleviates the need for available outliers by using generative models to sample synthetic outliers from low-density regions of the data distribution. The …
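The core idea behind sampling synthetic outliers can be illustrated with a minimal sketch (simplified and hypothetical, not the paper's implementation): draw samples from a generative model of the data and keep only those that fall into low-density regions, where a one-dimensional Gaussian stands in for a learned generative model.

```python
import math
import random

# Illustrative sketch of the idea behind Generative Outlier Exposure
# (hypothetical, simplified): a 1-d Gaussian stands in for a learned
# generative model; samples with low log-density are kept as outliers.

def log_prob(x, mu=0.0, sigma=1.0):
    # Log-density of a 1-d Gaussian standing in for the model.
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

def sample_outliers(n, threshold=-2.0, seed=0):
    """Sample from the model and keep only low-density samples."""
    rng = random.Random(seed)
    outliers = []
    while len(outliers) < n:
        x = rng.gauss(0.0, 1.0)       # sample from the generative model
        if log_prob(x) < threshold:   # keep only low-density regions
            outliers.append(x)
    return outliers

outliers = sample_outliers(5)
```

With this threshold, only samples in the tails of the distribution (roughly |x| > 1.47) are retained; in the paper, the same filtering idea is applied in the latent space of a deep generative model rather than a toy Gaussian.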
Over the last few weeks, I worked with some colleagues on a website that aims to improve access to social work literature. We described the results in our paper Social Work Research Map – ein niederschwelliger Zugang zu internationalen Publikationen der Sozialen Arbeit, which has been published in the journal Soziale Passagen.
While the paper is written in German, there is also a technical report in English.
Abstract Internationalization is a central topic in higher education policy in Germany. An …
Our abstract Towards Transformer-based Homogenization of Satellite Imagery for Landsat-8 and Sentinel-2 was accepted for presentation at the Transformers Workshop for Environmental Science.
In summary, we found, somewhat surprisingly, that transformers, a neural network architecture that achieves state-of-the-art results on most tasks it is applied to, do not outperform a vanilla U-Net model on our particular super-resolution task.
Our paper Multi-Class Hypersphere Anomaly Detection (MCHAD) has been accepted for presentation at the ICPR 2022. In summary, we propose a new loss function for training neural networks that can detect anomalies in their inputs.
Poster for MCHAD (PDF).
How does it work? Omitting some details, the loss we propose has three different components, each of which we will explain in the following.
Intra-Class Variance We want the embeddings $f(x)$ of one class to cluster as tightly as possible around a class center …
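The intra-class variance term can be sketched in plain Python (a simplified, hypothetical illustration; the actual MCHAD loss is implemented in PyTorch and has two further components omitted here): the term is the mean squared Euclidean distance between each embedding $f(x)$ and the center of its class.

```python
# Illustrative sketch of an intra-class variance term (hypothetical,
# simplified; the real MCHAD loss is a PyTorch implementation with
# additional components not shown here).

def intra_class_variance(embeddings, labels, centers):
    """Mean squared Euclidean distance between each embedding f(x)
    and the center mu_y of its class y."""
    total = 0.0
    for z, y in zip(embeddings, labels):
        center = centers[y]
        total += sum((zi - ci) ** 2 for zi, ci in zip(z, center))
    return total / len(embeddings)

# Toy example: two classes with 2-d embeddings.
centers = {0: [0.0, 0.0], 1: [1.0, 1.0]}
embeddings = [[0.1, 0.0], [1.0, 0.9]]
labels = [0, 1]
loss = intra_class_variance(embeddings, labels, centers)  # 0.01
```

Minimizing this term pulls the embeddings of each class toward their center, which is what produces the tight per-class clusters the detector relies on.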
Our paper PyTorch-OOD: A library for Out-of-Distribution Detection based on PyTorch has been presented at the CVPR 2022 Workshops. You can find the most recent version of the Python source code on GitHub.
Abstract Machine Learning models based on Deep Neural Networks behave unpredictably when presented with inputs that do not stem from the training distribution and sometimes make egregiously wrong predictions with high confidence. This property undermines the trustworthiness of systems depending …
Our companion paper On Challenging Aspects of Reproducibility in Deep Anomaly Detection has been accepted for presentation at the Fourth Workshop on Reproducible Research in Pattern Recognition (a satellite event of ICPR 2022).
In it, we discuss aspects of reproducibility for our anomaly detection algorithm MCHAD, as well as for anomaly detection with deep neural networks in general. In particular, we discuss the following challenges for reproducibility:
Nondeterminism: conducting the same …
Our paper Addressing Randomness in Evaluation Protocols for Out-of-Distribution Detection has been accepted at the IJCAI 2021 Workshop for Artificial Intelligence for Anomalies and Novelties.
In summary, we investigated the following phenomenon: when you train the same neural network several times and then measure its performance on some task, the measurements vary, since the outcome of an experiment depends on several factors (that are effectively …
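The phenomenon can be illustrated with a small sketch (synthetic numbers, not results from the paper): repeating the same experiment under different random seeds yields a distribution of scores rather than a single number, so reporting one run in isolation can be misleading.

```python
import random
import statistics

# Synthetic illustration (numbers are made up, not from the paper):
# "train" the same model under 10 different random seeds and measure
# an accuracy-like score for each run.
def run_experiment(seed):
    rng = random.Random(seed)
    # Pretend the achievable score fluctuates around 0.90 depending
    # on seed-dependent factors such as weight initialization.
    return 0.90 + rng.uniform(-0.02, 0.02)

scores = [run_experiment(seed) for seed in range(10)]
mean = statistics.mean(scores)
stdev = statistics.stdev(scores)
print(f"mean = {mean:.3f}, stdev = {stdev:.4f}")
```

Reporting the mean and standard deviation over several seeds, as sketched here, is one way to make such variance visible in an evaluation protocol.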
Inspired by the “Spiegel-Mining” talk by David Kriesel, a friend of mine and a professor from the Hochschule Magdeburg scraped a German website that regularly publishes reviews of social work literature and mined the resulting 18,000 articles, hoping for interesting insights.
In an attempt to visualize the discourse, we created several topic maps, like the one below, which you can find on the accompanying (German) website. The colours represent the gender of the authors of the reviews. …