QAI stands for Quality in Artificial Intelligence.

QAI labs are a collective of researchers at different institutes who develop and validate advanced data analysis techniques to make sense of biomedical data.

Artificial Intelligence

We develop machine learning and signal processing techniques as well as solutions to inverse problems for biomedical measurements. This includes methods to localize brain activity recorded by electro- and magnetoencephalography (M/EEG), techniques for analyzing directional interactions between time series, and methods for dimensionality reduction and statistical source separation of physiological signals.
We also use classical machine learning and state-of-the-art deep learning for predictive and generative modeling of complex health data.
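To give a flavor of the inverse problems mentioned above, the following sketch applies a regularized minimum-norm estimate, one classical textbook approach to M/EEG source reconstruction (not our specific Bayesian methods). The lead field, dimensions, noise level, and regularization strength are all hypothetical illustration choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 32 sensors, 200 candidate source locations.
n_sensors, n_sources = 32, 200
L = rng.standard_normal((n_sensors, n_sources))  # lead field (forward model)

# Simulate one active source and noisy sensor readings.
x_true = np.zeros(n_sources)
x_true[17] = 1.0
y = L @ x_true + 0.05 * rng.standard_normal(n_sensors)

# Regularized minimum-norm estimate: x = L^T (L L^T + lam*I)^{-1} y
lam = 1.0
x_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_sensors), y)

# The strongest estimated source should coincide with the simulated one.
print(int(np.argmax(np.abs(x_hat))))
```

Because the problem is underdetermined (many more sources than sensors), the regularizer selects the minimum-norm solution among all source patterns that explain the data; the estimate is spatially blurred, which is one reason more sophisticated (e.g. Bayesian) inverse methods are of interest.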

Quality

We use mathematical theory and empirical validation to ensure the quality of our pipelines. Specifically, we design suitable ground-truth data and quantitative performance metrics to benchmark difficult data analysis problems. This has led us to discover problems with popular methods and to propose improved procedures. Generally, we enjoy scrutinizing complex data analysis workflows and are interested in all factors that affect the real-world performance of analysis pipelines, including data quality, model fidelity, robustness, uncertainty, privacy, ethics, and fairness. A current focus of our work is to define and assess the quality of so-called AI model ‘explanations’. Overall, our efforts aim to contribute to more trustworthy and reliable data analyses.
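The ground-truth benchmarking idea can be sketched in a few lines: simulate data where the correct answer is known by construction, then score candidate pipelines with a quantitative metric. The signal, noise level, and toy pipelines below are illustrative assumptions, not our actual benchmarks.

```python
import numpy as np

rng = np.random.default_rng(42)

# Ground truth known by construction: a clean 5 Hz sine, corrupted by noise.
t = np.linspace(0, 1, 500)
ground_truth = np.sin(2 * np.pi * 5 * t)
observed = ground_truth + 0.5 * rng.standard_normal(t.size)

def identity_pipeline(x):
    """Baseline pipeline: return the raw observation unchanged."""
    return x

def moving_average_pipeline(x, width=11):
    """Candidate pipeline: simple centered moving-average denoiser."""
    kernel = np.ones(width) / width
    return np.convolve(x, kernel, mode="same")

def score(estimate):
    """Quantitative performance metric: mean squared error vs. ground truth."""
    return np.mean((estimate - ground_truth) ** 2)

mse_raw = score(identity_pipeline(observed))
mse_smooth = score(moving_average_pipeline(observed))
print(f"raw MSE: {mse_raw:.3f}, smoothed MSE: {mse_smooth:.3f}")
```

The same pattern, with far more realistic simulations and metrics, is what allows competing analysis methods to be ranked objectively rather than by visual impression.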

Work samples

Explainable AI:
Artificial intelligence is increasingly used to assist high-stakes decisions in areas such as finance, medicine, and autonomous driving. Upcoming regulations will require that the principles by which such algorithms arrive at their predictions be transparent. However, the emerging field of ‘explainable’ AI (XAI) lacks formal definitions of what correct explanations are. This poses a serious problem, as we have demonstrated that the use of XAI approaches can lead to severe misinterpretations. Our constructive criticism of XAI has been cited over 1000 times and has won several awards (Haufe et al., 2014). Currently, we are working on quantitative benchmarks of ‘explanation performance’ in the medical imaging and natural language processing domains, as well as on novel XAI methods.

Information flow between time series:
Identifying senders and receivers of directed interactions from time series data using Granger causality has applications in finance, the earth sciences, and the neurosciences, among others. However, we have demonstrated that realistic noise conditions can lead to spurious detections of Granger-causal interactions. At the same time, we have provided a remedy based on the concept of time reversal (Haufe et al., 2013; Winkler et al., 2015).
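The time-reversal idea can be illustrated with a toy bivariate example: for a genuine directed interaction, the net Granger score should flip sign when the time axis is reversed, whereas purely noise-driven asymmetries tend not to. The AR model order, coupling coefficients, and simulation below are illustrative assumptions, not the exact procedure of the cited papers.

```python
import numpy as np

rng = np.random.default_rng(7)

def granger_score(x, y, p=2):
    """Granger causality x -> y: log variance ratio of the restricted vs.
    full autoregressive model of order p, fit by ordinary least squares."""
    n = len(y)
    Y = y[p:]
    # Lagged design matrices (lags 1..p).
    lags_y = np.column_stack([y[p - k:n - k] for k in range(1, p + 1)])
    lags_x = np.column_stack([x[p - k:n - k] for k in range(1, p + 1)])
    full = np.column_stack([lags_y, lags_x])
    res_full = Y - full @ np.linalg.lstsq(full, Y, rcond=None)[0]
    res_restr = Y - lags_y @ np.linalg.lstsq(lags_y, Y, rcond=None)[0]
    return np.log(res_restr.var() / res_full.var())

# Simulate a true directed interaction: x drives y with a one-sample lag.
n = 5000
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.standard_normal()

# Net Granger causality on the original data...
net_fwd = granger_score(x, y) - granger_score(y, x)
# ...and on the time-reversed data; a genuine interaction flips sign.
net_rev = granger_score(x[::-1], y[::-1]) - granger_score(y[::-1], x[::-1])
print(net_fwd > 0 and net_rev < 0)
```

Requiring the net score to invert under time reversal acts as a sanity check: additive noise that merely mimics a directed interaction typically fails this criterion.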

Applications

We work on a multitude of challenging problems in biomedicine. Our main application areas are neuroscience and intensive care.

Analysis of electrophysiological data:
Estimating and localizing brain interactions from non-invasive recordings such as magneto- and electroencephalography (M/EEG) is of high interest for clinical and cognitive neuroscience. QAI labs develop advanced Bayesian methods for M/EEG inverse source reconstruction and state-of-the-art signal processing methods for robust time series interaction estimation. These are applied to healthy and clinical populations to derive normative functional connectomes and to identify biomarkers of mental disorders.

Synthesis and analysis of intensive care data:
Patient data routinely collected in hospitals bear tremendous potential to study critical diseases and forecast impending complications. Privacy concerns, however, often prohibit data sharing across centers, thus impeding scientific progress. To address this, we develop and carefully validate generative models that can synthesize time series data recorded in intensive care units (ICUs). Moreover, we use machine learning to estimate causal effects of treatments in ICU settings.

Community

We believe in open science and the wisdom of crowds. Therefore, we publish much of our research code on GitHub. We also publish simulation frameworks (Huang et al., 2016; Haufe and Ewald, 2019), benchmark data and evaluation code (Wilming et al., 2022), and organize data analysis challenges (Langer et al., 2022) to foster progress in the field.

Labs

Currently, there are QAI labs at Technische Universität Berlin (TUB), Physikalisch-Technische Bundesanstalt (PTB) in Berlin, and Charité – Universitätsmedizin Berlin.
At TUB, the Uncertainty, Inverse Modeling and Machine Learning Group (UNIML) is part of the Institute for Theoretical Computer Science and Software Engineering and has a focus on methods development, mathematical modeling, and teaching. Its counterpart at the PTB is Working Group 8.44 Machine Learning and Uncertainty, which also engages in AI standardization efforts.
At Charité, the Brain and Data Science Lab is part of the Department of Neurology and situated at the Berlin Center for Advanced Neuroimaging (BCAN). Here we work towards the identification of biomarkers for brain disorders from neuroimaging data.

Head

QAI labs are headed by computer scientist Prof. Dr. rer. nat. Stefan Haufe. Dr. Haufe holds a PhD in machine learning from Technische Universität Berlin. He received postdoctoral training in the biomedical engineering departments of the City College of New York and Columbia University. In 2019, he became head of the Braindata Lab at Charité – Universitätsmedizin Berlin, funded through a Starting Grant of the European Research Council (ERC). In 2021, he became Professor of Uncertainty, Inverse Modeling and Machine Learning at TUB and head of Working Group 8.44 at PTB through a joint appointment. Further details on his CV, including a list of publications, are provided under Head.

Team

Currently, about 15 scientists with academic backgrounds in computer science, mathematics, engineering, psychology, and neuroscience, ranging from master's student to professor level, are part of QAI labs. Please find out more about individual members under Team.

We are always interested in strengthening our team. Please have a look under Jobs for open positions. A list of potential topics for degree theses or lab rotation/exchange visit projects can be found under Projects.

Teaching

At TUB, we currently offer two weekly courses for master's-level students in computer science and related disciplines. In these seminars, students deliver presentations on select topics based on curated material, framed by contributions from the QAI teaching staff. In the Machine Learning and Inverse Modeling seminar, students explore the fundamentals of various neuroimaging techniques and learn data analysis strategies ranging from classical to cutting-edge. In the Quality Assurance in Machine Learning seminar, students become familiar with all quality aspects of AI modeling, ranging from data quality to fairness and privacy; they are also introduced to regulatory frameworks and conduct an audit. Moreover, we organize an AI forensics clinic in which students analyze weak spots in sandbox AI models. All courses are held in English.

Partners

QAI labs are affiliated with the Einstein Center for Neurosciences (ECN) Berlin and the Bernstein Center for Computational Neuroscience (BCCN) Berlin and are part of Collaborative Research Centre TRR 295 ReTune on movement disorders. We have a dense network of collaborators within and outside these centers and our institutes. Our main external collaborators include the Max Planck Institute for Human Cognitive and Brain Sciences, the Universitätsklinikum Hamburg-Eppendorf (UKE), the Universitätsklinikum Würzburg (UKW), the Universität Zürich (UZH), the University of California San Francisco (UCSF), the University of California San Diego (UCSD), the City College of New York (CCNY), and Columbia University. We are also involved in various efforts towards standardizing AI organized by the Deutsches Institut für Normung (DIN).

Funding

QAI labs are funded by the European Research Council (ERC), the Einstein Foundation Berlin, the Heidenhain Foundation, the German Federal Ministry for Economic Affairs and Climate Action (BMWK), and the European Partnership on Metrology (EPM). Research conducted within QAI labs is also supported by the German Research Foundation (DFG).