Gesina Schwalbe
Postdoc, University of Lübeck
Title · Cited by · Year
A comprehensive taxonomy for explainable artificial intelligence: a systematic survey of surveys on methods and concepts
G Schwalbe, B Finzel
Data Mining and Knowledge Discovery, 1-59, 2023
Cited by 200* · 2023
Inspect, understand, overcome: A survey of practical methods for ai safety
S Houben, S Abrecht, M Akila, A Bär, F Brockherde, P Feifel, ...
Deep Neural Networks and Data for Automated Driving: Robustness, Uncertainty …, 2022
Cited by 65 · 2022
A survey on methods for the safety assurance of machine learning based systems
G Schwalbe, M Schels
10th European Congress on Embedded Real Time Software and Systems (ERTS 2020), 2020
Cited by 64 · 2020
Structuring the safety argumentation for deep neural network based perception in automotive applications
G Schwalbe, B Knie, T Sämann, T Dobberphul, L Gauerhof, S Raafatnia, ...
Computer Safety, Reliability, and Security. SAFECOMP 2020 Workshops: DECSoS …, 2020
Cited by 34 · 2020
Concept embedding analysis: A review
G Schwalbe
arXiv preprint arXiv:2203.13909, 2022
Cited by 27 · 2022
Expressive explanations of DNNs by combining concept analysis with ILP
J Rabold, G Schwalbe, U Schmid
KI 2020: Advances in Artificial Intelligence: 43rd German Conference on AI …, 2020
Cited by 26 · 2020
Concept enforcement and modularization as methods for the ISO 26262 safety argumentation of neural networks
G Schwalbe, M Schels
Otto-Friedrich-Universität, 2020
Cited by 18 · 2020
Knowledge augmented machine learning with applications in autonomous driving: A survey
J Wörmann, D Bogdoll, C Brunner, E Bührle, H Chen, EF Chuo, ...
arXiv preprint arXiv:2205.04712, 2022
Cited by 15 · 2022
Interpretable model-agnostic plausibility verification for 2D object detectors using domain-invariant concept bottleneck models
M Keser, G Schwalbe, A Nowzad, A Knoll
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2023
Cited by 10 · 2023
Evaluating the stability of semantic concept representations in CNNs for robust explainability
G Mikriukov, G Schwalbe, C Hellert, K Bade
World Conference on Explainable Artificial Intelligence, 499-524, 2023
Cited by 9 · 2023
Verification of size invariance in DNN activations using concept embeddings
G Schwalbe
IFIP International Conference on Artificial Intelligence Applications and …, 2021
Cited by 6 · 2021
Revealing Similar Semantics Inside CNNs: An Interpretable Concept-based Comparison of Feature Spaces
G Mikriukov, G Schwalbe, C Hellert, K Bade
arXiv preprint arXiv:2305.07663, 2023
Cited by 4* · 2023
Enabling Verification of Deep Neural Networks in Perception Tasks Using Fuzzy Logic and Concept Embeddings
G Schwalbe, C Wirth, U Schmid
arXiv preprint arXiv:2201.00572, 2022
Cited by 4* · 2022
GCPV: Guided Concept Projection Vectors for the Explainable Inspection of CNN Feature Spaces
G Mikriukov, G Schwalbe, C Hellert, K Bade
arXiv preprint arXiv:2311.14435, 2023
Cited by 3 · 2023
Strategies for safety goal decomposition for neural networks
G Schwalbe, M Schels
Abstracts 3rd ACM Computer Science in Cars Symposium, 2019
Cited by 3 · 2019
Have We Ever Encountered This Before? Retrieving Out-of-Distribution Road Obstacles from Driving Scenes
Y Shoeb, R Chan, G Schwalbe, A Nowzad, F Güney, H Gottschalk
Proceedings of the IEEE/CVF Winter Conference on Applications of Computer …, 2024
Cited by 2 · 2024
Unveiling Ontological Commitment in Multi-Modal Foundation Models
M Keser, G Schwalbe, N Amini-Naieni, M Rottmann, A Knoll
arXiv preprint arXiv:2409.17109, 2024
2024
Concept-Based Explanations in Computer Vision: Where Are We and Where Could We Go?
JH Lee, G Mikriukov, G Schwalbe, S Wermter, D Wolter
arXiv preprint arXiv:2409.13456, 2024
2024
Unveiling the Anatomy of Adversarial Attacks: Concept-Based XAI Dissection of CNNs
G Mikriukov, G Schwalbe, F Motzkus, K Bade
World Conference on Explainable Artificial Intelligence, 92-116, 2024
2024
Investigating Calibration and Corruption Robustness of Post-hoc Pruned Perception CNNs: An Image Classification Benchmark Study
P Mitra, G Schwalbe, N Klein
arXiv preprint arXiv:2405.20876, 2024
2024
Articles 1–20