
Prof. Dr. Marina Höhne
Head of Department
Department: Data Science in Bioeconomy
Phone: +49 (0)331 5699 902
E-mail: MHoehne@atb-potsdam.de
Committee memberships
- Climate Change Center Berlin Brandenburg
- ELLIS Society, the European Laboratory for Learning and Intelligent Systems
- Berlin AI Competence Center BIFOLD, the Berlin Institute for the Foundations of Learning and Data
Projects
- Joint Lab KI.DS – Joint Lab Artificial Intelligence & Data Science. Within the Joint Lab, the University of Osnabrück and ATB jointly train doctoral candidates at the interface of agricultural science and artificial intelligence…
- DCropS4OneHealth – Diversifying cropping systems for the One Health of soils, plants and humans (funding programme BiodivGesundheit). The diversification of cropping…
- XAI-Mobil – Towards Reliable Artificial Intelligence for Explainable, Interactive and Self-evolving Systems. A project within the mobility funding programme of the Sino-German Center for Research Promotion, together with researchers from Sun Yat-sen Univ…
- Explaining 4.0 – Artificial Intelligence: Transparency and Efficiency. The aim of the Explaining 4.0 project is to develop methods that make a significant contribution to a holistic, global understanding of AI models. …
Publications
- Gautam, S.; Höhne, M.; Hansen, S.; Jenssen, R.; Kampffmeyer, M. (2023): This Looks More Like That: Enhancing Self-Explaining Models by Prototypical Relevance Propagation. Pattern Recognition (April): 109172. Online: https://doi.org/10.1016/j.patcog.2022.109172
- Bykov, K.; Müller, K.; Höhne, M. (2023): Mark My Words: Dangers of Watermarked Images in ImageNet. arXiv, pp. 1-10. Online: https://doi.org/10.48550/arXiv.2303.05498
- Bykov, K.; Deb, M.; Grinwald, D.; Müller, K.; Höhne, M. (2023): DORA: Exploring Outlier Representations in Deep Neural Networks. arXiv, pp. 1-34. Online: https://doi.org/10.48550/arXiv.2206.04530
- Bommer, P.; Kretschmer, M.; Hedström, A.; Bareeva, D.; Höhne, M. (2023): Finding the Right XAI Method – A Guide for the Evaluation and Ranking of Explainable AI Methods in Climate Science. arXiv, pp. 1-30. Online: https://doi.org/10.48550/arXiv.2303.00652
- Hedström, A.; Bommer, P.; Wickstrøm, K.; Samek, W.; Lapuschkin, S.; Höhne, M. (2023): The Meta-Evaluation Problem in Explainable AI: Identifying Reliable Estimators with MetaQuantus. arXiv, pp. 1-30. Online: https://doi.org/10.48550/arXiv.2302.07265
- Bykov, K.; Deb, M.; Grinwald, D.; Müller, K.; Höhne, M. (2023): DORA: Exploring Outlier Representations in Deep Neural Networks. Transactions on Machine Learning Research (06), pp. 1-43. Online: https://openreview.net/forum?id=nfYwRIezvg
- Hedström, A.; Bommer, P.; Wickstrøm, K.; Samek, W.; Lapuschkin, S.; Höhne, M. (2023): The Meta-Evaluation Problem in Explainable AI: Identifying Reliable Estimators with MetaQuantus. Transactions on Machine Learning Research (06), pp. 1-35. Online: https://openreview.net/forum?id=j3FK00HyfU
Publications prior to joining ATB
- Gautam, S., Höhne, M. M.-C., Hansen, S., Jenssen, R., & Kampffmeyer, M. (2022). Demonstrating The Risk of Imbalanced Datasets in Chest X-ray Image-based Diagnostics by Prototypical Relevance Propagation. In 2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI) (pp. 1-5). IEEE
- Hedström, A., Weber, L., Bareeva, D., Motzkus, F., Samek, W., Lapuschkin, S., and Höhne, M. M.-C. (2022). Quantus: an explainable AI toolkit for responsible evaluation of neural network explanations. arXiv preprint arXiv:2202.06861
- Mieth, B., Rozier, A., Rodriguez, J. A., Höhne, M.M.-C., Görnitz, N., & Müller, K. R. (2021): DeepCOMBI: explainable artificial intelligence for the analysis and discovery in genome-wide association studies. NAR Genomics and Bioinformatics, 3(3), lqab065
- Bykov, K., Deb, M., Grinwald, D., Müller, K.R. and Höhne, M.M.-C., 2022. DORA: Exploring outlier representations in Deep Neural Networks. arXiv preprint arXiv:2206.04530
- Bykov, K., Hedström, A., Nakajima, S., and Höhne, M.M.-C. (2022): NoiseGrad: enhancing explanations by introducing stochasticity to model weights. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 36, No. 6, pp. 6132-6140)
- Bykov, K., Höhne, M.M.-C., Creosteanu, A., Müller, K.-R., Klauschen, F., Nakajima, S., & Kloft, M. (2021). Explaining Bayesian neural networks. arXiv preprint arXiv:2108.10346
- Mieth, B., Hockley, J. R., Görnitz, N., Vidovic, M.M.-C., Müller, K. R., Gutteridge, A., and Ziemek, D. (2019): Using transfer learning from prior reference knowledge to improve the clustering of single-cell RNA-Seq data. Scientific reports, 9(1), 1-14
- Vidovic, M.M.-C., Kloft M., Müller K.-R., and Görnitz N., 2017. ML2motif – Reliable extraction of discriminative sequence motifs from learning machines. PloS one 12.3, e0174392
- Vidovic M.M.-C., Görnitz N., Müller K.-R., and Kloft M., 2016. Feature importance measure for non-linear learning algorithms. arXiv preprint arXiv:1611.07567
- Vidovic, M.M.-C., Hwang H.J., Amsüss S., Hahne J.M., Farina D., and Müller K.-R. (2016): Improving the robustness of myoelectric pattern recognition for upper limb prostheses by covariate shift adaptation. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 24.9, 961-970
- Vidovic, M.M.-C., Görnitz N., Müller K.-R., Rätsch G., and Kloft M. (2015): SVM2Motif: Reconstructing Overlapping DNA Sequence Motifs by Mimicking an SVM Predictor. PloS one 10.12, e0144782