Prof. Dr. Marina Höhne
Articles in peer-reviewed journals [12 results]
- Bareeva, D.; Höhne, M.; Warnecke, A.; Pirch, L.; Müller, K.; Rieck, K.; Bykov, K. (2024): Manipulating Feature Visualizations with Gradient Slingshots. arXiv, p. 1-19. Online: https://doi.org/10.48550/arXiv.2401.06122
- Gautam, S.; Höhne, M.; Hansen, S.; Jenssen, R.; Kampffmeyer, M. (2023): This Looks More Like That: Enhancing Self-Explaining Models by Prototypical Relevance Propagation. Pattern Recognition, (April): art. 109172. Online: https://doi.org/10.1016/j.patcog.2022.109172
- Bykov, K.; Müller, K.; Höhne, M. (2023): Mark My Words: Dangers of Watermarked Images in ImageNet. arXiv, p. 1-10. Online: https://doi.org/10.48550/arXiv.2303.05498
- Bommer, P.; Kretschmer, M.; Hedström, A.; Bareeva, D.; Höhne, M. (2023): Finding the right XAI method -- A Guide for the Evaluation and Ranking of Explainable AI Methods in Climate Science. arXiv, p. 1-30. Online: https://doi.org/10.48550/arXiv.2303.00652
- Bykov, K.; Deb, M.; Grinwald, D.; Müller, K.; Höhne, M. (2023): DORA: Exploring Outlier Representations in Deep Neural Networks. Transactions on Machine Learning Research, (06): p. 1-43. Online: https://doi.org/10.48550/arXiv.2206.04530
- Hedström, A.; Bommer, P.; Wickstrøm, K.; Samek, W.; Lapuschkin, S.; Höhne, M. (2023): The Meta-Evaluation Problem in Explainable AI: Identifying Reliable Estimators with MetaQuantus. Transactions on Machine Learning Research, (06): p. 1-35. Online: https://openreview.net/forum?id=j3FK00HyfU
- Hanfeld, P.; Höhne, M.; Bussmann, M.; Hönig, W. (2023): Flying Adversarial Patches: Manipulating the Behavior of Deep Learning-based Autonomous Multirotors. arXiv, p. 1-6. Online: https://doi.org/10.48550/arXiv.2305.12859
- Hanfeld, P.; Wahba, K.; Höhne, M.; Bussmann, M.; Hönig, W. (2023): Kidnapping Deep Learning-based Multirotors using Optimized Flying Adversarial Patches. arXiv, p. 1-7. Online: https://doi.org/10.48550/arXiv.2308.00344
- Grinwald, D.; Bykov, K.; Nakajima, S.; Höhne, M. (2023): Visualizing the Diversity of Representations Learned by Bayesian Neural Networks. Transactions on Machine Learning Research, (11): p. 1-25. Online: https://openreview.net/pdf?id=ZSxvyWrX6k
- Bykov, K.; Kopf, L.; Nakajima, S.; Kloft, M.; Höhne, M. (2023): Labeling Neural Representations with Inverse Recognition. arXiv, p. 1-24. Online: https://doi.org/10.48550/arXiv.2311.13594
Contributions to edited volumes [7 results]
- Liu, S.; Hedström, A.; Hanike Basavegowda, D.; Weltzien, C.; Höhne, M. (2024): Explainable AI in grassland monitoring: Enhancing model performance and domain adaptability. In: Hoffmann, C.; Stein, A.; Gallmann, E.; Dörr, J.; Krupitzer, C.; Floto, H. (eds.): Informatik in der Land-, Forst- und Ernährungswirtschaft. Focus: Biodiversität fördern durch digitale Landwirtschaft: Welchen Beitrag leisten KI und Co? 44. GIL-Jahrestagung. Gesellschaft für Informatik (GI), Bonn, (1617-5468/978-3-88579-738-8), p. 143-154. Online: https://gil-net.de/wp-content/uploads/2024/02/GI_Proceedings_344-3.f-1.pdf
- Hanike Basavegowda, D.; Höhne, M.; Weltzien, C. (2024): Deep Learning-based UAV-assisted grassland monitoring to facilitate Eco-scheme 5 realization. In: Hoffmann, C.; Stein, A.; Gallmann, E.; Dörr, J.; Krupitzer, C.; Floto, H. (eds.): Informatik in der Land-, Forst- und Ernährungswirtschaft. Focus: Biodiversität fördern durch digitale Landwirtschaft: Welchen Beitrag leisten KI und Co? 44. GIL-Jahrestagung. Gesellschaft für Informatik (GI), Bonn, (1617-5468/978-3-88579-738-8), p. 197-202. Online: https://gil-net.de/wp-content/uploads/2024/02/GI_Proceedings_344-3.f-1.pdf
- Bykov, K.; Müller, K.; Höhne, M. (2024): Mark My Words: Dangers of Watermarked Images in ImageNet. In: Nowaczyk, S.; et al. (eds.): Artificial Intelligence. ECAI 2023 International Workshops. Proceedings, Part I. ECAI 2023 XI-ML Workshops. Springer, Cham, Switzerland, (1865-0929/978-3-031-50396-2), p. 426-434. Online: https://doi.org/10.1007/978-3-031-50396-2_24
- Bykov, K.; Müller, K.; Höhne, M. (2023): Mark My Words: Dangers of Watermarked Images in ImageNet. In: ICLR 2023 Workshop on Pitfalls of Limited Data and Computation for Trustworthy ML. ICLR 2023. p. 1-10. Online: https://openreview.net/forum?id=0stsgHlCxS
- Hedström, A.; Weber, L.; Lapuschkin, S.; Höhne, M. (2023): Sanity Checks Revisited: An Exploration to Repair the Model Parameter Randomisation Test. In: XAI in Action: Past, Present, and Future Applications. NeurIPS 2023. Neural Information Processing Systems, San Diego, p. 1-19. Online: https://openreview.net/forum?id=vVpefYmnsG
- Bykov, K.; Kopf, L.; Nakajima, S.; Kloft, M.; Höhne, M. (2023): Labeling Neural Representations with Inverse Recognition. In: Advances in Neural Information Processing Systems 35 (NeurIPS 2023). NeurIPS 2023. Neural Information Processing Systems, San Diego, p. 1-24. Online: https://doi.org/10.48550/arXiv.2311.13594
- Bykov, K.; Kopf, L.; Höhne, M. (2023): Finding Spurious Correlations with Function-Semantic Contrast Analysis. In: Longo, L. (ed.): Conference Proceedings, Part II: eXplainable Artificial Intelligence. 1st World Conference on eXplainable Artificial Intelligence (xAI 2023), Lisbon, Portugal, July 26-28, 2023. Springer, Cham, Switzerland, (1865-0929/978-3-031-44066-3), p. 549-572. Online: https://doi.org/10.1007/978-3-031-44067-0_28
Talks and posters [17 results]
- Hanike Basavegowda, D.; Höhne, M.; Weltzien, C. (2024): Deep Learning-based UAV-assisted grassland monitoring to facilitate Eco-scheme 5 realization.
- Liu, S.; Hedström, A.; Hanike Basavegowda, D.; Weltzien, C.; Höhne, M. (2024): Explainable AI in Grassland Monitoring: Enhancing Model Performance and Domain Adaptability.
- Höhne, M. (2023): How much can I trust you? Towards Understanding Neural Networks.
- Höhne, M. (2023): KI - Perspektiven in der Bioökonomie.
- Bommer, P.; Kretschmer, M.; Hedström, A.; Bareeva, D.; Höhne, M. (2023): Evaluation of explainable AI solutions in climate science.
- Bykov, K.; Deb, M.; Grinwald, D.; Müller, K.; Höhne, M. (2023): DORA: Exploring Outlier Representations in Deep Neural Networks.
- Bykov, K.; Müller, K.; Höhne, M. (2023): Mark My Words: Dangers of Watermarked Images in ImageNet.
- Höhne, M. (2023): How much can I trust you? Towards Understanding Neural Networks.
- Bykov, K.; Kopf, L.; Höhne, M. (2023): Finding Spurious Correlations with Function-Semantic Contrast Analysis.
- Höhne, M. (2023): How much can I trust you? Towards Understanding Neural Networks.