Michele Cafagna

University of Malta


Msida, Malta

I’m Michele [miˈkɛːle], third-year Marie Curie PhD Fellow at the Institute of Linguistics & Language Technology at the University of Malta, 🇲🇹. My primary interests lie at the intersection of Computer Vision, Natural Language, Cognitive Sciences, and XAI.

I’m supervised by Prof. Albert Gatt and co-supervised by Prof. Kees van Deemter. I’m an Early Stage Researcher in the NL4XAI project, funded by the EU’s Horizon 2020 programme. I’ve been a Visiting Researcher at Utrecht University, 🇳🇱, working on Multimodal Grounding, and an intern at the 🍊Orange Innovations Labs in Lannion, 🇫🇷.

Previously, I was a Machine Learning Research Scientist at Aptus.AI, a RegTech startup based in Pisa, 🇮🇹. Before that, I was a Visiting Researcher at the Center for Language and Cognition of the University of Groningen (CLCG), 🇳🇱, where I worked on my Master’s thesis project on Natural Language Generation. I have also collaborated with the ItaliaNLP Lab at the Institute for Computational Linguistics of the National Research Council (ILC-CNR), based in Pisa, 🇮🇹.

news

Jan 16, 2024 Our paper “ViLMA: A Zero-Shot Benchmark for Linguistic and Temporal Grounding in Video-Language Models” accepted @ICLR 2024, Vienna, 🇦🇹 🎉
Oct 10, 2023 Reviewer for LREC-COLING 2024, Torino, 🇮🇹
Sep 11, 2023 Presented the poster “HL Dataset: Visually-grounded Description of Scenes, Actions and Rationales” @INLG 2023, Prague, 🇨🇿
Sep 4, 2023 Our paper “Interpreting Vision and Language Generative Models with Semantic Visual Priors” published in the Frontiers in AI journal
Aug 6, 2023 Reviewer for 26th European Conference on Artificial Intelligence ECAI 2023, Kraków, 🇵🇱
Jul 13, 2023 Our paper “HL Dataset: Visually-grounded Description of Scenes, Actions and Rationales” accepted @INLG 2023, Prague, 🇨🇿 🎉
Jun 18, 2023 Reviewer for MMNLG2023 co-located with INLG 2023, Prague, 🇨🇿
Jun 14, 2023 Reviewer for EMNLP2023, Singapore, 🇸🇬
May 8, 2023 Proud to be among the outstanding reviewers who served @EACL 2023, Dubrovnik, 🇭🇷
Apr 28, 2023 Our paper “Interpreting Vision and Language Generative Models with Semantic Visual Priors” is online 🎉. 🗃️ Code available here

selected publications

  1. Interpreting vision and language generative models with semantic visual priors
    Michele Cafagna, Lina M. Rojas-Barahona, Kees van Deemter, and Albert Gatt
    Frontiers in Artificial Intelligence, 2023
  2. VALSE: A Task-Independent Benchmark for Vision and Language Models Centered on Linguistic Phenomena
    Letitia Parcalabescu, Michele Cafagna, Lilitta Muradjan, Anette Frank, Iacer Calixto, and Albert Gatt
    In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), May 2022
  3. HL Dataset: Visually-grounded Description of Scenes, Actions and Rationales
    Michele Cafagna, Kees van Deemter, and Albert Gatt
    In Proceedings of the 16th International Natural Language Generation Conference, Sep 2023
  4. Understanding Cross-modal Interactions in V&L Models that Generate Scene Descriptions
    Michele Cafagna, Kees van Deemter, and Albert Gatt
    In Proceedings of the Workshop on Unimodal and Multimodal Induction of Linguistic Structures (UM-IoS), Dec 2022
  5. What Vision-Language Models ‘See’ when they See Scenes
    Michele Cafagna, Kees van Deemter, and Albert Gatt
    arXiv preprint arXiv:2109.07301, Dec 2021