Michele Cafagna

NLP Data Scientist @Okra.ai


Leiden, The Netherlands

I’m Michele [miˈkɛːle], an NLP R&D Scientist at Okra.ai.

I work in Medical NLP; however, my interests lie at the intersection of Computer Vision, Natural Language, Cognitive Sciences, and XAI.

I received my PhD from the Institute of Linguistics & Language Technology at the University of Malta, 🇲🇹, supervised by Prof. Albert Gatt and co-supervised by Prof. Kees van Deemter. There, I was a Marie Curie PhD Fellow and Early Stage Researcher in the NL4XAI Project. During my PhD, I worked as a Visiting Researcher at Utrecht University, 🇳🇱, focusing on Multimodal Grounding, and interned at the 🍊Orange Innovation Labs in Lannion, 🇫🇷.

Previously, I was a Machine Learning Research Scientist at Aptus.AI, a RegTech startup based in Pisa, 🇮🇹. I earned my Master’s in Computer Science and AI at the University of Pisa, 🇮🇹, with a thesis project in NLG carried out as a Visiting Researcher at the Center for Language and Cognition Groningen (CLCG) of the University of Groningen, 🇳🇱. I have also collaborated with the ItaliaNLP Lab at the Institute for Computational Linguistics of the National Research Council (ILC-CNR) in Pisa, 🇮🇹.

news

May 15, 2024 My PhD thesis is online 🎉: Visually Grounded Language Generation: Data, Models and Explanations beyond Descriptive Captions.
Mar 18, 2024 Joined Okra.ai as NLP Data Scientist, 🇳🇱 🎉
Jan 16, 2024 Our paper “ViLMA: A Zero-Shot Benchmark for Linguistic and Temporal Grounding in Video-Language Models” accepted @ICLR 2024, Vienna, 🇦🇹 🎉
Oct 10, 2023 Reviewer for LREC-COLING 2024, Torino, 🇮🇹
Sep 11, 2023 Presented the poster “HL Dataset: Visually-grounded Description of Scenes, Actions and Rationales” @INLG 2023, Prague, 🇨🇿
Sep 4, 2023 Our paper “Interpreting Vision and Language Generative Models with Semantic Visual Priors” published in @Frontiers in AI Journal
Aug 6, 2023 Reviewer for 26th European Conference on Artificial Intelligence ECAI 2023, Kraków, 🇵🇱
Jul 13, 2023 Our paper “HL Dataset: Visually-grounded Description of Scenes, Actions and Rationales” accepted @INLG 2023, Prague, 🇨🇿 🎉
Jun 18, 2023 Reviewer for MMNLG2023 co-located with INLG 2023, Prague, 🇨🇿
Jun 14, 2023 Reviewer for EMNLP2023, Singapore, 🇸🇬

selected publications

  1. Interpreting vision and language generative models with semantic visual priors
Michele Cafagna, Lina M. Rojas-Barahona, Kees van Deemter, and Albert Gatt
    Frontiers in Artificial Intelligence, 2023
  2. VALSE: A Task-Independent Benchmark for Vision and Language Models Centered on Linguistic Phenomena
    Letitia Parcalabescu, Michele Cafagna, Lilitta Muradjan, Anette Frank, Iacer Calixto, and Albert Gatt
    In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), May 2022
  3. HL Dataset: Visually-grounded Description of Scenes, Actions and Rationales
Michele Cafagna, Kees van Deemter, and Albert Gatt
    In Proceedings of the 16th International Natural Language Generation Conference, Sep 2023
  4. Understanding Cross-modal Interactions in V&L Models that Generate Scene Descriptions
    Michele Cafagna, Kees van Deemter, and Albert Gatt
    In Proceedings of the Workshop on Unimodal and Multimodal Induction of Linguistic Structures (UM-IoS), Dec 2022
  5. What Vision-Language Models ‘See’ when they See Scenes
Michele Cafagna, Kees van Deemter, and Albert Gatt
    arXiv preprint arXiv:2109.07301, Dec 2021