RGCL Project: Research on Hateful Meme Detection

Publications: RGCL (ACL 2024) and RA-HMD (EMNLP 2025)

Separate papers; shared theme and authors

University of Cambridge

Research Overview

Detecting hateful memes requires reasoning over text and images, where cues can be subtle, compositional, or context-dependent. We present a line of work comprising two papers: RGCL (ACL 2024), which proposes a retrieval-guided contrastive learning approach for hateful meme detection, and RA-HMD (EMNLP 2025), which extends the approach with large multimodal models (LMMs) for improved robustness and cross-domain generalization. This page provides links, high-level contributions, and citation guidance for each paper.

ACL 2024 Foundational Method

Improving Hateful Meme Detection through Retrieval-Guided Contrastive Learning

Proposes a hatefulness-aware embedding space learned through retrieval-guided contrastive training. Achieves state-of-the-art performance on the HatefulMemes dataset (87.0 AUROC). Demonstrates a retrieval-based detection system that can be updated by adding new examples, without retraining.

Cite this paper for the core RGCL method and baseline results.

EMNLP 2025 LMM Extension

Robust Adaptation of Large Multimodal Models for Retrieval Augmented Hateful Meme Detection

Extends RGCL with large multimodal models (LMMs) for enhanced robustness and cross-domain generalization. Achieves state-of-the-art results on six meme classification datasets. Improves robustness under adversarial attacks and generates higher-quality rationales.

Cite this paper for the LMM-based approach and extended evaluation.

How These Papers Relate

RGCL (ACL 2024) is the foundational method paper that introduces retrieval-guided contrastive learning for hateful meme detection. RA-HMD (EMNLP 2025) is a follow-up that extends the research direction using large multimodal models (LMMs) and adds new evaluations across six datasets with improved robustness analyses.

They are independent publications; cite the paper that corresponds to the method or results you use (or cite both if you build on both contributions).

ACL 2024 RGCL: Retrieval-Guided Contrastive Learning

Abstract

Hateful memes have emerged as a significant concern on the Internet. Detecting hateful memes requires the system to jointly understand the visual and textual modalities. Our investigation reveals that the embedding space of existing CLIP-based systems lacks sensitivity to subtle differences in memes that are vital for correct hatefulness classification. We propose constructing a hatefulness-aware embedding space through retrieval-guided contrastive training. Our approach achieves state-of-the-art performance on the HatefulMemes dataset with an AUROC of 87.0, outperforming much larger fine-tuned large multimodal models. We demonstrate a retrieval-based hateful meme detection system, which is capable of identifying hatefulness based on data unseen in training. This allows developers to update the hateful meme detection system by simply adding new examples without retraining, a desirable feature for real services in the constantly evolving landscape of hateful memes on the Internet.

Model Architecture

1. A VL encoder F extracts the joint vision-language representation for each training example i. The same encoder also encodes the training memes into a retrieval database G.
2. During training, pseudo-gold and hard negative examples are obtained with Faiss nearest-neighbour search; during inference, the same querying process retrieves the K nearest neighbours for KNN-based inference.
3. During training, we optimise the joint loss function L.
4. For inference, we use both a conventional logistic classifier and our proposed retrieval-based KNN majority voting.

Figure: RGCL model architecture.
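The retrieval and voting steps (2 and 4 above) can be sketched in a few lines. The snippet below is a minimal illustration, assuming precomputed VL-encoder embeddings and cosine similarity via an inner-product Faiss index; the function names, embedding dimension, and choice of K are illustrative assumptions, not the released RGCL code.

    import numpy as np
    import faiss  # pip install faiss-cpu

    def build_retrieval_db(train_embeds: np.ndarray) -> faiss.IndexFlatIP:
        """Index L2-normalised training-meme embeddings so that inner-product
        search equals cosine-similarity search (an assumption for this sketch)."""
        db = train_embeds.astype(np.float32)
        faiss.normalize_L2(db)                      # in-place L2 normalisation
        index = faiss.IndexFlatIP(db.shape[1])
        index.add(db)
        return index

    def knn_majority_vote(index: faiss.IndexFlatIP,
                          train_labels: np.ndarray,
                          query_embeds: np.ndarray,
                          k: int = 10) -> np.ndarray:
        """Label each query meme by majority vote over its K nearest neighbours."""
        q = query_embeds.astype(np.float32)
        faiss.normalize_L2(q)
        _, nn_ids = index.search(q, k)              # nn_ids: (num_queries, k)
        votes = train_labels[nn_ids]                # neighbour labels in {0, 1}
        return (votes.mean(axis=1) >= 0.5).astype(int)  # 1 = hateful

    # Toy usage with random stand-ins for VL-encoder outputs:
    rng = np.random.default_rng(0)
    train_x = rng.normal(size=(1000, 768)).astype(np.float32)
    train_y = rng.integers(0, 2, size=1000)
    index = build_retrieval_db(train_x)
    preds = knn_majority_vote(index, train_y, train_x[:4], k=10)

Because the retrieval database is just a store of labelled embeddings, new memes can be appended with index.add without any retraining, which is the update property highlighted in the abstract.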

Performance on Hateful Meme Detection

We show the performance of RGCL on the hateful meme detection task on the HatefulMemes and HarMeme datasets. AUROC is reported for HatefulMemes and accuracy for HarMeme.

Model           HatefulMemes (AUROC)   HarMeme (Accuracy)
Previous SOTA   86.60 (Flamingo)       90.80 (LLaVA-13B)
RGCL (ours)     87.0                   91.8

EMNLP 2025 RA-HMD: Robust Adaptation with Large Multimodal Models

Abstract

Hateful memes have become a significant concern on the Internet, necessitating robust automated detection systems. While Large Multimodal Models (LMMs) have shown promise in hateful meme detection, they face notable challenges like sub-optimal performance and limited out-of-domain generalization capabilities. Recent studies further reveal the limitations of both supervised fine-tuning (SFT) and in-context learning when applied to LMMs in this setting. To address these issues, we propose a robust adaptation framework for hateful meme detection that enhances in-domain accuracy and cross-domain generalization while preserving the general vision-language capabilities of LMMs. Analysis reveals that our approach achieves improved robustness under adversarial attacks compared to SFT models. Experiments on six meme classification datasets show that our approach achieves state-of-the-art performance, outperforming larger agentic systems. Moreover, our method generates higher-quality rationales for explaining hateful content compared to standard SFT, enhancing model interpretability.

Key Contributions

  • Extends RGCL with large multimodal models for enhanced robustness
  • Evaluates on six meme classification datasets with state-of-the-art results
  • Demonstrates improved robustness under adversarial attacks
  • Generates higher-quality rationales for model interpretability
  • Preserves general vision-language capabilities while improving in-domain performance

Model Architecture

The RA-HMD framework employs a two-stage training pipeline using a Large Multimodal Model (LMM) backbone with LoRA adaptation. Stage 1 trains the retrieval encoder with a contrastive loss, while Stage 2 fine-tunes the model with language-modelling and retrieval-guided losses. The system encodes training memes into a database and uses K-nearest-neighbour retrieval with positive and negative samples for robust hateful meme detection.

Figure: RA-HMD model architecture.
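To make the two-stage objective concrete, the sketch below implements an InfoNCE-style retrieval-guided contrastive term over a pseudo-gold positive and retrieved hard negatives, together with an assumed Stage-2 combination with the language-modelling loss. The loss form, tensor shapes, and the weight lambda_ret are assumptions for illustration; they are not taken from the released RA-HMD code.

    import torch
    import torch.nn.functional as F

    def retrieval_guided_loss(anchor, positive, negatives, temperature=0.07):
        """InfoNCE-style loss: pull each anchor meme embedding toward its
        pseudo-gold positive, push it away from retrieved hard negatives.
        anchor, positive: (B, D); negatives: (N, D), shared across the batch
        (a simplifying assumption for this sketch)."""
        anchor = F.normalize(anchor, dim=-1)
        positive = F.normalize(positive, dim=-1)
        negatives = F.normalize(negatives, dim=-1)
        pos_sim = (anchor * positive).sum(dim=-1, keepdim=True)   # (B, 1)
        neg_sim = anchor @ negatives.T                            # (B, N)
        logits = torch.cat([pos_sim, neg_sim], dim=1) / temperature
        # The positive sits at index 0 of every row of logits.
        target = torch.zeros(anchor.size(0), dtype=torch.long, device=anchor.device)
        return F.cross_entropy(logits, target)

    # Stage 1 (assumed): train the retrieval encoder with the contrastive loss alone.
    # Stage 2 (assumed): fine-tune the LoRA-adapted LMM with a joint objective:
    def stage2_loss(lm_loss, anchor, positive, negatives, lambda_ret=1.0):
        return lm_loss + lambda_ret * retrieval_guided_loss(anchor, positive, negatives)

Splitting the objectives this way mirrors the pipeline described above: the contrastive stage shapes the retrieval embedding space first, so that the Stage-2 fine-tuning can rely on stable neighbour retrieval while the language-modelling term preserves the LMM's generative capabilities.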

Citations

BibTeX: RGCL (ACL 2024)

@inproceedings{RGCL2024Mei,
    title = "Improving Hateful Meme Detection through Retrieval-Guided Contrastive Learning",
    author = "Mei, Jingbiao  and
      Chen, Jinghong  and
      Lin, Weizhe  and
      Byrne, Bill  and
      Tomalin, Marcus",
    editor = "Ku, Lun-Wei  and
      Martins, Andre  and
      Srikumar, Vivek",
    booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = aug,
    year = "2024",
    address = "Bangkok, Thailand",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.acl-long.291",
    doi = "10.18653/v1/2024.acl-long.291",
    pages = "5333--5347"
}

BibTeX: RA-HMD (EMNLP 2025)

@inproceedings{RAHMD2025Mei,
    title = "Robust Adaptation of Large Multimodal Models for Retrieval Augmented Hateful Meme Detection",
    author = "Mei, Jingbiao  and
      Chen, Jinghong  and
      Yang, Guangyu  and
      Lin, Weizhe  and
      Byrne, Bill",
    editor = "Christodoulopoulos, Christos  and
      Chakraborty, Tanmoy  and
      Rose, Carolyn  and
      Peng, Violet",
    booktitle = "Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2025",
    address = "Suzhou, China",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.emnlp-main.1215/",
    pages = "23817--23839",
    ISBN = "979-8-89176-332-6",
}

Acknowledgement

This page was adapted from the Nerfies project page, licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. Many thanks to the Academic Project Page Template.