RGCL Project: Research on Hateful Meme Detection

Publications: RGCL (ACL 2024), RA-HMD (EMNLP 2025), and ExPO-HM (ICLR 2026)

University of Cambridge

Research Overview

Detecting hateful memes requires reasoning over text and images, where cues can be subtle, compositional, or context-dependent. We present a line of work comprising three papers: RGCL (ACL 2024), which proposes a retrieval-guided contrastive learning approach for hateful meme detection; RA-HMD (EMNLP 2025), which extends the approach with large multimodal models (LMMs) for improved robustness and cross-domain generalization; and ExPO-HM (ICLR 2026), which introduces explainable policy optimization for interpretable hateful meme detection with reasoning capabilities. This page provides links, high-level contributions, and citation guidance for each paper.

ACL 2024 Foundational Method

Improving Hateful Meme Detection through Retrieval-Guided Contrastive Learning

Proposes a hatefulness-aware embedding space through retrieval-guided contrastive training. Achieves state-of-the-art performance on the HatefulMemes dataset. Demonstrates a retrieval-based detection system capable of being updated without retraining.

Cite this paper for the core RGCL method and baseline results.

EMNLP 2025 LMM Extension

Robust Adaptation of Large Multimodal Models for Retrieval Augmented Hateful Meme Detection

Extends RGCL with large multimodal models (LMMs) for enhanced robustness and cross-domain generalization. Achieves state-of-the-art results on six meme classification datasets. Improves robustness under adversarial attacks and generates higher-quality rationales.

Cite this paper for the LMM-based approach and extended evaluation.

ICLR 2026 Explainable AI

ExPO-HM: Learning to Explain-then-Detect for Hateful Meme Detection

Introduces explainable policy optimization for interpretable hateful meme detection. Combines SFT warmup, GRPO with curriculum learning, and Conditional Decision Entropy (CDE). Achieves SOTA on binary detection, fine-grained classification, and reasoning quality.

Cite this paper for the explainable detection approach with reasoning.

How These Papers Relate

RGCL (ACL 2024) is the foundational method paper that introduces retrieval-guided contrastive learning for hateful meme detection. RA-HMD (EMNLP 2025) extends the research direction using large multimodal models (LMMs) and adds new evaluations across six datasets with improved robustness analyses. ExPO-HM (ICLR 2026) advances the field further by introducing explainable policy optimization, moving from binary detection to explanation-driven detection with interpretable reasoning.

They are independent publications; cite the paper that corresponds to the method or results you use (or cite multiple papers if you build on multiple contributions).

ACL 2024 RGCL: Retrieval-Guided Contrastive Learning

Abstract

Hateful memes have emerged as a significant concern on the Internet. Detecting hateful memes requires the system to jointly understand the visual and textual modalities. Our investigation reveals that the embedding space of existing CLIP-based systems lacks sensitivity to subtle differences in memes that are vital for correct hatefulness classification. We propose constructing a hatefulness-aware embedding space through retrieval-guided contrastive training. Our approach achieves state-of-the-art performance on the HatefulMemes dataset with an AUROC of 87.0, outperforming much larger fine-tuned large multimodal models. We demonstrate a retrieval-based hateful memes detection system, which is capable of identifying hatefulness based on data unseen in training. This allows developers to update the hateful memes detection system by simply adding new examples without retraining, a desirable feature for real services in the constantly evolving landscape of hateful memes on the Internet.

Model Architecture

1. A VL Encoder F extracts the joint vision-language representation for a training example i. The VL Encoder also encodes the training memes into a retrieval database G.
2. During training, pseudo-gold and hard negative examples are obtained with Faiss nearest-neighbour search. During inference, the K nearest neighbours are obtained with the same querying process to perform KNN-based inference.
3. During training, we optimise the joint loss function L.
4. For inference, we use a conventional logistic classifier together with our proposed retrieval-based KNN majority voting.
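The retrieval-based inference and the no-retraining update can be sketched as follows. This is a minimal illustration, not the paper's implementation: it uses plain NumPy cosine similarity in place of Faiss, and the function names (`knn_majority_vote`, `add_example`) are ours, not the paper's.

```python
import numpy as np

def knn_majority_vote(query_emb, db_embs, db_labels, k=10):
    """Classify a meme by majority vote over its k nearest neighbours
    in the embedding database (cosine similarity)."""
    # Normalise so that the inner product equals cosine similarity.
    q = query_emb / np.linalg.norm(query_emb)
    db = db_embs / np.linalg.norm(db_embs, axis=1, keepdims=True)
    sims = db @ q
    top_k = np.argsort(-sims)[:k]          # indices of the k most similar memes
    votes = db_labels[top_k]
    return int(votes.sum() * 2 >= k)       # 1 = hateful if at least half vote hateful

def add_example(db_embs, db_labels, new_emb, new_label):
    """Update the detection system without retraining: simply append the
    new meme's embedding and label to the retrieval database."""
    return np.vstack([db_embs, new_emb]), np.append(db_labels, new_label)
```

Because classification is a lookup against the database rather than a fixed decision boundary, adding a newly observed hateful meme via `add_example` immediately influences future predictions, which is the update-without-retraining property described above.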

RGCL Architecture

Performance on Hateful Meme Detection

We show the performance of RGCL on the hateful meme detection task on the HatefulMemes and HarMeme datasets. AUROC is reported for HatefulMemes (consistent with the abstract) and accuracy for HarMeme.

Model         HatefulMemes        HarMeme
Prior SOTA    86.60 (Flamingo)    90.80 (LLaVA-13B)
RGCL          87.0                91.8

EMNLP 2025 RA-HMD: Robust Adaptation with Large Multimodal Models

Abstract

Hateful memes have become a significant concern on the Internet, necessitating robust automated detection systems. While Large Multimodal Models (LMMs) have shown promise in hateful meme detection, they face notable challenges like sub-optimal performance and limited out-of-domain generalization capabilities. Recent studies further reveal the limitations of both supervised fine-tuning (SFT) and in-context learning when applied to LMMs in this setting. To address these issues, we propose a robust adaptation framework for hateful meme detection that enhances in-domain accuracy and cross-domain generalization while preserving the general vision-language capabilities of LMMs. Analysis reveals that our approach achieves improved robustness under adversarial attacks compared to SFT models. Experiments on six meme classification datasets show that our approach achieves state-of-the-art performance, outperforming larger agentic systems. Moreover, our method generates higher-quality rationales for explaining hateful content compared to standard SFT, enhancing model interpretability.

Key Contributions

  • Extends RGCL with large multimodal models for enhanced robustness
  • Evaluates on six meme classification datasets with state-of-the-art results
  • Demonstrates improved robustness under adversarial attacks
  • Generates higher-quality rationales for model interpretability
  • Preserves general vision-language capabilities while improving in-domain performance

Model Architecture

The RA-HMD framework employs a two-stage training pipeline using a Large Multimodal Model (LMM) backbone with LoRA adaptation. Stage 1 trains the retrieval encoder with contrastive loss, while Stage 2 fine-tunes the model with language modeling and retrieval-guided losses. The system encodes training memes into a database and uses K-nearest neighbor retrieval with positive and negative samples for robust hateful meme detection.
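The Stage 2 objective described above can be sketched as a language-modelling loss plus a retrieval-guided contrastive term. This is a generic InfoNCE-style sketch under our own assumptions, not the paper's exact loss; the names `retrieval_guided_loss`, `joint_loss`, and the weight `lam` are illustrative.

```python
import numpy as np

def retrieval_guided_loss(anchor, positive, negatives, tau=0.07):
    """InfoNCE-style contrastive loss pulling the anchor meme embedding
    towards a retrieved pseudo-gold positive and away from hard negatives."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    # Positive similarity first, then the hard negatives.
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / tau
    logits -= logits.max()                     # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])                   # cross-entropy with positive at index 0

def joint_loss(lm_loss, anchor, positive, negatives, lam=1.0):
    """Stage 2 objective sketch: language-modelling loss plus the
    retrieval-guided contrastive term, weighted by lam."""
    return lm_loss + lam * retrieval_guided_loss(anchor, positive, negatives)
```

In this reading, Stage 1 trains the retrieval encoder with the contrastive term alone, while Stage 2 optimises the combined objective so the LMM keeps its generative capabilities while its embedding space stays retrieval-friendly.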

RA-HMD Architecture

ICLR 2026 ExPO-HM: Explain-then-Detect Policy Optimization

Abstract

Hateful memes have emerged as a particularly challenging form of online abuse, motivating the development of automated detection systems. Most prior approaches rely on direct detection, producing only binary predictions. Such models fail to provide the context and explanations that real-world moderation requires. Recent Explain-then-Detect approaches, using Chain-of-Thought prompting or LMM agents, perform worse than simple SFT baselines, and even advanced post-training methods such as GRPO fail to close the gap. Our analysis identifies two key issues of such systems: important policy-relevant cues such as targets and attack types are not hypothesized by the model as a likely explanation; and the binary reward signal is insufficient to guide reasoning. To address these challenges, we propose ExPO-HM (Explain-then-Detect Policy Optimization for Hateful Memes), inspired by the training and evaluation process of human annotators. ExPO-HM combines SFT warmup, GRPO with curriculum learning, and Conditional Decision Entropy (CDE) as both metric and reward for reasoning quality. Across three hateful meme benchmarks, ExPO-HM achieves state-of-the-art performance on binary detection, fine-grained classification, and reasoning quality, with up to 15% and 17% F1 improvement over the GRPO and DPO baselines, respectively. By moving hateful meme detection from simple binary alarms to explanation-driven detection, ExPO-HM provides accurate, interpretable, and actionable moderation support.

Key Contributions

  • Introduces Explain-then-Detect paradigm for hateful meme detection
  • Proposes Conditional Decision Entropy (CDE) as metric and reward for reasoning quality
  • Combines SFT warmup with GRPO and curriculum learning
  • Achieves SOTA on binary detection, fine-grained classification, and reasoning quality
  • Up to 15% and 17% F1 improvement over GRPO and DPO baselines
  • Provides interpretable explanations for moderation decisions

Comparison with Prior Approaches

Unlike binary direct detection methods that only produce alarms, or agentic approaches that struggle with reasoning quality, ExPO-HM provides interpretable explanations before making detection decisions. This moves hateful meme moderation from simple binary outputs to actionable, explanation-driven detection.

ExPO-HM Brain Teaser

Model Architecture

ExPO-HM employs a three-stage training pipeline: (1) SFT warmup to initialize the model with basic explanation capabilities; (2) GRPO with curriculum learning to progressively improve reasoning quality; (3) Conditional Decision Entropy (CDE) optimization for high-quality explanations. The model learns to first generate explanations (identifying targets, attack types, and policy-relevant cues) before making detection decisions, mimicking the annotation process of human moderators.
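One plausible reading of the Conditional Decision Entropy reward is sketched below: the entropy of the binary decision distribution, conditioned on the generated explanation, should be lower than without it, so a good explanation earns a positive reward. This is our hypothetical formulation for illustration only; `cde_reward` and its exact form are assumptions, and the paper's definition should be consulted for the actual metric.

```python
import math

def decision_entropy(p_hateful):
    """Shannon entropy (in nats) of a binary hateful/not-hateful decision."""
    p = min(max(p_hateful, 1e-12), 1 - 1e-12)   # clip away from 0 and 1
    return -(p * math.log(p) + (1 - p) * math.log(1 - p))

def cde_reward(p_with_explanation, p_without_explanation):
    """Hypothetical CDE-style reward: how much the generated explanation
    reduces the entropy of the detection decision."""
    return (decision_entropy(p_without_explanation)
            - decision_entropy(p_with_explanation))
```

Under this reading, an explanation that correctly surfaces the target and attack type sharpens the decision distribution (e.g. from 0.5 to 0.95 hateful), yielding a positive reward that GRPO can optimise alongside the binary detection signal.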

ExPO-HM Architecture

Citations

BibTeX: RGCL (ACL 2024)

@inproceedings{RGCL2024Mei,
    title = "Improving Hateful Meme Detection through Retrieval-Guided Contrastive Learning",
    author = "Mei, Jingbiao  and
      Chen, Jinghong  and
      Lin, Weizhe  and
      Byrne, Bill  and
      Tomalin, Marcus",
    editor = "Ku, Lun-Wei  and
      Martins, Andre  and
      Srikumar, Vivek",
    booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = aug,
    year = "2024",
    address = "Bangkok, Thailand",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.acl-long.291",
    doi = "10.18653/v1/2024.acl-long.291",
    pages = "5333--5347"
}

BibTeX: RA-HMD (EMNLP 2025)

@inproceedings{RAHMD2025Mei,
    title = "Robust Adaptation of Large Multimodal Models for Retrieval Augmented Hateful Meme Detection",
    author = "Mei, Jingbiao  and
      Chen, Jinghong  and
      Yang, Guangyu  and
      Lin, Weizhe  and
      Byrne, Bill",
    editor = "Christodoulopoulos, Christos  and
      Chakraborty, Tanmoy  and
      Rose, Carolyn  and
      Peng, Violet",
    booktitle = "Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2025",
    address = "Suzhou, China",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.emnlp-main.1215/",
    pages = "23817--23839",
    ISBN = "979-8-89176-332-6"
}

BibTeX: ExPO-HM (ICLR 2026)

@inproceedings{EXPOHM2026Mei,
    title = {ExPO-HM: Learning to Explain-then-Detect for Hateful Meme Detection},
    author = {Jingbiao Mei and Mingsheng Sun and Jinghong Chen and Pengda Qin and Yuhong Li and Da Chen and Bill Byrne},
    booktitle = {The Fourteenth International Conference on Learning Representations},
    year = {2026},
    url = {https://openreview.net/forum?id=bEejbORUI5}
}

Acknowledgement

This page was adapted from the Nerfies project page, licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. Many thanks to the Academic Project Page Template.