RGCL

Improving Hateful Meme Detection through Retrieval-Guided Contrastive Learning

University of Cambridge

Abstract

Hateful memes have emerged as a significant concern on the Internet. Detecting hateful memes requires the system to jointly understand the visual and textual modalities. Our investigation reveals that the embedding space of existing CLIP-based systems lacks sensitivity to subtle differences in memes that are vital for correct hatefulness classification. We propose constructing a hatefulness-aware embedding space through retrieval-guided contrastive training. Our approach achieves state-of-the-art performance on the HatefulMemes dataset with an AUROC of 87.0, outperforming much larger fine-tuned large multimodal models. We demonstrate a retrieval-based hateful memes detection system, which is capable of identifying hatefulness based on data unseen in training. This allows developers to update the hateful memes detection system by simply adding new examples without retraining, a desirable feature for real services in the constantly evolving landscape of hateful memes on the Internet.

Model

1. The VL Encoder F extracts the joint vision-language representation for each training example i. The encoder also embeds the training memes into a retrieval database G.
2. During training, pseudo-gold and hard negative examples are obtained via Faiss nearest-neighbour search over G. During inference, the same querying process retrieves the K nearest neighbours for KNN-based inference.
3. During training, we optimise the joint loss function L.
4. For inference, we use either a conventional logistic classifier or our proposed retrieval-based KNN majority voting.
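The retrieval-based KNN majority voting in step 4 can be sketched as follows. This is a minimal illustration using plain NumPy cosine similarity as a stand-in for the Faiss index; the function name and the binary-label convention (1 = hateful, 0 = benign) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def knn_majority_vote(query_emb, db_embs, db_labels, k=5):
    """Classify a meme by majority vote over its K nearest neighbours.

    NumPy stand-in for the Faiss nearest-neighbour search described
    above. Labels: 1 = hateful, 0 = benign (illustrative convention).
    """
    # Normalise so that the dot product equals cosine similarity.
    q = query_emb / np.linalg.norm(query_emb)
    db = db_embs / np.linalg.norm(db_embs, axis=1, keepdims=True)
    # Retrieve the indices of the K most similar memes in the database.
    top_k = np.argsort(-(db @ q))[:k]
    # Majority vote: predict hateful if at least half the neighbours are.
    return int(db_labels[top_k].sum() * 2 >= k)
```

Because the retrieval database is just embeddings paired with labels, new examples can be appended without retraining, which is the updatability property highlighted in the abstract.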

RGCL Architecture

Performance on Hateful Meme Detection

We report the performance of RGCL on the hateful meme detection task using the HatefulMemes and HarMeme datasets.

AUROC is reported for HatefulMemes and accuracy for HarMeme.
Model            HatefulMemes         HarMeme
Previous SOTA    86.60 (Flamingo)     90.80 (LLaVA-13B)
RGCL (ours)      87.0                 91.8

BibTeX

 
        @inproceedings{RGCL2024Mei,
          title = "Improving Hateful Meme Detection through Retrieval-Guided Contrastive Learning",
          author = "Mei, Jingbiao  and
            Chen, Jinghong  and
            Lin, Weizhe  and
            Byrne, Bill  and
            Tomalin, Marcus",
          editor = "Ku, Lun-Wei  and
            Martins, Andre  and
            Srikumar, Vivek",
          booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
          month = aug,
          year = "2024",
          address = "Bangkok, Thailand",
          publisher = "Association for Computational Linguistics",
          url = "https://aclanthology.org/2024.acl-long.291",
          doi = "10.18653/v1/2024.acl-long.291",
          pages = "5333--5347",
          abstract = "Hateful memes have emerged as a significant concern on the Internet. Detecting hateful memes requires the system to jointly understand the visual and textual modalities. Our investigation reveals that the embedding space of existing CLIP-based systems lacks sensitivity to subtle differences in memes that are vital for correct hatefulness classification. We propose constructing a hatefulness-aware embedding space through retrieval-guided contrastive training. Our approach achieves state-of-the-art performance on the HatefulMemes dataset with an AUROC of 87.0, outperforming much larger fine-tuned large multimodal models. We demonstrate a retrieval-based hateful memes detection system, which is capable of identifying hatefulness based on data unseen in training. This allows developers to update the hateful memes detection system by simply adding new examples without retraining {---} a desirable feature for real services in the constantly evolving landscape of hateful memes on the Internet.",
      }

      

Acknowledgement

This page was adapted from the Nerfies project page, licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. Many thanks to the authors of the Academic Project Page Template.