Residential College: false
Status: Published
PIP: Detecting Adversarial Examples in Large Vision-Language Models via Attention Patterns of Irrelevant Probe Questions
Zhang, Yudong [1,2]; Xie, Ruobing [2]; Chen, Jiansheng [3]; Sun, Xingwu [2,4]; Wang, Yu [1]
2024
Conference Name: 32nd ACM International Conference on Multimedia, MM 2024
Source Publication: MM 2024 - Proceedings of the 32nd ACM International Conference on Multimedia
Pages: 11175-11183
Conference Date: 28 October 2024 - 1 November 2024
Conference Place: Melbourne
Country: Australia
Publication Place: New York, NY, USA
Publisher: Association for Computing Machinery, Inc
Abstract

Large Vision-Language Models (LVLMs) have demonstrated their powerful multimodal capabilities. However, they also face serious safety problems, as adversaries can induce robustness issues in LVLMs through the use of well-designed adversarial examples. Therefore, LVLMs are in urgent need of detection tools for adversarial examples to prevent incorrect responses. In this work, we first discover that LVLMs exhibit regular attention patterns for clean images when presented with probe questions. We propose an unconventional method named PIP, which utilizes the attention patterns of one randomly selected irrelevant probe question (e.g., "Is there a clock?") to distinguish adversarial examples from clean examples. Regardless of the image to be tested and its corresponding question, PIP only needs to perform one additional inference of the image to be tested and the probe question, and then achieves successful detection of adversarial examples. Even under black-box attacks and open dataset scenarios, our PIP, coupled with a simple SVM, still achieves more than 98% recall and a precision of over 90%. Our PIP is the first attempt to detect adversarial attacks on LVLMs via simple irrelevant probe questions, shedding light on deeper understanding and introspection within LVLMs. The code is available at https://github.com/btzyd/pip.
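The abstract describes the detection pipeline at a high level: run one extra inference of the test image with a fixed, irrelevant probe question, collect the resulting attention patterns, and classify them with a simple SVM. The sketch below illustrates only that classification stage, under stated assumptions; it is not the authors' released implementation (see https://github.com/btzyd/pip for that). The attention-feature extraction is model-specific and omitted, the arrays are random placeholders, and the feature dimension, dataset sizes, and kernel choice are illustrative assumptions.

```python
# Minimal sketch of the SVM-based detection stage described in the abstract,
# NOT the authors' released code (https://github.com/btzyd/pip).
# Assumption: for each image, the attention patterns produced when the LVLM
# answers the fixed irrelevant probe question ("Is there a clock?") have
# already been extracted and flattened into a feature vector; that extraction
# is model-specific and omitted here. The arrays below are random placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(0)
feat_dim = 576  # hypothetical size of one flattened attention map

# Placeholder "attention features": label 0 = clean image, label 1 = adversarial.
clean_feats = rng.normal(0.0, 1.0, size=(200, feat_dim))
adv_feats = rng.normal(0.5, 1.0, size=(200, feat_dim))
X = np.vstack([clean_feats, adv_feats])
y = np.concatenate([np.zeros(200), np.ones(200)])

# Train a simple SVM on the probe-question attention features, then report the
# precision/recall metrics quoted in the abstract on a held-out split.
idx = rng.permutation(len(X))
train_idx, test_idx = idx[:300], idx[300:]
clf = SVC(kernel="rbf").fit(X[train_idx], y[train_idx])

pred = clf.predict(X[test_idx])
print("recall:   ", recall_score(y[test_idx], pred))
print("precision:", precision_score(y[test_idx], pred))
```

In this sketch the only inputs the detector ever sees are the probe-question attention features, matching the abstract's claim that detection needs just one additional inference per test image and no knowledge of the image's actual question.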

Keywords: Detecting Adversarial Example; Large Vision-language Model
DOI: 10.1145/3664647.3685510
Language: English
Scopus ID: 2-s2.0-85209790025
Document Type: Conference paper
Collection: University of Macau
Corresponding Authors: Xie, Ruobing; Chen, Jiansheng; Wang, Yu
Affiliations:
1. Tsinghua University, Beijing, China
2. Tencent, Beijing, China
3. University of Science and Technology Beijing, Beijing, China
4. University of Macau, Macao
Recommended Citation
GB/T 7714:
Zhang, Yudong, Xie, Ruobing, Chen, Jiansheng, et al. PIP: Detecting Adversarial Examples in Large Vision-Language Models via Attention Patterns of Irrelevant Probe Questions[C]. New York, NY, USA: Association for Computing Machinery, Inc, 2024: 11175-11183.
APA:
Zhang, Yudong, Xie, Ruobing, Chen, Jiansheng, Sun, Xingwu, & Wang, Yu (2024). PIP: Detecting Adversarial Examples in Large Vision-Language Models via Attention Patterns of Irrelevant Probe Questions. MM 2024 - Proceedings of the 32nd ACM International Conference on Multimedia, 11175-11183.
Files in This Item:
There are no files associated with this item.
 

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.