The Integration of Vision-LLMs into AD Systems: Capabilities and Challenges

Table of Links
2 Related Work
3 Preliminaries
4 Methodology
4.1 Auto-Generation of Typographic Attack
2 Related Work
2.1 Vision-LLMs
Following demonstrations of the proficiency of Large Language Models (LLMs) in reasoning across various natural language benchmarks, researchers have extended LLMs with visual encoders to support multimodal understanding. This integration has given rise to various forms of Vision-LLMs, which reason over the composition of visual and language inputs.
\
Vision-LLMs Pre-training. Connecting LLMs with pre-trained vision models involves individually pre-training unimodal encoders on their respective domains, followed by large-scale vision-language joint training [17, 18, 19, 20, 2, 1]. Through an interleaved visual-language corpus (e.g., MMC4 [21] and M3W [22]), auto-regressive models learn to process images by converting them into visual tokens, combining these with textual tokens, and feeding the combined sequence into LLMs. Visual inputs are thereby treated as a foreign language, enhancing traditional text-only LLMs with visual understanding while retaining their language capabilities. Hence, a straightforward pre-training strategy may not be designed to handle cases where the input text aligns significantly more with text rendered inside an image than with the visual context of that image.
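\
To make this token-level fusion concrete, here is a minimal PyTorch sketch of the pattern described above: image patch features are projected into the LLM's embedding space and prepended to the text tokens, as if they were words of a foreign language. All sizes, the linear projector (standing in for a ViT encoder plus projector), and the tiny Transformer backbone are illustrative assumptions, not any specific model's architecture.

```python
import torch
import torch.nn as nn

class ToyVisionLLM(nn.Module):
    """Sketch: a projector maps image patch features into the LLM's
    embedding space so visual tokens can be interleaved with text tokens."""

    def __init__(self, vocab_size=32000, d_model=512, d_vision=768):
        super().__init__()
        self.text_embed = nn.Embedding(vocab_size, d_model)   # LLM token embeddings
        self.projector = nn.Linear(d_vision, d_model)         # stand-in for ViT + projector
        block = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(block, num_layers=2)  # stand-in for the LLM
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, patch_features, text_ids):
        # Treat the image as a "foreign language": encode patches into
        # visual tokens, then prepend them to the textual token sequence.
        visual_tokens = self.projector(patch_features)   # (B, n_patches, d_model)
        textual_tokens = self.text_embed(text_ids)       # (B, seq_len, d_model)
        sequence = torch.cat([visual_tokens, textual_tokens], dim=1)
        hidden = self.backbone(sequence)
        return self.lm_head(hidden)                      # next-token logits

# Usage: random features stand in for ViT patch embeddings of one image.
model = ToyVisionLLM()
patches = torch.randn(1, 196, 768)          # 196 patches of a 224x224 image
prompt = torch.randint(0, 32000, (1, 16))   # 16 text tokens
logits = model(patches, prompt)
print(logits.shape)  # torch.Size([1, 212, 32000])
```

Because joint training of this kind optimizes alignment between text tokens and the image's overall visual context, text rendered inside the image can interact with the prompt in ways the pre-training objective never anticipated.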
\
Vision-LLMs in AD Systems. Vision-LLMs have proven useful for perception, planning, reasoning, and control in autonomous driving (AD) systems [6, 7, 9, 5]. For example, existing works have quantitatively benchmarked the linguistic capabilities of Vision-LLMs in terms of their trustworthiness in explaining the decision-making processes of AD [7]. Others have explored the use of Vision-LLMs for vehicular maneuvering [8, 5], and [6] even validated an approach in controlled physical environments. Because AD systems operate in safety-critical settings, comprehensive analyses of their vulnerabilities are crucial for reliable deployment and inference. However, proposed adoptions of Vision-LLMs into AD have so far been straightforward, meaning that known weaknesses of such models (e.g., vulnerabilities to typographic attacks) likely carry over without proper countermeasures. A minimal illustration of such an attack is sketched below.
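\
As a concrete example of the vulnerability mentioned above, the following Pillow sketch overlays misleading text on a driving-scene image, mimicking a printed typographic patch. The file names, attack text, and placement are hypothetical illustrations, not the auto-generation method this paper proposes later.

```python
from PIL import Image, ImageDraw, ImageFont

def add_typographic_patch(image_path, attack_text, position=(30, 30), font_size=48):
    """Overlay misleading text on a scene image, mimicking a simple
    typographic attack (e.g., a deceptive phrase printed on a placard)."""
    img = Image.open(image_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    try:
        font = ImageFont.truetype("DejaVuSans-Bold.ttf", font_size)
    except OSError:
        font = ImageFont.load_default()  # fall back if the font is unavailable
    # A white box behind the text keeps it legible, as a printed attack would be.
    draw.rectangle(draw.textbbox(position, attack_text, font=font), fill="white")
    draw.text(position, attack_text, fill="red", font=font)
    return img

# A benign scene with "SPEED LIMIT 100" pasted in may steer a Vision-LLM
# toward unsafe maneuvering advice once it reads the scene.
attacked = add_typographic_patch("driving_scene.jpg", "SPEED LIMIT 100")
attacked.save("driving_scene_attacked.jpg")
```

Querying a Vision-LLM with the attacked image and a driving prompt is exactly the kind of stress test that the vulnerability analyses above call for.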
\
:::info
Authors:
(1) Nhat Chung, CFAR and IHPC, A*STAR, Singapore and VNU-HCM, Vietnam;
(2) Sensen Gao, CFAR and IHPC, A*STAR, Singapore and Nankai University, China;
(3) Tuan-Anh Vu, CFAR and IHPC, A*STAR, Singapore and HKUST, HKSAR;
(4) Jie Zhang, Nanyang Technological University, Singapore;
(5) Aishan Liu, Beihang University, China;
(6) Yun Lin, Shanghai Jiao Tong University, China;
(7) Jin Song Dong, National University of Singapore, Singapore;
(8) Qing Guo, CFAR and IHPC, A*STAR, Singapore and National University of Singapore, Singapore.
:::
:::info
This paper is available on arxiv under CC BY 4.0 DEED license.
:::