Residential College | false
Status | Published
Title | Event Voxel Set Transformer for Spatiotemporal Representation Learning on Event Streams
Author | Xie, Bochen [1]; Deng, Yongjian [2]; Shao, Zhanpeng [3]; Xu, Qingsong [4]; Li, Youfu [1]
Date Issued | 2024-08
Source Publication | IEEE Transactions on Circuits and Systems for Video Technology |
ISSN | 1051-8215 |
Abstract | Event cameras are neuromorphic vision sensors that record a scene as sparse and asynchronous event streams. Most event-based methods project events into dense frames and process them using conventional vision models, resulting in high computational complexity. A recent trend is to develop point-based networks that achieve efficient event processing by learning sparse representations. However, existing works may lack robust local information aggregators and effective feature interaction operations, limiting their modeling capabilities. To this end, we propose an attention-aware model named Event Voxel Set Transformer (EVSTr) for efficient spatiotemporal representation learning on event streams. It first converts the event stream into voxel sets and then hierarchically aggregates voxel features to obtain robust representations. The core of EVSTr is an event voxel transformer encoder built from two well-designed components: a Multi-Scale Neighbor Embedding Layer (MNEL) for local information aggregation and a Voxel Self-Attention Layer (VSAL) for global feature interaction. To enable the network to incorporate long-range temporal structure, we introduce a segment modeling strategy (S2TM) that learns motion patterns from a sequence of segmented voxel sets. The proposed model is evaluated on two recognition tasks: object classification and action recognition. To provide a convincing model evaluation, we present a new event-based action recognition dataset (NeuroHAR) recorded in challenging scenarios. Comprehensive experiments show that EVSTr achieves state-of-the-art performance while maintaining low model complexity.
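The abstract's first processing step, converting a raw event stream into sparse spatiotemporal voxel sets, can be sketched in a few lines. The sketch below is illustrative only: the paper's actual voxelization code is not part of this record, so the function name, grid sizes, and event layout (x, y, t, polarity) are all assumptions, not the authors' implementation.

```python
# Hypothetical sketch of event-stream voxelization, assuming (x, y, t, polarity)
# events and an arbitrary spatial/temporal grid; not the paper's actual code.
import numpy as np

def events_to_voxel_sets(events, voxel_hw=(10, 10), num_bins=5):
    """Group an event stream into a sparse set of spatiotemporal voxels.

    events: (N, 4) array of (x, y, t, polarity), a common event-camera format.
    Returns a dict mapping voxel index (ix, iy, it) -> the events inside it;
    only non-empty voxels are stored, preserving the stream's sparsity.
    """
    x, y, t = events[:, 0], events[:, 1], events[:, 2]
    # Normalize timestamps to [0, 1) and quantize into temporal bins.
    t_norm = (t - t.min()) / max(t.max() - t.min(), 1e-9)
    it = np.minimum((t_norm * num_bins).astype(int), num_bins - 1)
    # Quantize pixel coordinates into spatial voxel cells.
    ix = (x // voxel_hw[1]).astype(int)
    iy = (y // voxel_hw[0]).astype(int)
    voxels = {}
    for key, ev in zip(zip(ix, iy, it), events):
        voxels.setdefault(key, []).append(ev)
    return {k: np.stack(v) for k, v in voxels.items()}

# Toy usage: 1000 random events on a 260x346 sensor over 50 ms.
rng = np.random.default_rng(0)
ev = np.column_stack([
    rng.integers(0, 346, 1000),           # x
    rng.integers(0, 260, 1000),           # y
    np.sort(rng.uniform(0, 0.05, 1000)),  # t (seconds)
    rng.integers(0, 2, 1000),             # polarity
])
print(len(events_to_voxel_sets(ev)), "non-empty voxels")
```

Each voxel set produced this way is a variable-length group of events, which is the kind of sparse input a set-based encoder (local aggregation followed by self-attention, per the abstract) can consume without rendering dense frames.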
Keyword | Event Camera; Neuromorphic Vision; Attention Mechanism; Object Classification; Action Recognition
DOI | 10.1109/TCSVT.2024.3448615 |
Indexed By | SCIE |
Language | English
Publisher | Institute of Electrical and Electronics Engineers Inc. |
Scopus ID | 2-s2.0-85201786533 |
Document Type | Journal article |
Collection | Faculty of Science and Technology; Department of Electromechanical Engineering
Corresponding Author | Li, Youfu |
Affiliation | 1. Department of Mechanical Engineering, City University of Hong Kong, Hong Kong SAR, China; 2. College of Computer Science, Beijing University of Technology, Beijing, China; 3. College of Information Science and Engineering, Hunan Normal University, Changsha, China; 4. Department of Electromechanical Engineering, University of Macau, Macao SAR, China
Recommended Citation GB/T 7714 | Xie, Bochen, Deng, Yongjian, Shao, Zhanpeng, et al. Event Voxel Set Transformer for Spatiotemporal Representation Learning on Event Streams[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2024.
APA | Xie, Bochen, Deng, Yongjian, Shao, Zhanpeng, Xu, Qingsong, & Li, Youfu. (2024). Event Voxel Set Transformer for Spatiotemporal Representation Learning on Event Streams. IEEE Transactions on Circuits and Systems for Video Technology.
MLA | Xie, Bochen, et al. "Event Voxel Set Transformer for Spatiotemporal Representation Learning on Event Streams." IEEE Transactions on Circuits and Systems for Video Technology (2024).
Files in This Item: | There are no files associated with this item. |