Residential College: false
Status: Forthcoming
Time-sync video tag extraction using semantic association graph
Yang, Wenmain (1,2); Wang, Kun (3); Ruan, Na (1); Gao, Wenyuan (1); Jia, Weijia (1,2); Zhao, Wei (4); Liu, Nan (5); Zhang, Yunyong (5)
2019-07-01
Source Publication: ACM Transactions on Knowledge Discovery from Data
ISSN: 1556-4681
Volume: 13
Issue: 4
Abstract

Time-sync comments (TSCs) reveal a new way of extracting online video tags. However, TSCs contain a great deal of noise due to users' diverse comments, which poses great challenges for accurate and fast video tag extraction. In this article, we propose an unsupervised video tag extraction algorithm named Semantic Weight-Inverse Document Frequency (SW-IDF). Specifically, we first generate a semantic association graph (SAG) using the semantic similarities and timestamps of the TSCs. Second, we propose two graph clustering algorithms, a dialogue-based algorithm and a topic center-based algorithm, to handle videos with different comment densities. Third, we design a graph iteration algorithm that assigns a weight to each comment based on the degrees of the clustered subgraphs, which differentiates meaningful comments from noise. Finally, we obtain the weight of each word by combining Semantic Weight (SW) and Inverse Document Frequency (IDF). In this way, video tags are extracted automatically in an unsupervised way. Extensive experiments show that SW-IDF (dialogue-based algorithm) achieves 0.4210 F1-score and 0.4932 MAP (Mean Average Precision) on high-density comments and 0.4267 F1-score and 0.3623 MAP on low-density comments, while SW-IDF (topic center-based algorithm) achieves 0.4444 F1-score and 0.5122 MAP on high-density comments and 0.4207 F1-score and 0.3522 MAP on low-density comments. It outperforms state-of-the-art unsupervised algorithms in both F1-score and MAP.
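The abstract's final step combines each word's Semantic Weight (SW) with its Inverse Document Frequency (IDF) to rank candidate tags. The Python sketch below illustrates one plausible reading of that scoring step; the function name sw_idf_scores, the choice of taking SW as the sum of the weights of the comments containing a word, and the particular IDF formula are assumptions for illustration, not the paper's exact definitions.

import math
from collections import defaultdict

def sw_idf_scores(comments, comment_weights):
    # comments: list of tokenized time-sync comments (each a list of words).
    # comment_weights: per-comment weights, assumed here to come from the
    # graph-iteration step over the clustered semantic association graph.
    n = len(comments)
    doc_freq = defaultdict(int)           # number of comments containing each word
    semantic_weight = defaultdict(float)  # SW(word): accumulated comment weights (assumption)
    for words, weight in zip(comments, comment_weights):
        for word in set(words):
            doc_freq[word] += 1
            semantic_weight[word] += weight
    # Combine SW with a standard IDF; the exact IDF variant is an assumption.
    return {word: sw * math.log(n / doc_freq[word])
            for word, sw in semantic_weight.items()}

# Toy usage: words that appear only in the low-weight (noisy) comment score lowest,
# so the extracted tags come from the meaningful comments.
comments = [["funny", "cat", "dance"], ["cute", "cat"], ["stream", "lagging"]]
weights = [0.9, 0.8, 0.1]
scores = sw_idf_scores(comments, weights)
tags = sorted(scores, key=scores.get, reverse=True)[:3]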

Keyword: Extraction
DOI: 10.1145/3332932
Language: English
WOS ID: WOS:000496747400002
Scopus ID: 2-s2.0-85068445496
Document Type: Journal article
Collection: University of Macau
Affiliation:
1. Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
2. State Key Lab of IoT for Smart City, FST, University of Macau, Macau 999078, China
3. Department of Electrical and Computer Engineering, University of California, Los Angeles, 90095, United States
4. Department of Computer Science and Engineering, American University of Sharjah, Sharjah, United Arab Emirates
5. China Unicom Research Institute, Bldg. 2, No. 1 Beihuan East Road, Economic-Technological Development Area, Beijing 100176, China
First Author Affiliation: Faculty of Science and Technology
Recommended Citation
GB/T 7714: Yang, Wenmain, Wang, Kun, Ruan, Na, et al. Time-sync video tag extraction using semantic association graph[J]. ACM Transactions on Knowledge Discovery from Data, 2019, 13(4).
APA: Yang, Wenmain, Wang, Kun, Ruan, Na, Gao, Wenyuan, Jia, Weijia, Zhao, Wei, Liu, Nan, & Zhang, Yunyong (2019). Time-sync video tag extraction using semantic association graph. ACM Transactions on Knowledge Discovery from Data, 13(4).
MLA: Yang, Wenmain, et al. "Time-sync video tag extraction using semantic association graph." ACM Transactions on Knowledge Discovery from Data 13.4 (2019).
Files in This Item:
There are no files associated with this item.
Related Services
Google Scholar
Similar articles in Google Scholar
[Yang, Wenmain]'s Articles
[Wang, Kun]'s Articles
[Ruan, Na]'s Articles
Baidu academic
Similar articles in Baidu academic
[Yang, Wenmain]'s Articles
[Wang, Kun]'s Articles
[Ruan, Na]'s Articles
Bing Scholar
Similar articles in Bing Scholar
[Yang, Wenmain]'s Articles
[Wang, Kun]'s Articles
[Ruan, Na]'s Articles

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.