Residential College: false
Status: Published
Reinforcement learning-based QoE-oriented dynamic adaptive streaming framework
Wei, Xuekai [1]; Zhou, Mingliang [2,3]; Kwong, Sam [1,4]; Yuan, Hui [5]; Wang, Shiqi [1]; Zhu, Guopu [6]; Cao, Jingchao [1]
2021-08-01
Source Publication: Information Sciences
ISSN: 0020-0255
Volume: 569, Pages: 786-803
Abstract

The dynamic adaptive streaming over HTTP (DASH) standard has been widely adopted by many content providers for online video transmission and has greatly improved transmission performance. Designing an efficient DASH system is challenging because of the inherent large fluctuations that characterize both encoded video sequences and network traces. In this paper, a reinforcement learning (RL)-based DASH technique that addresses user quality of experience (QoE) is constructed. The DASH adaptive bitrate (ABR) selection problem is formulated as a Markov decision process (MDP), and an RL-based solution is proposed to solve it, in which the DASH client acts as the RL agent and the network variation constitutes the environment. The proposed user QoE metric, which jointly considers video quality and buffer status, is used as the reward. The goal of the RL algorithm is to select a suitable video quality level for each video segment so as to maximize the total reward. The proposed RL-based ABR algorithm is then embedded in the QoE-oriented DASH framework. Experimental results show that the proposed RL-based ABR algorithm outperforms state-of-the-art schemes in terms of both temporal and visual QoE factors by a noticeable margin while guaranteeing application-level fairness when multiple clients share a bottleneck network.
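As a rough, illustrative sketch of the MDP formulation described in the abstract (not the authors' implementation), the Python snippet below shows a minimal tabular Q-learning ABR agent: the state is a discretized throughput estimate plus buffer level, the action is the bitrate index chosen for the next segment, and the reward is a simple QoE-style term combining video quality, a rebuffering penalty, and a quality-switch penalty. The bitrate ladder, state discretization, reward weights, and all identifiers are assumptions made for illustration only.

```python
# Illustrative sketch of an RL-based ABR agent for the MDP described in the
# abstract. The state, action, and reward definitions here are simplifying
# assumptions, not the paper's actual design.
import random
from collections import defaultdict

BITRATES_KBPS = [300, 750, 1200, 1850, 2850]  # hypothetical bitrate ladder


def reward(bitrate_kbps, rebuffer_s, prev_bitrate_kbps,
           mu=0.001, lam=4.3, sigma=0.001):
    # QoE-style reward: quality term minus rebuffering and switching
    # penalties (the weights are assumptions).
    return (mu * bitrate_kbps
            - lam * rebuffer_s
            - sigma * abs(bitrate_kbps - prev_bitrate_kbps))


def discretize(throughput_kbps, buffer_s):
    # Coarse state: throughput bucket (per 500 kbps) and buffer bucket (per 5 s).
    return (min(int(throughput_kbps // 500), 10), min(int(buffer_s // 5), 6))


class QLearningABR:
    def __init__(self, alpha=0.1, gamma=0.95, eps=0.1):
        self.q = defaultdict(lambda: [0.0] * len(BITRATES_KBPS))
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def select(self, state):
        # Epsilon-greedy selection over bitrate indices.
        if random.random() < self.eps:
            return random.randrange(len(BITRATES_KBPS))
        qs = self.q[state]
        return qs.index(max(qs))

    def update(self, state, action, r, next_state):
        # Standard Q-learning backup.
        best_next = max(self.q[next_state])
        self.q[state][action] += self.alpha * (
            r + self.gamma * best_next - self.q[state][action])


if __name__ == "__main__":
    # Toy loop with made-up throughput/buffer measurements.
    agent = QLearningABR()
    state, prev = discretize(1500, 10.0), BITRATES_KBPS[0]
    for _ in range(3):
        a = agent.select(state)
        r = reward(BITRATES_KBPS[a], rebuffer_s=0.0, prev_bitrate_kbps=prev)
        next_state = discretize(1400, 12.0)
        agent.update(state, a, r, next_state)
        state, prev = next_state, BITRATES_KBPS[a]
```

A tabular agent is used only to keep the sketch short; the paper's actual framework may rely on a different RL formulation and a more elaborate QoE model.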

Keywords: Machine Learning; MPEG-DASH; Quality of Experience; Reinforcement Learning
DOI: 10.1016/j.ins.2021.05.012
Indexed By: SCIE
Language: English
WOS Research Area: Computer Science
WOS Subject: Computer Science, Information Systems
WOS ID: WOS:000659919700011
Publisher: ELSEVIER SCIENCE INC, STE 800, 230 PARK AVE, NEW YORK, NY 10169
Scopus ID: 2-s2.0-85107705501
Document Type: Journal article
Collection: The State Key Laboratory of Internet of Things for Smart City (University of Macau)
Corresponding Author: Zhou, Mingliang; Kwong, Sam
Affiliation:
1. Department of Computer Science, City University of Hong Kong, Kowloon, 999077, China
2. School of Computer Science, Chongqing University, Chongqing, 400044, China
3. State Key Lab of Internet of Things for Smart City, University of Macau, Taipa, 999078, China
4. City University of Hong Kong Shenzhen Research Institute, Shenzhen, 518057, China
5. School of Control Science and Engineering, Shandong University, Ji'nan, 250061, China
6. Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
Corresponding Author Affiliation: University of Macau
Recommended Citation
GB/T 7714
Wei, Xuekai, Zhou, Mingliang, Kwong, Sam, et al. Reinforcement learning-based QoE-oriented dynamic adaptive streaming framework[J]. Information Sciences, 2021, 569: 786-803.
APA Wei, X., Zhou, M., Kwong, S., Yuan, H., Wang, S., Zhu, G., & Cao, J. (2021). Reinforcement learning-based QoE-oriented dynamic adaptive streaming framework. Information Sciences, 569, 786-803.
MLA Wei, Xuekai, et al. "Reinforcement learning-based QoE-oriented dynamic adaptive streaming framework." Information Sciences 569 (2021): 786-803.
Files in This Item:
There are no files associated with this item.
Related Services
Similar articles in Google Scholar, Baidu Academic, and Bing Scholar
[Wei, Xuekai]'s Articles
[Zhou, Mingliang]'s Articles
[Kwong, Sam]'s Articles
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.