Residential College: false
Status: Published
Supervised actor-critic reinforcement learning with action feedback for algorithmic trading
Sun, Qizhou; Si, Yain-Whar
2022-12-17
Source Publication: Applied Intelligence
ISSN: 0924-669X
Volume: 53  Issue: 13  Pages: 16875–16892
Abstract

Reinforcement learning is a promising approach for algorithmic trading in financial markets. However, in certain situations, buy or sell orders issued by an algorithmic trading program may not be filled entirely. Taking such realistic market conditions into account, in this paper we propose a novel framework named Supervised Actor-Critic Reinforcement Learning with Action Feedback (SACRL-AF) to address this problem. The action feedback mechanism of SACRL-AF notifies the actor of the dealt (actually executed) positions and corrects the corresponding transitions in the replay buffer. Meanwhile, the dealt positions are used as labels for supervised learning. Recent studies have shown that Deep Deterministic Policy Gradient (DDPG) and Twin Delayed Deep Deterministic Policy Gradient (TD3) are more stable than, and superior to, other actor-critic algorithms. Against this background, two reinforcement learning algorithms based on the proposed SACRL-AF framework, henceforth referred to as Supervised Deep Deterministic Policy Gradient with Action Feedback (SDDPG-AF) and Supervised Twin Delayed Deep Deterministic Policy Gradient with Action Feedback (STD3-AF), are proposed in this paper. Experimental results show that SDDPG-AF and STD3-AF achieve state-of-the-art performance in terms of profitability.
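The abstract describes the action-feedback architecture in prose only, so a brief illustrative sketch may help. The PyTorch code below shows one way the core idea could be wired up: the broker-reported dealt position replaces the proposed action in the stored transition, and the same dealt position is mixed into the actor update as a supervised label alongside the usual DDPG objective. This is a minimal sketch under stated assumptions, not the authors' SDDPG-AF/STD3-AF implementation; the class names, network sizes, and the lambda_sup weighting are hypothetical.

```python
# Illustrative sketch of the action-feedback idea (assumed design, not the
# paper's code): store the dealt (executed) position in the replay buffer and
# reuse it as a supervised label in the actor update.
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn


class Actor(nn.Module):
    """Maps a market state to a target position in [-1, 1]."""
    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, action_dim), nn.Tanh(),
        )

    def forward(self, state):
        return self.net(state)


class Critic(nn.Module):
    """Estimates Q(s, a) for a state-action pair."""
    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))


class ReplayBuffer:
    """Stores transitions corrected with the dealt (actually executed) position."""
    def __init__(self, capacity=100_000):
        self.buffer = deque(maxlen=capacity)

    def push(self, state, dealt_action, reward, next_state, done):
        # Action feedback: keep the executed position, not the proposed order.
        self.buffer.append((state, dealt_action, reward, next_state, done))

    def sample(self, batch_size):
        batch = random.sample(self.buffer, batch_size)
        return [torch.tensor(np.array(col), dtype=torch.float32)
                for col in zip(*batch)]


def actor_update(actor, critic, optimiser, states, dealt_actions, lambda_sup=0.5):
    """One actor step mixing the DDPG objective with a supervised term."""
    proposed = actor(states)
    rl_loss = -critic(states, proposed).mean()                   # maximise Q(s, pi(s))
    sup_loss = nn.functional.mse_loss(proposed, dealt_actions)   # dealt positions as labels
    loss = rl_loss + lambda_sup * sup_loss                       # lambda_sup is hypothetical
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
    return loss.item()


if __name__ == "__main__":
    state_dim, action_dim = 8, 1
    actor, critic = Actor(state_dim, action_dim), Critic(state_dim, action_dim)
    opt = torch.optim.Adam(actor.parameters(), lr=1e-3)
    buffer = ReplayBuffer()
    for _ in range(256):                                 # fake transitions with partial fills
        s, s2 = np.random.randn(state_dim), np.random.randn(state_dim)
        dealt = np.random.uniform(-1, 1, action_dim)     # broker-reported fill
        buffer.push(s, dealt, np.random.randn(), s2, 0.0)
    states, dealt_actions, _, _, _ = buffer.sample(64)
    print("actor loss:", actor_update(actor, critic, opt, states, dealt_actions))
```

The critic and target-network updates of DDPG/TD3 are omitted here; in this sketch they would consume the same corrected transitions, and the lambda_sup weight that balances the reinforcement and supervised signals is a tunable assumption rather than a value taken from the paper.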

Keywords: Finance; Reinforcement Learning; Supervised Learning; Algorithmic Trading
DOI: 10.1007/s10489-022-04322-5
URL: View the original
Indexed By: SCIE
Language: English
Funding Project: Force-directed Algorithms for Visualization of Large-Scale Dynamic Graphs
WOS Research Area: Computer Science
WOS Subject: Computer Science, Artificial Intelligence
WOS ID: WOS:000900092800003
Publisher: SPRINGER, VAN GODEWIJCKSTRAAT 30, 3311 GZ DORDRECHT, NETHERLANDS
Scopus ID: 2-s2.0-85144208905
Document Type: Journal article
Collection: Faculty of Science and Technology > Department of Computer and Information Science
Corresponding Author: Si, Yain-Whar
Affiliation: Department of Computer and Information Science, University of Macau, Avenida da Universidade, Macau, China
First Author Affiliation: University of Macau
Corresponding Author Affiliation: University of Macau
Recommended Citation
GB/T 7714
Sun, Qizhou, Si, Yain-Whar. Supervised actor-critic reinforcement learning with action feedback for algorithmic trading[J]. Applied Intelligence, 2022, 53(13): 16875–16892.
APA Sun, Qizhou, & Si, Yain-Whar. (2022). Supervised actor-critic reinforcement learning with action feedback for algorithmic trading. Applied Intelligence, 53(13), 16875–16892.
MLA Sun, Qizhou, et al. "Supervised actor-critic reinforcement learning with action feedback for algorithmic trading." Applied Intelligence 53.13 (2022): 16875–16892.
Files in This Item:
There are no files associated with this item.
Related Services
Google Scholar
Similar articles in Google Scholar
[Sun, Qizhou]'s Articles
[Si, Yain-Whar]'s Articles
Baidu academic
Similar articles in Baidu academic
[Sun, Qizhou]'s Articles
[Si, Yain-Whar]'s Articles
Bing Scholar
Similar articles in Bing Scholar
[Sun, Qizhou]'s Articles
[Si, Yain-Whar]'s Articles
 

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.