Application of Reinforcement Learning in Financial Trading and Execution
Author: Zhiyuan Yao
Advisors: Dr. Ionut Florescu, Dr. Chihoon Lee
Date: October 1, 2024
Department: Financial Engineering
Degree: Doctor of Philosophy
Advisory Committee:
Dr. Ionut Florescu, Co-Chairman
Dr. Chihoon Lee, Co-Chairman
Dr. Rong Liu
Dr. Zachary Feinstein
Dr. Jia Xu
Abstract: This dissertation explores how reinforcement learning (RL) can be applied to real-world portfolio optimization and trading. Training RL agents for financial tasks faces two key challenges: feedback delays between trading decisions and observed market responses, and the lack of realistic market simulators for training and evaluation.
To address these challenges, the research proposes a hierarchical trading framework in which two RL agents work together to maximize returns. The first study introduces a model-based RL method designed to mitigate the performance loss caused by delayed feedback, a common issue in fast-moving markets with high uncertainty. The approach is validated on controlled benchmark environments and classic Atari games.
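To make the delayed-feedback setting concrete, the sketch below wraps a toy environment so the agent only observes state from several steps in the past. This is an illustrative wrapper under assumed interfaces (`CounterEnv`, `DelayedObservationWrapper` are hypothetical names), not the dissertation's model-based method, which additionally learns a dynamics model to compensate for the delay.

```python
from collections import deque

class CounterEnv:
    """Trivial environment whose observation is simply the step index."""
    def __init__(self):
        self.t = 0

    def reset(self):
        self.t = 0
        return self.t

    def step(self, action):
        self.t += 1
        return self.t, 0.0, False  # observation, reward, done

class DelayedObservationWrapper:
    """Delivers observations `delay` steps late, mimicking the feedback
    delay between a trading decision and the market response."""
    def __init__(self, env, delay=3):
        self.env = env
        self.delay = delay
        self.buffer = deque()

    def reset(self):
        obs = self.env.reset()
        # Until `delay` steps have elapsed, the agent keeps seeing the
        # initial observation.
        self.buffer = deque([obs] * self.delay)
        return obs

    def step(self, action):
        obs, reward, done = self.env.step(action)
        self.buffer.append(obs)
        delayed_obs = self.buffer.popleft()  # state from `delay` steps ago
        return delayed_obs, reward, done
```

With `delay=2`, the agent acting at step 3 still sees the observation from step 1, which is exactly the mismatch a model-based method can try to predict through.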
The second part develops an agent-based market simulator powered by RL agents. This simulator generates realistic trading data and helps analyze how agents react to shocks like flash crashes, providing a valuable sandbox for strategy testing.
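The flavor of such a simulator can be sketched with zero-intelligence agents and linear price impact; the dissertation's simulator instead uses trained RL agents, but even this stripped-down version shows how a forced-selling shock produces a flash-crash-like price drop. All names and parameter values here are illustrative assumptions.

```python
import random

def simulate(n_agents=50, steps=100, shock_step=None, seed=1):
    """Minimal agent-based market: each agent randomly buys or sells one
    unit per step, and price moves with the net order flow. An optional
    shock forces every agent to sell at one step, mimicking a flash crash."""
    rng = random.Random(seed)
    price, path = 100.0, []
    for t in range(steps):
        if t == shock_step:
            net_flow = -n_agents  # coordinated selling pressure
        else:
            net_flow = sum(rng.choice((-1, 1)) for _ in range(n_agents))
        price *= 1 + 0.001 * net_flow  # linear price-impact assumption
        path.append(price)
    return path
```

Comparing `simulate(shock_step=10)` with an unshocked run makes the crash and subsequent dynamics directly inspectable, which is the kind of stress analysis the simulator is meant to support.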
Finally, the third study combines portfolio optimization and order execution into a unified RL framework. By connecting two specialized RL agents, one for asset selection and one for order execution, the system aims to improve active portfolio management performance. The framework is evaluated on U.S. equity market data, demonstrating the potential of RL to enhance both strategy and execution in modern trading.
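The division of labor between the two agents can be sketched as follows: a high-level agent sets target portfolio weights, and a low-level agent turns each implied trade into a schedule of child orders. Both policies below are deliberately trivial placeholders (equal-weight selection and a TWAP-like split); in the dissertation each role is played by a trained RL policy, and all names and numbers here are hypothetical.

```python
def portfolio_agent(prices):
    """High-level agent: chooses target portfolio weights.
    Placeholder equal-weight policy standing in for a trained RL policy."""
    n = len(prices)
    return [1.0 / n] * n

def execution_agent(parent_order, horizon=4):
    """Low-level agent: schedules a parent order as child orders over the
    horizon. Placeholder TWAP-like split; an RL execution agent would
    adapt the schedule to market state."""
    return [parent_order / horizon] * horizon

# One rebalancing cycle: weights from the portfolio agent imply a trade
# per asset, which the execution agent breaks into child orders.
prices   = [100.0, 50.0, 25.0]   # current asset prices (hypothetical)
holdings = [10.0, 0.0, 40.0]     # current share holdings (hypothetical)
capital  = 10_000.0

for weight, price, held in zip(portfolio_agent(prices), prices, holdings):
    target_shares = weight * capital / price
    parent_order = target_shares - held       # + means buy, - means sell
    child_orders = execution_agent(parent_order)
```

Chaining the two agents this way is what lets the framework optimize strategy and execution jointly rather than treating execution cost as an afterthought.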