
Proximal Policy Optimization applied to Algorithmic Trading

This project builds on the framework developed by T. Théate and D. Ernst, modified to implement Proximal Policy Optimization (PPO).
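For readers unfamiliar with PPO, the core of the algorithm is the clipped surrogate objective. The following is a minimal NumPy sketch of that loss, not the repository's actual implementation (which uses PyTorch); the function name and signature are illustrative only:

```python
import numpy as np

def ppo_clip_loss(ratio, advantage, eps=0.2):
    """Clipped surrogate loss: ratio is pi_new(a|s) / pi_old(a|s),
    advantage is the estimated advantage; eps is the clip range."""
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    # Take the pessimistic (minimum) objective, negated for gradient descent.
    return -np.minimum(unclipped, clipped).mean()
```

Clipping the probability ratio keeps each policy update close to the previous policy, which is what makes PPO comparatively stable for noisy environments such as trading.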

Dependencies

The dependencies are listed in the text file "requirements.txt":

  • Python 3.7.4
  • PyTorch 1.5.0
  • Tensorboard
  • Gym
  • Numpy
  • Pandas
  • Matplotlib
  • Scipy
  • Seaborn
  • Statsmodels
  • Requests
  • Pandas-datareader
  • TQDM
  • Tabulate

Usage

Simulating (training and testing) a chosen supported algorithmic trading strategy on a chosen supported stock is performed by running the following command:

python main.py -strategy STRATEGY -stock STOCK

with:

  • STRATEGY being the name of the trading strategy (by default PPO),
  • STOCK being the name of the stock (by default Apple).

The performance of this algorithmic trading policy will be automatically displayed in the terminal, and some graphs will be generated and stored in the folder named "Figures".
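For example, running the default PPO strategy on Apple stock (assuming the repository's `main.py` is in the current directory) would look like:

```shell
python main.py -strategy PPO -stock Apple
```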

Citation

Experimental code supporting the results presented in the scientific research paper:

Thibaut Théate and Damien Ernst. "An Application of Deep Reinforcement Learning to Algorithmic Trading." (2020). [arxiv]
