
adding LSTM support to pretrain #315

Open
wants to merge 55 commits into base: master
Changes from 1 commit (55 commits total)
a9f7e30
move pretrain to base_class
Apr 11, 2019
c708a37
add mask support to DataLoader
Apr 11, 2019
169a80b
add better data split
Apr 13, 2019
b0ee4c7
add comments and fix some bugs
Apr 13, 2019
027891f
when using dataset.get_next_batch, expect four returns.
Apr 13, 2019
b7541bd
update _get_pretrain_placeholders in all models.
Apr 13, 2019
11a9d00
make it work.
May 8, 2019
004e3fb
Merge pull request #1 from hill-a/master
XMaster96 May 8, 2019
4d14812
Merge branch 'master' into LSTM-pretrain
May 8, 2019
167a337
Make it work with 2.5.1
May 8, 2019
187f16e
Merge branch 'master' into LSTM-pretrain
XMaster96 May 8, 2019
1827b2d
improve the syntax
May 8, 2019
34e7bde
Merge remote-tracking branch 'origin/LSTM-pretrain' into LSTM-pretrain
May 8, 2019
54e5c01
Merge branch 'master' into LSTM-pretrain
araffin May 14, 2019
c43b39a
-fix partial_minibatch for LSTMs
May 14, 2019
c7b795a
-fix data alignment for LSTMs
May 15, 2019
920ac7b
Merge remote-tracking branch 'origin/LSTM-pretrain' into LSTM-pretrain
May 15, 2019
ccddbb2
Delete __init__.py
XMaster96 May 15, 2019
ee29e78
Delete run_atari.py
XMaster96 May 15, 2019
938a4f8
Delete run_mujoco.py
XMaster96 May 15, 2019
a952d02
Delete ppo2.py
XMaster96 May 15, 2019
a2a94ad
-fix syntax line length.
May 15, 2019
58ddd30
Merge remote-tracking branch 'origin/LSTM-pretrain' into LSTM-pretrain
May 15, 2019
77538da
-fix syntax
May 15, 2019
a124dcb
-fix syntax
May 15, 2019
c4d9c47
remove nano.save
May 20, 2019
d75c01e
Merge branch 'master' into LSTM-pretrain
XMaster96 May 20, 2019
4d87f41
Merge branch 'master' into LSTM-pretrain
XMaster96 Jun 5, 2019
6ab6728
Merge branch 'master' into LSTM-pretrain
araffin Jun 27, 2019
b9e0fc0
Merge pull request #2 from hill-a/master
XMaster96 Jul 20, 2019
fa4bbcf
Merge branch 'master' into LSTM-pretrain
Jul 20, 2019
b30413e
split LSTM dataset from Expert dataset.
Jul 20, 2019
40a94ad
-fix syntax
Jul 20, 2019
cba1030
Merge branch 'master' into LSTM-pretrain
XMaster96 Aug 7, 2019
9ed3cfa
add TD3 support
Aug 7, 2019
96567be
Merge branch 'master' into LSTM-pretrain
araffin Aug 23, 2019
4bfb988
-fix indentation
Sep 2, 2019
30bdb19
-fix syntax
Sep 2, 2019
06ba9da
-fix syntax
Sep 3, 2019
a08f420
-fix syntax
Sep 3, 2019
17c04ac
Merge branch 'master' into LSTM-pretrain
XMaster96 Sep 3, 2019
1406af3
Merge branch 'master' into LSTM-pretrain
XMaster96 Sep 6, 2019
f448eea
Merge branch 'master' into LSTM-pretrain
araffin Sep 7, 2019
9770a12
Merge branch 'master' into LSTM-pretrain
araffin Sep 12, 2019
eda5b8f
-change
Sep 13, 2019
428022e
Merge branch 'LSTM-pretrain' of https://github.com/XMaster96/stable-b…
Sep 13, 2019
694e6c1
- change
Sep 13, 2019
f11f732
- change
Sep 13, 2019
d8685e4
- fix model save
Sep 14, 2019
e03be1d
-fix syntax
Sep 14, 2019
2f6da05
-fix syntax
Sep 14, 2019
570b8d9
-fix syntax
Sep 14, 2019
e0bb120
-fix pickle load
Sep 14, 2019
2ee1300
Merge branch 'master' into LSTM-pretrain
XMaster96 Sep 16, 2019
988ba5c
Merge branch 'master' into LSTM-pretrain
XMaster96 Sep 21, 2019
-fix syntax
XMaster96 authored and committed May 15, 2019
commit a124dcb40b1be67781e8e73da89aa493633eaf6c
3 changes: 2 additions & 1 deletion stable_baselines/gail/dataset/dataset.py
@@ -148,7 +148,8 @@ def __init__(self, expert_path=None, traj_data=None, train_fraction=0.7,
     indices += c_i[i:i+batch_size]

     # Free memory
-    del split_indices, len_list, sort_buffer, stack_indices, max_len, mod_max_len, final_stack_len, cycle_indices
+    del split_indices, len_list, sort_buffer, stack_indices, max_len, mod_max_len, final_stack_len,\
+        cycle_indices

     # Train/Validation split when using behavior cloning
     train_indices = indices[:split_point]
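The `indices += c_i[i:i+batch_size]` line above builds minibatches of sequence indices, and the "-fix partial_minibatch for LSTMs" commit points at the classic pitfall: a recurrent policy needs a fixed batch shape, so a short trailing batch must be either padded or skipped. A hedged sketch of the skip-the-remainder approach (illustrative only; `make_minibatches` is not a function from the PR):

```python
def make_minibatches(indices, batch_size, drop_partial=True):
    """Chunk a flat index list into minibatches.

    When drop_partial is True, a final batch shorter than batch_size
    is discarded so every batch fed to the LSTM has the same shape.
    """
    batches = [indices[i:i + batch_size]
               for i in range(0, len(indices), batch_size)]
    if drop_partial and batches and len(batches[-1]) < batch_size:
        batches.pop()  # drop the short trailing minibatch
    return batches

make_minibatches(list(range(10)), 4)         # -> [[0, 1, 2, 3], [4, 5, 6, 7]]
make_minibatches(list(range(10)), 4, False)  # -> [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

Dropping the remainder trades a little data for a uniform batch shape; the alternative is to pad the last batch and mask the padded rows, matching the masking scheme the DataLoader commits describe.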