Initial states #2
Hi Amineh! Thanks for the insight, I think you are right. In the coming days I'm going to test your advice and see whether it gets me grid cells. I will let you know ;)
I re-trained the agent, this time feeding the previous RNN state when the timestep is not 0. Unfortunately, the results are even worse. I looked again at the official repo (https://github.com/deepmind/grid-cells/blob/d1a2304d9a54e5ead676af577a38d4d87aa73041/train.py#L167) and they don't seem to feed the previous RNN state. At this point, I believe it could be a problem with the data. Maybe I misinterpreted the paper on the trajectory simulator (https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1002553). Testing the agent on the data provided in the official repo could verify my hypothesis.
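(Editorial note: a minimal sketch of what carrying the previous LSTM state across consecutive batches could look like in TensorFlow 1.x. All names and sizes below are hypothetical and this is not the repo's actual code.)

```python
# Sketch only: thread the LSTM state from one batch into the next via placeholders.
import numpy as np
import tensorflow as tf

hidden_size, batch_size, seq_len, n_features = 128, 10, 100, 3

inputs = tf.placeholder(tf.float32, [batch_size, seq_len, n_features])
init_h = tf.placeholder(tf.float32, [batch_size, hidden_size])
init_c = tf.placeholder(tf.float32, [batch_size, hidden_size])

cell = tf.nn.rnn_cell.LSTMCell(hidden_size)
initial_state = tf.nn.rnn_cell.LSTMStateTuple(c=init_c, h=init_h)
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # At timestep 0 the state would instead come from the ground-truth
    # distributions; zeros are only a stand-in here.
    c = np.zeros((batch_size, hidden_size), dtype=np.float32)
    h = np.zeros((batch_size, hidden_size), dtype=np.float32)
    for _ in range(5):
        x = np.random.rand(batch_size, seq_len, n_features).astype(np.float32)
        # final_state is an LSTMStateTuple(c, h); feeding it back on the next
        # iteration is the "previous RNN state" behaviour discussed above.
        c, h = sess.run(final_state, {inputs: x, init_c: c, init_h: h})
```

By contrast, the linked train.py appears to build a fresh initial state for every sequence rather than threading the final state through, consistent with the observation above.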
Thank you for explaining.
I'll have another look and let you know if I can find anything.
Hi Stefano,
Good job with the code.
I think the reason you're not getting the grid cells is that you don't feed the previous state of your last batch of data into your LSTM network. You mention in agent.py line 49 that you want to do this if the timestep is not zero, but I don't see any implementation of it:
# If it is going to be timestep=0, initialize the hidden and cell state using the Ground Truth Distributions.
# Otherwise, use the hidden state and cell state from the previous timestep, passed using the placeholders.
I'm guessing this makes all your data be treated as independent sequences of length 100, which is a big waste of the data.
If I'm mistaken, please let me know, or just ignore this issue.
Good luck.
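(Editorial note: a minimal sketch of the kind of conditional initialization described in the agent.py comment quoted above: at timestep 0 the LSTM state is built from the ground-truth distributions, otherwise the previous state is fed back through placeholders. TensorFlow 1.x-style code with hypothetical names and sizes, not the repo's actual implementation.)

```python
# Sketch only: two possible sources for the LSTM initial state.
import tensorflow as tf

hidden_size, n_place_cells, n_hd_cells = 128, 256, 12

# Ground-truth activations at the start of a trajectory.
gt_place = tf.placeholder(tf.float32, [None, n_place_cells])
gt_hd = tf.placeholder(tf.float32, [None, n_hd_cells])

# State carried over from the previous batch of the same trajectory.
prev_h = tf.placeholder(tf.float32, [None, hidden_size])
prev_c = tf.placeholder(tf.float32, [None, hidden_size])

# Learned projection of the ground-truth distributions into an LSTM state.
gt_concat = tf.concat([gt_place, gt_hd], axis=1)
init_h_from_gt = tf.layers.dense(gt_concat, hidden_size, name="init_h")
init_c_from_gt = tf.layers.dense(gt_concat, hidden_size, name="init_c")

initial_state = tf.nn.rnn_cell.LSTMStateTuple(c=prev_c, h=prev_h)
# initial_state would then be passed to tf.nn.dynamic_rnn together with the
# velocity inputs, and the returned final state saved for the next batch.

# The choice between the two cases can happen at feed time in the training loop:
#   if timestep == 0:
#       c, h = sess.run([init_c_from_gt, init_h_from_gt],
#                       {gt_place: place_0, gt_hd: hd_0})
#   else:
#       c, h = final state returned by the previous sess.run
#   sess.run(train_op, {prev_c: c, prev_h: h, ...})
# sess, train_op, place_0 and hd_0 are placeholders for the surrounding
# training code and are hypothetical names.
```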