Increasing the number of epochs effectively multiplies the dataset size. I need to work with the previous version, using TensorFlow v1, because I need to interact with each step of each epoch. I changed the program to reinitialize the dataset iterator (dataset.iterator_initializer) at every epoch change. The problem is that I had no success updating the epoch counter at each iterator initialization without increasing the dataset.
I tried defining an operator, adding a variable named epoch to the dataset property class, and using a placeholder ...
I tried adding an epoch argument to word2vec.train ... as can be seen on GitHub. The Word2Vec class only knows that a new epoch started through the dataset itself, but this is not good, because the batch is then not synchronized with the dataset size.
I also tried feeding the operator through feed_dict, without success.
Can you give me an idea? The structure is:
...
for epoch in range(1, epochs + 1):
    sess.run(dataset.iterator_initializer)
    while True:
        try:
            # the placeholder must not share the name of the Python loop
            # variable, otherwise feed_dict keys on a plain int
            sess.run(..., feed_dict={epoch_ph: epoch})
            result_dict = sess.run(to_be_run_dict)
            ...
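One pattern that fits the structure above is to keep the epoch counter in the graph as a variable rather than in the dataset: the iterator is re-initialized every epoch (so the data is never repeated or multiplied), and an assign op bumps the counter at the same point. A minimal sketch, assuming TensorFlow 1.15 (via `tf.compat.v1`, so it also runs on 2.x); the names `epoch_var`, `increment_epoch` and the toy 4-element dataset are illustrative, not from the original code:

```python
import tensorflow.compat.v1 as tf  # TF 1.x graph-mode API

tf.disable_eager_execution()

# Hypothetical stand-in for the real text pipeline.
dataset = tf.data.Dataset.from_tensor_slices([10, 20, 30, 40])
iterator = tf.data.make_initializable_iterator(dataset)
next_batch = iterator.get_next()

# Epoch counter lives in the graph, independent of the dataset,
# so re-initializing the iterator does not grow the data.
epoch_var = tf.Variable(0, trainable=False, name="epoch")
increment_epoch = tf.assign_add(epoch_var, 1)

seen = []  # (epoch, element) pairs, for inspection
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(3):                      # 3 epochs over the same data
        sess.run(iterator.initializer)      # rewind, size unchanged
        sess.run(increment_epoch)           # graph-side epoch bump
        while True:
            try:
                ep, batch = sess.run([epoch_var, next_batch])
                seen.append((int(ep), int(batch)))
            except tf.errors.OutOfRangeError:
                break

print(seen)  # 3 epochs x 4 elements = 12 steps
```

Every `sess.run` inside the loop can fetch `epoch_var` alongside the training ops, so the model always sees the current epoch without any feed_dict and without `dataset.repeat()`.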
I tried to start with 20 epochs. The text file is big: all Eur-Lex data from 2012 in Portuguese. After two and a half hours, the session had not started. I am using Linux 16.06, 22 GB of RAM, and TensorFlow 1.15. It has never taken this long before. It is still running now as I write this.