Issues w/ running on Windows #1
Comments
@ad48hp That issue occurs when the smallest octave image size is too small for the model. A quick test with the GoogleNet Caffe models shows that they require both height and width to be higher than 223px. This will result in an error because one of Octave 1's dimensions is smaller than 223px:
For the example images, I used
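As a rough illustration of where that minimum comes from, here is a sketch of how an octave pyramid shrinks the input. The function names and the 0.6 scale factor are illustrative, not neural-dream's actual code:

```python
# Hypothetical sketch: each octave downscales the image, so with enough
# octaves the smallest one can drop below the network's minimum input size.

def octave_sizes(height, width, num_octaves, octave_scale=0.6):
    """Return the (h, w) of each octave, smallest first."""
    sizes = []
    for i in range(num_octaves):
        factor = octave_scale ** (num_octaves - 1 - i)
        sizes.append((int(height * factor), int(width * factor)))
    return sizes

def check_min_size(sizes, min_dim=224):
    """List the octaves whose height or width is below the model minimum."""
    return [s for s in sizes if s[0] < min_dim or s[1] < min_dim]

sizes = octave_sizes(512, 512, num_octaves=4)
# The smallest octaves here end up well under 224px, which is what
# triggers the GoogleNet error described above.
print(sizes, check_min_size(sizes))
```

With these illustrative numbers, a 512px input shrinks to 110px at the smallest octave, so the pooling layers run out of pixels exactly as in the error above.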
Octaves, as I understand them, are pretty much the same thing as the multiscale resolution technique used with style transfer.
Seems to work now, though not sure why, but the DeepDreaming is incredibly slow with this one.. Could ya look into that? This is the script I tried..
@ad48hp I'll look into it. Have you tried using
Seems to not go faster. Can't yet find that one in the dreamify script..
If the learning rate is set too high, the image will look like this: If the learning rate is set too low, then very little if any change will occur on the output image. Different models can require different learning rates, and you may have to play around with the values to find one that works. PyTorch VGG models for instance will require a really low learning rate, like
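The effect the learning rate has can be seen even in a toy gradient-ascent problem (a hypothetical sketch, not the repo's optimizer; DeepDream similarly ascends the gradient of a layer activation):

```python
# Toy sketch of why the learning rate matters for gradient ascent.
# Here we maximize f(x) = -(x - 3)^2, whose gradient is -2 * (x - 3).

def ascend(x, lr, steps):
    for _ in range(steps):
        grad = -2.0 * (x - 3.0)
        x += lr * grad
    return x

print(ascend(0.0, 0.1, 50))  # converges near the optimum x = 3
print(ascend(0.0, 1.5, 5))   # overshoots and diverges: the "too high" regime
```

The same dynamic on an image shows up as the fried, oversaturated look when the rate is too high, and as an almost unchanged output when it is too low.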
Okay, seems to be running fine now, maybe the speed gets higher when it's run a few times.. The code I wrote for myself back then was able to pick up w/ the zooming where it last ended, so let's say I wanted 1000 pictures to be generated, but on 437 I stopped it, and then when I ran it again it continued with 438.. Could ya implement it here, boi?
@ad48hp For the moment you can try this: https://github.com/ProGamerGov/neural-dream/tree/output-image-name. I made the
How that script worked is that it used path.isfile to search for the last image, and then used it..
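A minimal sketch of that resume logic, assuming frames are named out_1.png, out_2.png, and so on (the prefix and naming scheme here are assumptions, not the repo's actual convention):

```python
import os

def last_output_index(out_dir, prefix="out_", ext=".png"):
    """Walk up from 1 until a frame file is missing; the previous index is
    the last image written, so a new run can continue from there."""
    i = 1
    while os.path.isfile(os.path.join(out_dir, f"{prefix}{i}{ext}")):
        i += 1
    return i - 1  # 0 means there is nothing to resume from
```

With frames 1 through 437 on disk, this returns 437, and the loop can restart at 438.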
On Windows, the CUDA backend throws "RuntimeError: error in LoadLibraryA".. I wrote..
Also, I was able to reproduce the error I wrote you about! ^^
Can you reproduce the results?
@ad48hp The parameters don't seem to raise that PyTorch error that you posted above. The content image you are using seems extremely bright, and using
I meant, I was able to reproduce the issue I wrote about here, oddly enough with the Places205 model path as well.. What's weird is that at the beginning I wasn't getting those shapes at all, and now I get them consistently.. Isn't there some sort of cache that keeps ruining it? (I tried to remove the __pycache__ folders, it didn't change anything so far)..
The learning rate that you are using seems like it might be a bit too high. Lowering it to 1.6 resulted in this output at 29 iterations: I've found that the best results sometimes require taking things slow initially, because the network has more time to build the details without overshooting.
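One way to "take things slow initially" is a short warmup that ramps the learning rate up over the first iterations. This is an illustrative schedule, not something neural-dream actually implements:

```python
def ramped_lr(base_lr, iteration, warmup=10):
    """Scale the learning rate linearly from base_lr / warmup at the first
    iteration up to the full base_lr once the warmup window has passed."""
    return base_lr * min(1.0, (iteration + 1) / warmup)

# Early iterations use a fraction of the rate; later ones the full 1.6.
schedule = [ramped_lr(1.6, it) for it in range(12)]
print(schedule)
```

The network then builds coarse structure gently before the full-strength updates start adding detail.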
That successfully fixed the problem with Places205!
@ad48hp I've found that results like those are normally caused by the inputs and parameters that you have chosen. The content image you are using may be affecting the results. You can try adding
Ha, ya seem to be yet way better at it than me! Under Places365 "Ours" it ends with "tar", but from what I can see there's no archive in it, so ya can just remove the 'tar' extension from the filename completely..
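Dropping the misleading extension is a one-liner; the filename below is just an example of the pattern, and torch.load reads these checkpoints regardless of extension anyway:

```python
import os

def strip_tar_ext(path):
    """Remove a trailing '.tar' from a checkpoint filename. The Places365
    weights carry the extension but are not actually tar archives."""
    root, ext = os.path.splitext(path)
    return root if ext == ".tar" else path

print(strip_tar_ext("resnet50_places365.pth.tar"))
```

os.rename(path, strip_tar_ext(path)) would then apply the cleanup on disk.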
Also, the NSFW network doesn't seem to work..
I've added the
Shouldn't the "If channel_mode is set to a value other than all, only the first value in the list will be used" part in the README also include 'ignore'? Also, can ya now add one tiny thing: when output_start_num is set to a value greater than 1, it would automatically load the t(output_start_num - 1).png file in the output directory, or something like that yet?
@ad48hp Good catch! I'll have to think about how I'd implement such a feature. For the moment though, it's simple enough to manually find the highest image number.
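A sketch of the requested behavior, assuming the t(N).png naming from the comment above (the helper name and return convention are hypothetical, not part of the repo):

```python
import os

def resume_init_image(output_dir, output_start_num, prefix="t", ext=".png"):
    """When output_start_num > 1, return the previous frame's path so it
    can be used as the init image; otherwise return None."""
    if output_start_num > 1:
        candidate = os.path.join(
            output_dir, f"{prefix}{output_start_num - 1}{ext}")
        if os.path.isfile(candidate):
            return candidate
    return None
```

So with output_start_num = 438, the run would pick up from t437.png if it exists.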
I should mention that if you add
Also, this is a thing I believe is not directly relevant to this repository, but for some reason, DeepDream scripts run fast at the beginning, and then, after about 20 minutes of runtime, it slows down tremendously. Rerunning the app sometimes helps temporarily, but after a while the speed goes terrible again (first about 6 iterations per second, then it drops to about 1 iteration per 3 seconds).. Reported GPU usage is consistently low (even when setting NVIDIA to the highest performance setting), and it stays slow even with MSI Afterburner.. CPU usage goes from 45% to about 24% after a while..
I would expect that if there was a memory leak, usage would increase. I changed a line of code in the update branch that could fix the issue if it was caused by a bug in my code: https://github.com/ProGamerGov/neural-dream/tree/update. If that doesn't resolve it, then there could be something else going on.
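To narrow the slowdown down, it may help to log iterations/sec over time and see when exactly the drop happens. This is a generic sketch, not part of the repo; the lambda stands in for the real dream step:

```python
import time

def timed_loop(step_fn, iterations, report_every=100):
    """Run step_fn repeatedly and record iterations/sec per reporting
    window, so a gradual slowdown shows up as a falling rate."""
    rates = []
    t0 = time.perf_counter()
    for i in range(1, iterations + 1):
        step_fn()
        if i % report_every == 0:
            t1 = time.perf_counter()
            rates.append(report_every / (t1 - t0))
            t0 = t1
    return rates

rates = timed_loop(lambda: None, 200)
print(rates)
```

If the rate falls off sharply around the 20-minute mark regardless of model or input, that would point at the environment (driver, thermal throttling, paging) rather than this code.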
I'll try the update soon hopefully..
That looks like it may be a PyTorch bug: NVIDIA/apex#319. It doesn't look like anyone has been able to reliably reproduce the error though.
```
File "D:\Cancer\neural-dream\neural_dream\models\googlenet\bvlc_googlenet.py", line 352, in forward
    pool5_7x7_s1 = F.avg_pool2d(inception_5b_output, kernel_size=(7, 7), stride=(1, 1), padding=(0,), ceil_mode=False, count_include_pad=False)
RuntimeError: Given input size: (1024x4x4). Calculated output size: (1024x-2x-2). Output size is too small
```
Tried with multiple images, image sizes & models..
Would be nice if you could get it working, as it's quite moar challenging to get Caffe running on Windows currently..
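That error follows directly from the pooling arithmetic: with ceil_mode=False, avg_pool2d's spatial output size is floor((in + 2*pad - kernel)/stride) + 1, so a 4x4 feature map against a 7x7 kernel yields -2, matching the traceback above:

```python
import math

def pool_out(size, kernel, stride=1, pad=0):
    """Spatial output size of avg_pool2d with ceil_mode=False."""
    return math.floor((size + 2 * pad - kernel) / stride) + 1

print(pool_out(4, 7))  # -2, as in the traceback: a 4x4 map can't fit a 7x7 kernel
print(pool_out(7, 7))  # 1: inception_5b's output must reach at least 7x7 here
```

Since GoogleNet halves the spatial size several times before this layer, the fix is the same as in the first comment: keep the smallest octave's height and width large enough that at least 7x7 survives at pool5.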