
My computer's gpu doesn't seem to support me #65

Open
Cristo-R opened this issue Nov 21, 2020 · 2 comments

Comments

@Cristo-R

When I run the model, I get an error like the following:

tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[4096,128,4,4] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[Node: inpaint_net/b1conv10_atrous/Conv2D = Conv2D[T=DT_FLOAT, data_format="NHWC", dilations=[1, 1, 1, 1], padding="VALID", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](inpaint_net/b1conv10_atrous/SpaceToBatchND, inpaint_net/b1conv10_atrous/kernel/read)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.


vinits5 commented Mar 31, 2021

Reduce the batch_size parameter from 16 to a lower value, and keep lowering it until your system stops giving ResourceExhaustedError. I hope you are using a GPU and not the CPU.
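The strategy above can be sketched as a loop that halves batch_size after each OOM until a training step fits. This is only an illustration: `train_step`, `memory_needed_mb`, and the memory figures are hypothetical stand-ins for the real TensorFlow session run, which raises ResourceExhaustedError when the graph's tensors don't fit on the GPU.

```python
class ResourceExhaustedError(RuntimeError):
    """Stand-in for tensorflow.python.framework.errors_impl.ResourceExhaustedError."""

AVAILABLE_MEMORY_MB = 4096  # assumed GPU memory budget (illustrative)

def memory_needed_mb(batch_size):
    # Activation memory grows roughly linearly with batch size;
    # 1000 MB per sample is an arbitrary illustrative figure.
    return batch_size * 1000

def train_step(batch_size):
    # Stand-in for one sess.run() training step.
    if memory_needed_mb(batch_size) > AVAILABLE_MEMORY_MB:
        raise ResourceExhaustedError(f"OOM at batch_size={batch_size}")
    return "ok"

def find_workable_batch_size(batch_size=16):
    """Halve batch_size on every OOM until a step succeeds."""
    while batch_size >= 1:
        try:
            train_step(batch_size)
            return batch_size
        except ResourceExhaustedError:
            batch_size //= 2  # halve and retry
    raise RuntimeError("Even batch_size=1 does not fit on this GPU")

print(find_workable_batch_size())  # with these illustrative numbers: 4
```

Note the trade-off mentioned later in this thread: a smaller batch size can hurt training quality, so once you find a value that fits, gradient accumulation or a larger GPU (e.g. Colab) may be preferable to shrinking the batch further.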

Cristo-R (Author) commented Apr 1, 2021

> Reduce the batch_size parameter from 16 to some lower value. Do it until your system doesn't stop giving ResourceExhaustedError. I hope you are using some GPU and not CPU.

I solved this problem last year; thank you for your reply. A smaller batch_size reduces the model's performance, so I tried running it on Colab instead :)
