tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[4096,128,4,4] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[Node: inpaint_net/b1conv10_atrous/Conv2D = Conv2D[T=DT_FLOAT, data_format="NHWC", dilations=[1, 1, 1, 1], padding="VALID", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](inpaint_net/b1conv10_atrous/SpaceToBatchND, inpaint_net/b1conv10_atrous/kernel/read)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
something like this
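For reference, the hint at the end of the traceback can be acted on directly. Below is a minimal sketch assuming a TF 1.x `tf.Session`-based training loop; `sess` and `train_op` are placeholders for the project's own session and training op, not names from this repo:

```python
import tensorflow as tf

# Attach RunOptions with report_tensor_allocations_upon_oom so that the next
# OOM also prints which tensors are holding GPU memory at the time of failure.
run_options = tf.RunOptions(report_tensor_allocations_upon_oom=True)

def run_one_step(sess, train_op):
    # The options only add output when an OOM actually occurs.
    return sess.run(train_op, options=run_options)
```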
Reduce the batch_size parameter from 16 to a lower value. Keep lowering it until your system stops raising ResourceExhaustedError. I hope you are running on a GPU and not the CPU.
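For what it's worth, here is a minimal sketch of that advice, assuming you can rebuild the training graph for an arbitrary batch size. `build_train_op` is a hypothetical user-supplied function; the real project sets batch_size in its own config:

```python
import tensorflow as tf

def find_fitting_batch_size(build_train_op, start=16, minimum=1):
    """Halve the batch size until one training step runs without OOM."""
    batch_size = start
    while batch_size >= minimum:
        tf.reset_default_graph()
        train_op = build_train_op(batch_size)
        with tf.Session() as sess:
            sess.run(tf.global_variables_initializer())
            try:
                sess.run(train_op)  # a single step is enough to trigger allocation
                return batch_size
            except tf.errors.ResourceExhaustedError:
                batch_size //= 2  # did not fit; try half
    raise RuntimeError("No batch size >= %d fits in GPU memory" % minimum)
```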
I solved this problem last year, thank you for your reply. A smaller batch_size reduces the model's performance, so I try to run it on Colab instead :)