I notice that the normalization of images in your code uses `image - mean` rather than the usual `(image - mean) / std`. Is there a specific reason for this kind of normalization?
Also, your pixel values are on the scale [-128, 128], rather than the [-0.5, 0.5] used in other works. Which one is better?
Finally, if I use only the ImageNet pre-trained model, which image normalization and pixel scale should I use?
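For concreteness, the conventions being compared can be sketched in NumPy as below. The specific per-channel mean/std values are the commonly published ImageNet statistics, not values taken from this repository's code:

```python
import numpy as np

# Hypothetical 8-bit RGB image with values in [0, 255].
image = np.random.default_rng(0).integers(0, 256, size=(4, 4, 3)).astype(np.float32)

# Commonly published ImageNet per-channel statistics (assumed, not from this repo).
mean = np.array([123.68, 116.78, 103.94], dtype=np.float32)  # RGB means, [0, 255] scale
std = np.array([58.40, 57.12, 57.38], dtype=np.float32)      # RGB stds,  [0, 255] scale

# Convention A: mean subtraction only -> values roughly on a [-128, 128] scale.
a = image - mean

# Convention B: rescale to [0, 1], then center -> values in [-0.5, 0.5].
b = image / 255.0 - 0.5

# Convention C: full standardization, "(image - mean) / std".
c = (image - mean) / std
```

One common rationale for convention A: dividing by a per-channel std only rescales each input channel by a constant, and the first convolutional layer's weights can absorb that rescaling during training, so mean-only subtraction often works just as well in practice.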
I have figured out the scale problem in the data preprocessing. But I still do not know why the model does not use std in the normalization. Any help will be appreciated!