Hi there, thanks for open-sourcing this repository!
I have a question regarding the performance and reproducibility of this implementation. Specifically, I’m wondering:
- Are the results obtained with this codebase expected to match (or be close to) the performance reported in the original DINOv3 paper?
- Has there been any internal benchmarking to verify alignment with the paper's reported results?
- Alternatively, have other users reported their reproduction results using this repository (e.g., on classification, segmentation, or depth estimation tasks)?
I'm currently trying to reproduce some of the downstream results (a rough sketch of my setup is below), but I'm not sure whether any performance gap I observe is due to my setup or to inherent differences between this implementation and the original one.
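For context, here is a minimal sketch of the kind of frozen-feature evaluation I have in mind. The hub entrypoint name, the `weights` argument, and the preprocessing are assumptions on my part rather than something I've verified against your evaluation code, so please correct me if they differ:

```python
import torch
from torchvision import transforms
from PIL import Image

# Assumption: the repo exposes torch.hub entrypoints and accepts a local
# checkpoint path via `weights` (since the weights are gated). Adjust the
# entrypoint/checkpoint names to whatever you actually ship.
model = torch.hub.load(
    "facebookresearch/dinov3",
    "dinov3_vits16",
    weights="/path/to/dinov3_vits16.pth",
)
model.eval()

# Standard ImageNet-style preprocessing; also an assumption on my part,
# since the paper's evaluations may use a different resolution/normalization.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=(0.485, 0.456, 0.406),
                         std=(0.229, 0.224, 0.225)),
])

img = preprocess(Image.open("example.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    feats = model(img)  # global embedding, fed to a frozen linear probe
print(feats.shape)
```

If the intended evaluation pipeline differs from this (e.g., different feature extraction, multi-crop, or probe training recipe), a pointer to the reference setup would already help a lot.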
Any clarification or pointers would be greatly appreciated. Thanks again for your work!