fixtures: profile upload performance #88
Comments
@tiborsimko: if the change is only in the metadata, doing it with the --skip-files option would definitely improve the time needed. Note that this record has 17 file indices. In versions 0.1.11 and older, the file indices are stored as normal files. Starting with 0.2, the file indices are read, and the files inside each file index are processed. This particular record has more than 2000 files that have to be deleted and reinserted (unless the --skip-files option is specified).
FYI, I've just checked this with the latest version (0.2.5). I get:
The update takes a bit longer, since it has to delete:
Could you please check if you get similar timings (instead of the six minutes mentioned above)?
The previous comment was not a fair comparison, because it was done in a newly created DB (with fewer entries). Doing it in the QA instance, with the 2.5M entries, was again slow. There was indeed a missing index on the file_object table. Creating it on dev improved the timing quite a lot. We will put the same index on QA.
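As an aside, the effect of such a missing index can be reproduced in miniature with SQLite's query planner. The table and column names below are illustrative only, not the actual cernopendata-portal schema:

```python
import sqlite3

# Toy version of the situation: a "file_object"-like table queried by a
# column that initially has no index (names here are made up for the demo).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE file_object (id INTEGER PRIMARY KEY, version_id TEXT, key TEXT)"
)
conn.executemany(
    "INSERT INTO file_object (version_id, key) VALUES (?, ?)",
    ((f"v{i % 1000}", f"file{i}") for i in range(10000)),
)

def plan(query):
    # The fourth column of EXPLAIN QUERY PLAN output describes each step.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + query))

q = "SELECT * FROM file_object WHERE version_id = 'v42'"
before = plan(q)  # without an index the planner reports a full table scan

conn.execute(
    "CREATE INDEX ix_file_object_version_id ON file_object (version_id)"
)
after = plan(q)  # with the index the planner switches to an index search

print(before)
print(after)
```

With thousands of lookups per fixture load, the difference between a full scan and an index search over millions of rows is exactly the kind of slowdown described above.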
Closing the issue. Feel free to reopen it if there are still any problems.
Current behaviour
Seen on the QA instance on November 15th.
Updating ATLAS records from cernopendata/opendata.cern.ch#3688 using the cernopendata-portal image 0.1.11 works very fast, both locally and on PROD. However, on QA the same upload process got stuck:
There was no reply for many minutes; the process seemed to "run away". I interrupted it after about 6 minutes:
Expected behaviour
The records should be updated fast, within 2-3 seconds, as with 0.1.11.
Notes
This is especially interesting because the change in the record JSON was only minimal:
That is, no attached files changed during this update, and the record itself has only about 34 files attached, all directly rather than via index files... So waiting for 6 minutes seems excessive.
It would be good to profile the fixture loading command to see where this extra time is spent. (Perhaps a missing DB index or an inefficient DB query is causing the slowdown?)
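One quick way to do this is Python's built-in cProfile. A minimal sketch follows; load_fixtures here is just a stand-in for the actual fixture-loading entry point, whose name is not shown in this issue:

```python
import cProfile
import io
import pstats

def load_fixtures():
    # Placeholder for the real record-update code path; any expensive
    # function can be profiled the same way.
    total = 0
    for i in range(100_000):
        total += i * i
    return total

profiler = cProfile.Profile()
profiler.enable()
load_fixtures()
profiler.disable()

# Sort by cumulative time to surface the slowest call chains first,
# e.g. a hot DB query would dominate this list.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(10)
report = stream.getvalue()
print(report)
```

Sorting by cumulative time makes it easy to spot whether the six minutes are spent in a single hot DB call or spread across many small ones.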