Germplasm Search Optimizations #49
Conversation
If not specified, this sort will be used to keep the endpoints idempotent.
…tion Added utility methods to SearchQueryBuilder and BrAPIRepositoryImpl to allow proper pagination of Hibernate fetch queries without suffocating memory. Also added methods to run queries through the SearchQueryBuilder without pagination entirely, so pagination is skipped when it's not required. This was an issue that specifically had to be addressed for the BI cache, but the resulting code is reusable for other use cases.

Modified the GermplasmApiController's searchGermplasmPost endpoint to accommodate two code paths:
- When no page and pageSize are supplied, the code will grab all germplasm without pagination. Good for large data grabs, but it gets dangerous with excessively large amounts of data. This exists entirely to meet BI's current use case, which we have strongly advised they move off of.
- When page and/or pageSize are supplied, paginate as requested, with a default page size of 1000 if none is requested.
Additionally, these configurable vars were made consistent and usable across BrAPIController and PagingUtility, which both utilize them.
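The two code paths described above can be pictured with a small stand-in; `PagingDecision`, its method, and the constant name are hypothetical, with only the default page size of 1000 taken from the description:

```java
// Hypothetical sketch of the branching described in the PR; this is an
// illustrative stand-in, not the server's actual searchGermplasmPost code.
public class PagingDecision {
    // The PR describes a default page size of 1000 when pageSize is omitted
    static final int DEFAULT_PAGE_SIZE = 1000;

    /**
     * Returns null when neither page nor pageSize was supplied, meaning
     * "fetch everything without pagination" (the BI-cache code path);
     * otherwise returns the effective page size to paginate with.
     */
    static Integer effectivePageSize(Integer page, Integer pageSize) {
        if (page == null && pageSize == null) {
            return null; // unpaginated grab of all germplasm
        }
        return pageSize != null ? pageSize : DEFAULT_PAGE_SIZE;
    }
}
```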
@@ -156,7 +156,7 @@ public ResponseEntity<SeedLotTransactionListResponse> seedlotsTransactionsGet(
    validateAcceptHeader(request);
    Metadata metadata = generateMetaDataTemplate(page, pageSize);
    List<SeedLotTransaction> data = seedLotService.findSeedLotTransactions(transactionDbId, seedLotDbId,
        germplasmDbId, germplasmName, crossDbId, crossName, commonCropName, programDbId, externalReferenceId,
externalReferenceId is the newer, correct spelling, introduced in BrAPI v2.1. externalReferenceID is the deprecated one and could be deleted if there is no need for backwards compatibility with v2.0.
These were autofixes for params that were unused in these signatures; I can rename to externalReferenceId here and elsewhere.
I will use the right one here and in other places, but I think we should wait to fully delete it from the ApiControllers and data models, or do that in another commit/MR.
I would either continue to support the deprecated externalReferenceID parameters (see my comment on CrossingProjectsApiController.java about how to do that), or fully remove them from controller signatures.
Ok, I would rather this not become a sticking point then. These changes are pretty far removed from, and out of scope of, the things we actually want, so I will just revert them.
If we do want to remove those attributes from the codebase, I would rather that be done in a separate MR, because it's a decent amount of changes to fully support.
@@ -33,29 +33,14 @@
public class BrAPIController {
    private static final Logger log = LoggerFactory.getLogger(ServerInfoApiController.class);

    protected Metadata generateMetaDataTemplateForSearch(Integer originalRequestedPage, Integer newRequestedPage,
This was unused.
@@ -81,16 +66,6 @@ protected Metadata generateMetaDataTemplate(Integer page, Integer pageSize) thro
    return metaData;
}

private void validatePaging(Integer page, Integer pageSize) throws BrAPIServerException {
Moved to PagingUtility
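A minimal sketch of what the relocated validation might look like; the method name follows the diff above, but the bounds, the exception type, and the constant are assumptions (the PR elsewhere mentions a configurable 65k maximum page size):

```java
// Illustrative sketch only: IllegalArgumentException stands in for the real
// BrAPIServerException, and the exact bounds are assumed, not taken from code.
public class PagingValidationSketch {
    // Configurable 65k cap mentioned later in the PR description
    static final int MAX_PAGE_SIZE = 65_000;

    static void validatePaging(Integer page, Integer pageSize) {
        if (page != null && page < 0) {
            throw new IllegalArgumentException("page must be >= 0");
        }
        if (pageSize != null && (pageSize < 1 || pageSize > MAX_PAGE_SIZE)) {
            throw new IllegalArgumentException(
                "pageSize must be between 1 and " + MAX_PAGE_SIZE);
        }
    }
}
```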
import java.util.List;

public interface BrAPIComponent<T, R extends SearchRequest> {
    List<T> findEntities(@Valid R request, Metadata metadata);
    List<T> findEntities(@Valid R request, Metadata metadata) throws BrAPIServerException;
All these BrAPIServerExceptions were added to catch and propagate the InvalidPagingException up the stack, delivering a 400 to the user with a message instructing them how to page correctly.
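The propagation pattern being described can be sketched with minimal stand-ins; the real exception hierarchy and handler in the server differ in detail:

```java
// Minimal stand-ins for the server's exception types; the real ones are
// checked exceptions carrying more context.
public class PagingErrorSketch {
    public static class InvalidPagingException extends RuntimeException {
        public InvalidPagingException(String msg) { super(msg); }
    }

    /** Controllers catch service-layer failures and map paging problems to 400. */
    public static int toHttpStatus(RuntimeException e) {
        return (e instanceof InvalidPagingException) ? 400 : 500;
    }
}
```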
Pageable pageReq = PagingUtility.getPageRequest(metadata);
SearchQueryBuilder<ListEntity> searchQuery = buildQueryString(request);

Page<ListEntity> entityPage = listRepository.findAllBySearch(searchQuery, pageReq);
Page<ListEntity> entityPage = listRepository.findAllBySearchAndPaginate(searchQuery, pageReq);
All of these call changes are just signature switches. I wanted findAllBySearch to mean searching with a search query without pagination. So far, only GermplasmService really uses this; everything else still uses pagination for now. For all intents and purposes, findAllBySearchAndPaginate() is the old findAllBySearch().
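The rename can be pictured with an in-memory stand-in; only the method names mirror the PR, the bodies are illustrative:

```java
import java.util.List;

// In-memory illustration of the split: findAllBySearch now means "no
// pagination", and findAllBySearchAndPaginate is the old findAllBySearch.
public class RepoSplitSketch {
    /** Unpaginated search: return every match. */
    public static <T> List<T> findAllBySearch(List<T> matches) {
        return matches;
    }

    /** Paginated search: the behavior the old findAllBySearch had. */
    public static <T> List<T> findAllBySearchAndPaginate(List<T> matches, int page, int size) {
        int from = Math.min(page * size, matches.size());
        int to = Math.min(from + size, matches.size());
        return matches.subList(from, to);
    }
}
```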
I still need to do functional testing, but I want to get my initial feedback to you as soon as possible.
    });
}

private void fetchRemainingGermCollectionsUsingQuery(SearchQueryBuilder<GermplasmEntity> searchQuery, List<GermplasmEntity> germEntities) {
I think we've outgrown what the ORM can be reasonably used for; I would escape to SQL, either executing a database function (see this PR for an example) or creating a view that performs all the joins on the database side so that we can use a single query.
That said, I'm open to doing it this way if performance is acceptable (I still need to do functional testing) and we make a plan to stop doing this as soon as possible.
lol yea I kinda hate this whole method, nothing technically wrong with it, it's just not pretty.
But it seems to be working for now; we'll keep an eye on it and remove it if future changes make it unnecessary.
I understand the sentiment, but disagree that the mentioned solution is a good fix.
The code is liable to break if the table or any associated tables undergo schema changes, and then you have to track down the procedure you made and fix it.
This ORM solution will stay intact regardless of schema changes (provided the columns/relationships used here aren't completely obliterated or changed).
A view, tbh, has a similar problem.
This also breaks the convention that the application has total control of the SQL it needs to execute, which can be difficult to fix should the need arise.
All this said, yea, this is an eyesore, but IMO that is more indicative that the actual schema/data model is the problem here, not the ORM.
And, to be fair, the only reason this eyesore of a method is needed is because you guys aren't paginating 😉
If you did, the updated optimizations to the paging version of this method will always deliver the performance you want, at least in my testing.
It seems reasonable to me to change data access code when the schema changes, but I realize nobody wants to maintain large PL/pgSQL functions or complex views.
I agree that pagination is the right approach when requesting a lot of data; ideally we can move to making paginated requests eventually.
if (sortClause.isEmpty()) {
    // By default, sort on entity id so the query result remains idempotent
    sortClause = " ORDER BY entity.id ASC ";
}
this could be moved to the constructor, line 25. Just to avoid maintaining the default in multiple places
I'm not sure this check can be moved, but the default sort String definitely can, so I will do that.
This logic is meant to be called after a SearchQueryBuilder is constructed, because if the sort clause is non-empty, we should use that instead of this default.
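The suggested consolidation might look like this; the class and method names are stand-ins, and the only string taken from the diff is the default ORDER BY clause:

```java
// Keep the default sort defined once so the constructor and any later checks
// share it; only the clause itself comes from the diff above.
public class SortDefaultSketch {
    public static final String DEFAULT_SORT = " ORDER BY entity.id ASC ";

    /** A request-built sort wins; otherwise fall back to the idempotent id sort. */
    public static String effectiveSort(String sortClause) {
        return (sortClause == null || sortClause.isEmpty()) ? DEFAULT_SORT : sortClause;
    }
}
```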
These were simply IntelliJ suggestions I added because I was editing the surrounding code. I did not intend or want to change the way the spec or surrounding APIs work, at least not in this body of commits. To keep this work separate from the germ optimization work, I have reverted these changes. Also fixed a typo Matthew caught in ScaleService.
Tested locally along with the bi-api changes; it is working.
I did encounter an OutOfMemoryError when loading around 68k germplasm without pagination, but I had constrained the memory of the Docker container to 4GB (and therefore the default JVM heap limit to 1GB; I think it defaults to 1/4 of available RAM in the Docker image we're using). When I increased the available memory to 8GB (2GB JVM heap limit), it worked. Because we'll have 32GB (8GB JVM heap limit) in production, I think we'll have a decent amount of headroom, but it could still occur.
A couple of optimizations, bug fixes, and workarounds have been provided to improve the performance and usability of the germplasm search endpoint.

Hibernate was logging the warning `firstResult/maxResults specified with collection fetch; applying in memory`, which was a big clue to why this search endpoint was performing so poorly. Tim essentially tried to implement it as Vlad described here, but there was a critical error: we passed all of the query logic through to the search query builder, which today runs paginated queries no matter what. This means that while we did the grunt work of fetching the IDs separately and giving them to another query, we ended up with the same applying-in-memory error as before. This code has been changed so that not only does the SearchQueryBuilder support non-paginated queries, there is also more support for the kind of double query that paginated fetches require (see GermplasmService.findGermplasmEntities()). The performance improvement is orders of magnitude: on a dataset of 550k germs on a program, I went from about 4.5 seconds to about 500ms for 100 records, and from 15 seconds to 1 second for 1000.

We now allow either both page and pageSize or neither attribute to be present in the request. When neither is present, new logic on the germplasm search POST endpoint uses the new non-paginated query that SearchQueryBuilder supports to return all data at once. This has a breaking point, however: at about 250k germ records per program, Java completely exhausts its heap trying to load all the data into entity objects and convert them to JSON. It should be noted this also isn't particularly fast, as this is a large amount of data to transmit; 125k records takes about 30 seconds on average to get back. But this should work as a stopgap in the meantime.

To fix a `Could not prepare SQL statement` error, we needed a way to completely refuse lookups that could produce more than 65k SQL params, as this is the limit. These occurred mostly in my testing when I tried to paginate germplasm sets larger than 65k records, because in order to fetch these records we need to pass the IDs of records found in the initial query to later join-fetch queries. I suppose there are other ways around this, like breaking the queries up into more queries, but the right solution feels like incentivizing the requester to actually request data from the search endpoint in a meaningful and more performant way. That is, we have configured a way to control the maximum allowable page size for page requests on the server. For now, it is 65k, and this applies to all entities, not just Germplasm. Specifically for BI, this will be a problem for the cache, which we have addressed for the germplasm entity but not for other larger entities they might have, like observations and observation units. I may revert this commit and put it somewhere else separately if it is a problem loading a cache for large datasets, or I might add to this body of code.

Note: This PR and its associated commits should also be merged to the prod server when it is verified on BI's end.
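The double-query pattern described above (paginate over scalar ids first, then join-fetch full entities for just those ids) can be simulated with plain collections; the JPQL in the comments is illustrative, not the server's actual queries:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// In-memory simulation of the two-query pagination pattern described above.
public class TwoQueryFetchSketch {
    /**
     * Query 1 stand-in: something like "SELECT g.id FROM GermplasmEntity g
     * ORDER BY g.id" with setFirstResult/setMaxResults. Safe to paginate
     * because no collection is fetched, so the database limits the rows.
     */
    public static List<Long> pageOfIds(List<Long> allIds, int page, int size) {
        int from = Math.min(page * size, allIds.size());
        return allIds.subList(from, Math.min(from + size, allIds.size()));
    }

    /**
     * Query 2 stand-in: something like "SELECT g FROM GermplasmEntity g
     * JOIN FETCH g.collections WHERE g.id IN :ids". No paging needed; the
     * id list already bounds the result, avoiding the in-memory warning.
     */
    public static List<String> fetchByIds(Map<Long, String> store, List<Long> ids) {
        return ids.stream().map(store::get).collect(Collectors.toList());
    }
}
```

Note that the id list fed to query 2 is also where the 65k SQL parameter limit mentioned above bites, which is what motivates capping the page size.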