Add BBKNN (TS) method. #84
base: main
Conversation
I have a couple of questions about the submission and testing: …

Thanks!
Yes, it needs to be added there to be included in the workflow. It sounds like you ran the workflow already but I'm not sure that it would include the new method without doing this.
When we do the full benchmark run on the cloud, any failed metrics are ignored (or more accurately, given a score of zero). We don't usually do the full runs locally, so there might be some differences in the settings that cause it to not produce an output. Generally we wouldn't want to disable a metric just for a specific dataset/method.
Thank you for the responses. I noticed that the … Regarding my other question, this was ultimately just a vanilla Nextflow question; I've now figured out how to pipe …
Okay, this PR is ready for review. @mumichae please note that src/methods/bbknn_ts/script.py lines 18-226 do not require detailed review comments; this is an entirely LLM-generated function that we do not want to modify.
Based on a quick glance, this code will likely work as the developer intended. However, it does not make use of the pre-computed preprocessing step and recomputes everything by itself instead. This in itself won't cause the code to fail or give wrong results, but it won't follow the benchmark setup as intended.
Can you elaborate a little on the comment that "it won't follow the benchmark setup as intended"? Is that because we indicate this is an embedding method with preferred normalization of log_cp10k but use the "layers/counts" as input rather than "layers/normalized"?
Would it be considered to follow the benchmark setup as intended if we indicated it is a [feature] method and set adata_integrated.layers['corrected_counts'] = adata_integrated.X? IIUC that could only improve its overall score on the v2.0.0 benchmark as then the HVG metric would also be computed.
I'd rather not edit the code to do that since right now it's purely LLM implemented, just want to make sure we're not missing something more fundamental. Thanks!
For the benchmark we don't consider the preprocessing steps (count transformations, feature selection) part of the integration method; these operations are precomputed and made available to the integration components. But I'm considering relaxing this logic for this method, given that we want to preserve the LLM code and tuning preprocessing parameters is not what the LLM code intends.
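For context, here is a minimal sketch of what the precomputed normalized layer contains, assuming log-CP10k means the common scRNA-seq definition (per-cell scaling to 10,000 total counts, then log1p). The function name `log_cp10k` is illustrative, not openproblems API:

```python
import numpy as np

def log_cp10k(counts):
    # Hedged sketch of log-CP10k normalization: scale each cell (row) to
    # 10,000 total counts, then apply log1p. This is assumed to match the
    # benchmark's precomputed "normalized" layer.
    totals = counts.sum(axis=1, keepdims=True)
    return np.log1p(counts / totals * 1e4)

# A method that recomputes this from raw counts (as the LLM code does)
# produces the same matrix as the precomputed layer, so the recomputation
# is redundant rather than wrong -- but it bypasses the benchmark's
# shared preprocessing step.
```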
Friendly ping -- do you need anything else prior to re-reviewing this PR, @mumichae? Thank you for your help.
name: bbknn_ts
label: BBKNN (TS)
The method name is a bit misleading, since the main feature of this approach is to combine ComBat and BBKNN. I think a name like gemini_combat_bbknn could make this distinction clearer.
- name: methods/batchelor_fastmnn
- name: methods/batchelor_mnn_correct
- name: methods/bbknn
- name: methods/bbknn_ts
name would need to be updated (see above)
batchelor_fastmnn,
batchelor_mnn_correct,
bbknn,
bbknn_ts,
name would need to be updated (see above)
Hi @cmclean, apologies for my late reply, I've been swamped with events the last couple of weeks. I had a look at the prompt and the code in more detail, and found that it suggests a rather unconventional workflow, since you're running 2 integration benchmarks back-to-back. But since the point of openproblems is to benchmark novel approaches, it's definitely interesting to include this one. I added some clarification on the relationship between preprocessing and integration here, but ultimately decided to keep the LLM approach as its own end-to-end workflow, where preprocessing steps aren't tunable.
Please review the output type of the method for consistency with the evaluation workflow (i.e. so that the correct integrated representation gets evaluated).
repository: https://github.com/google-research/score

info:
  method_types: [embedding]
Since BBKNN only modifies the KNN graph, the output type should be knn, not embedding:
- method_types: [embedding]
+ method_types: [knn]
If you also want to consider the ComBat-corrected counts as an additional part of the method output, configure both outputs as follows:
- method_types: [embedding]
+ method_types: [full, knn]
The evaluation pipeline considers each as a separate version of the method and evaluates it separately.
So there will be an evaluation of bbknn_ts:full and bbknn_ts:knn, but there won't be any combination of metrics from the ComBat-corrected counts and BBKNN in a single entry in the results table.
Either way, embedding isn't the correct output type for this approach, because the KNN graph recomputed from the embedding would then be used for the knn-based metrics, rather than the BBKNN-corrected graph.
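To make the distinction concrete: in the scanpy/AnnData conventions this task uses (an assumption here, sketched with a plain dict standing in for a real AnnData object), an embedding output lives in obsm['X_emb'] and the metrics rebuild a KNN graph from it, while a knn output stores the method's own corrected graph in obsp:

```python
import numpy as np
from scipy.sparse import random as sparse_random

# Stand-in for an AnnData object; the key names follow scanpy conventions
# (obsm for per-cell embeddings, obsp for pairwise cell-cell graphs).
adata = {"obsm": {}, "obsp": {}}

# embedding output: metrics rebuild a KNN graph from this matrix,
# discarding any graph the method itself produced
adata["obsm"]["X_emb"] = np.zeros((100, 30))

# knn output: the method's own corrected graph is evaluated directly
graph = sparse_random(100, 100, density=0.05, format="csr", random_state=0)
adata["obsp"]["distances"] = graph
adata["obsp"]["connectivities"] = (graph > 0).astype(float)
```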
label: BBKNN (TS)
summary: "A combination of ComBat and BBKNN discovered and implemented by Gemini."
description: |
  "The BBKNN (TS) solution applies standard scRNA-seq preprocessing steps, including total count normalization, log-transformation, and scaling of gene expression data. Batch effect correction is performed using scanpy.pp.combat directly on the gene expression matrix (before dimensionality reduction). Dimensionality reduction is then applied using PCA on the ComBat-corrected data, and this PCA embedding (adata.obsm['X_pca']) is designated as the integrated embedding (adata.obsm['X_emb']). A custom batch-aware nearest neighbors graph is constructed based on this integrated embedding; for each cell, neighbors are independently identified within its own batch and other batches, up to n_neighbors_per_batch. These candidate neighbors are merged, keeping the minimum distance for duplicate entries, and the top total_k_neighbors are selected for each cell. Finally, a symmetric sparse distance matrix and a binary connectivities matrix are generated to represent the integrated neighborhood graph. This code was entirely written by the AI system described in the associated publication."
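The graph construction described above can be sketched as follows. This is a hedged re-implementation of the description, not the actual script.py code: the parameter names n_neighbors_per_batch and total_k_neighbors are taken from the description, everything else (function name, KD-tree search, defaults) is an assumption.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.spatial import cKDTree

def batch_aware_knn(emb, batches, n_neighbors_per_batch=5, total_k_neighbors=15):
    """Batch-aware KNN graph over an embedding, per the description above."""
    n = emb.shape[0]
    idx_by_batch, trees = {}, {}
    for b in np.unique(batches):
        idx = np.flatnonzero(batches == b)
        idx_by_batch[b] = idx
        trees[b] = cKDTree(emb[idx])
    rows, cols, dists = [], [], []
    for i in range(n):
        cand = {}  # global neighbor index -> minimum distance seen
        for b, tree in trees.items():
            # query one extra neighbor so the cell's own batch can drop self
            k = min(n_neighbors_per_batch + 1, len(idx_by_batch[b]))
            d, j = tree.query(emb[i], k=k)
            for dist, jj in zip(np.atleast_1d(d), np.atleast_1d(j)):
                g = idx_by_batch[b][jj]
                if g == i:
                    continue  # skip the self-match
                cand[g] = min(cand.get(g, np.inf), dist)
        # merge candidates from all batches, keep the closest total_k_neighbors
        for g, dist in sorted(cand.items(), key=lambda kv: kv[1])[:total_k_neighbors]:
            rows.append(i); cols.append(g); dists.append(dist)
    D = csr_matrix((dists, (rows, cols)), shape=(n, n))
    D = D.maximum(D.T)            # symmetrize the distance matrix
    C = (D > 0).astype(float)     # binary connectivities from the graph pattern
    return D, C
```

Per the review comments, a graph built this way should be declared as a knn output so the evaluation uses it directly.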
For feature outputs, the transformation to PCA for the embedding representation is part of the postprocessing of the integration output, so that part of the prompt is not applicable to this approach. Since the final representation is a corrected KNN graph, the task should be considered a knn output type, not embedding (see the method types section of this config).
Describe your changes
This PR adds the top-performing "BBKNN (TS)" method from our recent preprint [1]. I have verified that tests pass, and running the run_full_local.sh script on my own machine successfully recapitulates the numbers published in the preprint.

[1] Aygun et al., An AI system to help scientists write expert-level empirical software, arXiv:2509.06503 (2025), https://arxiv.org/abs/2509.06503.
Checklist before requesting a review
I have performed a self-review of my code
Check the correct box. Does this PR contain:
Proposed changes are described in the CHANGELOG.md
CI Tests succeed and look good!