
Conversation

BSnelling (Collaborator)

(also adds a .gitignore)

from typing import Iterable
from ruamel.yaml import YAML

def generate_neural_ode_problem(
Collaborator
A few things:

  1. This function should take experimental conditions as an optional argument.
  2. It should not have default names for the measurements and neural-network files, right, since these need to exist?
  3. How about allowing users to, optionally, provide initial values for the species following valid PEtab math syntax?

@sebapersson sebapersson requested a review from dilpath October 20, 2025 13:37
@sebapersson (Collaborator)

@dilpath could you also take a look at this? (You are a bit more familiar with Python than I am.)

@dilpath (Member) left a comment

Overall I like it, but could you specify the target audience/intended use case? Perhaps prepare a short docs page that describes its use in the docs Examples section? That would help me understand the purpose better and make it easier to review.

from typing import Iterable
from ruamel.yaml import YAML

def generate_neural_ode_problem(
Member

Is the target audience for this function people who have never used PEtab before? If so, it looks OK overall. If not, I would prefer a different design.

Collaborator

Can you elaborate on the different design?

I would argue the target is both experienced PEtab users and, especially, those newer to the format.

@dilpath (Member) Oct 21, 2025

Ah, this duplicates what we discussed (i.e., the basic function will just accept a PyTorch model + a list of species IDs).

For experienced PEtab users, this might be cumbersome -- they might prefer the modular design we discussed.

s = model.createSpecies()
s.setId(species)
s.setConstant(False)
s.setInitialAmount(0.5)
Member

Why 0.5? 0 or 1? Or something on a similar scale to the first measurement?
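Following up on the "similar scale to the first measurement" option: one way this could look is a small helper that derives the default initial amount from the first measured values instead of hard-coding 0.5. This is purely illustrative, not part of the PR; the helper name and the fallback of 1.0 are assumptions:

```python
def default_initial_amount(first_measurements, fallback=1.0):
    """Pick an initial amount on the scale of the first measured values.

    `first_measurements` holds the earliest measurement per species
    (None entries are skipped); falls back to `fallback` when empty.
    """
    values = [v for v in first_measurements if v is not None]
    if not values:
        return fallback
    return sum(values) / len(values)
```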

observables = {
"observableId": observable_ids,
"observableFormula": species,
"noiseFormula": [0.05] * len(species),
Member

Why 0.05?

Comment on lines +133 to +140
parameters = {
"parameterId": [f"{network_name}_ps"],
"parameterScale": ["lin"],
"lowerBound": ["-inf"],
"upperBound": ["inf"],
"nominalValue": [None],
"estimate": [1],
}
Member

Add priors to initialize these to be small? I think the bounds should also be finite.

Collaborator

I do not really see how bounds on neural network parameters make sense, as they are usually estimated unconstrained, and the algorithms used to estimate them (e.g., Adam) do not handle constraints in their native form.

Member

Yes, you're right. I rather meant that, for initialization, there should at least be bounds or something like a prior that can be sensibly sampled from. An unbounded uniform distribution for initialization would be ... difficult.

@dilpath (Member) Oct 21, 2025

Or is initialization out-of-scope?

Collaborator

As initialization priors will be out of the scope of PEtab v2, I think it is outside the scope here as well.

Comment on lines +146 to +157
{
"model_files": {
"model": {
"location": model_filename,
"language": "sbml",
},
},
"measurement_files": [measurements_filename],
"observable_files": ["observables.tsv"],
"condition_files": ["conditions.tsv"],
"mapping_files": ["mapping.tsv"],
}
Member

It looks like you are using a mix of both PEtab v1 and v2 here. We should target v2 but fine for now if you're working with the current state of PEtab v2 and the libpetab-python library.

Collaborator

I think it is fine for now. Currently the test suite is something of a mix, as v2 is not done, but PEtab SciML will target v2 in the end. Updating the test suite and these examples will also be straightforward later on, as we are not really using features that will be hard to port.

As a side note, including events in this example later on could be a great way to show the benefit of PEtab SciML.

Comment on lines +29 to +31
with open("hybridization.tsv", "r") as f:
# number of lines is header plus inputs and outputs for each species
assert len(f.readlines()) == 1 + len(species) * 2
Member

Replace with the expected content of the files, since they are not too long? That would make the tests more understandable.
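A sketch of the suggested test style. The helper name is hypothetical, and the actual column names of `hybridization.tsv` depend on the PEtab SciML spec, so only the comparison pattern is shown:

```python
def assert_file_content(path: str, expected: str) -> None:
    """Compare a generated file against its expected content verbatim."""
    with open(path) as f:
        actual = f.read()
    assert actual == expected, f"{path} does not match the expected content"
```

In the test, the expected TSV content would then be written out literally next to the assertion, so the generated tables are visible at a glance instead of only a line count.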

@sebapersson (Collaborator)

Overall I like it, but could you specify the target audience/intended use case? Perhaps prepare a short docs page that describes its use in the docs Examples section? That would help me understand the purpose better and make it easier to review.

@BSnelling knows more about the docs, but #37 should include a how-to guide on neural ODEs. Briefly, I see it as a convenient way to reproducibly set up neural ODE models, while also benefiting from PEtab features such as experimental conditions and events (with v2). We should probably highlight these things more clearly in the designated doc page (that is, why it is nice to use PEtab SciML for neural ODEs). As for the target audience, it would be nice if we could more generally capture neural ODE practitioners.

@dilpath (Member)

dilpath commented Oct 21, 2025

Overall I like it, but could you specify the target audience/intended use case? Perhaps prepare a short docs page that describes its use in the docs Examples section? That would help me understand the purpose better and make it easier to review.

@BSnelling knows more about the docs, but #37 should include a how-to guide on neural ODEs. Briefly, I see it as a convenient way to reproducibly set up neural ODE models, while also benefiting from PEtab features such as experimental conditions and events (with v2). We should probably highlight these things more clearly in the designated doc page (that is, why it is nice to use PEtab SciML for neural ODEs). As for the target audience, it would be nice if we could more generally capture neural ODE practitioners.

Thanks, those docs look good, and it makes sense to advertise PEtab a bit there too, to a potential non-PEtab audience.

convenient way to reproducibly set up neural ODE models

As for the target audience, it would be nice if we could more generally capture neural ODE practitioners.

For me it would make sense to have a function that takes the PEtab SciML NNModel and a list of species IDs, then generates an appropriate SBML file and mapping/hybridization tables.

I don't see much benefit from including measurements in this PR, so would prefer to remove that code, unless the current PR directly addresses a very popular use case for neural ODEs.

@sebapersson (Collaborator)

For me it would make sense to have a function that takes the PEtab SciML NNModel and a list of species IDs, then generates an appropriate SBML file and mapping/hybridization tables.

Good point. The argument for including measurements here is to create a complete PEtab problem (including the PEtab problem YAML), reducing the work for users overall. Might it be worthwhile to go with both: one function for building the SBML, mapping, and hybridization tables, and then a function built on it that, given measurements (and other tables), can build the full problem?

@dilpath (Member)

dilpath commented Oct 21, 2025

For me it would make sense to have a function that takes the PEtab SciML NNModel and a list of species IDs, then generates an appropriate SBML file and mapping/hybridization tables.

Good point. The argument for including measurements here is to create a complete PEtab problem (including the PEtab problem YAML), reducing the work for users overall. Might it be worthwhile to go with both: one function for building the SBML, mapping, and hybridization tables, and then a function built on it that, given measurements (and other tables), can build the full problem?

Sounds good! I think it would be useful because this function would generate all of the PEtab SciML-specific files based on just the PyTorch model and a list of species. Then, if a user already knows PEtab, they can do the rest, or else use the measurements function.

@BSnelling (Collaborator, Author)

Thank you both for the reviews! The two-function approach makes sense to me: one for the model, network, hybridization, and mapping, and a second for measurements and other tables. I will implement that and come back for re-review.
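A rough sketch of how the two agreed functions could relate. All function names, file names, and the returned summary are hypothetical placeholders, not the final API; in a real implementation the files would be written to disk rather than merely listed:

```python
def generate_sciml_files(nn_model_file: str, species: list[str]) -> dict:
    """First function: only the PEtab SciML-specific files
    (SBML model plus mapping and hybridization tables)."""
    # Placeholder: list the files that would be generated.
    return {
        "model": "model.xml",
        "mapping": "mapping.tsv",
        "hybridization": "hybridization.tsv",
        "network": nn_model_file,
        "species": species,
    }


def generate_full_problem(
    nn_model_file: str, species: list[str], measurements_file: str
) -> dict:
    """Second function: build on the first to produce a complete problem,
    adding the measurements table and the problem YAML."""
    files = generate_sciml_files(nn_model_file, species)
    files["measurements"] = measurements_file
    files["yaml"] = "problem.yaml"
    return files
```

Experienced PEtab users would call only the first function and assemble the rest themselves; newcomers would call the second for a complete problem.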
