Exploratory experiment class refactor, focussing on InterruptedTimeSeries
#524
At the moment this is a bit of an experiment. I'm trying out a number of different ideas for refactoring of the experiment class. Just to test out the idea I'm focussing on the `InterruptedTimeSeries` class. Main things I've done are:

1. Moved the core estimation logic out of `__init__` into the `algorithm` method. This is not only more pythonic, but it also gives us a very nice and mostly readable method that captures the core logic of this quasi-experimental method.
2. Moved data preparation out of `__init__` into the `_build_data` method. This increases modularity and testability, and tidies things up.
3. Data is stored in `self.data`, which is an `xarray.Dataset`. This keeps things tidy but also aids discoverability of the information that people want.
4. `__init__` is nice and minimal. We still automatically trigger the model fitting by calling `self.algorithm`, but there is the potential to not do this if we want to enable a more traditional Bayesian workflow where we build a model and do prior/prior predictive checks before fitting it. I'm not doing that in this refactor, though, because it's a major workflow/API change.
5. Results are stored as xarray objects, `self.impact` for example, which has a `period` dimension. So if we want the post-intervention impact, we can get it with `result.impact.sel(period="post")`. Mostly this will be invisible to the user, but for those doing manual interrogation of results there might be slight API changes to document in the notebooks. I'm not wedded to this, and we could always have temporary accessor properties to replicate previous behaviour, which we could then deprecate.
6. Plotting:
   a. I've separated computation/processing of results from the plotting. So we have `get_plot_data_bayesian` and `get_plot_data_ols`, which both return data frames. The plot functions now only ingest these data frames.
   b. We now have just one `plot` method, which deals with Bayesian vs OLS models with conditional logic. The motivation for that was to avoid massive duplication, because the plots for each were so similar.
   c. What I have not yet done is make the plot function ingest only the raw dataframe. At the moment it still gets a bunch of `self` attributes, but it would probably be better for the plot functions to just operate on data objects. I think the next step here would be to make this data an `xarray.Dataset` rather than a dataframe for greater flexibility (i.e. you can add metadata), and it also comes with some good save/load functionality from xarray. This plot refactoring is inspired by what seems to work quite well on some client projects.

📚 Documentation preview 📚: https://causalpy--524.org.readthedocs.build/en/524/
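To make the `period`-dimension idea concrete, here is a tiny stand-in for the refactored result's `impact` variable. The shapes, dimension names, and values are made up for illustration and are not the actual CausalPy internals:

```python
import numpy as np
import xarray as xr

# Illustrative only: 2 periods x 2 posterior samples x 3 observations,
# with an explicit, labelled `period` dimension.
impact = xr.DataArray(
    np.arange(12, dtype=float).reshape(2, 2, 3),
    dims=("period", "sample", "obs_ind"),
    coords={"period": ["pre", "post"]},
    name="impact",
)

# Label-based selection on the `period` dimension returns the
# post-intervention impact and drops that dimension.
post_impact = impact.sel(period="post")
print(post_impact.shape)  # (2, 3)
```

Because `period` is an indexed dimension coordinate, `.sel(period="post")` works directly, which is what makes the accessor-property fallback easy to layer on top later if needed.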
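A minimal sketch of the "one `plot` method with conditional logic" idea from 6a/6b. The function signatures, column names, and input format here are hypothetical, not the real CausalPy API; the point is only the shape of the dispatch:

```python
import pandas as pd


def get_plot_data_bayesian(result) -> pd.DataFrame:
    # Hypothetical: summarise posterior output into mean and HDI columns.
    return pd.DataFrame(
        {"t": result["t"], "mean": result["mean"],
         "hdi_lower": result["lower"], "hdi_upper": result["upper"]}
    )


def get_plot_data_ols(result) -> pd.DataFrame:
    # Hypothetical: point estimates and confidence intervals from OLS.
    return pd.DataFrame(
        {"t": result["t"], "mean": result["fit"],
         "ci_lower": result["lower"], "ci_upper": result["upper"]}
    )


def plot(result, model_type: str) -> pd.DataFrame:
    # One entry point; conditional logic picks the right data-prep step.
    if model_type == "bayesian":
        plot_df = get_plot_data_bayesian(result)
    else:
        plot_df = get_plot_data_ols(result)
    # A real implementation would now draw plot_df with matplotlib;
    # returning the frame keeps this sketch testable without a display.
    return plot_df


df = plot(
    {"t": [0, 1], "mean": [0.1, 0.2], "lower": [0.0, 0.1], "upper": [0.2, 0.3]},
    model_type="bayesian",
)
print(list(df.columns))  # ['t', 'mean', 'hdi_lower', 'hdi_upper']
```

Keeping the data-prep functions pure (frame in, frame out) is what makes them independently testable, which is the payoff of separating computation from drawing.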
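Point 6c mentions carrying plot data as an `xarray.Dataset` so metadata can travel with it. A small illustration of that idea, with made-up names, using `to_dict`/`from_dict` as a stand-in for whatever save/load route the project settles on:

```python
import numpy as np
import xarray as xr

# Hypothetical plot data as a Dataset instead of a DataFrame:
# metadata rides along in .attrs rather than in a separate object.
plot_data = xr.Dataset(
    {"mean": ("obs_ind", np.array([0.1, 0.2, 0.3]))},
    attrs={"model": "ols", "treatment_time": 1},
)

# xarray's serialisation helpers give a simple round-trip; to_netcdf /
# open_dataset would be the on-disk equivalent.
roundtrip = xr.Dataset.from_dict(plot_data.to_dict())
print(roundtrip.attrs["model"])  # ols
```

The attrs-based metadata is the "greater flexibility" argument in miniature: a DataFrame can carry the numbers, but the Dataset also carries the context needed to plot them.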