
Commit 12aaeb5

Authored by stefanradev93, vpratz, han-ol, daniel-habermann, and arrjon
Version 2.0.7 (#564)
* fix trainable parameters in distributions (#520)
* improve numerical precision in MVNScore.log_prob
* add log_gamma diagnostic (#522)
  * add missing exports for log_gamma, gamma_null_distribution, and gamma_discrepancy
  * fix broken unit tests
  * rename the log_gamma module to sbc
  * add a test_log_gamma unit test
  * add return information to the log_gamma docstring
  * fix a typo in the docstring; collect log_gammas in a fixed-length np array instead of appending to an empty list
* Breaking changes: fix bugs regarding counts in the standardization layer (#525)
  * standardization: add a (failing) test for multi-input values. The test reveals two bugs in the standardization layer: count is updated multiple times, and batch_count is too small because the sizes from reduce_axes have to be multiplied.
  * breaking: fix the count bugs in the standardization layer (fixes #524). This fixes the two bugs described in c4cc133: count was accidentally updated, leading to wrong values, and count was calculated wrongly because only the batch size was used; the correct value is the product of all reduce dimensions. This led to wrong standard deviations. While the batch dimension is the same for all inputs, the size of the second dimension might vary, so an input-specific `count` variable is introduced. This breaks serialization.
  * fix an assert statement in the test
* rename log_gamma to calibration_log_gamma (#527)
* simple fix
* Hotfix: numerical stability of the non-log-stabilized sinkhorn plan (#531) (co-authored by Daniel Habermann)
  * fix numerical stability issues in sinkhorn_plan and improve the test suite
  * fix the ultra-strict convergence criterion in log_sinkhorn_plan
  * update dependencies and add a comment about the convergence check
  * update the docstring to reflect the fixes
  * sinkhorn_plan now returns a transport plan with uniform marginal distributions (with a unit test)
  * fix the sinkhorn function by sampling from the logits of the transpose of the plan instead of the plan directly
  * sinkhorn(x1, x2) now samples from log(plan) to obtain assignments such that x2[assignments] matches x1
  * re-enable test_assignment_is_optimal() for method='sinkhorn'
  * log_sinkhorn now correctly uses log_plan instead of keras.ops.exp(log_plan); log_sinkhorn_plan returns the logits of the transport plan (with unit tests)
  * fix faulty indexing with a tensor for the tensorflow backend
  * re-add numItermax for the ot pot test
* isinstance sequence
* Pass the correct training stage in compute_metrics (#534)
  * pass the correct training stage in CouplingFlow.compute_metrics
  * pass the correct training stage in CIF and PointInferenceNetwork
* Custom test quantity support for calibration_ecdf (#528) (co-authored by stefanradev93)
  * rename a variable [no ci]
  * consistent defaults for variable_keys/names in calibration_ecdf with test quantities
  * tests for calibration_ecdf with test_quantities
  * remove redundant comments, simplify others, and fix docstrings and type hints
* Log gamma test fix (#535)
  * fix the test_calibration_log_gamma_end_to_end unit test failing more often than expected; set alpha to 0.1% in binom.ppf
  * fix a typo in a comment
* Stateless adapters (#536) (co-authored by Lasse Elsemüller and Valentin Pratz)
  * remove stateful adapter features and remove nnpe from the adapter
  * fix tests and a typo; bring back notes [skip ci]
  * remove the unnecessary restriction to kwargs only and the old super call [skip ci]
  * robustify a type [skip ci]
  * remove standardize from the multimodal sim notebook [no ci]
  * add a draft module docstring to the augmentations module [no ci] (feel free to modify)
  * adapt and run the neurocognitive modeling notebook [no ci]
  * adapt the cCM playground notebook [no ci]
  * adapt the signature of Adapter.standardize and add parameters missed in a previous commit
  * minor NNPE polishing
  * remove the stage argument from the OnlineDataset docstring
* Fix training strategies in BasicWorkflow
* move the multimodal data notebook to the regular examples [no ci]
* make the pip install call on the homepage more verbose [no ci]
* remove the deprecated summaries function (renamed to summarize in v2.0.4)
* detail the subsampling behavior in the SIR simulator docs [no ci] (fixes #518)
* move DiffusionModel from experimental to networks. This stabilizes the DiffusionModel class; a deprecation warning was added for the DiffusionModel class in the experimental module.
* Add a citation for ResNet (#537) [no ci], with minor formatting (co-authored by Valentin Pratz)
* Bump up version [skip ci]
* Allow separate inputs to subnets for continuous models (#521) (co-authored by Valentin Pratz). Introduces easy access to the different inputs x, t, and conditions to allow specialized processing of each input, which can be beneficial for more advanced use cases.
* Auto-select backend (#543) (co-authored by Copilot and stefanradev93)
  * add automatic backend detection and selection, with a priority ordering of backends
  * fix a typo
* Breaking: parameterize MVNormalScore by the inverse Cholesky factor to improve stability (#545) (co-authored by han-ol)
  * breaking: the log_prob can be computed entirely from the inverse Cholesky factor L^{-1}, which stabilizes the initial loss and speeds up computation (see the worked sketch after this list). The commit also contains two optimizations: moving the computation of the precision matrix into the einsum, and using the sum of logs instead of the log of a product. As the parameterization changes, this is a breaking change.
  * add right_side_scale_inverse and a test [no ci]. The transformation needed to undo standardization for a Cholesky factor of the precision matrix is x_ij = x_ij' / sigma_j, now implemented by a right_side_scale_inverse transformation_type.
  * stop skipping the MVN tests
  * remove a stray keyword argument in fill_triangular_matrix
  * rename cov_chol_inv to precision_cholesky_factor and cov_chol to covariance_cholesky_factor, and update docstrings [no ci]
  * remove the check_approximator_multivariate_normal_score function [no ci]
* fix unconditional sampling in ContinuousApproximator (#548): the batch shape was calculated from inference_conditions even when they are known to be None; add an approximator test for the unconditional setting
* Test quantities in the Linear Regression Starter notebook (#544) (co-authored by Paul-Christian Bürkner)
  * implement a log-likelihood test quantity for SBC in the starter notebook
  * update the data-dependent test-quantities example and fix small typos
* fix: the optimizer was not used in a workflow with multiple fits. For the optimizer to be used, approximator.compile has to be called, which was not the case. The `setup_optimizer` function was adapted to match the description in its docstring, and compilation was made conditional on its output, which indicates whether a new optimizer was configured.
* fix: remove an extra deserialize call for SummaryNetwork. The extra call leads to the DTypePolicy being deserialized; it is then passed as a class and cannot be handled by autoconf, causing the error discussed in #549.
* Compatibility: deserialize when get_config was overridden
* unify the log_prob signature in PointApproximator [no ci]: ContinuousApproximator and BasicWorkflow allow passing the data positionally, so the same is now allowed for PointApproximator.
* Tutorial on spatial data with Gaussian Random Fields (#540) [no ci]. The tutorial uses the experimental ResNet class to build a summary network for spatial data.
* Support non-array data in test_quantity calibration_ecdf [no ci]. Simulator outputs may be of type int or float and consequently have no batch dimension, which must be considered when broadcasting inference_conditions for data-based SBC test quantities. "examples/Linear_Regression_Starter.ipynb" contains an example where this is necessary (N is a non-batchable integer).
* import calibration_log_gamma in the diagnostics namespace [no ci]
* Add a wrapper around scipy.integrate.solve_ivp for integration
* minor fixes and improvements to the pairs plot functions: pass the target color to the legend; do not use a common norm, so that the prior stays visible in KDE plots; do not share y on the diagonal, so that all marginal distributions stay visible even if one is very peaked
* fix: layers were not deserialized for Sequential and Residual. Because layers were passed with the `*layers` syntax, they could not be passed as keyword arguments; `from_config`, however, attempted exactly that, so the layers were ignored during deserialization. The fix takes the layers from `kwargs` when they are passed as a keyword argument. Serialization tests for Sequential and Residual were added.
* Fix: ensure the checkpoint filepath exists before training. Previously, choosing a non-existent directory as checkpoint_filepath led to silently not saving at all.
* Revert 954c16c since it was unnecessary: the alleged issue didn't exist, and checkpoint folders are already created automatically by the keras callback. (I misread tests on this and didn't catch that the problem I was seeing was caused by a different part of my pipeline.)
* improvements to diagnostic plots (#556): add a markersize parameter, add tests, and support dataset_id for pairs_samples (fixes #554); simplify test_calibration_ecdf_from_quantiles
* Add a pairs plot for arbitrary quantities (#550): add pairs_quantity and plot_quantity functions for plotting quantities that can be calculated per individual dataset. For the provided metrics this is currently only useful for posterior contraction, but it could also be useful for the posterior z-score and other quantities.
* minor fix in the diffusion EDM schedule (#560)
* DeepSet: adapt the output dimension of the invariant module inside the equivariant module (#557, #561). The DeepSet showed bad performance and was not able to learn diverse summary statistics. Reducing the output dimension of the invariant module inside the equivariant module improves this, probably because the individual information of each set member gains importance relative to the shared information provided by the invariant module. There might be better settings, so the default may be updated later, but this is already an improvement over the previous setting. The DeepSet docstring was adapted to reflect the code.
* pairs_posterior: fix an inconsistent type hint (#562); allow the exploding-variance type in the EDM schedule
* Bump up version [skip ci]
* Fix instructions for the backend spec [skip ci]
* Add new Flow Matching schedules (#565) (co-authored by stefanradev93): add the schedules with comments, expose time_power_law_alpha, and improve the docs [skip ci]
* change the default integration method to rk45 for DiffusionModel and FlowMatching. Euler shows significant deviations when computing the log-prob, which risks misleading users about the performance of the networks; rk45 is slower, but the problem is heavily reduced with this method.
* fix the nan_to_num inverse
* fix setting markersize in the Lotka-Volterra notebook
* fix: actually set KERAS_BACKEND to the chosen backend, and warn if KERAS_BACKEND and the actually loaded backend do not match (this can happen if keras is imported before BayesFlow); fix the warning message

Co-authored-by: Valentin Pratz <[email protected]>
Co-authored-by: han-ol <[email protected]>
Co-authored-by: Daniel Habermann <[email protected]>
Co-authored-by: arrjon <[email protected]>
Co-authored-by: Lars <[email protected]>
Co-authored-by: Hans Olischläger <[email protected]>
Co-authored-by: Lasse Elsemüller <[email protected]>
Co-authored-by: Leona Odole <[email protected]>
Co-authored-by: Jonas Arruda <[email protected]>
Co-authored-by: Copilot <[email protected]>
Co-authored-by: Paul-Christian Bürkner <[email protected]>
Co-authored-by: The-Gia Leo Nguyen <[email protected]>
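For readers unfamiliar with the identity behind the MVNormalScore change above: with the inverse Cholesky factor L⁻¹ of the covariance (so that Σ⁻¹ = L⁻ᵀ L⁻¹), the MVN log-density needs no explicit matrix inversion or determinant. The snippet below is an illustrative NumPy/SciPy check of that identity, not the BayesFlow implementation; all names are local to the example.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
d = 3
mu = rng.normal(size=d)
A = rng.normal(size=(d, d))
cov = A @ A.T + d * np.eye(d)       # SPD covariance
L = np.linalg.cholesky(cov)         # covariance Cholesky factor (cov = L @ L.T)
L_inv = np.linalg.inv(L)            # inverse Cholesky factor, the new parameterization

x = rng.normal(size=d)
z = L_inv @ (x - mu)                # whitened residual
log_prob = (
    -0.5 * d * np.log(2.0 * np.pi)
    + np.sum(np.log(np.diag(L_inv)))  # sum of logs instead of log of a product
    - 0.5 * z @ z                     # quadratic form without forming the precision matrix
)

assert np.isclose(log_prob, multivariate_normal(mu, cov).logpdf(x))
```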
1 parent cd75aef · commit 12aaeb5


54 files changed (+2425, −281 lines)

README.md

Lines changed: 1 addition & 3 deletions
@@ -76,7 +76,7 @@ Note that BayesFlow **will not run** without a backend.
 
 If you don't know which backend to use, we recommend JAX as it is currently the fastest backend.
 
-Once installed, [set the backend environment variable as required by keras](https://keras.io/getting_started/#configuring-your-backend).
+As of version ``2.0.7``, the backend will be set automatically. If you have multiple backends, you can manually [set the backend environment variable as described by keras](https://keras.io/getting_started/#configuring-your-backend).
 For example, inside your Python script write:
 
 ```python
@@ -97,8 +97,6 @@ Or just plainly set the environment variable in your shell:
 export KERAS_BACKEND=jax
 ```
 
-This way, you also don't have to manually set the backend every time you are starting Python to use BayesFlow.
-
 ## Getting Started
 
 Using the high-level interface is easy, as demonstrated by the minimal working example below:
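For context, the manual override the updated README points to amounts to setting the environment variable before the first import. A minimal sketch, assuming JAX is the backend you want:

```python
import os

# Must run before bayesflow (or keras) is imported; afterwards the backend is fixed.
os.environ["KERAS_BACKEND"] = "jax"

import bayesflow as bf  # noqa: E402,F401
```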

bayesflow/__init__.py

Lines changed: 80 additions & 21 deletions
@@ -1,29 +1,12 @@
-from . import (
-    approximators,
-    adapters,
-    augmentations,
-    datasets,
-    diagnostics,
-    distributions,
-    experimental,
-    networks,
-    simulators,
-    utils,
-    workflows,
-    wrappers,
-)
-
-from .adapters import Adapter
-from .approximators import ContinuousApproximator, PointApproximator
-from .datasets import OfflineDataset, OnlineDataset, DiskDataset
-from .simulators import make_simulator
-from .workflows import BasicWorkflow
+# ruff: noqa: E402
+# disable E402 to allow for setup code before importing any internals (which could import keras)
 
 
 def setup():
     # perform any necessary setup without polluting the namespace
-    import keras
+    import os
     import logging
+    from importlib.util import find_spec
 
     # set the basic logging level if the user hasn't already
     logging.basicConfig(level=logging.INFO)
@@ -32,8 +15,63 @@ def setup():
     logger = logging.getLogger(__name__)
     logger.setLevel(logging.INFO)
 
+    issue_url = "https://github.com/bayesflow-org/bayesflow/issues/new?template=bug_report.md"
+
+    if "KERAS_BACKEND" not in os.environ:
+        # check for available backends and automatically set the KERAS_BACKEND env variable or raise an error
+        class Backend:
+            def __init__(self, display_name, package_name, env_name, install_url, priority):
+                self.display_name = display_name
+                self.package_name = package_name
+                self.env_name = env_name
+                self.install_url = install_url
+                self.priority = priority
+
+        backends = [
+            Backend("JAX", "jax", "jax", "https://docs.jax.dev/en/latest/quickstart.html#installation", 0),
+            Backend("PyTorch", "torch", "torch", "https://pytorch.org/get-started/locally/", 1),
+            Backend("TensorFlow", "tensorflow", "tensorflow", "https://www.tensorflow.org/install", 2),
+        ]
+
+        found_backends = []
+        for backend in backends:
+            if find_spec(backend.package_name) is not None:
+                found_backends.append(backend)
+
+        if not found_backends:
+            message = "No suitable backend found. Please install one of the following:\n"
+            for backend in backends:
+                message += f"{backend.display_name}\n"
+            message += "\n"
+
+            message += f"If you continue to see this error, please file a bug report at {issue_url}.\n"
+            message += (
+                "You can manually select a backend by setting the KERAS_BACKEND environment variable as shown below:\n"
+            )
+            message += "https://keras.io/getting_started/#configuring-your-backend"
+
+            raise ImportError(message)
+
+        if len(found_backends) > 1:
+            found_backends.sort(key=lambda b: b.priority)
+            chosen_backend = found_backends[0]
+            os.environ["KERAS_BACKEND"] = chosen_backend.env_name
+
+            logging.warning(
+                f"Multiple Keras-compatible backends detected ({', '.join(b.display_name for b in found_backends)}).\n"
+                f"Defaulting to {chosen_backend.display_name}.\n"
+                "To override, set the KERAS_BACKEND environment variable before importing bayesflow.\n"
+                "See: https://keras.io/getting_started/#configuring-your-backend"
+            )
+        else:
+            os.environ["KERAS_BACKEND"] = found_backends[0].env_name
+
+    import keras
     from bayesflow.utils import logging
 
+    if keras.backend.backend().lower() != os.environ["KERAS_BACKEND"].lower():
+        logging.warning("Automatic backend selection failed, most likely because Keras was imported before BayesFlow.")
+
     logging.info(f"Using backend {keras.backend.backend()!r}")
 
     if keras.backend.backend() == "torch":
@@ -60,3 +98,24 @@ def setup():
 # call and clean up namespace
 setup()
 del setup
+
+from . import (
+    approximators,
+    adapters,
+    augmentations,
+    datasets,
+    diagnostics,
+    distributions,
+    experimental,
+    networks,
+    simulators,
+    utils,
+    workflows,
+    wrappers,
+)
+
+from .adapters import Adapter
+from .approximators import ContinuousApproximator, PointApproximator
+from .datasets import OfflineDataset, OnlineDataset, DiskDataset
+from .simulators import make_simulator
+from .workflows import BasicWorkflow
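A short, hedged illustration of the behavior this change introduces: with no KERAS_BACKEND set, importing bayesflow picks the highest-priority installed backend (JAX, then PyTorch, then TensorFlow) and exports the variable; if Keras was imported first, a warning is logged instead.

```python
import os

# Assumes KERAS_BACKEND is unset and at least one supported backend is installed.
import bayesflow as bf  # noqa: F401  # triggers the auto-selection in setup()
import keras

print(os.environ["KERAS_BACKEND"])  # e.g. "jax"
print(keras.backend.backend())      # matches the line above, otherwise a warning was logged
```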

bayesflow/adapters/transforms/nan_to_num.py

Lines changed: 2 additions & 0 deletions
@@ -80,6 +80,8 @@ def inverse(self, data: dict[str, any], **kwargs) -> dict[str, any]:
         data = data.copy()
 
         # Retrieve mask and values to reconstruct NaNs
+        if self.key not in data.keys():
+            return data
         values = data[self.key]
 
         if not self.return_mask:
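The intent of the guard, sketched outside the actual class (the real transform also restores NaNs from a stored mask; this toy function only shows the early return):

```python
def inverse(data: dict, key: str) -> dict:
    # Mirror of the new guard: if the configured key is absent, pass the data
    # through unchanged instead of raising a KeyError on data[key].
    data = data.copy()
    if key not in data:
        return data
    # ... reconstruct NaNs for data[key] from the stored mask here ...
    return data


print(inverse({"y": [1.0, 2.0]}, key="x"))  # {'y': [1.0, 2.0]}
```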

bayesflow/approximators/continuous_approximator.py

Lines changed: 1 addition & 1 deletion
@@ -537,7 +537,7 @@ def _sample(
             )
             batch_shape = keras.ops.shape(inference_conditions)[:-1]
         else:
-            batch_shape = keras.ops.shape(inference_conditions)[1:-1]
+            batch_shape = (num_samples,)
 
         return self.inference_network.sample(
             batch_shape, conditions=inference_conditions, **filter_kwargs(kwargs, self.inference_network.sample)
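A minimal sketch of the shape logic the fix targets: when conditions exist, the batch shape is read off the conditions tensor; when sampling unconditionally there is nothing to read a shape from, so the requested number of samples is used directly (variable names here are illustrative):

```python
import keras

num_samples = 500

# Conditional case: batch shape follows the conditions tensor (all axes but the last).
inference_conditions = keras.ops.ones((128, num_samples, 4))
batch_shape = keras.ops.shape(inference_conditions)[:-1]  # (128, 500)

# Unconditional case (inference_conditions is None): use the requested sample count.
batch_shape = (num_samples,)
```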

bayesflow/approximators/point_approximator.py

Lines changed: 1 addition & 6 deletions
@@ -143,12 +143,7 @@ def sample(
 
         return samples
 
-    def log_prob(
-        self,
-        *,
-        data: Mapping[str, np.ndarray],
-        **kwargs,
-    ) -> np.ndarray | dict[str, np.ndarray]:
+    def log_prob(self, data: Mapping[str, np.ndarray], **kwargs) -> np.ndarray | dict[str, np.ndarray]:
         """
         Computes the log-probability of given data under the parametric distribution(s) for given input conditions.
 
bayesflow/diagnostics/__init__.py

Lines changed: 3 additions & 0 deletions
@@ -5,6 +5,7 @@
 from .metrics import (
     bootstrap_comparison,
     calibration_error,
+    calibration_log_gamma,
     posterior_contraction,
     summary_space_comparison,
 )
@@ -18,7 +19,9 @@
     mc_confusion_matrix,
     mmd_hypothesis_test,
     pairs_posterior,
+    pairs_quantity,
     pairs_samples,
+    plot_quantity,
     recovery,
     recovery_from_estimates,
     z_score_contraction,
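With these exports in place, the new diagnostics should be importable directly from the diagnostics namespace, for example:

```python
from bayesflow.diagnostics import calibration_log_gamma, pairs_quantity, plot_quantity
```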

bayesflow/diagnostics/metrics/posterior_contraction.py

Lines changed: 6 additions & 4 deletions
@@ -10,7 +10,7 @@ def posterior_contraction(
     targets: Mapping[str, np.ndarray] | np.ndarray,
     variable_keys: Sequence[str] = None,
     variable_names: Sequence[str] = None,
-    aggregation: Callable = np.median,
+    aggregation: Callable | None = np.median,
 ) -> dict[str, any]:
     """
     Computes the posterior contraction (PC) from prior to posterior for the given samples.
@@ -27,16 +27,17 @@ def posterior_contraction(
         By default, select all keys.
     variable_names : Sequence[str], optional (default = None)
         Optional variable names to show in the output.
-    aggregation : callable, optional (default = np.median)
+    aggregation : callable or None, optional (default = np.median)
         Function to aggregate the PC across draws. Typically `np.mean` or `np.median`.
+        If None is provided, the individual values are returned.
 
     Returns
     -------
     result : dict
         Dictionary containing:
 
         - "values" : float or np.ndarray
-            The aggregated posterior contraction per variable
+            The (optionally aggregated) posterior contraction per variable
         - "metric_name" : str
            The name of the metric ("Posterior Contraction").
         - "variable_names" : str
@@ -59,6 +60,7 @@ def posterior_contraction(
     post_vars = samples["estimates"].var(axis=1, ddof=1)
     prior_vars = samples["targets"].var(axis=0, keepdims=True, ddof=1)
     contraction = np.clip(1 - (post_vars / prior_vars), 0, 1)
-    contraction = aggregation(contraction, axis=0)
+    if aggregation is not None:
+        contraction = aggregation(contraction, axis=0)
     variable_names = samples["estimates"].variable_names
     return {"values": contraction, "metric_name": "Posterior Contraction", "variable_names": variable_names}

bayesflow/diagnostics/plots/__init__.py

Lines changed: 2 additions & 0 deletions
@@ -6,6 +6,8 @@
 from .mc_confusion_matrix import mc_confusion_matrix
 from .mmd_hypothesis_test import mmd_hypothesis_test
 from .pairs_posterior import pairs_posterior
+from .pairs_quantity import pairs_quantity
+from .plot_quantity import plot_quantity
 from .pairs_samples import pairs_samples
 from .recovery import recovery
 from .recovery_from_estimates import recovery_from_estimates

bayesflow/diagnostics/plots/calibration_ecdf.py

Lines changed: 12 additions & 28 deletions
@@ -1,9 +1,9 @@
 from collections.abc import Callable, Mapping, Sequence
 
 import numpy as np
-import keras
 import matplotlib.pyplot as plt
 
+from ...utils.dict_utils import compute_test_quantities
 from ...utils.plot_utils import prepare_plot_data, add_titles_and_labels, prettify_subplots
 from ...utils.ecdf import simultaneous_ecdf_bands
 from ...utils.ecdf.ranks import fractional_ranks, distance_ranks
@@ -136,33 +136,17 @@ def calibration_ecdf(
 
     # Optionally, compute and prepend test quantities from draws
     if test_quantities is not None:
-        test_quantities_estimates = {}
-        test_quantities_targets = {}
-
-        for key, test_quantity_fn in test_quantities.items():
-            # Apply test_quantity_func to ground-truths
-            tq_targets = test_quantity_fn(data=targets)
-            test_quantities_targets[key] = np.expand_dims(tq_targets, axis=1)
-
-            # Flatten estimates for batch processing in test_quantity_fn, apply function, and restore shape
-            num_conditions, num_samples = next(iter(estimates.values())).shape[:2]
-            flattened_estimates = keras.tree.map_structure(lambda t: np.reshape(t, (-1, *t.shape[2:])), estimates)
-            flat_tq_estimates = test_quantity_fn(data=flattened_estimates)
-            test_quantities_estimates[key] = np.reshape(flat_tq_estimates, (num_conditions, num_samples, 1))
-
-        # Add custom test quantities to variable keys and names for plotting
-        # keys and names are set to the test_quantities dict keys
-        test_quantities_names = list(test_quantities.keys())
-
-        if variable_keys is None:
-            variable_keys = list(estimates.keys())
-
-        if isinstance(variable_names, list):
-            variable_names = test_quantities_names + variable_names
-
-        variable_keys = test_quantities_names + variable_keys
-        estimates = test_quantities_estimates | estimates
-        targets = test_quantities_targets | targets
+        updated_data = compute_test_quantities(
+            targets=targets,
+            estimates=estimates,
+            variable_keys=variable_keys,
+            variable_names=variable_names,
+            test_quantities=test_quantities,
+        )
+        variable_names = updated_data["variable_names"]
+        variable_keys = updated_data["variable_keys"]
+        estimates = updated_data["estimates"]
+        targets = updated_data["targets"]
 
     plot_data = prepare_plot_data(
         estimates=estimates,
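A hedged end-to-end sketch of the custom test-quantity hook with synthetic data: each function receives a dict of arrays via `data=` and returns one value per row, and the resulting quantity is prepended to the plotted variables (keys and shapes here are assumptions for illustration, not taken from the repository):

```python
import numpy as np
import bayesflow as bf

rng = np.random.default_rng(1)
num_datasets, num_draws = 64, 200

estimates = {"beta": rng.normal(size=(num_datasets, num_draws, 2))}
targets = {"beta": rng.normal(size=(num_datasets, 2))}

# One scalar per row: works on targets (datasets, 2) and on flattened draws (datasets * draws, 2)
test_quantities = {"beta sum": lambda data: data["beta"].sum(axis=-1)}

fig = bf.diagnostics.calibration_ecdf(
    estimates=estimates,
    targets=targets,
    test_quantities=test_quantities,
)
```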

bayesflow/diagnostics/plots/calibration_ecdf_from_quantiles.py

Lines changed: 10 additions & 3 deletions
@@ -26,6 +26,7 @@ def calibration_ecdf_from_quantiles(
     fill_color: str = "grey",
     num_row: int = None,
     num_col: int = None,
+    markersize: float = None,
     **kwargs,
 ) -> plt.Figure:
     """
@@ -97,6 +98,8 @@ def calibration_ecdf_from_quantiles(
     num_col : int, optional, default: None
         The number of columns for the subplots.
         Dynamically determined if None.
+    markersize : float, optional, default: None
+        The marker size in points.
     **kwargs : dict, optional, default: {}
         Keyword arguments can be passed to control the behavior of
         ECDF simultaneous band computation through the ``ecdf_bands_kwargs``
@@ -142,11 +145,15 @@ def calibration_ecdf_from_quantiles(
 
         if stacked:
             if j == 0:
-                plot_data["axes"][0].plot(xx, yy, marker="o", color=rank_ecdf_color, alpha=0.95, label="Rank ECDFs")
+                plot_data["axes"][0].plot(
+                    xx, yy, marker="o", color=rank_ecdf_color, markersize=markersize, alpha=0.95, label="Rank ECDFs"
+                )
             else:
-                plot_data["axes"][0].plot(xx, yy, marker="o", color=rank_ecdf_color, alpha=0.95)
+                plot_data["axes"][0].plot(xx, yy, marker="o", color=rank_ecdf_color, markersize=markersize, alpha=0.95)
         else:
-            plot_data["axes"].flat[j].plot(xx, yy, marker="o", color=rank_ecdf_color, alpha=0.95, label="Rank ECDF")
+            plot_data["axes"].flat[j].plot(
+                xx, yy, marker="o", color=rank_ecdf_color, markersize=markersize, alpha=0.95, label="Rank ECDF"
+            )
 
         # Compute uniform ECDF and bands
         alpha, z, L, U = pointwise_ecdf_bands(estimates.shape[0], **kwargs.pop("ecdf_bands_kwargs", {}))
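Since `markersize` is simply forwarded to matplotlib's `plot()`, its effect can be previewed in isolation; a toy sketch unrelated to BayesFlow's internals:

```python
import numpy as np
import matplotlib.pyplot as plt

xx = np.linspace(0, 1, 50)
fig, ax = plt.subplots()
# Smaller markers help when many rank-ECDF points overlap.
ax.plot(xx, xx, marker="o", markersize=2, alpha=0.95, label="Rank ECDF")
ax.legend()
plt.show()
```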
