57 changes: 51 additions & 6 deletions colibri/doc/sphinx/source/available-models/linear-model.rst
@@ -1,8 +1,8 @@
.. _linear-model:

############
Linear Model
############

This model was presented in Ref. :cite:alp:`Costantini:2025wxp`.

@@ -14,15 +14,34 @@ What is this model for?
This model is especially suitable for :ref:`running Bayesian fits <in_running_bayesian>`.
It can be used to:

1. :ref:`Construct a Proper Orthogonal Decomposition (POD) basis <pod-basis>` (see Ref. :cite:alp:`Costantini:2025wxp` for details on what this is).
2. :ref:`Run Bayesian PDF Fits with the POD Parametrisation <pod-fits>`.


Model description
-----------------

This model parametrises PDFs as linear combinations of basis functions:

.. math::

f_w(x) = \phi_0(x) + \sum_{k=1}^{N} w_k \phi_k(x),

where :math:`w = (w_1, \ldots, w_N)` are the parameters to be inferred,
and :math:`\phi_k(x)` are carefully chosen basis functions, which in
practice are constructed by applying Proper Orthogonal Decomposition
(POD) to a set of samples of the randomly initialised n3fit neural
network.

For details on the motivation behind this choice of model, see Ref. :cite:alp:`Costantini:2025wxp`.
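As a minimal illustration of the parametrisation above (a sketch, not the wmin-model implementation), once the basis functions are tabulated on an x-grid the model is a single affine map from the weights to the PDF values:

```python
import numpy as np

def linear_model(w, phi):
    """Evaluate f_w(x) = phi_0(x) + sum_k w_k phi_k(x) on a grid.

    w   : array of shape (N,), the weights to be inferred
    phi : array of shape (N + 1, n_x), basis functions tabulated on an
          x-grid, with phi[0] playing the role of the fixed term phi_0
    """
    return phi[0] + w @ phi[1:]

# Toy example: N = 2 basis functions on a 3-point x-grid.
phi = np.array([[1.0, 1.0, 1.0],   # phi_0
                [0.0, 1.0, 2.0],   # phi_1
                [1.0, 0.0, 1.0]])  # phi_2
w = np.array([0.5, 2.0])
f = linear_model(w, phi)  # -> [3.0, 1.5, 4.0]
```

Because the map is linear in :math:`w`, gradients and (for linear data) the posterior are available in closed form, which is what makes this model attractive for Bayesian fits.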

How to use this model
---------------------

You can find installation instructions in the `model repository <https://github.com/HEP-PBSP/wmin-model>`_.

.. _pod-basis:

Constructing a POD basis
^^^^^^^^^^^^^^^^^^^^^^^^

@@ -84,5 +103,31 @@ shifted by running:
where the ``shift_lhapdf_members.py`` script can be found in the directory ``wmin-model/wmin/runcards``
and ``evolved_directory`` is the fit or POD basis directory that should have previously been evolved.
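Conceptually, a POD basis can be obtained from the singular value decomposition of a matrix of tabulated samples; the sketch below is a hedged illustration of that idea under simple assumptions (mean-centred samples, basis from the leading right-singular vectors), not the wmin-model code:

```python
import numpy as np

def pod_basis(samples, n_basis):
    """Build a rank-n_basis POD basis from M tabulated samples.

    samples : (M, n_x) array of PDF samples on an x-grid
    returns : (phi0, phi) where phi0 is the sample mean (zeroth basis
              function) and phi is an (n_basis, n_x) orthonormal basis
    """
    phi0 = samples.mean(axis=0)           # central/zeroth basis function
    fluct = samples - phi0                # fluctuations around the mean
    # Right-singular vectors ordered by decreasing singular value
    _, _, vt = np.linalg.svd(fluct, full_matrices=False)
    return phi0, vt[:n_basis]

samples = np.random.default_rng(1).normal(size=(100, 50))
phi0, phi = pod_basis(samples, 10)
# Rows of phi are orthonormal: phi @ phi.T is (close to) the identity.
```

Truncating to the leading ``n_basis`` singular vectors keeps the directions along which the sampled parametrisation varies most.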


.. _pod-fits:

Running fits
^^^^^^^^^^^^

You can follow Colibri's :ref:`analytic <running_analytic>` and :ref:`bayesian <in_running_bayesian>`
workflows to run fits with this model. A few model-specific points are worth noting, however.

:underline:`Analytic fits`

Analytic fits are only appropriate for linear models that also have a linear relationship with the
data; since DIS observables are linear in the PDFs while hadronic observables are quadratic, such
fits should be run with DIS data only.
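To see why an analytic fit exists in this case, note that for a model linear in :math:`w` and data linear in the model, a Gaussian likelihood gives a chi-squared that is quadratic in :math:`w`, whose minimum is available in closed form. The sketch below is a schematic illustration under these assumptions (all names are hypothetical, not Colibri's API):

```python
import numpy as np

def analytic_fit(T0, T, C, d):
    """Closed-form minimiser of chi2(w) = (d - T0 - T w)^T C^{-1} (d - T0 - T w):

        w_hat = (T^T C^{-1} T)^{-1} T^T C^{-1} (d - T0)
    """
    Cinv = np.linalg.inv(C)
    A = T.T @ Cinv @ T            # normal-equations matrix
    b = T.T @ Cinv @ (d - T0)
    return np.linalg.solve(A, b)

# Toy check: recover known weights from noiseless synthetic "data".
rng = np.random.default_rng(0)
T0 = rng.normal(size=5)           # predictions at w = 0
T = rng.normal(size=(5, 2))       # linear response of the data to w
w_true = np.array([1.5, -0.5])
d = T0 + T @ w_true
w_hat = analytic_fit(T0, T, np.eye(5), d)   # recovers w_true
```

For hadronic observables the forward map is quadratic in the PDFs, the chi-squared is no longer quadratic in :math:`w`, and no such closed form exists.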

``wmin_settings``
=================

This model has specific settings that need to be specified in the runcard in order to run a fit:

.. code-block:: yaml

   wmin_settings:
     wminpdfset: 250503_pod_basis_40k
     n_basis: 10 # number of parameters/weights to be fitted

* ``wminpdfset`` is the POD basis set you should have constructed before running a fit.
* ``n_basis`` is the number of parameters or *weights* to be fitted (minimised). It should be less than or equal to the number of replicas in ``wminpdfset``.
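As a quick sanity check before launching a fit, the constraint on ``n_basis`` can be verified up front (a minimal sketch; the function name is hypothetical and you would supply the replica count of your ``wminpdfset``):

```python
def check_wmin_settings(n_basis, n_replicas):
    """Raise if n_basis is not compatible with the POD basis set size."""
    # n_basis counts the fitted weights; it cannot exceed the number of
    # replicas (basis functions) available in wminpdfset.
    if not 1 <= n_basis <= n_replicas:
        raise ValueError(
            f"n_basis={n_basis} must be between 1 and n_replicas={n_replicas}"
        )

check_wmin_settings(10, 40)  # fine for a 40-replica POD basis
```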
2 changes: 1 addition & 1 deletion colibri/doc/sphinx/source/get-started/installation.rst
@@ -77,7 +77,7 @@ To verify that the installation went through, run:
3. GPU (CUDA) JAX Support
-------------------------

The installation instructions shown above will install JAX in CPU mode. It is also
possible, however, to run Colibri fits with GPU (CUDA) support. You will first have
to install the package following one of the methods described above. Then, if you
are on a Linux machine, you can install JAX in CUDA mode by running:
@@ -136,9 +136,9 @@ executable.
^^^^^^^^^^^^^^^^^^^^^^
These settings control the method of gradient descent. You can use any
of the Optax optimizers and settings, which you can read more about
`here <https://optax.readthedocs.io/en/latest/api/optimizers.html#>`__.
Learning schedulers are also supported, and you can find the available options
`here <https://optax.readthedocs.io/en/latest/api/optimizer_schedules.html#>`__.

``param_initialiser_settings``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^