@@ -15,7 +15,7 @@ See :ref:`support_chapter`.
Why did my script break when upgrading from lmfit 0.8.3 to 0.9.0?
=================================================================

- See :ref:`whatsnew_090_label`
+ See :ref:`whatsnew_090_label`.
I get import errors from IPython
@@ -31,7 +31,6 @@ then you need to install the ``ipywidgets`` package, try: ``pip install ipywidge
-
How can I fit multi-dimensional data?
=====================================
@@ -91,14 +90,6 @@ is that you also get access to the plot routines from the ModelResult
class, which are also complex-aware.
- Can I constrain values to have integer values?
- ==============================================
-
- Basically, no. None of the minimizers in lmfit support integer
- programming. They all (I think) assume that they can make a very small
- change to a floating point value for a parameters value and see a change in
- the value to be minimized.
-
How should I cite LMFIT?
========================
@@ -169,3 +160,159 @@ fit. However, unlike NaN, it is also usually clear how to handle Inf, as
you probably won't ever have values greater than 1.e308 and can therefore
(usually) safely clip the argument passed to ``exp()`` to be smaller than
about 700.
+
+ .. _faq_params_stuck:
+
+ Why are Parameter Values sometimes stuck at initial values?
+ ============================================================
+
+ In order for a Parameter to be optimized in a fit, changing its value must
+ have an impact on the fit residual (`data-model` when curve fitting, for
+ example). If a fit has not changed one or more of the Parameters, it means
+ that changing those Parameters did not change the fit residual.
+
+ Normally (that is, unless you specifically provide a function for
+ calculating the derivatives, in which case you probably would not be asking
+ this question ;)), the fitting process begins by making a very small change
+ to each Parameter value to determine which way, and by how much, each
+ Parameter should be changed: this is the derivative or Jacobian (change in
+ residual per change in parameter value). By default, the change made for
+ each variable Parameter is to multiply its value by (1.0+1.0e-8) or so
+ (unless the value is below about 1.e-15, in which case it adds 1.0e-8). If
+ that small change does not change the residual, then the value of the
+ Parameter will not be updated.
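+
+ As a rough illustration of that idea (a hypothetical residual function,
+ not lmfit's actual internals), the probing step looks something like this:
+
+ .. jupyter-execute::
+
+     import numpy as np
+
+     def residual(c, x, data):
+         # hypothetical residual: data - model, with model = c * x
+         return data - c * x
+
+     x = np.linspace(0, 10, 11)
+     data = 3.0 * x
+
+     c = 2.5
+     # relative step of about 1.0e-8, as described above
+     dc = c * 1.0e-8 if abs(c) > 1.e-15 else 1.0e-8
+     deriv = (residual(c + dc, x, data) - residual(c, x, data)) / dc
+     # non-zero derivative: changing c changes the residual, so c can be updated
+     print(deriv)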
+
+ Parameter values that are "way off" are a common reason for Parameters
+ being stuck at initial values. As an example, imagine fitting peak-like
+ data with an `x` range of 0 to 10, a peak centered at 6, and a width of 1 or
+ 2 or so, as in the example at
+ :ref:`sphx_glr_examples_documentation_model_gaussian.py`. A Gaussian
+ function with an initial value for the peak center of 5 and an initial
+ width of 5 will almost certainly find a good fit. An initial value of the
+ peak center of -50 will end up being stuck with a "bad fit" because a small
+ change in Parameters will still lead the modeled Gaussian to have no
+ intensity over the actual range of the data. You should make sure that
+ initial values for Parameters are reasonable enough to actually affect the
+ fit. As it turns out in the example linked to above, changing the center
+ value to any value between about 0 and 10 (that is, the data range) will
+ result in a good fit.
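+
+ A minimal sketch of that situation, using synthetic peak data (the values
+ here are made up for illustration) and ``GaussianModel``:
+
+ .. jupyter-execute::
+
+     import numpy as np
+     from lmfit.models import GaussianModel
+
+     np.random.seed(0)
+     x = np.linspace(0, 10, 201)
+     y = 3.0 * np.exp(-(x - 6.0)**2 / (2 * 1.2**2))
+     y += np.random.normal(scale=0.05, size=len(x))
+
+     model = GaussianModel()
+
+     # center far outside the data range: likely to stay stuck near -50
+     bad = model.fit(y, model.make_params(amplitude=3, center=-50, sigma=5), x=x)
+     print(bad.params['center'].value)
+
+     # center anywhere within the data range: should converge to about 6
+     good = model.fit(y, model.make_params(amplitude=3, center=5, sigma=5), x=x)
+     print(good.params['center'].value)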
+
+ Another common cause for Parameters being stuck at initial values is when
+ the initial value is at a boundary value. For this case, too, a small
+ change in the initial value for the Parameter will still leave the value at
+ the boundary value and not show any real change in the residual.
+
+ If you're using bounds, make sure the initial values for the Parameters are
+ not at the boundary values.
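+
+ For example (hypothetical parameter names and values, just to illustrate):
+
+ .. jupyter-execute::
+
+     from lmfit import Parameters
+
+     params = Parameters()
+     # risky: the initial value sits exactly on the lower bound
+     params.add('amplitude', value=0.0, min=0.0)
+     # safer: start somewhere inside the bounds
+     params.add('sigma', value=0.5, min=0.0, max=10.0)
+     params.pretty_print()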
+
+ Finally, one reason for a Parameter to not change is that it is actually
+ used as a discrete value. This is discussed below in :ref:`faq_discrete_params`.
+
+ .. _faq_params_no_uncertainties:
+
+ Why are uncertainties in Parameters sometimes not determined?
+ ==============================================================
+
+ In order for Parameter uncertainties to be estimated, each variable
+ Parameter must actually change the fit, and cannot be stuck at an initial
+ value or at a boundary value. See :ref:`faq_params_stuck` for why values may
+ not change from their initial values.
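+
+ One way to check is the ``errorbars`` attribute of the fit result and the
+ ``stderr`` of each Parameter, as in this small, self-contained sketch (the
+ model and data here are made up):
+
+ .. jupyter-execute::
+
+     import numpy as np
+     import lmfit
+
+     def line(x, slope, intercept):
+         return slope * x + intercept
+
+     x = np.linspace(0, 10, 51)
+     y = 1.5 * x + 0.3 + np.random.normal(scale=0.1, size=len(x))
+
+     model = lmfit.Model(line)
+     result = model.fit(y, model.make_params(slope=1, intercept=0), x=x)
+
+     # True only if uncertainties could be estimated
+     print(result.errorbars)
+     # stderr is None when no uncertainty was determined
+     print(result.params['slope'].stderr)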
+
+
+ .. _faq_discrete_params:
+
+ Can Parameters be used for Array Indices or Discrete Values?
+ =============================================================
+
+ The short answer is "No": variables in all of the fitting methods used in
+ `lmfit` (and all of those available in `scipy.optimize`) are treated as
+ continuous values, and represented as double precision floating point
+ values. As an important example, you cannot have a variable that is
+ somehow constrained to be an integer.
+
+ Still, it is a rather common question of how to fit data to a model that
+ includes a breakpoint, perhaps
+
+ .. math::
+
+     f(x; x_0, a, b, c) =
+     \begin{cases}
+     c & \quad \text{for} \> x < x_0 \\
+     a + bx^2 & \quad \text{for} \> x > x_0
+     \end{cases}
+
+ which you might implement with a model function and use to fit data like this:
+
+ .. jupyter-execute::
+
+     import numpy as np
+     import lmfit
+
+     def quad_off(x, x0, a, b, c):
+         model = a + b*x**2
+         model[np.where(x < x0)] = c
+         return model
+
+     x0 = 19
+     b = 0.02
+     a = 2.0
+     xdat = np.linspace(0, 100, 101)
+     ydat = a + b*xdat**2
+     ydat[np.where(xdat < x0)] = a + b * x0**2
+     ydat += np.random.normal(scale=0.1, size=len(xdat))
+
+     mod = lmfit.Model(quad_off)
+     pars = mod.make_params(x0=22, a=1, b=1, c=1)
+
+     result = mod.fit(ydat, pars, x=xdat)
+     print(result.fit_report())
+
+ This will not result in a very good fit, as the value for `x0` cannot be
+ found by making a small change in its value. Specifically,
+ `model[np.where(x<x0)]` will give the same result for `x0=22` and
+ `x0=22.001`, and so that value is not changed during the fit.
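+
+ You can check this directly: with a grid spacing of 1, both thresholds
+ select exactly the same points, for example:
+
+ .. jupyter-execute::
+
+     import numpy as np
+
+     x = np.linspace(0, 100, 101)     # spacing of 1, like ``xdat`` above
+     print(np.array_equal(np.where(x < 22)[0], np.where(x < 22.001)[0]))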
+
+ There are a couple of ways around this problem. First, you may be able to
+ make the fit depend on `x0` in a way that is not just discrete. That
+ depends on your model function. A second option is to treat the break not as
+ a hard break but as a more gentle transition, with a sigmoidal function such
+ as an error function. Like the break-point, this will go from 0 to 1, but
+ more gently and with some finite value leaking into neighboring points.
+ The amount of leakage, or width of the step, can also be adjusted.
+
+ A simple modification of the above to use an error function would
+ look like this and give better fit results:
+
+ .. jupyter-execute::
+
+     import numpy as np
+     import lmfit
+     from scipy.special import erf
+
+     def quad_off(x, x0, a, b, c):
+         m1 = a + b*x**2
+         m2 = c * np.ones(len(x))
+         # step up from 0 to 1 at x0:   (erf(x-x0)+1)/2
+         # step down from 1 to 0 at x0: (1-erf(x-x0))/2
+         model = m1 * (erf(x-x0)+1)/2 + m2 * (1-erf(x-x0))/2
+         return model
+
+     x0 = 19
+     b = 0.02
+     a = 2.0
+     xdat = np.linspace(0, 100, 101)
+     ydat = a + b*xdat**2
+     ydat[np.where(xdat < x0)] = a + b * x0**2
+     ydat += np.random.normal(scale=0.1, size=len(xdat))
+
+     mod = lmfit.Model(quad_off)
+     pars = mod.make_params(x0=22, a=1, b=1, c=1)
+
+     result = mod.fit(ydat, pars, x=xdat)
+     print(result.fit_report())
+
+ The natural width of the error function is about 2 `x` units, but you can
+ adjust this, for example shortening it with `erf((x-x0)*2)` to give a
+ sharper transition.
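+
+ If the width of the transition matters for your data, one option (a sketch,
+ continuing from the example above and adding a hypothetical `width`
+ parameter) is to make the width itself a variable in the model:
+
+ .. jupyter-execute::
+
+     def quad_off_width(x, x0, a, b, c, width=1.0):
+         # same model as above, but with an adjustable transition width
+         step_up = (erf((x - x0)/width) + 1) / 2
+         return (a + b*x**2) * step_up + c * (1 - step_up)
+
+     mod = lmfit.Model(quad_off_width)
+     pars = mod.make_params(x0=22, a=1, b=1, c=1, width=1)
+     pars['width'].set(min=1.e-3)     # keep the width positive
+
+     result = mod.fit(ydat, pars, x=xdat)
+     print(result.fit_report())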