Fitting using modified Bessel function of the second kind scipy.special.kn #220
-
Hello, I am trying to perform a fit of my data using pe.fits.least_squares. The fit function contains the modified Bessel function of the second kind: f(b, x) = b[0] * scipy.special.kn(0, x / b[1]). However, the fit function is required to use autograd.numpy functions, and Bessel functions are not available in that library. I was wondering if there is a way to perform this fit using the pyerrors library. Thank you.
-
Hi Dario, one way of using a fit function which the autograd package cannot differentiate is to rely on numerical differentiation:

import numpy as np
import scipy.special
import pyerrors as pe

# Generate some pseudo data
x = np.linspace(0.1, 1, 4)
y = [pe.pseudo_Obs(0.3 * scipy.special.kn(0, v / 3.1) + np.random.normal(0.0, 0.02), 0.02, "test_ensemble") for v in x]

# Define the fit function
def fit_func(b, x):
    return b[0] * scipy.special.kn(0, x / b[1])

# Perform the fit and rely on numerical derivatives for the error propagation
fit_res = pe.least_squares(x, y, fit_func, num_grad=True)
fit_res.gm()
print(fit_res)

Note that numerical derivatives might be unreliable in some situations, especially when the function changes rapidly or is discontinuous, so it might be wise to verify that the errors you get are sensible. For numerical derivatives we rely on the numdifftools package. You should be able to modify the relevant parameters via kwargs as explained in this docstring, but I am not sure if we ever tested this within pyerrors.

Another option would be to provide the missing pieces to make automatic differentiation work for the desired functions. This would be a bit more involved but could be an interesting extension of pyerrors.
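As an illustration only, here is a minimal sketch of what tuning those parameters could look like. The option names step and order are taken from numdifftools; whether least_squares actually forwards these kwargs to numdifftools is an assumption here and has not been verified in this thread:

# Hypothetical: forward numdifftools options (fixed step size, higher approximation
# order) through the fit call; compare the resulting errors against the defaults
# before trusting them.
fit_res_tuned = pe.least_squares(x, y, fit_func, num_grad=True, step=1e-6, order=4)
fit_res_tuned.gm()
print(fit_res_tuned)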
Beta Was this translation helpful? Give feedback.
-
Hi Dario, expanding on the last part of Fabian's comment, there is also a more elegant way. As explained in the autograd documentation in https://github.com/HIPS/autograd/blob/master/docs/tutorial.md#extend-autograd-by-defining-your-own-primitives you can define your own differentiable functions. In your case, you need the first and second derivative for the error propagation in the fit routine. We can define those using

from autograd.extend import primitive, defvjp, defjvp

k0sp = lambda x: scipy.special.kn(0, x)
k1sp = lambda x: scipy.special.kn(1, x)
k2sp = lambda x: scipy.special.kn(2, x)

# Wrap the scipy implementations as autograd primitives
k0ag = primitive(k0sp)
k1ag = primitive(k1sp)

# Derivatives as defined in https://functions.wolfram.com/Bessel-TypeFunctions/BesselK/20/ShowAll.html
# d/dx K_0(x) = -K_1(x) and d/dx K_1(x) = -(K_0(x) + K_2(x)) / 2
defvjp(k1ag, lambda ans, x: lambda g: - g * 0.5 * (k0sp(x) + k2sp(x)))
defvjp(k0ag, lambda ans, x: lambda g: - g * k1ag(x))

In analogy to Fabian's example above, you can then use

def fit_func_ag(b, x):
    return b[0] * k0ag(x / b[1])

fit_res = pe.least_squares(x, y, fit_func_ag)
fit_res.gm()
print(fit_res)

and you'll find the same result, but now using the analytically known derivatives.

@fjosw: We could think about adding this example to the documentation.
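Not part of the original reply, but a quick way to sanity-check such hand-defined primitives (a sketch assuming the definitions of k0ag and k1ag above) is to compare the autograd derivatives with the analytic identities:

import scipy.special
from autograd import grad

x0 = 0.7
# First derivative: d/dx K_0(x) = -K_1(x)
print(grad(k0ag)(x0), -scipy.special.kn(1, x0))
# Second derivative: d^2/dx^2 K_0(x) = (K_0(x) + K_2(x)) / 2
print(grad(grad(k0ag))(x0), 0.5 * (scipy.special.kn(0, x0) + scipy.special.kn(2, x0)))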
-
Hi,