Loss of accuracy for modest N? #52
I believe this may be due to round-off error:

julia> L(x[end])^2
6.058613695335461e10
Hmm, sorry, that was silly:

julia> w[end]*L(x[end])^2
4.349928550211968e-29

Perhaps Golub-Welsch is losing accuracy for small weights?
OK, there's a problem with ApproxFun's evaluation:

julia> L(x[end])
246142.51350255325
# LaguerreL[27, 92.80261352555698] returns 247580.44444444444 in Mathematica
Hmm, there seem to be some fundamental stability issues:

LaguerreL[27, SetPrecision[92.80261352555698, 100]] // N  # returns 254540.90751235274
Is it possible to do the recursion in log-space?
Setting up a recursion in log-space can be avoided by using a recurrence for the orthonormal Laguerre polynomials at low n, which reduces the stability issues. The same idea applies to normalised Hermite polynomials.

@dlfivefifty Yes, I do think Golub-Welsch loses accuracy, also because the weights w[end-k:end-1] are exactly zero, with k increasing from 1 at n=30 to 63 at n=127, which is long before they should underflow. One option to counter this would be to use the (forward) recurrence relation to compute the polynomials for low n. For high n, I have almost finished explicit asymptotic expansions with ninth-order convergence in n for the nodes and weights near the hard edge and in the bulk. These are quite short, extremely fast, and give about double precision from n=100.
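The normalised Hermite recurrence referred to above is not reproduced in this thread; as a hedged reconstruction (my own sketch, assuming the standard three-term recurrence for the orthonormal Hermite polynomials is what is meant):

```julia
# Reconstruction (not from the thread): three-term recurrence for the
# orthonormal (physicists') Hermite polynomials h_k w.r.t. exp(-x^2):
#   h_{k+1}(x) = sqrt(2/(k+1)) * x * h_k(x) - sqrt(k/(k+1)) * h_{k-1}(x)
# The normalised values stay O(1) far longer than the unnormalised H_k,
# which overflow Float64 quickly.
function hermite_orthonormal(n, x)
    hprev = zero(x)
    h = one(x) / pi^0.25    # h_0 = pi^(-1/4)
    for k in 0:n-1
        h, hprev = sqrt(2/(k+1))*x*h - sqrt(k/(k+1))*hprev, h
    end
    return h
end
```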
A related comment: I was led to the above approach because it worked well for Hermite to use the recursion in
Ok, I have just checked my old log-space code for evaluation of Laguerre polynomials, trying
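For what it's worth, one generic way to run such a recursion "in log-space" (my own sketch for alpha = 0, not the code mentioned above) is to rescale the pair of recurrence values whenever they grow or shrink too far, accumulating the log of the scale factor:

```julia
# Sketch of a log-space (rescaled) three-term recurrence for the orthonormal
# Laguerre polynomial with alpha = 0 and positive leading coefficient.
# Returns (log(abs(p_n(x))), sign(p_n(x))), avoiding overflow/underflow.
function lagpn_logspace(n, x)
    pnprev, pn = 0.0, 1.0   # p_0 = 1 for alpha = 0
    logscale = 0.0
    for k in 1:n
        pn, pnprev = ((x - 2k + 1)*pn - (k - 1)*pnprev)/k, pn
        s = abs(pn)
        if s > 1e100 || 0 < s < 1e-100
            pn /= s; pnprev /= s
            logscale += log(s)
        end
    end
    return logscale + log(abs(pn)), sign(pn)
end
```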
I made a pull request #53 where the recurrence relation for the orthonormal polynomials and their derivatives is implemented, also for BigFloats. Those functions laguerreRec and laguerreRecDer do not seem to be that unstable; maybe they can help you? For high n, I recommend using the (unnormalised) asymptotic expansions with T terms in polyAsyRHgen(n, x, alpha, T, 1.0, 1, getUQ(alpha, 1.0, 1, T)).
Thanks for this, I'll check it out.
It would be good if there were a paper explaining what's going on...
The recurrence relation for the orthonormal polynomials and their derivatives was derived from the recurrence relation for standard associated Laguerre polynomials after accounting for normalisation. The routines used to compute asymptotic expansions of (possibly generalised) Laguerre polynomials are explained in the paper |
@popsomer any update on the article? I have implemented a stable recursion for the orthonormal eigenfunctions of the 2D quantum harmonic oscillator Hamiltonian. But if I try to use Gauss-Laguerre quadrature to compute the normalisation integral, I run into:

using FastGaussQuadrature
M=27
x,w=gausslaguerre(M)
w
27-element Array{Float64,1}:
0.128046
0.238419
0.250493
0.190662
0.112311
0.0525394
0.0197409
0.00598411
0.00146415
0.000288498
4.55765e-5
5.73576e-6
5.7021e-7
⋮
1.20895e-10
4.09626e-12
1.00835e-13
1.74905e-15
2.05597e-17
1.5563e-19
7.08127e-22
1.75747e-24
2.05704e-27
8.94742e-31
0.0
7.17974e-40

where I presume the zero is telling, and then there is major accuracy loss for any quadrature rule of this or higher order. I am really interested in your solution to this problem!
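As a quick diagnostic that needs no ApproxFun (my own sketch, not from the thread): an n-point Gauss-Laguerre rule should reproduce the moments of exp(-x), i.e. sum(w .* x.^k) == factorial(k), exactly for k <= 2n-1, so the relative moment error flags suspect weights:

```julia
# Relative error of the k-th moment of exp(-x) under the rule (x, w);
# should be at machine-precision level for k <= 2n-1.
moment_error(x, w, k) = abs(sum(w .* x.^k) - factorial(k)) / factorial(k)

# Reference: the exact 2-point Gauss-Laguerre rule, which reproduces all
# moments up to k = 3 exactly.
x2 = [2 - sqrt(2), 2 + sqrt(2)]
w2 = [(2 + sqrt(2))/4, (2 - sqrt(2))/4]
```

Running moment_error over the suspect 27-point rule for increasing k would show where the damaged tail weights start to matter.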
I hope to submit the article on quadrature in February, after I finish the Gauss-Jacobi case as well. The zero in your weights might originate in the Golub-Welsch algorithm; I have changed my pull request to use GW for n < 128, but to replace the low weights (< 1e-10) with the ones given by the recurrence relation. If your integrand for the 2D quantum harmonic oscillator increases quite rapidly, then contributions from high x may be significant. In that case those weights have to be known accurately, and you could try to get more digits by doing the same modification but using BigFloats.
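A hedged sketch of that BigFloat suggestion for the special case alpha = 0 (where the normalisation constant 1/sqrt(gamma(1)) is 1, so SpecialFunctions is not needed): make the recurrence generic in the number type, so the same code runs at extended precision:

```julia
# Orthonormal Laguerre polynomial for alpha = 0 (positive leading coefficient)
# via the forward recurrence; generic in T, so it works for Float64 and BigFloat.
function lagpn(n, x::T) where {T<:AbstractFloat}
    pnprev, pn = zero(T), one(T)    # p_0 = 1 for alpha = 0
    for k in 1:n
        pn, pnprev = ((x - 2k + 1)*pn - (k - 1)*pnprev)/k, pn
    end
    return pn
end

lagpn(27, 92.80261352555698)        # Float64
lagpn(27, big"92.80261352555698")   # BigFloat, for extra digits
```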
I also added some higher-order terms in the explicit expressions for higher accuracy at high n, like in chebfun #2200 @ajt60gaibb.
is
Yes, at popsomer:add-GL-nodeWeightExpansions, see pull request #53 and also chebfun pull 2200 |
Is there any way I can help this merge? |
Thanks for the links, @popsomer. For Julia 1.0, I have updated your gausslaguerre code (just some small things to do with type handling), and it seems to work well aside from a couple of special cases at low n. Below is the updated code, and the cases where it returns zero values for the weights:

using FastGaussQuadrature, SpecialFunctions
"Compute Gauss-Laguerre rule based on the recurrence relation, using Newton iterations on an initial guess."
function laguerreRec(n, alpha = 0.0; reduced = false)
    T = typeof(float(alpha))
    n_alloc = reduced ? estimate_reduced_n(n, alpha) : n
    w = zeros(T, n_alloc)
    x = zeros(T, n_alloc)
    # We compute up to 7 starting values for the Newton iterations
    n_pre = min(n, 7)
    nu = 4n + 2alpha + 2
    bes = besselroots(alpha, n_pre).^2 / nu # this is a lower bound by [DLMF 18.16.10]
    x[1:n_pre] = bes
    local pn_deriv
    noUnderflow = true # this flag turns false once the weights start to underflow
    for k in 1:n
        if k > n_pre
            # Use sextic extrapolation for a new initial guess
            x[k] = 7x[k-1] - 21x[k-2] + 35x[k-3] - 35x[k-4] + 21x[k-5] - 7x[k-6] + x[k-7]
        end
        step = x[k]
        ov = floatmax(T) # Previous/old function value
        ox = x[k]        # Old x
        l = 0            # Newton-Raphson iteration number
        max_iter = 20
        while (abs(step) > 40eps(T)*x[k]) && (l < max_iter)
            l = l + 1
            pn, pn_deriv = lagpnRecDer(n, alpha, x[k])
            if abs(pn) >= abs(ov)*(1 - 50eps(T))
                # The function values do not decrease enough any more due to roundoff errors.
                x[k] = ox # Set to the previous value and quit.
                break
            end
            step = pn / pn_deriv
            ox = x[k]
            x[k] = x[k] - step
            ov = pn
        end
        if (x[k] < 0) || (x[k] > 4n + 2alpha + 2) || (l == max_iter) || ((k != 1) && (x[k-1] >= x[k]))
            @warn "Newton method may not have converged in laguerreRec($n,$alpha)."
        end
        # Compute the weight
        if noUnderflow
            pn_prev, _ = lagpnRecDer(n-1, alpha, x[k])
            w[k] = (n^2 + alpha*n)^(-1/2) / pn_prev / pn_deriv
        end
        if noUnderflow && (w[k] < underflow_threshold(T))
            # From here on, weights will no longer be computed
            noUnderflow = false
        end
        if reduced
            if (k > 1) && !noUnderflow
                x = x[1:k-1]
                w = w[1:k-1]
                return x, w
            end
            if k == n_alloc
                # We have to allocate a bigger array
                n_alloc *= 2
                x1 = x
                w1 = w
                x = zeros(T, n_alloc)
                w = zeros(T, n_alloc)
                x[1:k] = x1
                w[1:k] = w1
            end
        end
    end
    x, w
end
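As a standalone illustration (my own, not part of the pull request) of the sextic-extrapolation initial guess used above: the coefficients 7, -21, 35, -35, 21, -7, 1 come from setting the seventh finite difference to zero, so the prediction is exact for any sequence that follows a polynomial of degree at most 6:

```julia
# Degree-6 (sextic) extrapolation: predict the next term of a sequence from its
# last seven terms; exact when the terms follow a polynomial of degree <= 6.
sextic_extrapolate(v) = 7v[end] - 21v[end-1] + 35v[end-2] - 35v[end-3] +
                        21v[end-4] - 7v[end-5] + v[end-6]
```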
"""
Evaluate the orthonormal associated Laguerre polynomial with positive leading coefficient,
as well as its derivative, at the point x using the recurrence relation.
"""
function lagpnRecDer(n, alpha, x)
    T = promote_type(typeof(float(alpha)), typeof(float(x)))
    pnprev = zero(T)
    pn = 1 / sqrt(gamma(alpha + 1))
    pndprev = zero(T)
    pnd = zero(T)
    for k in 1:n
        pnold = pn
        pn = (x - 2k - alpha + 1)/sqrt(k*(alpha + k))*pnold - sqrt((k - 1 + alpha)*(k - 1)/k/(k + alpha))*pnprev
        pnprev = pnold
        pndold = pnd
        pnd = (pnold + (x - 2k - alpha + 1)*pndold)/sqrt(k*(alpha + k)) - sqrt((k - 1 + alpha)*(k - 1)/k/(alpha + k))*pndprev
        pndprev = pndold
    end
    pn, pnd
end
# Estimate for the number of weights that don't underflow
# This is only an estimate, not an upper bound, so code has to be able to cope
# with the estimate being off
estimate_reduced_n(n, alpha) = round(typeof(n), min(17*sqrt(n), n))
# Our threshold for deciding on underflow
underflow_threshold(x) = underflow_threshold(typeof(x))
underflow_threshold(::Type{T}) where {T <: AbstractFloat} = 10floatmin(T)
# Tests
#works up to underflow:
x,w = laguerreRec(191)
x,w = laguerreRec(192)
x,w = laguerreRec(120,.1)
# Currently returns incorrect w[i] = 0 values for small even numbers n = 8, 10, 12.
# If alpha ≠ 0, it can also give incorrect w[i] = 0 values for n = 9, 11
x,w = laguerreRec(8)
x,w = laguerreRec(9,2.)
Thanks! @ajt60gaibb: Maybe this can be merged with the other changes? An update on the article: @daanhb submitted it recently. We can put it on arXiv if you want.
Do you mean merge #69 ? Can we get the coverage back up? If you click on the coverage links it should show which lines are missing. |
I'm not sure how best to improve this, but it looks like coverage went down quite a bit due to adding one line, because line 17 of that file doesn't have good coverage, and adding a line lowered the proportional coverage further. So the only way to fix it would seem to be to add a test for line 17, or to accept that the loss of coverage isn't important.
OK it's fine, I'll merge. |
Is this issue still active? |
seems to be fixed, yay!

julia> x,w=gausslaguerre(M);
julia> w
40-element Vector{Float64}:
0.08841210619030318
0.1768147390957118
0.21136311701596833
0.19408119531861268
0.14643428242412265
0.09332679843577069
0.05093220436104459
0.023976193015684367
0.009774625246714422
0.003457939993018473
⋮
1.5217354931814558e-31
4.585291614502648e-34
8.762158657486055e-37
9.82741572514546e-40
5.801152019169776e-43
1.5309086846066978e-46
1.381986305649325e-50
2.566633605012361e-55
2.700360940217009e-61
I see a sudden loss of accuracy at N=26 using gausslaguerre.jl. I use ApproxFun.jl to create a Laguerre polynomial, and then evaluate it at the quadrature points to check normalisation of the weighted Laguerre polynomials. If M <= 26, accuracy is excellent (N2 = 1.0 to 14 figures). At M = 27, N2 = 0.88929.

I have taken the same approach with gausshermite.jl and the results are very well behaved (although accuracy appears to reach a global minimum around M = 30). I have tried the different ways to call gausslaguerre.jl using the METHOD argument, but this hasn't improved the result. Any suggestions greatly appreciated!