Several of the norm functions implemented in linalg.jl suffer from spurious overflow/underflow problems.
For example,
```julia
julia> x = [1e300]; norm(x,3)
Inf

julia> x = [1e-300]; norm(x,3)
0.0
```
A simple fix would be for `norm(x::AbstractVector, p::Number)` and `norm(x::AbstractVector)` to first scale `x` by `1/max(abs(x))` before computing the norm, and then scale the result by `max(abs(x))` (assuming it is nonzero).
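The rescaling described above can be sketched as follows (the function name `scalednorm` is hypothetical, chosen here for illustration; it assumes a nonempty real vector):

```julia
# Sketch of the proposed fix: compute m * ||x/m||_p with m = max(abs(x)),
# so intermediate powers stay in a safe floating-point range.
function scalednorm(x::AbstractVector, p::Number)
    m = maximum(abs, x)          # scale factor
    m == 0 && return zero(m)     # all-zero vector: norm is 0
    s = sum(t -> (abs(t) / m)^p, x)
    return m * s^(1 / p)
end
```

With this, `scalednorm([1e300], 3)` returns `1e300` rather than overflowing to `Inf`, and `scalednorm([1e-300], 3)` returns `1e-300` rather than underflowing to `0.0`.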
Similar problems occur for the Frobenius norm in `norm(A::AbstractMatrix, p)`.
Also, computing the Frobenius norm by `sqrt(sum(diag(A'*A)))` is insanely inefficient, since it is O(n^3) for an n-by-n matrix instead of O(n^2). You should really just do something like `norm(reshape(A, numel(A)))`, or whatever the equivalent of `reshape` is that does not create a temporary vector. (This has the added advantage of using the BLAS `nrm2` operation where possible.)
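For concreteness, the flat-vector approach can be sketched like this (the name `frobnorm` is hypothetical; `vec(A)` reshapes without copying in current Julia, playing the role of the copy-free `reshape` mentioned above):

```julia
# Sketch: the Frobenius norm is just the 2-norm of the matrix entries,
# so flattening A and calling the vector norm is O(n^2), not O(n^3).
frobnorm(A::AbstractMatrix) = norm(vec(A))
```

For example, `frobnorm([3.0 4.0; 0.0 0.0])` gives `5.0`, since sqrt(3^2 + 4^2) = 5.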