Misc.
way too much :/
SamuelBadr committed May 12, 2022
1 parent 33f3f0d commit 13dcc2a
Showing 20 changed files with 284 additions and 319 deletions.
3 changes: 2 additions & 1 deletion .JuliaFormatter.toml
@@ -1 +1,2 @@
style = "blue"
style = "blue"
join_lines_based_on_source = true
1 change: 1 addition & 0 deletions Project.toml
@@ -4,6 +4,7 @@ authors = ["Samuel Badr <[email protected]>", "Hiroshi Shinaoka <h.shinaoka@
version = "0.12.0"

[deps]
DoubleFloats = "497a8b3b-efae-58df-a0af-a86822472b78"
IntervalRootFinding = "d2bf35a9-74e0-55ec-b149-d360ff49b807"
LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
LowRankApprox = "898213cb-b102-5a47-900c-97e73b919f73"
4 changes: 3 additions & 1 deletion src/SparseIR.jl
@@ -1,6 +1,7 @@
"Intermediate representation (IR) for many-body propagators"
module SparseIR

using DoubleFloats: Double64
using IntervalRootFinding: roots as roots_irf, Interval, isunique, interval, mid, Newton
using LinearAlgebra: dot, svd, SVD, QRIteration
using LowRankApprox: psvd
@@ -10,7 +11,7 @@ using SpecialFunctions: sphericalbesselj as sphericalbesselj_sf
export fermion, boson
export DimensionlessBasis, FiniteTempBasis, finite_temp_bases
export SparsePoleRepresentation, to_IR, from_IR
export PiecewiseLegendrePoly, PiecewiseLegendrePolyArray, roots, hat, overlap, deriv
export PiecewiseLegendrePoly, PiecewiseLegendrePolyVector, roots, hat, overlap, deriv
export LegendreBasis, MatsubaraConstBasis
export FiniteTempBasisSet
export legendre, legendre_collocation, Rule, piecewise, quadrature, reseat
@@ -22,6 +23,7 @@ export TauSampling, MatsubaraSampling, evaluate, fit

include("_specfuncs.jl")
include("_linalg.jl")
using ._LinAlg: tsvd

include("svd.jl")
include("gauss.jl")
86 changes: 34 additions & 52 deletions src/_linalg.jl
@@ -1,6 +1,6 @@
module _LinAlg

import LinearAlgebra
using LinearAlgebra: LinearAlgebra

@mwallerb (Collaborator), May 18, 2022:
Huh? What's the benefit of `using` here?

@SamuelBadr (Author, Collaborator), May 18, 2022:
It's a stylistic choice: when you `import` something, it's usually to provide additional method definitions. If I were writing DoubleFloats.jl, I could for example say `import Base.:+` and then provide a couple of method definitions like `+(a::Double64, b::Double64)`, `+(a::Double64, b::Float64)`, etc. without having to qualify `+` each time.

In contrast, `using` expresses the intent to just use something without providing additional overloads.
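For illustration, a minimal sketch of that distinction, using a hypothetical `MyDouble` stand-in rather than the real Double64 type:

```julia
# `import` with an explicit name signals the intent to extend it with new methods.
import Base: +

struct MyDouble        # hypothetical stand-in for Double64
    hi::Float64
    lo::Float64
end

# New methods can now be defined without qualifying `+`
# (toy arithmetic, not real double-double addition):
+(a::MyDouble, b::MyDouble) = MyDouble(a.hi + b.hi, a.lo + b.lo)
+(a::MyDouble, b::Float64) = a + MyDouble(b, 0.0)

# `using` with an explicit name only binds it for calling, not for extending:
using LinearAlgebra: norm
norm([3.0, 4.0])                 # fine
# norm(x::MyDouble) = abs(x.hi)  # error: norm must be explicitly imported to be extended
```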

@mwallerb (Collaborator), May 18, 2022:
Yeah, but here we just use `LinearAlgebra` from `LinearAlgebra`?

@SamuelBadr (Author, Collaborator), May 18, 2022:
It's to avoid having to list all the different methods we use, while also avoiding polluting the namespace.
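Concretely, the pattern under discussion looks like this in use (a small sketch, not code from this repository):

```julia
# Bind only the module name; nothing else enters the namespace.
using LinearAlgebra: LinearAlgebra

A = [2.0 0.0; 0.0 1.0]
F = LinearAlgebra.svd(A)    # every use is explicitly qualified
n = LinearAlgebra.norm(A)

# Unlike a bare `using LinearAlgebra`, exported names such as `norm`, `svd`,
# and `I` are not pulled into the current namespace.
```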

@SamuelBadr (Author, Collaborator), May 18, 2022:
We could of course do `using LinearAlgebra: norm, Givens, lmul!, rmul!, I, SVD, reflector!` etc. as well.

@mwallerb (Collaborator), May 18, 2022:
Again: here `import LinearAlgebra` expresses exactly what we want: import the package LinearAlgebra and allow us to access it via qualified names.

`using` usually fetches parts of it into the namespace (those symbols which are exported, or, if we override it, only those we specify). `using LinearAlgebra: LinearAlgebra` is very strange, since it does not actually pull any symbols into the namespace, only the name `LinearAlgebra`, which the underlying import does anyway.

It reads as hacky and confusing; I recommend deleting it.

@mwallerb (Collaborator), May 18, 2022:
> We could of course do `using LinearAlgebra: norm, Givens, lmul!, rmul!, I, SVD, reflector!` etc. as well.

This is an option! But `using LinearAlgebra: LinearAlgebra` is not good.

@SamuelBadr (Author, Collaborator), May 18, 2022:
In our case the difference isn't important (so I'll happily change it), but `import Module` in general can be dangerous because it opens up the possibility of accidentally extending `Module`'s symbols. See for example https://stackoverflow.com/a/61072552/7120090.

@SamuelBadr (Author, Collaborator), May 18, 2022:
> > We could of course do `using LinearAlgebra: norm, Givens, lmul!, rmul!, I, SVD, reflector!` etc. as well.
>
> This is an option! But `using LinearAlgebra: LinearAlgebra` is not good.

I agree! I'll change it to that.

@mwallerb (Collaborator), May 18, 2022:
> In our case the difference isn't important (so I'll happily change it), but `import Module` in general can be dangerous because it opens up the possibility of accidentally extending `Module`'s symbols.

IMHO it certainly isn't, if you do a plain `import Module`; the linked Stack Overflow answer says as much. Then you need to qualify all names, no matter whether you're extending them or using them. This is a good thing!

@SamuelBadr (Author, Collaborator), May 18, 2022:
That's true! I guess in my mind it's mainly a semantics thing: when I read `import Module`, I assume the intent is extension. But that's subjective, of course :)

export tsvd, tsvd!, svd_jacobi, svd_jacobi!, rrqr, rrqr!

"""
@@ -9,7 +9,7 @@ Compute Givens rotation `R` matrix that satisfies:
[ c s ] [ f ] [ r ]
[ -s c ] [ g ] = [ 0 ]
"""
function givens_params(f::T, g::T) where T <: AbstractFloat
function givens_params(f::T, g::T) where {T<:AbstractFloat}
# ACM Trans. Math. Softw. 28(2), 206, Alg 1
if iszero(g)
c, s = one(T), zero(T)
@@ -31,7 +31,7 @@ Apply Givens rotation to vector:
[ a ] = [ c s ] [ x ]
[ b ] [ -s c ] [ y ]
"""
function givens_lmul((c, s)::Tuple{T, T}, (x, y)) where T
function givens_lmul((c, s)::Tuple{T,T}, (x, y)) where {T}
a = c * x + s * y
b = c * y - s * x
return a, b
@@ -45,7 +45,7 @@ Perform the SVD of upper triangular two-by-two matrix:
Note that smax and smin can be negative.
"""
function svd2x2(f::T, g::T, h::T) where T <: AbstractFloat
function svd2x2(f::T, g::T, h::T) where {T<:AbstractFloat}
# Code taken from LAPACK xSLAV2:
fa = abs(f)
ga = abs(g)
@@ -96,7 +96,7 @@ Perform the SVD of an arbitrary two-by-two matrix:
Note that smax and smin can be negative.
"""
function svd2x2(a11::T, a12::T, a21::T, a22::T) where T
function svd2x2(a11::T, a12::T, a21::T, a22::T) where {T}
abs_a12 = abs(a12)
abs_a21 = abs(a21)
if iszero(a21)
@@ -128,25 +128,19 @@ end

function jacobi_sweep!(U::AbstractMatrix, VT::AbstractMatrix)
ii, jj = size(U)
if ii < jj
throw(ArgumentError("matrix must be 'tall'"))
elseif size(VT, 1) != jj
throw(ArgumentError("U and VT must be compatible"))
end
ii ≥ jj || throw(ArgumentError("matrix must be 'tall'"))
size(VT, 1) == jj || throw(ArgumentError("U and VT must be compatible"))
Base.require_one_based_indexing(U)
Base.require_one_based_indexing(VT)

# TODO: non-traditional indexing
offd = zero(eltype(U))
@inbounds for i in 1:ii
for j in i+1:jj
for j in (i + 1):jj
# Construct the 2x2 matrix to be diagonalized
Hii = Hij = Hjj = zero(eltype(U))
for k in 1:ii
Hii += abs2(U[k, i])
Hij += U[k, i] * U[k, j]
Hjj += abs2(U[k, j])
end
Hii = sum(abs2, @view U[:, i])
Hij = sum(k -> @inbounds(U[k, i] * U[k, j]), 1:ii)
Hjj = sum(abs2, @view U[:, j])
offd += abs2(Hij)

# diagonalize
@@ -162,36 +156,27 @@ function jacobi_sweep!(U::AbstractMatrix, VT::AbstractMatrix)
end

"""Singular value decomposition using Jacobi rotations."""
function svd_jacobi!(U::AbstractMatrix{T}; rtol=eps(T), maxiter=20) where T
function svd_jacobi!(U::AbstractMatrix{T}; rtol=eps(T), maxiter=20) where {T}
m, n = size(U)
if m < n
throw(ArgumentError("matrix must be 'tall'"))
end
m ≥ n || throw(ArgumentError("matrix must be 'tall'"))

@mwallerb (Collaborator), May 13, 2022:
I have to say I really don't like this notation, because it is totally non-obvious unless you are familiar with short-circuiting tricks in bash scripting... but I suppose it's idiomatic Julia?

Couldn't we make a `@checkarg` macro that works like `@assert` but throws an `ArgumentError`?
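A rough sketch of what such a macro might look like; `@checkarg` is hypothetical and not part of this commit:

```julia
# Hypothetical @checkarg: like @assert, but throws an ArgumentError with the given message.
macro checkarg(cond, msg)
    return quote
        $(esc(cond)) || throw(ArgumentError($(esc(msg))))
    end
end

# Usage, as a stand-in for `m ≥ n || throw(ArgumentError("matrix must be 'tall'"))`:
# @checkarg m ≥ n "matrix must be 'tall'"
```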

@SamuelBadr (Author, Collaborator), May 13, 2022:
I like it because it makes for easily readable enforcement of invariants (once you're used to it). That's how I use it, at least: `invariant || code to enforce said invariant`.
The `@checkarg` suggestion sounds good also, though.

@shinaoka (Member), May 13, 2022:
I have to admit that this notation is sometimes confusing.

Base.require_one_based_indexing(U) # TODO

VT = Matrix(one(T) * LinearAlgebra.I, n, n)
Unorm = LinearAlgebra.norm(@view U[1:n, 1:n])
for _ in 1:maxiter
offd = jacobi_sweep!(U, VT)
if offd < rtol * Unorm
break
end
offd < rtol * Unorm && break
end

s = Vector{T}(undef, n)
@inbounds for i in 1:n
s[i] = LinearAlgebra.norm(@view U[:, i])
# For some reason, U[:,i] ./= s[i] creates a copy
for j in axes(U, 1)
U[j, i] /= s[i]
end
end
s = LinearAlgebra.norm.(eachcol(U))
U ./= transpose(s)
return LinearAlgebra.SVD(U, s, VT)
end

"""Singular value decomposition using Jacobi rotations."""
svd_jacobi(U::AbstractMatrix{T}; rtol=eps(T), maxiter=20) where T =
svd_jacobi!(copy(U); rtol=rtol, maxiter=maxiter)
function svd_jacobi(U::AbstractMatrix{T}; rtol=eps(T), maxiter=20) where {T}
return svd_jacobi!(copy(U); rtol, maxiter)
end

"""
Truncated rank-revealing QR decomposition with full column pivoting.
@@ -204,7 +189,7 @@ where `Q` is an `(m, k)` isometric matrix, `R` is a `(k, n)` upper
triangular matrix, `piv` is a permutation vector, and `k` is chosen such
that the relative tolerance `tol` is met in the equality above.
"""
function rrqr!(A::AbstractMatrix{T}; rtol=eps(T)) where T <: AbstractFloat
function rrqr!(A::AbstractMatrix{T}; rtol=eps(T)) where {T<:AbstractFloat}
# DGEQPF
m, n = size(A)
k = min(m, n)
@@ -219,7 +204,7 @@ function rrqr!(A::AbstractMatrix{T}; rtol=eps(T)) where T <: AbstractFloat
sqrteps = sqrt(eps(T))

@inbounds for i in 1:k
pvt = argmax(@view pnorms[i:end]) + i - 1
pvt = argmax(ii -> pnorms[ii], i:n)
if i != pvt
jpvt[i], jpvt[pvt] = jpvt[pvt], jpvt[i]
xnorms[pvt] = xnorms[i]
@@ -233,15 +218,16 @@ function rrqr!(A::AbstractMatrix{T}; rtol=eps(T)) where T <: AbstractFloat
tau_i = LinearAlgebra.reflector!(@view A[i:end, i])
taus[i] = tau_i
LinearAlgebra.reflectorApply!(
(@view A[i:end, i]), tau_i, @view A[i:end, i+1:end])
(@view A[i:end, i]), tau_i, @view A[i:end, (i + 1):end]
)

# Lapack Working Note 176.
for j in i+1:n
temp = abs(A[i,j]) / pnorms[j]
for j in (i + 1):n
temp = abs(A[i, j]) / pnorms[j]
temp = max(zero(T), (one(T) + temp) * (one(T) - temp))
temp2 = temp * abs2(pnorms[j] / xnorms[j])
if temp2 < sqrteps
recomputed = LinearAlgebra.norm(@view A[i+1:end, j])
recomputed = LinearAlgebra.norm(@view A[(i + 1):end, j])

@mwallerb (Collaborator), May 13, 2022:
Why do we need brackets here? Also in other places?

@SamuelBadr (Author, Collaborator), May 13, 2022:
We don't really need them and I have no strong preference either way. It's just that JuliaFormatter.jl adds them by default (I think to prevent potentially confusing notation). If you don't like them, I can change the formatter's settings.

pnorms[j] = recomputed
xnorms[j] = recomputed
else
@@ -258,19 +244,16 @@ function rrqr!(A::AbstractMatrix{T}; rtol=eps(T)) where T <: AbstractFloat
break
end
end
return LinearAlgebra.QRPivoted{T, typeof(A)}(A, taus, jpvt), k
return LinearAlgebra.QRPivoted{T,typeof(A)}(A, taus, jpvt), k
end

"""Truncated rank-revealing QR decomposition with full column pivoting."""
rrqr(A::AbstractMatrix{T}; rtol=eps(T)) where T <: AbstractFloat =
rrqr!(copy(A); rtol=rtol)
rrqr(A::AbstractMatrix{T}; rtol=eps(T)) where {T<:AbstractFloat} = rrqr!(copy(A); rtol)

"""Truncate RRQR result low-rank"""
function truncate_qr_result(qr::LinearAlgebra.QRPivoted{T}, k::Integer) where T
function truncate_qr_result(qr::LinearAlgebra.QRPivoted{T}, k::Integer) where {T}
m, n = size(qr)
if k < 0 || k > min(m, n)
throw(ArgumentError("Invalid rank"))
end
0 ≤ k ≤ min(m, n) || throw(DomainError(k, "Invalid rank, must be in [0, $(min(m, n))]"))
Qfull = LinearAlgebra.QRPackedQ((@view qr.factors[:, 1:k]), qr.τ[1:k])

Q = LinearAlgebra.lmul!(Qfull, Matrix{T}(LinearAlgebra.I, m, k))
@@ -290,13 +273,13 @@ matrix with orthogonal rows and `s` are the singular values, a set of `k`
nonnegative numbers in non-ascending order. The SVD is truncated in the
sense that singular values below `tol` are discarded.
"""
function tsvd!(A::AbstractMatrix{T}; rtol=eps(T)) where T <: AbstractFloat
function tsvd!(A::AbstractMatrix{T}; rtol=eps(T)) where {T<:AbstractFloat}
# Perform RRQR of the m x n matrix at a cost of O(m*n*k), where k is
# the QR rank (a mild upper bound for the true rank)
A_qr, k = rrqr!(A, rtol=rtol)
A_qr, k = rrqr!(A; rtol)
Q, R = truncate_qr_result(A_qr, k)

# RRQR is an excellent preconditioner for Jacobi. One should then perform
# RRQR is an excellent preconditioner for Jacobi. One should then perform
# Jacobi on RT
RT_svd = svd_jacobi(R')

@@ -308,7 +291,6 @@ function tsvd!(A::AbstractMatrix{T}; rtol=eps(T)) where T <: AbstractFloat
end

"""Truncated singular value decomposition."""
tsvd(A::AbstractMatrix{T}; rtol=eps(T)) where T <: AbstractFloat =
tsvd!(copy(A); rtol=rtol)
tsvd(A::AbstractMatrix{T}; rtol=eps(T)) where {T<:AbstractFloat} = tsvd!(copy(A); rtol)

end # module _LinAlg
10 changes: 5 additions & 5 deletions src/augment.jl
@@ -18,7 +18,7 @@ struct LegendreBasis{T<:AbstractFloat} <: AbstractBasis
statistics::Statistics
β::Float64
cl::Vector{T}
u::PiecewiseLegendrePolyArray{T}
u::PiecewiseLegendrePolyVector{T}
uhat::PiecewiseLegendreFTArray{T}
end

@@ -38,12 +38,12 @@ function LegendreBasis(
for l in 1:size
data[l, 1, l] = sqrt(((l - 1) + 0.5) / beta) * cl[l]
end
u = PiecewiseLegendrePolyArray(data, knots; symm)
u = PiecewiseLegendrePolyVector(data, knots; symm)

# uhat
uhat_base = PiecewiseLegendrePolyArray(sqrt(beta) .* data, Float64[-1, 1]; symm)
uhat_base = PiecewiseLegendrePolyVector(sqrt(beta) .* data, Float64[-1, 1]; symm)
even_odd = Dict(fermion => :odd, boson => :even)[statistics]
uhat = hat.(uhat_base, even_odd, 0:(size - 1))
uhat = hat.(uhat_base, even_odd)

return LegendreBasis(statistics, beta, cl, u, uhat)
end
@@ -60,7 +60,7 @@ iswellconditioned(basis::LegendreBasis) = true

function default_tau_sampling_points(basis::LegendreBasis)
x = gauss(length(basis.u))[1]
return (beta(basis) / 2) .* (x .+ 1)
return (getbeta(basis) / 2) .* (x .+ 1)
end

struct _ConstTerm{T<:Number}