Inner products between functions on (un)restricted bases #8
So the issue is essentially having |
Yes, I think that might be correct. |
OK, I'll look into it, but first I need to finish the … We are touching on a more complicated topic of automatically simplifying operator trees, but that's beyond my ken. |
I discovered today that this already seems to work:
```julia
julia> t = range(0,stop=1,length=11)
0.0:0.1:1.0
julia> B = FEDVR(t, 4)
FEDVR{Float64} basis with 10 elements on 0.0..1.0
julia> B̃ = B[:,1:5]
FEDVR{Float64} basis with 10 elements on 0.0..1.0, restricted to basis functions 1..5 ⊂ 1..31
julia> u = B*ones(size(B,2))
ContinuumArrays.QuasiArrays.ApplyQuasiArray{Float64,1,LazyArrays.Applied{ContinuumArrays.QuasiArrays.LazyQuasiArrayApplyStyle,typeof(*),Tuple{FEDVR{Float64,Float64,FillArrays.Fill{Int64,1,Tuple{Base.OneTo{Int64}}}},Array{Float64,1}}}}(LazyArrays.Applied{ContinuumArrays.QuasiArrays.LazyQuasiArrayApplyStyle,typeof(*),Tuple{FEDVR{Float64,Float64,FillArrays.Fill{Int64,1,Tuple{Base.OneTo{Int64}}}},Array{Float64,1}}}(ContinuumArrays.QuasiArrays.LazyQuasiArrayApplyStyle(), *, (FEDVR{Float64} basis with 10 elements on 0.0..1.0, [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0 … 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0])))
julia> v = B̃*ones(size(B̃,2))
ContinuumArrays.QuasiArrays.ApplyQuasiArray{Float64,1,LazyArrays.Applied{ContinuumArrays.QuasiArrays.LazyQuasiArrayApplyStyle,typeof(*),Tuple{FEDVR{Float64,Float64,FillArrays.Fill{Int64,1,Tuple{Base.OneTo{Int64}}}},Array{Float64,1}}}}(LazyArrays.Applied{ContinuumArrays.QuasiArrays.LazyQuasiArrayApplyStyle,typeof(*),Tuple{FEDVR{Float64,Float64,FillArrays.Fill{Int64,1,Tuple{Base.OneTo{Int64}}}},Array{Float64,1}}}(ContinuumArrays.QuasiArrays.LazyQuasiArrayApplyStyle(), *, (FEDVR{Float64} basis with 10 elements on 0.0..1.0, [1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0 … 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])))
julia> u'v
5.0
julia> B'B̃
31×5 BandedMatrices.BandedMatrix{Float64,Array{Float64,2},Base.OneTo{Int64}}:
1.0 ⋅ ⋅ ⋅ ⋅
⋅ 1.0 ⋅ ⋅ ⋅
⋅ ⋅ 1.0 ⋅ ⋅
⋅ ⋅ ⋅ 1.0 ⋅
⋅ ⋅ ⋅ ⋅ 1.0
⋅ ⋅ ⋅ ⋅ ⋅
⋅ ⋅ ⋅ ⋅ ⋅
⋅ ⋅ ⋅ ⋅ ⋅
⋅ ⋅ ⋅ ⋅ ⋅
⋅ ⋅ ⋅ ⋅ ⋅
⋅ ⋅ ⋅ ⋅ ⋅
⋅ ⋅ ⋅ ⋅ ⋅
⋅ ⋅ ⋅ ⋅ ⋅
⋅ ⋅ ⋅ ⋅ ⋅
⋅ ⋅ ⋅ ⋅ ⋅
⋅ ⋅ ⋅ ⋅ ⋅
⋅ ⋅ ⋅ ⋅ ⋅
⋅ ⋅ ⋅ ⋅ ⋅
⋅ ⋅ ⋅ ⋅ ⋅
⋅ ⋅ ⋅ ⋅ ⋅
⋅ ⋅ ⋅ ⋅ ⋅
⋅ ⋅ ⋅ ⋅ ⋅
⋅ ⋅ ⋅ ⋅ ⋅
⋅ ⋅ ⋅ ⋅ ⋅
⋅ ⋅ ⋅ ⋅ ⋅
⋅ ⋅ ⋅ ⋅ ⋅
⋅ ⋅ ⋅ ⋅ ⋅
⋅ ⋅ ⋅ ⋅ ⋅
⋅ ⋅ ⋅ ⋅ ⋅
⋅ ⋅ ⋅ ⋅ ⋅
⋅ ⋅ ⋅ ⋅ ⋅
```
To make it more efficient, I basically need to dispatch |
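To see why this can be cheap, here is a plain-Julia stand-in (hypothetical, not the FEDVRQuasi types): `B'B̃` above acts as a selection of the first five basis functions, so the inner product only ever touches the first five coefficients of `u`:
```julia
using LinearAlgebra, SparseArrays

n, m = 31, 5
uc = ones(n)                        # coefficients of u on the full basis
vc = ones(m)                        # coefficients of v on the restricted basis
R = sparse(1:m, 1:m, ones(m), n, m) # stand-in for the B'B̃ selection matrix above
uc' * (R * vc) == dot(view(uc, 1:m), vc) == 5.0  # true; only 5 coefficients touched
```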
Cool! Does the “extra” allocation actually affect much? |
I realized that will not work :( since when forming …
```julia
julia> v.applied.args[2]
31-element Array{Float64,1}:
1.0
1.0
1.0
1.0
1.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
```
|
Yes, it will affect me in two ways:
|
Is the goal to have it be allocation free: `Vcat(Ones(5), Zeros(26))`? In the ∞-Jacobi case I do achieve this allocation-free form, so it should be possible, though it may require a few "hacks" for special forms of |
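For reference, that lazily padded coefficient vector can be built today from LazyArrays' `Vcat` and FillArrays' `Ones`/`Zeros` (a minimal sketch, independent of FEDVRQuasi):
```julia
using LazyArrays, FillArrays, LinearAlgebra

v = Vcat(Ones(5), Zeros(26))  # lazy 31-element vector; no dense storage at all
u = ones(31)
dot(u, v)                     # 5.0 — the Zeros tail contributes nothing
```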
Yes and no: as few allocations/unnecessary computations as possible is of course nice, but a bigger problem is the Laplacian; if you look at the operator in square brackets in the top post of this issue, you'll notice that it is undefined for … When I created the issue, I was mainly thinking about efficiency, but today, when working on getting the Poisson problem to work with FEDVRQuasi, I got a lot of NaNs back, due to the first element of the Laplacian being Inf. I have a semi-hacky solution; I'll post a link with code for you to critique :) when I've put together a test. |
Special casing |
It may be that I simply misunderstood how to achieve what you meant. Anyway, here's what I do: I first define a set of type aliases, essentially unions of (un)restricted quasiarrays and their adjoints: … To calculate the inner products, I do the following: … The same example as above:
```julia
t = range(0,stop=1,length=11)
B = FEDVR(t, 4)
B̃ = B[:,1:5]
uv = ones(size(B,2))
u = B*uv
lu = B ⋆ uv
vv = ones(size(B̃,2))
v = B̃*vv
lv = B̃ ⋆ vv

julia> @test u'v == 5
Test Passed
julia> lazyip = lu' ⋆ lv
[1.0 1.0 … 1.0 1.0]⋆ContinuumArrays.QuasiArrays.QuasiAdjoint{Float64,FEDVR{Float64,Float64,FillArrays.Fill{Int64,1,Tuple{Base.OneTo{Int64}}}}}(FEDVR{Float64} basis with 10 elements on 0.0..1.0)⋆FEDVR{Float64} basis with 10 elements on 0.0..1.0, restricted to basis functions 1..5 ⊂ 1..31⋆[1.0, 1.0, 1.0, 1.0, 1.0]
julia> @test lazyip isa FEDVRQuasi.LazyFEDVRInnerProduct
Test Passed
julia> @test materialize(lazyip) == 5
Test Passed
```
Both the lazy and the non-lazy versions only sum those elements that are actually non-zero, but the non-lazy version will first have expanded the vectors to their full lengths, with trailing zeros. |
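A plain-Julia sketch of the difference described above, with ordinary arrays standing in for the quasi-array machinery: padding the restricted coefficients to full length before the dot product versus dotting only the stored coefficients:
```julia
using LinearAlgebra

uc = ones(31)                 # coefficients on the unrestricted basis
vc = ones(5)                  # coefficients on the restricted basis (functions 1..5)
padded = vcat(vc, zeros(26))  # what the non-lazy path effectively materializes
dot(uc, padded) == dot(view(uc, 1:5), vc) == 5.0  # true; the second form skips the zeros
```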
I find myself returning to this issue every so often :) Something changed in the materialization algorithm; I think the current approach is to flatten as much as possible. I found some discrepancies:
```julia
julia> R = FEDVR(range(0,stop=1,length=3),3)[:,2:end-1]
FEDVR{Float64} basis with 2 elements on 0.0..1.0, restricted to basis functions 2..4 ⊂ 1..5
julia> c = rand(size(R,2))
3-element Array{Float64,1}:
0.6897536131925821
0.29789024566149047
0.043702178352102994
julia> R*c
QuasiArrays.ApplyQuasiArray{Float64,1,typeof(*),Tuple{FEDVR{Float64,Float64,Fill{Int64,1,Tuple{Base.OneTo{Int64}}}},Array{Float64,1}}}(*, (FEDVR{Float64} basis with 2 elements on 0.0..1.0, [0.0, 0.6897536131925821, 0.29789024566149047, 0.043702178352102994, 0.0]))
julia> (R*c).args[2]
5-element Array{Float64,1}:
0.0
0.6897536131925821
0.29789024566149047
0.043702178352102994
0.0
julia> materialize(applied(*, R, c))
QuasiArrays.ApplyQuasiArray{Float64,1,typeof(*),Tuple{FEDVR{Float64,Float64,Fill{Int64,1,Tuple{Base.OneTo{Int64}}}},BandedMatrices.BandedMatrix{Int64,Ones{Int64,2,Tuple{Base.OneTo{Int64},Base.OneTo{Int64}}},Base.OneTo{Int64}},Array{Float64,1}}}(*, (FEDVR{Float64} basis with 2 elements on 0.0..1.0, [0 0 0; 1 0 0; … ; 0 0 1; 0 0 0], [0.6897536131925821, 0.29789024566149047, 0.043702178352102994]))
julia> materialize(applied(*, R, c)).args[3]
3-element Array{Float64,1}:
0.6897536131925821
0.29789024566149047
0.043702178352102994
```
|
Sorry for the constant changes; getting the ∞-dimensional case of OPs working is quite finicky, though I think it's there now. Note you can use … I see the problem. I think representing restrictions as a lazy multiplication was probably a bad idea, and we should leave restrictions un-simplified, that is, as a |
No problem, it's good to get a long-term solution that's nice. If I use LazyArrays' approach, I get this:
```julia
julia> v = @~ R*c
FEDVR{Float64} basis with 2 elements on 0.0..1.0, restricted to basis functions 2..4 ⊂ 1..5⋆[0.6897536131925821, 0.29789024566149047, 0.043702178352102994]
julia> materialize(@~ v'v)
0.5664085257652043
```
but I can't do this:
```julia
julia> v'v
ERROR: MethodError: no method matching *(::Applied{QuasiArrays.QuasiArrayApplyStyle,typeof(*),Tuple{Adjoint{Float64,Array{Float64,1}},QuasiArrays.ApplyQuasiArray{Float64,2,typeof(*),Tuple{Adjoint{Int64,BandedMatrices.BandedMatrix{Int64,Ones{Int64,2,Tuple{Base.OneTo{Int64},Base.OneTo{Int64}}},Base.OneTo{Int64}}},QuasiArrays.QuasiAdjoint{Float64,FEDVR{Float64,Float64,Fill{Int64,1,Tuple{Base.OneTo{Int64}}}}}}}}}, ::Applied{LazyArrays.FlattenMulStyle,typeof(*),Tuple{QuasiArrays.ApplyQuasiArray{Float64,2,typeof(*),Tuple{FEDVR{Float64,Float64,Fill{Int64,1,Tuple{Base.OneTo{Int64}}}},BandedMatrices.BandedMatrix{Int64,Ones{Int64,2,Tuple{Base.OneTo{Int64},Base.OneTo{Int64}}},Base.OneTo{Int64}}}},Array{Float64,1}}})
Closest candidates are:
*(::Any, ::Any, ::Any, ::Any...) at operators.jl:529
*(::PyCall.PyObject, ::Any) at /Users/jagot/.julia/packages/PyCall/ttONZ/src/pyoperators.jl:13
*(::AlgebraicMultigrid.Preconditioner, ::Any) at /Users/jagot/.julia/packages/AlgebraicMultigrid/eCNmE/src/preconditioner.jl:37
...
Stacktrace:
[1] top-level scope at REPL[121]:1
```
which is maybe a bit surprising. EDIT:
```julia
julia> apply(*, v', v)
0.5664085257652043
```
works. |
That's an easy fix I think: just need to add |
While merging my libraries into CompactBases, I noticed inner products mostly work without any special materialization on my part, i.e.
```julia
julia> t = LinearKnotSet(6, 0, 1, 10)
21-element LinearKnotSet{6,6,6,Float64,StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}}}:
0.0
0.0
0.0
0.0
0.0
0.0
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
1.0
1.0
1.0
1.0
1.0
1.0
julia> B = BSpline(t)[:,2:end-1]
BSpline{Float64} basis with LinearKnotSet(Float64) of order k = 6 (quintic) on 0.0..1.0 (10 intervals), restricted to basis functions 2..14 ⊂ 1..15
julia> c = ones(size(B,2))
13-element Array{Float64,1}:
1.0
1.0
1.0
1.0
1.0
1.0
1.0
1.0
1.0
1.0
1.0
1.0
1.0
julia> f = B*c
QuasiArrays.ApplyQuasiArray{Float64,1,typeof(*),Tuple{BSpline{Float64,Float64,LinearKnotSet{6,6,6,Float64,StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}}},Array{Float64,1},Array{Float64,1},SparseArrays.SparseMatrixCSC{Float64,Int64},BandedMatrix{Float64,Array{Float64,2},Base.OneTo{Int64}}},Array{Float64,1}}}(*, (BSpline{Float64} basis with LinearKnotSet(Float64) of order k = 6 (quintic) on 0.0..1.0 (10 intervals), [0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0]))
julia> f'f
1-element Array{Float64,1}:
0.9515151515151516
```
We can compare this with computing the inner product using the metric matrix:
```julia
julia> S = B'B
13×13 BandedMatrix{Float64,Array{Float64,2},Base.OneTo{Int64}}:
0.0132666 0.0101019 0.00355965 0.000593153 3.77661e-5 1.87891e-8 ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅
0.0101019 0.0180653 0.0141727 0.00526771 0.000803225 1.13117e-5 5.56714e-9 ⋅ ⋅ ⋅ ⋅ ⋅ ⋅
0.00355965 0.0141727 0.0234632 0.0183996 0.00638399 0.000470908 6.37228e-6 3.13151e-9 ⋅ ⋅ ⋅ ⋅ ⋅
0.000593153 0.00526771 0.0183996 0.0301697 0.0230784 0.00542545 0.000381113 5.09998e-6 2.50521e-9 ⋅ ⋅ ⋅ ⋅
3.77661e-5 0.000803225 0.00638399 0.0230784 0.0393926 0.024396 0.0055202 0.000382388 5.10061e-6 2.50521e-9 ⋅ ⋅ ⋅
1.87891e-8 1.13117e-5 0.000470908 0.00542545 0.024396 0.0393926 0.024396 0.0055202 0.000382388 5.09998e-6 3.13151e-9 ⋅ ⋅
⋅ 5.56714e-9 6.37228e-6 0.000381113 0.0055202 0.024396 0.0393926 0.024396 0.0055202 0.000381113 6.37228e-6 5.56714e-9 ⋅
⋅ ⋅ 3.13151e-9 5.09998e-6 0.000382388 0.0055202 0.024396 0.0393926 0.024396 0.00542545 0.000470908 1.13117e-5 1.87891e-8
⋅ ⋅ ⋅ 2.50521e-9 5.10061e-6 0.000382388 0.0055202 0.024396 0.0393926 0.0230784 0.00638399 0.000803225 3.77661e-5
⋅ ⋅ ⋅ ⋅ 2.50521e-9 5.09998e-6 0.000381113 0.00542545 0.0230784 0.0301697 0.0183996 0.00526771 0.000593153
⋅ ⋅ ⋅ ⋅ ⋅ 3.13151e-9 6.37228e-6 0.000470908 0.00638399 0.0183996 0.0234632 0.0141727 0.00355965
⋅ ⋅ ⋅ ⋅ ⋅ ⋅ 5.56714e-9 1.13117e-5 0.000803225 0.00526771 0.0141727 0.0180653 0.0101019
⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ 1.87891e-8 3.77661e-5 0.000593153 0.00355965 0.0101019 0.0132666
julia> dot(c,S,c)
0.9515151515151516
```
The philosophical question is then if `f'f` should return a scalar or a 1×1 result. For any basis where the metric is just a scaled identity, we currently get an error:
```julia
julia> B = FiniteDifferences(10, 0.5)
Finite differences basis {Float64} on 0.0..5.5 with 10 points spaced by Δx = 0.5
julia> c = ones(size(B,2))
10-element Array{Float64,1}:
1.0
1.0
1.0
1.0
1.0
1.0
1.0
1.0
1.0
1.0
julia> f = B*c
QuasiArrays.ApplyQuasiArray{Float64,1,typeof(*),Tuple{FiniteDifferences{Float64,Int64},Array{Float64,1}}}(*, (Finite differences basis {Float64} on 0.0..5.5 with 10 points spaced by Δx = 0.5, [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]))
julia> f'f
ERROR: MethodError: no method matching axes(::UniformScaling{Float64}, ::Int64)
Closest candidates are:
axes(::Core.SimpleVector, ::Integer) at essentials.jl:608
axes(::Number, ::Integer) at number.jl:65
axes(::MatrixFactorizations.LayoutQ, ::Integer) at /Users/jagot/.julia/packages/MatrixFactorizations/o0aqD/src/MatrixFactorizations.jl:38
...
Stacktrace:
[1] _check_mul_axes(::Adjoint{Float64,Array{Float64,1}}, ::UniformScaling{Float64}) at /Users/jagot/.julia/packages/ArrayLayouts/S2dXf/src/muladd.jl:46
[2] check_mul_axes(::Adjoint{Float64,Array{Float64,1}}, ::UniformScaling{Float64}) at /Users/jagot/.julia/packages/ArrayLayouts/S2dXf/src/muladd.jl:48
[3] check_applied_axes(::Applied{LazyArrays.DefaultApplyStyle,typeof(*),Tuple{Adjoint{Float64,Array{Float64,1}},UniformScaling{Float64}}}) at /Users/jagot/.julia/packages/LazyArrays/KIPvT/src/linalg/mul.jl:16
[4] instantiate at /Users/jagot/.julia/packages/LazyArrays/KIPvT/src/lazyapplying.jl:31 [inlined]
[5] materialize at /Users/jagot/.julia/packages/LazyArrays/KIPvT/src/lazyapplying.jl:49 [inlined]
[6] fullmaterialize(::Applied{LazyArrays.DefaultApplyStyle,typeof(*),Tuple{Adjoint{Float64,Array{Float64,1}},UniformScaling{Float64}}}) at /Users/jagot/.julia/packages/QuasiArrays/3FYr1/src/matmul.jl:105
[7] fullmaterialize(::Applied{QuasiArrays.LazyQuasiArrayApplyStyle,typeof(*),Tuple{Adjoint{Float64,Array{Float64,1}},QuasiArrays.QuasiAdjoint{Float64,FiniteDifferences{Float64,Int64}},FiniteDifferences{Float64,Int64}}}) at /Users/jagot/.julia/packages/QuasiArrays/3FYr1/src/matmul.jl:120
[8] fullmaterialize(::Applied{QuasiArrays.LazyQuasiArrayApplyStyle,typeof(*),Tuple{Adjoint{Float64,Array{Float64,1}},QuasiArrays.QuasiAdjoint{Float64,FiniteDifferences{Float64,Int64}},FiniteDifferences{Float64,Int64},Array{Float64,1}}}) at /Users/jagot/.julia/packages/QuasiArrays/3FYr1/src/matmul.jl:113
[9] fullmaterialize(::QuasiArrays.ApplyQuasiArray{Float64,1,typeof(*),Tuple{Adjoint{Float64,Array{Float64,1}},QuasiArrays.QuasiAdjoint{Float64,FiniteDifferences{Float64,Int64}},FiniteDifferences{Float64,Int64},Array{Float64,1}}}) at /Users/jagot/.julia/packages/QuasiArrays/3FYr1/src/matmul.jl:126
[10] *(::QuasiArrays.ApplyQuasiArray{Float64,2,typeof(*),Tuple{Adjoint{Float64,Array{Float64,1}},QuasiArrays.QuasiAdjoint{Float64,FiniteDifferences{Float64,Int64}}}}, ::QuasiArrays.ApplyQuasiArray{Float64,1,typeof(*),Tuple{FiniteDifferences{Float64,Int64},Array{Float64,1}}}) at /Users/jagot/.julia/packages/QuasiArrays/3FYr1/src/matmul.jl:58
[11] top-level scope at REPL[70]:1
[12] eval(::Module, ::Any) at ./boot.jl:331
[13] eval_user_input(::Any, ::REPL.REPLBackend) at /Users/julia/buildbot/worker/package_macos64/build/usr/share/julia/stdlib/v1.4/REPL/src/REPL.jl:86
[14] run_backend(::REPL.REPLBackend) at /Users/jagot/.julia/packages/Revise/MgvIv/src/Revise.jl:1023
[15] top-level scope at REPL[1]:0
```
but that should be an easy fix in QuasiArrays.
```julia
julia> S = B'B
UniformScaling{Float64}
0.5*I
julia> dot(c, S, c)
5.0
```
|
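As a sanity check on the numbers above (plain LinearAlgebra, nothing package-specific): with the finite-difference metric `S = 0.5I` and `c = ones(10)`, ⟨f,f⟩ = c'Sc = 0.5‖c‖² = 5.0, which is what `dot(c, S, c)` returned in the transcript:
```julia
using LinearAlgebra

c = ones(10)
S = 0.5I               # UniformScaling{Float64}, as B'B returns above
S.λ * dot(c, c)        # 5.0
dot(c, S, c)           # 5.0, matching the transcript
```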
Yes, so that it matches the vector case.
```julia
julia> S = B'B
UniformScaling{Float64}
0.5*I
```
This is a bad idea: we expect |
Sure, but on the other hand I'd want the inner products between functions to reduce down to dot products between the coefficients in those cases where the metric is diagonal with constant values. In the case where the metric is computed between two differently restricted bases:
```julia
julia> R = FiniteDifferences(20, 0.1)
Finite differences basis {Float64} on 0.0..2.1 with 20 points spaced by Δx = 0.1
julia> R1 = R[:,1:15]
Finite differences basis {Float64} on 0.0..2.1 with 20 points spaced by Δx = 0.1, restricted to basis functions 1..15 ⊂ 1..20
julia> R2 = R[:,3:20]
Finite differences basis {Float64} on 0.0..2.1 with 20 points spaced by Δx = 0.1, restricted to basis functions 3..20 ⊂ 1..20
julia> R1'R2
15×18 BandedMatrix{Float64,Array{Float64,2},Base.OneTo{Int64}}:
⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅
⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅
0.1 ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅
⋅ 0.1 ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅
⋅ ⋅ 0.1 ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅
⋅ ⋅ ⋅ 0.1 ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅
⋅ ⋅ ⋅ ⋅ 0.1 ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅
⋅ ⋅ ⋅ ⋅ ⋅ 0.1 ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅
⋅ ⋅ ⋅ ⋅ ⋅ ⋅ 0.1 ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅
⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ 0.1 ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅
⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ 0.1 ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅
⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ 0.1 ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅
⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ 0.1 ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅
⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ 0.1 ⋅ ⋅ ⋅ ⋅ ⋅ ⋅
⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ 0.1 ⋅ ⋅ ⋅ ⋅ ⋅
```
In this case I'd want `apply(*, f', R1', R2, g)` to boil down to `0.1*dot(adjoint(view(f, 3:15)), view(g, 1:13))`. Perhaps the best way is to always have the metric expand to the appropriate banded matrix, and then introduce a |
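A hedged sketch of that reduction in plain Julia (the helper name and signature below are made up for illustration, not CompactBases API): for two restrictions of the same uniform grid, only the overlap of the parent index ranges contributes, scaled by Δx:
```julia
using LinearAlgebra

# Inner product of two expansions restricted to parent-index ranges r1 and r2,
# on a uniform basis whose metric is Δx times a (possibly shifted) identity.
function overlap_dot(fc, r1::UnitRange, gc, r2::UnitRange, Δx)
    r = intersect(r1, r2)                     # parent indices common to both restrictions
    Δx * dot(view(fc, r .- (first(r1) - 1)),  # translate to local indices of fc
             view(gc, r .- (first(r2) - 1)))  # ... and of gc
end

fc = ones(15); gc = ones(18)
overlap_dot(fc, 1:15, gc, 3:20, 0.1)  # ≈ 1.3, i.e. 0.1*dot(view(fc, 3:15), view(gc, 1:13))
```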
Just use |
How?
```julia
julia> 0.1Eye(5)
5×5 Diagonal{Float64,Fill{Float64,1,Tuple{Base.OneTo{Int64}}}}:
0.1 ⋅ ⋅ ⋅ ⋅
⋅ 0.1 ⋅ ⋅ ⋅
⋅ ⋅ 0.1 ⋅ ⋅
⋅ ⋅ ⋅ 0.1 ⋅
⋅ ⋅ ⋅ ⋅ 0.1
```
|
That's a |
OK, sorry, I missed that. It still does not cover the offset case, though, and I would like a specialization of `dot` along these lines:
```julia
function dot(x::AbstractVector, D::Diagonal, y::AbstractVector)
    # sum over conj(xᵢ)·Dᵢᵢ·yᵢ without materializing D*y
    mapreduce(t -> dot(t[1], t[2], t[3]), +, zip(x, D.diag, y))
end
```
|
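For what it's worth, a quick check (assuming only LinearAlgebra and FillArrays) that the three-argument `dot` accepts the `Diagonal`-of-`Fill` metric from `0.1Eye(5)` above, whether or not it takes a Fill-aware fast path:
```julia
using LinearAlgebra, FillArrays

D = Diagonal(Fill(0.1, 5))  # what 0.1Eye(5) evaluates to above; stores a single Float64
x = ones(5); y = ones(5)
dot(x, D, y)                # ≈ 0.5
```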
Just add that overload to FillArrays.jl |
FillArrays.jl supports rectangular `Eye`:
```julia
julia> Eye(10,3)
10×3 FillArrays.RectDiagonal{Float64,Ones{Float64,1,Tuple{Base.OneTo{Int64}}},Tuple{Base.OneTo{Int64},Base.OneTo{Int64}}}:
1.0 0.0 0.0
0.0 1.0 0.0
0.0 0.0 1.0
0.0 0.0 0.0
0.0 0.0 0.0
0.0 0.0 0.0
0.0 0.0 0.0
0.0 0.0 0.0
0.0 0.0 0.0
0.0 0.0 0.0
```
|
I saw that, but could not figure out how to have a diagonal other than the main one be non-zero, as in
```julia
15×18 BandedMatrix{Float64,Array{Float64,2},Base.OneTo{Int64}}:
⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅
⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅
0.1 ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅
⋅ 0.1 ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅
⋅ ⋅ 0.1 ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅
⋅ ⋅ ⋅ 0.1 ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅
⋅ ⋅ ⋅ ⋅ 0.1 ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅
⋅ ⋅ ⋅ ⋅ ⋅ 0.1 ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅
⋅ ⋅ ⋅ ⋅ ⋅ ⋅ 0.1 ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅
⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ 0.1 ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅
⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ 0.1 ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅
⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ 0.1 ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅
⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ 0.1 ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅
⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ 0.1 ⋅ ⋅ ⋅ ⋅ ⋅ ⋅
⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ 0.1 ⋅ ⋅ ⋅ ⋅ ⋅
```
Maybe I just want a lazy |
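Such a "constant value on one off-diagonal" object is easy to mock up in plain Julia; the following is only an illustration of the structure (dense, not lazy), with a hypothetical helper name:
```julia
# m×n matrix with the constant λ on the k-th subdiagonal (i - j == k), zeros elsewhere;
# a lazy banded analogue would only need to store (λ, k) and the size.
shifted_band(λ, k, m, n) = [i - j == k ? λ : zero(λ) for i in 1:m, j in 1:n]

M = shifted_band(0.1, 2, 15, 18)  # same pattern as the R1'R2 matrix shown above
(M[3, 1], M[15, 13])              # (0.1, 0.1)
```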
Re the scalar vs. 1×1 issue above, I noticed the following:
```julia
julia> c
13-element Array{Float64,1}:
1.0
1.0
1.0
1.0
1.0
1.0
1.0
1.0
1.0
1.0
1.0
1.0
1.0
julia> S
13×13 BandedMatrix{Float64,Array{Float64,2},Base.OneTo{Int64}}:
0.0132666 0.0101019 0.00355965 0.000593153 3.77661e-5 … ⋅ ⋅ ⋅ ⋅ ⋅
0.0101019 0.0180653 0.0141727 0.00526771 0.000803225 ⋅ ⋅ ⋅ ⋅ ⋅
0.00355965 0.0141727 0.0234632 0.0183996 0.00638399 ⋅ ⋅ ⋅ ⋅ ⋅
0.000593153 0.00526771 0.0183996 0.0301697 0.0230784 2.50521e-9 ⋅ ⋅ ⋅ ⋅
3.77661e-5 0.000803225 0.00638399 0.0230784 0.0393926 5.10061e-6 2.50521e-9 ⋅ ⋅ ⋅
1.87891e-8 1.13117e-5 0.000470908 0.00542545 0.024396 … 0.000382388 5.09998e-6 3.13151e-9 ⋅ ⋅
⋅ 5.56714e-9 6.37228e-6 0.000381113 0.0055202 0.0055202 0.000381113 6.37228e-6 5.56714e-9 ⋅
⋅ ⋅ 3.13151e-9 5.09998e-6 0.000382388 0.024396 0.00542545 0.000470908 1.13117e-5 1.87891e-8
⋅ ⋅ ⋅ 2.50521e-9 5.10061e-6 0.0393926 0.0230784 0.00638399 0.000803225 3.77661e-5
⋅ ⋅ ⋅ ⋅ 2.50521e-9 0.0230784 0.0301697 0.0183996 0.00526771 0.000593153
⋅ ⋅ ⋅ ⋅ ⋅ … 0.00638399 0.0183996 0.0234632 0.0141727 0.00355965
⋅ ⋅ ⋅ ⋅ ⋅ 0.000803225 0.00526771 0.0141727 0.0180653 0.0101019
⋅ ⋅ ⋅ ⋅ ⋅ 3.77661e-5 0.000593153 0.00355965 0.0101019 0.0132666
julia> c'S*c
1-element Array{Float64,1}:
0.9515151515151515
julia> c'*(S*c)
0.9515151515151515
julia> (c'S)*c
1-element Array{Float64,1}:
0.9515151515151515
```
|
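For comparison, with ordinary dense arrays all three spellings already give a scalar, so the 1-element array above presumably comes from how the adjoint-row times `BandedMatrix` product is materialized, not from base Julia semantics; a minimal check:
```julia
using LinearAlgebra

c = ones(13)
S = Matrix(0.1I, 13, 13)  # any ordinary AbstractMatrix
c'S*c                     # scalar
c'*(S*c)                  # scalar
(c'S)*c                   # scalar
dot(c, S, c)              # scalar, and arguably the clearest spelling
```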
Assume a linear operator Ĥ with a discrete and continuous spectrum. Then any function may be expanded over its eigenfunctions; when calculating inner products and matrix elements between such expansions, one encounters inner products of the kinds sketched below.
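Presumably the expansion and the inner products in question have the usual form (a hedged sketch, not the exact equations from the post):
```latex
% Hedged reconstruction of the expansions over the mixed spectrum of \hat H:
f = \sum_n c_n \psi_n + \int \mathrm{d}E \, c(E) \, \psi_E ,
\qquad
g = \sum_n d_n \psi_n + \int \mathrm{d}E \, d(E) \, \psi_E ,
% so that \langle f|g\rangle and matrix elements \langle f|\hat H|g\rangle mix
% inner products of the kinds
\langle\psi_n|\psi_m\rangle , \qquad
\langle\psi_n|\psi_E\rangle , \qquad
\langle\psi_E|\psi_{E'}\rangle .
```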
Usually, the eigenfunctions of the discrete spectrum have compact
support, or are at least very localized (they may have an
exponentially decaying tail, which is negligible). It would be nice to
be able to calculate such inner products, without having to represent
the discrete eigenfunctions on the entire interval. Maybe something
like this?:
As a follow-up to this, I'll need to solve the Poisson problem on the entire interval, with source terms of the form …, where the source term will be non-zero only on the interval where …