Start porting to new QuasiArrays #22
Conversation
Codecov Report
```diff
@@            Coverage Diff             @@
##           master      #22      +/-   ##
==========================================
+ Coverage   69.59%   69.77%   +0.18%
==========================================
  Files          21       21
  Lines        1424     1360      -64
==========================================
- Hits          991      949      -42
+ Misses        433      411      -22
```
Continue to review full report at Codecov.
|
Let me know if you need any help with this. Some points:
|
Very good, thanks! I think I mostly had issues with … Also, I noticed that QuasiArrays is stuck at 0.2.3 because ContinuumArrays is not released yet, so I actually don't know if it works or not :) Will have to try locally. |
It seems a lot of things broke, so fixing this will be breaking, right? Then I might as well drop Julia < 1.5 at the same time. |
The first thing that fails is the following:

```julia
B = FiniteDifferences(5,1.0)
x = axes(B,1)
x² = B'QuasiDiagonal(x.^2)*B
@test x² isa Diagonal
```

I think this breaks because previously … (EDIT: I think it is not picking up my custom ….) Invoking a 3-arg …:

```julia
julia> apply(*, B', QuasiDiagonal(x.^2), B)
5×5 Diagonal{Float64,Array{Float64,1}}:
 1.0   ⋅    ⋅     ⋅     ⋅
  ⋅   4.0   ⋅     ⋅     ⋅
  ⋅    ⋅   9.0    ⋅     ⋅
  ⋅    ⋅    ⋅   16.0    ⋅
  ⋅    ⋅    ⋅     ⋅   25.0
```

Then the problem of restricted bases being expanded and the coefficient vectors zero-padded remains:

```julia
julia> B̃ = B[:, 2:end-1]
Finite differences basis {Float64} on 0.0..6.0 with 5 points spaced by Δx = 1.0, restricted to basis functions 2..4 ⊂ 1..5

julia> c̃ = rand(size(B̃, 2))
3-element Array{Float64,1}:
 0.05671328944836307
 0.5851879217376352
 0.13677518135154187

julia> ṽ = B̃*c̃
ApplyQuasiArray{Float64,1,typeof(*),Tuple{FiniteDifferences{Float64,Int64},Array{Float64,1}}}(*, (Finite differences basis {Float64} on 0.0..6.0 with 5 points spaced by Δx = 1.0,
[0.0, 0.05671328944836307, 0.5851879217376352, 0.13677518135154187, 0.0]))

julia> ṽ isa ContinuumArrays.Expansion
true
```

Inner products work as intended with this kind of expansion:

```julia
julia> ṽ'ṽ
0.36436875118141365
```

However, explicitly using the lazy representation, where the coefficients are not zero-padded, does not work with inner products anymore, since I can't seem to take the adjoint of them:

```julia
julia> v′ = applied(*, B̃, c̃)
Applied(*,Finite differences basis {Float64} on 0.0..6.0 with 5 points spaced by Δx = 1.0, restricted to basis functions 2..4 ⊂ 1..5,
[0.05671328944836307, 0.5851879217376352, 0.13677518135154187])

julia> v′'v′
ERROR: MethodError: no method matching adjoint(::Applied{LazyArrays.MulStyle,typeof(*),Tuple{QuasiArrays.SubQuasiArray{Float64,2,FiniteDifferences{Float64,Int64},Tuple{Inclusion{Float64,IntervalSets.Interval{:closed,:closed,Float64}},UnitRange{Int64}},false},Array{Float64,1}}})
Closest candidates are:
  adjoint(::Missing) at missing.jl:100
  adjoint(::Number) at number.jl:169
  adjoint(::Adjoint{var"#s173",var"#s172"} where var"#s172"<:Union{StaticArrays.StaticArray{Tuple{N},T,1} where T where N, StaticArrays.StaticArray{Tuple{N,M},T,2} where T where M where N} where var"#s173") at /Users/jagot/.julia/packages/StaticArrays/l7lu2/src/linalg.jl:73
  ...
Stacktrace:
 [1] top-level scope at REPL[80]:1
```
|
Note you should probably work with … |
I'm sort of not seeing how I should go about this; returning to the above example:

```julia
B = FiniteDifferences(5,1.0)
x = axes(B,1)
x² = B'QuasiDiagonal(x.^2)*B # What should be implemented for this to materialize properly?
```

Should I implement a three-argument …? I try to retain the pattern … Ideally … |
```julia
julia> apply(*, B', QuasiDiagonal(x.^2), B)
QuasiArrays.ApplyQuasiArray{Float64,2,typeof(*),Tuple{QuasiArrays.QuasiAdjoint{Float64,FiniteDifferences{Float64,Int64}},QuasiDiagonal{Float64,QuasiArrays.BroadcastQuasiArray{Float64,1,typeof(Base.literal_pow),Tuple{Base.RefValue{typeof(^)},Inclusion{Float64,IntervalSets.Interval{:closed,:closed,Float64}},Base.RefValue{Val{2}}}}},FiniteDifferences{Float64,Int64}}}(*, (QuasiArrays.QuasiAdjoint{Float64,FiniteDifferences{Float64,Int64}}(Finite differences basis {Float64} on 0.0..6.0 with 5 points spaced by Δx = 1.0), QuasiDiagonal{Float64,QuasiArrays.BroadcastQuasiArray{Float64,1,typeof(Base.literal_pow),Tuple{Base.RefValue{typeof(^)},Inclusion{Float64,IntervalSets.Interval{:closed,:closed,Float64}},Base.RefValue{Val{2}}}}}(Inclusion(0.0..6.0) .^ 2), Finite differences basis {Float64} on 0.0..6.0 with 5 points spaced by Δx = 1.0))
```

Where's the code you expect to be called? |
It did work before c2bbf8d. I tried to modify the …

CompactBases.jl/src/fd_operators.jl, lines 6 to 18 in c2bbf8d

This block generates a

```julia
MulQuasiArray{T,<:Any,<:Tuple{Ac::AdjointBasisOrRestricted{<:AbstractFiniteDifferences},
                              D::QuasiDiagonal,
                              B::BasisOrRestricted{<:AbstractFiniteDifferences}}}
```

Similarly, for CompactBases.jl/src/fd_operators.jl, lines 19 to 26 in c2bbf8d, this results in a … |
I don't see the problem:

```julia
@simplify function *(Ac::AdjointBasisOrRestricted{<:AbstractFiniteDifferences},
                     D::QuasiDiagonal,
                     B::BasisOrRestricted{<:AbstractFiniteDifferences})
    T = promote_type(eltype(Ac), eltype(D), eltype(B))
    A = parent(Ac)
    parent(A) == parent(B) ||
        throw(ArgumentError("Cannot multiply functions on different grids"))
    Ai,Bi = indices(A,2), indices(B,2)
    if Ai == Bi
        Diagonal(Vector{T}(undef, length(Ai)))
    else
        m,n = length(Ai),length(Bi)
        offset = Ai[1]-Bi[1]
        BandedMatrix{T}(undef, (m,n), (-offset,offset))
    end
end
```

Then we get

```julia
julia> x² = B'QuasiDiagonal(x.^2)*B
5×5 Diagonal{Float64,Array{Float64,1}}:
 2.24932e-314   ⋅             ⋅             ⋅             ⋅
  ⋅            2.27894e-314   ⋅             ⋅             ⋅
  ⋅             ⋅            2.21545e-314   ⋅             ⋅
  ⋅             ⋅             ⋅            2.21545e-314   ⋅
  ⋅             ⋅             ⋅             ⋅            2.21545e-314
```
|
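The index logic of that allocation can be paraphrased without any of the Julia machinery: if both restrictions select the same columns, the result is purely diagonal; otherwise the non-zeros sit on a single band whose offset is the difference of the starting indices. A sketch in plain Python (`result_bandwidths` is my name, not an API of any of these packages):

```python
# Paraphrase of the branch in the @simplify method above: equal restrictions
# give a diagonal result; unequal ones give a single stored band whose
# offset is the difference between the two starting column indices.

def result_bandwidths(Ai, Bi):
    """(lower, upper) bandwidths of the materialized A'*D*B block."""
    if Ai == Bi:
        return (0, 0)              # purely diagonal
    offset = Ai[0] - Bi[0]         # mirrors `offset = Ai[1]-Bi[1]` in the Julia code
    return (-offset, offset)       # only one stored band, at that offset

assert result_bandwidths(range(2, 5), range(2, 5)) == (0, 0)
assert result_bandwidths(range(3, 6), range(2, 5)) == (-1, 1)  # shifted by one column
```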
But that only returns an uninitialized matrix? I definitely expect

```julia
5×5 Diagonal{Float64,Array{Float64,1}}:
 1.0   ⋅    ⋅     ⋅     ⋅
  ⋅   4.0   ⋅     ⋅     ⋅
  ⋅    ⋅   9.0    ⋅     ⋅
  ⋅    ⋅    ⋅   16.0    ⋅
  ⋅    ⋅    ⋅     ⋅   25.0
```

but I still want to separate the implementations of allocating the output matrix (currently via …) … So, should … |
No: more like

```julia
B'QuasiDiagonal(x.^2)*B -> *(*(B', QuasiDiagonal(x.^2)), B)
                        -> mul(ApplyQuasiArray(*, B', QuasiDiagonal(x.^2)), B)
                        -> _simplify(*, B', QuasiDiagonal(x.^2), B)
```

Debuggers help explain it; here's a partial walk-through, where at the start …

```julia
In *(A, B) at /Users/sheehanolver/.julia/packages/QuasiArrays/9YqBZ/src/matmul.jl:32
>32  *(A::AbstractQuasiArray, B::AbstractQuasiArray) = mul(A, B)

In mul(A, B) at /Users/sheehanolver/.julia/packages/ArrayLayouts/wTf1Q/src/mul.jl:106
>106 @inline mul(A, B) = materialize(Mul(A,B))

In materialize(M) at /Users/sheehanolver/.julia/packages/ArrayLayouts/wTf1Q/src/mul.jl:105
>105 materialize(M::Mul) = copy(instantiate(M))

In copy(M) at /Users/sheehanolver/.julia/packages/LazyArrays/DfNL4/src/linalg/mul.jl:302
>302 @inline copy(M::Mul{<:AbstractLazyLayout,<:AbstractLazyLayout}) = simplify(M)

In simplify(M) at /Users/sheehanolver/.julia/packages/LazyArrays/DfNL4/src/linalg/mul.jl:298
>298 simplify(M::Mul) = simplify(*, M.A, M.B)

In simplify(#unused#, args) at /Users/sheehanolver/.julia/packages/LazyArrays/DfNL4/src/linalg/mul.jl:288
>288 @inline simplify(::typeof(*), args...) = _simplify(*, _flatten(args...)...)
```

And the body of the function is then implemented as …
|
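The flattening step is the key part of that chain: binary `*` builds a nested two-argument product, and the flatten step turns it into the n-ary argument list that a single three-argument rule can match. A toy paraphrase in plain Python (`Mul` and `flatten` are my names, not the actual LazyArrays API):

```python
# Toy sketch of the dispatch chain above: binary * builds a nested lazy
# product, and materialization flattens it into one n-ary call that a
# single 3-argument simplify rule can match.

class Mul:
    def __init__(self, *args):
        self.args = args

def flatten(args):
    """Recursively splice nested Mul arguments into one flat list."""
    out = []
    for a in args:
        out.extend(flatten(a.args) if isinstance(a, Mul) else [a])
    return out

expr = Mul(Mul("B'", "QuasiDiagonal(x.^2)"), "B")   # *(*(B', D), B)
assert flatten([expr]) == ["B'", "QuasiDiagonal(x.^2)", "B"]
```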
Ok, so this works, but is this the preferred implementation pattern now?

CompactBases.jl/src/fd_operators.jl, lines 1 to 61 in 6a4874f

If so, I would need to generalize … Incidentally, I think that the current definition of … since https://github.com/JuliaMatrices/ArrayLayouts.jl/blob/master/src/mul.jl#L1-L4 |
Hmm, yes, I think you are right that … Perhaps best to avoid … So you can do the same thing with an arbitrary number of arguments with an overload of … Note the … |
Gotcha. I think the least disruptive for me is to adapt the … |
There is a ridiculous amount of breakage: https://travis-ci.org/github/JuliaApproximation/CompactBases.jl/jobs/726268359#L12836

For instance, we now trigger some internal compiler error: https://travis-ci.org/github/JuliaApproximation/CompactBases.jl/jobs/726268359#L10684

One really bad regression with respect to restricted bases is the following:

```julia
julia> N = 10
10

julia> ρ = 1.0
1.0

julia> R = FiniteDifferences(N, ρ)
Finite differences basis {Float64} on 0.0..11.0 with 10 points spaced by Δx = 1.0

julia> r = axes(R, 1)
Inclusion(0.0..11.0)

julia> sel = 3:6
3:6

julia> R̃ = R[:, sel]
Finite differences basis {Float64} on 0.0..11.0 with 10 points spaced by Δx = 1.0, restricted to basis functions 3..6 ⊂ 1..10

julia> x = QuasiDiagonal(r)
QuasiDiagonal{Float64,Inclusion{Float64,IntervalSets.Interval{:closed,:closed,Float64}}}(Inclusion(0.0..11.0))

julia> R̃'*x*R̃
4×4 BandedMatrices.BandedMatrix{Float64,Array{Float64,2},Base.OneTo{Int64}}:
 3.0   ⋅    ⋅    ⋅
  ⋅   4.0   ⋅    ⋅
  ⋅    ⋅   5.0   ⋅
  ⋅    ⋅    ⋅   6.0
```

This should be a … I assume this is the same (or related to) problem as I noted above, i.e. that the restrictions are dropped and the vectors zero-padded. This problem is exacerbated when using FE-DVR, since the derivative matrices are no longer …:

```julia
julia> N = 10
10

julia> t = range(0,stop=2,length=N)
0.0:0.2222222222222222:2.0

julia> R = FEDVR(t, 7)
FEDVR{Float64} basis with 9 elements on 0.0..2.0

julia> R̃ = R[:, 2:end-1]
FEDVR{Float64} basis with 9 elements on 0.0..2.0, restricted to elements 1:9, basis functions 2..54 ⊂ 1..55

julia> D = Derivative(axes(R̃, 1))
Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}(Inclusion(0.0..2.0))

julia> R̃'*D*R̃
53×53 Array{Float64,2}:
   0.0       24.9049   -10.8404     6.92802   -5.42022    3.47715    0.0      …    0.0       0.0        0.0       0.0       0.0       0.0
 -24.9049     0.0       19.196     -9.59798    6.92802   -4.33262    0.0           0.0       0.0        0.0       0.0       0.0       0.0
  10.8404   -19.196      0.0       19.196    -10.8404     6.36396    0.0           0.0       0.0        0.0       0.0       0.0       0.0
  -6.92802    9.59798  -19.196      0.0       24.9049   -11.9814     0.0           0.0       0.0        0.0       0.0       0.0       0.0
   5.42022   -6.92802   10.8404   -24.9049     0.0       37.4844     0.0           0.0       0.0        0.0       0.0       0.0       0.0
  -3.47715    4.33262   -6.36396   11.9814   -37.4844     0.0       37.4844   …    0.0       0.0        0.0       0.0       0.0       0.0
   0.0        0.0        0.0        0.0        0.0      -37.4844     0.0           0.0       0.0        0.0       0.0       0.0       0.0
   0.0        0.0        0.0        0.0        0.0       11.9814   -24.9049        0.0       0.0        0.0       0.0       0.0       0.0
   0.0        0.0        0.0        0.0        0.0       -6.36396   10.8404        0.0       0.0        0.0       0.0       0.0       0.0
   0.0        0.0        0.0        0.0        0.0        4.33262   -6.92802       0.0       0.0        0.0       0.0       0.0       0.0
   0.0        0.0        0.0        0.0        0.0       -3.47715    5.42022  …    0.0       0.0        0.0       0.0       0.0       0.0
   0.0        0.0        0.0        0.0        0.0        2.25      -3.47715       0.0       0.0        0.0       0.0       0.0       0.0
   0.0        0.0        0.0        0.0        0.0        0.0        0.0           0.0       0.0        0.0       0.0       0.0       0.0
   0.0        0.0        0.0        0.0        0.0        0.0        0.0           0.0       0.0        0.0       0.0       0.0       0.0
   0.0        0.0        0.0        0.0        0.0        0.0        0.0           0.0       0.0        0.0       0.0       0.0       0.0
   0.0        0.0        0.0        0.0        0.0        0.0        0.0      …    0.0       0.0        0.0       0.0       0.0       0.0
   0.0        0.0        0.0        0.0        0.0        0.0        0.0           0.0       0.0        0.0       0.0       0.0       0.0
   0.0        0.0        0.0        0.0        0.0        0.0        0.0           0.0       0.0        0.0       0.0       0.0       0.0
   ⋮                                            ⋮                             ⋱                          ⋮
   0.0        0.0        0.0        0.0        0.0        0.0        0.0           0.0       0.0        0.0       0.0       0.0       0.0
   0.0        0.0        0.0        0.0        0.0        0.0        0.0           0.0       0.0        0.0       0.0       0.0       0.0
   0.0        0.0        0.0        0.0        0.0        0.0        0.0           0.0       0.0        0.0       0.0       0.0       0.0
   0.0        0.0        0.0        0.0        0.0        0.0        0.0           0.0       0.0        0.0       0.0       0.0       0.0
   0.0        0.0        0.0        0.0        0.0        0.0        0.0      …    0.0       0.0        0.0       0.0       0.0       0.0
   0.0        0.0        0.0        0.0        0.0        0.0        0.0          -2.25      0.0        0.0       0.0       0.0       0.0
   0.0        0.0        0.0        0.0        0.0        0.0        0.0           3.47715   0.0        0.0       0.0       0.0       0.0
   0.0        0.0        0.0        0.0        0.0        0.0        0.0          -4.33262   0.0        0.0       0.0       0.0       0.0
   0.0        0.0        0.0        0.0        0.0        0.0        0.0           6.36396   0.0        0.0       0.0       0.0       0.0
   0.0        0.0        0.0        0.0        0.0        0.0        0.0      …  -11.9814    0.0        0.0       0.0       0.0       0.0
   0.0        0.0        0.0        0.0        0.0        0.0        0.0          37.4844    0.0        0.0       0.0       0.0       0.0
   0.0        0.0        0.0        0.0        0.0        0.0        0.0           0.0      37.4844   -11.9814    6.36396  -4.33262   3.47715
   0.0        0.0        0.0        0.0        0.0        0.0        0.0         -37.4844    0.0       24.9049  -10.8404    6.92802  -5.42022
   0.0        0.0        0.0        0.0        0.0        0.0        0.0          11.9814  -24.9049     0.0      19.196    -9.59798   6.92802
   0.0        0.0        0.0        0.0        0.0        0.0        0.0      …   -6.36396  10.8404   -19.196     0.0      19.196   -10.8404
   0.0        0.0        0.0        0.0        0.0        0.0        0.0           4.33262  -6.92802    9.59798 -19.196     0.0      24.9049
   0.0        0.0        0.0        0.0        0.0        0.0        0.0          -3.47715   5.42022   -6.92802  10.8404  -24.9049    0.0
```
|
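What the 4×4 result above should look like is easy to state in isolation: restricting a diagonal operator with the *same* column selection on both sides only extracts a sub-block of the diagonal, so the result should stay diagonal. A plain-Python sketch of that expectation (`grid` and `sel` are my stand-ins for R's grid points and the selection `3:6`):

```python
# Restricting the multiplication operator x with the same selection on both
# sides extracts a sub-block of a diagonal matrix, so the result is still
# diagonal: entries 3.0, 4.0, 5.0, 6.0 on the diagonal, zeros elsewhere.

grid = [float(i) for i in range(1, 11)]   # the 10 grid points of R, Δx = 1.0
sel = range(2, 6)                         # 0-based version of sel = 3:6

sub = [[(grid[i] if i == j else 0.0) for j in sel] for i in sel]

assert [sub[k][k] for k in range(4)] == [3.0, 4.0, 5.0, 6.0]
assert all(sub[a][b] == 0.0 for a in range(4) for b in range(4) if a != b)
```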
I'll go through and try to fix some of this. Note … It's impossible to tell at compile time that the restriction is the same on the left and right.... |
Sure, but it does not need to be known at compile time IMHO, since you don't create the matrices over and over again in a computation; you might update the matrix, but you allocate it once at the beginning of the calculation. For this reason I think it is fine if … What I want to be certain of is that the restriction "stays" with the basis object, and does not split into basis + restriction matrix. |
If it is helpful to you, I could get rid of the … |
But then, if it's not so important, we can just stick to a banded matrix and make sure a fast path is taken when that banded matrix is diagonal.... this seems more robust. I can manage with … |
Your other broken example is not working for me:

```julia
julia> R̃'*D*R̃
QuasiArrays.ApplyQuasiArray{Float64,2,typeof(*),Tuple{Adjoint{Int64,BandedMatrices.BandedMatrix{Int64,FillArrays.Ones{Int64,2,Tuple{Base.OneTo{Int64},Base.OneTo{Int64}}},Base.OneTo{Int64}}},QuasiArrays.QuasiAdjoint{Float64,FEDVR{Float64,Float64,FillArrays.Fill{Int64,1,Tuple{Base.OneTo{Int64}}}}},Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}},FEDVR{Float64,Float64,FillArrays.Fill{Int64,1,Tuple{Base.OneTo{Int64}}}},BandedMatrices.BandedMatrix{Int64,FillArrays.Ones{Int64,2,Tuple{Base.OneTo{Int64},Base.OneTo{Int64}}},Base.OneTo{Int64}}}}(*, ([0 1 … 0 0; 0 0 … 0 0; … ; 0 0 … 0 0; 0 0 … 1 0], QuasiArrays.QuasiAdjoint{Float64,FEDVR{Float64,Float64,FillArrays.Fill{Int64,1,Tuple{Base.OneTo{Int64}}}}}(FEDVR{Float64} basis with 9 elements on 0.0..2.0), Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}(Inclusion(0.0..2.0)), FEDVR{Float64} basis with 9 elements on 0.0..2.0, [0 0 … 0 0; 1 0 … 0 0; … ; 0 0 … 0 1; 0 0 … 0 0]))
```

But I suspect we need to support … |
Sorry, had forgotten to push the last commit. Try again? |
Indeed, but I wanted to avoid that multiplication altogether. |
Right, this is the issue that we always want the block structure of … There is a way around this, but we need to decide on the syntax. Perhaps

```julia
view(A, Dirichlet())
```

or

```julia
view(A, :, InheritBlocks(2:end-1))
```
Ok, so how about something like this:
What do you think about this? Should this be implemented in CompactBases.jl or ContinuumArrays.jl? |
Yes macro makes sense, maybe |
@mortenpi looked a bit at this PR, and found your commit afee6cb, which set the … FWIW, he tried to add

```julia
ContinuumArrays.MemoryLayout(::Type{<:AdjointBasisOrRestricted{<:FEDVR}}) = ContinuumArrays.BasisLayout()
```

but that did not change things. I feel that the issue of the blocked axes is slightly decoupled from how the memory layout is applied, i.e. if a … |
It should be … I'd have to see the error to say anything more. |
The error is still there even with

```julia
ContinuumArrays.MemoryLayout(::Type{<:BasisOrRestricted{<:FEDVR}}) = ContinuumArrays.BasisLayout()
ContinuumArrays.MemoryLayout(::Type{<:AdjointBasisOrRestricted{<:FEDVR}}) = ContinuumArrays.AdjointBasisLayout()
```

and it throws …, which gets thrown here: https://github.com/JuliaApproximation/CompactBases.jl/blob/dl/newquasiarrays/src/materialize_dsl.jl#L22

I definitely see the following …:

```julia
_simplify(
    ::typeof(*),
    ::Adjoint{Int64,BandedMatrix{Int64,Ones{Int64,2,Tuple{Base.OneTo{Int64},Base.OneTo{Int64}}},Base.OneTo{Int64}}},
    ::QuasiAdjoint{Float64,FEDVR{Float64,Float64,Fill{Int64,1,Tuple{Base.OneTo{Int64}}}}},
    ::QuasiAdjoint{Float64,Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}},
    ::Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}},
    ::SubQuasiArray{Float64,2,FEDVR{Float64,Float64,Fill{Int64,1,Tuple{Base.OneTo{Int64}}}},Tuple{Inclusion{Float64,IntervalSets.Interval{:closed,:closed,Float64}},UnitRange{Int64}},false}
) at /home/mortenpi/Projects/continuumarrays/dev/LazyArrays/src/linalg/mul.jl:294
```

which means that the … In this case though, arguably, one mistake is perhaps in the signature of the 3- and 4-argument …

Full stacktrace: …

And, for completeness, what I call is

```julia
R = FEDVR(range(0,stop=20,length=5), 4)
R̃ = R[:,2:12]

function call_for_debugging(r)
    D = Derivative(axes(r,1))
    Dc = D'
    rc = r'
    *(rc, Dc, D, r)
end

call_for_debugging(R̃)
```

Disclaimer: I am not at all qualified to hack on this code, so apologies for any silliness on my part. I figured I'd try to hack on this PR a bit to get at least some sense of how ContinuumArrays and the underlying packages work. My longer-term aim is to provide a proper ContinuumArrays-compatible interface to a 1D FEM package that I work with. Stefanos helped me get started with that, but it's currently stuck in a bit of a version hell, and would benefit from a new CompactBases release. |
I think it's probably unrelated to this PR, but another thing I noticed is that trying to restrict a basis by indexing into it with a … I.e. calling this (on top of this PR, without any custom modifications):

```julia
R = FEDVR(range(0,stop=20,length=5), 4)

function call_for_debugging(r)
    D = Derivative(axes(r,1))
    Dc = D'
    rc = r'
    *(rc, Dc, D, r)
end

call_for_debugging(R[:,[1,2,3,4,5]])
```

throws … |
I’ll look into this today |
I'm looking at this right now. First, I think it's a mistake to leave …

```julia
julia> (D * R)[0.1,1]
ERROR: MethodError: no method matching iterate(::IntervalSets.ClosedInterval{Float64})
```

I therefore think it would be better to have this return a type … Though I'll see if I can get it working as-is. |
It's fixed!

```julia
julia> call_for_debugging(R[:,[1,2,3,4,5]])
3×3-blocked 5×5 BlockBandedMatrices.BlockSkylineMatrix{Float64, Vector{Float64}, BlockBandedMatrices.BlockSkylineSizes{Tuple{BlockArrays.BlockedUnitRange{Vector{Int64}}, BlockArrays.BlockedUnitRange{Vector{Int64}}}, Vector{Int64}, Vector{Int64}, BandedMatrices.BandedMatrix{Int64, Matrix{Int64}, Base.OneTo{Int64}}, Vector{Int64}}}:
 -2.08       1.04721   -0.152786  │   0.0565685  │    ⋅
  1.04721   -0.8        0.4       │  -0.108036   │    ⋅
 -0.152786   0.4       -0.8       │   0.740492   │    ⋅
 ─────────────────────────────────┼──────────────┼──────────
  0.0565685 -0.108036   0.740492  │  -2.08       │   0.740492
 ─────────────────────────────────┼──────────────┼──────────
   ⋅          ⋅          ⋅        │   0.740492   │  -0.8
```

Will try to get the fixes tagged today. |
Would this be a new type in CompactBases.jl? Or is this a common pattern that should be available upstream? |
It would be a new type in CompactBases.jl, and possibly with a more descriptive name, as I suspect the derivative has simple structure. E.g. for low order FEM the derivatives are just constants in each element. |
I wonder, should we take the plunge and move to Github Actions for CI? |
Yes since travis is being turned off |
So, the tests pass (except the doctests – it seems that failed doctests lead to the deploy step being skipped); should we merge now and release 0.2.0, and make the block structure of the axes and the … |
Amazing! All for it |
Yep, will do tomorrow, after the doctests issue has been sorted out, @mortenpi is here to help 😄 |
Woohoo! Thanks @dlfivefifty and @mortenpi |