
Embedded + Autodiff #67

Open

JordiManyer opened this issue Jul 12, 2024 · 2 comments
Labels
notes Developer notes

Comments

@JordiManyer
Collaborator

JordiManyer commented Jul 12, 2024

So here are some notes on the implementation of autodiff for embedded triangulations. A similar discussion on this can be found here.

Overview of the problem:

We distinguish two cases:

  1. When differentiating the boundary, the triangulations change, i.e. some interior cells become cut, some cut cells become interior, etc.
  2. The above does not happen, i.e. all interior, cut and exterior cells remain the same. In this case, the domain is modified by moving the cuts around within their cut cells, i.e. the number of sub-cells and their reference geometry stay the same, but their physical areas are different.

Case 1:

  • Since the number/type of interior/exterior/cut cells changes, quadrature points are effectively created and/or destroyed.
  • Therefore, there is no 1-1 correspondence between quadrature points in the original and updated measure.
  • This makes it (I think) impossible to propagate dual numbers.

Case 2:

  • Quadratures in the sub-cell reference spaces are the same. Differentiability is given by a change in the geometric maps going from the sub-cell reference spaces to their parent cut cell.
  • Duality can be embedded within those geometric maps, which are built from the values of our level set.
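
As a rough sketch of how Case 2 is exercised in practice (the mesh, level set and most names below are made up for illustration; DifferentiableTriangulation is the wrapper that carries the duals through the sub-cell geometric maps, as used later in this thread):

using Gridap, GridapEmbedded, GridapTopOpt

# Illustrative background mesh and level-set space (values are arbitrary)
model = CartesianDiscreteModel((0,1,0,1),(40,40))
order = 1
V_φ   = TestFESpace(model,ReferenceFE(lagrangian,Float64,order))
φh    = interpolate(x -> sqrt((x[1]-0.5)^2+(x[2]-0.5)^2) - 0.3, V_φ)

# Cut the background mesh with the discrete level set
cutgeo = cut(model,DiscreteGeometry(φh,model))

# Wrap the embedded boundary so that its geometric maps carry dual numbers w.r.t. φ
Γ  = DifferentiableTriangulation(EmbeddedBoundary(cutgeo),V_φ)
dΓ = Measure(Γ,2order)

# Differentiate a simple functional of the embedded boundary w.r.t. the level set
dJ  = ∇(φ -> ∫(1.0)dΓ, φh)
dJh = assemble_vector(dJ,V_φ)
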
@JordiManyer JordiManyer added the notes Developer notes label Jul 12, 2024
@JordiManyer
Collaborator Author

Current state:

  • We are currently only considering Case 2, for the above reasons. This is also what theoretical papers do (see here), which is very reasonable since otherwise things are not differentiable.
  • We can ensure we always fall under Case 2 by avoiding φh(p) = 0 for all mesh nodes p. During level-set updates, we can always add artificial advection (of order ε) in the direction of movement that ensures we do not move our level set right on top of a mesh node. A hypothetical sketch of such a nudge is given after this list.
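
A purely hypothetical sketch of such a nudge on the nodal values (this helper is not part of the codebase; the actual approach adds the order-ε artificial advection during the level-set update):

# Hypothetical helper: push nodal level-set values that are numerically zero away
# from zero, so that no background mesh node sits exactly on the interface and we
# stay in Case 2.
function avoid_zero_nodes!(φ::AbstractVector{<:Real}, ε=1e-10)
  for i in eachindex(φ)
    if abs(φ[i]) < ε
      φ[i] = φ[i] < 0 ? -ε : ε
    end
  end
  return φ
end

# e.g. avoid_zero_nodes!(get_free_dof_values(φh))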

We have the code working for simplicial meshes. Moving to QUAD/HEX meshes, however, requires some modification of the cutting algorithm:

  • Currently, QUADs are simplexified and then cut using a marching-tetrahedra algorithm. This creates bad cuts when the level set is not linear, because our FE space is locally Q1 (instead of P1, which is what the algorithm assumes). A standalone illustration of this mismatch is given after this list.

  • To remedy this, we need to implement a marching cubes algorithm. This would allow us to cut QUADs directly, instead of having to go through a mesh of simplices first. (TODO)
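
To make the Q1-vs-P1 mismatch concrete, here is a small standalone illustration (plain Julia, not GridapTopOpt code) of how the marching-tetrahedra surrogate misplaces the cut on a single quad:

# A locally Q1 level set contains the bilinear term x*y, which the P1 surrogate
# used by marching tetrahedra on the simplexified quad cannot represent.
φ(x) = 0.25 - x[1]*x[2]      # Q1 on the unit quad: φ((0,0)) = 0.25, φ((1,1)) = -0.75

# Restrict to the diagonal (t,t) introduced by the simplexification:
φ_true(t) = 0.25 - t^2       # true Q1 restriction: zero at t = 0.5
φ_p1(t)   = 0.25 - t         # P1 interpolant of the endpoint values: zero at t = 0.25

# The marching-tetrahedra cut lands at (0.25,0.25) instead of (0.5,0.5).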

@zjwegert
Owner

zjwegert commented Nov 17, 2024

This comment will serve as a note regarding additional issues with compatibility between GridapTopOpt and GridapEmbedded, and other issues that arise in the implementation of unfitted problems. We are actively investigating these.

Distributed issues

  • There appears to be a bug in distributed ODEs that gives AssertionError: You are trying to set a value that is not stored in the local portion. See scripts\Embedded\Bugs\odes_skeleton_distributed_bug.jl.

Jordi: I have not solved this one yet, but here is an explanation and workaround:

  • ODEs allocates the residual and Jacobian separately.
  • This is fine in serial, but in parallel there are some instances where the following happens:
  • The residual is touched by fewer ghost entries than the columns of the matrix.
  • If we assemble both the Jacobian and the residual together, we communicate the extra ghost ids to the residual, so everything is consistent.
  • However, if we assemble the residual and Jacobian separately, the residual is not aware of the extra ghost ids.

This happens when there are touched ghost entries that do not belong to the local domain.
In particular, this happens when we have jumps, where some contributions come from two cells away. Boundary cells then get contributions from cells which are not in the local domain.
The workaround simply consists of touching the correct entries in the residual, as done in the script.
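
To illustrate the distinction, here is a minimal sketch (a, l, U, V stand for the bilinear/linear forms and the distributed trial/test spaces; this is not the workaround used in the script, which instead touches the missing ghost entries of the residual directly):

# Joint assembly: ghost ids touched by the matrix columns are also communicated
# to the vector, so both objects end up with a consistent ghost layout.
A, b = assemble_matrix_and_vector(a, l, U, V)

# Separate assembly: the vector only sees the ghost entries touched by the
# residual itself, which can be fewer than those touched by the matrix columns
# when jump terms bring in contributions from two cells away. This is where the
# AssertionError above shows up.
A = assemble_matrix(a, U, V)
b = assemble_vector(l, V)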

Autodiff/analytic gradient issues

  • We don't currently support differentiation of the interface normal, e.g., of φ -> ∫(f ⋅ n_Γ)dΓ. I believe we should see approximate equality between dfh and dfh2 in the script below. The analytic derivatives in the unfitted framework need checking as well.
f(x) = VectorValue(x[1],x[2])
df = ∇(φ->∫(f ⋅ n_Γ)dΓ,φh)
dfh = assemble_vector(df,V_φ)
_n(∇φ) = ∇φ/(1e-20 + norm(∇φ))
df2 = ∇(φ->∫(f ⋅ (_n ∘ (∇(φ))))dΓ,φh)
dfh2 = assemble_vector(df2,V_φ)
  • Analytic derivatives for the above
  • Analytic differentiation of φ -> ∫(∇(fh)⋅∇(fh))dΓ fails with ERROR: This function belongs to an interface definition and cannot be used. The MWE can be found in Bugs/analytic_gradient_error.jl. In short,
dJ_int_exact2(w) = ∫((-n_Γ ⋅ ∇(g(fh)))*w/(norm ∘ (∇(φh))))dΓ +
                  ∫(-n_S_Λ ⋅ (jump(g(fh)*m_k_Λ) * mean(w) / ∇ˢφ_Λ))dΛ +
                  ∫(-n_S_Σ ⋅ (g(fh)*m_k_Σ * w / ∇ˢφ_Σ))dΣ
dJ_int_exact_vec2 = assemble_vector(dJ_int_exact2,V_φ)

fails for g(fh) = ∇(fh)⋅∇(fh).

  • Differentiation of functionals that include derivatives, e.g., φ -> ∫(∇(fh)⋅∇(fh))dΩin and φ -> ∫(∇(fh)⋅∇(fh))dΓ, will cause an error. This is due to isbitstype being true for ForwardDiff.Dual. This is currently fixed by
# Overloads forcing the non-isbits code path (Val(false)) when evaluating
# gradients of a MonomialBasis at dual-number points.
function Arrays.return_cache(
  fg::Fields.FieldGradientArray{1,Polynomials.MonomialBasis{D,V}},
  x::AbstractVector{<:Point}) where {D,V}
  xi = testitem(x)
  T = gradient_type(V,xi)
  Polynomials._return_cache(fg,x,T,Val(false))
end

function Arrays.evaluate!(
  cache,
  fg::Fields.FieldGradientArray{1,Polynomials.MonomialBasis{D,V}},
  x::AbstractVector{<:Point}) where {D,V}
  Polynomials._evaluate!(cache,fg,x,Val(false))
end
Related error
  ERROR: MethodError: no method matching pos_neg_data(::Gridap.Arrays.LazyArray{…}, ::Gridap.Arrays.LazyArray{…})
  
  Closest candidates are:
    pos_neg_data(::AbstractArray{<:AbstractArray{<:Number}}, ::Gridap.Arrays.PosNegPartition)
     @ Gridap C:\Users\n10915192\.julia\dev\Gridap\src\Geometry\Triangulations.jl:328
    pos_neg_data(::AbstractArray, ::Gridap.Arrays.PosNegPartition)
     @ Gridap C:\Users\n10915192\.julia\dev\Gridap\src\Geometry\Triangulations.jl:317
  
  Stacktrace:
    [1] lazy_map(k::Reindex{Gridap.Arrays.LazyArray{…}}, ids::Gridap.Arrays.LazyArray{FillArrays.Fill{…}, Int32, 1, Tuple{…}})
      @ GridapTopOpt c:\Users\n10915192\.julia\dev\GridapTopOpt\src\GridapExtensions.jl:129
    [2] autodiff_array_gradient(a::Gridap.FESpaces.var"#g#65"{}, i_to_x::Gridap.Arrays.LazyArray{…}, j_to_i::Gridap.Arrays.LazyArray{…})
      @ Gridap.Arrays C:\Users\n10915192\.julia\dev\Gridap\src\Arrays\Autodiff.jl:29
    [3] _gradient(f::Function, uh::Gridap.FESpaces.SingleFieldFEFunction{GenericCellField{…}}, fuh::DomainContribution)
      @ Gridap.FESpaces C:\Users\n10915192\.julia\dev\Gridap\src\FESpaces\FEAutodiff.jl:23
    [4] gradient(f::GridapTopOpt.var"#_f#23"{}, uh::Gridap.FESpaces.SingleFieldFEFunction{…})
      @ Gridap.FESpaces C:\Users\n10915192\.julia\dev\Gridap\src\FESpaces\FEAutodiff.jl:5
    [5] gradient(F::Function, uh::Vector{Gridap.FESpaces.SingleFieldFEFunction{GenericCellField{ReferenceDomain}}}, K::Int64)
      @ GridapTopOpt c:\Users\n10915192\.julia\dev\GridapTopOpt\src\ChainRules.jl:20
    [6] StateParamIntegrandWithMeasure(F::Function, U::TrialFESpace{…}, V_φ::Gridap.FESpaces.UnconstrainedFESpace{…}, U_reg::TrialFESpace{…}, assem_U::Gridap.FESpaces.GenericSparseMatrixAssembler, assem_deriv::Gridap.FESpaces.GenericSparseMatrixAssembler)
      @ GridapTopOpt c:\Users\n10915192\.julia\dev\GridapTopOpt\src\ChainRules.jl:88
    [7] StateParamIntegrandWithMeasure
      @ c:\Users\n10915192\.julia\dev\GridapTopOpt\src\ChainRules.jl:344 [inlined]
    [8] #326
      @ c:\Users\n10915192\.julia\dev\GridapTopOpt\scripts\Embedded\Examples\2d_thermal_L.jl:92 [inlined]
    [9] iterate
      @ .\generator.jl:47 [inlined]
   [10] _collect(c::Vector{…}, itr::Base.Generator{…}, ::Base.EltypeUnknown, isz::Base.HasShape{…})
      @ Base .\array.jl:854
   [11] collect_similar(cont::Vector{typeof(Vol)}, itr::Base.Generator{Vector{typeof(Vol)}, var"#326#328"{AffineFEStateMap{…}}})
      @ Base .\array.jl:763
   [12] map(f::Function, A::Vector{typeof(Vol)})
      @ Base .\abstractarray.jl:3285
   [13] (::var"#325#327")(::EmbeddedDiscretization{2, Float64})
      @ Main c:\Users\n10915192\.julia\dev\GridapTopOpt\scripts\Embedded\Examples\2d_thermal_L.jl:89
   [14] update_collection!(c::EmbeddedCollection, φh::Gridap.FESpaces.SingleFieldFEFunction{GenericCellField{ReferenceDomain}})
      @ GridapTopOpt c:\Users\n10915192\.julia\dev\GridapTopOpt\src\Embedded\EmbeddedCollections.jl:50
   [15] EmbeddedCollection(recipes::var"#325#327", bgmodel::UnstructuredDiscreteModel{…}, φ0::Gridap.FESpaces.SingleFieldFEFunction{…})
      @ GridapTopOpt c:\Users\n10915192\.julia\dev\GridapTopOpt\src\Embedded\EmbeddedCollections.jl:41
   [16] top-level scope
      @ c:\Users\n10915192\.julia\dev\GridapTopOpt\scripts\Embedded\Examples\2d_thermal_L.jl:84
  • In FCM, the forward problem space has the same number of cells as the level-set function space. As a result, the fix in matching_level_set does not work.
  • There are two cases where the evaluation of the Jacobian does not work. See Bugs/autodiff_jacobian.jl. This can be recreated as follows:
# Error 1
Γ = DifferentiableTriangulation(EmbeddedBoundary(cutgeo),V_φ)
dΓ = Measure(Γ,2get_order(V_φ))
dj(u,du,v) = jacobian(u->∫(u*v)dΓ,u)
dj(φh, get_trial_fe_basis(V_φ), get_fe_basis(V_φ))

# Error 2
Λ = SkeletonTriangulation(Triangulation(model))
dΛ = Measure(Λ,2*order)
dj(u,du,v) = jacobian(u->∫(jump(∇(u)) ⋅ jump(∇(v)))dΛ,u)
dj(φh, get_trial_fe_basis(V_φ), get_fe_basis(V_φ))

AgFEM issues

  • There are instances where aggregate (AgFEM) fails to aggregate all cells and returns an error.
  • TODO: Make max_iters = 20 a parameter of the AggregateByThreshold struct. A hypothetical sketch of this change is given after the related error below.
Related error
  ERROR: AssertionError: all_aggregated
  Stacktrace:
    [1] _aggregate_by_threshold_barrier(threshold::Float64, cell_to_unit_cut_meas::Gridap.Arrays.LazyArray{…}, facet_to_inoutcut::Vector{…}, cell_to_inoutcut::Vector{…}, loc::Int64, cell_to_coords::Gridap.Arrays.LazyArray{…}, cell_to_faces::Gridap.Arrays.Table{…}, face_to_cells::Gridap.Arrays.Table{…})
      @ GridapEmbedded.AgFEM C:\Users\n10915192\.julia\dev\GridapEmbedded\src\AgFEM\CellAggregation.jl:178
    [2] _aggregate_by_threshold(threshold::Float64, cut::EmbeddedDiscretization{…}, geo::DiscreteGeometry{…}, loc::Int64, facet_to_inoutcut::Vector{…})
      @ GridapEmbedded.AgFEM C:\Users\n10915192\.julia\dev\GridapEmbedded\src\AgFEM\CellAggregation.jl:123
    [3] aggregate(strategy::AggregateCutCellsByThreshold, cut::EmbeddedDiscretization{…}, geo::DiscreteGeometry{…}, in_or_out::Int64)
      @ GridapEmbedded.AgFEM C:\Users\n10915192\.julia\dev\GridapEmbedded\src\AgFEM\CellAggregation.jl:80
    [4] aggregate(strategy::AggregateCutCellsByThreshold, cut::EmbeddedDiscretization{…}, geo::DiscreteGeometry{…})
      @ GridapEmbedded.AgFEM C:\Users\n10915192\.julia\dev\GridapEmbedded\src\AgFEM\CellAggregation.jl:7
    [5] aggregate(strategy::AggregateCutCellsByThreshold, cut::EmbeddedDiscretization{2, Float64})
      @ GridapEmbedded.AgFEM C:\Users\n10915192\.julia\dev\GridapEmbedded\src\AgFEM\CellAggregation.jl:3
    [6] (::var"#333#335")(cutgeo::EmbeddedDiscretization{2, Float64})
      @ Main c:\Users\n10915192\.julia\dev\GridapTopOpt\scripts\Embedded\Examples\2d_thermal_AgFEM.jl:79
    [7] update_collection!(c::EmbeddedCollection, φh::Gridap.FESpaces.SingleFieldFEFunction{GenericCellField{ReferenceDomain}})
      @ GridapTopOpt c:\Users\n10915192\.julia\dev\GridapTopOpt\src\Embedded\EmbeddedCollections.jl:50
    [8] evaluate!(pcf::EmbeddedPDEConstrainedFunctionals{…}, φh::Gridap.FESpaces.SingleFieldFEFunction{…}; update_space::Bool)
      @ GridapTopOpt c:\Users\n10915192\.julia\dev\GridapTopOpt\src\ChainRules.jl:1102
    [9] evaluate!(pcf::EmbeddedPDEConstrainedFunctionals{…}, φh::Gridap.FESpaces.SingleFieldFEFunction{…})
      @ GridapTopOpt c:\Users\n10915192\.julia\dev\GridapTopOpt\src\ChainRules.jl:1101
   [10] iterate(m::AugmentedLagrangian, state::@NamedTuple{})
      @ GridapTopOpt c:\Users\n10915192\.julia\dev\GridapTopOpt\src\Optimisers\AugmentedLagrangian.jl:201
   [11] top-level scope
      @ c:\Users\n10915192\.julia\dev\GridapTopOpt\scripts\Embedded\Examples\2d_thermal_AgFEM.jl:122
  Some type information was truncated. Use `show(err)` to see complete types.
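
A hypothetical sketch of the max_iters change mentioned above (the field name and keyword constructor are assumptions, not current GridapEmbedded API):

# Sketch only: expose the aggregation iteration limit instead of hard-coding
# max_iters = 20 inside the aggregation loop.
struct AggregateByThreshold
  threshold :: Float64
  max_iters :: Int   # assumed new field
end
AggregateByThreshold(threshold; max_iters=20) = AggregateByThreshold(threshold, max_iters)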

Other issues/TODOs

  • Caching for Gridap.ODEs evolution method has been disabled as we are not correctly updating the stiffness matrix/Jacobian etc. See Bugs/odes_reassemble_stiffness.jl
  • Related to Tagging Isolated Volumes gridap/GridapEmbedded.jl#52: isolated volumes will cause a SingularException in the forward problem. We need a way to mark islands when using CutFEM/AgFEM.
  • Investigate use of Jacobian for reinitialisation instead of Picard iterations.
  • CutFEM evolution: the sparsity pattern of the Jacobian changes with the level-set function because of the ∫(γg*h^2*jump(∇(u) ⋅ n_Γg)*jump(∇(v) ⋅ n_Γg))dΓg term, where dΓg is the measure on the ghost skeleton. As such, the first time we compute the operator we use the full background-mesh skeleton, so that the sparsity pattern of the Jacobian is the worst-case one. Subsequent iterations re-use this sparsity pattern but integrate over less of the skeleton. The trade-off here is memory cost vs. integration; a rough sketch of the worst-case first assembly is given after this list.
  • Currently, we use a modified version of RungeKutta (MutableRungeKutta) so that dt can change between iterations of the optimiser. This could be replaced with a more general ODESolver wrapper instead.
  • Add more tests
  • Test preprocessed refined mesh with Gmsh
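
Regarding the CutFEM evolution item above, a rough sketch of the worst-case first assembly (names such as model, order, γg and h are assumptions; the ghost-penalty term is the one quoted in the item):

# First assembly: integrate the ghost-penalty term over the FULL background-mesh
# skeleton, so the Jacobian sparsity pattern covers every configuration the
# moving level set can later produce. Subsequent assemblies reuse this pattern
# but only integrate over the actual ghost skeleton of the current cut.
Γg_full = SkeletonTriangulation(Triangulation(model))
dΓg     = Measure(Γg_full,2order)
n_Γg    = get_normal_vector(Γg_full)
j_ghost(u,v) = ∫(γg*h^2*jump(∇(u) ⋅ n_Γg)*jump(∇(v) ⋅ n_Γg))dΓg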
