refactor: trigger build with latest Lux #882

Merged · 57 commits · Oct 17, 2024
Changes from 51 commits
e18da4a
chore: update to latest Lux
avik-pal Aug 21, 2024
29abc3b
test: run tests enabling type-instability checks
avik-pal Aug 21, 2024
dca06c9
chore: bump minimum versions
avik-pal Aug 21, 2024
b5b790b
adapt fix
KirillZubov Sep 19, 2024
8c73deb
undo NNPDE_tests.jl
KirillZubov Sep 19, 2024
05a167d
Revert "adapt fix"
KirillZubov Sep 19, 2024
16a8bd0
Array{eltypeθ}
KirillZubov Sep 19, 2024
f8d8cc1
Revert "Array{eltypeθ}"
KirillZubov Sep 23, 2024
7d6c27d
convert
KirillZubov Sep 23, 2024
b3ed008
add convert topost adapt
KirillZubov Sep 24, 2024
380260c
fix ε_
KirillZubov Sep 24, 2024
37c8677
fix
KirillZubov Sep 24, 2024
93d7839
fix convert
KirillZubov Sep 24, 2024
904231d
update convert, adapt
KirillZubov Oct 9, 2024
5fcd52e
update project.toml
KirillZubov Oct 9, 2024
2b565c3
update Project
KirillZubov Oct 9, 2024
93f00c3
update Project
KirillZubov Oct 9, 2024
77f3077
update Project
KirillZubov Oct 9, 2024
3274cb5
update
KirillZubov Oct 10, 2024
052dd9a
fix
KirillZubov Oct 10, 2024
7c7cde8
typos
KirillZubov Oct 10, 2024
7055632
fix: dependencies
avik-pal Oct 13, 2024
e095e4b
test: merge qa testing
avik-pal Oct 13, 2024
9bd5e28
chore: run formatter
avik-pal Oct 13, 2024
648b251
fix: BPINN ODE testing
avik-pal Oct 13, 2024
8a0b1d3
fix: update minimum versions
avik-pal Oct 13, 2024
52ab564
refactor: update DGM implementation
avik-pal Oct 13, 2024
33e6a8c
test: mark weighted training test as broken
avik-pal Oct 14, 2024
e23096d
refactor: remove junk boilerplate
avik-pal Oct 14, 2024
196a252
fix: element type handling
avik-pal Oct 14, 2024
63062cc
fix: incorrect DGM architecture
avik-pal Oct 14, 2024
0f14720
refactor: rearrange exports
avik-pal Oct 14, 2024
b837ca0
test: run logging with non-error depwarn
avik-pal Oct 14, 2024
a08c0b9
fix: forward tests
avik-pal Oct 14, 2024
b2d9ca6
fix: downgrade testing
avik-pal Oct 14, 2024
f64cd21
refactor: cleanup NNODE
avik-pal Oct 14, 2024
7e0e580
refactor: use explicit imports
avik-pal Oct 14, 2024
c98c2b9
refactor: cleanup NNDAE
avik-pal Oct 14, 2024
62a16fb
feat: bring back NNRODE
avik-pal Oct 15, 2024
c297480
refactor: cleanup PINN code
avik-pal Oct 15, 2024
ff778e4
fix: eltype conversions in IntegroDiff
avik-pal Oct 15, 2024
b76d8a0
refactor: cleanup neural adapter code
avik-pal Oct 15, 2024
22c316b
refactor: bayesian PINN ODEs
avik-pal Oct 15, 2024
93d270e
fix: missing NNRODE tests
avik-pal Oct 15, 2024
7535186
fix: try fixing more tests
avik-pal Oct 15, 2024
6b319cf
fix: different device handling
avik-pal Oct 16, 2024
5ec1710
fix: Bayesian PINN
avik-pal Oct 16, 2024
cc55660
test: try reducing maxiters
avik-pal Oct 16, 2024
095027e
refactor: more cleanup of neural adapter
avik-pal Oct 16, 2024
df6c2b5
docs: update compat
avik-pal Oct 16, 2024
b5a4171
docs: cleanup
avik-pal Oct 16, 2024
4d732b5
refactor: cleanup of deps a bit
avik-pal Oct 16, 2024
84a0366
fix: allow scalar for number types
avik-pal Oct 16, 2024
9efad5e
fix: neural adapter tests
avik-pal Oct 16, 2024
c483d1c
fix: final round of cleanup
avik-pal Oct 16, 2024
4294558
fix: remove incorrect NNRODE implementation
avik-pal Oct 16, 2024
3085ef5
refactor: remove NeuralPDELogging in-favor of extension (#901)
avik-pal Oct 16, 2024
3 changes: 2 additions & 1 deletion .JuliaFormatter.toml
@@ -1,2 +1,3 @@
style = "sciml"
format_markdown = true
format_markdown = true
annotate_untyped_fields_with_any = false
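For context, `annotate_untyped_fields_with_any` controls whether the formatter rewrites untyped struct fields. The sketch below illustrates the documented behavior of the option; it is not code from this PR:

```julia
# With annotate_untyped_fields_with_any = true, the formatter
# rewrites untyped struct fields to carry an explicit ::Any:
struct Foo
    x::Any   # originally written as `x`; formatter added `::Any`
end

# With annotate_untyped_fields_with_any = false (this PR's setting),
# untyped fields are left exactly as written:
struct Bar
    x        # stays untyped
end
```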
3 changes: 2 additions & 1 deletion .github/workflows/Downgrade.yml
@@ -22,6 +22,7 @@ jobs:
- PDEBPINN
- NNPDE1
- NNPDE2
- NNRODE
- AdaptiveLoss
- Logging
- Forward
@@ -30,7 +31,7 @@
- NeuralAdapter
- IntegroDiff
version:
- "1"
- "1.10"
steps:
- uses: actions/checkout@v4
- uses: julia-actions/setup-julia@v2
4 changes: 4 additions & 0 deletions .github/workflows/Tests.yml
@@ -23,12 +23,15 @@ jobs:
strategy:
fail-fast: false
matrix:
version:
- "1.10"
group:
- "QA"
- "ODEBPINN"
- "PDEBPINN"
- "NNPDE1"
- "NNPDE2"
- "NNRODE"
- "AdaptiveLoss"
- "Logging"
- "Forward"
@@ -40,4 +43,5 @@
with:
group: "${{ matrix.group }}"
coverage-directories: "src,lib/NeuralPDELogging/src"
julia-version: "${{ matrix.version }}"
secrets: "inherit"
79 changes: 50 additions & 29 deletions Project.toml
@@ -4,13 +4,14 @@ authors = ["Chris Rackauckas <[email protected]>"]
version = "5.16.0"

[deps]
ADTypes = "47edcb42-4c32-4615-8424-f2b9edc5f35b"
Adapt = "79e6a3ab-5dfb-504d-930d-738a2a938a0e"
AdvancedHMC = "0bf59076-c3b1-5ca4-86bd-e02cd72cde3d"
ArrayInterface = "4fba245c-0d91-5ea0-9b3e-6abc04ee57a9"
ChainRulesCore = "d360d2e6-b24c-11e9-a2a3-2a2ae2dbcce4"
ComponentArrays = "b0b7db55-cfe3-40fc-9ded-d10e2dbeff66"
ConcreteStructs = "2569d6c7-a4a2-43d3-a901-331e8e4be471"
Cubature = "667455a9-e2ce-5579-9412-b964f529a492"
DiffEqNoiseProcess = "77a26b50-5914-5dd7-bc55-306e6241c503"
Distributions = "31c24e10-a181-5473-b8eb-7969acd0382f"
DocStringExtensions = "ffbed154-4ef7-542d-bbb7-c09d3a79fcae"
DomainSets = "5b8099bc-c8ec-5219-889f-1d9e522a28bf"
@@ -20,81 +21,101 @@
Integrals = "de52edbc-65ea-441a-8357-d3a637375a31"
LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
LogDensityProblems = "6fdf6af0-433a-55f7-b3ed-c6c6e0b8df7c"
Lux = "b2108857-7c20-44ae-9111-449ecde12c47"
LuxCore = "bb33d45b-7691-41d6-9220-0943567d0623"
MCMCChains = "c7f686f2-ff18-58e9-bc7b-31028e88f75d"
MLDataDevices = "7e8f7934-dd98-4c1a-8fe8-92b47a384d40"
ModelingToolkit = "961ee093-0014-501f-94e3-6117800e7a78"
MonteCarloMeasurements = "0987c9cc-fe09-11e8-30f0-b96dd679fdca"
Optim = "429524aa-4258-5aef-a3af-852621145aeb"
Optimisers = "3bd65402-5787-11e9-1adc-39752487f4e2"
Optimization = "7f7a1694-90dd-40f0-9382-eb1efda571ba"
OptimizationOptimisers = "42dfb2eb-d2b4-4451-abcd-913932933ac1"
Printf = "de0858da-6303-5e67-8744-51eddeeeb8d7"
QuasiMonteCarlo = "8a4e6c94-4038-4cdc-81c3-7e6ffdb2a71b"
Random = "9a3f8284-a2c9-5f02-9a11-845980a1fd5c"
RecursiveArrayTools = "731186ca-8d62-57ce-b412-fbd966d074cd"
Reexport = "189a3867-3050-52da-a836-e630ba90ab69"
RuntimeGeneratedFunctions = "7e49a35a-f44a-4d26-94aa-eba1b4ca6b47"
SciMLBase = "0bca4576-84f4-4d90-8ffe-ffa030f20462"
Statistics = "10745b16-79ce-11e8-11f9-7d13ad32a3b2"
SymbolicIndexingInterface = "2efcf032-c050-4f8e-a9bb-153293bab1f5"
SymbolicUtils = "d1185830-fcd6-423d-90d6-eec64667417b"
Symbolics = "0c5d862f-8b57-4792-8d23-62f2024744c7"
UnPack = "3a884ed6-31ef-47d7-9d2a-63182c4928ed"
WeightInitializers = "d49dbf32-c5c2-4618-8acc-27bb2598ef2d"
Zygote = "e88e6eb3-aa80-5325-afca-941959d7151f"

[compat]
ADTypes = "1.9.0"
Adapt = "4"
AdvancedHMC = "0.6.1"
Aqua = "0.8"
ArrayInterface = "7.9"
CUDA = "5.3"
ArrayInterface = "7.11"
CUDA = "5.5.2"
ChainRulesCore = "1.24"
ComponentArrays = "0.15.14"
ComponentArrays = "0.15.16"
ConcreteStructs = "0.2.3"
Cubature = "1.5"
DiffEqNoiseProcess = "5.20"
Distributions = "0.25.107"
DocStringExtensions = "0.9.3"
DomainSets = "0.6, 0.7"
Flux = "0.14.11"
DomainSets = "0.7"
ExplicitImports = "1.10.1"
Flux = "0.14.22"
ForwardDiff = "0.10.36"
Functors = "0.4.10"
Integrals = "4.4"
LineSearches = "7.2"
LinearAlgebra = "1"
Functors = "0.4.12"
Integrals = "4.5"
LineSearches = "7.3"
LinearAlgebra = "1.10"
LogDensityProblems = "2"
Lux = "0.5.58"
LuxCUDA = "0.3.2"
Lux = "1.1.0"
LuxCUDA = "0.3.3"
LuxCore = "1.0.1"
LuxLib = "1.3.2"
MCMCChains = "6"
MethodOfLines = "0.11"
ModelingToolkit = "9.9"
MLDataDevices = "1.2.0"
MethodOfLines = "0.11.6"
ModelingToolkit = "9.46"
MonteCarloMeasurements = "1.1"
Optim = "1.7.8"
Optimization = "3.24, 4"
OptimizationOptimJL = "0.2.1"
OptimizationOptimisers = "0.2.1, 0.3"
OrdinaryDiffEq = "6.74"
Pkg = "1"
Optimisers = "0.3.3"
Optimization = "4"
OptimizationOptimJL = "0.4"
OptimizationOptimisers = "0.3"
OrdinaryDiffEq = "6.87"
Pkg = "1.10"
Printf = "1.10"
QuasiMonteCarlo = "0.3.2"
Random = "1"
RecursiveArrayTools = "3.27.0"
Reexport = "1.2"
RuntimeGeneratedFunctions = "0.5.12"
SafeTestsets = "0.1"
SciMLBase = "2.28"
SciMLBase = "2.56"
Statistics = "1.10"
SymbolicUtils = "1.5, 2, 3"
Symbolics = "5.27.1, 6"
Test = "1"
UnPack = "1"
Zygote = "0.6.69"
StochasticDiffEq = "6.69.1"
SymbolicIndexingInterface = "0.3.31"
SymbolicUtils = "3.7.2"
Symbolics = "6.14"
Test = "1.10"
WeightInitializers = "1.0.3"
Zygote = "0.6.71"
julia = "1.10"

[extras]
Aqua = "4c88cf16-eb10-579e-8560-4a9242c79595"
CUDA = "052768ef-5323-5732-b1bb-66c8b64840ba"
DiffEqNoiseProcess = "77a26b50-5914-5dd7-bc55-306e6241c503"
ExplicitImports = "7d51a73a-1435-4ff3-83d9-f097790105c7"
Flux = "587475ba-b771-5e3f-ad9e-33799f191a9c"
LineSearches = "d3d80556-e9d4-5f37-9878-2ab0fcc64255"
LuxCUDA = "d0bbae9a-e099-4d5b-a835-1c6931763bda"
LuxCore = "bb33d45b-7691-41d6-9220-0943567d0623"
LuxLib = "82251201-b29d-42c6-8e01-566dec8acb11"
MethodOfLines = "94925ecb-adb7-4558-8ed8-f975c56a0bf4"
OptimizationOptimJL = "36348300-93cb-4f02-beb5-3c3902f8871e"
OrdinaryDiffEq = "1dea7af3-3e70-54e6-95c3-0bf5283fa5ed"
Pkg = "44cfe95a-1eb2-52ea-b672-e2afdf69b78f"
SafeTestsets = "1bc83da4-3b8d-516f-aca4-4fe02f6d838f"
StochasticDiffEq = "789caeaf-c7a9-5a7d-9973-96adeb23e2a0"
Test = "8dfed614-e22c-5e08-85e1-65c5234f0b40"

[targets]
test = ["Aqua", "Test", "CUDA", "SafeTestsets", "OptimizationOptimJL", "Pkg", "OrdinaryDiffEq", "LineSearches", "LuxCUDA", "Flux", "MethodOfLines"]
test = ["Aqua", "CUDA", "DiffEqNoiseProcess", "ExplicitImports", "Flux", "LineSearches", "LuxCUDA", "LuxCore", "LuxLib", "MethodOfLines", "OptimizationOptimJL", "OrdinaryDiffEq", "Pkg", "SafeTestsets", "StochasticDiffEq", "Test"]
16 changes: 8 additions & 8 deletions docs/Project.toml
@@ -35,20 +35,20 @@ DiffEqBase = "6.148"
Distributions = "0.25.107"
Documenter = "1"
DomainSets = "0.6, 0.7"
Flux = "0.14.11"
Flux = "0.14.17"
Integrals = "4"
LineSearches = "7.2"
Lux = "0.5.22"
Lux = "1"
LuxCUDA = "0.3.2"
MethodOfLines = "0.11"
ModelingToolkit = "9.7"
MonteCarloMeasurements = "1"
NeuralPDE = "5.14"
Optimization = "3.24, 4"
OptimizationOptimJL = "0.2.1, 0.3, 0.4"
OptimizationOptimisers = "0.2.1, 0.3"
OptimizationPolyalgorithms = "0.2"
OrdinaryDiffEq = "6.74"
NeuralPDE = "5"
Optimization = "4"
OptimizationOptimJL = "0.4"
OptimizationOptimisers = "0.3"
OptimizationPolyalgorithms = "0.3"
OrdinaryDiffEq = "6.87"
Plots = "1.36"
QuasiMonteCarlo = "0.3.2"
Random = "1"
6 changes: 3 additions & 3 deletions docs/src/examples/3rd.md
@@ -36,18 +36,18 @@ bcs = [u(0.0) ~ 0.0,
domains = [x ∈ Interval(0.0, 1.0)]

# Neural network
chain = Lux.Chain(Dense(1, 8, Lux.σ), Dense(8, 1))
chain = Chain(Dense(1, 8, σ), Dense(8, 1))

discretization = PhysicsInformedNN(chain, QuasiRandomTraining(20))
@named pde_system = PDESystem(eq, bcs, domains, [x], [u(x)])
prob = discretize(pde_system, discretization)

callback = function (p, l)
println("Current loss is: $l")
(p.iter % 500 == 0 || p.iter == 2000) && println("Current loss is: $l")
return false
end

res = Optimization.solve(prob, OptimizationOptimisers.Adam(0.01); maxiters = 2000)
res = solve(prob, OptimizationOptimisers.Adam(0.01); maxiters = 2000, callback)
phi = discretization.phi
```
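The callback change in this hunk is a pattern repeated throughout these docs: Optimization.jl v4 passes the callback a state object whose `iter` field replaces hand-rolled global counters. A minimal sketch of the migrated pattern (the inner loss functions here are placeholders for illustration, not code from this PR):

```julia
# Placeholder inner loss functions, for illustration only.
pde_losses = [u -> sum(abs2, u)]

callback = function (p, l)
    # `p.iter` is the optimizer's iteration count; `p.u` holds the
    # current parameters. No `global iteration` counter is needed.
    if p.iter % 500 == 0
        println("iter: ", p.iter, "  loss: ", l)
        println("pde_losses: ", map(f -> f(p.u), pde_losses))
    end
    return false  # returning true would halt the optimization early
end
```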

13 changes: 4 additions & 9 deletions docs/src/examples/complex.md
@@ -5,10 +5,7 @@ NeuralPDE supports training PINNs with complex differential equations. This exam
As the input to this neural network is time which is real, we need to initialize the parameters of the neural network with complex values for it to output and train with complex values.

```@example complex
using Random, NeuralPDE
using OrdinaryDiffEq
using Lux, OptimizationOptimisers
using Plots
using Random, NeuralPDE, OrdinaryDiffEq, Lux, OptimizationOptimisers, Plots
rng = Random.default_rng()
Random.seed!(100)

@@ -30,11 +27,9 @@ parameters = [2.0, 0.0, 1.0]

problem = ODEProblem(bloch_equations, u0, time_span, parameters)

chain = Lux.Chain(
Lux.Dense(1, 16, tanh;
init_weight = (rng, a...) -> Lux.kaiming_normal(rng, ComplexF64, a...)),
Lux.Dense(
16, 4; init_weight = (rng, a...) -> Lux.kaiming_normal(rng, ComplexF64, a...))
chain = Chain(
Dense(1, 16, tanh; init_weight = kaiming_normal(ComplexF64)),
Dense(16, 4; init_weight = kaiming_normal(ComplexF64))
)
ps, st = Lux.setup(rng, chain)

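The `init_weight` simplification above leans on WeightInitializers.jl's partial-application convention: calling an initializer with only an element type returns a closure expecting `(rng, dims...)`, so the verbose anonymous functions become unnecessary. A hedged sketch, assuming the WeightInitializers.jl API that ships with Lux 1.x:

```julia
using WeightInitializers, Random

rng = Random.default_rng()

# Partial application: fixing only the eltype returns an init function...
init = kaiming_normal(ComplexF64)

# ...which Lux later invokes as init(rng, dims...):
W = init(rng, 16, 1)
@assert eltype(W) == ComplexF64 && size(W) == (16, 1)
```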
8 changes: 3 additions & 5 deletions docs/src/examples/heterogeneous.md
@@ -31,11 +31,9 @@ domains = [x ∈ Interval(0.0, 1.0),
y ∈ Interval(0.0, 1.0)]

numhid = 3
chains = [[Lux.Chain(Dense(1, numhid, Lux.σ), Dense(numhid, numhid, Lux.σ),
Dense(numhid, 1)) for i in 1:2]
[Lux.Chain(Dense(2, numhid, Lux.σ), Dense(numhid, numhid, Lux.σ),
Dense(numhid, 1)) for i in 1:2]]
discretization = NeuralPDE.PhysicsInformedNN(chains, QuadratureTraining())
chains = [[Chain(Dense(1, numhid, σ), Dense(numhid, numhid, σ), Dense(numhid, 1)) for i in 1:2]
[Chain(Dense(2, numhid, σ), Dense(numhid, numhid, σ), Dense(numhid, 1)) for i in 1:2]]
discretization = PhysicsInformedNN(chains, QuadratureTraining())

@named pde_system = PDESystem(eq, bcs, domains, [x, y], [p(x), q(y), r(x, y), s(y, x)])
prob = SciMLBase.discretize(pde_system, discretization)
7 changes: 3 additions & 4 deletions docs/src/examples/ks.md
@@ -53,14 +53,13 @@ bcs = [u(x, 0) ~ u_analytic(x, 0),
Dx(u(10, t)) ~ du(10, t)]

# Space and time domains
domains = [x ∈ Interval(-10.0, 10.0),
t ∈ Interval(0.0, 1.0)]
domains = [x ∈ Interval(-10.0, 10.0), t ∈ Interval(0.0, 1.0)]
# Discretization
dx = 0.4;
dt = 0.2;

# Neural network
chain = Lux.Chain(Dense(2, 12, Lux.σ), Dense(12, 12, Lux.σ), Dense(12, 1))
chain = Chain(Dense(2, 12, σ), Dense(12, 12, σ), Dense(12, 1))

discretization = PhysicsInformedNN(chain, GridTraining([dx, dt]))
@named pde_system = PDESystem(eq, bcs, domains, [x, t], [u(x, t)])
@@ -72,7 +71,7 @@ callback = function (p, l)
end

opt = OptimizationOptimJL.BFGS()
res = Optimization.solve(prob, opt; maxiters = 2000)
res = Optimization.solve(prob, opt; maxiters = 2000, callback)
phi = discretization.phi
```

9 changes: 4 additions & 5 deletions docs/src/examples/linear_parabolic.md
@@ -70,7 +70,7 @@ domains = [x ∈ Interval(0.0, 1.0),
# Neural network
input_ = length(domains)
n = 15
chain = [Lux.Chain(Dense(input_, n, Lux.σ), Dense(n, n, Lux.σ), Dense(n, 1)) for _ in 1:2]
chain = [Chain(Dense(input_, n, σ), Dense(n, n, σ), Dense(n, 1)) for _ in 1:2]

strategy = StochasticTraining(500)
discretization = PhysicsInformedNN(chain, strategy)
@@ -82,18 +82,17 @@ sym_prob = symbolic_discretize(pdesystem, discretization)
pde_inner_loss_functions = sym_prob.loss_functions.pde_loss_functions
bcs_inner_loss_functions = sym_prob.loss_functions.bc_loss_functions

global iteration = 0
callback = function (p, l)
if iteration % 10 == 0
if p.iter % 500 == 0
println("iter: ", p.iter)
println("loss: ", l)
println("pde_losses: ", map(l_ -> l_(p.u), pde_inner_loss_functions))
println("bcs_losses: ", map(l_ -> l_(p.u), bcs_inner_loss_functions))
end
global iteration += 1
return false
end

res = Optimization.solve(prob, OptimizationOptimisers.Adam(1e-2); maxiters = 10000)
res = solve(prob, OptimizationOptimisers.Adam(1e-2); maxiters = 5000, callback)

phi = discretization.phi

11 changes: 4 additions & 7 deletions docs/src/examples/nonlinear_elliptic.md
@@ -71,13 +71,12 @@ der_ = [Dy(u(x, y)) ~ Dyu(x, y),
bcs__ = [bcs_; der_]

# Space and time domains
domains = [x ∈ Interval(0.0, 1.0),
y ∈ Interval(0.0, 1.0)]
domains = [x ∈ Interval(0.0, 1.0), y ∈ Interval(0.0, 1.0)]

# Neural network
input_ = length(domains)
n = 15
chain = [Lux.Chain(Dense(input_, n, Lux.σ), Dense(n, n, Lux.σ), Dense(n, 1)) for _ in 1:6] # 1:number of @variables
chain = [Chain(Dense(input_, n, σ), Dense(n, n, σ), Dense(n, 1)) for _ in 1:6] # 1:number of @variables

strategy = GridTraining(0.01)
discretization = PhysicsInformedNN(chain, strategy)
@@ -91,19 +90,17 @@ pde_inner_loss_functions = sym_prob.loss_functions.pde_loss_functions
bcs_inner_loss_functions = sym_prob.loss_functions.bc_loss_functions[1:6]
approx_derivative_loss_functions = sym_prob.loss_functions.bc_loss_functions[7:end]

global iteration = 0
callback = function (p, l)
if iteration % 10 == 0
if p.iter % 10 == 0
println("loss: ", l)
println("pde_losses: ", map(l_ -> l_(p.u), pde_inner_loss_functions))
println("bcs_losses: ", map(l_ -> l_(p.u), bcs_inner_loss_functions))
println("der_losses: ", map(l_ -> l_(p.u), approx_derivative_loss_functions))
end
global iteration += 1
return false
end

res = Optimization.solve(prob, BFGS(); maxiters = 100)
res = solve(prob, BFGS(); maxiters = 100, callback)

phi = discretization.phi

4 changes: 2 additions & 2 deletions docs/src/examples/nonlinear_hyperbolic.md
@@ -81,7 +81,7 @@ domains = [t ∈ Interval(0.0, 1.0),
# Neural network
input_ = length(domains)
n = 15
chain = [Lux.Chain(Dense(input_, n, Lux.σ), Dense(n, n, Lux.σ), Dense(n, 1)) for _ in 1:2]
chain = [Chain(Dense(input_, n, σ), Dense(n, n, σ), Dense(n, 1)) for _ in 1:2]

strategy = QuadratureTraining()
discretization = PhysicsInformedNN(chain, strategy)
@@ -100,7 +100,7 @@ callback = function (p, l)
return false
end

res = Optimization.solve(prob, BFGS(linesearch = BackTracking()); maxiters = 200)
res = Optimization.solve(prob, BFGS(linesearch = BackTracking()); maxiters = 200, callback)

phi = discretization.phi
