added additional loss against data for NNODE #666
Conversation
This looks good, can you write some tests to ensure it's working? We need to test with each strategy. We will need documentation too. Where in the paper is this additional loss mentioned?
In the paper the loss has additional terms: Lgls and Lconstr (introduction, eq. 3 to 8). Lgls is defined against cell density predictions via a surrogate model MLP which takes in the data as measured u at (x, t); Lconstr performs parameter estimation for the growth, time delay and diffusion terms (the parameter functions are functions of the solution itself, which leads to issue #572).
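For context, a minimal sketch of what an Lgls-style data term could look like as an `additional_loss` for NNODE; the arrays `t_data`/`u_data` and the analytic stand-in for measured values are purely illustrative, not from the PR or the paper:

```julia
# Illustrative stand-ins for measured data u(t); real usage would supply
# experimental measurements (these names are hypothetical).
t_data = collect(0.0:0.1:1.0)
u_data = exp.(-t_data)

# Lgls-style data term: mean squared error between the NNODE trial
# solution phi(t, θ) and the measurements (scalar ODE assumed).
function additional_loss(phi, θ)
    return sum(abs2, [first(phi(t, θ)) for t in t_data] .- u_data) / length(u_data)
end
```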
Ah I see it, yep this looks like it implements what's missing, good job 👍
The test failure looks real.
src/ode_solve.jl
Outdated
function generate_loss(strategy::WeightedIntervalTraining, phi, f, autodiff::Bool, tspan, p,
                       batch)
    minT = tspan[1]
    maxT = tspan[2]

    weights = strategy.weights ./ sum(strategy.weights)

    N = length(weights)
    samples = strategy.samples

    difference = (maxT - minT) / N

    data = Float64[]
    for (index, item) in enumerate(weights)
        temp_data = rand(1, trunc(Int, samples * item)) .* difference .+ minT .+
Rebase issue? Rebase to the new master.
Rebase this onto the new master so that it's not dependent on any other PR (it's now merged).
The additional loss is a user-defined function, and there is now a common OptimizationFunction object definition (instead of one for each TrainingStrategy). Also updated the docs (might need further editing). Future scope: we can add weighted losses in NNODE, and even losses for parameter estimation in inverse problems.
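For clarity, a rough sketch of the shared-OptimizationFunction pattern described above, pieced together from the snippets quoted below; it is only illustrative and may not match the PR's final code exactly:

```julia
# One total_loss regardless of TrainingStrategy: the strategy determines
# only the inner residual loss and the AD backend.
inner_f = generate_loss(strategy, phi, f, autodiff, tspan, p, batch)
additional_loss = alg.additional_loss

function total_loss(θ, _)
    L2_loss = inner_f(θ, phi)
    if additional_loss !== nothing
        L2_loss = L2_loss + additional_loss(phi, θ)
    end
    return L2_loss
end

# A single OptimizationFunction is built from total_loss; only the AD choice varies.
opt_algo = strategy isa QuadratureTraining ? Optimization.AutoForwardDiff() :
           Optimization.AutoZygote()
optf = OptimizationFunction(total_loss, opt_algo)
```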
src/ode_solve.jl
Outdated
# additional loss
additional_loss = alg.additional_loss

# Creates OptimizationFunction Object from total_loss
function total_loss(θ, _)
    L2_loss = generate_loss(strategy, phi, f, autodiff, tspan, p, batch)(θ, phi)
Suggested change:

inner_f = generate_loss(strategy, phi, f, autodiff, tspan, p, batch)
additional_loss = alg.additional_loss

# Creates OptimizationFunction Object from total_loss
function total_loss(θ, _)
    L2_loss = inner_f(θ, phi)
src/ode_solve.jl
Outdated
opt_algo = if strategy isa QuadratureTraining
    Optimization.AutoForwardDiff()
elseif strategy isa StochasticTraining
    Optimization.AutoZygote()
elseif strategy isa WeightedIntervalTraining
    Optimization.AutoZygote()
else
    # by default GridTraining choice of Optimization
    # if adding new training algorithms we can extend this
    # if-elseif-else block for choices of optimization algos
    Optimization.AutoZygote()
end
Suggested change:

opt_algo = if strategy isa QuadratureTraining
    Optimization.AutoForwardDiff()
else
    Optimization.AutoZygote()
end
test/NNODE_tests.jl
Outdated
@test sol.errors[:l2] < 0.5
@test sol.errors[:l2]
?
Typo, my bad.
src/ode_solve.jl
Outdated
@@ -3,7 +3,7 @@ abstract type NeuralPDEAlgorithm <: DiffEqBase.AbstractODEAlgorithm end
 """
 ```julia
 NNODE(chain, opt=OptimizationPolyalgorithms.PolyOpt(), init_params = nothing;
-      autodiff=false, batch=0, kwargs...)
+      autodiff=false, batch=0,additional_loss=nothing,kwargs...)
Suggested change:

autodiff=false, batch=0,additional_loss=nothing,
kwargs...)
src/ode_solve.jl
Outdated
example:
ts = [t for t in 1:100]
(u_, t_) = (analytical_func(ts), ts)
function additional_loss(phi, θ)
    return sum(sum(abs2, [phi(t, θ) for t in t_] .- u_)) / length(u_)
end
This isn't in the right spot. Make an example section.
Under this argument's description, or along with the lower example?
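For reference, a minimal sketch of what a standalone example section could contain, assuming the `additional_loss` keyword as described in this PR; the ODE, data, chain, and solver settings below are illustrative and may differ from the final docs:

```julia
using NeuralPDE, Lux, OptimizationOptimisers, OrdinaryDiffEq

# Linear ODE u' = -u, u(0) = 1 on t ∈ (0, 1).
f(u, p, t) = -u
prob = ODEProblem(f, 1.0, (0.0, 1.0))

# "Measurements" generated from the analytical solution, standing in for real data.
t_ = collect(0.0:0.1:1.0)
u_ = exp.(-t_)

# Additional loss fitting the trial solution to the data points.
function additional_loss(phi, θ)
    return sum(abs2, [first(phi(t, θ)) for t in t_] .- u_) / length(u_)
end

chain = Lux.Chain(Lux.Dense(1, 16, tanh), Lux.Dense(16, 1))
alg = NNODE(chain, OptimizationOptimisers.Adam(0.01); additional_loss = additional_loss)
sol = solve(prob, alg; verbose = false, maxiters = 2000, dt = 1 / 20.0)
```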
src/ode_solve.jl
Outdated
return loss
Suggested change:

return loss
src/ode_solve.jl
Outdated
return loss
Suggested change:

return loss
Added an option for an additional loss against data for NNODE (used PhysicsInformedNN as a reference). Work in progress, as I'm getting some errors; would greatly appreciate any help. Thanks.
(Trying to solve issue #640, but NNODE doesn't allow additional losses against surrogate model predictions, nor does it allow parameter estimation, so this PR could help solve such ODE problems.)