parameters in ODE #41

Hi,

The vector fields of the ODE cannot have any parameters. For example, it can be useful to modify certain parameters of the vector field, e.g. `f(du,u,t,params)`. Do you recommend using const variables for the parameters, or is there another trick?

It's probably best to use const variables. There was a change near the end of v0.4.6 which altered the behavior of functions defined inside of functions. Before, when I tested

```julia
"""Example problem with solution ``u(t)=u₀*exp(α*t)``"""
function linearODEExample(;α=1,u₀=1/2)
  f(u,t) = α*u
  analytic(u₀,t) = u₀*exp(α*t)
  return(ODEProblem(f,u₀,analytic=analytic))
end
```

it used to be fine. However, after the change you can check

```julia
@code_llvm prob.f(1.2,.10)

# Output:
define double @julia_f_69844(%"#f#281"*, double, double) #0 {
top:
  %3 = getelementptr inbounds %"#f#281", %"#f#281"* %0, i64 0, i32 0
  %4 = load i64, i64* %3, align 8
  %5 = sitofp i64 %4 to double
  %6 = fmul double %5, %1
  ret double %6
}
```

which is not good. You can encapsulate it a bit:

```julia
function linearODEExample2()
  const α = 1.5; const u₀ = 1/2
  f(u,t) = α*u
  return(ODEProblem(f,u₀))
end

@code_llvm prob2.f(1.2,.10)

# Output:
define double @"julia_#3_70117"(%"##3#4"*, double, double) #0 {
top:
  %3 = getelementptr inbounds %"##3#4", %"##3#4"* %0, i64 0, i32 0
  %4 = load double, double* %3, align 8
  %5 = fmul double %4, %1
  ret double %5
}
```

which is better, but still not very good. Instead, a top-level anonymous function:

```julia
const α = 1.5; const u₀ = 1/2
g = (u,t) -> α*u

@code_llvm g(1.2,.10)

# Output:
define double @"julia_#7_70125"(double, double) #0 {
top:
  %2 = fmul double %0, 1.500000e+00
  ret double %2
}
```

or just a top-level function:

```julia
const α = 1.5; const u₀ = 1/2
f(u,t) = α*u
prob = ODEProblem(f,u₀)

@code_llvm prob.f(1.2,.10)

# Output:
define double @julia_f_69822(double, double) #0 {
top:
  %2 = fmul double %0, 1.500000e+00
  ret double %2
}
```

will do much better. (Because of this I will be changing my premade problems over to this style once I am done making the benchmarking functions, which is either tonight or tomorrow.) But there will be a `params` field for `DEProblem` subtypes coming soon, since it will be useful for sensitivity analysis and bifurcation plots. However, there will be no performance difference since, from the

That is tricky. I need the user to give me a vector field `F(du,u,t,uc)`, and I solve it in a loop (`while condition ... end`). Hence, I would have to redefine a new ODEProblem at each iteration because `uc` changes. If I use a const variable, the user must not have another variable called `uc` in their code. There is a trade-off to be found here between performance and usability. Can you advise me? Maybe named arguments or a variable argument list?

Oh! I thought you were talking about defining the function itself at the top level. If you're given parameters by the user, then closures are the way to go. From the second-to-last example you can see that anonymous functions don't incur a penalty as long as your types are well defined. Inside of a function, as long as you have type-stability, the types of the variables are well defined and so you "get all of the benefits of it being constant" without declaring them as constant. So doing something like:

```julia
function solve_thisthing(F::Function,uc::Float64,u0::Float64)
  f = (du,u,t) -> F(du,u,t,uc) # efficient since uc is type-stable and this is wrapped in a function
  prob = ODEProblem(f,u0)
  return solve(prob,[0,1])
end
```

That should be as efficient as it gets.

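For the loop scenario described a couple of comments up, a usage sketch might look like the following. This is only an illustration: `drive`, `update_uc`, and `condition` are hypothetical names introduced here, `F`, `u0`, and `uc` are the quantities from the question, and only `solve_thisthing` comes from the snippet above.

```julia
# Hypothetical driver loop: only the coupling value `uc` changes between
# iterations, so each call re-closes over the new value behind a function barrier.
function drive(F::Function, u0::Float64, uc0::Float64)
    uc = uc0
    condition = true
    while condition
        sol = solve_thisthing(F, uc, u0)    # rebuilds the closure and problem for the current uc
        uc, condition = update_uc(sol, uc)  # hypothetical update of uc and of the stopping test
    end
    return uc
end
```
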
I did a lot more experimenting. Check this out: https://groups.google.com/forum/#!topic/julia-users/QSuC7OPV9tk. I'll get to the bottom of this. It's important since, from what I can see, this is the last part of the ODE and SDE solvers which needs to be optimized, and it's what I thought would've been the easiest, but in reality it is turning out to have a very elusive solution...

Thank you for all this! You lost me a bit. Is your solution from above not satisfactory?

The end result is that the solution above, closures, is satisfactory. There are ways to do slightly better at the cost of a compilation. When I say slightly, I mean verrry slightly: the native code coming from a closure is pretty good (all it's missing is inlining). However, there are other reasons to allow for a parameter type. For example, someone could use it to store a cache inside of their algorithm (i.e. pre-allocate arrays inside the parameter type, and then re-use that array as a substitute for a temporary). I'll keep benchmarking things to find out what's the best way to do it, but likely you'll soon be able to define a `Parameters <: DEParameters` which you can pass along with your function.

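As an illustration of that cache idea, here is a minimal sketch in the Julia 0.4/0.5-era syntax used elsewhere in this thread. The names `DEParameters`, `CachedParameters`, and the `(t,u,du,p)` calling convention are taken from the discussion in this thread, not from a released API.

```julia
# Hypothetical parameter type carrying both a parameter and a pre-allocated
# work array, so the right-hand side can reuse it instead of allocating.
abstract DEParameters                  # assumed abstract supertype from the discussion

type CachedParameters <: DEParameters
  σ::Float64
  tmp::Vector{Float64}                 # cache, allocated once up front
end

# Right-hand side in the proposed (t,u,du,p) form: computes du .= -σ*u,
# routed through the cache purely to illustrate reuse of pre-allocated storage.
function f(t,u,du,p::CachedParameters)
  copy!(p.tmp, u)                      # reuse the cache as a temporary
  for i in eachindex(du)
    du[i] = -p.σ * p.tmp[i]
  end
end

p = CachedParameters(2.0, zeros(3))
```
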
One drawback of the above solution is that I have to allocate new variables at each iteration, whereas in-place modification should be possible.

Yes indeed. I am hoping to make all of the functions be `(t,u,du,p)` where `p` is a parameter type (which can really just be a type with any fields that hold parameters; you can make it immutable but have mutable parameters, and that would let you do in-place modifications). That should come together soon, but I can't find a method that lets me enclose `p` without much of a performance loss (see the newest post on that julia-users thread). But the master branch has something pretty cool. Right now I have implemented a macro such that you can do:

```julia
f = @ode_define begin
  dx = σ*(y-x)
  dy = x*(ρ-z) - y
  dz = x*y - β*z
end σ=>10. ρ=>28. β=>(8/3)
```

and it will expand out to be:

```julia
f = (t,u,du) -> begin
  du[1] = 10.0(u[2]-u[1])
  du[2] = u[1]*(28.0-u[3]) - u[2]
  du[3] = u[1]*u[2] - (8/3)*u[3]
end
```

Once I get the parameter closures right, I will change this to:

```julia
f = (t,u,du,p) -> begin
  du[1] = p.σ*(u[2]-u[1])
  du[2] = u[1]*(p.ρ-u[3]) - u[2]
  du[3] = u[1]*u[2] - p.β*u[3]
end

type fParameters <: DEParameters
  σ
  ρ
  β
end

p = fParameters(σ,ρ,β)
```

However, the only problem with this is that you can't keep redefining the type `fParameters`, so I need to find a slightly different design that gets close to maximal performance.

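To make the "immutable type with mutable parameters" remark above concrete, here is a minimal sketch, again in the era's syntax. `LorenzParameters` and the one-element vector fields are illustrative choices made here, not a design from the thread or a package API.

```julia
# Hypothetical design: the wrapper itself is immutable (so field access stays
# type-stable), but the fields are one-element vectors, so the values can be
# updated in place between solver calls without redefining any types.
immutable LorenzParameters
  σ::Vector{Float64}
  ρ::Vector{Float64}
  β::Vector{Float64}
end

p = LorenzParameters([10.0],[28.0],[8/3])

f = (t,u,du,p) -> begin
  du[1] = p.σ[1]*(u[2]-u[1])
  du[2] = u[1]*(p.ρ[1]-u[3]) - u[2]
  du[3] = u[1]*u[2] - p.β[1]*u[3]
end

p.ρ[1] = 30.0   # in-place parameter update; no new type or closure needed
```
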
With the creation of ParameterizedFunctions.jl, this is complete. It now allows you to have parameters, choose which ones to inline, and automatically calculate the Jacobian symbolically. This format works with the ODE solvers, but is also compatible with bifurcation plots, sensitivity analysis, and parameter inference. I think this is a really good solution, so this can finally be declared done.

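For reference, the package's Lorenz example looked roughly like the sketch below; the exact syntax may differ between versions of ParameterizedFunctions.jl, so treat this as an illustration rather than an authoritative usage.

```julia
using ParameterizedFunctions

# Sketch of the Lorenz example: `=>` keeps σ and ρ as runtime parameters, while
# `=` inlines β as a constant (the "choose which ones to inline" feature
# mentioned above). Syntax may vary by package version.
f = @ode_def LorenzExample begin
  dx = σ*(y-x)
  dy = x*(ρ-z) - y
  dz = x*y - β*z
end σ=>10.0 ρ=>28.0 β=(8/3)
```
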
Would it not be tricky to use it with Sundials, as it already accepts parameters?

It should just work with Sundials. It should just work anywhere that accepts a function.

More detail: a dispatch would need to be added to use its sensitivity analysis parts, but the general ODE solver should work.
