1 const RECOMPILE_BY_DEFAULT = true
2
3 """
4 $(TYPEDEF)
5
6 Supertype for the specialization types. Controls the compilation and
7 function specialization behavior of SciMLFunctions, ultimately controlling
8 the runtime vs compile-time trade-off.
9 """
10 abstract type AbstractSpecialization end
11
12 """
13 $(TYPEDEF)
14
15 The default specialization level for problem functions. `AutoSpecialize`
16 works by applying a function wrap just-in-time before the solve process
17 to disable just-in-time re-specialization of the solver to the specific
18 choice of model `f` and thus allow for using a cached solver compilation
19 from a different `f`. This wrapping process can lead to a small decrease in
20 runtime performance, with the benefit of a greatly decreased compile time.
21
22 ## Note About Benchmarking and Runtime Optimality
23
24 It is recommended that `AutoSpecialize` not be used in any benchmarking
25 due to the potential effect of function wrapping on runtimes. `AutoSpecialize`'s
26 use case is targeted at decreasing latency in interactive (REPL) usage and
27 not at cases where top runtime performance is required (such as in
28 optimization loops). Generally, for non-stiff equations the cost will be minimal
29 and potentially not even measurable. For stiff equations, function wrapping
30 has the limitation that only chunk size 1 Dual numbers are allowed, which
31 can decrease Jacobian construction performance.
32
33 ## Limitations of `AutoSpecialize`
34
35 The following limitations are not fundamental to the implementation of `AutoSpecialize`,
36 but are instead chosen as a compromise between default precompilation times and
37 ease of maintenance. Please open an issue to discuss lifting any potential
38 limitations.
39
40 * `AutoSpecialize` is only set up to wrap the functions from in-place ODEs. Other
41 cases are excluded for the time being due to time limitations.
42 * `AutoSpecialize` will only lead to compilation reuse if the ODEFunction's other
43 functions (such as jac and tgrad) are the default `nothing`. These could be
44 JIT wrapped as well in a future version.
45 * `AutoSpecialize`'d functions are only compatible with Jacobian calculations
46 performed with chunk size 1, and only with tag `DiffEqBase.OrdinaryDiffEqTag()`.
47 Thus ODE solvers written on the common interface must be careful to detect
48 the `AutoSpecialize` case and perform differentiation under these constraints,
49 use finite differencing, or manually unwrap before solving. This will lead
50 to decreased runtime performance for sufficiently large Jacobians.
51 * `AutoSpecialize` only wraps on Julia v1.8 and higher.
52 * `AutoSpecialize` does not handle cases with units. If unitful values are detected,
53 wrapping is automatically disabled.
54 * `AutoSpecialize` only wraps cases for which `promote_rule` is defined between `u0`
55 and dual numbers, `u0` and `t`, and for which `ArrayInterface.promote_eltype`
56 is defined on `u0` to dual numbers.
57 * `AutoSpecialize` only wraps cases for which `f.mass_matrix isa UniformScaling`, the
58 default.
59 * `AutoSpecialize` does not wrap cases where `f isa AbstractSciMLOperator`.
60 * By default, only the case where `u0 isa Vector{Float64}`, `eltype(tspan) == Float64`, and
61 `p isa Union{Vector{Float64}, SciMLBase.NullParameters}` is specialized
62 by the solver libraries. Other forms can be specialized with
63 `AutoSpecialize`, but this must be done in the precompilation of downstream libraries.
64 * `AutoSpecialize`d functions are manually unwrapped in adjoint methods in
65 SciMLSensitivity.jl in order to allow compiler support for automatic differentiation.
66 Improved versions of adjoints which decrease the recompilation surface will come
67 in non-breaking updates.
68
69 Cases where automatic wrapping is disabled are equivalent to `FullSpecialize`.
70
71 ## Example
72
73 ```
74 f(du,u,p,t) = (du .= u)
75
76 # Note this is the same as ODEProblem(f, [1.0], (0.0,1.0))
77 # If no preferences are set
78 ODEProblem{true, SciMLBase.AutoSpecialize}(f, [1.0], (0.0,1.0))
79 ```
80 """
81 struct AutoSpecialize <: AbstractSpecialization end
82
83 """
84 $(TYPEDEF)
85
86 `NoSpecialize` forces SciMLFunctions to not specialize on the types
87 of functions wrapped within it. This ultimately contributes to a
88 form such that every `prob.f` type is the same, meaning compilation
89 caches are fully reused, with the downside of losing runtime performance.
90 `NoSpecialize` is the form that most fully trades off runtime for compile
91 time. Unlike `AutoSpecialize`, `NoSpecialize` can be used with any
92 `SciMLFunction`.
93
94 ## Example
95
96 ```
97 f(du,u,p,t) = (du .= u)
98 ODEProblem{true, SciMLBase.NoSpecialize}(f, [1.0], (0.0,1.0))
99 ```
100 """
101 struct NoSpecialize <: AbstractSpecialization end
102
103 """
104 $(TYPEDEF)
105
106 `FunctionWrapperSpecialize` is an eager wrapping choice which
107 performs a function wrapping during the `ODEProblem` construction.
108 This performs the function wrapping at the earliest possible point,
109 giving the best compile-time vs runtime performance, but with the
110 difficulty that any usage of `prob.f` needs to account for the
111 function wrapper's presence. While optimal in a performance sense,
112 this method has many usability issues with nonstandard solvers
113 and analyses as it requires unwrapping before re-wrapping for any
114 type changes. Thus this method is not used by default. Given that
115 the compile-time difference from `AutoSpecialize` is almost undetectable,
116 this method is mostly used as a "speed of light" benchmarking reference
117 for `AutoSpecialize`.
118
119 ## Limitations of `FunctionWrapperSpecialize`
120
121 `FunctionWrapperSpecialize` has all of the limitations of `AutoSpecialize`,
122 but also includes the limitations:
123
124 * `prob.f` is directly specialized to the types of `(u,p,t)`, and any usage
125 of `prob.f` on other types first requires using
126 `SciMLBase.unwrapped_f(prob.f)` to remove the function wrapper.
127 * `FunctionWrapperSpecialize` can only be used by the `ODEProblem` constructor.
128 If an `ODEFunction` is being constructed, the user must manually use
129 `DiffEqBase.wrap_iip` on `f` before calling
130 `ODEFunction{true,FunctionWrapperSpecialize}(f)`. This is a fundamental
131 limitation of the approach as the types of `(u,p,t)` are required in the
132 construction process and not accessible in the `AbstractSciMLFunction` constructors.
133
134 ## Example
135
136 ```
137 f(du,u,p,t) = (du .= u)
138 ODEProblem{true, SciMLBase.FunctionWrapperSpecialize}(f, [1.0], (0.0,1.0))
139 ```
140 """
141 struct FunctionWrapperSpecialize <: AbstractSpecialization end
142
143 """
144 $(TYPEDEF)
145
146 `FullSpecialize` is an eager specialization choice which
147 directly types the `AbstractSciMLFunction` struct to match the type
148 of the model `f`. This forces recompilation of the solver for each
149 new function type `f`, leading to the highest compile times with the
150 benefit of having the best runtime performance.
151
152 `FullSpecialize` should be used in all cases where top runtime performance
153 is required, such as in long-running simulations and benchmarking.
154
155 ## Example
156
157 ```
158 f(du,u,p,t) = (du .= u)
159 ODEProblem{true, SciMLBase.FullSpecialize}(f, [1.0], (0.0,1.0))
160 ```
161 """
162 struct FullSpecialize <: AbstractSpecialization end
163
164 specstring = Preferences.@load_preference("SpecializationLevel", "AutoSpecialize")
165 if specstring ∉
166 ("NoSpecialize", "FullSpecialize", "AutoSpecialize", "FunctionWrapperSpecialize")
167 error("SpecializationLevel preference $specstring is not in the allowed set of choices (NoSpecialize, FullSpecialize, AutoSpecialize, FunctionWrapperSpecialize).")
168 end
169
170 const DEFAULT_SPECIALIZATION = getproperty(SciMLBase, Symbol(specstring))
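
# Illustrative note (not part of the original source): since the default above is read
# via Preferences.jl, a user project could in principle override it before SciMLBase is
# (re)loaded, e.g. with something like
#
#     using Preferences, SciMLBase
#     Preferences.set_preferences!(SciMLBase, "SpecializationLevel" => "NoSpecialize")
#
# after which `DEFAULT_SPECIALIZATION` would resolve to `NoSpecialize` in the next session.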
171
172 function DEFAULT_OBSERVED(sym, u, p, t)
173 error("Indexing symbol $sym is unknown.")
174 end
175
176 function DEFAULT_OBSERVED_NO_TIME(sym, u, p)
177 error("Indexing symbol $sym is unknown.")
178 end
179
180 function Base.summary(io::IO, prob::AbstractSciMLFunction)
181 type_color, no_color = get_colorizers(io)
182 print(io,
183 type_color, nameof(typeof(prob)),
184 no_color, ". In-place: ",
185 type_color, isinplace(prob),
186 no_color)
187 end
188
189 const NONCONFORMING_FUNCTIONS_ERROR_MESSAGE = """
190 Nonconforming functions detected. If a model function `f` is defined
191 as in-place, then all constituent functions like `jac` and `paramjac`
192 must be in-place (and vice versa with out-of-place). Detected that
193 some overloads did not conform to the same convention as `f`.
194 """
195
196 struct NonconformingFunctionsError <: Exception
197 nonconforming::Vector{String}
198 end
199
200 function Base.showerror(io::IO, e::NonconformingFunctionsError)
201 println(io, NONCONFORMING_FUNCTIONS_ERROR_MESSAGE)
202 print(io, "Nonconforming functions: ")
203 printstyled(io, e.nonconforming; bold = true, color = :red)
204 end
205
206 const INTEGRAND_MISMATCH_FUNCTIONS_ERROR_MESSAGE = """
207 Nonconforming functions detected. If an integrand function `f` is defined
208 as out-of-place (`f(u,p)`), then no `integrand_prototype` can be passed into the
209 function constructor. Likewise if `f` is defined as in-place (`f(out,u,p)`), then
210 an `integrand_prototype` is required. Either change the use of the function
211 constructor or define the appropriate dispatch for `f`.
212 """
213
214 struct IntegrandMismatchFunctionError <: Exception
215 iip::Bool
216 integrand_passed::Bool
217 end
218
219 function Base.showerror(io::IO, e::IntegrandMismatchFunctionError)
220 println(io, INTEGRAND_MISMATCH_FUNCTIONS_ERROR_MESSAGE)
221 print(io, "Mismatch: IIP=")
222 printstyled(io, e.iip; bold = true, color = :red)
223 print(io, ", Integrand passed=")
224 printstyled(io, e.integrand_passed; bold = true, color = :red)
225 end
226
227 """
228 $(TYPEDEF)
229 """
230 abstract type AbstractODEFunction{iip} <: AbstractDiffEqFunction{iip} end
231
232 @doc doc"""
233 $(TYPEDEF)
234
235 A representation of an ODE function `f`, defined by:
236
237 ```math
238 M \frac{du}{dt} = f(u,p,t)
239 ```
240
241 and all of its related functions, such as the Jacobian of `f`, its gradient
242 with respect to time, and more. For all cases, `u0` is the initial condition,
243 `p` are the parameters, and `t` is the independent variable.
244
245 ## Constructor
246
247 ```julia
248 ODEFunction{iip,specialize}(f;
249 mass_matrix = __has_mass_matrix(f) ? f.mass_matrix : I,
250 analytic = __has_analytic(f) ? f.analytic : nothing,
251 tgrad= __has_tgrad(f) ? f.tgrad : nothing,
252 jac = __has_jac(f) ? f.jac : nothing,
253 jvp = __has_jvp(f) ? f.jvp : nothing,
254 vjp = __has_vjp(f) ? f.vjp : nothing,
255 jac_prototype = __has_jac_prototype(f) ? f.jac_prototype : nothing,
256 sparsity = __has_sparsity(f) ? f.sparsity : jac_prototype,
257 paramjac = __has_paramjac(f) ? f.paramjac : nothing,
258 syms = nothing,
259 indepsym= nothing,
260 paramsyms = nothing,
261 colorvec = __has_colorvec(f) ? f.colorvec : nothing,
262 sys = __has_sys(f) ? f.sys : nothing)
263 ```
264
265 Note that only the function `f` itself is required. This function should
266 be given as `f!(du,u,p,t)` or `du = f(u,p,t)`. See the section on `iip`
267 for more details on in-place vs out-of-place handling.
268
269 All of the remaining functions are optional for improving or accelerating
270 the usage of `f`. These include:
271
272 - `mass_matrix`: the mass matrix `M` represented in the ODE function. Can be used
273 to determine that the equation is actually a differential-algebraic equation (DAE)
274 if `M` is singular. Note that in this case special solvers are required, see the
275 DAE solver page for more details: https://docs.sciml.ai/DiffEqDocs/stable/solvers/dae_solve/.
276 Must be an AbstractArray or an AbstractSciMLOperator.
277 - `analytic(u0,p,t)`: used to pass an analytical solution function for the analytical
278 solution of the ODE. Generally only used for testing and development of the solvers.
279 - `tgrad(dT,u,p,t)` or `dT=tgrad(u,p,t)`: returns ``\frac{\partial f(u,p,t)}{\partial t}``
280 - `jac(J,u,p,t)` or `J=jac(u,p,t)`: returns ``\frac{df}{du}``
281 - `jvp(Jv,v,u,p,t)` or `Jv=jvp(v,u,p,t)`: returns the directional derivative ``\frac{df}{du} v``
282 - `vjp(Jv,v,u,p,t)` or `Jv=vjp(v,u,p,t)`: returns the adjoint derivative ``\frac{df}{du}^\ast v``
283 - `jac_prototype`: a prototype matrix matching the type of the Jacobian. For example,
284 if the Jacobian is tridiagonal, then an appropriately sized `Tridiagonal` matrix can be used
285 as the prototype and integrators will specialize on this structure where possible. Non-structured
286 sparsity patterns should use a `SparseMatrixCSC` with a correct sparsity pattern for the Jacobian.
287 The default is `nothing`, which means a dense Jacobian.
288 - `paramjac(pJ,u,p,t)`: returns the parameter Jacobian ``\frac{df}{dp}``.
289 - `syms`: the symbol names for the elements of the equation. This should match `u0` in size. For
290 example, if `u0 = [0.0,1.0]` and `syms = [:x, :y]`, this will apply a canonical naming to the
291 values, allowing `sol[:x]` in the solution and automatically naming values in plots.
292 - `indepsym`: the canonical naming for the independent variable. Defaults to nothing, which
293 internally uses `t` as the representation in any plots.
294 - `paramsyms`: the symbol names for the parameters of the equation. This should match `p` in
295 size. For example, if `p = [0.0, 1.0]` and `paramsyms = [:a, :b]`, this will apply a canonical
296 naming to the values, allowing `sol[:a]` in the solution.
297 - `colorvec`: a color vector according to the SparseDiffTools.jl definition for the sparsity
298 pattern of the `jac_prototype`. This specializes the Jacobian construction when using
299 finite differences and automatic differentiation to be computed in an accelerated manner
300 based on the sparsity pattern. Defaults to `nothing`, which means a color vector will be
301 internally computed on demand when required. The cost of this operation is highly dependent
302 on the sparsity pattern.
303
304 ## iip: In-Place vs Out-Of-Place
305
306 `iip` is the optional boolean for determining whether a given function is written to
307 be used in-place or out-of-place. In-place functions are `f!(du,u,p,t)` where the return
308 is ignored, and the result is expected to be mutated into the value of `du`. Out-of-place
309 functions are `du=f(u,p,t)`.
310
311 Normally, this is determined automatically by looking at the method table for `f` and seeing
312 the maximum number of arguments in available dispatches. For this reason, the constructor
313 `ODEFunction(f)` generally works (but is type-unstable). However, for type-stability or
314 to enforce correctness, this option is passed via `ODEFunction{true}(f)`.
315
316 ## specialize: Controlling Compilation and Specialization
317
318 The `specialize` parameter controls the specialization level of the ODEFunction
319 on the function `f`. This allows for a trade-off between compile and run time performance.
320 The available specialization levels are:
321
322 * `SciMLBase.AutoSpecialize`: this form performs a lazy function wrapping on the
323 functions of the ODE in order to stop recompilation of the ODE solver, but allow
324 for the `prob.f` to stay unwrapped for normal usage. This is the default specialization
325 level and strikes a balance in compile time vs runtime performance.
326 * `SciMLBase.FullSpecialize`: this form fully specializes the `ODEFunction` on the
327 constituent functions that make its fields. As such, each `ODEFunction` in this
328 form is uniquely typed, requiring re-specialization and compilation for each new
329 ODE definition. This form has the highest compile-time at the cost of being the
330 most optimal in runtime. This form should be preferred for long-running calculations
331 (such as within optimization loops) and for benchmarking.
332 * `SciMLBase.NoSpecialize`: this form fully unspecializes the function types in the ODEFunction
333 definition by using an `Any` type declaration. As a result, it can reduce runtime
334 performance, but it is the form that induces the least compile time.
335 * `SciMLBase.FunctionWrapperSpecialize`: this is an eager function wrapping form. It is
336 unsafe with many solvers, and thus is mostly used for development testing.
337
338 For more details, see the
339 [specialization levels section of the SciMLBase documentation](https://docs.sciml.ai/SciMLBase/stable/interfaces/Problems/#Specialization-Levels).
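
As a minimal sketch using the constructor documented above, the same `f` can be wrapped
at different specialization levels (the toy function here is illustrative):

```julia
f = (du, u, p, t) -> (du .= u)
ofun_full = ODEFunction{true, SciMLBase.FullSpecialize}(f) # best runtime, recompiles per new `f`
ofun_none = ODEFunction{true, SciMLBase.NoSpecialize}(f)   # least compilation, fields held as `Any`
```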
340
341 ## Fields
342
343 The fields of the ODEFunction type directly match the names of the inputs.
344
345 ## More Details on Jacobians
346
347 The following example creates an in-place `ODEFunction` whose Jacobian is a `Diagonal`:
348
349 ```julia
350 using LinearAlgebra
351 f = (du,u,p,t) -> du .= t .* u
352 jac = (J,u,p,t) -> (J[1,1] = t; J[2,2] = t; J)
353 jp = Diagonal(zeros(2))
354 fun = ODEFunction(f; jac=jac, jac_prototype=jp)
355 ```
356
357 Note that the integrators will always make a deep copy of `fun.jac_prototype`, so
358 there's no worry of aliasing.
359
360 In general, the Jacobian prototype can be anything that has `mul!` defined, in
361 particular sparse matrices or custom lazy types that support `mul!`. A special case
362 is when the `jac_prototype` is an `AbstractSciMLOperator`, in which case you
363 do not need to supply `jac` as it is automatically set to `update_coefficients!`.
364 Refer to the AbstractSciMLOperators documentation for more information
365 on setting up time/parameter dependent operators.
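
As a minimal sketch building on the `Diagonal` example above, a sparse prototype can be
supplied in the same way (the sparsity pattern below is illustrative):

```julia
using SparseArrays
# structural nonzeros only on the diagonal; the stored values are irrelevant, only the pattern matters
jp_sparse = sparse([1, 2], [1, 2], zeros(2))
fun_sparse = ODEFunction(f; jac = jac, jac_prototype = jp_sparse)
```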
366
367 ## Examples
368
369 ### Declaring Explicit Jacobians for ODEs
370
371 In the most standard case, a Jacobian is declared by supplying an in-place
372 updating function for the Jacobian, `f_jac(J,u,p,t)`, alongside the model
373 function `f(du,u,p,t)`. For example,
374 take the Lotka-Volterra model:
375
376 ```julia
377 function f(du,u,p,t)
378 du[1] = 2.0 * u[1] - 1.2 * u[1]*u[2]
379 du[2] = -3 * u[2] + u[1]*u[2]
380 end
381 ```
382
383 To declare the Jacobian, we simply add the dispatch:
384
385 ```julia
386 function f_jac(J,u,p,t)
387 J[1,1] = 2.0 - 1.2 * u[2]
388 J[1,2] = -1.2 * u[1]
389 J[2,1] = 1 * u[2]
390 J[2,2] = -3 + u[1]
391 nothing
392 end
393 ```
394
395 Then we can supply the Jacobian with our ODE as:
396
397 ```julia
398 ff = ODEFunction(f;jac=f_jac)
399 ```
400
401 and use this in an `ODEProblem`:
402
403 ```julia
404 prob = ODEProblem(ff,ones(2),(0.0,10.0))
405 ```
406
407 ## Symbolically Generating the Functions
408
409 See the `modelingtoolkitize` function from
410 [ModelingToolkit.jl](https://docs.sciml.ai/ModelingToolkit/stable/) for
411 automatically symbolically generating the Jacobian and more from the
412 numerically-defined functions.
413 """
414 struct ODEFunction{iip, specialize, F, TMM, Ta, Tt, TJ, JVP, VJP, JP, SP, TW, TWt, WP, TPJ,
415 O, TCV,
416 SYS} <: AbstractODEFunction{iip}
417 f::F
418 mass_matrix::TMM
419 analytic::Ta
420 tgrad::Tt
421 jac::TJ
422 jvp::JVP
423 vjp::VJP
424 jac_prototype::JP
425 sparsity::SP
426 Wfact::TW
427 Wfact_t::TWt
428 W_prototype::WP
429 paramjac::TPJ
430 observed::O
431 colorvec::TCV
432 sys::SYS
433 end
434
435 TruncatedStacktraces.@truncate_stacktrace ODEFunction 1 2
436
437 @doc doc"""
438 $(TYPEDEF)
439
440 A representation of a split ODE function `f`, defined by:
441
442 ```math
443 M \frac{du}{dt} = f_1(u,p,t) + f_2(u,p,t)
444 ```
445
446 and all of its related functions, such as the Jacobian of `f`, its gradient
447 with respect to time, and more. For all cases, `u0` is the initial condition,
448 `p` are the parameters, and `t` is the independent variable.
449
450 Generally, for ODE integrators the `f_1` portion should be considered the
451 "stiff portion of the model" with larger timescale separation, while the
452 `f_2` portion should be considered the "non-stiff portion". This interpretation
453 is directly used in integrators like IMEX (implicit-explicit integrators)
454 and exponential integrators.
455
456 ## Constructor
457
458 ```julia
459 SplitFunction{iip,specialize}(f1,f2;
460 mass_matrix = __has_mass_matrix(f1) ? f1.mass_matrix : I,
461 analytic = __has_analytic(f1) ? f1.analytic : nothing,
462 tgrad= __has_tgrad(f1) ? f1.tgrad : nothing,
463 jac = __has_jac(f1) ? f1.jac : nothing,
464 jvp = __has_jvp(f1) ? f1.jvp : nothing,
465 vjp = __has_vjp(f1) ? f1.vjp : nothing,
466 jac_prototype = __has_jac_prototype(f1) ? f1.jac_prototype : nothing,
467 sparsity = __has_sparsity(f1) ? f1.sparsity : jac_prototype,
468 paramjac = __has_paramjac(f1) ? f1.paramjac : nothing,
469 syms = nothing,
470 indepsym= nothing,
471 paramsyms = nothing,
472 colorvec = __has_colorvec(f1) ? f1.colorvec : nothing,
473 sys = __has_sys(f1) ? f1.sys : nothing)
474 ```
475
476 Note that only the functions `f_i` themselves are required. These functions should
477 be given as `f_i!(du,u,p,t)` or `du = f_i(u,p,t)`. See the section on `iip`
478 for more details on in-place vs out-of-place handling.
479
480 All of the remaining functions are optional for improving or accelerating
481 the usage of the `SplitFunction`. These include:
482
483 - `mass_matrix`: the mass matrix `M` represented in the ODE function. Can be used
484 to determine that the equation is actually a differential-algebraic equation (DAE)
485 if `M` is singular. Note that in this case special solvers are required, see the
486 DAE solver page for more details: https://docs.sciml.ai/DiffEqDocs/stable/solvers/dae_solve/.
487 Must be an AbstractArray or an AbstractSciMLOperator.
488 - `analytic(u0,p,t)`: used to pass an analytical solution function for the analytical
489 solution of the ODE. Generally only used for testing and development of the solvers.
490 - `tgrad(dT,u,p,t)` or `dT=tgrad(u,p,t)`: returns ``\frac{\partial f_1(u,p,t)}{\partial t}``
491 - `jac(J,u,p,t)` or `J=jac(u,p,t)`: returns ``\frac{df_1}{du}``
492 - `jvp(Jv,v,u,p,t)` or `Jv=jvp(v,u,p,t)`: returns the directional derivative ``\frac{df_1}{du} v``
493 - `vjp(Jv,v,u,p,t)` or `Jv=vjp(v,u,p,t)`: returns the adjoint derivative ``\frac{df_1}{du}^\ast v``
494 - `jac_prototype`: a prototype matrix matching the type of the Jacobian. For example,
495 if the Jacobian is tridiagonal, then an appropriately sized `Tridiagonal` matrix can be used
496 as the prototype and integrators will specialize on this structure where possible. Non-structured
497 sparsity patterns should use a `SparseMatrixCSC` with a correct sparsity pattern for the Jacobian.
498 The default is `nothing`, which means a dense Jacobian.
499 - `paramjac(pJ,u,p,t)`: returns the parameter Jacobian ``\frac{df_1}{dp}``.
500 - `syms`: the symbol names for the elements of the equation. This should match `u0` in size. For
501 example, if `u0 = [0.0,1.0]` and `syms = [:x, :y]`, this will apply a canonical naming to the
502 values, allowing `sol[:x]` in the solution and automatically naming values in plots.
503 - `indepsym`: the canonical naming for the independent variable. Defaults to nothing, which
504 internally uses `t` as the representation in any plots.
505 - `paramsyms`: the symbol names for the parameters of the equation. This should match `p` in
506 size. For example, if `p = [0.0, 1.0]` and `paramsyms = [:a, :b]`, this will apply a canonical
507 naming to the values, allowing `sol[:a]` in the solution.
508 - `colorvec`: a color vector according to the SparseDiffTools.jl definition for the sparsity
509 pattern of the `jac_prototype`. This specializes the Jacobian construction when using
510 finite differences and automatic differentiation to be computed in an accelerated manner
511 based on the sparsity pattern. Defaults to `nothing`, which means a color vector will be
512 internally computed on demand when required. The cost of this operation is highly dependent
513 on the sparsity pattern.
514
515 ## Note on the Derivative Definition
516
517 The derivatives, such as the Jacobian, are only defined on the `f1` portion of the split ODE.
518 This is used to treat the `f1` portion implicitly while keeping the `f2` portion explicit.
519
520 ## iip: In-Place vs Out-Of-Place
521
522 For more details on this argument, see the ODEFunction documentation.
523
524 ## specialize: Controlling Compilation and Specialization
525
526 For more details on this argument, see the ODEFunction documentation.
527
528 ## Fields
529
530 The fields of the SplitFunction type directly match the names of the inputs.
531
532 ## Symbolically Generating the Functions
533
534 See the `modelingtoolkitize` function from
535 [ModelingToolkit.jl](https://docs.sciml.ai/ModelingToolkit/stable/) for
536 automatically symbolically generating the Jacobian and more from the
537 numerically-defined functions. See `ModelingToolkit.SplitODEProblem` for
538 information on generating the SplitFunction from this symbolic engine.
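
## Example

A minimal illustrative sketch (the functions and constants are made up): a linear part `f1`
intended for implicit treatment and a non-stiff part `f2` for explicit treatment, combined
with the constructor shown above:

```julia
f1 = (du, u, p, t) -> (du .= -100.0 .* u)          # stiff, linear part
f2 = (du, u, p, t) -> (du .= 2.0 .* sin.(t) .+ u)  # non-stiff part
sf = SplitFunction{true, SciMLBase.FullSpecialize}(f1, f2)
```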
539 """
540 struct SplitFunction{iip, specialize, F1, F2, TMM, C, Ta, Tt, TJ, JVP, VJP, JP, SP, TW, TWt,
541 TPJ, O,
542 TCV, SYS} <: AbstractODEFunction{iip}
543 f1::F1
544 f2::F2
545 mass_matrix::TMM
546 cache::C
547 analytic::Ta
548 tgrad::Tt
549 jac::TJ
550 jvp::JVP
551 vjp::VJP
552 jac_prototype::JP
553 sparsity::SP
554 Wfact::TW
555 Wfact_t::TWt
556 paramjac::TPJ
557 observed::O
558 colorvec::TCV
559 sys::SYS
560 end
561
562 TruncatedStacktraces.@truncate_stacktrace SplitFunction 1 2
563
564 @doc doc"""
565 $(TYPEDEF)
566
567 A representation of an ODE function `f`, defined by:
568
569 ```math
570 M \frac{du}{dt} = f(u,p,t)
571 ```
572
573 as a partitioned ODE:
574
575 ```math
576 M_1 \frac{du}{dt} = f_1(u,p,t)
577 M_2 \frac{du}{dt} = f_2(u,p,t)
578 ```
579
580 and all of its related functions, such as the Jacobian of `f`, its gradient
581 with respect to time, and more. For all cases, `u0` is the initial condition,
582 `p` are the parameters, and `t` is the independent variable.
583
584 ## Constructor
585
586 ```julia
587 DynamicalODEFunction{iip,specialize}(f1,f2;
588 mass_matrix = __has_mass_matrix(f) ? f.mass_matrix : I,
589 analytic = __has_analytic(f) ? f.analytic : nothing,
590 tgrad= __has_tgrad(f) ? f.tgrad : nothing,
591 jac = __has_jac(f) ? f.jac : nothing,
592 jvp = __has_jvp(f) ? f.jvp : nothing,
593 vjp = __has_vjp(f) ? f.vjp : nothing,
594 jac_prototype = __has_jac_prototype(f) ? f.jac_prototype : nothing,
595 sparsity = __has_sparsity(f) ? f.sparsity : jac_prototype,
596 paramjac = __has_paramjac(f) ? f.paramjac : nothing,
597 syms = nothing,
598 indepsym= nothing,
599 paramsyms = nothing,
600 colorvec = __has_colorvec(f) ? f.colorvec : nothing,
601 sys = __has_sys(f) ? f.sys : nothing)
602 ```
603
604 Note that only the functions `f_i` themselves are required. These functions should
605 be given as `f_i!(du,u,p,t)` or `du = f_i(u,p,t)`. See the section on `iip`
606 for more details on in-place vs out-of-place handling.
607
608 All of the remaining functions are optional for improving or accelerating
609 the usage of `f`. These include:
610
611 - `mass_matrix`: the mass matrix `M_i` represented in the ODE function. Can be used
612 to determine that the equation is actually a differential-algebraic equation (DAE)
613 if `M` is singular. Note that in this case special solvers are required, see the
614 DAE solver page for more details: https://docs.sciml.ai/DiffEqDocs/stable/solvers/dae_solve/.
615 Must be an AbstractArray or an AbstractSciMLOperator. Should be given as a tuple
616 of mass matrices, i.e. `(M_1, M_2)` for the mass matrices of equations 1 and 2
617 respectively.
618 - `analytic(u0,p,t)`: used to pass an analytical solution function for the analytical
619 solution of the ODE. Generally only used for testing and development of the solvers.
620 - `tgrad(dT,u,p,t)` or `dT=tgrad(u,p,t)`: returns ``\frac{\partial f(u,p,t)}{\partial t}``
621 - `jac(J,u,p,t)` or `J=jac(u,p,t)`: returns ``\frac{df}{du}``
622 - `jvp(Jv,v,u,p,t)` or `Jv=jvp(v,u,p,t)`: returns the directional derivative ``\frac{df}{du} v``
623 - `vjp(Jv,v,u,p,t)` or `Jv=vjp(v,u,p,t)`: returns the adjoint derivative ``\frac{df}{du}^\ast v``
624 - `jac_prototype`: a prototype matrix matching the type of the Jacobian. For example,
625 if the Jacobian is tridiagonal, then an appropriately sized `Tridiagonal` matrix can be used
626 as the prototype and integrators will specialize on this structure where possible. Non-structured
627 sparsity patterns should use a `SparseMatrixCSC` with a correct sparsity pattern for the Jacobian.
628 The default is `nothing`, which means a dense Jacobian.
629 - `paramjac(pJ,u,p,t)`: returns the parameter Jacobian ``\frac{df}{dp}``.
630 - `syms`: the symbol names for the elements of the equation. This should match `u0` in size. For
631 example, if `u0 = [0.0,1.0]` and `syms = [:x, :y]`, this will apply a canonical naming to the
632 values, allowing `sol[:x]` in the solution and automatically naming values in plots.
633 - `indepsym`: the canonical naming for the independent variable. Defaults to nothing, which
634 internally uses `t` as the representation in any plots.
635 - `paramsyms`: the symbol names for the parameters of the equation. This should match `p` in
636 size. For example, if `p = [0.0, 1.0]` and `paramsyms = [:a, :b]`, this will apply a canonical
637 naming to the values, allowing `sol[:a]` in the solution.
638 - `colorvec`: a color vector according to the SparseDiffTools.jl definition for the sparsity
639 pattern of the `jac_prototype`. This specializes the Jacobian construction when using
640 finite differences and automatic differentiation to be computed in an accelerated manner
641 based on the sparsity pattern. Defaults to `nothing`, which means a color vector will be
642 internally computed on demand when required. The cost of this operation is highly dependent
643 on the sparsity pattern.
644
645 ## iip: In-Place vs Out-Of-Place
646
647 For more details on this argument, see the ODEFunction documentation.
648
649 ## specialize: Controlling Compilation and Specialization
650
651 For more details on this argument, see the ODEFunction documentation.
652
653 ## Fields
654
655 The fields of the DynamicalODEFunction type directly match the names of the inputs.
656 """
657 struct DynamicalODEFunction{iip, specialize, F1, F2, TMM, Ta, Tt, TJ, JVP, VJP, JP, SP, TW,
658 TWt, TPJ,
659 O, TCV, SYS} <: AbstractODEFunction{iip}
660 f1::F1
661 f2::F2
662 mass_matrix::TMM
663 analytic::Ta
664 tgrad::Tt
665 jac::TJ
666 jvp::JVP
667 vjp::VJP
668 jac_prototype::JP
669 sparsity::SP
670 Wfact::TW
671 Wfact_t::TWt
672 paramjac::TPJ
673 observed::O
674 colorvec::TCV
675 sys::SYS
676 end
677 TruncatedStacktraces.@truncate_stacktrace DynamicalODEFunction 1 2
678
679 """
680 $(TYPEDEF)
681 """
682 abstract type AbstractDDEFunction{iip} <: AbstractDiffEqFunction{iip} end
683
684 @doc doc"""
685 $(TYPEDEF)
686
687 A representation of a DDE function `f`, defined by:
688
689 ```math
690 M \frac{du}{dt} = f(u,h,p,t)
691 ```
692
693 and all of its related functions, such as the Jacobian of `f`, its gradient
694 with respect to time, and more. For all cases, `u0` is the initial condition,
695 `p` are the parameters, and `t` is the independent variable.
696
697 ## Constructor
698
699 ```julia
700 DDEFunction{iip,specialize}(f;
701 mass_matrix = __has_mass_matrix(f) ? f.mass_matrix : I,
702 analytic = __has_analytic(f) ? f.analytic : nothing,
703 tgrad= __has_tgrad(f) ? f.tgrad : nothing,
704 jac = __has_jac(f) ? f.jac : nothing,
705 jvp = __has_jvp(f) ? f.jvp : nothing,
706 vjp = __has_vjp(f) ? f.vjp : nothing,
707 jac_prototype = __has_jac_prototype(f) ? f.jac_prototype : nothing,
708 sparsity = __has_sparsity(f) ? f.sparsity : jac_prototype,
709 paramjac = __has_paramjac(f) ? f.paramjac : nothing,
710 syms = nothing,
711 indepsym= nothing,
712 paramsyms = nothing,
713 colorvec = __has_colorvec(f) ? f.colorvec : nothing,
714 sys = __has_sys(f) ? f.sys : nothing)
715 ```
716
717 Note that only the function `f` itself is required. This function should
718 be given as `f!(du,u,h,p,t)` or `du = f(u,h,p,t)`. See the section on `iip`
719 for more details on in-place vs out-of-place handling. The history function
720 `h` acts as an interpolator over time, i.e. `h(t)`, with options matching
721 the solution interface, e.g. `h(t; save_idxs = 2)`.
722
723 All of the remaining functions are optional for improving or accelerating
724 the usage of `f`. These include:
725
726 - `mass_matrix`: the mass matrix `M` represented in the ODE function. Can be used
727 to determine that the equation is actually a differential-algebraic equation (DAE)
728 if `M` is singular. Note that in this case special solvers are required, see the
729 DAE solver page for more details: https://docs.sciml.ai/DiffEqDocs/stable/solvers/dae_solve/.
730 Must be an AbstractArray or an AbstractSciMLOperator.
731 - `analytic(u0,p,t)`: used to pass an analytical solution function for the analytical
732 solution of the ODE. Generally only used for testing and development of the solvers.
733 - `tgrad(dT,u,h,p,t)` or `dT=tgrad(u,h,p,t)`: returns ``\frac{\partial f(u,p,t)}{\partial t}``
734 - `jac(J,u,h,p,t)` or `J=jac(u,h,p,t)`: returns ``\frac{df}{du}``
735 - `jvp(Jv,v,h,u,p,t)` or `Jv=jvp(v,h,u,p,t)`: returns the directional derivative ``\frac{df}{du} v``
736 - `vjp(Jv,v,h,u,p,t)` or `Jv=vjp(v,h,u,p,t)`: returns the adjoint derivative ``\frac{df}{du}^\ast v``
737 - `jac_prototype`: a prototype matrix matching the type of the Jacobian. For example,
738 if the Jacobian is tridiagonal, then an appropriately sized `Tridiagonal` matrix can be used
739 as the prototype and integrators will specialize on this structure where possible. Non-structured
740 sparsity patterns should use a `SparseMatrixCSC` with a correct sparsity pattern for the Jacobian.
741 The default is `nothing`, which means a dense Jacobian.
742 - `paramjac(pJ,h,u,p,t)`: returns the parameter Jacobian ``\frac{df}{dp}``.
743 - `syms`: the symbol names for the elements of the equation. This should match `u0` in size. For
744 example, if `u0 = [0.0,1.0]` and `syms = [:x, :y]`, this will apply a canonical naming to the
745 values, allowing `sol[:x]` in the solution and automatically naming values in plots.
746 - `indepsym`: the canonical naming for the independent variable. Defaults to nothing, which
747 internally uses `t` as the representation in any plots.
748 - `paramsyms`: the symbol names for the parameters of the equation. This should match `p` in
749 size. For example, if `p = [0.0, 1.0]` and `paramsyms = [:a, :b]`, this will apply a canonical
750 naming to the values, allowing `sol[:a]` in the solution.
751 - `colorvec`: a color vector according to the SparseDiffTools.jl definition for the sparsity
752 pattern of the `jac_prototype`. This specializes the Jacobian construction when using
753 finite differences and automatic differentiation to be computed in an accelerated manner
754 based on the sparsity pattern. Defaults to `nothing`, which means a color vector will be
755 internally computed on demand when required. The cost of this operation is highly dependent
756 on the sparsity pattern.
757
758 ## iip: In-Place vs Out-Of-Place
759
760 For more details on this argument, see the ODEFunction documentation.
761
762 ## specialize: Controlling Compilation and Specialization
763
764 For more details on this argument, see the ODEFunction documentation.
765
766 ## Fields
767
768 The fields of the DDEFunction type directly match the names of the inputs.
769 """
770 struct DDEFunction{iip, specialize, F, TMM, Ta, Tt, TJ, JVP, VJP, JP, SP, TW, TWt, TPJ, O, TCV, SYS,
771 } <:
772 AbstractDDEFunction{iip}
773 f::F
774 mass_matrix::TMM
775 analytic::Ta
776 tgrad::Tt
777 jac::TJ
778 jvp::JVP
779 vjp::VJP
780 jac_prototype::JP
781 sparsity::SP
782 Wfact::TW
783 Wfact_t::TWt
784 paramjac::TPJ
785 observed::O
786 colorvec::TCV
787 sys::SYS
788 end
789
790 TruncatedStacktraces.@truncate_stacktrace DDEFunction 1 2
791
792 @doc doc"""
793 $(TYPEDEF)
794
795 A representation of a DDE function `f`, defined by:
796
797 ```math
798 M \frac{du}{dt} = f(u,h,p,t)
799 ```
800
801 as a partitioned ODE:
802
803 ```math
804 M_1 \frac{du}{dt} = f_1(u,h,p,t)
805 M_2 \frac{du}{dt} = f_2(u,h,p,t)
806 ```
807
808 and all of its related functions, such as the Jacobian of `f`, its gradient
809 with respect to time, and more. For all cases, `u0` is the initial condition,
810 `p` are the parameters, and `t` is the independent variable.
811
812 ## Constructor
813
814 ```julia
815 DynamicalDDEFunction{iip,specialize}(f1,f2;
816 mass_matrix = __has_mass_matrix(f) ? f.mass_matrix : I,
817 analytic = __has_analytic(f) ? f.analytic : nothing,
818 tgrad= __has_tgrad(f) ? f.tgrad : nothing,
819 jac = __has_jac(f) ? f.jac : nothing,
820 jvp = __has_jvp(f) ? f.jvp : nothing,
821 vjp = __has_vjp(f) ? f.vjp : nothing,
822 jac_prototype = __has_jac_prototype(f) ? f.jac_prototype : nothing,
823 sparsity = __has_sparsity(f) ? f.sparsity : jac_prototype,
824 paramjac = __has_paramjac(f) ? f.paramjac : nothing,
825 syms = nothing,
826 indepsym= nothing,
827 paramsyms = nothing,
828 colorvec = __has_colorvec(f) ? f.colorvec : nothing,
829 sys = __has_sys(f) ? f.sys : nothing)
830 ```
831
832 Note that only the functions `f_i` themselves are required. These functions should
833 be given as `f_i!(du,u,h,p,t)` or `du = f_i(u,h,p,t)`. See the section on `iip`
834 for more details on in-place vs out-of-place handling. The history function
835 `h` acts as an interpolator over time, i.e. `h(t)`, with options matching
836 the solution interface, e.g. `h(t; save_idxs = 2)`.
837
838 All of the remaining functions are optional for improving or accelerating
839 the usage of `f`. These include:
840
841 - `mass_matrix`: the mass matrix `M_i` represented in the ODE function. Can be used
842 to determine that the equation is actually a differential-algebraic equation (DAE)
843 if `M` is singular. Note that in this case special solvers are required, see the
844 DAE solver page for more details: https://docs.sciml.ai/DiffEqDocs/stable/solvers/dae_solve/.
845 Must be an AbstractArray or an AbstractSciMLOperator. Should be given as a tuple
846 of mass matrices, i.e. `(M_1, M_2)` for the mass matrices of equations 1 and 2
847 respectively.
848 - `analytic(u0,h,p,t)`: used to pass an analytical solution function for the analytical
849 solution of the ODE. Generally only used for testing and development of the solvers.
850 - `tgrad(dT,u,h,p,t)` or `dT=tgrad(u,h,p,t)`: returns ``\frac{\partial f(u,p,t)}{\partial t}``
851 - `jac(J,u,h,p,t)` or `J=jac(u,h,p,t)`: returns ``\frac{df}{du}``
852 - `jvp(Jv,v,u,h,p,t)` or `Jv=jvp(v,u,h,p,t)`: returns the directional derivative ``\frac{df}{du} v``
853 - `vjp(Jv,v,u,h,p,t)` or `Jv=vjp(v,u,h,p,t)`: returns the adjoint derivative ``\frac{df}{du}^\ast v``
854 - `jac_prototype`: a prototype matrix matching the type of the Jacobian. For example,
855 if the Jacobian is tridiagonal, then an appropriately sized `Tridiagonal` matrix can be used
856 as the prototype and integrators will specialize on this structure where possible. Non-structured
857 sparsity patterns should use a `SparseMatrixCSC` with a correct sparsity pattern for the Jacobian.
858 The default is `nothing`, which means a dense Jacobian.
859 - `paramjac(pJ,u,h,p,t)`: returns the parameter Jacobian ``\frac{df}{dp}``.
860 - `syms`: the symbol names for the elements of the equation. This should match `u0` in size. For
861 example, if `u0 = [0.0,1.0]` and `syms = [:x, :y]`, this will apply a canonical naming to the
862 values, allowing `sol[:x]` in the solution and automatically naming values in plots.
863 - `indepsym`: the canonical naming for the independent variable. Defaults to nothing, which
864 internally uses `t` as the representation in any plots.
865 - `paramsyms`: the symbol names for the parameters of the equation. This should match `p` in
866 size. For example, if `p = [0.0, 1.0]` and `paramsyms = [:a, :b]`, this will apply a canonical
867 naming to the values, allowing `sol[:a]` in the solution.
868 - `colorvec`: a color vector according to the SparseDiffTools.jl definition for the sparsity
869 pattern of the `jac_prototype`. This specializes the Jacobian construction when using
870 finite differences and automatic differentiation to be computed in an accelerated manner
871 based on the sparsity pattern. Defaults to `nothing`, which means a color vector will be
872 internally computed on demand when required. The cost of this operation is highly dependent
873 on the sparsity pattern.
874
875 ## iip: In-Place vs Out-Of-Place
876
877 For more details on this argument, see the ODEFunction documentation.
878
879 ## specialize: Controlling Compilation and Specialization
880
881 For more details on this argument, see the ODEFunction documentation.
882
883 ## Fields
884
885 The fields of the DynamicalDDEFunction type directly match the names of the inputs.
886 """
887 struct DynamicalDDEFunction{iip, specialize, F1, F2, TMM, Ta, Tt, TJ, JVP, VJP, JP, SP, TW,
888 TWt, TPJ,
889 O, TCV, SYS} <: AbstractDDEFunction{iip}
890 f1::F1
891 f2::F2
892 mass_matrix::TMM
893 analytic::Ta
894 tgrad::Tt
895 jac::TJ
896 jvp::JVP
897 vjp::VJP
898 jac_prototype::JP
899 sparsity::SP
900 Wfact::TW
901 Wfact_t::TWt
902 paramjac::TPJ
903 observed::O
904 colorvec::TCV
905 sys::SYS
906 end
907
908 TruncatedStacktraces.@truncate_stacktrace DynamicalDDEFunction 1 2
909 """
910 $(TYPEDEF)
911 """
912 abstract type AbstractDiscreteFunction{iip} <:
913 AbstractDiffEqFunction{iip} end
914
915 @doc doc"""
916 $(TYPEDEF)
917
918 A representation of a discrete dynamical system `f`, defined by:
919
920 ```math
921 u_{n+1} = f(u,p,t_{n+1})
922 ```
923
924 and all of its related functions, such as the Jacobian of `f`, its gradient
925 with respect to time, and more. For all cases, `u0` is the initial condition,
926 `p` are the parameters, and `t` is the independent variable.
927
928 ## Constructor
929
930 ```julia
931 DiscreteFunction{iip,specialize}(f;
932 analytic = __has_analytic(f) ? f.analytic : nothing,
933 syms = nothing
934 indepsym = nothing,
935 paramsyms = nothing)
936 ```
937
938 Note that only the function `f` itself is required. This function should
939 be given as `f!(du,u,p,t)` or `du = f(u,p,t)`. See the section on `iip`
940 for more details on in-place vs out-of-place handling.
941
942 All of the remaining functions are optional for improving or accelerating
943 the usage of `f`. These include:
944
945 - `analytic(u0,p,t)`: used to pass an analytical solution function for the analytical
946 solution of the ODE. Generally only used for testing and development of the solvers.
947 - `syms`: the symbol names for the elements of the equation. This should match `u0` in size. For
948 example, if `u0 = [0.0,1.0]` and `syms = [:x, :y]`, this will apply a canonical naming to the
949 values, allowing `sol[:x]` in the solution and automatically naming values in plots.
950 - `indepsym`: the canonical naming for the independent variable. Defaults to nothing, which
951 internally uses `t` as the representation in any plots.
952 - `paramsyms`: the symbol names for the parameters of the equation. This should match `p` in
953 size. For example, if `p = [0.0, 1.0]` and `paramsyms = [:a, :b]`, this will apply a canonical
954 naming to the values, allowing `sol[:a]` in the solution.
955
956 ## iip: In-Place vs Out-Of-Place
957
958 For more details on this argument, see the ODEFunction documentation.
959
960 ## specialize: Controlling Compilation and Specialization
961
962 For more details on this argument, see the ODEFunction documentation.
963
964 ## Fields
965
966 The fields of the DiscreteFunction type directly match the names of the inputs.
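
## Example

A minimal illustrative sketch, mirroring the constructor above with an in-place update map
(the map itself is made up):

```julia
f = (du, u, p, t) -> (du .= p .* u .* (1 .- u))  # logistic-map-style update
dfun = DiscreteFunction{true, SciMLBase.FullSpecialize}(f)
```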
967 """
968 struct DiscreteFunction{iip, specialize, F, Ta, O, SYS} <:
969 AbstractDiscreteFunction{iip}
970 f::F
971 analytic::Ta
972 observed::O
973 sys::SYS
974 end
975
976 TruncatedStacktraces.@truncate_stacktrace DiscreteFunction 1 2
977
978 @doc doc"""
979 $(TYPEDEF)
980
981 A representation of a discrete dynamical system `f`, defined by:
982
983 ```math
984 0 = f(u_{n+1}, u_{n}, p, t_{n+1}, integ)
985 ```
986
987 and all of its related functions, such as the Jacobian of `f`, its gradient
988 with respect to time, and more. For all cases, `u0` is the initial condition,
989 `p` are the parameters, and `t` is the independent variable.
990 `integ` contains the fields:
991 ```julia
992 dt: the time step
993 ```
994
995 ## Constructor
996
997 ```julia
998 ImplicitDiscreteFunction{iip,specialize}(f;
999 analytic = __has_analytic(f) ? f.analytic : nothing,
1000 syms = nothing
1001 indepsym = nothing,
1002 paramsyms = nothing)
1003 ```
1004
1005 Note that only the function `f` itself is required. This function should
1006 be given as `f!(residual, u_next, u, p, t)` or `residual = f(u_next, u, p, t)`. See the section on `iip`
1007 for more details on in-place vs out-of-place handling.
1008
1009 All of the remaining functions are optional for improving or accelerating
1010 the usage of `f`. These include:
1011
1012 - `analytic(u0,p,t)`: used to pass an analytical solution function for the analytical
1013 solution of the ODE. Generally only used for testing and development of the solvers.
1014 - `syms`: the symbol names for the elements of the equation. This should match `u0` in size. For
1015 example, if `u0 = [0.0,1.0]` and `syms = [:x, :y]`, this will apply a canonical naming to the
1016 values, allowing `sol[:x]` in the solution and automatically naming values in plots.
1017 - `indepsym`: the canonical naming for the independent variable. Defaults to nothing, which
1018 internally uses `t` as the representation in any plots.
1019 - `paramsyms`: the symbol names for the parameters of the equation. This should match `p` in
1020 size. For example, if `p = [0.0, 1.0]` and `paramsyms = [:a, :b]`, this will apply a canonical
1021 naming to the values, allowing `sol[:a]` in the solution.
1022
1023 ## iip: In-Place vs Out-Of-Place
1024
1025 For more details on this argument, see the ODEFunction documentation.
1026
1027 ## specialize: Controlling Compilation and Specialization
1028
1029 For more details on this argument, see the ODEFunction documentation.
1030
1031 ## Fields
1032
1033 The fields of the ImplicitDiscreteFunction type directly match the names of the inputs.
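
## Example

A minimal illustrative sketch using the residual form described above (the recurrence and
step size are made up):

```julia
# residual for the implicit update u_{n+1} = u_n + 0.1 * u_{n+1}
f = (residual, u_next, u, p, t) -> (residual .= u .+ 0.1 .* u_next .- u_next)
idfun = ImplicitDiscreteFunction{true, SciMLBase.FullSpecialize}(f)
```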
1034 """
1035 struct ImplicitDiscreteFunction{iip, specialize, F, Ta, O, SYS} <:
1036 AbstractDiscreteFunction{iip}
1037 f::F
1038 analytic::Ta
1039 observed::O
1040 sys::SYS
1041 end
1042
1043 TruncatedStacktraces.@truncate_stacktrace ImplicitDiscreteFunction 1 2
1044
1045 """
1046 $(TYPEDEF)
1047 """
1048 abstract type AbstractSDEFunction{iip} <: AbstractDiffEqFunction{iip} end
1049
1050 @doc doc"""
1051 $(TYPEDEF)
1052
1053 A representation of an SDE function `f`, defined by:
1054
1055 ```math
1056 M du = f(u,p,t)dt + g(u,p,t) dW
1057 ```
1058
1059 and all of its related functions, such as the Jacobian of `f`, its gradient
1060 with respect to time, and more. For all cases, `u0` is the initial condition,
1061 `p` are the parameters, and `t` is the independent variable.
1062
1063 ## Constructor
1064
1065 ```julia
1066 SDEFunction{iip,specialize}(f,g;
1067 mass_matrix = __has_mass_matrix(f) ? f.mass_matrix : I,
1068 analytic = __has_analytic(f) ? f.analytic : nothing,
1069 tgrad= __has_tgrad(f) ? f.tgrad : nothing,
1070 jac = __has_jac(f) ? f.jac : nothing,
1071 jvp = __has_jvp(f) ? f.jvp : nothing,
1072 vjp = __has_vjp(f) ? f.vjp : nothing,
1073 ggprime = nothing,
1074 jac_prototype = __has_jac_prototype(f) ? f.jac_prototype : nothing,
1075 sparsity = __has_sparsity(f) ? f.sparsity : jac_prototype,
1076 paramjac = __has_paramjac(f) ? f.paramjac : nothing,
1077 syms = nothing,
1078 indepsym= nothing,
1079 paramsyms = nothing,
1080 colorvec = __has_colorvec(f) ? f.colorvec : nothing,
1081 sys = __has_sys(f) ? f.sys : nothing)
1082 ```
1083
1084 Note that both the functions `f` and `g` are required. These functions should
1085 be given as `f!(du,u,p,t)` or `du = f(u,p,t)`. See the section on `iip`
1086 for more details on in-place vs out-of-place handling.
1087
1088 All of the remaining functions are optional for improving or accelerating
1089 the usage of `f`. These include:
1090
1091 - `mass_matrix`: the mass matrix `M` represented in the ODE function. Can be used
1092 to determine that the equation is actually a differential-algebraic equation (DAE)
1093 if `M` is singular. Note that in this case special solvers are required, see the
1094 DAE solver page for more details: https://docs.sciml.ai/DiffEqDocs/stable/solvers/dae_solve/.
1095 Must be an AbstractArray or an AbstractSciMLOperator.
1096 - `analytic(u0,p,t)`: used to pass an analytical solution function for the analytical
1097 solution of the ODE. Generally only used for testing and development of the solvers.
1098 - `tgrad(dT,u,p,t)` or `dT=tgrad(u,p,t)`: returns ``\frac{\partial f(u,p,t)}{\partial t}``
1099 - `jac(J,u,p,t)` or `J=jac(u,p,t)`: returns ``\frac{df}{du}``
1100 - `jvp(Jv,v,u,p,t)` or `Jv=jvp(v,u,p,t)`: returns the directional derivative ``\frac{df}{du} v``
1101 - `vjp(Jv,v,u,p,t)` or `Jv=vjp(v,u,p,t)`: returns the adjoint derivative ``\frac{df}{du}^\ast v``
1102 - `ggprime(J,u,p,t)` or `J = ggprime(u,p,t)`: returns the Milstein derivative
1103 ``\frac{dg(u,p,t)}{du} g(u,p,t)``
1104 - `jac_prototype`: a prototype matrix matching the type of the Jacobian. For example,
1105 if the Jacobian is tridiagonal, then an appropriately sized `Tridiagonal` matrix can be used
1106 as the prototype and integrators will specialize on this structure where possible. Non-structured
1107 sparsity patterns should use a `SparseMatrixCSC` with a correct sparsity pattern for the Jacobian.
1108 The default is `nothing`, which means a dense Jacobian.
1109 - `paramjac(pJ,u,p,t)`: returns the parameter Jacobian ``\frac{df}{dp}``.
1110 - `syms`: the symbol names for the elements of the equation. This should match `u0` in size. For
1111 example, if `u0 = [0.0,1.0]` and `syms = [:x, :y]`, this will apply a canonical naming to the
1112 values, allowing `sol[:x]` in the solution and automatically naming values in plots.
1113 - `indepsym`: the canonical naming for the independent variable. Defaults to nothing, which
1114 internally uses `t` as the representation in any plots.
1115 - `paramsyms`: the symbol names for the parameters of the equation. This should match `p` in
1116 size. For example, if `p = [0.0, 1.0]` and `paramsyms = [:a, :b]`, this will apply a canonical
1117 naming to the values, allowing `sol[:a]` in the solution.
1118 - `colorvec`: a color vector according to the SparseDiffTools.jl definition for the sparsity
1119 pattern of the `jac_prototype`. This specializes the Jacobian construction when using
1120 finite differences and automatic differentiation to be computed in an accelerated manner
1121 based on the sparsity pattern. Defaults to `nothing`, which means a color vector will be
1122 internally computed on demand when required. The cost of this operation is highly dependent
1123 on the sparsity pattern.
1124
1125 ## iip: In-Place vs Out-Of-Place
1126
1127 For more details on this argument, see the ODEFunction documentation.
1128
1129 ## specialize: Controlling Compilation and Specialization
1130
1131 For more details on this argument, see the ODEFunction documentation.
1132
1133 ## Fields
1134
1135 The fields of the SDEFunction type directly match the names of the inputs.
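
## Example

A minimal illustrative sketch of a drift/diffusion pair (geometric-Brownian-motion style,
with made-up constants), using the constructor shown above:

```julia
f = (du, u, p, t) -> (du .= 0.05 .* u)  # drift
g = (du, u, p, t) -> (du .= 0.20 .* u)  # diffusion
sdefun = SDEFunction{true, SciMLBase.FullSpecialize}(f, g)
```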
1136 """
1137 struct SDEFunction{iip, specialize, F, G, TMM, Ta, Tt, TJ, JVP, VJP, JP, SP, TW, TWt, TPJ,
1138 GG, O,
1139 TCV, SYS,
1140 } <: AbstractSDEFunction{iip}
1141 f::F
1142 g::G
1143 mass_matrix::TMM
1144 analytic::Ta
1145 tgrad::Tt
1146 jac::TJ
1147 jvp::JVP
1148 vjp::VJP
1149 jac_prototype::JP
1150 sparsity::SP
1151 Wfact::TW
1152 Wfact_t::TWt
1153 paramjac::TPJ
1154 ggprime::GG
1155 observed::O
1156 colorvec::TCV
1157 sys::SYS
1158 end
1159
1160 TruncatedStacktraces.@truncate_stacktrace SDEFunction 1 2
1161
1162 @doc doc"""
1163 $(TYPEDEF)
1164
1165 A representation of a split SDE function `f`, defined by:
1166
1167 ```math
1168 M du = (f_1(u,p,t) + f_2(u,p,t)) dt + g(u,p,t) dW
1169 ```
1170
1171 and all of its related functions, such as the Jacobian of `f`, its gradient
1172 with respect to time, and more. For all cases, `u0` is the initial condition,
1173 `p` are the parameters, and `t` is the independent variable.
1174
1175 Generally, for SDE integrators the `f_1` portion should be considered the
1176 "stiff portion of the model" with larger timescale separation, while the
1177 `f_2` portion should be considered the "non-stiff portion". This interpretation
1178 is directly used in integrators like IMEX (implicit-explicit integrators)
1179 and exponential integrators.
1180
1181 ## Constructor
1182
1183 ```julia
1184 SplitSDEFunction{iip,specialize}(f1,f2,g;
1185 mass_matrix = __has_mass_matrix(f) ? f.mass_matrix : I,
1186 analytic = __has_analytic(f) ? f.analytic : nothing,
1187 tgrad= __has_tgrad(f) ? f.tgrad : nothing,
1188 jac = __has_jac(f) ? f.jac : nothing,
1189 jvp = __has_jvp(f) ? f.jvp : nothing,
1190 vjp = __has_vjp(f) ? f.vjp : nothing,
1191 ggprime = nothing,
1192 jac_prototype = __has_jac_prototype(f) ? f.jac_prototype : nothing,
1193 sparsity = __has_sparsity(f) ? f.sparsity : jac_prototype,
1194 paramjac = __has_paramjac(f) ? f.paramjac : nothing,
1195 syms = nothing,
1196 indepsym= nothing,
1197 paramsyms = nothing,
1198 colorvec = __has_colorvec(f) ? f.colorvec : nothing,
1199 sys = __has_sys(f) ? f.sys : nothing)
1200 ```
1201
1202 Note that only the functions `f1`, `f2`, and `g` themselves are required. All of the remaining
1203 functions are optional for improving or accelerating their usage. These include:
1204
1205 - `mass_matrix`: the mass matrix `M` represented in the SDE function. Can be used
1206 to determine that the equation is actually a stochastic differential-algebraic equation (SDAE)
1207 if `M` is singular. Note that in this case special solvers are required, see the
1208 DAE solver page for more details: https://docs.sciml.ai/DiffEqDocs/stable/solvers/sdae_solve/.
1209 Must be an AbstractArray or an AbstractSciMLOperator.
1210 - `analytic(u0,p,t)`: used to pass an analytical solution function for the analytical
1211 solution of the ODE. Generally only used for testing and development of the solvers.
1212 - `tgrad(dT,u,p,t)` or `dT=tgrad(u,p,t)`: returns ``\frac{\partial f_1(u,p,t)}{\partial t}``
1213 - `jac(J,u,p,t)` or `J=jac(u,p,t)`: returns ``\frac{df_1}{du}``
1214 - `jvp(Jv,v,u,p,t)` or `Jv=jvp(v,u,p,t)`: returns the directional derivative ``\frac{df_1}{du} v``
1215 - `vjp(Jv,v,u,p,t)` or `Jv=vjp(v,u,p,t)`: returns the adjoint derivative ``\frac{df_1}{du}^\ast v``
1216 - `ggprime(J,u,p,t)` or `J = ggprime(u,p,t)`: returns the Milstein derivative
1217 ``\frac{dg(u,p,t)}{du} g(u,p,t)``
1218 - `jac_prototype`: a prototype matrix matching the type of the Jacobian. For example,
1219 if the Jacobian is tridiagonal, then an appropriately sized `Tridiagonal` matrix can be used
1220 as the prototype and integrators will specialize on this structure where possible. Non-structured
1221 sparsity patterns should use a `SparseMatrixCSC` with a correct sparsity pattern for the Jacobian.
1222 The default is `nothing`, which means a dense Jacobian.
1223 - `paramjac(pJ,u,p,t)`: returns the parameter Jacobian ``\frac{df_1}{dp}``.
1224 - `syms`: the symbol names for the elements of the equation. This should match `u0` in size. For
1225 example, if `u0 = [0.0,1.0]` and `syms = [:x, :y]`, this will apply a canonical naming to the
1226 values, allowing `sol[:x]` in the solution and automatically naming values in plots.
1227 - `indepsym`: the canonical naming for the independent variable. Defaults to nothing, which
1228 internally uses `t` as the representation in any plots.
1229 - `paramsyms`: the symbol names for the parameters of the equation. This should match `p` in
1230 size. For example, if `p = [0.0, 1.0]` and `paramsyms = [:a, :b]`, this will apply a canonical
1231 naming to the values, allowing `sol[:a]` in the solution.
1232 - `colorvec`: a color vector according to the SparseDiffTools.jl definition for the sparsity
1233 pattern of the `jac_prototype`. This specializes the Jacobian construction when using
1234 finite differences and automatic differentiation to be computed in an accelerated manner
1235 based on the sparsity pattern. Defaults to `nothing`, which means a color vector will be
1236 internally computed on demand when required. The cost of this operation is highly dependent
1237 on the sparsity pattern.
1238
1239 ## Note on the Derivative Definition
1240
1241 The derivatives, such as the Jacobian, are only defined on the `f1` portion of the split ODE.
1242 This is used to treat the `f1` portion implicitly while keeping the `f2` portion explicit.
1243
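As a minimal construction sketch (hypothetical in-place drift and diffusion functions; the positional order `SplitSDEFunction(f1, f2, g)` mirrors the fields listed below and is an assumption of this sketch):

```julia
f1!(du, u, p, t) = (du .= -u; nothing)            # stiff part of the drift, treated implicitly
f2!(du, u, p, t) = (du .= p .* sin(t); nothing)   # non-stiff part of the drift, treated explicitly
g!(du, u, p, t) = (du .= 0.1 .* u; nothing)       # diffusion coefficient
sfun = SplitSDEFunction(f1!, f2!, g!)
```
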
1244 ## iip: In-Place vs Out-Of-Place
1245
1246 For more details on this argument, see the ODEFunction documentation.
1247
1248 ## specialize: Controlling Compilation and Specialization
1249
1250 For more details on this argument, see the ODEFunction documentation.
1251
1252 ## Fields
1253
1254 The fields of the SplitSDEFunction type directly match the names of the inputs.
1255 """
1256 struct SplitSDEFunction{iip, specialize, F1, F2, G, TMM, C, Ta, Tt, TJ, JVP, VJP, JP, SP,
1257 TW,
1258 TWt, TPJ,
1259 O, TCV, SYS} <: AbstractSDEFunction{iip}
1260 f1::F1
1261 f2::F2
1262 g::G
1263 mass_matrix::TMM
1264 cache::C
1265 analytic::Ta
1266 tgrad::Tt
1267 jac::TJ
1268 jvp::JVP
1269 vjp::VJP
1270 jac_prototype::JP
1271 sparsity::SP
1272 Wfact::TW
1273 Wfact_t::TWt
1274 paramjac::TPJ
1275 observed::O
1276 colorvec::TCV
1277 sys::SYS
1278 end
1279
1280 TruncatedStacktraces.@truncate_stacktrace SplitSDEFunction 1 2
1281
1282 @doc doc"""
1283 $(TYPEDEF)
1284
1285 A representation of an SDE function `f` and `g`, defined by:
1286
1287 ```math
1288 M du = f(u,p,t) dt + g(u,p,t) dW_t
1289 ```
1290
1291 as a partitioned SDE:
1292
1293 ```math
1294 M_1 du = f_1(u,p,t) dt + g(u,p,t) dW_t
1295 M_2 du = f_2(u,p,t) dt + g(u,p,t) dW_t
1296 ```
1297
1298 and all of its related functions, such as the Jacobian of `f`, its gradient
1299 with respect to time, and more. For all cases, `u0` is the initial condition,
1300 `p` are the parameters, and `t` is the independent variable.
1301
1302 ## Constructor
1303
1304 ```julia
1305 DynamicalSDEFunction{iip,specialize}(f1,f2;
1306 mass_matrix = __has_mass_matrix(f) ? f.mass_matrix : I,
1307 analytic = __has_analytic(f) ? f.analytic : nothing,
1308 tgrad= __has_tgrad(f) ? f.tgrad : nothing,
1309 jac = __has_jac(f) ? f.jac : nothing,
1310 jvp = __has_jvp(f) ? f.jvp : nothing,
1311 vjp = __has_vjp(f) ? f.vjp : nothing,
1312 ggprime=nothing,
1313 jac_prototype = __has_jac_prototype(f) ? f.jac_prototype : nothing,
1314 sparsity = __has_sparsity(f) ? f.sparsity : jac_prototype,
1315 paramjac = __has_paramjac(f) ? f.paramjac : nothing,
1316 syms = nothing,
1317 indepsym= nothing,
1318 paramsyms = nothing,
1319 colorvec = __has_colorvec(f) ? f.colorvec : nothing,
1320 sys = __has_sys(f) ? f.sys : nothing)
1321 ```
1322
1323 Note that only the functions `f_i` themselves are required. These functions should
1324 be given as `f_i!(du,u,p,t)` or `du = f_i(u,p,t)`. See the section on `iip`
1325 for more details on in-place vs out-of-place handling.
1326
1327 All of the remaining functions are optional for improving or accelerating
1328 the usage of `f`. These include:
1329
1330 - `mass_matrix`: the mass matrix `M_i` represented in the ODE function. Can be used
1331 to determine that the equation is actually a differential-algebraic equation (DAE)
1332 if `M` is singular. Note that in this case special solvers are required, see the
1333 DAE solver page for more details: https://docs.sciml.ai/DiffEqDocs/stable/dae_solve/.
1334 Must be an AbstractArray or an AbstractSciMLOperator. Should be given as a tuple
1335 of mass matrices, i.e. `(M_1, M_2)` for the mass matrices of equations 1 and 2
1336 respectively.
1337 - `analytic(u0,p,t)`: used to pass an analytical solution function for the analytical
1338 solution of the ODE. Generally only used for testing and development of the solvers.
1339 - `tgrad(dT,u,p,t)` or `dT=tgrad(u,p,t)`: returns ``\frac{\partial f(u,p,t)}{\partial t}``
1340 - `jac(J,u,p,t)` or `J=jac(u,p,t)`: returns ``\frac{df}{du}``
1341 - `jvp(Jv,v,u,p,t)` or `Jv=jvp(v,u,p,t)`: returns the directional derivative ``\frac{df}{du} v``
1342 - `vjp(Jv,v,u,p,t)` or `Jv=vjp(v,u,p,t)`: returns the adjoint derivative ``\frac{df}{du}^\ast v``
1343 - `ggprime(J,u,p,t)` or `J = ggprime(u,p,t)`: returns the Milstein derivative
1344 ``\frac{dg(u,p,t)}{du} g(u,p,t)``
1345 - `jac_prototype`: a prototype matrix matching the type that matches the Jacobian. For example,
1346 if the Jacobian is tridiagonal, then an appropriately sized `Tridiagonal` matrix can be used
1347 as the prototype and integrators will specialize on this structure where possible. Non-structured
1348 sparsity patterns should use a `SparseMatrixCSC` with a correct sparsity pattern for the Jacobian.
1349 The default is `nothing`, which means a dense Jacobian.
1350 - `paramjac(pJ,u,p,t)`: returns the parameter Jacobian ``\frac{df}{dp}``.
1351 - `syms`: the symbol names for the elements of the equation. This should match `u0` in size. For
1352 example, if `u0 = [0.0,1.0]` and `syms = [:x, :y]`, this will apply a canonical naming to the
1353 values, allowing `sol[:x]` in the solution and automatically naming values in plots.
1354 - `indepsym`: the canonical naming for the independent variable. Defaults to nothing, which
1355 internally uses `t` as the representation in any plots.
1356 - `colorvec`: a color vector according to the SparseDiffTools.jl definition for the sparsity
1357 pattern of the `jac_prototype`. This specializes the Jacobian construction when using
1358 finite differences and automatic differentiation to be computed in an accelerated manner
1359 based on the sparsity pattern. Defaults to `nothing`, which means a color vector will be
1360 internally computed on demand when required. The cost of this operation is highly dependent
1361 on the sparsity pattern.
1362
1363 ## iip: In-Place vs Out-Of-Place
1364
1365 For more details on this argument, see the ODEFunction documentation.
1366
1367 ## specialize: Controlling Compilation and Specialization
1368
1369 For more details on this argument, see the ODEFunction documentation.
1370
1371 ## Fields
1372
1373 The fields of the DynamicalSDEFunction type directly match the names of the inputs.
1374 """
1375 struct DynamicalSDEFunction{iip, specialize, F1, F2, G, TMM, C, Ta, Tt, TJ, JVP, VJP, JP,
1376 SP,
1377 TW, TWt,
1378 TPJ, O, TCV, SYS} <: AbstractSDEFunction{iip}
1379 # This is a direct copy of the SplitSDEFunction, maybe it's not necessary and the above can be used instead.
1380 f1::F1
1381 f2::F2
1382 g::G
1383 mass_matrix::TMM
1384 cache::C
1385 analytic::Ta
1386 tgrad::Tt
1387 jac::TJ
1388 jvp::JVP
1389 vjp::VJP
1390 jac_prototype::JP
1391 sparsity::SP
1392 Wfact::TW
1393 Wfact_t::TWt
1394 paramjac::TPJ
1395 observed::O
1396 colorvec::TCV
1397 sys::SYS
1398 end
1399
1400 TruncatedStacktraces.@truncate_stacktrace DynamicalSDEFunction 1 2
1401
1402 """
1403 $(TYPEDEF)
1404 """
1405 abstract type AbstractRODEFunction{iip} <: AbstractDiffEqFunction{iip} end
1406
1407 @doc doc"""
1408 $(TYPEDEF)
1409
1410 A representation of a RODE function `f`, defined by:
1411
1412 ```math
1413 M \frac{du}{dt} = f(u,p,t,W)
1414 ```
1415
1416 and all of its related functions, such as the Jacobian of `f`, its gradient
1417 with respect to time, and more. For all cases, `u0` is the initial condition,
1418 `p` are the parameters, and `t` is the independent variable.
1419
1420 ## Constructor
1421
1422 ```julia
1423 RODEFunction{iip,specialize}(f;
1424 mass_matrix = __has_mass_matrix(f) ? f.mass_matrix : I,
1425 analytic = __has_analytic(f) ? f.analytic : nothing,
1426 tgrad= __has_tgrad(f) ? f.tgrad : nothing,
1427 jac = __has_jac(f) ? f.jac : nothing,
1428 jvp = __has_jvp(f) ? f.jvp : nothing,
1429 vjp = __has_vjp(f) ? f.vjp : nothing,
1430 jac_prototype = __has_jac_prototype(f) ? f.jac_prototype : nothing,
1431 sparsity = __has_sparsity(f) ? f.sparsity : jac_prototype,
1432 paramjac = __has_paramjac(f) ? f.paramjac : nothing,
1433 syms = nothing,
1434 indepsym= nothing,
1435 paramsyms = nothing,
1436 colorvec = __has_colorvec(f) ? f.colorvec : nothing,
1437 sys = __has_sys(f) ? f.sys : nothing,
1438 analytic_full = __has_analytic_full(f) ? f.analytic_full : false)
1439 ```
1440
1441 Note that only the function `f` itself is required. This function should
1442 be given as `f!(du,u,p,t,W)` or `du = f(u,p,t,W)`. See the section on `iip`
1443 for more details on in-place vs out-of-place handling.
1444
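For example, a minimal sketch with a hypothetical in-place right-hand side, where the current noise value `W` enters as an extra trailing argument:

```julia
# hypothetical RODE right-hand side du/dt = -u + W, in-place form
f!(du, u, p, t, W) = (du .= -u .+ W; nothing)
rf = RODEFunction(f!)
```
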
1445 All of the remaining functions are optional for improving or accelerating
1446 the usage of `f`. These include:
1447
1448 - `mass_matrix`: the mass matrix `M` represented in the RODE function. Can be used
1449 to determine that the equation is actually a random differential-algebraic equation (RDAE)
1450 if `M` is singular.
1451 - `analytic(u0,p,t,W)` or `analytic(sol)`: used to pass an analytical solution function for the analytical
1452 solution of the RODE. Generally only used for testing and development of the solvers.
1453 The exact form depends on the field `analytic_full`.
1454 - `analytic_full`: a boolean to indicate whether to use the form `analytic(u0,p,t,W)` (if `false`)
1455   or the form `analytic(sol)` (if `true`). The former is expected to return the solution `u(t)` of
1456   the equation, given the initial condition `u0`, the parameters `p`, the current time `t`, and the
1457   value `W=W(t)` of the noise at the given time `t`. The latter case is useful when the solution
1458   of the RODE depends on the whole history of the noise, which is available in `sol.W.W` at
1459   times `sol.W.t`. In this case, `analytic(sol)` must explicitly mutate the field `sol.u_analytic`
1460   with the corresponding expected solution at `sol.W.t` or `sol.t`.
1461 - `tgrad(dT,u,p,t,W)` or `dT=tgrad(u,p,t,W)`: returns ``\frac{\partial f(u,p,t,W)}{\partial t}``
1462 - `jac(J,u,p,t,W)` or `J=jac(u,p,t,W)`: returns ``\frac{df}{du}``
1463 - `jvp(Jv,v,u,p,t,W)` or `Jv=jvp(v,u,p,t,W)`: returns the directional derivative ``\frac{df}{du} v``
1464 - `vjp(Jv,v,u,p,t,W)` or `Jv=vjp(v,u,p,t,W)`: returns the adjoint derivative ``\frac{df}{du}^\ast v``
1465 - `jac_prototype`: a prototype matrix matching the type that matches the Jacobian. For example,
1466 if the Jacobian is tridiagonal, then an appropriately sized `Tridiagonal` matrix can be used
1467 as the prototype and integrators will specialize on this structure where possible. Non-structured
1468 sparsity patterns should use a `SparseMatrixCSC` with a correct sparsity pattern for the Jacobian.
1469 The default is `nothing`, which means a dense Jacobian.
1470 - `paramjac(pJ,u,p,t,W)`: returns the parameter Jacobian ``\frac{df}{dp}``.
1471 - `syms`: the symbol names for the elements of the equation. This should match `u0` in size. For
1472 example, if `u0 = [0.0,1.0]` and `syms = [:x, :y]`, this will apply a canonical naming to the
1473 values, allowing `sol[:x]` in the solution and automatically naming values in plots.
1474 - `indepsym`: the canonical naming for the independent variable. Defaults to nothing, which
1475 internally uses `t` as the representation in any plots.
1476 - `paramsyms`: the symbol names for the parameters of the equation. This should match `p` in
1477 size. For example, if `p = [0.0, 1.0]` and `paramsyms = [:a, :b]`, this will apply a canonical
1478 naming to the values, allowing `sol[:a]` in the solution.
1479 - `colorvec`: a color vector according to the SparseDiffTools.jl definition for the sparsity
1480 pattern of the `jac_prototype`. This specializes the Jacobian construction when using
1481 finite differences and automatic differentiation to be computed in an accelerated manner
1482 based on the sparsity pattern. Defaults to `nothing`, which means a color vector will be
1483 internally computed on demand when required. The cost of this operation is highly dependent
1484 on the sparsity pattern.
1485
1486 ## iip: In-Place vs Out-Of-Place
1487
1488 For more details on this argument, see the ODEFunction documentation.
1489
1490 ## specialize: Controlling Compilation and Specialization
1491
1492 For more details on this argument, see the ODEFunction documentation.
1493
1494 ## Fields
1495
1496 The fields of the RODEFunction type directly match the names of the inputs.
1497 """
1498 struct RODEFunction{iip, specialize, F, TMM, Ta, Tt, TJ, JVP, VJP, JP, SP, TW, TWt, TPJ, O, TCV, SYS,
1499 } <:
1500 AbstractRODEFunction{iip}
1501 f::F
1502 mass_matrix::TMM
1503 analytic::Ta
1504 tgrad::Tt
1505 jac::TJ
1506 jvp::JVP
1507 vjp::VJP
1508 jac_prototype::JP
1509 sparsity::SP
1510 Wfact::TW
1511 Wfact_t::TWt
1512 paramjac::TPJ
1513 observed::O
1514 colorvec::TCV
1515 sys::SYS
1516 analytic_full::Bool
1517 end
1518
1519 TruncatedStacktraces.@truncate_stacktrace RODEFunction 1 2
1520
1521 """
1522 $(TYPEDEF)
1523 """
1524 abstract type AbstractDAEFunction{iip} <: AbstractDiffEqFunction{iip} end
1525
1526 @doc doc"""
1527 $(TYPEDEF)
1528
1529 A representation of an implicit DAE function `f`, defined by:
1530
1531 ```math
1532 0 = f(\frac{du}{dt},u,p,t)
1533 ```
1534
1535 and all of its related functions, such as the Jacobian of `f`, its gradient
1536 with respect to time, and more. For all cases, `u0` is the initial condition,
1537 `p` are the parameters, and `t` is the independent variable.
1538
1539 ## Constructor
1540
1541 ```julia
1542 DAEFunction{iip,specialize}(f;
1543 analytic = __has_analytic(f) ? f.analytic : nothing,
1544 jac = __has_jac(f) ? f.jac : nothing,
1545 jvp = __has_jvp(f) ? f.jvp : nothing,
1546 vjp = __has_vjp(f) ? f.vjp : nothing,
1547 jac_prototype = __has_jac_prototype(f) ? f.jac_prototype : nothing,
1548 sparsity = __has_sparsity(f) ? f.sparsity : jac_prototype,
1549 syms = nothing,
1550 indepsym= nothing,
1551 paramsyms = nothing,
1552 colorvec = __has_colorvec(f) ? f.colorvec : nothing,
1553 sys = __has_sys(f) ? f.sys : nothing)
1554 ```
1555
1556 Note that only the function `f` itself is required. This function should
1557 be given as `f!(out,du,u,p,t)` or `out = f(du,u,p,t)`. See the section on `iip`
1558 for more details on in-place vs out-of-place handling.
1559
1560 All of the remaining functions are optional for improving or accelerating
1561 the usage of `f`. These include:
1562
1563 - `analytic(u0,p,t)`: used to pass an analytical solution function for the analytical
1564 solution of the ODE. Generally only used for testing and development of the solvers.
1565 - `jac(J,du,u,p,gamma,t)` or `J=jac(du,u,p,gamma,t)`: returns the implicit DAE Jacobian
1566   defined as ``\gamma \frac{dG}{d(du)} + \frac{dG}{du}``
1567 - `jvp(Jv,v,du,u,p,gamma,t)` or `Jv=jvp(v,du,u,p,gamma,t)`: returns the directional
1568   derivative ``\frac{df}{du} v``
1569 - `vjp(Jv,v,du,u,p,gamma,t)` or `Jv=vjp(v,du,u,p,gamma,t)`: returns the adjoint
1570   derivative ``\frac{df}{du}^\ast v``
1571 - `jac_prototype`: a prototype matrix matching the type that matches the Jacobian. For example,
1572 if the Jacobian is tridiagonal, then an appropriately sized `Tridiagonal` matrix can be used
1573 as the prototype and integrators will specialize on this structure where possible. Non-structured
1574 sparsity patterns should use a `SparseMatrixCSC` with a correct sparsity pattern for the Jacobian.
1575 The default is `nothing`, which means a dense Jacobian.
1576 - `syms`: the symbol names for the elements of the equation. This should match `u0` in size. For
1577 example, if `u0 = [0.0,1.0]` and `syms = [:x, :y]`, this will apply a canonical naming to the
1578 values, allowing `sol[:x]` in the solution and automatically naming values in plots.
1579 - `indepsym`: the canonical naming for the independent variable. Defaults to nothing, which
1580 internally uses `t` as the representation in any plots.
1581 - `paramsyms`: the symbol names for the parameters of the equation. This should match `p` in
1582 size. For example, if `p = [0.0, 1.0]` and `paramsyms = [:a, :b]`, this will apply a canonical
1583 naming to the values, allowing `sol[:a]` in the solution.
1584 - `colorvec`: a color vector according to the SparseDiffTools.jl definition for the sparsity
1585 pattern of the `jac_prototype`. This specializes the Jacobian construction when using
1586 finite differences and automatic differentiation to be computed in an accelerated manner
1587 based on the sparsity pattern. Defaults to `nothing`, which means a color vector will be
1588 internally computed on demand when required. The cost of this operation is highly dependent
1589 on the sparsity pattern.
1590
1591 ## iip: In-Place vs Out-Of-Place
1592
1593 For more details on this argument, see the ODEFunction documentation.
1594
1595 ## specialize: Controlling Compilation and Specialization
1596
1597 For more details on this argument, see the ODEFunction documentation.
1598
1599 ## Fields
1600
1601 The fields of the DAEFunction type directly match the names of the inputs.
1602
1603 ## Examples
1604
1605
1606 ### Declaring Explicit Jacobians for DAEs
1607
1608 For fully implicit ODEs (`DAEProblem`s), a slightly different Jacobian function
1609 is necessary. For the DAE
1610
1611 ```math
1612 G(du,u,p,t) = res
1613 ```
1614
1615 The Jacobian should be given in the form `gamma*dG/d(du) + dG/du`, where `gamma`
1616 is given by the solver. This means that the signature is:
1617
1618 ```julia
1619 f(J,du,u,p,gamma,t)
1620 ```
1621
1622 For example, for the equation
1623
1624 ```julia
1625 function testjac(res,du,u,p,t)
1626 res[1] = du[1] - 2.0 * u[1] + 1.2 * u[1] * u[2]
1627 res[2] = du[2] - 3 * u[2] - u[1] * u[2]
1628 end
1629 ```
1630
1631 we would define the Jacobian as:
1632
1633 ```julia
1634 function testjac(J,du,u,p,gamma,t)
1635 J[1,1] = gamma - 2.0 + 1.2 * u[2]
1636 J[1,2] = 1.2 * u[1]
1637 J[2,1] = - 1 * u[2]
1638 J[2,2] = gamma - 3 - u[1]
1639 nothing
1640 end
1641 ```
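
A minimal, self-contained usage sketch of wiring such a Jacobian into the constructor (using hypothetical distinct names `dae_resid!` and `dae_jac!`, so the residual and Jacobian are not methods of the same function):

```julia
function dae_resid!(res, du, u, p, t)
    res[1] = du[1] - 2.0 * u[1] + 1.2 * u[1] * u[2]
    res[2] = du[2] - 3.0 * u[2] - u[1] * u[2]
    nothing
end
function dae_jac!(J, du, u, p, gamma, t)
    J[1, 1] = gamma - 2.0 + 1.2 * u[2]
    J[1, 2] = 1.2 * u[1]
    J[2, 1] = -u[2]
    J[2, 2] = gamma - 3.0 - u[1]
    nothing
end
dae_f = DAEFunction(dae_resid!; jac = dae_jac!)
```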
1642
1643 ## Symbolically Generating the Functions
1644
1645 See the `modelingtoolkitize` function from
1646 [ModelingToolkit.jl](https://docs.sciml.ai/ModelingToolkit/stable/) for
1647 automatically symbolically generating the Jacobian and more from the
1648 numerically-defined functions.
1649 """
1650 struct DAEFunction{iip, specialize, F, Ta, Tt, TJ, JVP, VJP, JP, SP, TW, TWt, TPJ, O, TCV,
1651 SYS} <:
1652 AbstractDAEFunction{iip}
1653 f::F
1654 analytic::Ta
1655 tgrad::Tt
1656 jac::TJ
1657 jvp::JVP
1658 vjp::VJP
1659 jac_prototype::JP
1660 sparsity::SP
1661 Wfact::TW
1662 Wfact_t::TWt
1663 paramjac::TPJ
1664 observed::O
1665 colorvec::TCV
1666 sys::SYS
1667 end
1668
1669 TruncatedStacktraces.@truncate_stacktrace DAEFunction 1 2
1670
1671 """
1672 $(TYPEDEF)
1673 """
1674 abstract type AbstractSDDEFunction{iip} <: AbstractDiffEqFunction{iip} end
1675
1676 @doc doc"""
1677 $(TYPEDEF)
1678
1679 A representation of an SDDE function `f`, defined by:
1680
1681 ```math
1682 M du = f(u,h,p,t) dt + g(u,h,p,t) dW_t
1683 ```
1684
1685 and all of its related functions, such as the Jacobian of `f`, its gradient
1686 with respect to time, and more. For all cases, `u0` is the initial condition,
1687 `p` are the parameters, and `t` is the independent variable.
1688
1689 ## Constructor
1690
1691 ```julia
1692 SDDEFunction{iip,specialize}(f,g;
1693 mass_matrix = __has_mass_matrix(f) ? f.mass_matrix : I,
1694 analytic = __has_analytic(f) ? f.analytic : nothing,
1695 tgrad= __has_tgrad(f) ? f.tgrad : nothing,
1696 jac = __has_jac(f) ? f.jac : nothing,
1697 jvp = __has_jvp(f) ? f.jvp : nothing,
1698 vjp = __has_vjp(f) ? f.vjp : nothing,
1699 jac_prototype = __has_jac_prototype(f) ? f.jac_prototype : nothing,
1700 sparsity = __has_sparsity(f) ? f.sparsity : jac_prototype,
1701 paramjac = __has_paramjac(f) ? f.paramjac : nothing,
1702 syms = nothing,
1703 indepsym= nothing,
1704 paramsyms = nothing,
1705 colorvec = __has_colorvec(f) ? f.colorvec : nothing,
1706 sys = __has_sys(f) ? f.sys : nothing)
1707 ```
1708
1709 Note that only the functions `f` and `g` themselves are required. The drift `f` should
1710 be given as `f!(du,u,h,p,t)` or `du = f(u,h,p,t)`. See the section on `iip`
1711 for more details on in-place vs out-of-place handling. The history function
1712 `h` acts as an interpolator over time, i.e. `h(t)` with options matching
1713 the solution interface, i.e. `h(t; save_idxs = 2)`.
1714
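A minimal sketch (hypothetical drift and diffusion with a single fixed lag of `1.0`, using the `h(t)` interpolation convention described above):

```julia
f!(du, u, h, p, t) = (du .= h(t - 1.0) .- u; nothing)   # drift depends on the delayed state
g!(du, u, h, p, t) = (du .= 0.1 .* u; nothing)          # diffusion
sddefun = SDDEFunction(f!, g!)
```
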
1715 All of the remaining functions are optional for improving or accelerating
1716 the usage of `f`. These include:
1717
1718 - `mass_matrix`: the mass matrix `M` represented in the ODE function. Can be used
1719 to determine that the equation is actually a differential-algebraic equation (DAE)
1720 if `M` is singular. Note that in this case special solvers are required, see the
1721 DAE solver page for more details: https://docs.sciml.ai/DiffEqDocs/stable/solvers/dae_solve/.
1722 Must be an AbstractArray or an AbstractSciMLOperator.
1723 - `analytic(u0,p,t)`: used to pass an analytical solution function for the analytical
1724 solution of the ODE. Generally only used for testing and development of the solvers.
1725 - `tgrad(dT,u,h,p,t)` or `dT=tgrad(u,h,p,t)`: returns ``\frac{\partial f(u,h,p,t)}{\partial t}``
1726 - `jac(J,u,h,p,t)` or `J=jac(u,h,p,t)`: returns ``\frac{df}{du}``
1727 - `jvp(Jv,v,h,u,p,t)` or `Jv=jvp(v,h,u,p,t)`: returns the directional derivative ``\frac{df}{du} v``
1728 - `vjp(Jv,v,h,u,p,t)` or `Jv=vjp(v,h,u,p,t)`: returns the adjoint derivative ``\frac{df}{du}^\ast v``
1729 - `jac_prototype`: a prototype matrix matching the type that matches the Jacobian. For example,
1730 if the Jacobian is tridiagonal, then an appropriately sized `Tridiagonal` matrix can be used
1731 as the prototype and integrators will specialize on this structure where possible. Non-structured
1732 sparsity patterns should use a `SparseMatrixCSC` with a correct sparsity pattern for the Jacobian.
1733 The default is `nothing`, which means a dense Jacobian.
1734 - `paramjac(pJ,h,u,p,t)`: returns the parameter Jacobian ``\frac{df}{dp}``.
1735 - `syms`: the symbol names for the elements of the equation. This should match `u0` in size. For
1736 example, if `u0 = [0.0,1.0]` and `syms = [:x, :y]`, this will apply a canonical naming to the
1737 values, allowing `sol[:x]` in the solution and automatically naming values in plots.
1738 - `indepsym`: the canonical naming for the independent variable. Defaults to nothing, which
1739 internally uses `t` as the representation in any plots.
1740 - `paramsyms`: the symbol names for the parameters of the equation. This should match `p` in
1741 size. For example, if `p = [0.0, 1.0]` and `paramsyms = [:a, :b]`, this will apply a canonical
1742 naming to the values, allowing `sol[:a]` in the solution.
1743 - `colorvec`: a color vector according to the SparseDiffTools.jl definition for the sparsity
1744 pattern of the `jac_prototype`. This specializes the Jacobian construction when using
1745 finite differences and automatic differentiation to be computed in an accelerated manner
1746 based on the sparsity pattern. Defaults to `nothing`, which means a color vector will be
1747 internally computed on demand when required. The cost of this operation is highly dependent
1748 on the sparsity pattern.
1749
1750 ## iip: In-Place vs Out-Of-Place
1751
1752 For more details on this argument, see the ODEFunction documentation.
1753
1754 ## specialize: Controlling Compilation and Specialization
1755
1756 For more details on this argument, see the ODEFunction documentation.
1757
1758 ## Fields
1759
1760 The fields of the SDDEFunction type directly match the names of the inputs.
1761 """
1762 struct SDDEFunction{iip, specialize, F, G, TMM, Ta, Tt, TJ, JVP, VJP, JP, SP, TW, TWt, TPJ,
1763 GG, O,
1764 TCV, SYS} <: AbstractSDDEFunction{iip}
1765 f::F
1766 g::G
1767 mass_matrix::TMM
1768 analytic::Ta
1769 tgrad::Tt
1770 jac::TJ
1771 jvp::JVP
1772 vjp::VJP
1773 jac_prototype::JP
1774 sparsity::SP
1775 Wfact::TW
1776 Wfact_t::TWt
1777 paramjac::TPJ
1778 ggprime::GG
1779 observed::O
1780 colorvec::TCV
1781 sys::SYS
1782 end
1783
1784 TruncatedStacktraces.@truncate_stacktrace SDDEFunction 1 2
1785
1786 """
1787 $(TYPEDEF)
1788 """
1789 abstract type AbstractNonlinearFunction{iip} <: AbstractSciMLFunction{iip} end
1790
1791 @doc doc"""
1792 $(TYPEDEF)
1793
1794 A representation of a nonlinear system of equations `f`, defined by:
1795
1796 ```math
1797 0 = f(u,p)
1798 ```
1799
1800 and all of its related functions, such as the Jacobian of `f`, its parameter
1801 Jacobian, and more. For all cases, `u0` is the initial guess for the root and
1802 `p` are the parameters.
1803
1804 ## Constructor
1805
1806 ```julia
1807 NonlinearFunction{iip, specialize}(f;
1808 analytic = __has_analytic(f) ? f.analytic : nothing,
1809 jac = __has_jac(f) ? f.jac : nothing,
1810 jvp = __has_jvp(f) ? f.jvp : nothing,
1811 vjp = __has_vjp(f) ? f.vjp : nothing,
1812 jac_prototype = __has_jac_prototype(f) ? f.jac_prototype : nothing,
1813 sparsity = __has_sparsity(f) ? f.sparsity : jac_prototype,
1814 paramjac = __has_paramjac(f) ? f.paramjac : nothing,
1815 syms = nothing,
1816 paramsyms = nothing,
1817 colorvec = __has_colorvec(f) ? f.colorvec : nothing,
1818 sys = __has_sys(f) ? f.sys : nothing)
1819 ```
1820
1821 Note that only the function `f` itself is required. This function should
1822 be given as `f!(du,u,p)` or `du = f(u,p)`. See the section on `iip`
1823 for more details on in-place vs out-of-place handling.
1824
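For example, a minimal sketch with a hypothetical out-of-place residual (`NonlinearProblem` is defined elsewhere in SciMLBase):

```julia
f(u, p) = u .* u .- p                          # residual; roots at u = ±sqrt(p)
nf = NonlinearFunction(f)
prob = NonlinearProblem(nf, [1.0, 1.0], 2.0)   # initial guess and parameter
```
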
1825 All of the remaining functions are optional for improving or accelerating
1826 the usage of `f`. These include:
1827
1828 - `analytic(u0,p)`: used to pass an analytical solution function for the analytical
1829   solution of the nonlinear system. Generally only used for testing and development of the solvers.
1830 - `jac(J,u,p)` or `J=jac(u,p)`: returns ``\frac{df}{du}``
1831 - `jvp(Jv,v,u,p)` or `Jv=jvp(v,u,p)`: returns the directional derivative ``\frac{df}{du} v``
1832 - `vjp(Jv,v,u,p)` or `Jv=vjp(v,u,p)`: returns the adjoint derivative ``\frac{df}{du}^\ast v``
1833 - `jac_prototype`: a prototype matrix matching the type that matches the Jacobian. For example,
1834 if the Jacobian is tridiagonal, then an appropriately sized `Tridiagonal` matrix can be used
1835 as the prototype and integrators will specialize on this structure where possible. Non-structured
1836 sparsity patterns should use a `SparseMatrixCSC` with a correct sparsity pattern for the Jacobian.
1837 The default is `nothing`, which means a dense Jacobian.
1838 - `paramjac(pJ,u,p)`: returns the parameter Jacobian ``\frac{df}{dp}``.
1839 - `syms`: the symbol names for the elements of the equation. This should match `u0` in size. For
1840 example, if `u0 = [0.0,1.0]` and `syms = [:x, :y]`, this will apply a canonical naming to the
1841 values, allowing `sol[:x]` in the solution and automatically naming values in plots.
1842 - `paramsyms`: the symbol names for the parameters of the equation. This should match `p` in
1843 size. For example, if `p = [0.0, 1.0]` and `paramsyms = [:a, :b]`, this will apply a canonical
1844 naming to the values, allowing `sol[:a]` in the solution.
1845 - `colorvec`: a color vector according to the SparseDiffTools.jl definition for the sparsity
1846 pattern of the `jac_prototype`. This specializes the Jacobian construction when using
1847 finite differences and automatic differentiation to be computed in an accelerated manner
1848 based on the sparsity pattern. Defaults to `nothing`, which means a color vector will be
1849 internally computed on demand when required. The cost of this operation is highly dependent
1850 on the sparsity pattern.
1851
1852 ## iip: In-Place vs Out-Of-Place
1853
1854 For more details on this argument, see the ODEFunction documentation.
1855
1856 ## specialize: Controlling Compilation and Specialization
1857
1858 For more details on this argument, see the ODEFunction documentation.
1859
1860 ## Fields
1861
1862 The fields of the NonlinearFunction type directly match the names of the inputs.
1863 """
1864 struct NonlinearFunction{iip, specialize, F, TMM, Ta, Tt, TJ, JVP, VJP, JP, SP, TW, TWt,
1865 TPJ, O, TCV, SYS, RP} <: AbstractNonlinearFunction{iip}
1866 f::F
1867 mass_matrix::TMM
1868 analytic::Ta
1869 tgrad::Tt
1870 jac::TJ
1871 jvp::JVP
1872 vjp::VJP
1873 jac_prototype::JP
1874 sparsity::SP
1875 Wfact::TW
1876 Wfact_t::TWt
1877 paramjac::TPJ
1878 observed::O
1879 colorvec::TCV
1880 sys::SYS
1881 resid_prototype::RP
1882 end
1883
1884 TruncatedStacktraces.@truncate_stacktrace NonlinearFunction 1 2
1885
1886 """
1887 $(TYPEDEF)
1888 """
1889 abstract type AbstractIntervalNonlinearFunction{iip} <: AbstractSciMLFunction{iip} end
1890
1891 @doc doc"""
1892 $(TYPEDEF)
1893
1894 A representation of an interval nonlinear system of equations `f`, defined by:
1895
1896 ```math
1897 f(t,p) = u = 0
1898 ```
1899
1900 and all of its related functions. For all cases, `p` are the parameters and `t` is the
1901 interval variable.
1902
1903 ## Constructor
1904
1905 ```julia
1906 IntervalNonlinearFunction{iip, specialize}(f;
1907 analytic = __has_analytic(f) ? f.analytic : nothing,
1908 syms = nothing,
1909 paramsyms = nothing,
1910 sys = __has_sys(f) ? f.sys : nothing)
1911 ```
1912
1913 Note that only the function `f` itself is required. This function should
1914 be given as `f!(u,t,p)` or `u = f(t,p)`. See the section on `iip`
1915 for more details on in-place vs out-of-place handling.
1916
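For example, a minimal sketch of a hypothetical scalar bracketing problem (`IntervalNonlinearProblem` is defined elsewhere in SciMLBase):

```julia
f(t, p) = t^2 - p                                        # root at sqrt(p)
inlf = IntervalNonlinearFunction(f)
prob = IntervalNonlinearProblem(inlf, (1.0, 2.0), 2.0)   # search interval and parameter
```
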
1917 All of the remaining functions are optional for improving or accelerating
1918 the usage of `f`. These include:
1919
1920 - `analytic(p)`: used to pass an analytical solution function for the analytical
1921   solution of the interval nonlinear problem. Generally only used for testing and development of the solvers.
1922 - `syms`: the symbol names for the elements of the equation. This should match `u0` in size. For
1923 example, if `u0 = [0.0,1.0]` and `syms = [:x, :y]`, this will apply a canonical naming to the
1924 values, allowing `sol[:x]` in the solution and automatically naming values in plots.
1925 - `paramsyms`: the symbol names for the parameters of the equation. This should match `p` in
1926 size. For example, if `p = [0.0, 1.0]` and `paramsyms = [:a, :b]`, this will apply a canonical
1927 naming to the values, allowing `sol[:a]` in the solution.
1928
1929 ## iip: In-Place vs Out-Of-Place
1930
1931 For more details on this argument, see the ODEFunction documentation.
1932
1933 ## specialize: Controlling Compilation and Specialization
1934
1935 For more details on this argument, see the ODEFunction documentation.
1936
1937 ## Fields
1938
1939 The fields of the IntervalNonlinearFunction type directly match the names of the inputs.
1940 """
1941 struct IntervalNonlinearFunction{iip, specialize, F, Ta,
1942 O, SYS,
1943 } <: AbstractIntervalNonlinearFunction{iip}
1944 f::F
1945 analytic::Ta
1946 observed::O
1947 sys::SYS
1948 end
1949
1950 TruncatedStacktraces.@truncate_stacktrace IntervalNonlinearFunction 1 2
1951
1952 """
1953 $(TYPEDEF)
1954
1955 A representation of an objective function `f`, defined by:
1956
1957 ```math
1958 \\min_{u} f(u,p)
1959 ```
1960
1961 and all of its related functions, such as the gradient of `f`, its Hessian,
1962 and more. For all cases, `u` is the state and `p` are the parameters.
1963
1964 ## Constructor
1965
1966 ```julia
1967 OptimizationFunction{iip}(f, adtype::AbstractADType = NoAD();
1968 grad = nothing, hess = nothing, hv = nothing,
1969 cons = nothing, cons_j = nothing, cons_h = nothing,
1970 hess_prototype = nothing,
1971 cons_jac_prototype = nothing,
1972 cons_hess_prototype = nothing,
1973 syms = nothing,
1974 paramsyms = nothing,
1975 observed = __has_observed(f) ? f.observed : DEFAULT_OBSERVED_NO_TIME,
1976 lag_h = nothing,
1977 hess_colorvec = __has_colorvec(f) ? f.colorvec : nothing,
1978 cons_jac_colorvec = __has_colorvec(f) ? f.colorvec : nothing,
1979 cons_hess_colorvec = __has_colorvec(f) ? f.colorvec : nothing,
1980 lag_hess_colorvec = nothing,
1981 sys = __has_sys(f) ? f.sys : nothing)
1982 ```
1983
1984 ## Positional Arguments
1985
1986 - `f(u,p,args...)`: the function to optimize. `u` are the optimization variables and `p` are parameters used in the definition of
1987   the objective; even if no such parameters are used in the objective, `p` should still be an argument of the function. `f` can also take
1988   any additional arguments that are relevant to the objective function, for example minibatches used in machine learning;
1989   see the minibatching tutorial [here](https://docs.sciml.ai/Optimization/stable/tutorials/minibatch/). It should return
1990   the scalar loss value as its first output, and if any additional outputs are returned, they will be passed to the `callback`
1991   function described in [Callback Functions](https://docs.sciml.ai/Optimization/stable/API/solve/#Common-Solver-Options-(Solve-Keyword-Arguments)).
1992 - `adtype`: see the Defining Optimization Functions via AD section below.
1993
1994 ## Keyword Arguments
1995
1996 - `grad(G,u,p)` or `G=grad(u,p)`: the gradient of `f` with respect to `u`. If `f` takes additional arguments
1997 then `grad(G,u,p,args...)` or `G=grad(u,p,args...)` should be used.
1998 - `hess(H,u,p)` or `H=hess(u,p)`: the Hessian of `f` with respect to `u`. If `f` takes additional arguments
1999 then `hess(H,u,p,args...)` or `H=hess(u,p,args...)` should be used.
2000 - `hv(Hv,u,v,p)` or `Hv=hv(u,v,p)`: the Hessian-vector product ``(d^2 f / du^2) v``. If `f` takes additional arguments
2001 then `hv(Hv,u,v,p,args...)` or `Hv=hv(u,v,p, args...)` should be used.
2002 - `cons(res,x,p)` or `res=cons(x,p)`: the constraints function. It should mutate the passed `res` array
2003   with the value of the `i`th constraint at index `i`, evaluated at the current values of the variables
2004   inside the optimization routine. This supplies only the function evaluations;
2005   the equality or inequality assertion is applied by the solver based on the constraint
2006   bounds passed as `lcons` and `ucons` to [`OptimizationProblem`](@ref). In the case of equality
2007   constraints, `lcons` and `ucons` should be passed equal values.
2008 - `cons_j(J,x,p)` or `J=cons_j(x,p)`: the Jacobian of the constraints.
2009 - `cons_h(H,x,p)` or `H=cons_h(x,p)`: the Hessian of the constraints, provided as
2010   an array of Hessians with `H[i]` being the Hessian with respect to the `i`th output of `cons`.
2011 - `hess_prototype`: a prototype matrix matching the type that matches the Hessian. For example,
2012   if the Hessian is tridiagonal, then an appropriately sized `Tridiagonal` matrix can be used
2013 as the prototype and optimization solvers will specialize on this structure where possible. Non-structured
2014 sparsity patterns should use a `SparseMatrixCSC` with a correct sparsity pattern for the Hessian.
2015 The default is `nothing`, which means a dense Hessian.
2016 - `cons_jac_prototype`: a prototype matrix matching the type that matches the constraint Jacobian.
2017 The default is `nothing`, which means a dense constraint Jacobian.
2018 - `cons_hess_prototype`: a prototype matrix matching the type that matches the constraint Hessian.
2019 This is defined as an array of matrices, where `hess[i]` is the Hessian w.r.t. the `i`th output.
2020 For example, if the Hessian is sparse, then `hess` is a `Vector{SparseMatrixCSC}`.
2021 The default is `nothing`, which means a dense constraint Hessian.
2022 - `lag_h(res,x,sigma,mu,p)` or `res=lag_h(x,sigma,mu,p)`: the Hessian of the Lagrangian,
2023 where `sigma` is a multiplier of the cost function and `mu` are the Lagrange multipliers
2024 multiplying the constraints. This can be provided instead of `hess` and `cons_h`
2025 to solvers that directly use the Hessian of the Lagrangian.
2026 - `hess_colorvec`: a color vector according to the SparseDiffTools.jl definition for the sparsity
2027 pattern of the `hess_prototype`. This specializes the Hessian construction when using
2028 finite differences and automatic differentiation to be computed in an accelerated manner
2029 based on the sparsity pattern. Defaults to `nothing`, which means a color vector will be
2030 internally computed on demand when required. The cost of this operation is highly dependent
2031 on the sparsity pattern.
2032 - `cons_jac_colorvec`: a color vector according to the SparseDiffTools.jl definition for the sparsity
2033 pattern of the `cons_jac_prototype`.
2034 - `cons_hess_colorvec`: an array of color vectors according to the SparseDiffTools.jl definition for
2035 the sparsity pattern of the `cons_hess_prototype`.
2036
2037 When the [Symbolic Problem Building with ModelingToolkit](https://docs.sciml.ai/Optimization/stable/tutorials/symbolic/) interface is used, the following arguments are also relevant:
2038
2039 - `syms`: the symbol names for the elements of the equation. This should match `u0` in size. For
2040 example, if `u = [0.0,1.0]` and `syms = [:x, :y]`, this will apply a canonical naming to the
2041 values, allowing `sol[:x]` in the solution and automatically naming values in plots.
2042 - `paramsyms`: the symbol names for the parameters of the equation. This should match `p` in
2043 size. For example, if `p = [0.0, 1.0]` and `paramsyms = [:a, :b]`, this will apply a canonical
2044 naming to the values, allowing `sol[:a]` in the solution.
2045 - `observed`: an algebraic combination of optimization variables that is of interest to the user
2046   and will be available in the solution. This can be a single expression or multiple expressions.
2047 - `sys`: field that stores the `OptimizationSystem`.
2048
2049 ## Defining Optimization Functions via AD
2050
2051 While using the keyword arguments gives the user control over defining
2052 all of the possible functions, the simplest way to handle the generation
2053 of an `OptimizationFunction` is by specifying the `adtype`, which lets the user choose the
2054 automatic differentiation backend to use for automatically filling in all of the extra functions.
2055 For example,
2056
2057 ```julia
2058 OptimizationFunction(f,AutoForwardDiff())
2059 ```
2060
2061 will use [ForwardDiff.jl](https://github.com/JuliaDiff/ForwardDiff.jl) to define
2062 all of the necessary functions. Note that if any functions are defined
2063 directly, the auto-AD definition does not overwrite the user's choice.
2064
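A minimal end-to-end sketch (assuming Optimization.jl and OptimizationOptimJL.jl are available for `solve` and `BFGS`; those packages are not part of this file):

```julia
using Optimization, OptimizationOptimJL

rosenbrock(u, p) = (p[1] - u[1])^2 + p[2] * (u[2] - u[1]^2)^2
optf = OptimizationFunction(rosenbrock, AutoForwardDiff())   # gradient/Hessian filled in via ForwardDiff
prob = OptimizationProblem(optf, zeros(2), [1.0, 100.0])
sol = solve(prob, BFGS())
```
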
2065 Each of the AD-based constructors is documented separately via its
2066 own dispatch below in the [Automatic Differentiation Construction Choice Recommendations](@ref) section.
2067
2068 ## iip: In-Place vs Out-Of-Place
2069
2070 For more details on this argument, see the ODEFunction documentation.
2071
2072 ## specialize: Controlling Compilation and Specialization
2073
2074 For more details on this argument, see the ODEFunction documentation.
2075
2076 ## Fields
2077
2078 The fields of the OptimizationFunction type directly match the names of the inputs.
2079 """
2080 struct OptimizationFunction{iip, AD, F, G, H, HV, C, CJ, CH, HP, CJP, CHP, O,
2081 EX, CEX, SYS, LH, LHP, HCV, CJCV, CHCV, LHCV} <:
2082 AbstractOptimizationFunction{iip}
2083 f::F
2084 adtype::AD
2085 grad::G
2086 hess::H
2087 hv::HV
2088 cons::C
2089 cons_j::CJ
2090 cons_h::CH
2091 hess_prototype::HP
2092 cons_jac_prototype::CJP
2093 cons_hess_prototype::CHP
2094 observed::O
2095 expr::EX
2096 cons_expr::CEX
2097 sys::SYS
2098 lag_h::LH
2099 lag_hess_prototype::LHP
2100 hess_colorvec::HCV
2101 cons_jac_colorvec::CJCV
2102 cons_hess_colorvec::CHCV
2103 lag_hess_colorvec::LHCV
2104 end
2105
2106 TruncatedStacktraces.@truncate_stacktrace OptimizationFunction 1 2
2107
2108 """
2109 $(TYPEDEF)
2110 """
2111 abstract type AbstractBVPFunction{iip, twopoint} <: AbstractDiffEqFunction{iip} end
2112
2113 @doc doc"""
2114 $(TYPEDEF)
2115
2116 A representation of a BVP function `f`, defined by:
2117
2118 ```math
2119 \frac{du}{dt}=f(u,p,t)
2120 ```
2121
2122 and the boundary conditions:
2123
2124 ```math
2125 0 = bc(u,p,t)
2126 ```
2127
2128 and all of its related functions, such as the Jacobian of `f`, its gradient
2129 with respect to time, and more. For all cases, `u0` is the initial condition,
2130 `p` are the parameters, and `t` is the independent variable.
2131
2132 ```julia
2133 BVPFunction{iip,specialize}(f, bc;
2134 mass_matrix = __has_mass_matrix(f) ? f.mass_matrix : I,
2135 analytic = __has_analytic(f) ? f.analytic : nothing,
2136 tgrad= __has_tgrad(f) ? f.tgrad : nothing,
2137 jac = __has_jac(f) ? f.jac : nothing,
2138 bcjac = __has_jac(bc) ? bc.jac : nothing,
2139 jvp = __has_jvp(f) ? f.jvp : nothing,
2140 vjp = __has_vjp(f) ? f.vjp : nothing,
2141 jac_prototype = __has_jac_prototype(f) ? f.jac_prototype : nothing,
2142 bcjac_prototype = __has_jac_prototype(bc) ? bc.jac_prototype : nothing,
2143 sparsity = __has_sparsity(f) ? f.sparsity : jac_prototype,
2144 paramjac = __has_paramjac(f) ? f.paramjac : nothing,
2145 syms = nothing,
2146 indepsym= nothing,
2147 paramsyms = nothing,
2148 colorvec = __has_colorvec(f) ? f.colorvec : nothing,
2149 bccolorvec = __has_colorvec(bc) ? bc.colorvec : nothing,
2150 sys = __has_sys(f) ? f.sys : nothing)
2151 ```
2152
2153 Note that both the function `f` and the boundary condition `bc` are required. `f` should
2154 be given as `f!(du,u,p,t)` or `du = f(u,p,t)`. `bc` should be given as `bc(res, u, p, t)`.
2155 See the section on `iip` for more details on in-place vs out-of-place handling.
2156
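A minimal sketch of a hypothetical two-point problem (the boundary-condition residual indexes the solution object as in BoundaryValueDiffEq-style solvers, which is an assumption beyond this docstring):

```julia
function f!(du, u, p, t)      # u'' = -u written as a first-order system
    du[1] = u[2]
    du[2] = -u[1]
    nothing
end
function bc!(res, u, p, t)    # u_1(t0) = 0 and u_1(t1) = 1
    res[1] = u[1][1]
    res[2] = u[end][1] - 1.0
    nothing
end
bvpf = BVPFunction(f!, bc!)
```
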
2157 All of the remaining functions are optional for improving or accelerating
2158 the usage of `f` and `bc`. These include:
2159
2160 - `mass_matrix`: the mass matrix `M` represented in the BVP function. Can be used
2161   to determine that the equation is actually a BVP for a differential-algebraic equation (DAE)
2162 if `M` is singular.
2163 - `analytic(u0,p,t)`: used to pass an analytical solution function for the analytical
2164 solution of the BVP. Generally only used for testing and development of the solvers.
2165 - `tgrad(dT,u,p,t)` or `dT=tgrad(u,p,t)`: returns ``\frac{\partial f(u,p,t)}{\partial t}``
2166 - `jac(J,du,u,p,gamma,t)` or `J=jac(du,u,p,gamma,t)`: returns ``\frac{df}{du}``
2167 - `bcjac(J,du,u,p,gamma,t)` or `J=bcjac(du,u,p,gamma,t)`: returns ``\frac{dbc}{du}``
2168 - `jvp(Jv,v,du,u,p,gamma,t)` or `Jv=jvp(v,du,u,p,gamma,t)`: returns the directional
2169   derivative ``\frac{df}{du} v``
2170 - `vjp(Jv,v,du,u,p,gamma,t)` or `Jv=vjp(v,du,u,p,gamma,t)`: returns the adjoint
2171   derivative ``\frac{df}{du}^\ast v``
2172 - `jac_prototype`: a prototype matrix matching the type that matches the Jacobian. For example,
2173 if the Jacobian is tridiagonal, then an appropriately sized `Tridiagonal` matrix can be used
2174 as the prototype and integrators will specialize on this structure where possible. Non-structured
2175 sparsity patterns should use a `SparseMatrixCSC` with a correct sparsity pattern for the Jacobian.
2176 The default is `nothing`, which means a dense Jacobian.
2177 - `bcjac_prototype`: a prototype matrix matching the type that matches the Jacobian. For example,
2178 if the Jacobian is tridiagonal, then an appropriately sized `Tridiagonal` matrix can be used
2179 as the prototype and integrators will specialize on this structure where possible. Non-structured
2180 sparsity patterns should use a `SparseMatrixCSC` with a correct sparsity pattern for the Jacobian.
2181 The default is `nothing`, which means a dense Jacobian.
2182 - `paramjac(pJ,u,p,t)`: returns the parameter Jacobian ``\frac{df}{dp}``.
2183 - `syms`: the symbol names for the elements of the equation. This should match `u0` in size. For
2184 example, if `u0 = [0.0,1.0]` and `syms = [:x, :y]`, this will apply a canonical naming to the
2185 values, allowing `sol[:x]` in the solution and automatically naming values in plots.
2186 - `indepsym`: the canonical naming for the independent variable. Defaults to nothing, which
2187 internally uses `t` as the representation in any plots.
2188 - `paramsyms`: the symbol names for the parameters of the equation. This should match `p` in
2189 size. For example, if `p = [0.0, 1.0]` and `paramsyms = [:a, :b]`, this will apply a canonical
2190 naming to the values, allowing `sol[:a]` in the solution.
2191 - `colorvec`: a color vector according to the SparseDiffTools.jl definition for the sparsity
2192 pattern of the `jac_prototype`. This specializes the Jacobian construction when using
2193 finite differences and automatic differentiation to be computed in an accelerated manner
2194 based on the sparsity pattern. Defaults to `nothing`, which means a color vector will be
2195 internally computed on demand when required. The cost of this operation is highly dependent
2196 on the sparsity pattern.
2197 - `bccolorvec`: a color vector according to the SparseDiffTools.jl definition for the sparsity
2198 pattern of the `bcjac_prototype`. This specializes the Jacobian construction when using
2199 finite differences and automatic differentiation to be computed in an accelerated manner
2200 based on the sparsity pattern. Defaults to `nothing`, which means a color vector will be
2201 internally computed on demand when required. The cost of this operation is highly dependent
2202 on the sparsity pattern.
2203
2204 ## iip: In-Place vs Out-Of-Place
2205
2206 For more details on this argument, see the ODEFunction documentation.
2207
2208 ## specialize: Controlling Compilation and Specialization
2209
2210 For more details on this argument, see the ODEFunction documentation.
2211
2212 ## Fields
2213
2214 The fields of the BVPFunction type directly match the names of the inputs.
2215 """
2216 struct BVPFunction{iip, specialize, twopoint, F, BF, TMM, Ta, Tt, TJ, BCTJ, JVP, VJP,
2217 JP, BCJP, BCRP, SP, TW, TWt, TPJ, O, TCV, BCTCV,
2218 SYS} <: AbstractBVPFunction{iip, twopoint}
2219 f::F
2220 bc::BF
2221 mass_matrix::TMM
2222 analytic::Ta
2223 tgrad::Tt
2224 jac::TJ
2225 bcjac::BCTJ
2226 jvp::JVP
2227 vjp::VJP
2228 jac_prototype::JP
2229 bcjac_prototype::BCJP
2230 bcresid_prototype::BCRP
2231 sparsity::SP
2232 Wfact::TW
2233 Wfact_t::TWt
2234 paramjac::TPJ
2235 observed::O
2236 colorvec::TCV
2237 bccolorvec::BCTCV
2238 sys::SYS
2239 end
2240
2241 TruncatedStacktraces.@truncate_stacktrace BVPFunction 1 2
2242
2243 @doc doc"""
2244 IntegralFunction{iip,specialize,F,T} <: AbstractIntegralFunction{iip}
2245
2246 A representation of an integrand `f` defined by:
2247
2248 ```math
2249 f(u, p)
2250 ```
2251
2252 For an in-place form of `f` see the `iip` section below for details on in-place or
2253 out-of-place handling.
2254
2255 ```julia
2256 IntegralFunction{iip,specialize}(f, [integrand_prototype])
2257 ```
2258
2259 Note that only `f` is required; in the case of in-place integrands, a mutable container
2260 `integrand_prototype` must also be supplied to store the result of the integrand. If `integrand_prototype` is
2261 present, `f` is interpreted as in-place, and otherwise `f` is assumed to be out-of-place.
2262
2263 ## iip: In-Place vs Out-Of-Place
2264
2265 Out-of-place functions must be of the form ``y = f(u, p)`` and in-place functions of the form
2266 ``f(y, u, p)``. Since `f` is allowed to return any type (e.g. real or complex numbers or
2267 arrays), in-place functions must provide a container `integrand_prototype` that is of the
2268 right type for the variable ``y``, and the result is written to this container in-place.
2269 When in-place forms are used, in-place array operations, i.e. broadcasting, may be used by
2270 algorithms to reduce allocations. If `integrand_prototype` is not provided, `f` is assumed
2271 to be out-of-place and quadrature is performed assuming immutable return types.
2272
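For instance, a minimal sketch of both forms (hypothetical integrands):

```julia
f(u, p) = sin(p * u)                          # out-of-place, scalar output
intf = IntegralFunction(f)

f!(y, u, p) = (y .= sin.(p .* u); nothing)    # in-place, writes into `y`
intf_iip = IntegralFunction(f!, zeros(1))     # prototype fixes the output type and size
```
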
2273 ## specialize
2274
2275 This field is currently unused
2276
2277 ## Fields
2278
2279 The fields of the IntegralFunction type directly match the names of the inputs.
2280 """
2281 struct IntegralFunction{iip, specialize, F, T} <:
2282 AbstractIntegralFunction{iip}
2283 f::F
2284 integrand_prototype::T
2285 end
2286
2287 TruncatedStacktraces.@truncate_stacktrace IntegralFunction 1 2
2288
2289 @doc doc"""
2290 BatchIntegralFunction{iip,specialize,F,T} <: AbstractIntegralFunction{iip}
2291
2292 A representation of an integrand `f` that can be evaluated at multiple points simultaneously
2293 using threads, the GPU, or distributed memory, defined by:
2294
2295 ```math
2296 y = f(u, p)
2297 ```
2298
2299 ``u`` is a vector whose elements correspond to distinct evaluation points for `f`. The
2300 output of `f` must be returned as an array whose last ("batching") dimension corresponds to integrand
2301 evaluations at the different points in ``u``. In general, the integration algorithm is
2302 allowed to vary the number of evaluation points between subsequent calls to `f`.
2303
2304 For an in-place form of `f` see the `iip` section below for details on in-place or
2305 out-of-place handling.
2306
2307 ```julia
2308 BatchIntegralFunction{iip,specialize}(f, [integrand_prototype];
2309 max_batch=typemax(Int))
2310 ```
2311 Note that only `f` is required; in the case of in-place integrands, a mutable container
2312 `integrand_prototype` must also be supplied to store a batch of integrand evaluations, with a last "batching"
2313 dimension.
2314
2315 The keyword `max_batch` is used to set a soft limit on the number of points to batch at the
2316 same time so that memory usage is controlled.
2317
2318 If `integrand_prototype` is present, `f` is interpreted as in-place, and otherwise `f` is
2319 assumed to be out-of-place.
2320
2321 ## iip: In-Place vs Out-Of-Place
2322
2323 Out-of-place functions must be of the form ``y = f(u,p)`` and in-place functions of the form
2324 ``f(y, u, p)``. Since `f` is allowed to return any type (e.g. real or complex numbers or
2325 arrays), in-place functions must provide a container `integrand_prototype` of the right type
2326 for ``y``. The only assumption that is enforced is that the last axes of the ``y`` and ``u``
2327 arrays are the same length and correspond to distinct batched points. The algorithm will
2328 then allocate arrays `similar` to ``y`` to pass to the integrand. Since the algorithm may
2329 vary the number of points to batch, the length of the batching dimension of ``y`` may vary
2330 between subsequent calls to `f`. To reduce allocations, views of ``y`` may also be passed to
2331 the integrand. In the out-of-place case, the algorithm may infer the type
2332 of ``y`` by passing `f` an empty array of input points. When in-place forms are used,
2333 in-place array operations may be used by algorithms to reduce allocations. If
2334 `integrand_prototype` is not provided, `f` is assumed to be out-of-place.
2335
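For instance, a minimal sketch of both forms (hypothetical integrands; the last dimension of the output indexes the batch of points in `u`):

```julia
f(u, p) = sin.(p .* u)                        # out-of-place: one value per point in `u`
bif = BatchIntegralFunction(f; max_batch = 100)

f!(y, u, p) = (y .= sin.(p .* u); nothing)    # in-place batched evaluation
bif_iip = BatchIntegralFunction(f!, zeros(0); max_batch = 100)   # zero-length batch as prototype
```
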
2336 ## specialize
2337
2338 This field is currently unused
2339
2340 ## Fields
2341
2342 The fields of the BatchIntegralFunction type directly match the names of the inputs.
2343 """
2344 struct BatchIntegralFunction{iip, specialize, F, T} <:
2345 AbstractIntegralFunction{iip}
2346 f::F
2347 integrand_prototype::T
2348 max_batch::Int
2349 end
2350
2351 TruncatedStacktraces.@truncate_stacktrace BatchIntegralFunction 1 2
2352
2353 ######### Backwards Compatibility Overloads
2354
2355 (f::ODEFunction)(args...) = f.f(args...)
2356 30 (11 %) (f::NonlinearFunction)(args...) = f.f(args...)
     [profile: 30 (11 %) samples spent in NonlinearFunction; 30 (100 %) (incl.) when called from JacobianWrapper line 99; 29 (97 %) and 1 (3 %) samples spent calling brusselator_2d_loop]
2357 (f::IntervalNonlinearFunction)(args...) = f.f(args...)
2358 (f::IntegralFunction)(args...) = f.f(args...)
2359 (f::BatchIntegralFunction)(args...) = f.f(args...)
2360
2361 function (f::DynamicalODEFunction)(u, p, t)
2362 ArrayPartition(f.f1(u.x[1], u.x[2], p, t), f.f2(u.x[1], u.x[2], p, t))
2363 end
2364 function (f::DynamicalODEFunction)(du, u, p, t)
2365 f.f1(du.x[1], u.x[1], u.x[2], p, t)
2366 f.f2(du.x[2], u.x[1], u.x[2], p, t)
2367 end
2368
2369 (f::SplitFunction)(u, p, t) = f.f1(u, p, t) + f.f2(u, p, t)
2370 function (f::SplitFunction)(du, u, p, t)
2371 f.f1(f.cache, u, p, t)
2372 f.f2(du, u, p, t)
2373 du .+= f.cache
2374 end
2375
2376 (f::DiscreteFunction)(args...) = f.f(args...)
2377 (f::ImplicitDiscreteFunction)(args...) = f.f(args...)
2378 (f::DAEFunction)(args...) = f.f(args...)
2379 (f::DDEFunction)(args...) = f.f(args...)
2380
2381 function (f::DynamicalDDEFunction)(u, h, p, t)
2382 ArrayPartition(f.f1(u.x[1], u.x[2], h, p, t), f.f2(u.x[1], u.x[2], h, p, t))
2383 end
2384 function (f::DynamicalDDEFunction)(du, u, h, p, t)
2385 f.f1(du.x[1], u.x[1], u.x[2], h, p, t)
2386 f.f2(du.x[2], u.x[1], u.x[2], h, p, t)
2387 end
2388 function Base.getproperty(f::DynamicalDDEFunction, name::Symbol)
2389 if name === :f
2390 # Use the f property as an alias for calling the function itself, so DynamicalDDEFunction fits the same interface as DDEFunction as expected by the ODEFunctionWrapper in DelayDiffEq.jl.
2391 return f
2392 end
2393 return getfield(f, name)
2394 end
2395
2396 (f::SDEFunction)(args...) = f.f(args...)
2397 (f::SDDEFunction)(args...) = f.f(args...)
2398 (f::SplitSDEFunction)(u, p, t) = f.f1(u, p, t) + f.f2(u, p, t)
2399
2400 function (f::SplitSDEFunction)(du, u, p, t)
2401 f.f1(f.cache, u, p, t)
2402 f.f2(du, u, p, t)
2403 du .+= f.cache
2404 end
2405
2406 (f::RODEFunction)(args...) = f.f(args...)
2407
2408 (f::BVPFunction)(args...) = f.f(args...)
2409
2410 ######### Basic Constructor
2411
2412 function ODEFunction{iip, specialize}(f;
2413 mass_matrix = __has_mass_matrix(f) ? f.mass_matrix :
2414 I,
2415 analytic = __has_analytic(f) ? f.analytic : nothing,
2416 tgrad = __has_tgrad(f) ? f.tgrad : nothing,
2417 jac = __has_jac(f) ? f.jac : nothing,
2418 jvp = __has_jvp(f) ? f.jvp : nothing,
2419 vjp = __has_vjp(f) ? f.vjp : nothing,
2420 jac_prototype = __has_jac_prototype(f) ?
2421 f.jac_prototype :
2422 nothing,
2423 sparsity = __has_sparsity(f) ? f.sparsity :
2424 jac_prototype,
2425 Wfact = __has_Wfact(f) ? f.Wfact : nothing,
2426 Wfact_t = __has_Wfact_t(f) ? f.Wfact_t : nothing,
2427 W_prototype = __has_W_prototype(f) ? f.W_prototype : nothing,
2428 paramjac = __has_paramjac(f) ? f.paramjac : nothing,
2429 syms = nothing,
2430 indepsym = nothing,
2431 paramsyms = nothing,
2432 observed = __has_observed(f) ? f.observed :
2433 DEFAULT_OBSERVED,
2434 colorvec = __has_colorvec(f) ? f.colorvec : nothing,
2435 sys = __has_sys(f) ? f.sys : nothing) where {iip,
2436 specialize,
2437 }
2438 if mass_matrix === I && f isa Tuple
2439 mass_matrix = ((I for i in 1:length(f))...,)
2440 end
2441
2442 if (specialize === FunctionWrapperSpecialize) &&
2443 !(f isa FunctionWrappersWrappers.FunctionWrappersWrapper)
2444 error("FunctionWrapperSpecialize must be used on the problem constructor for access to u0, p, and t types!")
2445 end
2446
2447 if jac === nothing && isa(jac_prototype, AbstractSciMLOperator)
2448 if iip
2449 jac = update_coefficients! #(J,u,p,t)
2450 else
2451 jac = (u, p, t) -> update_coefficients(deepcopy(jac_prototype), u, p, t)
2452 end
2453 end
2454
2455 if jac_prototype !== nothing && colorvec === nothing &&
2456 ArrayInterface.fast_matrix_colors(jac_prototype)
2457 _colorvec = ArrayInterface.matrix_colors(jac_prototype)
2458 else
2459 _colorvec = colorvec
2460 end
2461
2462 jaciip = jac !== nothing ? isinplace(jac, 4, "jac", iip) : iip
2463 tgradiip = tgrad !== nothing ? isinplace(tgrad, 4, "tgrad", iip) : iip
2464 jvpiip = jvp !== nothing ? isinplace(jvp, 5, "jvp", iip) : iip
2465 vjpiip = vjp !== nothing ? isinplace(vjp, 5, "vjp", iip) : iip
2466 Wfactiip = Wfact !== nothing ? isinplace(Wfact, 5, "Wfact", iip) : iip
2467 Wfact_tiip = Wfact_t !== nothing ? isinplace(Wfact_t, 5, "Wfact_t", iip) : iip
2468 paramjaciip = paramjac !== nothing ? isinplace(paramjac, 4, "paramjac", iip) : iip
2469
2470 nonconforming = (jaciip, tgradiip, jvpiip, vjpiip, Wfactiip, Wfact_tiip,
2471 paramjaciip) .!= iip
2472 if any(nonconforming)
2473 nonconforming = findall(nonconforming)
2474 functions = ["jac", "tgrad", "jvp", "vjp", "Wfact", "Wfact_t", "paramjac"][nonconforming]
2475 throw(NonconformingFunctionsError(functions))
2476 end
2477
2478 _f = prepare_function(f)
2479
2480 sys = something(sys, SymbolCache(syms, paramsyms, indepsym))
2481
2482 if specialize === NoSpecialize
2483 ODEFunction{iip, specialize,
2484 Any, Any, Any, Any,
2485 Any, Any, Any, typeof(jac_prototype),
2486 typeof(sparsity), Any, Any, typeof(W_prototype), Any,
2487 Any,
2488 typeof(_colorvec),
2489 typeof(sys)}(_f, mass_matrix, analytic, tgrad, jac,
2490 jvp, vjp, jac_prototype, sparsity, Wfact,
2491 Wfact_t, W_prototype, paramjac,
2492 observed, _colorvec, sys)
2493 elseif specialize === false
2494 ODEFunction{iip, FunctionWrapperSpecialize,
2495 typeof(_f), typeof(mass_matrix), typeof(analytic), typeof(tgrad),
2496 typeof(jac), typeof(jvp), typeof(vjp), typeof(jac_prototype),
2497 typeof(sparsity), typeof(Wfact), typeof(Wfact_t), typeof(W_prototype),
2498 typeof(paramjac),
2499 typeof(observed),
2500 typeof(_colorvec),
2501 typeof(sys)}(_f, mass_matrix, analytic, tgrad, jac,
2502 jvp, vjp, jac_prototype, sparsity, Wfact,
2503 Wfact_t, W_prototype, paramjac,
2504 observed, _colorvec, sys)
2505 else
2506 ODEFunction{iip, specialize,
2507 typeof(_f), typeof(mass_matrix), typeof(analytic), typeof(tgrad),
2508 typeof(jac), typeof(jvp), typeof(vjp), typeof(jac_prototype),
2509 typeof(sparsity), typeof(Wfact), typeof(Wfact_t), typeof(W_prototype),
2510 typeof(paramjac),
2511 typeof(observed),
2512 typeof(_colorvec),
2513 typeof(sys)}(_f, mass_matrix, analytic, tgrad, jac,
2514 jvp, vjp, jac_prototype, sparsity, Wfact,
2515 Wfact_t, W_prototype, paramjac,
2516 observed, _colorvec, sys)
2517 end
2518 end
2519
2520 function ODEFunction{iip}(f; kwargs...) where {iip}
2521 ODEFunction{iip, FullSpecialize}(f; kwargs...)
2522 end
2523 ODEFunction{iip}(f::ODEFunction; kwargs...) where {iip} = f
2524 ODEFunction(f; kwargs...) = ODEFunction{isinplace(f, 4), FullSpecialize}(f; kwargs...)
2525 ODEFunction(f::ODEFunction; kwargs...) = f
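# Example (illustrative sketch, not authoritative): the convenience constructors above
# infer in-place-ness from the arity of `f` (`isinplace(f, 4)`, i.e. a `(du, u, p, t)`
# method) and default to `FullSpecialize`. With a hypothetical right-hand side:
#
#     decay!(du, u, p, t) = (du .= -p .* u)          # 4-argument => in-place
#     fun_iip = ODEFunction(decay!)
#     fun_oop = ODEFunction((u, p, t) -> -p .* u)    # 3-argument => out-of-place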
2526
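# `unwrapped_f` rebuilds the ODEFunction around the un-wrapped callable returned by
# `unwrapped_f(f.f)`, keeping every other field; this lets a function that was wrapped
# for `AutoSpecialize` be restored to its plain form before solving.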
2527 function unwrapped_f(f::ODEFunction, newf = unwrapped_f(f.f))
2528 if specialization(f) === NoSpecialize
2529 ODEFunction{isinplace(f), specialization(f), Any, Any, Any,
2530 Any, Any, Any, Any, typeof(f.jac_prototype),
2531 typeof(f.sparsity), Any, Any, Any,
2532 Any, typeof(f.colorvec),
2533 typeof(f.sys)}(newf, f.mass_matrix, f.analytic, f.tgrad, f.jac,
2534 f.jvp, f.vjp, f.jac_prototype, f.sparsity, f.Wfact,
2535 f.Wfact_t, f.W_prototype, f.paramjac,
2536 f.observed, f.colorvec, f.sys)
2537 else
2538 ODEFunction{isinplace(f), specialization(f), typeof(newf), typeof(f.mass_matrix),
2539 typeof(f.analytic), typeof(f.tgrad),
2540 typeof(f.jac), typeof(f.jvp), typeof(f.vjp), typeof(f.jac_prototype),
2541 typeof(f.sparsity), typeof(f.Wfact), typeof(f.Wfact_t), typeof(f.W_prototype),
2542 typeof(f.paramjac),
2543 typeof(f.observed), typeof(f.colorvec),
2544 typeof(f.sys)}(newf, f.mass_matrix, f.analytic, f.tgrad, f.jac,
2545 f.jvp, f.vjp, f.jac_prototype, f.sparsity, f.Wfact,
2546 f.Wfact_t, f.W_prototype, f.paramjac,
2547 f.observed, f.colorvec, f.sys)
2548 end
2549 end
2550
2551 """
2552 $(SIGNATURES)
2553
2554 Converts a NonlinearFunction into an ODEFunction.
2555 """
2556 function ODEFunction(f::NonlinearFunction)
2557 iip = isinplace(f)
2558 ODEFunction{iip}(f)
2559 end
2560
2561 function ODEFunction{iip}(f::NonlinearFunction) where {iip}
2562 _f = iip ? (du, u, p, t) -> (f.f(du, u, p); nothing) : (u, p, t) -> f.f(u, p)
2563 if f.analytic !== nothing
2564 _analytic = (u0, p, t) -> f.analytic(u0, p)
2565 else
2566 _analytic = nothing
2567 end
2568 if f.jac !== nothing
2569 _jac = iip ? (J, u, p, t) -> (f.jac(J, u, p); nothing) : (u, p, t) -> f.jac(u, p)
2570 else
2571 _jac = nothing
2572 end
2573 if f.jvp !== nothing
2574 _jvp = iip ? (Jv, u, p, t) -> (f.jvp(Jv, u, p); nothing) : (u, p, t) -> f.jvp(u, p)
2575 else
2576 _jvp = nothing
2577 end
2578 if f.vjp !== nothing
2579 _vjp = iip ? (vJ, u, p, t) -> (f.vjp(vJ, u, p); nothing) : (u, p, t) -> f.vjp(u, p)
2580 else
2581 _vjp = nothing
2582 end
2583
2584 ODEFunction{iip, specialization(f)}(_f;
2585 mass_matrix = f.mass_matrix,
2586 analytic = _analytic,
2587 jac = _jac,
2588 jvp = _jvp,
2589 vjp = _vjp,
2590 jac_prototype = f.jac_prototype,
2591 sparsity = f.sparsity,
2592 paramjac = f.paramjac,
2593 syms = variable_symbols(f),
2594 indepsym = nothing,
2595 paramsyms = parameter_symbols(f),
2596 observed = f.observed,
2597 colorvec = f.colorvec)
2598 end
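# Example (illustrative sketch): the conversion above wraps the time-independent
# `f.f(du, u, p)` in a `(du, u, p, t)` method that simply ignores `t`, so a nonlinear
# residual can be evolved as an ODE (e.g. pseudo-transient continuation):
#
#     nf = NonlinearFunction((du, u, p) -> (du .= u .^ 2 .- p))
#     of = ODEFunction(nf)        # of.f now has the (du, u, p, t) signature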
2599
2600 """
2601 $(SIGNATURES)
2602
2603 Converts an ODEFunction into a NonlinearFunction.
2604 """
2605 function NonlinearFunction(f::ODEFunction)
2606 iip = isinplace(f)
2607 NonlinearFunction{iip}(f)
2608 end
2609
2610 function NonlinearFunction{iip}(f::ODEFunction) where {iip}
2611 _f = iip ? (du, u, p) -> (f.f(du, u, p, Inf); nothing) : (u, p) -> f.f(u, p, Inf)
2612 if f.analytic !== nothing
2613 _analytic = (u0, p) -> f.analytic(u0, p, Inf)
2614 else
2615 _analytic = nothing
2616 end
2617 if f.jac !== nothing
2618 _jac = iip ? (J, u, p) -> (f.jac(J, u, p, Inf); nothing) :
2619 (u, p) -> f.jac(u, p, Inf)
2620 else
2621 _jac = nothing
2622 end
2623 if f.jvp !== nothing
2624 _jvp = iip ? (Jv, u, p) -> (f.jvp(Jv, u, p, Inf); nothing) :
2625 (u, p) -> f.jvp(u, p, Inf)
2626 else
2627 _jvp = nothing
2628 end
2629 if f.vjp !== nothing
2630 _vjp = iip ? (vJ, u, p) -> (f.vjp(vJ, u, p, Inf); nothing) :
2631 (u, p) -> f.vjp(u, p, Inf)
2632 else
2633 _vjp = nothing
2634 end
2635
2636 NonlinearFunction{iip, specialization(f)}(_f;
2637 analytic = _analytic,
2638 jac = _jac,
2639 jvp = _jvp,
2640 vjp = _vjp,
2641 jac_prototype = f.jac_prototype,
2642 sparsity = f.sparsity,
2643 paramjac = f.paramjac,
2644 syms = variable_symbols(f),
2645 paramsyms = parameter_symbols(f),
2646 observed = f.observed,
2647 colorvec = f.colorvec)
2648 end
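# Example (illustrative sketch): the reverse conversion fixes the time argument to `Inf`,
# i.e. it targets the steady-state problem 0 = f(u, p, Inf):
#
#     of = ODEFunction((du, u, p, t) -> (du .= p .- u))
#     nf = NonlinearFunction(of)   # nf.f(du, u, p) calls of.f(du, u, p, Inf)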
2649
2650 @add_kwonly function SplitFunction(f1, f2, mass_matrix, cache, analytic, tgrad, jac, jvp,
2651 vjp, jac_prototype, sparsity, Wfact, Wfact_t, paramjac,
2652 observed, colorvec, sys)
2653 f1 = ODEFunction(f1)
2654 f2 = ODEFunction(f2)
2655
2656 if !(f1 isa AbstractSciMLOperator || f1.f isa AbstractSciMLOperator) &&
2657 isinplace(f1) != isinplace(f2)
2658 throw(NonconformingFunctionsError(["f2"]))
2659 end
2660
2661 SplitFunction{isinplace(f2), FullSpecialize, typeof(f1), typeof(f2),
2662 typeof(mass_matrix),
2663 typeof(cache), typeof(analytic), typeof(tgrad), typeof(jac), typeof(jvp),
2664 typeof(vjp), typeof(jac_prototype), typeof(sparsity),
2665 typeof(Wfact), typeof(Wfact_t), typeof(paramjac), typeof(observed), typeof(colorvec),
2666 typeof(sys)}(f1, f2, mass_matrix, cache, analytic, tgrad, jac, jvp, vjp,
2667 jac_prototype, sparsity, Wfact, Wfact_t, paramjac, observed, colorvec, sys)
2668 end
2669 function SplitFunction{iip, specialize}(f1, f2;
2670 mass_matrix = __has_mass_matrix(f1) ?
2671 f1.mass_matrix : I,
2672 _func_cache = nothing,
2673 analytic = __has_analytic(f1) ? f1.analytic :
2674 nothing,
2675 tgrad = __has_tgrad(f1) ? f1.tgrad : nothing,
2676 jac = __has_jac(f1) ? f1.jac : nothing,
2677 jvp = __has_jvp(f1) ? f1.jvp : nothing,
2678 vjp = __has_vjp(f1) ? f1.vjp : nothing,
2679 jac_prototype = __has_jac_prototype(f1) ?
2680 f1.jac_prototype :
2681 nothing,
2682 sparsity = __has_sparsity(f1) ? f1.sparsity :
2683 jac_prototype,
2684 Wfact = __has_Wfact(f1) ? f1.Wfact : nothing,
2685 Wfact_t = __has_Wfact_t(f1) ? f1.Wfact_t : nothing,
2686 paramjac = __has_paramjac(f1) ? f1.paramjac :
2687 nothing,
2688 syms = nothing,
2689 indepsym = nothing,
2690 paramsyms = nothing,
2691 observed = __has_observed(f1) ? f1.observed :
2692 DEFAULT_OBSERVED,
2693 colorvec = __has_colorvec(f1) ? f1.colorvec :
2694 nothing,
2695 sys = __has_sys(f1) ? f1.sys : nothing) where {iip,
2696 specialize,
2697 }
2698 sys = something(sys, SymbolCache(syms, paramsyms, indepsym))
2699 if specialize === NoSpecialize
2700 SplitFunction{iip, specialize, Any, Any, Any, Any, Any, Any, Any, Any, Any,
2701 Any, Any, Any, Any, Any,
2702 Any, Any, Any}(f1, f2, mass_matrix, _func_cache,
2703 analytic,
2704 tgrad, jac, jvp, vjp, jac_prototype,
2705 sparsity, Wfact, Wfact_t, paramjac,
2706 observed, colorvec, sys)
2707 else
2708 SplitFunction{iip, specialize, typeof(f1), typeof(f2), typeof(mass_matrix),
2709 typeof(_func_cache), typeof(analytic),
2710 typeof(tgrad), typeof(jac), typeof(jvp), typeof(vjp),
2711 typeof(jac_prototype), typeof(sparsity),
2712 typeof(Wfact), typeof(Wfact_t), typeof(paramjac), typeof(observed),
2713 typeof(colorvec),
2714 typeof(sys)}(f1, f2, mass_matrix, _func_cache, analytic, tgrad, jac,
2715 jvp, vjp, jac_prototype,
2716 sparsity, Wfact, Wfact_t, paramjac, observed, colorvec, sys)
2717 end
2718 end
2719
2720 SplitFunction(f1, f2; kwargs...) = SplitFunction{isinplace(f2, 4)}(f1, f2; kwargs...)
2721 function SplitFunction{iip}(f1, f2; kwargs...) where {iip}
2722 SplitFunction{iip, FullSpecialize}(ODEFunction(f1), ODEFunction{iip}(f2);
2723 kwargs...)
2724 end
2725 SplitFunction(f::SplitFunction; kwargs...) = f
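# Example (illustrative sketch): a SplitFunction carries the two right-hand sides of
# du/dt = f1(u, p, t) + f2(u, p, t) (e.g. for IMEX/split solvers); both parts must agree
# on in-place-ness, which is what the NonconformingFunctionsError check above enforces.
# With hypothetical stiff/non-stiff parts:
#
#     stiff!(du, u, p, t)    = (du .= -100 .* u)
#     nonstiff!(du, u, p, t) = (du .= sin(t))
#     sf = SplitFunction(stiff!, nonstiff!)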
2726
2727 @add_kwonly function DynamicalODEFunction{iip}(f1, f2, mass_matrix, analytic, tgrad, jac,
2728 jvp, vjp, jac_prototype, sparsity, Wfact,
2729 Wfact_t, paramjac,
2730 observed, colorvec, sys) where {iip}
2731 f1 = f1 isa AbstractSciMLOperator ? f1 : ODEFunction(f1)
2732 f2 = ODEFunction(f2)
2733
2734 if isinplace(f1) != isinplace(f2)
2735 throw(NonconformingFunctionsError(["f2"]))
2736 end
2737 DynamicalODEFunction{isinplace(f2), FullSpecialize, typeof(f1), typeof(f2),
2738 typeof(mass_matrix),
2739 typeof(analytic), typeof(tgrad), typeof(jac), typeof(jvp),
2740 typeof(vjp),
2741 typeof(jac_prototype),
2741 typeof(jac_prototype), typeof(sparsity),
2742 typeof(Wfact), typeof(Wfact_t), typeof(paramjac), typeof(observed),
2743 typeof(colorvec),
2744 typeof(sys)}(f1, f2, mass_matrix, analytic, tgrad, jac, jvp,
2745 vjp, jac_prototype, sparsity, Wfact, Wfact_t,
2746 paramjac, observed,
2747 colorvec, sys)
2748 end
2749
2750 function DynamicalODEFunction{iip, specialize}(f1, f2;
2751 mass_matrix = __has_mass_matrix(f1) ?
2752 f1.mass_matrix : I,
2753 analytic = __has_analytic(f1) ? f1.analytic :
2754 nothing,
2755 tgrad = __has_tgrad(f1) ? f1.tgrad : nothing,
2756 jac = __has_jac(f1) ? f1.jac : nothing,
2757 jvp = __has_jvp(f1) ? f1.jvp : nothing,
2758 vjp = __has_vjp(f1) ? f1.vjp : nothing,
2759 jac_prototype = __has_jac_prototype(f1) ?
2760 f1.jac_prototype : nothing,
2761 sparsity = __has_sparsity(f1) ? f1.sparsity :
2762 jac_prototype,
2763 Wfact = __has_Wfact(f1) ? f1.Wfact : nothing,
2764 Wfact_t = __has_Wfact_t(f1) ? f1.Wfact_t :
2765 nothing,
2766 paramjac = __has_paramjac(f1) ? f1.paramjac :
2767 nothing,
2768 syms = nothing,
2769 indepsym = nothing,
2770 paramsyms = nothing,
2771 observed = __has_observed(f1) ? f1.observed :
2772 DEFAULT_OBSERVED,
2773 colorvec = __has_colorvec(f1) ? f1.colorvec :
2774 nothing,
2775 sys = __has_sys(f1) ? f1.sys : nothing) where {
2776 iip,
2777 specialize,
2778 }
2779 sys = something(sys, SymbolCache(syms, paramsyms, indepsym))
2780
2781 if specialize === NoSpecialize
2782 DynamicalODEFunction{iip, specialize, Any, Any, Any, Any, Any, Any, Any,
2783 Any, Any, Any, Any, Any,
2784 Any, Any, Any, Any}(f1, f2, mass_matrix,
2785 analytic,
2786 tgrad,
2787 jac, jvp, vjp,
2788 jac_prototype,
2789 sparsity,
2790 Wfact, Wfact_t, paramjac,
2791 observed, colorvec, sys)
2792 else
2793 DynamicalODEFunction{iip, specialize, typeof(f1), typeof(f2), typeof(mass_matrix),
2794 typeof(analytic),
2795 typeof(tgrad), typeof(jac), typeof(jvp), typeof(vjp),
2796 typeof(jac_prototype), typeof(sparsity),
2797 typeof(Wfact), typeof(Wfact_t), typeof(paramjac), typeof(observed),
2798 typeof(colorvec),
2799 typeof(sys)}(f1, f2, mass_matrix, analytic, tgrad, jac, jvp,
2800 vjp, jac_prototype, sparsity,
2801 Wfact, Wfact_t, paramjac, observed,
2802 colorvec, sys)
2803 end
2804 end
2805
2806 function DynamicalODEFunction(f1, f2 = nothing; kwargs...)
2807 DynamicalODEFunction{isinplace(f1, 5)}(f1, f2; kwargs...)
2808 end
2809 function DynamicalODEFunction{iip}(f1, f2; kwargs...) where {iip}
2810 DynamicalODEFunction{iip, FullSpecialize}(ODEFunction{iip}(f1),
2811 ODEFunction{iip}(f2); kwargs...)
2812 end
2813 DynamicalODEFunction(f::DynamicalODEFunction; kwargs...) = f
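# Example (illustrative sketch): DynamicalODEFunction describes a partitioned system
# (v, u) where f1 and f2 give the two derivative blocks, each with the 5-argument
# in-place signature f(dv, v, u, p, t). A harmonic oscillator split into parts:
#
#     accel!(dv, v, u, p, t) = (dv .= -u)   # dv/dt = -u
#     vel!(du, v, u, p, t)   = (du .= v)    # du/dt = v
#     dyn = DynamicalODEFunction(accel!, vel!)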
2814
2815 function DiscreteFunction{iip, specialize}(f;
2816 analytic = __has_analytic(f) ? f.analytic :
2817 nothing,
2818 syms = nothing,
2819 indepsym = nothing,
2820 paramsyms = nothing,
2821 observed = __has_observed(f) ? f.observed :
2822 DEFAULT_OBSERVED,
2823 sys = __has_sys(f) ? f.sys : nothing) where {iip,
2824 specialize,
2825 }
2826 _f = prepare_function(f)
2827 sys = something(sys, SymbolCache(syms, paramsyms, indepsym))
2828
2829 if specialize === NoSpecialize
2830 DiscreteFunction{iip, specialize, Any, Any, Any, Any}(_f, analytic,
2831 observed, sys)
2832 else
2833 DiscreteFunction{iip, specialize, typeof(_f), typeof(analytic),
2834 typeof(observed), typeof(sys)}(_f, analytic, observed, sys)
2835 end
2836 end
2837
2838 function DiscreteFunction{iip}(f; kwargs...) where {iip}
2839 DiscreteFunction{iip, FullSpecialize}(f; kwargs...)
2840 end
2841 DiscreteFunction{iip}(f::DiscreteFunction; kwargs...) where {iip} = f
2842 function DiscreteFunction(f; kwargs...)
2843 DiscreteFunction{isinplace(f, 4), FullSpecialize}(f; kwargs...)
2844 end
2845 DiscreteFunction(f::DiscreteFunction; kwargs...) = f
2846
2847 function unwrapped_f(f::DiscreteFunction, newf = unwrapped_f(f.f))
2848 specialize = specialization(f)
2849
2850 if specialize === NoSpecialize
2851 DiscreteFunction{isinplace(f), specialize, Any, Any,
2852 Any, Any}(newf, f.analytic, f.observed, f.sys)
2853 else
2854 DiscreteFunction{isinplace(f), specialize, typeof(newf), typeof(f.analytic),
2855 typeof(f.observed), typeof(f.sys)}(newf, f.analytic,
2856 f.observed, f.sys)
2857 end
2858 end
2859
2860 function ImplicitDiscreteFunction{iip, specialize}(f;
2861 analytic = __has_analytic(f) ?
2862 f.analytic :
2863 nothing,
2864 syms = nothing,
2865 indepsym = nothing,
2866 paramsyms = nothing,
2867 observed = __has_observed(f) ?
2868 f.observed :
2869 DEFAULT_OBSERVED,
2870 sys = __has_sys(f) ? f.sys : nothing) where {
2871 iip,
2872 specialize,
2873 }
2874 _f = prepare_function(f)
2875 sys = something(sys, SymbolCache(syms, paramsyms, indepsym))
2876
2877 if specialize === NoSpecialize
2878 ImplicitDiscreteFunction{iip, specialize, Any, Any, Any, Any}(_f,
2879 analytic,
2880 observed,
2881 sys)
2882 else
2883 ImplicitDiscreteFunction{iip, specialize, typeof(_f), typeof(analytic), typeof(observed), typeof(sys)}(_f, analytic, observed, sys)
2884 end
2885 end
2886
2887 function ImplicitDiscreteFunction{iip}(f; kwargs...) where {iip}
2888 ImplicitDiscreteFunction{iip, FullSpecialize}(f; kwargs...)
2889 end
2890 ImplicitDiscreteFunction{iip}(f::ImplicitDiscreteFunction; kwargs...) where {iip} = f
2891 function ImplicitDiscreteFunction(f; kwargs...)
2892 ImplicitDiscreteFunction{isinplace(f, 5), FullSpecialize}(f; kwargs...)
2893 end
2894 ImplicitDiscreteFunction(f::ImplicitDiscreteFunction; kwargs...) = f
2895
2896 function unwrapped_f(f::ImplicitDiscreteFunction, newf = unwrapped_f(f.f))
2897 specialize = specialization(f)
2898
2899 if specialize === NoSpecialize
2900 ImplicitDiscreteFunction{isinplace(f, 6), specialize, Any, Any,
2901 Any, Any}(newf, f.analytic, f.observed, f.sys)
2902 else
2903 ImplicitDiscreteFunction{isinplace(f, 6), specialize, typeof(newf),
2904 typeof(f.analytic),
2905 typeof(f.observed), typeof(f.sys)}(newf, f.analytic,
2906 f.observed, f.sys)
2907 end
2908 end
2909
2910 function SDEFunction{iip, specialize}(f, g;
2911 mass_matrix = __has_mass_matrix(f) ? f.mass_matrix :
2912 I,
2913 analytic = __has_analytic(f) ? f.analytic : nothing,
2914 tgrad = __has_tgrad(f) ? f.tgrad : nothing,
2915 jac = __has_jac(f) ? f.jac : nothing,
2916 jvp = __has_jvp(f) ? f.jvp : nothing,
2917 vjp = __has_vjp(f) ? f.vjp : nothing,
2918 jac_prototype = __has_jac_prototype(f) ?
2919 f.jac_prototype :
2920 nothing,
2921 sparsity = __has_sparsity(f) ? f.sparsity :
2922 jac_prototype,
2923 Wfact = __has_Wfact(f) ? f.Wfact : nothing,
2924 Wfact_t = __has_Wfact_t(f) ? f.Wfact_t : nothing,
2925 paramjac = __has_paramjac(f) ? f.paramjac : nothing,
2926 ggprime = nothing,
2927 syms = nothing,
2928 indepsym = nothing,
2929 paramsyms = nothing,
2930 observed = __has_observed(f) ? f.observed :
2931 DEFAULT_OBSERVED,
2932 colorvec = __has_colorvec(f) ? f.colorvec : nothing,
2933 sys = __has_sys(f) ? f.sys : nothing) where {iip,
2934 specialize,
2935 }
2936 if jac === nothing && isa(jac_prototype, AbstractSciMLOperator)
2937 if iip
2938 jac = update_coefficients! #(J,u,p,t)
2939 else
2940 jac = (u, p, t) -> update_coefficients!(deepcopy(jac_prototype), u, p, t)
2941 end
2942 end
2943
2944 if jac_prototype !== nothing && colorvec === nothing &&
2945 ArrayInterface.fast_matrix_colors(jac_prototype)
2946 _colorvec = ArrayInterface.matrix_colors(jac_prototype)
2947 else
2948 _colorvec = colorvec
2949 end
2950
2951 giip = isinplace(g, 4, "g", iip)
2952 jaciip = jac !== nothing ? isinplace(jac, 4, "jac", iip) : iip
2953 tgradiip = tgrad !== nothing ? isinplace(tgrad, 4, "tgrad", iip) : iip
2954 jvpiip = jvp !== nothing ? isinplace(jvp, 5, "jvp", iip) : iip
2955 vjpiip = vjp !== nothing ? isinplace(vjp, 5, "vjp", iip) : iip
2956 Wfactiip = Wfact !== nothing ? isinplace(Wfact, 5, "Wfact", iip) : iip
2957 Wfact_tiip = Wfact_t !== nothing ? isinplace(Wfact_t, 5, "Wfact_t", iip) : iip
2958 paramjaciip = paramjac !== nothing ? isinplace(paramjac, 4, "paramjac", iip) : iip
2959
2960 nonconforming = (giip, jaciip, tgradiip, jvpiip, vjpiip, Wfactiip, Wfact_tiip,
2961 paramjaciip) .!= iip
2962 if any(nonconforming)
2963 nonconforming = findall(nonconforming)
2964 functions = ["g", "jac", "tgrad", "jvp", "vjp", "Wfact", "Wfact_t", "paramjac"][nonconforming]
2965 throw(NonconformingFunctionsError(functions))
2966 end
2967
2968 _f = prepare_function(f)
2969 _g = prepare_function(g)
2970
2971 sys = something(sys, SymbolCache(syms, paramsyms, indepsym))
2972
2973 if specialize === NoSpecialize
2974 SDEFunction{iip, specialize, Any, Any, Any, Any, Any, Any, Any, Any, Any, Any,
2975 Any, Any, Any, Any, Any,
2976 typeof(_colorvec), typeof(sys)}(_f, _g, mass_matrix, analytic,
2977 tgrad, jac, jvp, vjp,
2978 jac_prototype, sparsity,
2979 Wfact, Wfact_t, paramjac, ggprime, observed,
2980 _colorvec, sys)
2981 else
2982 SDEFunction{iip, specialize, typeof(_f), typeof(_g),
2983 typeof(mass_matrix), typeof(analytic), typeof(tgrad),
2984 typeof(jac), typeof(jvp), typeof(vjp), typeof(jac_prototype),
2985 typeof(sparsity), typeof(Wfact), typeof(Wfact_t),
2986 typeof(paramjac), typeof(ggprime), typeof(observed), typeof(_colorvec), typeof(sys)}(_f, _g, mass_matrix,
2987 analytic, tgrad, jac,
2988 jvp, vjp,
2989 jac_prototype,
2990 sparsity, Wfact,
2991 Wfact_t,
2992 paramjac, ggprime,
2993 observed, _colorvec,
2994 sys)
2995 end
2996 end
2997
2998 function unwrapped_f(f::SDEFunction, newf = unwrapped_f(f.f),
2999 newg = unwrapped_f(f.g))
3000 specialize = specialization(f)
3001
3002 if specialize === NoSpecialize
3003 SDEFunction{isinplace(f), specialize, Any, Any,
3004 typeof(f.mass_matrix), Any, Any,
3005 Any, Any, Any, typeof(f.jac_prototype),
3006 typeof(f.sparsity), Any, Any,
3007 Any, Any,
3008 typeof(f.observed), typeof(f.colorvec), typeof(f.sys)}(newf, newg,
3009 f.mass_matrix,
3010 f.analytic,
3011 f.tgrad, f.jac,
3012 f.jvp, f.vjp,
3013 f.jac_prototype,
3014 f.sparsity,
3015 f.Wfact,
3016 f.Wfact_t,
3017 f.paramjac,
3018 f.ggprime,
3019 f.observed,
3020 f.colorvec,
3021 f.sys)
3022 else
3023 SDEFunction{isinplace(f), specialize, typeof(newf), typeof(newg),
3024 typeof(f.mass_matrix), typeof(f.analytic), typeof(f.tgrad),
3025 typeof(f.jac), typeof(f.jvp), typeof(f.vjp), typeof(f.jac_prototype),
3026 typeof(f.sparsity), typeof(f.Wfact), typeof(f.Wfact_t),
3027 typeof(f.paramjac), typeof(f.ggprime),
3028 typeof(f.observed), typeof(f.colorvec), typeof(f.sys)}(newf, newg,
3029 f.mass_matrix,
3030 f.analytic,
3031 f.tgrad, f.jac,
3032 f.jvp, f.vjp,
3033 f.jac_prototype,
3034 f.sparsity,
3035 f.Wfact,
3036 f.Wfact_t,
3037 f.paramjac,
3038 f.ggprime,
3039 f.observed,
3040 f.colorvec,
3041 f.sys)
3042 end
3043 end
3044
3045 function SDEFunction{iip}(f, g; kwargs...) where {iip}
3046 SDEFunction{iip, FullSpecialize}(f, g; kwargs...)
3047 end
3048 SDEFunction{iip}(f::SDEFunction, g; kwargs...) where {iip} = f
3049 function SDEFunction(f, g; kwargs...)
3050 SDEFunction{isinplace(f, 4), FullSpecialize}(f, g; kwargs...)
3051 end
3052 SDEFunction(f::SDEFunction; kwargs...) = f
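# Example (illustrative sketch): an SDEFunction pairs the drift `f` with the diffusion
# `g`; the in-place-ness of `g` is checked against `f` above via `isinplace(g, 4, "g", iip)`.
# Geometric Brownian motion as a sketch:
#
#     drift!(du, u, p, t) = (du .= p[1] .* u)
#     noise!(du, u, p, t) = (du .= p[2] .* u)
#     sdef = SDEFunction(drift!, noise!)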
3053
3054 @add_kwonly function SplitSDEFunction(f1, f2, g, mass_matrix, cache, analytic, tgrad, jac,
3055 jvp, vjp,
3056 jac_prototype, Wfact, Wfact_t, paramjac, observed,
3057 colorvec, sys)
3058 f1 = f1 isa AbstractSciMLOperator ? f1 : SDEFunction(f1, g)
3059 f2 = SDEFunction(f2, g)
3060
3061 SplitSDEFunction{isinplace(f2), FullSpecialize, typeof(f1), typeof(f2), typeof(g),
3062 typeof(mass_matrix), typeof(cache), typeof(analytic), typeof(tgrad), typeof(jac),
3063 typeof(jvp), typeof(vjp), typeof(jac_prototype), typeof(jac_prototype),
3064 typeof(Wfact), typeof(Wfact_t), typeof(paramjac), typeof(observed),
3065 typeof(colorvec),
3066 typeof(sys)}(f1, f2, g, mass_matrix, cache, analytic, tgrad, jac, jvp, vjp,
3067 jac_prototype, jac_prototype, Wfact, Wfact_t, paramjac, observed, colorvec, sys)
3068 end
3069
3070 function SplitSDEFunction{iip, specialize}(f1, f2, g;
3071 mass_matrix = __has_mass_matrix(f1) ?
3072 f1.mass_matrix :
3073 I,
3074 _func_cache = nothing,
3075 analytic = __has_analytic(f1) ? f1.analytic :
3076 nothing,
3077 tgrad = __has_tgrad(f1) ? f1.tgrad : nothing,
3078 jac = __has_jac(f1) ? f1.jac : nothing,
3079 jac_prototype = __has_jac_prototype(f1) ?
3080 f1.jac_prototype : nothing,
3081 sparsity = __has_sparsity(f1) ? f1.sparsity :
3082 jac_prototype,
3083 jvp = __has_jvp(f1) ? f1.jvp : nothing,
3084 vjp = __has_vjp(f1) ? f1.vjp : nothing,
3085 Wfact = __has_Wfact(f1) ? f1.Wfact : nothing,
3086 Wfact_t = __has_Wfact_t(f1) ? f1.Wfact_t :
3087 nothing,
3088 paramjac = __has_paramjac(f1) ? f1.paramjac :
3089 nothing,
3090 syms = nothing,
3091 indepsym = nothing,
3092 paramsyms = nothing,
3093 observed = __has_observed(f1) ? f1.observed :
3094 DEFAULT_OBSERVED,
3095 colorvec = __has_colorvec(f1) ? f1.colorvec :
3096 nothing,
3097 sys = __has_sys(f1) ? f1.sys : nothing) where {
3098 iip,
3099 specialize,
3100 }
3101 sys = something(sys, SymbolCache(syms, paramsyms, indepsym))
3102
3103 if specialize === NoSpecialize
3104 SplitSDEFunction{iip, specialize, Any, Any, Any, Any, Any, Any,
3105 Any, Any, Any, Any, Any, Any, Any, Any, Any,
3106 Any, Any, Any}(f1, f2, g, mass_matrix, _func_cache,
3107 analytic,
3108 tgrad, jac, jvp, vjp, jac_prototype,
3109 sparsity,
3110 Wfact, Wfact_t, paramjac, observed,
3111 colorvec, sys)
3112 else
3113 SplitSDEFunction{iip, specialize, typeof(f1), typeof(f2), typeof(g),
3114 typeof(mass_matrix), typeof(_func_cache),
3115 typeof(analytic),
3116 typeof(tgrad), typeof(jac), typeof(jvp), typeof(vjp),
3117 typeof(jac_prototype), typeof(sparsity),
3118 typeof(Wfact), typeof(Wfact_t), typeof(paramjac), typeof(observed),
3119 typeof(colorvec),
3120 typeof(sys)}(f1, f2, g, mass_matrix, _func_cache, analytic,
3121 tgrad, jac, jvp, vjp, jac_prototype, sparsity,
3122 Wfact, Wfact_t, paramjac,
3123 observed, colorvec, sys)
3124 end
3125 end
3126
3127 function SplitSDEFunction(f1, f2, g; kwargs...)
3128 SplitSDEFunction{isinplace(f2, 4)}(f1, f2, g; kwargs...)
3129 end
3130 function SplitSDEFunction{iip}(f1, f2, g; kwargs...) where {iip}
3131 SplitSDEFunction{iip, FullSpecialize}(SDEFunction(f1, g), SDEFunction{iip}(f2, g),
3132 g; kwargs...)
3133 end
3134 SplitSDEFunction(f::SplitSDEFunction; kwargs...) = f
3135
3136 @add_kwonly function DynamicalSDEFunction(f1, f2, g, mass_matrix, cache, analytic, tgrad,
3137 jac, jvp, vjp,
3138 jac_prototype, Wfact, Wfact_t, paramjac,
3139 observed, colorvec,
3140 sys)
3141 f1 = f1 isa AbstractSciMLOperator ? f1 : SDEFunction(f1, g)
3142 f2 = SDEFunction(f2, g)
3143
3144 DynamicalSDEFunction{isinplace(f2), FullSpecialize, typeof(f1), typeof(f2), typeof(g),
3145 typeof(mass_matrix),
3146 typeof(cache), typeof(analytic), typeof(tgrad), typeof(jac),
3147 typeof(jvp), typeof(vjp), typeof(jac_prototype), typeof(jac_prototype),
3148 typeof(Wfact), typeof(Wfact_t), typeof(paramjac), typeof(observed),
3149 typeof(colorvec),
3150 typeof(sys)}(f1, f2, g, mass_matrix, cache, analytic, tgrad, jac, jvp, vjp,
3151 jac_prototype, jac_prototype, Wfact, Wfact_t, paramjac, observed, colorvec, sys)
3152 end
3153
3154 function DynamicalSDEFunction{iip, specialize}(f1, f2, g;
3155 mass_matrix = __has_mass_matrix(f1) ?
3156 f1.mass_matrix : I,
3157 _func_cache = nothing,
3158 analytic = __has_analytic(f1) ? f1.analytic :
3159 nothing,
3160 tgrad = __has_tgrad(f1) ? f1.tgrad : nothing,
3161 jac = __has_jac(f1) ? f1.jac : nothing,
3162 jac_prototype = __has_jac_prototype(f1) ?
3163 f1.jac_prototype : nothing,
3164 sparsity = __has_sparsity(f1) ? f1.sparsity :
3165 jac_prototype,
3166 jvp = __has_jvp(f1) ? f1.jvp : nothing,
3167 vjp = __has_vjp(f1) ? f1.vjp : nothing,
3168 Wfact = __has_Wfact(f1) ? f1.Wfact : nothing,
3169 Wfact_t = __has_Wfact_t(f1) ? f1.Wfact_t :
3170 nothing,
3171 paramjac = __has_paramjac(f1) ? f1.paramjac :
3172 nothing,
3173 syms = nothing,
3174 indepsym = nothing,
3175 paramsyms = nothing,
3176 observed = __has_observed(f1) ? f1.observed :
3177 DEFAULT_OBSERVED,
3178 colorvec = __has_colorvec(f1) ? f1.colorvec :
3179 nothing,
3180 sys = __has_sys(f1) ? f1.sys : nothing) where {
3181 iip,
3182 specialize,
3183 }
3184 sys = something(sys, SymbolCache(syms, paramsyms, indepsym))
3185
3186 if specialize === NoSpecialize
3187 DynamicalSDEFunction{iip, specialize, Any, Any, Any, Any, Any, Any,
3188 Any, Any, Any, Any, Any, Any,
3189 Any, Any, Any, Any, Any, Any}(f1, f2, g, mass_matrix,
3190 _func_cache,
3191 analytic, tgrad, jac, jvp, vjp,
3192 jac_prototype, sparsity,
3193 Wfact, Wfact_t, paramjac, observed,
3194 colorvec, sys)
3195 else
3196 DynamicalSDEFunction{iip, specialize, typeof(f1), typeof(f2), typeof(g),
3197 typeof(mass_matrix), typeof(_func_cache),
3198 typeof(analytic),
3199 typeof(tgrad), typeof(jac), typeof(jvp), typeof(vjp),
3200 typeof(jac_prototype), typeof(sparsity),
3201 typeof(Wfact), typeof(Wfact_t), typeof(paramjac), typeof(observed),
3202 typeof(colorvec),
3203 typeof(sys)}(f1, f2, g, mass_matrix, _func_cache, analytic,
3204 tgrad, jac, jvp, vjp, jac_prototype, sparsity,
3205 Wfact, Wfact_t, paramjac, observed, colorvec, sys)
3206 end
3207 end
3208
3209 function DynamicalSDEFunction(f1, f2, g; kwargs...)
3210 DynamicalSDEFunction{isinplace(f2, 5)}(f1, f2, g; kwargs...)
3211 end
3212 function DynamicalSDEFunction{iip}(f1, f2, g; kwargs...) where {iip}
3213 DynamicalSDEFunction{iip, FullSpecialize}(SDEFunction{iip}(f1, g),
3214 SDEFunction{iip}(f2, g), g; kwargs...)
3215 end
3216 DynamicalSDEFunction(f::DynamicalSDEFunction; kwargs...) = f
3217
3218 function RODEFunction{iip, specialize}(f;
3219 mass_matrix = __has_mass_matrix(f) ? f.mass_matrix :
3220 I,
3221 analytic = __has_analytic(f) ? f.analytic : nothing,
3222 tgrad = __has_tgrad(f) ? f.tgrad : nothing,
3223 jac = __has_jac(f) ? f.jac : nothing,
3224 jvp = __has_jvp(f) ? f.jvp : nothing,
3225 vjp = __has_vjp(f) ? f.vjp : nothing,
3226 jac_prototype = __has_jac_prototype(f) ?
3227 f.jac_prototype :
3228 nothing,
3229 sparsity = __has_sparsity(f) ? f.sparsity :
3230 jac_prototype,
3231 Wfact = __has_Wfact(f) ? f.Wfact : nothing,
3232 Wfact_t = __has_Wfact_t(f) ? f.Wfact_t : nothing,
3233 paramjac = __has_paramjac(f) ? f.paramjac : nothing,
3234 syms = nothing,
3235 indepsym = nothing,
3236 paramsyms = nothing,
3237 observed = __has_observed(f) ? f.observed :
3238 DEFAULT_OBSERVED,
3239 colorvec = __has_colorvec(f) ? f.colorvec : nothing,
3240 sys = __has_sys(f) ? f.sys : nothing,
3241 analytic_full = __has_analytic_full(f) ?
3242 f.analytic_full : false) where {iip,
3243 specialize,
3244 }
3245 if jac === nothing && isa(jac_prototype, AbstractSciMLOperator)
3246 if iip
3247 jac = update_coefficients! #(J,u,p,t)
3248 else
3249 jac = (u, p, t) -> update_coefficients!(deepcopy(jac_prototype), u, p, t)
3250 end
3251 end
3252
3253 if jac_prototype !== nothing && colorvec === nothing &&
3254 ArrayInterface.fast_matrix_colors(jac_prototype)
3255 _colorvec = ArrayInterface.matrix_colors(jac_prototype)
3256 else
3257 _colorvec = colorvec
3258 end
3259
3260 # TODO: enable these conformity checks once the RODE integrator interface design is finalized
3261
3262 #=
3263 jaciip = jac !== nothing ? isinplace(jac,4,"jac",iip) : iip
3264 tgradiip = tgrad !== nothing ? isinplace(tgrad,4,"tgrad",iip) : iip
3265 jvpiip = jvp !== nothing ? isinplace(jvp,5,"jvp",iip) : iip
3266 vjpiip = vjp !== nothing ? isinplace(vjp,5,"vjp",iip) : iip
3267 Wfactiip = Wfact !== nothing ? isinplace(Wfact,4,"Wfact",iip) : iip
3268 Wfact_tiip = Wfact_t !== nothing ? isinplace(Wfact_t,4,"Wfact_t",iip) : iip
3269 paramjaciip = paramjac !== nothing ? isinplace(paramjac,4,"paramjac",iip) : iip
3270
3271 nonconforming = (jaciip,tgradiip,jvpiip,vjpiip,Wfactiip,Wfact_tiip,paramjaciip) .!= iip
3272 if any(nonconforming)
3273 nonconforming = findall(nonconforming)
3274 functions = ["jac","tgrad","jvp","vjp","Wfact","Wfact_t","paramjac"][nonconforming]
3275 throw(NonconformingFunctionsError(functions))
3276 end
3277 =#
3278
3279 _f = prepare_function(f)
3280 sys = something(sys, SymbolCache(syms, paramsyms, indepsym))
3281
3282
3283 if specialize === NoSpecialize
3284 RODEFunction{iip, specialize, Any, Any, Any, Any, Any,
3285 Any, Any, Any, Any, Any, Any, Any,
3286 Any,
3287 typeof(_colorvec), Any}(_f, mass_matrix, analytic,
3288 tgrad,
3289 jac, jvp, vjp,
3290 jac_prototype,
3291 sparsity, Wfact, Wfact_t,
3292 paramjac, observed,
3293 _colorvec, sys,
3294 analytic_full)
3295 else
3296 RODEFunction{iip, specialize, typeof(_f), typeof(mass_matrix),
3297 typeof(analytic), typeof(tgrad),
3298 typeof(jac), typeof(jvp), typeof(vjp), typeof(jac_prototype),
3299 typeof(sparsity), typeof(Wfact), typeof(Wfact_t),
3300 typeof(paramjac),
3301 typeof(observed), typeof(_colorvec),
3302 typeof(sys)}(_f, mass_matrix, analytic, tgrad,
3303 jac, jvp, vjp, jac_prototype, sparsity,
3304 Wfact, Wfact_t, paramjac,
3305 observed, _colorvec, sys, analytic_full)
3306 end
3307 end
3308
3309 function RODEFunction{iip}(f; kwargs...) where {iip}
3310 RODEFunction{iip, FullSpecialize}(f; kwargs...)
3311 end
3312 RODEFunction{iip}(f::RODEFunction; kwargs...) where {iip} = f
3313 function RODEFunction(f; kwargs...)
3314 RODEFunction{isinplace(f, 5), FullSpecialize}(f; kwargs...)
3315 end
3316 RODEFunction(f::RODEFunction; kwargs...) = f
3317
3318 function DAEFunction{iip, specialize}(f;
3319 analytic = __has_analytic(f) ? f.analytic : nothing,
3320 tgrad = __has_tgrad(f) ? f.tgrad : nothing,
3321 jac = __has_jac(f) ? f.jac : nothing,
3322 jvp = __has_jvp(f) ? f.jvp : nothing,
3323 vjp = __has_vjp(f) ? f.vjp : nothing,
3324 jac_prototype = __has_jac_prototype(f) ?
3325 f.jac_prototype :
3326 nothing,
3327 sparsity = __has_sparsity(f) ? f.sparsity :
3328 jac_prototype,
3329 Wfact = __has_Wfact(f) ? f.Wfact : nothing,
3330 Wfact_t = __has_Wfact_t(f) ? f.Wfact_t : nothing,
3331 paramjac = __has_paramjac(f) ? f.paramjac : nothing,
3332 syms = nothing,
3333 indepsym = nothing,
3334 paramsyms = nothing,
3335 observed = __has_observed(f) ? f.observed :
3336 DEFAULT_OBSERVED,
3337 colorvec = __has_colorvec(f) ? f.colorvec : nothing,
3338 sys = __has_sys(f) ? f.sys : nothing) where {iip,
3339 specialize,
3340 }
3341 if jac === nothing && isa(jac_prototype, AbstractSciMLOperator)
3342 if iip
3343 jac = update_coefficients! #(J,u,p,t)
3344 else
3345 jac = (u, p, t) -> update_coefficients!(deepcopy(jac_prototype), u, p, t)
3346 end
3347 end
3348
3349 if jac_prototype !== nothing && colorvec === nothing &&
3350 ArrayInterface.fast_matrix_colors(jac_prototype)
3351 _colorvec = ArrayInterface.matrix_colors(jac_prototype)
3352 else
3353 _colorvec = colorvec
3354 end
3355
3356 jaciip = jac !== nothing ? isinplace(jac, 6, "jac", iip) : iip
3357 jvpiip = jvp !== nothing ? isinplace(jvp, 7, "jvp", iip) : iip
3358 vjpiip = vjp !== nothing ? isinplace(vjp, 7, "vjp", iip) : iip
3359
3360 nonconforming = (jaciip, jvpiip, vjpiip) .!= iip
3361 if any(nonconforming)
3362 nonconforming = findall(nonconforming)
3363 functions = ["jac", "jvp", "vjp"][nonconforming]
3364 throw(NonconformingFunctionsError(functions))
3365 end
3366
3367 _f = prepare_function(f)
3368 sys = something(sys, SymbolCache(syms, paramsyms, indepsym))
3369
3370
3371 if specialize === NoSpecialize
3372 DAEFunction{iip, specialize, Any, Any, Any,
3373 Any, Any, Any, Any, Any,
3374 Any, Any, Any,
3375 Any, typeof(_colorvec), Any}(_f, analytic, tgrad, jac, jvp,
3376 vjp, jac_prototype, sparsity,
3377 Wfact, Wfact_t, paramjac, observed,
3378 _colorvec, sys)
3379 else
3380 DAEFunction{iip, specialize, typeof(_f), typeof(analytic), typeof(tgrad),
3381 typeof(jac), typeof(jvp), typeof(vjp), typeof(jac_prototype),
3382 typeof(sparsity), typeof(Wfact), typeof(Wfact_t),
3383 typeof(paramjac),
3384 typeof(observed), typeof(_colorvec),
3385 typeof(sys)}(_f, analytic, tgrad, jac, jvp, vjp,
3386 jac_prototype, sparsity, Wfact, Wfact_t,
3387 paramjac, observed,
3388 _colorvec, sys)
3389 end
3390 end
3391
3392 function DAEFunction{iip}(f; kwargs...) where {iip}
3393 DAEFunction{iip, FullSpecialize}(f; kwargs...)
3394 end
3395 DAEFunction{iip}(f::DAEFunction; kwargs...) where {iip} = f
3396 DAEFunction(f; kwargs...) = DAEFunction{isinplace(f, 5), FullSpecialize}(f; kwargs...)
3397 DAEFunction(f::DAEFunction; kwargs...) = f
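# Example (illustrative sketch): a DAEFunction is written in fully implicit residual form
# 0 = f(du, u, p, t); the in-place form fills a residual vector, f(resid, du, u, p, t),
# which is why the arity check above uses `isinplace(f, 5)`:
#
#     residual(du, u, p, t) = du .+ p .* u              # out-of-place (4 arguments)
#     residual!(r, du, u, p, t) = (r .= du .+ p .* u)   # in-place (5 arguments)
#     daef = DAEFunction(residual!)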
3398
3399 function DDEFunction{iip, specialize}(f;
3400 mass_matrix = __has_mass_matrix(f) ? f.mass_matrix :
3401 I,
3402 analytic = __has_analytic(f) ? f.analytic : nothing,
3403 tgrad = __has_tgrad(f) ? f.tgrad : nothing,
3404 jac = __has_jac(f) ? f.jac : nothing,
3405 jvp = __has_jvp(f) ? f.jvp : nothing,
3406 vjp = __has_vjp(f) ? f.vjp : nothing,
3407 jac_prototype = __has_jac_prototype(f) ?
3408 f.jac_prototype :
3409 nothing,
3410 sparsity = __has_sparsity(f) ? f.sparsity :
3411 jac_prototype,
3412 Wfact = __has_Wfact(f) ? f.Wfact : nothing,
3413 Wfact_t = __has_Wfact_t(f) ? f.Wfact_t : nothing,
3414 paramjac = __has_paramjac(f) ? f.paramjac : nothing,
3415 syms = nothing,
3416 indepsym = nothing,
3417 paramsyms = nothing,
3418 observed = __has_observed(f) ? f.observed :
3419 DEFAULT_OBSERVED,
3420 colorvec = __has_colorvec(f) ? f.colorvec : nothing,
3421 sys = __has_sys(f) ? f.sys : nothing) where {iip,
3422 specialize,
3423 }
3424 if jac === nothing && isa(jac_prototype, AbstractSciMLOperator)
3425 if iip
3426 jac = update_coefficients! #(J,u,p,t)
3427 else
3428 jac = (u, p, t) -> update_coefficients!(deepcopy(jac_prototype), u, p, t)
3429 end
3430 end
3431
3432 if jac_prototype !== nothing && colorvec === nothing &&
3433 ArrayInterface.fast_matrix_colors(jac_prototype)
3434 _colorvec = ArrayInterface.matrix_colors(jac_prototype)
3435 else
3436 _colorvec = colorvec
3437 end
3438
3439 jaciip = jac !== nothing ? isinplace(jac, 5, "jac", iip) : iip
3440 tgradiip = tgrad !== nothing ? isinplace(tgrad, 5, "tgrad", iip) : iip
3441 jvpiip = jvp !== nothing ? isinplace(jvp, 6, "jvp", iip) : iip
3442 vjpiip = vjp !== nothing ? isinplace(vjp, 6, "vjp", iip) : iip
3443 Wfactiip = Wfact !== nothing ? isinplace(Wfact, 6, "Wfact", iip) : iip
3444 Wfact_tiip = Wfact_t !== nothing ? isinplace(Wfact_t, 6, "Wfact_t", iip) : iip
3445 paramjaciip = paramjac !== nothing ? isinplace(paramjac, 5, "paramjac", iip) : iip
3446
3447 nonconforming = (jaciip, tgradiip, jvpiip, vjpiip, Wfactiip, Wfact_tiip,
3448 paramjaciip) .!= iip
3449 if any(nonconforming)
3450 nonconforming = findall(nonconforming)
3451 functions = ["jac", "tgrad", "jvp", "vjp", "Wfact", "Wfact_t", "paramjac"][nonconforming]
3452 throw(NonconformingFunctionsError(functions))
3453 end
3454
3455 _f = prepare_function(f)
3456 sys = something(sys, SymbolCache(syms, paramsyms, indepsym))
3457
3458
3459 if specialize === NoSpecialize
3460 DDEFunction{iip, specialize, Any, Any, Any, Any,
3461 Any, Any, Any, Any, Any, Any, Any,
3462 Any,
3463 Any, typeof(_colorvec), Any}(_f, mass_matrix,
3464 analytic,
3465 tgrad,
3466 jac, jvp, vjp,
3467 jac_prototype,
3468 sparsity, Wfact,
3469 Wfact_t,
3470 paramjac,
3471 observed,
3472 _colorvec, sys)
3473 else
3474 DDEFunction{iip, specialize, typeof(_f), typeof(mass_matrix), typeof(analytic),
3475 typeof(tgrad),
3476 typeof(jac), typeof(jvp), typeof(vjp), typeof(jac_prototype),
3477 typeof(sparsity), typeof(Wfact), typeof(Wfact_t),
3478 typeof(paramjac),
3479 typeof(observed),
3480 typeof(_colorvec), typeof(sys)}(_f, mass_matrix, analytic,
3481 tgrad, jac, jvp, vjp,
3482 jac_prototype, sparsity,
3483 Wfact, Wfact_t, paramjac,
3484 observed,
3485 _colorvec, sys)
3486 end
3487 end
3488
3489 function DDEFunction{iip}(f; kwargs...) where {iip}
3490 DDEFunction{iip, FullSpecialize}(f; kwargs...)
3491 end
3492 DDEFunction{iip}(f::DDEFunction; kwargs...) where {iip} = f
3493 DDEFunction(f; kwargs...) = DDEFunction{isinplace(f, 5), FullSpecialize}(f; kwargs...)
3494 DDEFunction(f::DDEFunction; kwargs...) = f
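# Example (illustrative sketch): a DDEFunction's `f` additionally receives the history
# function `h`, so the in-place signature is f(du, u, h, p, t) (hence `isinplace(f, 5)`
# above). With a hypothetical single constant delay `p.tau`:
#
#     function delayed!(du, u, h, p, t)
#         du .= h(p, t - p.tau) .- u
#     end
#     ddef = DDEFunction(delayed!)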
3495
3496 @add_kwonly function DynamicalDDEFunction{iip}(f1, f2, mass_matrix, analytic, tgrad, jac,
3497 jvp, vjp,
3498 jac_prototype, sparsity, Wfact, Wfact_t,
3499 paramjac,
3500 observed,
3501 colorvec, sys) where {iip}
3502 f1 = f1 isa AbstractSciMLOperator ? f1 : DDEFunction(f1)
3503 f2 = DDEFunction(f2)
3504
3505 DynamicalDDEFunction{isinplace(f2), FullSpecialize, typeof(f1), typeof(f2),
3506 typeof(mass_matrix),
3507 typeof(analytic), typeof(tgrad), typeof(jac), typeof(jvp),
3508 typeof(vjp),
3509 typeof(jac_prototype),
3509 typeof(jac_prototype), typeof(sparsity),
3510 typeof(Wfact), typeof(Wfact_t), typeof(paramjac), typeof(observed),
3511 typeof(colorvec),
3512 typeof(sys)}(f1, f2, mass_matrix, analytic, tgrad, jac, jvp,
3513 vjp, jac_prototype, sparsity, Wfact, Wfact_t,
3514 paramjac, observed,
3515 colorvec, sys)
3516 end
3517 function DynamicalDDEFunction{iip, specialize}(f1, f2;
3518 mass_matrix = __has_mass_matrix(f1) ?
3519 f1.mass_matrix : I,
3520 analytic = __has_analytic(f1) ? f1.analytic :
3521 nothing,
3522 tgrad = __has_tgrad(f1) ? f1.tgrad : nothing,
3523 jac = __has_jac(f1) ? f1.jac : nothing,
3524 jvp = __has_jvp(f1) ? f1.jvp : nothing,
3525 vjp = __has_vjp(f1) ? f1.vjp : nothing,
3526 jac_prototype = __has_jac_prototype(f1) ?
3527 f1.jac_prototype : nothing,
3528 sparsity = __has_sparsity(f1) ? f1.sparsity :
3529 jac_prototype,
3530 Wfact = __has_Wfact(f1) ? f1.Wfact : nothing,
3531 Wfact_t = __has_Wfact_t(f1) ? f1.Wfact_t :
3532 nothing,
3533 paramjac = __has_paramjac(f1) ? f1.paramjac :
3534 nothing,
3535 syms = nothing,
3536 indepsym = nothing,
3537 paramsyms = nothing,
3538 observed = __has_observed(f1) ? f1.observed :
3539 DEFAULT_OBSERVED,
3540 colorvec = __has_colorvec(f1) ? f1.colorvec :
3541 nothing,
3542 sys = __has_sys(f1) ? f1.sys : nothing) where {
3543 iip,
3544 specialize,
3545 }
3546 sys = something(sys, SymbolCache(syms, paramsyms, indepsym))
3547
3548 if specialize === NoSpecialize
3549 DynamicalDDEFunction{iip, specialize, Any, Any, Any, Any, Any, Any, Any, Any, Any,
3550 Any, Any, Any, Any, Any, Any, Any}(f1, f2, mass_matrix,
3551 analytic,
3552 tgrad,
3553 jac, jvp, vjp,
3554 jac_prototype,
3555 sparsity,
3556 Wfact, Wfact_t,
3557 paramjac,
3558 observed, colorvec,
3559 sys)
3560 else
3561 DynamicalDDEFunction{iip, specialize, typeof(f1), typeof(f2), typeof(mass_matrix),
3562 typeof(analytic),
3563 typeof(tgrad), typeof(jac), typeof(jvp), typeof(vjp),
3564 typeof(jac_prototype), typeof(sparsity),
3565 typeof(Wfact), typeof(Wfact_t), typeof(paramjac), typeof(observed),
3566 typeof(colorvec),
3567 typeof(sys)}(f1, f2, mass_matrix, analytic, tgrad, jac, jvp,
3568 vjp, jac_prototype, sparsity,
3569 Wfact, Wfact_t, paramjac, observed,
3570 colorvec, sys)
3571 end
3572 end
3573
3574 function DynamicalDDEFunction(f1, f2 = nothing; kwargs...)
3575 DynamicalDDEFunction{isinplace(f1, 6)}(f1, f2; kwargs...)
3576 end
3577 function DynamicalDDEFunction{iip}(f1, f2; kwargs...) where {iip}
3578 DynamicalDDEFunction{iip, FullSpecialize}(DDEFunction{iip}(f1),
3579 DDEFunction{iip}(f2); kwargs...)
3580 end
3581 DynamicalDDEFunction(f::DynamicalDDEFunction; kwargs...) = f
3582
3583 function SDDEFunction{iip, specialize}(f, g;
3584 mass_matrix = __has_mass_matrix(f) ? f.mass_matrix :
3585 I,
3586 analytic = __has_analytic(f) ? f.analytic : nothing,
3587 tgrad = __has_tgrad(f) ? f.tgrad : nothing,
3588 jac = __has_jac(f) ? f.jac : nothing,
3589 jvp = __has_jvp(f) ? f.jvp : nothing,
3590 vjp = __has_vjp(f) ? f.vjp : nothing,
3591 jac_prototype = __has_jac_prototype(f) ?
3592 f.jac_prototype :
3593 nothing,
3594 sparsity = __has_sparsity(f) ? f.sparsity :
3595 jac_prototype,
3596 Wfact = __has_Wfact(f) ? f.Wfact : nothing,
3597 Wfact_t = __has_Wfact_t(f) ? f.Wfact_t : nothing,
3598 paramjac = __has_paramjac(f) ? f.paramjac : nothing,
3599 ggprime = nothing,
3600 syms = nothing,
3601 indepsym = nothing,
3602 paramsyms = nothing,
3603 observed = __has_observed(f) ? f.observed :
3604 DEFAULT_OBSERVED,
3605 colorvec = __has_colorvec(f) ? f.colorvec : nothing,
3606 sys = __has_sys(f) ? f.sys : nothing) where {iip,
3607 specialize,
3608 }
3609 if jac === nothing && isa(jac_prototype, AbstractSciMLOperator)
3610 if iip
3611 jac = update_coefficients! #(J,u,p,t)
3612 else
3613 jac = (u, p, t) -> update_coefficients!(deepcopy(jac_prototype), u, p, t)
3614 end
3615 end
3616
3617 if jac_prototype !== nothing && colorvec === nothing &&
3618 ArrayInterface.fast_matrix_colors(jac_prototype)
3619 _colorvec = ArrayInterface.matrix_colors(jac_prototype)
3620 else
3621 _colorvec = colorvec
3622 end
3623
3624 _f = prepare_function(f)
3625 _g = prepare_function(g)
3626 sys = something(sys, SymbolCache(syms, paramsyms, indepsym))
3627
3628
3629 if specialize === NoSpecialize
3630 SDDEFunction{iip, specialize, Any, Any, Any, Any, Any,
3631 Any, Any, Any, Any, Any, Any, Any,
3632 Any, Any,
3633 Any, typeof(_colorvec), Any}(_f, _g, mass_matrix,
3634 analytic, tgrad,
3635 jac,
3636 jvp,
3637 vjp,
3638 jac_prototype,
3639 sparsity, Wfact,
3640 Wfact_t,
3641 paramjac, ggprime,
3642 observed,
3643 _colorvec,
3644 sys)
3645 else
3646 SDDEFunction{iip, specialize, typeof(_f), typeof(_g),
3647 typeof(mass_matrix), typeof(analytic), typeof(tgrad),
3648 typeof(jac), typeof(jvp), typeof(vjp), typeof(jac_prototype),
3649 typeof(sparsity), typeof(Wfact), typeof(Wfact_t),
3650 typeof(paramjac), typeof(ggprime), typeof(observed),
3651 typeof(_colorvec), typeof(sys)}(_f, _g, mass_matrix,
3652 analytic, tgrad, jac,
3653 jvp, vjp, jac_prototype,
3654 sparsity, Wfact,
3655 Wfact_t,
3656 paramjac, ggprime,
3657 observed, _colorvec, sys)
3658 end
3659 end
3660
3661 function SDDEFunction{iip}(f, g; kwargs...) where {iip}
3662 SDDEFunction{iip, FullSpecialize}(f, g; kwargs...)
3663 end
3664 SDDEFunction{iip}(f::SDDEFunction, g; kwargs...) where {iip} = f
3665 function SDDEFunction(f, g; kwargs...)
3666 SDDEFunction{isinplace(f, 5), FullSpecialize}(f, g; kwargs...)
3667 end
3668 SDDEFunction(f::SDDEFunction; kwargs...) = f
3669
3670 function NonlinearFunction{iip, specialize}(f;
3671 mass_matrix = __has_mass_matrix(f) ?
3672 f.mass_matrix :
3673 I,
3674 analytic = __has_analytic(f) ? f.analytic :
3675 nothing,
3676 tgrad = __has_tgrad(f) ? f.tgrad : nothing,
3677 jac = __has_jac(f) ? f.jac : nothing,
3678 jvp = __has_jvp(f) ? f.jvp : nothing,
3679 vjp = __has_vjp(f) ? f.vjp : nothing,
3680 jac_prototype = __has_jac_prototype(f) ?
3681 f.jac_prototype : nothing,
3682 sparsity = __has_sparsity(f) ? f.sparsity :
3683 jac_prototype,
3684 Wfact = __has_Wfact(f) ? f.Wfact : nothing,
3685 Wfact_t = __has_Wfact_t(f) ? f.Wfact_t :
3686 nothing,
3687 paramjac = __has_paramjac(f) ? f.paramjac :
3688 nothing,
3689 syms = nothing,
3690 paramsyms = nothing,
3691 observed = __has_observed(f) ? f.observed :
3692 DEFAULT_OBSERVED_NO_TIME,
3693 colorvec = __has_colorvec(f) ? f.colorvec :
3694 nothing,
3695 sys = __has_sys(f) ? f.sys : nothing,
3696 resid_prototype = __has_resid_prototype(f) ? f.resid_prototype : nothing) where {
3697 iip, specialize}
3698
3699 if mass_matrix === I && f isa Tuple
3700 mass_matrix = ((I for i in 1:length(f))...,)
3701 end
3702
3703 if jac === nothing && isa(jac_prototype, AbstractSciMLOperator)
3704 if iip
3705 jac = update_coefficients! #(J,u,p,t)
3706 else
3707 jac = (u, p, t) -> update_coefficients!(deepcopy(jac_prototype), u, p, t)
3708 end
3709 end
3710
3711 if jac_prototype !== nothing && colorvec === nothing &&
3712 ArrayInterface.fast_matrix_colors(jac_prototype)
3713 _colorvec = ArrayInterface.matrix_colors(jac_prototype)
3714 else
3715 _colorvec = colorvec
3716 end
3717
3718 jaciip = jac !== nothing ? isinplace(jac, 3, "jac", iip) : iip
3719 jvpiip = jvp !== nothing ? isinplace(jvp, 4, "jvp", iip) : iip
3720 vjpiip = vjp !== nothing ? isinplace(vjp, 4, "vjp", iip) : iip
3721
3722 nonconforming = (jaciip, jvpiip, vjpiip) .!= iip
3723 if any(nonconforming)
3724 nonconforming = findall(nonconforming)
3725 functions = ["jac", "jvp", "vjp"][nonconforming]
3726 throw(NonconformingFunctionsError(functions))
3727 end
3728
3729 _f = prepare_function(f)
3730 sys = something(sys, SymbolCache(syms, paramsyms))
3731
3732
3733 if specialize === NoSpecialize
3734 NonlinearFunction{iip, specialize,
3735 Any, Any, Any, Any, Any,
3736 Any, Any, Any, Any, Any,
3737 Any, Any, Any,
3738 typeof(_colorvec), Any, Any}(_f, mass_matrix,
3739 analytic, tgrad, jac,
3740 jvp, vjp,
3741 jac_prototype,
3742 sparsity, Wfact,
3743 Wfact_t, paramjac,
3744 observed,
3745 _colorvec, sys, resid_prototype)
3746 else
3747 NonlinearFunction{iip, specialize,
3748 typeof(_f), typeof(mass_matrix), typeof(analytic), typeof(tgrad),
3749 typeof(jac), typeof(jvp), typeof(vjp), typeof(jac_prototype),
3750 typeof(sparsity), typeof(Wfact),
3751 typeof(Wfact_t), typeof(paramjac),
3752 typeof(observed),
3753 typeof(_colorvec), typeof(sys), typeof(resid_prototype)}(_f, mass_matrix,
3754 analytic, tgrad, jac,
3755 jvp, vjp, jac_prototype, sparsity,
3756 Wfact,
3757 Wfact_t, paramjac,
3758 observed, _colorvec, sys, resid_prototype)
3759 end
3760 end
3761
3762 function NonlinearFunction{iip}(f; kwargs...) where {iip}
3763 NonlinearFunction{iip, FullSpecialize}(f; kwargs...)
3764 end
3765 NonlinearFunction{iip}(f::NonlinearFunction; kwargs...) where {iip} = f
3766 function NonlinearFunction(f; kwargs...)
3767 NonlinearFunction{isinplace(f, 3), FullSpecialize}(f; kwargs...)
3768 end
3769 NonlinearFunction(f::NonlinearFunction; kwargs...) = f
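# Example (illustrative sketch): NonlinearFunction uses the time-free signatures
# f(du, u, p) (in-place residual) or f(u, p) (out-of-place), which is why the default
# constructor checks `isinplace(f, 3)`:
#
#     nf_oop = NonlinearFunction((u, p) -> u .^ 2 .- p)
#     nf_iip = NonlinearFunction((du, u, p) -> (du .= u .^ 2 .- p))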
3770
3771 function IntervalNonlinearFunction{iip, specialize}(f;
3772 analytic = __has_analytic(f) ?
3773 f.analytic :
3774 nothing,
3775 syms = nothing,
3776 paramsyms = nothing,
3777 observed = __has_observed(f) ?
3778 f.observed :
3779 DEFAULT_OBSERVED_NO_TIME,
3780 sys = __has_sys(f) ? f.sys : nothing) where {
3781 iip,
3782 specialize,
3783 }
3784 _f = prepare_function(f)
3785 sys = something(sys, SymbolCache(syms, paramsyms))
3786
3787
3788 if specialize === NoSpecialize
3789 IntervalNonlinearFunction{iip, specialize,
3790 Any, Any, Any, Any}(_f, analytic, observed, sys)
3791 else
3792 IntervalNonlinearFunction{iip, specialize,
3793 typeof(_f), typeof(analytic),
3794 typeof(observed),
3795 typeof(sys)}(_f, analytic,
3796 observed, sys)
3797 end
3798 end
3799
3800 function IntervalNonlinearFunction{iip}(f; kwargs...) where {iip}
3801 IntervalNonlinearFunction{iip, FullSpecialize}(f; kwargs...)
3802 end
3803 IntervalNonlinearFunction{iip}(f::IntervalNonlinearFunction; kwargs...) where {iip} = f
3804 function IntervalNonlinearFunction(f; kwargs...)
3805 IntervalNonlinearFunction{isinplace(f, 3), FullSpecialize}(f; kwargs...)
3806 end
3807 IntervalNonlinearFunction(f::IntervalNonlinearFunction; kwargs...) = f
3808
3809 struct NoAD <: AbstractADType end
3810
3811 (f::OptimizationFunction)(args...) = f.f(args...)
3812 OptimizationFunction(args...; kwargs...) = OptimizationFunction{true}(args...; kwargs...)
3813
3814 function OptimizationFunction{iip}(f, adtype::AbstractADType = NoAD();
3815 grad = nothing, hess = nothing, hv = nothing,
3816 cons = nothing, cons_j = nothing, cons_h = nothing,
3817 hess_prototype = nothing,
3818 cons_jac_prototype = __has_jac_prototype(f) ?
3819 f.jac_prototype : nothing,
3820 cons_hess_prototype = nothing,
3821 syms = nothing,
3822 paramsyms = nothing,
3823 observed = __has_observed(f) ? f.observed :
3824 DEFAULT_OBSERVED_NO_TIME,
3825 expr = nothing, cons_expr = nothing,
3826 sys = __has_sys(f) ? f.sys : nothing,
3827 lag_h = nothing, lag_hess_prototype = nothing,
3828 hess_colorvec = __has_colorvec(f) ? f.colorvec : nothing,
3829 cons_jac_colorvec = __has_colorvec(f) ? f.colorvec :
3830 nothing,
3831 cons_hess_colorvec = __has_colorvec(f) ? f.colorvec :
3832 nothing,
3833 lag_hess_colorvec = nothing) where {iip}
3834 isinplace(f, 2; has_two_dispatches = false, isoptimization = true)
3835 sys = something(sys, SymbolCache(syms, paramsyms))
3836 OptimizationFunction{iip, typeof(adtype), typeof(f), typeof(grad), typeof(hess),
3837 typeof(hv),
3838 typeof(cons), typeof(cons_j), typeof(cons_h),
3839 typeof(hess_prototype),
3840 typeof(cons_jac_prototype), typeof(cons_hess_prototype),
3841 typeof(observed),
3842 typeof(expr), typeof(cons_expr), typeof(sys), typeof(lag_h),
3843 typeof(lag_hess_prototype), typeof(hess_colorvec),
3844 typeof(cons_jac_colorvec), typeof(cons_hess_colorvec),
3845 typeof(lag_hess_colorvec)
3846 }(f, adtype, grad, hess,
3847 hv, cons, cons_j, cons_h,
3848 hess_prototype, cons_jac_prototype,
3849 cons_hess_prototype, observed, expr, cons_expr, sys,
3850 lag_h, lag_hess_prototype, hess_colorvec, cons_jac_colorvec,
3851 cons_hess_colorvec, lag_hess_colorvec)
3852 end
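# Example (illustrative sketch): the `adtype` argument records which automatic-
# differentiation backend downstream packages should use to fill in missing derivatives
# (grad, hess, cons_j, ...); derivatives supplied here as keywords take precedence.
# (The AutoForwardDiff backend below is assumed to come from ADTypes.jl / Optimization.jl.)
#
#     rosenbrock(u, p) = (p[1] - u[1])^2 + p[2] * (u[2] - u[1]^2)^2
#     of_noad = OptimizationFunction(rosenbrock)                      # defaults to NoAD()
#     # of_ad = OptimizationFunction(rosenbrock, AutoForwardDiff())   # with an AD backend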
3853
3854 function BVPFunction{iip, specialize, twopoint}(f, bc;
3855 mass_matrix = __has_mass_matrix(f) ? f.mass_matrix : I,
3856 analytic = __has_analytic(f) ? f.analytic : nothing,
3857 tgrad = __has_tgrad(f) ? f.tgrad : nothing,
3858 jac = __has_jac(f) ? f.jac : nothing,
3859 bcjac = __has_jac(bc) ? bc.jac : nothing,
3860 jvp = __has_jvp(f) ? f.jvp : nothing,
3861 vjp = __has_vjp(f) ? f.vjp : nothing,
3862 jac_prototype = __has_jac_prototype(f) ? f.jac_prototype : nothing,
3863 bcjac_prototype = __has_jac_prototype(bc) ? bc.jac_prototype : nothing,
3864 bcresid_prototype = nothing,
3865 sparsity = __has_sparsity(f) ? f.sparsity : jac_prototype,
3866 Wfact = __has_Wfact(f) ? f.Wfact : nothing,
3867 Wfact_t = __has_Wfact_t(f) ? f.Wfact_t : nothing,
3868 paramjac = __has_paramjac(f) ? f.paramjac : nothing,
3869 syms = nothing,
3870 indepsym = nothing,
3871 paramsyms = nothing,
3872 observed = __has_observed(f) ? f.observed : DEFAULT_OBSERVED,
3873 colorvec = __has_colorvec(f) ? f.colorvec : nothing,
3874 bccolorvec = __has_colorvec(bc) ? bc.colorvec : nothing,
3875 sys = __has_sys(f) ? f.sys : nothing) where {iip, specialize, twopoint}
3876 if mass_matrix === I && f isa Tuple
3877 mass_matrix = ((I for i in 1:length(f))...,)
3878 end
3879
3880 if (specialize === FunctionWrapperSpecialize) &&
3881 !(f isa FunctionWrappersWrappers.FunctionWrappersWrapper)
3882 error("FunctionWrapperSpecialize must be used on the problem constructor for access to u0, p, and t types!")
3883 end
3884
3885 if jac === nothing && isa(jac_prototype, AbstractSciMLOperator)
3886 if iip
3887 jac = update_coefficients! #(J,u,p,t)
3888 else
3889 jac = (u, p, t) -> update_coefficients!(deepcopy(jac_prototype), u, p, t)
3890 end
3891 end
3892
3893 if bcjac === nothing && isa(bcjac_prototype, AbstractSciMLOperator)
3894 if iip
3895 bcjac = update_coefficients! #(J,u,p,t)
3896 else
3897 bcjac = (u, p, t) -> update_coefficients!(deepcopy(bcjac_prototype), u, p, t)
3898 end
3899 end
3900
3901 if jac_prototype !== nothing && colorvec === nothing &&
3902 ArrayInterface.fast_matrix_colors(jac_prototype)
3903 _colorvec = ArrayInterface.matrix_colors(jac_prototype)
3904 else
3905 _colorvec = colorvec
3906 end
3907
3908 if bcjac_prototype !== nothing && bccolorvec === nothing &&
3909 ArrayInterface.fast_matrix_colors(bcjac_prototype)
3910 _bccolorvec = ArrayInterface.matrix_colors(bcjac_prototype)
3911 else
3912 _bccolorvec = bccolorvec
3913 end
3914
3915 bciip = if !twopoint
3916 isinplace(bc, 4, "bc", iip)
3917 else
3918 @assert length(bc) == 2
3919 bc = Tuple(bc)
3920 if isinplace(first(bc), 3, "bc", iip) != isinplace(last(bc), 3, "bc", iip)
3921 throw(NonconformingFunctionsError(["bc[1]", "bc[2]"]))
3922 end
3923 isinplace(first(bc), 3, "bc", iip)
3924 end
3925 jaciip = jac !== nothing ? isinplace(jac, 4, "jac", iip) : iip
3926 bcjaciip = if bcjac !== nothing
3927 if !twopoint
3928 isinplace(bcjac, 4, "bcjac", bciip)
3929 else
3930 @assert length(bcjac) == 2
3931 bcjac = Tuple(bcjac)
3932 if isinplace(first(bcjac), 3, "bcjac", bciip) != isinplace(last(bcjac), 3, "bcjac", bciip)
3933 throw(NonconformingFunctionsError(["bcjac[1]", "bcjac[2]"]))
3934 end
3935 isinplace(first(bcjac), 3, "bcjac", bciip)
3936 end
3937 else
3938 bciip
3939 end
3940 tgradiip = tgrad !== nothing ? isinplace(tgrad, 4, "tgrad", iip) : iip
3941 jvpiip = jvp !== nothing ? isinplace(jvp, 5, "jvp", iip) : iip
3942 vjpiip = vjp !== nothing ? isinplace(vjp, 5, "vjp", iip) : iip
3943 Wfactiip = Wfact !== nothing ? isinplace(Wfact, 5, "Wfact", iip) : iip
3944 Wfact_tiip = Wfact_t !== nothing ? isinplace(Wfact_t, 5, "Wfact_t", iip) : iip
3945 paramjaciip = paramjac !== nothing ? isinplace(paramjac, 4, "paramjac", iip) : iip
3946
3947 nonconforming = (bciip, jaciip, tgradiip, jvpiip, vjpiip, Wfactiip, Wfact_tiip,
3948 paramjaciip) .!= iip
3949 bc_nonconforming = bcjaciip .!= bciip
3950 if any(nonconforming)
3951 nonconforming = findall(nonconforming)
3952 functions = ["bc", "jac", "bcjac", "tgrad", "jvp", "vjp", "Wfact", "Wfact_t",
3953 "paramjac"][nonconforming]
3954 throw(NonconformingFunctionsError(functions))
3955 end
3956
3957 if twopoint
3958 if iip && (bcresid_prototype === nothing || length(bcresid_prototype) != 2)
3959 error("bcresid_prototype must be a tuple / indexable collection of length 2 for a inplace TwoPointBVPFunction")
3960 end
3961 if bcresid_prototype !== nothing && length(bcresid_prototype) == 2
3962 bcresid_prototype = ArrayPartition(first(bcresid_prototype),
3963 last(bcresid_prototype))
3964 end
3965
3966 bccolorvec !== nothing && length(bccolorvec) == 2 && (bccolorvec = Tuple(bccolorvec))
3967
3968 bcjac_prototype !== nothing && length(bcjac_prototype) == 2 && (bcjac_prototype = Tuple(bcjac_prototype))
3969 end
3970
3971 if any(bc_nonconforming)
3972 bc_nonconforming = findall(bc_nonconforming)
3973 functions = ["bcjac"][bc_nonconforming]
3974 throw(NonconformingFunctionsError(functions))
3975 end
3976
3977 _f = prepare_function(f)
3978
3979 sys = something(sys, SymbolCache(syms, paramsyms, indepsym))
3980
3981
3982 if specialize === NoSpecialize
3983 BVPFunction{iip, specialize, twopoint, Any, Any, Any, Any, Any,
3984 Any, Any, Any, Any, Any, Any, Any, Any, Any, Any,
3985 Any,
3986 Any, typeof(_colorvec), typeof(_bccolorvec), Any}(_f, bc, mass_matrix,
3987 analytic, tgrad, jac, bcjac, jvp, vjp, jac_prototype,
3988 bcjac_prototype, bcresid_prototype,
3989 sparsity, Wfact, Wfact_t, paramjac, observed,
3990 _colorvec, _bccolorvec, sys)
3991 else
3992 BVPFunction{iip, specialize, twopoint, typeof(_f), typeof(bc), typeof(mass_matrix),
3993 typeof(analytic), typeof(tgrad), typeof(jac), typeof(bcjac), typeof(jvp),
3994 typeof(vjp), typeof(jac_prototype),
3995 typeof(bcjac_prototype), typeof(bcresid_prototype), typeof(sparsity),
3996 typeof(Wfact), typeof(Wfact_t), typeof(paramjac), typeof(observed),
3997 typeof(_colorvec), typeof(_bccolorvec), typeof(sys)}(_f, bc, mass_matrix, analytic,
3998 tgrad, jac, bcjac, jvp, vjp,
3999 jac_prototype, bcjac_prototype, bcresid_prototype, sparsity,
4000 Wfact, Wfact_t, paramjac,
4001 observed,
4002 _colorvec, _bccolorvec, sys)
4003 end
4004 end
4005
4006 function BVPFunction{iip}(f, bc; twopoint::Union{Val, Bool}=Val(false),
4007 kwargs...) where {iip}
4008 BVPFunction{iip, FullSpecialize, _unwrap_val(twopoint)}(f, bc; kwargs...)
4009 end
4010 BVPFunction{iip}(f::BVPFunction, bc; kwargs...) where {iip} = f
4011 function BVPFunction(f, bc; twopoint::Union{Val, Bool}=Val(false), kwargs...)
4012 BVPFunction{isinplace(f, 4), FullSpecialize, _unwrap_val(twopoint)}(f, bc; kwargs...)
4013 end
4014 BVPFunction(f::BVPFunction; kwargs...) = f
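# Example (illustrative sketch, signatures assumed per BoundaryValueDiffEq.jl
# conventions): a BVPFunction pairs the ODE right-hand side with the boundary-condition
# residual `bc`; in the general (non two-point) form the in-place `bc` receives the whole
# solution object, bc!(resid, sol, p, t):
#
#     rhs!(du, u, p, t) = (du[1] = u[2]; du[2] = -u[1])
#     bc!(resid, sol, p, t) = (resid[1] = sol(0.0)[1] - 1.0; resid[2] = sol(1.0)[1])
#     bvpf = BVPFunction(rhs!, bc!)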
4015
4016 function IntegralFunction{iip, specialize}(f, integrand_prototype) where {iip, specialize}
4017 _f = prepare_function(f)
4018 IntegralFunction{iip, specialize, typeof(_f), typeof(integrand_prototype)}(_f,
4019 integrand_prototype)
4020 end
4021
4022 function IntegralFunction{iip}(f, integrand_prototype) where {iip}
4023 IntegralFunction{iip, FullSpecialize}(f, integrand_prototype)
4024 end
4025 function IntegralFunction(f)
4026 calculated_iip = isinplace(f, 3, "integral", true)
4027 if calculated_iip
4028 throw(IntegrandMismatchFunctionError(calculated_iip, false))
4029 end
4030 IntegralFunction{false}(f, nothing)
4031 end
4032 function IntegralFunction(f, integrand_prototype)
4033 calculated_iip = isinplace(f, 3, "integral", true)
4034 if !calculated_iip
4035 throw(IntegrandMismatchFunctionError(calculated_iip, true))
4036 end
4037 IntegralFunction{true}(f, integrand_prototype)
4038 end
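# Example (illustrative sketch): without an `integrand_prototype` the integrand must be
# out-of-place, f(u, p); passing a prototype switches to the in-place form f(y, u, p)
# that writes into `y`:
#
#     intf_oop = IntegralFunction((u, p) -> sin(p * u))
#     intf_iip = IntegralFunction((y, u, p) -> (y .= sin.(p .* u)), zeros(1))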
4039
4040 function BatchIntegralFunction{iip, specialize}(f, integrand_prototype;
4041 max_batch::Integer = typemax(Int)) where {iip, specialize}
4042 _f = prepare_function(f)
4043 BatchIntegralFunction{
4044 iip,
4045 specialize,
4046 typeof(_f),
4047 typeof(integrand_prototype),
4048 }(_f,
4049 integrand_prototype,
4050 max_batch)
4051 end
4052
4053 function BatchIntegralFunction{iip}(f,
4054 integrand_prototype;
4055 kwargs...) where {iip}
4056 return BatchIntegralFunction{iip, FullSpecialize}(f,
4057 integrand_prototype;
4058 kwargs...)
4059 end
4060
4061 function BatchIntegralFunction(f; kwargs...)
4062 calculated_iip = isinplace(f, 3, "batchintegral", true)
4063 if calculated_iip
4064 throw(IntegrandMismatchFunctionError(calculated_iip, false))
4065 end
4066 BatchIntegralFunction{false}(f, nothing; kwargs...)
4067 end
4068 function BatchIntegralFunction(f, integrand_prototype; kwargs...)
4069 calculated_iip = isinplace(f, 3, "batchintegral", true)
4070 if !calculated_iip
4071 throw(IntegrandMismatchFunctionError(calculated_iip, true))
4072 end
4073 BatchIntegralFunction{true}(f, integrand_prototype; kwargs...)
4074 end
4075
4076 ########## Existence Functions
4077
4078 # Check that field/property exists (may be nothing)
4079 __has_jac(f) = isdefined(f, :jac)
4080 __has_jvp(f) = isdefined(f, :jvp)
4081 __has_vjp(f) = isdefined(f, :vjp)
4082 __has_tgrad(f) = isdefined(f, :tgrad)
4083 __has_Wfact(f) = isdefined(f, :Wfact)
4084 __has_Wfact_t(f) = isdefined(f, :Wfact_t)
4085 __has_W_prototype(f) = isdefined(f, :W_prototype)
4086 __has_paramjac(f) = isdefined(f, :paramjac)
4087 __has_jac_prototype(f) = isdefined(f, :jac_prototype)
4088 __has_sparsity(f) = isdefined(f, :sparsity)
4089 __has_mass_matrix(f) = isdefined(f, :mass_matrix)
4090 __has_syms(f) = isdefined(f, :syms)
4091 __has_indepsym(f) = isdefined(f, :indepsym)
4092 __has_paramsyms(f) = isdefined(f, :paramsyms)
4093 __has_observed(f) = isdefined(f, :observed)
4094 __has_analytic(f) = isdefined(f, :analytic)
4095 __has_colorvec(f) = isdefined(f, :colorvec)
4096 __has_sys(f) = isdefined(f, :sys)
4097 __has_analytic_full(f) = isdefined(f, :analytic_full)
4098 __has_resid_prototype(f) = isdefined(f, :resid_prototype)
4099
4100 # compatibility: trait accessors that also require the field's value to be non-`nothing`
4101 has_invW(f::AbstractSciMLFunction) = false
4102 has_analytic(f::AbstractSciMLFunction) = __has_analytic(f) && f.analytic !== nothing
4103 has_jac(f::AbstractSciMLFunction) = __has_jac(f) && f.jac !== nothing
4104 has_jvp(f::AbstractSciMLFunction) = __has_jvp(f) && f.jvp !== nothing
4105 has_vjp(f::AbstractSciMLFunction) = __has_vjp(f) && f.vjp !== nothing
4106 has_tgrad(f::AbstractSciMLFunction) = __has_tgrad(f) && f.tgrad !== nothing
4107 has_Wfact(f::AbstractSciMLFunction) = __has_Wfact(f) && f.Wfact !== nothing
4108 has_Wfact_t(f::AbstractSciMLFunction) = __has_Wfact_t(f) && f.Wfact_t !== nothing
4109 has_paramjac(f::AbstractSciMLFunction) = __has_paramjac(f) && f.paramjac !== nothing
4110 has_sys(f::AbstractSciMLFunction) = __has_sys(f) && f.sys !== nothing
4111 function has_syms(f::AbstractSciMLFunction)
4112 if __has_syms(f)
4113 f.syms !== nothing
4114 else
4115 !isempty(variable_symbols(f))
4116 end
4117 end
4118 function has_indepsym(f::AbstractSciMLFunction)
4119 if __has_indepsym(f)
4120 f.indepsym !== nothing
4121 else
4122 !isempty(independent_variable_symbols(f))
4123 end
4124 end
4125 function has_paramsyms(f::AbstractSciMLFunction)
4126 if __has_paramsyms(f)
4127 f.paramsyms !== nothing
4128 else
4129 !isempty(parameter_symbols(f))
4130 end
4131 end
4132 function has_observed(f::AbstractSciMLFunction)
4133 __has_observed(f) && f.observed !== DEFAULT_OBSERVED && f.observed !== nothing
4134 end
4135 has_colorvec(f::AbstractSciMLFunction) = __has_colorvec(f) && f.colorvec !== nothing
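Illustrative usage (not part of the profiled source): the accessors above return true only when the field both exists and holds something other than `nothing`. The toy two-state system and Jacobian are assumptions.

using SciMLBase

f!(du, u, p, t) = (du .= -u)
jac!(J, u, p, t) = (J .= 0; J[1, 1] = -1; J[2, 2] = -1; J)
ofun = ODEFunction{true}(f!; jac = jac!)

SciMLBase.has_jac(ofun)     # true:  a Jacobian was supplied
SciMLBase.has_tgrad(ofun)   # false: `tgrad` is still the default `nothing`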
4136
4137 # TODO: find an appropriate way to check `has_*` for composite/wrapped functions; for now, forward to the wrapped sub-function
4138 has_jac(f::Union{SplitFunction, SplitSDEFunction}) = has_jac(f.f1)
4139 has_jvp(f::Union{SplitFunction, SplitSDEFunction}) = has_jvp(f.f1)
4140 has_vjp(f::Union{SplitFunction, SplitSDEFunction}) = has_vjp(f.f1)
4141 has_tgrad(f::Union{SplitFunction, SplitSDEFunction}) = has_tgrad(f.f1)
4142 has_Wfact(f::Union{SplitFunction, SplitSDEFunction}) = has_Wfact(f.f1)
4143 has_Wfact_t(f::Union{SplitFunction, SplitSDEFunction}) = has_Wfact_t(f.f1)
4144 has_paramjac(f::Union{SplitFunction, SplitSDEFunction}) = has_paramjac(f.f1)
4145 has_colorvec(f::Union{SplitFunction, SplitSDEFunction}) = has_colorvec(f.f1)
4146
4147 has_jac(f::Union{DynamicalODEFunction, DynamicalDDEFunction}) = has_jac(f.f1)
4148 has_jvp(f::Union{DynamicalODEFunction, DynamicalDDEFunction}) = has_jvp(f.f1)
4149 has_vjp(f::Union{DynamicalODEFunction, DynamicalDDEFunction}) = has_vjp(f.f1)
4150 has_tgrad(f::Union{DynamicalODEFunction, DynamicalDDEFunction}) = has_tgrad(f.f1)
4151 has_Wfact(f::Union{DynamicalODEFunction, DynamicalDDEFunction}) = has_Wfact(f.f1)
4152 has_Wfact_t(f::Union{DynamicalODEFunction, DynamicalDDEFunction}) = has_Wfact_t(f.f1)
4153 has_paramjac(f::Union{DynamicalODEFunction, DynamicalDDEFunction}) = has_paramjac(f.f1)
4154 has_colorvec(f::Union{DynamicalODEFunction, DynamicalDDEFunction}) = has_colorvec(f.f1)
4155
4156 has_jac(f::Union{UDerivativeWrapper, UJacobianWrapper}) = has_jac(f.f)
4157 has_jvp(f::Union{UDerivativeWrapper, UJacobianWrapper}) = has_jvp(f.f)
4158 has_vjp(f::Union{UDerivativeWrapper, UJacobianWrapper}) = has_vjp(f.f)
4159 has_tgrad(f::Union{UDerivativeWrapper, UJacobianWrapper}) = has_tgrad(f.f)
4160 has_Wfact(f::Union{UDerivativeWrapper, UJacobianWrapper}) = has_Wfact(f.f)
4161 has_Wfact_t(f::Union{UDerivativeWrapper, UJacobianWrapper}) = has_Wfact_t(f.f)
4162 has_paramjac(f::Union{UDerivativeWrapper, UJacobianWrapper}) = has_paramjac(f.f)
4163 has_colorvec(f::Union{UDerivativeWrapper, UJacobianWrapper}) = has_colorvec(f.f)
4164
4165 has_jac(f::JacobianWrapper) = has_jac(f.f)
4166 has_jvp(f::JacobianWrapper) = has_jvp(f.f)
4167 has_vjp(f::JacobianWrapper) = has_vjp(f.f)
4168 has_tgrad(f::JacobianWrapper) = has_tgrad(f.f)
4169 has_Wfact(f::JacobianWrapper) = has_Wfact(f.f)
4170 has_Wfact_t(f::JacobianWrapper) = has_Wfact_t(f.f)
4171 has_paramjac(f::JacobianWrapper) = has_paramjac(f.f)
4172 has_colorvec(f::JacobianWrapper) = has_colorvec(f.f)
4173
4174 ######### Additional traits
4175
4176 islinear(::AbstractDiffEqFunction) = false
4177 islinear(f::ODEFunction) = islinear(f.f)
4178 islinear(f::SplitFunction) = islinear(f.f1)
4179
4180 struct IncrementingODEFunction{iip, specialize, F} <: AbstractODEFunction{iip}
4181 f::F
4182 end
4183
4184 function IncrementingODEFunction{iip, specialize}(f) where {iip, specialize}
4185 _f = prepare_function(f)
4186 IncrementingODEFunction{iip, specialize, typeof(_f)}(_f)
4187 end
4188
4189 function IncrementingODEFunction{iip}(f) where {iip}
4190 IncrementingODEFunction{iip, FullSpecialize}(f)
4191 end
4192 function IncrementingODEFunction(f)
4193 IncrementingODEFunction{isinplace(f, 7), FullSpecialize}(f)
4194 end
4195
4196 (f::IncrementingODEFunction)(args...; kwargs...) = f.f(args...; kwargs...)
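Illustrative usage (not part of the profiled source): a sketch assuming the seven-argument incrementing (`mul!`-style) convention w .= α*f(v, u, p, t) .+ β*w, which is the arity `isinplace(f, 7)` detects as in-place. The linear operator u ↦ -p*u is a hypothetical example.

using SciMLBase

function incr!(w, v, u, p, t, α, β)
    @. w = α * (-p * v) + β * w    # hypothetical operator action with incrementing update
    return w
end
incf = SciMLBase.IncrementingODEFunction(incr!)

# The wrapper simply forwards calls to the stored function:
incf(zeros(2), ones(2), ones(2), 1.0, 0.0, 1.0, 0.0)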
4197
4198 for S in [:ODEFunction
4199 :DiscreteFunction
4200 :DAEFunction
4201 :DDEFunction
4202 :SDEFunction
4203 :RODEFunction
4204 :SDDEFunction
4205 :NonlinearFunction
4206 :IntervalNonlinearFunction
4207 :IncrementingODEFunction
4208 :BVPFunction
4209 :IntegralFunction
4210 :BatchIntegralFunction]
4211 @eval begin
4212 function ConstructionBase.constructorof(::Type{<:$S{iip}}) where {
4213 iip,
4214 }
4215 (args...) -> $S{iip, FullSpecialize, map(typeof, args)...}(args...)
4216 end
4217 end
4218 end
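Illustrative usage (not part of the profiled source): the loop above lets ConstructionBase-based tooling (e.g. Setfield.jl / Accessors.jl) rebuild these immutable wrappers from their fields while pinning only `iip`; the specialization level is reset to FullSpecialize and the field types are re-derived from the arguments. A minimal sketch:

using SciMLBase, ConstructionBase

ofun = ODEFunction{true}((du, u, p, t) -> (du .= -u))
rebuild = ConstructionBase.constructorof(typeof(ofun))        # closure fixing only iip = true
ofun2 = rebuild(ConstructionBase.getproperties(ofun)...)      # fields passed positionally, in field order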
4219
4220 SymbolicIndexingInterface.symbolic_container(fn::AbstractSciMLFunction) = fn.sys
4221
4222 function SymbolicIndexingInterface.observed(fn::AbstractSciMLFunction, sym)
4223 if has_observed(fn)
4224 if is_time_dependent(fn)
4225 return (u, p, t) -> fn.observed(sym, u, p, t)
4226 else
4227 return (u, p) -> fn.observed(sym, u, p)
4228 end
4229 end
4230 error("SciMLFunction does not have observed")
4231 end
4232
4233 function SymbolicIndexingInterface.observed(fn::AbstractSciMLFunction, sym::Symbol)
4234 return SymbolicIndexingInterface.observed(fn, getproperty(fn.sys, sym))
4235 end
4236
4237 SymbolicIndexingInterface.constant_structure(::AbstractSciMLFunction) = true
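Illustrative usage (not part of the profiled source): a hedged sketch of the `observed` accessor above. The hand-written callback, the SymbolCache contents, and the "energy" tag are illustrative assumptions; a time-dependent symbolic container is attached so the (u, p, t) closure branch is taken.

using SciMLBase, SymbolicIndexingInterface

myobserved(sym, u, p, t) = sum(abs2, u) / 2         # ignores `sym`; returns a scalar "energy"
sys = SymbolCache([:x, :y], [:a], :t)               # marks the function as time dependent
fn = ODEFunction{true}((du, u, p, t) -> (du .= -u); observed = myobserved, sys = sys)

getter = SymbolicIndexingInterface.observed(fn, "energy")
getter([1.0, 2.0], [0.5], 0.0)                      # == 2.5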