I wanted to try something simple with SciMLSensitivity.jl to find the sensitivities of the solution to a LinearProblem with respect to parameters. However, I get an unexpected error, which I outlined in a post.
using Zygote
using SciMLSensitivity
using ForwardDiff
using LinearSolve
import Random
Random.seed!(1234)
N = 2

function test_func(x::AbstractVector{T}) where {T<:Real}
    A = reshape(x[1:N*N], (N, N))
    b = x[N*N+1:end]
    # This works:
    # sol = A \ b
    # But this seems to not work:
    prob = LinearProblem(A, b)
    sol = solve(prob)
    return sum(sol)
end

# Random point
x0 = rand(N*N + N)
# Try with Zygote
grad_zygote = Zygote.gradient(test_func, x0)
display(grad_zygote[1])
# Compare with ForwardDiff
grad_forwarddiff = ForwardDiff.gradient(test_func, x0)
display(grad_forwarddiff)
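
For reference, the A \ b path mentioned in the comments can be pulled into its own function to confirm that the plain backslash solve differentiates cleanly with both backends. This is a minimal sketch reusing N and x0 from above; test_func_backslash is a name introduced here for illustration, not part of the original report.

function test_func_backslash(x::AbstractVector{T}) where {T<:Real}
    A = reshape(x[1:N*N], (N, N))
    b = x[N*N+1:end]
    # Direct solve via backslash, with no LinearSolve involved
    return sum(A \ b)
end

grad_backslash_zygote = Zygote.gradient(test_func_backslash, x0)[1]
grad_backslash_fd = ForwardDiff.gradient(test_func_backslash, x0)
display(grad_backslash_zygote)
display(grad_backslash_fd)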
As suggested by @avik-pal, returning sum(sol.u) fixes the problem, so this may be a bug in how the getindex(sol, sym) rrule is handled. Is this a bug, or is there a reason sol.u should be used in this case?
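
For completeness, here is a sketch of the reproducer with the suggested workaround applied; only the return line changes, and test_func_fixed is a name introduced here for illustration.

function test_func_fixed(x::AbstractVector{T}) where {T<:Real}
    A = reshape(x[1:N*N], (N, N))
    b = x[N*N+1:end]
    prob = LinearProblem(A, b)
    sol = solve(prob)
    # Summing the solution vector directly avoids the failing getindex(sol, sym) rrule path
    return sum(sol.u)
end

# Zygote no longer errors with this version
grad_zygote_fixed = Zygote.gradient(test_func_fixed, x0)[1]
grad_forwarddiff_fixed = ForwardDiff.gradient(test_func_fixed, x0)
display(grad_zygote_fixed)
display(grad_forwarddiff_fixed)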