
DynPoint? Estimating Velocity #60

Closed

mc2922 opened this issue Jun 1, 2018 · 13 comments

@mc2922 (Collaborator) commented Jun 1, 2018

DynPoint is not the most satisfying name, since we may also want a point with accelerations. DOF is misleading. We'll have to think about the name.

What about ->

struct Point4DOF <: IncrementalInference.InferenceVariable
  dims::Int
  labels::Vector{String}
  Point4DOF() = new(5, String["Point4";]) # x, y, dx/dt, dy/dt, t
end

mutable struct Point4Point4{T <: Distribution} <: IncrementalInference.FunctorPairwise
  z::T
  Point4Point4{T}() where {T <: Distribution} = new{T}()
  Point4Point4(z1::T) where {T <: Distribution} = new{T}(z1)
  Point4Point4(mean::Vector{Float64}, cov::Array{Float64,2}) = new{Distributions.MvNormal}(MvNormal(mean, cov))
end
getSample(p4p4::Point4Point4, N::Int=1) = (rand(p4p4.z,N), )
function (p4p4::Point4Point4)(res::Array{Float64},
      idx::Int,
      meas::Tuple,
      xi::Array{Float64,5},
      xj::Array{Float64,5}  )
  z = meas[1][1,idx]
  Xi, Xj = xi[:,idx], xj[:,idx]
  dt = Xj[5] - Xi[5] # implies Xj occurred after Xi -- potential issue?
  res = z - (Xj - (Xi+dt*[1 1 0 0]*Xi))
  nothing
end
@dehann (Member) commented Jun 1, 2018

Hi, thanks! The idea is to add velocity states to Point2 variables, right? And you want to store timestamps in each variable node?

Maybe we could use the variable name DynPoint2 instead, since this is still a 2D system?

I've made a few changes; here is what I think. If we're good, I will bring the code in with a pull request and reference this issue there.

struct DynPoint2 <: IncrementalInference.InferenceVariable
  ut::Int64 # microsecond time
  dims::Int
  labels::Vector{String}
  DynPoint2() = new(0, 4, String[]) # x, y, dx/dt, dy/dt
  DynPoint2(ut::Int64) = new(ut, 4, String[]) # x, y, dx/dt, dy/dt
end

I don't think you want to keep the time stamps as free variables to be estimated (although that is entirely okay to do if you want). For this the dimension of the node is still only 4, but there is metadata to store in the variable node as well. I also dropped the "Point4" variable node name from the labels, to reduce the amount of data stored in the database, but we can certainly add it back if you want to query against these variable types.
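As a quick sanity check of the variable type above (an illustrative sketch, not part of the original proposal):

dp = DynPoint2(1_000_000)   # timestamp in microseconds, kept as metadata
@assert dp.dims == 4        # estimated state is [x, y, dx/dt, dy/dt]; time is not a free variable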

mutable struct DynPoint2DynPoint2{T <: Distribution} <: IncrementalInference.FunctorPairwise
  z::T
  DynPoint2DynPoint2{T}() where {T <: Distribution} = new{T}()
  DynPoint2DynPoint2(z1::T) where {T <: Distribution} = new{T}(z1)
  DynPoint2DynPoint2(mean::Vector{Float64}, cov::Array{Float64,2}) = new{Distributions.MvNormal}(MvNormal(mean, cov))
end
getSample(dp2dp2::DynPoint2DynPoint2, N::Int=1) = (rand(dp2dp2.z,N), )
function (dp2dp2::DynPoint2DynPoint2)(res::Array{Float64},
      idx::Int,
      meas::Tuple,
      Xi::Array{Float64,2},
      Xj::Array{Float64,2}  )
  z = meas[1][:,idx]
  xi, xj = Xi[:,idx], Xj[:,idx]
  # dt = xj[5] - xi[5] # I see the problem, you no longer have access to time stamps that were stored in the variable nodes -- TODO
  res[1:2] = z[1:2] - (xj[1:2] - (xi[1:2]+dt*xi[3:4]))
  res[3:4] = z[3:4] - (xj[3:4] - xi[3:4])
  nothing
end

The Xi::Array{Float64,2} arrays have 2 dimensions, not 5, otherwise you would be indexing Xi[i,j,k,l,m] -- 4 rows for the states, by N columns for samples from the marginal of that variable.

I also added the indexing res[1:4], which is a solver requirement: we want to overwrite the memory values in res[1:4], not create a new local variable called res and write the values there.
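A minimal standalone illustration of that distinction (plain Julia, independent of the solver; the function names are hypothetical):

function rebind!(res)
  res = [1.0, 2.0]       # rebinds the local name only; the caller's array is untouched
  nothing
end
function overwrite!(res)
  res[1:2] = [1.0, 2.0]  # writes into the caller's memory
  nothing
end
r = zeros(2); rebind!(r)     # r is still [0.0, 0.0]
r = zeros(2); overwrite!(r)  # r is now [1.0, 2.0]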

This example will require some solver code changes (thanks for pointing them out) -- we will need the ability to pass DynPoint2DynPoint2 variable node metadata down to the residual function. I will get right on that and open a new issue / feature request to track it.

Lastly, should we make the residual function calculate the (x,y) position residual as well as the (dx/dt, dy/dt) velocity residual? This assumes the DynPoint2DynPoint2 factor has a 4-dimensional measurement z.

@dehann (Member) commented Jun 3, 2018

See progress here:
https://github.com/JuliaRobotics/RoME.jl/pull/63/files#diff-cdc452d2efd9ea5d01d2ddd3ed447286R1

Pending change to solver first:
JuliaRobotics/IncrementalInference.jl#102

Merging sequence: prepare RoME.jl first, then merge to METADATA.jl with a new release; then the changes in IncrementalInference.jl can be tagged and merged to METADATA.jl as well.

@dehann (Member) commented Jun 4, 2018

Hi @mc2922 , please see the new test code here:
https://github.com/JuliaRobotics/RoME.jl/blob/feature/addusermetadata/test/testDynPoint2D.jl

If you want to test with both IncrementalInference and RoME on branches, they would be:

Otherwise, hold off until these are merged into the master branch -- I will do that in a few days.

There is one non-obvious thing to be aware of: if you look at the test example you will see the second pose :x1 results in (p = 20,20; v = 10,10). That is because the position value stored in the DynPoint2DynPoint2 factor is actually something called "delta position plus" in some lingo. What do you think would be the best way to represent the deltas between the DynPoint2s?

I'm personally comfortable with using "delta position plus", given my background in inertial odometry preintegrals. We would just have to communicate all this clearly.

Thanks for filing the issue / request and code!

@mc2922 (Collaborator, Author) commented Jun 4, 2018

Let me read up on the pre-integrals; I'm not too familiar with them. I will edit this comment after.
If I understood the test code correctly, dynpt2 initialized with a velocity prior only stores delta positions as opposed to global positions?

  1. To back out the global referenced position of a target being tracked you would have to sum over all nodes?
  2. Will the measurements (range or otherwise) be implemented as delta-positions too? Aren't the deltas different from the velocities only by the (ostensibly constant or not random) time differences? In that case the pose vector is unnecessarily overdetermined, because delta(x) = delta(t) * v.

@dehann (Member) commented Jun 5, 2018

Let me read up on the pre-integrals,

You don't have to go too far. The basic idea is that the change in position is composed of three components (owing to the double integration):

p_j = p_i + v_i*Δt + Δp⁺    (eq. 1)

where Δp⁺ ("delta position plus") is the doubly integrated acceleration over the interval.

What is happening at the moment is that the DynPoint2DynPoint2 factor uses the above equation to define the difference in position between the two DynPoint2s. The position part stored in the DynPoint2DynPoint2 factor corresponds to Δp⁺ only.

If I understood the test code correctly, dynpt2 initialized with a velocity prior

[figure: DynPoint2 example factor graph]

The prior (unary) factor attached to :x0 has dimension 4 and sets the position at (0,0) and the velocity at (10,10), as per the code:

using RoME, Distributions
fg = initfg()
v0 = addNode!(fg, :x0, DynPoint2(ut=0))
# Prior factor as boundary condition
pp0 = DynPoint2VelocityPrior(MvNormal([zeros(2);10*ones(2)], 0.1*eye(4)))
f0 = addFactor!(fg, [:x0;], pp0)

prior only stores delta positions as opposed to global positions?

The prior stores / sets global (absolute) values.

In contrast, looking at the (conditional) likelihood between :x0 and :x1,

v1 = addNode!(fg, :x1, DynPoint2(ut=1000_000)) # time in microseconds
# conditional likelihood between Dynamic Point2
dp2dp2 = DynPoint2DynPoint2(MvNormal([10*ones(2);zeros(2)], 0.1*eye(4)))
f1 = addFactor!(fg, [:x0;:x1], dp2dp2)

indicates a relative-only measurement between the two locations -- as per eq. (1). In other words, the MvNormal([10*ones(2);zeros(2)], 0.1*eye(4)) implies that Δp⁺ = [10,10]. The change in velocity between :x0 and :x1 is Δv = [0,0].

To back out the global referenced position of a target being tracked you would have to sum over all nodes?

No, the values stored in the variables are in whichever reference frame you choose (as long as the priors agree). In our case, the values in the factor graph variable nodes are in the "global" reference frame:

ensureAllInitialized!(fg)
tree = wipeBuildNewTree!(fg)
inferOverTree!(fg, tree)
using KernelDensityEstimate
@show x0 = getKDEMax(getVertKDE(fg, :x0))
# julia> ... = [-0.19441, 0.0187019, 10.0082, 10.0901]

@show x1 = getKDEMax(getVertKDE(fg, :x1))
 # julia> ... = [19.9072, 19.9765, 10.0418, 10.0797]

This implies the first position :x0 is found to have "global" (absolute) position [-0.19441, 0.0187019], while the second global position for :x1 is found as [19.9072, 19.9765]. Notice how :x1's position is not at [10,10] as might be expected. This is eq. (1) at work.
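To make eq. (1) concrete with the numbers above (a small arithmetic check, with Δt = 1 s between :x0 and :x1):

dt   = 1.0              # (1000_000 - 0) microseconds = 1 second
p_x0 = [0.0; 0.0]       # prior position on :x0
v_x0 = [10.0; 10.0]     # prior velocity on :x0
dpp  = [10.0; 10.0]     # "delta position plus" carried by the DynPoint2DynPoint2 factor
p_x1 = p_x0 + dt*v_x0 + dpp   # = [20.0, 20.0], matching the solved :x1 of roughly [19.9, 20.0]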

Will the measurements (range or otherwise) be implemented as delta-positions too?

It comes down to how we model it, so no, not necessarily. We are not restricted to just the DynPoint2DynPoint2 approach suggested here. The thing to remember is that with eq. (1) we are essentially repeating the inter-position integration each time we evaluate the factor. I will try to get a reasonable non-integrating factor example up soon; however, the regular high-school style Newtonian constant velocity assumption model is consistent with our general modeling (up for debate of course!).

Aren't the deltas different from the velocities only by the (ostensibly constant or not random) time differences?

If I understand your question right: the use-case suggested in these code blocks assumes the time instants associated with :x0 (at 0 us) and :x1 (at 1000000 us) are deterministic and not being estimated. If we add the time estimates to the state vector (as you originally suggested above), say for example a DynPoint2TDynPoint2T factor, then the time delta would be treated stochastically alongside the position and velocity estimates. Not sure I'm answering your question though?
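Purely for illustration (no such type exists in RoME at the moment), a stochastic-time variant of the variable could carry the timestamp as a fifth state dimension instead of metadata:

# hypothetical sketch only -- time as an estimated state rather than metadata
struct DynPoint2T <: IncrementalInference.InferenceVariable
  dims::Int
  labels::Vector{String}
  DynPoint2T() = new(5, String[]) # x, y, dx/dt, dy/dt, t
end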

In that case the pose vector is unnecessarily overdetermined, because delta(x) = delta(t) * v.

Yes, it is overdetermined, and maybe a question of taste at this point. I agree with the logic: if the timestamps are treated deterministically, the change in position and the change in time immediately imply a change in velocity. Should we then consider something like the following:

function (dp2dp2::VelPoint2VelPoint2)(...)
  ...
  dt = ...# tj - ti
  dp = (xj[1:2]-xi[1:2])
  res[1:2] = z[1:2] - dp
  res[3:4] = z[3:4] - (dp/dt - xi[3:4]) # EDIT, fixed mistake
  # res[3:4] = z[3:4] - (dp/dt - 0.5*(xj[3:4]+xi[3:4])) # midpoint integration
  nothing # don't want to return anything from this function
end

What do you think about that? Maybe we should just add both and use different names for them -- I suggest VelPoint2VelPoint2; what do you think?

dehann added the decision label on Jun 5, 2018
@mc2922 (Collaborator, Author) commented Jun 5, 2018

Thanks for the write-up! From this:

res[1:2] = z[1:2] - dp
res[3:4] = z[3:4] - (dp/dt - xi[3:4])

I see that the pose vector is [x y xdot ydot]? But here:

#julia> ... = [-0.19441, 0.0187019, 10.0082, 10.0901]
This implies the first position :x0 is found to have "global" (absolute) position [-0.19441, 0.0187019],

You are referencing [1:2] as position, and [3:4] as the velocity? This initialization should set point 1 to (10,10) with velocities (0.1,0.1)? I would still expect the positions of pt2 to come out very close to 10,10?

dp2dp2 = DynPoint2DynPoint2(MvNormal([10*ones(2);zeros(2)], 0.1*eye(4)))

@mc2922 (Collaborator, Author) commented Jun 5, 2018

I read [1] for the preintegration explanation.

#res[3:4] = z[3:4] - (dp/dt - 0.5*(xj[3:4]+xi[3:4])) # midpoint integration

I wonder if we should tackle the more general problem right away, i.e. assume that odometry will be in a local frame only, and encode bearing + rotation in and out of local/global frames? Or do you think we should get a working example with the current code first? In that case, I'm hesitant about the midpoint integration. It would make sense if we obtain many odometric measurements quickly (dt small), but if dt is large, it is like a baked-in time-lag filter; we might as well just use the new position. It probably makes sense to have two factor types, one with each?

@dehann (Member) commented Jun 6, 2018

I see that the pose vector is [x y xdot ydot]?
You are referencing [1:2] as position, and [3:4] as the velocity?

Correct.

This initialization should set point 1

I understand you to mean :x0

to (10,10) with velocities (0.1,0.1)?

Incorrect:

fg = initfg()
v0 = addNode!(fg, :x0, DynPoint2(ut=0))
# Prior factor as boundary condition
pp0 = DynPoint2VelocityPrior(MvNormal([zeros(2);10*ones(2)], 0.1*eye(4)))
f0 = addFactor!(fg, [:x0;], pp0)

The prior puts the position at [0,0] and the velocity at [10,10], both with a normally distributed covariance of 0.1. By looking at the values in :x0 you will see:

using KernelDensityEstimate
@show x0 = getKDEMax(getVertKDE(fg, :x0))
# julia> ... = [-0.19441, 0.0187019, 10.0082, 10.0901]

I would still expect the positions of pt2 to come out very close to 10,10?

This is the "delta position plus" behaviour of eq. (1) that I'm trying to point out. At this point there are two options for factors that drive velocity; I think the names would be DynPoint2DynPoint2 or VelPoint2VelPoint2, as discussed above.

I wonder if we should tackle the more general problem right away,

That would be good, yes.

i.e. assume that odometry will be in a local frame only, and encode bearing + rotation in and out of local/global frames?

If I am understanding you correctly, that is precisely what the current code is already doing. The likelihood (or IncrementalInference.FunctorPairwise) factors are in the local frame. Both DynPoint2DynPoint2 and VelPoint2VelPoint2 would define a transform from point xi to xj, in the xi reference frame.

The values stored in :x0 and :x1 are in the global frame, such as shown with:

@show x0 = getKDEMax(getVertKDE(fg, :x0))
# julia> ... = [-0.19441, 0.0187019, 10.0082, 10.0901]
@show x1 = getKDEMax(getVertKDE(fg, :x1))
 # julia> ... = [19.9072, 19.9765, 10.0418, 10.0797]

Or do you think we should get a working example with the current code first?

I want to add the VelPoint2VelPoint2 factor so you can choose whether you want to use the differentiation in a "forwards" (DynPoint2DynPoint2) or "backwards" (VelPoint2VelPoint2) manner.

In that case, I'm hesitant about the midpoint integration.

No problem, let's leave it out for now; it is easy to add once the desired workflow is clear.

It probably makes sense to have two factor types, one with each?

I will get on that -- DynPoint2DynPoint2 and VelPoint2VelPoint2 with residual functions as discussed above. Let me know if you have objections to the naming or the residual functions (call it peer review :-).

I read [1] for the preintegration explanation.

PS (author disclosure): you can also find a continuous-time, higher-order preintegration derivation (a formal Taylor expansion) and discussion in Chapter 4, developed independently.

dehann changed the title from "DynPoint? 4DOF point?" to "DynPoint? Estimating Velocity" on Jun 8, 2018
@dehann (Member) commented Jun 9, 2018

Hi @mc2922, here is a better example where I use a slightly different pipeline for enforcing the residual functions, i.e. IncrementalInference.FunctorPairwiseMinimize -- where minimization on res[1] is used rather than root finding across the entire residual vector:

mutable struct VelPoint2VelPoint2{T <: Distribution} <: IncrementalInference.FunctorPairwiseMinimize
  z::T
  VelPoint2VelPoint2{T}() where {T <: Distribution} = new{T}()
  VelPoint2VelPoint2(z1::T) where {T <: Distribution} = new{T}(z1)
end
getSample(vp2vp2::VelPoint2VelPoint2, N::Int=1) = (rand(vp2vp2.z,N), )
function (vp2vp2::VelPoint2VelPoint2)(
                res::Array{Float64},
                userdata,
                idx::Int,
                meas::Tuple,
                Xi::Array{Float64,2},
                Xj::Array{Float64,2}  )
  #
  z = meas[1][:,idx]
  xi, xj = Xi[:,idx], Xj[:,idx]
  dt = (userdata.variableuserdata[2].ut - userdata.variableuserdata[1].ut)*1e-6   # roughly the intended use of userdata
  dp = (xj[1:2]-xi[1:2])
  dv = (xj[3:4]-xi[3:4])
  res[1] = 0.0
  res[1] += sum((z[1:2] - dp).^2)
  res[1] += sum((z[3:4] - dv).^2)
  res[1] += sum((dp/dt - xi[3:4]).^2)  # (dp/dt - 0.5*(xj[3:4]+xi[3:4])) # midpoint integration
  res[1]
end

Now the example works a little closer to what you may have expected:

using RoME, Distributions, KernelDensityEstimate

fg = initfg()

# add three point locations
v0 = addNode!(fg, :x0, DynPoint2(ut=0))
v1 = addNode!(fg, :x1, DynPoint2(ut=1000_000))
v2 = addNode!(fg, :x2, DynPoint2(ut=2000_000))

# Prior factor as boundary condition
pp0 = DynPoint2VelocityPrior(MvNormal([zeros(2);10*ones(2)], 0.1*eye(4)))
f0 = addFactor!(fg, [:x0;], pp0)

# conditional likelihood between Dynamic Point2
dp2dp2 = VelPoint2VelPoint2(MvNormal([10*ones(2);zeros(2)], 0.1*eye(4)))
f1 = addFactor!(fg, [:x0;:x1], dp2dp2)

# conditional likelihood between Dynamic Point2
dp2dp2 = VelPoint2VelPoint2(MvNormal([10*ones(2);zeros(2)], 0.1*eye(4)))
f2 = addFactor!(fg, [:x1;:x2], dp2dp2)

# Graphs.plot(fg.g)
ensureAllInitialized!(fg)
tree = wipeBuildNewTree!(fg)
inferOverTree!(fg, tree)

# see the output
@show x0 = getKDEMax(getVertKDE(fg, :x0))
@show x1 = getKDEMax(getVertKDE(fg, :x1))
@show x2 = getKDEMax(getVertKDE(fg, :x2))

Producing output:

x0 = getKDEMax(getVertKDE(fg, :x0)) = [0.101503, -0.0273216, 9.86718, 9.91146]
x1 = getKDEMax(getVertKDE(fg, :x1)) = [10.0087, 9.95139, 10.0622, 10.0195]
x2 = getKDEMax(getVertKDE(fg, :x2)) = [19.9381, 19.9791, 10.0056, 9.92442]

@dehann (Member) commented Jun 9, 2018

I'm also working on the write-up for defining your own factors. This will become part of the general Caesar documentation.

@dehann (Member) commented Jun 11, 2018

Please see the new documentation here:
http://www.juliarobotics.org/Caesar.jl/latest/definingfactors.html

@dehann (Member) commented Jun 16, 2018

See the new Point2Point2Velocity factor, which uses only delta position measurements; see PR #66.
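Roughly, the idea behind a delta-position-only factor looks like the following -- a sketch of the concept only, reusing the FunctorPairwiseMinimize calling convention from the earlier example, and not necessarily the exact Point2Point2Velocity code merged in PR #66:

function (p2p2v::Point2Point2Velocity)(res::Array{Float64},
                userdata,
                idx::Int,
                meas::Tuple,
                Xi::Array{Float64,2},
                Xj::Array{Float64,2}  )
  # sketch only, not the exact PR #66 implementation
  z  = meas[1][:,idx]                  # 2D delta-position measurement only
  xi, xj = Xi[:,idx], Xj[:,idx]
  dt = (userdata.variableuserdata[2].ut - userdata.variableuserdata[1].ut)*1e-6
  dp = xj[1:2] - xi[1:2]
  res[1]  = sum((z[1:2] - dp).^2)      # measured vs estimated position delta
  res[1] += sum((dp/dt - xi[3:4]).^2)  # velocity implied by the position delta over dt
  res[1]
end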

@dehann (Member) commented Jun 16, 2018

See the new tag RoME v0.1.4, which also requires the upcoming IncrementalInference v0.3.5. The combination includes the new velocity code discussed here. Re-open if required.

dehann closed this as completed on Jun 16, 2018