Add support for positioning using only one Lighthouse base station #461
@krichardsson Without confirming the calibration's validity, is the accuracy still good enough for a single base station implementation? There may still be distortion due to the lack of calibration.
@NicksonYap Yes, I think it should work.
Sharing my findings so far: libsurvive has a "Poser" called PoserEPNP.

About EPnP/PnP: for PnP, given reception from only 1 base station:
- If there is reception for at least 3 sensors, it can produce a full 6DOF pose (XYZ, RPY).
- It may be possible to compute in a hybrid manner when only 2 sensors have reception, e.g. sensor 0 can see 2 lighthouses but sensor 1 can only see 1.

libsurvive has other "Posers", but so far PoserEPNP is the simplest to understand and implement. The others require minimizing/optimizing least squares (not sure if the CF can handle that).
Discussion: @krichardsson @ataffanel This means that even with a single sweep, detected by a single sensor, the estimator should be able to use this data to make corrections along a single axis.
Notes: Surprisingly, I was able to set up my base stations about 10.5 m apart, diagonally in a 10x4 m space (without sync cables 👍). The HMD had no issue detecting and localizing around the space (I did not wear it to check the accuracy). However, the LH deck did not handle this well, mainly because the angle is quite steep from a distance and both base stations need to be visible at the same time. By rotating the CF a bit I can get full reception (the LED-ring deck shows green in Lighthouse Quality Effect mode; just submitted a PR), but the CF has too little room to rotate before losing sight of one of the base stations. This is why I'm planning to resolder the photodiodes at an angle; however, it will only work if the above is implemented. We could then potentially localize in a 10x10 m space (or at least 10x4 m) using only the classic V1 base stations.
Eventually lighthouse should be handled directly in the EKF, ideally by pushing individual angles. Though I still do not understand how to make that happen: as you noted, we can get 6DOF from the system, which means that each sensor reading carries both position-error and attitude-error information. I have no idea how to express that. The easiest way to start would be to push the position into the Kalman filter from lighthouse.c the same way it is done today.
kalmanCoreUpdateWithTof() only pushes a position error into the EKF, so I am not sure it will work in this case. As far as I understand, it would only work when receiving angles from both basestations.
For me this is the tricky part: we want to push individual axes to the EKF in such a way that the EKF can recover 6DOF errors from them. My understanding is that when we push data to the EKF, we essentially push an error vector on the internal state together with the magnitude of that error; the error vector corresponds to our measurement versus the current estimate. This is quite easy to reason about when pushing position or attitude errors (though for attitude our EKF makes it a bit tricky). But in our case each angle can come from both an attitude and a position error, so I am not sure how that can be modeled.
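For reference, the "push an error vector plus a magnitude" idea can be sketched as a generic scalar Kalman update: one scalar measurement with jacobian H corrects the whole state, weighted by its std deviation. This is a simplified illustration of the concept; the function name and the 3-element toy state are made up for the example and this is not the firmware's implementation.

```c
#include <assert.h>
#include <math.h>

#define N 3  // toy state size for illustration only

// Sketch of a generic scalar EKF update: a single scalar measurement
// with jacobian H corrects the full state x and covariance P.
// Hypothetical code, not the firmware implementation.
static void scalarUpdateSketch(float x[N], float P[N][N],
                               const float H[N], float innovation,
                               float stdDev) {
  // Innovation variance: S = H P H^T + R
  float PHt[N];
  for (int i = 0; i < N; i++) {
    PHt[i] = 0.0f;
    for (int j = 0; j < N; j++) PHt[i] += P[i][j] * H[j];
  }
  float S = stdDev * stdDev;
  for (int i = 0; i < N; i++) S += H[i] * PHt[i];

  // Kalman gain K = P H^T / S, state correction x += K * innovation
  float K[N];
  for (int i = 0; i < N; i++) {
    K[i] = PHt[i] / S;
    x[i] += K[i] * innovation;
  }

  // Covariance update P = (I - K H) P
  float Pnew[N][N];
  for (int i = 0; i < N; i++) {
    for (int j = 0; j < N; j++) {
      float kh_row = 0.0f;
      for (int k = 0; k < N; k++) kh_row += K[i] * H[k] * P[k][j];
      Pnew[i][j] = P[i][j] - kh_row;
    }
  }
  for (int i = 0; i < N; i++)
    for (int j = 0; j < N; j++) P[i][j] = Pnew[i][j];
}
```

The key property for the discussion above is that H can couple one scalar angle measurement to several state variables at once, so a single sweep can in principle correct both position and attitude states.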
I've been continuously working for days trying to derive the equations/algorithm for calculating position given any combination of sensor detections and basestation receptions (still requiring both horizontal and vertical sweeps per sensor). They are based on http://geomalgorithms.com/a07-_distance.html#Distance-between-Lines, re-deriving the equations with a new set of assumptions/constraints. I'm no longer sure if this still counts as PnP, but the concept is the same.

Combinations I've worked / I'm working on: I've successfully derived a single equation that works for each combination. The following are plots from MATLAB, showing the basestation positions (triangles) and the detected rays from the basestations. The blue vector between rays is the shortest possible distance/segment.

a. 1 sensor, 2 basestations (should produce an identical result to the current implementation)
b. 2 sensors, 1 basestation
As you can see, if the distance from the CF is slightly off, or if the two rays deviate a little, the estimated CF position might be "pushed away" from or "pulled toward" the basestation. It is sensitive to errors in this configuration.

c. 2 sensors, 1 basestation each

The following demonstrates how the equation handles a possible error where the rays are impossibly far apart but get detected anyway.

Discussion:
1. Currently the resulting orientation information from the rays (green vector), for the case of 2 sensors with 1 basestation each (this time with the rays close together): the green vector agrees with the rotation matrix completely, both in length and in orientation. But what if the rotation matrix says the CF has a yaw of 90 deg (pointed right)?
2. Regarding the std deviation of the calculated position and attitude: it's possible to obtain it at runtime for each result, based on: Once I'm done deriving the equation for all combos, I can then differentiate it and get an equation for "what is the error of the result, given the errors of the items above (a, b, c, d)".
3. Regarding the Kalman filter: since ToF has no idea about any other positional or angular axis, it does not suggest an error for them and leaves them untouched. It seems if we have a For example, if the estimated CF position from the Kalman filter does not cross the only ray that we detected, we introduce position-error vectors pointing to the closest point on that ray, even though we do not know how far the CF is from that basestation. If the above is possible, then it will work even if only a single sweep is detected.
@NicksonYap Wow, good work! I think this looks interesting.
Thanks for the reply. For no. 1: yes, I predicted it would be that way, but it's rather odd; it might cause some kind of feedback and overshoot (will try after I get this done). For no. 2: it's blank, a typo? For no. 3: since D0, D1, D2 is the attitude error, and there are only three, I assume they are Y, P, R in radians?
I've managed to implement the MATLAB formula on the CF. It turns out there are some heavy calculations required to solve the matrix equation (involving SVD and a pseudoinverse), even though it was a simple function call in MATLAB. The algorithm works by gathering all possible ray vectors (max 4 sensors * 2 BS = 8 rays). Edit: initially I had memory issues for several days and asked for help, but I have just managed to solve it :) Note:
Edit: Based on initial observation I didn't think it was possible to work with a single basestation without accurate calibration, but I just tested and it actually works; maybe the heavy averaging (12 times) helped a lot. I will probably make a branch and share the code here, because it's not really an efficient way to compute position, but it might be the only way to fit it into the CF. However, when switching from 1 basestation to 2 or vice versa, there may be a sudden shift in position (calibration issue). There are large position glitches once in a while; still looking into it. Accuracy will depend on the distance from the BS, the orientation of the sensors (perpendicular to the rays is best, i.e. the LH deck top facing the front of the BS) and the distance between the sensors. For a single basestation, the best setup might actually be:
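As a side note, the "gather all rays and solve a matrix equation" step does not necessarily need an SVD or pseudoinverse on the CF: the point minimizing the summed squared distance to a set of rays satisfies a 3x3 normal-equation system that can be assembled incrementally and solved with Cramer's rule. This is a sketch of that alternative formulation, not the code discussed above; it assumes unit ray directions and treats rays as infinite lines.

```c
#include <assert.h>
#include <math.h>

// Least-squares position from several rays: the point p minimizing
//   sum_i || (I - d_i d_i^T)(p - o_i) ||^2
// satisfies A p = b with
//   A = sum_i (I - d_i d_i^T),  b = sum_i (I - d_i d_i^T) o_i,
// where o are the ray origins (basestation positions) and d the unit
// ray directions. Solves the 3x3 system with Cramer's rule; returns 0
// if degenerate (e.g. all rays parallel). Sketch only.
static int positionFromRays(int n, float o[][3], float d[][3], float p[3]) {
  float A[3][3] = {{0}}, b[3] = {0};

  for (int r = 0; r < n; r++) {
    for (int i = 0; i < 3; i++) {
      for (int j = 0; j < 3; j++) {
        float m = (i == j ? 1.0f : 0.0f) - d[r][i] * d[r][j];
        A[i][j] += m;        // accumulate projector into A
        b[i] += m * o[r][j]; // and its action on the origin into b
      }
    }
  }

  float det = A[0][0]*(A[1][1]*A[2][2] - A[1][2]*A[2][1])
            - A[0][1]*(A[1][0]*A[2][2] - A[1][2]*A[2][0])
            + A[0][2]*(A[1][0]*A[2][1] - A[1][1]*A[2][0]);
  if (fabsf(det) < 1e-9f) return 0;

  for (int c = 0; c < 3; c++) {
    float M[3][3];  // A with column c replaced by b (Cramer's rule)
    for (int i = 0; i < 3; i++)
      for (int j = 0; j < 3; j++)
        M[i][j] = (j == c) ? b[i] : A[i][j];
    p[c] = (M[0][0]*(M[1][1]*M[2][2] - M[1][2]*M[2][1])
          - M[0][1]*(M[1][0]*M[2][2] - M[1][2]*M[2][0])
          + M[0][2]*(M[1][0]*M[2][1] - M[1][1]*M[2][0])) / det;
  }
  return 1;
}
```

Because A and b are just sums over rays, this handles any mix of sensors and basestations (up to the 8 rays mentioned above) with a fixed, small amount of arithmetic.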
Very cool! I'll try to take a look this week.
When you have access to two basestations, do you calculate the crossing point the same way it is done in the current implementation, and then feed the result to the kalman filter? Another solution would be to calculate the two solutions separately and feed both of them to the kalman filter. I think this would give a smoother transition
We have not seen this. We tried to write the decoder to be as robust as possible, for instance to handle camera flashes. If the decoder can not be hardened we could also add an outlier filter that rejects samples that are suspicious.
Let's see how far we can take the current deck first. Maybe there will be good reasons to create a different design in the future?
Yes, but to be more specific, there is only one
By "two solutions", do you mean we submit position results to the kalman filter separately, for each basestation?
It is surely not due to the sweep/pulse decoder; something weird is happening due to the code I added. In the CF client plotter I can see sustained peaks (for half a second or so), however when I set a breakpoint to stop when large values are detected (like 6 meters off), it never triggered. I believe the position values were never glitching, but the plotter somehow plots it that way, in a very consistent manner (fixed intervals and almost the same magnitude). I'm not sure if it's due to the heavy CPU or memory usage, affecting only plotted/transmitted data.
Yeah, the current deck actually works pretty okay even for 1 BS.
Yes, or possibly 12 in your case :-) One way to look at it is that you leave the averaging to the kalman filter.
The difference in accuracy is handled by the std deviation parameter in the scalarUpdate() function; it tells the kalman filter how much the sample can be trusted (how noisy the data is). It is even possible to tell it that the sample is more noisy along the axis pointing towards the basestation and less noisy at right angles, by calling scalarUpdate() multiple times. Samples with a low std deviation will have a greater effect on the solution than samples with a higher std deviation. As mentioned earlier, I think this solution will give a nicer transition when one basestation is occluded, as the kalman filter will hopefully smooth it out (depending on the std deviation settings).
…thouse - Translate using sensor position - Rotate H vector back to global coordinate system
Just tested the experimental feature. I don't know the state of this, or if you want to collect information. But I saw this behaviour:
I know this is experimental, but are these behaviours expected because not everything is implemented yet? Or is it helpful to stress-test the feature?
Thanks @dolfje! I mainly committed the code to share it in the team (and with anyone that is interested), and there are still known issues, for instance handling of one base station. I would expect it to work decently with two base stations though, but it is work in progress... |
@krichardsson Okay, then I will leave the interesting part here for future reference, or in case you want me to do tests. The video of the test with FF_experimental on: https://photos.app.goo.gl/p24PhomFMtxhbCed9 With the flag off, it goes directly up. No -y movement.
@dolfje if you want to play with this I hope it will become better and better over time :-) I don't think it is ready for testing yet. |
…e. Seems to give better results.
…ng "constant" matrixes and vectors to the kalman filter as pointers.
If synchronized on two basestations, do not allow synchronizing on one basestation. If a second basestation was never seen, synchronization will work with one basestation. The aim is to allow running from a single LHv1 basestation while protecting two-basestation systems from wrongly using the slave basestation as master after desynchronization.
@dolfje This should hopefully work now if you want to try it out. |
…thod where we push sweep angles into the kalman filter is now the default method.
@krichardsson I have been inactive for several months and the issue is now closed; does that mean that obtaining position from a single basestation has been implemented? It does look like yaw estimation using a single basestation is implemented.
@NicksonYap Yes, it should work with one basestation now. The precision is still better with two basestations, but the CF will be able to handle the loss of one basestation for a while. There are inherent problems in the lighthouse V1 protocol that make it hard to identify whether a pulse is from the master or the slave basestation after occlusion, which we cannot overcome.
Copy the AppVeyor versions for the GH action build. Let's modernize the build as a separate step.
This allows adding dependencies: the Windows and Mac builds will not run unless the checks pass.
The current Lighthouse positioning solution is based on finding the crossing of vectors from two base stations. It should be possible to find the full pose for a Lighthouse deck using data from only one base station though, and this issue is about how to implement this functionality.
Ideally I think this should be integrated into the kalman filter to be fused with other sensor data.
An initial simplification could be to only extract the position from the Lighthouse deck and ignore roll/pitch/yaw (the current solution works this way).
Some ideas to get started: