Current Location Accuracy
There are techniques we can use, by combining AR with Location Services, to determine a user’s location far more accurately than by just using Location Services alone.
ARCL continuously receives location updates from CLLocationManager. These typically arrive every 1-15 seconds, with anywhere between 4m and 150m of accuracy, and contain a coordinate, an altitude, and the accuracy of those readings.
When we receive a new location, we store both the location data and the user’s current position within the AR scene, in what’s called a LocationEstimate.
Coordinates can be converted to meters, and the AR scene position is given in meters, so we can convert between the two.
When the user’s current position is queried, we look for the most accurate, and most recent, location estimate. For example, let’s say we received a location estimate with 8m of accuracy, then walked north for 10m and received a new location estimate with 65m of accuracy. From AR, we know we walked 10m along the negative z-axis of the scene, meaning 10m north. So we can go back to that 8m-accurate LocationEstimate and translate it by 10m north to find our current coordinate, to about 8m of accuracy.
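The translation step above can be sketched in a few lines. This is a simplified model with hypothetical names (not ARCL’s actual API), using a flat-earth approximation for converting metres to degrees; in the scene, -z is north and +x is east:

```swift
import Foundation

// Hypothetical model of a stored estimate: a coordinate plus the scene
// position at which it was recorded (metres; -z is north, +x is east).
struct LocationEstimate {
    let latitude: Double
    let longitude: Double
    let horizontalAccuracy: Double  // metres
    let sceneX: Double
    let sceneZ: Double
}

// Approximate metres spanned by one degree at a given latitude.
func metresPerDegree(atLatitude latitude: Double) -> (lat: Double, lon: Double) {
    (111_320.0, 111_320.0 * cos(latitude * .pi / 180))
}

// Translate the estimate's coordinate by the distance travelled through
// the scene since it was recorded.
func currentCoordinate(from estimate: LocationEstimate,
                       sceneX: Double, sceneZ: Double) -> (latitude: Double, longitude: Double) {
    let northMetres = estimate.sceneZ - sceneZ  // moving toward -z is moving north
    let eastMetres = sceneX - estimate.sceneX
    let scale = metresPerDegree(atLatitude: estimate.latitude)
    return (estimate.latitude + northMetres / scale.lat,
            estimate.longitude + eastMetres / scale.lon)
}
```

Walking 10m north (scene z going from 0 to -10) shifts the latitude by roughly 10 / 111,320 of a degree, while the accuracy of the original reading carries over unchanged.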
The AR scene is accurate up to about 100m, so we discard LocationEstimates which were taken more than 100m away from our current position.
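Putting the selection rules together, a best-estimate query might look like the following sketch (hypothetical types, not ARCL’s API): discard estimates taken more than 100m away in the scene, then prefer accuracy, breaking ties by recency.

```swift
import Foundation

// Hypothetical, simplified estimate record for illustration.
struct Estimate {
    let accuracy: Double       // metres; lower is better
    let timestamp: Double      // seconds; higher is more recent
    let sceneDistance: Double  // metres travelled since this estimate was taken
}

func bestEstimate(from estimates: [Estimate]) -> Estimate? {
    estimates
        .filter { $0.sceneDistance <= 100 }  // AR drift makes farther ones unusable
        .min { ($0.accuracy, -$0.timestamp) < ($1.accuracy, -$1.timestamp) }
}
```

Note that a very accurate but distant estimate loses to a less accurate nearby one, because the 100m filter runs first.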
When we place a LocationNode using addLocationNodeForCurrentPosition, we’re adding it to the same position as the user, and so we’re able to give it a location using the best location estimate. As we walk away, we might get more accurate location data, and so once we’ve moved 100m away from the LocationNode, we’ll give it a confirmed location using the most accurate reading.
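That lifecycle can be sketched as follows. These are hypothetical types for illustration, not ARCL’s API: the node keeps the most accurate reading seen so far, and locks it in once the user is 100m away.

```swift
import Foundation

// Hypothetical reading and node types, illustrating the confirm-at-100m rule.
struct Reading {
    let accuracy: Double  // metres; lower is better
    let latitude: Double
    let longitude: Double
}

struct PlacedNode {
    var bestReading: Reading
    var confirmed = false

    // Called as new readings arrive while the user moves away from the node.
    mutating func update(with reading: Reading, distanceFromUser: Double) {
        guard !confirmed else { return }  // a confirmed location never changes
        if reading.accuracy < bestReading.accuracy {
            bestReading = reading
        }
        if distanceFromUser >= 100 {
            confirmed = true  // lock in the best reading seen so far
        }
    }
}
```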
ARKit gets confused about your position and device rotation after you travel some distance. This is likely because it’s looking for features it recognises from earlier on; as you move through the world it thinks it recognises something, and re-orients and shifts your position to suit what it now thinks is the truth. This is an extension of the behaviour you see when you put a hand in front of the camera and watch objects jump around the place.
This often results in the user’s position moving in the wrong direction. There may be techniques we can use to fix or alleviate this. For example:
We receive a location data point. We move north by 10m through the scene. We receive another location data point. We now have both our determined location (GPS + AR) and this new data point. The problem is that we’re outside the new data point’s range of accuracy, which means our determined location must be incorrect - most likely because our scene’s idea of north is incorrect. So we have 2 lines of travel:
- between the 2 data points
- between the first data point and our determined location.
We can look at the angle between those 2 lines and correct the scene’s orientation to land us at the centre of the second data point. This gives us a more accurate orientation. Using this technique over time, as the user moves further through the scene, their True North will become, and remain, more accurate than it would be using just the device’s idea of True North.
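A minimal 2D sketch of that angle calculation, assuming a flat east/north plane in metres (hypothetical names; ARCL would work with scene and GPS coordinates rather than bare tuples). The heading error is the angle between the line through the two GPS points and the line from the first point to our AR-determined position:

```swift
import Foundation

// East/north position in metres on a flat plane.
typealias Point = (east: Double, north: Double)

// Bearing from a to b in radians, clockwise from north.
func bearing(from a: Point, to b: Point) -> Double {
    atan2(b.east - a.east, b.north - a.north)
}

// Positive result means the scene's north is rotated that many radians
// clockwise of true north, so the scene should be rotated back by this amount.
func headingCorrection(first: Point, second: Point, determined: Point) -> Double {
    bearing(from: first, to: determined) - bearing(from: first, to: second)
}
```

For example, if GPS says we moved due north but the AR scene placed us 0.1 radians clockwise of that line, the correction comes out as 0.1 radians.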
When the ARCamera’s TrackingState becomes limited, the location may change inaccurately. When this happens, we should discard all location estimates, and confirm the location of LocationNodes within 100m. This should alleviate the situation and let us start from scratch once accuracy returns. I haven’t tested this - the thinking here may need some work.
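As a sketch only - this mirrors the untested idea above, with hypothetical types standing in for ARCL’s estimates and nodes - the recovery policy might look like:

```swift
import Foundation

// Hypothetical node: its distance from the user is known via the AR scene.
struct Node {
    var distanceFromUser: Double  // metres
    var isConfirmed = false
}

// On limited tracking: throw away accumulated estimates so stale data
// can't be trusted, and freeze ("confirm") nearby nodes so they stop
// being adjusted while tracking recovers.
func handleLimitedTracking(estimates: inout [Double], nodes: inout [Node]) {
    estimates.removeAll()
    for index in nodes.indices where nodes[index].distanceFromUser <= 100 {
        nodes[index].isConfirmed = true
    }
}
```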
There are various algorithm improvements we can make to determine a more accurate location, which aren’t currently implemented.
We receive a location data point accurate to 6m. We move forward 10m, and receive another data point accurate to 6m. But those data points aren’t 10m apart. If we translate that first data point by 10m, we may only have an overlap between the 2 locations of, say, 2m. So we can now narrow the band of accuracy of our location to 2m.
We can extrapolate this technique further across more recent data points to get a more precise location.
The example is imagined on a 1D plane, and given the problems with True North (as mentioned in the readme), it might be smart to implement it that way for now.
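On that 1D plane, each reading is just an interval of centre ± accuracy, and narrowing the band is interval intersection. A minimal sketch (hypothetical helper, not ARCL’s API):

```swift
// Intersect two accuracy intervals; nil means the readings don't overlap
// at all (one of them must be badly wrong).
func intersect(_ a: ClosedRange<Double>, _ b: ClosedRange<Double>) -> ClosedRange<Double>? {
    let lower = max(a.lowerBound, b.lowerBound)
    let upper = min(a.upperBound, b.upperBound)
    return lower <= upper ? lower...upper : nil
}
```

Replaying the example above: the first reading is 0 ± 6m; after walking 10m it translates to the interval 4...16. If the second reading is 20 ± 6m, i.e. 14...26, the intersection is 14...16 - a 2m band, much tighter than either 6m reading alone.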
I’d be interested to see this implemented, and how well it works.
A good starting point for testing this is to enable the map view, and enable debug mode, in the sample project. That will display both the most recent GPS location, and our location estimate using GPS + AR.