An area in NW Vietnam exploded with alerts in 2012Q1. It looked suspect, so we checked out the raw data. It turns out there were extremely low NDVI values for a couple of months which, rather than representing tree cover loss, were likely caused by cloud cover. We're talking about going from an average NDVI value of 7000 to under 1000 in two weeks across an entire region (I'm barely exaggerating).
That's an extremely strong signal, but it can't be real - it's just too large an area, changing too quickly. We thought that moving averages and the like would keep such values from causing too much mischief, but we were wrong.
So we need to do one of the following:
1. Use pixel reliability data to remove contaminated values.

   We've found in the past that using pixel reliability metadata to knock out "bad" values removed too much useful data. So we won't be doing this.

2. Ignore alerts when there's an anomalous drop in NDVI.

   We could do this by ignoring alerts for pixels whose NDVI is too many standard deviations away from the mean, either for a given period or across all periods. TBD. We'd have to figure out what "too many" standard deviations means, but the basic principle shouldn't be hard to work out (see the first sketch below).

3. Do more aggressive smoothing of the probability time series, say by using six periods for the moving average (see the second sketch below).
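To make option 2 concrete, here's a minimal sketch of the per-pixel anomaly screen. The function name and threshold are hypothetical, and note one deliberate swap: instead of the plain mean/standard-deviation test described above, it uses a median/MAD robust z-score, because a single cloud-contaminated reading drags the mean down and inflates the standard deviation enough that the plain test can miss the very value it's screening for.

```python
import numpy as np

def flag_anomalous_drops(ndvi, threshold=3.5):
    """Return a boolean mask marking periods whose NDVI drops
    anomalously far below the pixel's typical value.

    Uses a robust z-score (median and MAD) rather than mean/std-dev,
    so the contaminated reading can't inflate the spread statistic
    and hide itself.
    """
    ndvi = np.asarray(ndvi, dtype=float)
    med = np.nanmedian(ndvi)
    mad = np.nanmedian(np.abs(ndvi - med))
    if mad == 0:
        # Flat series: nothing to compare against, flag nothing.
        return np.zeros(ndvi.shape, dtype=bool)
    robust_z = 0.6745 * (ndvi - med) / mad
    # Only suppress drops; an upward spike isn't a false deforestation alert.
    return robust_z < -threshold

# A stable series around 7000 with one cloud-contaminated reading,
# like the NW Vietnam case above.
series = [7100, 6950, 7050, 7000, 900, 7020, 6980]
print(flag_anomalous_drops(series))
# -> [False False False False  True False False]
```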
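And a sketch of option 3: widening the trailing moving average over the alert-probability series to six periods. Again, the function name, the handling of the first few periods, and the toy numbers are illustrative assumptions, not the production code.

```python
import numpy as np

def smooth_probabilities(probs, window=6):
    """Smooth a per-pixel alert-probability series with a trailing
    moving average of `window` periods (six, per option 3).

    The first window-1 values are averaged over however many periods
    are available so far.
    """
    probs = np.asarray(probs, dtype=float)
    out = np.empty_like(probs)
    for i in range(len(probs)):
        start = max(0, i - window + 1)
        out[i] = probs[start:i + 1].mean()
    return out

# A single contaminated period produces a probability spike; a six-period
# trailing mean caps its effect well below the raw 0.95.
spike = np.array([0.05, 0.04, 0.06, 0.95, 0.05, 0.04, 0.05])
print(smooth_probabilities(spike).round(3))
# -> [0.05  0.045 0.05  0.275 0.23  0.198 0.198]
```

The tradeoff, of course, is latency: a wider window also dilutes genuine tree cover loss signals, so real alerts would surface a few periods later.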