Summarize conversation about our classification #52
Here's the Google Doc with a bunch of my thoughts: https://docs.google.com/document/d/1HrsThiimpRbJWVJswMJxL0yKwAouJv7PtLtXdpP6Rs4/edit?usp=sharing Near the end is a list of things we still have to do, based on what I wrote in the document.

The one thing I'm wondering now is whether we will actually need a unique threshold for each year, or whether we can derive one common threshold as we are currently doing. Despite the autocorrelation issue with the data we were just discussing, the fact that the 0.51 threshold does so well across almost every year suggests that a single threshold may be enough. Either way, we will first need new training data to establish that threshold. That said, if we switch to making our own greenest-pixel composites and/or use surface reflectance (SR) imagery rather than top-of-atmosphere (TOA), the universal threshold will most likely change from its current value of 0.51 (especially with SR imagery). There are three critical first steps that we should aim to have completed ASAP; they are outlined in the list at the end of the document.
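To make the single-threshold idea concrete, here is a minimal sketch in the Earth Engine Python API (not our actual script): the 0.51 value comes from the discussion above, while the collection ID, NDVI bands, date range, and study-extent asset are placeholder assumptions.

```python
# Minimal sketch of the single-threshold classification discussed above.
# The 0.51 threshold comes from this thread; the collection ID, band names,
# and study-extent asset are assumptions / placeholders, not our real script.
import ee

ee.Initialize()

study_area = ee.FeatureCollection('users/example/study_extent')  # hypothetical asset

def classify_year(year, threshold=0.51):
    """Greenest-pixel (max-NDVI) composite for one year, thresholded into mine / non-mine."""
    collection = (ee.ImageCollection('LANDSAT/LC08/C02/T1_TOA')   # Landsat 8 TOA (assumed ID)
                  .filterDate(f'{year}-01-01', f'{year}-12-31')
                  .filterBounds(study_area)
                  .map(lambda img: img.addBands(
                      img.normalizedDifference(['B5', 'B4']).rename('NDVI'))))
    composite = collection.qualityMosaic('NDVI')                  # greenest-pixel composite
    # Pixels that stay below the NDVI threshold even in their greenest scene are
    # persistently bare ground, i.e. candidate mine pixels.
    return composite.select('NDVI').lt(threshold).rename('mine')

mine_2015 = classify_year(2015)
```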
@JerrilynGoldberg is just about finished with an updated mask using 2015 data for the county list given by App. Voices, so we should have both that and an updated study extent shortly.
@apericak so here's my take-away on the major points; let me know if I missed anything:
Thanks for writing this all up; this is going to be an excellent resource/guide moving forward.
We probably will want to switch to SR, but that's not definite yet either. I actually talked to some more people today and learned that there may not be much to gain by using SR rather than TOA. Using SR means we will have less variation in values over time, but it also means we introduce error stemming from the USGS's SR algorithm. What will probably be best is to do a mini-classification over one or a few mines to see whether we get better accuracy with TOA or SR. I'll look into that early this upcoming week.
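For that TOA-vs-SR mini-classification, one way to set it up could be to run the same threshold over a composite from each product and score both against a small set of hand-labeled points. A rough sketch under those assumptions (the asset IDs, the 'label' property, and the collection/band names are placeholders, not anything we have built yet):

```python
# Hedged sketch of the TOA-vs-SR "mini-classification": apply the same NDVI threshold
# to composites from each product and score both against hand-labeled points at a few mines.
# All asset IDs, property names, and collection/band names here are placeholders.
import ee

ee.Initialize()

test_points = ee.FeatureCollection('users/example/mine_test_points')  # hypothetical 0/1 labels

def overall_accuracy(collection_id, nir_band, red_band, threshold=0.51):
    composite = (ee.ImageCollection(collection_id)
                 .filterDate('2015-01-01', '2015-12-31')
                 .filterBounds(test_points)
                 .map(lambda img: img.addBands(
                     img.normalizedDifference([nir_band, red_band]).rename('NDVI')))
                 .qualityMosaic('NDVI'))
    classified = composite.select('NDVI').lt(threshold).rename('mine')
    sampled = classified.sampleRegions(collection=test_points, properties=['label'], scale=30)
    return sampled.errorMatrix('label', 'mine').accuracy()

toa = overall_accuracy('LANDSAT/LC08/C02/T1_TOA', 'B5', 'B4')
# Level-2 SR bands would need the Collection 2 scale/offset applied before NDVI; skipped here.
sr = overall_accuracy('LANDSAT/LC08/C02/T1_L2', 'SR_B5', 'SR_B4')
print(toa.getInfo(), sr.getInfo())
```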
One other thing I just learned: when we are doing the accuracy assessment, we want to use as many non-Landsat images as possible when manually classifying mine sites. This could mean using NAIP for more recent years, or finding other historical aerial imagery over our study area. Finding alternate imagery may not be possible for every year, but it is considered a best practice for remote sensing projects and will likely be something journal reviewers ask us about if we don't address it.
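On the accuracy-assessment point, the figures reviewers usually expect from those manually classified reference points are a confusion matrix plus overall, producer's, and user's accuracy. A small self-contained sketch of that bookkeeping, with made-up labels purely for illustration:

```python
# Toy sketch of the accuracy-assessment bookkeeping for a binary mine / non-mine map.
# 'reference' would come from manual classification against NAIP or other aerial imagery;
# 'predicted' from the EE script output. The numbers below are made up for illustration.

def confusion_matrix(reference, predicted):
    """2x2 matrix: rows = reference class, columns = predicted class (0 = non-mine, 1 = mine)."""
    matrix = [[0, 0], [0, 0]]
    for ref, pred in zip(reference, predicted):
        matrix[ref][pred] += 1
    return matrix

def summarize(matrix):
    total = sum(sum(row) for row in matrix)
    overall = (matrix[0][0] + matrix[1][1]) / total               # overall accuracy
    producers = matrix[1][1] / sum(matrix[1])                     # mine omission-error complement
    users = matrix[1][1] / (matrix[0][1] + matrix[1][1])          # mine commission-error complement
    return {'overall': overall, 'producers_mine': producers, 'users_mine': users}

reference = [1, 1, 0, 0, 1, 0, 1, 0]   # labels digitized from independent imagery (made up)
predicted = [1, 0, 0, 0, 1, 0, 1, 1]   # labels from the classification script (made up)
print(summarize(confusion_matrix(reference, predicted)))
```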
Closing the issue. See the Classification and Analysis Summary wiki page for the details (including links to this discussion and Andrew's full write-up).
@apericak would you mind writing up a summary of the conversation you had with your professor about our pseudo-classification and the possible ways to assess the accuracy of our EE script output, so that we have it on record?