Design protocol for determining crawl accuracy over time #136
Did a quick review of the manual data we collected a couple of weeks ago and targeted instances of a mismatch (marked in red) signaling that our ground truth differed from the crawl data. I used several VPN locations (California, multiple Colorado, Virginia, and no VPN (CT)) and gave the site content ample time to fully load. I was not able to find a single instance of our manual data changing from what we had reported, except for bumble.com's USPapi_before being "1YNN" instead of the reported "1YYN", and I would chalk this up to a manual error on our end. It would seem that, for the mismatches between crawl and manual data, the manual data is more accurate.

As for the comments left by @SebastianZimmeck above, cookies do seem to load on every refresh; I have yet to find an instance where an OptAnon cookie does not load where it should. I plan to do some more testing on this over the next few days to be certain.

As for our site sample skew, I believe it could be worth having a subset of websites we know to have GPP / OTGPPConsent data. One thought is to compile a list of websites we know to exhibit all of our needed behaviors (e.g., USP API opts out after receiving a GPC signal vs. USP API already opted out before receiving a GPC signal, etc.), as these are crucial to include in our crawl list to get a holistic representation of results. I plan on seeing if there is a list or directory of websites with certain attributes that could simplify the search for these websites, if it is something we choose to do.
Thanks, @natelevinson10!
As discussed today, @franciscawijaya and @natelevinson10 will come up with a protocol for selecting 100 sites for a manual spot-check of the first batch, with sufficient coverage (say, at least 5 positive instances, if possible) for each item we test for. @franciscawijaya and @natelevinson10, you can write the protocol here in the issue for the time being.
To assess the accuracy of the crawl data as a whole, our protocol focuses on selecting a representative, stratified sample of 100 sites for manual review. The plan is to run a crawl batch and select 100 sites to review according to the constraints below. We will then compare our results from the crawl to our manual review.
After compiling this list of 100 sites, we will manually check them in a similar fashion to the CO crawl here. We will use the same methodology for verifying the manual results here. Here is the initial plan @franciscawijaya and I reviewed; let us know what you think.
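To make the selection step concrete, here is a minimal sketch of how the coverage constraints could be implemented, assuming the crawl output is a CSV with one row per site and one column per tested condition; the column names, truthy values, and thresholds are placeholders rather than the crawler's actual schema.

```python
import csv
import random

# Placeholder column names for the conditions we test for; the real crawl
# output columns may differ.
CONDITIONS = ["usp_api_opt_out", "gpp_string_present", "OTGPPConsent",
              "OptanonConsent", "well_known_gpc"]

SAMPLE_SIZE = 100
MIN_POSITIVES = 5  # at least 5 positive instances per condition, if possible


def select_sample(crawl_csv, seed=0):
    with open(crawl_csv, newline="") as f:
        rows = list(csv.DictReader(f))

    random.seed(seed)  # fixed seed so the monthly selection is reproducible
    random.shuffle(rows)

    sample, counts = [], {c: 0 for c in CONDITIONS}

    # First pass: greedily cover each condition with at least MIN_POSITIVES
    # positive instances (a site counts toward every condition it satisfies).
    for row in rows:
        if len(sample) >= SAMPLE_SIZE:
            break
        hits = [c for c in CONDITIONS
                if row.get(c, "").lower() in ("1", "true", "yes")]
        if any(counts[c] < MIN_POSITIVES for c in hits):
            sample.append(row)
            for c in hits:
                counts[c] += 1

    # Second pass: fill the remaining slots of the 100 at random.
    remaining = [r for r in rows if r not in sample]
    sample += remaining[: SAMPLE_SIZE - len(sample)]
    return sample, counts
```

For rare conditions the greedy pass may not reach five positive instances, which matches the "if possible" caveat above.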
The methodology outlined above provides a strong approach by ensuring a randomized, stratified sample that is representative of the entire crawl. The constraints balance random selection with targeted verification of key values. However, it may be beneficial to adjust the methodology slightly depending on the output and what the complete crawl data looks like. For example, if OTGPPConsent appears in only a very small portion of the data, we may decide to skip manual checking of those instances and focus on more prevalent conditions. Additionally, we will include manual checks of the .well-known data as well.
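For the .well-known checks, a quick manual verification could be scripted; the sketch below assumes the .well-known data we record is the GPC well-known resource at /.well-known/gpc.json (if we record other files, the path would change).

```python
import json
import urllib.request


def check_well_known(domain, timeout=10):
    """Fetch /.well-known/gpc.json and return the parsed body, or None.

    Assumes the ".well-known data" in our crawl refers to the GPC
    well-known resource; adjust the path if we also record other files.
    """
    url = f"https://{domain}/.well-known/gpc.json"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return json.loads(resp.read().decode("utf-8", errors="replace"))
    except Exception:
        return None  # no resource, network error, or invalid JSON


# Example: compare the manual fetch against what the crawl recorded.
# result = check_well_known("example.com")
# print(result)  # e.g. {"gpc": True, "lastUpdate": "..."} or None
```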
The sample size of just 10 sites that was discussed in last week's meeting would not be sufficient to draw meaningful conclusions about accuracy, as I don't think it would capture enough variability across different conditions. However, we could explore whether a slightly reduced sample of around 50-75 sites instead of 100 would still provide reliable insights while making the manual review process more efficient. Since we haven't run a full crawl since the fall, moving forward with this methodology is the best way to verify the accuracy of the crawl results currently being collected by @samir-cerrato. Running this manual check now will provide a baseline for future comparisons and help us detect whether accuracy drifts over time. The manual check site list should not remain constant across months but should instead change based on the output data of each crawl. We have laid out a systematic way to execute these manual checks from our CO crawl to ensure consistency in our approach here. To help track discrepancies over time, I can put together a new sheet in the GPC folder that will log the discrepancies found each month and allow us to monitor how they fluctuate over time.
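A sketch of what that discrepancy log could look like, assuming the crawl results and manual results are kept as CSVs keyed by a `site` column with matching condition columns (file layout and column names here are assumptions):

```python
import csv
from datetime import date

# Placeholder column names; should mirror whatever the crawl output uses.
CONDITIONS = ["usp_api_opt_out", "gpp_string_present", "OTGPPConsent",
              "OptanonConsent", "well_known_gpc"]


def log_discrepancies(crawl_csv, manual_csv, log_csv):
    """Count crawl-vs-manual mismatches per condition and append one log row."""
    def load(path):
        with open(path, newline="") as f:
            return {row["site"]: row for row in csv.DictReader(f)}

    crawl, manual = load(crawl_csv), load(manual_csv)
    mismatches = {c: 0 for c in CONDITIONS}

    for site, manual_row in manual.items():
        crawl_row = crawl.get(site)
        if crawl_row is None:
            continue  # site was spot-checked but missing from the crawl output
        for c in CONDITIONS:
            if manual_row.get(c, "") != crawl_row.get(c, ""):
                mismatches[c] += 1

    # One row per month: date, number of sites checked, mismatches per condition.
    with open(log_csv, "a", newline="") as f:
        csv.writer(f).writerow([date.today().isoformat(), len(manual)] +
                               [mismatches[c] for c in CONDITIONS])
```

Each run appends one row, so the per-condition mismatch counts can be compared month over month to spot drift.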
@katehausladen provided some initial accuracy analysis, as shown in our draft paper (section 3.5). Starting with the September crawl (#118), we should come up with a protocol for manually checking, for each crawl going forward, whether the crawl results are accurate for 100 randomly selected sites. As we crawl over longer periods of time, we might otherwise see a drift in accuracy, for example due to code changes or site changes, and thus should keep an eye on it.
I am particularly concerned about the following:
A few comments:

- I am not so concerned about the urlClassification results but more about the above.
- Question: do the above change from load to load? I do not think that should be the case since, for example, a site should set the OptanonConsent cookie on every load. But we need to check that.

The bottom line is, we need a protocol that allows us to check the analysis accuracy of our different conditions (including sub-conditions) for every crawl to keep track of analysis accuracy over time. Since we need to do it for every crawl and it involves manual work, it should be manageable time-wise but also meaningful.
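One way to answer the load-to-load question would be to reload a site several times with a fresh browser profile and record whether the OptanonConsent cookie is set on each load. The sketch below uses Selenium purely as a stand-in for whatever automation we end up using; the wait time and the exact cookie name check are assumptions to adjust.

```python
# Sketch: reload a site several times and record whether the OptanonConsent
# cookie appears on each load. Requires `pip install selenium` and a local
# Chrome/chromedriver setup; this is independent of the crawler itself.
import time
from selenium import webdriver
from selenium.webdriver.chrome.options import Options


def cookie_set_per_load(url, n_loads=5, wait_seconds=10):
    options = Options()
    options.add_argument("--headless=new")
    results = []
    for _ in range(n_loads):
        driver = webdriver.Chrome(options=options)  # fresh profile each load
        try:
            driver.get(url)
            time.sleep(wait_seconds)  # give the consent script time to run
            names = {c["name"] for c in driver.get_cookies()}
            results.append("OptanonConsent" in names)
        finally:
            driver.quit()
    return results


# Example: print(cookie_set_per_load("https://example.com"))
# An all-True result suggests the cookie is set on every visit, as expected.
```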
@natelevinson10 will take the lead here and work with @franciscawijaya and @eakubilo before starting the next crawl.