- Adds caching to all Parse* functions, using a unified cache (so the functions can be intermixed, and the cache size applies to everything).
- Increases the default size of the cache: 20 seems very small, 200 doesn't seem too big (though it's completely arbitrary). Tried to play around with random samplings (using non-linear distributions) and it didn't exactly change anything, but...
- Updates the cache replacement policy to a FIFO-ish policy.

Unified Cache
=============

Split the parsers between a caching frontend and a "raw" backend (the old behaviour) to avoid Parse having to pay for multiple cache lookups in order to fill entries.

Also decided to have the entries be handed out initialised, so `_lookup` *always* returns an entry, which can be partial. The caller is to check whether the key they handle (for specialised parsers) or all the sub-keys are filled, and fill them if necessary. This makes for relatively straightforward code, even if it bounces around a bit.

Because the cache is unified, the functions share entries and benefit from one another's activity.

Also added a `**jsParseBits` parameter to `ParseDevice`: I guess that's basically never used, but `Parse` forwarded its own `jsParseBits`, which would have led to a `TypeError`.

Cache Policy
============

The cache now uses a FIFO policy (similar to recent updates to the stdlib's `re` module), thanks to dicts being ordered since 3.6.

In my limited benchmarks (possibly because the workload was so artificial) LRU didn't do much better than FIFO based on hit/miss ratios, and FIFO is simpler, so for now, FIFO it is. That's easy to change anyway. The point was mostly that any cache which doesn't blow the entire thing away when full is almost certain to be an improvement.

A related change is that the cache used to be blown away only after it had MAX_CACHE_SIZE + 1 entries, as it was cleared - on a cache miss - if the cache size was strictly larger than `MAX_CACHE_SIZE`.
Meaning the effective size of the cache was 21, which is a pretty large increment given the small cache size. This has been changed to top out at `MAX_CACHE_SIZE`.

Fixes #97
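The frontend/backend split and the always-returned (possibly partial) entry could look roughly like this. This is only a sketch of the pattern: `_parse_user_agent` and `_parse_os` here are trivial stand-ins, not the real regex-based backends.

```python
MAX_CACHE_SIZE = 200
_PARSE_CACHE = {}


def _lookup(ua):
    # Always return an entry for `ua`, creating an empty one on a
    # miss. The entry may be partial: each frontend fills only the
    # sub-key it is responsible for.
    entry = _PARSE_CACHE.get(ua)
    if entry is None:
        # FIFO-ish eviction: drop the oldest insertion instead of
        # clearing the whole cache (dicts keep insertion order).
        if len(_PARSE_CACHE) >= MAX_CACHE_SIZE:
            del _PARSE_CACHE[next(iter(_PARSE_CACHE))]
        entry = {}
        _PARSE_CACHE[ua] = entry
    return entry


# Stand-in "raw" backends (hypothetical; the real ones run parsers).
def _parse_user_agent(ua):
    return {'family': ua.split('/')[0]}


def _parse_os(ua):
    return {'family': 'Other'}


def ParseUserAgent(ua):
    # Caching frontend: fill our sub-key only if it's missing.
    entry = _lookup(ua)
    if 'user_agent' not in entry:
        entry['user_agent'] = _parse_user_agent(ua)
    return entry['user_agent']


def ParseOs(ua):
    entry = _lookup(ua)
    if 'os' not in entry:
        entry['os'] = _parse_os(ua)
    return entry['os']
```

Because both frontends go through `_lookup`, calling `ParseOs` after `ParseUserAgent` on the same string reuses the existing entry rather than creating a second cache slot.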
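The difference between the two policies can be shown in isolation (a toy model of the insertion paths, not the actual code): the old policy cleared everything once the cache was strictly larger than the limit, so it briefly held MAX_CACHE_SIZE + 1 entries and then lost all of them; the new policy evicts just the oldest entry and tops out at MAX_CACHE_SIZE.

```python
MAX_CACHE_SIZE = 20  # the old default


def old_insert(cache, key, value):
    # Old policy: on a miss, blow the whole cache away once it is
    # *strictly* larger than MAX_CACHE_SIZE.
    if len(cache) > MAX_CACHE_SIZE:
        cache.clear()
    cache[key] = value


def new_insert(cache, key, value):
    # New policy: evict the oldest insertion (FIFO, relying on dict
    # insertion order) so the size never exceeds MAX_CACHE_SIZE.
    if len(cache) >= MAX_CACHE_SIZE:
        del cache[next(iter(cache))]
    cache[key] = value


old, new, peak = {}, {}, 0
for i in range(1000):
    old_insert(old, i, i)
    peak = max(peak, len(old))  # old cache peaks at MAX_CACHE_SIZE + 1
    new_insert(new, i, i)
# `new` ends up holding exactly the MAX_CACHE_SIZE most recent keys.
```

After the loop, `peak` is 21 (the effective old size) while `new` holds exactly the last 20 keys.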