Currently, with the latest stack of performance improvement PRs, about 46% of the time is spent in the LRU. In my benchmark in the vscode repo, it is roughly break-even whether the LRU is beneficial or not.
There may be a byte-length threshold below which strings are not worth adding to the LRU, because computing their tokens directly with byte pair encoding is faster than the cache traffic. If so, skipping the LRU entirely (both get() and set()) for those strings would be faster.
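A minimal sketch of what that bypass could look like. The names `LRU_SKIP_THRESHOLD`, `lruCache`, `cacheKey`, and `encodePiece` are assumptions for illustration, not the tokenizer's actual identifiers, and the threshold value itself would have to come from benchmarking:

```typescript
// Hypothetical break-even length in bytes; the real value depends on how
// expensive the BPE merge loop is relative to an LRU lookup plus insertion.
const LRU_SKIP_THRESHOLD = 4;

// Assumed shapes for the existing pieces; real signatures may differ.
declare const lruCache: {
  get(key: string): number[] | undefined;
  set(key: string, value: number[]): void;
};
declare function cacheKey(piece: Uint8Array): string;
declare function bytePairEncode(piece: Uint8Array): number[];

function encodePiece(piece: Uint8Array): number[] {
  if (piece.length < LRU_SKIP_THRESHOLD) {
    // Short pieces: encoding directly is assumed to be cheaper than the
    // cache, so skip both get() and set().
    return bytePairEncode(piece);
  }
  const key = cacheKey(piece);
  const cached = lruCache.get(key);
  if (cached !== undefined) {
    return cached;
  }
  const tokens = bytePairEncode(piece);
  lruCache.set(key, tokens);
  return tokens;
}
```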
In that case, the token array being appended to could also be passed into bytePairEncode(), avoiding allocating a new array and then appending it, since a short piece that bypasses the LRU never needs a separate array to store in the cache.
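A sketch of that follow-on change. The `(piece, out)` signature is an assumed variant of bytePairEncode(), not its current API, and the other names are the same hypothetical ones as in the sketch above:

```typescript
declare const LRU_SKIP_THRESHOLD: number;
declare const lruCache: {
  get(key: string): number[] | undefined;
  set(key: string, value: number[]): void;
};
declare function cacheKey(piece: Uint8Array): string;

// Assumed variant: append the tokens for `piece` onto `out` instead of
// returning a freshly allocated array.
declare function bytePairEncodeInto(piece: Uint8Array, out: number[]): void;

function appendTokens(piece: Uint8Array, out: number[]): void {
  if (piece.length < LRU_SKIP_THRESHOLD) {
    // Uncached path: tokens go straight into the caller's array, so no
    // intermediate array is allocated and then copied.
    bytePairEncodeInto(piece, out);
    return;
  }
  const key = cacheKey(piece);
  let tokens = lruCache.get(key);
  if (tokens === undefined) {
    // The cached path still builds its own array, because that array is
    // what gets stored in the LRU.
    tokens = [];
    bytePairEncodeInto(piece, tokens);
    lruCache.set(key, tokens);
  }
  for (const token of tokens) {
    out.push(token);
  }
}
```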