docs: improve flow of llm caching notebook #5309
Merged
docs: improve flow of llm caching notebook
The notebook `llm_caching` demos various caching providers. In the previous version, the setup common to all of the examples lived under the In Memory Caching heading. A user who only wants to try a particular provider will skip ahead to that provider's cells, miss the common setup, and then hit import and variable reference errors. This commit moves the common setup to the top of the notebook to avoid this.
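For context, the shared setup in question is roughly the following (a sketch assuming the LangChain API at the time of this PR; the exact model name and parameters used in the notebook may differ). The first cell now sits at the top of the notebook, and each provider-specific section only swaps the cache backend:

```python
# Common setup cell, now placed at the top of the notebook so that every
# provider-specific section below can rely on it.
# Assumes OPENAI_API_KEY is set in the environment.
import langchain
from langchain.llms import OpenAI

# A deliberately slower configuration (n=2, best_of=2) makes the caching
# speedup easy to observe.
llm = OpenAI(model_name="text-davinci-002", n=2, best_of=2)
```

```python
# A provider-specific cell (here: in-memory caching) only needs to configure
# the cache backend and can assume `llm` already exists.
from langchain.cache import InMemoryCache

langchain.llm_cache = InMemoryCache()

llm("Tell me a joke")  # first call hits the API
llm("Tell me a joke")  # repeated call is served from the cache
```

With the old layout, running only the second cell (or the equivalent cell for SQLite, Redis, etc.) would fail because `langchain` and `llm` were never defined.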
Who can review?
Community members can review the PR once tests pass. Tag maintainers/contributors who might be interested:
@dev2049