Caching system enhancements #395

Closed
kethinov opened this issue May 24, 2020 · 0 comments · Fixed by #608
Labels
enhancement Planned feature

Comments

@kethinov
Member

The template caching system needs some refinements to make it more useful. I propose the following three intertwined refinements, which together improve its ability to target the templates most in need of caching while ignoring the rest, reducing CPU and memory load.

Replace model caching system with a new template-only limited cache model

Instead of caching the whole model, just cache the subset of the model that is relevant to that template.

Example non-cached parent template index.html:

<p>Welcome {user}!</p>
<include src='weather'></include>

In the above example we don't want to cache the parent template index.html because {user} could result in a huge number of unique models.

We do however want to cache the weather.html partial because the data will be common to many users in the same city:

<p>High temperature today in {city} is {value}.</p>
<include src='accountSettings'></include>

Suppose weather.html is on the caching whitelist but index.html (its parent) and accountSettings.html (its child) are not. Under the current system, the entire model will be cached, including the irrelevant account data used by the parent and child templates.

If we refactor the cached model to include only the data used by the template itself and ignore whatever is used by parent or child templates, we can get more out of the caching system. In the above example, the limited cache model would just cache city and value rather than also irrelevantly caching user and whatever account data is used by accountSettings.html.

To do this we'll need to redesign the caching system to scan templates on the whitelist at compile time, analyze which parts of the model each template actually uses, and construct a limited cache model accordingly.

Since this will be done at compile time, it is okay for it to be a relatively inefficient process since it should only happen once per Teddy instance per template. The scan results should be stored and referenced at each render.
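The compile-time scan could be sketched along these lines (a minimal sketch; the function names and the regex-based variable detection are hypothetical, not part of Teddy's API):

```javascript
// Hypothetical sketch: collect the {variable} names one template uses,
// ignoring anything inside <include> tags so child-template data is skipped.
function scanTemplateVariables (templateSource) {
  // strip include tags so variables used by child templates are not picked up
  const ownMarkup = templateSource.replace(/<include[\s\S]*?<\/include>/g, '')
  const vars = new Set()
  for (const match of ownMarkup.matchAll(/\{([a-zA-Z_$][\w$]*)\}/g)) {
    vars.add(match[1])
  }
  return [...vars]
}

// Per-render: reduce the full model to just the variables the scan found.
function limitedCacheModel (model, scannedVars) {
  const subset = {}
  for (const name of scannedVars) {
    if (name in model) subset[name] = model[name]
  }
  return subset
}
```

For weather.html above, the scan would yield `['city', 'value']`, so a model like `{ city, value, user, ... }` would be cached as just `{ city, value }`.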

Declarative whitelisting

We should also support a new <cache> element that will let templates add themselves to the whitelist instead of requiring this to be done via the API:

<cache maxCaches='50'></cache>
<p>High temperature today in {city} is {value}.</p>
<include src='accountSettings'></include>

It could also be used to add a template to the blacklist instead:

<cache blacklist></cache>
<p>High temperature today in {city} is {value}.</p>
<include src='accountSettings'></include>
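At compile time the <cache> element's attributes would need to be read and turned into a caching directive. A minimal sketch, assuming a regex-based parser (the function name and return shape are illustrative, not a committed design):

```javascript
// Hypothetical compile-time helper: read a <cache> element and return the
// caching directive it declares, or null if the template declares none.
function parseCacheElement (templateSource) {
  const match = templateSource.match(/<cache([^>]*)><\/cache>/)
  if (!match) return null
  const attrs = match[1]
  if (/\bblacklist\b/.test(attrs)) return { blacklist: true }
  const directive = {}
  const key = attrs.match(/key=['"]([^'"]+)['"]/)
  const maxCaches = attrs.match(/maxCaches=['"](\d+)['"]/)
  if (key) directive.key = key[1]
  if (maxCaches) directive.maxCaches = Number(maxCaches[1])
  return directive
}
```

The directive for a whitelisted template would then feed the same code path as the programmatic API.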

Caching by primary key

We should support declaring a primary key by which to index the cache models. For example:

<cache key='city' maxCaches='5000'></cache>
<p>High temperature today in {city} is {value}.</p>
<include src='accountSettings'></include>

In this example, if the template is rendered for a city that has been seen before but with a new value, the old cache entry for that city will be invalidated, the template will be re-rendered, and the new render will be cached in its place.
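The keyed invalidation behavior could be sketched like this (an illustrative data structure, not Teddy's implementation; the class and method names are made up, and eviction order is a simple oldest-first policy chosen for the sketch):

```javascript
// Hypothetical sketch of per-key caching: one cache entry per distinct
// primary key value, replaced whenever the rest of the limited model changes.
class KeyedTemplateCache {
  constructor (key, maxCaches) {
    this.key = key
    this.maxCaches = maxCaches
    this.entries = new Map() // key value -> { model, html }
  }

  render (model, renderFn) {
    const keyValue = model[this.key]
    const cached = this.entries.get(keyValue)
    // reuse the cached render only if the limited model is unchanged
    if (cached && JSON.stringify(cached.model) === JSON.stringify(model)) {
      return cached.html
    }
    const html = renderFn(model) // e.g. a fresh template render
    this.entries.delete(keyValue) // invalidate the stale entry, if any
    this.entries.set(keyValue, { model, html })
    // evict the oldest entry once maxCaches is exceeded
    if (this.entries.size > this.maxCaches) {
      this.entries.delete(this.entries.keys().next().value)
    }
    return html
  }
}
```

Rendering the same city twice with the same value hits the cache; rendering it again with a new value triggers exactly one re-render and replaces the stale entry.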

This will require a breaking API change to teddy.setCacheWhitelist.

Instead of e.g.:

teddy.setCacheWhitelist({
  'one.html': 1,
  'two.html': 50,
  'three.html': 250
})

We will now have:

teddy.setCacheWhitelist({
  'one.html': {
    /* key is omitted in this example */
    maxCaches: 1
  },
  'two.html': {
    key: 'someModelVariable',
    maxCaches: 50
  },
  'three.html': {
    key: 'someOtherModelVariable',
    maxCaches: 250
  }
})

Both maxCaches and key should be optional, though strongly recommended.

@kethinov kethinov added enhancement Planned feature breaking Change will require a major version bump labels May 24, 2020
@kethinov kethinov removed the breaking Change will require a major version bump label Jun 21, 2023
@kethinov kethinov mentioned this issue Jun 23, 2023