
Evaluate use of lru_cache #1142

Open
jguarato opened this issue Jan 24, 2025 · 0 comments

lru_cache() is a function in the functools module that reduces execution time through memoization: it caches the results of function calls so they can be reused whenever the function is called again with the same arguments.
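
As a quick, ROSS-independent illustration of the idea, only the first call with a given argument is actually computed:

```python
from functools import lru_cache


@lru_cache(maxsize=None)
def slow_square(x):
    print(f"computing {x}**2 ...")
    return x * x


slow_square(4)                   # miss: computes and caches the result
slow_square(4)                   # hit: returned from the cache, no print
print(slow_square.cache_info())  # CacheInfo(hits=1, misses=1, ...)
```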

I experimented with using lru_cache as a decorator on some run_() methods of the Rotor class. However, lru_cache requires all function arguments to be hashable, and since these methods receive the rotor object itself (self) as an argument, this caused an error. To address it, I explored an alternative library, methodtools, which supports caching on methods (it handles the self argument). Even with methodtools, however, all remaining arguments must still be hashable, and types such as arrays, lists, and dictionaries are inherently unhashable.
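
The hashability constraint is easy to demonstrate: lru_cache builds a dictionary key from the arguments, so passing a list (or a NumPy array) raises a TypeError:

```python
from functools import lru_cache


@lru_cache(maxsize=None)
def total(values):
    return sum(values)


total((1, 2, 3))  # fine: tuples are hashable
total([1, 2, 3])  # raises TypeError: unhashable type: 'list'
```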

Potential use in ROSS

I identified two key functions in ROSS that could greatly benefit from memoization: run_modal() and run_freq_response(). These functions are invoked multiple times within other methods.

Adding lru_cache is straightforward: it simply requires applying the decorator to the desired function. For run_freq_response(), however, arguments like speed_range and modes are of unhashable types (arrays and lists), so these arguments can be converted to tuples before being passed to the cached function.

Here’s how it can be applied:
[Screenshots: the decorator applied to run_modal() and run_freq_response()]
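
In outline, the pattern looks like the following sketch. The method names follow the issue; their exact signatures, and the _run_freq_response_cached helper, are assumptions for illustration:

```python
from methodtools import lru_cache


class Rotor:
    @lru_cache(maxsize=32)
    def run_modal(self, speed, num_modes=12):
        # Scalar arguments are hashable, so the decorator applies directly;
        # methodtools takes care of the self argument.
        ...

    def run_freq_response(self, speed_range=None, modes=None):
        # Arrays and lists are unhashable; convert them to tuples before
        # forwarding to the cached helper.
        speed_range = tuple(speed_range) if speed_range is not None else None
        modes = tuple(modes) if modes is not None else None
        return self._run_freq_response_cached(speed_range, modes)

    @lru_cache(maxsize=32)
    def _run_freq_response_cached(self, speed_range, modes):
        # The actual frequency-response computation goes here.
        ...
```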

Performance comparison

1. Campbell Diagram

To test the performance impact of lru_cache, I measured the execution time of computing the Campbell diagram in two scenarios:

  1. Using a predefined speed range: The function is executed for the first time with a given range of speed values.
  2. Adding more speed values to the range: The function is called again with an extended range whose initial portion was already computed during the first execution (see the timing sketch after this list).
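
A minimal sketch of this experiment, assuming ROSS's rotor_example() and run_campbell() (exact signatures may differ); the extended range below reuses the first range's speed values, which is what lets the cached run_modal() results be reused:

```python
import time
import numpy as np
import ross as rs

rotor = rs.rotor_example()

# Scenario 1: first run over a predefined speed range (steps of 10 rad/s).
speed_range = np.linspace(0, 400, 41)
t0 = time.perf_counter()
rotor.run_campbell(speed_range)
print(f"first run:    {time.perf_counter() - t0:.2f} s")

# Scenario 2: extended range; its first 41 speeds coincide with scenario 1,
# so with run_modal() cached those modal analyses come from the cache.
extended_range = np.linspace(0, 500, 51)
t0 = time.perf_counter()
rotor.run_campbell(extended_range)
print(f"extended run: {time.perf_counter() - t0:.2f} s")
```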

Without lru_cache:
[Screenshot: execution times without lru_cache]

With lru_cache in run_modal():
[Screenshot: execution times with lru_cache in run_modal()]

Result: Using lru_cache reduced execution time by approximately 30%.

2. Unbalance Response

I also evaluated the impact on unbalance response calculations with the following setup:

  1. Using an initial unbalance configuration.
  2. Modifying the magnitude of the force applied to a specific node (multiplying m2 by 2); see the sketch after this list.
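
A hypothetical sketch of this setup; the node numbers, magnitudes, and the run_unbalance_response() signature are assumptions, not taken from the issue:

```python
import time
import numpy as np
import ross as rs

rotor = rs.rotor_example()
frequency = np.linspace(0, 1000, 101)

# Initial unbalance configuration on two nodes.
t0 = time.perf_counter()
rotor.run_unbalance_response(
    node=[2, 4],
    unbalance_magnitude=[1e-4, 2e-4],
    unbalance_phase=[0, 0],
    frequency=frequency,
)
print(f"initial config: {time.perf_counter() - t0:.2f} s")

# Same rotor and frequency range, only m2 doubled. run_freq_response() is
# invoked internally with the same arguments, so with lru_cache its result
# is served from the cache.
t0 = time.perf_counter()
rotor.run_unbalance_response(
    node=[2, 4],
    unbalance_magnitude=[1e-4, 4e-4],
    unbalance_phase=[0, 0],
    frequency=frequency,
)
print(f"modified force: {time.perf_counter() - t0:.2f} s")
```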

Without lru_cache:
[Screenshot: execution times without lru_cache]

With lru_cache in run_freq_response():
[Screenshot: execution times with lru_cache in run_freq_response()]

Result: With lru_cache, execution time was reduced by an impressive 99%.
