`lru_cache()` is a function in the `functools` module that reduces execution time through memoization: it caches the results of function calls so they can be reused when the function is called again with the same arguments.
I experimented with applying `lru_cache` as a decorator to some `run_()` methods in the `Rotor` class. However, `lru_cache` requires all function arguments to be hashable, and since these methods receive the rotor object itself as an argument, decorating them raised an error. To address this, I tried an alternative library, `methodtools`, which supports `self` objects. Even with `methodtools`, though, every argument must still be hashable, and types such as arrays, lists, and dictionaries are inherently unhashable.
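The hashability constraint can be reproduced in isolation. Below is a minimal sketch, where `modal_analysis` is a hypothetical stand-in for a ROSS `run_()` method, not the actual API:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def modal_analysis(speed_range):
    # Placeholder for an expensive computation.
    return [s * 2 for s in speed_range]

# Hashable arguments (a tuple) work fine:
modal_analysis((10, 20, 30))

# Unhashable arguments (a list) raise TypeError:
try:
    modal_analysis([10, 20, 30])
except TypeError as exc:
    print(exc)  # unhashable type: 'list'
```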
Potential use in ROSS
I identified two key functions in ROSS that could benefit greatly from memoization: `run_modal()` and `run_freq_response()`. Both are invoked many times from within other methods.
Adding `lru_cache` is straightforward: in most cases it only requires applying the decorator to the target function. For `run_freq_response()`, however, arguments such as `speed_range` and `modes` are unhashable (arrays and lists), so they must be converted to tuples before being passed to the cached function.
Here’s how it can be applied:
Performance comparison
1. Campbell Diagram
To test the performance impact of `lru_cache`, I measured the execution time of computing the Campbell diagram in two scenarios:
Using a predefined speed range: The function is executed for the first time with a given range of speed values.
Adding more speed values to the range: The function is called again with an extended range, where the initial part of the range has already been computed during the first execution.
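The second scenario can be illustrated with a toy cached function (a hypothetical stand-in, not the actual ROSS `run_modal()`): extending the speed range only computes the new values, while the overlapping part is served from the cache.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def modal_at_speed(speed):
    # Stand-in for a per-speed modal analysis.
    return speed ** 2

def campbell(speed_range):
    return [modal_at_speed(s) for s in speed_range]

campbell(range(0, 50))  # first run: 50 computations
campbell(range(0, 75))  # extended run: only 25 new computations

info = modal_at_speed.cache_info()
print(info.misses)  # 75 distinct speeds, each computed once
print(info.hits)    # 50 speeds reused from the first run
```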
Without `lru_cache`: (timing screenshot)

With `lru_cache` in `run_modal()`: (timing screenshot)

Result: using `lru_cache` reduced execution time by approximately 30%.
2. Unbalance Response
I also evaluated the impact on unbalance response calculations with the following setup:
Using an initial unbalance configuration.
Modifying the magnitude of the force applied to a specific node (multiplying `m2` by 2).
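The size of this speed-up can be sketched with simplified stand-in functions (hypothetical names; the real ROSS methods differ): the expensive frequency response depends only on the speed range, so changing the unbalance force reuses the cached response and only the cheap scaling step is recomputed.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def freq_response(speed_range):
    # Stand-in for the expensive run_freq_response() computation.
    return [1.0 / (1.0 + s) for s in speed_range]

def unbalance_response(speed_range, magnitude):
    # The cheap part: scale the (cached) response by the force magnitude.
    return [magnitude * r for r in freq_response(tuple(speed_range))]

unbalance_response([0, 10, 20], magnitude=1.0)  # computes freq_response
unbalance_response([0, 10, 20], magnitude=2.0)  # reuses the cached result
print(freq_response.cache_info().hits)  # → 1
```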
Without `lru_cache`: (timing screenshot)

With `lru_cache` in `run_freq_response()`: (timing screenshot)

Result: with `lru_cache`, execution time was reduced by approximately 99%.