
Fixed automatic append of py_ports and var_ports #669

Merged

Conversation

tim-shea
Contributor

@tim-shea tim-shea commented Apr 21, 2023

Added a check to prevent re-appending the same PyPort or VarPort that is already in a process model's py_ports or var_ports list.
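
A minimal sketch of the guard described above (hypothetical attribute and variable names following the wording of this description, not the actual lava source):

```python
# Only register a port on the process model if it is not already registered,
# so repeated model updates do not grow py_ports / var_ports unboundedly.
if py_port not in self.py_ports:
    self.py_ports.append(py_port)
if var_port not in self.var_ports:
    self.var_ports.append(var_port)
```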

Adjusted CspRecvPort implementation to allocate numpy _result array once at initialization rather than on every update.
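
For illustration, a sketch of the one-time allocation idea (class and method names are assumptions for this example, not lava's actual CspRecvPort API):

```python
import numpy as np

class CspRecvPortSketch:
    """Illustration only: allocate the result buffer once, not on every recv()."""

    def __init__(self, shape, dtype):
        self._shape = shape
        self._dtype = dtype
        # Allocated a single time here instead of inside every update/recv call.
        self._result = np.empty(shape, dtype=dtype)

    def recv(self, raw_bytes):
        # Reuse the preallocated buffer; only its contents change per call.
        self._result[...] = np.frombuffer(raw_bytes, dtype=self._dtype).reshape(self._shape)
        return self._result
```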

Adjusted PyRefPortVectorDense to allocate header numpy arrays the first time they are used, rather than every time a value is sent.
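
A hedged sketch of the lazy-allocation pattern (hypothetical header layout and names, not the actual PyRefPortVectorDense implementation):

```python
import numpy as np

class PyRefPortHeaderSketch:
    """Illustration only: build the header array on first use, then reuse it."""

    def __init__(self, var_id):
        self._var_id = var_id
        self._header = None  # created on the first send instead of on every send

    def _get_header(self):
        if self._header is None:
            # Hypothetical header contents; allocated once, then reused.
            self._header = np.array([self._var_id], dtype=np.int32)
        return self._header

    def send(self, data, csp_send_port):
        csp_send_port.send(self._get_header())
        csp_send_port.send(data)
```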

Issue Number: #586

Objective of pull request:
Fix the incorrect append of py_ports and var_ports on each process model update, which causes a significant slowdown over time when using VarPorts (e.g., via a Monitor).

Your PR fulfills the following requirements:

  • [Y] Issue created that explains the change and why it's needed
  • [N] Tests are part of the PR (for bug fixes / features)
  • [N] Docs reviewed and added / updated if needed (for bug fixes / features)
  • [Y] PR conforms to Coding Conventions
  • [Y] PR applies BSD 3-clause or LGPL2.1+ licenses to all code files
  • [Y] Lint (flakeheaven lint src/lava tests/) and (bandit -r src/lava/.) pass locally
  • Build tests (pytest) pass locally

One test in test_learning_rule.py seems to be stalling. I don't think this is due to these changes, but I will investigate further.

Please check your PR type:

  • Bugfix

What is the current behavior?

  • Execution time of .read() on a RefPort grows the more we call it.

What is the new behavior?

  • Execution time of .read() on a RefPort is stable over the course of thousands of timesteps.

Does this introduce a breaking change?

  • No

@tim-shea tim-shea linked an issue Apr 21, 2023 that may be closed by this pull request
@tim-shea tim-shea requested a review from mgkwill April 21, 2023 18:57
@tim-shea
Contributor Author

Confirmed that this breaks some unit tests in test_learning_rule.py. Needs further investigation.

It seems that putting the result of CspRecvPort.recv into a pre-allocated numpy _result array leads to a hang due to shared memory between processes, specifically in cases where user code grabs data from a Monitor. Reverting that change for now; in the future it would be good to find a way to preallocate effectively and avoid memory allocations on every timestep. A rough way to picture the trade-off (hypothetical names, not the lava implementation; the copying variant is what the revert goes back to):
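
```python
import numpy as np

def recv_reusing_buffer(port):
    # Reuses one preallocated array: avoids per-timestep allocation, but a consumer
    # (e.g. user code reading a Monitor) may still hold a reference to the same
    # array while the shared-memory channel writes the next message into it.
    port._result[...] = port.read_from_shared_memory()
    return port._result

def recv_copying(port):
    # Returns a fresh array per call: allocates on every timestep, but each caller
    # owns its own data.
    return np.copy(port.read_from_shared_memory())
```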
@mgkwill mgkwill requested a review from ysingh7 April 22, 2023 00:22
@mgkwill mgkwill merged commit efb49ce into main Apr 22, 2023
monkin77 pushed a commit to monkin77/thesis-lava that referenced this pull request Jul 12, 2024
* Fixed automatic append of py_ports and var_ports

Added a check to prevent re-appending the same PyPort or VarPort that is already in a process model's py_ports or var_ports list.

Adjusted CspRecvPort implementation to allocate numpy _result array once at initialization rather than on every update.

Adjusted PyRefPortVectorDense to allocate header numpy arrays the first time they are used, rather than every time a value is sent.

* Fixing flake formatting.

* Reverting change to preallocate _result

It seems that putting the result of the CspRecvPort.recv into a pre-allocated numpy _result leads to a hang due to shared memory between processes, specifically in cases where the user code grabs data from a Monitor. Reverting that change for now, though in the future it would be good to find a way to effectively preallocate to avoid memory allocations during every timestep.

* Cleanup.

* Fixing flake issue.
Development

Successfully merging this pull request may close these issues.

Execution time of .read() on a RefPort grows the more we call it