[Enhancement] Set complexity dtypes for memory efficiency #412
Hey!
@Kleinjohann and I have made minor changes to the Complexity class to use
np.uint16
values in the complexity arrays. This dtype can represent complexities up to 65,535, which not even the largest recording systems can reach right now. We also updated the tests to check that this dtype is preserved all the way down to the single spike train annotations, reducing memory usage by a factor of 4.
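A minimal sketch (not the actual Elephant code) of where the factor-of-4 saving comes from: NumPy's default integer dtype is int64 (8 bytes per element), while np.uint16 needs only 2 bytes. The array name and size below are illustrative.

```python
import numpy as np

# Hypothetical complexity array: counts of coincident spikes per time bin.
# Counts are bounded by the number of recorded units, so uint16
# (max 65,535) is safe and compact compared to the default int64.
n_bins = 1_000_000
complexity_int64 = np.zeros(n_bins, dtype=np.int64)    # 8 bytes/element
complexity_uint16 = np.zeros(n_bins, dtype=np.uint16)  # 2 bytes/element

ratio = complexity_int64.nbytes // complexity_uint16.nbytes
print(ratio)  # → 4
```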
Let us know if we should change something.
Best,
Aitor&Alex