WIP: fetch upstream, resolve conflicts
CBroz1 committed Jan 8, 2025
2 parents ad7c74a + 75ad067 commit 558f38b
Showing 39 changed files with 1,315 additions and 370 deletions.
22 changes: 18 additions & 4 deletions CHANGELOG.md
@@ -1,6 +1,6 @@
# Change Log

## [0.5.4] (Unreleased)
## [0.5.5] (Unreleased)

### Release Notes

@@ -20,10 +20,12 @@ SpikeSortingRecording().alter()
SpikeSortingRecording().update_ids()
```

## [0.5.4] (December 20, 2024)

### Infrastructure

- Disable populate transaction protection for long-populating tables #1066,
#1108, #1172
#1108, #1172, #1187
- Add docstrings to all public methods #1076
- Update DataJoint to 0.14.2 #1081
- Remove `AnalysisNwbfileLog` #1093
@@ -46,10 +48,15 @@ SpikeSortingRecording().update_ids()
- Add testing for python versions 3.9, 3.10, 3.11, 3.12 #1169
- Initialize tables in pytests #1181
- Download test data without credentials, trigger on approved PRs #1180
- Add coverage of decoding pipeline to pytests #1155
- Allow python \< 3.13 #1169
- Remove numpy version restriction #1169
- Merge table delete removes orphaned master entries #1164
- Edit `merge_fetch` to expect positional before keyword arguments #1181
- Allow part restriction `SpyglassMixinPart.delete` #1192
- Move cleanup of `IntervalList` orphan entries to cron job cleanup process
#1195 (see the sketch after this list)
- Add mixin method `get_fully_defined_key` #1198
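
The relocated `IntervalList` cleanup can also be run by hand. A minimal
sketch, assuming only the `dry_run` flag shown in the `common_interval.py`
change later in this commit:

```python
from spyglass.common import IntervalList

# Preview orphaned entries without deleting anything (dry_run=True is the default)
IntervalList().cleanup(dry_run=True)

# Remove the orphans once the preview looks right
IntervalList().cleanup(dry_run=False)
```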

### Pipelines

@@ -59,12 +66,17 @@ SpikeSortingRecording().update_ids()
- Improve electrodes import efficiency #1125
- Fix logger method call in `common_task` #1132
- Export fixes #1164
- Allow `get_abs_path` to add selection entry.
- Log restrictions and joins.
- Allow `get_abs_path` to add selection entry. #1164
- Log restrictions and joins. #1164
- Check if querying table inherits mixin in `fetch_nwb`. #1192, #1201
- Ensure externals entries before adding to export. #1192
- Error specificity in `LabMemberInfo` #1192

- Decoding

- Fix edge case errors in spike time loading #1083
- Allow fetch of partial key from `DecodingParameters` #1198
- Allow data fetching with partial but unique key #1198 (sketch below)
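
A hedged sketch of the partial-key pattern; the import path and parameter
value below are assumptions, not taken from this commit:

```python
from spyglass.decoding.v1.core import DecodingParameters  # assumed module path

# Not a full primary key, but enough to identify exactly one row
partial_key = {"decoding_param_name": "contfrag_clusterless"}  # hypothetical value
params = (DecodingParameters & partial_key).fetch1()
```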

- Linearization

@@ -81,6 +93,7 @@ SpikeSortingRecording().update_ids()
`open-cv` #1168
- `VideoMaker` class to process frames in multithreaded batches #1168, #1174
- `TrodesPosVideo` updates for `matplotlib` processor #1174
- User prompt if ambiguous insert in `DLCModelSource` #1192

- Spike Sorting

@@ -89,6 +102,7 @@ SpikeSortingRecording().update_ids()
- Fix bug in `insert_curation` returned key #1114
- Add fields to `SpikeSortingRecording` to allow recompute #1093
- Fix handling of waveform extraction sparse parameter #1132
- Limit Artifact detection intervals to valid times #1196

## [0.5.3] (August 27, 2024)

4 changes: 2 additions & 2 deletions CITATION.cff
@@ -166,5 +166,5 @@ keywords:
- spike sorting
- kachery
license: MIT
version: 0.5.3
date-released: '2024-04-22'
version: 0.5.4
date-released: '2024-12-20'
11 changes: 9 additions & 2 deletions docs/src/ForDevelopers/Management.md
@@ -228,10 +228,16 @@ disk. There are several tables that retain lists of files that have been
generated during analyses. If someone deletes analysis entries, files will still
be on disk.
To remove orphaned files, we run the following commands in our cron jobs:
Additionally, there are periphery tables, such as `IntervalList`, that store
entries created by downstream tables. These entries are not automatically
deleted when the downstream entry is removed. To minimize interference with
ongoing user entry creation, we recommend running these cleanups less
frequently (e.g., weekly).
To remove orphaned files and entries, we run the following commands in our cron jobs:
```python
from spyglass.common import AnalysisNwbfile
from spyglass.common import AnalysisNwbfile, IntervalList
from spyglass.spikesorting import SpikeSorting
from spyglass.common.common_nwbfile import schema as nwbfile_schema
from spyglass.decoding.v1.sorted_spikes import schema as spikes_schema
@@ -241,6 +247,7 @@ from spyglass.decoding.v1.clusterless import schema as clusterless_schema
def main():
    AnalysisNwbfile().nightly_cleanup()
    SpikeSorting().nightly_cleanup()
    IntervalList().cleanup()
    nwbfile_schema.external['analysis'].delete(delete_external_files=True)
    nwbfile_schema.external['raw'].delete(delete_external_files=True)
    spikes_schema.external['analysis'].delete(delete_external_files=True)
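# Hypothetical entry point so a cron job can run this file as a script
# (an assumption; the rest of the snippet is not shown in this excerpt):
if __name__ == "__main__":
    main()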
11 changes: 9 additions & 2 deletions pyproject.toml
@@ -130,7 +130,7 @@ addopts = [
# "--sw", # stepwise: resume with next test after failure
# "--pdb", # drop into debugger on failure
"-p no:warnings",
"--no-teardown", # don't teardown the database after tests
# "--no-teardown", # don't teardown the database after tests
# "--quiet-spy", # don't show logging from spyglass
# "--no-dlc", # don't run DLC tests
"--show-capture=no",
@@ -148,6 +148,12 @@ env = [
"TF_ENABLE_ONEDNN_OPTS = 0", # TF disable approx calcs
"TF_CPP_MIN_LOG_LEVEL = 2", # Disable TF warnings
]
filterwarnings = [
"ignore::ResourceWarning:.*",
"ignore::DeprecationWarning:.*",
"ignore::UserWarning:.*",
"ignore::MissingRequiredBuildWarning:.*",
]

[tool.coverage.run]
source = ["*/src/spyglass/*"]
@@ -157,7 +163,8 @@ omit = [ # which submodules have no tests
"*/cli/*",
# "*/common/*",
"*/data_import/*",
"*/decoding/*",
"*/decoding/v0/*",
# "*/decoding/*",
"*/figurl_views/*",
# "*/lfp/*",
# "*/linearization/*",
2 changes: 1 addition & 1 deletion src/spyglass/common/common_interval.py
@@ -158,7 +158,7 @@ def plot_epoch_pos_raw_intervals(self, figsize=(20, 5), return_fig=False):
        if return_fig:
            return fig

    def nightly_cleanup(self, dry_run=True):
    def cleanup(self, dry_run=True):
        """Clean up orphaned IntervalList entries."""
        orphans = self - get_child_tables(self)
        if dry_run:
6 changes: 4 additions & 2 deletions src/spyglass/common/common_lab.py
@@ -133,9 +133,11 @@ def get_djuser_name(cls, dj_user) -> str:
        )

        if len(query) != 1:
            remedy = f"delete {len(query)-1}" if len(query) > 1 else "add one"
            raise ValueError(
                f"Could not find name for datajoint user {dj_user}"
                + f" in common.LabMember.LabMemberInfo: {query}"
                f"Could not find exactly 1 datajoint user {dj_user}"
                + " in common.LabMember.LabMemberInfo. "
                + f"Please {remedy}: {query}"
            )

        return query[0]
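
A usage sketch for the stricter lookup; the import path is an assumption:

```python
import datajoint as dj

from spyglass.common import LabMember  # assumed import path

# Resolve the active DataJoint username to a lab member's full name.
# After this change, a 0- or >1-row match raises a ValueError naming a remedy.
name = LabMember.get_djuser_name(dj.config["database.user"])
```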
12 changes: 3 additions & 9 deletions src/spyglass/common/common_usage.py
@@ -9,19 +9,14 @@
from typing import List, Union

import datajoint as dj
from datajoint import FreeTable
from datajoint import config as dj_config
from pynwb import NWBHDF5IO

from spyglass.common.common_nwbfile import AnalysisNwbfile, Nwbfile
from spyglass.settings import export_dir, test_mode
from spyglass.settings import test_mode
from spyglass.utils import SpyglassMixin, SpyglassMixinPart, logger
from spyglass.utils.dj_graph import RestrGraph
from spyglass.utils.dj_helper_fn import (
    make_file_obj_id_unique,
    unique_dicts,
    update_analysis_for_dandi_standard,
)
from spyglass.utils.dj_helper_fn import (make_file_obj_id_unique, unique_dicts,
                                         update_analysis_for_dandi_standard)
from spyglass.utils.nwb_helper_fn import get_linked_nwbs
from spyglass.utils.sql_helper_fn import SQLDumpHelper

@@ -174,7 +169,6 @@ def list_file_paths(self, key: dict, as_dict=True) -> list[str]:
        Return as a list of dicts: [{'file_path': x}]. Default True.
        If False, returns a list of strings without key.
        """
        file_table = self * self.File & key
        unique_fp = {
            *[
                AnalysisNwbfile().get_abs_path(p)
48 changes: 16 additions & 32 deletions src/spyglass/decoding/decoding_merge.py
@@ -85,53 +85,41 @@ def cleanup(self, dry_run=False):
    @classmethod
    def fetch_results(cls, key):
        """Fetch the decoding results for a given key."""
        return cls().merge_get_parent_class(key).fetch_results()
        return cls().merge_restrict_class(key).fetch_results()

    @classmethod
    def fetch_model(cls, key):
        """Fetch the decoding model for a given key."""
        return cls().merge_get_parent_class(key).fetch_model()
        return cls().merge_restrict_class(key).fetch_model()

    @classmethod
    def fetch_environments(cls, key):
        """Fetch the decoding environments for a given key."""
        decoding_selection_key = cls.merge_get_parent(key).fetch1("KEY")
        return (
            cls()
            .merge_get_parent_class(key)
            .fetch_environments(decoding_selection_key)
        )
        restr_parent = cls().merge_restrict_class(key)
        decoding_selection_key = restr_parent.fetch1("KEY")
        return restr_parent.fetch_environments(decoding_selection_key)

    @classmethod
    def fetch_position_info(cls, key):
        """Fetch the decoding position info for a given key."""
        decoding_selection_key = cls.merge_get_parent(key).fetch1("KEY")
        return (
            cls()
            .merge_get_parent_class(key)
            .fetch_position_info(decoding_selection_key)
        )
        restr_parent = cls().merge_restrict_class(key)
        decoding_selection_key = restr_parent.fetch1("KEY")
        return restr_parent.fetch_position_info(decoding_selection_key)

    @classmethod
    def fetch_linear_position_info(cls, key):
        """Fetch the decoding linear position info for a given key."""
        decoding_selection_key = cls.merge_get_parent(key).fetch1("KEY")
        return (
            cls()
            .merge_get_parent_class(key)
            .fetch_linear_position_info(decoding_selection_key)
        )
        restr_parent = cls().merge_restrict_class(key)
        decoding_selection_key = restr_parent.fetch1("KEY")
        return restr_parent.fetch_linear_position_info(decoding_selection_key)

    @classmethod
    def fetch_spike_data(cls, key, filter_by_interval=True):
        """Fetch the decoding spike data for a given key."""
        decoding_selection_key = cls.merge_get_parent(key).fetch1("KEY")
        return (
            cls()
            .merge_get_parent_class(key)
            .fetch_linear_position_info(
                decoding_selection_key, filter_by_interval=filter_by_interval
            )
        restr_parent = cls().merge_restrict_class(key)
        decoding_selection_key = restr_parent.fetch1("KEY")
        return restr_parent.fetch_spike_data(
            decoding_selection_key, filter_by_interval=filter_by_interval
        )

    @classmethod
@@ -167,11 +155,7 @@ def create_decoding_view(cls, key, head_direction_name="head_orientation"):
                head_dir=position_info[head_direction_name],
            )
        else:
            (
                position_info,
                position_variable_names,
            ) = cls.fetch_linear_position_info(key)
            return create_1D_decode_view(
                posterior=posterior,
                linear_position=position_info["linear_position"],
                linear_position=cls.fetch_linear_position_info(key),
            )
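
The refactor keeps the public classmethod signatures, so downstream calls read
the same. A hedged sketch, assuming the merge table this module defines is
named `DecodingOutput` and using a hypothetical key:

```python
from spyglass.decoding.decoding_merge import DecodingOutput  # assumed class name

key = {"merge_id": "00000000-0000-0000-0000-000000000000"}  # hypothetical
results = DecodingOutput.fetch_results(key)
model = DecodingOutput.fetch_model(key)
spikes = DecodingOutput.fetch_spike_data(key, filter_by_interval=True)
```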