Implemented audio extraction, adding audio streams, displaying audio stream #4
As far as the input format for events, I don't really know what researchers/annotators are going to want to do. We have seen cases where the data was in the form of timestamps, probably something like `HH:MM:SS.123456`. So unless @NeuroLaunch has opinions about what (other) format(s) we should target, I'd say start with parsing `HH:MM:SS.123456`-formatted data, and we can expand to other formats later.

As far as the output format of events: MNE-Python has 2 ways of representing events (event arrays, and `Annotations` objects). We should decide which one (or both?) we want to use when converting/syncing camera timestamps to the Raw file's time domain. @NeuroLaunch do you have an opinion here? @ashtondoane are you familiar with the two kinds of MNE event representations?

If I had to put a stake in the ground I'd probably say "use `Annotations`", but I haven't thought very hard about it yet... maybe implement that first, and if we find that we need to also implement event array support, we can add that later.
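As a rough illustration of the parsing step discussed above, here is a minimal sketch of converting an `HH:MM:SS.123456`-style timestamp into seconds. The helper name is hypothetical (not part of this PR), and it assumes timestamps never exceed 24 hours:

```python
from datetime import datetime

def parse_timestamp(ts: str) -> float:
    """Convert an 'HH:MM:SS.ssssss' string to seconds as a float.

    Hypothetical helper; assumes times stay within a single day.
    """
    t = datetime.strptime(ts, "%H:%M:%S.%f")
    return t.hour * 3600 + t.minute * 60 + t.second + t.microsecond / 1e6

parse_timestamp("00:01:30.500000")  # 90.5
```

The resulting onsets (in seconds) would then be candidates for the `onset` argument of `mne.Annotations`, once they have been mapped into the Raw file's time domain.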
I am not familiar with the MNE representations; I will have to read the documentation. I'll begin with `Annotations`, as @NeuroLaunch also mentioned this as a possibility, and we can adjust later if necessary.
Not clear to me that this has actually been addressed, as nothing is done with `events` in the code; unresolving.
Not sure about the API here. We probably want this func and the `_extract_data_from_raw` func to return the same thing (e.g., a tuple of the same length), but for the `Raw` case there's no analogue to "audio channel". I also don't know what the audio channel is useful for --- e.g., if it's downsampled from 44.1 kHz (typical audio) to 1 kHz (typical MEG), it will be pretty much useless / unintelligible, so I don't see the point of syncing the audio data itself.
Perhaps I don't understand the end goal of this API. I wasn't intending to use the audio data for syncing (the pulses are the goal here), but rather to hold onto it so that an aligned file can be created in the future. I'm not sure I understand the point about downsampling. Would you mind clarifying?
Not sure if this question is still relevant, but here goes an answer: as I understand it, the end goal here is to convert researcher-created timestamps (in HH:MM:SS.ssssss format) into an `mne.Annotations` object, after first figuring out what transformations (shift and/or stretch) must be done to get the video time domain aligned with the MEG time domain. In that sense, there is no need to write out the camera's audio channel to a WAV file (either before or after it's been warped/synced to the MEG time domain).
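The shift-and/or-stretch step described above could be sketched as a least-squares linear fit between matched sync-pulse onset times in the two domains. The function name and the pulse times below are hypothetical, just to show the idea:

```python
import numpy as np

def fit_video_to_meg(video_pulse_times, meg_pulse_times):
    """Estimate a linear map (stretch + shift) from the video time domain
    to the MEG time domain, from matched sync-pulse onset times.

    Hypothetical sketch; assumes pulses are already matched one-to-one.
    """
    slope, intercept = np.polyfit(video_pulse_times, meg_pulse_times, deg=1)
    return slope, intercept

# hypothetical matched pulse onsets (seconds) in each domain
slope, intercept = fit_video_to_meg([1.0, 2.0, 3.0], [1.5, 3.5, 5.5])
# an annotator timestamp t_video then maps to MEG time as slope * t_video + intercept
```

The mapped onsets (plus durations and descriptions) would then be what gets passed to `mne.Annotations` and attached to the Raw object.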