
Agenda 8/27/2021 #288

Closed
F-said opened this issue Aug 24, 2021 · 7 comments

Comments

@F-said
Contributor

F-said commented Aug 24, 2021

Updates

  • Updated README
    • included a contributor section (maybe move it to the front? Like movie credits, no one waits around for the end)
    • described user_params functionality
  • Updated CONTRIB
    • included sections for new contributors (how-to-get-started)
  • Docker tested for stability, results here
  • Conda environment tested for stability, results here
  • First draft of pre-release notes created, here
  • Test-case outputs generated to drive development

Blocks/Challenges

  • Write permissions are missing inside the Docker container. Is this user error, or does the container need editing? We encountered a similar problem in Singularity and had to bind paths into the container to fix it (see the sketch below).
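
One possible workaround (a minimal sketch using the Docker Python SDK; the image tag, entrypoint, and paths are placeholders, not the project's actual invocation) is to bind-mount a writable host directory and run the container as the host user:

```python
# Sketch: bind-mount a writable host directory into the container and run as
# the host user, so files written inside are owned by (and writable for) you.
# Image tag, entrypoint, and paths are hypothetical placeholders.
import os
import docker  # pip install docker

client = docker.from_env()
host_out = os.path.abspath("derivatives")  # writable directory on the host
os.makedirs(host_out, exist_ok=True)

logs = client.containers.run(
    image="pepper-pipeline:latest",               # placeholder image tag
    command="python run.py",                      # placeholder entrypoint
    volumes={host_out: {"bind": "/output", "mode": "rw"}},
    user=f"{os.getuid()}:{os.getgid()}",          # avoid root-owned output files
    remove=True,
)
print(logs.decode())
```

This is the Docker analogue of Singularity's `--bind`; if the container writes to a path that is not bind-mounted, the image itself likely needs editing instead.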

Next Actions

Any thoughts on expanding PEPPER to Julia/R and other widely used domain-specific languages?

@georgebuzzell
Member

Introduction of Santi and Sonya

Overview of where the pipeline is

The full pipeline is implemented. It runs in a conda environment for development, in Docker (but only in serial) for local use, or in Singularity (and in parallel via Slurm) on HPCs. It expects BIDS inputs, and we have a template script for converting data to BIDS. Outputs are generally in line with BIDS derivatives.
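
For reference, a minimal sketch of what "expects BIDS inputs" implies when enumerating recordings to process (the .set extension and exact layout are assumptions, not the pipeline's actual loader):

```python
# Sketch: enumerate EEG recordings in a BIDS-formatted dataset.
# The .set extension and layout assumptions are illustrative only.
from pathlib import Path
from typing import List, Optional

def find_eeg_files(bids_root: str, task: Optional[str] = None) -> List[Path]:
    """Return EEG files under sub-*/(ses-*/)eeg/ following BIDS naming."""
    files = sorted(Path(bids_root).glob("sub-*/**/eeg/*_eeg.set"))
    if task is not None:
        files = [f for f in files if f"task-{task}" in f.name]
    return files

for f in find_eeg_files("bids_dataset", task="rest"):
    print(f)
```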

Known limitations:

  1. FASTER and ADJUST implementations need to be updated/finalized.
  2. Need to update the filter feature to allow more control from input.json (e.g., transition band); see the sketch after this list.
  3. Currently, selection of participants/files to run is done via input.json for conda/Docker, but this is ignored and controlled via Slurm on HPC. These need to be brought into alignment.
  4. Currently, the testing suite only checks for syntax errors, etc. Need to expand the testing suite to test the ability of the pipeline to clean data.
  5. Pipeline outputs have not been compared to any other pipeline.
  6. Documentation needs to be cleaned up to allow easier use and community contributions. Moreover, to become a truly community-driven project, we need a governance model that ideally is not a dictatorship.
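
A rough sketch of limitations 2 and 3, i.e., what exposing filter settings and participant selection through input.json could look like (all field names here are hypothetical, not the pipeline's actual schema):

```python
# Sketch: read filter settings and participant selection from input.json.
# Field names are hypothetical illustrations, not the pipeline's real schema.
import json

sample_input_json = """
{
  "subjects": ["sub-01", "sub-02"],
  "filter": {"highpass_hz": 0.3, "lowpass_hz": 40.0, "transition_band_hz": 0.1}
}
"""

def load_user_params(text: str) -> dict:
    params = json.loads(text)
    filt = params.setdefault("filter", {})
    filt.setdefault("transition_band_hz", 0.1)  # limitation 2: expose the transition band
    params.setdefault("subjects", "all")        # limitation 3: one source of truth,
    return params                               # also usable to drive Slurm job selection

params = load_user_params(sample_input_json)
print(params["subjects"], params["filter"]["transition_band_hz"])
```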

Goal of where we want to be, and date

By Dec 1, we want to have:
1. FASTER and ADJUST implementations updated/finalized, and the filter feature updated.
2. Ability to control Slurm parallel jobs from the input.json file.
3. An expanded testing suite that tests the ability of the pipeline to clean data (see the sketch after this list).
4. The pipeline able to match or exceed the output from MADE on at least 3 metrics we have identified as crucial.
5. A paper submitted that describes the pipeline, the theory/reasoning behind its design, each step included, a summary of the comparison with at least one other pipeline (MADE), as well as the philosophy of the project and the framework set up for further development (community-driven on GitHub, modular design, testing suite, established workflow for contributing).
6. Updated documentation to facilitate use and increased community contributions.
   a. Including governance for determining the direction of the pipeline.
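
As one illustration of the expanded-testing-suite and MADE-comparison goals (a sketch only; file paths, the column name, and the tolerance are hypothetical), a data-quality test might look like:

```python
# Sketch of a data-quality test comparing the pipeline to MADE on one metric.
# File paths, the column name, and the tolerance are hypothetical placeholders.
import csv

def mean_percent_trials_retained(csv_path: str) -> float:
    """Average percent of trials retained across subjects from a summary CSV."""
    with open(csv_path, newline="") as f:
        values = [float(row["percent_trials_retained"]) for row in csv.DictReader(f)]
    return sum(values) / len(values)

def test_retention_matches_or_exceeds_made():
    ours = mean_percent_trials_retained("derivatives/pepper_summary.csv")
    made = mean_percent_trials_retained("derivatives/made_summary.csv")
    assert ours >= made - 1.0  # match or exceed MADE, within a small tolerance
```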

Real data Metrics

General
  • Percent of segments/trials retained
  • Split-half reliability (after norming for average trial distance, to correct for inflated reliability when analyzing a "clump" of trials)
  • Test/re-test reliability (after controlling for days in between)
  • Correlation with template blink, saccade, and EMG

Rest
  • dB power of eyes-open/eyes-closed alpha (or lights-on/lights-off alpha); see the sketch below

Task
  • SNR of a sensory, auditory, and cognitive ERP
  • dB power of sensory, auditory alpha suppression
  • dB power of theta, delta response to a control event
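
A minimal sketch of the dB-power metric for the Rest condition above, using synthetic data (band limits and sampling rate are just examples; in practice the inputs would be cleaned EEG segments):

```python
# Sketch: dB power of alpha (8-12 Hz) for two conditions (eyes open vs. closed).
# Uses synthetic data; in practice the inputs would be cleaned EEG segments.
import numpy as np
from scipy.signal import welch

def alpha_db_power(signal: np.ndarray, fs: float, band=(8.0, 12.0)) -> float:
    freqs, psd = welch(signal, fs=fs, nperseg=int(2 * fs))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    band_power = np.sum(psd[mask]) * (freqs[1] - freqs[0])  # approximate band integral
    return 10 * np.log10(band_power)                        # convert to dB

rng = np.random.default_rng(0)
fs = 250.0
t = np.arange(0, 30, 1 / fs)
eyes_open = rng.normal(size=t.size)                          # background noise only
eyes_closed = eyes_open + 2.0 * np.sin(2 * np.pi * 10 * t)   # add 10 Hz alpha

print("eyes open  :", alpha_db_power(eyes_open, fs))
print("eyes closed:", alpha_db_power(eyes_closed, fs))
```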

Simulated EEG data metrics
Simulate clean EEG data, simulate various kinds of noise, and add the noise to the clean data.
Then test the correlation between the simulated clean data and the pipeline's output after processing the noise-added data.
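
A minimal sketch of that simulated-data check, using a synthetic "clean" signal and a trivial moving-average filter as a stand-in for the real pipeline:

```python
# Sketch: simulate clean EEG-like data, add noise, "clean" it, and check the
# correlation of the result with the original clean signal. The moving-average
# filter is only a stand-in for the real pipeline output.
import numpy as np

rng = np.random.default_rng(42)
fs = 250.0
t = np.arange(0, 10, 1 / fs)
clean = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 4 * t)

# Two example noise types: broadband noise and 60 Hz line noise.
noisy = clean + 0.8 * rng.normal(size=t.size) + 0.5 * np.sin(2 * np.pi * 60 * t)

# Stand-in cleaning step (replace with the pipeline's actual output).
processed = np.convolve(noisy, np.ones(13) / 13, mode="same")

print(f"correlation before cleaning: {np.corrcoef(clean, noisy)[0, 1]:.3f}")
print(f"correlation after cleaning:  {np.corrcoef(clean, processed)[0, 1]:.3f}")
```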

@georgebuzzell
Member

From Santi:

What to do when the various metrics do not match up?
Where is the data coming from?

@georgebuzzell
Member

From Santi:

Comparing metrics for different ages

@georgebuzzell
Member

Plan to create issues to discuss and agree on which metrics are preferred and which to prioritize. Then, identify datasets and potentially compromise on metrics based on the practicality of what data we have.

@georgebuzzell
Member

  1. Agree on metrics. 2. Identify data. 3. Implement the expanded testing suite to compare pipelines.

In parallel to the above, continue working on updating the ADJUST, FASTER, and filter features, fixing the run of the pipeline in Singularity via input.json, fixing the current issues with the testing suite, and fixing the current issue with the Docker container.

@F-said
Contributor Author

F-said commented Sep 3, 2021

Will break these comments up into issues, schedule them, and close this agenda.

@F-said
Contributor Author

F-said commented Sep 20, 2021

Issues created for metrics.

Closing

F-said closed this as completed Sep 20, 2021