
Commit

added more content
ljchang committed Aug 9, 2024
1 parent 1a49d4e commit 578a948
Showing 15 changed files with 803 additions and 129 deletions.
Binary file modified .DS_Store
663 changes: 636 additions & 27 deletions docs/FingerTapping.ipynb

Large diffs are not rendered by default.

4 changes: 2 additions & 2 deletions docs/_config.yml
@@ -20,7 +20,7 @@ parse:
    # - deflist
    - dollarmath
    # - html_admonition
    # - html_image
    - html_image
    - linkify
    # - replacements
    # - smartquotes
@@ -33,7 +33,7 @@
# HTML-specific settings
html:
  favicon : "" # A path to a favicon image
  use_edit_page_button : false # Whether to add an "edit this page" button to pages. If `true`, repository information in repository: must be filled in
  use_edit_page_button : true # Whether to add an "edit this page" button to pages. If `true`, repository information in repository: must be filled in
  use_repository_button : true # Whether to add a link to your repository button
  use_issues_button : false # Whether to add an "open an issue" button
  use_multitoc_numbering : true # Continuous numbering across parts/chapters
15 changes: 11 additions & 4 deletions docs/_toc.yml
@@ -4,14 +4,21 @@
format: jb-book
root: intro
parts:
- caption: Acquiring Data
- caption: Signal
  numbered: False
  chapters:
  - file: signal
- caption: Data Acquisition
  chapters:
  - file: acquiring_data
- caption: Analyzing Data
  - file: tasks
- caption: Data Analysis
  chapters:
  - file: analyzing_data
  - file: FingerTapping
- caption: FAQ
- caption: Testing & Development
  chapters:
  - file: development
- caption: Resources
  chapters:
  - file: resources
  - file: faq
54 changes: 4 additions & 50 deletions docs/acquiring_data.md
@@ -5,54 +5,8 @@ We are currently using the CIM laptop that has the Kortex acquisition driver installed.

All software and data can be accessed via the Kernel [web portal](https://portal.kernel.com/).

## Procedures
Please read the short [documentation](https://docs.kernel.com/docs/handling-the-flow2-headset) on the Kernel website about how to set up the Flow2 and acquire data.

# Available Unity Tasks
## Breath Hold task
This task was programmed and presented in Unity. The participant was asked to switch between interleaved periods of holding their breath and periods of paced breathing. During each paced breathing block (30 sec) a bright green circle on a black background repeatedly expanded and contracted at a fixed pace (6 sec per cycle), and the participant was instructed to use this animation to guide their inhalation and exhalation respectively (5 breathing cycles were repeated in each block). At the end of paced breathing blocks the circle changed to yellow, signaling to the participant that this would be their final exhalation and the breath hold period would occur next. During each breath hold block (20 sec) the fully contracted yellow circle remained on the screen, above which the words “Hold your breath!” were displayed. Below the yellow circle a countdown to zero indicating the time left in the block was displayed. When the countdown timer hit zero, the circle turned back to bright green and paced breathing immediately commenced. This was repeated such that the participant completed a total of 6 breath hold blocks and 6 paced breathing blocks.

## Passive Auditory task
This task was programmed and presented in Unity. The task had a block design with two block types: story blocks (n=8) during which the participant listened to short clips from TED talks; and noise blocks (n=7) during which the participant listened to brown noise. After an initial 10s rest period, the story and noise blocks (each lasting for 20s) were presented (via earbuds) in a preset pseudo-randomized order (Fig. 6B). The participant was asked to keep their eyes open and look at a white fixation cross that was presented on a black background throughout the task.

## Finger Tapping task
This task was programmed and presented in Unity. In this task the participant was asked to sit in a chair with their arms on the armrests such that their palms faced upwards, while audio and visual stimuli guided them through randomized periods of left and right–hand finger tapping (n = 10 blocks per side; Fig. 6E). Specifically, at the start of each block the participant was cued in two ways: 1) audibly – brown noise was played through earbuds to either the right or left ear indicating the hand that should be used during the task, and 2) visually – a white image of a hand was displayed on a black screen with either an “L” or an “R” inscribed on it, again indicating the left or right hand should be used. Both cues persisted throughout the block (17.3 sec). Within each block the participant was asked to repeatedly tap the thumb of the cued hand to a certain finger on the same hand. A red dot overlaid on a finger of the visual stimulus indicated which finger to tap. Throughout a block the red dot moved sequentially through each of the four fingers, and each shift to a new finger indicated a new trial (n = 13 trials per block; trial duration = 0.75 sec and inter-trial interval = 0.50 sec). A brief resting period (20 sec) with a white fixation cross on a black screen followed each block.
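
To make the block structure concrete, here is a minimal Python sketch of how the tapping-block timing above could be laid out as an events table for a GLM-style analysis. Only the block and rest durations and the block counts come from the description above; the random block order, the absence of an initial rest period, and the column names are assumptions for illustration.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

BLOCK_DUR = 17.3          # tapping block duration (sec, from the task description)
REST_DUR = 20.0           # rest period following each block (sec)
N_BLOCKS_PER_SIDE = 10

# Pseudo-randomized order of left/right blocks (the real task uses a preset order)
sides = np.array(["left"] * N_BLOCKS_PER_SIDE + ["right"] * N_BLOCKS_PER_SIDE)
rng.shuffle(sides)

rows, t = [], 0.0         # assumes the first block starts at t = 0 (initial rest not specified)
for side in sides:
    rows.append({"onset": t, "duration": BLOCK_DUR, "condition": side})
    t += BLOCK_DUR + REST_DUR

events = pd.DataFrame(rows)
print(events.head())
```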

## Go/No-Go task paradigm
The task, designed and presented using the Unity game engine, consisted of two block types: go-only and go/no-go. The overall structure of the task was similar to the one used in a prior publication (38). Briefly, participants completed a total of 10 blocks alternating between go-only (n = 5) and go/no-go (n = 5), with each block consisting of 24 trials. Stimuli were green leaf cartoon images (for go trials) and red flower images (for no-go trials) presented in a pseudorandom order that was pre-set and unique for each run. During go/no-go blocks, 30% of the trials were chosen to be no-go trials. A different run of the task was presented at each study visit; however, all participants completed the same versions in the same order.

The task included a 15 s rest at the beginning and a 20 s rest at the end. The task also included a screen to remind the participant of task instructions (e.g. pressing the spacebar when seeing a green leaf and refraining from pressing when seeing a red flower) prior to each block. Stimulus presentation time was 400 ms followed by a 600 ms ± 100 ms inter-trial interval during which only the background was displayed. Participants were asked to provide a response within the 400 ms presentation period. A pleasant (ding) or unpleasant (buzzer) tone, played immediately after participants’ response, was used to provide positive (for hits) and negative feedback (for false-alarms), respectively. The overall task duration was approximately 7 min (Fig. 1c).
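
As a rough illustration of this trial structure, the sketch below generates one possible pseudorandom trial sequence per block. The real task uses preset sequences; the jitter distribution for the inter-trial interval and the starting block type are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)  # illustrative only; the real task uses preset sequences

def make_block(n_trials=24, p_nogo=0.30, go_only=False):
    """Generate one block's trial sequence as a list of 'go' / 'no-go' labels."""
    if go_only:
        return ["go"] * n_trials
    n_nogo = int(round(p_nogo * n_trials))                 # 30% no-go trials
    trials = ["no-go"] * n_nogo + ["go"] * (n_trials - n_nogo)
    return list(rng.permutation(trials))

# 10 blocks alternating between go-only and go/no-go (starting block type assumed)
blocks = [make_block(go_only=(i % 2 == 0)) for i in range(10)]

# Jittered inter-trial intervals: 600 ms +/- 100 ms after a 400 ms stimulus
# (uniform jitter is an assumption)
itis = rng.uniform(0.5, 0.7, size=24)
```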

## Signal
- Moments provides the time courses for each channel (a sketch of computing these moments from a DTOF histogram follows this list):
  - 0th moment: integral (photon count, ranging from 0 to ~1e7)
  - 1st moment: mean time of flight (in picoseconds; on the order of 1,000)
  - 2nd moment: variance of time of flight (in picoseconds^2; on the order of 100,000)
- Hb/Moments includes an analysis estimating the concentrations of HbO (oxyhemoglobin) and HbR (deoxyhemoglobin).
- Gating: time gating
- Reconstruction: HD-DOT
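
As referenced above, here is a minimal sketch of how the three DTOF moments could be computed for one channel and frame; the bin width and the toy histogram are purely illustrative.

```python
import numpy as np

def dtof_moments(counts, bin_times_ps):
    """Compute the first three moments of a DTOF histogram.

    counts       : photon counts per time bin (one channel, one frame)
    bin_times_ps : center of each histogram bin, in picoseconds
    """
    total = counts.sum()                                    # 0th moment: number of photons
    mean_tof = (counts * bin_times_ps).sum() / total        # 1st moment: mean time of flight (ps)
    var_tof = (counts * (bin_times_ps - mean_tof) ** 2).sum() / total  # 2nd moment: variance (ps^2)
    return total, mean_tof, var_tof

# Toy example: a smooth histogram peaked around ~1200 ps (20 ps bins are an assumption)
bins = np.arange(0, 5000, 20.0)
counts = 1e5 * np.exp(-0.5 * ((bins - 1200) / 300) ** 2)
print(dtof_moments(counts, bins))
```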

## Preprocessing
- Data conditioning (calibration correction)
- Data processing (Filtering, De-trending)
- Select good channels based on a channel quality check
- Convert to log contrast (a sketch of these two steps follows the notes below)
- Select histogram bins with contrast
- Load atlas head mesh
- Run forward model to get TPSFs, fluences, and Jacobians
- Perform Jacobian normalization and regularization
- Crop to valid channels and histogram bins
- Invert Jacobian
- Use fully processed data together with inverse Jacobian to generate reconstruction of mu_a and mu_s' per wavelength
- Use extinction coefficients to generate 3D map of HbO and HbR

- Note that we are performing reconstruction using a linearized model (as developed in the attached paper), which provides estimates of relative changes in HbO and HbR (not yet absolute estimates).
- We do the reconstruction on the full time-domain data using histograms. We have invested heavily in developing our reconstruction pipelines to fully utilize our data and are constantly working to improve their accuracy and performance so that they scale well. Our objective is to provide our customers with the best volumetric data, which can then be analyzed in any NIfTI-compatible software.
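
Below is a rough sketch of the channel-selection and log-contrast steps referenced in the preprocessing list above. The photon-count threshold, the baseline definition, and the array layout are assumptions for illustration, not the actual implementation.

```python
import numpy as np

def select_and_log_contrast(intensity, min_photons=1_000, baseline_frames=30):
    """Channel QC and log-contrast conversion for a (time x channel) intensity array.

    intensity       : 0th-moment time course (photon counts) per channel
    min_photons     : QC threshold on the mean photon count (assumed value)
    baseline_frames : number of initial frames used as the baseline I0 (assumed)
    """
    good = intensity.mean(axis=0) > min_photons            # crude channel quality check
    kept = intensity[:, good]
    i0 = kept[:baseline_frames].mean(axis=0)                # per-channel baseline intensity
    log_contrast = -np.log(kept / i0)                       # delta-OD style log contrast
    return log_contrast, good

# Example with synthetic data: 600 frames x 100 channels
rng = np.random.default_rng(0)
intensity = rng.poisson(lam=50_000, size=(600, 100)).astype(float)
contrast, good_mask = select_and_log_contrast(intensity)
```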

### Relative changes in HbO and HbR concentrations (moments method)
The data preprocessing procedures have been extensively detailed in our previous studies (15). Initially, we applied a channel selection method based on histogram shape criteria (14). Subsequently, histograms derived from the chosen channels were utilized to calculate the moments of the DTOFs, specifically focusing on the sum, mean, and variance moments. The alterations in preprocessed DTOF moments were then translated into changes in absorption coefficients for each wavelength, employing the sensitivities of the various moments to absorption coefficient changes, as outlined in (13). To determine these sensitivities, a 2-layer medium with a superficial layer of 12 mm thickness was employed. Utilizing a finite element modeling (FEM) forward model from NIRFAST (58, 59), the Jacobians (sensitivity maps) for each moment were integrated within each layer to assess sensitivities. The changes in absorption coefficients at each wavelength were further converted into alterations in oxyhemoglobin and deoxyhemoglobin concentrations (HbO and HbR, respectively), employing the extinction coefficients for the respective wavelengths and the modified Beer–Lambert law (mBLL (60)). The HbO/HbR concentrations underwent additional preprocessing through a motion correction algorithm known as Temporal Derivative Distribution Repair (TDDR (61)). To address spiking artifacts arising from baseline shifts during TDDR, they were identified and rectified using cubic spline interpolation (62). Lastly, data detrending was performed using a moving average with a 100-second kernel, and short channel regression was employed to eliminate superficial physiological signals from brain activity (63), utilizing short within-module channels with a source-detector separation (SDS) of 8.5 mm.
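
As a small illustration of the final step described above, the following sketch removes superficial physiology from a long channel by regressing out a paired short (8.5 mm SDS) channel. The channel pairing and the use of plain ordinary least squares (rather than a more elaborate regression) are assumptions.

```python
import numpy as np

def short_channel_regression(long_ts, short_ts):
    """Remove superficial physiology from a long-channel time series.

    Regresses the paired short-channel signal (plus an intercept) out of the
    long channel via ordinary least squares and returns the residual.
    """
    X = np.column_stack([short_ts, np.ones_like(short_ts)])
    beta, *_ = np.linalg.lstsq(X, long_ts, rcond=None)
    return long_ts - X @ beta

# Example: a superficial signal contaminates the long channel
rng = np.random.default_rng(0)
superficial = np.sin(np.linspace(0, 50, 2000)) + 0.1 * rng.standard_normal(2000)
brain = 0.3 * np.sin(np.linspace(0, 5, 2000))
long_channel = brain + 0.8 * superficial
cleaned = short_channel_regression(long_channel, superficial)
```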

### Absolute concentrations of HbO and HbR (curve fitting method)
The DTOF results from convolving the time-resolved TPSF with the IRF. Utilizing Flow2’s online IRF measurements, we employed a curve fitting technique to extract the absolute optical properties of the tissue beneath. Generating candidate TPSFs through an analytical solution of the diffusion equation for a homogeneous semi-infinite medium, we convolved these with the known IRF and compared them with the recorded DTOF. The search for optical properties was carried out using the Levenberg-Marquardt algorithm, focusing on fitting within the range spanning from 80% of the peak on the rising edge to 0.1% of the peak on the falling edge, with a refractive index set to 1.4. These absorption coefficient estimates were then converted to HbO and HbR concentrations. A single value for HbO and HbR was obtained by computing the median value across well-coupled long, within-module channels (SDS=26.5mm) of two prefrontal modules.
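
A minimal sketch of this curve-fitting strategy follows. The semi-infinite model here is a simplified Patterson-style expression used only for illustration, `scipy.optimize.least_squares` stands in for the Levenberg-Marquardt implementation, and the amplitude normalization is an assumption; the fit window (80% of the peak on the rising edge to 0.1% of the peak on the falling edge) and the refractive index of 1.4 follow the description above.

```python
import numpy as np
from scipy.optimize import least_squares

C_MM_PS = 0.214  # speed of light in tissue (mm/ps), assuming refractive index n = 1.4

def semi_infinite_tpsf(t_ps, mu_a, mu_sp, rho=26.5):
    """Simplified Patterson-style time-resolved reflectance for a homogeneous
    semi-infinite medium (illustration only; units: mm^-1, ps, mm)."""
    D = 1.0 / (3.0 * (mu_a + mu_sp))
    z0 = 1.0 / mu_sp
    t = np.clip(t_ps, 1e-3, None)
    return ((4 * np.pi * D * C_MM_PS) ** -1.5 * z0 * t ** -2.5
            * np.exp(-mu_a * C_MM_PS * t)
            * np.exp(-(rho ** 2 + z0 ** 2) / (4 * D * C_MM_PS * t)))

def fit_mua_musp(t_ps, dtof, irf):
    """Fit (mu_a, mu_s') by convolving a model TPSF with the measured IRF."""
    peak = dtof.max()
    rising = np.argmax(dtof >= 0.8 * peak)                           # 80% of peak, rising edge
    falling = len(dtof) - np.argmax(dtof[::-1] >= 1e-3 * peak)       # 0.1% of peak, falling edge
    window = slice(rising, falling)

    def residuals(params):
        mu_a, mu_sp = params
        model = np.convolve(semi_infinite_tpsf(t_ps, mu_a, mu_sp), irf)[: len(dtof)]
        model *= dtof[window].sum() / model[window].sum()            # amplitude normalization
        return model[window] - dtof[window]

    fit = least_squares(residuals, x0=[0.01, 1.0], bounds=([1e-4, 0.1], [0.1, 3.0]))
    return fit.x  # (mu_a, mu_s') in mm^-1, to be called with measured t_ps, dtof, irf arrays
```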

### DOT reconstruction algorithm
A finite element model (FEM) of the adult head was developed based on the unbiased non-linear averages of the MNI 152 database (49). The atlas was segmented into 5 tissue types (skin, skull, CSF, gray matter, and white matter) and discretized into linear tetrahedral elements using NIRFASTSlicer, giving rise to 413,403 nodes and 2,465,366 elements. Optical properties of each tissue layer at each wavelength (690 nm and 905 nm) were assigned based on published values for the adult head (29). The coordinates of each of the 40 modules containing the optical sources and detectors were determined and identified on the surface of the FEM, and the time-resolved light propagation model was solved using the diffusion approximation to the light transport equation throughout the domain (58). The Jacobians (sensitivity functions that map a change in measured data to a change in optical properties) for the time-resolved data (TPSF) for each optical parameter (μa and μs′) were calculated using the adjoint theorem (64) at each wavelength and then interpolated to a uniform voxel grid (also known as the reconstruction basis) spanning the entire model, with a resolution of 4 × 4 × 4 mm. The use of a lower-resolution reconstruction basis is crucial for DOT because the problem is highly under-determined: that is, the number of measurements is much lower than the number of unknowns. While a high-resolution FEM mesh is needed for the calculation of the time-resolved light propagation to ensure numerical accuracy, a much lower voxel resolution is needed to improve the stability of the inverse problem.
The time-resolved Jacobian for each optical property was then mapped to each data type (intensity, mean time of flight, and variance), each of which was then normalized with respect to its corresponding data. A Moore–Penrose pseudoinverse with Tikhonov regularization was used to calculate an approximation of the inverse of the Jacobian to perform a single-step linear recovery of the optical properties (29) using the same functional data as outlined earlier. Note that we downsampled the data to 1 Hz before performing reconstruction. The recovered changes in μa within each voxel were mapped to changes in oxy-/deoxyhemoglobin for further processing using the same GLM model described above. Lastly, in addition to the GLM analyses, we performed an epoched analysis. Here, we considered different ROIs depending on the task: voxels within 10 mm of the left motor area with the maximum GLM contrast for the finger tapping task, and voxels within 10 mm of the left auditory region with the maximum GLM contrast for the passive auditory task. The time courses of these ROIs were then epoched and aggregated within each block type for further visualization (Fig. 7).
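
The single-step linear recovery described above can be sketched as a Tikhonov-regularized pseudoinverse followed by an extinction-coefficient solve. In this minimal numpy sketch the regularization scaling, matrix sizes, and extinction-coefficient values are placeholders, not the actual pipeline.

```python
import numpy as np

def tikhonov_reconstruct(J, dy, alpha=0.01):
    """Single-step linear recovery: solve dy ~= J @ dmu with Tikhonov regularization.

    J     : (n_measurements x n_voxels) normalized Jacobian
    dy    : (n_measurements,) normalized data change for one time point
    alpha : regularization parameter (assumed value; tuned in practice)
    """
    JJt = J @ J.T
    lam = alpha * np.max(np.diag(JJt))                 # scale regularization to the problem
    return J.T @ np.linalg.solve(JJt + lam * np.eye(JJt.shape[0]), dy)

def mua_to_hemoglobin(dmu_a_w1, dmu_a_w2, ext):
    """Convert per-voxel absorption changes at two wavelengths to dHbO / dHbR.

    ext : 2x2 extinction-coefficient matrix [[e_HbO_w1, e_HbR_w1],
                                             [e_HbO_w2, e_HbR_w2]]; values must be supplied.
    """
    dmu = np.stack([dmu_a_w1, dmu_a_w2])               # (2, n_voxels)
    dhb = np.linalg.solve(ext, dmu)                    # rows are dHbO and dHbR
    return dhb[0], dhb[1]

# Toy example (random numbers stand in for real Jacobians and data)
rng = np.random.default_rng(0)
J = rng.standard_normal((200, 5000))                   # 200 measurements, 5000 voxels
dy = rng.standard_normal(200)
dmu_a = tikhonov_reconstruct(J, dy)
```
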
## Notes
So far, we have found that dark-colored hair makes it difficult to acquire data.
2 changes: 0 additions & 2 deletions docs/analyzing_data.md

This file was deleted.

4 changes: 4 additions & 0 deletions docs/development.md
@@ -0,0 +1,4 @@
# Future Development
- Need a tool to create BIDS-formatted fNIRS datasets.
- Need to write a data importer for nltools (see the sketch below).
- Need to test and adapt existing paradigms to be compatible with the Kernel portal.
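
As a possible starting point for the nltools importer flagged above, here is a hedged sketch that reads a recording with MNE and returns a tidy DataFrame. It assumes the data are exported as SNIRF files and that MNE is an acceptable dependency; the filename is hypothetical.

```python
import mne
import pandas as pd

def snirf_to_dataframe(path):
    """Load a SNIRF recording and return a (time x channel) pandas DataFrame."""
    raw = mne.io.read_raw_snirf(path, preload=True)
    data = raw.get_data().T                     # MNE stores channels as rows; transpose
    df = pd.DataFrame(data, columns=raw.ch_names)
    df["time"] = raw.times                      # seconds from the start of the recording
    return df

# df = snirf_to_dataframe("sub-01_task-fingertapping_nirs.snirf")  # hypothetical filename
```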

