Merge remote-tracking branch 'ME-ICA/master' into fix-varex-sorting
# Conflicts:
#	tedana/selection/select_comps.py
tsalo committed May 23, 2019
2 parents da59821 + ebd3672 commit 82276c6
Showing 22 changed files with 791 additions and 640 deletions.
4 changes: 3 additions & 1 deletion .circleci/config.yml
Original file line number Diff line number Diff line change
@@ -200,7 +200,9 @@ jobs:
cd /tmp/data/five-echo/
tedana -d p06.SBJ01_S09_Task11_e[1,2,3,4,5].sm.nii.gz \
-e 15.4 29.7 44.0 58.3 72.6 --verbose \
--out-dir /tmp/data/five-echo/TED.five-echo/
--out-dir /tmp/data/five-echo/TED.five-echo \
--debug
- run:
name: Checking outputs
command: |
1 change: 0 additions & 1 deletion .circleci/tedana_outputs.txt
@@ -143,7 +143,6 @@ lowk_ts_OC.nii
meica_mix.1D
mepca_OC_components.nii
mepca_mix.1D
pcastate.pkl
s0v.nii
t2sv.nii
ts_OC.nii
2 changes: 1 addition & 1 deletion .circleci/tedana_outputs_verbose.txt
@@ -34,12 +34,12 @@ mepca_S0_pred.nii
mepca_betas_catd.nii
mepca_metric_weights.nii
mepca_mix.1D
pcastate.pkl
s0v.nii
s0vG.nii
s0vs.nii
t2ss.nii
t2sv.nii
t2svG.nii
tedana_run.txt
ts_OC.nii
ts_OC_whitened.nii
23 changes: 23 additions & 0 deletions CONTRIBUTING.md
@@ -11,6 +11,7 @@ Here are some [instructions][link_signupinstructions].
Already know what you're looking for in this guide? Jump to the following sections:

* [Joining the conversation](#joining-the-conversation)
* [Monthly developer calls](#monthly-developer-calls)
* [Contributing through Github](#contributing-through-github)
* [Understanding issues, milestones, and project boards](#understanding-issues-milestones-and-project-boards)
* [Making a change](#making-a-change)
@@ -26,6 +27,14 @@ We also maintain a [gitter chat room][link_gitter] for more informal conversatio
There is significant cross-talk between these two spaces, and we look forward to hearing from you in either venue!
As a reminder, we expect all contributions to `tedana` to adhere to our [code of conduct][link_coc].

### Monthly developer calls

We run monthly developer calls via Zoom.
You can see the schedule via the `tedana` [google calendar](https://calendar.google.com/calendar/embed?src=pl6vb4t9fck3k6mdo2mok53iss%40group.calendar.google.com).
An agenda will be circulated in the gitter channel in advance of the meeting.

Everyone is welcome.
We look forward to meeting you there :hibiscus:

## Contributing through GitHub

@@ -56,6 +65,13 @@ is difficult to describe as one unit of work, please consider splitting it into
Issues are assigned [labels](#issue-labels) which explain how they relate to the overall project's
goals and immediate next steps.

Sometimes issues may not produce action items, and conversation will stall after a few months.
When this happens, they may be marked stale by [stale-bot][link_stale-bot],
and will be closed after a week unless there is more discussion.
This helps us keep the issue tracker organized.
Any new discussion on the issue will remove the `stale` label and prevent it from closing.
So, if there's a discussion you think is not yet resolved, please jump in!

* **Milestones** are the link between the issues and the high level strategy for the ``tedana`` project.
Contributors new and old are encouraged to take a look at the milestones to see how we are progressing
towards ``tedana``'s shared vision.
@@ -78,6 +94,11 @@ The current list of labels are [here][link_labels] and include:

If you feel that you can contribute to one of these issues, we especially encourage you to do so!

* [![Paused](https://img.shields.io/badge/-paused-%23ddcc5f.svg)][link_paused] *These issues should not be worked on until the resolution of other issues or Pull Requests.*

These are issues that are paused pending resolution of a related issue or Pull Request.
Please do not open any Pull Requests to resolve these issues.

* [![Bugs](https://img.shields.io/badge/-bugs-fc2929.svg)][link_bugs] *These issues point to problems in the project.*

If you find a new bug, please give as much detail as possible in your issue, including steps to recreate the error.
@@ -204,8 +225,10 @@ You're awesome. :wave::smiley:
[link_project_boards]: https://github.com/ME-ICA/tedana/projects
[link_gitter]: https://gitter.im/me-ica/tedana
[link_coc]: https://github.com/ME-ICA/tedana/blob/master/CODE_OF_CONDUCT.md
[link_stale-bot]: https://github.com/probot/stale

[link_labels]: https://github.com/ME-ICA/tedana/labels
[link_paused]: https://github.com/ME-ICA/tedana/labels/paused
[link_bugs]: https://github.com/ME-ICA/tedana/labels/bug
[link_helpwanted]: https://github.com/ME-ICA/tedana/labels/help%20wanted
[link_enhancement]: https://github.com/ME-ICA/tedana/labels/enhancement
9 changes: 7 additions & 2 deletions README.md
@@ -50,7 +50,7 @@ After installation, you can use the following commands to create an environment

```bash
conda create -n ENVIRONMENT_NAME python=3 pip mdp numpy scikit-learn scipy
source activate ENVIRONMENT_NAME
conda activate ENVIRONMENT_NAME
pip install nilearn nibabel
pip install tedana
```
@@ -61,9 +61,14 @@ This will also allow any previously existing tedana installations to remain unto
To exit this conda environment, use

```bash
source deactivate
conda deactivate
```

NOTE: Users of conda < 4.6 will need to use the soon-to-be-deprecated
`source` command rather than `conda` for the activation and deactivation steps.
You can read more about managing conda environments and this discrepancy
[here](https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html).

## Getting involved

We :yellow_heart: new contributors!
18 changes: 15 additions & 3 deletions docs/approach.rst
@@ -46,9 +46,9 @@ calculated below, each voxel's values are only calculated from the first :math:`
echoes, where :math:`n` is the value for that voxel in the adaptive mask.

.. note::
``tedana`` allows users to provide their own mask. The adaptive mask will
be computed on this explicit mask, and may reduce it further based on the
data.
``tedana`` allows users to provide their own mask.
The adaptive mask will be computed on this explicit mask, and may reduce
it further based on the data.
If a mask is not provided, ``tedana`` runs `nilearn.masking.compute_epi_mask`_
on the first echo's data to derive a mask prior to adaptive masking.
The workflow does this because the adaptive mask generation function
@@ -140,6 +140,17 @@ of the other echoes (which it is).
.. image:: /_static/10_optimal_combination_timeseries.png
:align: center

.. note::
An alternative method for optimal combination that
does not use :math:`T_{2}^*` is the parallel-acquired inhomogeneity
desensitized (PAID) ME-fMRI combination method (`Poser et al., 2006`_).
This method specifically assumes that noise in the acquired echoes is "isotropic and
homogeneous throughout the image," meaning it should be used on smoothed data.
As we do not recommend performing tedana denoising on smoothed data,
we discourage using PAID within the tedana workflow.
We do, however, make it accessible as an alternative combination method
in the t2smap workflow.
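As a rough sketch (not part of the official docs), the PAID weighting can be illustrated in NumPy. This assumes the common formulation in which each echo is weighted by its echo time multiplied by its temporal SNR; the helper name and array layout below are illustrative, not tedana's actual implementation:

```python
import numpy as np

def paid_combine(data, tes):
    # data: (voxels, echoes, time) array; tes: echo times in ms.
    tes = np.asarray(tes, dtype=float)
    # Temporal SNR per voxel and echo: mean over time / std over time.
    snr = data.mean(axis=-1) / data.std(axis=-1)          # (voxels, echoes)
    weights = tes[np.newaxis, :] * snr                    # PAID weight: TE * SNR
    weights /= weights.sum(axis=1, keepdims=True)         # normalize across echoes
    return (weights[..., np.newaxis] * data).sum(axis=1)  # (voxels, time)

# Tiny synthetic check with three echoes.
rng = np.random.default_rng(0)
data = rng.random((10, 3, 20)) + 1.0
combined = paid_combine(data, [15.4, 29.7, 44.0])
print(combined.shape)  # (10, 20)
```

Because the weights depend on temporal SNR, smoothing (which inflates SNR homogeneously) matters here in a way it does not for the :math:`T_{2}^*`-based weighting.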

TEDPCA
``````
The next step is to identify and temporarily remove Gaussian (thermal) noise
@@ -223,3 +234,4 @@ robust PCA.

.. _nilearn.masking.compute_epi_mask: https://nilearn.github.io/modules/generated/nilearn.masking.compute_epi_mask.html
.. _Power et al. (2018): http://www.pnas.org/content/early/2018/02/07/1720985115.short
.. _Poser et al., 2006: https://onlinelibrary.wiley.com/doi/full/10.1002/mrm.20900
2 changes: 1 addition & 1 deletion docs/conf.py
@@ -68,7 +68,7 @@

# General information about the project.
project = 'tedana'
copyright = '2017-2018, tedana developers'
copyright = '2017-2019, tedana developers'
author = 'tedana developers'

# The version info for the project you're documenting, acts as replacement for
83 changes: 76 additions & 7 deletions docs/multi-echo.rst
@@ -23,7 +23,78 @@ For a comprehensive review, see `Kundu et al. (2017)`_.

Why use multi-echo?
-------------------
ME-EPI exhibits higher SNR and improves statistical power of analyses.
There are many potential reasons an investigator would be interested in using multi-echo EPI (ME-EPI).
Among these are the different levels of analysis ME-EPI enables.
Specifically, by collecting multi-echo data, researchers are able to compare results for
(1) single-echo, (2) optimally combined, and (3) denoised data.
Each of these levels of analysis has its own advantages.

For single-echo: currently, field standards are largely set using single-echo EPI.
Because multi-echo is composed of multiple single-echo time series, each of these can be analyzed separately.
This allows researchers to benchmark their results.

For optimally combined: Rather than analyzing single-echo time series separately,
we can combine them into an "optimally combined time series".
For more information on this combination, see :ref:`approach`.
Optimally combined data exhibits higher SNR and improves statistical power of analyses in regions
traditionally affected by drop-out.

For denoised: Collecting multi-echo data allows access to unique denoising metrics.
``tedana`` is one ICA-based denoising pipeline built on this information.
Other ICA-based denoising methods like ICA-AROMA (`Pruim et al., 2015`_)
have been shown to significantly improve the quality of cleaned signal.

These methods, however, have comparatively limited information, as they are designed to work with single-echo EPI.
Collecting multi-echo EPI allows us to leverage all of the information available for single-echo datasets,
as well as additional information only available when looking at signal decay across multiple TEs.
We can use this information to denoise the optimally combined time series.

.. _Pruim et al., 2015: https://www.sciencedirect.com/science/article/pii/S1053811915001822

Acquisition Parameter Recommendations
-------------------------------------
There is no empirically tested best parameter set for multi-echo acquisition.
The guidelines for optimizing parameters are similar to single-echo fMRI.
For multi-echo fMRI, the same factors that may guide priorities for single echo
fMRI sequences are also relevant.
Choose sequence parameters that meet the priorities of a study with regards to spatial resolution,
spatial coverage, sample rate, signal-to-noise ratio, signal drop-out, distortion, and artifacts.

The one difference with multi-echo is a slight time cost.
For multi-echo fMRI, the shortest echo time (TE) is essentially free since it is collected in the
gap between the radio frequency (RF) pulse and the single-echo acquisition.
The second echo tends to roughly match the single-echo TE.
Additional echoes require more time.
For example, on a 3T MRI, if the T2* weighted TE is 30ms for single echo fMRI,
a multi-echo sequence may have TEs of 15.4, 29.7, and 44.0ms.
In this example, the extra 14ms of acquisition time per RF pulse is the cost of multi-echo fMRI.
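Written out explicitly (a back-of-the-envelope calculation from the numbers above, not an acquisition rule):

.. math::

   \text{extra time per RF pulse} \approx TE_{3} - TE_{\text{single}} = 44.0\ \text{ms} - 30\ \text{ms} = 14\ \text{ms}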

One way to think about this cost is in comparison to single-echo fMRI.
If a multi-echo sequence has identical spatial resolution and acceleration as a single-echo sequence,
then a rough rule of thumb is that the multi-echo sequence will have 10% fewer slices or 10% longer TR.
Instead of compromising on slice coverage or TR, one can increase acceleration.
If one increases acceleration, it is worth doing an empirical comparison to make sure there
isn't a non-trivial loss in SNR or an increase of artifacts.

A minimum of 3 echoes is recommended for running TE-dependent denoising.
While there are successful studies that don’t follow this rule,
it may be useful to have at least one echo that is earlier and one echo that is later than the
TE one would use for single-echo T2* weighted fMRI.

More than 3 echoes may be useful, because that would allow for more accurate
estimates of BOLD and non-BOLD weighted fluctuations, but more echoes have an
additional time cost, which would result in either less spatiotemporal coverage
or more acceleration.
Where the benefits of more echoes balance out the additional costs is an open research question.

We are not recommending specific parameter options at this time.
There are multiple ways to balance the slight time cost from the added echoes that have
resulted in research publications.
We suggest new multi-echo fMRI users examine the `spreadsheet`_ of journal articles that use
multi-echo fMRI to identify studies with similar acquisition priorities,
and use the parameters from those studies as a starting point.

.. _spreadsheet: https://docs.google.com/spreadsheets/d/1WERojJyxFoqcg_tndUm5Kj0H1UfUc9Ban0jFGGfPaBk/edit#gid=0

Resources
---------
@@ -44,22 +115,20 @@ Videos
.. _educational session from OHBM 2017: https://www.pathlms.com/ohbm/courses/5158/sections/7788/video_presentations/75977
.. _series of lectures from the OHBM 2017 multi-echo session: https://www.pathlms.com/ohbm/courses/5158/sections/7822

Sequences
*********
* Multi-echo sequences: who has them and how to get them.

Datasets
********
A small number of multi-echo datasets have been made public so far. This list is
not necessarily up-to-date, so please check out OpenNeuro to potentially
find more.
A number of multi-echo datasets have been made public so far.
This list is not necessarily up-to-date, so please check out OpenNeuro to potentially find more.

* `Multi-echo fMRI replication sample of autobiographical memory, prospection and theory of mind reasoning tasks`_
* `Multi-echo Cambridge`_
* `Multiband multi-echo imaging of simultaneous oxygenation and flow timeseries for resting state connectivity`_
* `Valence processing differs across stimulus modalities`_
* `Cambridge Centre for Ageing Neuroscience (Cam-CAN)`_

.. _Multi-echo fMRI replication sample of autobiographical memory, prospection and theory of mind reasoning tasks: https://openneuro.org/datasets/ds000210/
.. _Multi-echo Cambridge: https://openneuro.org/datasets/ds000258
.. _Multiband multi-echo imaging of simultaneous oxygenation and flow timeseries for resting state connectivity: https://openneuro.org/datasets/ds000254
.. _Valence processing differs across stimulus modalities: https://openneuro.org/datasets/ds001491
.. _Cambridge Centre for Ageing Neuroscience (Cam-CAN): https://camcan-archive.mrc-cbu.cam.ac.uk/dataaccess/
36 changes: 18 additions & 18 deletions docs/outputs.rst
@@ -129,24 +129,24 @@ P007 rejected Rho below fmin (only in stabilized PCA decision tree)

TEDICA codes
````````````
===== =============== ========================================================
Code Classification Description
===== =============== ========================================================
I001 rejected Manual exclusion
I002 rejected Rho greater than Kappa
I003 rejected More significant voxels in S0 model than R2 model
I004 rejected S0 Dice is higher than R2 Dice and high variance
explained
I005 rejected Noise F-value is higher than signal F-value and high
variance explained
I006 ignored No good components found
I007 rejected Mid-Kappa component
I008 ignored Low variance explained
I009 rejected Mid-Kappa artifact type A
I010 rejected Mid-Kappa artifact type B
I011 ignored ign_add0
I012 ignored ign_add1
===== =============== ========================================================
===== ================= ========================================================
Code Classification Description
===== ================= ========================================================
I001 rejected|accepted Manual classification
I002 rejected Rho greater than Kappa
I003 rejected More significant voxels in S0 model than R2 model
I004 rejected S0 Dice is higher than R2 Dice and high variance
explained
I005 rejected Noise F-value is higher than signal F-value and high
variance explained
I006 ignored No good components found
I007 rejected Mid-Kappa component
I008 ignored Low variance explained
I009 rejected Mid-Kappa artifact type A
I010 rejected Mid-Kappa artifact type B
I011 ignored ign_add0
I012 ignored ign_add1
===== ================= ========================================================

Visual reports
--------------
35 changes: 18 additions & 17 deletions tedana/combine.py
@@ -50,11 +50,13 @@ def _combine_t2s(data, tes, ft2s):


@due.dcite(Doi('10.1002/mrm.20900'),
description='STE method of combining data across echoes using just '
description='PAID method of combining data across echoes using just '
'SNR/signal and TE.')
def _combine_ste(data, tes):
def _combine_paid(data, tes):
"""
Combine data across echoes using SNR/signal and TE.
Combine data across echoes using SNR/signal and TE via the
parallel-acquired inhomogeneity desensitized (PAID) ME-fMRI combination
method.
Parameters
----------
@@ -90,9 +92,9 @@ def make_optcom(data, tes, mask, t2s=None, combmode='t2s', verbose=True):
t2s : (S [x T]) :obj:`numpy.ndarray` or None, optional
Estimated T2* values. Only required if combmode = 't2s'.
Default is None.
combmode : {'t2s', 'ste'}, optional
How to combine data. Either 'ste' or 't2s'. If 'ste', argument 't2s' is
not required. Default is 't2s'.
combmode : {'t2s', 'paid'}, optional
How to combine data. Either 'paid' or 't2s'. If 'paid', argument 't2s'
is not required. Default is 't2s'.
verbose : :obj:`bool`, optional
Whether to print status updates. Default is True.
@@ -127,32 +129,31 @@ def make_optcom(data, tes, mask, t2s=None, combmode='t2s', verbose=True):
'voxels/samples: {0} != {1}'.format(mask.shape[0],
data.shape[0]))

if combmode not in ['t2s', 'ste']:
raise ValueError("Argument 'combmode' must be either 't2s' or 'ste'")
if combmode not in ['t2s', 'paid']:
raise ValueError("Argument 'combmode' must be either 't2s' or 'paid'")
elif combmode == 't2s' and t2s is None:
raise ValueError("Argument 't2s' must be supplied if 'combmode' is "
"set to 't2s'.")
elif combmode == 'ste' and t2s is not None:
LGR.warning("Argument 't2s' is not required if 'combmode' is 'ste'. "
elif combmode == 'paid' and t2s is not None:
LGR.warning("Argument 't2s' is not required if 'combmode' is 'paid'. "
"'t2s' array will not be used.")

data = data[mask, :, :] # mask out empty voxels/samples
tes = np.array(tes)[np.newaxis, ...] # (1 x E) array_like

if t2s is not None:
if combmode == 'paid':
LGR.info('Optimally combining data with parallel-acquired inhomogeneity '
'desensitized (PAID) method')
combined = _combine_paid(data, tes)
else:
if t2s.ndim == 1:
msg = 'Optimally combining data with voxel-wise T2 estimates'
else:
msg = ('Optimally combining data with voxel- and volume-wise T2 '
'estimates')
t2s = t2s[mask, ..., np.newaxis] # mask out empty voxels/samples

if verbose:
LGR.info(msg)

if combmode == 'ste':
combined = _combine_ste(data, tes)
else:
LGR.info(msg)
combined = _combine_t2s(data, tes, t2s)

combined = unmask(combined, mask)
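For orientation, the T2*-weighted branch of this combination can be sketched in NumPy. This assumes the commonly cited weighting `w = TE * exp(-TE / T2*)`; the function below is an illustrative stand-in, not tedana's actual `_combine_t2s`:

```python
import numpy as np

def combine_t2s_sketch(data, tes, t2s):
    # data: (voxels, echoes, time); tes: (echoes,) echo times; t2s: (voxels,) T2* map.
    tes = np.asarray(tes, dtype=float)[np.newaxis, :]  # (1, echoes)
    # Per-voxel echo weights: w = TE * exp(-TE / T2*).
    alpha = tes * np.exp(-tes / t2s[:, np.newaxis])    # (voxels, echoes)
    alpha /= alpha.sum(axis=1, keepdims=True)          # normalize across echoes
    return (alpha[..., np.newaxis] * data).sum(axis=1) # (voxels, time)

rng = np.random.default_rng(1)
data = rng.random((5, 3, 8))
combined = combine_t2s_sketch(data, [15.4, 29.7, 44.0], np.full(5, 30.0))
print(combined.shape)  # (5, 8)
```

Later echoes get down-weighted in voxels with short T2* (fast signal decay), which is why this branch needs the `t2s` argument while the PAID branch does not.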
8 changes: 3 additions & 5 deletions tedana/decomposition/__init__.py
@@ -1,10 +1,8 @@
# emacs: -*- mode: python-mode; py-indent-offset: 4; tab-width: 4; indent-tabs-mode: nil -*-
# ex: set sts=4 ts=4 sw=4 et:

from .eigendecomp import (
tedpca, tedica,
)
from .pca import tedpca
from .ica import tedica


__all__ = [
'tedpca', 'tedica']
__all__ = ['tedpca', 'tedica']
