
How should negative component weights be treated? #318

Open
tsalo opened this issue May 30, 2019 · 14 comments
Labels
question issues detailing questions about the project or its direction TE-dependence issues related to TE dependence metrics and component selection

Comments

@tsalo
Member

tsalo commented May 30, 2019

Summary

In dependence_metrics, we compute component-wise metrics using weighted averages, where the weights are determined by how strongly the component loads on a given voxel. However, we treat negative weights the same as positive weights, which does not make sense to me. In fMRI data, components should be signed (i.e., a given signal over time and its opposite form should not be treated equivalently), which is indeed why we attempt to identify the right signs for components (as detailed in #316). Therefore, does it make sense to treat voxels loading highly negatively on a component the same as voxels loading highly positively when computing brain-wide metrics?

Additional Detail

Here is the code where we compute the weighting map used for the weighted averages:

tedana/tedana/model/fit.py

Lines 184 to 187 in 65f89e1

norm_weights = np.abs(np.squeeze(
    utils.unmask(wtsZ, mask)[t2s != 0]**2.))
kappas[i_comp] = np.average(F_R2, weights=norm_weights)
rhos[i_comp] = np.average(F_S0, weights=norm_weights)

We square the weight maps so that negative and positive weights are treated equally. Should we instead zero out those negatively weighted voxels when calculating the metrics?
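To make the difference concrete, here is a minimal sketch of the two weighting schemes on a toy example (the arrays and values are made up for illustration; `wtsZ` and `F_R2` mirror the variable names in the snippet above):

```python
import numpy as np

# Toy example: Z-scored component weights across 6 voxels and the
# corresponding per-voxel F-statistics for the R2* model.
wtsZ = np.array([2.0, -2.0, 0.5, -0.5, 1.0, 0.0])
F_R2 = np.array([10.0, 1.0, 5.0, 5.0, 8.0, 3.0])

# Current behavior: squaring treats +2 and -2 identically.
squared_weights = wtsZ ** 2
kappa_squared = np.average(F_R2, weights=squared_weights)

# Proposed alternative: zero out negatively weighted voxels, so only
# positively loading voxels contribute to the weighted average.
zeroed_weights = np.where(wtsZ > 0, wtsZ ** 2, 0.0)
kappa_zeroed = np.average(F_R2, weights=zeroed_weights)
```

In this toy case the two schemes give different kappa values, because the negatively weighted voxels happen to have low F-statistics and drag the squared-weight average down.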

@tsalo tsalo added the question issues detailing questions about the project or its direction label May 30, 2019
@jbteves
Collaborator

jbteves commented Jun 1, 2019

I'm not sure what the right solution is here. Since you're the one who looked into it, do you happen to know if there are any patterns in the negatively weighted voxels?

@tsalo
Member Author

tsalo commented Jun 7, 2019

For the most part, negative weights don't seem to have much structure, but in the global signal components both positive and negative weights seem to follow patterns. It's really hard to know for sure based on my own anecdotal evidence. However, I discussed this a bit with @smoia, and he really doesn't think it should matter; most fMRI ICA methods don't even seem to care about the directionality of components (except MELODIC, apparently). Regardless, we can at least fix what is definitely a bug: positively and negatively weighted voxels are treated the same when performing cluster-extent thresholding. I've done this in #331.

@emdupre
Member

emdupre commented Jun 19, 2019

I'm not so sure this is a bug... Since the ICA signs are arbitrary, it really is just detecting that there is some signal there. It's safe to assume they come from different components, but I'm not sure it's a problem to have 2+ components in the same cluster. That could still happen if they're of the same sign, right?

@tsalo
Member Author

tsalo commented Jun 20, 2019

If the goal of component selection is to identify which components are S0-based and which are R2-based, and one of the metrics we use to separate noise from signal is how spatially clustered the voxels loading on each component are, then I think treating distinct sources of signal as the same is a problem. My intuition is that it's a small problem that won't affect things very much, since our minimum cluster size for cluster-extent thresholding is generally only about 20 voxels, but I think it must still be an issue.

@smoia
Collaborator

smoia commented Jun 27, 2019

@tsalo, I know I should know this and I'm sorry I don't, but can you explain this last problem a bit better?
Is the cluster threshold based only on cluster size? If so, then you might want to treat positive and negative values differently and make two cluster maps to threshold (at least, in my opinion).
Another solution might be applying TFCE or a similar cluster detection method, which I would prefer to a bare ">20 voxels" rule, given the dependency of the results on voxel size (20 voxels at 2 mm are not 20 voxels at 3 mm).

There might be another issue related to the sign of the components: whatever the sign convention is, I think it should be consistent across all components.
I'm not sure about this, but I'll try to explain the issue better: the problem arises if you flip one component but not all of them, and it might surface when you're reconstructing/denoising the volumes, especially if you use the components as nuisance regressors in an external GLM. I'm not sure the matrix products in those steps are robust to different flips in different components. Can someone comment on that? Maybe @CesarCaballeroGaudes or @javiergcas?

@tsalo
Member Author

tsalo commented Jun 27, 2019

@smoia The 20 voxel threshold is really np.max([int(n_voxels * 0.0005) + 5, 20]), but in practice (on our test datasets), that ends up being 20. That threshold should increase with smaller voxel sizes, although I agree that it's probably a sub-optimal method.
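For reference, a minimal sketch of that threshold as stated above (the function name is made up; only the formula comes from the comment):

```python
import numpy as np

def min_cluster_size(n_voxels):
    # Minimum cluster extent: at least 20 voxels, growing slowly with
    # the number of voxels in the mask (the formula quoted above).
    return np.max([int(n_voxels * 0.0005) + 5, 20])
```

For a mask of 20,000 voxels the floor of 20 wins; only above 30,000 voxels does the scaled term take over (e.g., 35 voxels at 60,000).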

The cluster-extent thresholding currently uses what AFNI calls "two-sided" thresholding (positive and negative voxels are combined to form clusters). I agree that they should be clustered separately, since they indicate different things (as implemented in #331).
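To illustrate the difference on a toy 1D "map" (the array is made up; real maps are 3D, but `scipy.ndimage.label` works the same way):

```python
import numpy as np
from scipy import ndimage

# Toy component map: a positive run adjacent to a negative run,
# plus an isolated positive voxel.
comp_map = np.array([0, 2, 2, -2, -2, -2, 0, 2, 0], dtype=float)

# Two-sided clustering: threshold on |value|, so the adjacent positive
# and negative runs merge into a single cluster.
labeled_two_sided, n_two_sided = ndimage.label(np.abs(comp_map) > 1)

# One-sided clustering on each sign separately keeps the anticorrelated
# sources apart.
labeled_pos, n_pos = ndimage.label(comp_map > 1)
labeled_neg, n_neg = ndimage.label(comp_map < -1)
```

Here two-sided thresholding finds 2 clusters, while sign-separated thresholding finds 3 (2 positive, 1 negative), because the touching positive and negative runs are no longer merged.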

I'll be interested to learn more about this potential reconstruction issue. It's not something I was aware of before.

@smoia
Collaborator

smoia commented Jun 27, 2019

Thank you for the clarification @tsalo! I would still change the threshold, but hey, no biggie.
On the other hand, where do you assign a sign to a component? When you do so, do you flip the whole matrix or just the single component? In the first case, there's really nothing to worry about. In the second case, when you reconstruct with the dot product you might end up with wrong reconstructed time series (in a dot product of two square matrices, every off-diagonal term is inverted). What do you think @eurunuela?

@tsalo
Member Author

tsalo commented Jun 27, 2019

We flip components separately. Since the components are initially arbitrarily signed, some might need to be flipped, while others might not, to best match the data.
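A toy sketch of why per-component flipping is safe, provided the spatial map and the time series are flipped together (shapes and variable names are hypothetical, not tedana's):

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(5, 3))   # spatial maps (voxels x components)
mmix = rng.normal(size=(10, 3))     # mixing matrix (time x components)

data = weights @ mmix.T             # reconstructed voxel time series

# Flip the sign of component 1 in BOTH the map and the time series.
flipped_weights = weights.copy()
flipped_mmix = mmix.copy()
flipped_weights[:, 1] *= -1
flipped_mmix[:, 1] *= -1

# The product is unchanged: (-w)(-m) = wm for that component.
assert np.allclose(data, flipped_weights @ flipped_mmix.T)
```

The two sign flips cancel in the outer-product reconstruction, so flipping some components but not others does not change the modeled data, as long as the flip is applied consistently to both factors.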

@eurunuela
Collaborator

@tsalo could you please tell us where this is done in the code?

@tsalo
Member Author

tsalo commented Jun 27, 2019

It's performed here:

tedana/tedana/model/fit.py

Lines 108 to 113 in 65f89e1

signs = stats.skew(WTS, axis=0)
signs /= np.abs(signs)
mmix = mmix.copy()
mmix *= signs
WTS *= signs
PSC *= signs
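To illustrate how this skew-based flip behaves, here is a self-contained sketch (the toy `WTS` matrix and sample sizes are made up; the `signs` logic follows the snippet above):

```python
import numpy as np
from scipy import stats

# Toy weight matrix (voxels x components): one right-skewed column and
# one left-skewed column.
rng = np.random.default_rng(1)
WTS = np.column_stack([
    rng.exponential(size=1000),    # positive skew -> sign stays +1
    -rng.exponential(size=1000),   # negative skew -> sign becomes -1
])

signs = stats.skew(WTS, axis=0)
signs /= np.abs(signs)             # reduce each skew to +/-1
WTS_flipped = WTS * signs          # every column now positively skewed
```

After the flip, every component's weight distribution is skewed towards positive values, which is the convention the quoted code enforces.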

@smoia
Collaborator

smoia commented Jun 27, 2019

Thank you @tsalo! @eurunuela and I are looking into it a bit too. We'll let you know if we have any news.

@CesarCaballeroGaudes
Contributor

Sorry for the delay in replying. It is perfectly appropriate to flip the sign of single components (for instance, so that the maximum absolute value of the weights becomes positive, or so that the histogram is shifted towards positive values). The reconstruction simply needs to take this into account, but in my opinion, since the reconstruction (or denoising) can simply be done by nulling the noise components to zero, the sign does not really matter because X*0 = 0, regardless of the sign of X. Hope this helps.
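A toy sketch of that X*0 = 0 argument: denoising by zeroing noise components is invariant to the sign of those components (shapes, variable names, and the noise-component index are all hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)
weights = rng.normal(size=(5, 3))   # spatial maps (voxels x components)
mmix = rng.normal(size=(10, 3))     # mixing matrix (time x components)
noise_comps = [1]                    # toy index of a "noise" component

def denoise(weights, mmix, noise_comps):
    # Denoise by nulling the noise components before reconstruction.
    w = weights.copy()
    w[:, noise_comps] = 0.0
    return w @ mmix.T

clean = denoise(weights, mmix, noise_comps)

# Flip the sign of the noise component in both factors: the denoised
# reconstruction is identical, because the nulled column contributes 0
# either way.
flipped_weights = weights.copy()
flipped_mmix = mmix.copy()
flipped_weights[:, 1] *= -1
flipped_mmix[:, 1] *= -1
assert np.allclose(clean, denoise(flipped_weights, flipped_mmix, noise_comps))
```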

@tsalo
Member Author

tsalo commented Jul 11, 2019

Since component flipping doesn't pose a problem for reconstruction, how do we feel about negatively weighted voxels? Is the consensus that it's okay to treat those voxels equivalently to positively weighted ones when calculating dependence metrics?

If so, I'd just like to make sure I know what the rationale for that is. Is it:

  1. The negative weights represent a different underlying signal than the positive ones, but the positively weighted signal will swamp the other signal, so the impact is negligible.
  2. The negative weights represent a different underlying signal than the positive ones, but there's no way to dissociate them and even just positive weights can represent multiple underlying signals that are also not dissociable.
    • Personally, I think that, if we can dissociate components into meaningful signals then we should. It's not possible to do that with multiple signals that are all positively weighted on voxels (i.e., the underlying signals are correlated), but it should be possible to do it when one is positively weighted and one is negatively weighted (i.e., the underlying signals are anticorrelated).
  3. Positive and negative weights represent the same underlying signal and are equally meaningful.
  4. Weight signs do matter, but we can't be sure enough about the best sign for a component (see #316: Concerns regarding optimal sign determination for components) to invest in positive weights specifically.

Once we have a consensus on the reason, I can close this issue and add an overall summary.

@tsalo tsalo added the TE-dependence issues related to TE dependence metrics and component selection label Oct 4, 2019
@stale stale bot added the stale label Jan 2, 2020
@jbteves
Collaborator

jbteves commented Jan 2, 2020

@smoia @eurunuela any further thoughts?

@stale stale bot removed the stale label Jan 2, 2020
@stale stale bot added the stale label Apr 1, 2020
@tsalo tsalo removed the stale label Feb 5, 2021
@ME-ICA ME-ICA deleted a comment from stale bot Jul 12, 2021
@ME-ICA ME-ICA deleted a comment from stale bot Jul 12, 2021
6 participants