
Is the ARI method too strict? #5

Open
finch-f opened this issue Oct 8, 2023 · 5 comments
Labels
enhancement New feature or request

Comments

@finch-f

finch-f commented Oct 8, 2023

Hi, I really appreciate this package, as it addresses a question in my study. I am wondering if the ARI method is too strict. After using this package on my data, all clusters (obtained in a traditional cluster-based permutation test) were gone. I just want to make sure this is a normal situation. The result can be seen in the plot:

[Screenshot 2023-10-08 102001: plot of 1 - p-value over time]

As you can see, there is a long interval where 1 - p-value is very close to 1. However, no clusters were found via mne-ari. Is this normal?

Thanks for your kind help.

@john-veillette
Owner

Hi @finch-f, thanks for opening an issue. ARI itself has similar statistical power to traditional cluster-based methods (for detecting TDP > 0, not necessarily TDP > .95). However, some of the current package defaults are not the most sensible and can hinder detection, which may be what's causing your problem -- though I can't know for sure without more details. I hope to iron these issues out in the next release, or at least make them more transparent to the user, but I've been a bit too busy to work on the package lately.

Specifically, the univariate test used by the all_resolutions_inference function is a permutation test by default, which I thought would be a good idea when I was initially writing the package so results would be robust against violations of parametric assumptions... but in retrospect this wasn't the best idea. The threshold for ARI detecting anything at all is the same as that for an FDR or Bonferroni correction (for the first detected true positive; the threshold then lowers for subsequent detections, as in FDR). That means that if you have 100 tests and want a false positive rate of 0.05, the adjusted alpha is 0.0005. The default number of permutations used by each univariate test when you run all_resolutions_inference is 10000, so the lowest p-value it can compute is ~0.0001. This is technically enough to find p < .0005, but it makes things harder.
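
To make that arithmetic concrete, here's a minimal sketch (illustrative numbers only, not the package's internals):

```python
n_tests = 100                # e.g. one univariate test per time point
alpha = 0.05
adjusted_alpha = alpha / n_tests
print(adjusted_alpha)        # 0.0005: the bar for the first detection

n_permutations = 10000            # the package default
min_p = 1 / (n_permutations + 1)  # smallest p a permutation test can return
print(min_p)                      # ~0.0001, only just below 0.0005
```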

So you'll probably want to do one of the following, and then I'd appreciate it if you'd report back what worked for you :)

  • Try again with a larger n_permutations argument
  • Use the statfun argument to supply a parametric t-test as the univariate-level test, as in the "Defining Custom Statistics Functions" section of the tutorial, instead of the default permutation t-test (see the sketch after this list).
  • Use permutation ARI by setting ari_type = 'permutation'.
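
Here's a minimal sketch of the second option, assuming a statfun that takes an (observations x times) array and returns one p-value per time point; parametric_ttest is a hypothetical name, and popmean=0 is a placeholder for whatever your chance/baseline level is:

```python
from scipy.stats import ttest_1samp
from mne_ari import all_resolutions_inference

def parametric_ttest(data):
    # One-sample t-test at each time point (tests run down axis 0,
    # so an (observations x times) array yields one p-value per time point)
    return ttest_1samp(data, popmean=0, alternative='greater').pvalue

# X is your (observations x times) data array
p_vals, tdp, clusters = all_resolutions_inference(
    X, alpha=0.05,
    ari_type='parametric',
    statfun=parametric_ttest
)
```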

@finch-f
Author

finch-f commented Oct 21, 2023

Hi John Veillette, thank you so much for your detailed reply. I finally got time for my data analysis and found a stable way to log into GitHub. I must apologize for the insufficient information I provided last time.

Anyway, I followed your suggestions to see what happened. The following is my code for a permutation type of ARI.

```python
import numpy as np
import scipy.io
from mne_ari import all_resolutions_inference

np.random.seed(0)

results = scipy.io.loadmat(r"F:\MEG\MEG CLEAN DATA\group_result.mat")
data = results['res_group_condition'][0]
time = results['res_group_condition'][1]

X = data
print(X.shape)  # observations x times

alpha = .05
p_vals, tdp, clusters = all_resolutions_inference(
    X, alpha=alpha, tail=1, ari_type='permutation', n_permutations=100000
)
```

Increasing n_permutations to 100000 and setting ari_type = 'permutation' produced a similar result to the previous version.

[Plot: permutation ARI result]

So I changed to a non-permutation type of ARI. Here is my code:

```python
import numpy as np
import scipy.io
from scipy.stats import ttest_1samp, wilcoxon
from mne_ari import all_resolutions_inference

condition = 'condition_A'
chance_level = 0.2

def one_sample_ttest(data):
    res = ttest_1samp(data, popmean=chance_level, alternative='greater')
    return res.pvalue

def non_param_one_sample_ttest(data):
    data = data - chance_level
    res = wilcoxon(data, alternative='greater')
    return res.pvalue

np.random.seed(0)

results = scipy.io.loadmat(r'F:\MEG\MEG CLEAN DATA\group_result.mat')
data = results['res_group_condition'][0]
time = results['res_group_condition'][1]

X = data
print(X.shape)  # observations x times
print(X)

alpha = .05
p_vals, tdp, clusters = all_resolutions_inference(
    X, alpha,
    ari_type='parametric',
    statfun=non_param_one_sample_ttest
)
```

Now the results seemed reasonable:

[Plot: parametric ARI result]

My main concern is which one is the better approach for ARI. Although the second one produced the desired results, would it be too lenient? In fact, it produced a result similar to the cluster-based permutation test, which is criticized for generating too broad an area of results. So can I use it in my paper?

By the way, I have one small question about ARI. Since ARI makes inference at the voxel/time-point level possible, can I use it to determine the onset of an effect in EEG/MEG analysis? I know this is not a question about the mne-ari package, but as a beginner in ARI, I really want to hear your advice on this question.

Thanks again for your time and helpful suggestions.

@john-veillette
Owner

john-veillette commented Oct 23, 2023

No need to be concerned, I think. ARI should be a valid correction if the univariate test you're using is conservative -- that is, if the false positive rate is actually <= 0.05 when you use p <= 0.05 as your cutoff. The original paper introducing ARI used (I think) a parametric Z-test for its univariate test, since that's the norm (no pun intended) in the fMRI literature, but we EEG folk tend to stick with non-parametric tests. In that vein, the Wilcoxon test you define in non_param_one_sample_ttest is perfectly fine.
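
If you ever want to reassure yourself about a candidate statfun, a quick null simulation (a sketch with made-up Gaussian data, not part of the package) can check that its false positive rate stays at or below alpha:

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
alpha, n_sims, n_obs = 0.05, 2000, 30

false_positives = 0
for _ in range(n_sims):
    null_data = rng.normal(0, 1, n_obs)  # samples with no true effect
    if wilcoxon(null_data, alternative='greater').pvalue <= alpha:
        false_positives += 1

print(false_positives / n_sims)  # should be around or below 0.05
```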

You raise an interesting point about whether you should be troubled if you're getting similar results to a cluster-based permutation test. You're right that cluster-based permutation tests can overestimate the extent of an effect, but they don't always overestimate it. Sassenhagen and Draschkow estimate, by their simulation, about a 20% false positive rate for claims about effect onset -- that's worse than the target 0.05, but not 100%. ARI gives you a way of checking whether a particular cluster has a high enough proportion of true positive voxels/univariate tests to use the extent of the cluster as a proxy for the extent of the effect. When I repeat Sassenhagen and Draschkow's simulations using only the clusters that ARI estimates to have TDP > 0.95, claims about effect extent/onset are quite conservative.
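
As for your onset question, that filtering step might look something like this sketch, assuming tdp (from all_resolutions_inference, as in your code above) is a 1-D array with one TDP estimate per time point:

```python
import numpy as np

# tdp and time come from your earlier code (shapes assumed, not guaranteed)
high_tdp = tdp > 0.95                # points in clusters with TDP > 0.95
if high_tdp.any():
    onset_idx = np.argmax(high_tdp)  # index of the first such time point
    print('estimated effect onset:', time[onset_idx])
else:
    print('no time points with TDP > 0.95')
```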

Does that answer your questions?

@finch-f
Author

finch-f commented Oct 24, 2023

Hi @john-veillette, thank you very much for your insights and opinions! I have also carefully read the content in the link you provided, which deepened my understanding of ARI. They have been extremely helpful in my data analysis work, and I have benefited greatly from them. Thank you once again~

@john-veillette
Owner

Great, glad I could help. I'll leave this issue open, since it highlights the need for a better default univariate test -- probably Wilcoxon or similar -- for parametric ARI. (Of course, this has to wait until the next major/breaking release, since I don't want to interrupt anyone's current workflow.)

@john-veillette john-veillette added the enhancement New feature or request label Oct 24, 2023