Run mri_glmfit on both groups separately, using a contrast to compute the mean 
of the group. This will create two directories, one for each group. In each 
directory, there will be a variance measure (rvar.mgh) and a mean of the 
contrast (contrast/gamma.mgh). You then need to compute Welch's t as
t = (gamma1 - gamma2) / sqrt(rvar1/N1 + rvar2/N2)
where N1 and N2 are the sample sizes for each group. You can implement this in 
MATLAB or from the command line. In MATLAB, you would run
% N1 and N2 are the group sample sizes; set them before running
rvar1  = MRIread('group1/rvar.mgh');
gamma1 = MRIread('group1/contrast/gamma.mgh');
rvar2  = MRIread('group2/rvar.mgh');
gamma2 = MRIread('group2/contrast/gamma.mgh');
t = gamma1;  % copy one input struct so the output keeps its header geometry
t.vol = (gamma1.vol - gamma2.vol) ./ sqrt(rvar1.vol/N1 + rvar2.vol/N2);
MRIwrite(t, 'welchst.mgh');
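Note that for p-values you also need the Welch-Satterthwaite degrees of freedom, which the snippet above does not compute. A minimal pure-Python sketch of the per-voxel arithmetic (the function names are illustrative, not part of FreeSurfer):

```python
import math

def welch_t(mean1, var1, n1, mean2, var2, n2):
    # Difference of group means over the unpooled standard error
    return (mean1 - mean2) / math.sqrt(var1 / n1 + var2 / n2)

def welch_df(var1, n1, var2, n2):
    # Welch-Satterthwaite approximation to the degrees of freedom,
    # needed to convert the t value to a p-value
    a, b = var1 / n1, var2 / n2
    return (a + b) ** 2 / (a ** 2 / (n1 - 1) + b ** 2 / (n2 - 1))
```

The degrees of freedom vary voxel by voxel (they depend on the two variances), always falling between min(N1, N2) - 1 and N1 + N2 - 2.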



On 3/15/19 3:24 PM, Hannah CK wrote:

        External Email - Use Caution

Hi Doug,

We actually do want the voxel-by-voxel results. Could you please give details 
on how we would do that?  Specifically, would we need to use the command line 
to run the Welch's T/F or would we extract values for each voxel and put them 
into another software to conduct the Welch's T/F?

Thank you again.


Greve, Douglas N., Ph.D.    Thu, 14 Mar 2019 08:45:56 -0700

You can use mri_segstats to get whole-cortex means (use --slabel and specify 
the lh.cortex.label and add --id 1). You can do the same thing with the 
variances. The problem with doing it after mri_glmfit is that the variances 
will not reflect the spatial averaging over cortex. If you really just want one 
number for cortex, you should use mri_segstats to extract the mean values from 
the stack (ie, the --y input), using --avgwf. This will give you a column of 
numbers, one mean for each subject. Then you can process this however you want. 
When I first responded, I thought that you wanted to do this on a voxel-wise 
basis.
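The per-subject means that --avgwf writes out (one number per line) can be fed into any Welch's t implementation. A pure-Python sketch; the file name means.txt and the assumption that the first N1 rows belong to group 1 are hypothetical, so adjust to your stack order:

```python
import math
import statistics

def welch_t_from_samples(x, y):
    # Welch's t on two lists of per-subject means (no equal-variance assumption)
    n1, n2 = len(x), len(y)
    v1, v2 = statistics.variance(x), statistics.variance(y)  # sample variances
    se = math.sqrt(v1 / n1 + v2 / n2)
    return (statistics.mean(x) - statistics.mean(y)) / se

# Hypothetical usage with the --avgwf output file:
# with open('means.txt') as f:
#     vals = [float(line) for line in f]
# t = welch_t_from_samples(vals[:N1], vals[N1:])
```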
doug



On Thu, Mar 14, 2019 at 10:10 AM Hannah CK 
<hckal...@gmail.com> wrote:

Hi Dr. Greve,


Sorry about that - I had followed the "Reply via email" link on the archive 
page instead of posting to the forum directly.


Thank you for this response. Can extracting the means be done at the 
whole-brain level? If we get the means and variances for each group, I'm 
assuming we would need to use ROIs. Could you please clarify whether it's 
possible to extract means across the whole brain?


Thank you,
Hannah









Re: [Freesurfer] Welch's t-test for HOV violation
2019-03-13 Thread Greve, Douglas N., Ph.D.
Hi Hannah, please include the previous correspondence so that we have context.
Also, please remember to post to the list and not to us personally.
thanks!
doug






On 3/13/19 10:52 PM, Hannah CK wrote:


Hi Dr. Greve,

Thank you for this response. Can extracting the means be done at the
whole-brain level? If we get the means and variances for each group, I'm
assuming we would need to use ROIs. Could you please clarify whether it's
possible to extract means across the whole brain?

Thank you,
Hannah





_______________________________________________
Freesurfer mailing list
Freesurfer@nmr.mgh.harvard.edu
https://mail.nmr.mgh.harvard.edu/mailman/listinfo/freesurfer



Re: [Freesurfer] Welch's t-test for HOV violation
2019-03-11 Thread Greve, Douglas N., Ph.D.



I used to have this about 20 years ago, but I stopped supporting it when it did 
not fit cleanly into the GLM ... The shortest route is probably to do two 
separate analyses with mri_glmfit, one for each group. This will output means 
and variances for each group. Then use fscalc to compute maps of Welch's t.



On 3/11/19 1:28 PM, Hannah CK wrote:
>
> We are conducting t-test comparisons across groups in which there is a
> large discrepancy in group sizes. HOV is violated. I've searched for
> ways to run Welch's t-test in FreeSurfer (or a similar analysis that
> does not assume homogeneity of variance) but am not finding one.
>
> Could anyone please advise on how to do this?
>
> Thank you.



