Hi Martin,
we're looking into it - it's probably a bug in recon-all.
Bruce
On Mon, 21 Nov 2005, Martin Ystad wrote:
I'm working with the latest development release for RH 9, and I'm
having problems with the normalization procedures on my datasets.
My datasets have large intensity inhomogeneities because they were
acquired with a surface coil (8-channel GE).
To fix this problem I have resorted to using control points. This
works if I add enough points across the brain (over 200): I get a
better normalization result after running "recon-all -normalization
-usecontrolpoints" and then another skull strip on the new T1 volume.
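For reference, the commands for that step look roughly like this (flags exactly as quoted above; <subj> is a placeholder for my subject ID):

  # re-run intensity normalization using the manually added control points
  recon-all -normalization -usecontrolpoints -s <subj>

  # re-run skull stripping on the newly normalized T1 volume
  recon-all -skullstrip -s <subj>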
To test whether or not this produces a good segmentation of white
matter, I run mri_segment on my brain.mgz volume. If I'm satisfied with
the segmentation result, I move on to the surface processing stage and
run the -autorecon2 script.
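Concretely, the check and the surface stage look roughly like this (the output path is just an example I use for the test segmentation):

  # quick white-matter segmentation check on the normalized brain volume
  mri_segment $SUBJECTS_DIR/<subj>/mri/brain.mgz /tmp/wm_check.mgz

  # surface processing stage
  recon-all -autorecon2 -s <subj>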
The problem is that after running -autorecon2, the normalization is
completely wrong again, looking more like the first normalization done
without control points, and the wm.mgz volume is also bad, as are the
surfaces.
How do I make FreeSurfer produce the same good results I got from
the first normalization? Do I need to specify the use of control points
again, even though the brain.mgz volume looks OK?
Thanks,
Martin Ystad
Medical Student
University of Bergen
Institute of Biomedicine
Jonas Lies vei 91, 5009
Bergen, Norway.
_______________________________________________
Freesurfer mailing list
Freesurfer@nmr.mgh.harvard.edu
https://mail.nmr.mgh.harvard.edu/mailman/listinfo/freesurfer