On 09/06/2010 16:49, James Holton wrote:
Operationally, I recommend treating anisotropic data just like isotropic data. There is nothing wrong with measuring a lot of zeros (think about systematic absences), other than making irrelevant statistics like Rmerge higher. One need only glance at the formula for any R factor to see that it is undefined when the "true" F is zero.

Unfortunately, there are still a lot of reviewers out there who were trained that "the Rmerge in the outermost resolution bin must be 20%", and so some very sophisticated ellipsoidal cut-off programs have been written to try to meet this criterion without throwing away good data. I am actually not sure where this idea came from, but I challenge anyone to come up with a sound statistical basis for it. Better to use I/sigma(I) as a guide, as it really does tell you how much "information vs noise" you have at a given resolution.
So, if my outer shell has 10% of reflections with I/sigI > 10 and 90% with I/sigI = 1, will Mean(I/sigI) for that shell tend to 10 or to 1?
Presumably I'm calculating it wrong in my simulation (very naive: I took the average of all the individual I/sigI values), because for me it tends to 1.
But if I did get it right, then how does Mean(I/sigI) tell me that 10% of my observations have good signal?
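For what it's worth, a straight average over that mixture should not tend to 1: arithmetically it tends to 0.1 x 10 + 0.9 x 1 = 1.9. Below is a minimal NumPy sketch of the kind of simulation described above, under assumptions I am supplying myself (Gaussian noise, unit sigma for every reflection; all names are illustrative, not from any crystallographic package):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # number of simulated reflections in the shell

# True signal-to-noise ratios: 10% strong (I/sig = 10), 90% weak (I/sig = 1)
true_ratio = np.where(rng.random(n) < 0.1, 10.0, 1.0)

# Simulated noisy intensities, assuming every reflection has sigma = 1
sigma = 1.0
i_obs = true_ratio * sigma + rng.normal(0.0, sigma, size=n)

# The "very naive" statistic: average of all individual I/sigI values
mean_ratio = np.mean(i_obs / sigma)
print(f"Mean(I/sigI) = {mean_ratio:.2f}")  # tends to 0.1*10 + 0.9*1 = 1.9
```

So if a simulation like this tends to 1 rather than about 1.9, something other than the straight mean is being computed; and either way, the mean alone does not distinguish "10% strong, 90% noise" from "everything uniformly weak at 1.9".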
phx.