Once upon a time, it was customary to apply a 3-sigma cutoff to each and
every spot observation, and I believe this was the era when the "~35%
Rmerge in the outermost bin" rule was conceived, alongside the "80%
completeness" rule. Together, these actually do make a "reasonable"
two-pronged criterion for the resolution limit.
Now, by "reasonable" I don't mean "true", just that there is "reasoning"
behind it. If you are applying a 3-sigma cutoff to spots, then the
expected error per spot is not more than ~33%, so if Rmerge is much
bigger than that, then there is something "funny" going on: perhaps a
violation of the chosen space group symmetry (which may only show up at
high resolution), radiation damage, non-isomorphism, bad absorption
corrections, crystal slippage, or any of a myriad of other "scaling
problems". Rmerge became a popular statistic because it proved to be a
good way of detecting problems like these in data processing.
Fundamentally, if you have done the scaling properly, then Rmerge/Rmeas
should not be worse than the expected error of a single spot
measurement. This is either the error expected from counting statistics
(at most ~33% if you are using a 3-sigma cutoff) or the calibration
error of the instrument (~5% on a bad day, ~2% on a good one), whichever
is bigger.
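For concreteness, the statistic in question is
Rmerge = sum_hkl sum_i |I_i - <I>| / sum_hkl sum_i I_i, and Rmeas is the
same thing with a sqrt(n/(n-1)) multiplicity correction applied to each
group in the numerator (Diederichs & Karplus, 1997). Here is a minimal
Python sketch of the calculation (my own toy illustration, not any
particular program's code), showing that with nothing but counting noise
the R factors land near the expected per-spot error:

    import numpy as np

    def rmerge_rmeas(groups):
        """Rmerge and Rmeas over a list of arrays, one array of
        symmetry-equivalent intensity observations per unique reflection."""
        num_merge = num_meas = denom = 0.0
        for I in groups:
            I = np.asarray(I, dtype=float)
            n = len(I)
            if n < 2:
                continue                      # agreement needs multiplicity > 1
            dev = np.abs(I - I.mean()).sum()  # sum_i |I_i - <I>|
            num_merge += dev
            num_meas  += np.sqrt(n / (n - 1.0)) * dev
            denom     += I.sum()
        return num_merge / denom, num_meas / denom

    # One strong reflection (~2500 photons) measured 4 times with only
    # counting noise: both R factors come out a little under the
    # sigma/I = 1/sqrt(2500) = 2% counting error, because mean absolute
    # deviations run about 0.8 of the corresponding sigma.
    rng = np.random.default_rng(0)
    obs = [rng.poisson(2500.0, size=4).astype(float)]
    print(rmerge_rmeas(obs))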
As for completeness, 80% overall is about the bare minimum of what you
can get away with before the map starts to change noticeably (see my
movie at http://bl831.als.lbl.gov/~jamesh/movies/index.html#completeness).
So I imagine this "80% rule" just got extended to the outermost bin.
After all, claiming a given resolution when you've only got 50% of the
spots at that resolution seems unwarranted, but requiring 100%
completeness seems a little too strict.
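Completeness itself, by contrast, is simple bookkeeping; a trivial
sketch with made-up numbers, just to pin the definition down:

    # Made-up numbers: suppose the outer shell could contain 1850 unique
    # reflections for this space group and cell, but only 930 were measured.
    theoretical_unique = 1850
    observed_unique = 930
    print(f"outer-shell completeness: "
          f"{100.0 * observed_unique / theoretical_unique:.0f}%")
    # -> about 50%, the sort of shell where claiming that resolution
    #    starts to look unwarranted.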
Where did these rules come from? As I recall, I first read about them
in the manual for the "PROCESS" program that came with our R-axis IIc
X-ray system when I was in graduate school (ca. 1996). This program was
conveniently integrated into the data collection software on the
detector control computers: one was running VMS, and the "new" one was
an SGI. I imagine a few readers of this BB may have never heard of
"PROCESS", but it is listed as the "intensity integration software" for
at least a thousand PDB entries. Is there a reference for "PROCESS"?
Yes. In the literature it is almost always cited with: (Molecular
Structure Corporation, The Woodlands, TX). Do I still have a copy of
the manual? Uhh. No. In fact, the building that once contained it has
since been torn down. Good thing I kept my images!
Is this "35% Rmerge with a 3-sigma cutoff" method of determining the
resolution limit statistically valid? Yes! There are actually very
sound statistical reasons for it. Is the resolution cutoff obtained the
best one for maximum-likelihood refinement? Definitely not! Modern
refinement programs do benefit from weak data, and tossing it all out
messes up a number of things. Does including weak data make
Rmerge/Rmeas/Rpim and R/Rfree go up? Yes. Does this make them more
"honest"? No. It actually makes them less useful.
Remember, all R factors are measures of _relative_ error, so it is
important to ask the question: "Relative to what?". For
Rmerge, the "what" is the sum of all the spot intensities (Blundell and
Johnson, 1976). Where you run into problems is when you restrict the
Rmerge calculation to a single resolution bin. If the sum of all
intensities in the bin is actually zero, then Rmerge is undefined
(dividing by zero). If the signal-to-noise ratio is ~1, then the Rmerge
equation doesn't "blow up" mathematically, but it does give essentially
random results. This is because Rmerge values for data this weak take
on a Cauchy-like distribution, which has no well-defined mean: no matter
how much averaging you do, the result stays essentially random. You can
see in the
classic Weiss & Hilgenfeld (1997) paper that they had to use "outlier
rejection" with their fake data to get Rmerge to behave even with a
signal-to-noise ratio of 2. The "turn-over point" where the Rmerge
equation becomes mathematically well-behaved (Gaussian rather than
Cauchy distribution) is when the signal-to-noise ratio is about 3. I
believe this is why our forefathers used a 3-sigma cutoff.
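If you want to see this Cauchy behaviour for yourself, here is a toy
Python simulation (the Gaussian noise model and all the numbers are just
my assumptions for illustration). Each reflection is measured twice, so
its Rmerge contribution is |I1 - I2|/(I1 + I2): a ratio of Gaussian
variables whose denominator can wander arbitrarily close to zero once
I/sigma drops toward 1:

    import numpy as np

    rng = np.random.default_rng(1)

    def per_reflection_rmerge(i_over_sigma, n=10000):
        """Simulate n reflections, each measured twice with unit Gaussian
        noise, and return the per-reflection Rmerge = |I1 - I2|/(I1 + I2)."""
        I1 = i_over_sigma + rng.standard_normal(n)
        I2 = i_over_sigma + rng.standard_normal(n)
        return np.abs(I1 - I2) / (I1 + I2)   # denominator can pass through zero

    for snr in (10, 3, 1):
        r = per_reflection_rmerge(snr)
        # A heavy-tailed (Cauchy-like) distribution has a sensible median,
        # but its sample mean is dominated by a few enormous outliers.
        print(f"I/sigma = {snr:>2}:  median {np.median(r):.2f}   "
              f"mean {r.mean():.1f}   worst {np.abs(r).max():.0f}")

The exact numbers depend on the random seed, but the pattern does not:
around I/sigma of 3 the ratio is still well-behaved, while by I/sigma of
1 it is dominated by outliers from near-zero denominators, which is the
same pathology a whole weak bin suffers when its summed intensity
approaches zero.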
Now, a 3-sigma cutoff on the raw observation data may sound like heresy
today, and I do NOT recommend feeding sigma-cut data to refinement or
other downstream programs. But it is important to remember what you are
trying to measure! If you are trying to detect scaling errors, then you
should be looking at spots where scaling errors are not masked by other
kinds of error. For example, a spot with only one photon in it is not
going to tell you very much about the accuracy of your scales, but its
average |delta-intensity|/intensity is going to be huge. That is, the
pre-R-factor sigma cutoff restricts the R-factor calculation to spots
whose errors are dominated by scaling rather than by counting
statistics. Including weaker data, with their Cauchy-distributed R
factors, simply adds noise to the value of the R factor itself.
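To put a number on that, here is the same sort of toy model (Poisson
counting noise plus a per-image fractional scale error; the specific
numbers are again just assumptions for illustration). The per-spot R
factor of strong spots tracks the quality of the scaling, while that of
one-photon spots is enormous no matter how good or bad the scaling is:

    import numpy as np

    rng = np.random.default_rng(3)

    def mean_spot_rfactor(true_photons, scale_error, n_spots=20000):
        """Average per-spot R factor |I1 - I2|/(I1 + I2) for pairs of
        equivalent observations of a given strength, under a given
        fractional scaling (calibration) error."""
        lam = true_photons * (1.0 + scale_error * rng.standard_normal((n_spots, 2)))
        I = rng.poisson(np.clip(lam, 0.0, None)).astype(float)  # counting noise
        num = np.abs(I[:, 0] - I[:, 1])
        den = I[:, 0] + I[:, 1] + 1e-9     # tiny offset avoids 0/0 for empty spots
        return (num / den).mean()

    for scale_err in (0.02, 0.20):         # well-scaled vs badly-scaled data
        strong = mean_spot_rfactor(1000.0, scale_err)  # ~1000-photon spots
        weak = mean_spot_rfactor(1.0, scale_err)       # ~1-photon spots
        print(f"scale error {scale_err:.0%}:  "
              f"strong spots {strong:.2f}   weak spots {weak:.2f}")

The strong-spot value changes markedly between the two scalings; the
one-photon value barely moves, because it is dominated by counting noise
and carries almost no information about the scales.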
So, I'd say if you have a reviewer complaining that your Rmerge in the
outermost bin is too high, simply tell the editor that you did not use a
3-sigma cutoff on the raw data for the Rmerge calculation, and ask if
he/she would prefer that you did.
-James Holton
MAD Scientist
On 1/27/2012 9:55 AM, Jacob Keller wrote:
Clarification: I did not mean I/sigma of 2 per se, I just meant
I/sigma is more directly a measure of signal than R values.
JPK
On Fri, Jan 27, 2012 at 11:47 AM, Jacob Keller
<j-kell...@fsm.northwestern.edu> wrote:
Dear Crystallographers,
I cannot think why any of the various flavors of Rmerge/meas/pim
should be used as a data cutoff and not simply I/sigma--can somebody
make a good argument or point me to a good reference? My thinking is
that signal:noise of >2 is definitely still signal, no matter what the
R values are. Am I wrong? I was thinking also possibly the R value
cutoff was a historical accident/expedient from when one tried to
limit the amount of data in the face of limited computational
power--true? So perhaps now, when the computers are so much more
powerful, we have the luxury of including more weak data?
JPK
--
*******************************************
Jacob Pearson Keller
Northwestern University
Medical Scientist Training Program
email:j-kell...@northwestern.edu
*******************************************