I have a question about how the experimental sigmas are affected when one includes resolution shells containing mostly unobserved reflections. Does this vary with the data reduction software being used?

One thing I've noticed when scaling data (this is with d*TREK (CrystalClear), since it's the program I use most) is that the I/sigma(I) of reflections can change significantly when one changes the high-resolution cutoff.

If I set the detector so that its edge is about where I stop seeing reflections and integrate to the corner of the detector, I'll get a dataset where I/sigma(I) is really compressed: there is a lot of high-resolution data with I/sigma(I) of about 1, but the overall I/sigma(I) for the lowest-resolution shell will be maybe 8-9. If the dataset is instead cut off at a lower resolution (where I/sigma(I) in the highest shell is about 2) and scaled, I/sigma(I) in the lowest-resolution shell will be maybe 20 or even higher. (Granted, the lowest shell covers a different resolution range in the two cases, but the trend holds if I look at individual reflections.)

Since maximum-likelihood refinement uses the sigmas for weighting, this must affect the refinement. My experience is that interpretation of the maps is easier when the cut-off datasets are used (refinement is via Refmac5 or SHELX). Also, I'm mostly talking about datasets from well-diffracting crystals (better than 2 A).
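To make the effect concrete, here is a minimal sketch on synthetic data. The error-model form (an SDFAC/SDADD-style correction, sigma'^2 = (sdfac*sigma)^2 + (sdadd*I)^2) and all the numbers are my own illustrative assumptions, not output from d*TREK; the point is only that refitting a more inflated error model when many weak high-resolution reflections are included compresses I/sigma(I) for the same low-resolution reflections.

```python
import numpy as np

# Synthetic reflection list (assumed, for illustration only).
rng = np.random.default_rng(0)
n = 10000
d = rng.uniform(1.5, 20.0, n)            # resolution of each reflection (A)
I = 1000.0 * np.exp(-8.0 / d**2)         # intensities fall off with resolution
sigma_raw = np.sqrt(I + 25.0)            # counting-statistics-like sigma

def low_shell_i_over_sigma(sdfac, sdadd):
    """Mean I/sigma(I) in the lowest-resolution shell (here d >= 6 A),
    after applying an SDFAC/SDADD-style error model to the raw sigmas."""
    sig = np.sqrt((sdfac * sigma_raw) ** 2 + (sdadd * I) ** 2)
    low = d >= 6.0
    return float((I[low] / sig[low]).mean())

# Scaling with the corners included tends to refine a more inflated error
# model; scaling the cut dataset leaves sigmas closer to raw. Same
# low-resolution reflections, quite different I/sigma(I):
compressed = low_shell_i_over_sigma(sdfac=1.5, sdadd=0.05)   # corners included
uncompressed = low_shell_i_over_sigma(sdfac=1.0, sdadd=0.0)  # cut at ~2 A
print(round(compressed, 1), round(uncompressed, 1))
```

With these made-up parameters the low-resolution shell lands near the "8-9 vs 20+" contrast described above; the exact numbers depend entirely on the assumed error model.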

Sue


On Mar 22, 2007, at 2:29 AM, Eleanor Dodson wrote:

I feel that is rather severe for ML refinement - it sometimes helps, for instance, to use all the data from the images, integrating right into the corners, thus getting a very incomplete set for the highest resolution shell. But for experimental phasing it does not help to have many, many weak reflections.

Is there any way of testing this, though? The only way I can think of is to refine against a poorer set with varying protocols, then improve the crystals/data and see which protocol for the poorer data gave the best agreement in the model comparison.

And even that is not decisive - presumably the data would have come from different crystals, with perhaps small differences between the models...
Eleanor



Shane Atwell wrote:

Could someone point me to some standards for data quality, especially for publishing structures? I'm wondering in particular about highest-shell completeness, multiplicity, sigma and Rmerge.

A co-worker pointed me to a '97 article by Kleywegt and Jones:

http://xray.bmc.uu.se/gerard/gmrp/gmrp.html

"To decide at which shell to cut off the resolution, we nowadays tend to use the following criteria for the highest shell: completeness > 80 %, multiplicity > 2, more than 60 % of the reflections with I > 3 sigma(I), and Rmerge < 40 %. In our opinion, it is better to have a good 1.8 Å structure, than a poor 1.637 Å structure."

Are these recommendations still valid with maximum-likelihood methods? We tend to use more data, especially in terms of the Rmerge and sigma cutoffs.

Thanks in advance,

*Shane Atwell*


Sue Roberts
Biochemistry & Biophysics
University of Arizona

[EMAIL PROTECTED]
