Hi Clemens,

     Thank you for the clarification. I had thought you were
advocating a general low-resolution cutoff, with which I would
disagree. I spend a lot of time troubleshooting data collected and
processed by other people. Those are good reminders to go back and
check beamstop settings and overloaded spots.


Phil,

     When I asked what should be done with poorly measured data, I
was thinking specifically of what to do with overloads when a short
exposure pass wasn't collected. Exclusion gives you zeroes instead of
what should be the highest-intensity spots in your data set, but
inclusion could throw off the scaling, which would be worse. SCALA by
default rejects the overload intensities estimated by mosflm, which I
presume is for a very good reason. Would it be better to leave the
estimated intensities out of scale determination but keep them in the
data set?
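
Something along these lines is what I have in mind - just a toy
sketch, not how SCALA actually works, and the data layout and flag
names below are made up:

    # Toy sketch (not how SCALA actually works): determine per-batch
    # scales from the well-measured spots only, then apply them to every
    # observation, so the overload estimates stay in the data set but
    # never drive the scaling. Assumes every batch has at least one
    # well-measured spot.
    def scale_observations(observations):
        """observations: list of dicts with keys 'batch', 'intensity',
        'sigma' and 'is_overload_estimate' (all invented names)."""
        scales = {}
        for batch in {obs['batch'] for obs in observations}:
            good = [obs['intensity'] for obs in observations
                    if obs['batch'] == batch
                    and not obs['is_overload_estimate']]
            # crude per-batch scale: bring the mean of the well-measured
            # intensities to a common level
            scales[batch] = 1.0 / (sum(good) / len(good))
        for obs in observations:
            k = scales[obs['batch']]
            obs['intensity_scaled'] = k * obs['intensity']
            obs['sigma_scaled'] = k * obs['sigma']
        return observations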


ho
UC Berkeley

> ----------------------------------------------------------------------
>
> Date:    Mon, 16 Feb 2009 09:07:38 +0000
> From:    Clemens Vonrhein <vonrh...@globalphasing.com>
> Subject: Re: CCP4BB Digest - 12 Feb 2009 to 13 Feb 2009 (#2009-45)
>
> Dear Ho,
>
> On Fri, Feb 13, 2009 at 04:45:29PM -0800, Ho-Leung Ng wrote:
>>      Can you elaborate on the effects of improper inclusion of low
>> resolution (bogus?) reflections? Other than rejecting spots from
>> obvious artifacts, it bothers me to discard data. But I can also see
>> how a few inaccurate, very high intensity spots can throw off scaling.
>
> I completely agree: it also "bothers me to discard data". However, the
> crucial word here is 'data' - which is different from Miller indices
> HKL.
>
> So I am mainly concerned with two types of reflections (HKL) that
> aren't really 'data':
>
>  1) overloads
>
>     These are obviously not included in your final reflection file
>     (unless you explicitly tell the integration software to do that
>     - in which case you know exactly what you are doing anyway). So
>     there is no problem ... or is there?
>
>     There are only very few overloaded reflections, and they sit at
>     low resolution - and the most important reflections are obviously
>     the ones at 1.94A resolution, so that one can have a
>     'better-than-2A' structure in the end ... ;-) ... So still no
>     problem, right?
>
>     And who cares if the completeness of the data isn't 100% but
>     rather 99.4%? Exactly ... so where is the problem?
>
>     But: these few missing reflections are systematically the
>     strongest ones at low(ish) resolution, and any systematically
>     missing data is not a good thing to have.
>
>     Solution: if there is a substantial number of overloads, always
>     collect a low-intensity (short-exposure) pass to measure those
>     strong reflections properly.
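
A rough sketch of the two-pass idea - purely illustrative, the data
layout is made up, and real programs scale and merge the two passes
together rather than simply picking one measurement per reflection:

    # Toy two-pass combination: prefer the long-exposure measurement
    # unless it was overloaded, in which case fall back to the short
    # (low-intensity) pass.
    def combine_passes(long_pass, short_pass):
        """Both arguments map (h, k, l) -> (intensity, overloaded_flag)."""
        combined = {}
        for hkl, (i_long, overloaded) in long_pass.items():
            if not overloaded:
                combined[hkl] = i_long
            elif hkl in short_pass:
                # strong spot, measured cleanly in the short pass
                combined[hkl] = short_pass[hkl][0]
            # else: overloaded and never re-measured -> systematically
            # missing strong reflection
        return combined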
>
>  2) beamstop
>
>     The integration software will predict all reflections based on
>     your parameters (apart from the 000 reflection): it doesn't care
>     if such a reflection would be behind the beamstop shadow or
>     not. However, a reflection behind the beamstop will obviously not
>     actually be there - and the integrated intensity (probably a very
>     low value) will be wrong.
>
>     One example of such effects in the context of experimental
>     phasing is bogus anomalous differences. Imagine that your
>     beamstop is not exactly centred around the direct beam. You will
>     have it extending a little bit more to one side (giving you
>     maybe 20A low resolution) than to the other side (maybe 30A
>     resolution). In one orientation of the crystal you might be able
>     to collect a 25A (h,k,l) reflection very well (because it is on
>     the side where the beamstop only starts at 30A) - but the
>     (-h,-k,-l) reflection is collected in an orientation where it is
>     on the 20A-side of the beamstop, i.e. it is predicted within the
>     beamstop shadow.
>
>     Effect: you have a valid I+ measurement but a more-or-less zero
>     I- measurement, giving you a huge anomalous difference that
>     shouldn't really be there.
>
>     Now if you measured your data in different orientations (kappa
>     goniostat) with high enough multiplicity, this one bogus
>     measurement will probably be thrown out during
>     scaling/merging. You can e.g. check the so-called ROGUES file
>     produced by SCALA. But if you have the usual multiplicity of
>     only 3-4, the scaling/merging process might not correctly detect
>     this as an outlier, and it ends up in your data. Sure, programs
>     like autoSHARP will check for these outliers and try to reject
>     them - but that is only a workaround. The fundamental fix is
>     telling the integration program what the good area of the
>     detector is.
>
>     Solution: mask your beamstop. All integration programs have tools
>     for doing that (some are better than others). I haven't seen
>     any program that can do it automatically in a reliable way
>     (if reliable means: correctly in at least 50% of cases) - but
>     I'm no expert in all of them by a long shot. It usually takes
>     me only a minute or two to mask the beamstop by hand. A
>     small investment for a big return (good data) ;-)
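
To make the 'good area of the detector' point concrete, here is a toy
mask test - the geometry and numbers are invented, and real programs
let you define the mask interactively on the image:

    import math

    # Toy beamstop mask: a circular shadow around an off-centre beamstop
    # plus a rectangular stalk. Predictions falling inside it should be
    # rejected rather than integrated as near-zero intensities -
    # otherwise a missing I- behind the shadow fakes a huge anomalous
    # difference against a perfectly good I+.
    def in_beamstop_shadow(x_mm, y_mm,
                           centre=(1.5, 0.0),    # shadow centre (mm from beam)
                           radius=2.0,           # shadow radius (mm)
                           stalk_half_width=0.8):
        cx, cy = centre
        if math.hypot(x_mm - cx, y_mm - cy) <= radius:
            return True
        # the stalk runs from the shadow centre towards the detector edge
        if x_mm >= cx and abs(y_mm - cy) <= stalk_half_width:
            return True
        return False

Any prediction for which in_beamstop_shadow() is True would simply be
left out of the reflection file instead of contributing a bogus
near-zero intensity.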
>
> There are other possibly problematic reflections at ice rings etc.:
> these can also have visible effects in the maps. But the above effects
> have one thing in common: they happen mainly at low resolution. And
> our models can be seen to consist of basically two real-space
> components (atoms in the form of a PDB file and bulk solvent in the
> form of a mask) - one is a low-resolution object (the solvent mask)
> and the other a high-resolution object (the atoms). They need to be
> combined
> through some clever scaling: if there are issues with the
> low-resolution reflections this scaling can go wrong - sometimes
> really badly.
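
For what it's worth, the 'clever scaling' here is usually some variant
of the flat bulk-solvent model (e.g. in REFMAC or phenix.refine) - not
spelled out in Clemens's message, but roughly

    F_model(h) = k_total(h) * [ F_calc(h)
                 + k_sol * exp(-B_sol * s^2 / 4) * F_mask(h) ]

where k_total(h) is the overall (possibly anisotropic) scale, s = 1/d,
and k_sol and B_sol typically refine to around 0.35 e/A^3 and 45 A^2.
The exp(-B_sol*s^2/4) term is what confines the mask contribution to
low resolution, and it is exactly these parameters that bad
low-resolution reflections can pull off course.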
>
> Hope that helps a bit.
>
> Cheers
>
> Clemens
