Hi

I'd be somewhat surprised if this accounted for the difference, since it would require the routine collection of many overloads in each dataset before you'd notice that the completeness was systematically higher.

(The two different cutoffs that James refers to are the absolute cutoff (where reflections aren't integrated) and the profile-fitting cutoff (which is used if you want to profile-fit overloads).)

We've had automatic detector recognition since 2002, and since then the use of the DETECTOR (or SCANNER) keyword has been deprecated unless you have an unusual set-up, so most people should never come across this problem even if they write their own scripts. James does make a good point though - if you use the "DETECTOR TYPE" keywords, you also need to provide other keywords to make sure the processing proceeds according to expectations!
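For instance, if you do set the detector by hand, a minimal sketch of a hand-written script might look something like this (the numbers are purely illustrative and must match your own experiment):

    DETECTOR TYPE ADSC
    BEAM 105.0 102.5
    DISTANCE 150.0
    WAVELENGTH 0.9795

That is, once automatic recognition is bypassed, the geometry (and, as James notes below, the overload limit) has to be supplied explicitly.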

I think you'd need more information on where the "extra" reflections are, or whether they are strong or weak etc. as Phil suggested, before pointing the finger in any particular direction.

I replied to Simon yesterday privately with the following -


I seem to remember something like this when we started looking at Pilatus images a few years ago, but I didn't do the processing myself so can't be sure about it.

In principle, if the rejection and acceptance criteria are the same, then the two programs (and d*Trek and HKL...) should report the same completeness and the same overall stats, once you take into account the different ways the various merging Rs are calculated. I'm always pleased when people give Mosflm a good report, but I don't think there's a huge difference in the data coming out of the different programs. Occasionally, we do find a dataset where one program is better than the others (I put this down to the particular dataset being similar to one that the developer used).
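For reference, the usual definitions - Rmerge carries no multiplicity correction while Rmeas does, which is one reason the quoted merging statistics aren't directly comparable between programs:

    R_merge = \frac{\sum_{hkl} \sum_i |I_i(hkl) - \langle I(hkl)\rangle|}{\sum_{hkl} \sum_i I_i(hkl)}

    R_meas  = \frac{\sum_{hkl} \sqrt{n_{hkl}/(n_{hkl}-1)} \sum_i |I_i(hkl) - \langle I(hkl)\rangle|}{\sum_{hkl} \sum_i I_i(hkl)}

where n_{hkl} is the number of measurements of each unique reflection.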

However, from memory I think XDS has rather stricter rejection criteria by default - and this gives lower completeness, multiplicity and merging Rs (if you merge fewer "bad" equivalents you get lower R factors). When we ran tests using Mosflm to reject similarly "bad" reflections from a high quality dataset, we got similar completeness and merging Rs - but this is entirely artificial.
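A toy example (numbers entirely made up) shows the effect:

    four equivalents 100, 104, 98, 140 (mean 110.5):
        Rmerge = (10.5 + 6.5 + 12.5 + 29.5) / 442 = 0.13
    reject the 140 outlier, leaving 100, 104, 98 (mean 100.7):
        Rmerge = (0.7 + 3.3 + 2.7) / 302 = 0.02

So stricter rejection buys a much lower merging R at the cost of multiplicity - and of completeness, if a unique reflection loses all of its measurements.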

I *think* it comes down to whichever program you're most used to running, and the one you know how to get the best out of. I'm sure that you will get replies from people saying that XDS (or whatever program) always gives higher completeness etc than Mosflm!



On 9 Jun 2010, at 07:57, James Holton wrote:

Check your mosflm input file.
If this is an "ADSC" type detector and you have specified that it is (using "DETECTOR TYPE ADSC" or "SCANNER TYPE ADSC"), but have not explicitly specified the overload limit with "OVERLOAD CUTOFF", then the default overload cutoff for integration will be 100,000, and this effectively turns off overload detection. Note that there are TWO different overload cutoffs in mosflm, but both are listed in the log next to the string "(CUTOFF)".
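For example, something like this in the input file restores sensible behaviour - the cutoff value here is just a guess for a 16-bit detector that saturates at 65535, so check the correct limit for yours:

    DETECTOR TYPE ADSC
    OVERLOAD CUTOFF 65500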

I only discovered this myself a few weeks ago, and I have patched the current Elves release:
http://bl831.als.lbl.gov/~jamesh/elves/download.html
to avoid this problem when they run mosflm, but Elves versions from the last two years may actually miss overloads!

-James Holton
MAD Scientist

Simon Kolstoe wrote:
Thanks Tim, Phil and Andrew for your answers.

Just one further related question:

Why is it that mosflm seems to report higher completeness than XDS on the same data (I've seen this on about 50 datasets)? I always thought it was due to mosflm's peak extrapolation, but it seems this isn't the answer if SCALA throws those reflections out.

Thanks,

Simon

On 7 Jun 2010, at 15:35, Phil Evans wrote:

Mosflm integrates them (profile-fitted overloads) but flags them. Pointless uses them for systematic absence tests. Scala by default ignores them, but you can include them if you want: this is not normally recommended since they are pretty inaccurate (look in the "Excluded data" tab of ccp4i/Scala).

If you are merging strong & weak datasets it should do the right thing, I think.

Phil


On 7 Jun 2010, at 15:09, Simon Kolstoe wrote:

Dear CCP4bb,

I was wondering if someone could tell me how mosflm and scala deal with overloaded reflections. From my understanding, mosflm extrapolates the overloaded peaks but then scala throws them out completely - is this right?

If so am I right to not worry about "contamination" from extrapolated peaks when combining high and low resolution datasets from the same crystal?

Thanks

Simon

Harry
--
Dr Harry Powell,
MRC Laboratory of Molecular Biology,
Hills Road,
Cambridge,
CB2 0QH
