In other words, the free set for each complex must be
such that reflections that are also present in the apo dataset retain
the FreeR flag they had in that dataset.

A very easy way to achieve this: generate a complete dataset to ridiculously
high resolution with the cell of your crystal, and assign free-R flags.
(If the first structure has already been solved, merge its free set and
extend to the new reflections.)
Now for every new structure solved, discard any free set that the data
reduction program may have generated and merge with the complete set,
discarding reflections with no Fobs (MNF) or with SigF=0.
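
In script form, the book-keeping might look something like this (a minimal
sketch only, assuming the reflections have already been read into plain
arrays of Miller indices, amplitudes and sigmas and reduced to a common
asymmetric-unit convention; reading and writing the actual files is left to
whatever program you use):

    import numpy as np

    rng = np.random.default_rng(2015)   # fixed seed: the master set is made once, for ever

    def make_master_free_set(hkls, free_fraction=0.05):
        # One flag per Miller index of the "complete to ridiculously high
        # resolution" list; kept for the lifetime of the project.
        flags = rng.random(len(hkls)) < free_fraction
        return {tuple(int(i) for i in hkl): bool(f) for hkl, f in zip(hkls, flags)}

    def inherit_flags(master, hkls, fobs, sigf):
        # For each newly processed dataset: ignore whatever free set the data
        # reduction program generated, look every reflection up in the master
        # set, and drop reflections with no Fobs (MNF) or with SigF = 0.
        keep, free = [], []
        for i, hkl in enumerate(tuple(int(v) for v in row) for row in hkls):
            if np.isnan(fobs[i]) or sigf[i] == 0:
                continue
            keep.append(i)
            free.append(master[hkl])   # assumes the master list really is complete
        return np.array(keep), np.array(free, dtype=bool)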

In fact, if we consider that a dataset is just a 3-dimensional array, or some
subset of it enclosing the reciprocal-space asymmetric unit, I don't
see any reason we couldn't assign one universal P1 free-R set and
use it for every structure in whatever space group. By taking each
new dataset, merging it with the universal free-R set, and discarding those
reflections not present in the new data, you would obtain a random
set for your structure. There could be nested (concentric?) free-R sets
with 10%, 5%, 2%, 1% free, so that if you start out excluding 5% for a
low-resolution structure and then get a high-resolution dataset and want to
exclude 2%, you could be sure that all the 2% free reflections were also
free in your previous 5% set.
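
One way to realise such a universal, nested set (a sketch only; it assumes
the indices have already been reduced to a common asymmetric-unit convention
so that equivalent reflections map to the same value) is to attach a
deterministic pseudo-random number to every Miller index and define the p%
free set as everything below the threshold p:

    import hashlib

    def universal_fraction(hkl):
        # The same number in [0, 1) for a given (h, k, l) in every dataset,
        # for ever; no file of flags needs to be stored or merged at all.
        digest = hashlib.sha256("{},{},{}".format(*hkl).encode()).digest()
        return int.from_bytes(digest[:8], "big") / 2**64

    def is_free(hkl, fraction):
        # Because every p% set is a threshold on the same number, the 1% set
        # is contained in the 2% set, which is contained in the 5% set, etc.
        return universal_fraction(hkl) < fraction

    # A reflection that is free at 2% was necessarily also free at 5%.
    assert all(is_free(hkl, 0.05)
               for hkl in [(1, 2, 3), (10, 0, 4), (7, 7, 2)]
               if is_free(hkl, 0.02))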

Thin or thick shells could be predefined. There may be problems when
reflections need to be excluded together according to some twin law or NCS
relationship.
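
For the thin-shell case, something along these lines (a sketch only; the
shell count and width are arbitrary illustrative numbers, not any program's
defaults):

    import numpy as np

    def thin_shell_flags(d_spacing, n_shells=20, shell_width=0.002, seed=2015):
        # Flag as free every reflection whose 1/d^2 falls inside one of a few
        # narrow, randomly placed bands, so that reflections related by NCS or
        # a twin law tend to fall on the same side of the work/free divide.
        rng = np.random.default_rng(seed)
        s2 = 1.0 / np.asarray(d_spacing, dtype=float) ** 2
        centres = rng.uniform(s2.min(), s2.max(), n_shells)
        free = np.zeros(len(s2), dtype=bool)
        for c in centres:
            free |= np.abs(s2 - c) < shell_width / 2
        return free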

(I have just now read Nick Keep's post, which expresses some similar ideas.)
eab

On 06/04/2015 10:29 AM, Gerard Bricogne wrote:
Dear Graeme and other contributors to this thread,

      It seems to me that the "how many is too many" aspect of this
question, and the various culinary procedures that have been proposed
as answers, may have obscured another, much more fundamental issue,
namely: is it really the business of the data processing package to
assign FreeR flags?

      I would argue that it isn't. From the statistical viewpoint that
justifies the need for FreeR flags, these are pre-refinement entities
rather than post-processing ones. If one considers a single instance
of going from a dataset to a refined structure, then this distinction
may seem artificial. Consider, instead, the case of high-throughput
screening to detect fragment binding on a large number of crystals of
complexes between a given target protein (the "apo") and a multitude
of small, weakly-binding fragments into solutions of which crystals of
the apo have been soaked.

      The model for the apo crystal structure comes from a refinement
against a dataset, using a certain set of FreeR flags. In order to
guard the detection of putative bound fragments against the evils of
model bias, it is very important to ensure that the refinement of each
complex against data collected on it does not treat as free any
reflections that were part of the working set in the refinement of the
apo structure. In other words, the free set for each complex must be
such that reflections that are also present in the apo dataset retain
the FreeR flag they had in that dataset. Any mixup, in the FreeR flags
for a complex, of the work vs. free status of the reflections also in
the apo would push Rwork up and Rfree down, invalidating their role as
indicators of quality of fit or of incipient overfitting.

      Great care must therefore be exercised, in the form of adequate
book-keeping and procedures for generating the FreeR flags in the mtz
file for each complex from that for the apo, to properly enforce this
"inheritance" of work vs. free status.

      In such a context there is a clear and crucial difference between
a post-processing entity and a pre-refinement one. FreeR flags belong
to the latter category. In fact, the creation of FreeR flags at the
end of the processing step can create a false perception, among people
doing ligand screening under pressure, that they cannot re-use the
FreeR flag information of the apo in refining their complexes, simply
because a new set has been created for each of them. This is clearly
to be avoided. Preserving the FreeR flags of the reflections that were
used in the refinement of the apo structure is one of the explicit
recommendations in the 2013 paper by Pozharski et al. (Acta Cryst. D69,
150-167); see section 1.1.3, p. 152.

      Best practice in this area may therefore not be only a question
of numbers, but also of doing the appropriate thing in the appropriate
place. There are of course "corner cases" where e.g. substantial
unit-cell changes start to introduce some cross-talk between working
and free reflections, but the possibility of such complications is no
argument to justify giving up on doing the right thing when the right
thing can be done.


      With best wishes,

           Gerard.

--
On Thu, Jun 04, 2015 at 08:30:57AM +0000, Graeme Winter wrote:
Hi Folks,

Many thanks for all of your comments - in keeping with the spirit of the BB
I have digested the responses below. Interestingly, I suspect that the
responses to this question indicate the very wide range of resolution
limits of the data people work with!

Best wishes Graeme

===================================

Proposal 1:

10% reflections, max 2000

Proposal 2: from wiki:

http://strucbio.biologie.uni-konstanz.de/ccp4wiki/index.php/Test_set

including Randy Read's "recipe":

So here's the recipe I would use, for what it's worth:
   <10000 reflections:        set aside 10%
    10000-20000 reflections:  set aside 1000 reflections
    20000-40000 reflections:  set aside 5%
   >40000 reflections:        set aside 2000 reflections

Proposal 3:

5% maximum 2-5k

Proposal 4:

3% minimum 1000

Proposal 5:

5-10% of reflections, minimum 1000

Proposal 6:

50 reflections per "bin" in order to get reliable ML parameter
estimation, ideally around 150 / bin.

Proposal 7:

If there are lots of reflections (e.g. 800K unique), around 1% is selected -
5% would be 40k, i.e. rather a lot. Referees question the use of > 5k
reflections as a test set.

Comment 1 in response to this:

Surely the absolute # of test reflections is not relevant; the percentage is.

============================

Approximate consensus (i.e. what I will look at doing in xia2) - probably
follow the Randy Read recipe from the ccp4wiki, as this seems to satisfy
most of the criteria raised by everyone else.
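
For reference, that recipe expressed as a small function (my transcription
of the wiki table quoted above, not anything xia2 actually does):

    def free_set_size(n_unique):
        # Randy Read's recipe, as quoted in the digest above.
        if n_unique < 10000:
            return round(0.10 * n_unique)   # set aside 10%
        if n_unique < 20000:
            return 1000                     # set aside 1000 reflections
        if n_unique < 40000:
            return round(0.05 * n_unique)   # set aside 5%
        return 2000                         # set aside 2000 reflections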



On Tue, Jun 2, 2015 at 11:26 AM Graeme Winter <graeme.win...@gmail.com>
wrote:

Hi Folks

Had a vague comment handed my way that "xia2 assigns too many free
reflections" - I have a feeling that by default it makes a free set of 5%
which was OK back in the day (like I/sig(I) = 2 was OK) but maybe seems
excessive now.

This was particularly in the case of high-resolution data where you have a
lot of reflections, so 5% could be several thousand, which would be more
than you need just to check that Rfree seems OK.

Since I really don't know what the right # of reflections to assign to a
free set is, I thought I would ask here - what do you think? Essentially I
need to assign a minimum %age or a minimum # - the lower of the two,
presumably?

Any comments welcome!

Thanks & best wishes Graeme


--

      ===============================================================
      *                                                             *
      * Gerard Bricogne                     g...@globalphasing.com  *
      *                                                             *
      * Global Phasing Ltd.                                         *
      * Sheraton House, Castle Park         Tel: +44-(0)1223-353033 *
      * Cambridge CB3 0AX, UK               Fax: +44-(0)1223-366889 *
      *                                                             *
      ===============================================================
