Dear Gerard,

The theory's fine as long as the space group can be unambiguously
determined from the diffraction pattern.  In practice, however, the
ugly fact frequently destroys the beautiful theory: a decision on the
choice of unit cell may have to be made on the basis of incomplete or
imperfect information (e.g. mis-identification of the systematic
absences).  The 'conservative'
choice (particularly if it's not necessary to make a choice at that
time!) is to choose the space group without screw axes (i.e. P222 for
orthorhombic).  Then if it turns out later that you were wrong it's
easy to throw away the systematic absences and change the space group
symbol.  If you make any other choice and it turns out you were wrong
you might find it hard sometime later to recover the reflections you
threw away!  This of course implies that the unit-cell choice
automatically conforms to the IT convention; this convention is of
course completely arbitrary but you have to make a choice and that one
is as good as any.
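
For what it's worth, here's a minimal sketch -- plain Python, with a
hypothetical reflection-list layout and an arbitrary I/sigma cutoff,
so nobody's actual software -- of the kind of test involved: a 2(1)
screw along a should make the odd (h,0,0) axial reflections
systematically absent, and the scarcity of axial reflections is
exactly where the 'imperfect information' bites:

    def screw_axis_along_a(reflections, i_over_sig_cutoff=3.0):
        # reflections: iterable of ((h, k, l), I, sigma) tuples
        # (a made-up layout, for illustration only).
        odd_axial = [(i, sig) for (h, k, l), i, sig in reflections
                     if k == 0 and l == 0 and h % 2 == 1]
        if not odd_axial:
            return None   # no axial data -- no evidence either way
        # 'absent' here means weak by a naive I/sigma test; real
        # data-processing software applies far more careful statistics.
        return all(i < i_over_sig_cutoff * sig for i, sig in odd_axial)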

So at that point let's say this is the 1970s and you know it might be
several years before your graduate student is able to collect the
high-res data and do the model-building and refinement, so you publish
the unit cell and tentative space group, and everyone starts making
use of your data.  Some years later the structure solution and
refinement are completed and the space group can now be assigned
unambiguously.  The question is: do you then revise your previous
choice of unit cell, risking confusing everyone including yourself,
just so that the space-group setting complies with a completely
arbitrary 'standard' (leaving the unit cell non-conventional), and
requiring a re-index of your data (and a permutation of the
co-ordinate datasets)?  Or do you stick with the IT unit-cell
convention and leave it as it is?  For me the choice is easy
('if it ain't broke then don't fix it!').
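
To make that bookkeeping concrete, here is an equally minimal sketch
(plain Python again, hypothetical names throughout) of the cyclic
re-indexing abc -> bca -- determinant +1, so no change of hand --
applied consistently to the cell, the Miller indices and the
fractional co-ordinates.  This is exactly the permutation that takes,
say, P22121 in the a<b<c setting to the standard P21212:

    def reindex_abc_to_bca(cell, hkl_list, frac_coords):
        # cell: (a, b, c) of an orthorhombic lattice; hkl_list: list
        # of (h, k, l) triples; frac_coords: fractional (x, y, z).
        a, b, c = cell
        new_cell = (b, c, a)                             # a' = b, b' = c, c' = a
        new_hkl = [(k, l, h) for h, k, l in hkl_list]    # index along a' is old k
        new_xyz = [(y, z, x) for x, y, z in frac_coords] # coordinate along a' is old y
        return new_cell, new_hkl, new_xyz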

Cheers

-- Ian

On Fri, Apr 1, 2011 at 1:40 PM, Gerard Bricogne <g...@globalphasing.com> wrote:
> Dear Boaz,
>
>     I think you are the one who is finally asking the essential question.
>
>     The classification we all know about, which goes back to the 19th
> century, is not into 230 space groups, but 230 space-group *types*, i.e.
> classes where every form of equivalencing (esp. by choice of setting) has
> been applied to the enumeration of the classes and the choice of a unique
> representative for each of them. This process of maximum reduction leaves
> very little room for introducing "conventions" like a certain ordering
> of the lengths of cell parameters. This seems to me to be a major mess-up in
> the field - a sort of "second-hand mathematics by (IUCr) committee" which
> has remained so ill-understood as to generate all these confusions. The work
> on the derivation of the classes of 4-dimensional space groups explained the
> steps of this classification beautifully (arithmetic classes -> extension by
> non-primitive translations -> equivalencing under the action of the
> normaliser), the last step being the choice of a privileged setting *in
> terms of the group itself* in choosing the representative of each class.
> The extra "convention" a<b<c leads to choosing that representative in a way
> that depends on the metric properties of the sample instead of once and for
> all (how about that for a brilliant step backward!). Software providers then
> have to de-standardise the set of 230 space group *types* (where each
> representative is uniquely defined once you give the space group (*type*)
> number) to accommodate all alternative choices of settings that might be
> randomly thrown at them by the metric properties of e.g. everyone's
> orthorhombic crystals. Mathematically, what one then needs to return to is
> the step before taking out the action of the normaliser, but this picture
> gets drowned in clerical disputes about low-level software issues.
>
>     My own take on this (when I was writing symmetry-reduction routines for
> my NCS-averaging programs, along with space-group specific FFT routines in
> the dark ages) was: once you have a complete mathematical classification
> that is engraved in stone (i.e. in the old International Tables and in
> crystallographic software as we knew it), then stick to it and re-index back
> and forth to/from the unique representative listed under the IT number, as
> needed - don't try and extend group-theoretic Tables to re-introduce
> incidental metrical properties that had been so neatly factored out from the
> final symmetry picture. Otherwise you get a dog's dinner.
>
>
>     So much for my 0.02 Euro.
>
>
>     With best wishes,
>
>          Gerard.
>
> --
> On Fri, Apr 01, 2011 at 11:30:12AM +0000, Boaz Shaanan wrote:
>> Excuse my naive (perhaps ignorant) question: when was the
>>  a<b<c rule/convention/standard/whatever introduced? None of the
>> textbooks I came across mentions it as far as I could see (not that this is
>> a reason for or against this rule, of course).
>>
>>     Thanks,
>>
>>                Boaz
>>
>>
>> Boaz Shaanan, Ph.D.
>> Dept. of Life Sciences
>> Ben-Gurion University of the Negev
>> Beer-Sheva 84105
>> Israel
>> Phone: 972-8-647-2220 ; Fax: 646-1710
>> Skype: boaz.shaanan
>
> --
>
>     ===============================================================
>     *                                                             *
>     * Gerard Bricogne                     g...@globalphasing.com  *
>     *                                                             *
>     * Global Phasing Ltd.                                         *
>     * Sheraton House, Castle Park         Tel: +44-(0)1223-353033 *
>     * Cambridge CB3 0AX, UK               Fax: +44-(0)1223-366889 *
>     *                                                             *
>     ===============================================================
>