Hi All,
Notwithstanding the stimulating discussion about the B factor, I'd like
to chime in with my $0.02 on the original question of whether to build
or not to build, and what the rules and standards are... and sorry for
the lengthy e-mail - I was trying to respond to several comments at
once.

I thought there was a very well-defined rule: models based on
experimental data should represent the experimental data correctly. If a
model has parts that are not substantiated by experimental data and are
based only on assumptions, it's no longer an experimental model. Based
on this, one should leave out the atoms for which there is no observable
electron density. And one need not say that "we were unable to build a
model of the missing side chain" (or of any other segment of the
structure).
There is also no need to guess or fake "most probable" conformations of
the unobserved parts. Instead, it should be reported that the segment in
question was so flexible that it could not be described by just one or
two (and maybe three) conformers. As such, this observation stands on
its own feet just like any other observation of "visible" segments and
there is no need to fake a model. If the work was done properly, a model
with missing parts is not intrinsically inferior to another, more
complete model. The fact that a side chain displays flexibility may be
biologically much more relevant than some well-defined Ile in the core
of the molecule. Omitting unobserved side chains from the model would
also help us avoid assuming that we know for sure that the side chain
is there. The side chain in question may actually not be there, for one
reason or another; sequence errors and radiation-induced damage come to
mind, for
example. The latter is also often the reason that the side chain may not
be fully occupied in the structure derived from a specific data set
(i.e. the sum of occupancies of all its existing conformations may not
be 1, contrary to earlier suggestions in the thread). Back in the day I
personally spent large amounts of time and effort constructing and
refining multi-conformational models of some side chains because I was
sure they were there somewhere. Later on, as we learned more, I realized
that some of them had been sheared off by radiation damage and actually
were not there. As knowledge advances, many of our assumptions may
crumble and that's why we ought to keep "experimentally visible" models
apart from those with assumed parts.

As for the downstream consumers of our models, we need not confuse
them with strange B factors or occupancies. We just need to give
them correct information. Namely, that the given part(s) of the molecule
could not be "seen" experimentally due to its flexibility (or, in some
cases, to radiation damage).  There was an interesting suggestion of two
models - one accurately describing the experimental observations and the
other for the downstream users. It would be a good way to separate Sci
from Fi but there may be a problem. When theories are derived further
downstream, it'll be impossible to keep track of what came from Sci and
what came from Fi versions.

Best regards,
N.


Ruslan Sanishvili (Nukri), Ph.D.

GM/CA-CAT
Biosciences Division, ANL
9700 S. Cass Ave.
Argonne, IL 60439

Tel: (630)252-0665
Fax: (630)252-0667
rsanishv...@anl.gov

-----Original Message-----
From: CCP4 bulletin board [mailto:CCP4BB@JISCMAIL.AC.UK] On Behalf Of
Dale Tronrud
Sent: Thursday, March 31, 2011 4:51 PM
To: CCP4BB@JISCMAIL.AC.UK
Subject: Re: [ccp4bb] what to do with disordered side chains

On 3/31/2011 12:52 PM, Jacob Keller wrote:
>> The only advantage of a large, positive number is that it would
>> create bugs that are more subtle.
>
> Although most of the users on this BB probably know more about
> software coding, I am surprised that bugs--even subtle ones--would be
> introduced by residues flagged with 0 occupancy and b-factor = 500.
> Can you elaborate/enumerate?

    The principal problems with defining a particular value of the B
factor as magical have already been listed in this thread.  B factors
are usually restrained to the values of the atoms they are bonded to,
and sometimes to those of other atoms they pack against.  You may set
the B factor equal to 500.00, but it will not stick.  At worst, its
presence will pull up the B factors of nearby atoms that do have
density.
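
    As a minimal sketch of why it will not stick, a bonded-B similarity
restraint looks something like the following (illustrative Python; the
exact functional form and weights vary by program, and none of these
names come from any real package):

        # Hypothetical bonded-B similarity restraint: penalize squared
        # differences in B between bonded atom pairs.  An atom pinned
        # at B = 500 contributes an enormous penalty, so the minimizer
        # drags its neighbours' B factors upward to compensate.
        def b_similarity_penalty(b, bonds, weight=1.0):
            """b: dict atom_id -> B factor; bonds: list of (i, j) pairs."""
            return sum(weight * (b[i] - b[j]) ** 2 for i, j in bonds)

        # Example: an atom at B = 500 bonded to one at B = 30 adds
        # weight * (500 - 30)**2 = 220900 to the target function.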

    In addition, the only refinement program I know of that takes
occupancy into account when restraining bond lengths and angles is CNX.
The presence of atoms with occ=0 will affect the location of atoms they
share geometry restraints with.

    Of course you could modify the refinement programs, and every
other program that reads crystallographic models, to deal with your
redefinition of _atom_site.B_iso_or_equiv.  In fact you would have to,
just as you would have to when you change the definition of any of the
parameters in our models.  If we have to modify code, why not create a
solution that is explicit, clear, and as consistent with previous
practices as possible?
>
> I think that the worst that could happen is that the inexperienced yet
> b-factor-savvy user would be astonished by the huge b-factors, even if
> he did not realize they were flags. At best, being surprised at the
> precise number 500, he would look into the pdb file and see occupancy
> = zero, google it, and learn something new about crystallography.
>
    How about positive difference map peaks on neighboring atoms?  How
about values for B factors that don't relate to the mean square motion
of the atom, despite that being the direct definition of the B factor?
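
    (For the record, the isotropic B factor is B = 8*pi^2*<u^2>, where
<u^2> is the mean-square displacement; taken at face value, B = 500
would imply an r.m.s. displacement of about 2.5 Angstroms.)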

    The concept of an "inexperienced yet b-factor-savvy user" is
amusing.  I'm not b-factor-savvy.  Atomic displacement values are easy,
but I'm learning new subtleties about B factors all the time.

>
>>    The fundamental problem with your solution is that you are
>> trying to cram two pieces of information into a single number.  Such
>> density always causes problems.  Each concept needs its own value.
>
> What two pieces of information into what single number? Occupancy = 0
> tells you that the atom cannot be modelled, and B=500 is merely a flag
> for same, and always goes with occ=0. What is so dense? On the
> contrary, I think the info is redundant if anything...

    To be honest, I had forgotten that you were proposing that the
occupancy be set to zero at the same time.  Besides putting two pieces
of information in the B factor column (the B factor's value and a flag
for "imaginaryness"), you do the same for the occupancy (the
occupancy's value and a flag for "imaginaryness").  This violates
another rule of data structures - that each concept be stored in one,
and only one, place.  How do you interpret an atom with an occupancy of
zero but a B factor of 250?  How about an atom with a B factor of
500.00 and an occupancy of 1.00?  Now we have the confusing situation
that the B factor can only be interpreted in the context of the value
of the occupancy, and vice versa.  Database-savvy people (and I'm not
one of them either) are not going to like this.

    If you want to calculate the average B factor for a model,
certainly those atoms with their B factor = 500 should not be included.
However, I gather we do need to include those equal to 500 if their
occupancy is not equal to 0.0.  This is a mess.  In a database
application we can't simply SELECT the B factor column and average it.
We have to SELECT both the B factor and occupancy columns and perform
some really weird "if" statements element by element - just to
calculate an average!  What should be a simple task becomes very
complex.  Will a graduate student code the calculation correctly?
Probably not.  They will likely not recall all the complicated
interpretations of special values your convention would require.
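
    To make that concrete, here is what "just average the B factors"
turns into under the flag scheme (an illustrative Python sketch of the
hypothetical convention, not any real program's code):

        FLAG_B = 500.00

        def average_b(atoms):
            # atoms: list of (b_factor, occupancy) pairs
            values = []
            for b, occ in atoms:
                # Flagged "imaginary" atom: B == 500 *and* occ == 0.
                if b == FLAG_B and occ == 0.0:
                    continue
                # A real atom whose B merely happens to be 500, or
                # whose occupancy is zero for some other reason, must
                # be kept.
                values.append(b)
            if not values:
                raise ValueError("no unflagged atoms to average")
            return sum(values) / len(values)

What should be one aggregate query becomes conditional logic that every
downstream program has to reproduce exactly.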

    Now consider this.  Refinement is running along and the occupancy
for an atom happens to overshoot and, in the middle of refinement,
assumes a value of 0.00.  There is positive difference density the next
cycle.  (I did say that it overshot.)  Should the refinement program
interpret that Occ=0.00 to mean that the atom is imaginary and should
not be considered as part of the crystallographic model?  Wouldn't it
be bad if the atom suddenly disappeared because of a fluctuation?  Or
should the refinement program use one definition of "occupancy" during
refinement, but write a PDB file occupancy that has a different
definition?  (It might be relevant to this line of thought to recall
that the TNT refinement package writes each intermediate coordinate
file to disk and reads it back in at the start of the next cycle.
There can be no difference between the meaning of the parameters in
memory and on disk.)
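
    In code, the hazard looks something like this (a sketch only - not
TNT's or any other program's actual implementation, though the
Debye-Waller form of the B factor term is standard):

        import math

        def atom_contribution(occ, b, f0, s2):
            # One atom's contribution to a calculated structure-factor
            # amplitude, phases ignored; s2 = (sin(theta)/lambda)**2.
            if occ == 0.0:
                # Under the flag convention, occ == 0 means "the atom
                # does not exist", so a transient overshoot to zero
                # silently deletes the atom in mid-refinement.
                return 0.0
            return occ * f0 * math.exp(-b * s2)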

    A great deal of what we do with our models depends on the details of
the definitions of these parameters.  Adding extra meanings and special
cases causes all sorts of problems at all levels of use.
>
>
>> either.  You can't out-think someone who's not paying attention.  At
>> some point you have to assume that people being paid to perform
>> research will learn the basics of the data they are using, even if
>> you know that assumption is not 100% true.
>
> Well, the problem is not *should* but *do*. Should we print bilingual
> danger signs in the US? Shouldn't we assume that people know English?
> But there is danger, and we care about sparing lives. Here too, if we
> care about the truth being abused or missed, it seems we should go out
> of our way.
>
    I've not advocated doing nothing.  I've advocated that the solution
we choose should be clearly defined and that definition be consistent
with past definitions (as much as possible) and consistent with the
principles of data structures created by the people who study such
things.

    We *should* go out of our way to create a solution to this common
problem.  The solution we choose should be one that actually solves the
problem and not simply creates more confusion.

Dale Tronrud

P.S. I just Googled "occupancy zero".  The top hit is a letter from
Bernhard Rupp recommending that occupancies not be set to zero :-)
