At this point I usually chime in with an explanation of why
the Protein Data Bank made some choice or other in the early
days, but on the matter of U vs. B I have no information to
contribute.

I can point out that at that time characters were stored in
display code on a CDC 6600, and display code used 6 bits, so
'bytes' were less obese then.  Six bits per character allows
only 64 distinct codes, which explains, of course, why lower
case characters were not routinely used.

                    Frances

=====================================================
****                Bernstein + Sons
*   *       Information Systems Consultants
****    5 Brewster Lane, Bellport, NY 11713-2803
*   * ***
**** *            Frances C. Bernstein
  *   ***      f...@bernstein-plus-sons.com
 ***     *
  *   *** 1-631-286-1339    FAX: 1-631-286-1999
=====================================================

On Wed, 12 Oct 2011, James Holton wrote:

I think the PDB decided to store "B" instead of "U" because unless the
B factor was > 80, there would always be a leading "0." in a U
column, and that would just be a pitiful waste of two bytes.  At the
time the PDB was created, I understand bytes cost about $100 each!
(But that could be a slight exaggeration.)
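
A quick sketch of the arithmetic behind that (a minimal Python check;
the B values below are just illustrative):

    import math

    # B = 8 pi^2 U, so U = B / (8 pi^2) ~ B / 79: U only reaches 1.0
    # at B ~ 79, so a fixed-width U column would almost always have
    # started with "0." -- the two "wasted" bytes in question.
    for b in (10.0, 30.0, 60.0, 79.0, 120.0):
        u = b / (8 * math.pi ** 2)
        print("B = %6.1f  ->  U = %6.4f" % (b, u))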

-James Holton
MAD Scientist

On Wed, Oct 12, 2011 at 2:56 PM, Phil Evans <p...@mrc-lmb.cam.ac.uk> wrote:
Indeed that paper does lay out the various definitions clearly, thank you, but
I note that you do explicitly discourage use of B (= 8 pi^2 U) and don't
explain why the factor is 8 rather than 2 (i.e. why it multiplies (d*/2)^2 rather
than d*^2). I think James Holton's reminder that the definition dates from 1914
answers my question.
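
Spelling out the algebra (nothing new here, just the definitions
already quoted in this thread):

    exp(-B s^2) = exp(-B (d*/2)^2) = exp(-(B/4) d*^2)

and, substituting B = 8 pi^2 U,

    exp(-B s^2) = exp(-8 pi^2 U (d*/2)^2) = exp(-2 pi^2 U d*^2)

so the factor 8 is tied to writing the exponent against s = d*/2; a
convention written against d*^2 would carry 2 pi^2 instead.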

So why do we store B in the PDB files rather than U?  :-)

Phil

On 12 Oct 2011, at 21:19, Pavel Afonine wrote:

This may answer some of your questions or at least give pointers:

Grosse-Kunstleve RW, Adams PD:
On the handling of atomic anisotropic displacement parameters.
Journal of Applied Crystallography 2002, 35, 477-480.

http://cci.lbl.gov/~rwgk/my_papers/iucr/ks0128_reprint.pdf

Pavel

On Wed, Oct 12, 2011 at 6:55 AM, Phil Evans <p...@mrc-lmb.cam.ac.uk> wrote:
I've been struggling a bit to understand the definition of B-factors, 
particularly anisotropic Bs, and I think I've finally more or less got my head 
around the various definitions of B, U, beta, etc., but one thing puzzles me.

It seems to me that the natural measure of length in reciprocal space is d* = 
1/d = 2 sin theta/lambda

but the "conventional" term for B-factor in the structure factor expression is 
exp(-B s^2) where s = sin theta/lambda = d*/2 ie exp(-B (d*/2)^2)

Why not exp(-B' d*^2), with B' = B/4, which would seem more sensible? Why the 
factor of 4?
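
A minimal numerical check that the forms are equivalent (Python; the B
value and the resolutions are arbitrary):

    import math

    B = 20.0                         # arbitrary isotropic B, in A^2
    U = B / (8 * math.pi ** 2)       # from B = 8 pi^2 U
    for d in (3.0, 2.0, 1.5):        # arbitrary resolutions, in A
        d_star = 1.0 / d             # d* = 1/d = 2 sin(theta)/lambda
        s = d_star / 2.0             # s = sin(theta)/lambda
        t_B  = math.exp(-B * s ** 2)                          # exp(-B s^2)
        t_B4 = math.exp(-(B / 4.0) * d_star ** 2)             # exp(-(B/4) d*^2)
        t_U  = math.exp(-2 * math.pi ** 2 * U * d_star ** 2)  # exp(-2 pi^2 U d*^2)
        print("d = %.1f A: %.6f  %.6f  %.6f" % (d, t_B, t_B4, t_U))

All three columns agree, so exp(-B s^2) and exp(-(B/4) d*^2) are the
same factor; only the bookkeeping differs.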

Or should we just get used to U instead?

My guess is that it is a historical accident (or relic), i.e. that is the 
definition because that's the way it is.

Does anyone understand where this comes from?

Phil

