On Friday, January 06, 2012 09:30:22 am Nat Echols wrote:
> 2012/1/6 Pete Meyer <pame...@mcw.edu>:
> > However, at 3.2 Angstroms I'd recommend against using atomic B-factors -
> > the "rule of thumb" for this is 2.8 Angstroms for atomic B-factors (or
> > at least it was back in the day). It might help to use an overall
> > B-factor combined with one (or a few) TLS groups.
>
> This may be true for older software which restrains B-factors only to
> bonded atoms, but it is not the case in Phenix*, which takes into
> account all nearby atoms, not just bonded ones. The result is that
> individual B-factor refinement is very stable at low resolution - we
> don't know what the limit is, but it routinely works very well at 4A.
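A toy sketch makes the distinction concrete. This is not phenix.refine's
actual restraint function or weighting scheme - the 5 A cutoff and the
1/d weight below are made-up choices - but it shows why coupling each
B-factor to every neighbour within a cutoff yields far more restraint
terms per refined parameter than a bonded-only scheme:

    # Toy comparison of bonded-only vs. nearby-atom B-factor restraints.
    # NOT the phenix.refine functional form; illustration only.
    import numpy as np

    def b_similarity_residual(coords, b_values, pairs):
        """Sum of distance-weighted squared B differences over pairs."""
        total = 0.0
        for i, j in pairs:
            d = np.linalg.norm(coords[i] - coords[j])
            # Closer atoms are restrained more tightly (1/d weight).
            total += (b_values[i] - b_values[j]) ** 2 / d
        return total

    def bonded_pairs(bonds):
        """Older scheme: restrain B only across the covalent bond list."""
        return list(bonds)

    def nearby_pairs(coords, cutoff=5.0):
        """Distance-based scheme: restrain every pair within cutoff A."""
        n = len(coords)
        return [(i, j) for i in range(n) for j in range(i + 1, n)
                if np.linalg.norm(coords[i] - coords[j]) < cutoff]

For a folded protein, nearby_pairs() returns roughly an order of
magnitude more pairs than bonded_pairs(), so each B-factor enters many
more residual terms and cannot drift freely - hence the stability Nat
reports.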
Unfortunately, "stable" and "statistically correct" are two very different
criteria. It is quite possible to have a stable refinement that produces
nonsensical, or at least unjustifiable, B factors. Actually this caveat
applies to things other than B factors as well, but I'll stay on topic.

At last year's CCP4 Study Weekend I presented a statistical approach to
deciding what treatment of B could be justified at various resolutions:
"To B or not to B?" The presentations from that meeting should appear in
a special issue of Acta D soon.

Based on the set of representative cases I have examined, I am willing
to bet that with the limited obs/parameter ratio in the case at hand, a
model with individual Bs would turn out to be statistically unjustified
even if the refinement is "stable". A TLS model is more likely to be
appropriate.

	cheers,

		Ethan

> Of course the performance is still dependent on solvent content, NCS,
> etc., but it is very rare that grouped B-factor refinement actually
> works better.
>
> -Nat
>
> * I think Refmac may do something similar, but I haven't tried this
> recently. I would be very surprised if it did not work well at 3.2A,
> however.

-- 
Ethan A Merritt
Biomolecular Structure Center, K-428 Health Sciences Bldg
University of Washington, Seattle 98195-7742
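The obs/parameter arithmetic behind that bet can be made explicit with a
back-of-the-envelope count. Every structure-specific number below is an
assumption chosen purely for illustration (the thread only fixes the
3.2 A resolution), and the count ignores the effective observations
contributed by geometric and B-factor restraints:

    # Rough observations-vs-parameters count at 3.2 A resolution.
    # All structure-specific values are illustrative assumptions.
    import math

    d_min      = 3.2     # resolution limit (A), from the thread
    cell_vol   = 1.1e6   # assumed unit-cell volume (A^3)
    n_symops   = 4       # assumed P2(1)2(1)2(1), point group order 4
    n_atoms    = 7000    # assumed non-H atoms in the asymmetric unit
    n_residues = 900     # assumed residue count
    n_tls      = 4       # assumed number of TLS groups

    # Unique reflections ~ volume of the reciprocal-space sphere of
    # radius 1/d_min divided by the reciprocal-cell volume (= 1/cell_vol),
    # then reduced for Friedel pairs and point-group multiplicity.
    n_obs = (4.0 / 3.0) * math.pi * cell_vol / (d_min ** 3 * 2 * n_symops)

    models = {
        "individual B (x,y,z,B per atom)": 4 * n_atoms,
        "grouped B (one B per residue)":   3 * n_atoms + n_residues,
        "TLS (20 params per group)":       3 * n_atoms + 20 * n_tls,
    }

    print("unique reflections ~ %.0f" % n_obs)   # ~17,600 here
    for name, n_par in models.items():
        print("%-33s %6d params, obs/param %.2f"
              % (name, n_par, n_obs / n_par))

With these assumed numbers the ratio is about 0.63 for individual Bs
versus roughly 0.8 for the grouped-B or TLS models: the ~7,000 extra
parameters bought by individual Bs come out of a reflection budget that
is already thin, which is exactly the kind of situation the statistical
model-selection analysis described above is meant to flag.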