How high B factors can go depends on the refinement program you are using.
In fact, my impression is that the division between the "let the B
factors blow up" and "delete the unseen" camps correlates with each
camp's preferred refinement program. You see, phenix.refine is relatively
aggressive with B factor refinement, and will allow "missing" atoms to
attain very high B factors. Refmac, on the other hand, has restraints
that try to make B factor distributions look like those found in the
PDB, and so tends to keep nearby B factors similar. As a result, you may
get negative difference density ("red density") over disordered regions
from Refmac, inviting you to delete the offending atoms, but not from
Phenix, which will keep raising the B factors until the density fits.
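Whichever program you use, it is worth checking where the B factors
actually end up. Below is a minimal sketch in Python (assuming gemmi;
the file name and the 3x cutoff are made up) that flags residues whose
average B factor has drifted far above the rest of the structure:

    import statistics
    import gemmi

    # Hypothetical input file; any PDB/mmCIF written by your refinement
    # program will do.
    st = gemmi.read_structure("refined.pdb")
    model = st[0]

    # Median B factor over all atoms, used as a reference point.
    all_b = [atom.b_iso for chain in model for res in chain for atom in res]
    median_b = statistics.median(all_b)

    # Flag residues whose mean B exceeds ~3x the structure median; the
    # cutoff is arbitrary and only meant to highlight "blown up" loops.
    for chain in model:
        for res in chain:
            mean_b = statistics.fmean(atom.b_iso for atom in res)
            if mean_b > 3 * median_b:
                print(f"{chain.name} {res.seqid.num:4d} {res.name}: "
                      f"<B> = {mean_b:.1f} (median {median_b:.1f})")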
Then there are programs like Vagabond that don't formally have B
factors, but rather let an ensemble of chains spread out in the loopy
regions you are concerned about. This might be the way to go?
You can also do ensemble refinement in the latest Amber. That is, you
run an MD simulation of a unit cell (or more) and gradually increase the
weight on the structure factor restraints. This would probably result in the "fan" of
loops you have in mind?
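The "gradually increase" part is just a weight schedule. I am not
quoting the actual Amber input keywords here; the sketch below (segment
count and weight range are made-up numbers) only illustrates the kind of
ramp you would feed to successive MD segments:

    # Sketch of a restraint-weight ramp for successive MD segments.
    # Segment count and start/end weights are illustrative only; the real
    # Amber / X-ray restraint input syntax is not shown.
    N_SEGMENTS = 20
    W_START, W_END = 0.0, 1.0   # relative structure-factor restraint weight

    weights = [W_START + (W_END - W_START) * i / (N_SEGMENTS - 1)
               for i in range(N_SEGMENTS)]

    for i, w in enumerate(weights, start=1):
        print(f"segment {i:2d}: xray restraint weight = {w:.3f}")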
-James Holton
MAD Scientist
On 7/28/2024 8:13 AM, Javier Gonzalez wrote:
Dear CCP4bb,
I'm refining the ~3 Å crystal structure of a big protein, largely
composed of alpha helices connected by poorly resolved loops.
In the old pre-AlphaFold (AF) days I used to simply remove those
loops/regions with overly high B factors, because there was little to
no density at 1 sigma in a 2Fo-Fc map.
However, given that the quality of a readily computable AF model is
comparable to a 3 Å experimental structure, and that the UniProt
database is flooded with noodle-like AF models, I was considering
depositing a combined model in the PDB.
Once R/Rfree reach a minimum for the model truncated at the poorly
resolved loops, I would build an augmented model with the missing
regions taken from the AF prediction (provided they have an acceptable
pLDDT value), assign them zero occupancy, and run only one cycle of
refinement to calculate the formal refinement statistics.
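For concreteness, a minimal sketch of that grafting step (assuming
gemmi, that the AF model has already been superposed onto the
experimental structure and shares its chain names and numbering, and
that pLDDT sits in the B-factor column as in standard AF output; file
names and the pLDDT cutoff are made up):

    import gemmi

    exp = gemmi.read_structure("experimental_truncated.pdb")
    af  = gemmi.read_structure("alphafold_model.pdb")
    PLDDT_CUTOFF = 70.0   # arbitrary "acceptable pLDDT" threshold

    exp_model, af_model = exp[0], af[0]
    present = {(ch.name, res.seqid.num) for ch in exp_model for res in ch}

    for af_chain in af_model:
        target = exp_model.find_chain(af_chain.name)
        if target is None:
            continue
        for res in af_chain:
            if (af_chain.name, res.seqid.num) in present:
                continue              # residue already in the experimental model
            if min(atom.b_iso for atom in res) < PLDDT_CUTOFF:
                continue              # skip low-confidence stretches
            for atom in res:
                atom.occ = 0.0        # grafted loop goes in at zero occupancy
            target.add_residue(res)   # lands at the chain end; re-sort/renumber before refining

    exp.setup_entities()
    exp.write_pdb("combined_model.pdb")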
Would that be acceptable? Has anyone tried a similar approach?
I'd rather do that than deposit a counterintuitive model with
truncated regions that few people would find useful!
Thank you for your comments,
Javier
--
Dr. Javier M. González
Instituto de Bionanotecnología del NOA (INBIONATEC-CONICET)
Universidad Nacional de Santiago del Estero (UNSE)
RN9, Km 1125. Villa El Zanjón. (G4206XCP)
Santiago del Estero. Argentina
Tel: +54-(0385)-4238352
Email <mailto:bio...@gmail.com> Twitter <https://twitter.com/_biojmg>