Another question on RMSD:
I have two structures of the same protein superposed with the LSQ Superpose
tool in Coot, matching the first ~100 residues of the N-terminal domain. Now
I'd like to calculate the pair-wise RMSD for the entire pre-superposed
structures (~400 residues). Can LSQKAB output RMSD values for structures that
are already superposed?
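In case it helps while waiting for an answer on LSQKAB itself: since the two
models are already superposed, the RMSD can be computed directly, without any
further fitting. Below is a minimal Python sketch; the file names and the
CA-only selection are just assumptions for illustration.

import math

def read_ca_atoms(path):
    # Collect CA coordinates keyed by (chain, residue number) from ATOM records.
    coords = {}
    with open(path) as fh:
        for line in fh:
            if line.startswith("ATOM") and line[12:16].strip() == "CA":
                key = (line[21], line[22:26].strip())
                coords[key] = (float(line[30:38]),
                               float(line[38:46]),
                               float(line[46:54]))
    return coords

a = read_ca_atoms("fixed.pdb")    # assumed file names
b = read_ca_atoms("moved.pdb")
common = sorted(set(a) & set(b))  # residues present in both models
sq = sum((a[k][i] - b[k][i]) ** 2 for k in common for i in range(3))
print("RMSD over %d CA pairs: %.3f A" % (len(common), math.sqrt(sq / len(common))))

The same loop with the CA test removed gives an all-atom RMSD, if that is what
you are after.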
On Tuesday, November 23, 2010, Paula Salgado wrote:
> Resolution is 1.5A, Wilson B is 11.8.
Right. So there you have it. The Wilson B is telling you, straight from
the data, that the expected B factors are on the order of 11.8 A2.
You reported your model as
> > > Average B value for main chain
Resolution is 1.5A, Wilson B is 11.8.
Does that mean my values are well within the expected range? I knew that at
this resolution B factors are considerably lower, but I was suspicious that
some seemed so low, especially after recent problems of over-refinement of
B factors with another protein.
Note that Refmac will include the TLS contribution in the B-factors
on the atom records if the keyword "TLSO ADDU" is given (of course,
this is good for analysis and PDB submission, but not good for running
subsequent Refmac runs, since Refmac expects the atom records to
contain the "residual" B-factors).
Dear Paula,
I do not know about phenix, but the refmac5 output PDB file does not contain the
contribution from the TLS refinement (not sure I get the terminology right). In
order to get an idea of the average B-values, you should run tlsanl on the
output of refmac5 with the keyword 'bresid true'.
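In other words (a standard relation, not specific to any one program), the
total isotropic B that TLSANL reconstructs is
\[ B_{\text{total}} = B_{\text{residual}} + B_{\text{TLS}}, \qquad B = 8\pi^{2}\langle u^{2}\rangle, \]
i.e. the per-atom "residual" value left on the ATOM records plus the isotropic
equivalent of the TLS contribution.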
Hi Joe,
I really want to reiterate both Annie's and Patrick's suggestion of trying
different drop ratios and protein amounts; however, you might also try adding
some fresh reducing agent, particularly if you are trying the scale-up with
protein that has been hanging around (or sitting around) since the initial
screening.
On Tuesday, November 23, 2010, Paula Salgado wrote:
> Dear all
>
> I'm refining a 33kDa protein model and I have noticed that although all
> other statistics seem fine, B factor values are quite low,
What is the resolution?
What is the Wilson B?
> with many around
> 7-10A2 and average overall
Dear all
I'm refining a 33kDa protein model and I have noticed that although all
other statistics seem fine, B factor values are quite low, with many around
7-10A2 and average overall values as follows.
Total number of atoms in chain A 2378
Average B value for main chain
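If anyone wants to reproduce this kind of summary and set it directly against
the Wilson B, a quick Python sketch along these lines will do it (the file
name and the main-chain atom set are just illustrative assumptions; mind the
residual-vs-total B caveat above when TLS is in use):

MAIN_CHAIN = {"N", "CA", "C", "O"}

def mean_b(path):
    # Average B-factor of main-chain vs side-chain atoms in a PDB file.
    main, side = [], []
    with open(path) as fh:
        for line in fh:
            if line.startswith("ATOM"):
                b = float(line[60:66])  # B-factor columns 61-66
                (main if line[12:16].strip() in MAIN_CHAIN else side).append(b)
    return sum(main) / len(main), sum(side) / len(side)

b_main, b_side = mean_b("refined.pdb")  # assumed file name
print("main chain <B> = %.1f A2, side chain <B> = %.1f A2" % (b_main, b_side))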
Joe-
You might want to try your original drop ratio of 0.6 ul protein + 0.5 ul well
in your optimizations, in the 96-well format used to obtain your initial
crystals.
HTH!
annie
From: CCP4 bulletin board [mailto:ccp...@jiscmail.ac.uk] On Behalf Of Joe
Sent: Tuesday, November 23, 2010 12:03 PM
Joe
The most common reason why people have difficulty scaling up is that
they put too much protein in the drop. This may be true in your case
because you report that you are getting heavier precipitation.
What people don't realize is that you lose a lot of protein on the
surface of the plate.
I would suggest doing some rigorous quality control on your protein
stock, e.g. by mass spec. Also, it may help to dispense small
aliquots into PCR tubes, freeze them at -80 C, and use only those stocks
for crystallization. I had exactly this problem, and finally
concluded that the problem was heterogeneity of the protein.
Hi Mischa,
> We have a similar case. There is difference density, but only for some of
> the hydrogens (mostly methyl groups on Leu, Ile, Val, Ala). How does one
> decide which hydrogens to include in explicit refinement? The case in
> question has 0.99Å data (diffraction is significantly better, b
Hi all,
I have recently had problems reproducing some conditions identified from
high-throughput screening.
The initial screening (10 mg/ml protein, 0.5 ul well + 0.6 ul protein, 23 C)
gave rise to at least three hits from different screen kits.
The follow-up grid optimization (1.5 ul well + 1.5 ul
The default mode of autoindexing in HKL2000 and Denzo is to search for
unit cell lengths that produce spots which can be resolved at the data
collection distance and with the specified spot size (this can be changed in
the HKL2000 interface). If the unit cell of your crystal is significantly
longer, the program may fail to find the long axis.
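Roughly speaking (flat detector, small angles, spots near the beam centre),
adjacent reflections along a cell axis of length $a$ are separated on the
detector by about
\[ \Delta x \approx \frac{\lambda D}{a}, \]
where $D$ is the crystal-to-detector distance and $\lambda$ the wavelength,
so a long axis combined with a short distance or a large spot size makes
neighbouring spots run together and autoindexing can miss that axis.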
I have rarely worked with such data, but when we did, we always kept the
hydrogens in riding positions for refinement. You can check the
important ones by calculating structure factors without them and seeing how
well they fit the map. For important residues like the Asp in catalytic
triads that can be very revealing.
Hi Petr,
You don't have to rely on automatically picked spots if you do not trust them.
Have you tried manually selecting spots and then limiting the high resolution
cutoff for indexing? Oftentimes that is sufficient to get past at least the
indexing step.
hope that helps,
Eric
Hi,
I agree with what has been mentioned about "fuzzy spots". But what seems
obvious as well is that the resolution for spot picking should be
limited (to 3.5 or 4 A resolution). It is difficult to judge from an
image of a diffraction pattern, but it seems to me from this image that
the spots
Petr,
Looks to me as if the problem is that your spots are very fuzzy, so even though
you can see them, the spot-picking algorithms prefer to pick the sharp little
spots, which are really noise. What I would do is set your peak-picking
parameters to pick very few spots, e.g. use "Fewer peaks" ma
We have a similar case. There is difference density, but only for some of the
hydrogens (mostly methyl groups on Leu, Ile, Val, Ala). How does one decide
which hydrogens to include in explicit refinement? The case in question has
0.99Å data (diffraction is significantly better, but data were col
Dear colleagues,
I am working on a dataset that is hard to process. The data extend to about
3 A resolution. As we are not able to repeat the experiment, I have to use
this one, which was collected in a rather dirty way.
The problem starts immediately with spot finding. I have tried HKL2000,
XDS, D*trek
Hi everybody
I have a 2_scala.log in which all programs ended with Normal termination,
except Mtzdump, which finished with NO REFLECTIONS LISTED.
What can that mean?
Thanks for your attention
Marta
Dear all,
I have been using Coot for a long time without any problem, but
today, when I tried to open the program, something failed. The
message was:
/Applications/coot.app/Contents/Resources/script: line 205: 1965
Segmentation fault $COOT_PREFIX/bin/coot-real $command
coot-e
PS On a matter of terminology: what you are calling the 'sd' (i.e.
'standard deviation') is not the true standard deviation, it's only an
estimate, since it's obtained from the data. Such an estimate of the
standard deviation used to be called (you guessed it!) the 'estimated
standard deviation' (or 'esd'); the currently recommended term is 'standard
uncertainty' (SU).
Bryan,
Assuming that your values of sd(I) are accurate estimates of the
standard uncertainties of your Is, then I/sd(I) is a normalised
variate with SU = 1. So the SU of the mean value of I/sd(I) (where, as
Phil says, the Is are simply the measurements of the various
reflections in a shell) is given by 1/sqrt(n), with n the number of
reflections averaged.
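Spelled out, this is just the standard error of a mean, nothing
program-specific: if each $X_i = I_i/\mathrm{sd}(I_i)$ has SU 1 and the
measurements are independent, then
\[ \mathrm{SU}\!\left(\frac{1}{n}\sum_{i=1}^{n} X_i\right) = \frac{1}{\sqrt{n}}. \]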
Can you see the hydrogens in the maps when they are excluded from the
phasing? There is little hope of refining them independently if you cant
see discrete peaks for them.
Eleanor
On 11/23/2010 05:18 AM, Kenneth Satyshur wrote:
Sirs:
We are attempting to refine hydrogens on a ligand (which
I don't think this is a meaningful question. For Mn(I/sd), we take all
measurements of each reflection h to get its average Ih, and an estimate of
the SD of this average, sd(Ih) (from the adjusted input sigmas), hence the
ratio Ih/sd(Ih). Then we average this ratio over all reflections in a
resolution shell.
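Written out, that per-shell mean is simply
\[ \mathrm{Mn}(I/\mathrm{sd}) = \frac{1}{N}\sum_{h\,\in\,\mathrm{shell}} \frac{I_h}{\mathrm{sd}(I_h)}, \]
with $N$ the number of unique reflections in the resolution shell.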
Even with SHELX, at 1.2A you should use a riding model for hydrogens and
not refine them freely. SHELX has a useful facility (OMIT $H or OMIT
followed by specific hydrogen atom names) to keep the hydrogen atoms in
the atom list but not include their contributions to the structure
factors. Then you can check whether the hydrogens show up in the difference map.