Hi,
It looks like you're missing your LABIN line.
Pete
Ashu Kumar wrote:
Dear all,
I am trying to use CAD to apply b-factor sharpening to my 3.7 A data.
Following is the script which I am running.
cad hklin1 test.mtz hklout test_out.mtz <
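For the archives: a complete CAD script with a LABIN line (and a negative B-factor for
sharpening, as suggested elsewhere on the BB) might look like the sketch below. The column
labels FP/SIGFP and the -50 value are placeholders - adjust them to match your mtz, and
check the cad documentation for the exact SCALE syntax:

cad hklin1 test.mtz hklout test_out.mtz <<EOF
LABIN FILE_NUMBER 1 E1=FP E2=SIGFP
SCALE FILE_NUMBER 1 1.0 -50.0
END
EOF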
Rojan Shrestha wrote:
What is the tool in CCP4 to calculate F from F(+) and F(-) with their
standard deviations?
mtzMADmod
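A minimal mtzMADmod script might look like the following (the LABIN tokens here are from
memory - check the program documentation for the exact label assignments your version
expects; file and column names are placeholders):

mtzMADmod HKLIN anom.mtz HKLOUT withF.mtz <<EOF
LABIN F1(+)=F(+) SIGF1(+)=SIGF(+) F1(-)=F(-) SIGF1(-)=SIGF(-)
END
EOF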
Another vote for Situs colores - we've had very good luck using it
to dock domain structures into low resolution multi-crystal SAD maps.
Pete
Oliver Clarke wrote:
Hi all,
Can anyone recommend software to dock previously solved domains into a (very)
low-resolution experimentally phased m
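For anyone finding this in the archives: a basic colores run is a single command along
these lines (file names and the resolution value are placeholders; see the Situs
documentation for the full option list):

colores lowres_map.situs domain.pdb -res 15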
sftools is another option (for applying randomly distributed errors to Fc).
Depending on what you're simulating, it might be worth pointing out that
while it's relatively easy to simulate Fo, it's more difficult to
simulate sigFo realistically. This matters for any simulations using
sigma_a (
Ed Pozharski wrote:
[snip]
If I understand your proposal and reference to SQL correctly, you want
some scripting language that sounds like simple English. Is the
advantage over existing APIs here that one does not need to learn
Python, C++, (or, heaven forbid, FORTRAN)? I.e. programs would lo
gical questions, if we're using computational tools then I feel it's helpful
to understand how those tools work (and possibly more importantly, how
to tell when they fail to work as we would expect).
Just my 2 cents...
Pete
Nat Echols wrote:
On Fri, Jun 7, 2013 at 8:37 AM, Pete Mey
"Resist the urge to duplicate the work of others" should be the first rule of
programming if it's not already (although it's the hardest rule to follow).
On the other hand, programming an implementation of something is a good
way to make sure that you really understand it - even if you end up
for line in open("PDBfile.pdb"):
    if "ATOM" in line:
        column = line.split()
        c4 = column[4]
If you're dealing with a pdb that may have REMARK lines, you're better
off using "if line.startswith('ATOM') or line.startswith('HETATM')" for
your conditional here.
I'd also use the
Wenzong,
You may want to look at rosetta's design modules.
Pete
Wenzong Li wrote:
I want to build the loop based on a computer screen against my peptide.
Thanks
Wenzong
- Original Message -
From: "Joel Tyndall"
To: "Wenzong Li" , CCP4BB@JISCMAIL.AC.UK
Sent: Monday, April 22, 2013
Guangyu,
If I'm understanding your question correctly, you're asking if all other
things are equal (resolution, degree of disorder, etc), does improving
the data/parameter ratio result in an improved model?
The short answer is: (at least sometimes) yes.
Pete
Guangyu Zhu wrote:
Ian,
Because
Ed,
Ed Pozharski wrote:
> IIUC, you are saying that the nature of the error should be independent of
my decision to model it or not. In other words, if I can potentially
sample some additional random variable in my experiment, it contributes
to precision whether I do it or not. When it's not sampled
Hi Ed,
Ed Pozharski wrote:
An interesting thing happens when I do that. What used to be a
systematic error of pipetting now becomes statistical error, because my
experiment now includes reproducing dilution of the stock. In a
nutshell,
Whether a particular source of error contributes to accura
I use bzip2 as well. In addition, I generally store md5sums of the
images (before and after compression) because it's quicker to check the
hash for validity than load up the images - but this may be overkill.
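The bookkeeping is all standard command-line tools; a sketch (file names are placeholders):

md5sum *.img > images_raw.md5          # hashes of the raw images
bzip2 -9 *.img
md5sum *.img.bz2 > images_bz2.md5      # hashes of the compressed copies
md5sum -c images_bz2.md5               # later: verify the archive quickly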
Pete
Graeme Winter wrote:
Dear Eugene,
Personally I have a habit of using bzip2 fo
native datasets? And would a native dataset for refinement be
compromised by friedel pair differences collected at an edge?
If you've got anomalous data available, you may get better results by
using your friedel pair differences in refinement - especially if your
diffraction stays in the ~ 4
Just out of curiosity, why use PDB format instead of converting PDB into
a format readable by a more general 3d graphics program and combining
with your cube/sphere/line segment there?
Pete
Francois Berenger wrote:
Hello,
For some new project, I'd like to be able to generate things
and store
Hi Yoni,
You can have multiple versions of CCP4 installed on a system without
causing problems (although usually only one is active).
In general, it's sufficient to source the new ccp4.setup-(bash|csh)
prior to compiling the new version (I'm assuming you're using a source
version from the "m
I agree with Carlos's suggestions (search for your two domains
separately, and as poly-Ala models).
There's a few other things that may help MR, which you may or may not
have tried:
1. Reset the B-factor of your search models to the wilson B-factor of
your dataset.
2. Search using only lower
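For point 1, pdbset can reset every atomic B-factor in one pass - a sketch, where the
file names and the Wilson B of 80 are placeholders (check the pdbset documentation for
the exact keyword behavior):

pdbset xyzin search_model.pdb xyzout search_model_wilsonB.pdb <<EOF
BFACTOR 80.0
END
EOF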
3) Activating map sharpening results in mtz files that look just normal
and open in coot after the typical map calculation break, but no maps
are displayed. This is independent of the sharpening factor I choose
(between 5 and 60).
I haven't used coot for map sharpening, but using the range
One thing to keep in mind is that there's usually a trade-off between
setup (writing and testing) and execution time. For one-off data
processing, I'd focus on implementation speed rather than execution
speed (in other words, FORTRAN might not be ideal unless you're already
fluent with it).
Hi Uma,
I've used HKL2000 to combine datasets from different crystals, so it's
definitely possible to do so (although depending on the data volume it
may be better to deal with scalepack directly).
There are two things that you don't mention about your data - the
(approximate) resolution, an
I don't know of anything directly within CCP4 - but mrc format is almost
identical to ccp4 map format. You could try using the mrc version as
ccp4 and see if it works. If not, Uppsala mapman will convert BRIX to CCP4.
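If you go the mapman route, the conversion is a short session along these lines (syntax
from memory - check the RAVE/mapman manual for the exact read/write format keywords):

mapman <<EOF
read m1 input.brix BRIX
write m1 output.map CCP4
quit
EOF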
Pete
Froehlich, Chris wrote:
Dear colleagues,
is there any program enclo
Hi Ed,
1. Go to NCBI BLAST and run the sequence against the PDB subset. The
resulting list will have identities listed, so manual parsing is doable
if there aren't too many hits.
Have you looked at using NCBI EUtils for this? This might not solve
your problem directly, but it should only
Jason,
It seems like you're on the right track - it's usually not a problem to
increase array sizes and rebuild.
How are you building (aka "cd $CCP4/src; make fsearch" or something
else)? If you're calling the fortran compiler (or ld) directly, it
looks like there's a problem with the li
Nicholas,
My experience from comparing conventional (FFT-based) and maximum-entropy-
related maps is that the main source of differences between the two maps
has more to do with missing data (especially low resolution overloaded
reflections) and putative outliers (for difference Patterson maps
Ed,
I may be wrong here (and please by all means correct me), but I think
it's not entirely true that experimental errors are not used in modern
map calculation algorithms. At the very least, the 2mFo-DFc maps are
calibrated to the model error (which can be ideologically seen as the
"error of ex
Your understanding is correct - sigmaF values aren't required for
calculating electron density.
Many programs that calculate maps have an option to use the F/sigmaF
ratio to threshold the amplitudes used in map calculation - which would
require sigmaF. This isn't something I've seen used recen
James Holton wrote:
In general, a Bijvoet ratio of 3% or so is needed to solve a structure (the
current world record is 0.5% and lots of multiplicity). The above web page
will also tell you how many crystals you need if you type in their size in all
three dimensions. But this estimate assum
On the whole, however, I have not seen any significant performance
advantage of 64 over 32 bit running crystallography programs
side-by-side on equivalent hardware. I have also been unimpressed with
the supposed "memory access" advantages of 64 bit. I had to do a LOT of
recompiling programs i
Have you tried mtz2various (with cif output)?
Pete
Francis E Reyes wrote:
It seems that deposition of map coefficients is a good idea. Does someone have an mtz2cif that can handle this?
Thanks!
F
-
Francis E. Reyes M.Sc.
215 UCB
University of
Artem Evdokimov wrote:
I can't resist asking: If we assume that the data fabrication
techniques and the techniques for discovery of such activities should
have the same sort of arms race as the development of viruses and
anti-malware software (but of course on a much more modest scale since
struc
Hi Graeme,
That's interesting. When I looked at this (and I would say I looked
reasonably carefully) I found it only made a difference in the scaling
- integrating across the whole area was fine. However, I would expect
to see a difference, and likely an improvement, in scaling only the
data you
Hi,
1. You don't mention how many Zn sites you have, or how big your protein
is - as Phil mentioned, these are factors.
2. I'll add to the chorus - pick your wavelength(s) based on a
fluorescence scan.
3. If "1.5A0" is your wavelength, not your resolution, you may still
have some anomalous sign
I wrote one a while ago:
phenix.reciprocal_space_arrays
https://www.phenix-online.org/documentation/reciprocal_space_arrays.htm
it does exactly what you asked for and much much more.
Let me know if you have questions.
Pavel
On Fri, Feb 24, 2012 at 9:28 AM, Pete Meyer
Does anyone know of a program that will output sigma_a and D values for
individual reflections (yes, I know they're usually calculated as bin
values)? Or, for that matter, an option to an existing program I
haven't found in the documentation yet?
Thanks,
Pete
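(For anyone searching the archives: the basic invocation is along the lines of
"phenix.reciprocal_space_arrays model.pdb data.mtz" - exact options vary between
versions, so see the documentation page linked above.)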
Oh, and don't fall for the "so other people can read your code" trick.
Trust me, NOBODY wants to read your code! Unless, of course, they are
trying to re-write it in their favorite language.
I don't think this is necessarily the case. If I'm using your code to
do something scientifically in
Hi,
Regardless of what the consensus on naming for the technique is, I'd
suggest you combine these datasets during phasing (I'm aware of MLPHARE,
SHARP, PHASIT supporting multiple anomalous datasets during phasing;
others probably do as well). Combining at the merging step
(pointless/scalepack/
This may be true for older software which restrains B-factors only to
bonded atoms, but it is not the case in Phenix*, which takes into
account all nearby atoms, not just bonded ones. The result is that
individual B-factor refinement is very stable at low resolution - we
don't know what the lim
As others have mentioned, your geometry/x-ray weight may need to be
adjusted.
However, at 3.2 Angstroms I'd recommend against using atomic B-factors -
the "rule of thumb" for this is 2.8 Angstroms for atomic B-factors (or
at least it was back in the day). It might help to use an overall
B-factor
Filip Van Petegem wrote:
In a case I'm currently looking at, I'm particularly dealing with cryo-EM data,
not X-ray structures, but with the same underlying principles: what are the
odds that all pixels of the map move together in the same direction?
I suspect you may be better off asking an
James Holton wrote:
I go to a lot of methods meetings, and it pains me to see the most
brilliant minds in the field starved for "interesting" data sets. The
problem is that it is very easy to get people to send you data that is
so bad that it can't be solved by any software imaginable (I've go
Hi,
It's definitely possible, but the devil is in the details. It's a
two-step process: transform the coordinates of the mask so it's in the
right spot in the new cell, and interpolate the mask into the
appropriate grid for the new crystal form.
The last time I did this I was using O/PHASES
Similar to Tim's suggestion, but your low resolution limit may be too
low (check refmac's chart of R vs resolution to confirm this).
Pete
王瑞 wrote:
hello everyone:
Excuse me, could anyone give me some suggestions? After several cycles of
refmac, it gives me such a result:
I could be wrong, but my understanding is that they're removing motion
blur from the image - so I don't think it'll be directly applicable to
density modification.
But I'd be very happy to be wrong on this one.
Pete
Francis E Reyes wrote:
http://hoowstuffworks.blogspot.com/2011/10/adobe-demo
I've noticed that people seem to be using or recommending secondary
structure restraints for low resolution refinement lately, but I'm
somewhat confused about the logic underlying their use.
Using ballpark figures from a system I'm familiar with: 3 atoms
(9 positional parameters), 4500
For a fourth choice, you can do a stand-alone build of PyMOL and its
dependencies with relatively little problem (at least on 10.6; haven't
upgraded yet).
Pete
Xiaoguang Xue wrote:
As I know, you have 3 choices:
1, You can apply a free educational-use-only MacPymol from the company,
Schrodi
I'd be surprised if the model phases were introducing bias into an
anomalous difference map (they might be adding noise, but that's another
story). I've also never seen feedback in model-phased anomalous
difference maps (experimental-phased anomalous difference maps can have
pretty bad feedbac
Hi,
Depending on how many Zn sites you have, you may be able to do Zn MAD
for your native crystals. You don't mention if you've tried combining
your various sources of phase information; if not, it's worth looking into.
You may also want to look into various multi-crystal techniques
(averagi
Paul Smith wrote:
2) ditch all gui support or, from scratch, develop a gui front-end that uses
none of the following: Qt, Ruby, Perl, Python, TK/TCl, etc. This gui must
compile and run on all mainstream hardware on all major operating systems. The
custom gui might also need a custom driver
We know one protein can interact with different partners by different
domains or different parts. Is there a protein that could interact
with different proteins through the same part (maybe the same part but in
different conformations)? Thank you in advance!
The RNA Polymerase II rpb1 CTD intera
What software do people on the bb use for encryption? What can be
recommended without hesitation?
I've had no problems with TrueCrypt, which I've been using on linux and
OS X for ~4 years (for personal systems, not lab ones). But I haven't
used it for storing files I've been doing anything I
Hi Francis,
You may want to look into ACMI, which is designed for building into low
resolution maps.
In my experience with lower resolution data (4.2 - 4.5), "performing
well" is a different story than at higher resolutions. At low
resolution, I'd regard a composite omit map calculated fro
but that last command makes every single file executable, which is
rather ugly (but doing a selective chmod 0755/0644 is a bit tricky
I was wondering about having every file executable in the source
distributions a while back, and came up with a quick way to make it
less ugly (inline due to a
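For what it's worth, the selective chmod can be done with find - a sketch (not
necessarily the approach referred to above, and the list of files to re-mark
executable is hypothetical):

find . -type d -exec chmod 0755 {} +
find . -type f -exec chmod 0644 {} +
chmod 0755 configure install.sh        # re-mark the scripts that need it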
It'll depend on your data, but you'd probably be better off using the
inflection (rather than peak) and remote datasets for dispersive
difference maps. This signal is usually fairly weak to begin with, and
not using the inflection datasets weakens it further.
Pete
FWIW - my understanding is
In my hands, maprot can sometimes be finicky. You might want to check
MODE FROM vs MODE TO; or use an Fcalc map so that you'll be able to tell
when the transformation is correct.
Francis E Reyes wrote:
Hi all
I was just fiddling around with ncs and maps, so I tried to use
maprot a 2fofc
You might want to look into the Uppsala compressed mask format (it's an
exact representation of a binary mask - compression is handled by just
tracking the start and end points of the stretches).
Pete
Hailiang Zhang wrote:
Hi,
As I understand, the general molecular mask generated by CCP4 (eg
There are a couple of tricks I know for dealing with datasets that are
problematic to index (ignore if you've tried them):
1. double-check the machine parameters (detector to crystal distance and
beam center in particular).
2. Index in mosflm using widely separated images (usually n,n+45,n+90
Ben Eisenbraun wrote:
Hi Pete,
Usually python will regenerate pyc files as needed (assuming the source
files are available). So "find $COOTDIR -name "*.pyc" -exec rm {} \;"
might be worth a try.
This is true, but the import error is being generated by Coot trying to
read the _system_ .pyc f
They're both getting Python "magic number" errors, which from what I know
result from one Python version trying to run a .pyc file created with another Python
version, but I'm not sure exactly how to work around it in the context of the stand-alone
package. The students aren't planning to do
My take on things is that the helpfulness of stereo depends inversely on
the resolution (and/or quality) of the data. If I'm dealing with a 4-5
Angstrom map, stereo is more or less required. For high quality 2-3
Angstrom data, not so much.
For teaching purposes, it's probably a good idea to
Xiaoguang Xue wrote:
[snip]
Why don't we try to use this idle computing time to help us do some more
important and interesting things, like determining protein structures?
Battery life. If a phone is doing number crunching, it's not putting
the processor into a low-power state. The
Or could anyone suggest a program that would be of help?
CAD scaling with a scale factor of 1.0 and negative B-factor (isotropic
or anisotropic) should do the trick. I haven't had much luck with
density sharpening (at least at ~4-5 Angstroms), but others have
apparently had some success with
Ian,
Now it's still true that Ncon reduces Npar but it's no longer true
that Ncon increases Nobs.
This seems counter-intuitive enough for me to ask a stupid question:
which parameters are being removed in restrained refinement?
For constraints this seems straightforward (transformation of >
mapview_x from PHASES shows maps (and masks) in 2D.
Pete
Leila Foroughi wrote:
I'm currently trying to compare single crystal X-ray data for a protein crystal
with previously published electron microscopy data. I would really like to be
able to take my electron density map (from X-ray) and v
A few things that might be worth looking at:
1. How is your beam divergence varying as you fix mosaicity at different
levels? Does it look relatively stable at a realistic value for the
beamline? If I'm remembering correctly, mosaicity and beam divergence
are highly correlated within mosflm.
following this discussion, I was wondering how much sense it makes to use
TLS domains and experimental phases at the same time.
My take on it is that if you have low resolution data, then it makes a
good deal of sense to use both experimental phases and TLS groups. But
this assumes that the T
A few things to try (or double-check):
1. If you ran phaser with SGALTERNATIVE ALL, make sure the mtz file you
gave to refmac has the same screw axes as the MR solution. If this is
off, it'll lead to higher R-factors.
2. As Fred mentioned, try with a single copy of the protein, and check
th
The rumblings here at the Univ. of Washington among the computational modelers is that
some of their current models might be more representative of protein structures in
solution than are the crystal structure models. It may take less than a "couple of
decades" for a reduced emphasis on crysta
While we're on the subject, I notice that imosflm has a convenient
environment variable for specifying the location of wish. Is there a
similar means of specifying the location of bltwish for ccp4, or am I
obliged to dump it in /usr/bin?
CCP4I_TCLTK in include/ccp4.setup-[bash|csh].
-Kevin
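For example, in ccp4.setup-bash that's something like the following (the path is a
placeholder - check the comments in the setup file for whether it expects the directory
or the full path to the binary):

export CCP4I_TCLTK=/usr/local/tcltk/bin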
Inverting a map with sfall imposes different restrictions on allowable
gridpoints and axis order within the electron density map (due to FFT vs
slow FT). Check what the sfall documentation lists as requirements
for your spacegroup, and adjust the coot map output settings as needed
(or if it's eas
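For reference, a map inversion run is along these lines - a sketch, where the file names
and resolution limits are placeholders, and your spacegroup may need extra keywords per
the documentation:

sfall mapin coot_map.map hklout inverted.mtz <<EOF
MODE SFCALC MAPIN
RESOLUTION 50.0 3.5
END
EOF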
Obviously, there is no need to list (in the PDB header or in the .log file)
groups for the first and second options, since it is obvious.
In the third option a user provides atom selections for the groups, so
they are listed in a parameter file and therefore listing them in .log
or in PDB files will
Does anyone know offhand if or where the ADP groups used in refinement
are listed in the PDB headers? I've found TLS groups and NCS groups,
but nothing about which residues are in the ADP groups.
I'm trying to figure out exactly what was done in a published structure,
and don't currently have
In the first cycle DMMULTI calculates solvent masks for the two crystals. Does
it continuously improve these masks or stick to the initial one?
If it acts like DM, then it updates it (using default options). But on
the other hand, DMMULTI doesn't appear to support SOLMASK UPDATE
keywords.
I don't have any suggestions, but I do have a few questions that might
shed some light on this:
1. How exactly are you building scala (I assume it's something like:
download source distribution -> edit & source config file -> ./configure
; make ; make install -> run test scripts (scala ok) ->
Brunger, A., DeLaBarre, B., Davies, J. & Weis, W. X-ray structure
determination at low resolution. Acta Crystallographica Section D:
Biological Crystallography 65, 128-133 (2009).
The paper quotes Wayne Hendrickson (says "submitted") regarding
"determinancy" point (i.e. where the number of obse
As of 6.1.2, is there a consensus about using ctruncate versus truncate
(the fortran version)?
Pete
But now I want to run the script from a cgi script as part of the
processing by a web service. No matter what the name of the file
that was originally uploaded, the web server has homogenized it to
a local temporary file whose name is generated by a hash function
from the pid. The cgi script pas
If you want to treat the end result as a single dataset, you're probably
better off combining unmerged datasets (two different batches in
scala, or two different scaling sets in scalepack).
As far as I understand it, the data model in the mtz files
(project/crystal/dataset/column) isn't reall
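The mechanics for the scala route are roughly: sort the unmerged files together, then
define one run per sweep. A sketch (file names and batch ranges are placeholders, and the
batch numbers in the two files must not collide - renumber with rebatch first if they do):

sortmtz HKLOUT sorted.mtz <<EOF
H K L M/ISYM BATCH
xtal1_unmerged.mtz
xtal2_unmerged.mtz
EOF
scala HKLIN sorted.mtz HKLOUT scaled.mtz <<EOF
RUN 1 BATCH 1 TO 360
RUN 2 BATCH 1001 TO 1360
END
EOF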
>> 1) By submitting my structure to the PDB Validation Server for precheck and
>> validation, will my structure become publicly available in any way?
>
> It will not.
> But if you prefer, you can download the validation tools and run them
> on your home computer.
However, most recent workstation
I had a similar problem a few months back...
cd $CCP4/src; make lsqkab ; cp lsqkab $CCP4/bin/
Pete
Edward Miller wrote:
> Hey Folks,
>
> I need to recompile lsqkab to increase the parameter NATOM to 8.
>
> What would be the best way to only recompile lsqkab without having to
> recompile al
r Zn, although
I suppose that it's possible partial Se incorporation could be a factor
as well.
>
> Guangyu
>
>
> On 5/14/09 2:31 PM, "Pete Meyer" wrote:
>
>> You might find Structure 14, 973-982 Jun 2006 of interest.
>>
>> Larger protein, Z
You might find Structure 14, 973-982 Jun 2006 of interest.
Larger protein, Zn instead of Se (although only 8 sites), roughly
comparable to slightly lower resolution. This was mainly testing to
confirm that there was enough anomalous signal to phase off of, so the
model was essentially known; howe
If experience from intrinsic zinc is ok, I'll add my two cents.
> trying). I would be happy to hear about Se-Met cases, and data
> collection strategies (2wl vs. 3wl MAD vs. SAD, etc.) and phasing
> methods used in these cases, or references of them. Again, no other
Bert already mentioned collect
h segfault. (try to open Help -> About Pymol)
> (that bug is present in both python 2.4 an 2.5
> and none of pymol versions works.)
> see also
> https://lists.ubuntu.com/archives/universe-bugs/2008-November/019193.html
>
> Witek
>
> Pete Meyer wrote:
>> Two suggestio
Two suggestions...
Check that the coordinate fields in your pqr file are separated by
spaces (some pdb to pqr converters don't always do this in my hands,
probably due to large negative coordinate values). Also, try using your
.pqr file and .in file to run apbs from the command line and see if it
I was testing FPARTi,PHIPi in refmac 5.5.0089 and got an unexpected
result. With scaling off (no SCPART keywords), adding an FPART1,PHIP1
where all entries for both columns were 0 gives Rwork and Rfree of 1.0.
When scaling of FPART is turned on, the Rwork,Rfree values are as
expected (comparable
Well, either there haven't been any replies yet or my institution's
email server is unhappy with me (if it's email problems, sorry if I'm
duplicating other responses).
> This instance leaves room for some concern about the verification of
> deposited structure factors: what happened here is c
> PS: I vote for that "structure factor amplitude" be used in text books
> and |F| on cell phones. Student of 2015: "You mean 'abs-F' is really
> pronounced 'structure factor amplitude'? I didn't know that!"
By 2015, it would probably be some less-comprehensible variant of
instant-messaging con
> to my knowledge, none of the existing reciprocal-space refinement
> programs is really suitable for low-resolution refinement. In my
For what it's worth, I've had good luck with refmac5D, which
incorporates an SAS target into model refinement (I believe this has
now been incorporated into th
FYI, the distribution drivers work fine on our systems with the X4600
(stereo and everything).
Pete
Chris Ulens wrote:
> Hi,
> I'm trying to install a nVidia Quadro FX3000 card on a PC running Ubuntu
> 8.10 intrepid.
>
> I run into the following error when I install drivers I downloaded from
> t
In addition to the possible problem Ho mentioned (DNA conformational
changes), a few other things could affect how doable it is: percentage
of DNA vs protein in the complex and the resolution of the data.
Too little DNA mass wouldn't help phase the remainder of the complex, if
it could be located.
ucture.
> I do not know any ways of mapping RDCs on a structure.
>
> HTH,
> Robbie
>
> Pete Meyer wrote:
>>> dipolar couplings (NMR). But even then one should always look at the
>>> structure model in the context of the experimental data. High
>&
Eleanor,
Any ideas if the 0.01 in truncate is just being used as "arbitrary small
number to prevent overflow", or if it's serving another purpose? I
wasn't sure from reading truncate.f.
Thanks,
Pete
Eleanor Dodson wrote:
> Truncate doesn't "truncate" intensities or modify them in any way except
> dipolar couplings (NMR). But even then one should always look at the
> structure model in the context of the experimental data. High
Is there an easy way to do this for NMR data? For x-ray data, it's
relatively straightforward to re-calculate a map using the deposited
model and amplitudes, whic
It might be possible that there's a problem in the low resolution
portion of your data. You could check the wilson plot to make sure it
looks normal at that range, and possibly change the low-resolution
cutoff refmac is using.
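Changing the cutoff is a one-line keyword in the refmac script - a sketch (resolution
limits are placeholders, and the usual LABIN etc. are omitted):

refmac5 hklin in.mtz xyzin in.pdb hklout out.mtz xyzout out.pdb <<EOF
RESO 15.0 2.5
NCYC 10
END
EOF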
Pete
Jan Abendroth wrote:
> Hi all,
> I have a number of low-ish reso
For what it's worth, I'd try to get as much as possible out of the
experimental phases before going on to phased MR.
> I have SeMet MAD data to 3.6A that gives decent looking anomalous
> difference peaks, looks stable in mlphare, and produces solvent
> flattened maps to 2.8A in DM that look like t
Looks like this was a combination of new atom names, and using an older
version of refmac. Thanks to Garib, a combination of using the new
dictionaries and refmac 5.6 does the trick.
Pete
Pete Meyer wrote:
> Thanks for the quick reply.
>
> I got the noexit option for the ccp4-6.0.2
best wishes,
>
> Gerard.
>
> --
> On Mon, Oct 27, 2008 at 09:53:19AM -0500, Pete Meyer wrote:
>> Apologies for going slightly further off-topic...
>>
>> Last time I had a free half-day to look into sharp, I noticed that the
>> academic license pro
Apologies for going slightly further off-topic...
Last time I had a free half-day to look into sharp, I noticed that the
academic license prohibits reverse-engineering. This seemed to put any
comparative testing into a slightly grey area. For example, if I find
that sharp does the best job refin
y with the latest 5.5 version.
Thanks again,
Pete
Garib Murshudov wrote:
> The keyword is
>
> make newligand continue
>
> (I need to add noexit option. It makes sense)
>
> regards
> Garib
>
> On 26 Oct 2008, at 22:10, Pete Meyer wrote:
>
>> Hi,
>>
Hi,
I'm attempting to use refmac to re-calculate a map from a published
structure. Most of the time, this works with no problems.
Occasionally, refmac stops with "New ligand has been encountered.
Stopping now".
From the manual, I'd thought that using MAKE NEWLIGAND NOEXIT should
prevent this, but
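For the archives, the working script ended up along these lines, using the keyword Garib
gives upthread (file names are placeholders):

refmac5 hklin deposited.mtz xyzin deposited.pdb hklout recalc.mtz xyzout recalc.pdb <<EOF
MAKE NEWLIGAND CONTINUE
NCYC 0
END
EOF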