On Wed, 2012-05-23 at 18:06 +0300, Nicholas M Glykos wrote:
> It seems that although you are not doubting the importance of maximum
> likelihood for refinement, you do seem to doubt the importance of
> closely related probabilistic methods (such as maximum entropy methods)
> for map calculation. I think you can't have it both ways ... :-)
Nicholas,

I think that we are not comparing ML to no-ML (or maximum entropy), but
rather ML inflated by experimental errors vs pure ML that ignores them.

I may be crazy or stupid (or both), but certainly not crazy/stupid enough
to "doubt the importance of maximum likelihood for refinement". (On the
other hand, one who promises never to doubt maximum likelihood shall
never use SHELX :)

Cheers,

Ed.

--
I don't know why the sacrifice thing didn't work.
Science behind it seemed so solid.
                        Julian, King of Lemurs