Dear Frank,
Re "pushing terabytes around cyberspace".
Well, actually, having the synchrotron facilities locally host the datasets
that were measured there is a major step forward for diffraction data
preservation, especially for MX but also true for SAXS, XAFS etc., as is
being pushed forward by Alun A
(Old thread, just cleaning up, sorry...)
I thought James' algorithm didn't do anything to the spots, just to the
stuff in between.
So one obvious way to handle this is for the data processing programs to
look between the integrated spots as well, to check whether they're missing
anything; the
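(For illustration, a minimal sketch of such a between-the-spots check, in
Python with NumPy; the function name and the crude intensity threshold are
hypothetical, and a real check would take the spot mask from the
integration program rather than a cutoff:)

# Compare only the between-spot background of an original and a
# lossily compressed image. The spot_threshold is a stand-in for a
# proper spot mask from integration.
import numpy as np

def background_difference(original, compressed, spot_threshold=1000):
    """RMS difference over pixels that are not part of a spot."""
    orig = np.asarray(original, dtype=np.float64)
    comp = np.asarray(compressed, dtype=np.float64)
    between_spots = orig < spot_threshold   # crude spot mask
    diff = orig[between_spots] - comp[between_spots]
    return np.sqrt(np.mean(diff ** 2))

# Synthetic demo: identical backgrounds give an RMS of exactly 0.
img = np.random.poisson(50, size=(100, 100))
print(background_difference(img, img))   # -> 0.0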
The mp3/music analogy might be quite appropriate.
On some commercial music download sites, there are several options for
purchase, ranging from audiophool-grade 24-bit, 192kHz sampled music, to
CD-quality (16-bit, 44.1kHz), to mp3 compression at various lossy bit-rates.
I am told that the res
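For concreteness, a back-of-envelope calculation of the raw data rates
behind those purchase options (stereo, i.e. 2 channels, assumed):

def pcm_kbps(bits, rate_hz, channels=2):
    return bits * rate_hz * channels / 1000.0

print(pcm_kbps(24, 192000))   # "audiophool" grade: 9216 kbit/s
print(pcm_kbps(16, 44100))    # CD quality:         1411.2 kbit/s
# Typical lossy mp3 bit-rates run 128-320 kbit/s, i.e. roughly
# 4-11x smaller than CD quality before any lossless packing.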
ADSC has been a leader in supporting compressed CBFs.
Herbert J. Bernstein
Professor of Mathematics and Computer Science
Dowling College, Kramer Science Center, KSC 121
Idle Hour Blvd, Oakdale, NY, 11769
It would be a good start to get all images written now with lossless
compression, instead of the uncompressed images we still get from the ADSC
detectors, something we've been promised for many years.
Phil
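For reference, the "byte_offset" scheme used by compressed CBFs stores
each pixel as a delta from the previous one, escaping to wider integers
only when needed. A simplified sketch in Python (the real format also
has a 64-bit escape and container framing, omitted here):

# CBF byte-offset delta coding, simplified for illustration.
import struct

def byte_offset_encode(pixels):
    out, prev = bytearray(), 0
    for p in pixels:
        delta, prev = p - prev, p
        if -127 <= delta <= 127:
            out += struct.pack('<b', delta)               # 1 byte
        elif -32767 <= delta <= 32767:
            out += struct.pack('<bh', -128, delta)        # 0x80 escape + int16
        else:
            out += struct.pack('<bhi', -128, -32768, delta)  # two escapes + int32
    return bytes(out)

# Smooth diffraction backgrounds give mostly 1-byte deltas:
data = [10, 12, 11, 11, 40000, 39990]
print(len(byte_offset_encode(data)), "bytes for", len(data), "pixels")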
On 08/11/2011 20:46, mjvdwo...@netscape.net wrote:
> Hmmm, so you would, when collecting large data images, say 4 images,
> 100MB in size, per second, in the middle of the night, from home, reject
> seeing compressed images on your data collection software, while the
> "real thing" is lingering
ication/case/location etc.
So yes, James, of course this is useful and not a waste of time.
Mark
-----Original Message-----
From: Miguel Ortiz Lombardia
To: CCP4BB
Sent: Tue, Nov 8, 2011 12:29 pm
Subject: Re: [ccp4bb] image compression
On 08/11/2011 19:19, James Holton wrote:
> At the risk of putting this thread back on-topic, my original question
> was not "should I just lossfully compress my images and throw away the
> originals". My question was:
>
> "would you download the compressed images first?"
>
> So far, no one has really answered it.
Hi James,
Fair enough.
However I would still be quite interested to see how different the
results are from the originals and the compressed versions. If the
differences were pretty minor (i.e. not really noticeable) then I
would certainly have a good look at the mp3 version.
Also it would make m
At the risk of putting this thread back on-topic, my original question
was not "should I just lossfully compress my images and throw away the
originals". My question was:
"would you download the compressed images first?"
So far, no one has really answered it.
I think it is obvious that of co
Dear Herbert,
Sorry, the point I was getting at was that the process is one way, but
if it is also *destructive* i.e. the original "master" is not
available then I would not be happy. If the master copy of what was
actually recorded is available from a tape someplace perhaps not all
that quickly t
Um, but isn't Crystallography based on a series of
one-way computational processes:
photons -> images
images -> {structure factors, symmetry}
{structure factors, symmetry, chemistry} -> solution
{structure factors, symmetry, chemistry, solution} -> refined solution
At each stage w
On 08/11/11 10:15, Kay Diederichs wrote:
> Hi James,
>
> I see no real need for lossy compression datasets. They may be useful
> for demonstration purposes, and to follow synchrotron data collection
> remotely. But for processing I need the real data. It is my experience
> that structure soluti
Hi
> I am not a fan
> of one-way computational processes with unique data.
>
> Thoughts anyone?
>
> Cheerio,
>
> Graeme
I agree.
Harry
--
Dr Harry Powell, MRC Laboratory of Molecular Biology, MRC Centre, Hills Road,
Cambridge, CB2 0QH
http://www.iucr.org/resources/commissions/crystallograp
Hi James,
I see no real need for lossy compression datasets. They may be useful
for demonstration purposes, and to follow synchrotron data collection
remotely. But for processing I need the real data. It is my experience
that structure solution, at least in the difficult cases, depends on
squ
Hi James,
Regarding the suggestion of lossy compression, it is really hard to
comment without having a good idea of the real cost of doing this. So,
I have a suggestion:
- grab a bag of JCSG data sets, which we know should all be essentially OK.
- you squash then unsquash them with your macguff
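Something like the following round trip would put numbers on that cost.
A minimal sketch in Python with NumPy, where squash/unsquash are
hypothetical stand-ins for whatever lossy codec is under test; a real
comparison would of course re-process both data sets and compare the
resulting statistics, not just pixels:

import numpy as np

def compare_round_trip(original, squash, unsquash):
    """Pixel-level damage done by one squash/unsquash cycle."""
    restored = unsquash(squash(original))
    diff = original.astype(np.float64) - restored.astype(np.float64)
    return {
        'max_abs_error': float(np.max(np.abs(diff))),
        'rms_error': float(np.sqrt(np.mean(diff ** 2))),
    }

# Trivial demo with a lossless "codec": both errors come out zero.
img = np.random.poisson(100, size=(64, 64))
print(compare_round_trip(img, lambda x: x.copy(), lambda x: x))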
I think that real universal image deposition will not take off without a
newish type of compression that will speed things up and make them easier.
Therefore the compression discussion is highly relevant - I would even
suggest going to mathematicians and software engineers to provide
a highly efficient com
So the purists of speed seem to be more relevant than the purists of images.
We complain all the time about how many errors we have out there in our
experiments that we seemingly cannot account for. Yet, would we add
another source?
Sorry if I'm missing something serious here, but I cannot unders
I'll second that... can't remember anybody on the barricades about
"corrected" CCD images, but they've been just so much more practical.
Different kind of problem, I know, but equivalent situation: the people
to ask are not the purists, but the ones struggling with the huge
volumes of data.
Dear James,
You are _not_ wasting your time. Even if the lossy compression ends
up only being used to stage preliminary images forward on the net while
full images slowly work their way forward, having such a compression
that preserves the crystallography in the image will be an important
cont
So far, all I really have is a "proof of concept" compression algorithm here:
http://bl831.als.lbl.gov/~jamesh/lossy_compression/
Not exactly "portable" since you need ffmpeg and the x264 libraries
set up properly. The latter seems to be constantly changing things
and breaking the former, so I'm
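For anyone wanting to experiment, the general shape of such an
ffmpeg/x264 call looks roughly like this. A hedged sketch, not James'
actual script: it assumes the frames have already been rescaled to
8-bit PGM (diffraction images are 16-bit, so a faithful pipeline must
handle the extra depth), and that an ffmpeg built with libx264 is on
the PATH:

import subprocess

def encode_frames(pattern='frame_%04d.pgm', out='lossy.mkv', crf=18):
    """Encode a numbered image sequence with x264 via ffmpeg."""
    subprocess.run([
        'ffmpeg', '-y',
        '-i', pattern,        # numbered image sequence in
        '-c:v', 'libx264',    # H.264 via the x264 library
        '-crf', str(crf),     # lower CRF = higher fidelity, bigger file
        '-pix_fmt', 'gray',   # single-channel frames
        out,
    ], check=True)

# encode_frames(crf=18)   # adjust CRF to trade size against artifacts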
This is a very good question. I would suggest that both versions
of the old data are useful. If what is being done is simple validation
and regeneration of what was done before, then the lossy compression
should be fine in most instances. However, when what is being
done hinges on the really fin
At the risk of sounding like another "poll", I have a pragmatic question
for the methods development community:
Hypothetically, assume that there was a website where you could download
the original diffraction images corresponding to any given PDB file,
including "early" datasets that were fro