> Dear All,
>
> I wonder how people currently do their long-term backups. I see DATs/DLTs slowly being dropped at the beamlines, and most people bring their data home on external HDs.
>
> Is anyone using Blu-ray or double-layer DVDs for long-term backups? If so, what kind of hardware? Do you use HDs for long-term storage? If so, do you make a second copy, and how do you store them?
>
> I will try to compile the answers and relay a summary back to the list.

----------------------------------------------------------------------
David Aragao (our own setup):

We currently have an online NAS server (QNAP 209 Pro -
http://www.qnap.com/pro_detail_feature.asp?p_id=93) with 2x hot-swappable 1 TB disks. The system offers FTP, NFS and Samba over the network, and we can also connect our USB2 transport HD directly. The drawback is that the system is very, very slow (4-6 h to transfer 150 GB, i.e. roughly 7-10 MB/s) and has crashed a few times, needing a reboot. We are not using any of the RAID options on the QNAP, since we use 1 TB for X-ray diffraction data (latest trips) and 1 TB for automatic office/Windows backups over the network. We keep an extra 1 TB HD in case of failures.

We have also been using an extra external USB2 750 GB HD for a second, offline copy of the data.

One of the reasons that triggered my question is that we cannot rely on a single HD model for backup. Unfortunately our QNAP has exactly this hard disk: http://www.theinquirer.net/inquirer/news/374/1050374/seagate-barracudas-7200-11-failing

----------------------------------------------------------------------
Graeme Winter:

I have quite a lot of data, as you know, and a three-phase way of handling it...

I keep the data on hard drives on computers for as long as possible, with a primary backup of everything to FireWire and a secondary backup to DVD as bzip2'd images. This way, if I lose something I can fetch the data from FireWire (or process from there if I run out of space), and if one of those disks fails (and they do) I can recover the data from DVD.

Overall I have a few TB of data kept in this way.

Redundancy is good, as is orthogonality, i.e. DVD and FireWire, not 2x FireWire disks or 2x DVDs.
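[For illustration, a minimal sketch of the DVD half of such a scheme in Python, assuming per-frame bzip2 compression and one staging directory per disk; the paths and the nominal 4.7 GB capacity are placeholders, not Graeme's actual scripts:]

    import bz2, os

    SRC = "dataset"            # directory of diffraction images (placeholder)
    OUT = "dvd_volumes"        # staging area for disks to burn (placeholder)
    DVD_BYTES = 4_700_000_000  # nominal single-layer DVD-R capacity

    vol, used = 0, 0
    os.makedirs(os.path.join(OUT, "vol%03d" % vol), exist_ok=True)
    for name in sorted(os.listdir(SRC)):
        with open(os.path.join(SRC, name), "rb") as f:
            data = bz2.compress(f.read())      # one .bz2 per frame
        if used + len(data) > DVD_BYTES:       # this disk is full; start another
            vol, used = vol + 1, 0
            os.makedirs(os.path.join(OUT, "vol%03d" % vol), exist_ok=True)
        with open(os.path.join(OUT, "vol%03d" % vol, name + ".bz2"), "wb") as f:
            f.write(data)
        used += len(data)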

Follow up:

OK, so I don't accumulate data at this kind of rate, so I write the DVDs manually. I know at Brookhaven they have a DVD robot, which would probably do what you want...

Otherwise, two HD backups are probably the best you are going to get.
----------------------------------------------------------------------
Roger Rowlett:
We keep our backups on hard drives of two servers (master and backup) in separate locations on campus. Some data is kept on CD-ROM, but we're doing that less now.
----------------------------------------------------------------------
Stephen Graham:

If at all possible you should consider outsourcing it. You might have access to some kind of large university or national facility for archiving scientific/academic data. Otherwise, there are companies that specialise in archiving data - for a fee they will take the problem out of your hands (and you don't need to worry about what format to use, what to do once the media you currently use become scarce, etc.).

Either way, we should all lobby the PDB or someone to archive all the images for us pronto!
----------------------------------------------------------------------
Kay Diederichs:

Burning DVDs must be a nightmare, and recovering from a DVD failure even more so. Whenever I burn a DVD with important data, I also create a CD with the ECC data (see http://www.dvdisaster.de).
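[As a hedged example, the ECC step can be scripted around dvdisaster. The -i/-e/-c flags below are from memory of the dvdisaster documentation, so check `dvdisaster --help` before relying on them; filenames are placeholders:]

    import subprocess

    IMAGE = "dataset.iso"   # the DVD image to protect (placeholder)
    ECC = "dataset.ecc"     # error-correction data, burned to a separate CD

    # -i image, -e ecc file, -c create: flags as recalled from the
    # dvdisaster manual; verify against `dvdisaster --help` first
    subprocess.run(["dvdisaster", "-i", IMAGE, "-e", ECC, "-c"], check=True)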

I have had my synchrotron data since 1999 online on hard disk (_all_ our data, not only the datasets that gave structures). Disks are cheap and convenient. Whenever we start to run short of disk space, I go shopping for bigger disks.

The hardware currently is an eSATA 4 TB RAID5 array in a €340,- RaidSonic Stardom ST6600-5S-S2 5-disk case (http://www.raidsonic.de/de/pages/search/search_list.php?we_objectID=4239&pid=0). With five 1 TB disks, RAID5 leaves 4 TB usable (one disk's worth of capacity goes to parity). A terabyte disk now costs less than €100, so the whole thing comes to about €800,-. RAID5 guards against single-disk failures, and I keep a spare terabyte disk in case I have to exchange one of the five internal ones. The unit is hooked up to a Linux machine with a recent kernel (which supports the SATA port multiplier feature) and an eSATA adapter (e.g. Adaptec 1225SA).

We have two of those in different buildings, and I do a daily (rsync) copy of the master to the backup. I have been running this for over a year, and am happy with it.
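[A minimal sketch of such a nightly mirror, assuming the backup unit is reachable over ssh; hostname and paths are placeholders, and it would be run from cron once a night:]

    import subprocess, sys

    SRC = "/data/master/"              # trailing slash: copy directory contents
    DST = "backuphost:/data/backup/"   # rsync-over-ssh target (placeholder)

    # -a preserves times/permissions, --delete keeps the mirror exact,
    # --stats gives a summary for the nightly cron mail
    result = subprocess.run(["rsync", "-a", "--delete", "--stats", SRC, DST])
    sys.exit(result.returncode)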
----------------------------------------------------------------------
Patrick Loll:

We're currently using normal DVD-Rs. I don't know how robust these will prove to be in the long term, but for right now it's cheap and easy and requires no fancy hardware.
----------------------------------------------------------------------
Wladek Minor:

Hard drive.

A 1 TB drive costs around $120; 1.5 TB drives are not as reliable yet.

Plus a Thermaltake BlacX ST0005U storage enclosure, around $46.
----------------------------------------------------------------------
Sergei Strelkov:

I looked into this some time ago.
We chose a rather simple approach.
We save all collected data on external disks (single copy).
We never delete anything from these disks.
Students then copy whatever they need to their own machines.
These are backed up from time to time.
----------------------------------------------------------------------
Paul Swepston:

The Australians have something that addresses this: TARDIS is a multi-institutional collaborative venture that aims to facilitate the archiving and sharing of raw X-ray diffraction images (collectively known as a 'dataset') from the Australian protein crystallography community.
http://www.tardis.edu.au/
----------------------------------------------------------------------
James Holton
MAD Scientist

At ALS beamlines 8.3.1 and 12.3.1 we use a combination of DVD-R and LTO-4 tapes for long-term backup, and have the entire data-collection history of each beamline backed up on DVD-R disks. This amounts to about 50 TB for 8.3.1 (built in 2001) and 30 TB for 12.3.1 (built in 2004). We also make a DVD of each user's data automatically and in near-real time, using a ~$4k robot that inkjet-prints the user's name and a dataset summary onto each disk. Portable hard disk drives for "sneakernet" are also popular, but so is transferring the data over the internet, which can likewise be done in near-real time.

I started using LTO-4 tapes recently for two reasons: 1) the price per TB became competitive with DVD-R, and the tape drive is only ~$4k; 2) I used to keep two copies of each DVD, but found this was not really "redundant": if you write two DVDs one after the other, on the same day, with the same writer, using media from the same batch, and you can't read one of those disks 4 years later, the chances of not being able to read the other are pretty high. So, the lesson I learned is to store data on two very different media types so you get "orthogonal" failure modes.

I can also tell you that it is a good idea to erase your LTO tapes 2-3 times before writing any data to them. I think this is because the primary source of error on these tapes is the roughness of the edge of the tape itself (which is used for alignment), and running it back and forth a few times probably wears/folds down any big bumps. It sounds strange, but I had some tapes I initially thought had "bad spots" on them; after erasing them and re-writing the data, the "bad spots" were gone, and they have stayed gone each time I have checked those tapes over the last year. Subsequent tapes that I erased 3x before use have never had "bad spots". Also, you need to write data to them at a minimum of 80 MB/s, or you can actually have problems reading the tape back. I do my writes in 2 GB chunks from system RAM. ALWAYS test reading back the tape, preferably more than once.
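[A sketch of the write-then-verify step using standard tools (dd, mt, sha256sum), assuming a Linux st driver and a non-rewinding device at /dev/nst0; the archive name and 256k block size are placeholders, and the same block size must be used for write and read-back:]

    import subprocess

    TAPE = "/dev/nst0"        # non-rewinding tape device (assumption)
    ARCHIVE = "archive.tar"   # one big archive per tape file (placeholder)
    BS = "256k"               # block size; keep identical for write and read

    def sha256_stream(path):
        # hash a file or device by piping dd into sha256sum
        dd = subprocess.Popen(["dd", "if=" + path, "bs=" + BS],
                              stdout=subprocess.PIPE)
        done = subprocess.run(["sha256sum"], stdin=dd.stdout,
                              capture_output=True, text=True, check=True)
        dd.wait()
        return done.stdout.split()[0]

    # write the archive, rewind, then read the whole tape file back
    subprocess.run(["dd", "if=" + ARCHIVE, "of=" + TAPE, "bs=" + BS], check=True)
    subprocess.run(["mt", "-f", TAPE, "rewind"], check=True)
    if sha256_stream(ARCHIVE) != sha256_stream(TAPE):
        raise SystemExit("read-back mismatch: erase and re-write this tape")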

DVD-R media should also be verified, preferably in a low-quality DVD drive. Writers tend to have much better mechanisms than the average reader, and I have seen many DVDs that read back just fine in the drive that wrote them but throw all kinds of media errors when you take them home to a dusty old DVD reader.
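[One way to make that verification systematic is a checksum manifest written at burn time and re-checked later in a different drive; a minimal sketch, with the mount point and manifest name as assumptions:]

    import hashlib, os, sys

    MOUNT = "/media/dvd"          # where the disc is mounted (assumption)
    MANIFEST = "sha256.manifest"  # one "<hexdigest>  <filename>" per line

    def file_hash(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(1 << 20), b""):
                h.update(block)
        return h.hexdigest()

    failures = 0
    with open(os.path.join(MOUNT, MANIFEST)) as manifest:
        for line in manifest:
            digest, name = line.strip().split(None, 1)
            if file_hash(os.path.join(MOUNT, name)) != digest:
                print("FAILED:", name)
                failures += 1
    sys.exit(1 if failures else 0)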

As for getting the PDB to do image backup for us, I don't think that will be easy.

The average data collection rate at 8.3.1 is 2 GB/hour, or ~10 TB/year. So I imagine storing all of the data from the ~100 MX beamlines around the world would be a ~1 petabyte/year proposition. Since an average of 25 to 50 data sets are collected for every one that is published, the storage demand on the PDB for published data alone would be ~30 TB/year (1 PB divided by ~33). Why only 1 in 50, you ask? That is a very good question, and it will probably never be answered unless the 49 of 50 unsolved data sets can be made available to methods developers.
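[The arithmetic, as a one-screen sanity check; all input figures are the ones quoted above and in the price list below:]

    # back-of-envelope: worldwide MX output and what the PDB share would cost
    per_beamline_tb_yr = 10                  # ~2 GB/hour of beam time at 8.3.1
    beamlines = 100                          # rough worldwide MX beamline count
    all_data_tb_yr = per_beamline_tb_yr * beamlines   # ~1000 TB = ~1 PB/year
    published_tb_yr = all_data_tb_yr / 33    # 1 dataset in ~25-50 is published
    lto4_usd_per_tb = 33                     # from the media price list below
    print(published_tb_yr * lto4_usd_per_tb) # ~$1k/year, published data only
    print(all_data_tb_yr * lto4_usd_per_tb)  # ~$33k/year for everything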

I just now Froogled for media prices and got this:

$33/TB     LTO-4
$60/TB     DVD-R
$100/TB    hard disks
$400/TB    Blu-ray
$3000/TB   solid-state drives (such as USB thumb drives)
$3M/TB     clay tablets

So the PDB would only need to find an "extra" ~$1k/year to buy the media for 1 dataset/structure, or ~$30k/year for all of the data. Unfortunately, the media is not nearly as expensive as access to it. An LTO tape library with ~50 TB storage capacity is ~$20k on eBay, but this is EMPTY! You have to fill it with tapes, and then write software to make the data sets available on the web. Tape libraries in the multi-petabyte range are available, but their prices are not. Clearly this represents a non-trivial investment in resources and effort for the PDB; the central problem is that per-GB storage prices do not scale well to petabyte-class systems. However, there is now Stimulus Package money available in the US for large equipment investments like this. Perhaps someone at Rutgers could submit a proposal? I, for one, am very willing to write them a letter of support.

Another approach is to try to spread the storage out across the world and create a central registry for finding it. The TARDIS initiative in Australia (Androulakis et al., Acta Cryst. D, 2008) seems to be an important step in that direction, but I haven't been able to test it since I don't have a Fedora Repository Server. I do, however, have a web server, and I think a repository of URLs is probably better than nothing.
----------------------------------------------------------------------
Ed Pozharski:

DVDs. A single-layer DVD-R holds ~4.7 GB - enough for most datasets. We do most of our data collection at SSRL, and they have a nice option of shipping you DVDs for free.
----------------------------------------------------------------------
Mark:

We have a tiered system:
a) Personal files. Small and many; change often. Typical: CCP4, Coot, CNS and other files. Backed up daily.
b) X-ray images. Not so many files, but large, and large in total. Never change once established. Backed up every two hours.
c) Archive. Mostly X-ray images, but also some personal files from people who have left the lab: projects that have been or are being published, and data that need to be preserved 'indefinitely'. Backed up when I have time or when we run low on storage space (whichever comes first).
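[A minimal sketch of how such tiers might be driven; the hostname and paths are invented, and each tier name would map to a cron entry with the stated frequency:]

    import subprocess, sys

    BACKUP = "backup-nas:/backup/"   # the second ReadyNAS (placeholder)
    TIERS = {
        # tier name -> source path and how often cron should run it
        "personal": {"src": "/home/",         "every": "daily"},
        "images":   {"src": "/data/images/",  "every": "2 hours"},
        "archive":  {"src": "/data/archive/", "every": "manual"},
    }

    def run_tier(name):
        tier = TIERS[name]
        # images never change once written, so skip files already on the
        # backup rather than re-transferring terabytes of frames each pass
        extra = ["--ignore-existing"] if name == "images" else ["--delete"]
        subprocess.run(["rsync", "-a"] + extra +
                       [tier["src"], BACKUP + name + "/"], check=True)

    run_tier(sys.argv[1])   # e.g. invoked from cron as: backup.py images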

All files reside on a network-attached storage device, currently with 2 TB of space, expandable to 4x the largest available HD (currently 4x 1 TB or better; I lose track). We have two of these devices, one primary and one backup in a different building.

We archive (are set up to archive) to external HDs. We make two archive copies: one stays in a file cabinet, one goes home with the PI, so there are copies in separate places at all times. Presumably entire projects will be archived at once (with multiple data sets, consisting of hundreds of X-ray images).

We designed it this way because we wanted 'instant security' once the files are established, and we did not want to overwhelm the campus network with large overnight backups when data are collected.

In the end, all our storage is on standard HDs, always in duplicate. Our network-attached storage consists of two Infrant (now NETGEAR) ReadyNAS NV+ systems (they are X-RAIDed). We have run this system for a couple of years now and it works like a charm. Our local computers have no disk storage other than the O/S, so no local files. Our O/S installations are backed up once in a long while to a VM server, so in theory everything should be disaster-proof.

I don't know that I would ask 'outsiders' like the PDB to keep copies of files. After all, the researcher is responsible for keeping good copies of their research data. It is not hard to do, but it requires quite a bit of thinking, probably by an IT specialist. In particular, I remember when our 9-track tape system was thrown out in grad school: all the media (with data) were subsequently useless. So you have to move with the times and migrate storage once in a while, even if I have to admit that James' clay tablets are 'almost forever'. Technically, I think our 'forever' storage ends when the PI(s) retire(s).

----------------------------------------------------------------------
Ashley Buckle

We are working on a new version of TARDIS that massively simplifies the software requirements (no database needed), using Web Stores. We are planning to release this at the beginning of April (but not the 1st!)

See http://tardis.edu.au/wiki/index.php/TARDIS_Web_Stores

In a nutshell:


TARDIS Web Stores takes the original federated approach and makes it far more powerful, flexible and easy to set up in individual labs and institutions. Instead of the current requirement for data/metadata to reside in a Fedora Digital Repository, TARDIS Web Stores indexes files stored on any simple web server (with an optional additional FTP server).

Aside from the greatly simplified storage setup, an added bonus of this approach is that data sets no longer have to reside in large archive files - one can download individual files or entire data sets at once. Metadata will be storable/searchable at any level (experiment/dataset/datafile), meaning the flexibility of what metadata can be stored for display on the TARDIS site is virtually infinite. Shifting data from server to server, or changing the web address pointing to the data, is no problem: all that needs to be done for data to show up in TARDIS is a link to an XML manifest residing next to the data itself. A program to scan files for metadata and produce a TARDIS-compatible manifest file for registration will also be distributed. We believe this added functionality, coupled with the ease of making data known to TARDIS, will greatly increase the amount of data indexed once this next iteration is released.
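[Purely as a sketch of the manifest idea: the element and attribute names below are invented for illustration, not the real TARDIS Web Stores schema (see the wiki page above for that). Generating an XML manifest to sit next to a dataset on a plain web server might look like:]

    import hashlib, os
    import xml.etree.ElementTree as ET

    DATASET = "lysozyme_native"   # directory of images (placeholder)

    root = ET.Element("dataset", name=DATASET)
    for fname in sorted(os.listdir(DATASET)):
        path = os.path.join(DATASET, fname)
        with open(path, "rb") as f:
            digest = hashlib.md5(f.read()).hexdigest()
        ET.SubElement(root, "datafile", name=fname,
                      size=str(os.path.getsize(path)), md5=digest)
    ET.ElementTree(root).write(os.path.join(DATASET, "manifest.xml"))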
----------------------------------------------------------------------
