Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-24 Thread Richard Elling
On Jan 24, 2010, at 8:26 PM, Frank Middleton wrote: > What an entertaining discussion! Hope the following adds to the > entertainment value :). > > Any comments on this Dec. 2005 study on disk failure and error rates? > http://research.microsoft.com/apps/pubs/default.aspx?id=64599 > > Seagate sa

Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-24 Thread Bob Friesenhahn
On Sun, 24 Jan 2010, Frank Middleton wrote: You seem to have it in for Seagate :-). Newegg by default displays reviews worst to best. The review statistics (as of 23 Jan 2010) were: ST31500341AS (older, 7200RPM 1.5TB drive) Excellent 911 - 49% Good 233 - 12% Average 113 - 6% Poor 123

Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-24 Thread Frank Middleton
What an entertaining discussion! Hope the following adds to the entertainment value :). Any comments on this Dec. 2005 study on disk failure and error rates? http://research.microsoft.com/apps/pubs/default.aspx?id=64599 Seagate says their 1.5TB consumer grade drives are good for 24*365 operation

Re: [zfs-discuss] Degraded Zpool

2010-01-24 Thread Daniel Carosone
On Thu, Jan 21, 2010 at 03:55:59PM +0100, Matthias Appel wrote: > I have a serious issue with my zpool. Yes. You need to figure out what the root cause of the issue is. > My zpool consists of 4 vdevs which are assembled to 2 mirrors. > > One of this mirrors got degraded cause of too many errors

Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-24 Thread Daniel Carosone
Another issue with all this arithmetic: one needs to factor in the cost of additional spare disks (what were you going to resilver onto?). I look at it like this: you purchase the same number of total disks (active + hot spare + cold spare), and raidz2 vs raidz3 simply moves a disk from one of the
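Daniel's accounting can be made concrete with a small worked example (the disk counts below are illustrative, not from the thread): with a fixed purchase budget, raidz3 just moves one disk from the spare column into the parity column.

```shell
#!/bin/sh
# Illustrative spare accounting (hypothetical 8-disk budget, not from the
# thread): raidz3 reclassifies one spare as parity, so the total number
# of disks purchased is identical either way.
data=4
raidz2_total=$((data + 2 + 1 + 1))   # 2 parity + 1 hot spare + 1 cold spare
raidz3_total=$((data + 3 + 0 + 1))   # 3 parity + 0 hot spare + 1 cold spare
echo "raidz2 buys $raidz2_total disks, raidz3 buys $raidz3_total disks"
```

Same spend, same disk count; what changes is whether the extra redundancy sits idle as a spare or is already resilvered into the vdev.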

Re: [zfs-discuss] ZFS backup/restore

2010-01-24 Thread Edward Ned Harvey
> Are there any plans to have a tool to restore individual files from zfs > send streams - like ufsrestore? The best advice I've heard so far is thus: On your backup media, create a zpool in a file container. When you "zfs send" don't save the data stream. Instead, feed it directly into
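The advice above can be sketched as follows. This is a dry run under assumed names (the container path, `backuppool`, and `tank/data@backup` are all hypothetical, and the preview truncates before naming the receive target, so `zfs receive` here is my reading of the intent). `RUN=echo` only prints the commands; on a real ZFS host set `RUN=` to execute them.

```shell
#!/bin/sh
# Dry-run sketch: back up by receiving the send stream into a zpool built
# on a file container, instead of storing the raw stream. All names are
# hypothetical. RUN=echo prints commands; RUN= would execute them.
RUN=echo
BACKUP=/backup/media/container.img
$RUN mkfile 100g "$BACKUP"              # reserve the container file
$RUN zpool create backuppool "$BACKUP"  # pool living inside the file
$RUN zfs snapshot tank/data@backup
# Feed the stream straight into a receive rather than saving it:
PIPE='zfs send tank/data@backup | zfs receive backuppool/data'
echo "$PIPE"
```

The payoff is that individual files are then browsable on the backup pool, which a raw saved stream does not give you.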

Re: [zfs-discuss] optimise away COW when rewriting the same data?

2010-01-24 Thread Kjetil Torgrim Homme
David Magda writes: > On Jan 24, 2010, at 10:26, Kjetil Torgrim Homme wrote: > >> But it occurred to me that this is a special case which could be >> beneficial in many cases -- if the filesystem uses secure checksums, >> it could check the existing block pointer and see if the replaced >> data ma

Re: [zfs-discuss] New Supermicro SAS/SATA controller: AOC-USAS2-L8e in SOHO NAS and HD HT

2010-01-24 Thread Simon Breden
> Thank you for the effort Simon. Thank you too Dusan, for creating this post that made me aware of this new card -- it looks like a good one, and doesn't have the unnecessary RAID stuff included :) > Good to know from the feedback in your thread that the mpt_sas(7d) driver is > actually respo

Re: [zfs-discuss] Degraded pool members excluded from writes?

2010-01-24 Thread Bill Sommerfeld
On 01/24/10 12:20, Lutz Schumann wrote: One can see that the degraded mirror is excluded from the writes. I think this is expected behaviour, right? (data protection over performance) That's correct. It will use the space if it needs to but it prefers to avoid "sick" top-level vdevs if there

Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-24 Thread R.G. Keen
Posted: Jan 24, 2010 11:20 AM in response to: r.g. On January 24, 2010 Frank wrote: >Sorry I missed this part of your post before responding just a m

[zfs-discuss] Degraded pool members excluded from writes?

2010-01-24 Thread Lutz Schumann
Hello, I'm testing with snv_131 (nexentacore 3 alpha 4). I did a bonnie benchmark to my disks and pulled a disk while benchmarking. Everything went smoothly, however I found that the now degraded device is excluded from the writes. So this is my pool after I have pulled the disk pool: m

Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-24 Thread R.G. Keen
On January 24, 2010 Frank Cusack wrote: >That's the point I was arguing against. Yes, that's correct. >You did not respond to my argument, and you don't have to now, Thanks for the permission. I'll need that someday. >but as long as you keep stating this without correcting me I will keep >resp

Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-24 Thread Joerg Schilling
Colin Raven wrote: > > A better digital archival medium may already exist: > > http://hardware.slashdot.org/story/09/11/13/019202/Synthetic-Stone- > > DVD-Claimed-To-Last-1000-Years > > > That would be nice - but - I have to wonder how they would test it in order > to justify the actual lifespan

Re: [zfs-discuss] Drive Identification

2010-01-24 Thread Paul Gress
On 01/24/10 04:10 AM, Lutz Schumann wrote: Is there a way (besides format and causing heavy I/O on the device in question) to identify a drive? Is there some kind of SES (enclosure service) for this? (e.g. "and now let the red led blink") Try /usr/bin/iostat -En

Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-24 Thread Colin Raven
On Sun, Jan 24, 2010 at 19:34, Toby Thain wrote: > > On 24-Jan-10, at 11:26 AM, R.G. Keen wrote: > > ... >> >> I’ll just blather a bit. The most durable data backup medium humans have >> come up with was invented about 4000-6000 years ago. It’s fired cuneiform >> tablets as used in the Middle Eas

Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-24 Thread Toby Thain
On 24-Jan-10, at 11:26 AM, R.G. Keen wrote: ... I’ll just blather a bit. The most durable data backup medium humans have come up with was invented about 4000-6000 years ago. It’s fired cuneiform tablets as used in the Middle East. Perhaps one could include stone carvings of Egyptian and/or

Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-24 Thread Richard Elling
On Jan 24, 2010, at 8:26 AM, R.G. Keen wrote: > > “Disk drives cost $100”: yes, I fully agree, with minor exceptions. End of > marketing, which is where the cost per drive drops significantly, is > different from end of life – I hope! http://en.wikipedia.org/wiki/End-of-life_(product) Some vend

Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-24 Thread Will Murnane
On Sun, Jan 24, 2010 at 11:41, Erik Trimble wrote: > Rob Logan wrote: >>> >>> a 1U or 2U JBOD chassis for 2.5" drives, >>> >> >> from http://supermicro.com/products/nfo/chassis_storage.cfm the E1 >> (single) or E2 (dual) options have a SAS expander so >> http://supermicro.com/products/chassis/2U/?

Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-24 Thread Tim Cook
On Sun, Jan 24, 2010 at 10:38 AM, Frank Cusack wrote: > On January 23, 2010 8:23:08 PM -0600 Tim Cook wrote: > >> I bet you'll get the same performance out of 3x1.5TB drives you get out of >> 6x500GB drives too. >> > > Yup. And if that's the case, probably you want to go with the 3 drives > bec

Re: [zfs-discuss] optimise away COW when rewriting the same data?

2010-01-24 Thread David Magda
On Jan 24, 2010, at 10:26, Kjetil Torgrim Homme wrote: But it occurred to me that this is a special case which could be beneficial in many cases -- if the filesystem uses secure checksums, it could check the existing block pointer and see if the replaced data matches. [...] Are there any ZFS
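The proposal — compare a secure checksum of the incoming write against the block already on disk, and skip the new COW allocation when they match — can be illustrated in ordinary shell. This is a sketch of the idea, not ZFS internals; `sha256sum` is GNU coreutils (on Solaris the equivalent is `digest -a sha256`), and the file and data here are made up.

```shell
#!/bin/sh
# Illustration of the checksum-skip idea, not ZFS internals: only
# overwrite (i.e. allocate a fresh COW block) when the incoming data's
# checksum differs from what is already stored.
f=$(mktemp)
printf 'unchanged block' > "$f"
incoming='unchanged block'
old_sum=$(sha256sum < "$f" | cut -d' ' -f1)
new_sum=$(printf '%s' "$incoming" | sha256sum | cut -d' ' -f1)
if [ "$old_sum" = "$new_sum" ]; then
    wrote=no    # checksums match: keep the existing block pointer
else
    printf '%s' "$incoming" > "$f"
    wrote=yes
fi
echo "wrote=$wrote"
rm -f "$f"
```

Note this only pays off when the checksum is strong enough to trust for equality, which is the same trust dedup already places in it.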

Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-24 Thread Frank Cusack
On January 24, 2010 8:26:07 AM -0800 "R.G. Keen" wrote: “Fewer/bigger versus more/smaller drives”: Tim and Frank have worked this one over. I made the choice based on wanting to get a raidz3 setup, for which more disks are needed than raidz or raidz2. This idea comes out of the time-to-resilver

Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-24 Thread Frank Cusack
On January 24, 2010 8:26:07 AM -0800 "R.G. Keen" wrote: In my case, I got 0.75TB drives for $58 each. The cost per bit is bigger than buying 1TB or 1.5TB drives, all right, but I can buy more of them, and that lets me put another drive on for the next level of error correction data. That's th

Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-24 Thread Frank Cusack
On January 24, 2010 8:41:00 AM -0800 Erik Trimble wrote: an external JBOD chassis, not a server chassis.

Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-24 Thread Erik Trimble
Rob Logan wrote: a 1U or 2U JBOD chassis for 2.5" drives, from http://supermicro.com/products/nfo/chassis_storage.cfm the E1 (single) or E2 (dual) options have a SAS expander so http://supermicro.com/products/chassis/2U/?chs=216 fits your build or build it yourself with http://supermicro.

Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-24 Thread Frank Cusack
On January 24, 2010 11:45:57 AM +1100 Daniel Carosone wrote: On Sat, Jan 23, 2010 at 06:39:25PM -0500, Frank Cusack wrote: Smaller devices cost more $/GB; ie they are more expensive. Usually, other than the very largest (most recent) drives, that are still at a premium price. Yes, I should

Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-24 Thread Frank Cusack
On January 23, 2010 8:23:08 PM -0600 Tim Cook wrote: I bet you'll get the same performance out of 3x1.5TB drives you get out of 6x500GB drives too. Yup. And if that's the case, probably you want to go with the 3 drives because your operating costs (power consumption) will be less. Are you

Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-24 Thread Kjetil Torgrim Homme
Tim Cook writes: > On Sat, Jan 23, 2010 at 7:57 PM, Frank Cusack wrote: > > I mean, just do a triple mirror of the 1.5TB drives rather than > say (6) .5TB drives in a raidz3. > > I bet you'll get the same performance out of 3x1.5TB drives you get > out of 6x500GB drives too. no, it will

Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-24 Thread R.G. Keen
Let me start this off with a personal philosophy statement. In technical matters, there is almost never a “best”. There is only the best compromise given the objective you’re trying to achieve. If you change the objectives even slightly, you may get wildly different “best compromise” answers. I

[zfs-discuss] optimise away COW when rewriting the same data?

2010-01-24 Thread Kjetil Torgrim Homme
I was looking at the performance of using rsync to copy some large files which change only a little between each run (database files). I take a snapshot after every successful run of rsync, so when using rsync --inplace, only changed portions of the file will occupy new disk space. Unfortunately,
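The workflow described above can be sketched as a dry run (the paths and dataset names below are hypothetical stand-ins; `RUN=echo` only prints the commands, set `RUN=` on a real system to execute them):

```shell
#!/bin/sh
# Dry-run sketch of the rsync --inplace + snapshot cycle described above.
# Paths and dataset names are hypothetical. RUN=echo prints commands.
RUN=echo
SRC=db-host:/var/lib/db/
DST=/tank/backup/db/
SNAP="tank/backup@$(date +%Y-%m-%d)"
$RUN rsync -a --inplace "$SRC" "$DST"  # rewrite only the changed regions
$RUN zfs snapshot "$SNAP"              # snapshot after a successful run
```

With --inplace, only the blocks rsync actually rewrites diverge from the previous snapshot, so snapshot space grows with the delta rather than the file size.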

Re: [zfs-discuss] ZFS backup/restore

2010-01-24 Thread David Magda
On Jan 24, 2010, at 07:18, Alexander Welter wrote: Are there any plans to have a tool to restore individual files from zfs send streams - like ufsrestore? No publicly stated plans by Sun. If you have a support contract with Sun, I'd recommend calling them up and telling them that you wish

Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-24 Thread Rob Logan
> a 1U or 2U JBOD chassis for 2.5" drives, from http://supermicro.com/products/nfo/chassis_storage.cfm the E1 (single) or E2 (dual) options have a SAS expander so http://supermicro.com/products/chassis/2U/?chs=216 fits your build or build it yourself with http://supermicro.com/products/accessori

[zfs-discuss] zfs destroy snapshot: dataset already exists?

2010-01-24 Thread Nathanael Burton
Hi, I recently upgraded from 2009.06 to b131 (mainly to get dedup support). The upgrade to b131 went fairly smoothly, but then I ran into an issue trying to get the old datasets snapshotted and send/recv'd to dedup the existing data. Here's the steps I ran: zfs snapshot -r data/me...@prereplica
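The migration pass being described — snapshot the old datasets recursively, then send/recv them so the data is rewritten through dedup — might look like the following dry run. The dataset names are hypothetical stand-ins (the poster's own paths are truncated above), and `RUN=echo` only prints the commands.

```shell
#!/bin/sh
# Dry-run sketch of rewriting existing data through send/recv so it
# passes through dedup. Dataset names are hypothetical. RUN=echo prints
# the commands instead of running them.
RUN=echo
$RUN zfs set dedup=on data/copy
$RUN zfs snapshot -r data/orig@prededup
PIPE='zfs send -R data/orig@prededup | zfs receive -d data/copy'
echo "$PIPE"
```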

Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-24 Thread Olli Lehtola
I'm in the process of building a semi-beefy file/general-purpose server (Lynnfield Xeon, 4GB ECC) and hard drive choice is the problem. I've been googling for a day and a half now and the main points seem to be: - ~all consumer class drives have the same problem with TLER/ERC/CCTL - ~all "for raid"

Re: [zfs-discuss] Degraded Zpool

2010-01-24 Thread Alexander Welter
Hi, the only thing that might help is an export/import, so zfs is forced to re-scan the pool for operative devices. If that doesn't help and you suspect the server itself to be the problem, try to attach the drives to a different box and import the pool there. Just make sure that the 'new'

[zfs-discuss] ZFS backup/restore

2010-01-24 Thread Alexander Welter
Are there any plans to have a tool to restore individual files from zfs send streams - like ufsrestore? Specially for recovering from unintended file removal/changes of OS related files it would be nice to be able to restore just the file in question, instead of restoring the entire filesystem.

Re: [zfs-discuss] New Supermicro SAS/SATA controller: AOC-USAS2-L8e in SOHO NAS and HD HT

2010-01-24 Thread Dusan Kysel
Thank you for the effort Simon. Good to know from the feedback in your thread that the mpt_sas(7d) driver is actually responsible for the SuperMicro AOC-USAS2-L8e support. As far as the support for the other RAID capable lsi-2008 variants in the mr_sas(7d) driver goes, it is but of little conce

Re: [zfs-discuss] Drive Identification

2010-01-24 Thread ???
iostat -En -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] New Supermicro SAS/SATA controller: AOC-USAS2-L8e in SOHO NAS and HD HTPC

2010-01-24 Thread Dusan Kysel
Thanks for the feedback and hardware recommendation gea. Thus driver support is there out-of-the-box at least in snv_124+. -dusan

Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-24 Thread Colin Raven
On Sun, Jan 24, 2010 at 08:36, Erik Trimble wrote: > These days, I've switched to 2.5" SATA laptop drives for large-storage > requirements. > They're going to cost more $/GB than 3.5" drives, but they're still not > horrible ($100 for a 500GB/7200rpm Seagate Momentus). They're also easier > to c

Re: [zfs-discuss] zfs streams

2010-01-24 Thread Erik Trimble
dick hoogendijk wrote: Can I send a zfs send stream (ZFS pool version 22 ; ZFS filesystem version 4) to a zfs receive stream on Solaris 10 (ZFS pool version 15 ; ZFS filesystem version 4)? No. You cannot import a stream into a zpool of earlier revision, though the reverse is possible. --

[zfs-discuss] zfs streams

2010-01-24 Thread dick hoogendijk
Can I send a zfs send stream (ZFS pool version 22 ; ZFS filesystem version 4) to a zfs receive stream on Solaris 10 (ZFS pool version 15 ; ZFS filesystem version 4)? -- Dick Hoogendijk -- PGP/GnuPG key: 01D2433D + http://nagual.nl/ | SunOS 10u7 5/09 | OpenSolaris 2010.03 b131 + All that's really w

[zfs-discuss] Drive Identification

2010-01-24 Thread Lutz Schumann
Is there a way (besides format and causing heavy I/O on the device in question) to identify a drive? Is there some kind of SES (enclosure service) for this? (e.g. "and now let the red led blink") Regards, Robert

[zfs-discuss] Forum Suggestion: Sticky Threads

2010-01-24 Thread Lutz Schumann
I follow this list / forum now for some weeks. During this time the "I tried dedup but on delete it hangs" issue popped up several times. I know that other forums use sticky threads as a kind of "in-band FAQ". The dedup issue would be a candidate I guess. I think this kind of feature would be

Re: [zfs-discuss] L2ARC in Cluster is picked up althought not part of the pool

2010-01-24 Thread Lutz Schumann
Thanks for the feedback Richard. Does that mean that the L2ARC can be part of ANY pool and that there is only ONE L2ARC for all pools active on the machine ? Thesis: - There is one L2ARC on the machine for all pools - all Pools active share the same L2ARC - the L2ARC can be part of any