Re: [zfs-discuss] ZFS send/recv extreme performance penalty in snv_128

2009-12-12 Thread Brent Jones
On Sat, Dec 12, 2009 at 11:39 AM, Brent Jones wrote: > On Sat, Dec 12, 2009 at 7:55 AM, Bob Friesenhahn > wrote: >> On Sat, 12 Dec 2009, Brent Jones wrote: >> >>> I've noticed some extreme performance penalties simply by using snv_128 >> >> Does the 'zpool scrub' rate seem similar to before?  Do

Re: [zfs-discuss] ZFS pool unusable after attempting to destroy a dataset with dedup enabled

2009-12-12 Thread Jack Kielsmeier
My system was pingable again; unfortunately, I had disabled all services such as ssh. My console was still hung, but I was wondering if I had hung USB crap (since I use a USB keyboard and everything had been hung for days). I force-rebooted and the pool was not imported :(. I started the process off

Re: [zfs-discuss] hard drive choice, TLER/ERC/CCTL

2009-12-12 Thread Michael Herf
Most manufacturers have a utility available that sets this behavior. For WD drives, it's called WDTLER.EXE. You have to make a bootable USB stick to run the app, but it is simple to change the setting to the enterprise behavior.
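For drives where WDTLER.EXE no longer works, smartmontools offers a comparable knob via the SCT error-recovery-control log, provided the drive implements SCT ERC; a sketch, with the device path assumed:

    # Query the current SCT ERC read/write timeouts (drive must support SCT ERC)
    smartctl -l scterc /dev/sdb
    # Set both timeouts to 7.0 seconds (values are in tenths of a second)
    smartctl -l scterc,70,70 /dev/sdb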

Re: [zfs-discuss] ZFS pool unusable after attempting to destroy a dataset with dedup enabled

2009-12-12 Thread Jack Kielsmeier
It's been over 72 hours since my last import attempt. System still is non-responsive. No idea if it's doing anything.

[zfs-discuss] compressratio vs. dedupratio

2009-12-12 Thread Robert Milkowski
Hi, The compressratio property seems to be a ratio of compression for a given dataset calculated in such a way that all data in it (compressed or not) is taken into account. The dedupratio property, on the other hand, seems to take into account only dedupped data in a pool. So for example if
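A worked example of the asymmetry, with made-up numbers: in a dataset holding 10 GB of logical data where 4 GB compresses down to 1 GB and the other 6 GB doesn't compress at all, compressratio = 10 / (6 + 1) ≈ 1.43x, i.e. the incompressible data is counted and drags the ratio down. By contrast, if only 2 GB of a pool's data was written with dedup=on and it dedups to 1 GB, dedupratio reports 2.00x regardless of how much non-dedupped data sits in the pool.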

Re: [zfs-discuss] X4540 + SFA F20 PCIe?

2009-12-12 Thread Robert Milkowski
Andrey Kuzmin wrote: As to whether it makes sense (as opposed to two distinct physical devices), you would have read cache hits competing with log writes for bandwidth. I doubt both will be pleased :-) As usual it depends on your workload. In many real-life scenarios the bandwidth probably won

Re: [zfs-discuss] ZFS Boot Recovery after Motherboard Death

2009-12-12 Thread Richard Elling
On Dec 12, 2009, at 10:32 AM, Mattias Pantzare wrote: On Sat, Dec 12, 2009 at 18:08, Richard Elling wrote: On Dec 12, 2009, at 12:53 AM, dick hoogendijk wrote: On Sat, 2009-12-12 at 00:22 +, Moritz Willers wrote: The host identity had - of course - changed with the new motherboard an

Re: [zfs-discuss] ZFS - how to determine which physical drive to replace

2009-12-12 Thread Mike Gerdts
On Sat, Dec 12, 2009 at 9:58 AM, Edward Ned Harvey wrote: > I would suggest something like this:  While the system is still on, if the > failed drive is at least writable *a little bit* … then you can “dd > if=/dev/zero of=/dev/rdsk/FailedDiskDevice bs=1024 count=1024” … and then > after the syste
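A gentler variant of the same idea, when the disk is still readable: generate steady I/O against the suspect device and watch for the activity LED that stays lit. A sketch, assuming the device name from zpool status is c1t3d0 (adjust the slice to whatever your label uses):

    # Reads only, so this is harmless; Ctrl-C to stop
    while true; do
        dd if=/dev/rdsk/c1t3d0s0 of=/dev/null bs=1024k count=1024
    done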

Re: [zfs-discuss] ZFS send/recv extreme performance penalty in snv_128

2009-12-12 Thread Brent Jones
On Sat, Dec 12, 2009 at 7:55 AM, Bob Friesenhahn wrote: > On Sat, 12 Dec 2009, Brent Jones wrote: > >> I've noticed some extreme performance penalties simply by using snv_128 > > Does the 'zpool scrub' rate seem similar to before?  Do you notice any read > performance problems?  What happens if yo

Re: [zfs-discuss] ZFS Boot Recovery after Motherboard Death

2009-12-12 Thread Toby Thain
On 12-Dec-09, at 1:32 PM, Mattias Pantzare wrote: On Sat, Dec 12, 2009 at 18:08, Richard Elling wrote: On Dec 12, 2009, at 12:53 AM, dick hoogendijk wrote: On Sat, 2009-12-12 at 00:22 +, Moritz Willers wrote: The host identity had - of course - changed with the new motherboard and

Re: [zfs-discuss] ZFS Boot Recovery after Motherboard Death

2009-12-12 Thread Mattias Pantzare
On Sat, Dec 12, 2009 at 18:08, Richard Elling wrote: > On Dec 12, 2009, at 12:53 AM, dick hoogendijk wrote: > >> On Sat, 2009-12-12 at 00:22 +, Moritz Willers wrote: >> >>> The host identity had - of course - changed with the new motherboard >>> and it no longer recognised the zpool as its own

Re: [zfs-discuss] ZFS Boot Recovery after Motherboard Death

2009-12-12 Thread Bob Friesenhahn
On Sat, 12 Dec 2009, dick hoogendijk wrote: Because, like I said, I always understood it was very difficult to change disks to another system and run the installed solaris version on that new hardware. A place where I used to work had several thousand Sun workstations and I noticed that if a

[zfs-discuss] all zfs snapshot made by TimeSlider destroyed after upgrading to b129

2009-12-12 Thread Roman Ivanov
Am I missing something? I have had monthly, weekly, daily, hourly, and frequent snapshots since March 2009. Now with the new b129 I lost all of them. From zpool history: 2009-12-12.20:30:02 zfs destroy -r rpool/ROOT/b...@zfs-auto-snap:weekly-2009-11-26-09:28 2009-12-12.20:30:03 zfs destroy -r rpool/ROOT/b
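To see what (if anything) survived, and to confirm what the upgrade ran, something like this should work (zfs-auto-snap is the prefix the auto-snapshot service uses):

    # List any remaining automatic snapshots
    zfs list -t snapshot -o name | grep zfs-auto-snap
    # Review the destroy operations and their timestamps
    zpool history rpool | grep destroy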

Re: [zfs-discuss] ZFS Boot Recovery after Motherboard Death

2009-12-12 Thread dick hoogendijk
On Sat, 2009-12-12 at 09:08 -0800, Richard Elling wrote: > On Dec 12, 2009, at 12:53 AM, dick hoogendijk wrote: > > > On Sat, 2009-12-12 at 00:22 +, Moritz Willers wrote: > > > >> The host identity had - of course - changed with the new motherboard > >> and it no longer recognised the zpool as

Re: [zfs-discuss] ZFS - how to determine which physical drive to replace

2009-12-12 Thread Patrick O'Sullivan
I've found that when I build a system, it's worth the initial effort to install drives one by one to see how they get mapped to names. Then I put labels on the drives and SATA cables. If there were room to label the actual SATA ports on the motherboard and cards, I would. While this isn't foolproo

Re: [zfs-discuss] ZFS Boot Recovery after Motherboard Death

2009-12-12 Thread Richard Elling
On Dec 12, 2009, at 12:53 AM, dick hoogendijk wrote: On Sat, 2009-12-12 at 00:22 +, Moritz Willers wrote: The host identity had - of course - changed with the new motherboard and it no longer recognised the zpool as its own. 'zpool import -f rpool' to take ownership, reboot and it all wor

Re: [zfs-discuss] ZFS - how to determine which physical drive to replace

2009-12-12 Thread Ed Plese
On Sat, Dec 12, 2009 at 8:17 AM, Paul Bruce wrote: > Hi, > I'm just about to build a ZFS system as a home file server in raidz, but I > have one question - pre-empting the need to replace one of the drives if it > ever fails. > How on earth do you determine the actual physical drive that has faile
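On (Open)Solaris, one common answer is to map the cXtYdZ name that zpool status reports to the drive's serial number, then match that against the sticker on the physical disk. A sketch:

    # Identify the faulted device name
    zpool status -x
    # Print per-device details, including vendor, model, and serial number
    iostat -En
    # Match the serial number to the label on the drive before pulling it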

Re: [zfs-discuss] X4540 + SFA F20 PCIe?

2009-12-12 Thread Andrey Kuzmin
As to whether it makes sense (as opposed to two distinct physical devices), you would have read cache hits competing with log writes for bandwidth. I doubt both will be pleased :-) On 12/12/09, Robert Milkowski wrote: > Jens Elkner wrote: >> Hi, >> >> just got a quote from our campus reseller, th

Re: [zfs-discuss] ZFS - how to determine which physical drive to replace

2009-12-12 Thread Edward Ned Harvey
This is especially important, because if you have 1 failed drive and you pull the wrong drive, you now have 2 failed drives. And that could destroy the dataset (depending on whether you have raidz-1 or raidz-2). Whenever possible, always get the hot-swappable hardware that will blink a red lig
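When the chassis offers no fault LED, it is safer to take the suspect disk offline in ZFS before touching any hardware; a sketch, with pool and device names assumed:

    # Take the suspect disk offline so pulling the wrong one can't be compounded
    zpool offline tank c1t5d0
    # After swapping the physical drive, resilver onto the replacement
    zpool replace tank c1t5d0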

Re: [zfs-discuss] ZFS send/recv extreme performance penalty in snv_128

2009-12-12 Thread Bob Friesenhahn
On Sat, 12 Dec 2009, Brent Jones wrote: I've noticed some extreme performance penalties simply by using snv_128 Does the 'zpool scrub' rate seem similar to before? Do you notice any read performance problems? What happens if you send to /dev/null rather than via ssh? Bob
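Bob's /dev/null test is a quick way to separate raw zfs send throughput from the ssh transport; a sketch, with pool and snapshot names made up:

    # Raw incremental send speed, no network or ssh involved
    time zfs send -i tank/fs@snap1 tank/fs@snap2 > /dev/null
    # The normal path, for comparison
    zfs send -i tank/fs@snap1 tank/fs@snap2 | ssh host zfs recv -d backup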

Re: [zfs-discuss] X4540 + SFA F20 PCIe?

2009-12-12 Thread Robert Milkowski
Jens Elkner wrote: Hi, just got a quote from our campus reseller, that readzilla and logzilla are not available for the X4540 - hmm, strange. Anyway, wondering whether it is possible/supported/would make sense to use a Sun Flash Accelerator F20 PCIe Card in an X4540 instead of 2.5" SSDs? If
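If the F20 shows up to the OS as several flash device nodes (the card carries four flash modules), the roles could in principle be split across them rather than shared; a sketch, with device names invented:

    # Two modules as a mirrored slog, two as L2ARC read cache
    zpool add tank log mirror c3t0d0 c3t1d0
    zpool add tank cache c3t2d0 c3t3d0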

[zfs-discuss] Messed up zpool (double device label)

2009-12-12 Thread Dr. Martin Mundschenk
Hi! I tried to add another FireWire drive to my existing four devices, but it turned out that the OpenSolaris IEEE1394 support doesn't seem to be well-engineered. After it failed to recognize the new device, and after exporting and importing the existing zpool, I get this zpool status: pool: tank state:

[zfs-discuss] ZFS - how to determine which physical drive to replace

2009-12-12 Thread Paul Bruce
Hi, I'm just about to build a ZFS system as a home file server in raidz, but I have one question - pre-empting the need to replace one of the drives if it ever fails. How on earth do you determine the actual physical drive that has failed? I've got the whole zpool status thing worked out, but h

[zfs-discuss] ZFS Kernel Panic

2009-12-12 Thread Dr. Martin Mundschenk
Hi! My OpenSolaris 2009.06 box runs into kernel panics almost every day. There are 4 FireWire drives attached to a MacMini as a RaidZ pool. The panic seems to be related to this known bug: http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6835533 Since there are no known workarounds,

[zfs-discuss] ZFS send/recv extreme performance penalty in snv_128

2009-12-12 Thread Brent Jones
sed a majority of my snapshots to do this: receiving incremental stream of pdxfilu01/vault/0...@20091212-01:15:00 into pdxfilu02/vault/0...@20091212-01:15:00 received 13.8KB stream in 491 seconds (28B/sec) De-dupe is NOT enabled on any pool, but I have upgraded to the newest ZFS pool version,

Re: [zfs-discuss] ZFS Boot Recovery after Motherboard Death

2009-12-12 Thread dick hoogendijk
On Sat, 2009-12-12 at 00:22 +, Moritz Willers wrote: > The host identity had - of course - changed with the new motherboard > and it no longer recognised the zpool as its own. 'zpool import -f > rpool' to take ownership, reboot and it all worked no problem (which > was amazing in itself as
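For anyone who hits the same situation, the recovery amounts to the following (assuming, as here, that the root pool is named rpool):

    # A bare import shows the pool and warns that it was last accessed
    # by another system (the hostid changed with the motherboard)
    zpool import
    # Force the import despite the foreign hostid, then reboot
    zpool import -f rpool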