Re: [zfs-discuss] ZFS pool unusable after attempting to destroy a dataset with dedup enabled

2009-12-14 Thread Jack Kielsmeier
> On Dec 15, 2009, at 5:50, Jack Kielsmeier wrote: >> Thanks. >> I've decided now to only post when: >> 1) I have my zfs pool back >> or >> 2) I give up >> I should note that there are periods of time where I can ping my server (rarely), but most of the time not. I h

[zfs-discuss] ZFS Dedupe reporting incorrect savings

2009-12-14 Thread Giridhar K R
Hi, I created a zpool with a 64k recordsize and enabled dedup on it: zpool create -O recordsize=64k TestPool device1; zfs set dedup=on TestPool. I copied files onto this pool over NFS from a Windows client. Here is the output of zpool list: Prompt:~# zpool list NAME SIZE ALLOC FREE CAP
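For anyone wanting to cross-check the reported savings, a minimal sketch of the usual queries (pool name taken from the post; zdb output details vary by build, and dedup only matches whole, identical 64k blocks at this recordsize):

Prompt:~# zpool get dedupratio TestPool      # pool-wide dedup ratio
Prompt:~# zpool list TestPool                # ALLOC reflects post-dedup space
Prompt:~# zdb -DD TestPool                   # DDT histogram: unique vs. duplicate blocks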

Re: [zfs-discuss] ZFS pool unusable after attempting to destroy a dataset with dedup enabled

2009-12-14 Thread Victor Latushkin
On Dec 15, 2009, at 5:50, Jack Kielsmeier wrote: Thanks. I've decided now to only post when: 1) I have my zfs pool back or 2) I give up I should note that there are periods of time where I can ping my server (rarely), but most of the time not. I have not been able to ssh into it, and the

Re: [zfs-discuss] ZIL corrupt, not recoverable even with logfix

2009-12-14 Thread Victor Latushkin
On Dec 4, 2009, at 9:33, James Risner wrote: It was created on AMD64 FreeBSD with 8.0RC2 (which was version 13 of ZFS iirc.) At some point I knocked it out (export) somehow, I don't remember doing so intentionally. So I can't do commands like zpool replace since there are no pools. Ha

Re: [zfs-discuss] X4540 + SFA F20 PCIe?

2009-12-14 Thread Jens Elkner
On Mon, Dec 14, 2009 at 01:29:50PM +0300, Andrey Kuzmin wrote: > On Mon, Dec 14, 2009 at 4:04 AM, Jens Elkner wrote: ... >> Problem is pool1 - user homes! So GNOME/firefox/eclipse/subversion/soffice ... > Flash-based read cache should help here by minimizing (metadata) read latency, and flash

Re: [zfs-discuss] ZFS pool unusable after attempting to destroy a dataset with dedup enabled

2009-12-14 Thread Jack Kielsmeier
Thanks. I've decided now to only post when: 1) I have my zfs pool back or 2) I give up I should note that there are periods of time where I can ping my server (rarely), but most of the time not. I have not been able to ssh into it, and the console is hung (minus the little blinking cursor).

Re: [zfs-discuss] compressratio vs. dedupratio

2009-12-14 Thread Mike Gerdts
On Mon, Dec 14, 2009 at 3:54 PM, Craig S. Bell wrote: > I am also accustomed to seeing diluted properties such as compressratio. > IMHO it could be useful (or perhaps just familiar) to see a diluted dedup ratio for the pool, or maybe see the size / percentage of data used to arrive at dedu

Re: [zfs-discuss] Something wrong with zfs mount

2009-12-14 Thread Mattias Pantzare
>> Is there a better solution to this problem? What if the machine crashes? > Crashes are abnormal conditions. If it crashes you should fix the problem to avoid future crashes, and probably you will need to clear the pool dir hierarchy prior to importing the pool. Are you serious? I really hope

Re: [zfs-discuss] DeDup and Compression - Reverse Order?

2009-12-14 Thread Andrey Kuzmin
On 12/14/09, Cyril Plisko wrote: > On Mon, Dec 14, 2009 at 9:32 PM, Andrey Kuzmin wrote: >> Right, but 'verify' seems to be 'extreme safety' and thus a rather rare use case. > Hmm, dunno. I wouldn't set anything but a scratch file system to dedup=on. Anything of even slight significance

Re: [zfs-discuss] compressratio vs. dedupratio

2009-12-14 Thread Craig S. Bell
I am also accustomed to seeing diluted properties such as compressratio. IMHO it could be useful (or perhaps just familiar) to see a diluted dedup ratio for the pool, or maybe see the size / percentage of data used to arrive at dedupratio. As Jeff points out, there is enough data available to

Re: [zfs-discuss] Opensolaris with J4400 - Experiences

2009-12-14 Thread Trevor Pretty
Sorry if you got this twice, but I never saw it appear on the alias. OK, today I played with a J4400 connected to a Txxx server running S10 10/09. First off: read the release notes. I spent about 4 hours pulling my hair out because I could not get stmsboot to work until we read in the release no
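For reference, a minimal sketch of the MPxIO steps usually involved with a J4400 behind a supported SAS HBA (device and pool names are hypothetical; as the post stresses, read the release notes first):

# stmsboot -e          # enable STMS/MPxIO; prompts for a reboot
# stmsboot -L          # after reboot, map old device names to new scsi_vhci names
# zpool status tank    # confirm the pool sees the multipathed devices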

Re: [zfs-discuss] DeDup and Compression - Reverse Order?

2009-12-14 Thread Nick
> Hmm, dunno. I wouldn't set anything but a scratch file system to dedup=on. Anything of even slight significance is set to dedup=verify. Why? Are you saying this because the ZFS dedup code is relatively new? Or because you think there's some other problem/disadvantage to it? We're pl

Re: [zfs-discuss] DeDup and Compression - Reverse Order?

2009-12-14 Thread Cyril Plisko
On Mon, Dec 14, 2009 at 9:32 PM, Andrey Kuzmin wrote: > Right, but 'verify' seems to be 'extreme safety' and thus a rather rare use case. Hmm, dunno. I wouldn't set anything but a scratch file system to dedup=on. Anything of even slight significance is set to dedup=verify. > Saving cycles lost
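A minimal sketch of that policy (dataset names are hypothetical; 'verify' adds a byte-for-byte comparison whenever a checksum match is found, before a block is shared):

# zfs set dedup=on tank/scratch          # trust the checksum alone
# zfs set dedup=verify tank/important    # compare block contents on every match
# zfs get dedup tank/scratch tank/important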

Re: [zfs-discuss] Changing ZFS drive pathing

2009-12-14 Thread Cindy Swearingen
Hi Cesare, According to our CR 6524163, this problem was fixed in PowerPath 5.0.2, but then the problem reoccurred. According to the EMC PowerPath Release notes, here: www.emc.com/microsites/clariion-support/pdf/300-006-626.pdf This problem is fixed in 5.2 SP1. I would review the related ZF

Re: [zfs-discuss] DeDup and Compression - Reverse Order?

2009-12-14 Thread Andrey Kuzmin
On Mon, Dec 14, 2009 at 9:53 PM, wrote: > >>On Mon, Dec 14, 2009 at 09:30:29PM +0300, Andrey Kuzmin wrote: >>> ZFS deduplication is block-level, so to deduplicate one needs data >>> broken into blocks to be written. With compression enabled, you don't >>> have these until data is compressed. Look

Re: [zfs-discuss] DeDup and Compression - Reverse Order?

2009-12-14 Thread Casper . Dik
>On Mon, Dec 14, 2009 at 09:30:29PM +0300, Andrey Kuzmin wrote: >> ZFS deduplication is block-level, so to deduplicate one needs data >> broken into blocks to be written. With compression enabled, you don't >> have these until data is compressed. Looks like cycles waste indeed, >> but ... > >ZFS c

Re: [zfs-discuss] DeDup and Compression - Reverse Order?

2009-12-14 Thread A Darren Dunham
On Mon, Dec 14, 2009 at 09:30:29PM +0300, Andrey Kuzmin wrote: > ZFS deduplication is block-level, so to deduplicate one needs data > broken into blocks to be written. With compression enabled, you don't > have these until data is compressed. Looks like cycles waste indeed, > but ... ZFS compressi

Re: [zfs-discuss] hard drive choice, TLER/ERC/CCTL

2009-12-14 Thread Richard Elling
On Dec 14, 2009, at 10:18 AM, Markus Kovero wrote: How can you set these values in FMA? UTSL: http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/cmd/fm/modules/common/zfs-diagnosis/zfs_de.c#775 Standard caveats for adjusting timeouts apply. -- richard

Re: [zfs-discuss] DeDup and Compression - Reverse Order?

2009-12-14 Thread Andrey Kuzmin
On Sun, Dec 13, 2009 at 11:51 PM, Steve Radich, BitShop, Inc. wrote: > I enabled compression on a zfs filesystem with compression=gzip-9 - i.e. fairly slow compression - this stores backups of databases (which compress fairly well). > The next question is: Is the CRC on the disk based on t

Re: [zfs-discuss] hard drive choice, TLER/ERC/CCTL

2009-12-14 Thread Markus Kovero
How can you set these values in FMA? Yours, Markus Kovero -Original Message- From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of R.G. Keen Sent: 14 December 2009 20:14 To: zfs-discuss@opensolaris.org Subject: Re: [zfs-discuss] hard driv

Re: [zfs-discuss] hard drive choice, TLER/ERC/CCTL

2009-12-14 Thread R.G. Keen
> FMA (not ZFS, directly) looks for a number of failures over a period of time. > By default that is 10 failures in 10 minutes. If you have an error that trips on TLER, the best it can see is 2-3 failures in 10 minutes. The symptom you will see is that when these long timeouts happen,

Re: [zfs-discuss] Weird ZFS errors

2009-12-14 Thread Cindy Swearingen
Hi James, What are the commands that are used to reboot this server? Also, you can use the fmdump -eV command to review any underlying hardware problems. You might see some clues about what is going on with c7t2d0. Thanks, Cindy On 12/13/09 16:46, James Nelson wrote: A majority of the time w
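A minimal sketch of the suggested checks (device name from the post; fmdump output is verbose and formats vary by release):

# fmdump -eV                 # dump the logged error reports (ereports) in full
# fmdump -e                  # or just the one-line summaries, to spot patterns
# iostat -En c7t2d0          # per-device soft/hard/transport error counters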

Re: [zfs-discuss] Something wrong with zfs mount

2009-12-14 Thread Gonzalo Siero
Hi, Martin Uhl wrote: > obviously that will fail. So AFAIK those directories will be created on mount but not removed on unmount Good point. I was not aware of this. Will check with engineering. The problem is not that exporting will not remove dirs (which I doubt it should) but moun

Re: [zfs-discuss] hard drive choice, TLER/ERC/CCTL

2009-12-14 Thread Richard Elling
On Dec 13, 2009, at 11:28 PM, Yaverot wrote: Been lurking for about a week and a half and this is my first post... --- bfrie...@simple.dallas.tx.us wrote: On Fri, 11 Dec 2009, Bob wrote: Thanks. Any alternatives, other than using enterprise-level drives? You can of course use normal cons

Re: [zfs-discuss] Something wrong with zfs mount

2009-12-14 Thread Martin Uhl
> If you umount a ZFS FS that has some other FSs underneath it, then the mount points for the "child" FSs need to be created to have those mounted; that way, if you don't export the pool, the dirs won't be deleted, and next time you import the pool the FS will fail to mount because your m

Re: [zfs-discuss] Space not freed?

2009-12-14 Thread Henrik Johansson
Hello, On 14 dec 2009, at 14.16, Markus Kovero wrote: Hi, if someone running 129 could try this out, turn off compression in your pool, mkfile 10g /pool/file123, see used space and then remove the file and see if it makes used space available again. I’m having trouble with this, reminds m

Re: [zfs-discuss] Space not freed?

2009-12-14 Thread Markus Kovero
>> Hi, if someone running 129 could try this out, turn off compression in your pool, mkfile 10g /pool/file123, see used space and then remove the file and see if it makes used space available again. I'm having trouble with this, reminds me of similar bug that occurred in 111-release.

Re: [zfs-discuss] Space not freed?

2009-12-14 Thread Casper . Dik
> Hi, if someone running 129 could try this out, turn off compression in your pool, mkfile 10g /pool/file123, see used space and then remove the file and see if it makes used space available again. I'm having trouble with this, reminds me of similar bug that occurred in 111-release. Any auto

[zfs-discuss] Space not freed?

2009-12-14 Thread Markus Kovero
Hi, if someone running 129 could try this out, turn off compression in your pool, mkfile 10g /pool/file123, see used space and then remove the file and see if it makes used space available again. I'm having trouble with this, reminds me of similar bug that occurred in 111-release. Yours Markus
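For anyone reproducing this, a minimal sketch of the test as described (pool name is hypothetical; note that freed space can show up a few seconds after the rm, and snapshots or clones would hold it indefinitely):

# zfs set compression=off pool
# mkfile 10g /pool/file123
# zfs list pool; zpool list pool     # note USED/ALLOC
# rm /pool/file123
# sync; sleep 30
# zfs list pool; zpool list pool     # USED/ALLOC should drop back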

Re: [zfs-discuss] Something wrong with zfs mount

2009-12-14 Thread Gonzalo Siero
Hi, Martin Uhl wrote: We opened a Support Case (Case ID 71912304) which after some discussion came to the "conclusion" that we should not use /etc/reboot for rebooting. Yes. You are using "/etc/reboot" which is the same as calling "/usr/sbin/halt": % ls -l /etc/reboot lrwxrwxrwx 1 ro
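For comparison, a minimal sketch of an orderly restart that runs the shutdown scripts (unlike halt/reboot, which bypass them):

# shutdown -y -g0 -i6    # orderly transition to run level 6 (reboot)
# init 6                 # equivalent: change run level via init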

Re: [zfs-discuss] all zfs snapshot made by TimeSlider destroyed after upgrading to b129

2009-12-14 Thread Daniel Carosone
> There was an announcement made in November about auto snapshots being made obsolete in build 128 That thread (which I know well) talks about the replacement of the *implementation*, while retaining (the majority of) the behaviour and configuration interface. The old implementation had

Re: [zfs-discuss] Something wrong with zfs mount

2009-12-14 Thread Martin Uhl
We are also running into this bug. Our system is Solaris 10u4 (SunOS sunsystem9 5.10 Generic_127112-10 i86pc i386 i86pc), ZFS version 4. We opened a Support Case (Case ID 71912304) which after some discussion came to the "conclusion" that we should not use /etc/reboot for rebooting. This leads me

Re: [zfs-discuss] Accidentally added disk instead of attaching

2009-12-14 Thread Martijn de Munnik
On Mon, 2009-12-07 at 23:31 +0100, Martijn de Munnik wrote: > On Dec 7, 2009, at 11:23 PM, Daniel Carosone wrote: > >> but if you attempt to "add" a disk to a redundant config, you'll see an error message similar [..] > >> Doesn't the "mismatched replication" message help? > > No
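For reference, a minimal sketch of the difference (pool and device names are hypothetical; on builds of that era an accidentally added top-level vdev could not be removed again):

# zpool attach tank c1t0d0 c1t1d0   # mirrors c1t1d0 onto the existing c1t0d0
# zpool add tank c1t1d0             # adds c1t1d0 as a new top-level vdev; zpool
                                    # warns about mismatched replication unless -f is used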

Re: [zfs-discuss] Changing ZFS drive pathing

2009-12-14 Thread Cesare
On Wed, Dec 9, 2009 at 3:22 PM, Mike Johnston wrote: > Thanks for the info Alexander... I will test this out. I'm just wondering what it's going to see after I install Power Path. Since each drive will have 4 paths, plus the Power Path... after doing a "zfs import" how will I force it to
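The pool is imported with zpool import; a minimal sketch of pointing it at a particular device directory so the pool binds to the intended paths (the PowerPath pseudo-device location shown is an assumption and varies by install):

# zpool export tank
# zpool import -d /dev/dsk tank        # scan the default device directory
# zpool import -d /dev/emcpower tank   # or scan only the PowerPath devices (path hypothetical)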

Re: [zfs-discuss] ZFS pool unusable after attempting to destroy a dataset with dedup enabled

2009-12-14 Thread Ross
Thanks for the update, it's no help to you of course, but I'm watching your progress with interest. Your progress updates are very much appreciated.

Re: [zfs-discuss] X4540 + SFA F20 PCIe?

2009-12-14 Thread Andrey Kuzmin
On Mon, Dec 14, 2009 at 4:04 AM, Jens Elkner wrote: > On Sat, Dec 12, 2009 at 04:23:21PM +, Andrey Kuzmin wrote: >> As to whether it makes sense (as opposed to two distinct physical devices), you would have read cache hits competing with log writes for bandwidth. I doubt both will be ple

Re: [zfs-discuss] all zfs snapshot made by TimeSlider destroyed after upgrading to b129

2009-12-14 Thread Ross
There was an announcement made in November about auto snapshots being made obsolete in build 128; I assume major changes are afoot: http://www.opensolaris.org/jive/thread.jspa?messageID=437516&tstart=0#437516

Re: [zfs-discuss] DeDup and Compression - Reverse Order?

2009-12-14 Thread Robert Milkowski
On 13/12/2009 20:51, Steve Radich, BitShop, Inc. wrote: I enabled compression on a zfs filesystem with compression=gzip-9 - i.e. fairly slow compression - this stores backups of databases (which compress fairly well). The next question is: Is the CRC on the disk based on the uncompressed data