Re: [zfs-discuss] ZFS Performance

2010-04-14 Thread Daniel Carosone
On Wed, Apr 14, 2010 at 09:58:50AM -0700, Richard Elling wrote:
> On Apr 14, 2010, at 8:57 AM, Yariv Graf wrote:
>> From my experience dealing with > 4TB you stop writing after 80% of
>> zpool utilization
> YMMV. I have routinely completely filled zpools. There have been some
> improvement

[zfs-discuss] zdb zpool shows 'Value too large for defined data type'

2010-04-14 Thread mingli
zdb zpool output as below:

bash-3.00# zdb ttt
    version=15
    name='ttt'
    state=0
    txg=4
    pool_guid=4724975198934143337
    hostid=69113181
    hostname='cdc-x4100s8'
    vdev_tree
        type='root'
        id=0
        guid=4724975198934143337
        children[0]
            type

Re: [zfs-discuss] Which build is the most stable, mainly for NAS (zfs)?

2010-04-14 Thread Dmitry
Yesterday I received a victim: "SuperServer 5026T-3RF 19" 2U, Intel X58, 1 x CPU LGA1366, 8 x SAS/SATA hot-swap drive bays, 8 ports SAS LSI 1068E, 6 ports SATA-II Intel ICH10R, 2 x Gigabit Ethernet", and I have two candidates: Openfiler vs OpenSolaris :)

Re: [zfs-discuss] dedup screwing up snapshot deletion

2010-04-14 Thread Daniel Carosone
On Wed, Apr 14, 2010 at 09:04:50PM -0500, Paul Archer wrote:
> I realize that I did things in the wrong order. I should have removed the
> oldest snapshot first, on to the newest, and then removed the data in the
> FS itself.
For the problem in question, this is irrelevant. As discussed in the

Re: [zfs-discuss] dedup causing problems with NFS? (was Re: snapshots taking too much space)

2010-04-14 Thread Erik Trimble
Daniel Carosone wrote:
> On Wed, Apr 14, 2010 at 08:48:42AM -0500, Paul Archer wrote:
>> So I turned deduplication on on my staging FS (the one that gets mounted
>> on the database servers) yesterday, and since then I've been seeing the
>> mount hang for short periods of time off and on. (It lights nag

Re: [zfs-discuss] Which build is the most stable, mainly for NAS (zfs)?

2010-04-14 Thread Erik Trimble
David Dyer-Bennet wrote:
> On 14-Apr-10 22:44, Ian Collins wrote:
>> Hint: the southern hemisphere does exist!
> I've even been there. But the month/season relationship is too deeply
> built into too many things I follow (like the Christmas books come out
> of the publisher's fall list; for that matte

Re: [zfs-discuss] dedup causing problems with NFS? (was Re: snapshots taking too much space)

2010-04-14 Thread Daniel Carosone
On Wed, Apr 14, 2010 at 08:48:42AM -0500, Paul Archer wrote:
> So I turned deduplication on on my staging FS (the one that gets mounted
> on the database servers) yesterday, and since then I've been seeing the
> mount hang for short periods of time off and on. (It lights nagios up
> like a Chris

Re: [zfs-discuss] Which build is the most stable, mainly for NAS (zfs)?

2010-04-14 Thread David Dyer-Bennet
On 14-Apr-10 22:44, Ian Collins wrote:
> On 04/15/10 06:16 AM, David Dyer-Bennet wrote:
>> Because 132 was the most current last time I paid much attention :-).
>> As I say, I'm currently holding out for 2010.$Spring, but knowing how
>> to get to a particular build via package would be potentially interest

Re: [zfs-discuss] Why would zfs have too many errors when underlying raid array is fine?

2010-04-14 Thread Willard Korfhage
As I mentioned earlier, I removed the hardware-based RAID6 array, changed all the disks to passthrough disks, and made a raidz2 pool using all the disks. I used my backup program to copy 55GB of data to the pool, and now I have errors all over the place.

# zpool status -v
  pool: bigraid
 state: DEG
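
A follow-up check along these lines might help localize things (pool name from the post; this is a sketch, not from the original thread):

    # show per-device error counters and any files with unrecoverable errors
    zpool status -v bigraid
    # once the underlying cause (cabling, controller, RAM) is addressed,
    # clear the counters and let a scrub re-verify every block
    zpool clear bigraid
    zpool scrub bigraid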

[zfs-discuss] How can I get the unused partition size or usage after it has been used?

2010-04-14 Thread likaijun
Hello all, I use OpenSolaris snv_133. I use COMSTAR to share a partition (werr/pwd) over IP SAN to a Windows client. I copied a 1.05GB file to the formatted disk; the partition usage rose to about 80%. Then I deleted the file, the disk is idle, and I cancelled the IP SAN share. I thought the partition usage would drop to abo

Re: [zfs-discuss] Which build is the most stable, mainly for NAS (zfs)?

2010-04-14 Thread Ian Collins
On 04/15/10 06:16 AM, David Dyer-Bennet wrote:
> Because 132 was the most current last time I paid much attention :-). As
> I say, I'm currently holding out for 2010.$Spring, but knowing how to
> get to a particular build via package would be potentially interesting
> for the future still.
I hope it's

Re: [zfs-discuss] dedup screwing up snapshot deletion

2010-04-14 Thread Bill Sommerfeld
On 04/14/10 19:51, Richard Jahnel wrote:
> This sounds like the known issue about the dedupe map not fitting in ram.
Indeed, but this is not correct:
> When blocks are freed, dedupe scans the whole map to ensure each block is
> not in use before releasing it.
That's not correct. dedup uses a da

Re: [zfs-discuss] dedup screwing up snapshot deletion

2010-04-14 Thread Paul Archer
7:51pm, Richard Jahnel wrote:
> This sounds like the known issue about the dedupe map not fitting in ram.
> When blocks are freed, dedupe scans the whole map to ensure each block is
> not in use before releasing it. This takes a veeery long time if the map
> doesn't fit in ram.
> If you can, try adding

Re: [zfs-discuss] Which build is the most stable, mainly for NAS (zfs)?

2010-04-14 Thread Eric D. Mudama
On Wed, Apr 14 at 13:16, David Dyer-Bennet wrote:
> I don't get random hangs in normal use; so I haven't done anything to
> "get past" this.
Interesting. Win7-64 clients were locking up our 2009.06 server within seconds while performing common operations like searching and copying large directory

Re: [zfs-discuss] dedup screwing up snapshot deletion

2010-04-14 Thread Richard Jahnel
This sounds like the known issue about the dedupe map not fitting in RAM. When blocks are freed, dedupe scans the whole map to ensure each block is not in use before releasing it. This takes a veeery long time if the map doesn't fit in RAM. If you can, try adding more RAM to the system.
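
A rough way to check whether the DDT fits in RAM, sketched here for a hypothetical pool named tank (the output format varies between builds):

    # print dedup table statistics; the entry count times the commonly
    # cited ~250 bytes per in-core entry gives a ballpark RAM figure
    zdb -DD tank
    # estimate the would-be dedup ratio before enabling it on a pool
    zdb -S tank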

[zfs-discuss] dedup screwing up snapshot deletion

2010-04-14 Thread Paul Archer
I have an approx 700GB (of data) FS that I had dedup turned on for. (See previous posts.) I turned on dedup after the FS was populated, and was not sure dedup was working. I had another copy of the data, so I removed the data, and then tried to destroy the snapshots I had taken. The first two di

Re: [zfs-discuss] ZFS Performance

2010-04-14 Thread Erik Trimble
Richard Elling wrote:
> On Apr 14, 2010, at 8:57 AM, Yariv Graf wrote:
>> From my experience dealing with > 4TB you stop writing after 80% of
>> zpool utilization
> YMMV. I have routinely completely filled zpools. There have been some
> improvements in performance of allocations when free space

Re: [zfs-discuss] Which build is the most stable, mainly for NAS (zfs)?

2010-04-14 Thread David Dyer-Bennet
On Wed, April 14, 2010 15:28, Miles Nordin wrote:
>> "dd" == David Dyer-Bennet writes:
> dd> Is it possible to switch to b132 now, for example?
> yeah, this is not so bad. I know of two approaches:
Thanks, I've filed and flagged this for reference. -- David Dyer-Bennet, d...@dd-b.

Re: [zfs-discuss] Which build is the most stable, mainly for NAS (zfs)?

2010-04-14 Thread Miles Nordin
> "dd" == David Dyer-Bennet writes: dd> Is it possible to switch to b132 now, for example? yeah, this is not so bad. I know of two approaches: * genunix.org assembles livecd's of each b tag. You can burn one, unplug from the internet, install it. It is nice to have a livecd ca

Re: [zfs-discuss] Suggestions about current ZFS setup

2010-04-14 Thread Bill Sommerfeld
On 04/14/10 12:37, Christian Molson wrote: First I want to thank everyone for their input, It is greatly appreciated. To answer a few questions: Chassis I have: http://www.supermicro.com/products/chassis/4U/846/SC846E2-R900.cfm Motherboard: http://www.tyan.com/product_board_detail.aspx?pid=56

Re: [zfs-discuss] Suggestions about current ZFS setup

2010-04-14 Thread Christian Molson
Just a quick update. Tested using bonnie++, just during its "Intelligent write" phase: my 5 vdevs of 4x1TB drives wrote around 300-350MB/sec in that test. The 1 vdev of 4x2TB drives wrote more inconsistently, between 200-300MB/sec. This is not a complete test... just looking at iostat output while bonnie++
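
For anyone wanting to reproduce the numbers, a minimal bonnie++ run looks something like this (path is hypothetical; with 24GB of RAM the file set should be about twice that to defeat caching):

    # sequential write/rewrite/read on the pool under test,
    # run as an unprivileged user
    bonnie++ -d /tank/bench -s 48g -u nobody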

Re: [zfs-discuss] Which build is the most stable, mainly for NAS (zfs)?

2010-04-14 Thread Brandon High
On Wed, Apr 14, 2010 at 5:41 AM, Dmitry wrote:
> Which build is the most stable, mainly for NAS?
> I plan a NAS with ZFS + CIFS, iSCSI
I'm using b133. My current box was installed with 118, upgraded to 128a, then 133. I'm avoiding b134 due to changes in the CIFS service that affect ACLs. http://bugs.o

Re: [zfs-discuss] Suggestions about current ZFS setup

2010-04-14 Thread Christian Molson
First I want to thank everyone for their input, It is greatly appreciated. To answer a few questions: Chassis I have: http://www.supermicro.com/products/chassis/4U/846/SC846E2-R900.cfm Motherboard: http://www.tyan.com/product_board_detail.aspx?pid=560 RAM: 24 GB (12 x 2GB) 10 x 1TB Seagates 7

Re: [zfs-discuss] [shell-discuss] getconf test for case insensitive ZFS?

2010-04-14 Thread Joerg Schilling
ольга крыжановская wrote:
> There is no way in the SUS standard to determine if a file system is
> case insensitive, i.e. with pathconf?
SUS requires a case sensitive filesystem. There is no need to request this from a POSIX view. Jörg -- EMail:jo...@schily.isdn.cs.tu-berlin.de (home)

Re: [zfs-discuss] Suggestions about current ZFS setup

2010-04-14 Thread David Dyer-Bennet
On Wed, April 14, 2010 12:29, Bob Friesenhahn wrote:
> On Wed, 14 Apr 2010, David Dyer-Bennet wrote:
>> Not necessarily for a home server. While mine so far is all mirrored
>> pairs of 400GB disks, I don't even think about "performance" issues, I
>> never come anywhere near the limits

Re: [zfs-discuss] Which build is the most stable, mainly for NAS (zfs)?

2010-04-14 Thread David Dyer-Bennet
On Wed, April 14, 2010 11:51, Tonmaus wrote:
>> On Wed, April 14, 2010 08:52, Tonmaus wrote:
>>> safe to say: 2009.06 (b111) is unusable for the purpose, and CIFS is
>>> dead in this build.
>> That's strange; I run it every day (my home Windows "My Documents"
>> folder and all my pho

Re: [zfs-discuss] getconf test for case insensitive ZFS?

2010-04-14 Thread ольга крыжановская
Roy, I was looking for a C API which works for all types of file systems, including ZFS, CIFS, PCFS and others. Olga

On Wed, Apr 14, 2010 at 7:46 PM, Roy Sigurd Karlsbakk wrote:
> r...@urd:~# zfs get casesensitivity dpool/test
> NAME        PROPERTY         VALUE      SOURCE
> dpool/test  cas

Re: [zfs-discuss] getconf test for case insensitive ZFS?

2010-04-14 Thread Roy Sigurd Karlsbakk
r...@urd:~# zfs get casesensitivity dpool/test
NAME        PROPERTY         VALUE      SOURCE
dpool/test  casesensitivity  sensitive  -

This seems to be settable only at create, not later. See man zfs for more info. Best regards, Roy
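
In other words, the property has to be picked when the dataset is created; a sketch with hypothetical dataset names:

    # create-time only: choose the CIFS-friendly behaviour up front
    zfs create -o casesensitivity=insensitive dpool/winshare
    # or 'mixed', which keeps POSIX semantics for local access
    zfs create -o casesensitivity=mixed dpool/share2
    zfs get casesensitivity dpool/winshare dpool/share2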

Re: [zfs-discuss] [shell-discuss] getconf test for case insensitive ZFS?

2010-04-14 Thread ольга крыжановская
There is no way in the SUS standard to determine if a file system is case insensitive, i.e. with pathconf? Olga

On Wed, Apr 14, 2010 at 7:48 PM, Glenn Fowler wrote:
> On Wed, 14 Apr 2010 17:54:02 +0200 ольга крыжановская wrote:
>> Can I use getconf to test if a ZFS file

Re: [zfs-discuss] Suggestions about current ZFS setup

2010-04-14 Thread Bob Friesenhahn
On Wed, 14 Apr 2010, David Dyer-Bennet wrote: Not necessarily for a home server. While mine so far is all mirrored pairs of 400GB disks, I don't even think about "performance" issues, I never come anywhere near the limits of the hardware. I don't see how the location of the server has any bea

Re: [zfs-discuss] Replaced drive in zpool, was fine, now degraded - ohno

2010-04-14 Thread Cindy Swearingen
Jonathan, For a different diagnostic perspective, you might use the fmdump -eV command to identify what FMA indicates for this device. This level of diagnostics is below the ZFS level and definitely more detailed so you can see when these errors began and for how long. Cindy On 04/14/10 11:08,
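
For example (a sketch; the time filter is optional and the accepted date formats are listed in fmdump(1M)):

    # dump the FMA error log verbosely, oldest events first
    fmdump -eV
    # or only events since a given date, to keep the output manageable
    fmdump -eV -t 09Apr10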

Re: [zfs-discuss] Replaced drive in zpool, was fine, now degraded - ohno

2010-04-14 Thread Jonathan
> Do worry about media errors. Though this is the most common HDD
> error, it is also the cause of data loss. Fortunately, ZFS detected this
> and repaired it for you.
Right. I assume you do recommend swapping the faulted drive out though?
> Other file systems may not be so gracious.

Re: [zfs-discuss] ZFS forensics/revert/restore shellscript and how-to.

2010-04-14 Thread Richard Elling
On Apr 14, 2010, at 5:13 AM, fred pam wrote: > I have a similar problem that differs in a subtle way. I moved a zpool > (single disk) from one system to another. Due to my inexperience I did not > import the zpool but (doh!) 'zpool create'-ed it (I may also have used a -f > somewhere in there..

Re: [zfs-discuss] Checksum errors on and after resilver

2010-04-14 Thread Richard Elling
[this seems to be the question of the day, today...] On Apr 14, 2010, at 2:57 AM, bonso wrote: > Hi all, > I recently experienced a disk failure on my home server and observed checksum > errors while resilvering the pool and on the first scrub after the resilver > had completed. Now everything

Re: [zfs-discuss] ZIL errors but device seems OK

2010-04-14 Thread Richard Elling
comment below... On Apr 14, 2010, at 1:49 AM, Richard Skelton wrote:
> Hi,
> I have installed OpenSolaris snv_134 from the iso at genunix.org.
> Mon Mar 8 2010 New OpenSolaris preview, based on build 134
> I created a zpool:
>     NAME      STATE  READ WRITE CKSUM
>     tank      ON

Re: [zfs-discuss] Suggestions about current ZFS setup

2010-04-14 Thread David Dyer-Bennet
On Wed, April 14, 2010 12:06, Bob Friesenhahn wrote:
> On Wed, 14 Apr 2010, David Dyer-Bennet wrote:
>>> It should be "safe" but chances are that your new 2TB disks are
>>> considerably slower than the 1TB disks you already have. This should
>>> be as much cause for concern (or more so) than the

Re: [zfs-discuss] Replaced drive in zpool, was fine, now degraded - ohno

2010-04-14 Thread Jonathan
Yeah,

$ smartctl -d sat,12 -i /dev/rdsk/c5t0d0
smartctl 5.39.1 2010-01-28 r3054 [i386-pc-solaris2.11] (local build)
Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

Smartctl: Device Read Identity Failed (not an ATA/ATAPI device)

Re: [zfs-discuss] Suggestions about current ZFS setup

2010-04-14 Thread Bob Friesenhahn
On Wed, 14 Apr 2010, David Dyer-Bennet wrote:
>> It should be "safe" but chances are that your new 2TB disks are
>> considerably slower than the 1TB disks you already have. This should be
>> as much cause for concern (or more so) than the difference in raidz
>> topology.
> Not necessarily for a home server.

Re: [zfs-discuss] Replaced drive in zpool, was fine, now degraded - ohno

2010-04-14 Thread Richard Elling
On Apr 14, 2010, at 9:56 AM, Jonathan wrote:
> I just ran 'iostat -En'. This is what was reported for the drive in
> question (all other drives showed 0 errors across the board).
> All drives indicated the "illegal request... predictive failure analysis"

Re: [zfs-discuss] Replaced drive in zpool, was fine, now degraded - ohno

2010-04-14 Thread Eric Andersen
> I'm on snv 111b. I attempted to get smartmontools working, but it
> doesn't seem to want to work as these are all sata drives.
Have you tried using '-d sat,12' when using smartmontools? opensolaris.org/jive/thread.jspa?messageID=473727
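
That is, forcing the SCSI-to-ATA translation layer with 12-byte commands, something like this (device path hypothetical):

    # identity information only
    smartctl -d sat,12 -i /dev/rdsk/c5t0d0
    # full SMART attributes, error log and self-test results
    smartctl -d sat,12 -a /dev/rdsk/c5t0d0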

Re: [zfs-discuss] ZFS Performance

2010-04-14 Thread Richard Elling
On Apr 14, 2010, at 8:57 AM, Yariv Graf wrote:
> From my experience dealing with > 4TB you stop writing after 80% of
> zpool utilization
YMMV. I have routinely completely filled zpools. There have been some improvements in performance of allocations when free space gets low in the past 6-9 month

Re: [zfs-discuss] Replaced drive in zpool, was fine, now degraded - ohno

2010-04-14 Thread Jonathan
I just ran 'iostat -En'. This is what was reported for the drive in question (all other drives showed 0 errors across the board). All drives indicated the "illegal request... predictive failure analysis" -- c7t1d0

Re: [zfs-discuss] Which build is the most stable, mainly for NAS (zfs)?

2010-04-14 Thread Tonmaus
> On Wed, April 14, 2010 08:52, Tonmaus wrote:
>> safe to say: 2009.06 (b111) is unusable for the purpose, and CIFS is
>> dead in this build.
> That's strange; I run it every day (my home Windows "My Documents"
> folder and all my photos are on 2009.06).
> -bash-3.2$ cat /etc/rele

Re: [zfs-discuss] Replaced drive in zpool, was fine, now degraded - ohno

2010-04-14 Thread Richard Elling
On Apr 14, 2010, at 12:05 AM, Jonathan wrote: > I just started replacing drives in this zpool (to increase storage). I pulled > the first drive, and replaced it with a new drive and all was well. It > resilvered with 0 errors. This was 5 days ago. Just today I was looking > around and noticed t

Re: [zfs-discuss] ZFS Performance

2010-04-14 Thread Yariv Graf
From my experience dealing with > 4TB you stop writing after 80% of zpool utilization

On Apr 14, 2010, at 6:53 PM, "eXeC001er" wrote:
> 20% - it is a big amount for large volumes, right?
> 2010/4/14 Yariv Graf
>> Hi
>> Keep below 80%
>> On Apr 14, 2010, at 6:49 PM, "eXeC001er" wrote:
>>> Hi A

Re: [zfs-discuss] btrfs on Solaris? (Re: [osol-discuss] [indiana-discuss] So when are we gonna fork this sucker?)

2010-04-14 Thread ольга крыжановская
I would like to see btrfs under a BSD license so that FreeBSD/OpenBSD/NetBSD can adopt it, too. Olga

2010/4/14 :
>> btrfs could be supported on OpenSolaris, too. IMO it could even
>> complement ZFS and spawn some concurrent development between both. ZFS
>> is too high end and works very poorly with

Re: [zfs-discuss] ZFS Performance

2010-04-14 Thread eXeC001er
20% - it is a big amount for large volumes, right?

2010/4/14 Yariv Graf
> Hi
> Keep below 80%
> On Apr 14, 2010, at 6:49 PM, "eXeC001er" wrote:
>> Hi All.
>> How much disk space do I need to reserve to keep ZFS performance?
>> Any official doc?
>> Thanks.

[zfs-discuss] getconf test for case insensitive ZFS?

2010-04-14 Thread ольга крыжановская
Can I use getconf to test if a ZFS file system is mounted in case insensitive mode? Olga

Re: [zfs-discuss] ZFS Performance

2010-04-14 Thread Yariv Graf
Hi
Keep below 80%

On Apr 14, 2010, at 6:49 PM, "eXeC001er" wrote:
> Hi All.
> How much disk space do I need to reserve to keep ZFS performance?
> Any official doc?
> Thanks.

Re: [zfs-discuss] btrfs on Solaris? (Re: [osol-discuss] [indiana-discuss] So when are we gonna fork this sucker?)

2010-04-14 Thread Casper . Dik
> btrfs could be supported on OpenSolaris, too. IMO it could even
> complement ZFS and spawn some concurrent development between both. ZFS
> is too high end and works very poorly with less than 2GB while btrfs
> reportedly works well with 128MB on ARM.
Both have license issues; Oracle can now re-lice

Re: [zfs-discuss] casesensitivity mixed and CIFS

2010-04-14 Thread Robert Milkowski
On 14/04/2010 16:04, John wrote:
> Hello, we set our ZFS filesystems to casesensitivity=mixed when we
> created them. However, CIFS access to these files is still case
> sensitive. Here is the configuration:
> # zfs get casesensitivity pool003/arch
> NAME          PROPERTY         VALUE  SOURCE

[zfs-discuss] ZFS Performance

2010-04-14 Thread eXeC001er
Hi All. How much disk space do I need to reserve to keep ZFS performance? Any official doc? Thanks.
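
The replies (which appear earlier in this digest) converge on keeping utilization below roughly 80%. Capacity is easy to watch; a sketch, noting that the column names changed to alloc/free in later builds:

    # one line per pool: size, space used, space left, percent full
    zpool list -o name,size,used,available,capacity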

Re: [zfs-discuss] Suggestions about current ZFS setup

2010-04-14 Thread David Dyer-Bennet
On Tue, April 13, 2010 10:38, Bob Friesenhahn wrote:
> On Tue, 13 Apr 2010, Christian Molson wrote:
>> Now I would like to add my 4 x 2TB drives, I get a warning message
>> saying that: "Pool uses 5-way raidz and new vdev uses 4-way raidz"
>> Do you think it would be safe to use the -f switch h

Re: [zfs-discuss] casesensitivity mixed and CIFS

2010-04-14 Thread John
No, the filesystem was created with b103 or earlier. Just to add more details: the issue only occurred on the first direct access to the file. From a Windows client that has never accessed the file, you can issue: dir \\filer\arch\myfolder\myfile.TXT and you will get "file not found", if the file i

Re: [zfs-discuss] Suggestions about current ZFS setup

2010-04-14 Thread David Dyer-Bennet
On Tue, April 13, 2010 09:48, Christian Molson wrote:
> Now I would like to add my 4 x 2TB drives, I get a warning message saying
> that: "Pool uses 5-way raidz and new vdev uses 4-way raidz". Do you think
> it would be safe to use the -f switch here?
Yes. 4-way on the bigger drive is *more*
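
For the record, the forced add would look roughly like this (pool and device names hypothetical); -f only suppresses the mismatched-replication warning, it changes nothing on disk:

    # add a second raidz vdev, 4 x 2TB, despite the width mismatch
    zpool add -f tank raidz c8t0d0 c8t1d0 c8t2d0 c8t3d0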

[zfs-discuss] btrfs on Solaris? (Re: [osol-discuss] [indiana-discuss] So when are we gonna fork this sucker?)

2010-04-14 Thread ольга крыжановская
btrfs could be supported on OpenSolaris, too. IMO it could even complement ZFS and spawn some concurrent development between both. ZFS is too high end and works very poorly with less than 2GB, while btrfs reportedly works well with 128MB on ARM. Olga On Wed, Apr 14, 2010 at 5:31 PM, wrote:

Re: [zfs-discuss] casesensitivity mixed and CIFS

2010-04-14 Thread Tonmaus
Was b130 also the version that created the data set? -Tonmaus

Re: [zfs-discuss] Which build is the most stable, mainly for NAS (zfs)?

2010-04-14 Thread David Dyer-Bennet
On Wed, April 14, 2010 08:52, Tonmaus wrote:
> safe to say: 2009.06 (b111) is unusable for the purpose, and CIFS is
> dead in this build.
That's strange; I run it every day (my home Windows "My Documents" folder and all my photos are on 2009.06).

-bash-3.2$ cat /etc/release

[zfs-discuss] casesensitivity mixed and CIFS

2010-04-14 Thread John
Hello, we set our ZFS filesystems to casesensitivity=mixed when we created them. However, CIFS access to these files is still case sensitive. Here is the configuration:

# zfs get casesensitivity pool003/arch
NAME          PROPERTY         VALUE  SOURCE
pool003/arch  casesensitivity  mixe

Re: [zfs-discuss] dedup causing problems with NFS? (was Re: snapshots taking too much space)

2010-04-14 Thread Bruno Sousa
Hi, maybe your ZFS box used for dedup is under heavy load, and that causes the timeouts in the nagios checks? I ask because I also see that effect on a system with 2 Intel Xeon 3.0GHz ;) Bruno

On 14-4-2010 15:48, Paul Archer wrote:
> So I turned deduplication on on my staging FS (the one th

Re: [zfs-discuss] Which build is the most stable, mainly for NAS (zfs)?

2010-04-14 Thread Tonmaus
Safe to say: 2009.06 (b111) is unusable for the purpose, and CIFS is dead in this build. I am using b133, but I am not sure it is the best choice. I'd like to hear from others as well. -Tonmaus

[zfs-discuss] dedup causing problems with NFS? (was Re: snapshots taking too much space)

2010-04-14 Thread Paul Archer
So I turned deduplication on on my staging FS (the one that gets mounted on the database servers) yesterday, and since then I've been seeing the mount hang for short periods of time off and on. (It lights nagios up like a Christmas tree 'cause the disk checks hang and timeout.) I haven't turne
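
For context, the knob in question, plus a quick way to see whether dedup is actually earning its keep; dataset and pool names are hypothetical:

    # enable dedup on the staging filesystem (affects new writes only)
    zfs set dedup=on tank/staging
    # pool-wide ratio achieved so far
    zpool get dedupratio tank
    # back out: new writes stop being deduped, existing blocks stay as-is
    zfs set dedup=off tank/staging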

[zfs-discuss] Which build is the most stable, mainly for NAS (zfs)?

2010-04-14 Thread Dmitry
Which build is the most stable, mainly for NAS? I plan a NAS with ZFS + CIFS, iSCSI. Thanks

Re: [zfs-discuss] ZFS forensics/revert/restore shellscript and how-to.

2010-04-14 Thread fred pam
I have a similar problem that differs in a subtle way. I moved a zpool (single disk) from one system to another. Due to my inexperience I did not import the zpool but (doh!) 'zpool create'-ed it (I may also have used a -f somewhere in there...) Interestingly the script still gives me the old u
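
Before any deeper forensics, it may be worth seeing what the disk's labels now claim; a sketch with a hypothetical device name. Note that zpool import -D only lists pools destroyed with 'zpool destroy', so a pool clobbered by 'zpool create' will likely not appear there:

    # dump the four vdev labels on the disk
    zdb -l /dev/rdsk/c5t0d0s0
    # list destroyed-but-recoverable pools, for completeness
    zpool import -D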

[zfs-discuss] Checksum errors on and after resilver

2010-04-14 Thread bonso
Hi all, I recently experienced a disk failure on my home server and observed checksum errors while resilvering the pool and on the first scrub after the resilver had completed. Now everything seems fine, but I'm posting this to get help calming my nerves and detecting any possible future fault

Re: [zfs-discuss] ZFS panic

2010-04-14 Thread Ian Collins
On 04/ 2/10 10:25 AM, Ian Collins wrote:
> Is this callstack familiar to anyone? It just happened on a Solaris 10
> update 8 box:
> genunix: [ID 655072 kern.notice] fe8000d1b830 unix:real_mode_end+7f81 ()
> genunix: [ID 655072 kern.notice] fe8000d1b910 unix:trap+5e6 ()
> genunix: [ID 655072 ke

[zfs-discuss] ZIL errors but device seems OK

2010-04-14 Thread Richard Skelton
Hi, I have installed OpenSolaris snv_134 from the iso at genunix.org (Mon Mar 8 2010, New OpenSolaris preview, based on build 134). I created a zpool:

    NAME        STATE   READ WRITE CKSUM
    tank        ONLINE     0     0     0
      c7t4d0    ONLINE     0     0     0
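
If the log device itself tests healthy, one hedged recovery path on snv_134 (slog removal has been possible since around build 125; the device name here is hypothetical):

    # reset the error counters, then re-check after some write load
    zpool clear tank
    # if ZIL errors persist, the separate log device can be removed
    # and re-added
    zpool remove tank c7t2d0
    zpool add tank log c7t2d0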

Re: [zfs-discuss] b134 panic in ddt_sync_entry()

2010-04-14 Thread Cyril Plisko
On Wed, Apr 14, 2010 at 3:01 AM, Victor Latushkin wrote:
> On Apr 13, 2010, at 9:52 PM, Cyril Plisko wrote:
>> Hello !
>> I've had a laptop that crashed a number of times during last 24 hours
>> with this stack:
>> panic[cpu0]/thread=ff0007ab0c60:
>> assertion failed: ddt_object_upda

[zfs-discuss] Replaced drive in zpool, was fine, now degraded - ohno

2010-04-14 Thread Jonathan
I just started replacing drives in this zpool (to increase storage). I pulled the first drive, and replaced it with a new drive and all was well. It resilvered with 0 errors. This was 5 days ago. Just today I was looking around and noticed that my pool was degraded (I see now that this occurred