[zfs-discuss] Checksum errors in storage pool

2007-03-07 Thread H.-J. Schnitzer
Hi, I am using ZFS under Solaris 10u3. After the failure of a 3510 RAID controller, I have several storage pools with damaged objects. "zpool status -xv" prints a long list:

    DATASET  OBJECT  RANGE
    4c0c     5dd     lvl=0 blkid=2
    28       b346    lvl=0 blkid=9
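For reference (not part of the original mail; the pool name is hypothetical), the standard commands for driving this kind of diagnosis are:

    zpool status -xv mypool   # show pools with errors and list the damaged objects
    zpool scrub mypool        # re-read every block and verify it against its checksum
    zpool clear mypool        # reset the error counters once the damage is dealt with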

[zfs-discuss] Re: ZFS/UFS layout for 4 disk servers

2007-03-07 Thread Matt B
Thanks for the responses. There is a lot there I am looking forward to digesting. Right off the bat, though, I wanted to bring up something I found just before reading this reply, as the answer to this question would automatically answer some other questions. There is a ZFS best practices wiki at http:

Re: [zfs-discuss] Re: ZFS/UFS layout for 4 disk servers

2007-03-07 Thread Frank Cusack
On March 7, 2007 8:50:53 AM -0800 Matt B <[EMAIL PROTECTED]> wrote:
> Any thoughts on the best practice points I am raising? It disturbs me
> that it would make a statement like "don't use slices for production".
I think that's just a performance thing. -frank

Re: [zfs-discuss] Re: ZFS/UFS layout for 4 disk servers

2007-03-07 Thread Richard Elling
Frank Cusack wrote:
> On March 7, 2007 8:50:53 AM -0800 Matt B <[EMAIL PROTECTED]> wrote:
>> Any thoughts on the best practice points I am raising? It disturbs me
>> that it would make a statement like "don't use slices for production".
> I think that's just a performance thing.
yep, for those systems

Re: [zfs-discuss] Re: ZFS/UFS layout for 4 disk servers

2007-03-07 Thread Richard Elling
Matt B wrote:
> Thanks for the responses. There is a lot there I am looking forward to
> digesting. Right off the bat, though, I wanted to bring up something I
> found just before reading this reply, as the answer to this question
> would automatically answer some other questions. There is a ZFS best practic

[zfs-discuss] Re: ZFS/UFS layout for 4 disk servers

2007-03-07 Thread Matt B
So it sounds like the consensus is that I should not worry about using slices with ZFS, and the swap best practice doesn't really apply to my situation of a 4-disk x4200. So in summary (please confirm), this is what we are saying is a safe bet for use in a highly available production environment
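For illustration only (not from the thread; the device names and slice layout are hypothetical), a 4-disk layout along the lines being discussed might keep a mirrored UFS root on one slice of each disk and give ZFS the rest:

    # ZFS pool built from two mirrored pairs on the s7 slices,
    # leaving s0 on each disk free for a UFS root mirror
    zpool create tank mirror c0t0d0s7 c0t1d0s7 mirror c0t2d0s7 c0t3d0s7
    zfs create tank/data

Note that, as discussed elsewhere in this thread, pools built on slices forgo the write-cache benefit of whole-disk pools.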

Re: [zfs-discuss] Re: ZFS/UFS layout for 4 disk servers

2007-03-07 Thread Wade.Stuart
[EMAIL PROTECTED] wrote on 03/07/2007 12:31:14 PM:
> So it sounds like the consensus is that I should not worry about
> using slices with ZFS, and the swap best practice doesn't really
> apply to my situation of a 4-disk x4200.
>
> So in summary (please confirm), this is what we are saying is

Re: [zfs-discuss] Re: ZFS/UFS layout for 4 disk servers

2007-03-07 Thread Manoj Joseph
Matt B wrote:
> Any thoughts on the best practice points I am raising? It disturbs me
> that it would make a statement like "don't use slices for production".
ZFS turns on the disk's write cache if you give it the entire disk to manage, which is good for performance. So, you should use whole disks w
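A minimal sketch of that distinction (the device names are hypothetical, not from the original mail):

    # whole disk: ZFS labels the drive itself and, owning it outright,
    # can safely enable the drive's write cache
    zpool create tank c1t0d0

    # slice: other consumers may share the disk, so ZFS leaves the
    # write cache setting alone
    zpool create tank c1t0d0s3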

[zfs-discuss] writes lost with zfs !

2007-03-07 Thread Ayaz Anjum
Hi! I have tested the following scenario: I created a ZFS filesystem as part of HAStoragePlus in SunCluster 3.2, Solaris 11/06. Currently I have only one FC HBA per server. 1. There is no I/O to the ZFS mountpoint. I disconnected the FC cable. The filesystem on ZFS still shows as mounted (becaus
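A rough sketch of how one might observe this behavior (the mountpoint below is hypothetical, not from the original mail):

    # with no I/O in flight, pull the FC cable, then:
    df -h /failover/test          # filesystem still appears mounted
    cp /etc/hosts /failover/test  # the write lands in memory and appears to succeed
    sync                          # forcing the pending transaction group to disk surfaces the fault
    zpool status -x               # the pool should now report unavailable devices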

Re: [zfs-discuss] writes lost with zfs !

2007-03-07 Thread Manoj Joseph
Ayaz Anjum wrote:
> Hi! I have tested the following scenario: I created a ZFS filesystem
> as part of HAStoragePlus in SunCluster 3.2, Solaris 11/06. Currently
> I have only one FC HBA per server.
> 1. There is no I/O to the ZFS mountpoint. I disconnected the FC cable.
> The filesystem on ZFS still sh