Hi,
I am using ZFS under Solaris 10u3.
After the failure of a 3510 RAID controller, I have several storage pools
with damaged objects. "zpool status -xv" prints a long list:
DATASET  OBJECT  RANGE
4c0c     5dd     lvl=0 blkid=2
28       b346    lvl=0 blkid=9
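For anyone hitting the same situation, a common recovery sequence on Solaris 10 is to scrub the pool, review the refreshed error list, restore the affected files from backup, and then reset the error counters. A minimal sketch, assuming the pool is named `tank` (the name is a placeholder, not from the original post):

```shell
# Scrub the pool so ZFS re-reads every block and refreshes its error list.
zpool scrub tank

# Once the scrub completes, show pools with errors; where possible the
# damaged objects are resolved to file names in the "Permanent errors" list.
zpool status -xv tank

# After restoring any listed files from backup, reset the error counters.
zpool clear tank
```

If "zpool status -v" only shows dataset/object numbers rather than file names, the affected files could not be resolved and a restore of the whole dataset from backup may be needed.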
Thanks for the responses. There is a lot there I am looking forward to digesting.
Right off the bat, though, I wanted to bring up something I found just before
reading this reply, as the answer to this question would automatically answer
some other questions.
There is a ZFS best practices wiki at
http:
On March 7, 2007 8:50:53 AM -0800 Matt B <[EMAIL PROTECTED]> wrote:
Any thoughts on the best practice points I am raising? It disturbs me
that it would make a statement like "don't use slices for production".
I think that's just a performance thing.
-frank
Frank Cusack wrote:
On March 7, 2007 8:50:53 AM -0800 Matt B <[EMAIL PROTECTED]> wrote:
Any thoughts on the best practice points I am raising? It disturbs me
that it would make a statement like "don't use slices for production".
I think that's just a performance thing.
yep, for those systems
Matt B wrote:
Thanks for the responses. There is a lot there I am looking forward to digesting.
Right off the bat, though, I wanted to bring up something I found just before
reading this reply, as the answer to this question would automatically answer
some other questions.
There is a ZFS best practic
So it sounds like the consensus is that I should not worry about using slices
with ZFS
and the swap best practice doesn't really apply to my situation of a 4 disk
x4200.
So in summary (please confirm), this is what we are saying is a safe bet for
use in a highly available production environment.
[EMAIL PROTECTED] wrote on 03/07/2007 12:31:14 PM:
> So it sounds like the consensus is that I should not worry about
> using slices with ZFS
> and the swap best practice doesn't really apply to my situation of a
> 4 disk x4200.
>
> So in summary (please confirm), this is what we are saying is
Matt B wrote:
Any thoughts on the best practice points I am raising? It disturbs me
that it would make a statement like "don't use slices for
production".
ZFS turns on the disk's write cache if you give it the entire disk to
manage, which is good for performance. So, you should use whole disks w
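To illustrate the difference described above (device and pool names here are made up for the example): with a whole disk, ZFS puts an EFI label on it and can safely enable the write cache; with a slice, ZFS only owns part of the disk and leaves the cache alone. The cache state can be inspected in format(1M) expert mode:

```shell
# Whole disk: ZFS labels the disk itself and enables its write cache.
zpool create tank c1t2d0

# Slice: ZFS manages only part of the disk, so it does not touch the cache.
zpool create tank2 c1t3d0s0

# Inspect the write-cache setting interactively in expert mode:
#   format -e -> select disk -> cache -> write_cache -> display
format -e
```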
Hi!
I have tested the following scenario:
I created a ZFS filesystem as part of HAStoragePlus in SunCluster 3.2,
Solaris 11/06.
Currently I have only one FC HBA per server.
1. There is no I/O to the ZFS mount point. I disconnected the FC cable.
The filesystem on ZFS still shows as mounted (becaus
Ayaz Anjum wrote:
Hi!
I have tested the following scenario:
I created a ZFS filesystem as part of HAStoragePlus in SunCluster 3.2,
Solaris 11/06.
Currently I have only one FC HBA per server.
1. There is no I/O to the ZFS mount point. I disconnected the FC cable.
The filesystem on ZFS still sh
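One way to make this kind of path failure visible (mount-point, resource-group, and host names below are hypothetical, not from the original post): with no I/O outstanding, ZFS may not notice the lost path until something actually touches the pool, so force some I/O and then check both the cluster view and the pool view:

```shell
# Force I/O to the ZFS mount point so the lost FC path is actually exercised.
dd if=/dev/zero of=/zfsmount/testfile bs=128k count=100

# Check whether the cluster framework has detected a resource-group problem.
scstat -g

# Check what ZFS itself reports for the pool's devices.
zpool status

# If the cluster has not failed over on its own, switch the resource group
# to the other node manually.
scswitch -z -g zfs-rg -h other-node
```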