You propose ((2-way mirrored) x RAID-Z (3+1)). That gives
you 3 data disks' worth of capacity, and you'd have to lose both disks
in each of two mirrors (4 total) to lose data.
For the random read load you describe, I would expect the
per-device cache to work nicely; that is, file blocks stored
at some given
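A rough sketch of how that ((2-way mirrored) x RAID-Z (3+1)) layout could
be put together, assuming the mirroring is done in SVM underneath ZFS
(zpool itself cannot nest vdev types); all device and metadevice names
below are made up:

    # build one of the four 2-way SVM mirrors (repeat for d20, d30, d40)
    metainit d11 1 1 c1t0d0s0
    metainit d12 1 1 c2t0d0s0
    metainit d10 -m d11
    metattach d10 d12

    # RAID-Z (3+1) across the four mirrored metadevices:
    # 3 data disks worth of space; data loss needs both halves
    # of two different mirrors to fail, i.e. 4 drives
    zpool create tank raidz /dev/md/dsk/d10 /dev/md/dsk/d20 \
        /dev/md/dsk/d30 /dev/md/dsk/d40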
Robert Milkowski <[EMAIL PROTECTED]> writes:
> So it can look like:
[...]
> c0t2d0s1   SVM mirror, SWAP   s1 size = sizeof(/ + /var + /opt)
You can avoid this by swapping to a zvol, though at the moment this
requires a fix for CR 6405330.
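For reference, swapping to a zvol looks roughly like the following once
that CR is addressed (the pool/volume names and the size are made up):

    # create a 2 GB zvol and use it as swap
    zfs create -V 2g tank/swapvol
    swap -a /dev/zvol/dsk/tank/swapvol
    swap -l   # verify the new swap device is listed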
Tao Chen writes:
> Hello Robert,
>
> On 6/1/06, Robert Milkowski <[EMAIL PROTECTED]> wrote:
> > Hello Anton,
> >
> > Thursday, June 1, 2006, 5:27:24 PM, you wrote:
> >
> > ABR> What about small random writes? Won't those also require reading
> > ABR> from all disks in RAID-Z to read the
On 06/02/06 10:09, Rainer Orth wrote:
Robert Milkowski <[EMAIL PROTECTED]> writes:
So it can look like:
[...]
c0t2d0s1   SVM mirror, SWAP   s1 size = sizeof(/ + /var + /opt)
You can avoid this by swapping to a zvol, though at the moment this
requires a fix for CR 6405330.
Gavin Maltby writes:
> > You can avoid this by swapping to a zvol, though at the moment this
> > requires a fix for CR 6405330. Unfortunately, since one cannot yet dump to
> > a zvol, one needs a dedicated dump device in this case ;-(
>
> Dedicated dump devices are *always* best, so this is no l
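For completeness, a dedicated dump device is set up with dumpadm; a
minimal sketch (the slice name here is hypothetical):

    # direct crash dumps to a dedicated slice instead of the swap device
    dumpadm -d /dev/dsk/c0t1d0s1
    dumpadm   # print the resulting dump configuration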
Rainer Orth wrote:
Gavin Maltby writes:
You can avoid this by swapping to a zvol, though at the moment this
requires a fix for CR 6405330. Unfortunately, since one cannot yet dump to
a zvol, one needs a dedicated dump device in this case ;-(
Dedicated dump devices are *always* best, so this i
Darren J Moffat writes:
> >>> You can avoid this by swapping to a zvol, though at the moment this
> >>> requires a fix for CR 6405330. Unfortunately, since one cannot yet dump
> >>> to
> >>> a zvol, one needs a dedicated dump device in this case ;-(
> >> Dedicated dump devices are *always* best,
Anton Rang writes:
> On May 31, 2006, at 8:56 AM, Roch Bourbonnais - Performance
> Engineering wrote:
>
> > I'm not taking a stance on this, but if I keep a controller
> > full of 128K I/Os, and assuming they are targeting
> > contiguous physical blocks, how different is that t
Hello All:
I have a 16-disk SATA disk array in a JBOD configuration, attached to an LSI
FC HBA card. I use 2 raidz groups, each combining 8 disks. The zpool status
result is as follows:
=== zpool status ==
NAME STATE READ WRITE CKSUM
pool
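Presumably the pool was created along these lines, i.e. two raidz vdevs
of 8 disks each (the controller/target numbers below are guesses):

    zpool create pool \
        raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0 \
        raidz c2t8d0 c2t9d0 c2t10d0 c2t11d0 c2t12d0 c2t13d0 c2t14d0 c2t15d0
    zpool status pool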
On Fri, Jun 02, 2006 at 10:42:08AM -0700, axa wrote:
> Hello All:
> I have a 16-disk SATA disk array in a JBOD configuration, attached to an
> LSI FC HBA card. I use 2 raidz groups, each combining 8 disks. The zpool
> status result is as follows:
>
> === zpool status ==
>
> NAME
hi folks...
I've just been exposed to zfs directly, since I'm trying it out on
"a certain 48-drive box with 4 cpus" :-)
I read in the archives the recent "hard drive write cache"
thread, in which someone at Sun made the claim that ZFS takes advantage of
the disk write cache, selectively enabl
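As far as I understand it (not speaking for the ZFS team), ZFS only
enables the drive write cache when it is handed whole disks, because it
can then issue cache flushes itself; with slices it leaves the cache
setting alone. Roughly:

    # whole disk: ZFS puts an EFI label on it and may enable the write cache
    zpool create tank c1t0d0
    # slice only: ZFS does not touch the write cache setting
    zpool create tank2 c1t1d0s0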
On Fri, Jun 02, 2006 at 12:42:53PM -0700, Philip Brown wrote:
> hi folks...
> I've just been exposed to zfs directly, since I'm trying it out on
> "a certain 48-drive box with 4 cpus" :-)
>
> I read in the archives the recent "hard drive write cache"
> thread, in which someone at Sun made the c
I've been writing some stuff from backup to a pool via tar, around
500GB. It's taken quite a while, as the tar is being read from NFS. My
ZFS pool in this case is a 3-disk RAIDZ using 3 400GB SATA
drives (sil3124 card).
Every once in a while, a "df" stalls, and during that time my I/Os go
fla
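One thing worth trying the next time df hangs is to watch the pool and
the disks from another terminal (the pool name here is hypothetical):

    # per-second I/O statistics, broken down per vdev
    zpool iostat -v tank 1
    # overall disk activity, including service times
    iostat -xn 1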
For background on what this is, see:
http://www.opensolaris.org/jive/message.jspa?messageID=24416#24416
http://www.opensolaris.org/jive/message.jspa?messageID=25200#25200
=========================
zfs-discuss 05/16 - 05/31
=========================
Threads or announcements origin