So, since you are using the ahci driver, does your cfgadm output show sata
devices too? =)
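For reference, on a box where the disks sit behind the ahci/sata framework, the attachment points show up roughly like this in cfgadm -al (the port and disk names below are illustrative, not from the poster's system):

# cfgadm -al
Ap_Id                          Type         Receptacle   Occupant     Condition
sata0/0::dsk/c1t0d0            disk         connected    configured   ok
sata0/1::dsk/c1t1d0            disk         connected    configured   ok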
More redundancy below...
Torrey McMahon wrote:
Phillip Fiedler wrote:
Thanks for the input. So, I'm trying to meld the two replies and come
up with a direction for my case and maybe a "rule of thumb" that I can
use in the future (i.e., near future until new features come out in
zfs) when I have external storage arrays that have built in RAID.
Phillip Fiedler wrote:
Thanks for the input. So, I'm trying to meld the two replies and come up with a
direction for my case and maybe a "rule of thumb" that I can use in the future
(i.e., near future until new features come out in zfs) when I have external storage
arrays that have built in RAID.
On Mon, May 21, 2007 at 08:26:37PM -0700, Paul Armstrong wrote:
> Given you're not using compression for rsync, the only thing I can
> think of would be that the stream compression of SSH is helping
> here.
SSH compresses by default? I thought you had to specify -oCompression
and/or -oCompressionLevel.
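If in doubt, compression can be requested explicitly on the rsync side, e.g. (paths and host are made up for illustration):

$ rsync -av -e 'ssh -o Compression=yes' /export/data/ backuphost:/tank/data/

ssh -C is the short form of the same option.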
> Personally I would go with ZFS entirely in most cases.
That's the rule of thumb :) If you have a fast enough CPU and enough RAM, do
everything with ZFS. This sounds koolaid-induced, but you'll need nothing else
because ZFS does it all.
My second personal rule of thumb concerns RAIDZ perform
Given you're not using compression for rsync, the only thing I can think of
would be that the stream compression of SSH is helping here.
There isn't a global hot spare, but you can add a hot spare to multiple pools.
Paul
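A hedged sketch of sharing one spare between two pools, with made-up pool and device names: the same disk is simply added as a spare to each pool.

# zpool add tank spare c2t5d0
# zpool add backup spare c2t5d0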
hey swetha,
i don't think there is any easy answer for you here.
i'd recommend watching all device operations (open, read, write, ioctl,
strategy, prop_op, etc) that happen to the ramdisk device when you don't
use your layered driver, and then again when you do. then you could
compare the two to
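One rough way to gather that comparison is DTrace's fbt provider; a sketch, assuming the ramdisk driver's module is named "ramdisk" (adjust to whatever modinfo reports on your system):

# dtrace -n 'fbt:ramdisk::entry { @[probefunc] = count(); }'

Run it once without the layered driver and once with it, then diff the two per-function counts.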
On May 21, 2007 6:30:42 PM -0500 Nicolas Williams
<[EMAIL PROTECTED]> wrote:
On Mon, May 21, 2007 at 06:21:40PM -0500, Albert Chin wrote:
On Mon, May 21, 2007 at 06:11:36PM -0500, Nicolas Williams wrote:
> On Mon, May 21, 2007 at 06:09:46PM -0500, Albert Chin wrote:
> > But still, how is tar/SSH any more multi-threaded than tar/NFS?
On Mon, May 21, 2007 at 06:21:40PM -0500, Albert Chin wrote:
> On Mon, May 21, 2007 at 06:11:36PM -0500, Nicolas Williams wrote:
> > On Mon, May 21, 2007 at 06:09:46PM -0500, Albert Chin wrote:
> > > But still, how is tar/SSH any more multi-threaded than tar/NFS?
> >
> > It's not that it is, but that NFS sync semantics and ZFS sync
> > semantics conspire against single-threaded performance.
On Mon, May 21, 2007 at 06:11:36PM -0500, Nicolas Williams wrote:
> On Mon, May 21, 2007 at 06:09:46PM -0500, Albert Chin wrote:
> > But still, how is tar/SSH any more multi-threaded than tar/NFS?
>
> It's not that it is, but that NFS sync semantics and ZFS sync
> semantics conspire against single-threaded performance.
On Mon, May 21, 2007 at 06:09:46PM -0500, Albert Chin wrote:
> But still, how is tar/SSH any more multi-threaded than tar/NFS?
It's not that it is, but that NFS sync semantics and ZFS sync semantics
conspire against single-threaded performance.
On Mon, May 21, 2007 at 04:55:35PM -0600, Robert Thurlow wrote:
> Albert Chin wrote:
>
> >I think the bigger problem is the NFS performance penalty so we'll go
> >lurk somewhere else to find out what the problem is.
>
> Is this with Solaris 10 or OpenSolaris on the client as well?
Client is RHEL
I wanted to confirm the drivers I was using for the hard drives in my PC,
and here is the method I used. Maybe you can try something similar,
and see what you get.
I used the 'prtconf' command, with the device path from the 'format' command.
(Use bash as the shell, and use the tab key to expand th
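A hedged sketch of that sequence (device names are made up; substitute the disk format shows you):

$ format
   ...
   1. c1t0d0 <ATA disk>
      /pci@0,0/pci1043,8239@5/disk@0,0
$ prtconf -D /dev/dsk/c1t0d0s0

The -D flag asks prtconf to print driver names, which is where ata vs ahci (or another sata HBA driver) shows up; passing the tab-completed /devices path, as described above, should work the same way.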
Albert Chin wrote:
> Well, there is no data on the file server as this is an initial copy,
Sorry Albert, I should have noticed that from your e-mail :-(
> I think the bigger problem is the NFS performance penalty so we'll go
> lurk somewhere else to find out what the problem is.
Is this with Solaris 10 or OpenSolaris on the client as well?
On Mon, May 21, 2007 at 02:55:18PM -0600, Robert Thurlow wrote:
> Albert Chin wrote:
>
> >Why can't the NFS performance match that of SSH?
>
> One big reason is that the sending CPU has to do all the comparisons to
> compute the list of files to be sent - it has to fetch the attributes
> from both local and remote and compare timestamps.
Albert Chin wrote:
Why can't the NFS performance match that of SSH?
One big reason is that the sending CPU has to do all the comparisons to
compute the list of files to be sent - it has to fetch the attributes
from both local and remote and compare timestamps. With ssh, local
processes at eac
Thanks for the input. So, I'm trying to meld the two replies and come up with
a direction for my case and maybe a "rule of thumb" that I can use in the
future (i.e., near future until new features come out in zfs) when I have
external storage arrays that have built in RAID.
At the moment, I'm
[EMAIL PROTECTED] said:
> Why can't the NFS performance match that of SSH?
Hi Albert,
My first guess is the NFS vs array cache-flush issue. Have you configured
the 6140 to ignore SYNCHRONIZE_CACHE requests? That'll make a huge difference
for NFS clients of ZFS file servers.
Also, you might ma
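If the array side can't be changed, the host-side equivalent is to stop ZFS from issuing cache flushes at all. A sketch, assuming a build new enough to have the zfs_nocacheflush tunable, and only safe when the array cache is battery-backed:

# echo 'set zfs:zfs_nocacheflush = 1' >> /etc/system
# init 6          (tunable takes effect after the reboot)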
We're testing an X4100M2, 4GB RAM, with a 2-port 4Gb Fibre Channel
QLogic HBA connected to a 2Gb Fibre Channel 6140 array. The X4100M2 is
running OpenSolaris b63.
We have 8 drives in the Sun 6140 configured as individual RAID-0
arrays and have a ZFS RAID-Z2 array comprising 7 of the drives (for
testin
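A hedged sketch of that layout, with made-up LUN names (seven devices in one raidz2 vdev, the eighth left aside):

# zpool create tank raidz2 c4t0d0 c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0 c4t6d0
# zpool status tank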
Actually, if your zfs filesystem has snapshots zfs will complain that the fs
can't be destroyed (or that you have to use the -f switch to force it). So
the first thing I do when making a new filesystem is create a snapshot to
protect me from destroying a filesystem :)
On 5/21/07, Peter Schuller <
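A minimal sketch of that guard-snapshot habit, with made-up names:

# zfs create tank/scratch
# zfs snapshot tank/scratch@guard     (the protective snapshot)
# zfs destroy tank/scratch            (now refuses until the snapshot is dealt with)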
> I would much prefer to do
>
> for snap in $(zfs list -H -o name -t snapshot -r foo/bar)
> do
> zfs destroy -t snapshot $snap
> done
>
> than not have the -t. Especially the further away the destroy is from the
> generation of the list. The extra -t would be belt and braces but that is
> how I like
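For comparison, a hedged one-liner that works with today's zfs (no -t on destroy); -H -o name ensures only snapshot names are fed to the destroy:

$ zfs list -H -o name -t snapshot -r foo/bar | xargs -n1 zfs destroy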
> On the other hand personally I just don't see the need for this since
> the @ char isn't special to the shell so I don't see where the original
> problem came from.
I never actually *had* a problem, I am just nervous about it. And yes, @
is not special for classical shells, but it's still more s
Sorry, I'm fairly new to Solaris... I'm not sure if it's using the ata
driver or sata driver. Here are my current disks (0,1, and 2 are the
sata disks):
AVAILABLE DISK SELECTIONS:
0. c0d0
/[EMAIL PROTECTED],0/[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL
PROTECTED],0
1. c0d1
My server used to use ODS/SDS/SVM/WhateverSunCallsItToday RAID 5. When
my old motherboard decided to flake out on me, SVM refused to recognize
the old RAID5 set. Fortunately, I resurrected my old parts long enough
to copy off almost all my data on to a pair of 750GB disks.
I'm now running on Z
Christopher Gibbs wrote:
XIU, I'm currently using that card with my modest three-disk raid-z
home server and it works great! Solaris 10 had native support for it
so no need to mess with drivers.
By "native support", I assume you mean IDE/ATA driver support ("ata" in
modinfo), not SATA driver s
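One hedged way to check which side of that line a box is on:

$ cfgadm -al | grep sata          (sata-framework ports show up here; legacy ata disks do not)
$ modinfo | egrep -i 'sata|ahci'  (is the sata module, plus an HBA driver like ahci, loaded?)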
XIU, I'm currently using that card with my modest three-disk raid-z
home server and it works great! Solaris 10 had native support for it
so no need to mess with drivers.
On 5/20/07, XIU <[EMAIL PROTECTED]> wrote:
About sata controllers, anyone tried
http://www.promise.com/product/product_detail
>
> On the other hand personally I just don't see the
> need for this since
> the @ char isn't special to the shell so I don't see
> where the original
> problem came from.
It is the combination of the fear of doing something bad and the
consequence of doing that something bad that makes pe
Chris Gerhard wrote:
You are not alone.
My preference would be for an optional -t option to zfs destroy:
zfs destroy -t snapshot tank/[EMAIL PROTECTED]
or
zfs destroy -t snapshot -r tank/fs
would delete all the snapshots below tank/fs
I agree since that would fit nicely with the existin
It's not easy to retry now because the 4 disks attached to the sil3114
controller are half of the raidz2 pool... but in the previous home server
that I had, this controller had 3 * 320gb disks attached to it where I had a
raidz1 pool.
With dclarke's test it easily got 50MB/s in the first (mad