[zfs-discuss] Re: While we're sharing server info...

2007-05-21 Thread Diego Righi
So, since you are using the ahci driver, does your cfgadm output show sata devices too? =)
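If the disks are attached through the SATA framework they should show up as sata attachment points, something along these lines (controller and disk numbers here are only illustrative):

  # cfgadm -al
  Ap_Id                          Type         Receptacle   Occupant     Condition
  sata0/0::dsk/c1t0d0            disk         connected    configured   ok
  sata0/1::dsk/c1t1d0            disk         connected    configured   ok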

Re: [zfs-discuss] Re: ZFS - Use h/w raid or not? Thoughts. Considerations.

2007-05-21 Thread Richard Elling
More redundancy below... Torrey McMahon wrote: Phillip Fiedler wrote: Thanks for the input. So, I'm trying to meld the two replies and come up with a direction for my case and maybe a "rule of thumb" that I can use in the future (i.e., near future until new features come out in zfs) when I h

Re: [zfs-discuss] Re: ZFS - Use h/w raid or not? Thoughts. Considerations.

2007-05-21 Thread Torrey McMahon
Phillip Fiedler wrote: Thanks for the input. So, I'm trying to meld the two replies and come up with a direction for my case and maybe a "rule of thumb" that I can use in the future (i.e., near future until new features come out in zfs) when I have external storage arrays that have built in R

Re: [zfs-discuss] Re: Rsync update to ZFS server over SSH faster than over

2007-05-21 Thread Albert Chin
On Mon, May 21, 2007 at 08:26:37PM -0700, Paul Armstrong wrote: > Given you're not using compression for rsync, the only thing I can > think of would be that the stream compression of SSH is helping > here. SSH compresses by default? I thought you had to specify -oCompression and/or -oCompressionLevel
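One way to settle it is to force transport compression on or off explicitly for the rsync run (host and paths below are just placeholders):

  # transport compression explicitly disabled
  rsync -a -e 'ssh -o Compression=no' /export/src/ fileserver:/tank/dest/
  # transport compression explicitly enabled
  rsync -a -e 'ssh -o Compression=yes' /export/src/ fileserver:/tank/dest/

'ssh -v fileserver' also reports in its debug output whether compression was negotiated for the session.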

[zfs-discuss] Re: ZFS - Use h/w raid or not? Thoughts. Considerations.

2007-05-21 Thread MC
> Personally I would go with ZFS entirely in most cases. That's the rule of thumb :) If you have a fast enough CPU and enough RAM, do everything with ZFS. This sounds koolaid-induced, but you'll need nothing else because ZFS does it all. My second personal rule of thumb concerns RAIDZ perform

[zfs-discuss] Re: Rsync update to ZFS server over SSH faster than over

2007-05-21 Thread Paul Armstrong
Given you're not using compression for rsync, the only thing I can think of would be that the stream compression of SSH is helping here.

[zfs-discuss] Re: ZFS - Use h/w raid or not? Thoughts. Considerations.

2007-05-21 Thread Paul Armstrong
There isn't a global hot spare, but you can add a hot spare to multiple pools. Paul
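That is, the same device can be listed as a spare in more than one pool (pool and device names are just examples):

  zpool add tank1 spare c3t8d0
  zpool add tank2 spare c3t8d0
  zpool status tank1 tank2    # c3t8d0 appears under 'spares' in both pools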

Re: [zfs-discuss] Re: ZFS over a layered driver interface

2007-05-21 Thread Edward Pilatowicz
hey swetha, i don't think there is any easy answer for you here. i'd recommend watching all device operations (open, read, write, ioctl, strategy, prop_op, etc) that happen to the ramdisk device when you don't use your layered driver, and then again when you do. then you could compare the two to
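One rough way to gather that trace, assuming the ramdisk driver module is simply named 'ramdisk', is the DTrace fbt provider:

  # count every ramdisk driver entry point hit while the workload runs
  dtrace -n 'fbt:ramdisk::entry { @calls[probefunc] = count(); }'

Run the same workload with and without the layered driver in the path and diff the two aggregations to see which operations stop reaching the ramdisk.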

Re: [zfs-discuss] Rsync update to ZFS server over SSH faster than over NFS?

2007-05-21 Thread Frank Cusack
On May 21, 2007 6:30:42 PM -0500 Nicolas Williams <[EMAIL PROTECTED]> wrote: On Mon, May 21, 2007 at 06:21:40PM -0500, Albert Chin wrote: On Mon, May 21, 2007 at 06:11:36PM -0500, Nicolas Williams wrote: > On Mon, May 21, 2007 at 06:09:46PM -0500, Albert Chin wrote: > > But still, how is tar/SSH

Re: [zfs-discuss] Rsync update to ZFS server over SSH faster than over NFS?

2007-05-21 Thread Nicolas Williams
On Mon, May 21, 2007 at 06:21:40PM -0500, Albert Chin wrote: > On Mon, May 21, 2007 at 06:11:36PM -0500, Nicolas Williams wrote: > > On Mon, May 21, 2007 at 06:09:46PM -0500, Albert Chin wrote: > > > But still, how is tar/SSH any more multi-threaded than tar/NFS? > > > > It's not that it is, but t

Re: [zfs-discuss] Rsync update to ZFS server over SSH faster than over NFS?

2007-05-21 Thread Albert Chin
On Mon, May 21, 2007 at 06:11:36PM -0500, Nicolas Williams wrote: > On Mon, May 21, 2007 at 06:09:46PM -0500, Albert Chin wrote: > > But still, how is tar/SSH any more multi-threaded than tar/NFS? > > It's not that it is, but that NFS sync semantics and ZFS sync > semantics conspire against single

Re: [zfs-discuss] Rsync update to ZFS server over SSH faster than over NFS?

2007-05-21 Thread Nicolas Williams
On Mon, May 21, 2007 at 06:09:46PM -0500, Albert Chin wrote: > But still, how is tar/SSH any more multi-threaded than tar/NFS? It's not that it is, but that NFS sync semantics and ZFS sync semantics conspire against single-threaded performance.
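For reference, the tar/SSH variant under discussion is essentially one streaming pipeline (host and paths are placeholders):

  tar cf - . | ssh fileserver 'cd /tank/dest && tar xf -'

A single writer streams into a single reader on the server, so the NFS and ZFS sync semantics mentioned above never come into play for the sender.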

Re: [zfs-discuss] Rsync update to ZFS server over SSH faster than over NFS?

2007-05-21 Thread Albert Chin
On Mon, May 21, 2007 at 04:55:35PM -0600, Robert Thurlow wrote: > Albert Chin wrote: > > >I think the bigger problem is the NFS performance penalty so we'll go > >lurk somewhere else to find out what the problem is. > > Is this with Solaris 10 or OpenSolaris on the client as well? Client is RHEL

[zfs-discuss] Re: Re: New zfs pr0n server :)))

2007-05-21 Thread Nigel Smith
I wanted to confirm the drivers I was using for the hard drives in my PC, and here is the method I used. Maybe you can try something similar, and see what you get. I used the 'prtconf' command, with the device path from the 'format' command. (Use bash as the shell, and use the tab key to expand th
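The sequence looks roughly like this; the device path and disk name below are only examples, use whatever 'format' prints for your disk:

  # format                        (note the device path shown for the disk)
  # prtconf -D /dev/rdsk/c1t0d0s0

With -D, prtconf prints the name of the driver bound to each node it reports, so you can see whether the disk sits under ata, ahci/sd, or something else.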

Re: [zfs-discuss] Rsync update to ZFS server over SSH faster than over NFS?

2007-05-21 Thread Robert Thurlow
Albert Chin wrote: > Well, there is no data on the file server as this is an initial copy, Sorry Albert, I should have noticed that from your e-mail :-( > I think the bigger problem is the NFS performance penalty so we'll go > lurk somewhere else to find out what the problem is. Is this with Sol

Re: [zfs-discuss] Rsync update to ZFS server over SSH faster than over NFS?

2007-05-21 Thread Albert Chin
On Mon, May 21, 2007 at 02:55:18PM -0600, Robert Thurlow wrote: > Albert Chin wrote: > > >Why can't the NFS performance match that of SSH? > > One big reason is that the sending CPU has to do all the comparisons to > compute the list of files to be sent - it has to fetch the attributes > from bot

Re: [zfs-discuss] Rsync update to ZFS server over SSH faster than over NFS?

2007-05-21 Thread Robert Thurlow
Albert Chin wrote: > Why can't the NFS performance match that of SSH? One big reason is that the sending CPU has to do all the comparisons to compute the list of files to be sent - it has to fetch the attributes from both local and remote and compare timestamps. With ssh, local processes at eac
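Concretely, the two cases compare something like this (host and paths are placeholders):

  # over ssh: a remote rsync stats the destination tree locally on the server
  rsync -a /export/src/ fileserver:/tank/dest/
  # onto an NFS mount: every stat of the destination is a round trip from the client
  rsync -a /export/src/ /mnt/fileserver/tank/dest/

In the second form the sending host does all the attribute fetches and comparisons across the wire, which is exactly the overhead described above.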

[zfs-discuss] Re: ZFS - Use h/w raid or not? Thoughts. Considerations.

2007-05-21 Thread Phillip Fiedler
Thanks for the input. So, I'm trying to meld the two replies and come up with a direction for my case and maybe a "rule of thumb" that I can use in the future (i.e., near future until new features come out in zfs) when I have external storage arrays that have built in RAID. At the moment, I'm

Re: [zfs-discuss] Rsync update to ZFS server over SSH faster than over NFS?

2007-05-21 Thread Marion Hakanson
[EMAIL PROTECTED] said: > Why can't the NFS performance match that of SSH? Hi Albert, My first guess is the NFS vs array cache-flush issue. Have you configured the 6140 to ignore SYNCHRONIZE_CACHE requests? That'll make a huge difference for NFS clients of ZFS file servers. Also, you might ma
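On releases that have it, the host-side workaround that gets discussed here is the zfs_nocacheflush tunable in /etc/system (unsupported, and only sane when the array cache is battery-backed):

  * /etc/system -- tell ZFS not to issue cache flushes to the array
  set zfs:zfs_nocacheflush = 1

A reboot is needed for it to take effect; getting the array itself to ignore SYNCHRONIZE_CACHE is the cleaner route.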

[zfs-discuss] Rsync update to ZFS server over SSH faster than over NFS?

2007-05-21 Thread Albert Chin
We're testing an X4100M2, 4GB RAM, with a 2-port 4GB Fibre Channel QLogic connected to a 2GB Fibre Channel 6140 array. The X4100M2 is running OpenSolaris b63. We have 8 drives in the Sun 6140 configured as individual RAID-0 arrays and have a ZFS RAID-Z2 array comprising 7 of the drives (for testin
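For reference, a pool of that shape would be created roughly like this (the device names stand in for the seven 6140 volumes):

  zpool create tank raidz2 c4t0d0 c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0 c4t6d0
  zfs create tank/test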

Re: [zfs-discuss] Re: Making 'zfs destroy' safer

2007-05-21 Thread XIU
Actually, if your zfs filesystem has snapshots zfs will complain that the fs can't be destroyed (or that you have to use the -f switch to force it). So the first thing I do when making a new filesystem is create a snapshot to protect me from destroying a filesystem :) On 5/21/07, Peter Schuller <
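The guard looks like this (dataset name is just an example); exactly which flag zfs destroy then demands to override it varies by release, but a plain destroy is refused while the snapshot exists:

  zfs create tank/photos
  zfs snapshot tank/photos@guard
  zfs destroy tank/photos       # refused because of the snapshot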

Re: [zfs-discuss] Re: Re: Making 'zfs destroy' safer

2007-05-21 Thread Peter Schuller
> I would much prefer to do > > for snap in $(zfs list -H -o name -t snapshot -r foo/bar) > do > zfs destroy -t snapshot $snap > done > > than not have the -t. Especially the further away the destroy is from the > generation of the list. The extra -t would be belt and braces but that is > how I like

Re: [zfs-discuss] Re: Making 'zfs destroy' safer

2007-05-21 Thread Peter Schuller
> On the other hand personally I just don't see the need for this since > the @ char isn't special to the shell so I don't see where the original > problem came from. I never actually *had* a problem, I am just nervous about it. And yes, @ is not special for classical shells, but it's still more s

Re: [zfs-discuss] Re: New zfs pr0n server :)))

2007-05-21 Thread Christopher Gibbs
Sorry, I'm fairly new to Solaris... I'm not sure if it's using the ata driver or sata driver. Here are my current disks (0,1, and 2 are the sata disks): AVAILABLE DISK SELECTIONS: 0. c0d0 /[EMAIL PROTECTED],0/[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED],0 1. c0d1

[zfs-discuss] While we're sharing server info...

2007-05-21 Thread Carson Gaspar
My server used to use ODS/SDS/SVM/WhateverSunCallsItToday RAID 5. When my old motherboard decided to flake out on me, SVM refused to recognize the old RAID5 set. Fortunately, I resurrected my old parts long enough to copy off almost all my data on to a pair of 750GB disks. I'm now running on Z

Re: [zfs-discuss] Re: New zfs pr0n server :)))

2007-05-21 Thread Carson Gaspar
Christopher Gibbs wrote: > XIU, I'm currently using that card with my modest three-disk raid-z home > server and it works great! Solaris 10 had native support for it so no need > to mess with drivers. By "native support", I assume you mean IDE/ATA driver support ("ata" in modinfo), not SATA driver s

Re: [zfs-discuss] Re: New zfs pr0n server :)))

2007-05-21 Thread Christopher Gibbs
XIU, I'm currently using that card with my modest three-disk raid-z home server and it works great! Solaris 10 had native support for it so no need to mess with drivers. On 5/20/07, XIU <[EMAIL PROTECTED]> wrote: About sata controllers, anyone tried http://www.promise.com/product/product_detail

[zfs-discuss] Re: Re: Making 'zfs destroy' safer

2007-05-21 Thread Chris Gerhard
> > On the other hand personally I just don't see the > need for this since > the @ char isn't special to the shell so I don't see > where the original > problem came from. It is the combination of the fear of doing something bad and the consequence of doing that something bad that make pe

Re: [zfs-discuss] Re: Making 'zfs destroy' safer

2007-05-21 Thread Darren J Moffat
Chris Gerhard wrote: > You are not alone. My preference would be for an optional -t option to zfs > destroy: > zfs destroy -t snapshot tank/[EMAIL PROTECTED] > or > zfs destroy -t snapshot -r tank/fs > would delete all the snapshots below tank/fs I agree since that would fit nicely with the existin

[zfs-discuss] Re: Re: New zfs pr0n server :)))

2007-05-21 Thread Diego Righi
It's not easy to retry now because the 4 disks attached to the sil3114 controller are half of the raidz2 pool... but in the previous home server that I had, this controller had 3 * 320gb disks attached to it where I had a raidz1 pool. With dclarke's test it easily got 50MB/s in the first (mad