Re: [zfs-discuss] Rule of Thumb for zfs server sizing with (192) 500 GB SATA disks?

2007-09-26 Thread Marc Bevand
David Runyon sun.com> writes:
> I'm trying to get maybe 200 MB/sec over NFS for large movie files (need

(I assume you meant 200 Mb/sec with a lower case "b".)

> large capacity to hold all of them). Are there any rules of thumb on how
> much RAM is needed to handle this (probably RAIDZ for all

[zfs-discuss] Slow ZFS Performance

2007-09-26 Thread Pete Majka
I have a raidz zpool comprised of 5 320GB SATA drives and I am seeing the
following numbers:

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
vol01        123G  1.33T     66    182  8.22M

Re: [zfs-discuss] device alias

2007-09-26 Thread A Darren Dunham
On Wed, Sep 26, 2007 at 11:36:57AM -0700, Richard Elling wrote: > AFAIK, VxVM still only expects one private region per disk. The private > region stores info on the configuration of the logical devices on the > disk, and its participation therein. ZFS places this data in the on-disk > format on

Re: [zfs-discuss] device alias

2007-09-26 Thread Tim Spriggs
zdb?

Damon Atkins wrote:
> ZFS should allow 31+NULL chars for a comment against each disk.
> This would work well with the host name string (I assume is max_hostname
> 255+NULL)
> If a disk fails it should report c6t4908029d0 failed "comment from
> disk", it should also remember the comment unt

Re: [zfs-discuss] Nice chassis for ZFS server

2007-09-26 Thread Tomas Ögren
On 26 September, 2007 - Nigel Smith sent me these 1,2K bytes:

> It's a pity that Sun does not manufacture something like this.
> The x4500 Thumper, with 48 disks is way over the top for most companies,
> and too expensive. And the new X4150 only has 8 disks.
> This Intel box with 12 hot-swap driv

Re: [zfs-discuss] device alias

2007-09-26 Thread Damon Atkins
ZFS should allow 31+NULL chars for a comment against each disk. This would
work well with the host name string (I assume max_hostname is 255+NULL).
If a disk fails it should report c6t4908029d0 failed "comment from disk",
and it should also remember the comment until reboot. This would be useful for

Re: [zfs-discuss] Nice chassis for ZFS server

2007-09-26 Thread Richard Elling
Nigel Smith wrote:
> It's a pity that Sun does not manufacture something like this.
> The x4500 Thumper, with 48 disks is way over the top for most companies,
> and too expensive. And the new X4150 only has 8 disks.
> This Intel box with 12 hot-swap drives and two internal boot drives
> looks like

Re: [zfs-discuss] ZFS & array NVRAM cache?

2007-09-26 Thread Richard Elling
Vincent Fox wrote:
>> Vincent Fox wrote:
>>
>> Is this what you're referring to?
>> http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Cache_Flushes
>
> As I wrote several times in this thread, this kernel variable does not work
> in Sol 10u3.
>
> Probably not in u4 althoug

Re: [zfs-discuss] Nice chassis for ZFS server

2007-09-26 Thread Nigel Smith
It's a pity that Sun does not manufacture something like this. The x4500 Thumper, with 48 disks is way over the top for most companies, and too expensive. And the new X4150 only has 8 disks. This Intel box with 12 hot-swap drives and two internal boot drives looks like the sweet-spot to me. The on

Re: [zfs-discuss] Best option for my home file server?

2007-09-26 Thread Jeff Bonwick
I would keep it simple. Let's call your 250GB disks A, B, C, D, and your
500GB disks X and Y. I'd either make them all mirrors:

    zpool create mypool mirror A B mirror C D mirror X Y

or raidz the little ones and mirror the big ones:

    zpool create mypool raidz A B C D mirror X Y

or, as yo
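Spelled out with hypothetical Solaris device names (the cXtYdZ names below are placeholders for illustration, not the poster's actual devices), Jeff's first two layouts look like this:

```shell
# Illustrative sketch only: c1t0d0..c1t3d0 stand in for the four 250GB
# disks, c2t0d0 and c2t1d0 for the two 500GB disks.

# Layout 1: three two-way mirrors (usable capacity 250+250+500 = 1000GB)
zpool create mypool mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0 \
    mirror c2t0d0 c2t1d0

# Layout 2: raidz across the small disks, mirror on the big ones
# (usable capacity 3x250 + 500 = 1250GB, at the cost of raidz's
# single-disk redundancy on the small set)
zpool create mypool raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 mirror c2t0d0 c2t1d0
```

These commands require a live ZFS host with the named disks present, so they are shown as command fragments rather than a runnable script.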

Re: [zfs-discuss] ZFS & array NVRAM cache?

2007-09-26 Thread Vincent Fox
> Vincent Fox wrote:
>
> Is this what you're referring to?
> http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Cache_Flushes

As I wrote several times in this thread, this kernel variable does not work
in Sol 10u3. Probably not in u4 although I haven't tried it. I would li

Re: [zfs-discuss] zpool status (advanced listing)?

2007-09-26 Thread David W. Smith
Yes, this is it. Thanks.

David

On Wed, 2007-09-26 at 13:55 -0400, Will Murnane wrote:
> > David Smith wrote:
> > > Under the GUI, there is an "advanced" option which shows vdev capacity,
> > > etc. I'm drawing a blank about how to get with the commands...
> > 'zpool iostat -v' gives that leve

[zfs-discuss] Nice chassis for ZFS server

2007-09-26 Thread Cyril Plisko
Hi, I just came across this box [0] (McKay Creek). Seems to be exceptional
building material for a ZFS based NAS/iSCSI unit. I especially liked the
two extra system disks hidden inside the box! Anyone have experience with
these?

[0] http://www.intel.com/design/servers/storage/ssr212mc2

Re: [zfs-discuss] [networking-discuss] TCP connections not getting cleaned up after application exits

2007-09-26 Thread Matty
On 9/26/07, James Carlson <[EMAIL PROTECTED]> wrote:
> [how is this zfs-related?]
>
> Matty writes:
> > We are running zones on a number of Solaris 10 update 3 hosts, and we
> > are bumping into an issue where the kernel doesn't clean up
> > connections after an application exits.
>
> Are you sure?

Re: [zfs-discuss] [networking-discuss] TCP connections not getting cleaned up after application exits

2007-09-26 Thread James Carlson
[how is this zfs-related?]

Matty writes:
> We are running zones on a number of Solaris 10 update 3 hosts, and we
> are bumping into an issue where the kernel doesn't clean up
> connections after an application exits.

Are you sure? One possible cause of this sort of problem is that the applicatio

Re: [zfs-discuss] ZFS & array NVRAM cache?

2007-09-26 Thread Jonathan Edwards
On Sep 26, 2007, at 14:10, Torrey McMahon wrote:
> You probably don't have to create a LUN the size of the NVRAM either. As
> long as its dedicated to one LUN then it should be pretty quick. The
> 3510 cache, last I checked, doesn't do any per LUN segmentation or
> sizing. Its a simple front

Re: [zfs-discuss] Best option for my home file server?

2007-09-26 Thread Gary Gendel
> I'm about to build a fileserver and I think I'm gonna
> use OpenSolaris and ZFS.
>
> I've got a 40GB PATA disk which will be the OS disk,
> and then I've got 4x250GB SATA + 2x500GB SATA disks.
> From what you are writing I would think my best
> option would be to slice the 500GB disks in two 250

Re: [zfs-discuss] Best option for my home file server?

2007-09-26 Thread Christopher
I'm about to build a fileserver and I think I'm gonna use OpenSolaris and ZFS. I've got a 40GB PATA disk which will be the OS disk, and then I've got 4x250GB SATA + 2x500GB SATA disks. From what you are writing I would think my best option would be to slice the 500GB disks in two 250GB and then

[zfs-discuss] TCP connections not getting cleaned up after application exits

2007-09-26 Thread Matty
Howdy, We are running zones on a number of Solaris 10 update 3 hosts, and we are bumping into an issue where the kernel doesn't clean up connections after an application exits. When this issue occurs, the netstat utility doesn't show anything listening on the port the application uses (8080 in the

[zfs-discuss] Balancing reads across mirror sets

2007-09-26 Thread Jason P. Warr
Hi all, I have an interesting project that I am working on. It is a large
volume file download service that is in need of a new box. The current
systems are not able to handle the load because, for various reasons, they
have become very I/O limited. We currently run on Debian Linux with 3war

Re: [zfs-discuss] Root raidz without boot

2007-09-26 Thread Lori Alt
Kugutsumen wrote:
>> Matthew Ahrens wrote:
>>
>>> Ross Newell wrote:
>>> What are this issues preventing the root directory being stored on
>>> raidz? I'm talking specifically about root, and not boot which I can
>>> see would be difficult.
>>>
>>> Would it be somethi

Re: [zfs-discuss] device alias

2007-09-26 Thread Richard Elling
Quick reset: Greg Shaw asked for a more descriptive output for zpool
status. I've already demonstrated how to do that. We also discussed the
difficulty in making a reliable name-to-physical-location map without
involving humans. Continuing on...

A Darren Dunham wrote:
> On Wed, Sep 26, 2007 at

Re: [zfs-discuss] ZFS & array NVRAM cache?

2007-09-26 Thread Bryan Cantrill
On Wed, Sep 26, 2007 at 02:10:39PM -0400, Torrey McMahon wrote:
> Albert Chin wrote:
> > On Tue, Sep 25, 2007 at 06:01:00PM -0700, Vincent Fox wrote:
> >
> >> I don't understand. How do you
> >>
> >> "setup one LUN that has all of the NVRAM on the array dedicated to it"
> >>
> >> I'm pretty fam

Re: [zfs-discuss] ZFS & array NVRAM cache?

2007-09-26 Thread Roch Bourbonnais
The theory I am going by is that 10 seconds worth of your synchronous
writes is sufficient for the slog. That breaks down if the main pool is
the bottleneck.

-r

On 26 Sept 2007, at 20:10, Torrey McMahon wrote:
> Albert Chin wrote:
>> On Tue, Sep 25, 2007 at 06:01:00PM -0700, Vincent Fox wrot

Re: [zfs-discuss] ZFS & array NVRAM cache?

2007-09-26 Thread Torrey McMahon
Albert Chin wrote:
> On Tue, Sep 25, 2007 at 06:01:00PM -0700, Vincent Fox wrote:
>
>> I don't understand. How do you
>>
>> "setup one LUN that has all of the NVRAM on the array dedicated to it"
>>
>> I'm pretty familiar with 3510 and 3310. Forgive me for being a bit
>> thick here, but can you

Re: [zfs-discuss] zpool status (advanced listing)?

2007-09-26 Thread Will Murnane
> David Smith wrote:
> > Under the GUI, there is an "advanced" option which shows vdev capacity,
> > etc. I'm drawing a blank about how to get with the commands...

'zpool iostat -v' gives that leve

___
zfs-discuss mailing list
zfs-discuss@o

Re: [zfs-discuss] device alias

2007-09-26 Thread A Darren Dunham
On Wed, Sep 26, 2007 at 09:53:00AM -0700, Richard Elling wrote:
> A Darren Dunham wrote:
> > It seems to me that would limit the knowledge to the currently imported
> > machine, not keep it with the pool.
>
> The point is that the name of the vdev doesn't really matter to ZFS.

I would assume the

Re: [zfs-discuss] device alias

2007-09-26 Thread Richard Elling
A Darren Dunham wrote:
> On Tue, Sep 25, 2007 at 06:09:04PM -0700, Richard Elling wrote:
>> Actually, you can use the existing name space for this. By default,
>> ZFS uses /dev/dsk. But everything in /dev is a symlink. So you could
>> setup your own space, say /dev/myknowndisks and use more desc

[zfs-discuss] Recommendation HP SAN, FC+SATA, ORACLE

2007-09-26 Thread Bruce Shaw
I've been presented with the following scenario:

- this is to be used primarily for ORACLE, including usage of ORACLE RMAN
  backups (to disk)
- HP SAN (will NOT do JBOD)
- 256 Gb disk available on high-speed Fibre Channel disk, currently on one
  LUN (1)
- 256 Gb disk available on slower-speed SATA

Re: [zfs-discuss] zpool status (advanced listing)?

2007-09-26 Thread Cindy . Swearingen
I think you want zpool iostat:

% zpool iostat
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank        54.5M  16.7G      0      0      3      1
users        217M  16.5G      0      0
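As Will Murnane notes elsewhere in this thread, the per-vdev breakdown (what the GUI's "advanced" view shows) comes from the -v flag. A usage sketch, assuming a host with an imported pool:

```shell
# Pool-level summary only:
zpool iostat
# Per-vdev capacity and I/O statistics:
zpool iostat -v
# Same, refreshed every 5 seconds until interrupted:
zpool iostat -v 5
```

These are command fragments that need a live ZFS pool to produce output.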

[zfs-discuss] zpool status (advanced listing)?

2007-09-26 Thread David Smith
Under the GUI, there is an "advanced" option which shows vdev capacity,
etc. I'm drawing a blank about how to get this with the commands...

Thanks,
David

[zfs-discuss] Rule of Thumb for zfs server sizing with (192) 500 GB SATA disks?

2007-09-26 Thread David Runyon
I'm trying to get maybe 200 MB/sec over NFS for large movie files (need large capacity to hold all of them). Are there any rules of thumb on how much RAM is needed to handle this (probably RAIDZ for all the disks) with zfs, and how large a server should be used? The throughput required is not

Re: [zfs-discuss] ZFS & array NVRAM cache?

2007-09-26 Thread Bill Sommerfeld
On Wed, 2007-09-26 at 07:22 -0400, Jonathan Edwards wrote:
> the bottom line is that there's 2 competing cache
> strategies that aren't very complimentary.

To put it differently, technologies like ZFS change the optimal way to
build systems. The ARC exists to speed up reads, and needs to be l

Re: [zfs-discuss] io:::start and zfs filenames?

2007-09-26 Thread Jim Mauro
Hey Neel - Try this:

nv70b> cat zfs_page.d
#!/usr/sbin/dtrace -s

#pragma D option quiet

zfs_putpage:entry
{
        printf("zfs write to %s\n", stringof(args[0]->v_path));
}

zfs_getpage:entry
{
        printf("zfs read from %s\n", stringof(args[0]->v_path));
}

I did some quick tests with mmap'd
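A minimal way to run the zfs_page.d script from Jim's transcript (assumes root on a DTrace-capable Solaris build; the filename is the one he shows):

```shell
# Make the script executable, then leave it tracing; in another
# terminal, touch pages of an mmap'd file on ZFS, then Ctrl-C to stop.
chmod +x zfs_page.d
./zfs_page.d
```

DTrace requires appropriate privileges, so this is a fragment for a Solaris host rather than a portable script.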

Re: [zfs-discuss] io:::start and zfs filenames?

2007-09-26 Thread Roch - PAE
Neelakanth Nadgir writes:
> io:::start probe does not seem to get zfs filenames in
> args[2]->fi_pathname. Any ideas how to get this info?
> -neel

Who says an I/O is doing work for a single pathname/vnode or for a single
process? There is not that one-to-one correspondence anymore. Not in

Re: [zfs-discuss] device alias

2007-09-26 Thread A Darren Dunham
On Tue, Sep 25, 2007 at 06:09:04PM -0700, Richard Elling wrote:
> Actually, you can use the existing name space for this. By default,
> ZFS uses /dev/dsk. But everything in /dev is a symlink. So you could
> setup your own space, say /dev/myknowndisks and use more descriptive
> names.

You might
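A sketch of Richard's suggestion. The label "rack2-shelf1-slot4" is an invented example, and a writable demo directory stands in for /dev so the sketch runs unprivileged; in practice the alias directory would be something like /dev/myknowndisks.

```shell
# Demo path instead of /dev/myknowndisks so no root is needed.
DISKDIR=/tmp/myknowndisks-demo
mkdir -p "$DISKDIR"

# Map a descriptive physical-location name to the real device node
# (c6t4908029d0 is the disk named earlier in this digest; ln -s works
# even if the target does not exist on this machine).
ln -sf /dev/dsk/c6t4908029d0s0 "$DISKDIR/rack2-shelf1-slot4"

ls -l "$DISKDIR"
```

The pool could then be created or imported using the descriptive paths, and zpool status would echo them back.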

Re: [zfs-discuss] io:::start and zfs filenames?

2007-09-26 Thread Jim Mauro
> What sayeth the ZFS team regarding the use of a stable DTrace provider
> with their file system?

For the record, the above has a tone to it that I really did not intend
(antagonistic?), so I had a good chat with Roch about this. The file
pathname is derived via a translator from th

Re: [zfs-discuss] io:::start and zfs filenames?

2007-09-26 Thread Neelakanth Nadgir
Jim, I can't use zfs_read/write as the file is mmap()'d, so no read/write!

-neel

On Sep 26, 2007, at 5:07 AM, Jim Mauro <[EMAIL PROTECTED]> wrote:
> Hi Neel - Thanks for pushing this out. I've been tripping over this
> for a while.
>
> You can instrument zfs_read() and zfs_write() to reliably

Re: [zfs-discuss] ZFS & array NVRAM cache?

2007-09-26 Thread Brian H. Nelson
Vincent Fox wrote:
> It seems like ZIL is a separate issue.
>
> I have read that putting ZIL on a separate device helps, but what about
> the cache?
>
> OpenSolaris has some flag to disable it. Solaris 10u3/4 do not. I have
> dual-controllers with NVRAM and battery backup, why can't I make use

Re: [zfs-discuss] io:::start and zfs filenames?

2007-09-26 Thread Jim Mauro
Hi Neel - Thanks for pushing this out. I've been tripping over this for a
while.

You can instrument zfs_read() and zfs_write() to reliably track filenames:

#!/usr/sbin/dtrace -s

#pragma D option quiet

zfs_read:entry,
zfs_write:entry
{
        printf("%s of %s\n", probefunc, stringof(args[0]->v_path));
}

Re: [zfs-discuss] ZFS & array NVRAM cache?

2007-09-26 Thread Jonathan Edwards
On Sep 25, 2007, at 19:57, Bryan Cantrill wrote:
>
> On Tue, Sep 25, 2007 at 04:47:48PM -0700, Vincent Fox wrote:
>> It seems like ZIL is a separate issue.
>
> It is very much the issue: the seperate log device work was done exactly
> to make better use of this kind of non-volatile memory.

Re: [zfs-discuss] Root raidz without boot

2007-09-26 Thread Kugutsumen
> Matthew Ahrens wrote:
> > Ross Newell wrote:
> >> What are this issues preventing the root directory being stored on
> >> raidz?
> >> I'm talking specifically about root, and not boot which I can see
> >> would be difficult.
> >>
> >> Would it be something an amateur programmer could address

Re: [zfs-discuss] ZFS & array NVRAM cache?

2007-09-26 Thread Roch - PAE
Vincent Fox writes:
> I don't understand. How do you
>
> "setup one LUN that has all of the NVRAM on the array dedicated to it"
>
> I'm pretty familiar with 3510 and 3310. Forgive me for being a bit
> thick here, but can you be more specific for the n00b?
>
> Do you mean from firmware