Re: [zfs-discuss] ZFS hang and boot hang when iSCSI device removed

2008-02-06 Thread Ross
Yes, I've learnt that I get the e-mail reply a long while before it appears on the boards. I'm not entirely sure how these boards are run; it's certainly odd for somebody used to forums rather than mailing lists, but they do seem to work eventually :) Thanks for the help, Vic; will try to get back into

Re: [zfs-discuss] [storage-discuss] dos programs on a

2008-02-06 Thread Maurilio Longo
Alan, I'm using Nexenta Core RC4, which is based on Nevada 81/82. The zfs casesensitivity property is set to 'insensitive'. Best regards, Maurilio.
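
For readers setting this up: casesensitivity can only be set at dataset creation time, and only on builds that support the property; a minimal sketch with a hypothetical pool and dataset name:

    # zfs create -o casesensitivity=insensitive tank/dosfiles
    # zfs get casesensitivity tank/dosfiles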

[zfs-discuss] available space?

2008-02-06 Thread Jure Pečar
Maybe a basic zfs question ... I have a pool:

# zpool status backup
  pool: backup
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        backup      ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c1t0d0s
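
To see where the space went, it helps to compare the pool-level and dataset-level views; zpool list includes space that zfs list does not (particularly parity on raidz), so the two can legitimately disagree:

    # zpool list backup
    # zfs list -r backup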

Re: [zfs-discuss] status of zfs boot netinstall kit

2008-02-06 Thread Roman Morokutti
Hi, I would like to continue this (maybe a bit outdated) thread with two questions:
1. How do I create a netinstall image?
2. How do I write the netinstall image back as an ISO9660 image on DVD (after patching it for zfsboot)?
Roman
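
On question 2, a rough sketch using mkisofs, assuming an already-patched x86 install tree under /export/install (a hypothetical path; the exact boot options vary by release, so treat this as a starting point rather than a recipe):

    # mkisofs -o netinstall.iso -b boot/grub/stage2_eltorito -c .catalog \
        -no-emul-boot -boot-load-size 4 -boot-info-table -R /export/install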

Re: [zfs-discuss] ZFS Performance Issue

2008-02-06 Thread William Fretts-Saxton
Hi Marc,

# cat /etc/release
Solaris 10 8/07 s10x_u4wos_12b X86

I don't know if my application uses synchronous I/O transactions... I'm using Sun's Glassfish v2u1. I've deleted the ZFS partition and have set up an SVM stripe/mirror just to see if "ZFS" is getting in the wa
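
One way to find out whether the application issues synchronous I/O is to count fsync-family system calls while the workload runs; a minimal DTrace sketch (fdsync is the syscall behind fsync(3C) on Solaris):

    # dtrace -n 'syscall::fdsync:entry { @[execname] = count(); }'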

[zfs-discuss] zpool status -x strangeness on b78

2008-02-06 Thread Ben Miller
We run a cron job that does a 'zpool status -x' to check for any degraded pools. We just happened to find a pool degraded this morning by running 'zpool status' by hand and were surprised that it was degraded, as we didn't get a notice from the cron job.

# uname -srvp
SunOS 5.11 snv_78 i386
#
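
For reference, a minimal cron-driven check along these lines (the mail recipient is hypothetical, and note this thread's finding that 'zpool status -x' may not flag every degraded state):

    #!/bin/sh
    # Mail an alert when any pool is not reported healthy.
    out=`/usr/sbin/zpool status -x`
    if [ "$out" != "all pools are healthy" ]; then
        echo "$out" | mailx -s "zpool problem on `hostname`" admin@example.com
    fi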

Re: [zfs-discuss] mounting a copy of a zfs pool /file system while orginal is still active

2008-02-06 Thread eric kustarz
> While browsing the ZFS source code, I noticed that "usr/src/cmd/ztest/ztest.c" includes ztest_spa_rename(), a ZFS test which renames a ZFS storage pool to a different name, tests the pool under its new name, and then renames it back. I wonder why this functionality was not expo
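
Outside of ztest, the usual way to rename a pool is to export it and import it under a new name; a minimal sketch, assuming a pool currently named tank:

    # zpool export tank
    # zpool import tank newname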

Re: [zfs-discuss] ZFS number of file systems scalability

2008-02-06 Thread Shawn Ferry
There is a write-up of similar findings and more information about sharemgr: http://developers.sun.com/solaris/articles/nfs_zfs.html Unfortunately, I don't see anything that says those changes will be in u5. Shawn On Feb 5, 2008, at 8:21 PM, Paul B. Henson wrote: > I was curious to see abo
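
For context, the per-dataset sharing discussed in that article is driven by the sharenfs property, which child filesystems inherit; a minimal sketch, assuming a hypothetical pool named tank:

    # zfs set sharenfs=on tank/home
    # zfs get -r sharenfs tank/home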

[zfs-discuss] Did MDB Functionality Change?

2008-02-06 Thread spencer
On Solaris 10 u3 (11/06) I can execute the following:

bash-3.00# mdb -k
Loading modules: [ unix krtld genunix specfs dtrace ufs sd pcipsy ip sctp usba nca md zfs random ipc nfs crypto cpc fctl fcip logindmux ptm sppp ]
> arc::print
{ anon = ARC_anon mru = ARC_mru mru_ghost = ARC_mru_g
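
When the mdb target changes between builds, the same ARC numbers are usually still reachable through kstats; a minimal sketch, assuming the build exports the arcstats kstat:

    # kstat -m zfs -n arcstats

On newer builds there is also an ::arc dcmd:

    # echo '::arc' | mdb -k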

Re: [zfs-discuss] ZFS Performance Issue

2008-02-06 Thread William Fretts-Saxton
I disabled file prefetch and there was no effect. Here are some performance numbers. Note that, when the application server used a ZFS file system to save its data, the transaction took TWICE as long. For some reason, though, iostat is showing 5x as much disk writing (to the physical disks) o
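
When comparing the two setups, it helps to watch the pool-level and device-level views side by side; a minimal sketch, assuming the pool is named pool1:

    # zpool iostat -v pool1 5
    # iostat -xnz 5

Some write inflation on ZFS is expected from copy-on-write metadata updates and intent-log commits, so physical writes exceeding application writes is not by itself a fault.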

Re: [zfs-discuss] ZFS configuration for a thumper

2008-02-06 Thread eric kustarz
On Feb 4, 2008, at 5:10 PM, Marion Hakanson wrote:
> [EMAIL PROTECTED] said:
>> FYI, you can use the '-c' option to compare results from various runs and have one single report to look at.
> That's a handy feature. I've added a couple of such comparisons:
> http://acc.ohsu.edu/

Re: [zfs-discuss] available space?

2008-02-06 Thread Richard Elling
Jure Pečar wrote:
> Maybe a basic zfs question ...
>
> I have a pool:
>
> # zpool status backup
>   pool: backup
>  state: ONLINE
>  scrub: none requested
> config:
>
>         NAME        STATE     READ WRITE CKSUM
>         backup      ONLINE       0     0     0
>           mirror    ONLINE

[zfs-discuss] zfs send / receive between different opensolaris versions?

2008-02-06 Thread Michael Hale
Hello everybody, I'm thinking of building out a second machine as a backup for our mail spool, to which I would push out regular filesystem snapshots, something like a warm/hot spare situation. Our mail spool is currently running snv_67 and the new machine would probably be running whatever the lates
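
A minimal sketch of the snapshot-shipping loop this involves, assuming a pool named spool on both machines and ssh between them (all names hypothetical):

    # zfs snapshot spool/[email protected]
    # zfs send spool/[email protected] | ssh backuphost zfs receive -d spool
    # zfs snapshot spool/[email protected]
    # zfs send -i snap1 spool/[email protected] | ssh backuphost zfs receive -d spool

On version skew: a stream sent from an older build is generally receivable on the same or a newer build, but not the other way around, so keep the receiving machine at least as new as the sender.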

Re: [zfs-discuss] ZFS Performance Issue

2008-02-06 Thread Will Murnane
On Feb 6, 2008 6:36 PM, William Fretts-Saxton <[EMAIL PROTECTED]> wrote: > Here are some performance numbers. Note that, when the > application server used a ZFS file system to save its data, the > transaction took TWICE as long. For some reason, though, iostat is > showing 5x as much disk writin

Re: [zfs-discuss] ZFS Performance Issue

2008-02-06 Thread William Fretts-Saxton
It is a striped mirror:

# zpool status
        NAME        STATE     READ WRITE CKSUM
        pool1       ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c0t2d0  ONLINE       0     0     0
            c0t3d0  ONLINE       0     0     0
          mirror    ONL

Re: [zfs-discuss] scrub halts

2008-02-06 Thread Lida Horn
I now have improved sata and marvell88sx driver modules that deal with various error conditions in a much more solid way. Changes include reducing the number of required device resets, properly reporting media errors (rather than "no additional sense"), and clearing aborted packets more rapidly so th

Re: [zfs-discuss] ZFS Performance Issue

2008-02-06 Thread Vincent Fox
Solaris 10u4, eh? Sounds a lot like the fsync issues we ran into trying to run Cyrus mail-server spools on ZFS. This was highlighted for us by filebench's varmail test. OpenSolaris nv78, however, worked very well.
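
For anyone wanting to reproduce this, a rough sketch of a varmail run in filebench's classic interactive mode (exact syntax varies between filebench versions, and /tank/test is a hypothetical path):

    # filebench
    filebench> load varmail
    filebench> set $dir=/tank/test
    filebench> run 60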

Re: [zfs-discuss] ZFS Performance Issue

2008-02-06 Thread Marc Bevand
William Fretts-Saxton sun.com> writes: > > I disabled file prefetch and there was no effect. > > Here are some performance numbers. Note that, when the application server > used a ZFS file system to save its data, the transaction took TWICE as long. > For some reason, though, iostat is showing

[zfs-discuss] MySQL, Lustre and ZFS

2008-02-06 Thread kilamanjaro
Hi all, Any thoughts on if and when ZFS, MySQL, and Lustre 1.8 (and beyond) will work together and be supported as such by Sun? - Network Systems Architect, Advanced Digital Systems Internet

Re: [zfs-discuss] ZFS Performance Issue

2008-02-06 Thread Neil Perrin
Marc Bevand wrote: > William Fretts-Saxton sun.com> writes: > >> I disabled file prefetch and there was no effect. >> >> Here are some performance numbers. Note that, when the application server >> used a ZFS file system to save its data, the transaction took TWICE as long. >> For some reason,

Re: [zfs-discuss] ZFS configuration for a thumper

2008-02-06 Thread Marion Hakanson
[EMAIL PROTECTED] said:
> Your findings for random reads with or without NCQ match my findings: http://blogs.sun.com/erickustarz/entry/ncq_performance_analysis
> Disabling NCQ looks like a very tiny win for the multi-stream read case. I found a much bigger win, but I was doing RAID-0 inst
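
For reference, the queue depth that the NCQ comparison hinges on can be capped via a tunable on the sata module; a sketch, hedged because the tunable name has varied across builds (add to /etc/system and reboot):

    set sata:sata_max_queue_depth = 0x1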

Re: [zfs-discuss] ZFS Performance Issue

2008-02-06 Thread Marion Hakanson
[EMAIL PROTECTED] said: > Here are some performance numbers. Note that, when the application server > used a ZFS file system to save its data, the transaction took TWICE as long. > For some reason, though, iostat is showing 5x as much disk writing (to the > physical disks) on the ZFS partition. C

Re: [zfs-discuss] ZFS Performance Issue

2008-02-06 Thread Marc Bevand
Neil Perrin Sun.COM> writes: > > The ZIL doesn't do a lot of extra IO. It usually just does one write per > synchronous request and will batch up multiple writes into the same log > block if possible. Ok. I was wrong then. Well, William, I think Marion Hakanson has the most plausible explanatio
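
A side note on the ZIL traffic being discussed: on builds with separate intent-log support (integrated around snv_68, so not in Solaris 10u4), log writes can be directed to a dedicated device; a minimal sketch with a hypothetical disk:

    # zpool add pool1 log c4t0d0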

[zfs-discuss] ZFS taking up to 80 seconds to flush a single 8KB O_SYNC block.

2008-02-06 Thread Nathan Kroenert
Hey all - I'm working on an interesting issue where I'm seeing ZFS being quite cranky about writing O_SYNC blocks. Bottom line is that I have a small test case that does essentially this:

open file for writing -- O_SYNC
loop(
    write() 8KB of random data
    print time taken