[zfs-discuss] bug id 6381203

2007-01-28 Thread Leon Koll
Hello, what is the status of the fix for bug 6381203 ("deadlock due to i/o while assigning (tc_lock held)") in S10 U3? Was it integrated? Is there a patch? Thanks, -- leon

[zfs-discuss] Re: bug id 6381203

2007-01-28 Thread Leon Koll
Too bad... I was in a situation where every "zpool ..." command was stuck (as was the df command), and my hope was that it was a known, fixed bug. I could not save the core files, and I am not sure I can reproduce the bug. Thank you for the quick reply, -- leon

[zfs-discuss] Zpooling problems

2007-01-31 Thread Leon Koll
http://napobo3.blogspot.com/2007/01/printing-problemz.html

[zfs-discuss] Re: ZFS vs NFS vs array caches, revisited

2007-02-10 Thread Leon Koll
Marion asked the community, "How can we explain the following results?", and nobody replied, so I am asking the question again because it's very important to me: how did ZFS striped over 7 slices of an FC-SATA LUN, accessed via NFS, work 146 times faster than ZFS on 1 slice of the same LUN via N

[zfs-discuss] Re: Re: ZFS vs NFS vs array caches, revisited

2007-02-11 Thread Leon Koll
Jeff, thank you for the explanation, but it's hard for me to accept it because: 1. You described a different configuration: 7 LUNs. Marion's post was about 7 slices of the same LUN. 2. I have never seen a storage controller with a per-LUN cache setting. Cache size doesn't depend on the number of LUNs, IMHO; it'

[zfs-discuss] SPEC SFS testing of NFS/ZFS/B56

2007-02-12 Thread Leon Koll
Hello, I am running the SPEC SFS benchmark [1] on a dual-Xeon 2.80GHz box with 4GB of memory. More details: snv_56, zil_disable=1, zfs_arc_max=0x80000000 (2GB). Configurations that were tested: 160 dirs/1 zfs/1 zpool/4 SAN LUNs; 160 zfs'es/1 zpool/4 SAN LUNs; 40 zfs'es/4 zpools/4 SAN LUNs. One zpool was cre
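For reference, a minimal sketch of how these two tunables were typically set in /etc/system at the time (assuming the stock Nevada tunable names; a reboot is needed for them to take effect):

  # /etc/system fragment: benchmark-only settings from the post above
  set zfs:zil_disable = 1            # disable the ZIL; unsafe for real data
  set zfs:zfs_arc_max = 0x80000000   # cap the ARC at 2GB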

[zfs-discuss] Re: Re: Re: ZFS vs NFS vs array caches, revisited

2007-02-13 Thread Leon Koll
Hi Marion, your one-liner works only on SPARC and doesn't work on x86: # dtrace -n fbt::ssd_send_scsi_SYNCHRONIZE_CACHE:entry'[EMAIL PROTECTED] = count()}' dtrace: invalid probe specifier fbt::ssd_send_scsi_SYNCHRONIZE_CACHE:[EMAIL PROTECTED] = count()}: probe description fbt::ssd_send_scsi_SYN
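A likely workaround, assuming the x86 FC/SCSI disk driver is sd rather than ssd and exports the same function name: probe the sd variant directly, or glob over both drivers:

  # x86 equivalent of the SPARC one-liner (sd instead of ssd)
  dtrace -n 'fbt::sd_send_scsi_SYNCHRONIZE_CACHE:entry { @[execname] = count(); }'
  # or match either driver with a wildcard
  dtrace -n 'fbt::*send_scsi_SYNCHRONIZE_CACHE:entry { @[probefunc] = count(); }'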

[zfs-discuss] Re: ZFS fragmentation

2007-02-14 Thread Leon Koll
I am not sure whether it is related to fragmentation, but I can say that the serious performance degradation in my NFS/ZFS benchmarks [1] is a result of the on-disk ZFS data layout. Read operations on directories (NFSv3 readdirplus) are abnormally time-consuming. That kills the server. After a cold restart of the

[zfs-discuss] Re: SPEC SFS benchmark of NFS/ZFS/B56 - please help to improve it!

2007-02-14 Thread Leon Koll
An update: I am not sure whether it is related to fragmentation, but I can say that the serious performance degradation in my NFS/ZFS benchmarks is a result of the on-disk ZFS data layout. Read operations on directories (NFSv3 readdirplus) are abnormally time-consuming. That kills the server. After a cold restart

[zfs-discuss] Re: ZFS with SAN Disks and mutipathing

2007-02-18 Thread Leon Koll
Have you tried PowerPath/EMC and MPxIO/Pillar on the same host?

[zfs-discuss] Re: Re: SPEC SFS benchmark of NFS/ZFS/B56 - please help to improve it!

2007-02-18 Thread Leon Koll
Robert wrote: > Before jumping to any conclusions - first try to eliminate NFS and do readdirs locally - I guess that would be quite fast. Then check on a client (dtrace) the time distribution of NFS requests and send us the results. We used this test program, which does readdirs and can be
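For the local-readdir check Robert suggests, even a shell one-liner gives a first-order answer (a sketch; /pool/fs/bigdir is a made-up test directory, and ls -f skips sorting and per-file stats, so it is close to a pure readdir loop):

  # time a local directory scan, bypassing NFS; run twice to compare cold vs. cached
  ptime ls -f /pool/fs/bigdir | wc -l
  ptime ls -f /pool/fs/bigdir | wc -l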

[zfs-discuss] Re: Re: SPEC SFS benchmark of NFS/ZFS/B56 - please help to improve it!

2007-02-20 Thread Leon Koll
> As I understand the issue, a readdirplus is 2X slower when data is already cached in the client than when it is not. Yes, that's the issue. It's not always 2X slower, but it is ALWAYS slower. Two more of my runs on NFS/ZFS show: 1. real 3:14.185, user 2.249, sys 33.083; 2.

[zfs-discuss] Re: Re: SPEC SFS benchmark of NFS/ZFS/B56 - please help to improve it!

2007-02-21 Thread Leon Koll
A more detailed description of the readdir test, with a conclusion at the end. Roch asked me: > Is this an NFS V3 or V4 test, or don't you care? I am running NFS V3, but a short test of NFS V4 showed that the problem is there too. Then Roch wrote: > I've run rdir on a few of my large directories, however my l

[zfs-discuss] Why number of NFS threads jumps to the max value?

2007-02-27 Thread Leon Koll
Hello, gurus, I need your help. During a benchmark test of NFS-shared ZFS file systems, at some moment the number of NFS threads jumps to the maximum value, 1027 (NFSD_SERVERS was set to 1024). Latency also grows, and the number of IOPS goes down. I've collected the output of echo "::pgre
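For context, NFSD_SERVERS is the nfsd thread cap in /etc/default/nfs on Solaris 10; a minimal sketch of the setting in question:

  # /etc/default/nfs: upper bound on concurrent nfsd threads
  NFSD_SERVERS=1024
  # restart the NFS server so the new limit takes effect
  svcadm restart svc:/network/nfs/server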

[zfs-discuss] Re: Why number of NFS threads jumps to the max value?

2007-03-01 Thread Leon Koll
Hi Jim, here are the answers to your questions: > What size and type of server? SUNW,Sun-Fire-V240, memory size: 2048 megabytes. > What size and type of storage? A SAN-attached storage array, dual-path 2Gb FC connection, 4 LUNs of 96GB each: # mpathadm list lu /dev/rdsk/c3t001738010140003

Re: [zfs-discuss] Why number of NFS threads jumps to the max value?

2007-03-02 Thread Leon Koll
On 2/28/07, Roch - PAE <[EMAIL PROTECTED]> wrote: http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6467988 NFSD threads are created on a demand spike (all of them waiting on I/O) but then tend to stick around servicing moderate loads. -r Hello Roch, it's not my case. NFS

Re: [zfs-discuss] Why number of NFS threads jumps to the max value?

2007-03-05 Thread Leon Koll
On 3/5/07, Roch - PAE <[EMAIL PROTECTED]> wrote: Leon Koll writes: > On 2/28/07, Roch - PAE <[EMAIL PROTECTED]> wrote: > > http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6467988 > > NFSD threads are created on a dem

Re: [zfs-discuss] Why number of NFS threads jumps to the max value?

2007-03-05 Thread Leon Koll
On 3/5/07, Roch - PAE <[EMAIL PROTECTED]> wrote: Leon Koll writes: > On 3/5/07, Roch - PAE <[EMAIL PROTECTED]> wrote: > > Leon Koll writes: > > > On 2/28/07, Roch - PAE <[EMAIL PROTECTED]> wrote:

Re: [zfs-discuss] Why number of NFS threads jumps to the max value?

2007-03-05 Thread Leon Koll
On 3/5/07, Spencer Shepler <[EMAIL PROTECTED]> wrote: On Mar 5, 2007, at 11:17 AM, Leon Koll wrote: > On 3/5/07, Roch - PAE <[EMAIL PROTECTED]> wrote: > > Leon Koll writes: > > > On 3/5/07, Roch - PAE <[EMAIL PROTECTED]> wrote:

[zfs-discuss] Re: Samba and ZFS ACL Question

2007-03-16 Thread Leon Koll
Hi Jiri, > Currently samba-3.0.25 will introduce the modular (.so) interface to plug in the VFS modules handling the ACLs according to the FS used. Do you know whether the Samba/ZFS ACL support will be implemented in samba-3.0.26? Thanks, -- Leon

[zfs-discuss] Re: storage type for ZFS

2007-04-18 Thread Leon Koll
Yes, it is: SYNCHRONIZE_CACHE(10), opcode 0x35; SYNCHRONIZE_CACHE(16), opcode 0x91. -- leon

[zfs-discuss] Re: ZFS+NFS on storedge 6120 (sun t4)

2007-04-20 Thread Leon Koll
Welcome to the club, Andy... I have tried several times to attract the community's attention to the dramatic performance degradation (about 3x) of the NFS/ZFS combination vs. NFS/UFS - without any result: http://www.opensolaris.org/jive/thread.jspa?messageID=98592 [1], http://www.opensolari

[zfs-discuss] Re: Re: ZFS+NFS on storedge 6120 (sun t4)

2007-04-23 Thread Leon Koll
Hello, Roch <...> > Then SFS over ZFS is being investigated by others within Sun. I believe we have stuff in the pipe to make ZFS match or exceed UFS on small server-level loads. So I think your complaint is being heard. You're the first one who has said this, and I am glad I'm being he

[zfs-discuss] Re: [nfs-discuss] Multi-tera, small-file filesystems

2007-04-23 Thread Leon Koll
> > On Apr 18, 2007, at 6:44 AM, Yaniv Aknin wrote: > > > Hello, I'd like to plan a storage solution for a system currently in production. The system's storage is based on code which writes many files to the file system, with overall storage needs currently aroun

[zfs-discuss] Re: Re: [nfs-discuss] Multi-tera, small-file filesystems

2007-04-23 Thread Leon Koll
> Leon Koll wrote: > > My guess is that Yaniv assumes that 8 pools with 62.5 million files each have a significantly smaller chance of being corrupted/causing data loss than 1 pool with 500 million files in it. > > Do you agree with this? > I do not agree with th

[zfs-discuss] zpool command causes a crash of my server

2007-05-01 Thread Leon Koll
Hello, on my SPARC server running S10/U3 with all the latest patches, I created a zpool and one filesystem and started to copy data to it. The host crashed during the tar ... | tar ... run. After it happened I tried "zpool destroy", and the host crashed again. The same with "zpool export". It looks like a bug http:

[zfs-discuss] Re: Samba and ZFS ACL Question

2007-05-07 Thread Leon Koll
> Have there been any new developments regarding the availability of vfs_zfsacl.c? Jeb, were you able to get a copy of Jiri's work-in-progress? I need this ASAP (as I'm sure most everyone watching this thread does)... Me too... A.S.A.P.!!! -- leon

[zfs-discuss] Re: Samba and ZFS ACL Question

2007-05-16 Thread Leon Koll
> Maybe this link could help you? > http://www.nabble.com/VFS-module-handling-ACL-on-ZFS-t3730348.html That looks like exactly what we need. It's strange it wasn't posted to zfs-discuss; so many people were waiting for this code. Thanks, Dmitry.

[zfs-discuss] Does iSCSI target support SCSI-3 PGR reservation ?

2007-07-26 Thread Leon Koll
Does the OpenSolaris iSCSI target support SCSI-3 PGR reservations? My goal is to use an iSCSI LUN created by [1] or [2] as a quorum device for a 3-node SunCluster. [1] zfs set shareiscsi=on [2] iscsitadm create target. Thanks, -- leon
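For the record, the two creation paths look roughly like this (a sketch; the pool, volume, and target names are made up):

  # [1] export a ZFS volume as an iSCSI LUN via the shareiscsi property
  zfs create -V 1g tank/quorum
  zfs set shareiscsi=on tank/quorum
  # [2] or create the target explicitly, backed by the same zvol
  iscsitadm create target -b /dev/zvol/rdsk/tank/quorum quorum-target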

[zfs-discuss] SPEC SFS97 benchmark of ZFS,UFS,VxFS

2006-08-07 Thread Leon Koll
I performed a SPEC SFS97 benchmark on Solaris 10u2/SPARC with 4 64GB LUNs connected via FC SAN. The filesystems created on the LUNs were UFS, VxFS, and ZFS. Unfortunately, the ZFS test couldn't complete because the box hung under a very moderate load (3000 IOPS). Additional tests were done using UFS

Re: [zfs-discuss] Re: SPEC SFS97 benchmark of ZFS,UFS,VxFS

2006-08-07 Thread Leon Koll
On 8/7/06, William D. Hathaway <[EMAIL PROTECTED]> wrote: If this is reproducible, can you force a panic so it can be analyzed? The core files and explorer output are here: http://napobo3.lk.net/vinc/ The core files were created after the box hung: break to OBP ... sync

Re: [zfs-discuss] Re: SPEC SFS97 benchmark of ZFS,UFS,VxFS

2006-08-07 Thread Leon Koll
On 8/7/06, George Wilson <[EMAIL PROTECTED]> wrote: Leon, looking at the core file doesn't really show much from the ZFS side. It looks like you were having problems with your SAN though: /scsi_vhci/[EMAIL PROTECTED] (ssd5) offline /scsi_vhci/[EMAIL PROTECTED] (ssd5) multipath status: failed, pa

Re: [zfs-discuss] SPEC SFS97 benchmark of ZFS,UFS,VxFS

2006-08-08 Thread Leon Koll
On 8/8/06, eric kustarz <[EMAIL PROTECTED]> wrote: Leon Koll wrote: > I performed a SPEC SFS97 benchmark on Solaris 10u2/Sparc with 4 64GB > LUNs, connected via FC SAN. > The filesystems that were created on LUNS: UFS,VxFS,ZFS. > Unfortunately the ZFS test couldn't compl

Re: [zfs-discuss] SPEC SFS97 benchmark of ZFS,UFS,VxFS

2006-08-09 Thread Leon Koll
<...> So having 4 pools isn't a recommended config - I would destroy those 4 pools and just create 1 RAID-0 pool: # zpool create sfsrocks c4t00173801014Bd0 c4t00173801014Cd0 c4t001738010140001Cd0 c4t0017380101400012d0 Each of those devices is a 64GB LUN, right? I did it - created one po

Re: [zfs-discuss] SPEC SFS97 benchmark of ZFS,UFS,VxFS

2006-08-10 Thread Leon Koll
On 8/11/06, eric kustarz <[EMAIL PROTECTED]> wrote: Leon Koll wrote: > <...> >> So having 4 pools isn't a recommended config - I would destroy those 4 pools and just create 1 RAID-0 pool: >> # zpool create sfsrocks c4t00173801014Bd0 c4t00

Re: [zfs-discuss] SPEC SFS97 benchmark of ZFS,UFS,VxFS

2006-08-11 Thread Leon Koll
On 8/11/06, eric kustarz <[EMAIL PROTECTED]> wrote: Leon Koll wrote: > On 8/11/06, eric kustarz <[EMAIL PROTECTED]> wrote: >> Leon Koll wrote: >> > <...> >> >> So having 4 pools isn't a recommended config

[zfs-discuss] Re: ZFS Web administration interface

2006-09-03 Thread Leon Koll
My question is not directly related to ZFS, but maybe you know the answer. Currently I can run the ZFS Web administration interface only locally, by pointing my browser to https://localhost:6789/zfs/. What should be done to enable access to https://zfshost:6789/zfs/ for other host

[zfs-discuss] Re: ZFS Web administration interface

2006-09-04 Thread Leon Koll
> My question is not directly related to ZFS, but maybe you know the answer. Currently I can run the ZFS Web administration interface only locally, by pointing my browser to https://localhost:6789/zfs/. What should be done to enable access to https://zfshost:6789/zfs/ fo
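For what it's worth, port 6789 is served by the Sun Java Web Console, which in later Solaris 10 updates binds to localhost only; a likely fix (an assumption on my side - verify against your release's webconsole documentation) is to enable its TCP listener:

  # allow the Sun Java Web Console to accept remote connections
  svccfg -s svc:/system/webconsole setprop options/tcp_listen = true
  svcadm restart svc:/system/webconsole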