Hello,
What is the status of the bug 6381203 fix in S10u3?
("deadlock due to i/o while assigning (tc_lock held)")
Was it integrated? Is there a patch?
Thanks,
[i]-- leon[/i]
Too bad... I was in a situation where every "zpool ..." command was stuck (as
well as the df command), and my hope was that it was a known/fixed bug. I could
not save the core files, and I am not sure I can reproduce the problem.
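Note for next time, in case I hit it again: if a dedicated dump device is
configured, a live crash dump can be taken without rebooting the hung box.
A rough sketch:
# dumpadm            # verify that a dump device and savecore directory are set up
# savecore -L        # write a live crash dump to the savecore directory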
Thank you for the quick reply,
[i]-- leon[/i]
http://napobo3.blogspot.com/2007/01/printing-problemz.html
Marion asked the community:
[i]> How can we explain the following results?[/i]
Nobody replied, so I am asking this question again because it's very important to
me:
[b]How did ZFS striped over 7 slices of an FC-SATA LUN, accessed via NFS, work [u]146
times faster[/u] than ZFS on 1 slice of the same LUN via NFS?[/b]
Jeff,
thank you for the explanation, but it's hard for me to accept it because:
1. You described a different configuration: 7 LUNs. Marion's post was about 7
slices of the same LUN.
2. I have never seen a storage controller with a cache-per-LUN setting. Cache size
doesn't depend on the number of LUNs IMHO, it'
Hello,
I am running the SPEC SFS benchmark [1] on a dual-Xeon 2.80GHz box with 4GB of memory.
More details:
snv_56, zil_disable=1, zfs_arc_max = 0x80000000 # 2GB
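For reference, the two tunables go into /etc/system, roughly like this (a sketch,
not a verbatim copy of my file, so double-check before copying):
* /etc/system additions for the test (reboot needed to take effect)
set zfs:zil_disable = 1
set zfs:zfs_arc_max = 0x80000000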
Configurations that were tested:
160 dirs/1 zfs/1 zpool/4 SAN LUNs
160 zfs'es/1 zpool/4 SAN LUNs
40 zfs'es/4 zpools/4 SAN LUNs
One zpool was cre
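For illustration, a configuration like the second one (160 zfs'es in 1 zpool)
can be created roughly like this; the pool and LUN names below are made up, so
treat it as a sketch only:
# zpool create bench c4t0d0 c4t1d0 c4t2d0 c4t3d0    # 4 SAN LUNs (hypothetical names)
# i=1; while [ $i -le 160 ]; do zfs create bench/fs$i; zfs set sharenfs=on bench/fs$i; i=`expr $i + 1`; done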
Hi Marion,
your one-liner works only on SPARC and doesn't work on x86:
# dtrace -n fbt::ssd_send_scsi_SYNCHRONIZE_CACHE:entry'[EMAIL PROTECTED] =
count()}'
dtrace: invalid probe specifier fbt::ssd_send_scsi_SYNCHRONIZE_CACHE:[EMAIL
PROTECTED] = count()}: probe description
fbt::ssd_send_scsi_SYN
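On x86 the SCSI/FC disk driver is sd rather than ssd, so an equivalent probe
(my own variant, aggregating on the probe function name) would be something like:
# dtrace -n 'fbt::*send_scsi_SYNCHRONIZE_CACHE:entry{@[probefunc] = count()}'
The wildcard should match both sd_send_scsi_SYNCHRONIZE_CACHE (x86) and
ssd_send_scsi_SYNCHRONIZE_CACHE (SPARC).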
I am not sure whether it is related to fragmentation, but I can say that the serious
performance degradation in my NFS/ZFS benchmarks [1] is a result of the on-disk ZFS
data layout.
Read operations on directories (NFSv3 readdirplus) are abnormally time-consuming.
That kills the server. After a cold restart of the
Have you tried PowerPath/EMC and MPxIO/Pillar on the same host?
Robert wrote:
> Before jumping to any conclusions - first try to
> eliminate nfs and do readdirs locally - I guess that would be quite fast.
> Then check on a client (dtrace) the time distribution of nfs requests
> and send us the results.
We used a test program that does readdirs and can be
>
> As I understand the issue, a readdirplus is
> 2X slower when data is already cached in the client
> than when it is not.
Yes, that's the issue. It's not always 2X slower, but it is always slower.
Two more of my runs on NFS/ZFS show:
1. real  3:14.185
   user     2.249
   sys     33.083
2.
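For anyone who wants to see the effect without our test program, simply timing a
large directory listing on the NFS client twice should show the same picture
(the server name and paths below are hypothetical):
# mount -F nfs -o vers=3 server:/export/bench /mnt    # fresh mount = cold client cache
# ptime ls -f /mnt/dir0001 > /dev/null                # first (cold) pass
# ptime ls -f /mnt/dir0001 > /dev/null                # second (cached) pass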
A more detailed description of the readdir test, with a conclusion at the end:
Roch asked me:
> Is this a NFS V3 or V4 test or don't care ?
I am running NFSv3, but a short test with NFSv4 showed that the
problem is there too.
Then Roch asked:
> I've run rdir on a few of my large directories, However my
> l
Hello, gurus
I need your help. During a benchmark test of NFS-shared ZFS file systems, at
some moment the number of NFS server threads jumps to the maximum value, 1027
(NFSD_SERVERS was set to 1024). The latency also grows and the number of IOPS
goes down.
I've collected the output of
echo "::pgre
Hi Jim,
here are the answers to your questions:
>
> What size and type of server?
SUNW,Sun-Fire-V240, Memory size: 2048 Megabytes
> What size and type of storage?
SAN-attached storage array, dual-path 2Gb FC connection
4 LUNs, 96GB each:
# mpathadm list lu
/dev/rdsk/c3t001738010140003
On 2/28/07, Roch - PAE <[EMAIL PROTECTED]> wrote:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6467988
NFSD threads are created on a demand spike (all of them
waiting on I/O) but then tend to stick around servicing
moderate loads.
-r
Hello Roch,
It's not my case. NFS
On 3/5/07, Roch - PAE <[EMAIL PROTECTED]> wrote:
Leon Koll writes:
> On 2/28/07, Roch - PAE <[EMAIL PROTECTED]> wrote:
> >
> >
> > http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6467988
> >
> > NFSD threads are created on a dem
On 3/5/07, Roch - PAE <[EMAIL PROTECTED]> wrote:
Leon Koll writes:
> On 3/5/07, Roch - PAE <[EMAIL PROTECTED]> wrote:
> >
> > Leon Koll writes:
> > > On 2/28/07, Roch - PAE <[EMAIL PROTECTED]> wrote:
> > > >
> > > >
&
On 3/5/07, Spencer Shepler <[EMAIL PROTECTED]> wrote:
On Mar 5, 2007, at 11:17 AM, Leon Koll wrote:
> On 3/5/07, Roch - PAE <[EMAIL PROTECTED]> wrote:
>>
>> Leon Koll writes:
>>
>> > On 3/5/07, Roch - PAE <[EMAIL PROTECTED]> wrote:
>> >
Hi Jiri,
>
> Currently samba-3.0.25 will introduce a modular
> (.so) interface to plug in the VFS modules handling the
> ACLs according to the FS used.
Do you know whether Samba/ZFS ACL support will be implemented in samba-3.0.26?
Thanks,
-- Leon
Yes, it is:
SYNCHRONIZE_CACHE(10) opcode 0x35
SYNCHRONIZE_CACHE(16) opcode 0x91
[i]-- leon[/i]
Welcome to the club, Andy...
I tried several times to attract the attention of the community to the dramatic
performance degradation (about 3 times) of the NFS/ZFS combination vs. NFS/UFS,
without any result: [1] http://www.opensolaris.org/jive/thread.jspa?messageID=98592 , http://www.opensolari
Hello, Roch
<...>
> Then SFS over ZFS is being investigated by others within
> Sun. I believe we have stuff in the pipe to make ZFS match
> or exceed UFS on small server level loads. So I think your
> complaint is being heard.
You're the first one who has said this, and I am glad I'm being heard.
>
> On Apr 18, 2007, at 6:44 AM, Yaniv Aknin wrote:
>
> > Hello,
> >
> > I'd like to plan a storage solution for a system currently in
> > production.
> >
> > The system's storage is based on code which writes many files to
> > the file system, with overall storage needs currently aroun
> Leon Koll wrote:
> > My guess is that Yaniv assumes that 8 pools with 62.5 million files each
> > have a significantly lower chance of being corrupted/causing data loss than
> > 1 pool with 500 million files in it.
> > Do you agree with this?
>
> I do not agree with th
Hello,
On my SPARC server running S10u3 with all the latest patches,
I created a zpool and one file system and started to copy data to it. The host
crashed during the tar ... | tar ... run.
After that I tried "zpool destroy" and the host crashed again. The same with
"zpool export".
It looks like a bug: http:
> Have there been any new developments regarding the
> availability of vfs_zfsacl.c? Jeb, were you able to
> get a copy of Jiri's work-in-progress? I need this
> ASAP (as I'm sure most everyone watching this thread
> does)...
me too... A.S.A.P.!!!
[i]-- leon[/i]
> Maybe this link could help you?
>
> http://www.nabble.com/VFS-module-handling-ACL-on-ZFS-t3730348.html
>
Looks like exactly what we need. It's strange it wasn't posted to zfs-discuss. So
many people were waiting for this code.
Thanks, Dmitry.
Does the OpenSolaris iSCSI target support SCSI-3 PGR reservations?
My goal is to use an iSCSI LUN created by [1] or [2] as a quorum device for a
3-node SunCluster.
[1] zfs set shareiscsi=on
[2] iscsitadm create target .
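For example, the LUN I have in mind would be backed by a ZFS volume, roughly
like this (names and sizes are made up):
# zfs create -V 512m tank/quorum                 # ZFS volume as backing store
# zfs set shareiscsi=on tank/quorum              # option [1]
or
# iscsitadm create target -b /dev/zvol/rdsk/tank/quorum quorum-tgt    # option [2]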
Thanks,
-- leon
I performed a SPEC SFS97 benchmark on Solaris 10u2/SPARC with 4 x 64GB
LUNs, connected via a FC SAN.
The file systems that were created on the LUNs: UFS, VxFS, ZFS.
Unfortunately the ZFS test couldn't complete because the box hung
under a very moderate load (3000 IOPS).
Additional tests were done using UFS
On 8/7/06, William D. Hathaway <[EMAIL PROTECTED]> wrote:
If this is reproducible, can you force a panic so it can be analyzed?
The core files and explorer output are here:
http://napobo3.lk.net/vinc/
The core files were created after the box hung (break to OBP ... sync).
On 8/7/06, George Wilson <[EMAIL PROTECTED]> wrote:
Leon,
Looking at the core file doesn't really show much from the ZFS side. It
looks like you were having problems with your SAN though:
/scsi_vhci/[EMAIL PROTECTED] (ssd5) offline
/scsi_vhci/[EMAIL PROTECTED] (ssd5) multipath status: failed, pa
On 8/8/06, eric kustarz <[EMAIL PROTECTED]> wrote:
Leon Koll wrote:
> I performed a SPEC SFS97 benchmark on Solaris 10u2/Sparc with 4 64GB
> LUNs, connected via FC SAN.
> The filesystems that were created on LUNS: UFS,VxFS,ZFS.
> Unfortunately the ZFS test couldn't compl
<...>
So having 4 pools isn't a recommended config - I would destroy those 4
pools and just create 1 RAID-0 pool:
# zpool create sfsrocks c4t00173801014Bd0 c4t00173801014Cd0
c4t001738010140001Cd0 c4t0017380101400012d0
each of those devices is a 64GB LUN, right?
I did it - created one po
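As a quick sanity check of the striped layout:
# zpool status sfsrocks     # all four LUNs should show up as top-level vdevs
# zpool list sfsrocks       # SIZE should be roughly 4 x 64GB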
On 8/11/06, eric kustarz <[EMAIL PROTECTED]> wrote:
Leon Koll wrote:
> <...>
>
>> So having 4 pools isn't a recommended config - i would destroy those 4
>> pools and just create 1 RAID-0 pool:
>> #zpool create sfsrocks c4t00173801014Bd0 c4t00
On 8/11/06, eric kustarz <[EMAIL PROTECTED]> wrote:
Leon Koll wrote:
> On 8/11/06, eric kustarz <[EMAIL PROTECTED]> wrote:
>
>> Leon Koll wrote:
>>
>> > <...>
>> >
>> >> So having 4 pools isn't a recommended config
My question is not related directly to ZFS but maybe you know the answer.
Currently I can run the ZFS Web administration interface only locally - by
pointing my browser to
[i]https://localhost:6789/zfs/[/i]
What should be done to enable access to [i]https://zfshost:6789/zfs/[/i] for
other host
> My question is not related directly to ZFS but maybe
> you know the answer.
> Currently I can run the ZFS Web administration
> interface only locally - by pointing my browser to
> [i]https://localhost:6789/zfs/[/i]
> What should be done to enable an access to
> [i]https://zfshost:6789/zfs/[/i] fo
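One commonly suggested way on Solaris 10 (my assumption about the fix, so verify
it on your release) is to allow the Java Web Console to listen on all interfaces
instead of only on localhost:
# svccfg -s svc:/system/webconsole setprop options/tcp_listen = true
# svcadm refresh svc:/system/webconsole
# /usr/sbin/smcwebserver restart
After that, https://zfshost:6789/zfs/ should be reachable from other hosts.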