Roch wrote:
Ok, let's get a profile then:
dtrace -n '[EMAIL PROTECTED](20)]=count()} END{trunc(@,20)}'
I sent this output offline to Roch, here's the essential ones and (first)
his reply:
So it looks like this:
6421427 netra x1 slagged by NFS over ZFS leading to long spins in the A
thanks for your feedback!
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Hello zfs-discuss,
Server is v440, Solaris 10U2 + patches. Each test repeated at least two times
and two results posted. Server connected with dual-ported FC card with
MPxIO using FC-AL (DAS).
1. 3510, RAID-10 using 24 disks from two enclosures, random
optimization, 32KB stripe width, write-b
Hello,
The change to add the SYNC_NV bit came with SBC-2 rev. 14 (May 2004).
In SBC-2 rev. 13 (March 2004) the bit was Reserved.
It looks like devices that don't support this bit should continue to
sync to media and ignore the fact that the bit is set, but it was a
Reserved bit before rev
Hi,
[EMAIL PROTECTED] cat /etc/release
Solaris Nevada snv_33 X86
Copyright 2006 Sun Microsystems, Inc. All Rights Reserved.
Use is subject to license terms.
Assembled 06 February 2006
I have zfs running well o
Ricardo Correia wrote:
Wow, congratulations, nice work!
I'm the one porting ZFS to FUSE, and seeing you make such fast progress is
very encouraging :)
I'd like to throw a "me too" into the pile of thank-you messages!
I spent part of the weekend expanding and manipulating a set of L
I've filed a bug for the problem Tim mentions below.
6463140 zfs recv with a snapshot name that has 2 @@ in a row succeeds
This is most likely due to the order in which we call
zfs_validate_name in the zfs recv code, which would explain why other
snapshot commands like 'zfs snapshot' will fai
Tony Galway wrote:
A question (well, let's make it 3 really) – Is vdbench a useful tool when testing file system performance of a ZFS file system?
Not really. VDBench simply reads and writes from the allocated file.
Filesystem tests do things like create files, read files, delete files,
move f
Hi all,
Customer has another questions. I'm resending :
I guess since the zones we are working with
are running/acting as Oracle 10 database
servers, the 100% memory usage reported by prstat is not
accurate. Also, from the text below it seems
that rcapd is not the way to go to segregate
memory in zones and
I understand Legato doesn't work with ZFS yet. I looked through the
email archives, cpio and tar were mentioned. What is my best option
if I want to dump approx 40G to tape?
-Karen
Karen Chau wrote:
I understand Legato doesn't work with ZFS yet. I looked through the
email archives, cpio and tar were mentioned. What is my best option
if I want to dump approx 40G to tape?
Am I correct in saying that the issue was not getting the files to tape,
but properly storing comp
On Wed, Aug 23, 2006 at 09:57:04AM -0400, James Foronda wrote:
> Hi,
>
> [EMAIL PROTECTED] cat /etc/release
>Solaris Nevada snv_33 X86
> Copyright 2006 Sun Microsystems, Inc. All Rights Reserved.
>Use is subject to license terms.
>
Luke Scharf wrote:
Karen Chau wrote:
I understand Legato doesn't work with ZFS yet. I looked through the
email archives, cpio and tar were mentioned. What is my best option
if I want to dump approx 40G to tape?
Am I correct in saying that the issue was not getting the files to tape,
but p
> The Legato
> software aborts the entire backup when it receives ENOSYS from the
> acl(2) syscall. Legato receives the ENOSYS because it was trying to
> find out how many POSIX draft ACL entries exist on a given file. Since
> ZFS doesn't support POSIX draft ACLs it returns ENOSYS. Whereas,
Luke Scharf <[EMAIL PROTECTED]> wrote:
> Karen Chau wrote:
> > I understand Legato doesn't work with ZFS yet. I looked through the
> > email archives, cpio and tar were mentioned. What is my best option
> > if I want to dump approx 40G to tape?
> Am I correct in saying that the issue was no
On Wed, 2006-08-23 at 14:38 -0700, Darren Dunham wrote:
> For those folks that like to live just *over* the edge and would like to
> use ACL-less backups on ZFS with existing networker clients, what is the
> possibility of creating a pre-loadable library that wrapped acl(2)?
I may regret admitting
On 24/08/2006, at 6:40 AM, Matthew Ahrens wrote:
However, once you upgrade to build 35 or later (including S10
6/06), do
not downgrade back to build 34 or earlier, per the following message:
Summary: If you use ZFS, do not downgrade from build 35 or later to
build 34 or earlier
On Thu, Aug 24, 2006 at 08:12:34AM +1000, Boyd Adamson wrote:
> Isn't the whole point of the zpool upgrade process to allow users to
> decide when they want to remove the "fall back to old version" option?
>
> In other words shouldn't any change that eliminates going back to an
> old rev requi
On 24/08/2006, at 8:20 AM, Matthew Ahrens wrote:
On Thu, Aug 24, 2006 at 08:12:34AM +1000, Boyd Adamson wrote:
Isn't the whole point of the zpool upgrade process to allow users to
decide when they want to remove the "fall back to old version"
option?
In other words shouldn't any change that
I need help on this and don't know what to give to the customer.
System is a V40z running Solaris 10 x86 and the customer is trying to create 3
disks as raidz. After creating the pool and
looking at the disk space and configuration, he thinks that this is not
a raidz pool but rather
stripes. This is what exac
On 8/23/06, Arlina Goce-Capiral <[EMAIL PROTECTED]> wrote:
I need help on this and don't know what to give to the customer.
System is a V40z running Solaris 10 x86 and the customer is trying to create 3
disks as raidz. After creating the pool and
looking at the disk space and configuration, he thinks that t
Hello James,
Thanks for the response.
Yes. I got the bug id# and forwarded that to the customer. But the customer said
that he can create a file
as large as the stripe of the 3 disks. And if he pulls a disk, the
whole zpool fails, so there's no
degraded pool, it just fails.
Any idea on this?
Th
I just realized that I forgot to send this message to zfs-discuss back
in May when I fixed this bug. Sorry for the delay.
The putback of the following bug fix to Solaris Nevada build 42 and
Solaris 10 update 3 build 3 (and coinciding with the change to ZFS
on-disk version 3) changes the behavior
On 24/08/2006, at 10:14 AM, Arlina Goce-Capiral wrote:
Hello James,
Thanks for the response.
Yes. I got the bug id# and forwarded that to the customer. But the customer
said that he can create a file
as large as the stripe of the 3 disks. And if he pulls a disk,
the whole zpool fails, so ther
Hi, experts,
I installed Solaris 10 6/06 x86 on VMware 5.5, and I administer ZFS from the command line and
the web interface; all is good. The web admin is more convenient, since I needn't type commands. But
after my computer lost power and restarted, I have a problem with the ZFS web admin
(https://hostname:6789/zfs).
The problem is,