> And it started replacement/resilvering... after a few minutes the system became
unavailable. A reboot only gives me a few minutes, then resilvering makes the system
unresponsive.
>
> Is there any workaround or patch for this problem?
Argh, sorry -- the problem is that we don't do aggressive enough scrub
Hello Eric,
Wednesday, August 16, 2006, 4:48:46 PM, you wrote:
ES> What does 'zfs list -o name,mountpoint' and 'zfs mount' show after the
ES> import? My only guess is that you have some explicit mountpoint set
ES> that's confusing the DSL-ordered mounting code. If this is the case,
ES> this w
I believe this is what you're hitting:
6456888 zpool attach leads to memory exhaustion and system hang
We are currently looking at fixing this so stay tuned.
Thanks,
George
Daniel Rock wrote:
Joseph Mocker schrieb:
Today I attempted to upgrade to S10_U2 and migrate some mirrored UFS
SVM part
Hello Bob,
Wednesday, August 16, 2006, 3:55:26 PM, you wrote:
BE> Hi, this is a follow up to "Significant pauses to zfs writes".
BE> I'm getting about 15% slower performance using ZFS raidz than if
BE> I just mount the same type of drive using ufs.
BE> Based on some of the suggestions I receive
I have similar behaviour on S10 U2 but in a different situation.
I had a working mirror with one of its disks failed:
    mirror     DEGRADED     0     0     0
      c0d0     ONLINE       0     0     0
      c0d1     UNAVAILABLE  0     0     0
After replacing the corrupted hard disk I've run:
# zpool replace tank c0d1
And it started replacement/resilvering... afte
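For reference, a minimal sketch of the replace-and-monitor sequence described above (pool and device names taken from the post; 'zpool status' reports resilver progress and 'zpool iostat' the resulting I/O load):
  # zpool replace tank c0d1        # start resilvering onto the replacement disk
  # zpool status -v tank           # watch resilver progress and pool health
  # zpool iostat tank 5            # sample pool I/O every 5 seconds while it resilvers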
> - When the filesystems have compress=ON I see the following: reads from
compressed filesystems come in waves; zpool will report no read activity for long
durations (60+ seconds) while the write activity is consistently reported at
20 MB/s (no variation in the write rate throughout the test).
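A rough sketch of how the read/write waves described above can be observed (the dataset name tank/fs is a placeholder; 'zpool iostat' samples pool-level bandwidth at a fixed interval):
  # zfs get compression tank/fs    # confirm compression is on for the dataset under test
  # zpool iostat tank 5            # read bandwidth stalls while writes hold steady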
Wee Yeh Tan wrote:
Hi all,
My company will be acquiring the Sun SE6920 for our storage
virtualization project and we intend to use quite a bit of ZFS as
well. The two technologies seem somewhat at odds since the 6920 means
layers of hardware abstraction but ZFS seems to prefer more direct
access
Completely forgot to mention the OS in my previous post; Solaris 10 06/06.
Test setup:
- E2900 with 12 US-IV+ 1.5GHz processors, 96GB memory, 2x2Gbps FC HBAs, MPxIO
in round-robin config.
- 50 x 64GB EMC disks presented on both FCs.
- ZFS pool defined using all 50 disks
- Multiple ZFS filesystems built on the above pool.
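For illustration only, a sketch of how a pool like the one above might have been built (device names are placeholders; the post says all 50 EMC LUNs went into one pool with several filesystems on top):
  # zpool create tank c2t0d0 c2t1d0 c2t2d0    # ...and so on for the remaining LUNs
  # zfs create tank/fs1
  # zfs create tank/fs2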
I'm observing the following:
- When the
For an explanation of why this report has changed, see:
http://mail.opensolaris.org/pipermail/opensolaris-discuss/2006-August/019428.html
=
zfs-discuss 08/01 - 08/15
=
Size of all threads during period:
Thread size Topic
---
On 8/16/06, Frank Cusack <[EMAIL PROTECTED]> wrote:
On August 16, 2006 10:34:31 AM -0700 Joe Little <[EMAIL PROTECTED]> wrote:
> On 8/16/06, Frank Cusack <[EMAIL PROTECTED]> wrote:
>> On August 16, 2006 10:25:18 AM -0700 Joe Little <[EMAIL PROTECTED]> wrote:
>> > Is there a way to allow simple ex
On Wed, Aug 16, 2006 at 01:09:59PM -0700, Frank Cusack wrote:
> Sorry, I'm an email deleter, not a hoarder, so this is a new thread.
> Usually I save a thread I'm interested in for a while before killing it,
> but I jumped the gun this time. Anyway ...
>
> I looked up neopath, cool product!
See a
Sorry, I'm an email deleter, not a hoarder, so this is a new thread.
Usually I save a thread I'm interested in for a while before killing it,
but I jumped the gun this time. Anyway ...
I looked up neopath, cool product!
But ISTM that using it would mean giving up some ZFS features like snapshots.
I wo
On Wed, Aug 16, 2006 at 01:44:48PM -0400, William Fretts-Saxton wrote:
> Perhaps this JNI code is what I'm looking for?
>
> http://cvs.opensolaris.org/source/xref/on/usr/src/lib/libzfs_jni/
>
> It says the ZFS GUI uses this, so I'm assuming anyone could. Although
> I am a Java programmer, I ha
On August 16, 2006 10:34:31 AM -0700 Joe Little <[EMAIL PROTECTED]> wrote:
On 8/16/06, Frank Cusack <[EMAIL PROTECTED]> wrote:
On August 16, 2006 10:25:18 AM -0700 Joe Little <[EMAIL PROTECTED]> wrote:
> Is there a way to allow simple export commands that traverse multiple
> ZFS filesystems for e
Perhaps this JNI code is what I'm looking for?
http://cvs.opensolaris.org/source/xref/on/usr/src/lib/libzfs_jni/
It says the ZFS GUI uses this, so I'm assuming anyone could. Although
I am a Java programmer, I have ZERO experience with JNI. Am I on the
right track?
Bill Sommerfeld wrote:
On 8/16/06, Frank Cusack <[EMAIL PROTECTED]> wrote:
On August 16, 2006 10:25:18 AM -0700 Joe Little <[EMAIL PROTECTED]> wrote:
> Is there a way to allow simple export commands that traverse multiple
> ZFS filesystems for exporting? I'd hate to have to have hundreds of
> mounts required for every p
On August 16, 2006 10:25:18 AM -0700 Joe Little <[EMAIL PROTECTED]> wrote:
Is there a way to allow simple export commands that traverse multiple
ZFS filesystems for exporting? I'd hate to have to have hundreds of
mounts required for every point in a given tree (we have users,
projects, src, etc)
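A hedged sketch of the usual workaround (dataset names are placeholders): the sharenfs property is inherited, so setting it once on a parent shares every descendant filesystem, although each child is still a separate mount from the client's point of view:
  # zfs set sharenfs=rw tank/export           # descendants inherit the share setting
  # zfs create tank/export/users/user1        # shared automatically via inheritance
  # zfs get -r sharenfs tank/export           # verify which datasets are shared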
This is my first post on the opensolaris.org forums. I'm still trying
to figure out whether I'm actually on the zfs-discuss alias and, if not,
how to add myself to it!
Anyway, I'm looking for a way to get information like "zpool list" or
"zfs get all " through an API instead of parsing the
i
One of the things espoused on this list again and again is that quotas
for users are not ideal, and that one should just make a filesystem
per user.
OK... I did that. I now have, within just one "volume" in my pool, some
380-odd users. By way of example, let's say I have
/pool/common/users/user1 ...
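For illustration, a minimal sketch of how such a per-user layout is typically created (pool and path names follow the example above; the quota value is arbitrary):
  # for u in user1 user2 user3; do
      zfs create pool/common/users/$u
      zfs set quota=10g pool/common/users/$u
    done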
On Wed, 2006-08-16 at 11:49 -0400, Eric Enright wrote:
> On 8/16/06, William Fretts-Saxton <[EMAIL PROTECTED]> wrote:
> > I'm having trouble finding information on any hooks into ZFS. Is
> > there information on a ZFS API so I can access ZFS information
> > directly as opposed to having to const
On 8/16/06, William Fretts-Saxton <[EMAIL PROTECTED]> wrote:
I'm having trouble finding information on any hooks into ZFS. Is there
information on a ZFS API so I can access ZFS information directly as opposed to
having to constantly parse 'zpool' and 'zfs' command output?
libzfs: http://cvs.
I'm having trouble finding information on any hooks into ZFS. Is there
information on a ZFS API so I can access ZFS information directly as opposed to
having to constantly parse 'zpool' and 'zfs' command output?
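Until a documented library interface exists, the usual stopgap is the scripted output mode of the CLI tools, which is much easier to parse than the default tables (a sketch, assuming a build where 'zfs list' and 'zfs get' accept -H and -o):
  # zfs list -H -o name,used,avail,mountpoint      # tab-separated, no header line
  # zfs get -H -o name,property,value all tank     # machine-parseable property dump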
This seems like a reasonable RFE. Feel free to file it at
bugs.opensolaris.org.
- Eric
On Wed, Aug 16, 2006 at 06:44:44AM +0200, Robert Milkowski wrote:
> Hello zfs-discuss,
>
> I do have several pools in a SAN shared environment where some pools
> are mounted by one server and some by anot
What does 'zfs list -o name,mountpoint' and 'zfs mount' show after the
import? My only guess is that you have some explicit mountpoint set
that's confusing the DSL-ordered mounting code. If this is the case,
this was fixed in build 46 (likely to be in S10u4) to always mount
datasets in mountpoi
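A sketch of the diagnostic being requested here (the pool name tank is a placeholder; the idea is to compare each dataset's configured mountpoint with what is actually mounted after the import):
  # zfs list -o name,mountpoint -r tank    # configured mountpoints for every dataset
  # zfs mount                              # only the datasets that are currently mounted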
On 16/08/06, Joerg Schilling <[EMAIL PROTECTED]> wrote:
"Dick Davies" <[EMAIL PROTECTED]> wrote:
> As an aside, is there a general method to generate bootable
> opensolaris DVDs? The only way I know of getting opensolaris on
> is installing sxcr and then BFUing on top.
A year ago, I did publish
Hello Mark,
Wednesday, August 16, 2006, 3:23:43 PM, you wrote:
MM> Robert,
MM> Are you sure that nfs-s5-p0/d5110 and nfs-s5-p0/d5111 are mounted
MM> following the import? These messages imply that the d5110 and d5111
MM> directories in the top-level filesystem of pool nfs-s5-p0 are not
MM> empt
Robert,
Are you sure that nfs-s5-p0/d5110 and nfs-s5-p0/d5111 are mounted
following the import? These messages imply that the d5110 and d5111
directories in the top-level filesystem of pool nfs-s5-p0 are not
empty. Could you verify that 'df /nfs-s5-p0/d5110' displays
nfs-s5-p0/d5110 as the "Fil
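A short sketch of the verification being asked for (paths come from the thread): if the child dataset really is mounted, df should report the dataset itself, not the parent pool, as the filesystem:
  # df /nfs-s5-p0/d5110     # the filesystem reported should be nfs-s5-p0/d5110, not nfs-s5-p0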
"Dick Davies" <[EMAIL PROTECTED]> wrote:
> As an aside, is there a general method to generate bootable
> opensolaris DVDs? The only way I know of getting opensolaris on
> is installing sxcr and then BFUing on top.
A year ago, I did publish a toolkit to create bootable SchilliX CDs/DVDs.
Would thi
On Wednesday 16 August 2006 11:55, Wee Yeh Tan wrote:
> Hi all,
>
> My company will be acquiring the Sun SE6920 for our storage
> virtualization project and we intend to use quite a bit of ZFS as
> well. The two technologies seem somewhat at odds since the 6920 means
> layers of hardware abstractio
On 8/7/06, Adam Leventhal <[EMAIL PROTECTED]> wrote:
Needless to say, this was a pretty interesting piece of the keynote from a
technical point of view that had quite a few of us scratching our heads.
After talking to some Apple engineers, it seems like what they're doing is
more or less this:
W
Jaganraj Janarthanan wrote:
The customer came back saying:
I've tried this and it doesn't work. It still displays the size as 1 gig after
saying it's bringing the LUN back online.
Eric Schrock wrote On 08/15/06 20:49,:
[ For ZFS discussions, try 'zfs-discuss@opensolaris.org' or
'[EMAIL PROTEC
Wee Yeh Tan wrote:
My company will be acquiring the Sun SE6920 for our storage
virtualization project and we intend to use quite a bit of ZFS as
well. The two technologies seem somewhat at odds since the 6920 means
layers of hardware abstraction but ZFS seems to prefer more direct
access to disk.
homerun wrote:
I have been using ZFS since the 06/06 U2 release came out. One thing
I have noticed: ZFS eats a lot of memory. Right after boot memory usage is
about 280M, but after accessing ZFS disks usage rises quickly to 900M,
and it seems to stay at around 90% of total memory. I also noted it frees
used m
Hi all,
My company will be acquiring the Sun SE6920 for our storage
virtualization project and we intend to use quite a bit of ZFS as
well. The two technologies seem somewhat at odds since the 6920 means
layers of hardware abstraction but ZFS seems to prefer more direct
access to disk.
I tried t
Hi
I have been using ZFS since the 06/06 U2 release came out.
One thing I have noticed:
ZFS eats a lot of memory.
Right after boot memory usage is about 280M,
but after accessing ZFS disks usage rises quickly to 900M,
and it seems to stay at around 90% of total memory.
I also noted it frees used mem but running
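A hedged sketch of how to see where that memory is going (the ZFS ARC grows into free memory and is released under pressure; '::memstat' breaks kernel memory down by category, and '::arc', where the build provides it, shows the cache size directly):
  # echo ::memstat | mdb -k     # kernel / anon / free page breakdown
  # echo ::arc | mdb -k         # ARC size details, if the dcmd is available on your build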