Hello Rainer,
Tuesday, November 14, 2006, 4:43:32 AM, you wrote:
RH> Sorry for the delay...
RH> No, it doesn't. The format command shows the drive, but zpool
RH> import does not find any pools. I've also used the detached bad
RH> SATA drive for testing; no go. Once a drive is detached, there
RH> seems to be no (not enough?) information about the pool that allows import.
Torrey McMahon wrote:
Robert Milkowski wrote:
Hello Torrey,
Friday, November 10, 2006, 11:31:31 PM, you wrote:
TM> Robert Milkowski wrote:
Also, a scrub can consume all of the CPU power on smaller and older
machines, and that's not always what I would like.
REP> The big question, though, is "10% of
On 11/13/06, Robert Milkowski <[EMAIL PROTECTED]> wrote:
Generally the requirements are similar to what you would need with a
different file system.
CL> i noticed the "free" physical memory dropped
CL> quickly while doing "dd" on zfs files:
It's because ZFS doesn't use the page cache; it caches data in kernel memory instead.
Si
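For what it's worth, one quick way to see where the "missing" memory went is the kernel memory breakdown from mdb. A minimal sketch (run as root; the exact buckets vary by build):

    # rough split of physical memory between kernel, anon, page cache and free
    echo '::memstat' | mdb -k

If the Kernel bucket grows by roughly the amount that free shrinks while dd is running, that's the ZFS ARC holding the file data in kernel memory.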
Sorry for the delay...
No, it doesn't. The format command shows the drive, but zpool import does not
find any pools. I've also used the detached bad SATA drive for testing; no go.
Once a drive is detached, there seems to be no (not enough?) information about
the pool that allows import.
I have
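In case it helps anyone searching the archives later: as I understand it, detaching a device invalidates its ZFS labels, which is why zpool import can't find a pool on it afterwards. A quick way to check what, if anything, is left on the disk (the device name below is made up; use whatever format reports):

    # dump the ZFS labels from the device
    zdb -l /dev/rdsk/c1t1d0s0

    # search a specific device directory for importable pools
    zpool import -d /dev/dsk

If zdb shows no valid labels, there is nothing for zpool import to work with.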
Hello Cecilia,
Tuesday, November 14, 2006, 3:01:42 AM, you wrote:
CL> Hi,
CL>
CL> not sure whether there is a minimum physical
CL> memory requirement to run zfs?
Generally the requirements are similar to what you would need with a
different file system.
CL> i noticed the "free" physical memory dropped
CL> quickly while doing "dd" on zfs files:
Hi,
not sure whether there is a minimum physical
memory requirement to run zfs?
i noticed the "free" physical memory dropped
quickly while doing "dd" on zfs files:
 kthr      memory            page            disk          faults      cpu
 r b w   swap  free  re  mf pi po fr de sr m1 m1 m1 m2   in   sy   cs us sy id
Matt:
What's your contact information, so that I can send it to you?
My apologies for taking so long to get back to this.
Sincerely,
Ewen
Hi Mike,
Yes, outside of the hot-spares feature, you can detach, offline, and
replace existing devices in a pool, but you can't remove devices, yet.
This feature work is being tracked under this RFE:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=4852783
Cindy
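For reference, the operations that are supported today look roughly like this (pool and device names below are hypothetical):

    # take a device offline temporarily
    zpool offline tank c1t2d0

    # replace a device with a new one (resilver starts automatically)
    zpool replace tank c1t2d0 c1t3d0

    # detach one side of a mirror
    zpool detach tank c1t2d0

Shrinking a pool by removing a top-level device is what the RFE above tracks.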
Mike Seda wrote:
Hello Jason,
Friday, November 3, 2006, 4:54:04 AM, you wrote:
JJWW> Hi Robert,
JJWW> Out of curiosity would it be possible to see the same test but hitting
JJWW> the disk with write operations instead of read?
Sorry... I really wanted to, but I'm afraid I won't find enough time
before my vacation.
Tomas Ögren wrote:
On 13 November, 2006 - Sanjeev Bagewadi sent me these 7,1K bytes:
Tomas,
comments inline...
arc::print struct arc
{
anon = ARC_anon
mru = ARC_mru
mru_ghost = ARC_mru_ghost
mfu = ARC_mfu
mfu_ghost = ARC_mfu_ghost
size = 0x6f7a400
p = 0x5d9bd5a
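For anyone who wants to reproduce the output above: it is the kernel's global arc struct dumped from a live system. A minimal sketch, assuming the arc symbol is available on your build:

    # dump the whole struct (as quoted above)
    echo 'arc::print struct arc' | mdb -k

    # or just the interesting fields, in decimal
    echo 'arc::print -d struct arc size p' | mdb -k

Here size is the current ARC footprint in bytes and p is the adaptive target for the MRU portion of the cache.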
Hi All,
From reading the docs, it seems that you can add devices (non-spares)
to a zpool, but you cannot take them away, right?
Best,
Mike
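For illustration, growing a pool is a one-liner, and the only remove that zpool supports today is for hot spares (device names below are made up):

    # grow the pool by adding another mirror pair
    zpool add tank mirror c2t0d0 c2t1d0

    # hot spares are the exception -- they can be removed
    zpool remove tank c3t0d0

Everything else is add-only until the RFE mentioned elsewhere in this thread is implemented.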
Victor Latushkin wrote:
Maybe something like the "slow" parameter of VxVM?
slow[=iodelay]
Reduces the system performan
Maybe something like the "slow" parameter of VxVM?
slow[=iodelay]
Reduces the system performance impact of copy
operations. Such operations are usually performed
on small regions of the volume (nor-
Howdy Robert.
Robert Milkowski wrote:
You've got the same behavior with any LVM when you replace a disk,
so it's not something unexpected for admins. Also, most of the time
they expect the LVM to resilver ASAP. With the default setting not being 100%,
you'll definitely see people complaining that ZFS is slooo
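For what it's worth, the easiest way to see how a resilver is being prioritized in practice is to watch its progress while the box is under load; something like (pool name assumed):

    # shows resilver progress, estimated completion and any errors
    zpool status -v tank

which at least makes it obvious whether the default throttling is what people end up complaining about.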
Tomas Ögren writes:
> On 13 November, 2006 - Sanjeev Bagewadi sent me these 7,1K bytes:
>
> > Tomas,
> >
> > comments inline...
> >
> >
> > >>arc::print struct arc
> > >>
> > >>
> > >{
> > > anon = ARC_anon
> > > mru = ARC_mru
> > > mru_ghost = ARC_mru_ghost
On 13 November, 2006 - Sanjeev Bagewadi sent me these 7,1K bytes:
> Tomas,
>
> comments inline...
>
>
> >>arc::print struct arc
> >>
> >>
> >{
> > anon = ARC_anon
> > mru = ARC_mru
> > mru_ghost = ARC_mru_ghost
> > mfu = ARC_mfu
> > mfu_ghost = ARC_mfu_ghost
> > size = 0x6f7a400
Anton B. Rang wrote:
Pretty much the only way to tell if you've used up all the space available for
file nodes is to actually try creating a file, though if 'df -e' returns 0 you
*may* not be able to create any new files. It may be possible to create empty
files (and very small ones) even i
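A quick way to sanity-check that from a shell, assuming a file system mounted at /tank/fs:

    # number of files that can still be created (0 suggests none)
    df -e /tank/fs

    # the real test: try to create one
    touch /tank/fs/.probe && rm /tank/fs/.probe

On ZFS the files-free figure is derived from the remaining space rather than from a fixed inode table, so it changes as data is written.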