Mike,
This RFE is still being worked on, and I have no ETA for completion...
cs
Mike Seda wrote:
I noticed that there is still an open bug regarding removing devices
from a zpool:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=4852783
Does anyone know if or when this feature will be implemented?
Torrey McMahon wrote:
Richard Elling - PAE wrote:
Torrey McMahon wrote:
Robert Milkowski wrote:
Hello Torrey,
Friday, November 10, 2006, 11:31:31 PM, you wrote:
[SNIP]
Tunable in the form of a pool property, with a default of 100%.
On the other hand, maybe the simple algorithm Veritas uses is good
enough - a simple delay be
Hi Mike,
Yes, outside of the hot-spares feature, you can detach, offline, and
replace existing devices in a pool, but you can't remove devices, yet.
This feature work is being tracked under this RFE:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=4852783
Cindy
Mike Seda wrote:
Hi All,
From reading the docs, it seems that you can add devices (non-spares)
to a zpool, but you cannot take them away, right?
Best,
Mike
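For reference, a minimal sketch of the operations Cindy lists (the pool
and device names here are invented):

   # Replace a device in place - supported today:
   zpool replace tank c0t2d0 c0t3d0
   # Temporarily offline a device - supported today:
   zpool offline tank c0t2d0
   # Detach one side of a mirror - supported today:
   zpool detach tank c0t2d0
   # Removing a top-level device to shrink the pool is what
   # RFE 4852783 tracks; no such command exists yet.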
Victor Latushkin wrote:
Maybe something like the "slow" parameter of VxVM?
slow[=iodelay]
Reduces the system performance impact of copy
operations. Such operations are usually performed
on small regions of the volume (nor-
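As a rough sketch of how that parameter is used (the disk group name is
invented, and this assumes vxrecover accepts -o slow as the man page
excerpt above describes):

   # Throttle recovery in disk group "mydg" with a 500 ms delay
   # between region copies (the default iodelay is 250 ms):
   vxrecover -g mydg -o slow=500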
Howdy Robert.
Robert Milkowski wrote:
You've got the same behavior with any LVM when you replace a disk.
So it's not something unexpected for admins. Also, most of the time
they expect the LVM to resilver ASAP. With the default setting not
being 100%, you'll definitely see people complaining that ZFS is slooo
Robert Milkowski wrote:
Also scrub can consume all CPU power on smaller and older machines and
that's not always what I would like.
REP> The big question, though, is "10% of what?" User CPU? iops?
AH> Probably N% of I/O Ops/Second would work well.
Or if 100% means full speed, then 1
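Whatever metric a throttle ends up using, both candidates can at least
be observed with stock Solaris tools while a scrub runs (the pool name
is invented):

   zpool scrub tank
   iostat -xn 5     # per-device iops and bandwidth
   mpstat 5         # per-CPU utilization, e.g. checksum cost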
Richard Elling - PAE wrote:
For modern machines, which *should* be the design point, the channel
bandwidth is underutilized, so why not use it?
And what about encrypted disks? Simply create a zpool with checksum=sha256,
fill it up, then scrub. I'd be happy if I could use my machine during
s
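A minimal sketch of the experiment Daniel describes (pool and disk
names are invented):

   # SHA-256 checksums make scrub CPU-bound rather than I/O-bound
   # on older machines:
   zpool create tank c0t1d0
   zfs set checksum=sha256 tank
   # ... fill the pool with data, then:
   zpool scrub tank
   zpool status tank     # watch scrub progress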
Richard Elling - PAE wrote:
The better approach is for the file system to do what it needs
to do as efficiently as possible, which is the current state of ZFS.
This implies that the filesystem has exclusive use of the channel - SAN
or otherwise - as well as the storage array front end control
Daniel Rock wrote:
Richard Elling - PAE wrote:
The big question, though, is "10% of what?" User CPU? iops?
Maybe something like the "slow" parameter of VxVM?
slow[=iodelay]
Reduces the system performance impact of copy
operations. Such operations are us
Richard Elling - PAE wrote:
Incidentally, since ZFS schedules the resync iops itself, it can
really move along on a mostly idle system. You should be able to resync
at near the media speed for an idle system. By contrast, a hardware
RAID array has no knowledge of the context of the data
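To see the behavior Richard describes, kick off a replace on a mostly
idle pool and watch the resilver (names invented):

   # ZFS resilvers only allocated blocks, scheduled by the
   # filesystem itself rather than by the array:
   zpool replace tank c0t2d0 c0t4d0
   zpool status tank     # reports resilver progress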
Al Hopper wrote:
[1] Using MTTDL = MTBF^2 / (N * (N-1) * MTTR)
But ... I'm not sure I buy into your numbers given the probability that
more than one disk will fail inside the service window - given that the
disks are identical? Or ... a disk failure occurs at 5:01 PM (quitting
time) on a Frida
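To make the footnoted formula concrete with invented numbers: for
MTBF = 1,000,000 hours, N = 8 disks, and MTTR = 24 hours,
MTTDL = (10^6)^2 / (8 * 7 * 24) ~= 7.4 * 10^8 hours, or roughly
85,000 years. Since MTTDL scales as 1/MTTR, the length of the service
window Al describes matters as much as the disk count.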
ozan s. yigit wrote:
for s10u2, documentation recommends 3 to 9 devices in raidz. what is the
basis for this recommendation? i assume it is performance and not failure
resilience, but i am just guessing... [i know, recommendation was intended
for people who know their raid cold, so it needed no f
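For illustration, a raidz vdev inside the recommended 3-to-9-device
range might be created like this (pool and disk names invented):

   # A 5-wide raidz: the capacity of 4 disks plus one disk of parity:
   zpool create tank raidz c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0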