Hi,
there might be value in a "zpool scrub -r" (as in "re-write blocks") beyond
the prior discussion on encryption and compression.
For instance, a bit that is just about to rot might not be detected by a
regular zfs scrub, but it would be rewritten by a re-writing scrub.
It would also exe
Mark Shellenbaum wrote:
The following is the delegated admin model that Matt and I have been
working on. At this point we are ready for your feedback on the
proposed model.
-Mark
PERMISSION GRANTING
zfs a
Mark Shellenbaum wrote:
Glenn Skinner wrote:
The following is a nit-level comment, so I've directed it only to you,
rather than to the entire list.
Date: Mon, 17 Jul 2006 09:57:35 -0600
From: Mark Shellenbaum <[EMAIL PROTECTED]>
Subject: [zfs-discuss] Proposal: delegated administ
> >PERMISSION GRANTING
> >
> > zfs allow [-l] [-d] <"everyone"|user|group> [,...] \
> >...
> > zfs unallow [-r] [-l] [-d]
> >
>
> If we're going to use English words, it should be "allow" and "disallow".
The problem with 'disallow' is that it implies precluding a behavior
that would no
Jeff Bonwick wrote:
PERMISSION GRANTING
zfs allow [-l] [-d] <"everyone"|user|group> [,...] \
...
zfs unallow [-r] [-l] [-d]
If we're going to use English words, it should be "allow" and "disallow".
The problem with 'disallow' is that it implies precluding a beha
Jeff Bonwick wrote:
PERMISSION GRANTING
zfs allow [-l] [-d] <"everyone"|user|group> [,...] \
...
zfs unallow [-r] [-l] [-d]
If we're going to use English words, it should be "allow" and "disallow".
The problem with 'disallow' is that it implies precluding a behavior
that wo
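For readers skimming the proposal, a hypothetical invocation of the proposed syntax might look like this; the permission names and the dataset argument are illustrative assumptions, not part of the quoted text:
  # grant a group the ability to create and snapshot descendants of tank/home
  zfs allow -d staff create,snapshot tank/home
  # later withdraw that grant
  zfs unallow -d staff create,snapshot tank/home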
> Of course the re-writing must be 100% safe, but that can be done with COW
> quite easily.
Almost. The hard part is snapshots. If you modify a data block,
you must also modify every L1 indirect block in every snapshot
that points to it, and every L2 above each L1, all the way up
to the uberbloc
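A back-of-the-envelope reading of that point (an estimate, not a description of the actual implementation): if the block is still referenced by S snapshots through an indirect tree of depth D, a remap forces copy-on-write rewrites of
  up to S * D indirect blocks (plus the uberblock update)
which is why "just rewrite the block in place" is not as cheap as it sounds.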
I like to think of delegation as being a bit different than granting
permission--in fact, as a special permission that may include counts.
For example, you might delegate to a manager the ability to grant
select permissions. You may want to limit the number of users the
manager may grant these
> For 6 disks, 3x2-way RAID-1+0 offers better resiliency than RAID-Z
> or RAID-Z2.
Maybe I'm missing something, but it ought to be the other way around.
With 6 disks, RAID-Z2 can tolerate any two disk failures, whereas
for 3x2-way mirroring, of the (6 choose 2) = 6*5/2 = 15 possible
two-disk failu
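Working the counting argument through in the same notation:
  total two-disk failure combinations: (6 choose 2) = 15
  combinations fatal to 3x2-way mirroring (both failures in the same pair): 3, i.e. 3/15 = 20%
  combinations fatal to RAID-Z2: 0 (any two failures are survivable)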
Richard Elling wrote:
> Dana H. Myers wrote:
>
>> Jonathan Wheeler wrote:
>>
>>> On the one hand, that's greater than 1 disk's worth, so I'm getting
>>> striping performance out of a mirror. GO ZFS. On the other, if I can get
>>> striping performance from mirrored reads, why is it only 94MB/sec
Hi,
a simple question..
is "add dataset" not part of zonecfg?
global# zonecfg -z myzone (OK)
zonecfg:myzone> add dataset (fails as there is no dataset option)
zonecfg:myzone> add zfs (fails as there is no zfs option either)
Basically, how do I add a dataset to a zone?
Thanks
Roshan
please cc
On 7/18/06, Roshan Perera <[EMAIL PROTECTED]> wrote:
Hi,
a simple question..
is "add dataset" not part of zonecfg?
global# zonecfg -z myzone (OK)
zonecfg:myzone> add dataset (fails as there is no dataset option)
zonecfg:myzone> add zfs (fails as there is no zfs option either)
Basically how do
Which version of Solaris are you using? You should be able to add a
dataset if you're running Solaris Express. I'm not sure whether this feature
was backported to S10u2.
global# uname -a
SunOS psonali1 5.11 snv_42 sun4u sparc SUNW,Sun-Fire-V210
global# zonecfg -z fozoone
fozoone: No such zone configured
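For reference, on builds that do have the dataset resource, the sequence looks roughly like this (zone and dataset names are placeholders):
  global# zonecfg -z myzone
  zonecfg:myzone> add dataset
  zonecfg:myzone:dataset> set name=tank/myzone
  zonecfg:myzone:dataset> end
  zonecfg:myzone> commit
  zonecfg:myzone> exit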
Hello,
What is the access algorithm used within multi-component pools for a
given pool, and does it change when one or more members of the pool
become degraded?
examples:
zpool create mtank mirror c1t0d0 c2t0d0 mirror c3t0d0 c4t0d0 mirror
c5t0d0 c6t0d0
or:
zpool create ztank raidz c1t0d
On Tue, Jul 18, 2006 at 01:27:21AM -0700, Jeff Bonwick wrote:
> > Of course the re-writing must be 100% safe, but that can be done with COW
> > quite easily.
>
> the ability to remap blocks would be *so* useful -- it would
> enable compression of preexisting data, removing devices from
> a pool, a
> On Mon, 17 Jul 2006, Roch wrote:
> >
> > Sorry to plug my own blog but have you had a look at these?
> >
> > http://blogs.sun.com/roller/page/roch?entry=when_to_and_not_to (raidz)
> > http://blogs.sun.com/roller/page/roch?entry=the_dynamics_of_zfs
> >
> > Also, my thinking i
> On 7/17/06, Jonathan Wheeler <[EMAIL PROTECTED]> wrote:
> > Reads: 4 disks gives me 190MB/sec. WOAH! I'm very happy with that. 8 disks should scale to 380 then,
> Well 320 isn't all that far off - no biggie.
> > Looking at the 6 disk raidz is interesting though, 290MB/sec. The disks are
On 7/18/06, Brian Hechinger <[EMAIL PROTECTED]> wrote:
On Tue, Jul 18, 2006 at 01:27:21AM -0700, Jeff Bonwick wrote:
>
> the ability to remap blocks would be *so* useful -- it would
> enable compression of preexisting data, removing devices from
> a pool, automatically moving cold data to slower
Currently, the algorithm is approximately round-robin. We try to keep
all vdevs working at all times, with a slight bias towards those with
less capacity. So if you add a new disk to a pool whose vdevs are 70%
full, we'll gradually schedule more work for the empty disk until all
the vdevs are aga
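One way to watch the behavior described above (pool name and interval are placeholders): run per-vdev I/O statistics while writing, and after adding an empty vdev you should see a larger share of the writes land on it.
  # per-vdev I/O statistics, refreshed every 5 seconds
  zpool iostat -v tank 5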
On Tue, Jul 18, 2006 at 09:46:44AM -0400, Chad Mynhier wrote:
> On 7/18/06, Brian Hechinger <[EMAIL PROTECTED]> wrote:
> >On Tue, Jul 18, 2006 at 01:27:21AM -0700, Jeff Bonwick wrote:
> >>
> >> the ability to remap blocks would be *so* useful -- it would
> >> enable compression of preexisting data,
On 7/18/06, Brian Hechinger <[EMAIL PROTECTED]> wrote:
On Tue, Jul 18, 2006 at 09:46:44AM -0400, Chad Mynhier wrote:
> On 7/18/06, Brian Hechinger <[EMAIL PROTECTED]> wrote:
> >
> >Being able to remove devices from a pool would be a good thing. I can't
> >personally think of any reason that I wo
Jeff Bonwick wrote:
For 6 disks, 3x2-way RAID-1+0 offers better resiliency than RAID-Z
or RAID-Z2.
Maybe I'm missing something, but it ought to be the other way around.
With 6 disks, RAID-Z2 can tolerate any two disk failures, whereas
for 3x2-way mirroring, of the (6 choose 2) = 6*5/2 = 15 poss
Darren Reed wrote:
Then make the removal operation another arg to "allow".
Or better yet, use a pair of words where you're not tempted to use bad
English, such as "grant" and "revoke",
or just use "revoke" anyway?
Grant matches what we do with authorisations in RBAC.
You grant a user an aut
Bill La Forge wrote:
I like to think of delegation as being a bit different than granting
permission--in fact, as a special permission that may include counts.
For example, you might delegate to a manager the ability to grant select
permissions. You may want to limit the number of users the man
On Jul 18, 2006, at 8:58, Richard Elling wrote:
Jeff Bonwick wrote:
For 6 disks, 3x2-way RAID-1+0 offers better resiliency than RAID-Z
or RAID-Z2.
Maybe I'm missing something, but it ought to be the other way around.
With 6 disks, RAID-Z2 can tolerate any two disk failures, whereas
for 3x2-way
Re: [zfs-discuss] ZFS benchmarks w/8 disk raid - Quirky results, any thoughts?
The prefetch and I/O scheduling of nv41 were responsible for some quirky performance. First time read performance might be good, then subsequent reads might be very poor.
With a very recent update to the zfs
> On Tue, Jul 18, 2006 at 01:27:21AM -0700, Jeff Bonwick wrote:
> > > Of course the re-writing must be 100% safe, but that can be done with COW
> > > quite easily.
> >
> > the ability to remap blocks would be *so* useful -- it would
> > enable compression of preexisting data, removing devices from
more below...
Ed Gould wrote:
On Jul 18, 2006, at 8:58, Richard Elling wrote:
Jeff Bonwick wrote:
For 6 disks, 3x2-way RAID-1+0 offers better resiliency than RAID-Z
or RAID-Z2.
Maybe I'm missing something, but it ought to be the other way around.
With 6 disks, RAID-Z2 can tolerate any two dis
Richard Elling wrote:
Jeff Bonwick wrote:
For 6 disks, 3x2-way RAID-1+0 offers better resiliency than RAID-Z
or RAID-Z2.
Maybe I'm missing something, but it ought to be the other way around.
With 6 disks, RAID-Z2 can tolerate any two disk failures, whereas
for 3x2-way mirroring, of the (6 ch
Also note that the current prefetching (even in snv_41) still suffers
from some major systematic performance problems. This should be fixed
by snv_45/s10u3, and is covered by the following bug:
6447377 ZFS prefetch is inconsistant
I'll duplicate Mark's evaluation here, since it doesn't show up o
On Jul 18, 2006, at 10:35 AM, Ed Gould wrote:
On Jul 18, 2006, at 8:58, Richard Elling wrote:
Jeff Bonwick wrote:
For 6 disks, 3x2-way RAID-1+0 offers better resiliency than RAID-Z or RAID-Z2.
Maybe I'm missing something, but it ought to be the other way around. With 6 disks, RAID-Z2 can tolerate any
Daniel,
When we take into account time, the models we use are Markov models
which consider the amount of space used, disk size, block and whole
disk failures, RAID scheme, recovery from tape time, spares, etc.
All of these views of the system are being analyzed. Needless to say,
with the zillions
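For anyone who wants the simple closed-form approximations that such Markov models refine (textbook estimates only, not the models Richard describes; MTTF = mean time to failure of one disk, MTTR = mean time to repair/resilver):
  3x2-way mirror (3 independent pairs):    MTTDL ~= MTTF^2 / (3 * 2 * MTTR)
  RAID-Z2 across N disks (double parity):  MTTDL ~= MTTF^3 / (N * (N-1) * (N-2) * MTTR^2)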
On Tue, 18 Jul 2006, Daniel Rock wrote:
> Richard Elling wrote:
> > Jeff Bonwick wrote:
> >>> For 6 disks, 3x2-way RAID-1+0 offers better resiliency than RAID-Z
> >>> or RAID-Z2.
> >>
> >> Maybe I'm missing something, but it ought to be the other way around.
> >> With 6 disks, RAID-Z2 can tolera
On Tue, Jul 18, 2006 at 10:37:35AM -0700, Richard Elling wrote:
> Daniel,
> When we take into account time, the models we use are Markov models
> which consider the amount of space used, disk size, block and whole
> disk failures, RAID scheme, recovery from tape time, spares, etc.
> All of these vi
On Tue, Jul 18, 2006 at 10:59:59AM -0700, Eric Schrock wrote:
>
> One thing I would pay attention to is the future world of native ZFS
> root. On a thumper, you only have two drives which are bootable from
> the BIOS. For any application in which reliability is important, you
> would have these
On Tue, Jul 18, 2006 at 10:59:59AM -0700, Eric Schrock wrote:
>
> - 5x(7+2), 1 hot spare, 21.0TB
^
Sigh, that should also be '17.5TB', not 21TB.
- Eric
--
Eric Schrock, Solaris Kernel Development http://blogs.sun.com/eschrock
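The arithmetic behind the correction, assuming 500GB drives and two of the 48 bays reserved for boot disks:
  5 groups x (7 data + 2 parity) + 1 hot spare = 46 drives
  usable data capacity = 5 * 7 * 500GB = 17.5TB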
Darren J Moffat wrote:
Bill La Forge wrote:
I like to think of delegation as being a bit different than granting
permission--in fact, as a special permission that may include counts.
For example, you might delegate to a manager the ability to grant
select permissions. You may want to limit the
On 7/18/06, Mark Shellenbaum <[EMAIL PROTECTED]> wrote:
Darren J Moffat wrote:
> Bill La Forge wrote:
>> I like to think of delegation as being a bit different than granting
>> permission--in fact, as a special permission that may include counts.
>>
>> For example, you might delegate to a manager
For background on what this is, see:
http://www.opensolaris.org/jive/message.jspa?messageID=24416#24416
http://www.opensolaris.org/jive/message.jspa?messageID=25200#25200
=
zfs-discuss 07/01 - 07/15
=
Threads or announcements origi
Al Hopper wrote:
On Tue, 18 Jul 2006, Daniel Rock wrote:
I think this type of calculation is flawed. Disk failures are rare and
multiple disk failures at the same time are even more rare.
Stop right here! :) If you have a large number of identical disks which
operate in the same environment
Hello Roch,
Monday, July 17, 2006, 6:09:54 PM, you wrote:
R> Robert Milkowski writes:
>> Hello zfs-discuss,
>>
>> What would you rather propose for ZFS+ORACLE - zvols or just files
>> from the performance standpoint?
>>
>>
>> --
>> Best regards,
>> Robert
On Tue, 18 Jul 2006, Richard Elling wrote:
> Daniel,
> When we take into account time, the models we use are Markov models
> which consider the amount of space used, disk size, block and whole
> disk failures, RAID scheme, recovery from tape time, spares, etc.
> All of these views of the system ar
On Tue, 2006-07-18 at 15:32, Daniel Rock wrote:
> > Stop right here! :) If you have a large number of identical disks which
> > operate in the same environment[1], and possibly the same enclosure, it's
> > quite likely that you'll see 2 or more disks die within the same,
> > relatively short, time
On Sun, 2006-07-16 at 15:21, Darren J Moffat wrote:
> Jeff Victor wrote:
> > Why? Is the 'data is encrypted' flag only stored in filesystem
> > metadata, or is that flag stored in each data block?
>
> Like compression and which checksum algorithm, it will be stored in
> every dmu object.
I tho
>On Sun, 2006-07-16 at 15:21, Darren J Moffat wrote:
>> Jeff Victor wrote:
>> > Why? Is the 'data is encrypted' flag only stored in filesystem
>> > metadata, or is that flag stored in each data block?
>>
>> Like compression and which checksum algorithm, it will be stored in
>> every dmu object
On 7/18/06, Bill Sommerfeld <[EMAIL PROTECTED]> wrote:
On Tue, 2006-07-18 at 15:32, Daniel Rock wrote:
> > Stop right here! :) If you have a large number of identical disks which
> > operate in the same environment[1], and possibly the same enclosure, it's
> > quite likely that you'll see 2 or m
Bill Sommerfeld wrote:
On Tue, 2006-07-18 at 15:32, Daniel Rock wrote:
Stop right here! :) If you have a large number of identical disks which
operate in the same environment[1], and possibly the same enclosure, it's
quite likely that you'll see 2 or more disks die within the same,
relatively s
On Wed, Jul 19, 2006 at 03:10:00AM +0200, [EMAIL PROTECTED] wrote:
> So how many of the 128 bits of the blockpointer are used for things
> other than to point where the block is?
128 *bits*? What filesystem have you been using? :) We've got
luxury-class block pointers that are 128 *bytes*. We
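As a rough byte accounting of where those 128 bytes go (recalled from the published on-disk format document; treat the exact split as approximate):
  3 DVAs (data virtual addresses) x 16 bytes = 48 bytes
  block properties (checksum/compression/type/level/sizes) = 8 bytes
  padding = 24 bytes
  birth transaction group = 8 bytes
  fill count = 8 bytes
  256-bit checksum = 32 bytes
  total = 128 bytes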
Bill Moore wrote:
On Wed, Jul 19, 2006 at 03:10:00AM +0200, [EMAIL PROTECTED] wrote:
So how many of the 128 bits of the blockpointer are used for things
other than to point where the block is?
128 *bits*? What filesystem have you been using? :) We've got
luxury-class block pointers
On Tue, 18 Jul 2006, Al Hopper wrote:
On Tue, 18 Jul 2006, Daniel Rock wrote:
Richard Elling wrote:
Jeff Bonwick wrote:
For 6 disks, 3x2-way RAID-1+0 offers better resiliency than RAID-Z
or RAID-Z2.
Maybe I'm missing something, but it ought to be the other way around.
With 6 disks, RAID
On Wed, Jul 19, 2006 at 12:38:01PM +0800, Darren Reed wrote:
>
> Of course this can change (grow) in the future if the ZFS version number
> changes?
>
Possibly. But it will be difficult. Unlike current version upgrades,
such a change would require rewriting every block (because each block
isn
On Wed, Jul 19, 2006 at 12:43:21AM -0400, Matty wrote:
>
> A good SMART implementation combined with a decent sensor framework can
> also be useful for dealing with these conditions. Smartmontools is
> currently able to send email when the ambient temperature of a disk
> drive goes beyond the
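If anyone wants to try that, a smartd.conf line along these lines enables full monitoring, mails an address on trouble, and tracks temperature (device path, address and thresholds are placeholders):
  # -a: monitor all attributes; -m: mail on failure; -W diff,info,crit temperatures in deg C
  /dev/sda -a -m root@localhost -W 4,45,55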
Eric Schrock wrote:
On Wed, Jul 19, 2006 at 12:38:01PM +0800, Darren Reed wrote:
Of course this can change (grow) in the future if the ZFS version number
changes?
...
Another possibility is to use the block birth time and some pool-wide
metadata to determine when an upgrade took place,
One thing I've seen time and again that could easily (a relative term!)
be done here is to have an extended bit.
Just dedicate one of the currently unused bits to be the extended bit.
When set, the 128 byte pointer is actually 256 bytes long.
Bill
Darren Reed wrote:
Eric Schrock wrote:
On
Bill La Forge wrote:
One thing I've seen time and again that could easily (a relative
term!) be done here is to have an extended bit.
Just dedicate one of the currently unused bits to be the extended bit.
When set, the 128 byte pointer is actually 256 bytes long.
And use the same bit in ever
Hi all,
IHACWHAC (I have a colleague who has a customer - hello, if you're
listening :-) who's trying to build and test a scenario where he can
salvage the data off the (internal?) disks of a T2000 in case the sysboard,
and with it the on-board RAID controller, dies.
If I understood correctly
Hey,
I have a portable hard disk, a Western Digital 120GB USB drive. I'm running Nevada b42a on a
Thinkpad T43. Is this a supported configuration for setting up ZFS on portable
disks?
Found out some old blogs about this topic:
http://blogs.sun.com/roller/page/artem?entry=zfs_on_the_go and some other info
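Assuming the disk shows up as an ordinary device node (the device name below is a placeholder), the usual flow for a pool you carry between machines is create once, then export before unplugging and import after plugging in:
  zpool create portable c2t0d0
  zpool export portable    # before unplugging
  zpool import portable    # after attaching to another ZFS-capable host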