On 11/05/2006, at 9:04 AM, Darren Dunham wrote:
I've seen this question asked in a number of forums but I haven't
seen an answer. (I may just not have looked hard enough.)
Copy-on-write means that most writes are sequential, which is good
for write performance, but won't it mean that random writes to a file
will put bits of the file all over the disk?
On Wed, 2006-05-10 at 20:42 -0500, Mike Gerdts wrote:
> On 5/10/06, Boyd Adamson <[EMAIL PROTECTED]> wrote:
> > What we need is some clear blueprints/best practices docs on this, I
> > think.
> >
In due time... it was only recently that some of the performance
enhancements were put back.
On 5/10/06, Boyd Adamson <[EMAIL PROTECTED]> wrote:
What we need is some clear blueprints/best practices docs on this, I
think.
Most definitely. Key things that people I work with (including me...)
would like to see are...
- Some success stories of people running large databases (working se
On 11/05/2006, at 9:17 AM, James C. McPherson wrote:
- Redundancy is performed at the filesystem level, probably on all
disks in the pool.
more at the pool level, IIRC, but yes, across all the disks where you
have them mirrored or RAID-Z'ed
Yes, of course. I meant at the filesystem level
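(For anyone following along, a minimal sketch of what "redundancy at the pool level" looks like in practice; the device names here are made up:
# zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0
# zfs create tank/home
Every filesystem created in the pool -- tank/home here -- automatically sits on the pool's raidz redundancy; there is no per-filesystem RAID configuration.)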
One word of caution about random writes. From my experience, they are
not nearly as fast as sequential writes (like 10 to 20 times slower)
unless they are carefully aligned on the same boundary as the file
system record size. Otherwise, there is a heavy read penalty that you
can easily observe by
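(To make the alignment point concrete -- a minimal sketch, with a hypothetical pool/dataset name and an assumed 8k database block size:
# zfs set recordsize=8k tank/db
# zfs get recordsize tank/db
The recordsize property only affects files written after it is set, so it should be set before the database files are created.)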
Hi Boyd,
Boyd Adamson wrote:
One question that has come up a number of times when I've been speaking
with people (read: evangelizing :) ) about ZFS is about database
storage. In conventional use, storage has separated redo logs from table
space on a per-spindle basis.
I'm not a database expert, but I believe the reasons boil down to...
> I've seen this question asked in a number of forums but I haven't
> seen an answer. (I may just not have looked hard enough.)
>
> Copy-on-write means that most writes are sequential, which is good
> for write performance, but won't it mean that random writes to a file
> will put bits of the file all over the disk?
Thanks for setting this up. To prove me wrong, could someone grab any
linux-2.2.*.tar.gz and untar it into the array over NFS and measure the
time? You have EqualLogic targets, and those are probably the most
performant. I'm using SBEi targets, which go up to ERL2 and tend to
also be performant (80-1
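(If someone wants to run that, a rough sketch of the test being asked for, with a hypothetical mount point -- gzcat is used because Solaris tar has no z flag:
# cd /mnt/array-over-nfs
# time gzcat /var/tmp/linux-2.2.*.tar.gz | tar xf -
)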
I've seen this question asked in a number of forums but I haven't
seen an answer. (I may just not have looked hard enough.)
Copy-on-write means that most writes are sequential, which is good
for write performance, but won't it mean that random writes to a file
will put bits of the file all over the disk?
One question that has come up a number of times when I've been
speaking with people (read: evangelizing :) ) about ZFS is about
database storage. In conventional use, storage has separated redo logs
from table space on a per-spindle basis.
I'm not a database expert, but I believe the reasons boil down to...
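(Not an endorsement of either layout, but to make the conventional separation concrete in ZFS terms -- a minimal sketch with hypothetical pool and device names:
# zpool create dbdata mirror c1t0d0 c2t0d0
# zpool create dblogs mirror c1t1d0 c2t1d0
# zfs create dbdata/tablespace
# zfs create dblogs/redo
Separate pools on separate spindles preserve the traditional isolation of redo logs from table space; a single pool would instead let ZFS stripe both workloads across all the disks.)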
> For example, today you can do:
>
> # zfs snapshot data/[EMAIL PROTECTED]
> # find .zfs/snapshot -name "daily-*" -ctime +7
Does the actual snapshot creation time appear as one of the stat() times
of the snap directory? When I tried it, they all reflected the actual
times of the original directories.
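(Whatever the stat() times show, the snapshot's own creation time is recorded as a read-only dataset property, so something along these lines -- snapshot name hypothetical -- should get at it directly:
# zfs get creation data/home@daily-2006-05-10
# zfs list -t snapshot -o name,creation
)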
> I'd like to let this run. I'd like to see if it makes sense
> to audit in addition to building the history. The downside being
> an additional required privilege.
The zpool command is in the ZFS Storage Management Rights Profile
and the zfs command is in the ZFS File System Management Rights Profile.
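(A minimal sketch of how those profiles get used, assuming a hypothetical user "alice":
# usermod -P "ZFS Storage Management,ZFS File System Management" alice
...after which alice can run the privileged operations through pfexec:
$ pfexec zfs snapshot data/home@test
$ pfexec zpool status
)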
I have a test configuration up and running internally. I'm
not sure I'm seeing the exact same issues. For this testing
I'm using GRITS. I don't know of any really great FS performance
tools. I tend to do my performance testing with vdbench against
raw SCSI I/O, on purpose, to avoid FS caches. (I'm mo
> -Original Message-
> From: Eric Schrock [mailto:[EMAIL PROTECTED]
>
> The clone and snapshot "share" space. This relationship must
> be maintained for internal accounting purposes, but also for
> the administrator to know which clones are sharing space with
> the original snapshot.
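(For the administrator side of that, a quick sketch -- dataset names hypothetical:
# zfs get origin tank/clone1
shows which snapshot the clone was created from, and
# zfs destroy tank/fs@snap1
will refuse to run while a clone still depends on that snapshot.)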
Snapshot Management:
With all the talk of snapshots as of late, is there an interest for a ZFS
discuss sub-group for Snapshots?
Perhaps this may prevent anything further from getting "lost in the shuffle".
On Wed, May 10, 2006 at 02:22:27PM -0500, James Dickens wrote:
> This was posted earlier, but I guess it got lost in the shuffle. I still
> think it's a good idea. Even though you can put 10x32 snapshots, that
> doesn't mean we should have to.
>
> A snapshot subdirectories enhancement could make dealing with snapshots
> better...
This was posted earlier, but I guess it got lost in the shuffle. I still
think it's a good idea. Even though you can put 10x32 snapshots, that
doesn't mean we should have to.
A snapshot subdirectories enhancement could make dealing with snapshots
better; perhaps we can create a directory structure under .zfs/snapshot...
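(The details of the proposal are cut off above, but for context, today every snapshot of a filesystem shows up as one flat list of names under its .zfs/snapshot directory -- e.g., with made-up names:
# zfs snapshot data/home@daily-2006-05-10
# ls /data/home/.zfs/snapshot
daily-2006-05-08  daily-2006-05-09  daily-2006-05-10
which is presumably what a subdirectory scheme would tidy up.)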
Of interest to this thread, Tim Foster has just created an interesting blog
entry at:
http://blogs.sun.com/roller/page/timf#zfs_automatic_snapshots_prototype_1
On Wed, May 10, 2006 at 12:12:16PM -0600, Morreale, Peter W wrote:
>
> Is there some benefit to me (err, the administrator) to maintaining the
> parent/child relationship between a snap and its clone?
If I remember correctly, it's mainly an issue of accounting; where does
the space for the bas
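(To see that accounting from the command line -- a minimal sketch:
# zfs list -o name,used,referenced,origin
A clone's USED column only charges it for blocks that have diverged from its origin snapshot; the blocks it still shares stay charged to the original filesystem and its snapshot.)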
On Wed, May 10, 2006 at 12:12:16PM -0600, Morreale, Peter W wrote:
>
> Is there some benefit to me (err, the administrator) to maintaining the
> parent/child relationship between a snap and its clone?
>
> To my twisted little mind, by virtue of the fact that I created a clone
> I have severed the relationship between a snap and a clone...
Thanks for the detailed explanation.
This all makes much more sense to me now.
It turns out my confusion was due to a general lack of understanding of
how property inheritance works. (I had assumed that it was always
based off of the "clone origin filesystem" rather than the "parent
filesystem".)
Hey All,
So, just to get this idea out of my brain, and on to the screen, I've
got a prototype of a mechanism that takes automatic snapshots.
More info (and a tarball!) from my blog at
http://blogs.sun.com/roller/page/timf?entry=zfs_automatic_snapshots_prototype_1
cheers,
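(Tim's prototype is at the link above; purely as a trivial illustration of the idea, not his code, a crontab entry like this would take a dated snapshot every hour -- filesystem name hypothetical, and note that % must be escaped in crontab:
0 * * * * /usr/sbin/zfs snapshot data/home@auto-`date +\%Y\%m\%d-\%H\%M`
)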
Is there some benefit to me (err, the administrator) to maintaining the
parent/child relationship between a snap and its clone?
To my twisted little mind, by virtue of the fact that I created a clone
I have severed the relationship between a snap and a clone. They are
immediately at least one
On Wed, May 10, 2006 at 09:10:10AM -0700, Edward Pilatowicz wrote:
> out of curiosity, how are properties handled?
I think you're confusing[*] the "clone origin filesystem" and the
"parent filesystem". The parent filesystem is the one that is above it
in the filesystem namespace, from which it inherits its properties.
Your suggestion worked just fine -- I did the dd on the target disk, then
was able to do the 'zpool replace'.
Requested files coming shortly (I'm attaching them after having already
successfully run 'zpool replace', btw).
Thanks
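(For the archive, a sketch of the sequence being described, with hypothetical pool and device names -- the dd clears the stale label on the replacement disk so ZFS will accept it:
# dd if=/dev/zero of=/dev/rdsk/c1t3d0s0 bs=1024k count=10
# zpool replace tank c1t2d0 c1t3d0
)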
out of curiosity, how are properties handled?
For example, if you have a fs with compression disabled, you snapshot
it, you clone it, you enable compression on the clone, and then
you promote the clone. Will compression be enabled on the new parent?
And what about other clones that have properties...
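(For reference, the scenario in the question maps to roughly this sequence -- names hypothetical:
# zfs snapshot tank/fs@snap1
# zfs clone tank/fs@snap1 tank/fsclone
# zfs set compression=on tank/fsclone
# zfs promote tank/fsclone
after which tank/fs becomes a clone of a snapshot now owned by tank/fsclone.)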
On Wed, May 10, 2006 at 08:15:15AM -0700, Gary Winiger wrote:
> > >> The Solaris audit facility will record a command execution as soon as
>
> > Yes, that's a special case of my reason #3 - (sufficient) auditing may
> > not be enabled.
>
> I'd like to let this run. I'd like to see if it makes sense to audit
> in addition to building the history...
On Wed, May 10, 2006 at 02:07:01AM -0600, Lori Alt wrote:
> 5. The new BE works fine, so the administrator decides to promote
> the BE's dataset (which is still a clone) to primary dataset status.
> Here I'm not sure what's best: should liveupgrade promote the
> dataset as part of its
Gary Winiger wrote:
> >> The Solaris audit facility will record a command execution as soon as
> > Yes, that's a special case of my reason #3 - (sufficient) auditing may
> > not be enabled.
> I'd like to let this run. I'd like to see if it makes sense
> to audit in addition to building the history...
> >> The Solaris audit facility will record a command execution as soon as
> Yes, that's a special case of my reason #3 - (sufficient) auditing may
> not be enabled.
I'd like to let this run. I'd like to see if it makes sense
to audit in addition to building the history. The downside being an
additional required privilege.
On Wed, May 10, 2006 at 02:07:01AM -0600, Lori Alt wrote:
> So, for the purposes of zfs boot and liveupgrade, I think your new
> "promote" function works very well. Am I missing anything?
Thanks! Your use case with liveupgrade looks great!
--matt
On 09 May 2006, at 23:48, Joerg Schilling wrote:
Wout Mertens <[EMAIL PROTECTED]> wrote:
WOFS lives on a write-once medium; WOFS itself is not write-once.
Oops, now that I read your thesis, I see. So you can treat a WORM
like a normal disk. Cool :)
How come it never got traction? There
So let me work through a scenario of how clone promotion might work
in conjunction with
liveupgrade once we have bootable zfs datasets:
1. We are booted off the dataset pool/root_sol10_u4
2. We want to upgrade to U5. So we begin by lucreating a new
boot environment (BE) as a clone of the current root dataset...
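(For illustration only -- a sketch of the dataset operations such a scenario might map to, not what liveupgrade will actually emit; the U5 dataset and snapshot names are made up:
# zfs snapshot pool/root_sol10_u4@pre_u5
# zfs clone pool/root_sol10_u4@pre_u5 pool/root_sol10_u5
...luupgrade and boot into the new BE...
# zfs promote pool/root_sol10_u5
The promote in the last step is what turns the still-a-clone BE into the primary dataset, per step 5.)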