The real problem for us comes down to the fact that ufsdump and ufsrestore
handle tape spanning, and zfs send does not.
We looked into wrapping "zfs send" to a file and then running gtar (which
does support tape spanning), or cpio ... then we looked at the amount we
had started storing
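For what it's worth, a minimal sketch of that staging-file approach (the
snapshot name, staging path and tape device below are placeholder
assumptions, not anything from the original setup):
  # zfs send tank/home@weekly > /var/tmp/home-weekly.zfs   (stage the stream)
  # gtar -c -M -f /dev/rmt/0n /var/tmp/home-weekly.zfs     (-M spans tapes)
gtar prompts for the next volume when a tape fills; restore is the reverse,
gtar -x -M -f /dev/rmt/0n followed by zfs receive from the staged file. The
obvious cost is scratch space for the entire stream.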
On Nov 12, 2007 4:16 PM, <[EMAIL PROTECTED]> wrote:
> >I don't think it should be too bad (for ::memstat), given that (at
> >least in Nevada), all of the ZFS caching data belongs to the "zvp"
> >vnode, instead of "kvp".
>
> ZFS data buffers are attached to zvp; however, we still keep m
On Nov 8, 2007 4:21 PM, Nathan Kroenert <[EMAIL PROTECTED]> wrote:
> Hey all -
>
> Just a quick one...
>
> Is there any plan to update the mdb ::memstat dcmd to present ZFS
> buffers as part of the summary?
>
> At present, we get something like:
> > ::memstat
> Page Summary                Pages
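(For reference, the summary above comes from running the dcmd against the
live kernel, e.g.:
  # echo ::memstat | mdb -k
which needs sufficient privilege to open the kernel memory devices.)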
On 9/11/07, Dick Davies <[EMAIL PROTECTED]> wrote:
>
> I've got 12Gb or so of db+web in a zone on a ZFS filesystem on a mirrored
> zpool.
> Noticed during some performance testing today that its i/o bound but
> using hardly
> any CPU, so I thought turning on compression would be a quick win.
>
> I
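(For reference, turning compression on is a one-line property change; the
dataset name here is hypothetical:
  $ zfs set compression=on tank/zones/web
Note that it only affects blocks written after the property is set; existing
data stays uncompressed until it is rewritten.)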
3. write everything out in parallel.
3a. if any write fails, re-do 1+2 for that block, and 2 for all of its
parents, then start over at 3 with all of the changed blocks.
4. once everything is on stable storage, update the uberblock.
That's a lot more complicated than the
> I'll probably comment out that stuff and see if I can
> bring up the nfs server code and share a UFS filesystem using the
> traditional methods. Once that's OK I'll move on to the ZFS portion and
> investigate.
Out of curiosity, wh
You need a profile shell to get access to profile-enabled
commands:
$ zfs create pool/aux2
cannot create 'pool/aux2': permission denied
$ pfksh
$ zfs create pool/aux2
$ exit
$
Either set your shell to pf{k,c,}sh, or run it explicitly.
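For example, assuming a rights profile that already grants the command, a
quick sketch of the explicit form:
  $ pfexec zfs create pool/aux2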
Cheers,
- jonathan
--
Jonathan Adams, Solaris Kernel Development
t the next
> time.
>
> Is this a known issue?
The easiest way to work around it is to turn the zfs mount into a "legacy"
mount, and mount it using vfstab.
zfs set mountpoint=legacy pool/dataset
(add a pool/dataset mount line to /etc/vfstab)
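A minimal sketch of the corresponding vfstab entry (the mount point is
hypothetical):
  pool/dataset  -  /export/data  zfs  -  yes  -
After that it mounts at boot like any other vfstab filesystem, or by hand
with "mount /export/data".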
> From: [EMAIL PROTECTED]
> Sent: Thu 22.06.2006 20:23
> To: Nicolas Williams
> Cc: Jonathan Adams; Nicolai Johannes; [EMAIL PROTECTED]
> Subject: Re: AW: AW: [zfs-discuss] Proposal for new basic privileges related
> with filesystem access checks
>
>
> >Thinking about
nied.
>
> I was thinking of caching the {vfs, inode #, gen#, pid} and using that
> to allow such processes to re-open files they _recently_ (the cache
> should have LRU/LFU eviction) opened.
That doesn't seem like a very predictable interface. The security guarantees
are not very s
On Thu, Jun 22, 2006 at 07:46:57PM +0200, Roch wrote:
>
> As I recall, the zfs sync is, unlike UFS, synchronous.
Uh, are you talking about sync(2), or lockfs -f? IIRC, lockfs -f is always
synchronous.
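(For UFS, that's the flush you would force by hand with something like the
following; the mount point is hypothetical:
  # lockfs -f /export/home
which synchronously flushes whatever is dirty on that filesystem when it
runs.)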
Cheers,
- jonathan
--
Jonathan Adams, Solaris Kernel Development
d
0 -> cv_broadcast
0 <- cv_broadcast
0 <- releasef
0 <- ioctl
So the sync happens.
Cheers,
- jonathan
--
Jonathan Adams, Solaris Kernel Development
It's written each time a transaction group
commits.
Cheers,
- jonathan
--
Jonathan Adams, Solaris Kernel Development
provides you with some safeguards.
>
> --matt
--
Jonathan Adams, Solaris Kernel Development
n reading this file sequentially
> will not be that sequential.
On the other hand, if you are reading the file sequentially, ZFS has
very good read-ahead algorithms.
Cheers,
- jonathan
--
Jonathan Adams, Solaris Kernel Development
1M), fsck(1M),
etc. Given that you use zfs(1M) for all that kind of manipulation,
it seems like this is not a huge deal.
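For example, the closest thing to an fsck-style check is a scrub (the pool
name is hypothetical):
  # zpool scrub tank
  # zpool status -v tank
which walks the pool verifying every block against its checksum and, where
there is redundancy, repairs what it finds.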
Cheers,
- jonathan
--
Jonathan Adams, Solaris Kernel Development
> > # zfs rename pool/project/beta pool/project/production
> > > destroyed>
> > # zfs destroy pool/project/legacy
>
> 6. Resources and Schedule
> 6.4. Steering Committee requested information
> 6.4.1. Consolidation C-team Name:
> ON
> 6.5. ARC review type: FastTrack
>
> - End forwarded message -
--
Jonathan Adams, Solaris Kernel Development
e directory when
it gets empty or "much smaller", which would fix this as well.
Cheers,
- jonathan
--
Jonathan Adams, Solaris Kernel Development