On Thu, Jul 20, 2006 at 03:45:54PM -0700, Jeff Bonwick wrote:
> > However, we do have the advantage of always knowing when something
> > is corrupted, and knowing what that particular block should have been.
>
> We also have ditto blocks for all metadata, so that even if any block
> of ZFS metadata is destroyed, we always have another copy.
Yeah I was a little suspicious of my mkfile in tmpfs test so I went
ahead and wrote a program not so different than this one.
The results were the same. I could only allocate about 512M before
things went bad.
--joe
Nathan Kroenert wrote:
Something I often do when I'm a little suspicious of this sort of
activity is to run something that steals vast quantities of memory...
Bart Smaalders wrote:
How much swap space is configured on this machine?
Zero. Is there any reason I would want to configure any swap space?
--joe
Something I often do when I'm a little suspicious of this sort of
activity is to run something that steals vast quantities of memory...
eg: something like this:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
        char *memory;
        long i = 0;

        /* grab memory in 1 MB chunks and touch it until allocation fails */
        while ((memory = malloc(1024 * 1024)) != NULL) {
                memset(memory, 0xff, 1024 * 1024);
                printf("allocated %ld MB\n", ++i);
        }
        return (0);
}
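To try it, something along these lines should do (the binary name is just
a placeholder):

  cc -o eatmem eatmem.c && ./eatmem

and watch how far it gets before malloc() starts failing.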
Joseph Mocker wrote:
Eric,
Thanks for the explanation. I am familiar with the UFS cache and assumed
ZFS cache would have worked the same way.
However, it seems like there are a few bugs here. Here's what I see.
1. I can cause an out of memory situation by simply copying a bunch of
files between folders in a ZFS
Eric,
Thanks for the explanation. I am familiar with the UFS cache and assumed
ZFS cache would have worked the same way.
However, it seems like there are a few bugs here. Here's what I see.
1. I can cause an out of memory situation by simply copying a bunch of
files between folders in a ZFS
There are two things to note here:
1. The vast majority of the memory is being used by the ZFS cache, but
appears under 'kernel heap'. If you actually need the memory, it
_should_ be released. Under UFS, this cache appears as the 'page
cache', and users understand that it can be released w
So what's going on! Please help. I want my memory back!
This is essentially by design, due to the way that ZFS uses kernel
memory for caching and other stuff.
You can alleviate this somewhat by running on a 64-bit processor, which
has a significantly larger address space to play with.
Uhh. I
Joseph Mocker wrote:
...
Anyways, I found the ::memstat dcmd for mdb. So I gave it a spin and it
looked something like
Page Summary        Pages      MB    %Tot
Kernel             139650    1091     36%
Ano
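(For reference, that summary comes from running the ::memstat dcmd against
the live kernel, roughly:

  # echo ::memstat | mdb -k

which breaks physical memory down by consumer.)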
> However, we do have the advantage of always knowing when something
> is corrupted, and knowing what that particular block should have been.
We also have ditto blocks for all metadata, so that even if any block
of ZFS metadata is destroyed, we always have another copy.
Bill Moore describes ditto
Last week I upgraded to Solaris 10_U2 and migrated my old UFS partitions
to a ZFS pool.
Since then I've noticed some of my nightly cron jobs failing because of
memory allocation errors.
So today I decided to look into it.
First thing I looked at was user process memory. The only two real user
p
> Basically, the first step is to identify the file in question so the
> user knows what's been lost. The second step is a way to move these
> blocks into purgatory, where they won't take up filesystem namespace,
> but still account for used space. The final step is to actually delete
> the block
Och, sorry - a clarification might be needed to my reply:
Tim Foster wrote:
Darren Dunham wrote:
I meant, rather than tarring it up, can you just pass the snapshot mount
point to Networker as a saveset?
Yup, in my brief testing, I was able to back up a snapdir using
Networker.
... ** with the
Luc I. Suryo wrote:
Do you have ACLs you need to maintain? Can you just specify a snapshot
as a saveset directly?
Well, we don't (yet) worry about the ACLs as long as we have a backup, using
zfs send/receive of the snapshot to a single tar and then to tape..
I meant, rather than tarring it up, can yo
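A rough sketch of that snapshot-to-tape workflow (dataset and tape device
names here are only placeholders):

  zfs snapshot tank/data@nightly
  zfs send tank/data@nightly | dd of=/dev/rmt/0n obs=1048576

The stream can later be read back off the tape and fed to zfs receive.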
Darren Dunham wrote:
I meant, rather than tarring it up, can you just pass the snapshot mount
point to Networker as a saveset?
Yup, in my brief testing, I was able to back up a snapdir using
Networker. Pointing Networker at a ZFS mountpoint with the snapdir shown
( .zfs, at the top level direct
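In other words, the backup software is pointed at the snapshot's read-only
directory under .zfs rather than at a tar stream; a minimal sketch
(filesystem and snapshot names are hypothetical):

  zfs snapshot tank/home@nightly
  ls /tank/home/.zfs/snapshot/nightly

That path can then be handed to Networker as the saveset.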
> > > Do you have ACLs you need to maintain? Can you just specify a snapshot
> > > as a saveset directly?
> >
> > Well, we don't (yet) worry about the ACLs as long as we have a backup, using
> > zfs send/receive of the snapshot to a single tar and then to tape..
>
> I meant, rather than tarring it up,
Note that there are two common reasons to have an fsck-like utility -
1. Detect corruption
2. Repair corruption
For the first, we have scrubbing (and eventually background scrubbing)
so it's pointless in the ZFS world. For the latter, the type of things
it repairs are known pathologies endemic to
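For the detection side, kicking off a scrub is simply (pool name
hypothetical):

  zpool scrub tank
  zpool status -v tank     # shows scrub progress and any errors found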
See:
http://www.opensolaris.org/jive/thread.jspa?threadID=11305&tstart=0
Basically, the first step is to identify the file in question so the
user knows what's been lost. The second step is a way to move these
blocks into purgatory, where they won't take up filesystem namespace,
but still accoun
On Thu, 20 Jul 2006, Darren Dunham wrote:
> > Well the fact that it's a level 2 indirect block indicates why it can't
> > simply be removed. We don't know what data it refers to, so we can't
> > free the associated blocks. The panic on move is quite interesting -
> > after BFU give it another sh
> > Do you have ACLs you need to maintain? Can you just specify a snapshot
> > as a saveset directly?
>
> Well, we don't (yet) worry about the ACLs as long as we have a backup, using
> zfs send/receive of the snapshot to a single tar and then to tape..
I meant, rather than tarring it up, can you just p
> Well the fact that it's a level 2 indirect block indicates why it can't
> simply be removed. We don't know what data it refers to, so we can't
> free the associated blocks. The panic on move is quite interesting -
> after BFU give it another shot and file a bug if it still happens.
What's the
> > do you know if this is for 7.3 or will it work for 7.2 too??
> > we are still using 7.2 and have no plan to update to 7.3 yet...
> >
> > right now we're doing snapshots and sending to tar-tape, ugly...
>
> Do you have ACLs you need to maintain? Can you just specify a snapshot
> as a saveset direc
Well the fact that it's a level 2 indirect block indicates why it can't
simply be removed. We don't know what data it refers to, so we can't
free the associated blocks. The panic on move is quite interesting -
after BFU give it another shot and file a bug if it still happens.
- Eric
On Thu, Jul
Eric Schrock wrote:
What does 'zpool status -v' show? This sounds like you have corruption
# zpool status -v
pool: junk
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore the file in questio
On Thu, Jul 20, 2006 at 12:58:31AM -0700, Trond Norbye wrote:
> I have been using the iosnoop script (see
> http://www.opensolaris.org/os/community/dtrace/scripts/) written by
> Brendan Gregg to look at the IO operations of my application.
...
> So how can I get the same information from a ZFS file-syst
Anne Wong wrote:
The EMC/Legato NetWorker (a.k.a. Sun StorEdge EBS) support for ZFS
NFSv4/ACLs will be in the NetWorker 7.3.2 release, currently targeted for
a September release.
Will it also support the new ZFS-style automounts?
Or do I have to set
zfs set mountpoint=legacy zfs/file/syst
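For reference, the legacy-mountpoint route looks roughly like this (dataset
and mountpoint are placeholders):

  zfs set mountpoint=legacy tank/export/home
  mount -F zfs tank/export/home /export/home

  # or persistently, with an /etc/vfstab entry:
  # tank/export/home  -  /export/home  zfs  -  yes  -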
Thanks to everybody that replied. Having NetWorker support for ZFS will remove a major stumbling block to adoption of ZFS.
On Jul 20, 2006, at 11:25 AM, Anne Wong wrote:
The EMC/Legato NetWorker (a.k.a. Sun StorEdge EBS) support for ZFS NFSv4/ACLs will be in NetWorker 7.3.2 release currently targeting
> do you know if this is for 7.3 or will it work for 7.2 too??
> we are still using 7.2 and have no plan to update to 7.3 yet...
>
> right now we're doing snapshots and sending to tar-tape, ugly...
Do you have ACLs you need to maintain? Can you just specify a snapshot
as a saveset directly?
--
Darr
The EMC/Legato NetWorker (a.k.a. Sun StorEdge EBS) support for ZFS
NFSv4/ACLs will be in the NetWorker 7.3.2 release, currently targeted for
a September release.
Regards,
-- Anne
Mark Shellenbaum wrote:
Gregory Shaw wrote:
Hey, does anybody know the timeframe for when Legato Networker will
supp
Luc I. Suryo wrote:
Gregory Shaw wrote:
Hey, does anybody know the timeframe for when Legato Networker will
support ZFS?
Or, perhaps, a workaround whereby backups of ZFS by Networker will succeed?
EMC has a patch that we have tested and it appears to work.
Last I heard they were planning on
> Gregory Shaw wrote:
> >Hey, does anybody know the timeframe for when Legato Networker will
> >support ZFS?
> >
> >Or, perhaps, a workaround whereby backups of ZFS by Networker will succeed?
>
> EMC has a patch that we have tested and it appears to work.
> Last I heard they were planning on re
Gregory Shaw wrote:
Hey, does anybody know the timeframe for when Legato Networker will
support ZFS?
Or, perhaps, a workaround whereby backups of ZFS by Networker will succeed?
EMC has a patch that we have tested and it appears to work.
Last I heard they were planning on releasing the patch
Hey, does anybody know the timeframe for when Legato Networker will support ZFS?
Or, perhaps, a workaround whereby backups of ZFS by Networker will succeed?
-Gregory Shaw, IT Architect
Phone: (303) 673-8273  Fax: (303) 673-2773
IT CTO Group, Sun Microsystems Inc.
1 StorageTek Drive ULVL4-382
What does 'zpool status -v' show? This sounds like you have corruption
in the dnode (a.k.a. metadata). This corruption is unrepairable at the
moment, since we have no way of knowing the extent of the blocks that
this dnode may be referencing. You should be able to move this file
aside, however.
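As a sketch of what that looks like in practice (pool and file names are
hypothetical):

  zpool status -v junk                       # lists files with unrecoverable errors
  mv /junk/somefile /junk/somefile.corrupt   # move the damaged file aside

The file is still there, but applications stop tripping over it at the old
name.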
The zdb interface is certainly unstable. We plan on automatically doing
this at a future date (bugid not handy), but it's a little tricky for
live filesystems. If your filesystem is undergoing a lot of churn, you
may notice that zdb(1M) will blow up with an I/O error or assertion
failure somewher
... and in a related question - since rsync uses the ACL code from the Samba
project - has there been some progress in that direction too?
Peter Eriksson wrote:
Has anyone looked into adding support for ZFS ACLs into Rsync? It would be
really convenient if it would support transparent conversions from old-style
Posix ACLs to ZFS ACLs on the fly
One way Posix->ZFS is probably good enough. I've tried Googling, but haven't
come
David Magda wrote:
Hello,
Does the work of IEEE's Security in Storage Working Group [1] have any
effect on the design of ZFS's encryption modules? Or do the two efforts
deal with different "layers"?
See the draft design doc on the ZFS crypto page:
http://www.opensolaris.org/os/project/zfs-c
Has anyone looked into adding support for ZFS ACLs into Rsync? It would be
really convenient if it would support transparent conversions from old-style
Posix ACLs to ZFS ACLs on the fly
One way Posix->ZFS is probably good enough. I've tried Googling, but haven't
come up with much. There see
Hi. I'm in the process of writing an introductory paper on ZFS. The paper is meant to be something that could be given to a systems admin at a site to introduce ZFS and document common procedures for using ZFS. In the paper, I want to document the method for identifying which file has a checksum
Hello,
Does the work of IEEE's Security in Storage Working Group [1] have
any effect on the design of ZFS's encryption modules? Or do the two
efforts deal with different "layers"?
Seems that 1619 is more geared towards SAN disks, which 'regular'
file systems tend to sit on and not know th
G'Day Trond,
On Thu, 20 Jul 2006, Trond Norbye wrote:
> I have been using the iosnoop script (see
> http://www.opensolaris.org/os/community/dtrace/scripts/) written by
> Brendan Gregg to look at the IO operations of my application. When I was
> running my test-program on a UFS filesystem I could see b
I have been using the iosnoop script (see
http://www.opensolaris.org/os/community/dtrace/scripts/) written by Brendan
Gregg to look at the IO operations of my application. When I was running my
test-program on a UFS filesystem I could see both read and write operations
like:
UID  PID  D  BLOCK