I think this bug describes some or all of the problem:
https://defect.opensolaris.org/bz/show_bug.cgi?id=16361
Thanks,
Cindy
On 02/18/11 12:34, Bill Shannon wrote:
In the last few days my performance has gone to hell. I'm running:
# uname -a
SunOS nissan 5.11 snv_150 i86pc i386 i86pc
(I'll upgrade as soon as the desktop hang bug is fixed.)
In the last few days my performance has gone to hell. I'm running:
# uname -a
SunOS nissan 5.11 snv_150 i86pc i386 i86pc
(I'll upgrade as soon as the desktop hang bug is fixed.)
The performance problems seem to be due to excessive I/O on the main
disk/pool.
The only things I've changed recent
Running zdb on my broken system, one of the things I see is the hostname.
I'm not sure why zfs needs to know about the hostname of the system it's
on, but...
The thing I did that started all my problems was I changed the hostname
of my system. Do I need to do something with zfs to tell it the new
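For what it's worth, a hedged way to see what hostname the pool label currently records, and to refresh it, might be something like this (assuming a pool named "tank"; the idea that an export/import rewrites the hostname into the label is my assumption, not something I've verified):
zdb -C tank | grep hostname
zpool export tank
zpool import tank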
I upgraded my machine to snv_101a_rc1 and now that machine is broken.
I described my problem here
http://www.opensolaris.org/jive/thread.jspa?threadID=81928&tstart=0
and here
http://www.opensolaris.org/jive/thread.jspa?threadID=80625&tstart=0#304281
The problem seems to be some low level zfs files
Bill Shannon wrote:
> If I do something like this:
>
> zfs snapshot [EMAIL PROTECTED]
> zfs send [EMAIL PROTECTED] > tank.backup
> sleep 86400
> zfs rename [EMAIL PROTECTED] [EMAIL PROTECTED]
> zfs snapshot [EMAIL PROTECTED]
> zfs send -I [EMAIL PROTECTED] [EMAIL PROTECTED] > tank.incr
Bill Shannon wrote:
> datsun# zfs recv -d test < d.0
> cannot open 'test/tckuser': dataset does not exist
Despite the error message, the recv does seem to work.
Is it a bug that it prints the error message, or is it a bug that i
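One hedged way to double-check that the receive really took effect despite the message (using the same "test" pool as above):
zfs list -r test
zfs get -r creation test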
I'm trying to figure out how to restore a filesystem using zfs recv.
Obviously there's some important concept I don't understand. I'm
using my zfsdump script to create the dumps that I'm going to restore.
Here's what I tried:
Save a "level 0" dump in d.0:
datsun# zfsdump 0 home/tckuser > d.0
zfs
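For reference, restoring that level-0 stream into a scratch pool should look roughly like the command quoted elsewhere in the thread, i.e. (assuming the target pool is called "test"):
datsun# zfs recv -d test < d.0
datsun# zfs list -r test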
I wrote a simple script to dump zfs snapshots in a similar way to
what I did with ufsdump. In particular, it supports dump levels.
This is still very crude, but I'd appreciate comments. Does what
I'm doing make sense, or do I misunderstand zfs?
Thanks.
-
#!/bin/ksh
#
# XXX - real option parsing
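For readers who don't have the script handy, here is a minimal sketch of the level-based idea, assuming snapshots named fs@level.N and incrementals taken against the previous level (this is my illustration, not the posted script):
#!/bin/ksh
# sketch: dump a zfs filesystem at a given level, ufsdump-style
# usage: zfsdump <level> <filesystem>   (naming scheme is an assumption)
level=$1
fs=$2
snap="$fs@level.$level"
zfs snapshot "$snap" || exit 1
if [ "$level" -eq 0 ]; then
        # level 0: send the full stream to stdout
        zfs send "$snap"
else
        # level N: incremental against the level N-1 snapshot,
        # which is assumed to still exist
        prev="$fs@level.$(($level - 1))"
        zfs send -i "$prev" "$snap"
fi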
Is there a zfs recv-like command that will list a table of contents
for what's in a stream?
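One hedged answer: a dry-run receive will at least list the datasets and snapshots a stream would create, without writing anything (the "test" target is just a placeholder):
zfs recv -n -v -d test < d.0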
Jonathan Edwards wrote:
>
> On Mar 14, 2008, at 3:28 PM, Bill Shannon wrote:
>> What's the best way to backup a zfs filesystem to tape, where the size
>> of the filesystem is larger than what can fit on a single tape?
>> ufsdump handles this quite nicely. Is there a
I just wanted to follow up on this issue I raised a few weeks ago.
With help from several of you, I had all the information and tools
I needed to start debugging my problem. Which of course meant that
my problem disappeared!
At one point my theory was that ksh93 was updating my .history file
per
What's the best way to backup a zfs filesystem to tape, where the size
of the filesystem is larger than what can fit on a single tape?
ufsdump handles this quite nicely. Is there a similar backup program
for zfs? Or a general tape management program that can take data from
a stream and split it a
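Lacking a ufsdump-style multi-volume mode, one hedged workaround is to split the send stream into tape-sized chunks, write each chunk to its own tape, and concatenate them again on restore (the sizes, paths, and names below are made up):
zfs send tank@full | split -b 2000m - /var/tmp/tank.full.
(write each /var/tmp/tank.full.* chunk to a separate tape, e.g. with dd)
cat /var/tmp/tank.full.* | zfs recv -d newpool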
Darren J Moffat wrote:
> I know this isn't answering the question but rather than using "today"
> and "yesterday" why not not just use dates ?
Because then I have to compute yesterday's date to do the incremental dump.
I don't suppose I can create symlinks to snapshots in order to give them
mult
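For what it's worth, one hedged way to compute yesterday's date on Solaris without GNU date is the TZ-offset trick; I believe this works with /usr/bin/date, but treat it as an assumption, and note that both values below are relative to GMT rather than local time:
yesterday=`TZ=GMT+24 date +%Y-%m-%d`   # 24 hours behind GMT, i.e. yesterday
today=`TZ=GMT date +%Y-%m-%d`          # same reference point as above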
If I do something like this:
zfs snapshot [EMAIL PROTECTED]
zfs send [EMAIL PROTECTED] > tank.backup
sleep 86400
zfs rename [EMAIL PROTECTED] [EMAIL PROTECTED]
zfs snapshot [EMAIL PROTECTED]
zfs send -I [EMAIL PROTECTED] [EMAIL PROTECTED] > tank.incr
Am I going to be able to restore the streams?
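For the record, my understanding (hedged) is that restoring is just a matter of receiving the full stream and then the incremental on top of it, and that zfs recv matches the incremental's source snapshot by GUID rather than by name, so the rename in between shouldn't matter; that last part is exactly what's worth testing. Something like this, where the target name is a placeholder:
zfs recv tank/restore < tank.backup
zfs recv tank/restore < tank.incr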
Roland Mainz wrote:
> Bill Shannon wrote:
>> Roland Mainz wrote:
>>> What's the exact filename and how often are the accesses ? Is this an
>>> interactive shell or is this a script (an interactive shell session will
>>> do periodical lookups for things like the MAIL*-variables (see ksh(1) and ksh93(1) manual pages)
Roland Mainz wrote:
> What's the exact filename and how often are the accesses ? Is this an
> interactive shell or is this a script (an interactive shell session will
> do periodical lookups for things like the MAIL*-variables (see ksh(1)
> and ksh93(1) manual pages) while scripts may do random stu
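A hedged way to answer the "exact filename" question for a running shell is to look at its open files and watch its lookups directly (the pid is whatever ksh instance is suspect):
pfiles <pid>                           # files the shell already has open
truss -t open,stat,stat64 -p <pid>     # new opens and lookups as they happen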
Jonathan Edwards wrote:
>
> On Mar 1, 2008, at 4:14 PM, Bill Shannon wrote:
>> Ok, that's much better! At least I'm getting output when I touch files
>> on zfs. However, even though zpool iostat is reporting activity, the
>> above program isn't showing an
Roch Bourbonnais wrote:
>>> this came up sometime last year .. io:::start won't work since ZFS
>>> doesn't call bdev_strategy() directly .. you'll want to use something
>>> more like zfs_read:entry, zfs_write:entry and zfs_putpage or zfs_getpage
>>> for mmap'd ZFS files
>>
>
> Ed:
> That's not ent
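For context, a one-liner along the lines suggested above might look like this (the fbt probe names come from the suggestion; aggregating by process name is my choice):
dtrace -n 'fbt::zfs_read:entry,fbt::zfs_write:entry { @[execname] = count(); }'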
Jonathan Edwards wrote:
>
> On Mar 1, 2008, at 3:41 AM, Bill Shannon wrote:
>> Running just plain "iosnoop" shows accesses to lots of files, but none
>> on my zfs disk. Using "iosnoop -d c1t1d0" or "iosnoop -m
>> /export/home/shannon"
>>
Bob Friesenhahn wrote:
> On Sat, 1 Mar 2008, Bill Shannon wrote:
>>
>> Curiously, when I came in to my office this morning, I didn't hear my
>> disk making noise. It wasn't until after I unlocked the screen that
> the noise started, which makes me think
Marty Itzkowitz wrote:
> Interesting problem. I've used disk rattle as a measurement of io
> activity before
> there were such tools for measurement. It's crude, but effective.
>
> To answer your question: you could try er_kernel. It uses DTrace to
> do statistical callstack sampling, and is d
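As a rough DTrace analogue of that kind of statistical callstack sampling (a sketch, not er_kernel itself): sample kernel stacks with the profile provider, let it run for a while, then interrupt it to see the hottest stacks:
dtrace -n 'profile-997 /arg0/ { @[stack()] = count(); }'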
I recently converted my home directory to zfs on an external disk drive.
Approximately every three seconds I can hear the disk being accessed,
even if I'm doing nothing. The noise is driving me crazy!
I tried using dtrace to find out what process might be accessing the
disk. I used the iosnoop p
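As a cruder cross-check that the three-second activity is really hitting that pool, a per-vdev view of all pools every three seconds might line up with the noise:
zpool iostat -v 3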
I've just started using zfs. I copied data from a ufs filesystem on
disk 1 to a zfs pool/filesystem on disk 2. Can I add disk 1 as a mirror
for disk 2, and then remove disk 2 from the mirror, and end up with all
the data back on disk 1 in zfs (after some amount of time, of course)?
If disk 1 is l
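That kind of attach/resilver/detach migration should work, roughly like this (pool and device names are placeholders; wait for the resilver to finish before detaching):
zpool attach tank c2t0d0 c1t0d0   # add disk 1 (c1t0d0) as a mirror of disk 2 (c2t0d0)
zpool status tank                 # repeat until the resilver completes
zpool detach tank c2t0d0          # drop disk 2; the data now lives on disk 1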