[zfs-discuss] Dreadful lofi read performance on snv_111

2009-04-06 Thread John Levon

On OpenSolaris snv_111; heaped (the NFS server used below) is running
snv_101b, both on ZFS:

# mount -F hsfs /rpool/dc/media/OpenSolaris.iso /mnt
# ptime cp /mnt/boot/boot_archive /var/tmp

real     3:31.453461873
user        0.003283729
sys         0.376784567
# mount -F hsfs /net/heaped/export/netimage/opensolaris/vnc-fix.iso /mnt2
# ptime cp /mnt2/boot/boot_archive /var/tmp

real        1.442180764
user        0.004013447
sys         0.442550604
# mount -F hsfs /net/localhost/rpool/dc/media/OpenSolaris.iso /mnt3
# ptime cp /mnt3/boot/boot_archive /var/tmp

real     3:41.182920499
user        0.004244172
sys         0.430159730
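
One way I could try to separate the lofi layer from hsfs (a rough sketch; the
/dev/lofi/1 device number is just whatever lofiadm happens to assign) is to
attach the image explicitly and read the raw lofi device directly:

# lofiadm -a /rpool/dc/media/OpenSolaris.iso
/dev/lofi/1
# ptime dd if=/dev/rlofi/1 of=/dev/null bs=1024k count=100
# lofiadm -d /dev/lofi/1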

I see a couple of bugs about lofi performance, like 6382683, but I'm not sure
if this is related; it seems to be a newer issue.

Any ideas?

regards
john
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dreadful lofi read performance on snv_111

2009-04-06 Thread John Levon
On Mon, Apr 06, 2009 at 04:46:12PM +0700, Fajar A. Nugraha wrote:

> On Mon, Apr 6, 2009 at 4:41 PM, John Levon  wrote:
> > I see a couple of bugs about lofi performance, like 6382683, but I'm not
> > sure if this is related; it seems to be a newer issue.
> 
> Isn't it 6806627?
> 
> http://opensolaris.org/jive/thread.jspa?threadID=98043&tstart=0

Ah, I thought that made it into 111, but it sounds like it's going into the
respin instead - should have checked.

thanks
john
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dreadful lofi read performance on snv_111

2009-04-06 Thread Fajar A. Nugraha
On Mon, Apr 6, 2009 at 4:41 PM, John Levon  wrote:
> I see a couple of bugs about lofi performance, like 6382683, but I'm not
> sure if this is related; it seems to be a newer issue.

Isn't it 6806627?

http://opensolaris.org/jive/thread.jspa?threadID=98043&tstart=0

Regards,

Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Error ZFS-8000-9P

2009-04-06 Thread Jens Elkner
On Fri, Apr 03, 2009 at 10:41:40AM -0700, Joe S wrote:
> Today, I noticed this:
...
> According to http://www.sun.com/msg/ZFS-8000-9P:
> 
> The Message ID: ZFS-8000-9P indicates a device has exceeded the
> acceptable limit of errors allowed by the system. See document 203768
> for additional information.
...
I had the same thing on a thumper with S10u6 1-2 months ago. Since the logs
did not show any disk error/warning for the last 6 months, I just cleared the
pool, scrubbed it, and finally put the temporarily-used hot spare back into
the spare pool. No errors or warnings for that disk since then, so it was
obviously a false/brain-damaged alarm ...
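
For the record, the recovery was roughly along these lines (a sketch only; the
pool and device names below are invented, not the thumper's actual ones):

# zpool clear tank c1t5d0     # reset the error counters on the flagged disk
# zpool scrub tank            # re-verify the pool
# zpool status -x             # wait for the scrub to finish cleanly
# zpool detach tank c2t7d0    # return the in-use hot spare to the spare pool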

regards,
jel.
-- 
Otto-von-Guericke University http://www.cs.uni-magdeburg.de/
Department of Computer Science   Geb. 29 R 027, Universitaetsplatz 2
39106 Magdeburg, Germany Tel: +49 391 67 12768
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Efficient backup of ZFS filesystems?

2009-04-06 Thread Gary Mills
I've been watching the ZFS ARC cache on our IMAP server while the
backups are running, and also when user activity is high.  The two
seem to conflict.  Fast response for users seems to depend on their
data being in the cache when it's needed.  Most of the disk I/O seems
to be writes in this situation.  However, the backup needs to stat
all files and read many of them.  I'm assuming that all of this
information is also added to the ARC cache, even though it may never
be needed again.  It must also evict user data from the cache, causing
it to be reloaded every time it's needed.
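
(For anyone who wants to watch the same thing: the ARC kstats make this easy
to observe.  A minimal sketch, sampled repeatedly while the backup runs:

# kstat -p zfs:0:arcstats:size zfs:0:arcstats:hits zfs:0:arcstats:misses

Tracking the hit/miss counters and the cache size over time should show
whether the backup really is evicting user data.)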

We use Networker for backups now.  Is there some way to configure ZFS
so that backups don't churn the cache?  Is there a different way to
perform backups to avoid this problem?  We do keep two weeks of daily
ZFS snapshots to use for restores of recently-lost data.  We still
need something for longer-term backups.

-- 
-Gary Mills--Unix Support--U of M Academic Computing and Networking-
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] jigdo or lofi can crash nfs+zfs

2009-04-06 Thread Frank Middleton

These problems both occur when accessing a ZFS dataset from
Linux (FC10) via NFS.

Jigdo is a fairly new bit-torrent-like downloader. It is not
entirely bug free, and the one time I tried it, it recursively
downloaded one directory's worth until ZFS eventually sort
of died. It put all the disks into error, and even the (UFS)
root disks became unreadable. It took a reboot to free everything
up and some twiddling to get ZFS going again. I really don't
want to even try to reproduce this! With 4GB physical, 10GB swap,
and almost 3TB of raidz, it probably didn't run out of memory
or disk space. There wasn't room on the boot disks to save the
crash dump after halt, sync. Is there any point in submitting
a bug report, and if so, what would you call it?

Is there a practical way to force the crash dump to go to a ZFS
dataset instead of the UFS boot disks?
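
I'm imagining something along these lines (a sketch only; it assumes the build
supports zvol dump devices, and the size and dataset name are made up):

# zfs create -V 4g -b 128k rpool/dump2   # dump zvols apparently use a 128K block size
# dumpadm -d /dev/zvol/dsk/rpool/dump2
# dumpadm                                # confirm the new dump device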

Also, there is a reasonably reproducible problem that causes
a panic doing an NFS network install when the DVD image is copied
to a ZFS dataset on snv103. I submitted this as a bug report to
bugs.opensolaris.org, and it was acknowledged, but then it vanished.
This is actually an NFS/ZFS problem, so maybe it was applied
against the wrong group, or perhaps this was a transition issue.
I wasn't able to get a crash core saved because there wasn't
enough space on the boot (UFS) disks. I do have the panic traces
for the 3 times I reproduced this. Should this be resubmitted to
defect.opensolaris.org, and if so, against what? This problem
doesn't happen if the DVD image is itself mounted via NFS, or
is on a UFS partition.



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Bad SWAP performance from zvol

2009-04-06 Thread Lori Alt

I'm not sure where this issue stands now (am just now checking
mail after being out for a few days), but here are the block sizes
used when the install software creates swap and dump zvols:

swap:  block size is set to PAGESIZE  (4K for x86, 8K for sparc)
dump:  block size is set to 128 KB

Liveupgrade should use the same characteristics, though I think
there was a bug at one time where it did not.
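
A quick way to check this, and to recreate a swap zvol with the expected block
size, would be something like the following (a sketch; the 4g size and the
standard rpool dataset names are assumptions, and it requires that the zvol
swap can be deleted while the system is running):

# zfs get volblocksize rpool/swap rpool/dump
# swap -d /dev/zvol/dsk/rpool/swap
# zfs destroy rpool/swap
# zfs create -V 4g -b `pagesize` rpool/swap    # PAGESIZE: 4K x86, 8K sparc
# swap -a /dev/zvol/dsk/rpool/swap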

If that does not improve dump/swap zvol performance, further
investigation should be done.  Perhaps file a bug.

Lori


On 03/31/09 03:02, casper@sun.com wrote:

I've upgraded my system from ufs to zfs (root pool).

By default, it creates a zvol for dump and swap.

It's a 4GB Ultra-45 and every late night/morning I run a job which takes 
around 2GB of memory.


With a zvol swap, the system becomes unusable and the Sun Ray client often 
goes into "26B".


So I removed the zvol swap and now I have a standard swap partition.
The performance is much better (night and day).  The system is usable and
I don't even notice that the job is running.


Is this expected?

Casper




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss