On snv_111 OpenSolaris; heaped is snv_101b, both on ZFS:
# mount -F hsfs /rpool/dc/media/OpenSolaris.iso /mnt
# ptime cp /mnt/boot/boot_archive /var/tmp
real     3:31.453461873
user        0.003283729
sys         0.376784567
# mount -F hsfs /net/heaped/export/netimage/opensolaris/vnc-fix.iso /mnt2
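(For reference, the implicit lofi mount used above can also be done explicitly; a minimal sketch, assuming lofiadm hands back /dev/lofi/1:)
# lofiadm -a /rpool/dc/media/OpenSolaris.iso
/dev/lofi/1
# mount -F hsfs -o ro /dev/lofi/1 /mnt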
On Mon, Apr 06, 2009 at 04:46:12PM +0700, Fajar A. Nugraha wrote:
> On Mon, Apr 6, 2009 at 4:41 PM, John Levon wrote:
> > I see a couple of bugs about lofi performance like 6382683, but I'm not
> > sure if this is related; it seems to be a newer issue.
>
> Isn't it 6806627?
>
> http://opensolaris.org/jive/thread.jspa?threadID=98043&tstart=0
On Mon, Apr 6, 2009 at 4:41 PM, John Levon wrote:
> I see a couple of bugs about lofi performance like 6382683, but I'm not sure
> if this is related; it seems to be a newer issue.
Isn't it 6806627?
http://opensolaris.org/jive/thread.jspa?threadID=98043&tstart=0
Regards,
Fajar
On Fri, Apr 03, 2009 at 10:41:40AM -0700, Joe S wrote:
> Today, I noticed this:
...
> According to http://www.sun.com/msg/ZFS-8000-9P:
>
> The Message ID: ZFS-8000-9P indicates a device has exceeded the
> acceptable limit of errors allowed by the system. See document 203768
> for additional information.
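A minimal sketch of the usual follow-up to that message, assuming a pool named tank and a device c1t1d0 (both hypothetical here); zpool status shows where the errors accumulated, and zpool clear resets the counters once the device has been checked or replaced:
# zpool status -v tank
# zpool clear tank c1t1d0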
I've been watching the ZFS ARC cache on our IMAP server while the
backups are running, and also when user activity is high. The two
seem to conflict. Fast response for users seems to depend on their
data being in the cache when it's needed. Most of the disk I/O seems
to be writes in this situation.
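One way to watch the ARC while this is going on is the arcstats kstat; the statistic names below are the standard ones, though correlating hits/misses with the backup window is left as an exercise:
# kstat -p zfs:0:arcstats:size zfs:0:arcstats:hits zfs:0:arcstats:misses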
These problems both occur when accessing a ZFS dataset from
Linux (FC10) via NFS.
Jigdo is a fairly new bit-torrent-like downloader. It is not
entirely bug free, and the one time I tried it, it recursively
downloaded one directory's worth until ZFS eventually sort
of died. It put all the disks in
I'm not sure where this issue stands now (am just now checking
mail after being out for a few days), but here are the block sizes
used when the install software creates swap and dump zvols:
swap: block size is set to PAGESIZE (4K for x86, 8K for sparc)
dump: block size is set to 128 KB
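A hedged sketch of creating zvols with those block sizes by hand (the sizes and the rpool names are assumptions, and the installer sets additional properties beyond the block size):
# zfs create -V 2G -b 4k rpool/swap
# zfs create -V 1G -b 128k rpool/dump
# zfs get volblocksize rpool/swap rpool/dump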
Liveup