Hello,

I received the following question from a company I am working with:

We are having issues in our early experiments with ZFS, using volumes mounted from a 6130.

Here is what we have and what we are seeing:

  - A T2000 (geronimo) on the fibre with a 6130.
  - The 6130 is configured with UFS volumes mapped and mounted on several other hosts; geronimo is the only host using ZFS (a single volume/filesystem configured).

When I attempt to load the volume from backup, memory is consumed at a very high rate on the host with the ZFS filesystem mounted, and disk latencies on all hosts connected through the fibre to the 6130 increase to the point where performance problems are noted.
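
(For the latency side, we can watch per-device service times on each host with standard Solaris iostat, e.g.:

    iostat -xnz 5

The asvc_t column is the average service time per device in milliseconds, which should make the increase visible host by host.)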

Our monitoring system eventually got blocked, I assume due to resource starvation; either the machine was thrashing or waiting for I/O. Before the system hung, I looked at memory allocation using kdb and saw that anonymous allocations were responsible for by far the biggest chunk. Also, when the backup is suspended, the memory is not freed. Eventually the server hung and rebooted (perhaps due to the Oracle cluster-health mechanism - I won't blame ZFS for that ;-).
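
(In case the raw numbers help, here is roughly how I plan to pull them next time - a sketch, assuming mdb -k and the zfs arcstats kstat are available on this build:

    # kernel memory breakdown by category
    echo "::memstat" | mdb -k

    # ARC statistics; "size" is the current cache size, "c_max" its ceiling
    kstat -m zfs -n arcstats

If the arcstats "size" figure tracks the memory that is not freed when the backup is suspended, that would point at the ARC rather than true anonymous memory.)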

I suspect a ZFS caching issue. I was directed to this doc (http://blogs.digitar.com/jjww/?itemid=44). It sort of addresses the issue we have encountered, but I'd rather get the news from you guys.
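
(If capping the ARC is in fact the recommended workaround, is it as simple as the /etc/system entry below? This is just my sketch from reading that doc - I'm assuming our build supports the zfs_arc_max tunable; the 1 GB value is purely illustrative, and I gather older builds need the cap set through mdb instead:

    * /etc/system: cap the ZFS ARC at 1 GB (0x40000000); reboot required
    set zfs:zfs_arc_max = 0x40000000

I'm happy to test whatever value you suggest on the preproduction box first.)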

How shall I proceed? I have a system I can use and abuse in preproduction for this purpose. We need to load a terabyte into a production ZFS filesystem without pulling down everyone on the fibre...



Please respond to me directly as well as to the alias, as I am not subscribed yet.

Thanks,
Jeff