I had to let this go and get on with testing DB2 on Solaris, so I abandoned zfs on local discs in x64 Solaris 10 5/08.

The situation was that:

   * DB2 buffer pools occupied up to 90% of 32GB RAM on each host
   * DB2 cached the entire database in its buffer pools
         o having the file system cache the same data again was
           not helpful
   * running high-load DB2 tests for 2 weeks showed 100% file-system
     writes and practically no reads (easy to confirm with the
     commands after this list)
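
For what it's worth, that read/write mix is easy to confirm with the
stats tools bundled with Solaris 10; "dbpool" here is just an example
pool name:

   zpool iostat dbpool 5   # per-pool read/write operations and bandwidth
   fsstat zfs 5            # per-fstype read/write operation counts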

Having the database tables on zfs meant launching applications took minutes instead of under a second.

The test configuration I ended up with was:

   * transaction logs worked well on zfs on SAN with compression
     (nearly 1 TB of logs)
   * database tables worked well on ufs with directio to multiple SCSI
     discs on each of 4 hosts (using the DB2 database partitioning
     feature); example commands follow this list
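
Something like the following reproduces that layout; the pool,
dataset, device and mount-point names are placeholders:

   # transaction logs: compressed zfs dataset on a SAN LUN
   zpool create logpool c2t0d0
   zfs create -o compression=on logpool/db2logs

   # database tables: ufs with directio on a local SCSI disc
   newfs /dev/rdsk/c1t1d0s0
   mount -F ufs -o forcedirectio /dev/dsk/c1t1d0s0 /db2/data1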

I mention DIRECTIO only because it already provides a reasonable set of hints to the OS (sketched in code after this list):

   * reads and writes need not be cached
   * write() should not return until data is in non-volatile storage
         o DB2 has multiple concurrent write() threads to optimize this
           strategy
   * I/O will usually be in whole blocks aligned on block boundaries
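
As a minimal sketch of those hints from an application's point of
view, assuming the Solaris directio(3C) call; the file name and block
size are only examples, not what DB2 actually does:

   #include <fcntl.h>
   #include <sys/fcntl.h>        /* directio(), DIRECTIO_ON */
   #include <stdio.h>
   #include <stdlib.h>           /* memalign() */
   #include <string.h>
   #include <unistd.h>

   #define BLKSZ 8192            /* I/O in whole blocks of this size */

   int main(void)
   {
       /* O_DSYNC: write() does not return until the data is in
        * non-volatile storage */
       int fd = open("/db2/data1/container0", O_RDWR | O_DSYNC);
       if (fd < 0) { perror("open"); return 1; }

       /* advise the OS that reads and writes need not be cached */
       if (directio(fd, DIRECTIO_ON) != 0) {
           perror("directio"); return 1;
       }

       /* one whole block, aligned in memory and on a block boundary */
       void *buf = memalign(BLKSZ, BLKSZ);
       if (buf == NULL) return 1;
       memset(buf, 0, BLKSZ);
       if (pwrite(fd, buf, BLKSZ, 0) != BLKSZ) {
           perror("pwrite"); return 1;
       }

       free(buf);
       close(fd);
       return 0;
   }

DB2 runs several such writers concurrently, so the device queue stays
full even though each individual write() is synchronous.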

As an aside: it should be possible to tell zfs that a device's cache is non-volatile (e.g. on a SAN) and does not need flushing. Otherwise the SAN must be configured to ignore all cache flush commands.
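
Until then, the closest workaround I know of is the global
zfs_nocacheflush tunable in /etc/system; note it disables flushes for
every pool on the host, so it is only safe when all device caches are
non-volatile:

   set zfs:zfs_nocacheflush = 1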