This is a bit of a sidebar to the discussion about getting the
best performance for PostgreSQL from ZFS, but it may affect you
if you're doing sequential scans through the 70GB table or its
segments.

  ZFS's copy-on-write means a table's contents end up spread across
the full width of its stripe. That's arguably a good thing for
transaction-processing performance (or at least it can be), but it
makes sequential table-scan speed degrade.
 
  If you're doing sequential scans over large amounts of data
which isn't changing very rapidly, such as older segments, you
may want to re-sequentialize that data.

 I was talking to one of the Slony developers back when this
first came up, and he suggested a process for doing this in PostgreSQL.

  He suggested doing a CLUSTER operation against a specific index,
then dropping and recreating that index.  CLUSTER rewrites the
relation in the order the index sorts by, which should
defragment/linearize it, and dropping and recreating the index
rewrites the index sequentially too.
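
  As a minimal sketch (the table, index, and column names here are
made up, and the CLUSTER ... USING form is the modern PostgreSQL
syntax):

    -- Rewrite the heap in the index's sort order; note that CLUSTER
    -- takes an ACCESS EXCLUSIVE lock on the table while it runs.
    CLUSTER big_table USING big_table_created_idx;

    -- Drop and recreate the index so it, too, is written out
    -- sequentially.
    DROP INDEX big_table_created_idx;
    CREATE INDEX big_table_created_idx ON big_table (created);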

  Neither he nor I know the cost if the relation has more than one
index: we speculate the others should be dropped before the
clustering and recreated afterwards.
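
  Following that speculation, a sketch with a hypothetical second
index might look like:

    -- Speculative ordering: drop any secondary indexes before
    -- clustering on the primary ordering index ...
    DROP INDEX big_table_owner_idx;
    CLUSTER big_table USING big_table_created_idx;

    -- ... and recreate them last, so each one is written out
    -- sequentially.
    CREATE INDEX big_table_owner_idx ON big_table (owner);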

 --dave
-- 
David Collier-Brown            | Always do right. This will gratify
Sun Microsystems, Toronto      | some people and astonish the rest
[EMAIL PROTECTED]                 |                      -- Mark Twain
(905) 943-1983, cell: (647) 833-9377, (800) 555-9786 x56583
bridge: (877) 385-4099 code: 506 9191#