On Thu, 13 Dec 2007, Gregory Stark wrote:
> Note that even though the processor is 99% in wait state the drive is
> only handling about 3 MB/s. That translates into a seek time of 2.2ms,
> which is actually pretty fast... But note that if this were a RAID
> array Postgres wouldn't be getting any better results. A RAID array
> wouldn't improve I/O latency at all, and since it's already 99%
> waiting for I/O, Postgres is not going to be able to issue any more.
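Before getting to the RAID question, here's that arithmetic spelled out as a rough back-of-the-envelope sketch; the 8 kB write size is my assumption, and the 2.2ms figure was presumably computed from the exact vmstat numbers rather than a round 3 MB/s:

    # If every block written costs one seek, the observed throughput
    # implies an average per-write latency (i.e. effective seek time).
    block_kb = 8.0           # assumed write size, one Postgres page
    throughput_mb_s = 3.0    # roughly what vmstat reported

    writes_per_sec = throughput_mb_s * 1024.0 / block_kb
    latency_ms = 1000.0 / writes_per_sec
    print("%.0f writes/s -> about %.1f ms per write" % (writes_per_sec, latency_ms))

With these rounded inputs it comes out closer to 2.6ms than 2.2ms, but either way it's well under a full random seek plus rotational latency on a regular drive, which was the point.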
If it's a straight stupid RAID array, sure. But when you introduce a good write caching controller into the mix, it can batch multiple writes together, take better advantage of elevator sorting, and accomplish more writes per seek. Combine that improvement with multiple drives and the PITR performance situation becomes very different; you really can get more than one drive in the array busy at a time. It's also true that you won't see everything that's happening via vmstat, because the controller is doing the low-level dispatching.
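Here's a toy model of why coalescing writes helps so much. The numbers are mine: the ~2.2ms effective per-seek latency from above, plus an assumed 60 MB/s sequential transfer rate:

    # Toy model: each flush costs one seek, then however many 8 kB blocks
    # the controller managed to coalesce go out sequentially.
    seek_ms = 2.2            # effective per-seek latency from the thread
    transfer_mb_s = 60.0     # assumed sequential transfer rate
    block_mb = 8.0 / 1024.0

    for writes_per_seek in (1, 4, 16, 64):
        data_mb = writes_per_seek * block_mb
        total_s = seek_ms / 1000.0 + data_mb / transfer_mb_s
        print("%3d writes/seek -> %5.1f MB/s" % (writes_per_seek, data_mb / total_s))

One write per seek gives you the ~3 MB/s above; let the controller push a few dozen writes out per seek and you're most of the way to the drive's sequential rate, before even counting the extra spindles.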
I'll try to find time to replicate the test Tom suggested, as I think my system is about middle ground between his and Joshua's. In general I've never been able to get any interesting write throughput results at all without at least a modest caching controller in there. Just like Tom's results, with a regular ol' drive everything gets seek-bottlenecked, WIO goes high, and it looks like I've got all the CPU in the world. I run a small Areca controller with 3 drives on it (OS+DB+WAL) at home just to get reasonably close to a real server.
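If anyone wants a quick way to see that single-drive bottleneck for themselves, the simplest thing is a loop of small writes with an fsync after each one. This is just my own sketch, not the exact test Tom was talking about:

    import os, time

    BLOCK = b"x" * 8192      # roughly one WAL page
    COUNT = 1000

    fd = os.open("fsync-test.dat", os.O_WRONLY | os.O_CREAT, 0o600)
    start = time.time()
    for i in range(COUNT):
        os.write(fd, BLOCK)
        os.fsync(fd)         # force it to the platter (or the controller's cache)
    elapsed = time.time() - start
    os.close(fd)
    os.unlink("fsync-test.dat")

    print("%.0f fsyncs/s, %.2f ms each" % (COUNT / elapsed, elapsed / COUNT * 1000.0))

On a plain drive that honors fsync you'll see something close to the disk's rotation rate; put a battery-backed cache in front of it and the number jumps by an order of magnitude, which is the sort of difference being discussed here.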
--
* Greg Smith  [EMAIL PROTECTED]  http://www.gregsmith.com  Baltimore, MD