> I believe that PostgreSQL has been developed and optimized for
> sequential access. To get full advantage of SSDs it's necessary to
> rewrite almost the whole project - there is so much code written with
> the sequential mechanism in mind.
You can believe whatever you want; that doesn't make it true. Unless you have some kind of hard data showing that SSD data access is somehow *qualitatively* different from SAS data access, you're just engaging in idle water-cooler speculation. Plenty of vendors launched products based on the supposed "revolutionary" nature of SSDs when they first came out. All have failed.

SSDs are just faster disks, that's all. Their ratio of random-access to sequential cost might be less than 4.0, but it's not 1.0. Heck, even RAM isn't 1.0. I'm also involved with the Redis project, which is an in-memory database. Even for a pure-RAM database, it turns out that just using linked lists and 100% random access is slower than accessing page images.

I use SSDs for many PostgreSQL instances. They work great. No changes to PostgreSQL were required other than adjusting random_page_cost down to 2.0 (that number could use exhaustive testing, but it seems to work pretty well right now).

--
Josh Berkus
PostgreSQL Experts Inc.
http://www.pgexperts.com
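[Editor's note: a minimal sketch of the adjustment described above, for reference. The default random_page_cost is 4.0; 2.0 is the value reported in the post, not a universally tuned figure, and the tablespace name in the second variant is purely illustrative.]

    # postgresql.conf -- planner cost of a random page fetch, default 4.0.
    # 2.0 is the value reported above for SSD-backed instances; reload
    # the server after editing and re-test against your own workload.
    random_page_cost = 2.0

    -- Alternatively, scope the override to SSD-backed tablespaces only
    -- (tablespace name is hypothetical):
    ALTER TABLESPACE ssd_space SET (random_page_cost = 2.0);

    -- Verify the effective value for the current session:
    SHOW random_page_cost;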