On Thu, Sep 8, 2016 at 2:35 AM, dandl <da...@andl.org> wrote:
> I understand that. What I'm trying to get a handle on is the magnitude of
> that cost and how it influences other parts of the product, specifically
> for Postgres. If the overhead for perfect durability were (say) 10%, few
> people would care about the cost. But Stonebraker puts the figure at 2500%!
> His presentation says that a pure relational in-memory store can beat a row
> store with disk fully cached in memory by 10x to 25x. [Ditto column stores
> beat row stores by 10x for complex queries in non-updatable data.]
VoltDB replication is synchronous within a cluster/data center, and asynchronous to a remote cluster/data center. As a consequence, if your application needs to survive a data center power failure with zero data loss, you have to enable VoltDB's synchronous command logging (which, by the way, is only available in the Enterprise Edition, not the Community Edition). When Stonebraker says VoltDB's throughput is 10-25x greater, I'd guess that is with no command logging at all and no periodic snapshotting. (A rough sketch of the deployment settings involved is at the end of this mail.)

> So my question is not to challenge the Postgres way. It's simply to ask
> whether there are any known figures that would directly support or refute
> his claims. Does Postgres really spend 96% of its time in thumb-twiddling
> once the entire database resides in memory?

Alas, I've been unable to find any relevant benchmark. I'm not motivated enough to install PostgreSQL and VoltDB and try it for myself :-)
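For reference, both knobs live in VoltDB's deployment file. From memory (so attribute spellings may be slightly off, and synchronous="true" only works in the Enterprise Edition), a "survive a power failure with zero data loss" configuration looks roughly like this:

    <?xml version="1.0"?>
    <deployment>
        <!-- kfactor="1" keeps one synchronous replica of each
             partition inside the cluster -->
        <cluster hostcount="2" sitesperhost="8" kfactor="1"/>
        <!-- synchronous command logging: a transaction is not
             acknowledged to the client until the log is on disk -->
        <commandlog enabled="true" synchronous="true"/>
        <!-- periodic snapshots bound how much command log has to be
             replayed on recovery -->
        <snapshot prefix="auto" frequency="30m" retain="3" enabled="true"/>
    </deployment>

Turning the command log and snapshots off is presumably the configuration behind the 10-25x numbers.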
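For anyone more motivated than me: a crude way to at least bracket the durability cost on the Postgres side is to run the same pgbench workload with durability progressively relaxed. Flags and numbers below are illustrative, not a tuned benchmark; pick -s so the whole data set fits in RAM, since that's the scenario Stonebraker is talking about.

    # build a pgbench database (roughly 1.5 GB at scale factor 100)
    pgbench -i -s 100 bench

    # 1. full durability: synchronous_commit = on (the default)
    pgbench -c 16 -j 4 -T 120 bench

    # 2. relaxed: a crash may lose the last few hundred ms of commits,
    #    but the database stays consistent
    psql -d bench -c "ALTER SYSTEM SET synchronous_commit = off"
    psql -d bench -c "SELECT pg_reload_conf()"
    pgbench -c 16 -j 4 -T 120 bench

    # 3. unsafe, for measurement only: skip fsync entirely
    psql -d bench -c "ALTER SYSTEM SET fsync = off"
    psql -d bench -c "SELECT pg_reload_conf()"
    pgbench -c 16 -j 4 -T 120 bench

The gap between run 1 and run 3 is an upper bound on what this workload spends on durability. It still says nothing about the other half of Stonebraker's claim, i.e. an in-memory data layout vs. a row store going through the buffer cache.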