Hi Bruce

2013/11/20 Bruce Momjian <br...@momjian.us>

> On Sun, Nov 17, 2013 at 09:00:05PM +0900, Michael Paquier wrote:
> > On Sun, Nov 17, 2013 at 8:25 PM, Stefan Keller <sfkel...@gmail.com> wrote:
> > > How can Postgres be used and configured as an In-Memory Database?
> > >
> > > Does anybody know of thoughts or presentations about this "NoSQL feature" -
> > > beyond e.g. "Perspectives on NoSQL" from Gavin Roy at PGCon 2010)?
> > >
> > > Given, say 128 GB memory or more, and (read-mostly) data that fits into
> > > this, what are the hints to optimize Postgres (postgresql.conf etc.)?
> > In this case as you are trading system safety (system will not be
> > crash-safe) for performance... The following parameters would be
> > suited:
> > - Improve performance by reducing the amount of data flushed:
> > fsync = off
> > synchronous_commit=off
> > - Reduce the size of WALs:
> > full_page_writes = off
> > - Disable the background writer:
> > bgwriter_lru_maxpages = 0
> > Regards,
>
> FYI, the Postgres manual covers non-durability settings:
>
>         http://www.postgresql.org/docs/9.3/static/non-durability.html


Thanks for the hint. On 17 November 2013 at 22:26 I referred to the same
document page.
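Put together, the settings Michael listed would end up in postgresql.conf
roughly like this (only a sketch of a non-durable setup; the shared_buffers
value is my own assumption for a 128 GB box and needs tuning):

    # non-durable, read-mostly setup: a crash means data loss!
    fsync = off                   # don't force WAL to disk
    synchronous_commit = off      # commits don't wait for a WAL flush
    full_page_writes = off        # smaller WAL volume
    bgwriter_lru_maxpages = 0     # disable the background writer
    shared_buffers = 32GB         # assumption: keep the working set cached
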
Aside from the config params, it also suggests using a memory-backed file
system (i.e. a RAM disk).
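A minimal sketch of that, assuming a dedicated tmpfs mount (mount point,
size and the table are just placeholders, and everything on the mount is
lost at reboot):

    # as root: memory-backed mount for a throwaway tablespace
    mkdir -p /mnt/pg_ram
    mount -t tmpfs -o size=100G tmpfs /mnt/pg_ram
    chown postgres:postgres /mnt/pg_ram

    # as postgres: tablespace on the RAM disk; an unlogged table skips WAL too
    psql -c "CREATE TABLESPACE ram_ts LOCATION '/mnt/pg_ram'"
    psql -c "CREATE UNLOGGED TABLE hot_data (id int PRIMARY KEY, payload text) TABLESPACE ram_ts"
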
But what I am interested in is how Postgres could be functionally enhanced,
given that the dataset fits into (some big) memory.
If Postgres is aware and assured that the dataset is in memory, doesn't that
lead to a significant speed-up, as Stonebraker, Oracle and SAP affirm?

-S.
