On 17/11/2013 19:26, Stefan Keller wrote:
Hi Edson
As Rob wrote: having a feature like the in-memory tables SQLite has
[1] would make application caches obsolete, and that is interesting to
discuss (but it wasn't exactly what I asked above).
Hi, Stefan,
I don't think any feature you add to the database server would make app
server caches obsolete: app server caches simply have no lag at all:
1) They need no network connection to the database server.
2) They don't need to materialize results (for instance, I have in mind a
Java or .NET app server keeping hundreds of thousands of objects in memory).
IMHO, no matter how much you improve the database, app server caches
provide an additional level of speed that cannot be achieved by the database.
That said, I can still see room for huge improvements in the database server.
Strong in-memory operation would bring substantial gains.
For instance, if you had an in-memory database (tables, indexes, etc.) for
all sorts of queries, and just **committed** to disk, then you would have
unprecedented performance.
I would benefit from this architecture, since a typical customer
database is < 64 GB in size (after 2 or 3 years of data recording). So
a database server with 64 GB of memory would keep everything in memory
and just commit data to disk.
In this case, committed data would be instantly available to queries
(because it is all in memory), while the log (the changes) is recorded on a
fast disk (an SSD, perhaps) and those changes are then made persistent by
writing them asynchronously to slow, high-capacity disks (SCSI or SAS).
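A sketch of how this split could be approximated with settings that already exist (values are purely illustrative, not a recommendation; note that synchronous_commit = off trades the last few transactions for speed, as described in [2]):

```ini
# postgresql.conf sketch -- illustrative values for a ~64 GB working set
shared_buffers = 48GB               # keep most of the data in memory
effective_cache_size = 60GB         # tell the planner the data is cached
synchronous_commit = off            # commit returns before WAL reaches disk;
                                    # a crash may lose recent transactions,
                                    # but never corrupts the database
wal_buffers = 16MB
checkpoint_segments = 64            # spread checkpoint I/O to the slow disks
checkpoint_completion_target = 0.9
# Put the WAL on the fast disk, e.g. by mounting an SSD at $PGDATA/pg_xlog
# (or symlinking that directory to an SSD volume).
```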
This would also allow a hybrid mode of operation (keeping as many data
pages as possible in memory, with a target of 50% or more in memory).
When the database server starts, it could either load lazily (data is
loaded and kept in memory as it is used) or eagerly (slower startup but
faster execution).
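For the eager-load case, something close to this exists as the pg_prewarm contrib module (it shipped with PostgreSQL 9.4, shortly after this thread); assuming it is installed, the table name here is hypothetical:

```sql
-- Eager load: pull a table and its index into shared_buffers after startup.
-- Requires the pg_prewarm contrib module (PostgreSQL 9.4+).
CREATE EXTENSION pg_prewarm;
SELECT pg_prewarm('customer');        -- hypothetical table name
SELECT pg_prewarm('customer_pkey');   -- indexes must be prewarmed separately
```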
Maybe I'm just speculating too much, since I don't know the PostgreSQL
internals...
Regards,
Edson
--Stefan
[1] http://www.sqlite.org/inmemorydb.html
[2] http://www.postgresql.org/docs/9.1/static/non-durability.html
2013/11/17 Edson Richter <edsonrich...@hotmail.com>
On 17/11/2013 12:15, rob stone wrote:
On Sun, 2013-11-17 at 12:25 +0100, Stefan Keller wrote:
How can Postgres be used and configured as an in-memory database?
Does anybody know of thoughts or presentations about this "NoSQL
feature" - beyond e.g. "Perspectives on NoSQL" (Gavin Roy at PGCon
2010)?
Given, say, 128 GB of memory or more, and (read-mostly) data that
fits into it, what are the hints for optimizing Postgres
(postgresql.conf etc.)?
-- Stefan
Not as being completely "in memory".
Back in the "good ol' days" of DMS II (written in Algol, running on
Burroughs mainframes) and Linc II (also Burroughs) it was possible to
define certain tables as memory resident. This was useful for
low-volatility data such as salutations, street types, county or
state codes, time zones, preferred languages, etc.
It saved disk I/O twice: firstly when building your drop-down lists,
and secondly when the entered data hit the server and was validated.
It would be good to have a similar feature in PostgreSQL.
If a table was altered by, say, inserting a new street type, then the
database engine has to refresh the cache. This is the only overhead.
Cheers,
Robert
For this purpose (building drop-down lists: salutations, street
types, county or state codes), I keep a permanent data cache on the
app server side (after all, it will be shared among all users -
even in a multi-tenant application). This avoids network round
trips and keeps database server memory available for database
operations (like reporting and transactions).
But I agree there is a lot to be gained from having an "in memory"
option for tables and the like. I believe PostgreSQL already does that
automatically (the most used tables are kept in the memory cache).
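That automatic caching can be observed with the pg_buffercache contrib module, along the lines of the query in its documentation (relation names will of course vary per database):

```sql
-- Which relations currently occupy shared_buffers, largest first.
CREATE EXTENSION pg_buffercache;
SELECT c.relname, count(*) AS buffers
FROM pg_buffercache b
JOIN pg_class c
  ON b.relfilenode = pg_relation_filenode(c.oid)
WHERE b.reldatabase IN (0, (SELECT oid FROM pg_database
                            WHERE datname = current_database()))
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 10;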
Edson.
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org
<mailto:pgsql-general@postgresql.org>)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general