On Sun, Feb 26, 2012 at 2:56 AM, Stefan Keller wrote:
> Hi Jeff and Wales,
>
> 2012/2/26 Jeff Janes wrote:
>>> The problem is that the initial queries are too slow - and there is no
>>> second chance. I do have to trash the buffer every night. There is
>>> enough main memory to hold all table contents.
Mark Hills writes:
> What is it that prevents the index condition from being used in earlier parts
> of the query? Only where a single condition is present is it used below
> the final join.
"WHERE job.jid IN (1234)" is simplified to "WHERE job.jid = 1234", and
that in combination with "JOIN ON
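To make that concrete, here is a minimal sketch of the query shape under discussion ("job.jid" comes from the message above; the joined "task" table and its columns are hypothetical):

  -- A one-element IN list collapses to plain equality ...
  SELECT *
  FROM job
  JOIN task ON task.jid = job.jid        -- hypothetical joined table
  WHERE job.jid IN (1234);               -- planned as job.jid = 1234
  -- ... and the constant can propagate through the join condition,
  -- so task can also be probed by an index on task.jid.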
Hi,
2012/2/26 Cédric Villemain wrote:
>> 1. How can I warm up or re-populate shared buffers of Postgres?
>
> There was a patch proposed for postgresql whose purpose was to
Which patch are you referring to?
> snapshot/restore postgresql buffers, but it is still not clear how far that
> really helps
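Not the proposed snapshot/restore itself, but for seeing what currently sits in shared buffers there is the contrib extension pg_buffercache; a sketch based on its documented example query, with no particular schema assumed:

  CREATE EXTENSION pg_buffercache;

  -- Which relations of the current database occupy the most shared buffers?
  SELECT c.relname, count(*) AS buffers
  FROM pg_buffercache b
  JOIN pg_class c
    ON b.relfilenode = pg_relation_filenode(c.oid)
   AND b.reldatabase IN (0, (SELECT oid FROM pg_database
                             WHERE datname = current_database()))
  GROUP BY c.relname
  ORDER BY buffers DESC
  LIMIT 10;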
On Sunday, 26 February 2012 01:16:08, Stefan Keller wrote:
> Hi,
>
> 2011/10/24 Stephen Frost wrote
>
> > Now, we've also been discussing ways to have PG automatically
> > re-populate shared buffers and possibly OS cache based on what was in
> > memory at the time of the last shut-down, but I
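The idea described here later materialized as the contrib module pg_prewarm; a minimal sketch of its explicit form, assuming the extension is installed and using a hypothetical table name:

  CREATE EXTENSION pg_prewarm;

  -- Explicitly pull a relation (and, separately, its index) into shared buffers:
  SELECT pg_prewarm('my_table');          -- 'buffer' mode is the default
  SELECT pg_prewarm('my_table_pkey');

  -- The autoprewarm worker (shared_preload_libraries = 'pg_prewarm') can
  -- additionally save the list of cached buffers and reload them after a restart.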
On 02/26/2012 01:11 PM, Stefan Keller wrote:
> 2012/2/26 Andy Colson wrote:
>> On 02/25/2012 06:16 PM, Stefan Keller wrote:
>>> 1. How can I warm up or re-populate shared buffers of Postgres?
>>> 2. Are there any hints on how to tell Postgres to read in all table
>>> contents into memory?
>>> Yours, Stefan
>> How
2012/2/26 Andy Colson wrote:
> On 02/25/2012 06:16 PM, Stefan Keller wrote:
>> 1. How can I warm up or re-populate shared buffers of Postgres?
>> 2. Are there any hints on how to tell Postgres to read in all table
>> contents into memory?
>>
>> Yours, Stefan
>
> How about after you load the data,
Thanks for your valuable inputs!
>> The cost is "13.88" to fetch 1 row by scanning a Primary Key
>> indexed column.
>>
>> Isn't the cost of fetching 1 row too high?
>>
>
> Not really. The "cost" is really just an estimate to rank alternate query
> plans so the database picks the least expensive
>
> > The cost is "13.88" to fetch 1 row by scanning a Primary Key
> > indexed column.
> >
> > Isn't the cost of fetching 1 row too high?
>
> I don't know. How many index pages will need to be randomly accessed
> in addition to the random heap access? How many dead versions of
> the row will
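As a rough sketch of where a figure like 13.88 can come from, with default planner settings and a hypothetical table and column (the exact split depends on index depth and statistics):

  EXPLAIN SELECT * FROM my_table WHERE id = 42;   -- hypothetical table/column

  -- With random_page_cost = 4.0, a few randomly fetched pages (a couple of
  -- B-tree levels plus one heap page) already account for roughly 12 of the
  -- estimate; the rest is per-tuple CPU cost (cpu_tuple_cost,
  -- cpu_index_tuple_cost, cpu_operator_cost). The number only ranks plans;
  -- it is not a time in milliseconds.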
On 02/25/2012 06:16 PM, Stefan Keller wrote:
> 1. How can I warm up or re-populate shared buffers of Postgres?
> 2. Are there any hints on how to tell Postgres to read in all table
> contents into memory?
> Yours, Stefan
How about after you load the data, vacuum freeze it, then do something like:
S
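A minimal sketch of the kind of statement presumably meant here (table name is hypothetical):

  VACUUM FREEZE my_table;
  SELECT count(*) FROM my_table;   -- the full sequential scan reads the whole
                                   -- heap, warming the OS cache (large scans go
                                   -- through a small ring buffer, so only part
                                   -- of shared_buffers is filled this way)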
In a production system I found that performance is mostly being
crippled by the handling of one particular case.
I hope I've reduced this to a minimal example, below.
Summary: when more than one key is given (or the data is sourced from a
table) the planner is forced to a join of the whole
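Following the job.jid example discussed above, a sketch of the contrast being reported (the joined "task" table is hypothetical):

  -- One key: the IN list collapses to an equality and the index
  -- condition can be applied early.
  SELECT * FROM job JOIN task ON task.jid = job.jid
  WHERE job.jid IN (1234);

  -- Several keys (or keys pulled from a table): the condition stays a
  -- set-membership test, and the behaviour reported here is that the
  -- join is planned over the whole table before the filter applies.
  SELECT * FROM job JOIN task ON task.jid = job.jid
  WHERE job.jid IN (1234, 5678);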
* Stefan Keller (sfkel...@gmail.com) wrote:
> So, are there any developments going on with PostgreSQL as Stephen
> suggested in the former thread?
While the idea has been getting kicked around, I don't know of anyone
actively working on developing code to implement it.
Thanks,
Stefan Keller wrote on 26.02.2012 01:16:
> 2. Are there any hints on how to tell Postgres to read in all table
> contents into memory?
What about creating a tablespace on a RAM filesystem (tmpfs), then creating a second schema in
your database where all tables are located in that "temp" tablespace?
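A sketch of that idea (directory and names are hypothetical; note that a tmpfs tablespace disappears on reboot, so only data you can recreate belongs there):

  CREATE TABLESPACE ram_space LOCATION '/mnt/pg_ram';     -- tmpfs mount point
  CREATE SCHEMA ram;
  CREATE TABLE ram.my_table (LIKE public.my_table INCLUDING ALL)
    TABLESPACE ram_space;
  INSERT INTO ram.my_table SELECT * FROM public.my_table;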
Hi Jeff and Wales,
2012/2/26 Jeff Janes wrote:
>> The problem is that the initial queries are too slow - and there is no
>> second chance. I do have to trash the buffer every night. There is
>> enough main memory to hold all table contents.
>
> Just that table, or the entire database?
The entire
Hi again, everyone.
Wow, I can't get over how helpful everyone has been.
Shaun wrote:
> The main problem you're going to run into is that your table is larger than the
> memory in that server. 4GB is really pretty small for a server hosting a 200+GB
> database. That they didn't mean it to get that
You can try PostgreSQL 9.x master/slave replication, then run the slave
on a persistent RAM filesystem (tmpfs).
So you access all your data from the slave PostgreSQL that runs on tmpfs.
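If you try that, one quick way to confirm a session is actually talking to the standby rather than the master (standard function, no assumptions about the setup):

  SELECT pg_is_in_recovery();   -- returns true on a hot-standby slave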
From: Jeff Janes
To: Stefan Keller
Cc: pgsql-performance@postgresql.org; Stephen Frost