Re: [PERFORM] Heavy continuous load

2011-10-23 Thread kzsolt
So guys, lot of thank you for all of the explanation and ideas!


Jeff Janes wrote:
> What happens if the database has a hiccup and can't accept records for
> a few seconds or minutes?

Craig Ringer wrote:
> If you really need absolutely maximum insert performance, you should 
> just use a flat file or a different database system.
This needs some explanation:
Just for ease of explanation, our system is built from program modules
called PMs. The transport between PMs is a special reliable protocol with
elastic, high-capacity buffers, which absorbs the peaks of asynchronous
event storms.
The relevant (small) part of our system is called A_PM. This A_PM accepts
asynchronous events from many other PMs (possibly more than a dozen),
formats them, and stores each one as a record in an SQL table.
After a record is inserted, it must be available for complex queries
requested by three or more PMs.
On the other hand, we need to provide common public access to these
records (and to many other functions). This is why we use an SQL database
server. But the requirement is that the user can freely select the
database server vendor from a set of four (one of which is PostgreSQL).
To implement this we have a twin interface.

synchronous_commit=off and unlogged tables are good ideas; I will try them.
A crash makes much more trouble for our system than the loss of 200-300
records would...
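
A minimal sketch of both ideas, assuming PostgreSQL 9.1 or later (unlogged
tables are new in 9.1); the table and column names below are made up for
illustration:

-- Per session: stop waiting for the WAL flush at commit.  A crash can
-- lose the last few hundred commits, but cannot corrupt the database.
SET synchronous_commit = off;

-- Faster still: an unlogged table writes no WAL at all, but it is
-- truncated after a crash, so ALL of its rows are lost, not just the
-- most recent ones.
CREATE UNLOGGED TABLE async_events (
    id         bigserial PRIMARY KEY,
    source_pm  text        NOT NULL,
    payload    text,
    created_at timestamptz NOT NULL DEFAULT now()
);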






Re: [PERFORM] explain workload

2011-10-23 Thread Robins Tharakan

Hi Radhya,

Make multiple EXPLAIN requests, and add them up in your application, I 
guess?
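
A server-side sketch of the "add them up" approach; the function below is
my own illustration (not an existing facility) and assumes PostgreSQL 9.1+
for FOREACH.  It runs EXPLAIN for each query and sums the planner's
top-level total-cost estimates:

CREATE OR REPLACE FUNCTION workload_cost(queries text[])
RETURNS numeric LANGUAGE plpgsql AS $$
DECLARE
    q     text;
    line  text;
    total numeric := 0;
BEGIN
    FOREACH q IN ARRAY queries LOOP
        -- The first row of EXPLAIN output carries "cost=startup..total".
        EXECUTE 'EXPLAIN ' || q INTO line;
        total := total +
            substring(line from 'cost=[0-9.]+\.\.([0-9]+\.[0-9]+)')::numeric;
    END LOOP;
    RETURN total;
END;
$$;

-- Usage:
--   SELECT workload_cost(ARRAY['SELECT * FROM t1', 'SELECT count(*) FROM t2']);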


--
Robins
Sr. PGDBA
Comodo India

On 10/22/2011 06:41 AM, Radhya sahal wrote:

such as
EXPLAIN (q1, q2, q3)... I want the total cost for all queries in the
workload using one EXPLAIN?






Re: [PERFORM] explain workload

2011-10-23 Thread Tom Lane
Robins Tharakan  writes:
> Hi Radhya,
> Make multiple EXPLAIN requests, and add them up in your application, I 
> guess?

Or maybe contrib/auto_explain would help.
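
A minimal sketch of trying it in a single session (auto_explain can also
be loaded server-wide via shared_preload_libraries in postgresql.conf):

LOAD 'auto_explain';
SET auto_explain.log_min_duration = 0;  -- log the plan of every statement
SET auto_explain.log_analyze = on;      -- include actual times and row counts
-- From here on, each query's plan is written to the server log.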

regards, tom lane



Re: [PERFORM] hstore query: Any better idea than adding more memory?

2011-10-23 Thread Stephen Frost
* Stefan Keller (sfkel...@gmail.com) wrote:
> Adding more memory (say to a total of 32 GB) would only postpone the problem.

Erm, seems like you're jumping to conclusions here...

> The first time, the query takes about 10 times longer (~1010 ms), but I'd
> like to get better results already on the first query.

Do you mean first time after a database restart?

> => 1. When I add up the "actual time" values from EXPLAIN above, I get
> 11 + 10 + 10 ms, which is three times the 11 ms reported.  Why?

Because they include the times from the nodes under them.

> => 2. Why does the planner choose to sort first instead of sorting the
> (smaller) result set at the end?

You're reading the explain 'backwards' regarding time..  It *does* do
the sort last.  Nodes which are indented feed the nodes above them, so
the bitmap index scan and recheck feed into the sort, hence the sort is
actually done after.  Can't really work any other way anyway, PG has to
get the data before it can sort it..

> => 3. What could I do to speed up such queries (first time, i.e.
> without caching) besides simply adding more memory?

It didn't look like anything there could really be done much
faster at the plan level.  It's not uncommon for people to
intentionally get a box with more memory than the size of their
database, so everything is in memory.

At the end of the day, if the blocks aren't in memory then PG has to get
them from disk.  If disk is slow, the query is going to be slow.  Now,
hopefully, you're hitting this table often enough with similar queries
that important, common, parts of the table and index are already in
memory, but there's no magic PG can perform to ensure that.

If there's a lot of updates/changes to this table, you might check if
there's a lot of bloat (check_postgres works great for this..).
Eliminating excessive bloat, if there is any, could help with all
accesses to that table, of course, since it would reduce the amount of
data which would need to be read.
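
As a sketch, contrib/pgstattuple reports similar numbers directly from
SQL; "osm_point" here is the table from your query:

CREATE EXTENSION pgstattuple;  -- 9.1+; older releases use the contrib script
SELECT table_len, dead_tuple_percent, free_percent
  FROM pgstattuple('osm_point');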

Thanks,

Stephen




Re: [PERFORM] hstore query: Any better idea than adding more memory?

2011-10-23 Thread Stefan Keller
Hi Stephen

Thanks for your answer and hints.

2011/10/24 Stephen Frost  wrote:
> * Stefan Keller (sfkel...@gmail.com) wrote:
>> Adding more memory (say to a total of 32 GB) would only postpone the problem.
> Erm, seems like you're jumping to conclusions here...

Sorry. I actually only wanted to report here what's special in my
postgresql.conf.

>> The first time, the query takes about 10 times longer (~1010 ms), but I'd
>> like to get better results already on the first query.
>
> Do you mean first time after a database restart?

No: I simply meant doing the query when one can assume that the query
result is not yet in Postgres' cache.
You can check that here online: http://labs.geometa.info/postgisterminal

>> => 1. When I add up the "actual time" values from EXPLAIN above, I get
>> 11 + 10 + 10 ms, which is three times the 11 ms reported.  Why?
>
> Because they include the times from the nodes under them.
>
>> => 2. Why does the planner choose to sort first instead of sorting the
>> (smaller) result set at the end?
>
> You're reading the explain 'backwards' regarding time..  It *does* do
> the sort last.  Nodes which are indented feed the nodes above them, so
> the bitmap index scan and recheck feed into the sort, hence the sort is
> actually done after.  Can't really work any other way anyway, PG has to
> get the data before it can sort it..

Oh, thanks. I should have realized that.

But then what is the arrow ("->") supposed to stand for?
Sort (cost=30819.51...
  ->  Bitmap Heap Scan on osm_point  (cost=313.21...
  ->  Bitmap Index Scan on osm_point_tags_idx

I would suggest that the inverse arrow would be more intuitive:
Sort (cost=30819.51...
  <-  Bitmap Heap Scan on osm_point  (cost=313.21...
  <-  Bitmap Index Scan on osm_point_tags_idx

>> => 3. What could I do to speed up such queries (first time, i.e.
>> without caching) besides simply adding more memory?
>
> It didn't look like anything there could really be done much
> faster at the plan level.  It's not uncommon for people to
> intentionally get a box with more memory than the size of their
> database, so everything is in memory.
>
> At the end of the day, if the blocks aren't in memory then PG has to get
> them from disk.  If disk is slow, the query is going to be slow.  Now,
> hopefully, you're hitting this table often enough with similar queries
> that important, common, parts of the table and index are already in
> memory, but there's no magic PG can perform to ensure that.
>
> If there's a lot of updates/changes to this table, you might check if
> there's a lot of bloat (check_postgres works great for this..).
> Eliminating excessive bloat, if there is any, could help with all
> accesses to that table, of course, since it would reduce the amount of
> data which would need to be read.

Thanks for the hint.

But there are only periodic updates (currently once a night) and these
are actually done by 1. truncating the database and 2. bulk loading
all the stuff, then 3. reindexing.

If one tries to fit the whole dataset completely into memory, then
PostgreSQL features borrowed from in-memory databases become
interesting to me.

=> Is there anything other than "index-only scans" (planned for 9.2?)
that could be of interest here?

Stefan



Re: [PERFORM] hstore query: Any better idea than adding more memory?

2011-10-23 Thread Stephen Frost
* Stefan Keller (sfkel...@gmail.com) wrote:
> >> Adding more memory (say to a total of 32 GB) would only postpone the problem.
> > Erm, seems like you're jumping to conclusions here...
> 
> Sorry. I actually only wanted to report here what's special in my
> postgresql.conf.

My comment was referring to "postpone the problem".

> No: I simply meant doing the query when one can assume that the query
> result is not yet in the postgres' cache.
> You can check that here online: http://labs.geometa.info/postgisterminal

If it's not in PG's cache, and it's not in the OS's cache, then it's
gotta come from disk. :/
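
One quick way to see how often this table is actually being served from
PG's shared buffers, using the standard statistics views:

SELECT heap_blks_read, heap_blks_hit,
       round(heap_blks_hit::numeric
             / nullif(heap_blks_hit + heap_blks_read, 0), 3) AS hit_ratio
  FROM pg_statio_user_tables
 WHERE relname = 'osm_point';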

> But then what is the arrow ("->") supposed to stand for?

Eh..  I wouldn't read the arrows as meaning all that much. :)  They're
there as a visual aid only, AIUI.  Also, EXPLAIN really shows the
*plan* that PG ended up picking for this query; thinking about it that
way might help.

> I would suggest that the inverse arrow would be more intuitive:

Perhaps, but don't get your hopes up about us breaking explain-reading
applications by changing that. :)

> But there are only periodic updates (currently once a night) and these
> are actually done by 1. truncating the database and 2. bulk loading
> all the stuff, then 3. reindexing.

Well, that would certainly help avoid bloat. :)

> If one tries to fit the whole dataset completely into memory, then
> PostgreSQL features borrowed from in-memory databases become
> interesting to me.

... huh?  I don't know of any system that's going to be able to make
sure that all your queries perform like in-memory queries when you don't
have enough memory to actually hold it all..

> => Is there anything other than "index-only scans" (planned for 9.2?)
> that could be of interest here?

Index-only scans may be able to help with this, as they may reduce the
amount of disk I/O that has to be done and the amount of memory needed
to get everything into memory, but if you don't have enough memory then
you're still going to see a performance difference between querying
data that's cached and data that has to come from disk.

I don't know if index-only scans will, or will not, be able to help with
these specific queries.  I suspect they won't be much help since the
data being returned has to be in the index.  If I remember your query,
you were pulling out data which wasn't actually in the index that was
being used to filter the result set.  Also, I don't know if we'll have
index-only scans for GIST/GIN indexes in 9.2 or if it won't be available
till a later release.  AIUI, only btree indexes can perform index-only
scans in the currently committed code.
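
Purely as an illustration of the mechanism (the column is made up, and
whether any of this applies to your hstore query is exactly the open
question above), a btree index-only scan needs every referenced column
in the index plus a reasonably current visibility map:

CREATE INDEX osm_point_name_idx ON osm_point (name);  -- hypothetical column
VACUUM osm_point;  -- set visibility-map bits so heap fetches can be skipped
EXPLAIN SELECT name FROM osm_point WHERE name = 'Zurich';
-- Hoped-for plan (9.2+):
--   Index Only Scan using osm_point_name_idx on osm_point ...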

Now, we've also been discussing ways to have PG automatically
re-populate shared buffers and possibly OS cache based on what was in
memory at the time of the last shut-down, but I'm not sure that would
help your case either since you're rebuilding everything every night and
that's what's trashing your buffers (because everything ends up getting
moved around).  You might actually want to consider if that's doing more
harm than good for you.  If you weren't doing that, then the cache
wouldn't be getting destroyed every night..
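
(For what it's worth, the later contrib module pg_prewarm, shipped in
PostgreSQL 9.4, implements part of this idea; a sketch of warming the
table and index by hand after a restart:)

CREATE EXTENSION pg_prewarm;
SELECT pg_prewarm('osm_point');           -- load the heap into shared_buffers
SELECT pg_prewarm('osm_point_tags_idx');  -- and the index from the plan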

Thanks,

Stephen

