Re: [PERFORM] Improving PostgreSQL insert performance

2017-06-11 Thread Vladimir Sitnikov
Alvaro> Something like INSERT INTO .. VALUES ('col1', 'col2'), ('col1', 'col2'), ('col1', 'col2')
Frits> I did not try that, to be honest.
pgjdbc automatically rewrites insert ... values() into insert ... values(),(),(),() when reWriteBatchedInserts=true. I don't expect manual multivalues to be not
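For illustration, here is roughly what that rewrite does on the wire (a sketch with a made-up table t; the feature is enabled with the pgjdbc connection parameter reWriteBatchedInserts=true):

  -- Without the rewrite: executeBatch() sends one INSERT per batch entry
  INSERT INTO t (col1, col2) VALUES ($1, $2);
  INSERT INTO t (col1, col2) VALUES ($1, $2);
  INSERT INTO t (col1, col2) VALUES ($1, $2);

  -- With reWriteBatchedInserts=true: entries are folded into a single
  -- multi-values statement, saving a round trip per row
  INSERT INTO t (col1, col2) VALUES ($1, $2), ($3, $4), ($5, $6);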

Re: [PERFORM] Improving PostgreSQL insert performance

2017-06-09 Thread Vladimir Sitnikov
Frits,
Would you mind sharing the source code of your benchmark?
> BTW: It seems you need a recent driver for this; I'm using postgresql-42.1.1.jar
Technically speaking, reWriteBatchedInserts was introduced in 9.4.1209 (2016-07-15). Vladimir

Re: [PERFORM] EXPLAIN (no ANALYZE) taking an hour for INSERT FROM SELECT

2015-03-05 Thread Vladimir Sitnikov
>What could cause this? Note that there is no ANALYZE. Can you capture pstack and/or perf report while explain hangs? I think it should shed light on the activity of PostgreSQL. Vladimir

Re: [PERFORM] Query taking long time

2014-03-03 Thread Vladimir Sitnikov
st top-n rows, then an index on entity_compounddict2document(name, hepval) might help. Regards, Vladimir Sitnikov
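If that applies, the index would look along these lines (a sketch; assuming hepval is the column the top-n ordering uses):

  -- lets "WHERE name = ? ORDER BY hepval LIMIT n" run as a single
  -- index range scan instead of a scan plus sort
  CREATE INDEX entity_compounddict2document_name_hepval
      ON entity_compounddict2document (name, hepval);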

Re: [PERFORM] Performance of complicated query

2013-05-23 Thread Vladimir Sitnikov
>>This leads to the WHERE clause, WHERE read_datetime = max_read, and hence I'm only summing the last read for each device for each patient. Is the "reads" table insert-only? Do you have updates/deletes of the "historical" rows? >>3. Can I modify my tables to make this query (which is the crux of my
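For reference, "the last read for each device for each patient" can also be expressed without the max_read self-join, e.g. with DISTINCT ON (a sketch; the column names are guesses, not the poster's actual schema):

  SELECT DISTINCT ON (patient_id, device_id)
         patient_id, device_id, read_datetime, read_value
    FROM reads
   ORDER BY patient_id, device_id, read_datetime DESC;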

Re: [PERFORM] FW: performance issue with a 2.5gb joinded table

2013-01-09 Thread Vladimir Sitnikov
nsorted-array -- Regards, Vladimir Sitnikov

Re: [PERFORM] filesystems benchmark

2008-12-15 Thread Vladimir Sitnikov
when it fits in memory). Regards, Vladimir Sitnikov

Re: [PERFORM] query plan with index having a btrim is different for strings of different length

2008-12-10 Thread Vladimir Sitnikov
)) if you have other queries with different values for owner. One more point that could improve bitmap scans is a greater value for work_mem. You'll need 8*15494737 bytes ~ 130MB of work_mem (however, that is way too high unless you have lots of RAM and just a couple of active database sessions). Regards, Vladimir Sitnikov
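If only a couple of queries need it, work_mem can be raised per session instead of globally, e.g.:

  SET work_mem = '130MB';  -- or SET LOCAL inside a transaction
  -- run the bitmap-scan query here
  RESET work_mem;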

Re: [PERFORM] Slow SQL query (14-15 seconds)

2008-11-13 Thread Vladimir Sitnikov
S nombre FROM societes_adresses_facturation WHERE is_deleted = FALSE GROUP BY fk_societe_id ) AS stats_adresses_facturation ON stats_adresses_facturation.societe_id = societes.pk_societe_id WHERE societes.is_deleted = FALSE and il_y_avait_un_commande=1 ORDER BY LOWER(denomination_commerciale); Kind regards, Vladimir Sitnikov

Re: [PERFORM] Increasing select max(datecol) from bilkaib where datecol<=date'2008-11-01' and (cr='00' or db='00') speed

2008-11-12 Thread Vladimir Sitnikov
> as Vladimir suggests, or is there a better approach?
I am afraid PostgreSQL is not smart enough to rewrite a query with "or" into two separate index scans. There is no way to improve the query significantly without rewriting it. Note: for this case, indices on (datecol), (cr) and (db) alone are not very helpful. Regards, Vladimir Sitnikov
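The rewrite pays off together with composite indices that serve each arm of the rewritten query, along these lines (a sketch):

  CREATE INDEX bilkaib_cr_datecol ON bilkaib (cr, datecol);
  CREATE INDEX bilkaib_db_datecol ON bilkaib (db, datecol);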

Re: [PERFORM] slow full table update

2008-11-12 Thread Vladimir Sitnikov
A bitmap scan degrades when the number of rows in the table (not the total number of returned rows) is greater than work_mem*1024*8 (work_mem is measured in kilobytes, and an exact bitmap needs roughly one bit per heap row). A 60'000-row bitmap scan will require 60'000/8 = 7'500 bytes ~ 8KB of memory to run without the additional recheck, thus I do not believe it hurts you in this particular case. Regards, Vladimir Sitnikov

Re: [PERFORM] Increasing select max(datecol) from bilkaib where datecol<=date'2008-11-01' and (cr='00' or db='00') speed

2008-11-12 Thread Vladimir Sitnikov
(select max(date) from bilkaib where datecol<=date'2008-11-01' and db='00')) Regards, Vladimir Sitnikov
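The snippet above is cut off; the full rewrite presumably combines both arms (a reconstruction from the subject line, assuming datecol is the column being maxed):

  SELECT greatest(
      (SELECT max(datecol) FROM bilkaib
        WHERE datecol <= date '2008-11-01' AND cr = '00'),
      (SELECT max(datecol) FROM bilkaib
        WHERE datecol <= date '2008-11-01' AND db = '00'));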

Re: [PERFORM] Using index for IS NULL query

2008-11-11 Thread Vladimir Sitnikov
 amname | amindexnulls | amsearchnulls
--------+--------------+---------------
 btree  | t            | t
 hash   | f            | f
 gist   | t            | t
 gin    | f            | f
 bitmap | t            | t
(5 rows)
Sincerely yours, Vladimir Sitnikov
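Where the access method itself cannot search nulls, a partial index is a common workaround (a sketch with made-up names):

  -- serves "WHERE col IS NULL" via a plain index scan on the predicate subset
  CREATE INDEX t_col_null_idx ON t (id) WHERE col IS NULL;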

Re: [PERFORM] Improve Seq scan performance

2008-11-10 Thread Vladimir Sitnikov
> a bit confused, though, since I thought in this case it couldn't actually
> execute the query w/o a sequential scan, and would just use one irrespective
> of the enable_seqscan param. That's what happens here.
Please follow the case carefully: the index is only 30 pages long. Why is PostgreSQL doing 2529 I/Os? It drives me crazy. Regards, Vladimir Sitnikov

Re: [PERFORM] Improve Seq scan performance

2008-11-09 Thread Vladimir Sitnikov
on seq_test (cost=0.00..1643.74 rows=356 width=508) (actual time=0.334..16.746 rows=20 loops=1 read_shared=2529(2529) read_local=0(0) flush=0 local_flush=0 file_read=0 file_write=0)
  Filter: (i ~~ '%123%'::text)
Total runtime: 16.863 ms
Hopefully, there will be a clear distinction between filtering via index and filtering via table access. Regards, Vladimir Sitnikov

Re: [PERFORM] indexing for distinct search in timestamp based table

2008-09-05 Thread Vladimir Sitnikov
You might get a great improvement for the '%' cases using an index on ad_log(channel_name, start_time) and a little bit of pl/pgsql. Basically, you need to implement the following algorithm:
 1) curr_channel = ( select min(channel_name) from ad_log )
 2) record_exists = ( select 1 from ad_log where channel_name = curr_channel and all_other_conditions
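A minimal pl/pgsql sketch of that skip-scan loop (table and column names are guesses from the thread; step 3, cut off above, jumps to the next distinct channel through the index):

  CREATE OR REPLACE FUNCTION distinct_channels(from_ts timestamptz, to_ts timestamptz)
  RETURNS SETOF text AS $$
  DECLARE
      curr text;
  BEGIN
      SELECT min(channel_name) INTO curr FROM ad_log;           -- step 1
      WHILE curr IS NOT NULL LOOP
          PERFORM 1 FROM ad_log                                 -- step 2: check the
           WHERE channel_name = curr                            -- other conditions
             AND start_time BETWEEN from_ts AND to_ts
           LIMIT 1;
          IF FOUND THEN
              RETURN NEXT curr;
          END IF;
          SELECT min(channel_name) INTO curr                    -- step 3: skip to
            FROM ad_log WHERE channel_name > curr;              -- the next channel
      END LOOP;
  END;
  $$ LANGUAGE plpgsql;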