Alvaro> Something like
Alvaro> INSERT INTO .. VALUES ('col1', 'col2'), ('col1', 'col2'), ('col1', 'col2')
Frits> I did not try that, to be honest.
pgjdbc automatically rewrites insert ... values() into insert ...
values(),(),(),() when reWriteBatchedInserts=true. I don't expect manual
multivalues to be not
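Since the rewrite happens inside the driver (enabled e.g. via
jdbc:postgresql://host/db?reWriteBatchedInserts=true), a rough sketch of the
transformation it performs — the table and column names here are placeholders,
not from the thread:

```sql
-- What the application batches: three executions of one single-row statement
INSERT INTO t (col1, col2) VALUES ($1, $2);

-- What the driver conceptually sends with reWriteBatchedInserts=true:
-- one statement carrying all three rows
INSERT INTO t (col1, col2) VALUES ($1, $2), ($3, $4), ($5, $6);
```

This saves per-statement round-trip and parsing overhead, which is where the
batch speedup comes from.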
Frits,
Would you mind sharing the source code of your benchmark?
>BTW: It seems you need a recent driver for this; I'm
using postgresql-42.1.1.jar
Technically speaking, reWriteBatchedInserts was introduced in 9.4.1209
(2016-07-15)
Vladimir
>What could cause this? Note that there is no ANALYZE.
Can you capture pstack and/or perf report while explain hangs?
I think it should shed light on the activity of PostgreSQL.
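To know which backend process pstack/perf should attach to, the pid can be
taken from the session that will run the hanging EXPLAIN (a sketch; the
tooling then targets that pid):

```sql
-- Run this in the same session, before the EXPLAIN:
SELECT pg_backend_pid();
-- then, from a shell: pstack <pid>, or perf record -p <pid>
```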
Vladimir
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subsc
st top-n rows, then index on
entity_compounddict2document(name, a.hepval) might help.
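A sketch of creating the suggested composite index — note the `a.` qualifier
from the quoted text is dropped, since index definitions take bare column
names:

```sql
-- assumes hepval is a column of entity_compounddict2document
CREATE INDEX ON entity_compounddict2document (name, hepval);
```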
Regards,
Vladimir Sitnikov
>>This leads to the WHERE clause, WHERE read_datetime = max_read, and hence
I'm only summing the last read for each device for each patient.
Is the "reads" table insert-only, or do you have updates/deletes of the
"historical" rows?
>>3. Can I modify my tables to make this query (which is the crux of my
nsorted-array
--
Regards,
Vladimir Sitnikov
when it fits in memory).
Regards,
Vladimir Sitnikov
)) if you have other queries with
different values for owner.
One more point that could improve bitmap scans is a greater value for
work_mem. You'll need 8*15494737 bytes ~ 130MB of work_mem (however,
that is way too high unless you have lots of RAM and just a couple of active
database sessions)
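For experimenting, work_mem can be raised for a single session rather than
server-wide (the value is taken from the estimate above):

```sql
SET work_mem = '130MB';  -- session-local; far too high as a global default
```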
Regards,
Vladimir Sitnikov
S nombre
FROM societes_adresses_facturation
WHERE is_deleted = FALSE
GROUP BY fk_societe_id
) AS stats_adresses_facturation ON
stats_adresses_facturation.societe_id = societes.pk_societe_id
WHERE societes.is_deleted = FALSE and il_y_avait_un_commande=1
ORDER BY LOWER(denomination_commerciale);
Best regards,
Vladimir Sitnikov
as Vladimir
> suggests or is there better appoach ?
I am afraid PostgreSQL is not smart enough to rewrite a query with "or" into
two separate index scans, so there is no way to improve the query
significantly without rewriting it.
Note: for this case indices on (datecol), (cr) and (db) are not very
helpful.
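A sketch of the manual rewrite — the table, columns, and date literal come
from the query quoted elsewhere in the thread; treat the predicates as
placeholders for the real conditions:

```sql
-- Two index-friendly scans instead of one "cr='00' OR db='00'" predicate;
-- composite indexes on (cr, datecol) and (db, datecol) would support them.
SELECT * FROM bilkaib
 WHERE datecol <= DATE '2008-11-01' AND cr = '00'
UNION
SELECT * FROM bilkaib
 WHERE datecol <= DATE '2008-11-01' AND db = '00';
```

UNION (rather than UNION ALL) also removes rows matched by both branches, as
the original OR predicate would return each such row only once.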
Regards,
Vladimir Sitnikov
an degrades when the number of rows in
table (not the total number of returned rows) is greater than
work_mem*1024*8. 60'000 rows bitmap scan will require 60'000/8=7'500 bytes ~
8Kbytes of memory to run without additional recheck, thus I do not believe
it hurts you in this particular case
Regards,
Vladimir Sitnikov
(select max(date) from bilkaib where datecol<=date'2008-11-01' and
db='00'))
Regards,
Vladimir Sitnikov
archnulls
--------+---+---
 btree  | t | t
 hash   | f | f
 gist   | t | t
 gin    | f | f
 bitmap | t | t
(5 rows)
Sincerely yours,
Vladimir Sitnikov
a bit confused, though, since I thought
> in this case it couldn't actually execute the query w/o a sequential scan,
> and would just use one irrespective of the enable_seqscan param. That's what
> happens here.
Please, follow the case carefully: the index is only 30 pages long. Why is
PostgreSQL doing 2529 I/O? It drives me crazy.
Regards,
Vladimir Sitnikov
on seq_test (cost=0.00..1643.74 rows=356 width=508)
(actual time=0.334..16.746 rows=20 loops=1 read_shared=2529(2529)
read_local=0(0) flush=0 local_flush=0 file_read=0 file_write=0)
Filter: (i ~~ '%123%'::text)
Total runtime: 16.863 ms
Hopefully, there will be a clear distinction between filtering via index and
filtering via table access.
Regards,
Vladimir Sitnikov
You might get a great improvement for the '%' cases using an index on
ad_log(channel_name, start_time) and a little bit of pl/pgsql.
Basically, you need to implement the following algorithm:
1) curr_ = ( select min() from ad_log )
2) record_exists = ( select 1 from ad_log where =cur_ and
_all_other_conditions
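The two steps above can be sketched in pl/pgsql as a loose ("skip") scan over
the distinct channel_name values. Only the ad_log table, the
channel_name/start_time columns, and the min()-based loop come from the
thread; the function name and the extra WHERE condition are placeholders:

```sql
-- assumes an index on ad_log (channel_name, start_time)
CREATE OR REPLACE FUNCTION matching_channels() RETURNS SETOF text AS $$
DECLARE
  curr text;
BEGIN
  -- 1) start from the smallest channel_name
  SELECT min(channel_name) INTO curr FROM ad_log;
  WHILE curr IS NOT NULL LOOP
    -- 2) does any row for this channel match all the other conditions?
    IF EXISTS (SELECT 1 FROM ad_log
                WHERE channel_name = curr
                  AND start_time >= DATE '2008-01-01') THEN  -- placeholder
      RETURN NEXT curr;
    END IF;
    -- jump to the next distinct channel_name via the index
    SELECT min(channel_name) INTO curr
      FROM ad_log WHERE channel_name > curr;
  END LOOP;
END;
$$ LANGUAGE plpgsql;
```

Each loop iteration touches only a few index pages, so the cost grows with
the number of distinct channels rather than the number of rows.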