Hello All,
I was measuring the execution time of 18 queries from the TPC-H benchmark.
I ran each query 10 times using the EXPLAIN ANALYZE command and stored the
times in a table called control_tab.
To record the times, I have a java program that does the whole process as
follows:
Step 1: Insert in
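For reference, a minimal sketch of how such per-run timings could be captured in plain SQL. The column layout of control_tab is an assumption, since the real definition is not shown in the message, and lineitem stands in for whichever TPC-H table the query touches:

```sql
-- Hypothetical layout for the control table (the real control_tab
-- definition is not given in the original message):
CREATE TABLE IF NOT EXISTS control_tab (
    query_no int,
    run_no   int,
    exec_ms  numeric
);

-- EXPLAIN ANALYZE actually executes the query and reports timings;
-- the "Execution Time" line at the bottom is what would be recorded:
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*) FROM lineitem;
```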
On Sat, Nov 4, 2017 at 6:41 PM, Marc-Olaf Jaschke wrote:
> Perhaps I misunderstand the discussion but would "INSERT .. ON CONFLICT DO
> SELECT [FOR ..]" not provide a solution for the following use case?
>
> [ .. ]
>
> That works. But it is a bit inconvenient to write the pseudo update clause.
>
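For context, the "pseudo update clause" workaround being discussed is usually written like this (table and column names are illustrative):

```sql
-- Returns the id whether the row was freshly inserted or already
-- existed; the DO UPDATE is a no-op "pseudo update" whose only
-- purpose is to let RETURNING see the conflicting row:
INSERT INTO vendors (name)
VALUES ('acme')
ON CONFLICT (name) DO UPDATE
    SET name = EXCLUDED.name
RETURNING id;
```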
Neto pr writes:
> I expected that the first run would always take longer than the others
> because of not having cached data, but look what happened:
> - in 6 cases the first execution was faster than all the others.
> - in only 2 cases the first execution was slower than all the others.
>
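One way to make warm-cache runs comparable is to load the relation into shared buffers explicitly before the timed run, for example with the pg_prewarm extension (the relation name is illustrative):

```sql
CREATE EXTENSION IF NOT EXISTS pg_prewarm;
-- Read the whole table into shared buffers before timing;
-- returns the number of blocks prewarmed:
SELECT pg_prewarm('lineitem');
```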
On Sat, Nov 4, 2017 at 10:45 AM, Mark Fletcher wrote:
>
> While trying to track down my logical decoding problem, I noticed that
> my pg_logical/snapshots directory has ~5000 .snap files and is growing at a
> rate of about 4 files a minute. The earliest file is from yesterday
> afternoon, dating
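A first thing to check when pg_logical/snapshots keeps growing is whether an inactive replication slot is holding back snapshot cleanup:

```sql
-- An inactive slot pins the oldest LSN the server must retain,
-- which can prevent old .snap files from being removed:
SELECT slot_name, plugin, active, restart_lsn
FROM pg_replication_slots;
```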
Goal is to return all vendors which exist in all three companies
I think I got lucky figuring this out. Is there an obviously better way?
combined_item_master looks like this:
company_code character varying(10) NOT NULL,
primary_vendor_no character varying(7)
..more fields
data looks like this:
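As a sketch, one common way to express "vendor present in all three companies" is relational division via GROUP BY / HAVING; the company codes 'A', 'B', 'C' are placeholders for the real values:

```sql
SELECT primary_vendor_no
FROM combined_item_master
WHERE company_code IN ('A', 'B', 'C')
  AND primary_vendor_no IS NOT NULL
GROUP BY primary_vendor_no
HAVING count(DISTINCT company_code) = 3;
```

Counting distinct company codes per vendor avoids false positives when a vendor appears more than once within the same company.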
Hi,
Since I am not subscribed to pgsql-hackers, I will try to answer the
request for comments raised in a discussion on the topic [1] this way.
[1]
https://www.postgresql.org/message-id/flat/CAL9smLAu8X7DLWdJ7NB0BtcN%3D_kCz33Fz6WYUBDRysWdG0yFig%40mail.gmail.com#CAL9smLAu8X7DLWdJ7NB0BtcN
Thank you, Justin Pryzby.
I reset shared_buffers to 16GB, and the memory usage of the checkpointer and
the recovery (startup) process stayed at 16GB.
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
192956 postgres 20 0 18.5g 16g 16g S 1.3 25.9 19:44.69 postgres:
startup process recover
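For reference, shared_buffers can only be changed with a server restart; a sketch of the usual way to set it, using the 16GB value from the message above:

```sql
-- Written to postgresql.auto.conf; takes effect only after a restart:
ALTER SYSTEM SET shared_buffers = '16GB';
-- After restarting, verify the active value:
SHOW shared_buffers;
```

Since shared buffers are shared memory, a long-running process such as the startup (recovery) process showing RES close to the full shared_buffers size is expected once most buffers have been touched.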