2014/7/25 9:53, Tom Lane wrote:
[ shrug... ] Insufficient data. When I try a simple test case based on
what you've told us, I get planning times of a couple of milliseconds.
I can think of contributing factors that would increase that, but not by
four orders of magnitude. So there's something ...
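For reference, a minimal test case along the lines Tom describes might look
roughly like the sketch below (the table name, key ranges, and child count are
invented here; pre-declarative inheritance partitioning with CHECK constraints
is assumed):

    -- Hypothetical inheritance-partitioned table; names and ranges are made up.
    CREATE TABLE measurements (id bigint, part_key int, payload text);

    CREATE TABLE measurements_p0 (CHECK (part_key >= 0   AND part_key < 100))
        INHERITS (measurements);
    CREATE TABLE measurements_p1 (CHECK (part_key >= 100 AND part_key < 200))
        INHERITS (measurements);
    -- ... and so on for the remaining children ...

    SET constraint_exclusion = partition;

    -- A query on the partition key that should be pruned to one child at plan time:
    EXPLAIN SELECT * FROM measurements WHERE part_key = 42;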
Rural Hunter writes:
> I have a table partitioned with about 60 children tables. Now I found
> the planning time of simple query with partition key are very slow.
> ...
> You can see from the timing output that the actual run time of the 'explain
> analyze' is 30 seconds while the select sql itself ...
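One way to see where the time goes, sketched against the hypothetical table
above, is to compare a plain EXPLAIN (which only plans the query) with EXPLAIN
ANALYZE (which plans and executes it) under psql's \timing:

    \timing on

    -- Plans but does not execute; the elapsed time is essentially planning overhead:
    EXPLAIN SELECT * FROM measurements WHERE part_key = 42;

    -- Plans and executes; the difference from the previous timing is execution cost.
    -- (Recent releases also print a "Planning time" line in the EXPLAIN output.)
    EXPLAIN ANALYZE SELECT * FROM measurements WHERE part_key = 42;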
On 07/25/2014 03:50 AM, Reza Taheri wrote:
> Hi Craig,
>> It's not just that statement that is relevant.
>> Is that statement run standalone, or as part of a larger transaction?
>
> Yes, the "size" of the transaction seems to matter here. It is a complex
> transaction (attached). Each "frame" is ...
Hi Kevin,
Thanks for the reply.
> As already pointed out by Craig, statements don't have serialization
> failures; transactions do. In some cases a transaction may become
> "doomed to fail" by the action of a concurrent transaction, but the actual
> failure cannot occur until the next statement ...
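To illustrate that point with a made-up example (a toy table, not the TPC-E
schema): two concurrent serializable transactions can each succeed statement by
statement, and only one of them is rolled back later, often only at COMMIT,
with SQLSTATE 40001. Which one fails, and at which statement, depends on timing
and server version:

    CREATE TABLE accounts (id int PRIMARY KEY, balance int);
    INSERT INTO accounts VALUES (1, 100), (2, 100);

    -- Session A:
    BEGIN ISOLATION LEVEL SERIALIZABLE;
    SELECT sum(balance) FROM accounts;
    UPDATE accounts SET balance = balance - 50 WHERE id = 1;

    -- Session B (interleaved before A commits):
    BEGIN ISOLATION LEVEL SERIALIZABLE;
    SELECT sum(balance) FROM accounts;
    UPDATE accounts SET balance = balance - 50 WHERE id = 2;

    -- Session A:
    COMMIT;   -- succeeds

    -- Session B:
    COMMIT;   -- ERROR: could not serialize access due to read/write dependencies
              -- among transactions (SQLSTATE 40001); the whole transaction is
              -- rolled back and the client has to retry it from the top.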
Hi all.
I have a database with a quite heavy write load (about 20k tps, 5k of which do
writes). And I see a lot of write I/O (I mean amount of data, not iops) to this
database, much more than I expect. My question is: how can I debug why the
backend processes do so many writes to the $PGDATA/b
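One place to start, assuming a reasonably recent 9.x server (the exact column
set varies by version), is the cumulative statistics views, which break buffer
and temp-file writes down by source:

    -- Cluster-wide buffer writes, split by who performed them:
    SELECT buffers_checkpoint,       -- written by checkpoints
           buffers_clean,            -- written by the background writer
           buffers_backend,          -- written directly by ordinary backends
           buffers_backend_fsync,
           stats_reset
    FROM pg_stat_bgwriter;

    -- Per-database temp-file volume, plus write time if track_io_timing is on:
    SELECT datname, temp_bytes, blk_write_time
    FROM pg_stat_database;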
Reza Taheri wrote:
> I am running into very high failure rates when I run with the
> Serializable Isolation Level. I have simplified our configuration
> to a single database with a constant workload, a TPC-E workload
if you will, to focus on this problem. We are running with
> PGSQL 9.2.4