I have a test where a user creates a temp table and then disconnects;
concurrently, we try to do DROP OWNED BY CASCADE on the same user. This
seems to cause a race condition between temp-table deletion during
disconnection (RemoveTempRelations(myTempNamespace)) and the DROP OWNED BY
CASCADE operation, which w
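For anyone trying to reproduce this, a minimal sketch of the scenario (the
role and table names here are invented for illustration):

-- session 1: connect as the target role and create a temp table
CREATE ROLE temp_owner LOGIN;
-- \c - temp_owner
CREATE TEMP TABLE tmp_t (a int);
-- then disconnect; backend cleanup calls RemoveTempRelations()

-- session 2: as superuser, concurrently with the disconnect
DROP OWNED BY temp_owner CASCADE;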
Thanks Robert,
On Mon, Mar 18, 2019 at 9:01 PM Robert Haas wrote:
> On Mon, Mar 18, 2019 at 3:04 AM Mithun Cy wrote:
> > autoprewarm worker should not be restarted. As per the code in
> > apw_start_database_worker, the master starts a worker per database and
> > waits until it
On Mon, Feb 25, 2019 at 12:10 AM Mithun Cy wrote:
> Thanks Hans, for the simple reproducible test.
>
> "worker.bgw_restart_time" is never set for autoprewarm workers, so on
> error they get restarted after some period of time (the default behavior).
> Since the database itse
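The restart behavior can be watched from another session; a rough sketch
(the exact backend_type label for autoprewarm workers is an assumption
here, not checked against the source):

-- after the worker errors out, a fresh copy should reappear within the
-- default restart interval, visible as a new pid/backend_start
SELECT pid, backend_type, backend_start
FROM pg_stat_activity
WHERE backend_type LIKE '%autoprewarm%';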
On Wed, Mar 13, 2019 at 1:38 PM Michael Paquier wrote:
> On Tue, Mar 12, 2019 at 06:23:01PM +0900, Michael Paquier wrote:
> > And you are pointing to the correct commit. The issue is that
> > process_target_file() has added a call to check_file_excluded(), and
> > this skips all the folders
I think pg_rewind's feature to rewind a promoted standby as a new
standby is broken in version 11.
STEPS:
1. Create a master-standby setup (use the script below for the same).
2. Promote the standby:
[mithuncy@localhost pgrewmasterbin]$ ./bin/pg_ctl -D standby promote
waiting for server to promote.... done
server promoted
On Fri, Jan 11, 2019 at 3:54 AM John Naylor wrote:
>
> On Wed, Jan 9, 2019 at 10:50 PM Amit Kapila wrote:
> > Thanks, Mithun, for the performance testing; it really helps us choose
> > the right strategy here. Once John provides the next version, it would
> > be good to see the results of regular pgbench
Hi John Naylor,
On Tue, Jan 8, 2019 at 2:27 AM John Naylor wrote:
> I've attached two patches for testing. Each one applies on top of the
> current patch.
Thanks for the patches. I did a quick test of both, the same tests as
in [1], now for fillfactors 20, 70, and 100 (note for
HEAP_FSM_CREA
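A hypothetical shape of such a fillfactor test (tables and data here are
invented for illustration, not the exact tests from [1]):

-- one small table per fillfactor, so page-level free space differs
CREATE TABLE fsm_ff20  (a int, b text) WITH (fillfactor = 20);
CREATE TABLE fsm_ff70  (a int, b text) WITH (fillfactor = 70);
CREATE TABLE fsm_ff100 (a int, b text) WITH (fillfactor = 100);
INSERT INTO fsm_ff20 SELECT i, repeat('x', 100)
FROM generate_series(1, 1000) i;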
On Thu, Dec 6, 2018 at 10:53 PM John Naylor wrote:
> On 12/3/18, Amit Kapila wrote:
> > fsm_local_set is being called from RecordAndGetPageWithFreeSpace and
> > GetPageWithFreeSpace, whereas the change we have discussed was specific
> > to GetPageWithFreeSpace, so I am not sure if we need any change in
On Sat, Dec 8, 2018 at 6:35 PM Amit Kapila wrote:
>
> On Fri, Dec 7, 2018 at 7:25 PM John Naylor wrote:
> >
> > On 12/6/18, Amit Kapila wrote:
> > > On Thu, Dec 6, 2018 at 10:53 PM John Naylor wrote:
> > >>
> > >> I've added an additional regression test for finding the right block
I did run s
On Thu, Dec 6, 2018 at 11:13 AM Amit Kapila wrote:
>
> On Thu, Dec 6, 2018 at 10:03 AM Pavel Stehule wrote:
> >
> > On Thu, Dec 6, 2018 at 5:02 AM, Mithun Cy wrote:
> >>
> >> The COPY command seems to have improved very slightly with zheap in both wi
> On Thu, Mar 1, 2018 at 7:39 PM Amit Kapila wrote:
I did some performance testing of the COPY command for zheap against
heap; here are my results.
Machine: cthulhu (an 8-node NUMA machine with 500GB of RAM)
Server non-default settings: shared_buffers = 32GB, max_wal_size = 20GB,
min_wal_size =
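For context, a minimal sketch of the heap side of such a COPY test (the
file path and table are placeholders; how the zheap branch selects the
table's storage is not shown here):

\timing on
CREATE TABLE copy_test (a int, b text);
COPY copy_test FROM '/tmp/copy_test.csv' WITH (FORMAT csv);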
On Tue, Nov 20, 2018 at 7:59 PM Tomas Vondra wrote:
>
> On 11/20/18 3:06 PM, 066ce...@free.fr wrote:
> > Hi,
> >
> >> When gdb is active, use the command 'c', and then run the query in the
> >> session. gdb should catch the segfault.
> >
> > Thank you very much. It's been helpful.
> >
> > BTW behavio
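In practice that flow starts by finding the PID of the backend you will
run the query in (the gdb steps are shown as comments):

-- in the psql session that will run the failing query:
SELECT pg_backend_pid();
-- in another terminal: gdb -p <that pid>, then type 'c' to continue;
-- now run the query here and gdb stops at the segfault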
On Mon, Sep 10, 2018 at 7:33 PM, Amit Kapila wrote:
> On Mon, Sep 10, 2018 at 1:12 PM Haribabu Kommi wrote:
>>
>> On Wed, Sep 5, 2018 at 2:04 PM Haribabu Kommi wrote:
>>>
>> pg_stat_get_tuples_hot_updated and others:
>> /*
>>  * Counter tuples_hot_updated stores the number of hot updates for h
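That counter is what surfaces as n_tup_hot_upd in the statistics views;
for example:

-- per-table HOT update counts, backed by pg_stat_get_tuples_hot_updated()
SELECT relname, n_tup_upd, n_tup_hot_upd
FROM pg_stat_user_tables
ORDER BY n_tup_hot_upd DESC;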
On Fri, Jul 20, 2018 at 10:52 AM, Thomas Munro <thomas.mu...@enterprisedb.com> wrote:
> On Fri, Jul 20, 2018 at 7:56 AM, Tom Lane wrote:
> >
> > It's not *that* noticeable, as I failed to demonstrate any performance
> > difference before committing the patch. I think some more investigation
> >
Hi Andres,
On Fri, Jul 20, 2018 at 1:21 AM, Andres Freund wrote:
> Hi,
>
> On 2018-01-24 00:06:44 +0530, Mithun Cy wrote:
> > Server:
> > ./postgres -c shared_buffers=8GB -N 200 -c min_wal_size=15GB -c
> > max_wal_size=20GB -c checkpoint_timeout=900 -c
>
On Wed, Jan 24, 2018 at 7:36 AM, Amit Kapila wrote:
> Both the cases look identical, but from the attached document it
> seems that case 1 is for scale factor 300.
Oops, sorry, that was a typo. CASE 1 is scale factor 300, which will fit
in shared_buffers = 8GB.
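As a rough sanity check: a pgbench database at scale factor 300 is on the
order of 4.5GB (about 15MB per unit of scale), so it fits comfortably in
8GB of shared_buffers; for example (the database name is assumed):

-- roughly 4.5GB at scale factor 300
SELECT pg_size_pretty(pg_database_size('pgbench'));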
--
Thanks and Regards
Mithun C Y
Ente
Hi all,
When I was doing read-write pgbench benchmarking of PostgreSQL 9.6.6
vs. 10.1, I found that 10.1 regresses against 9.6.6 in some cases.
Non-default settings and test
==
Server:
./postgres -c shared_buffers=8GB -N 200 -c min_wal_size=15GB -c
max_wal_size=20G
On Tue, Dec 19, 2017 at 5:52 AM, Masahiko Sawada wrote:
> On Mon, Dec 18, 2017 at 2:04 PM, Masahiko Sawada wrote:
>> On Sun, Dec 17, 2017 at 12:27 PM, Robert Haas wrote:
>>>
>>> I have to admit that result is surprising to me.
>>
>> I think the environment I used for performance measurement d