On Thu, Jun 30, 2016 at 9:13 AM, Andres Freund wrote:
> On 2016-06-30 08:59:16 +0530, Amit Kapila wrote:
>> On Wed, Jun 29, 2016 at 10:30 PM, Andres Freund wrote:
>> > On 2016-06-29 19:04:31 +0530, Amit Kapila wrote:
>> >> There is nothing in this record which recorded the information about
>> >>
On 21/06/16 03:53, Mark Dilger wrote:
On Jun 18, 2016, at 5:48 PM, Josh Berkus wrote:
On 06/16/2016 11:01 PM, Craig Ringer wrote:
I thought about raising this, but I think in the end it's replacing one
confusing and weird versioning scheme for another confusing and weird
versioning scheme.
It
On Thu, Jun 30, 2016 at 5:10 AM, Alvaro Herrera
wrote:
> Yury Zhuravlev wrote:
>> Hello Hackers.
>>
>> I decided to talk about the current state of the project:
>> 1. Merge with 9.6 master. 2. plpython2, plpython3, plperl, pltcl, plsql all
>> work correctly (all tests pass).
>> 3. Works done for a
On 2016-06-30 08:59:16 +0530, Amit Kapila wrote:
> On Wed, Jun 29, 2016 at 10:30 PM, Andres Freund wrote:
> > On 2016-06-29 19:04:31 +0530, Amit Kapila wrote:
> >> There is nothing in this record which recorded the information about
> >> visibility clear flag.
> >
> > I think we can actually defer
On Wed, Jun 29, 2016 at 10:30 PM, Andres Freund wrote:
> On 2016-06-29 19:04:31 +0530, Amit Kapila wrote:
>> There is nothing in this record which recorded the information about
>> visibility clear flag.
>
> I think we can actually defer the clearing to the lock release?
How about the case if aft
I'm happy with this patch.
On 6/29/16 12:41 PM, Robert Haas wrote:
On Tue, Jun 28, 2016 at 10:10 PM, Peter Eisentraut
wrote:
On 6/27/16 5:37 PM, Robert Haas wrote:
Please find attached a patch for a proposed alternative approach.
This does the following:
1. When the client_encoding GUC i
On Wed, Jun 29, 2016 at 11:54 AM, Julien Rouhaud
wrote:
> On 29/06/2016 06:29, Amit Kapila wrote:
>> On Wed, Jun 29, 2016 at 2:57 AM, Julien Rouhaud
>> wrote:
>>>
>>> Thanks a lot for the help!
>>>
>>> PFA v6 which should fix all the issues mentioned.
>>
>> Couple of minor suggestions.
>>
>> -
On 30 June 2016 at 07:21, Tom Lane wrote:
> Alvaro Herrera writes:
> > Tom Lane wrote:
> >> Thanks for investigating! I'll go commit that change. I wish someone
> >> would put up a buildfarm critter using VS2013, though.
>
> > Uh, isn't that what woodlouse is using?
>
> Well, it wasn't reporti
On Wed, Jun 29, 2016 at 4:35 PM, Piotr Stefaniak
wrote:
> On 2016-06-29 18:58, Robert Haas wrote:
>> This code predates be7558162acc5578d0b2cf0c8d4c76b6076ce352, prior to
>> which proc_exit(0) forced an immediate, unconditional restart. It's
>> true that, given that commit, changing this code to
On Mon, Jun 27, 2016 at 4:49 PM, Tom Lane wrote:
> A few specific comments:
>
> * Can't we remove wholePlanParallelSafe altogether, in favor of just
> examining best_path->parallel_safe in standard_planner?
>
> * In grouping_planner, I'm not exactly convinced that
> final_rel->consider_parallel ca
Alvaro Herrera writes:
> Tom Lane wrote:
>> Thanks for investigating! I'll go commit that change. I wish someone
>> would put up a buildfarm critter using VS2013, though.
> Uh, isn't that what woodlouse is using?
Well, it wasn't reporting this crash, so there's *something* different.
Michael Paquier wrote:
> On Fri, Jun 24, 2016 at 11:51 AM, Tsunakawa, Takayuki
> wrote:
> >> From: pgsql-hackers-ow...@postgresql.org
> >> [mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Michael Paquier
> >> Sent: Friday, June 24, 2016 11:37 AM
> >> On Fri, Jun 24, 2016 at 11:33 AM, Craig
Tom Lane wrote:
> "Haroon ." writes:
> > The problem appears to be related to 'taking the address of a formal
> > parameter'. NOT passing the original formal parameter to
> > get_foreign_key_join_selectivity fixes it (dodges the problem) on VS2013.
>
> Thanks for investigating! I'll go commit t
"Haroon ." writes:
> On Sat, Jun 25, 2016 at 6:40 PM, Tom Lane wrote:
>> This leads to a couple of suggestions for dodging the problem:
>>
>> 2. Don't pass the original formal parameter to
>> get_foreign_key_join_selectivity, ie do something like
>>
>> static double
>> calc_joinrel_size_estimat
Michael Paquier writes:
> On Thu, Jun 30, 2016 at 6:47 AM, Tom Lane wrote:
>> It strikes me that keeping a password embedded in the conninfo from being
>> exposed might be quite a bit harder/riskier if it became a GUC. Something
>> to keep in mind if we ever try to make that change ...
> Exposi
On 30 June 2016 at 02:32, Andres Freund wrote:
>
> Hi,
>
> On 2016-06-28 10:01:28 +, Rajeev rastogi wrote:
> > >3) Our 1-by-1 tuple flow in the executor has two major issues:
> >
> > Agreed. In order to tackle this, IMHO we should
> > 1. Make the processing data-centric instead of operator c
On 29 June 2016 at 21:49, Sachin Kotwal wrote:
> Hi,
>
>
> On Wed, Jun 29, 2016 at 6:29 PM, Craig Ringer
> wrote:
>
>> On 29 June 2016 at 18:47, Sachin Kotwal wrote:
>>
>>
>>> I am testing pgbench with more than 100 connections.
>>> I also set max_connections in postgresql.conf to more than 100.
>>>
On Sat, Jun 25, 2016 at 6:40 PM, Tom Lane wrote:
>
> If that is the explanation, I'm suspicious that it's got something to do
> with the interaction of a static inline-able (single-call-site) function
> and taking the address of a formal parameter. We certainly have multiple
> other instances of
On Thu, Jun 30, 2016 at 6:47 AM, Tom Lane wrote:
> Magnus Hagander writes:
>> There was also that (old) thread about making the recovery.conf parameters
>> be general GUCs. I don't actually remember the consensus there, but doing
>> that would certainly change how it's handled as well.
>
> It str
Magnus Hagander writes:
> There was also that (old) thread about making the recovery.conf parameters
> be general GUCs. I don't actually remember the consensus there, but doing
> that would certainly change how it's handled as well.
It strikes me that keeping a password embedded in the conninfo f
On Wed, Jun 29, 2016 at 11:18 PM, Michael Paquier wrote:
> On Thu, Jun 30, 2016 at 6:01 AM, Alvaro Herrera
> wrote:
> > Alvaro Herrera wrote:
> >
> >> I propose to push this patch, closing the open item, and you can rework
> >> on top -- I suppose you would completely remove the original conninf
On Thu, Jun 30, 2016 at 6:01 AM, Alvaro Herrera
wrote:
> Alvaro Herrera wrote:
>
>> I propose to push this patch, closing the open item, and you can rework
>> on top -- I suppose you would completely remove the original conninfo
>> from shared memory and instead only copy the obfuscated version th
Alvaro Herrera wrote:
> I propose to push this patch, closing the open item, and you can rework
> on top -- I suppose you would completely remove the original conninfo
> from shared memory and instead only copy the obfuscated version there
> (and probably also remove the ready_to_display flag). I
On 2016-06-29 18:58, Robert Haas wrote:
This code predates be7558162acc5578d0b2cf0c8d4c76b6076ce352, prior to
which proc_exit(0) forced an immediate, unconditional restart. It's
true that, given that commit, changing this code to do proc_exit(0)
instead of proc_exit(1) would be harmless. Howeve
Yury Zhuravlev wrote:
> Hello Hackers.
>
> I decided to talk about the current state of the project:
> 1. Merge with 9.6 master. 2. plpython2, plpython3, plperl, pltcl, plsql all
> work correctly (all tests pass).
> 3. Work is done for all contrib modules. 4. You can use gettext, .po->.mo will
> be converted by CMake.
Andrew Gierth writes:
> If the query was produced by rule expansion then the code that populates
> fkinfo includes FK references to the OLD and NEW RTEs, but those might not
> appear in the jointree (the testcase for the bug is a DELETE rule where
> NEW clearly doesn't apply) and hence build_simpl
> "Tom" == Tom Lane writes:
> Tomas Vondra writes:
>> Attached is a reworked patch, mostly following the new design proposal
>> from this thread.
Tom> Comments and testing appreciated.
This blows up (see bug 14219 for testcase) in
match_foreign_keys_to_quals on the find_base_rel call(
Oleg Bartunov writes:
>> On Tue, Jun 28, 2016 at 9:32 AM, Noah Misch wrote:
> This PostgreSQL 9.6 open item now needs a permanent owner. Would any other
> committer like to take ownership? I see Teodor committed some things relevant
> to this item just today, so the task may be as simple as ver
Shawn wrote:
> Unfortunately...no. I have been trying to repro this scenario. Is there a
> specific way to make a Python connection where this is possible?
>
> My end game, if this is not something that can be fixed on the Postgres
> side, is to come up with a way to automatically cause the conn
Hi,
On 2016-06-28 10:01:28 +, Rajeev rastogi wrote:
> >3) Our 1-by-1 tuple flow in the executor has two major issues:
>
> Agreed. In order to tackle this, IMHO we should
> 1. Make the processing data-centric instead of operator-centric.
> 2. Instead of pulling each tuple from immediate oper
Fujii Masao wrote:
> On Thu, Jun 30, 2016 at 2:50 AM, Alvaro Herrera
> wrote:
> > Fujii Masao wrote:
> >> On Wed, Jun 29, 2016 at 12:23 PM, Alvaro Herrera
> >> wrote:
> >> > Michael Paquier wrote:
> >> >> On Wed, Jun 29, 2016 at 6:42 AM, Alvaro Herrera
> >> >> wrote:
> >> >
> >> >> > I have alre
On Thu, Jun 30, 2016 at 2:50 AM, Alvaro Herrera
wrote:
> Fujii Masao wrote:
>> On Wed, Jun 29, 2016 at 12:23 PM, Alvaro Herrera
>> wrote:
>> > Michael Paquier wrote:
>> >> On Wed, Jun 29, 2016 at 6:42 AM, Alvaro Herrera
>> >> wrote:
>> >
>> >> > I have already edited the patch following some of
Fujii Masao wrote:
> On Wed, Jun 29, 2016 at 12:23 PM, Alvaro Herrera
> wrote:
> > Michael Paquier wrote:
> >> On Wed, Jun 29, 2016 at 6:42 AM, Alvaro Herrera
> >> wrote:
> >
> >> > I have already edited the patch following some of these ideas. Will
> >> > post a new version later.
> >>
> >> Coo
On Wed, Jun 29, 2016 at 12:23 PM, Alvaro Herrera
wrote:
> Michael Paquier wrote:
>> On Wed, Jun 29, 2016 at 6:42 AM, Alvaro Herrera
>> wrote:
>
>> > I have already edited the patch following some of these ideas. Will
>> > post a new version later.
>>
>> Cool, thanks.
>
> Here it is. I found it
On Wed, Jun 29, 2016 at 1:26 PM, Tom Lane wrote:
> Robert Haas writes:
>> On Mon, Jun 27, 2016 at 6:04 PM, Tom Lane wrote:
>>> Huh? The final tlist would go with the final_rel, ISTM, not the scan
>>> relation. Maybe we have some rejiggering to do to make that true, though.
>
>> Mumble. You're
Robert Haas writes:
> On Mon, Jun 27, 2016 at 6:04 PM, Tom Lane wrote:
>> Huh? The final tlist would go with the final_rel, ISTM, not the scan
>> relation. Maybe we have some rejiggering to do to make that true, though.
> Mumble. You're right that there are two rels involved, but I think
> I'
On 2016-06-29 19:04:31 +0530, Amit Kapila wrote:
> There is nothing in this record which recorded the information about
> visibility clear flag.
I think we can actually defer the clearing to the lock release? A tuple
being locked doesn't require the vm being cleared.
> I think in this approach,
On Mon, Jun 27, 2016 at 11:40 PM, Michael Paquier
wrote:
> On Tue, Jun 28, 2016 at 6:49 AM, Robert Haas wrote:
>> On Sun, Jun 26, 2016 at 6:19 AM, Piotr Stefaniak
>> wrote:
while investigating the shm_mq code and its testing module I made some
cosmetic improvements there. You can see t
On Tue, Jun 28, 2016 at 10:10 PM, Peter Eisentraut
wrote:
> On 6/27/16 5:37 PM, Robert Haas wrote:
>> Please find attached a patch for a proposed alternative approach.
>> This does the following:
>>
>> 1. When the client_encoding GUC is changed in the worker,
>> SetClientEncoding() is not calle
Hello,
I am testing it with 9.6-beta1 binaries. For server and client it is the same.
I am using pgbench on top of postgres_fdw.
Hmmm... So pgbench is connected to some pg instance, and this pg instance
is connected to something else on another host? Or to the same instance,
in which case you w
Hello Hackers.
I decided to talk about the current state of the project:
1. Merge with 9.6 master.
2. plpython2, plpython3, plperl, pltcl, plsql all work correctly (all tests
pass).
3. Work is done for all contrib modules.
4. You can use gettext; .po->.mo files will be converted by CMake.
5. All t
On Mon, Jun 27, 2016 at 6:04 PM, Tom Lane wrote:
> Robert Haas writes:
>> On Mon, Jun 27, 2016 at 5:28 PM, Tom Lane wrote:
>>> Seems to me that it should generally be the case that consider_parallel
>>> would already be clear on the parent rel if the tlist isn't parallel safe,
>>> and if it isn'
Peter,
* Peter Eisentraut (peter.eisentr...@2ndquadrant.com) wrote:
> Do this:
>
> CREATE DATABASE test1;
> REVOKE CONNECT ON DATABASE test1 FROM PUBLIC;
>
> Run pg_dumpall.
>
> In 9.5, this produces
>
> CREATE DATABASE test1 WITH TEMPLATE = template0 OWNER = peter;
> REVOKE ALL ON DATABASE te
On Sat, Jun 25, 2016 at 3:44 AM, Andreas Karlsson wrote:
> On 06/24/2016 01:31 PM, David Rowley wrote:
>> Seems there's a small error in the upgrade script for citext for 1.1
>> to 1.2 which will cause min(citext) not to be parallel enabled.
>>
>> max(citext)'s combinefunc is first set incorrectly
On Wed, Jun 29, 2016 at 5:36 AM, Sachin Kotwal wrote:
> Hi Fabien,
>
> Sorry for the very short report.
> I feel pgbench is not such a complex tool.
>
> Please see below answers to your questions.
>
>
> On Wed, Jun 29, 2016 at 5:07 PM, Fabien COELHO wrote:
>>
>>
>> Hello Sachin,
>>
>> Your report is very
On Fri, Jun 24, 2016 at 2:23 PM, Flavius Anton wrote:
> Any other thoughts on this? My guess is that it might be an important
> addition to Postgres that can attract even more users, but I am not
> sure if there's enough interest from the community. If I want to pick
> this task, how should I move
On Wed, Jun 29, 2016 at 8:36 AM, Sachin Kotwal wrote:
> PostgreSQL does not give any error.
>
> pgbench says:
> client 36 aborted in state 2: ERROR: could not connect to server "server_1"
> DETAIL: FATAL: sorry, too many clients already
The error message that you are seeing there "FATAL: sorry
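For reference, the server-side limit behind that error can be inspected and raised like this (a sketch; the value 300 is only illustrative):

```sql
-- "FATAL: sorry, too many clients already" is raised when the number of
-- backends reaches max_connections (minus superuser_reserved_connections
-- for non-superuser sessions).
SHOW max_connections;
SHOW superuser_reserved_connections;

-- Raising the limit requires superuser, and a server restart to take effect.
ALTER SYSTEM SET max_connections = 300;
```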
"David G. Johnston" writes:
> A correlated subquery, on the other hand, has to be called once for every
> row and is evaluated within the context supplied by said row. Each time
> random is called it returns a new value.
> Section 4.2.11 (9.6 docs)
> https://www.postgresql.org/docs/9.6/static/sq
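A quick illustration of the distinction described above (a sketch; the output values are random and so not reproduced here):

```sql
-- Uncorrelated scalar subquery: evaluated once, its single result is
-- reused for every outer row, so the same value repeats ten times.
SELECT (SELECT random()) FROM generate_series(1, 10) AS i;

-- Correlated form: the reference to the outer column i forces the
-- subquery to be re-evaluated for each row, giving distinct values.
SELECT (SELECT random() WHERE i = i) FROM generate_series(1, 10) AS i;

-- Simplest fix: call the volatile function directly in the select list.
SELECT random() FROM generate_series(1, 10) AS i;
```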
We may also want to consider handling abstract events such as
"tuples-are-available-at-plan-node-X".
One benefit is : we can combine this with batch processing. For e.g. in
case of an Append node containing foreign scans, its parent node may not
want to process the Append node result until Append
[ Please do not quote the entire thread in each followup. That's
disrespectful of your readers' time, and will soon cause people to
stop reading the thread, meaning you don't get answers. ]
Alex Ignatov writes:
> In this subquery(below) we have reference to outer variables but it is
> not worki
On 29.06.2016 15:30, David G. Johnston wrote:
More specifically...
On Wed, Jun 29, 2016 at 7:34 AM, Michael Paquier wrote:
On Wed, Jun 29, 2016 at 7:43 PM, Alex Ignatov wrote:
> Hello!
>
> Got some strange
On 29/06/2016 08:51, Amit Kapila wrote:
> On Wed, Jun 29, 2016 at 11:54 AM, Julien Rouhaud
> wrote:
>> Or should we allow setting it to -1 for instance to disable the limit?
>>
>
> By disabling the limit, do you mean to say that only
> max_parallel_workers_per_gather will determine the workers re
Hi,
On Wed, Jun 29, 2016 at 6:29 PM, Craig Ringer wrote:
> On 29 June 2016 at 18:47, Sachin Kotwal wrote:
>
>
>> I am testing pgbench with more than 100 connections.
>> I also set max_connections in postgresql.conf to more than 100.
>>
>> Initially pgbench tries to scale to nearly 150 but later it come
On Wed, Jun 29, 2016 at 11:14 AM, Masahiko Sawada wrote:
> On Fri, Jun 24, 2016 at 11:04 AM, Amit Kapila wrote:
>> On Fri, Jun 24, 2016 at 4:33 AM, Andres Freund wrote:
>>> On 2016-06-23 18:59:57 -0400, Alvaro Herrera wrote:
Andres Freund wrote:
> I'm looking into three approaches
* Robert Haas (robertmh...@gmail.com) wrote:
> On Tue, Jun 28, 2016 at 11:12 PM, Peter Eisentraut
> wrote:
> > Do this:
> >
> > CREATE DATABASE test1;
> > REVOKE CONNECT ON DATABASE test1 FROM PUBLIC;
> >
> > Run pg_dumpall.
> >
> > In 9.5, this produces
> >
> > CREATE DATABASE test1 WITH TEMPLATE
On Tue, Jun 28, 2016 at 11:12 PM, Peter Eisentraut
wrote:
> Do this:
>
> CREATE DATABASE test1;
> REVOKE CONNECT ON DATABASE test1 FROM PUBLIC;
>
> Run pg_dumpall.
>
> In 9.5, this produces
>
> CREATE DATABASE test1 WITH TEMPLATE = template0 OWNER = peter;
> REVOKE ALL ON DATABASE test1 FROM PUBLI
On 29 June 2016 at 18:47, Sachin Kotwal wrote:
> I am testing pgbench with more than 100 connections.
> I also set max_connections in postgresql.conf to more than 100.
>
> Initially pgbench tries to scale to nearly 150, but later it comes down to 100
> connections and stays stable there.
>
> Is this a limitation o
Thank you.
On Tue, Jun 28, 2016 at 11:06 PM Oleg Bartunov wrote:
> On Wed, Jun 29, 2016 at 6:17 AM, M Enrique wrote:
>
>> What's a good source code entry point to review how this is working for
>> anyarray currently? I am new to the postgres code. I spend som
Hi Fabien,
Sorry for the very short report.
I feel pgbench is not such a complex tool.
Please see below answers to your questions.
On Wed, Jun 29, 2016 at 5:07 PM, Fabien COELHO wrote:
>
> Hello Sachin,
>
> Your report is very imprecise so it is hard to tell anything.
>
> What version of client and s
More specifically...
On Wed, Jun 29, 2016 at 7:34 AM, Michael Paquier
wrote:
> On Wed, Jun 29, 2016 at 7:43 PM, Alex Ignatov
> wrote:
> > Hello!
> >
> > Got some strange behavior of random() function:
> >
> > postgres=# select (select random() ) from generate_series(1,10) as i;
> > random
Hello Sachin,
Your report is very imprecise so it is hard to tell anything.
What version of client and server are you running? On what hardware? (200
connections => 200 active postgres processes; how many processes per core
are you expecting to run? The recommended value is about 2 connectio
On Wed, Jun 29, 2016 at 7:43 PM, Alex Ignatov wrote:
> Hello!
>
> Got some strange behavior of random() function:
>
> postgres=# select (select random() ) from generate_series(1,10) as i;
> random
> ---
> 0.831577288918197
> [...]
> (10 rows)
I recall that this is treated a
Hi,
I am testing pgbench with more than 100 connections.
I also set max_connections in postgresql.conf to more than 100.
Initially pgbench tries to scale to nearly 150, but later it comes down to 100
connections and stays stable there.
Is this a limitation of pgbench, or a bug, or am I doing it the wrong way?
---
I tes
Hello!
Got some strange behavior of random() function:
postgres=# select (select random() ) from generate_series(1,10) as i;
random
---
0.831577288918197
0.831577288918197
0.831577288918197
0.831577288918197
0.831577288918197
0.831577288918197
0.831577288918197
0.83