For two-phase commit, PrepareTransaction() needs to execute pending syncs.
On Thu, Jul 25, 2019 at 10:39:36AM +0900, Kyotaro Horiguchi wrote:
> --- a/src/backend/access/heap/heapam_handler.c
> +++ b/src/backend/access/heap/heapam_handler.c
> @@ -715,12 +702,6 @@ heapam_relation_copy_for_cluster(Re
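For readers skimming the thread, here is a rough SQL sketch of the case being discussed. It assumes wal_level = minimal, a nonzero max_prepared_transactions, and the patch's optimization that defers syncing a relation created and loaded in the same transaction; treat it as illustrative only.

    BEGIN;
    CREATE TABLE bulk_load (id int, payload text);
    INSERT INTO bulk_load
        SELECT g, 'row ' || g FROM generate_series(1, 100000) g;
    -- With the optimization, the data above may have been written without WAL,
    -- leaving a sync pending.  Once the next command returns, the transaction
    -- must survive a crash, so PrepareTransaction() has to run that sync here:
    PREPARE TRANSACTION 'bulk_load_tx';
    -- ...possibly after a restart, from any session:
    COMMIT PREPARED 'bulk_load_tx';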
Greetings,
On Sat, Aug 17, 2019 at 18:30 Ahsan Hadi wrote:
> The current calendar entry for TDE weekly call will not work for EST
> timezone. I will change the invite so we can accommodate people from
> multiple time zones.
>
I appreciate the thought but at least for my part, I already have reg
The current calendar entry for TDE weekly call will not work for EST
timezone. I will change the invite so we can accommodate people from
multiple time zones.
Stay tuned.
On Sun, 18 Aug 2019 at 2:29 AM, Sehrope Sarkuni wrote:
> On Sat, Aug 17, 2019 at 12:43 PM Ibrar Ahmed
> wrote:
>
>> +1 for
On Sat, Aug 17, 2019 at 12:43 PM Ibrar Ahmed wrote:
> +1 for a voice call, Bruce. We usually have a weekly TDE call.
>
Please add me to the call as well. Thanks!
Regards,
-- Sehrope Sarkuni
Founder & CEO | JackDB, Inc. | https://www.jackdb.com/
Thomas Munro writes:
> On Tue, Aug 6, 2019 at 6:18 PM Tom Lane wrote:
>> Yeah, there have been half a dozen failures since deadlock-parallel
>> went in, mostly on critters that are slowed by CLOBBER_CACHE_ALWAYS
>> or valgrind. I've tried repeatedly to reproduce that here, without
>> success :-(
Hi,
On 2019-08-17 13:41:18 -0400, Tom Lane wrote:
> Try this:
> alter system set max_parallel_workers = 20;
> and restart the system.
>
> max_parallel_workers is still 8, according to both SHOW and
> pg_controldata, and you cannot launch more than 8 workers.
Hm. I can't reproduce that. I do get wha
Sergei Kornilov writes:
>> which should certainly not happen for a PGC_POSTMASTER parameter.
> But max_parallel_workers is PGC_USERSET and this behavior seems to be documented:
Argh! I was looking at max_worker_processes in one window and
max_parallel_workers in another, and failed to see the disc
On Sat, Aug 17, 2019 at 10:41 PM Tom Lane wrote:
> Try this:
> alter system set max_parallel_workers = 20;
> and restart the system.
>
> max_parallel_workers is still 8, according to both SHOW and
> pg_controldata, and you cannot launch more than 8 workers.
>
> Even odder, if you just do
>
> regress
Hello
> Try this:
> alter system set max_parallel_workers = 20;
> and restart the system.
> max_parallel_workers is still 8
Hmm, I got 20 on my local 11.5 and on HEAD.
> which should certainly not happen for a PGC_POSTMASTER parameter.
But max_parallel_workers is PGC_USERSET and this behavior s
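A quick way to see the distinction being drawn here is the context column of pg_settings; on a stock build this typically reports max_parallel_workers as user-settable and max_worker_processes as postmaster-only:

    SELECT name, setting, context
    FROM pg_settings
    WHERE name IN ('max_parallel_workers', 'max_worker_processes');
    --         name          | setting |  context
    -- ----------------------+---------+------------
    --  max_parallel_workers | 8       | user
    --  max_worker_processes | 8       | postmaster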
Hi,
On my PG11 I have set it to 64 upon setup and it propagated to
postgresql.auto.conf and is set after restart. I've upgraded to PG12 since
then, and the parameter is read from postgresql.auto.conf correctly and is
displayed via SHOW (just checked on 12beta3).
I also spent some time trying to get a
Greetings,
* Bruce Momjian (br...@momjian.us) wrote:
> I will state what I have already told some people privately, that for
> this feature, we have many people understanding 40% of the problem, but
> thinking they understand 90%. I do agree we should plan for our
> eventual full feature set, but
On Mon, Jul 8, 2019 at 9:46 AM Paul A Jungwirth
wrote:
> - A multirange type is an extra thing you get when you define a range
> (just like how you get a tstzrange[]). Therefore
I've been able to make a little more progress on multiranges the last
few days, but it reminded me of an open quest
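To make the analogy above concrete, here is a small sketch; the range type is made up for illustration, and the multirange name in the comment is hypothetical, following the proposal rather than any released syntax:

    -- Defining a range type already gives you a matching array type for free:
    CREATE TYPE floatrange AS RANGE (subtype = float8);
    SELECT '{"[1,2)","[3,4)"}'::floatrange[];
    -- Under the proposal, an analogous multirange type (say, floatmultirange)
    -- would be created alongside it, holding a set of non-overlapping ranges.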
Try this:
alter system set max_parallel_workers = 20;
and restart the system.
max_parallel_workers is still 8, according to both SHOW and
pg_controldata, and you cannot launch more than 8 workers.
Even odder, if you just do
regression=# set max_parallel_workers = 200;
SET
regression=# show max_para
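As later replies in this thread note, the surprise here comes down to GUC contexts: max_parallel_workers is user-settable, while the number of workers that can actually be launched is bounded by max_worker_processes, which only changes with a restart. A minimal sketch, using the stock defaults for illustration:

    SET max_parallel_workers = 200;
    SHOW max_parallel_workers;   -- 200 in this session
    SHOW max_worker_processes;   -- still 8 on a default build, and it caps
                                 -- how many parallel workers can really run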
Hi,
On 2019-08-17 12:05:21 -0400, Robert Haas wrote:
> On Wed, Aug 14, 2019 at 12:39 PM Andres Freund wrote:
> > > > Again, I think it's not ok to just assume you can lock an essentially
> > > > unbounded number of buffers. This seems almost guaranteed to result in
> > > > deadlocks. And there's
On Sat, Aug 17, 2019 at 6:58 PM Daniel Migowski
wrote:
> Hello,
>
> Attached you will find a patch that adds a new GUC:
>
Quick questions before looking at the patch.
>
> prepared_statement_limit:
>
- Do we have a consensus about the name of the GUC? I don't think it is
the right name for that.
- I
Greetings,
* Bruce Momjian (br...@momjian.us) wrote:
> On Sat, Aug 17, 2019 at 08:16:06AM +0200, Antonin Houska wrote:
> > Bruce Momjian wrote:
> >
> > > On Thu, Aug 15, 2019 at 09:01:05PM -0400, Stephen Frost wrote:
> > > > * Bruce Momjian (br...@momjian.us) wrote:
> > > > > Why would it not be
Greetings,
* Ibrar Ahmed (ibrar.ah...@gmail.com) wrote:
> On Sat, Aug 17, 2019 at 3:04 AM Bruce Momjian wrote:
> +1 for a voice call, Bruce. We usually have a weekly TDE call. I will
> include you in that call. Currently, in that group are
> moon_insung...@lab.ntt.co.jp,
> sawada.m...@gmail.co
On Sat, Aug 17, 2019 at 3:04 AM Bruce Momjian wrote:
> On Fri, Aug 16, 2019 at 07:47:37PM +0200, Antonin Houska wrote:
> > Bruce Momjian wrote:
> >
> > > I have seen no one present a clear description of how anything beyond
> > > all-cluster encryption would work or be secure. Wishing that were
On Wed, Aug 14, 2019 at 12:39 PM Andres Freund wrote:
> > > Again, I think it's not ok to just assume you can lock an essentially
> > > unbounded number of buffers. This seems almost guaranteed to result in
> > > deadlocks. And there's limits on how many lwlocks one can hold etc.
> >
> > I think f
On Sat, Aug 17, 2019 at 08:16:06AM +0200, Antonin Houska wrote:
> Bruce Momjian wrote:
>
> > On Thu, Aug 15, 2019 at 09:01:05PM -0400, Stephen Frost wrote:
> > > * Bruce Momjian (br...@momjian.us) wrote:
> > > > Why would it not be simpler to have the cluster_passphrase_command run
> > > > whatev
On Fri, Aug 16, 2019 at 06:04:39PM -0400, Bruce Momjian wrote:
> I suggest we schedule a voice call and I will go over all the issues and
> explain why I came to the conclusions listed. It is hard to know what
> level of detail to explain that in an email, beyond what I have already
> posted on th
Hello,
Attached you will find a patch that adds a new GUC:
prepared_statement_limit:
Specifies the maximum amount of memory used in each session to cache
parsed-and-rewritten queries and execution plans. This affects the
maximum memory a backend will reserve when ma
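For orientation, a hypothetical usage sketch: prepared_statement_limit exists only in the attached patch, the memory-unit syntax is assumed from the description above, and the table and statement names are invented:

    SET prepared_statement_limit = '32MB';    -- proposed GUC, not in any release
    PREPARE fetch_account (int) AS
        SELECT * FROM accounts WHERE id = $1; -- parse/rewrite result gets cached
    EXECUTE fetch_account(42);
    -- The proposed limit would bound the backend-local memory that such
    -- cached statements (and their plans) are allowed to occupy per session.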
On Fri, Aug 16, 2019 at 03:29:30PM -0700, Andres Freund wrote:
> but I don't quite see GUCs like default_tablespace, search_path (due to
> determining a created table's schema), temp_tablespaces,
> default_table_access_method fit reasonably well under that heading. They
> all can affect persistent s
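A small sketch of why those particular GUCs are singled out: each one quietly decides a property of the objects a statement creates. The schema name below is invented, and default_table_access_method only exists in v12 and later:

    SET search_path = app_schema, public;    -- decides where the table lands
    SET default_tablespace = '';             -- '' means the database default
    SET default_table_access_method = heap;  -- decides the table's access method
    CREATE TABLE events (id bigint, payload jsonb);
    -- The table's schema, tablespace, and access method were all taken from
    -- session GUCs, so those settings end up baked into persistent state.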
On Sat, Aug 17, 2019 at 10:11:27AM +0200, Peter Eisentraut wrote:
> I was a bit confused by some of the comments around the SCRAM function
> read_any_attr(), used to skip over extensions.
>
> The comment "Returns NULL if there is attribute.", besides being
> strangely worded, appears to be wrong a
Hi,
I'm not subscribed to the list, so please include me directly if you want my
attention. This is a "drive-by" patch as it were.
Attached is a minor patch to fix the name param documentation for create role,
just adding a direct quote from user-manag.sgml talking about what the role
name is
I was a bit confused by some of the comments around the SCRAM function
read_any_attr(), used to skip over extensions.
The comment "Returns NULL if there is attribute.", besides being
strangely worded, appears to be wrong anyway, because the function never
returns NULL.
This led me to wonder how
Hi John,
> >> Also, I don't think
> >> the new logic for the ctrl/c variables is an improvement:
> >>
> >> 1. iter->ctrlc is initialized with '8' (even in the uncompressed case,
> >> which is confusing). Any time you initialize with something not 0 or
> >> 1, it's a magic number, and here it's far