From: Fujii Masao [mailto:masao.fu...@gmail.com]
> When multiple relations are deleted at the same transaction, the files of
> those relations are deleted by one call to smgrdounlinkall(), which leads
> to scan whole shared_buffers only one time. OTOH, during recovery,
> smgrdounlink() (not smgrdou
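(For illustration, a conceptual C sketch of the single-pass behaviour described above; it is not the actual bufmgr.c/smgr code, and locking and fork handling are omitted.)

/*
 * Conceptual sketch: invalidate the buffers of many relations with one
 * scan of shared_buffers, instead of scanning the whole array once per
 * relation as happens when smgrdounlink() is called for each file.
 * Assumes backend headers (buf_internals.h etc.); simplified.
 */
static void
drop_buffers_single_pass(RelFileNode *rnodes, int nrels)
{
    int         buf_id;

    for (buf_id = 0; buf_id < NBuffers; buf_id++)
    {
        BufferDesc *buf = GetBufferDescriptor(buf_id);
        int         i;

        for (i = 0; i < nrels; i++)
        {
            if (RelFileNodeEquals(buf->tag.rnode, rnodes[i]))
            {
                InvalidateBuffer(buf);  /* simplified; the real code pins and locks */
                break;
            }
        }
    }
}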
From: Jerry Sievers [mailto:gsiever...@comcast.net]
> Wonder if this is the case for streaming standbys replaying truncates
> also?
Yes, as I wrote in my previous mail, TRUNCATE is worse than DROP TABLE.
Regards
Takayuki Tsunakawa
From: Konstantin Knizhnik [mailto:k.knizh...@postgrespro.ru]
> In Oracle, for example, you can create dedicated and non-dedicated backends.
> I wonder why we do not want to have something similar in Postgres.
Yes, I want it, too. In addition to dedicated and shared server processes,
Oracle provides D
From: Fujii Masao [mailto:masao.fu...@gmail.com]
> a very long time before accessing to the relation. Which would cause the
> response-time spikes, for example, I observed such spikes several times
> on
> the server with shared_buffers = 300GB while running the benchmark.
FYI, a long transaction t
From: Fujii Masao [mailto:masao.fu...@gmail.com]
> Yeah, it's worth working on this problem. To decrease the number of scans
> of
> shared_buffers, you would need to change the order of truncations of files
> and
> WAL logging. In RelationTruncate(), currently WAL is logged after FSM and
> VM
> are
From: Michael Paquier [mailto:mich...@paquier.xyz]
> The last patch submitted is here:
> https://www.postgresql.org/message-id/0A3221C70F24FB45833433255569204D
> 1F8ECF73@G01JPEXMBYT05
> And based on the code paths it touches I would recommend to not play with
> REL_12_STABLE at this stage.
I'm re
From: Tom Lane [mailto:t...@sss.pgh.pa.us]
> Alvaro Herrera from 2ndQuadrant writes:
> > Well, IMV this is a backpatchable, localized bug fix.
>
> I dunno. This thread is approaching two years old, and a quick
> review shows few signs that we actually have any consensus on
> making behavioral ch
From: Alvaro Herrera from 2ndQuadrant [mailto:alvhe...@alvh.no-ip.org]
> Testing protocol version 2 is difficult! Almost every single test fails
> because of error messages being reported differently; and streaming
> replication (incl. pg_basebackup) doesn't work at all because it's not
> possible
From: Tom Lane [mailto:t...@sss.pgh.pa.us]
> SIGTERM, which needs to be adjusted. For another, its
> SIGQUIT handler does exit(1) not _exit(2), which seems rather
> dubious ... should we make it more like the rest? I think
> the reasoning there might've been that if some DBA decides to
> SIGQUIT
From: David Steele [mailto:da...@pgmasters.net]
> > Can't we use SIGKILL instead of SIGINT/SIGTERM to stop the grandchildren,
> just in case they are slow to respond to or ignore SIGINT/SIGTERM? That
> matches the idea of pg_ctl's immediate shutdown.
>
> -1, at least not immediately. Archivers c
From: Michael Paquier [mailto:mich...@paquier.xyz]
> Imagine an application which relies on Postgres, still does *not* start
> it as a service but uses "pg_ctl start"
> automatically. This could be triggered as part of another service startup
> which calls say system(), or as another script. Woul
Hello,
In the following code in execTuples.c, shouldn't srcdesc point to the source
slot's tuple descriptor? The attached fix passes make check. What kind of
failure could this cause?
BTW, I thought that in the PostgreSQL coding convention, local variables should be
defined at the top of blocks,
From: Tom Lane [mailto:t...@sss.pgh.pa.us]
> I temporarily changed the Assert to be "==" rather than "<=", and
> it still passed check-world, so evidently we are not testing any
> cases where the descriptors are of different lengths. This explains
> the lack of symptoms. It's still a bug though,
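(For reference, a simplified sketch of the slot-copy logic in question, not the exact execTuples.c source; with srcdesc taken from the source slot, the Assert Tom mentions is the one that would catch a descriptor-length mismatch.)

static void
copy_slot_sketch(TupleTableSlot *dstslot, TupleTableSlot *srcslot)
{
    TupleDesc   srcdesc = srcslot->tts_tupleDescriptor;    /* source, not destination */
    int         natt;

    /* the destination must have at least as many attributes as the source */
    Assert(srcdesc->natts <= dstslot->tts_tupleDescriptor->natts);

    slot_getallattrs(srcslot);

    for (natt = 0; natt < srcdesc->natts; natt++)
    {
        dstslot->tts_values[natt] = srcslot->tts_values[natt];
        dstslot->tts_isnull[natt] = srcslot->tts_isnull[natt];
    }
}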
From: Alvaro Herrera [mailto:alvhe...@2ndquadrant.com]
> On 2019-Sep-03, Tsunakawa, Takayuki wrote:
> > I don't think it's rejected. It would be a pity (mottainai) to refuse
> > this, because it provides significant speedup despite its simple
> > modification.
From: Michael Paquier [mailto:mich...@paquier.xyz]
> This makes the test page-size sensitive. While we don't ensure that tests
> can be run with different page sizes, we should make a maximum effort to
> keep the tests compatible if that's easy enough. In this case you could
> just use > 0 as bas
From: Robert Haas [mailto:robertmh...@gmail.com]
> I don't think that a VACUUM option would be out of place, but a GUC
> sounds like an attractive nuisance to me. It will encourage people to
> just flip it blindly instead of considering the particular cases where
> they need that behavior, and I t
From: Haribabu Kommi [mailto:kommi.harib...@gmail.com]
> Attached are the updated patches.
Thanks, all look fixed.
> The target_server_type option yet to be implemented.
Please let me review once more and proceed to testing when the above is added,
to make sure the final code looks good. I'd
From: Ideriha, Takeshi [mailto:ideriha.take...@jp.fujitsu.com]
> I measured the memory context accounting overhead using Tomas's tool
> palloc_bench,
> which he made a while ago in a similar discussion.
> https://www.postgresql.org/message-id/53f7e83c.3020...@fuzzy.cz
>
> This tool is a littl
From: Michael Paquier [mailto:mich...@paquier.xyz]
> So could you consider adding an option for the VACUUM command as well
> as vacuumdb? The interactions with the current patch is that you need to
> define the behavior at the beginning of vacuum for a given heap, instead
> of reading the param
From: Alvaro Herrera [mailto:alvhe...@2ndquadrant.com]
> Robert used the phrase "attractive nuisance", which maybe sounds like a
> good thing to have to a non native speaker, but it actually isn't -- he
> was saying we should avoid a GUC at all, and I can see the reason for
> that. I think we shou
From: Ideriha, Takeshi/出利葉 健
> [Size=800, iter=1,000,000]
> Master |15.763
> Patched|16.262 (+3%)
>
> [Size=32768, iter=1,000,000]
> Master |61.3076
> Patched|62.9566 (+2%)
What's the unit, seconds or milliseconds?
Why does the number of digits to the right of the decimal point differ?
Is the measurement c
From: Fabien COELHO [mailto:coe...@cri.ensmp.fr]
> >> If the user reconnects, eg "\c db", the setting is lost. The
> >> re-connection handling should probably take care of this parameter, and
> maybe others.
> > I think your opinion is reasonable, but it seems not in this thread.
>
> I think that
From: Robert Haas [mailto:robertmh...@gmail.com]
> The first thing I notice about the socket_timeout patch is that the
> documentation is definitely wrong:
Agreed. I suppose the description should be clearer about:
* the purpose and what situation this timeout will help: not for canceling a
lon
From: Robert Haas [mailto:robertmh...@gmail.com]
> But that's not what it will do. As long as the server continues to
> dribble out protocol messages from time to time, the timeout will
> never fire no matter how much time passes. I saw a system once where
> every 8kB read took many seconds to co
From: Kyotaro HORIGUCHI [mailto:horiguchi.kyot...@lab.ntt.co.jp]
> If so, in turn the socket_timeout doesn't work as expected? I
> understand that what is proposed here is to disconnect after that
> time of waiting for *the first tuple* of a query, regardless of
> whether it is a long query or a network fail
From: Fabien COELHO [mailto:coe...@cri.ensmp.fr]
> I think that the typical use-case of \c is to connect to another database
> on the same host, at least that what I do pretty often. The natural
> expectation is that the same "other" connection parameters are used,
> otherwise it does not make much
From: mikalaike...@ibagroup.eu [mailto:mikalaike...@ibagroup.eu]
> > For example, OS issues such as abnormally (buggy) slow process scheduling
> or paging/swapping that prevent control from being passed to postgres. Or,
> abnormally long waits on lwlocks in postgres. statement_timeout doesn't
> t
From: Robert Haas [mailto:robertmh...@gmail.com]
> One other thing -- I looked a bit into the pgsql-jdbc implementation
> of a similarly-named option, and it does seem to match what you are
> proposing here. I wonder what user experiences with that option have
> been like.
One case I faintly reca
From: Robert Haas [mailto:robertmh...@gmail.com]
> Now you might say - what if the server is stopped not because of
> SIGSTOP but because of some other reason, like it's waiting for a
> lock? Well, in that case, the database server is still functioning,
> and you will not want the connection to be
From: mikalaike...@ibagroup.eu [mailto:mikalaike...@ibagroup.eu]
> Do you mind me asking whether you have considered that solving your problem
> could cause problems in other user applications?
> Let's imagine a possible problem:
> 1. end-user sets 'socket_timeout' only for current session
From: mikalaike...@ibagroup.eu [mailto:mikalaike...@ibagroup.eu]
> In case of failure, PQcancel() terminates within 'socket_timeout'. So, in such
> a failure situation, control will be returned to the end-user within a 2 *
> 'socket_timeout' interval. It is much better than hanging forever in some
> specific c
From: mikalaike...@ibagroup.eu [mailto:mikalaike...@ibagroup.eu]
> Based on your comment it seems to me that 'socket_timeout' should be
> connected with statement_timeout. I mean that end-user should wait
> statement_timeout + 'socket_timeout' for returning control. It looks much
> safer for m
From: Haribabu Kommi [mailto:kommi.harib...@gmail.com]
> Target_session_attrs     Target_server_type
>
> read-write               prefer-slave, slave
> prefer-read              master, slave
> read-only                master, prefer-slave
>
> I know that some of the cas
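(For context, a minimal libpq example of how target_session_attrs is given in a connection string; host1/host2 are placeholder host names, and only "read-write" and "any" exist in released libpq, the other values above being the proposal under discussion.)

#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    /* try host1 first, then host2, and require a server that accepts writes */
    PGconn *conn = PQconnectdb("host=host1,host2 dbname=postgres "
                               "target_session_attrs=read-write");

    if (PQstatus(conn) != CONNECTION_OK)
        fprintf(stderr, "%s", PQerrorMessage(conn));
    PQfinish(conn);
    return 0;
}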
From: Robert Haas [mailto:robertmh...@gmail.com]
> I don't think so. I think it's just a weirdly-design parameter
> without a really compelling use case. Enforcing limits on the value
> of the parameter doesn't fix that. Most of the reviewers who have
> opined so far have been somewhere between
Hi Peter, Imai-san,
From: Peter Eisentraut [mailto:peter.eisentr...@2ndquadrant.com]
> Your changes in LOCALLOCK still refer to PGPROC, from your first version
> of the patch.
>
> I think the reordering of struct members could be done as a separate
> preliminary patch.
>
> Some more documentatio
From: Tsunakawa, Takayuki [mailto:tsunakawa.ta...@jp.fujitsu.com]
> Fixed.
Rebased on HEAD.
Regards
Takayuki Tsunakawa
0001-reorder-LOCALLOCK-structure-members-to-compact-the-s.patch
Description: 0001-reorder-LOCALLOCK-structure-members-to-compact-the-s.patch
0002-speed-up-LOCALL
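(The 0001 patch reorders LOCALLOCK members to remove padding; below is a generic illustration of that technique, not the actual LOCALLOCK layout, assuming a typical 64-bit ABI.)

#include <stdbool.h>

struct unordered
{
    bool    flag;       /* 1 byte + 7 bytes of padding */
    void   *ptr;        /* 8 bytes */
    int     count;      /* 4 bytes + 4 bytes of tail padding */
};                      /* sizeof == 24 */

struct reordered        /* widest members first */
{
    void   *ptr;        /* 8 bytes */
    int     count;      /* 4 bytes */
    bool    flag;       /* 1 byte + 3 bytes of tail padding */
};                      /* sizeof == 16 */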
From: legrand legrand [mailto:legrand_legr...@hotmail.com]
> There are many projects that use alternate QueryId
> distinct from the famous pg_stat_statements jumbling algorithm.
I'd like to welcome the standard QueryID that DBAs and extension developers can
depend on. Are you surveying the needs
From: Kyotaro HORIGUCHI [mailto:horiguchi.kyot...@lab.ntt.co.jp]
> > > needs.1: stable across different databases,
> >
> > Does this mean different database clusters, not different databases in
> a single database cluster?
>
> Does this mean you want different QueryID for the same-looking
> query
From: Robert Haas [mailto:robertmh...@gmail.com]
> I really dislike having both target_session_attrs and
> target_server_type. It doesn't solve any actual problem. master,
> slave, prefer-slave, or whatever you like could be put in
> target_session_attrs just as easily, and then we wouldn't end up
From: David Steele [mailto:da...@pgmasters.net]
> This patch appears to have been stalled for a while.
>
> Takayuki -- the ball appears to be in your court. Perhaps it would be
> helpful to summarize what you think are next steps?
disable_index_cleanup is handled by Sawada-san in another thread.
From: David Rowley [mailto:david.row...@2ndquadrant.com]
> On Mon, 25 Mar 2019 at 23:44, Peter Eisentraut
> wrote:
> > Perhaps "speeding up planning with partitions" needs to be accepted first?
>
> Yeah, I think it likely will require that patch to be able to measure
> the gains from this patch.
From: Amit Langote [mailto:langote_amit...@lab.ntt.co.jp]
> My understanding of what David wrote is that the slowness of bloated hash
> table is hard to notice, because planning itself is pretty slow. With the
> "speeding up planning with partitions" patch, planning becomes quite fast,
> so the bl
From: David Rowley [mailto:david.row...@2ndquadrant.com]
> Here's a benchmark doing that using pgbench's script weight feature.
Wow, I didn't know that pgbench has evolved to have such a convenient feature.
Thanks for telling me how to utilize it in testing. PostgreSQL is cool!
Regards
Takayuki
From: Masahiko Sawada [mailto:sawada.m...@gmail.com]
> On Wed, Mar 27, 2019 at 2:30 AM Robert Haas wrote:
> >
> > On Tue, Mar 26, 2019 at 11:23 AM Masahiko Sawada
> wrote:
> > > > I don't see a patch with the naming updated, here or there, and I'm
> > > > going to be really unhappy if we end up w
From: Haribabu Kommi [mailto:kommi.harib...@gmail.com]
> while going through the old patch where the GUC_REPORT is implemented, Tom
> has commented the logic of sending the signal to all backends to process
> the hot standby exit with SIGHUP, if we add the logic of updating the GUC
> variable value
I've looked through 0004-0007. I've only found the following:
(5) 0005
With this read-only option type, application can connect to
connecting to a read-only server in the list of hosts, in case
if there is any read-only servers available, the connection
attempt fails.
"connecting to" can be remo
From: Robert Haas [mailto:robertmh...@gmail.com]
> You're both right and I'm wrong.
>
> However, I think it would be better to stick with the term 'truncate'
> which is widely-used already, rather than introducing a new term.
Yeah, I have the same feeling. OTOH, as I referred in this thread, shr
From: Kyotaro HORIGUCHI [mailto:horiguchi.kyot...@lab.ntt.co.jp]
> +if (setsockopt(conn->sock, IPPROTO_TCP, TCP_USER_TIMEOUT,
> + (char *) &timeout, sizeof(timeout)) < 0 && errno !=
> ENOPROTOOPT)
> +{
> +    char    sebuf[256];
> +
> +appendPQExpBuffer(&conn->er
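(For reference, a self-contained sketch of the call being reviewed; on Linux, TCP_USER_TIMEOUT takes a value in milliseconds. The patch additionally ignores ENOPROTOOPT because the option is Linux-specific; this sketch simply reports any failure to the caller.)

#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

static int
set_tcp_user_timeout(int sock, unsigned int timeout_ms)
{
    if (setsockopt(sock, IPPROTO_TCP, TCP_USER_TIMEOUT,
                   (char *) &timeout_ms, sizeof(timeout_ms)) < 0)
        return -1;              /* caller reports strerror(errno) */
    return 0;
}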
From: Tsunakawa, Takayuki [mailto:tsunakawa.ta...@jp.fujitsu.com]
> From: Kyotaro HORIGUCHI [mailto:horiguchi.kyot...@lab.ntt.co.jp]
> > +if (setsockopt(conn->sock, IPPROTO_TCP, TCP_USER_TIMEOUT,
> > + (char *) &timeout, sizeof(timeout)) < 0 &&
Nagaura-san,
The client-side tcp_user_timeout patch looks good.
The server-side tcp_user_timeout patch needs the following fixes:
(1)
+ GUC_UNIT_MS | GUC_NOT_IN_SAMPLE
+ 12000, 0, INT_MAX,
GUC_NOT_IN_SAMPLE should be removed because the parameter appears in
Nagaura-san,
The socket_timeout patch needs the following fixes. Now that others have
already tested these patches successfully, they appear committable to me.
(1)
+ else
+ goto iiv_error;
...
+
+iiv_error:
+ conn->status = CONNECTION_BAD;
+ prin
Hi Hari-san,
I've reviewed all the files. The patch would be OK when the following have
been fixed, except for the complexity of fe-connect.c (which probably cannot be
improved).
Unfortunately, I'll be absent next week. The earliest date I can do the test
will be April 8 or 9. I hope someon
From: David Rowley [mailto:david.row...@2ndquadrant.com]
> Another counter-argument to this is that there's already an
> unexplainable slowdown after you run a query which obtains a large
> number of locks in a session or use prepared statements and a
> partitioned table with the default plan_cache
From: Matsumura, Ryo [mailto:matsumura@jp.fujitsu.com]
> Detail:
> If target_session_attrs is set to read-write, PQconnectPoll() calls
> PQsendQuery("SHOW transaction_read_only") althogh previous return value
> was PGRES_POLLING_READING not WRITING.
The current code probably assumes that PQsen
From: Konstantin Knizhnik [mailto:k.knizh...@postgrespro.ru]
> PL/pgSQL: 29044.361 ms
> C/SPI: 22785.597 ms
>
> The fact that difference between PL/pgSQL and function implemented in C
> using SPI is not so large was expected by me.
This PL/pgSQL overhead is not so significant compared
From: Tom Lane [mailto:t...@sss.pgh.pa.us]
> After investigation, the mechanism that's causing that is that the
> src/test/recovery/t/010_logical_decoding_timelines.pl test shuts
> down its replica server with a mode-immediate stop, which causes
> that postmaster to shut down all its children with
From: Kyotaro Horiguchi [mailto:horikyota@gmail.com]
> Since we are allowing OPs to use arbitrary command as
> archive_command, providing a replacement with non-standard signal
> handling for a specific command doesn't seem a general solution
> to me. Couldn't we have pg_system(a tentative name
From: Alvaro Herrera [mailto:alvhe...@2ndquadrant.com]
> Hmm ... is this patch rejected, or is somebody still trying to get it to
> committable state? David, you're listed as committer.
I don't think it's rejected. It would be a pity (mottainai) to refuse this,
because it provides significant s
From: Jing Wang [mailto:jingwang...@gmail.com]
> This is a proposal that let libpq support 'prefer-read' option in
> target_session_attrs in pg_conn. The 'prefer-read' means the libpq will
> try to connect to a 'read-only' server firstly from the multiple server
> addresses. If failed to connect to
From: Simon Riggs [mailto:si...@2ndquadrant.com]
> When will the next version be posted?
I'm very sorry I haven't submitted anything. I'd like to address this during
this CF. Thanks for remembering this.
Regards
Takayuki Tsunakawa
From: Robert Haas [mailto:robertmh...@gmail.com]
> Oh, incidentally -- in our internal testing, we found that
> wal_sync_method=open_datasync was significantly faster than
> wal_sync_method=fdatasync. You might find that open_datasync isn't much
> different from pmem_drain, even though they're bot
From: Robert Haas [mailto:robertmh...@gmail.com]
> I think open_datasync will be worse on systems where fsync() is expensive
> -- it forces the data out to disk immediately, even if the data doesn't
> need to be flushed immediately. That's bad, because we wait immediately
> when we could have defe
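(A rough, self-contained illustration of the two flush policies Robert contrasts; "some.wal" is a placeholder file name. open_datasync makes every write synchronous, while fdatasync lets several writes be made durable with one deferred flush.)

#include <fcntl.h>
#include <unistd.h>

static void
flush_styles(const char *buf, size_t len)
{
    int     fd;

    /* open_datasync: each write() returns only after the data is durable */
    fd = open("some.wal", O_WRONLY | O_DSYNC);
    write(fd, buf, len);
    close(fd);

    /* fdatasync: writes can be batched and flushed together later */
    fd = open("some.wal", O_WRONLY);
    write(fd, buf, len);
    write(fd, buf, len);
    fdatasync(fd);              /* one flush covers both writes */
    close(fd);
}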
Hello,
I've found a problem that an orphaned temporary table could cause XID
wraparound. Our customer encountered this problem with PG 9.5.2, but I think
this will happen with the latest PG.
I'm willing to fix this, but I'd like to ask you what approach we should take.
PROBLEM
==
> From: Michael Paquier [mailto:michael.paqu...@gmail.com]
> As a superuser, DROP TABLE should work on the temporary schema of another
> session. Have you tried that to solve the situation?
Yes, we asked the customer to do that today. I think the customer will do so in
the near future.
> > * In th
From: Michael Paquier [mailto:michael.paqu...@gmail.com]
> On Thu, Jan 25, 2018 at 08:10:00AM +0000, Tsunakawa, Takayuki wrote:
> > I understood you suggested a new session which recycle the temp schema
> > should erase the zombie metadata of old temp tables or recreate the
From: Robert Haas [mailto:robertmh...@gmail.com]
> On Wed, Jan 24, 2018 at 10:31 PM, Tsunakawa, Takayuki
> wrote:
> > As you said, open_datasync was 20% faster than fdatasync on RHEL7.2, on
> a LVM volume with ext4 (mounted with options noatime, nobarrier) on a PCIe
> flash me
From: Robert Haas [mailto:robertmh...@gmail.com]
> On Thu, Jan 25, 2018 at 7:08 PM, Tsunakawa, Takayuki
> wrote:
> > No, I'm not saying we should make the persistent memory mode the default.
> I'm simply asking whether it's time to make open_datasync the default
>
From: Michael Paquier [mailto:michael.paqu...@gmail.com]
> Or to put it short, the lack of granular syncs in ext3 kills performance
> for some workloads. Tomas Vondra's presentation on such matters are a really
> cool read by the way:
> https://www.slideshare.net/fuzzycz/postgresql-on-ext4-xfs-btrf
From: Robert Haas [mailto:robertmh...@gmail.com]
> If I understand correctly, those results are all just pg_test_fsync results.
> That's not reflective of what will happen when the database is actually
> running. When you use open_sync or open_datasync, you force WAL write and
> WAL flush to happe
From: Masahiko Sawada [mailto:sawada.m...@gmail.com]
> On Thu, Jan 25, 2018 at 3:14 PM, Tsunakawa, Takayuki
> wrote:
> > * Why does autovacuum launcher always choose only one database when that
> database need vacuuming for XID wraparound? Shouldn't it also choose other
From: Robert Haas [mailto:robertmh...@gmail.com]
> I think we should consider having backends try to remove their temporary
> schema on startup; then, if a temp table in a backend is old enough that
> it's due for vacuum for wraparound, have autovacuum kill the connection.
> The former is necessary
From: Masahiko Sawada [mailto:sawada.m...@gmail.com]
> What I thought is that a worker reports these two values after scanning
> pg_class and after freezing a table. The launcher decides to launch a new
> worker if the number of tables requiring anti-wraparound vacuum is greater
> than the number of
From: Masahiko Sawada [mailto:sawada.m...@gmail.com]
> On Mon, Jan 29, 2018 at 3:33 PM, Tsunakawa, Takayuki
> wrote:
> > I can understand your concern. On the other hand, it's unfair that one
> database could monopolize all workers, because other databases might also
> be
From: Robert Haas [mailto:robertmh...@gmail.com]
> Unfortunately, I think a full solution to the problem of allocating AV
> workers to avoid wraparound is quite complex.
Yes, that easily puts my small brain into an infinite loop...
> Given all of the foregoing this seems like a very hard problem.
Hello,
A user hit a problem with ECPG on Windows. The attached patch is a fix for
it. I'd appreciate it if you could backport this in all supported versions.
The problem is simple. free() in the following example crashes:
char *out;
out = PGTYPESnumeric_to_asc(...);
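(A sketch of the failure pattern, assuming the usual explanation: on Windows, libpgtypes.dll and the application can be linked against different C runtimes, each managing its own heap.)

#include <stdlib.h>
#include <pgtypes_numeric.h>

void
crash_example(numeric *num)
{
    /* the string is malloc()ed inside libpgtypes */
    char   *out = PGTYPESnumeric_to_asc(num, 2);

    /*
     * free() here runs in the application's C runtime; freeing memory that
     * belongs to the DLL's heap can crash.  The remedy is to free the
     * string with a function exported by the library itself, such as the
     * PGTYPES_free()-style wrapper discussed later in this thread.
     */
    free(out);
}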
From: Masahiko Sawada [mailto:sawada.m...@gmail.com]
> Thank you for the suggestion. That sounds smarter. So would it be better
> if we vacuum databases for anti-wraparound in ascending order of
> relfrozenxid?
I thought so, too. The current behavior is inconsistent: the launcher tries to
as
From: Thomas Munro [mailto:thomas.mu...@enterprisedb.com]
> +#ifndef PGTYPES_FREE
> +#define PGTYPES_FREE
> + extern void PGTYPES_free(void *ptr);
> +#endif
>
> It seems quite strange to repeat this in pgtypes_date.h, pgtypes_interval.h
> and pgtypes_numeric.h. I guess you might not want to intro
From: Robert Haas [mailto:robertmh...@gmail.com]
> Temporary tables contain XIDs, so they need to be vacuumed for XID
> wraparound. Otherwise, queries against those tables by the session
> that created them could yield wrong answers. However, autovacuum
> can't perform that vacuuming; it would ha
From: Michael Paquier [mailto:michael.paqu...@gmail.com]
> I am not sure that we would like to give up that easily the property that
> we have now to clean up past temporary files only at postmaster startup
> and only when not in recovery. If you implement that, there is a risk that
> the backend
From: Michael Paquier [mailto:michael.paqu...@gmail.com]
> > postmaster deletes temporary relation files at startup by calling
> > RemovePgTempFiles() regardless of whether it's in recovery. It
> > doesn't call that function during auto restart after a crash when
> > restart_after_crash is on.
>
Hello,
Our customer encountered a rare bug in PostgreSQL which prevents a cascaded
standby from starting up. The attached patch is a fix for it. I hope this
will be back-patched. I'll add this to the next CF.
PROBLEM
==
The PostgreSQL version is 9.5. The cluste
From: Fujii Masao [mailto:masao.fu...@gmail.com]
> reloption for TOAST is also required?
# I've come back to the office earlier than planned...
Hm, there's no reason to not provide toast.vacuum_shrink_enabled. Done with
the attached patch.
Regards
Takayuki Tsunakawa
disable-vacuum-truncat
Hi Peter,
From: Peter Eisentraut [mailto:peter.eisentr...@2ndquadrant.com]
> I did a bit of performance testing, both a plain pgbench and the
> suggested test case with 4096 partitions. I can't detect any
> performance improvements. In fact, within the noise, it tends to be
> just a bit on the s
Hi Peter, Imai-san,
From: Peter Eisentraut [mailto:peter.eisentr...@2ndquadrant.com]
> I can't detect any performance improvement with the patch applied to
> current master, using the test case from Yoshikazu Imai (2019-03-19).
That's strange... Peter, Imai-san, can you compare your test procedu
From: Masahiko Sawada [mailto:sawada.m...@gmail.com]
> "VACUUM" needs or "vacuum" is more appropriate here?
Looking at the same file and some other files, "vacuum" looks appropriate
because it represents the vacuum action, not the specific VACUUM command.
> The format of the documentation of n
Hi Amit-san, Imai-san,
From: Amit Langote [mailto:langote_amit...@lab.ntt.co.jp]
> I was able to detect it as follows.
> plan_cache_mode = auto
>
> HEAD: 1915 tps
> Patched: 2394 tps
>
> plan_cache_mode = custom (non-problematic: generic plan is never created)
>
> HEAD: 2402 tps
> Patche
From: Michael Paquier [mailto:mich...@paquier.xyz]
> The first letter should be upper-case.
Thank you for taking care of this patch, and sorry for the trouble of fixing
that...
> to me that socket_timeout_v14.patch should be rejected as it could cause
> a connection to go down with no actual
From: Michael Paquier [mailto:mich...@paquier.xyz]
> I have just committed the GUC and libpq portion for TCP_USER_TIMEOUT after
> a last lookup, and I have cleaned up a couple of places.
Thank you for further cleanup and committing.
> For the socket_timeout stuff, its way of solving the problem
Hi Andres, Fujii-san, any committer,
From: Andres Freund [mailto:and...@anarazel.de]
> On 2019-04-08 09:52:27 +0900, Fujii Masao wrote:
> > I'm thinking to commit this patch at first. We can change the term
> > and add the support of "TRUNCATE" option for VACUUM command later.
>
> I hope you rea
From: Tom Lane [mailto:t...@sss.pgh.pa.us]
> On the whole I don't think there's an adequate case for committing
> this patch.
From: Andres Freund [mailto:and...@anarazel.de]
> On 2019-04-05 23:03:11 -0400, Tom Lane wrote:
> > If I reduce the number of partitions in Amit's example from 8192
> > to
From: 'Andres Freund' [mailto:and...@anarazel.de]
> On 2019-04-08 02:28:12 +0000, Tsunakawa, Takayuki wrote:
> > I think the linked list of LOCALLOCK approach is natural, simple, and
> > good.
>
> Did you see that people measured slowdowns?
Yeah, 0.5% decrease with
From: David Rowley [mailto:david.row...@2ndquadrant.com]
> It would be good to get your view on the
> shrink_bloated_locallocktable_v3.patch I worked on last night. I was
> unable to measure any overhead to solving the problem that way.
Thanks, it looks super simple and good. I understood the ide
From: Tom Lane [mailto:t...@sss.pgh.pa.us]
> And, as far as I can see from a quick review of the thread,
> we don't really have consensus on the names and behaviors.
The consensus on the name seems to be truncate rather than shrink (a few people
kindly said they like shrink, and I'm OK with either n
From: Alvaro Herrera [mailto:alvhe...@2ndquadrant.com]
> "vacuum_truncate" gets my vote too.
+1
From: 'Andres Freund' [mailto:and...@anarazel.de]
> Personally I think the name just needs some committer to make a
> call. This largely is going to be used after encountering too many
> cancellations
From: Fujii Masao [mailto:masao.fu...@gmail.com]
> Thanks for the info, so I marked the patch as committed.
Thanks a lot for your hard work! This felt relatively tough despite the
simplicity of the patch. I'm starting to feel the difficulty and fatigue in
developing in the community...
Regar
From: Mori Bellamy [mailto:m...@invoked.net]
> I'd like a few features when developing postgres -- (1) jump to definition
> of symbol (2) find references to symbol and (3) semantic autocompletion.
For 1), you can generate tags like:
[for vi]
$ src/tools/make_ctags
[for Emacs]
$ src/tools/make_eta
From: Yang Jie [mailto:yang...@highgo.com]
> Delayed cleanup, resulting in performance degradation, what are the
> solutions recommended?
> What do you suggest for the flashback feature?
> Although postgres has excellent backup and restore capabilities, have you
> considered adding flashbacks?
Gre
From: Adelino Silva [mailto:adelino.j.si...@googlemail.com]
> What is the advantage to use archive_mode = always in a slave server compared
> to archive_mode = on (shared WAL archive) ?
>
> I only see duplication of Wal files, what is the purpose of this feature ?
This also saves you the network
From: Narayanan V [mailto:vnarayanan.em...@gmail.com]
> I think what Takayuki is trying to say is that streaming replication works
> by sending the contents of the WAL archives to the standbys. If archive_mode
> was NOT set to always, and if you wanted to archive WAL logs in the standby
> you would
From: David Rowley [mailto:david.row...@2ndquadrant.com]
> I think it's a bit strange that we don't have this information fairly
> early on in the official documentation. I only see a mention of the
> 1600 column limit in the create table docs. Nothing central and don't
> see mention of 32 TB tabl