From: Alvaro Herrera [mailto:alvhe...@2ndquadrant.com]
> On 2019-Sep-03, Tsunakawa, Takayuki wrote:
> > I don't think it's rejected. It would be a pity (mottainai) to refuse
> > this, because it provides significant speedup despite its simple
> > modification.
From: Tom Lane [mailto:t...@sss.pgh.pa.us]
> I temporarily changed the Assert to be "==" rather than "<=", and
> it still passed check-world, so evidently we are not testing any
> cases where the descriptors are of different lengths. This explains
> the lack of symptoms. It's still a bug though,
Hello,
In the following code in execTuples.c, shouldn't srcdesc point to the source
slot's tuple descriptor? The attached fix passes make check. What kind of
failure could this cause?
BTW, I thought that in PostgreSQL coding convention, local variables should be
defined at the top of blocks,
From: Michael Paquier [mailto:mich...@paquier.xyz]
> Imagine an application which relies on Postgres, still does *not* start
> it as a service but uses "pg_ctl start"
> automatically. This could be triggered as part of another service startup
> which calls say system(), or as another script. Woul
From: David Steele [mailto:da...@pgmasters.net]
> > Can't we use SIGKILL instead of SIGINT/SIGTERM to stop the grandchildren,
> just in case they are slow to respond to or ignore SIGINT/SIGTERM? That
> matches the idea of pg_ctl's immediate shutdown.
>
> -1, at least not immediately. Archivers c
From: Tom Lane [mailto:t...@sss.pgh.pa.us]
> SIGTERM, which needs to be adjusted. For another, its
> SIGQUIT handler does exit(1) not _exit(2), which seems rather
> dubious ... should we make it more like the rest? I think
> the reasoning there might've been that if some DBA decides to
> SIGQUIT
From: Alvaro Herrera from 2ndQuadrant [mailto:alvhe...@alvh.no-ip.org]
> Testing protocol version 2 is difficult! Almost every single test fails
> because of error messages being reported differently; and streaming
> replication (incl. pg_basebackup) doesn't work at all because it's not
> possible
From: Tom Lane [mailto:t...@sss.pgh.pa.us]
> Alvaro Herrera from 2ndQuadrant writes:
> > Well, IMV this is a backpatchable, localized bug fix.
>
> I dunno. This thread is approaching two years old, and a quick
> review shows few signs that we actually have any consensus on
> making behavioral ch
From: Michael Paquier [mailto:mich...@paquier.xyz]
> The last patch submitted is here:
> https://www.postgresql.org/message-id/0A3221C70F24FB45833433255569204D1F8ECF73@G01JPEXMBYT05
> And based on the code paths it touches I would recommend to not play with
> REL_12_STABLE at this stage.
I'm re
From: Alvaro Herrera [mailto:alvhe...@2ndquadrant.com]
> Hmm ... is this patch rejected, or is somebody still trying to get it to
> committable state? David, you're listed as committer.
I don't think it's rejected. It would be a pity (mottainai) to refuse this,
because it provides significant s
From: Kyotaro Horiguchi [mailto:horikyota@gmail.com]
> Since we are allowing OPs to use arbitrary command as
> archive_command, providing a replacement with non-standard signal
> handling for a specific command doesn't seem a general solution
> to me. Couldn't we have pg_system(a tentative name
From: Tom Lane [mailto:t...@sss.pgh.pa.us]
> After investigation, the mechanism that's causing that is that the
> src/test/recovery/t/010_logical_decoding_timelines.pl test shuts
> down its replica server with a mode-immediate stop, which causes
> that postmaster to shut down all its children with
From: Konstantin Knizhnik [mailto:k.knizh...@postgrespro.ru]
> PL/pgSQL: 29044.361 ms
> C/SPI: 22785.597 ms
>
> The fact that difference between PL/pgSQL and function implemented in C
> using SPI is not so large was expected by me.
This PL/pgSQL overhead is not so significant compared
From: Matsumura, Ryo [mailto:matsumura@jp.fujitsu.com]
> Detail:
> If target_session_attrs is set to read-write, PQconnectPoll() calls
> PQsendQuery("SHOW transaction_read_only") although previous return value
> was PGRES_POLLING_READING not WRITING.
The current code probably assumes that PQsen
From: David Rowley [mailto:david.row...@2ndquadrant.com]
> Another counter-argument to this is that there's already an
> unexplainable slowdown after you run a query which obtains a large
> number of locks in a session or use prepared statements and a
> partitioned table with the default plan_cache
From: David Rowley [mailto:david.row...@2ndquadrant.com]
> For the use case we've been measuring with partitioned tables and the
> generic plan generation causing a sudden spike in the number of
> obtained locks, then having plan_cache_mode = force_custom_plan will
> cause the lock table not to bec
From: David Rowley [mailto:david.row...@2ndquadrant.com]
> I personally don't think that's true. The only way you'll notice the
> LockReleaseAll() overhead is to execute very fast queries with a
> bloated lock table. It's pretty hard to notice that a single 0.1ms
> query is slow. You'll need to e
From: David Rowley [mailto:david.row...@2ndquadrant.com]
> I went back to the drawing board on this and I've added some code that counts
> the number of times we've seen the table to be oversized and just shrinks
> the table back down on the 1000th time. 6.93% / 1000 is not all that much.
I'm afr
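The counting-based shrink described in the quote can be sketched as below. The constants and function names (LOCALLOCK_INIT_SIZE, maybe_shrink) are illustrative stand-ins, not PostgreSQL's actual identifiers: the table is allowed to stay bloated, and a rebuild is triggered only on every 1000th oversized observation, amortizing the rebuild cost.

```c
/* Sketch: count how often the local lock table is seen oversized and
 * shrink it only on every 1000th observation. */
#define LOCALLOCK_INIT_SIZE   16
#define SHRINK_EVERY_N_CHECKS 1000

static long oversized_count = 0;

/* Called at lock-release time with the current number of entries.
 * Returns 1 when a shrink (hash table rebuild) would be performed. */
static int
maybe_shrink(long nentries)
{
    if (nentries <= LOCALLOCK_INIT_SIZE)
        return 0;               /* table is not bloated */
    if (++oversized_count % SHRINK_EVERY_N_CHECKS != 0)
        return 0;               /* tolerate the bloat for a while */
    /* real code would rebuild the hash table at its initial size */
    return 1;
}
```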
From: Ashwin Agrawal [mailto:aagra...@pivotal.io]
> The objective is to gather feedback on design and approach to the same.
> The implementation has core basic pieces working but not close to complete.
Thank you for proposing a very interesting topic. Are you thinking of
including this in Postgr
From: Stephen Frost [mailto:sfr...@snowman.net]
> Psst, don't look now, but there might be a "Resend email" button in the
> archives now that you can click to have an email sent to you...
>
> Note that you have to be logged in, and the email will go to the email address
> that you're logging int
From: Amit Kapila [mailto:amit.kapil...@gmail.com]
> Tsunakawa/Haribabu - By reading this thread briefly, it seems we need
> some more inputs from other developers on whether to fix this or not,
> so ideally the status of this patch should be 'Needs Review'. Why it
> is in 'Waiting on Author' stat
From: David Rowley [mailto:david.row...@2ndquadrant.com]
v5 is attached.
Thank you, it looks good. I find it ready for committer (I noticed the status
is already set accordingly.)
Regards
Takayuki Tsunakawa
From: David Rowley [mailto:david.row...@2ndquadrant.com]
> I've revised the patch to add a new constant named
> LOCKMETHODLOCALHASH_SHRINK_SIZE. I've set this to 64 for now. Once the hash
Thank you, and good performance. The patch passed make check.
I'm OK with the current patch, but I have a fe
From: Masahiko Sawada [mailto:sawada.m...@gmail.com]
> for (i = 0; i < NBuffers; i++)
> {
> (snip)
> buf_state = LockBufHdr(bufHdr);
>
> /* check with the lower bound and skip the loop */
> if (bufHdr->tag.blockNum < minBlock)
> {
> UnlockBufHdr(
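A completed sketch of the quoted loop follows. The types and locking functions here are simplified stand-ins for the real BufferDesc/LockBufHdr machinery: the point is that the scan over all buffer headers skips, under the header lock, any block below the known lower bound, so truncation only pays the full cost for buffers it might actually invalidate.

```c
/* Simplified stand-ins for the buffer manager structures. */
typedef struct BufTag   { unsigned blockNum; } BufTag;
typedef struct BufferDesc { BufTag tag; int locked; } BufferDesc;

static unsigned LockBufHdr(BufferDesc *b)   { b->locked = 1; return 0; }
static void     UnlockBufHdr(BufferDesc *b) { b->locked = 0; }

/* Scan all buffers, skipping blocks below minBlock.
 * Returns how many buffers were at or above the bound. */
static int
scan_buffers(BufferDesc *bufs, int nbuffers, unsigned minBlock)
{
    int hits = 0;

    for (int i = 0; i < nbuffers; i++)
    {
        BufferDesc *bufHdr = &bufs[i];

        (void) LockBufHdr(bufHdr);

        /* check against the lower bound and skip the rest of the body */
        if (bufHdr->tag.blockNum < minBlock)
        {
            UnlockBufHdr(bufHdr);
            continue;
        }

        hits++;                 /* real code would invalidate here */
        UnlockBufHdr(bufHdr);
    }
    return hits;
}
```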
From: Masahiko Sawada [mailto:sawada.m...@gmail.com]
> We do RelationTruncate() also when we truncate heaps that are created
> in the current transactions or has a new relfilenodes in the current
> transaction. So I think there is a room for optimization Thomas
> suggested, although I'm not sure it
From: Tomas Vondra [mailto:tomas.von...@2ndquadrant.com]
> Years ago I've implemented an optimization for many DROP TABLE commands
> in a single transaction - instead of scanning buffers for each relation,
> the code now accumulates a small number of relations into an array, and
> then does a bsear
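The optimization Tomas describes can be sketched as follows, with illustrative types (plain unsigned values standing in for relfilenodes): instead of one full buffer-pool scan per dropped relation, the dropped relfilenodes are collected into an array, sorted once, and a single pass over the pool does a binary search per buffer.

```c
#include <stdlib.h>

static int
cmp_oid(const void *a, const void *b)
{
    unsigned x = *(const unsigned *) a;
    unsigned y = *(const unsigned *) b;

    return (x > y) - (x < y);
}

/* One pass over the buffer pool for a whole batch of dropped
 * relations.  Returns the number of buffers belonging to any of
 * them; real code would invalidate those buffers. */
static int
drop_scan(const unsigned *buf_rels, int nbuffers,
          unsigned *dropped, int ndropped)
{
    int hits = 0;

    qsort(dropped, ndropped, sizeof(unsigned), cmp_oid);
    for (int i = 0; i < nbuffers; i++)
        if (bsearch(&buf_rels[i], dropped, ndropped,
                    sizeof(unsigned), cmp_oid) != NULL)
            hits++;
    return hits;
}
```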
From: Fujii Masao [mailto:masao.fu...@gmail.com]
> Thanks for the info, so I marked the patch as committed.
Thanks a lot for your hard work! This felt relatively tough despite the
simplicity of the patch. I'm starting to feel the difficulty and fatigue of
developing in the community...
Regar
From: Alvaro Herrera [mailto:alvhe...@2ndquadrant.com]
> "vacuum_truncate" gets my vote too.
+1
From: 'Andres Freund' [mailto:and...@anarazel.de]
> Personally I think the name just needs some committer to make a
> call. This largely is going to be used after encountering too many
> cancellations
From: Tom Lane [mailto:t...@sss.pgh.pa.us]
> And, as far as I can see from a quick review of the thread,
> we don't really have consensus on the names and behaviors.
Consensus on the name seems to be to use truncate rather than shrink (a few people
kindly said they like shrink, and I'm OK with either n
From: David Rowley [mailto:david.row...@2ndquadrant.com]
> It would be good to get your view on the
> shrink_bloated_locallocktable_v3.patch I worked on last night. I was
> unable to measure any overhead to solving the problem that way.
Thanks, it looks super simple and good. I understood the ide
From: 'Andres Freund' [mailto:and...@anarazel.de]
> On 2019-04-08 02:28:12 +0000, Tsunakawa, Takayuki wrote:
> > I think the linked list of LOCALLOCK approach is natural, simple, and
> > good.
>
> Did you see that people measured slowdowns?
Yeah, 0.5% decrease with
From: Tom Lane [mailto:t...@sss.pgh.pa.us]
> On the whole I don't think there's an adequate case for committing
> this patch.
From: Andres Freund [mailto:and...@anarazel.de]
> On 2019-04-05 23:03:11 -0400, Tom Lane wrote:
> > If I reduce the number of partitions in Amit's example from 8192
> > to
Hi Andres, Fujii-san, any committer,
From: Andres Freund [mailto:and...@anarazel.de]
> On 2019-04-08 09:52:27 +0900, Fujii Masao wrote:
> > I'm thinking to commit this patch at first. We can change the term
> > and add the support of "TRUNCATE" option for VACUUM command later.
>
> I hope you rea
From: Michael Paquier [mailto:mich...@paquier.xyz]
> I have just committed the GUC and libpq portion for TCP_USER_TIMEOUT after
> a last lookup, and I have cleaned up a couple of places.
Thank you for further cleanup and committing.
> For the socket_timeout stuff, its way of solving the problem
From: Michael Paquier [mailto:mich...@paquier.xyz]
> The first letter should be upper-case.
Thank you for taking care of this patch, and sorry to cause you trouble to fix
that...
> to me that socket_timeout_v14.patch should be rejected as it could cause
> a connection to go down with no actual
Hi Amit-san, Imai-san,
From: Amit Langote [mailto:langote_amit...@lab.ntt.co.jp]
> I was able to detect it as follows.
> plan_cache_mode = auto
>
>    HEAD: 1915 tps
> Patched: 2394 tps
>
> plan_cache_mode = custom (non-problematic: generic plan is never created)
>
>    HEAD: 2402 tps
> Patche
From: Masahiko Sawada [mailto:sawada.m...@gmail.com]
> "VACUUM" needs or "vacuum" is more appropriate here?
Looking at the same file and some other files, "vacuum" looks appropriate
because it represents the vacuum action, not the specific VACUUM command.
> The format of the documentation of n
Hi Peter, Imai-san,
From: Peter Eisentraut [mailto:peter.eisentr...@2ndquadrant.com]
> I can't detect any performance improvement with the patch applied to
> current master, using the test case from Yoshikazu Imai (2019-03-19).
That's strange... Peter, Imai-san, can you compare your test procedu
Hi Peter,
From: Peter Eisentraut [mailto:peter.eisentr...@2ndquadrant.com]
> I did a bit of performance testing, both a plain pgbench and the
> suggested test case with 4096 partitions. I can't detect any
> performance improvements. In fact, within the noise, it tends to be
> just a bit on the s
From: Fujii Masao [mailto:masao.fu...@gmail.com]
> reloption for TOAST is also required?
# I've come back to the office earlier than planned...
Hm, there's no reason to not provide toast.vacuum_shrink_enabled. Done with
the attached patch.
Regards
Takayuki Tsunakawa
disable-vacuum-truncat
Hi Hari-san,
I've reviewed all the files. The patch would be OK when the following have
been fixed, except for the complexity of fe-connect.c (which probably cannot be
improved.)
Unfortunately, I'll be absent next week. The earliest date I can do the test
will be April 8 or 9. I hope someon
Nagaura-san,
The socket_timeout patch needs the following fixes. Now that others have
already tested these patches successfully, they appear committable to me.
(1)
+ else
+ goto iiv_error;
...
+
+iiv_error:
+ conn->status = CONNECTION_BAD;
+ prin
Nagaura-san,
The client-side tcp_user_timeout patch looks good.
The server-side tcp_user_timeout patch needs fixing the following:
(1)
+ GUC_UNIT_MS | GUC_NOT_IN_SAMPLE
+ 12000, 0, INT_MAX,
GUC_NOT_IN_SAMPLE should be removed because the parameter appears in
From: Tsunakawa, Takayuki [mailto:tsunakawa.ta...@jp.fujitsu.com]
> From: Kyotaro HORIGUCHI [mailto:horiguchi.kyot...@lab.ntt.co.jp]
> > +if (setsockopt(conn->sock, IPPROTO_TCP, TCP_USER_TIMEOUT,
> > + (char *) &timeout, sizeof(timeout)) < 0 &a
From: Kyotaro HORIGUCHI [mailto:horiguchi.kyot...@lab.ntt.co.jp]
> +if (setsockopt(conn->sock, IPPROTO_TCP, TCP_USER_TIMEOUT,
> + (char *) &timeout, sizeof(timeout)) < 0 && errno !=
> ENOPROTOOPT)
> +{
> + char sebuf[256];
> +
> +appendPQExpBuffer(&conn->er
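A self-contained sketch of the quoted setsockopt() call follows; the helper name is illustrative. As in the quoted patch, ENOPROTOOPT is tolerated so that a kernel without TCP_USER_TIMEOUT support (Linux >= 2.6.37 defines it) does not fail the connection.

```c
#include <errno.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

#ifndef TCP_USER_TIMEOUT
#define TCP_USER_TIMEOUT 18     /* Linux >= 2.6.37 */
#endif

/* Set the TCP user timeout (in milliseconds) on an existing socket.
 * Returns 0 on success or when the kernel lacks the option
 * (ENOPROTOOPT), -1 on any other error. */
static int
set_tcp_user_timeout(int sock, unsigned int timeout_ms)
{
    if (setsockopt(sock, IPPROTO_TCP, TCP_USER_TIMEOUT,
                   (char *) &timeout_ms, sizeof(timeout_ms)) < 0 &&
        errno != ENOPROTOOPT)
        return -1;
    return 0;
}
```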
From: Robert Haas [mailto:robertmh...@gmail.com]
> You're both right and I'm wrong.
>
> However, I think it would be better to stick with the term 'truncate'
> which is widely-used already, rather than introducing a new term.
Yeah, I have the same feeling. OTOH, as I referred in this thread, shr
I've looked through 0004-0007. I've only found the following:
(5) 0005
With this read-only option type, application can connect to
connecting to a read-only server in the list of hosts, in case
if there is any read-only servers available, the connection
attempt fails.
"connecting to" can be remo
From: Haribabu Kommi [mailto:kommi.harib...@gmail.com]
> while going through the old patch where the GUC_REPORT is implemented, Tom
> has commented the logic of sending the signal to all backends to process
> the hot standby exit with SIGHUP, if we add the logic of updating the GUC
> variable value
From: Masahiko Sawada [mailto:sawada.m...@gmail.com]
> On Wed, Mar 27, 2019 at 2:30 AM Robert Haas wrote:
> >
> > On Tue, Mar 26, 2019 at 11:23 AM Masahiko Sawada
> wrote:
> > > > I don't see a patch with the naming updated, here or there, and I'm
> > > > going to be really unhappy if we end up w
From: David Rowley [mailto:david.row...@2ndquadrant.com]
> Here a benchmark doing that using pgbench's script weight feature.
Wow, I didn't know that pgbench has evolved to have such a convenient feature.
Thanks for telling me how to utilize it in testing. PostgreSQL is cool!
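For reference, the feature being praised is pgbench's per-script weight syntax (available since PostgreSQL 9.6): an `@weight` suffix on a `-f` or `-b` script sets the relative probability of choosing it each transaction. The script and database names below are illustrative.

```shell
# Mix two custom scripts 9:1 (the @N suffix is the relative weight);
# read_only.sql, read_write.sql, and mydb are illustrative names.
pgbench -n -T 60 -f read_only.sql@9 -f read_write.sql@1 mydb
```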
Regards
Takayuki
From: Amit Langote [mailto:langote_amit...@lab.ntt.co.jp]
> My understanding of what David wrote is that the slowness of bloated hash
> table is hard to notice, because planning itself is pretty slow. With the
> "speeding up planning with partitions" patch, planning becomes quite fast,
> so the bl
From: David Rowley [mailto:david.row...@2ndquadrant.com]
> On Mon, 25 Mar 2019 at 23:44, Peter Eisentraut
> wrote:
> > Perhaps "speeding up planning with partitions" needs to be accepted first?
>
> Yeah, I think it likely will require that patch to be able to measure
> the gains from this patch.
From: David Steele [mailto:da...@pgmasters.net]
> This patch appears to have been stalled for a while.
>
> Takayuki -- the ball appears to be in your court. Perhaps it would be
> helpful to summarize what you think are next steps?
disable_index_cleanup is handled by Sawada-san in another thread.
From: Robert Haas [mailto:robertmh...@gmail.com]
> I really dislike having both target_sesion_attrs and
> target_server_type. It doesn't solve any actual problem. master,
> slave, prefer-save, or whatever you like could be put in
> target_session_attrs just as easily, and then we wouldn't end up
From: Kyotaro HORIGUCHI [mailto:horiguchi.kyot...@lab.ntt.co.jp]
> > > needs.1: stable accross different databases,
> >
> > Does this mean different database clusters, not different databases in
> a single database cluster?
>
> Does this mean you want different QueryID for the same-looking
> query
From: legrand legrand [mailto:legrand_legr...@hotmail.com]
> There are many projects that use alternate QueryId
> distinct from the famous pg_stat_statements jumbling algorithm.
I'd like to welcome the standard QueryID that DBAs and extension developers can
depend on. Are you surveying the needs
From: Tsunakawa, Takayuki [mailto:tsunakawa.ta...@jp.fujitsu.com]
> Fixed.
Rebased on HEAD.
Regards
Takayuki Tsunakawa
0001-reorder-LOCALLOCK-structure-members-to-compact-the-s.patch
Description: 0001-reorder-LOCALLOCK-structure-members-to-compact-the-s.patch
0002-speed-up-LOCALL
Hi Peter, Imai-san,
From: Peter Eisentraut [mailto:peter.eisentr...@2ndquadrant.com]
> Your changes in LOCALLOCK still refer to PGPROC, from your first version
> of the patch.
>
> I think the reordering of struct members could be done as a separate
> preliminary patch.
>
> Some more documentatio
From: Robert Haas [mailto:robertmh...@gmail.com]
> I don't think so. I think it's just a weirdly-design parameter
> without a really compelling use case. Enforcing limits on the value
> of the parameter doesn't fix that. Most of the reviewers who have
> opined so far have been somewhere between
From: Haribabu Kommi [mailto:kommi.harib...@gmail.com]
> Target_session_attrs    Target_server_type
>
> read-write               prefer-slave, slave
> prefer-read              master, slave
> read-only                master, prefer-slave
>
> I know that some of the cas
From: mikalaike...@ibagroup.eu [mailto:mikalaike...@ibagroup.eu]
> Based on your comment it seems to me that 'socket_timeout' should be
> connected with statement_timeout. I mean that end-user should wait
> statement_timeout + 'socket_timeout' for returning control. It looks much
> more safer for m
From: mikalaike...@ibagroup.eu [mailto:mikalaike...@ibagroup.eu]
> In case of failure PQcancel() terminates in 'socket_timeout'. So, control
> to the end-user in such a failure situation will be returned in 2 *
> 'socket_timeout' interval. It is much better than hanging forever in some
> specific c
From: mikalaike...@ibagroup.eu [mailto:mikalaike...@ibagroup.eu]
> Do you mind me asking you whether you have thought that solving your problem
> can lead to the problem in the other user applications?
> Let's imagine a possible problem:
> 1. end-user sets 'socket_timeout' only for current session
From: Robert Haas [mailto:robertmh...@gmail.com]
> Now you might say - what if the server is stopped not because of
> SIGSTOP but because of some other reason, like it's waiting for a
> lock? Well, in that case, the database server is still functioning,
> and you will not want the connection to be
From: Robert Haas [mailto:robertmh...@gmail.com]
> One other thing -- I looked a bit into the pgsql-jdbc implementation
> of a similarly-named option, and it does seem to match what you are
> proposing here. I wonder what user experiences with that option have
> been like.
One case I faintly reca
From: mikalaike...@ibagroup.eu [mailto:mikalaike...@ibagroup.eu]
> > For example, OS issues such as abnormally (buggy) slow process scheduling
> or paging/swapping that prevent control from being passed to postgres. Or,
> abnormally long waits on lwlocks in postgres. statement_timeout doesn't
> t
From: Fabien COELHO [mailto:coe...@cri.ensmp.fr]
> I think that the typical use-case of \c is to connect to another database
> on the same host, at least that what I do pretty often. The natural
> expectation is that the same "other" connection parameters are used,
> otherwise it does not make much
From: Kyotaro HORIGUCHI [mailto:horiguchi.kyot...@lab.ntt.co.jp]
> If so, in turn the socket_timeout doesn't work as expected? I
> understand that what is proposed here is to disconnect after that
> time of waiting for *the first tuple* of a query, regardless of
> it is a long query or network fail
From: Robert Haas [mailto:robertmh...@gmail.com]
> But that's not what it will do. As long as the server continues to
> dribble out protocol messages from time to time, the timeout will
> never fire no matter how much time passes. I saw a system once where
> every 8kB read took many seconds to co
From: Robert Haas [mailto:robertmh...@gmail.com]
> The first thing I notice about the socket_timeout patch is that the
> documentation is definitely wrong:
Agreed. I suppose the description should be clearer about:
* the purpose and what situation this timeout will help: not for canceling a
lon
From: Fabien COELHO [mailto:coe...@cri.ensmp.fr]
> >> If the user reconnects, eg "\c db", the setting is lost. The
> >> re-connection handling should probably take care of this parameter, and
> maybe others.
> > I think your opinion is reasonable, but it seems not in this thread.
>
> HI think that
From: Ideriha, Takeshi/出利葉 健
> [Size=800, iter=1,000,000]
> Master  | 15.763
> Patched | 16.262 (+3%)
>
> [Size=32768, iter=1,000,000]
> Master  | 61.3076
> Patched | 62.9566 (+2%)
What's the unit, second or millisecond?
Why does the number of digits to the right of the decimal point differ?
Is the measurement c
From: Alvaro Herrera [mailto:alvhe...@2ndquadrant.com]
> Robert used the phrase "attractive nuisance", which maybe sounds like a
> good thing to have to a non native speaker, but it actually isn't -- he
> was saying we should avoid a GUC at all, and I can see the reason for
> that. I think we shou
From: Michael Paquier [mailto:mich...@paquier.xyz]
> So we could you consider adding an option for the VACUUM command as well
> as vacuumdb? The interactions with the current patch is that you need to
> define the behavior at the beginning of vacuum for a given heap, instead
> of reading the param
From: Ideriha, Takeshi [mailto:ideriha.take...@jp.fujitsu.com]
> I measured the memory context accounting overhead using Tomas's tool
> palloc_bench,
> which he made it a while ago in the similar discussion.
> https://www.postgresql.org/message-id/53f7e83c.3020...@fuzzy.cz
>
> This tool is a littl
From: Haribabu Kommi [mailto:kommi.harib...@gmail.com]
> Attached are the updated patches.
Thanks, all look fixed.
> The target_server_type option yet to be implemented.
Please let me review once more and proceed to testing when the above is added,
to make sure the final code looks good. I'd
From: Robert Haas [mailto:robertmh...@gmail.com]
> I don't think that a VACUUM option would be out of place, but a GUC
> sounds like an attractive nuisance to me. It will encourage people to
> just flip it blindly instead of considering the particular cases where
> they need that behavior, and I t
From: Michael Paquier [mailto:mich...@paquier.xyz]
> This makes the test page-size sensitive. While we don't ensure that tests
> can be run with different page sizes, we should make a maximum effort to
> keep the tests compatible if that's easy enough. In this case you could
> just use > 0 as bas
From: Mike Palmiotto [mailto:mike.palmio...@crunchydata.com]
> Attached is a patch which attempts to solve a few problems:
>
> 1) Filtering out partitions flexibly based on the results of an external
> function call (supplied by an extension).
> 2) Filtering out partitions from pg_inherits based o
From: Masahiko Sawada [mailto:sawada.m...@gmail.com]
> This test expects that the inserted tuple is always reclaimed by
> subsequent vacuum, but it's not always true if there are concurrent
> transactions. So size of the reloptions_test table will not be 0 if
> the tuple is not vacuumed. In my envi
From: Michael Paquier [mailto:mich...@paquier.xyz]
On Mon, Feb 25, 2019 at 03:59:21PM +0900, Masahiko Sawada wrote:
> > Also, I think that this test may fail in case where concurrent
> > transactions are running. So maybe should not run it in parallel to
> > other tests.
>
> That's why autovacuum
From: Kyotaro HORIGUCHI [mailto:horiguchi.kyot...@lab.ntt.co.jp]
> - If you find the process too much "bloat"s and you (intuirively)
> suspect the cause is system cache, set it to certain shorter
> value, say 1 minutes, and set the catalog_cache_memory_target
> to allowable amount of memory f
From: Robert Haas [mailto:robertmh...@gmail.com]
> I don't understand the idea that we would add something to PostgreSQL
> without proving that it has value. Sure, other systems have somewhat
> similar systems, and they have knobs to tune them. But, first, we
> don't know that those other systems
From: Michael Paquier [mailto:mich...@paquier.xyz]
> I don't think that we want to use a too generic name and it seems more natural
> to reflect the context where it is used in the parameter name.
> If we were to shrink with a similar option for other contexts, we would
> most likely use a differen
Hi Hari-san,
I've reviewed all files. I think I'll proceed to testing when I've reviewed
the revised patch and the patch for target_server_type.
(1) patch 0001
CONNECTION_CHECK_WRITABLE, /* Check if we could make a writable
From: Haribabu Kommi [mailto:kommi.harib...@gmail.com]
Here I attached first set of patches that implemented the prefer-read option
> after reporting the transaction_read_only GUC to client. Along the lines
> of adding prefer-read option patch,
Great, thank you! I'll review and test it.
> 3. Ex
Hi Higuchi-san,
(1)
What made you think this problem rarely occurs in PG 10 or later? Looking at
the following code, this seems to happen in PG 10+ too.
if (do_wait)
{
write_eventlog(EVENTLOG_INFORMATION_TYPE, _("Waiting for server
startup...\n"));
if (wait_for_postmas
From: Julien Rouhaud [mailto:rjuju...@gmail.com]
> FWIW, I prefer shrink over truncate, though I'd rather go with
> vacuum_shink_enabled as suggested previously.
Thanks. I'd like to leave a committer to choose the name. FWIW, I chose
shrink_enabled rather than vacuum_shrink_enabled because this
From: Jamison, Kirk [mailto:k.jami...@jp.fujitsu.com]
> socket_timeout (integer)
libpq documentation does not write the data type on the parameter name line.
> Terminate any connection that has been inactive for more than the specified
> number of seconds to prevent client from infinite waiting
From: mikalaike...@ibagroup.eu [mailto:mikalaike...@ibagroup.eu]
> I am not very familiar with the PostgreSQL source code. Nevertheless, the
> main idea of this parameter is clear for me - closing a connection when
> the PostgreSQL server does not response due to any reason. However, I have
> not
From: Jamison, Kirk/ジャミソン カーク
> Although I did review and followed the suggested way in previous email
> way back (which uses root user) and it worked as intended, I'd also like
> to hear feedback also from Fabien whether it's alright without the test
> script, or if there's another way we can test
From: Nagaura, Ryohei [mailto:nagaura.ryo...@jp.fujitsu.com]
> > Maybe. Could you suggest good description?
> Clients wait until the socket become readable when they try to get results
> of their query.
> If the socket state get readable, clients read results.
> (See src/interfaces/libpq/fe-exec.c
From: Nagaura, Ryohei [mailto:nagaura.ryo...@jp.fujitsu.com]
> BTW, tcp_user_timeout parameter of servers and clients have same name in
> my current implementation.
> I think it would be better different name rather than same name.
> I'll name them as the following a) or b):
> a) server_tcp_u
From: Robert Haas [mailto:robertmh...@gmail.com]
> That might be enough to justify having the parameter. But I'm not
> quite sure how high the value would need to be set to actually get the
> benefit in a case like that, or what happens if you set it to a value
> that's not quite high enough.
From: Ideriha, Takeshi/出利葉 健
> I checked it with perf record -avg and perf report.
> The following shows top 20 symbols during benchmark including kernel space.
> The main difference between master (unpatched) and patched one seems that
> patched one consumes cpu catcache-evict-and-refill functions
From: Jamison, Kirk [mailto:k.jami...@jp.fujitsu.com]
> 1) tcp_user_timeout parameter
> I think this can be "committed" separately when it's finalized.
Do you mean you've reviewed and tested the patch by simulating a communication
failure in the way Nagaura-san suggested?
> 2) tcp_socket_timeou
From: Tom Lane [mailto:t...@sss.pgh.pa.us]
> Hm. Putting a list header for a purely-local data structure into shared
> memory seems quite ugly. Isn't there a better place to keep that?
Agreed. I put it in the global variable.
> Do we really want a dlist here at all? I'm concerned that bloati
From: Tomas Vondra [mailto:tomas.von...@2ndquadrant.com]
> On 2/12/19 7:33 AM, Tsunakawa, Takayuki wrote:
> > Imai-san confirmed performance improvement with this patch:
> >
> > https://commitfest.postgresql.org/22/1993/
> >
>
> Can you quantify the effects? Tha
From: Ideriha, Takeshi [mailto:ideriha.take...@jp.fujitsu.com]
> number of tables |     100     |    1000     |      1
> -----------------+-------------+-------------+------------
> TPS (master)     |    10966    |    10654    |    9099
> TPS (patch)      | 11137 (+1%) | 10710 (+0%) | 772 (-91%)
>
> It seems that before ca
From: Tomas Vondra [mailto:tomas.von...@2ndquadrant.com]
> 0.7% may easily be just a noise, possibly due to differences in layout
> of the binary. How many runs? What was the variability of the results
> between runs? What hardware was this tested on?
3 runs, with the variability of about +-2%. L