Hi,
I fixed and submitted this patch in CF4.
My previous patch had a significant bug: a miscalculation of the offset in
posix_fadvise() :-( However, it worked without problems in pgbench,
because pgbench transactions are always random access...
I also tested my patch with the DBT-2 benchmark. Resul
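The offset bug class mentioned above is easy to hit: the block number must be widened to off_t before multiplying by the block size, or the product overflows past the 2GB boundary. A hypothetical sketch of the DONTNEED call under an assumed 8KB block size (the names are illustrative, not from the actual patch):

```c
#define _XOPEN_SOURCE 600
#include <assert.h>
#include <fcntl.h>
#include <sys/types.h>

#define BLCKSZ 8192            /* assumed 8KB block size */

/* Widen BEFORE multiplying: blocknum * BLCKSZ computed in 32 bits would
 * overflow for any block past the 2GB boundary. */
static off_t block_offset(unsigned int blocknum)
{
    return (off_t) blocknum * BLCKSZ;
}

/* Advise the kernel that one block of the file is no longer needed. */
static int drop_block_from_os_cache(int fd, unsigned int blocknum)
{
    return posix_fadvise(fd, block_offset(blocknum), BLCKSZ,
                         POSIX_FADV_DONTNEED);
}
```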
Hi,
I created a patch that can drop duplicate buffers in the OS cache using a
usage_count algorithm. I have been developing this patch since last summer. This
feature seems to be a hot topic, so I am submitting it earlier than I had scheduled.
When usage_count is high in shared_buffers, they are hard to drop fro
(2014/01/16 21:38), Aidan Van Dyk wrote:
Can we just get the backend that dirties the page to posix_fadvise DONTNEED?
No, it can only remove clean pages from the OS file cache, because dropping a
dirtied page would cause a physical disk write. However, it is an experimental
patch, so it might be changed by futur
(2014/01/16 3:34), Robert Haas wrote:
On Wed, Jan 15, 2014 at 1:53 AM, KONDO Mitsumasa
wrote:
I created a patch that can drop duplicate buffers in the OS cache using a
usage_count algorithm. I have been developing this patch since last summer. This
feature seems to be a hot topic, so I am submitting it more
Rebased patch is attached.
pg_stat_statements in PG9.4dev has already changed its table columns, so I hope
this patch will be committed in PG9.4dev.
Regards,
--
Mitsumasa KONDO
NTT Open Source Software Center
*** a/contrib/pg_stat_statements/pg_stat_statements--1.1--1.2.sql
--- b/contrib/pg_sta
(2014/01/22 9:34), Simon Riggs wrote:
AFAICS, all that has happened is that people have given their opinions
and we've got almost the same identical patch, with a rush-rush
comment to commit even though we've waited months. If you submit a
patch, then you need to listen to feedback and be clear a
(2014/01/22 22:26), Robert Haas wrote:
On Wed, Jan 22, 2014 at 3:32 AM, KONDO Mitsumasa
wrote:
OK, Kondo, please demonstrate benchmarks that show we have <1% impact
from this change. Otherwise we may need a config parameter to allow
the calculation.
OK, testing DBT-2 now. However, er
(2014/01/23 12:00), Andrew Dunstan wrote:
On 01/22/2014 08:28 PM, KONDO Mitsumasa wrote:
(2014/01/22 22:26), Robert Haas wrote:
On Wed, Jan 22, 2014 at 3:32 AM, KONDO Mitsumasa
wrote:
OK, Kondo, please demonstrate benchmarks that show we have <1% impact
from this change. Otherwise we
(2014/01/23 10:28), Peter Geoghegan wrote:
On Wed, Jan 22, 2014 at 5:28 PM, KONDO Mitsumasa
wrote:
Oh, thanks for informing me. I think the essential problem of my patch is a
bottleneck in the sqrt() function and other division calculations.
Well, that's a pretty easy theory to test. Just stop ca
(2014/01/23 23:18), Andrew Dunstan wrote:
What is more, if the square root calculation is affecting your benchmarks, I
suspect you are benchmarking the wrong thing.
I ran another test with two pgbench clients at the same time: one running
select-only queries and the other executing 'SELECT * pg_stat_s
(2014/01/26 17:43), Mitsumasa KONDO wrote:
> 2014-01-26 Simon Riggs <si...@2ndquadrant.com>
>
> On 21 January 2014 19:48, Simon Riggs <si...@2ndquadrant.com> wrote:
> > On 21 January 2014 12:54, KONDO Mitsumasa
> > <k
(2014/01/28 15:17), Rajeev rastogi wrote:
On 27th January, Mitsumasa KONDO wrote:
2014-01-26 Simon Riggs <si...@2ndquadrant.com>
On 21 January 2014 19:48, Simon Riggs <si...@2ndquadrant.com> wrote:
> On 21 January 2014 12:54,
f->flags & BM_FADVED) && !(buf->flags &
BM_JUST_DIRTIED))
(2014/01/29 8:20), Jeff Janes wrote:
On Wed, Jan 15, 2014 at 10:34 AM, Robert Haas <robertmh...@gmail.com> wrote:
On Wed, Jan 15, 2014 at 1:53 AM, KONDO Mitsumasa
<kondo.mitsum...@lab.ntt.co.jp>
(2014/01/29 15:51), Tom Lane wrote:
> KONDO Mitsumasa writes:
>> By the way, the latest pg_stat_statements might affect performance on Windows
>> systems,
>> because it uses the fflush() system call every time a new entry is created in
>> pg_stat_statements, and it calls many fread()
(2014/01/29 16:58), Peter Geoghegan wrote:
On Tue, Jan 28, 2014 at 10:51 PM, Tom Lane wrote:
KONDO Mitsumasa writes:
By the way, the latest pg_stat_statements might affect performance on Windows systems,
because it uses the fflush() system call every time a new entry is created in
pg_stat_statements, and it
(2014/01/29 17:31), Rajeev rastogi wrote:
On 28th January, Mitsumasa KONDO wrote:
By the way, the latest pg_stat_statements might affect performance on
Windows systems,
because it uses the fflush() system call every time a new entry is created in
pg_stat_statements, and it makes many fread() calls to warm the file cache. It
(2014/01/30 8:29), Tom Lane wrote:
> Andrew Dunstan writes:
>> I could live with just stddev. Not sure others would be so happy.
>
> FWIW, I'd vote for just stddev, on the basis that min/max appear to add
> more to the counter update time than stddev does; you've got
> this:
>
> + e-
Hi Rajeev,
(2014/01/29 17:31), Rajeev rastogi wrote:
No issue; you can share the test cases with me, and I will produce the performance report.
The attached patch is adapted to the latest pg_stat_statements. It includes min, max,
and stdev statistics. Could you run a compile test in your Windows environment?
I
Hi Fabien,
Thank you very much for your very detailed and useful comments!
I read your comments and agree with most of your advice :)
The attached patch is fixed per your comments. The changes are...
- Remove redundant long-option.
- We can use "--gaussian=NUM -S" or "--gaussian=NUM -N" options.
- Add sentence
Sorry, the previously attached patch had a small bug.
Please use the latest one.
> 134 - return min + (int64) (max - min + 1) * rand;
> 134 + return min + (int64)((max - min + 1) * rand);
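The quoted one-line fix moves the cast so that the double product is truncated first and the final addition to min happens in exact 64-bit integer arithmetic. A self-contained sketch (the function name is illustrative, not pgbench's):

```c
#include <assert.h>
#include <stdint.h>

/* rand_val is a uniform double in [0, 1). Casting the WHOLE product
 * (max - min + 1) * rand_val to int64 truncates it to an integer index
 * first, so the addition to min is exact integer arithmetic rather than
 * a double addition that is truncated afterwards. */
static int64_t getrand_uniform(int64_t min, int64_t max, double rand_val)
{
    return min + (int64_t)((max - min + 1) * rand_val);
}
```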
Regards,
--
Mitsumasa KONDO
NTT Open Source Software Center
*** a/contrib/pgbench/pgbench.c
--- b/contrib/pgbench/pgbench
(2014/02/16 7:38), Fabien COELHO wrote:
I have updated the patch (v7) based on Mitsumasa latest v6:
- some code simplifications & formula changes.
- I've added explicit looping probability computations in comments
to show the (low) looping probability of the iterative search.
- I
(2014/02/15 23:04), Andres Freund wrote:
Hi Simon,
On 2014-01-14 17:12:35 +, Simon Riggs wrote:
/*
- * MarkCurrentTransactionIdLoggedIfAny
+ * ReportTransactionInsertedWAL
*
- * Remember that the current xid - if it is assigned - now has been wal logged.
+ * Remember that the curre
(2014/02/17 21:44), Rajeev rastogi wrote:
It got compiled successfully on Windows.
Thank you for checking on Windows! It is very helpful for me.
May we add all three statistics? I think stdev alone is difficult for users to
interpret, but with min and max as well, we can picture each statem
Hi,
I found an interesting "for" and "while" loop in WaitForWALToBecomeAvailable() in
xlog.c. Can you explain this behavior?
for (;;)
{
~
} while (StandbyMode)
I confirmed this code compiles without problems in gcc :)
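One reason such a fragment can compile is C's statement grammar: unless the brace pair was opened by `do`, a trailing `while` begins a brand-new loop. A toy sketch of the parsing (an assumption about the source of confusion, not the actual xlog.c code):

```c
#include <assert.h>

/* Toy demonstration (NOT the actual xlog.c code): in C, a "while (cond)"
 * after a for loop's closing brace does not bind to the for; it starts an
 * independent loop. A trailing "while" belongs to a brace pair only when
 * that pair was opened by "do". */
static int demo(int standby_mode)
{
    int n = 0;

    for (;;)
    {
        n += 1;
        break;              /* leave the for loop after one pass */
    }
    while (standby_mode)    /* an independent loop, not part of the for */
    {
        n += 100;
        break;
    }
    return n;
}
```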
Regards,
--
Mitsumasa KONDO
NTT Open Source Software Center
--
Sent via pgsql-hack
(2014/02/28 2:39), Tom Lane wrote:
> Fabien COELHO writes:
>>> Yeah, but they don't make -P take an integer argument. It's that
>>> little frammish that makes this problem significant.
>
>> I do not see a strong case to make options with arguments case insensitive
>> as a general rule. If this i
(2014/02/27 20:19), Heikki Linnakangas wrote:
On 02/27/2014 12:38 PM, KONDO Mitsumasa wrote:
I found an interesting "for" and "while" loop in WaitForWALToBecomeAvailable() in
xlog.c. Can you explain this behavior?
for (;;)
{
~
} while (StandbyMode)
I confirmed this code
(2014/03/02 22:32), Fabien COELHO wrote:
Alvaro Herrera writes:
Seems that in the review so far, Fabien has focused mainly in the
mathematical properties of the new random number generation. That seems
perfectly fine, but no comment has been made about the chosen UI for the
feature.
Per the f
(2014/03/03 16:51), Fabien COELHO wrote:
>>> \setrandom foo 1 10 [uniform]
>>> \setrandom foo 1 :size gaussian 3.6
>>> \setrandom foo 1 100 exponential 7.2
>> It's a good design. I think it will lower the parsing overhead in pgbench,
>> because the comparison of strings will be
(2014/03/04 17:28), Fabien COELHO wrote:
OK. I'm not sure which idea is the best, so I will wait for comments from the community :)
Hmmm. Maybe you can do what Tom voted for; he is the committer :-)
Yeah, but he might change his mind through our discussion. So I will wait until
tomorrow, and if there are no comments,
Hi all,
I think this patch has been completely forgotten, which feels very unfortunate :(
Min, max, and stdev are basic statistics in general monitoring tools,
so I'd like to push it.
(2014/02/12 15:45), KONDO Mitsumasa wrote:
(2014/01/29 17:31), Rajeev rastogi wrote:
No Issue, you can share m
Hi,
(2014/03/04 17:42), KONDO Mitsumasa wrote:
> (2014/03/04 17:28), Fabien COELHO wrote:
>>> OK. I'm not sure which idea is the best. So I will wait for comments from the
>>> community :)
>> Hmmm. Maybe you can do what Tom voted for, he is the committer :-)
> Yeah, but he mi
(2014/03/07 16:02), KONDO Mitsumasa wrote:
And the other cases are classified as follows.
\setrandom var min max gaussian #hoge --> uniform
Oh, that's wrong... It should be...
\setrandom var min max gaussian #hoge --> ERROR
Regards,
--
Mitsumasa KONDO
NTT Open Source Software Cente
(2014/03/09 1:49), Fabien COELHO wrote:
Hello Mitsumasa-san,
New "\setrandom" interface is here.
\setrandom var min max [gaussian threshold | exponential threshold]
The attached patch realizes this interface, but it has a little bit of ugly coding in
executeStatement() and process_commands()...
I
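For illustration, a custom pgbench script using the quoted interface could look like this (a hypothetical sketch; the table and variable names follow pgbench's stock TPC-B-like script):

```
\set naccounts 100000 * :scale
\setrandom aid 1 :naccounts gaussian 3.6
SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
```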
(2013/08/05 19:28), Andres Freund wrote:
On 2013-08-05 18:40:10 +0900, KONDO Mitsumasa wrote:
(2013/08/05 17:14), Amit Langote wrote:
So, within the limits of max_files_per_process, the routines of file.c
should not become a bottleneck?
It may not become a bottleneck.
One FD consumes 160 bytes in
(2013/08/05 21:23), Tom Lane wrote:
> Andres Freund writes:
>> ... Also, there are global
>> limits to the amount of filehandles that can simultaneously opened on a
>> system.
>
> Yeah. Raising max_files_per_process puts you at serious risk that
> everything else on the box will start falling ov
(2013/08/06 19:33), Andres Freund wrote:
On 2013-08-06 19:19:41 +0900, KONDO Mitsumasa wrote:
(2013/08/05 21:23), Tom Lane wrote:
Andres Freund writes:
... Also, there are global
limits to the amount of filehandles that can simultaneously opened on a
system.
Yeah. Raising
(2013/08/30 11:55), Fujii Masao wrote:
* Benchmark
pgbench -c 32 -j 4 -T 900 -M prepared
scaling factor: 100
checkpoint_segments = 1024
checkpoint_timeout = 5min
(every checkpoint during benchmark were triggered by checkpoint_timeout)
Did you execute a manual checkpoint before star
Hi,
I added a checkpoint option to pgbench.
pgbench is a simple and useful benchmark for every user. However, benchmark
results change greatly depending on circumstances such as whether a checkpoint is
executing, the number of dirty buffers in shared_buffers, and so on. For such
problems, it is customary to carry out a ch
(2013/09/05 0:04), Andres Freund wrote:
> I'd vote for adding zeroing *after* the fallocate() first.
+1, with the FALLOC_FL_KEEP_SIZE flag.
At least, fallocate() with the FALLOC_FL_KEEP_SIZE flag is faster than nothing in
my in-development sorted checkpoint. I applied it to relation files, so I don't
know abo
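The flag's effect can be sketched as follows (a minimal Linux-only illustration assuming glibc's fallocate(); not code from the patch): blocks are reserved on disk, but the file's apparent size is unchanged, so readers never see a tail of zeros while later writes into the range need no block allocation.

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <unistd.h>

/* Reserve `len` bytes of disk blocks for fd without changing its apparent
 * size. Returns the resulting st_size, or -1 on error (for example
 * EOPNOTSUPP on a filesystem without fallocate support). */
static long long prealloc_keep_size(int fd, long long len)
{
    struct stat st;

    if (fallocate(fd, FALLOC_FL_KEEP_SIZE, 0, len) != 0)
        return -1;
    if (fstat(fd, &st) != 0)
        return -1;
    return (long long) st.st_size;   /* stays 0 for a fresh empty file */
}
```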
Sorry for my delayed reply.
Since I was on vacation last week, I replied from gmail.
However, the post to pgsql-hackers was stalled :-(
(2013/09/21 6:05), Kevin Grittner wrote:
> You had accidentally added to the CF In Progress.
Oh, I had completely mistook this CF schedule :-)
Maybe, Horiguchi-san
Sorry for my delayed reply.
Since I was on vacation last week, I replied from gmail.
However, the post to pgsql-hackers was stalled :-(
(2013/09/21 7:54), Fabien COELHO wrote:
However this pattern induces stronger cache effects which are maybe not too
realistic,
because neighboring keys in the mi
(2013/09/27 5:29), Peter Eisentraut wrote:
> This patch no longer applies.
I will try to recreate this patch in the next commit fest.
If you have a nice idea, please send it to me!
Regards,
--
Mitsumasa KONDO
NTT Open Source Software Center
Hi Fujii-san,
(2013/09/30 12:49), Fujii Masao wrote:
> On second thought, the patch could compress WAL very much because I used
pgbench.
I will do the same measurement using another benchmark.
If you like, I can test this patch with the DBT-2 benchmark at the end of this week.
I will use the follow
(2013/09/30 13:55), Amit Kapila wrote:
On Mon, Sep 30, 2013 at 10:04 AM, Fujii Masao wrote:
Yep, please! It's really helpful!
OK! I will test with a single instance and a synchronous replication configuration.
By the way, you posted a patch for a sync_file_range() WAL writing method 3
years ago. I
Hi,
I want to submit a new project to pgFoundry.
I submitted a new project, a WAL archive copy tool using a direct I/O method, on
the pgFoundry homepage 2 weeks ago, but it has not been approved or responded to
at all :-(
Who is the pgFoundry administrator or a board member now? I would like to send
(2013/10/02 17:37), KONDO Mitsumasa wrote:
> I want to submit a new project to pgFoundry.
Our new project was approved yesterday!
Thanks very much to the pgFoundry crew.
Regards,
--
Mitsumasa KONDO
NTT Open Source Software Center
(2013/10/08 17:33), Haribabu kommi wrote:
The checkpoint_timeout and checkpoint_segments are increased to make sure no
checkpoint happens during the test run.
Your settings make checkpoints occur easily even with checkpoint_segments = 256. I
don't know the number of disks in your test server; in my test serv
Hi,
I tested the dbt-2 benchmark with a single instance and with synchronous replication.
Unfortunately, my benchmark results did not show many differences...
* Test server
Server: HP Proliant DL360 G7
CPU: Xeon E5640 2.66GHz (1P/4C)
Memory: 18GB (PC3-10600R-9)
Disk: 146GB (15k) * 4, RAID 1+0
(2013/10/08 20:13), Haribabu kommi wrote:
I chose sync_commit=off mode because it generates more tps and thus
increases the volume of WAL.
I had not thought of that. Sorry...
I will test with sync_commit=on mode and provide the test results.
OK. Thanks!
--
Mitsumasa KONDO
NTT Open Sourc
(2013/10/13 0:14), Amit Kapila wrote:
On Fri, Oct 11, 2013 at 10:36 PM, Andres Freund wrote:
But maybe pglz is just not a good fit for this; it really
isn't a very good algorithm in this day and age.
+1. We need a compression algorithm that is faster than pglz, which is more like
a general-purpose compress
Sorry for my reply late...
(2013/10/08 23:26), Bruce Momjian wrote:
> First, I want to apologize for not completing the release notes earlier
> so that others could review them. I started working on the release
> notes on Friday, but my unfamiliarity with the process and fear of
> making a mista
(2013/10/15 13:33), Amit Kapila wrote:
Snappy is good mainly for un-compressible data, see the link below:
http://www.postgresql.org/message-id/CAAZKuFZCOCHsswQM60ioDO_hk12tA7OG3YcJA8v=4yebmoa...@mail.gmail.com
That result was obtained on an ARM architecture, which is not a general-purpose CPU.
Please see the detail
(2013/10/15 22:01), k...@rice.edu wrote:
Google's lz4 is also a very nice algorithm with 33% better compression
performance than snappy and 2X the decompression performance in some
benchmarks also with a bsd license:
https://code.google.com/p/lz4/
If we judge only by performance, we will select lz
I will submit a patch adding min and max statement execution times to
pg_stat_statements in the next CF.
pg_stat_statements has execution time, but it is the average execution time and
does not provide very detailed information. So I added min and max execution
time columns to pg_stat_statements. Usage is al
Hi,
I submitted a patch improving pg_stat_statements usability in CF3.
In pg_stat_statements, I think the buffer hit ratio is a very important value.
However, it is difficult to calculate and needs complicated SQL. This patch
makes its usage and documentation simpler.
> -bench=# SELECT query, call
(2013/10/02 18:57), Michael Paquier wrote:
wrote:
Who is the pgFoundry administrator or a board member now? I would like to send them
e-mail. At least, there is no information or support page on the pgFoundry
homepage.
Why don't you consider github as a potential solution?
It is because github
(2013/10/19 14:58), Amit Kapila wrote:
> On Tue, Oct 15, 2013 at 11:41 AM, KONDO Mitsumasa
> wrote:
> I think in general also snappy is mostly preferred for its low CPU
> usage, not for compression, but overall my vote is also for snappy.
I think low CPU usage is the best import
(2013/10/18 22:21), Andrew Dunstan wrote:
> If we're going to extend pg_stat_statements, even more than min and max
> I'd like to see the standard deviation in execution time.
OK, I will! I am working on some other patches, so please wait a bit more!
Regards,
--
Mitsumasa KONDO
NTT Open Source Software Center
(2013/10/22 12:52), Fujii Masao wrote:
On Tue, Oct 22, 2013 at 12:47 PM, Amit Kapila wrote:
On Mon, Oct 21, 2013 at 4:40 PM, KONDO Mitsumasa
wrote:
(2013/10/19 14:58), Amit Kapila wrote:
On Tue, Oct 15, 2013 at 11:41 AM, KONDO Mitsumasa
wrote:
In general, my thinking is that we should
Hi All,
(2013/10/22 22:26), Stephen Frost wrote:
* Dimitri Fontaine (dimi...@2ndquadrant.fr) wrote:
In our case, what I keep experiencing with tuning queries is that we
have like 99% of them running under acceptable threshold and 1% of them
taking more and more time.
This is usually described
Hi,
I created a patch that improves the checkpoint I/O scheduler for stable
transaction response times.
* Problem with the checkpoint I/O schedule under heavy transaction load
Under a heavy transaction load on the database, I think the PostgreSQL
checkpoint scheduler has two problems, at the start and end of a checkpoint. One
(2013/06/12 23:07), Robert Haas wrote:
On Mon, Jun 10, 2013 at 3:48 PM, Simon Riggs wrote:
On 10 June 2013 11:51, KONDO Mitsumasa wrote:
I created a patch that improves the checkpoint I/O scheduler for stable
transaction responses.
Looks like good results, with good measurements. Should
Thank you for the comments, and thanks to my patch reviewer!
(2013/06/16 23:27), Heikki Linnakangas wrote:
On 10.06.2013 13:51, KONDO Mitsumasa wrote:
I created a patch that improves the checkpoint I/O scheduler for
stable transaction responses.
* Problem with the checkpoint I/O schedule under heavy
(2013/06/17 5:48), Andres Freund wrote:> On 2013-06-16 17:27:56 +0300, Heikki
Linnakangas wrote:
>> If we don't mind scanning the buffer cache several times, we don't
>> necessarily even need to sort the writes for that. Just scan the buffer
>> cache for all buffers belonging to relation A, then
Hi Fabien,
I am sending you my review results and a refactoring patch. I think your patch
has good functionality that many people will surely want to use! I hope my
review comments will be helpful for your patch.
* 1. Complete words and variables in source code and sgml document.
It is readable for user an
Hi,
I took results for my separate patches and the original PG.
* Result of DBT-2
             |  TPS    | 90%tile   | Average | Maximum
-------------+---------+-----------+---------+-----------
original_0.7 | 3474.62 | 18.348328 | 5.739   | 36.977713
original_1.0 | 3469.03 | 18.637865 | 5.842   | 41.754421
f
Thank you for the comments!
>> On Tue, Jun 25, 2013 at 1:15 PM, Heikki Linnakangas
>>> Hmm, so the write patch doesn't do much, but the fsync patch makes the response
>>> times somewhat smoother. I'd suggest that we drop the write patch for now, and
>>> focus on the fsyncs.
The write patch is effective for TPS!
Hello Fabien,
Thank you for your fast work and reply. I will try to test your new patch by
next week.
(2013/06/26 20:16), Fabien COELHO wrote:
Here is a v4 that takes into account most of your points: The report is
performed
for all threads by thread 0, however --progress is not supported unde
(2013/06/26 20:15), Heikki Linnakangas wrote:
On 26.06.2013 11:37, KONDO Mitsumasa wrote:
On Tue, Jun 25, 2013 at 1:15 PM, Heikki Linnakangas
Hmm, so the write patch doesn't do much, but the fsync patch makes
the response
times somewhat smoother. I'd suggest that we drop the write
Dear Fabien
(2013/06/27 14:39), Fabien COELHO wrote:
If I show a latency at full load, that would be "nclients/tps", not "1/tps".
However, I'm hoping to pass the throttling patch to pgbench, in which case the
latency to show is a little bit different because the "nclients/tps" would
include slee
(2013/06/28 0:08), Robert Haas wrote:
On Tue, Jun 25, 2013 at 4:28 PM, Heikki Linnakangas
wrote:
I'm pretty sure Greg Smith tried it the fixed-sleep thing before and
it didn't work that well. I have also tried it and the resulting
behavior was unimpressive. It makes checkpoints take a long tim
(2013/06/28 3:17), Fabien COELHO wrote:
Attached is patch version 5.
It includes this solution for fork emulation, one report per thread instead of a
global report. Some code duplication for that.
It's good coding. I tested the configure option with and without
--disable-thread-safety. My test results we
Hi, Fabien
Thanks for your fast response and fix! I have set your patch to Ready for Committer now.
(2013/07/01 19:49), Fabien COELHO wrote:
I have a few small comments. I don't think 'lat' is a common abbreviation for
'latency', but I don't know a better one. If you have a good
abbreviation, please
Hi,
I tested with segsize=0.25GB, which is the maximum partitioned table file size
(the default setting is 1GB), via the configure option (./configure
--with-segsize=0.25), because I thought a small segsize would be good for the
fsync phase and the OS's background disk writes during checkpoint. I got a
significant improveme
(2013/07/03 22:31), Robert Haas wrote:
On Wed, Jul 3, 2013 at 4:18 AM, KONDO Mitsumasa
wrote:
I tested with segsize=0.25GB, which is the maximum partitioned table file size
(the default setting is 1GB), via the configure option (./configure --with-segsize=0.25),
because I thought that a small segsize is
(2013/07/05 0:35), Joshua D. Drake wrote:
On 07/04/2013 06:05 AM, Andres Freund wrote:
Presumably the smaller segsize is better because we don't
completely stall the system by submitting up to 1GB of io at once. So,
if we were to do it in 32MB chunks and then do a final fsync()
afterwards we mig
I created an fsync v2 patch. There's not much time, so I will focus on the fsync
patch in this commitfest, as advised by Heikki. And I'm sorry for diverging from
the main discussion in this commitfest... Of course, I will continue to try
other improvements.
* Changes
- Add ckpt_flag in
Hi,
I created fsync v3, v4, and v5 patches and tested them.
* Changes
- Consider the total checkpoint schedule in the fsync phase (v3, v4, v5)
- Consider the total checkpoint schedule in the write phase (v4 only)
- Modify some implementations from v3 (v5 only)
I use a linear combination method
(2013/07/19 0:41), Greg Smith wrote:
On 7/18/13 11:04 AM, Robert Haas wrote:
On a system where fsync is sometimes very very slow, that
might result in the checkpoint overrunning its time budget - but SO
WHAT?
Checkpoints provide a boundary on recovery time. That is their only purpose.
You can
(2013/07/19 22:48), Greg Smith wrote:
On 7/19/13 3:53 AM, KONDO Mitsumasa wrote:
Recently, users who think system availability is important use
synchronous replication clusters.
If your argument for why it's OK to ignore bounding crash recovery on the master
is that it's possible t
(2013/07/21 4:37), Heikki Linnakangas wrote:
Mitsumasa-san, since you have the test rig ready, could you try the attached
patch please? It scans the buffer cache several times, writing out all the dirty
buffers for segment A first, then fsyncs it, then all dirty buffers for segment
B, and so on.
Hi,
By executing Heikki's patch, I understand why my patch is faster than the original.
His patch executes write() and fsync() for each relation file in the checkpoint's
write phase. Therefore, I expected that the write phase would be slow and the
fsync phase would be fast, because the disk writes had executed in
(2013/07/24 1:13), Greg Smith wrote:
On 7/23/13 10:56 AM, Robert Haas wrote:
On Mon, Jul 22, 2013 at 11:48 PM, Greg Smith wrote:
We know that a 1GB relation segment can take a really long time to write
out. That could include up to 128 changed 8K pages, and we allow all of
them to get dirty b
Hi Satoshi,
I was wondering about this problem. Please tell us about your system environment:
PostgreSQL version, OS, RAID card, and file system.
Best regards,
--
Mitsumasa KONDO
NTT Open Source Software Center
Hi Amit,
(2013/08/05 15:23), Amit Langote wrote:
May the routines in fd.c become a bottleneck with a large number of
concurrent connections to the above database, say something like "pgbench
-j 8 -c 128"? Is there any other place I should be paying attention
to?
What kind of file system did you use?
(2013/08/05 17:14), Amit Langote wrote:
So, within the limits of max_files_per_process, the routines of file.c
should not become a bottleneck?
It may not become a bottleneck.
One FD consumes 160 bytes on a 64-bit system. See the Linux manual at "epoll".
Regards,
--
Mitsumasa KONDO
NTT Open Source Software
Hi,
Horiguchi-san's patch does not seem to record minRecoveryPoint in ReadRecord();
the attached patch records minRecoveryPoint.
[crash recovery -> record minRecoveryPoint in control file -> archive recovery]
I think this is the original intention of Heikki's patch.
I also found a bug in latest 9.2_st
(2013/03/06 16:50), Heikki Linnakangas wrote:>
Hi,
Horiguchi-san's patch does not seem to record minRecoveryPoint in ReadRecord();
the attached patch records minRecoveryPoint.
[crash recovery -> record minRecoveryPoint in control file -> archive
recovery]
I think this is the original intention of Heik
(2013/03/07 19:41), Heikki Linnakangas wrote:
On 07.03.2013 10:05, KONDO Mitsumasa wrote:
(2013/03/06 16:50), Heikki Linnakangas wrote:>
Yeah. That fix isn't right, though; XLogPageRead() is supposed to
return true on success, and false on error, and the patch makes it
return 't
Hi,
I found a problem where start-up archive recovery fails in standby mode in
PG9.2.4 streaming replication.
I tested for the same problem in PG9.2.3, but it does not occur...
cp: cannot stat `../arc/00030013': No such file or directory
[Standby] 2013-04-22 01:27:25 EDT LOG: 0: restored
Hi,
I discovered the cause of the problem. I think that Horiguchi-san's discovery is
a different problem...
The problem is in CreateRestartPoint. In recovery mode, PG should not write WAL
records, because PG does not know the latest WAL file location.
But in this problem case, PG (standby) writes a WAL file at the restart point in arch
Hi,
I found that archive recovery and the SR promote command fail with "contrecord is
requested by 0/420" in ReadRecord().
I investigated "contrecord"; it means that the record crosses a page boundary.
I think it is not an irregular page, and we should try to read the next page in
this case.
But in a
(2013/05/07 22:40), Heikki Linnakangas wrote:
On 26.04.2013 11:51, KONDO Mitsumasa wrote:
So I fixed CreateRestartPoint at the branching point of executing
MinRecoveryPoint.
It seems to fix this problem, but the attached patch is plain.
I didn't understand this. I committed a fix for the issue
://mysql.lamphost.net/sources/doxygen/mysql-5.1/structPgman_1_1Page__entry.html
- Song Jiang HP: http://www.ece.eng.wayne.edu/~sjiang/
--
Kondo Mitsumasa
NTT Corporation, NTT Open Source Software Center
(2013/10/21 20:17), KONDO Mitsumasa wrote:
> (2013/10/18 22:21), Andrew Dunstan wrote:
>> If we're going to extend pg_stat_statements, even more than min and max
>> I'd like to see the standard deviation in execution time.
> OK. I do! I am making some other patc
Oh! Sorry...
I forgot to attach my latest patch.
Regards,
--
Mitsumasa KONDO
NTT Open Source Software Center
diff --git a/contrib/pg_stat_statements/pg_stat_statements--1.1--1.2.sql b/contrib/pg_stat_statements/pg_stat_statements--1.1--1.2.sql
new file mode 100644
index 000..929d623
--- /dev/n
Hi Claudio,
(2013/11/14 22:53), Claudio Freire wrote:
On Thu, Nov 14, 2013 at 9:09 AM, KONDO Mitsumasa
wrote:
I created a patch that improves disk reads and OS file caching. It can
optimize the kernel readahead parameter using the buffer access strategy and
posix_fadvise() in various disk-read
(2013/11/15 2:03), Fujii Masao wrote:
On Thu, Nov 14, 2013 at 9:09 PM, KONDO Mitsumasa
wrote:
Hi,
I created a patch that improves disk reads and OS file caching. It can
optimize the kernel readahead parameter using the buffer access strategy and
posix_fadvise() in various disk-read situations
(2013/11/14 7:11), Peter Geoghegan wrote:
On Wed, Oct 23, 2013 at 8:52 PM, Alvaro Herrera
wrote:
Hmm, now if we had portable atomic addition, so that we could spare the
spinlock ...
And adding a histogram or
min/max for something like execution time isn't an approach that can
be made to work f
(2013/11/15 11:17), Peter Geoghegan wrote:
On Thu, Nov 14, 2013 at 6:18 PM, KONDO Mitsumasa
wrote:
I will fix it. Could you tell me your Mac OS version and gcc version? I have
only a MacBook Air with Mavericks (10.9).
I have an idea that Mac OS X doesn't have posix_fadvise at all. Didn&