but I did not find an explicit
answer in archives)
Thanks for any inputs!
Rgds,
-Dimitri
---(end of broadcast)---
TIP 4: Have you searched our list archives?
http://archives.postgresql.org
On Thursday 22 March 2007 14:52, Alvaro Herrera wrote:
> Dimitri wrote:
> > Folks,
> >
> > are there any constraints/problems/etc. with running several vacuum processes
> > in parallel while each one is vacuuming a different table?
>
> No, no problem. Ke
On Thursday 22 March 2007 16:12, Alvaro Herrera wrote:
> Dimitri wrote:
> > On Thursday 22 March 2007 14:52, Alvaro Herrera wrote:
> > > Dimitri wrote:
> > > > Folks,
> > > >
> > > > are there any constraints/problems/etc. with running several
t cannot
fully load the storage array... So, the more vacuum processes I start in
parallel, the faster I'll finish vacuuming the database.
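To make the idea concrete, a minimal sketch (table names are placeholders, not from the thread) - each session simply vacuums its own table:

```sql
-- session 1:
VACUUM VERBOSE table_a;

-- session 2, started in parallel:
VACUUM VERBOSE table_b;

-- session 3, started in parallel:
VACUUM VERBOSE table_c;
```

Since a plain VACUUM locks only its own table (with a lock that doesn't conflict with a VACUUM on a different table), the sessions don't block each other; the limit becomes the I/O capacity of the storage array.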
Best regards!
-Dimitri
On Thursday 22 March 2007 18:10, Michael Stone wrote:
> On Thu, Mar 22, 2007 at 04:55:02PM +0100, Dimitri wrote:
> >In my case I have se
y and use separated pool for logs if needed.
Also, RAID-Z is oriented more toward data safety than performance; RAID-10
should be a better choice...
Rgds,
-Dimitri
On Thursday 22 March 2007 19:46, Michael Stone wrote:
> On Thu, Mar 22, 2007 at 07:24:38PM +0100, Dimitri wrote:
> >you're right until you're using a single disk :)
> >Now, imagine you have more disks
>
> I do have more disks. I maximize the I/O performance by dedic
On Friday 23 March 2007 14:32, Matt Smiley wrote:
> Thanks Dimitri! That was very educational material! I'm going to think
> out loud here, so please correct me if you see any errors.
Your mail is so long that I was unable to answer all the questions the same day :))
>
> The section o
oad do:
# mount -o remount,logging /path_to_your_filesystem
and check whether the I/O volume increases along with the TX numbers,
then come back:
# mount -o remount,forcedirectio /path_to_your_filesystem
and see whether the I/O volume decreases along with the TX numbers...
Best regards!
-Dimitri
>
> Now, why TX
d write 8K?... Is
there any reason to use such a big default block size?...
Perhaps it would be a good idea to make it an 'initdb' parameter and
have such a value per database server?
Rgds,
-Dimitri
>
> However, to understand TX number mystery I think the only possible
> solution
>
d
(for the same amount of data)... Even if we probably rewrite the same
block several times with incoming transactions, it still costs traffic,
and we will process slower even though the H/W can do better. I don't
think that's good, no? ;)
Rgds,
-Dimitri
On 3/30/07, Erik Jones <[EMAIL PROTECTED]> wr
gain decreases quickly with a growing workload! So yes, 8K is good
enough and probably the optimal choice for the LOG (as well as data)
block size.
Rgds,
-Dimitri
Well, to check whether there is a real potential gain, all we need is a
small comparison test using PgSQL compiled with a LOG block size equa
very interesting,
but very different :))
Rgds,
-Dimitri
On 4/3/07, A.M. <[EMAIL PROTECTED]> wrote:
On Apr 3, 2007, at 16:00 , Alan Hodgson wrote:
> On Tuesday 03 April 2007 12:47, "A.M."
> <[EMAIL PROTECTED]> wrote:
>> On Apr 3, 2007, at 15:39 , C. Bergström wrot
Wow, it's excellent! :))
Probably the next step is:
ALTER TABLE CACHE ON/OFF;
just to force keeping a given table in the cache. What do you think?...
Rgds,
-Dimitri
On 4/5/07, Josh Berkus wrote:
Dimitri,
> Probably another helpful solution may be to implement:
>
>ALTER
'logically' sequential blocks...
Rgds,
-Dimitri
On 5/30/07, Albert Cervera Areny <[EMAIL PROTECTED]> wrote:
The hardware isn't very good, I believe, and it's about 2-3 years old,
but the RAID is Linux software, and though not very good, the
difference between reading and writing sh
part including ZFS specific tuning
Tests were executed in Mar/Apr 2007 with v8.2.3, the latest at that time.
Due to limited spare time I was able to publish the results only now...
Any comments are welcome! :)
Best regards!
-Dimitri
nting there is no result :)
Also, I did not initially plan to compare databases (I don't know why,
but it always starts a small war between DB vendors :)), but the
results were so surprising that I just continued as long as it was
possible :))
Rgds,
-Dimitri
On 5/31/07, Alexander Staubo wrote:
y its robustness :))
Rgds,
-Dimitri
On 6/1/07, Craig James <[EMAIL PROTECTED]> wrote:
Apologies for a somewhat off-topic question, but...
The Linux kernel doesn't properly detect my software RAID1+0 when I boot up.
It detects the two RAID1 arrays, the partitions of which are marked
p
CURSOR.
The 'psql' program does not use a CURSOR by default, so it's easy to
check whether you're hitting this issue just by executing your query
remotely from 'psql'...
Rgds,
-Dimitri
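For reference, that check can be done entirely in psql - FETCH_COUNT is a standard psql variable; the table name here is a placeholder:

```sql
\timing
SELECT * FROM big_table;     -- default: whole result set buffered client-side

\set FETCH_COUNT 100
SELECT * FROM big_table;     -- now psql wraps the query in a cursor and
                             -- fetches 100 rows per round trip
```

Comparing the two timings shows whether the client-side buffering (rather than the server) is the bottleneck.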
On 6/21/07, Rainer Bauer <[EMAIL PROTECTED]> wrote:
Hello Tom,
ce code
itself, you may find the ExecQueryUsingCursor function implementation
(in file common.c)...
Rgds,
-Dimitri
On 6/22/07, Rainer Bauer <[EMAIL PROTECTED]> wrote:
Hello Dimitri,
>but did you try to execute your query directly from 'psql' ?...
munnin=>\timing
munnin=>select *
To keep the default network workload optimal, I think we need to make
"FETCH N" more popular among developers and enable it (even hidden) by
default in ODBC/JDBC and other generic modules...
Rgds,
-Dimitri
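What "FETCH N" means at the SQL level, for a driver that wanted to implement it - a hedged sketch (cursor and table names are placeholders):

```sql
BEGIN;
DECLARE result_cur CURSOR FOR SELECT * FROM big_table;
FETCH 100 FROM result_cur;   -- repeat until fewer than 100 rows come back
FETCH 100 FROM result_cur;
CLOSE result_cur;
COMMIT;
```

A generic driver could issue this transparently, trading one big transfer for several small round trips with bounded client memory.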
On 6/22/07, Tom Lane <[EMAIL PROTECTED]> wrote:
Rainer Bauer <[EMA
Rainer, but did you try the initial query with FETCH_COUNT equal to 100?...
Rgds,
-Dimitri
On 6/22/07, Rainer Bauer <[EMAIL PROTECTED]> wrote:
Hello Dimitri,
>Let's stay optimist - at least now you know the main source of your
problem! :))
>
>Let's see now with CURS
at least there is a choice :))
Also, if your query result is only 500 rows (for example), I think the
difference between non-CURSOR and "FETCH 500" execution will be less
important...
Rgds,
-Dimitri
On 6/22/07, Rainer Bauer <[EMAIL PROTECTED]> wrote:
Hello Dimitri,
>Rainer, but did you try
Rainer, looking through the psqlODBC source code, it seems to work in a
similar way and has an option "SQL_ROWSET_SIZE" to execute the FETCH
query the same way as "FETCH_COUNT" does in psql. Try setting it to 100
and let's see if it gets better...
Rgds,
-Dimitri
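If SQL_ROWSET_SIZE alone doesn't help, psqlODBC also exposes cursor-based fetching as DSN options; a sketch of an odbc.ini entry (the UseDeclareFetch/Fetch option names should be verified against your driver version):

```
[munnin]
Driver          = PostgreSQL
UseDeclareFetch = 1    ; wrap SELECTs in a DECLARE/FETCH cursor
Fetch           = 100  ; rows fetched per round trip
```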
On 6/22/07, Rainer B
- per checkpoint basis?
- full?...
Thanks a lot for any info!
Rgds,
-Dimitri
should work without fsync)...
On 7/3/07, Heikki Linnakangas <[EMAIL PROTECTED]> wrote:
Dimitri wrote:
> I'm very curious to know if we may expect or guarantee any data
> consistency with WAL sync=OFF but using file system mounted in Direct
> I/O mode (means every write() syst
Yes Gregory, that's why I'm asking: I'm jumping from 1800
transactions/sec to 2800 transactions/sec, and that's a more than
significant performance increase :))
Rgds,
-Dimitri
On 7/4/07, Gregory Stark <[EMAIL PROTECTED]> wrote:
"Dimitri" <
y expect
currently, and think about a migration before the end of this year...
Seeing at least 10,000 random writes/sec on the storage subsystem
during a live database test was very pleasant for the customer and
made them feel comfortable about their production...
Thanks a lot for all your help!
Best regards!
-Dimitr
! etc. etc. etc. :)
Rgds,
-Dimitri
On 7/9/07, Jonah H. Harris <[EMAIL PROTECTED]> wrote:
On 7/9/07, Jim C. Nasby <[EMAIL PROTECTED]> wrote:
> BTW, it might be worth trying the different wal_sync_methods. IIRC,
> Jonah's seen some good results from open_datasync.
On Linux, usin
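Trying the different methods the quote mentions is a one-line postgresql.conf change (values as documented by PostgreSQL; which ones are available depends on the platform):

```
wal_sync_method = open_datasync   # also try: fdatasync, fsync, open_sync
```

Each method should be benchmarked on the actual hardware, since the fastest choice varies by OS and filesystem.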
with prefetch! When a sequential pattern is detected,
ZFS will read the next blocks without any demand from PG; but otherwise,
why would you need to read more pages each time when PG is asking for
only one?...
- prefetch is of course not needed for OLTP, but it helps on OLAP/DWH, agreed :)
Rgds,
-Dimitri
On 7/22/07, Luke Lonergan wrote:
obsolete at the end of this year :))
BTW, I forgot to mention: you'll need Solaris 10u4, or at least 10u3
with all recent patches applied, to run the M8000 at full power.
Best regards!
-Dimitri
On 7/30/07, Luke Lonergan <[EMAIL PROTECTED]> wrote:
> Hi Dimitri,
>
> Can you post so
BTW, will it improve anything if you change your index to "my_table(
id, the_date )"?
Rgds,
-Dimitri
On 9/5/07, JS Ubei <[EMAIL PROTECTED]> wrote:
> Hi all,
>
> I need to improve a query like :
>
> SELECT id, min(the_date), max(the_date) FROM my_table GROUP BY
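The suggestion above, spelled out (the index name is a placeholder; the GROUP BY column is assumed to be id, matching the min/max aggregates):

```sql
CREATE INDEX my_table_id_date_idx ON my_table (id, the_date);

-- the query being tuned:
SELECT id, min(the_date), max(the_date)
FROM my_table
GROUP BY id;
```

With both columns in the index, the min and max for each id can be read from the index ends instead of scanning the whole table.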
Josh,
it would be great if you explained how you changed the record size to
128K - as this size is assigned at file creation and cannot be changed
later, I suppose you made a backup of your data and then did a full
restore.. is that so?
Rgds,
-Dimitri
On 5/8/10, Josh Berkus
. But
once you've re-copied your files, the right record size is applied
again.
BTW, 8K is recommended for OLTP workloads, but for DW you may stay
with 128K without a problem.
Rgds,
-Dimitri
On 5/10/10, Josh Berkus wrote:
> On 5/9/10 1:45 AM, Dimitri wrote:
>> Josh,
>>
>&g
such resistance to implementing hints within SQL
queries in PG?..
Rgds,
-Dimitri
On 7/9/10, Robert Haas wrote:
> On Fri, Jul 9, 2010 at 6:13 AM, damien hostin
> wrote:
>>> Have you tried running ANALYZE on the production server?
>>>
>>> You might also want to try ALT
e are many
posts in blogs about optimal compiler options to use).. - don't
hesitate to try, and don't forget to share your results here with others :-))
Rgds,
-Dimitri
On 8/11/10, Joseph Conway wrote:
> With a 16 CPU, 32 GB Solaris Sparc server, is there any conceivable
> reason to use
So, does it mean that VACUUM will never clean dead rows if you have
non-stop transactional activity in your PG database???... (24/7 OLTP,
for example)
Rgds,
-Dimitri
On 8/19/10, Kevin Grittner wrote:
> Alexandre de Arruda Paes wrote:
>> 2010/8/18 Tom Lane
>
>>> There
n the
oldest transaction = we have a problem in PG.. Otherwise it works as
expected to match MVCC.
Rgds,
-Dimitri
On 8/21/10, Scott Marlowe wrote:
> No, it means it can't clean rows that are younger than the oldest
> transaction currently in progress. if you started a transaction 5
"VACUUM FORCE TABLE" will simply be aware of what he's
doing and be sure that none of the active transactions will ever
access this table.
What do you think?.. ;-)
Rgds,
-Dimitri
On 8/22/10, Robert Haas wrote:
> On Sat, Aug 21, 2010 at 9:49 AM, Alexandre de Arruda Paes
> w
You may also try Sun's F5100 (flash storage array) - you can easily
get 700 MB/s with just a single I/O stream (single process), so with
just 2 streams you'll get your throughput.. The array has 2TB of total
space, and its max throughput should be around 4GB/s..
Rgds,
-Dimitri
On 11/1
't know
why they did not present similar performance graphs for this
platform - strange, no?...
Rgds,
-Dimitri
On 11/9/07, Ron Mayer <[EMAIL PROTECTED]> wrote:
> Bill Moran wrote:
> > On Fri, 9 Nov 2007 11:11:18 -0500 (EST)
> > Greg Smith <[EMAIL PROTECTED]>
not null,
HORDER INT not null,
REF_STAT CHAR(3) not null,
BEGIN_DATE CHAR(12) not null,
END_DATE CHAR(12),
NOTE CHAR(100)
);
create unique index s
ay we expect with CHAR vs VARCHAR if
all data have a fixed length?..
Any way to force a nested loop without an additional index?..
It's 2 times faster on InnoDB, and as it's just a SELECT query there's
no need to go into transaction details :-)
Rgds,
-Dimitri
On 5/6/09, Craig Ringer wrote:
> Di
Hi Heikki,
I've already tried a target of 1000, and the only thing it changes
compared to the current 100 (default) is that instead of 2404 rows it
says 240 rows, but the plan remains the same..
Rgds,
-Dimitri
On 5/6/09, Heikki Linnakangas wrote:
> Dimitri wrote:
>> any idea if t
Hi Chris,
the only problem I see here is that it's 2 times slower vs InnoDB, so
before I tell myself it's OK I want to be sure there is nothing else
left to do.. :-)
Rgds,
-Dimitri
On 5/6/09, Chris wrote:
> Dimitri wrote:
>> Hi Craig,
>>
>> yes, you detailed ver
Hi Richard,
no, of course it's not based on explain :-)
I've run several tests before and am now going in depth to verify that
nothing is wrong. Due to this single-query time difference, InnoDB is
reaching a 2-3 times better TPS level compared to PostgreSQL..
Rgds,
-Dimitri
On 5/6/0
t all my "lc_*" variables are set to "C"...
Rgds,
-Dimitri
On 5/6/09, Merlin Moncure wrote:
> On Wed, May 6, 2009 at 7:46 AM, Merlin Moncure wrote:
>> prepare history_stat(char(10) as
>
> typo:
> prepare history_stat(char(10)) as
>
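With the typo corrected, the full prepare/execute pattern looks like this - the statement body and parameter value are assumptions, since the real query isn't shown in this snippet:

```sql
PREPARE history_stat(char(10)) AS
  SELECT count(*) FROM history WHERE ref_object = $1;  -- body assumed

EXECUTE history_stat('0000000001');

DEALLOCATE history_stat;
```

Planning happens once at PREPARE time, so repeated EXECUTEs skip the parse/plan overhead being measured in this thread.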
.3.7, but there is still room for improvement if such a
small query can go faster :-)
Rgds,
-Dimitri
On 5/6/09, Albe Laurenz wrote:
> Dimitri wrote:
>> I've run several tests before and now going in depth to understand if
>> there is nothing wrong. Due such a single query tim
- executing the same prepared "select count(*) ..." took 0.68ms
So, where is the time going?...
Rgds,
-Dimitri
On 5/6/09, Ries van Twisk wrote:
>
> On May 6, 2009, at 7:53 AM, Richard Huxton wrote:
>
>> Dimitri wrote:
>>> I'll try to answer all mails
I supposed that with prepare-then-execute the query optimizer no
longer comes into play during the "execute" phase, or did I miss
something?..
I forgot to say: the query cache is disabled on the MySQL side.
Rgds,
-Dimitri
On 5/6/09, Craig Ringer wrote:
> Dimitri wrote:
>> Hi Chris,
>&
Hi Ken,
yes, I can do it, but I did not expect to get into profiling initially :-)
I expected there was just something trivial within the plan that I just
didn't know.. :-)
BTW, is there already a profiler integrated in the code, or do I
need external tools?..
Rgds,
-Dimitri
On 5/6/09, Ke
6ms
Any idea why the planner is not choosing this plan from the beginning?..
Any way to keep this plan without disabling hashjoin globally or per
session?..
Rgds,
-Dimitri
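For comparison, the per-session switch being discussed is just a GUC - a sketch:

```sql
-- run in the problem session:
SET enable_hashjoin = off;   -- affects this session only
-- ... run or EXPLAIN the query here ...
RESET enable_hashjoin;       -- restore the default
```

This is fine for testing, but as the question above implies, it's a blunt instrument for production since it disables hash joins for every query in the session.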
On 5/6/09, Simon Riggs wrote:
>
> On Wed, 2009-05-06 at 10:31 +0200, Dimitri wrote:
>
>> I've alrea
ect
0.007 0.127 ExecScan
...
Curiously, "memcpy" is at the top. I don't know how many cases it
impacts, but it probably makes sense to see whether it can be optimized, etc..
Rgds,
-Dimitri
On 5/7/09, Euler Taveira de Oliveira wrote:
> Dimitri escreveu:
>>
and not surpassing
6,000 TPS, while 8.4 uses 90% CPU and reaches 11,000 TPS..
At the same time, comparing 8.3 and 8.4, the response time is 2 times
lower in 8.4, and it seems to me the main gain for 8.4 is here.
I'll publish all the details, I just need some time :-)
Rgds,
-Dimitri
On 5
t else may limit concurrent SELECTs here?..
Yes, I forgot: MySQL is reaching 17,500 TPS here.
Rgds,
-Dimitri
On 5/7/09, Simon Riggs wrote:
>
> On Thu, 2009-05-07 at 20:36 +0200, Dimitri wrote:
>
>> I've simply restarted a full test with hashjoin OFF. Until 32
>> concurre
sec - it helped, the throughput is more
stable now, but instead of big waves I now have short waves anyway..
What is the best combination of options here?..
Rgds,
-Dimitri
't it?
;-)
And what about scalability on 32 cores?..
Any idea?
Rgds,
-Dimitri
On 5/11/09, Tom Lane wrote:
> Dimitri writes:
>> Anyone may explain me why analyze target may have so huge negative
>> secondary effect?..
>
> If these are simple queries, maybe what you're
ween autovacuum runs
autovacuum_vacuum_threshold = 50
autovacuum_analyze_threshold = 50
autovacuum_vacuum_scale_factor = 0.001
lc_messages = 'C'
lc_monetary = 'C'
lc_numeric = 'C'
lc_time = 'C'
#
Rgds,
-Dimitri
On 5/11
What about "full_page_writes"? It seems it's "on" by default. Does it
make sense to turn it off?..
Rgds,
-Dimitri
On 5/11/09, Kevin Grittner wrote:
> Dimitri wrote:
>
>> PostgreSQL: 8.3.7 & 8.4
>> Server: Sun M5000 32cores
>> OS: Solaris 10
>
> Does that have a battery
!
Rgds,
-Dimitri
On 5/11/09, Scott Marlowe wrote:
> On Mon, May 11, 2009 at 10:31 AM, Dimitri wrote:
>> Hi Kevin,
>>
>> PostgreSQL: 8.3.7 & 8.4
>> Server: Sun M5000 32cores
>> OS: Solaris 10
>>
>> current postgresql.conf:
>>
>>
OK, it'd be better to avoid such an improvement :-)
Performance, yes - but not at any price :-)
Thank you!
Rgds,
-Dimitri
On 5/11/09, Kevin Grittner wrote:
> Dimitri wrote:
>
>> What about "full_page_writes" ? seems it's "on" by default. Does it
>>
tests it gives:
- on 8 cores: 14.000 TPS
- on 16 cores: 17.500 TPS
- on 32 cores: 15.000 TPS (regression)
Rgds,
-Dimitri
On 5/11/09, Simon Riggs wrote:
>
> On Mon, 2009-05-11 at 17:18 +0200, Dimitri wrote:
>
>> Yes, I forgot: MySQL is reaching 17,500 TPS here.
>
> Ple
is the most
dominant - so instead of blaming this query, why not implement a QUERY
PLANNER CACHE??? - such that if any *similar* query is recognized by
the parser, we simply *reuse* the same plan?..
Rgds,
-Dimitri
On 5/11/09, Aidan Van Dyk wrote:
> * Dimitri [090511 11:18]:
>> Folks, it'
ry is recognized by
>> parser we simply *reuse* the same plan?..
>
> This has been discussed in the past, but it turns out that a real
> implementation is a lot harder than it seems.
Ok. If I remember correctly, Oracle has it and it helps a lot, but for
sure it's not easy t
ta and your indexes - don't
need to spend so much time.
Rgds,
-Dimitri
On 5/12/09, Heikki Linnakangas wrote:
> Dimitri wrote:
>> Now, as you see from your explanation, the Part #2 is the most
>> dominant - so why instead to blame this query not to implement a QUERY
>> PLAN
It's just one of the test conditions - "what if there are 2000 users?" - I
know I could use pgpool or others, but I also need to know the limits of
the database engine itself.. For the moment I'm limiting it to 256
concurrent sessions, but the config params are kept as if for 2000 :-)
Rgd
on read+write workload! :-)
Any other comments are welcome!
Rgds,
-Dimitri
On 5/12/09, Dimitri Fontaine wrote:
> Hi,
>
> Dimitri writes:
>
>>>> So, why I don't use prepare here: let's say I'm testing the worst
>>>> stress case :-) Imagine y
et to
get a profit from idiots :-)) That's why I have never bet in my life,
but I always tell the same story in such a situation... Did you
like it? ;-))
However, no problem to give credit to you as well as to the whole
pg-perf list, as it provides very valuable help! :-))
Rgds,
-Dimitri
On 5/12
)..
And yes, I'll try to profile on 32 cores, it makes sense.
Rgds,
-Dimitri
On 5/12/09, Heikki Linnakangas wrote:
> Dimitri wrote:
>> What I discovered so far with all your help:
>> - the impact of a planner
>> - the impact of the analyze target
>> - the i
_statistics_target to 5 ! - but this one I
found myself :-))
Probably checkpoint_timeout can be bigger now with the current
settings? The goal here is to keep Read+Write TPS as stable as
possible and also to avoid a long recovery in case of a
system/database/other crash (in theory).
Rgds,
-Dimitri
way to run it against any
database schema, it's only a question of time..
Rgds,
-Dimitri
On 5/12/09, Stefan Kaltenbrunner wrote:
> Dimitri wrote:
>> Folks, before you start to think "what a dumb guy doing a dumb thing" :-))
>> I'll explain you few details:
>>
ago PostgreSQL outperformed MySQL on
the same test case, and nothing was done within the MySQL code to
improve it explicitly for db_STRESS.. And I stay pretty honest
when I'm testing something.
Rgds,
-Dimitri
On 5/12/09, Robert Haas wrote:
> On Tue, May 12, 2009 at 8:59 AM,
Good point! I missed it.. - will 20MB be enough?
Rgds,
-Dimitri
On 5/12/09, Julian v. Bock wrote:
> Hi
>
>>>>>> "D" == Dimitri writes:
>
> D> current postgresql.conf:
>
> D> #
> D> max_connections =
On 5/12/09, Stefan Kaltenbrunner wrote:
> Dimitri wrote:
>> Hi Stefan,
>>
>> sorry, I did not have a time to bring all details into the toolkit -
>> but at least I published it instead to tell a "nice story" about :-)
>
> fair point and appreciate
No, they keep connections till the end of the test.
Rgds,
-Dimitri
On 5/12/09, Joshua D. Drake wrote:
> On Tue, 2009-05-12 at 17:22 +0200, Dimitri wrote:
>> Robert, what I'm testing now is 256 users max. The workload is growing
>> progressively from 1, 2, 4, 8 ... to 256
On 5/12/09, Robert Haas wrote:
> On Tue, May 12, 2009 at 1:00 PM, Dimitri wrote:
>> On MySQL there is no changes if I set the number of sessions in the
>> config file to 400 or to 2000 - for 2000 it'll just allocate more
>> memory.
>
> I don't care whether t
s talking about this as an 'unoptimal' solution, the
> fact is there is no evidence that a connection pooler will fix the
> scalability from 16 > 32 cores.
> Certainly a connection pooler will help most results, but it may not fix the
> scalability problem.
>
> A q
- 2 more clients are started => 4 in total
- sleep ..
...
... ===> 256 in total
- sleep ...
- kill clients
So I'm even able to monitor how each new client impacts all the others.
The test kit is flexible enough to prepare any kind of stress situation.
Rgds,
-Dimitri
On 5/12/09, Gle
your position with a pooler, but I also want you to think
about the idea that a 128-core system will become a commodity server
very soon, and to use these cores at their full power you'll need a
database engine capable of running 256 users without a pooler, because
a pooler will not help you here anymore..
Rgds
db engine):
http://dimitrik.free.fr/db_STRESS_MySQL_540_and_others_Apr2009.html#note_5442
Rgds,
-Dimitri
On 5/13/09, Kevin Grittner wrote:
> Glenn Maynard wrote:
>> I'm sorry, but I'm confused. Everyone keeps talking about
>> connection pooling, but Dimitri has said
aphs, pgsql, and other). I'll publish it on my web site and
send you a link.
Rgds,
-Dimitri
On 5/14/09, Simon Riggs wrote:
>
> On Tue, 2009-05-12 at 14:28 +0200, Dimitri wrote:
>
>> As problem I'm considering a scalability issue on Read-Only workload -
>> only selects
It's absolutely great!
It'll not help here because the think time is 0,
but for any kind of solution with a pooler it's a must-try!
Rgds,
-Dimitri
On 5/13/09, Dimitri Fontaine wrote:
> Hi,
>
> Le 13 mai 09 à 18:42, Scott Carey a écrit :
>>> will not help, as e
Hi Scott,
let me now finish my report and regroup all the data together, and then
we'll continue the discussion as it moves more into the debug/profile
phase.. - it would not be polite on my part to send tons of attachments
to the mailing list :-)
Rgds,
-Dimitri
On 5/13/09, Scott Carey wro
vailable test time will be very limited..
Best regards!
-Dimitri
On 5/18/09, Simon Riggs wrote:
>
> On Thu, 2009-05-14 at 20:25 +0200, Dimitri wrote:
>
>> # lwlock_wait_8.4.d `pgrep -n postgres`
>
>>Lock IdMode Combined Time
Thanks Dave for the correction, but I'm also curious where the time is
wasted in this case?..
0.84ms is displayed by "psql" once the result output is printed, and I
got a similar time within my client (using libpq), which is not printing
any output..
Rgds,
-Dimitri
On 5/18/09, Dave
On 5/18/09, Scott Carey wrote:
> Great data Dimitri!
Thank you! :-)
>
> I see a few key trends in the poor scalability:
>
> The throughput scales roughly with %CPU fairly well. But CPU used doesn't
> go past ~50% on the 32 core tests. This indicates lock contentio
On 5/18/09, Simon Riggs wrote:
>
> On Mon, 2009-05-18 at 20:00 +0200, Dimitri wrote:
>
>> >From my point of view it needs first to understand where the time is
>> wasted on a single query (even when the statement is prepared it runs
>> still slower comparing t
No, Tom, the query cache was off.
I always explicitly turn it off on MySQL as it has scalability issues.
Rgds,
-Dimitri
On 5/19/09, Tom Lane wrote:
> Simon Riggs writes:
>> In particular, running the tests repeatedly using
>> H.REF_OBJECT = '01'
>>
On 5/19/09, Scott Carey wrote:
>
> On 5/18/09 3:32 PM, "Dimitri" wrote:
>
>> On 5/18/09, Scott Carey wrote:
>>> Great data Dimitri!
>>
>> Thank you! :-)
>>
>>>
>>> I see a few key trends in the poor scalability:
On 5/19/09, Simon Riggs wrote:
>
> On Tue, 2009-05-19 at 00:33 +0200, Dimitri wrote:
>> >
>> > In particular, running the tests repeatedly using
>> >H.REF_OBJECT = '01'
>> > rather than varying the value seems likely to benefit MyS
With 32 sessions it's 18ms, etc..
I've retested on 24 isolated cores, so any external secondary effects
are avoided.
Rgds,
-Dimitri
On 5/19/09, Dimitri wrote:
> On 5/19/09, Simon Riggs wrote:
>>
>> On Tue, 2009-05-19 at 00:33 +0200, Dimitri wrote:
>>> >
>
's dramatically dropping down..
Rgds,
-Dimitri
On 5/19/09, Simon Riggs wrote:
>
> On Tue, 2009-05-19 at 14:00 +0200, Dimitri wrote:
>
>> I may confirm the issue with hash join - it's repeating both with
>> prepared and not prepared statements - it's curious because i
On 5/19/09, Merlin Moncure wrote:
> On Mon, May 18, 2009 at 6:32 PM, Dimitri wrote:
>> Thanks Dave for correction, but I'm also curious where the time is
>> wasted in this case?..
>>
>> 0.84ms is displayed by "psql" once the result output is printed,
On 5/19/09, Scott Carey wrote:
>
> On 5/19/09 3:46 AM, "Dimitri" wrote:
>
>> On 5/19/09, Scott Carey wrote:
>>>
>>> On 5/18/09 3:32 PM, "Dimitri" wrote:
>>>
>>>> On 5/18/09, Scott Carey wrote:
>>>>> Gre
On 5/19/09, Merlin Moncure wrote:
> On Tue, May 19, 2009 at 11:53 AM, Dimitri wrote:
>> On 5/19/09, Merlin Moncure wrote:
>>> On Mon, May 18, 2009 at 6:32 PM, Dimitri wrote:
>>>> Thanks Dave for correction, but I'm also curious where the time is
>>>
A few weeks ago I tested a customer application on 16 cores with Oracle:
- 20,000 sessions in total
- 70,000 queries/sec
without any problem on a mid-range Sun box + Solaris 10..
Rgds,
-Dimitri
On 6/3/09, Kevin Grittner wrote:
> James Mansion wrote:
>
>> I'm sure most
re are any PG scalability limits, an
integrated pooler will in most cases be more performant than an
external one; if there are no PG scalability limits, it'll still help
to size PG optimally according to HW or OS capacities..
Rgds,
-Dimitri
On 6/3/09, Kevin Grittner wrote:
> Dimitri wrote:
&
er states (full DIRECT or fully cached).
Rgds,
-Dimitri
On 5/5/11, Robert Haas wrote:
> On Sat, Apr 30, 2011 at 4:51 AM, Hsien-Wen Chu
> wrote:
>> since the block size is 8k for the default, and it consisted with many
>> tuple/line; as my understand, i
r having a distributed-processing-
ready EA in some future), cheaper and accurate?
After all, the discussion, as far as I understand it, is about having an
accurate measure of the duration of events; knowing when in the day they
occurred does not seem to be the point.
My 2¢, hoping this could be somehow he
http://pgfouine.projects.postgresql.org/tsung.html
http://tsung.erlang-projects.org/
http://debian.dalibo.org/unstable/
This latter link also contains a .tar.gz archive of tsung-ploter in case
you're not running a debian system. Dependencies are python and matplotlib.
Regards,
--
Dimitri Fontaine
http://www
[3]: http://tsung.erlang-projects.org/
[4]: http://debian.dalibo.org/unstable/tsung-ploter_0.1-1.tar.gz
Regards,
--
Dimitri Fontaine
http://www.dalibo.com/