On Fri, Aug 14, 2009 at 4:21 PM, Tom Lane wrote:
> Jeff Janes writes:
>> I apologize if it is bad form to respond to a message that is two
>> months old, but I did not see this question answered elsewhere and
>> thought it would be helpful to have it answered. This is my rough
>> understanding. Oracle never "takes" a snapshot, it computes one on the
>> fly, if and when it is needed.
On 14 August 2009 at 03:18 Jeff Janes wrote:
> This is my rough understanding. Oracle never
> "takes" a snapshot, it computes one on the fly, if and when it is needed. It
> maintains a
> structure of recently committed transactions, with the XID for when they
> committed. If a
> process runs into
Jeff Janes writes:
> I apologize if it is bad form to respond to a message that is two
> months old, but I did not see this question answered elsewhere and
> thought it would be helpful to have it answered. This is my rough
> understanding. Oracle never "takes" a snapshot, it computes one on the
> fly
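To make the contrast concrete, here is a minimal C sketch of the "compute the
snapshot on the fly" idea Jeff describes, assuming a hypothetical global commit
log that records recently committed XIDs with a monotonically increasing commit
sequence number. It illustrates the concept only; it is not Oracle's or
PostgreSQL's actual implementation.

/* A sketch of lazy, on-demand visibility checks.  All names here
 * (CommitRecord, commit_log, row_is_visible) are hypothetical. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define COMMIT_LOG_SIZE 1024

typedef struct
{
    uint32_t xid;        /* committing transaction */
    uint64_t commit_seq; /* position in the global commit order */
} CommitRecord;

static CommitRecord commit_log[COMMIT_LOG_SIZE];
static uint64_t next_commit_seq = 1;

/* Record a commit; a real system would handle wraparound and spill to disk. */
static void record_commit(uint32_t xid)
{
    CommitRecord *rec = &commit_log[xid % COMMIT_LOG_SIZE];
    rec->xid = xid;
    rec->commit_seq = next_commit_seq++;
}

/* Decide lazily whether a row created by 'xmin' is visible to a reader whose
 * statement started at 'reader_start_seq'.  Nothing resembling a snapshot of
 * all running backends is ever taken; only the commit log is consulted. */
static bool row_is_visible(uint32_t xmin, uint64_t reader_start_seq)
{
    const CommitRecord *rec = &commit_log[xmin % COMMIT_LOG_SIZE];
    if (rec->xid != xmin)
        return false;                        /* not (recently) committed */
    return rec->commit_seq <= reader_start_seq; /* committed before we began */
}

int main(void)
{
    record_commit(100);                      /* commits before the query */
    uint64_t my_start = next_commit_seq - 1; /* reader's "statement start" */
    record_commit(101);                      /* commits after the query  */

    printf("row from xid 100 visible? %d\n", row_is_visible(100, my_start));
    printf("row from xid 101 visible? %d\n", row_is_visible(101, my_start));
    return 0;
}

The point is that the reader records only a single number up front; row
visibility is resolved lazily against the commit log, rather than by building
a list of all in-progress transactions at statement start.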
On Thu, 4 Jun 2009 06:57:57 -0400, Robert Haas wrote
in http://archives.postgresql.org/pgsql-performance/2009-06/msg00065.php :
> I think I see the distinction you're drawing here. IIUC, you're
> arguing that other database products use connection pooling to handle
> rapid connect/disconnect cyc
On Fri, Jun 5, 2009 at 1:02 PM, Greg Smith wrote:
> On Fri, 5 Jun 2009, Mark Mielke wrote:
>> I disagree that profiling trumps theory every time.
> That's an interesting theory. Unfortunately, profiling shows it doesn't
> work that way.
I had a laugh when I read this, but I can see someone being
On Thu, 4 Jun 2009, Mark Mielke wrote:
At its very simplest, this is the difference between "wake one thread"
(which is then responsible for waking the next thread) vs "wake all
threads". Any system which actually wakes all threads will probably
exhibit scaling limitations.
The prototype
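As an illustration of the "wake one thread" vs "wake all threads" distinction,
here is a small self-contained pthreads sketch (hypothetical example code, not
from the PostgreSQL source). With pthread_cond_signal() each work item wakes a
single waiter; change it to pthread_cond_broadcast() and every sleeping waiter
wakes for every item, reacquires the mutex, and mostly goes straight back to
sleep: the classic thundering herd.

/* Build with something like: cc -pthread wake.c
 * N waiters block on one condition variable; main() hands out NWAITERS work
 * items.  The 'wakeups' counter shows how often any waiter had to wake up. */
#include <pthread.h>
#include <stdio.h>

#define NWAITERS 8

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  work_ready = PTHREAD_COND_INITIALIZER;
static int work_items = 0;
static int wakeups = 0;

static void *waiter(void *arg)
{
    (void) arg;
    pthread_mutex_lock(&lock);
    while (work_items == 0)
    {
        pthread_cond_wait(&work_ready, &lock);
        wakeups++;              /* every wakeup costs a context switch */
    }
    work_items--;               /* exactly one thread gets each item */
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t tid[NWAITERS];
    for (int i = 0; i < NWAITERS; i++)
        pthread_create(&tid[i], NULL, waiter, NULL);

    for (int i = 0; i < NWAITERS; i++)
    {
        pthread_mutex_lock(&lock);
        work_items++;
        /* "wake one thread": swap in pthread_cond_broadcast() here to see
         * the "wake all threads" behaviour instead. */
        pthread_cond_signal(&work_ready);
        pthread_mutex_unlock(&lock);
    }

    for (int i = 0; i < NWAITERS; i++)
        pthread_join(tid[i], NULL);

    printf("%d work items, %d wakeups\n", NWAITERS, wakeups);
    return 0;
}

With signal the wakeup count stays roughly linear in the number of work items;
with broadcast it can grow roughly quadratically in the number of waiters,
which is the scaling limitation being described.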
On Fri, 5 Jun 2009, Mark Mielke wrote:
I disagree that profiling trumps theory every time.
That's an interesting theory. Unfortunately, profiling shows it doesn't
work that way.
Let's see if I can summarize the state of things a bit better here:
1) PostgreSQL stops working as efficiently
Greg Smith wrote:
No amount of theoretical discussion advances that any until
you're at least staring at a very specific locking problem you've
already characterized extensively via profiling. And even then,
profiling trumps theory every time.
In theory, there is no difference between theory
On Fri, Jun 5, 2009 at 12:33 AM, wrote:
> On Fri, 5 Jun 2009, Greg Smith wrote:
>
>> On Thu, 4 Jun 2009, Robert Haas wrote:
>>
>>> That's because this thread has altogether too much theory and
>>> altogether too little gprof.
>>
>> But running benchmarks and profiling is actual work; that's so much less
>> fun than just speculating about what's going on!
Scott Carey wrote:
> If you wake up 10,000 threads, and they all can get significant work
> done before yielding no matter what order they run, the system will
> scale extremely well.
But with roughly twice the average response time you would get by
throttling active requests to the minimum needed
Mark Mielke wrote:
> Kevin Grittner wrote:
>> James Mansion wrote:
>>> Kevin Grittner wrote:
>>>
Sure, but the architecture of those products is based around all
the work being done by "engines" which try to establish affinity
to different CPUs, and loop through the various tasks
Greg Smith wrote:
This thread reminds me of Jignesh's "Proposal of tunable fix for
scalability of 8.4" thread from March, except with only a fraction of
the real-world detail. There are multiple high-profile locks causing
scalability concerns at quadruple digit high user counts in the
PostgreSQL
On Fri, 5 Jun 2009, Greg Smith wrote:
On Thu, 4 Jun 2009, Robert Haas wrote:
That's because this thread has altogether too much theory and
altogether too little gprof.
But running benchmarks and profiling is actual work; that's so much less fun
than just speculating about what's going on!
On Thu, 4 Jun 2009, Mark Mielke wrote:
da...@lang.hm wrote:
On Thu, 4 Jun 2009, Mark Mielke wrote:
An alternative approach might be: 1) Idle processes not currently running
a transaction do not need to be consulted for their snapshot (and other
related expenses) - if they are idle for a per
On Thu, 4 Jun 2009, Robert Haas wrote:
That's because this thread has altogether too much theory and
altogether too little gprof.
But running benchmarks and profiling is actual work; that's so much less
fun than just speculating about what's going on!
This thread reminds me of Jignesh's "Proposal of tunable fix for scalability
of 8.4" thread from March, except with only a fraction of the real-world detail.
da...@lang.hm wrote:
On Thu, 4 Jun 2009, Mark Mielke wrote:
You should really only have 1X or 2X as many threads as there are
CPUs waiting on one monitor. Beyond that is waste. The idle threads
can be pooled away, and only activated (with individual monitors
which can be far more easily and ef
On Thu, Jun 4, 2009 at 8:51 PM, wrote:
> if this is the case, how hard would it be to have threads add and remove
> themselves from some list as they get busy/become idle?
>
> I've been puzzled, as I've watched this conversation, about what internal
> locking/lookup is happening that is causing t
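A toy C sketch of that "register yourself only while busy" idea, assuming a
simple mutex-protected array of in-progress XIDs; it is only meant to
illustrate the proposal in the two messages above, not how PostgreSQL's
ProcArray actually works.

/* Backends register in a shared list only while they are in a transaction,
 * so building a snapshot scans the active entries rather than every
 * connection.  Hypothetical illustration only. */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_ACTIVE 64

static pthread_mutex_t registry_lock = PTHREAD_MUTEX_INITIALIZER;
static uint32_t active_xids[MAX_ACTIVE];
static int      n_active = 0;

/* Called when a backend starts a transaction: add yourself to the list. */
static void register_active(uint32_t xid)
{
    pthread_mutex_lock(&registry_lock);
    active_xids[n_active++] = xid;
    pthread_mutex_unlock(&registry_lock);
}

/* Called at commit/abort: remove yourself (swap with the last entry). */
static void unregister_active(uint32_t xid)
{
    pthread_mutex_lock(&registry_lock);
    for (int i = 0; i < n_active; i++)
        if (active_xids[i] == xid)
        {
            active_xids[i] = active_xids[--n_active];
            break;
        }
    pthread_mutex_unlock(&registry_lock);
}

/* Building a snapshot costs O(active transactions), not O(connections). */
static int snapshot_in_progress(uint32_t *out, int max)
{
    pthread_mutex_lock(&registry_lock);
    int n = n_active < max ? n_active : max;
    for (int i = 0; i < n; i++)
        out[i] = active_xids[i];
    pthread_mutex_unlock(&registry_lock);
    return n;
}

int main(void)
{
    register_active(200);
    register_active(201);
    unregister_active(200);

    uint32_t snap[MAX_ACTIVE];
    int n = snapshot_in_progress(snap, MAX_ACTIVE);
    printf("snapshot sees %d in-progress xid(s), first = %u\n", n, snap[0]);
    return 0;
}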
On Thu, 4 Jun 2009, Mark Mielke wrote:
Kevin Grittner wrote:
James Mansion wrote:
I know that if you do use a large number of threads, you have to be
pretty adaptive. In our Java app that pulls data from 72 sources and
replicates it to eight, plus feeding it to filters which determine
what
On 6/4/09 3:08 PM, "Kevin Grittner" wrote:
> James Mansion wrote:
>> I'm sorry, but (in particular) UNIX systems have routinely
>> managed large numbers of runnable processes where the run queue
>> lengths are long without such an issue.
>
> Well, the OP is looking at tens of thousands of con
On Thu, 4 Jun 2009, Robert Haas wrote:
On Wed, Jun 3, 2009 at 5:09 PM, Scott Carey wrote:
On 6/3/09 11:39 AM, "Robert Haas" wrote:
On Wed, Jun 3, 2009 at 2:12 PM, Scott Carey wrote:
Postgres could fix its connection scalability issues -- that is entirely
independent of connection pooling.
Kevin Grittner wrote:
James Mansion wrote:
Kevin Grittner wrote:
Sure, but the architecture of those products is based around all
the work being done by "engines" which try to establish affinity to
different CPUs, and loop through the various tasks to be done. You
don't get a context
James Mansion wrote:
>> they spend a lot of time spinning around queue access to see if
>> anything has become available to do -- which causes them not to
>> play nice with other processes on the same box.
> UNIX systems have routinely managed large numbers of runnable
> processes where the run queue lengths are long without such an issue.
James Mansion wrote:
> Kevin Grittner wrote:
>> Sure, but the architecture of those products is based around all
>> the work being done by "engines" which try to establish affinity to
>> different CPUs, and loop through the various tasks to be done. You
>> don't get a context switch storm becaus
Kevin Grittner wrote:
Sure, but the architecture of those products is based around all the
work being done by "engines" which try to establish affinity to
different CPUs, and loop through the various tasks to be done. You
don't get a context switch storm because you normally have the number
of engines
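For readers unfamiliar with that Sybase-style model, here is a rough Linux
sketch of an "engine" loop (hypothetical illustration only, not code from
Sybase or PostgreSQL): a fixed number of worker processes, each pinned to a
CPU with sched_setaffinity(), each of which would loop over a shared task
queue rather than dedicating one OS process to every connection.

/* Build with something like: cc engines.c   (Linux-specific) */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

#define NENGINES 4    /* normally sized to the number of CPUs/cores */

static void engine_main(int engine_no)
{
    cpu_set_t mask;
    CPU_ZERO(&mask);
    CPU_SET(engine_no, &mask);               /* pin this engine to one CPU */
    if (sched_setaffinity(0, sizeof(mask), &mask) != 0)
        perror("sched_setaffinity");

    /* A real engine would loop here pulling tasks for many sessions off a
     * shared queue; this stand-in just reports where it is running. */
    for (int i = 0; i < 3; i++)
    {
        printf("engine %d handling a task on CPU %d\n",
               engine_no, sched_getcpu());
        usleep(1000);
    }
    exit(0);
}

int main(void)
{
    for (int i = 0; i < NENGINES; i++)
        if (fork() == 0)
            engine_main(i);                  /* child becomes engine i */

    for (int i = 0; i < NENGINES; i++)
        wait(NULL);
    return 0;
}

The point of the model is that the number of schedulable entities matches the
hardware, so adding sessions adds queue entries rather than runnable processes.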
On Thu, Jun 4, 2009 at 2:04 PM, Scott Carey wrote:
> To clarify if needed:
>
> I'm not saying the two issues are unrelated. I'm saying that the
> relationship between connection pooling and a database is multi-dimensional,
> and the scalability improvement does not have a hard dependency on
> con
On 6/4/09 3:57 AM, "Robert Haas" wrote:
> On Wed, Jun 3, 2009 at 5:09 PM, Scott Carey wrote:
>> On 6/3/09 11:39 AM, "Robert Haas" wrote:
>>> On Wed, Jun 3, 2009 at 2:12 PM, Scott Carey wrote:
Postgres could fix its connection scalability issues -- that is entirely
independent of con
On Wed, Jun 3, 2009 at 5:09 PM, Scott Carey wrote:
> On 6/3/09 11:39 AM, "Robert Haas" wrote:
>> On Wed, Jun 3, 2009 at 2:12 PM, Scott Carey wrote:
>>> Postgres could fix its connection scalability issues -- that is entirely
>>> independent of connection pooling.
>>
>> Really? I'm surprised. I
It's not that trivial with Oracle either. I guess you had to use shared
servers to get to that amount of sessions. Most of the time they're not
activated by default (dispatchers is set to 0).
Granted, they are part of the 'main' product, so you just have to set up
dispatchers, shared servers, circu
On 6/3/09 11:39 AM, "Robert Haas" wrote:
> On Wed, Jun 3, 2009 at 2:12 PM, Scott Carey wrote:
>> Postgres could fix its connection scalability issues -- that is entirely
>> independent of connection pooling.
>
> Really? I'm surprised. I thought the two were very closely related.
> Could you
On Wed, Jun 3, 2009 at 2:12 PM, Scott Carey wrote:
> Postgres could fix its connection scalability issues -- that is entirely
> independent of connection pooling.
Really? I'm surprised. I thought the two were very closely related.
Could you expand on your thinking here?
...Robert
Just to say that you don't need a mega server to keep thousands of connections
with Oracle; it's just trivial. Nor do you need CPU affinity and other stuff
you may or may not need with Sybase :-)
Regarding PostgreSQL, I think it'll only benefit from having an integrated
connection pooler, as it'll make all populations happy
On 6/3/09 10:45 AM, "Kevin Grittner" wrote:
> Dimitri wrote:
>> A few weeks ago I tested a customer application on 16 cores with Oracle:
>> - 20,000 sessions in total
>> - 70,000 queries/sec
>>
>> without any problem on a mid-range Sun box + Solaris 10..
>
> I'm not sure what point you are t
Dimitri wrote:
> A few weeks ago I tested a customer application on 16 cores with Oracle:
> - 20,000 sessions in total
> - 70,000 queries/sec
>
> without any problem on a mid-range Sun box + Solaris 10..
I'm not sure what point you are trying to make. Could you elaborate?
(If it's that Orac
A few weeks ago I tested a customer application on 16 cores with Oracle:
- 20,000 sessions in total
- 70,000 queries/sec
without any problem on a mid-range Sun box + Solaris 10..
Rgds,
-Dimitri
On 6/3/09, Kevin Grittner wrote:
> James Mansion wrote:
>
>> I'm sure most of us evaluating Postgres
James Mansion wrote:
> I'm sure most of us evaluating Postgres from a background in Sybase
> or SQLServer would regard 5000 connections as no big deal.
Sure, but the architecture of those products is based around all the
work being done by "engines" which try to establish affinity to
differen
Greg Smith wrote:
3500 active connections across them. That doesn't work, and what
happens
is exactly the sort of context switch storm you're showing data for.
Think about it for a minute: how many of those can really be doing
work at any time? 32, that's how many. Now, you need some multip
On Sat, 30 May 2009, Scott Marlowe wrote:
8.04 was a frakking train wreck in many ways. It wasn't until 8.04.2
came out that it was even close to useable as a server OS, and even
then, not for databases yet. It's still got broken bits and pieces
marked "fixed in 8.10"... Uh, hello, it's your
Grzegorz Jaśkiewicz wrote:
>
> I thought that's mostly where the difference is between PostgreSQL and
> Oracle: the ability to handle more transactions and better scalability.
>
Which were you suggesting had this "better scalability"?
I recall someone summarizing to a CFO where I used to work:
"Or
On 5/31/09 9:37 AM, "Fabrix" wrote:
>
>
> 2009/5/29 Scott Carey
>>
>> On 5/28/09 6:54 PM, "Greg Smith" wrote:
>>
>>> 2) You have very new hardware and a very old kernel. Once you've done the
>>> above, if you're still not happy with performance, at that point you
>>> should consider using
2009/5/29 Scott Carey
>
> On 5/28/09 6:54 PM, "Greg Smith" wrote:
>
> > 2) You have very new hardware and a very old kernel. Once you've done
> the
> > above, if you're still not happy with performance, at that point you
> > should consider using a newer one. It's fairly simple to build a Linu
On Sat, May 30, 2009 at 9:41 PM, Greg Smith wrote:
> On Fri, 29 May 2009, Scott Carey wrote:
>
>> There are operations/IT people who won't touch Ubuntu etc with a ten foot pole
>> yet for production.
>
> The only thing I was suggesting is that because 2.6.28 is the latest Ubuntu
> kernel, that means i
On Fri, 29 May 2009, Scott Carey wrote:
There are operations/IT people who won't touch Ubuntu etc with a ten foot pole
yet for production.
The only thing I was suggesting is that because 2.6.28 is the latest
Ubuntu kernel, that means it's gotten a lot more exposure and testing
than, say, other o
On 5/28/09 6:54 PM, "Greg Smith" wrote:
> 2) You have very new hardware and a very old kernel. Once you've done the
> above, if you're still not happy with performance, at that point you
> should consider using a newer one. It's fairly simple to build a Linux
> kernel using the same basic kern
On Fri, May 29, 2009 at 3:45 PM, Fabrix wrote:
>
> Which is better and more complete, and which has more features?
> What do you recommend, pgbouncer or pgpool?
>
>>
In your case, where you're looking to just get the connection overhead
off of the machine, pgBouncer is probably going to be more efficient
2009/5/29 Scott Mead
> 2009/5/29 Greg Smith
>
>> On Fri, 29 May 2009, Grzegorz Jaśkiewicz wrote:
>>
>> if it is implemented better somewhere else, shouldn't that make it
>>> obvious that postgresql should solve it internally?
>>>
>>
>> Opening a database connection has some overhead to it that
On Fri, May 29, 2009 at 12:20 PM, Scott Mead
wrote:
> This sounds like a dirty plug (sorry sorry sorry, it's for informative
> purposes only)...
(Commercial applications mentioned deleted for brevity.)
Just sounded like useful information to me. I'm not anti-commercial,
just anti-marketing spe
2009/5/29 Greg Smith
> On Fri, 29 May 2009, Grzegorz Jaśkiewicz wrote:
>
> if it is implemented better somewhere else, shouldn't that make it
>> obvious that postgresql should solve it internally?
>>
>
> Opening a database connection has some overhead to it that can't go away
> without losing *
On Fri, 29 May 2009, Fabrix wrote:
This application is not closing the connection; the development team
is making the change to close the connection after getting the job
done. So most connections are in an idle state. How much would this help?
Could this be the real problem?
Ah, now
2009/5/28 Greg Smith
> On Thu, 28 May 2009, Flavio Henrique Araque Gurgel wrote:
>
>> It is 2.6.24. We had to apply the kswapd patch also. It's important
>> especially if you see your system % going as high as 99% in top and losing
>> control of the machine. I have read something about 2.6.28 having this
>> patch accepted in mainstream
On Fri, 29 May 2009, Grzegorz Jaśkiewicz wrote:
if it is implemented better somewhere else, shouldn't that make it
obvious that postgresql should solve it internally?
Opening a database connection has some overhead to it that can't go away
without losing *something* in the process that you w
2009/5/29 Grzegorz Jaśkiewicz :
> damn I agree with you Scott. I wish I had enough cash here to employ
> Tom and other pg magicians to improve performance for all of us ;)
>
> The thing is though, postgresql is mostly used by companies that either
> don't have that sort of cash but still like to get the
damn I agree with you Scott. I wish I had enough cash here to employ
Tom and other pg magicians to improve performance for all of us ;)
The thing is though, postgresql is mostly used by companies that either
don't have that sort of cash but still like to get the performance,
or companies that have 'why
2009/5/29 Grzegorz Jaśkiewicz :
> 2009/5/29 Scott Marlowe :
>
>>
>> Both Oracle and PostgreSQL have fairly heavy backend processes, and
>> running hundreds of them on either database is a mistake. Sure,
>> Oracle can handle more transactions and scales a bit better, but no
>> one wants to have t
2009/5/29 Scott Marlowe :
>
> Both Oracle and PostgreSQL have fairly heavy backend processes, and
> running hundreds of them on either database is a mistake. Sure,
> Oracle can handle more transactions and scales a bit better, but no
> one wants to have to buy a 128 way E15K to handle the load
2009/5/29 Grzegorz Jaśkiewicz :
> 2009/5/29 Scott Marlowe :
>
>>> if it is implemented better somewhere else, shouldn't that make it
>>> obvious that postgresql should solve it internally? It is really
>>> annoying to hear all the time that you should add an additional path of
>>> execution to an already
2009/5/29 Scott Marlowe :
>> if it is implemented better somewhere else, shouldn't that make it
>> obvious that postgresql should solve it internally? It is really
>> annoying to hear all the time that you should add an additional path of
>> execution to an already complex stack, and rely on more code
2009/5/29 Grzegorz Jaśkiewicz :
> On Fri, May 29, 2009 at 2:54 AM, Greg Smith wrote:
>
>> The PostgreSQL connection handler is known to be bad at handling high
>> connection loads compared to the popular pooling projects, so you really
>> shouldn't throw this problem at it. While kernel problems
On Fri, May 29, 2009 at 2:54 AM, Greg Smith wrote:
> The PostgreSQL connection handler is known to be bad at handling high
> connection loads compared to the popular pooling projects, so you really
> shouldn't throw this problem at it. While kernel problems stack on top of
> that, you really sho
On Thu, 28 May 2009, Flavio Henrique Araque Gurgel wrote:
It is 2.6.24. We had to apply the kswapd patch also. It's important
especially if you see your system % going as high as 99% in top and
losing control of the machine. I have read something about 2.6.28 having
this patch accepted in mainstream
On Thu, May 28, 2009 at 7:04 PM, Fabrix wrote:
>> I would ask for your kernel version. uname -a please?
>
> sure, and thanks for your answer Flavio...
>
> uname -a
> Linux SERVIDOR-A 2.6.18-92.el5 #1 SMP Tue Apr 29 13:16:15 EDT 2008 x86_64
> x86_64 x86_64 GNU/Linux
>
> cat /etc/redhat-release
> Red Hat Enterprise Linux Server release 5
I would ask for your kernel version. uname -a please?
> sure, and thanks for your answer Flavio...
>
> uname -a
> Linux SERVIDOR-A 2.6.18-92.el5 #1 SMP Tue Apr 29 13:16:15 EDT 2008 x86_64
> x86_64 x86_64 GNU/Linux
>
> cat /etc/redhat-release
> Red Hat Enterprise Linux Server release 5
2009/5/28 Flavio Henrique Araque Gurgel
> - "Scott Marlowe" escreveu:
> > On Thu, May 28, 2009 at 12:50 PM, Fabrix wrote:
> > >
> > > HI.
> > >
> > > Has anyone had experience of bad performance with Postgres on a server
> > > with many processors?
>
> I had.
>
> > > but I have exper
- "Scott Marlowe" escreveu:
> On Thu, May 28, 2009 at 12:50 PM, Fabrix wrote:
> >
> > HI.
> >
> > Has anyone had experience of bad performance with Postgres on a server
> > with many processors?
I had.
> > but I have experienced problems with another server that has 8 CPUS quad
2009/5/28 Scott Mead
> On Thu, May 28, 2009 at 4:53 PM, Fabrix wrote:
>
>>
>>
>>>
>>> Wow, that's some serious context-switching right there - 300k context
>>> switches a second mean that the processors are spending a lot of their
>>> time fighting for CPU time instead of doing any real work.
>>
Thanks Scott
2009/5/28 Scott Marlowe
> On Thu, May 28, 2009 at 12:50 PM, Fabrix wrote:
> >
> > HI.
> >
> > Has anyone had experience of bad performance with Postgres on a server
> > with many processors?
>
> Seems to depend on the processors and chipset a fair bit.
>
> > I have a server
On Thu, May 28, 2009 at 2:53 PM, Fabrix wrote:
> Yes, I have max_connections = 5000.
> I can lower it, but I need at least 3500 connections.
Whoa, that's a lot. Can you look into connection pooling of some sort?
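For readers wondering what a pooler changes in practice, here is a minimal
libpq sketch (hypothetical example code, not pgbouncer or pgpool) of the
pattern a pooler enforces: a small fixed set of server connections reused for
many short requests, instead of thousands of mostly idle backends. The
connection string and pool size are placeholders.

/* Build with something like: cc pool.c -lpq */
#include <libpq-fe.h>
#include <stdio.h>
#include <stdlib.h>

#define POOL_SIZE 8          /* a handful of backends, not 3500 */

static PGconn *pool[POOL_SIZE];

static void pool_init(const char *conninfo)
{
    for (int i = 0; i < POOL_SIZE; i++)
    {
        pool[i] = PQconnectdb(conninfo);
        if (PQstatus(pool[i]) != CONNECTION_OK)
        {
            fprintf(stderr, "connection %d failed: %s",
                    i, PQerrorMessage(pool[i]));
            exit(1);
        }
    }
}

/* Hand out connections round-robin; a real pooler tracks busy/idle state. */
static PGconn *pool_get(void)
{
    static int next = 0;
    return pool[next++ % POOL_SIZE];
}

int main(void)
{
    /* conninfo is a placeholder: adjust host/db/user for your setup. */
    pool_init("dbname=postgres");

    /* Simulate many short client requests sharing the same few backends. */
    for (int req = 0; req < 100; req++)
    {
        PGconn   *conn = pool_get();
        PGresult *res = PQexec(conn, "SELECT 1");
        if (PQresultStatus(res) != PGRES_TUPLES_OK)
            fprintf(stderr, "query failed: %s", PQerrorMessage(conn));
        PQclear(res);
    }

    for (int i = 0; i < POOL_SIZE; i++)
        PQfinish(pool[i]);
    return 0;
}

An external pooler such as pgBouncer does essentially this transparently,
multiplexing many client connections onto a few dozen server backends.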
On Thu, May 28, 2009 at 4:53 PM, Fabrix wrote:
>
>
>>
>> Wow, that's some serious context-switching right there - 300k context
>> switches a second mean that the processors are spending a lot of their
>> time fighting for CPU time instead of doing any real work.
>
>
There is a bug in the quad c
Thanks David...
2009/5/28 David Rees
> On Thu, May 28, 2009 at 11:50 AM, Fabrix wrote:
> > Monitoring (nmon, htop, vmstat) shows that everything is fine (memory, HD,
> > eth, etc.) except that the processors regularly climb to 100%.
>
> What kind of load are you putting the server under when this ha
On Thu, May 28, 2009 at 12:50 PM, Fabrix wrote:
>
> HI.
>
> Has anyone had experience of bad performance with Postgres on a server
> with many processors?
Seems to depend on the processors and chipset a fair bit.
> I have a server with 4 dual-core CPUs and it gives me very good performance
On Thu, May 28, 2009 at 11:50 AM, Fabrix wrote:
> Monitoring (nmon, htop, vmstat) shows that everything is fine (memory, HD,
> eth, etc.) except that the processors regularly climb to 100%.
What kind of load are you putting the server under when this happens?
> I can see that the processes are waiting