On Tue, 22 Apr 2008, Rafael Barrera Oro wrote:
> Hello, I have a PostgreSQL server running and from time to time it gets
> painfully slow. When this happens I usually connect [...]

The usual information you should always include when posting messages here
is PostgreSQL and operating system versions.
On Sun, 20 Apr 2008, James Mansion wrote:
> Are you suggesting that the disk subsystem has already decided on its
> strategy for a set of seeks and writes and will not insert new
> instructions into an existing elevator plan until it is completed and it
> looks at the new requests?

No, just that eac[...]
Adonias Malosso wrote:
> Hi all,
> split --lines=1000
> And running the copy I receive the error on the 5th file:
> psql:/srv/www/htdocs/import/script_q2.sql:122: ERROR: invalid string
> enlargement request size 65536
> CONTEXT: COPY temp_q2, line 3509639: ""9367276";"4";"DANIEL DO
> CARMO [...]
On Mon, Apr 21, 2008 at 5:50 AM, Adrian Moisey
<[EMAIL PROTECTED]> wrote:
> Hi
>
> # ps -ef | grep idle | wc -l
> 87
> # ps -ef | grep SELECT | wc -l
> 5
>
>
> I have 2 web servers which connect to PGPool which connects to our postgres
> db. I have noticed that idle connections seem to take u[...]
Bryan Buecking wrote:
On Tue, Apr 22, 2008 at 10:55:19AM -0500, Erik Jones wrote:
> Are you referring to PHP's persistent connections? Do not use those.
> Here's a thread that details the issues with why not:
> http://archives.postgresql.org/pgsql-general/2007-08/msg00660.php

Thanks for that article, very [...]
"Adonias Malosso" <[EMAIL PROTECTED]> writes:
> I´m running a copy for a 37G CSV and receiving the following error:
> "invalid string enlargement request size 65536"
AFAICS this would only happen if you've got an individual line of COPY
data exceeding 1GB. (PG versions later than 8.2 give a sligh
Hi all,

I'm running a copy for a 37G CSV and receiving the following error:
"invalid string enlargement request size 65536"
The file has about 70 million lines with 101 columns, all of them varchar.
When I run the command with the whole file I receive the error after loading
about 29 million lines.
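One way to localize the offending row is the approach hinted at with split --lines above: break the file into chunks and run a separate \copy per chunk, so a failure only aborts one chunk. A minimal sketch, using a tiny stand-in file and placeholder database/table names (the real load would use a much larger --lines value):

```shell
# Demo in a temp dir: make a small stand-in file, split it into
# fixed-size chunks, and load each chunk with its own \copy so that a
# bad row aborts only one chunk. Database/table names are placeholders.
cd "$(mktemp -d)"
seq 1 10 > huge.csv               # stand-in for the real 37G file

split --lines=3 huge.csv chunk_   # real data: --lines=1000000 or so

for f in chunk_*; do
    # \copy runs client-side; on failure, note the chunk and continue
    psql -d mydb -c "\\copy temp_q2 FROM '$f' WITH DELIMITER ';' CSV" \
        || echo "load failed on $f" >&2
done
```

When a chunk fails, the reported line number is relative to that chunk, which narrows the bad row far faster than re-running the whole 37G load.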
Hello, I have a PostgreSQL server running and from time to time it gets
painfully slow. When this happens I usually connect to the server and
run a "top" command; the output I get is filled with lines like the
following:

71872 pgsql    1  40   48552K 42836K sbwait   1:41  4.79% postgres
about 2300 connections in idle
(ps auxwww | grep postgres | grep idle | wc -l)
[...]
The server that connects to the db is an apache server using persistent
connections. MaxClients is 2048, thus the high number of connections
needed. The application was written in PHP using the Pear DB class.
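The counting pipeline quoted in the thread ("... | grep postgres | idle") is missing a grep; a working version, wrapped in a function (the function name is ours, not from the thread) so it can be exercised against canned ps output:

```shell
# Count idle PostgreSQL backends from `ps` output. The bracket trick
# '[p]ostgres' keeps the grep process itself out of the matches, and
# `grep -c` replaces the `grep ... | wc -l` pair.
count_idle() {
    grep '[p]ostgres' | grep -c 'idle'
}

# usage: ps auxwww | count_idle
```

This relies on PostgreSQL rewriting its process titles with the connection state, which is the default (update_process_title) on the versions discussed here.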
Erik Jones wrote:
> max_connections = 2400

That is WAY too high. Get a real pooler, such as pgpool, and drop that
down to 1000 and test from there. I see you mentioned 500 concurrent
connections. Are each of those connections actually doing something?
My guess is that once you cut down on th[...]
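For reference, the pooling being recommended here is configured in pgpool-II with roughly the following knobs (illustrative values, not tuned recommendations): pgpool caps backend connections at num_init_children x max_pool, so Apache's MaxClients can stay high while PostgreSQL sees far fewer than 2400 backends.

```ini
# pgpool.conf (pgpool-II) -- illustrative sizing, not a recommendation
num_init_children = 100   # concurrent client connections pgpool accepts
max_pool = 4              # cached backend connections per pgpool child
connection_cache = on     # reuse backends instead of reconnecting
# PostgreSQL then needs max_connections >= num_init_children * max_pool
```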
Bryan Buecking <[EMAIL PROTECTED]> writes:
> On Tue, Apr 22, 2008 at 10:55:19AM -0500, Erik Jones wrote:
>> That is WAY too high. Get a real pooler, such as pgpool, and drop
>> that down to 1000 and test from there.
> I agree, but the number of idle connections doesn't seem to affect
> performance [...]
On Tue, Apr 22, 2008 at 10:10 AM, Bryan Buecking <[EMAIL PROTECTED]> wrote:
>
> I agree, but the number of idle connections doesn't seem to affect
> performance, only memory usage. I'm trying to lessen the load of
> connection setup. But it sounds like this tax is minimal?

Not entirely true. There ar[...]
On Tue, Apr 22, 2008 at 01:21:03PM -0300, Rodrigo Gonzalez wrote:
> Are tables vacuumed often?

How often is often? Right now the db is vacuumed once a day.
--
Bryan Buecking http://www.starling-software.com
--
Sent via pgsql-performance mailing list (pgsql-performance@post[...]
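Rather than a single nightly VACUUM, letting autovacuum run continuously usually keeps up with churn; on 8.3 it is enabled by default, with thresholds along these lines (8.3 defaults shown for reference):

```ini
# postgresql.conf (8.3 defaults shown)
autovacuum = on
autovacuum_naptime = 1min               # delay between checks of each db
autovacuum_vacuum_scale_factor = 0.2    # vacuum after ~20% of a table churns
autovacuum_analyze_scale_factor = 0.1   # analyze after ~10% of a table changes
```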
Are tables vacuumed often?

Bryan Buecking wrote:
> On Tue, Apr 22, 2008 at 10:55:19AM -0500, Erik Jones wrote:
>> On Apr 22, 2008, at 10:31 AM, Bryan Buecking wrote:
>>> max_connections = 2400
>> That is WAY too high. Get a real pooler, such as pgpool, and drop
>> that down to 1000 and [...]
Bryan,

> about 2300 connections in idle
> (ps auxwww | grep postgres | grep idle | wc -l)

that is about 2300 processes being task-scheduled by your kernel, each
of them using > 1 MB of RAM and some other resources; are you sure
that this is what you want?

Usual recommended design for a web applicati[...]
On Tue, Apr 22, 2008 at 10:55:19AM -0500, Erik Jones wrote:
> On Apr 22, 2008, at 10:31 AM, Bryan Buecking wrote:
>
> > max_connections = 2400
>
> That is WAY too high. Get a real pooler, such as pgpool, and drop
> that down to 1000 and test from there.

I agree, but the number of idle connecti[...]
On Tue, Apr 22, 2008 at 08:41:09AM -0700, Joshua D. Drake wrote:
> On Wed, 23 Apr 2008 00:31:01 +0900
> Bryan Buecking <[EMAIL PROTECTED]> wrote:
>
> > at any given time there is about 5-6 postgres in startup
> > (ps auxwww | grep postgres | grep startup | wc -l)
> >
> > about 2300 connections i[...]
On Wed, 23 Apr 2008 00:31:01 +0900
Bryan Buecking <[EMAIL PROTECTED]> wrote:
> at any given time there is about 5-6 postgres in startup
> (ps auxwww | grep postgres | grep startup | wc -l)
>
> about 2300 connections in idle
> (ps auxwww | grep postgres | grep idle | wc -l)
>
> and loads of "FATAL: sorry, t[...]
Hi,

I'm running into a performance problem where a Postgres db is running
at 99% CPU (4 cores) with about 500 concurrent connections doing various
queries from a web application. This problem started about a week ago,
and has been steadily going downhill. I have been tweaking the config a
bit, mai[...]
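The thread never shows which settings were actually tweaked, but the handful usually worth checking first on an overloaded box looks something like this. The values are illustrative placeholders to be sized against actual RAM, and the memory-unit syntax requires 8.2 or later:

```ini
# postgresql.conf -- illustrative starting points, size to your hardware
shared_buffers = 512MB        # ~25% of RAM is a common starting point
work_mem = 4MB                # per sort/hash, per backend: keep it small
                              # when hundreds of backends exist
effective_cache_size = 2GB    # planner hint: OS cache + shared_buffers
max_connections = 200         # with a pooler in front, far below 2400
```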
On Tue, Apr 22, 2008 at 7:42 AM, Thomas Spreng <[EMAIL PROTECTED]> wrote:
>
> I think I'll upgrade PostgreSQL to the latest 8.3 version in the next
> few days anyway, along with a memory upgrade (from 1.5GB to 4GB) and a
> new 2x RAID-1 (instead of RAID-5) disk configuration. I hope that this
> [...]
On 19.04.2008, at 19:11, Christopher Browne wrote:
> Martha Stewart called it a Good Thing when [EMAIL PROTECTED] (Thomas
> Spreng) wrote:
>> On 16.04.2008, at 17:42, Chris Browne wrote:
>> What I meant is if there are no INSERT's or UPDATE's going on it
>> shouldn't affect SELECT queries, or am I wrong?
On Tue, 22 Apr 2008, Mark Mielke wrote:
> The poster I responded to said that the memory required for a hash join
> was relative to the number of distinct values, not the number of rows.
> They gave an example of millions of rows, but only a few distinct
> values. Above, you agree with me that it wo[...]
On Mon, 21 Apr 2008, Mark Mielke wrote:
> This surprises me - hash values are lossy, so it must still need to
> confirm against the real list of values, which at a minimum should
> require references to the rows to check against?
> Is PostgreSQL doing something beyond my imagination? :-)

Not too far [...]
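The mechanics being argued over can be sketched with a toy hash join in awk; this is only an illustration, not PostgreSQL's implementation. Note that because the array here is keyed by the join value, duplicate build keys overwrite each other, so its memory tracks distinct values; a real hash join must keep every build-side row, which is exactly the point of contention above.

```shell
cd "$(mktemp -d)"
printf '1 one\n2 two\n' > build.txt        # small build side
printf '1 a\n2 b\n1 c\n3 d\n' > probe.txt  # larger probe side

# Build phase: load build.txt into an in-memory array keyed by column 1.
# Probe phase: stream probe.txt past it, emitting matching rows.
awk 'NR==FNR { h[$1] = $2; next }
     $1 in h { print $1, h[$1], $2 }' build.txt probe.txt
```

Only the build side is ever held in memory; the probe side streams through, which is why the planner wants the smaller input on the build side.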
That's correct; there are nightly (at least at the moment) processes that
insert around 2-3 million rows and delete about the same amount. I can see
that those 'checkpoints are occurring too frequently' messages are only
logged during that timeframe.

Perhaps you should increase the quantity of [...]
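The 'checkpoints are occurring too frequently' warning means the WAL segment budget is being exhausted faster than checkpoint_timeout elapses. Spacing checkpoints out for a bulk nightly load looks roughly like this (illustrative values, not a recommendation; checkpoint_completion_target requires 8.3+):

```ini
# postgresql.conf -- space checkpoints out for bulk nightly loads
checkpoint_segments = 32             # the default of 3 is far too low for bulk writes
checkpoint_timeout = 15min
checkpoint_completion_target = 0.9   # 8.3+: spread writes across the interval
checkpoint_warning = 30s             # threshold for the 'too frequently' log line
```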
On 19.04.2008, at 19:04, Scott Marlowe wrote:
> No, that will certainly NOT just affect write performance; if the
> postmaster is busy writing out checkpoints, that will block SELECT
> queries that are accessing whatever is being checkpointed.

What I meant is if there are no INSERT's or UPDATE's go[...]