Andy Colson wrote:
The oom-killer is kicking in at some point, so somebody is using too
much RAM. There should be messages or logs or something, right?
(I've never enabled the oom stuff so I don't know much about it.) But
the log messages might be helpful.
They probably won't be. The inform
Scott Marlowe wrote:
I wonder if having paid technical support to abuse leads to people
thinking they can treat other people like crap and get the answer they
want anyway...
You have technical support somewhere you get to abuse? For me it always
seems to be the other way around...
--
Greg S
Hi,
Try using dynamic SQL. The query will be faster in a function.
regards
Ram
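For readers wondering what the dynamic-SQL suggestion looks like in practice: in plpgsql, EXECUTE plans the statement at call time instead of reusing a plan cached before the temp table existed or was populated. A minimal sketch, with illustrative table and function names:

  CREATE OR REPLACE FUNCTION count_promoted() RETURNS bigint AS $$
  DECLARE
    result bigint;
  BEGIN
    -- EXECUTE forces run-time planning, after the temp table exists
    -- and has data, instead of reusing a cached plan
    EXECUTE 'SELECT count(*) FROM tmp_a WHERE promoted = 1' INTO result;
    RETURN result;
  END;
  $$ LANGUAGE plpgsql;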
- Original Message -
From: "Mridula Mahadevan"
To: "Tom Lane"
Cc:
Sent: Wednesday, February 03, 2010 2:23 AM
Subject: Re: [PERFORM] Queries within a function
Tom,
I cannot run execute because all these are temp tables with drop on auto
commit within a function.
Sorry again for my previous rough messages; some good people advised me to
post these problems again with a sincere and friendly attitude.
I think I should do this.
In recent projects I decided to use pg in some medium and big projects; as
those projects have finished, it has proven that I made
"Davor J." writes:
> Now, if one takes a subquery for "1", the optimizer evaluates it first
> (let's say to "1"), but then searches for it (sequentially) in every
> partition, which, for large partitions, can be very time-consuming and goes
> beyond the point of partitioning.
No, the optimizer
I have a relatively straightforward query that by itself isn't that
slow, but we have to run it up to 40 times on one webpage load, so it
needs to run much faster than it does. Here it is:
SELECT COUNT(*) FROM users, user_groups
WHERE users.user_group_id = user_groups.id AND NOT users.deleted AND
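The rest of the query is cut off above, but assuming each of the 40 runs counts users for a different group, a common fix is to fetch every group's count in one round trip and look them up in the application. A sketch under that assumption:

  SELECT user_groups.id, count(users.id) AS user_count
  FROM user_groups
  LEFT JOIN users
    ON users.user_group_id = user_groups.id
   AND NOT users.deleted
  GROUP BY user_groups.id;
  -- one query instead of 40; the LEFT JOIN keeps groups with zero users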
Let's say you have one partitioned table, "tbl_p", partitioned according to
the PK "p_pk". I have made something similar with triggers, following the
manual's instructions for making partitioned tables.
According to the manual, the optimizer searches the CHECKs of the partitions
to determine which table(s)
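For reference, a minimal sketch of the CHECK-based layout the manual describes (names illustrative). constraint_exclusion must be enabled, and the planner can only prove partitions irrelevant when the predicate is a constant it can inspect at plan time, which is why a subquery defeats it:

  CREATE TABLE tbl_p (p_pk int PRIMARY KEY, payload text);

  -- children carry non-overlapping CHECKs on the partitioning key
  CREATE TABLE tbl_p_1 (CHECK (p_pk <  1000000)) INHERITS (tbl_p);
  CREATE TABLE tbl_p_2 (CHECK (p_pk >= 1000000)) INHERITS (tbl_p);

  SET constraint_exclusion = on;
  SELECT * FROM tbl_p WHERE p_pk = 42;   -- scans only tbl_p_1
  SELECT * FROM tbl_p
   WHERE p_pk = (SELECT 42);             -- planner can't prove the value
                                         -- at plan time, so it scans both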
On 2010-02-02, Matt White wrote:
> I have a relatively straightforward query that by itself isn't that
> slow, but we have to run it up to 40 times on one webpage load, so it
> needs to run much faster than it does. Here it is:
>
> SELECT COUNT(*) FROM users, user_groups
> WHERE users.user_group_
On Feb 2, 6:06 am, Edgardo Portal wrote:
> On 2010-02-02, Matt White wrote:
>
> > I have a relatively straightforward query that by itself isn't that
> > slow, but we have to run it up to 40 times on one webpage load, so it
> > needs to run much faster than it does. Here it is:
>
> > SELECT COUNT
Kevin Grittner wrote:
> Rob wrote:
>
>
>> 8gb ram
>> ~240 active databases
>> 800+ db connections via tcp.
>>
>
> 8 GB RAM divided by 800 DB connections is 10 MB per connection. You
> seriously need to find some way to use connection pooling. I'm not
> sure the best way to do that wi
Scott Marlowe wrote:
> On Tue, Feb 2, 2010 at 12:11 PM, Rob wrote:
>
>> postgresql.conf:
>> max_connections = 1100
>> work_mem = 32MB
>>
>
> 32MB * 1000 = 32,000MB... And that's if you max out connections and
> they each only do 1 sort. If you're running many queries that run > 1
> sorts
Tom,
I cannot run execute because all these are temp tables with drop on auto
commit within a function. This should have something to do with running it in
a transaction; when I run them in autocommit mode (without a drop on
autocommit), the queries return in a few seconds.
-Original Message-
On 2/2/2010 1:11 PM, Rob wrote:
postgresql.conf:
max_connections = 1100
shared_buffers = 800MB
max_prepared_transactions = 0
work_mem = 32MB
maintenance_work_mem = 64MB
max_fsm_pages = 330
max_fsm_relations = 1
vacuum_cost_delay = 50ms
bgwriter_delay = 150ms
bgwriter_lru_maxpages = 250
b
Thanks Tom and Marcos.
More details -
I have 2 temp tables
Table a -
CREATE TABLE a (
id int primary key,
promoted int,
unq_str varchar )
Table b -
CREATE TABLE b (
id int primary key,
On 2/2/2010 1:11 PM, Rob wrote:
Linux 2.6.18-6-686-bigmem #1 SMP Thu Nov 5 17:30:05 UTC 2009 i686
GNU/Linux (Debian Etch)
8 MB RAM
4 Quad Core Intel(R) Xeon(R) CPU E5440 @ 2.83GHz stepping 06
L1 I cache: 32K, L1 D cache: 32K, L2 cache: 6144K
Well _there's_ your problem! Ya need
Rob wrote:
> 8gb ram
> ~240 active databases
> 800+ db connections via tcp.
8 GB RAM divided by 800 DB connections is 10 MB per connection. You
seriously need to find some way to use connection pooling. I'm not
sure the best way to do that with 240 active databases.
-Kevin
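As a sketch of what pooling could look like here (pgbouncer is one option; every value below is illustrative, not a tuned recommendation for this box):

  [databases]
  ; fallback entry: pool every database on the local server
  * = host=127.0.0.1 port=5432

  [pgbouncer]
  listen_port = 6432
  auth_type = md5
  auth_file = /etc/pgbouncer/userlist.txt
  pool_mode = transaction     ; hand the server connection back at commit
  max_client_conn = 1100      ; clients pgbouncer itself will accept
  default_pool_size = 5       ; server connections per database/user pair

With ~240 active databases even a small per-database pool multiplies out quickly, which is presumably the hesitation about the best approach here.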
On Tue, Feb 2, 2010 at 12:11 PM, Rob wrote:
> pg 8.3.9, Debian Etch, 8gb ram, quadcore xeon, megaraid (more details at end)
> ~240 active databases, 800+ db connections via tcp.
>
> Everything goes along fairly well, load average from 0.5 to 4.0. Disk
> IO is writing about 12-20 MB every 4 or 5 s
On 2/2/2010 1:03 PM, Matt White wrote:
On Feb 2, 6:06 am, Edgardo Portal wrote:
On 2010-02-02, Matt White wrote:
I have a relatively straightforward query that by itself isn't that
slow, but we have to run it up to 40 times on one webpage load, so it
needs to run much faster than it does. He
Richard Broersma writes:
> 2010/2/1 :
>> yet, this code is just for fun. Why? Because I found this code in one
>> article: PostgreSQL Concurrency Issues.
> I'm not familiar with the "code" that you are referring to. Do you
> have a hyper-link to it?
>> Tom Lane
>> Red Hat Database Group
>> R
Mridula Mahadevan wrote:
Hi,
I am running a bunch of queries within a function, creating some temp
tables and populating them. When the data exceeds, say, 100k the
queries start getting really slow and time out (30 min). When these are
run outside of a transaction (in auto commit mode), they r
On Tue, Feb 2, 2010 at 2:33 PM, Tom Lane wrote:
> Robert Haas writes:
>> Hmm, in that case, I think the problem is that this function has no
>> comment explaining its intended charter.
>
> That's certainly a big problem, but a comment won't fix the fact that
> the name is misleading. We need bot
Robert Haas writes:
> Hmm, in that case, I think the problem is that this function has no
> comment explaining its intended charter.
That's certainly a big problem, but a comment won't fix the fact that
the name is misleading. We need both a comment and a name change.
re
Mridula Mahadevan writes:
> I am running a bunch of queries within a function, creating some temp tables
> and populating them. When the data exceeds, say, 100k the queries start
> getting really slow and time out (30 min). When these are run outside of a
> transaction (in auto commit mode), they
pg 8.3.9, Debian Etch, 8gb ram, quadcore xeon, megaraid (more details at end)
~240 active databases, 800+ db connections via tcp.
Everything goes along fairly well, load average from 0.5 to 4.0. Disk
IO is writing about 12-20 MB every 4 or 5 seconds. Cache memory about
4gb. Then under load, we
Hello
look on http://blog.endpoint.com/2008/12/why-is-my-function-slow.html
Regards
Pavel Stehule
2010/2/2 Mridula Mahadevan :
> Hi,
>
> I am running a bunch of queries within a function, creating some temp
> tables and populating them. When the data exceeds, say, 100k the queries
> start gettin
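The short version of the linked post, as it seems to apply here: plpgsql plans each statement once and caches the plan, so statements planned before a temp table is populated (and before it has any statistics) can be planned badly. A sketch of the usual workaround, with illustrative names and data:

  BEGIN;
  CREATE TEMP TABLE tmp_a (id int PRIMARY KEY, promoted int, unq_str varchar)
    ON COMMIT DROP;
  INSERT INTO tmp_a
    SELECT g, g % 2, md5(g::text) FROM generate_series(1, 100000) g;
  ANALYZE tmp_a;  -- give the planner real row counts before the big queries;
                  -- combined with EXECUTE inside the function, the queries
                  -- are then planned against current statistics
  SELECT count(*) FROM tmp_a WHERE promoted = 1;
  COMMIT;         -- ON COMMIT DROP removes tmp_a here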
On Tuesday 02 February 2010 20:06:32 Robert Haas wrote:
> On Tue, Feb 2, 2010 at 1:34 PM, Andres Freund wrote:
> > For now it could - but it very well might be converted to sync_file_range
> > or similar, which would have different "side effects".
> >
> > As the potential code duplication is rathe
On Tue, Feb 2, 2010 at 1:34 PM, Andres Freund wrote:
> For now it could - but it very well might be converted to sync_file_range or
> similar, which would have different "side effects".
>
> As the potential code duplication is rather small I would prefer to describe
> the prime effect, not the sidee
2010/2/2 :
> UPDATE webpages SET hits = hits + 1 WHERE url ='some url ';
>
> when concurrent write transactions run at the read committed isolation
> level, the hits may end up wrong.
That should work fine. All updates for the same url will be serialized.
The rest I'm pretty uncertain about what you're de
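What "serialized" means concretely at read committed, sketched as two concurrent sessions (the table definition is illustrative):

  CREATE TABLE webpages (url text PRIMARY KEY, hits int NOT NULL DEFAULT 0);
  INSERT INTO webpages VALUES ('some url', 0);

  -- session A:
  BEGIN;
  UPDATE webpages SET hits = hits + 1 WHERE url = 'some url';  -- row locked

  -- session B, concurrently:
  UPDATE webpages SET hits = hits + 1 WHERE url = 'some url';
  -- blocks until A commits, then re-reads the row A updated and
  -- increments that value, so neither increment is lost

  -- session A:
  COMMIT;   -- B now proceeds; hits ends up 2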
On Tuesday 02 February 2010 19:14:40 Robert Haas wrote:
> On Tue, Feb 2, 2010 at 12:50 PM, Tom Lane wrote:
> > Andres Freund writes:
> >> On Tuesday 02 February 2010 18:36:12 Robert Haas wrote:
> >>> I took a look at this patch today and I agree with Tom that
> >>> pg_fsync_start() is a very conf
Hi,
I am running a bunch of queries within a function, creating some temp tables
and populating them. When the data exceeds, say, 100k the queries start getting
really slow and time out (30 min). When these are run outside of a
transaction (in auto commit mode), they run in a few seconds. Any idea
On Tue, Feb 2, 2010 at 12:50 PM, Tom Lane wrote:
> Andres Freund writes:
>> On Tuesday 02 February 2010 18:36:12 Robert Haas wrote:
>>> I took a look at this patch today and I agree with Tom that
>>> pg_fsync_start() is a very confusing name. I don't know what the
>>> right name is, but this doe
On Tue, Feb 2, 2010 at 9:20 AM, Alvaro Herrera
wrote:
> Scott Marlowe escribió:
>> 2010/2/1 :
>
>> Let's rewrite this assertion:
>> > * joke 1: insert operation would use a excluse lock on reference row by the
>> > foreign key . a big big big performance killer , i think this is a stupid
>> > des
On Fri, Jan 29, 2010 at 1:56 PM, Greg Stark wrote:
> On Tue, Jan 19, 2010 at 3:25 PM, Tom Lane wrote:
>> That function *seriously* needs documentation, in particular the fact
>> that it's a no-op on machines without the right kernel call. The name
>> you've chosen is very bad for those semantics
On Tuesday 02 February 2010 18:36:12 Robert Haas wrote:
> On Fri, Jan 29, 2010 at 1:56 PM, Greg Stark wrote:
> > On Tue, Jan 19, 2010 at 3:25 PM, Tom Lane wrote:
> >> That function *seriously* needs documentation, in particular the fact
> >> that it's a no-op on machines without the right kernel
Andres Freund writes:
> On Tuesday 02 February 2010 18:36:12 Robert Haas wrote:
>> I took a look at this patch today and I agree with Tom that
>> pg_fsync_start() is a very confusing name. I don't know what the
>> right name is, but this doesn't fsync so I don't think it should have
>> fsync in th
2010/2/1 :
> yet, this code is just for fun. Why? Because I found this code in one
> article: PostgreSQL Concurrency Issues.
I'm not familiar with the "code" that you are referring to. Do you
have a hyper-link to it?
> Tom Lane
> Red Hat Database Group
> Red Hat, Inc.
Also, I'm not sure wh
Scott Marlowe escribió:
> 2010/2/1 :
> Let's rewrite this assertion:
> > * joke 1: insert operation would use a excluse lock on reference row by the
> > foreign key . a big big big performance killer , i think this is a stupid
> > design .
>
> "problem #1: insert operation would use a excluse lo
2010/2/1 :
> hi, first, thanks for making such a good open-source db.
>
> recently, maybe half a year ago, i began to use pg in a big insurance
> project, and as the project went on,
>
> i found some performance problems in concurrent-write situations, so i did
> some research on concu
Glenn Maynard wrote:
On Mon, Feb 1, 2010 at 6:15 AM, Yeb Havinga wrote:
Stomp_steps is analyzed to 2902 rows, but when you run the query the actual
rows are 0. This means that either the highscore function is not called or
the number 0 is incorrect.
This SELECT returns 0 rows: it calls the fun
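For readers following along, the two numbers being compared come from EXPLAIN ANALYZE output; the plan line below is illustrative, not from the original thread:

  EXPLAIN ANALYZE SELECT * FROM stomp_steps;
  --  Seq Scan on stomp_steps  (cost=0.00..55.02 rows=2902 width=36)
  --                           (actual time=0.01..0.01 rows=0 loops=1)
  -- rows=2902 is the planner's estimate from ANALYZE statistics;
  -- rows=0 (actual) is what the node really returned at run time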