On Wed, 20 Oct 2004 23:43:54 +0100, Gary Doades <[EMAIL PROTECTED]> wrote:
> You will need to tell us the number of updates/deletes you are having. This will
> determine the vacuum needs. If the bulk of the data is inserted you may only need to
> analyze frequently, not vacuum.
>
> In order to get
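The distinction Gary is drawing, in command form (the table name is a
placeholder): ANALYZE only refreshes the planner's statistics, while
VACUUM also reclaims the space left behind by updates and deletes.

  ANALYZE tblbatchhistory;          -- cheap: refresh planner statistics
  VACUUM ANALYZE tblbatchhistory;   -- also reclaim dead rows from UPDATE/DELETE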
On Wed, 20 Oct 2004 20:49:54 +0100, Gary Doades <[EMAIL PROTECTED]> wrote:
> Is this the select(1) query? Please post an explain analyze for this and any other
> "slow"
> queries.
I think it took so long 'cause it wasn't cached. The second time I ran
it, it took less than a second. How can you tell?
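For reference, the kind of output Gary is asking for; running the same
statement twice back-to-back is also the easiest way to confirm the
caching effect described above (the query here is only a stand-in):

  EXPLAIN ANALYZE SELECT count(1) FROM tblbatchhistory_1;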
On Wed, 20 Oct 2004 19:59:38 +0100, Gary Doades <[EMAIL PROTECTED]> wrote:
> Hmm, that seems a bit slow. How big are the rows you are inserting? Have you checked
> the cpu and IO usage during the inserts? You will need to do some kind of cpu/IO
> monitoring to determine where the bottleneck is.
Th
On Wed, 20 Oct 2004 13:35:43 -0500, Bruno Wolff III <[EMAIL PROTECTED]> wrote:
> You might not need to do the vacuum fulls that often. If your hourly
> vacuums have a high enough fsm setting, they should be keeping the database
> from continually growing in size. At that point daily vacuum fulls may not
> be needed at all.
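The fsm setting Bruno mentions lives in postgresql.conf. The values below
are only illustrative; max_fsm_pages needs to cover at least the number of
dead-row pages your hourly vacuums report reclaiming:

  # postgresql.conf -- illustrative values, not recommendations
  max_fsm_pages = 200000     # pages with free space the server can remember
  max_fsm_relations = 1000   # tables and indexes tracked in the free space map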
On Wed, 20 Oct 2004 18:47:25 +0100, Gary Doades <[EMAIL PROTECTED]> wrote:
> What about triggers? Also constraints (check constraints, integrity
> constraints)? All of these will slow the inserts/updates down.
No triggers or constraints. There are some foreign keys, but the
tables that have the inserts
On Wed, 20 Oct 2004 09:52:25 -0600, Scott Marlowe <[EMAIL PROTECTED]> wrote:
> 1: Is the bulk insert being done inside of a single transaction, or as
> individual inserts?
The bulk insert is being done by COPY FROM STDIN. It copies in 100,000
rows at a time, then disconnects, reconnects, and copies the next batch.
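A sketch of one such batch (table and column names are hypothetical).
Note that COPY is already a single statement, so each batch is one
transaction on its own unless it is wrapped explicitly:

  BEGIN;
  COPY tblbatchhistory (iserverid, inummessages) FROM STDIN;
  ... 100,000 tab-separated rows ...
  \.
  COMMIT;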
On Wed, 20 Oct 2004 08:00:55 +0100, Gary Doades <[EMAIL PROTECTED]> wrote:
> Unlike many other database engines, the shared buffers of Postgres are
> not a private cache of the database data. They are a working area shared
> between all the backend processes. This needs to be tuned for the number
> of connections.
On Tue, 19 Oct 2004 17:42:16 -0400, Tom Lane <[EMAIL PROTECTED]> wrote:
> The short answer is no; the sweet spot for shared_buffers is usually on
> the order of 10000 buffers, and trying to go for "75% of RAM" isn't
> going to do anything except hurt. For the long answer see the
> pgsql-performance archives.
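In postgresql.conf terms, and assuming the default 8 KB page size, Tom's
figure works out to roughly this (effective_cache_size is the separate
knob that tells the planner how much data the OS cache holds; the value
below is only illustrative):

  shared_buffers = 10000          # ~80 MB of shared working space
  effective_cache_size = 65536    # ~512 MB expected in the kernel's cache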
I know this is kind of a debated question, but how much RAM do I give
postgres? I've seen many places say around 10-15%, and some say 25%. If
all this server is doing is running postgres, why can't I give it 75%+?
Should the limit be as much as possible, as long as the server doesn't
use any swap? Any thoughts?
On Tue, 19 Oct 2004 19:24:23 +0100, Gary Doades <[EMAIL PROTECTED]> wrote:
> I would have thought it would be better to fix the client application. If the
> app is not closing connections then you may be leaking handles and
> memory.
>
> What .NET provider is this? Are you sure it is not just normal connection
> pooling?
Is there a way to remove idle connections? My postgres server is
getting several hundred idle connections. It's due to a postgres .NET
provider not closing the connections properly. I don't want to kill
them all or restart postgres every time the connections go crazy.
-Josh
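One way to at least see the idle backends (this assumes
stats_command_string = true, which is what makes current_query report
'<IDLE>'):

  SELECT procpid, usename, current_query
  FROM pg_stat_activity
  WHERE current_query = '<IDLE>';

Each row's procpid identifies the backend operating-system process.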
On Tue, 19 Oct 2004 00:48:34 -0500, Thomas F. O'Connell
<[EMAIL PROTECTED]> wrote:
> Ah, yes. You're right.
>
> Presumably you could use the hidden query from \dn in psql to get the
> tables necessary for a script. That's the next best thing I can think
> of.
I wrote a script to do the vacuuming.
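For anyone after the same thing, one approach is to let the catalog
generate the commands (the schema name is a placeholder), then feed the
output back through psql:

  -- emit one VACUUM ANALYZE per table in the schema
  SELECT 'VACUUM ANALYZE ' || schemaname || '.' || tablename || ';'
  FROM pg_tables
  WHERE schemaname = 'myschema';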
On Mon, 18 Oct 2004 15:35:53 -0500, Thomas F. O'Connell
<[EMAIL PROTECTED]> wrote:
> Are you looking for something different than vacuumdb?
>
> http://www.postgresql.org/docs/7.4/static/app-vacuumdb.html
>
> -tfo
I think so. As far as I can tell, you can either vacuum a whole db or
individual tables.
Is there a way to vacuum all tables in a schema? Or will I need to
write a script to do it?
-Josh
On 17 Oct 2004 01:24:27 -0400, Greg Stark <[EMAIL PROTECTED]> wrote:
> Uh, not in Postgres. Perhaps you're thinking of another database system?
> In fact I suspect it's slowing down your system somewhat.
>
> --
> greg
So, there is no locking taking place during inserts at all? Or updates?
Also,
On Sat, 16 Oct 2004 10:20:35 -0600, Michael Fuhr <[EMAIL PROTECTED]> wrote:
> Glad you got it working.
>
> A question about your design: you appear to have a tblBatchHistory_X
> table for each iId value in tblServers. Is there a reason for doing
> that instead of having a single tblBatchHistory table?
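A sketch of the single-table layout Michael is suggesting; the columns
here are purely illustrative, with the server id folded in as a key
instead of being encoded in the table name:

  CREATE TABLE tblBatchHistory (
      iServerId    integer NOT NULL REFERENCES tblServers (iId),
      iNumMessages bigint,
      tsCreated    timestamp DEFAULT now()
  );
  -- queries then say WHERE iServerId = ... instead of picking a table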
On Sat, 16 Oct 2004 00:59:34 -0600, Michael Fuhr <[EMAIL PROTECTED]> wrote:
> Add some RAISE INFO statements to print variables' values at key
> places. I wonder if one of your SUMs is returning NULL, causing
> your addition to evaluate to NULL. If so, then perhaps you should
> use COALESCE to turn the NULL into zero.
I'm having a problem with a value coming out of a loop.
CREATE OR REPLACE FUNCTION funmessagespermintotal()
RETURNS int8 AS
'
DECLARE
    this_rServer     record;
    this_rSum        record;
    this_iSum        bigint;
    this_iTotal      bigint;
    this_iMsgsPerMin bigint;
    this_sQuery      varchar(500);
BEGIN
    this_iTotal := 0;
    FOR th
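A minimal sketch of how the loop might continue, applying Michael's
COALESCE suggestion; it assumes per-server tblBatchHistory_<iId> tables
as described above and a hypothetical iNumMessages column (quotes are
shown single here for readability; inside the function's single-quoted
body each would have to be doubled, since 7.4 has no dollar quoting):

  FOR this_rServer IN SELECT iId FROM tblServers LOOP
      -- COALESCE keeps an empty table from turning the total into NULL
      this_sQuery := 'SELECT COALESCE(SUM(iNumMessages), 0) AS iSum '
                  || 'FROM tblBatchHistory_' || this_rServer.iId;
      FOR this_rSum IN EXECUTE this_sQuery LOOP
          this_iSum := this_rSum.iSum;
      END LOOP;
      this_iTotal := this_iTotal + this_iSum;
  END LOOP;
  RETURN this_iTotal;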
On Fri, 15 Oct 2004 09:11:05 +0800, Keow Yeong Huat Joseph
<[EMAIL PROTECTED]> wrote:
>
> Hi, Can anyone tell me how to unsubscribe my address from the mailing list.
> Thanks.
>
> Regards
> Joseph
You can do it at the same place you signed up:
http://webmail.postgresql.org/mj/mj_wwwusr?dom
Is there a way to do "dirty" reads on postgres?
If there is an insert of a million records or so, is there a way to
select from those records before it's committed?
-Josh
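For the record, PostgreSQL's MVCC design has no dirty-read mode; the
statement below is accepted for SQL-standard compatibility but silently
behaves as READ COMMITTED, so uncommitted rows from another session are
never visible:

  SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;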
On Fri, 8 Oct 2004 18:14:50 -0600, Michael Fuhr <[EMAIL PROTECTED]> wrote:
> This looks like tblheadings has a foreign key reference to itself.
> Is this the *exact* error message, cut-and-pasted? What do your
> table definitions look like?
>
> --
> Michael Fuhr
There isn't a foreign key reference to itself.
On Fri, 8 Oct 2004 23:15:24 +0200, Martijn van Oosterhout
<[EMAIL PROTECTED]> wrote:
> You haven't posted the exact error message. You'll have to if you want
> people to properly explain what's going on...
update or delete on "tblheadings" violates foreign key constraint "$1"
on "tblheadings" DETAIL:
How can I delete rows from a table that has foreign key constraints?
Here is how I have it set up.
I have 2 tables: tableA has fields and id's, and tableB has fields that
reference tableA's id's. I'm not able to do this:
BEGIN TRANSACTION;
DELETE FROM tableB
WHERE tableAid = 5;
DELETE FROM tableA
WHERE id = 5;
COMMIT;
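If the intent is for deletes on tableA to clean up tableB automatically,
the foreign key can be declared with a cascade instead (the constraint
name here is hypothetical, and the existing constraint would be dropped
first):

  ALTER TABLE tableB
      ADD CONSTRAINT tableb_tableaid_fkey
      FOREIGN KEY (tableAid) REFERENCES tableA (id) ON DELETE CASCADE;
  -- a single DELETE FROM tableA WHERE id = 5; then removes both sides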
On 30 Sep 2004 16:40:58 -0400, Greg Stark <[EMAIL PROTECTED]> wrote:
>
> Josh Close <[EMAIL PROTECTED]> writes:
>
> > To reduce the request size (currently 34037760 bytes),
> ...
> > root # sysctl -a | grep shm
> > kernel.shmmni = 4096
> > kerne
On Wed, 29 Sep 2004 14:40:34 -0500, Josh Close <[EMAIL PROTECTED]> wrote:
> Is there a way to do COPY FROM STDIN from sql? So, remotely I could
> run the copy command and somehow push the info over instead of having
> it on the server.
For those who are curious, I found a php implementation.
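psql's client-side \copy gives the same effect without any server-side
file: it reads a local file and drives COPY ... FROM STDIN over the
connection (table and file names are placeholders):

  \copy tblbatchhistory from 'local_file.dat'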
On Thu, 30 Sep 2004 13:39:57 -0500, Josh Close <[EMAIL PROTECTED]> wrote:
> I keep getting this error.
>
> DETAIL: Failed system call was shmget(key=5432001, size=34037760, 03600).
> HINT: This error usually means that PostgreSQL's request for a shared
> memory segment exceeded your kernel's SHMMAX parameter.
I keep getting this error.
DETAIL: Failed system call was shmget(key=5432001, size=34037760, 03600).
HINT: This error usually means that PostgreSQL's request for a shared
memory segment exceeded your kernel's SHMMAX parameter. You can
either reduce the request size or reconfigure the kernel with larger SHMMAX.
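On Linux the limit can be raised on the fly; the value below is only an
example (anything comfortably above the 34037760-byte request works):

  sysctl -w kernel.shmmax=134217728    # ~128 MB
  # add the same line to /etc/sysctl.conf to survive a reboot:
  # kernel.shmmax = 134217728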
Is there a way to do COPY FROM STDIN from sql? So, remotely I could
run the copy command and somehow push the info over instead of having
it on the server.
-Josh
What is the best "module" to use for mssql to set up a linked server
with postgresql?
-Josh