Re: [PERFORM] 15,000 tables - next step

2005-12-05 Thread Jan Wieck
On 12/4/2005 4:33 AM, Michael Riess wrote: I will do the following: - switch to 10k buffers on a 1GB machine, 20k buffers on a 2GB machine - try to optimize my connection pools to remember which apps (groups of 30 tables) were accessed, so that there is a better chance of using caches - "swap o
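The buffer counts Jan and Michael discuss translate into postgresql.conf roughly as below. This is a sketch for the 8.1-era defaults (8 kB per buffer); the values are the ones suggested in the thread, not tuned recommendations:

```sql
-- postgresql.conf fragment (8.1), 8 kB per buffer
-- shared_buffers = 10000   -- ~80 MB, suggested for the 1 GB machine
-- shared_buffers = 20000   -- ~160 MB, suggested for the 2 GB machine
```

Raising shared_buffers also requires raising the kernel's SHMMAX limit, and the server must be restarted for the change to take effect.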

Re: [PERFORM] 15,000 tables - next step

2005-12-04 Thread Michael Riess
William Yu wrote: > Michael Riess wrote: >>> Well, I'd think that's where your problem is. Not only do you have a >>> (relatively speaking) small server -- you also share it with other >>> very-memory-hungry services! That's not a situation I'd like to be in. >>> Try putting Apache and Tomcat else

Re: [PERFORM] 15,000 tables - next step

2005-12-04 Thread William Yu
Michael Riess wrote: Well, I'd think that's where your problem is. Not only do you have a (relatively speaking) small server -- you also share it with other very-memory-hungry services! That's not a situation I'd like to be in. Try putting Apache and Tomcat elsewhere, and leave the bulk of the 1GB

Re: [PERFORM] 15,000 tables - next step

2005-12-03 Thread Jan Wieck
On 12/3/2005 11:41 AM, Michael Riess wrote: Alvaro Herrera wrote: Michael Riess wrote: Shared memory ... I currently use 1500 buffers for 50 connections, and performance really suffered when I used 3000 buffers. The problem is that it is a 1GB machine, and Apache + Tomcat need about 400MB.

Re: [PERFORM] 15,000 tables - next step

2005-12-03 Thread Jaime Casanova
On 12/3/05, Michael Riess <[EMAIL PROTECTED]> wrote: > Alvaro Herrera wrote: > > Michael Riess wrote: > > > >> Shared memory ... I currently use 1500 buffers for 50 connections, and > >> performance really suffered when I used 3000 buffers. The problem is > >> that it is a 1GB machine, and Apache

Re: [PERFORM] 15,000 tables - next step

2005-12-03 Thread Michael Riess
Alvaro Herrera wrote: Michael Riess wrote: Shared memory ... I currently use 1500 buffers for 50 connections, and performance really suffered when I used 3000 buffers. The problem is that it is a 1GB machine, and Apache + Tomcat need about 400MB. Well, I'd think that's where your problem is

Re: [PERFORM] 15,000 tables - next step

2005-12-03 Thread Alvaro Herrera
Michael Riess wrote: > Shared memory ... I currently use 1500 buffers for 50 connections, and > performance really suffered when I used 3000 buffers. The problem is > that it is a 1GB machine, and Apache + Tomcat need about 400MB. Well, I'd think that's where your problem is. Not only do you have

Re: [PERFORM] 15,000 tables - next step

2005-12-03 Thread Michael Riess
Jan Wieck wrote: On 12/2/2005 6:01 PM, Michael Riess wrote: Hi, thanks for your comments so far - I appreciate it. I'd like to narrow down my problem a bit: As I said in the other thread, I estimate that only 20% of the 15,000 tables are accessed regularly. So I don't think that vacuumin

Re: [PERFORM] 15,000 tables - next step

2005-12-03 Thread Jan Wieck
On 12/2/2005 6:01 PM, Michael Riess wrote: Hi, thanks for your comments so far - I appreciate it. I'd like to narrow down my problem a bit: As I said in the other thread, I estimate that only 20% of the 15,000 tables are accessed regularly. So I don't think that vacuuming or the number of

Re: [PERFORM] 15,000 tables

2005-12-02 Thread Francisco Reyes
Michael Riess writes: Sorry, I should have included that info in the initial post. You're right in that most of these tables have a similar structure. But they are independent and can be customized by the users. How about creating 50 databases and giving each its own tablespace? It's not onl

Re: [PERFORM] 15,000 tables

2005-12-02 Thread Andrew Sullivan
On Thu, Dec 01, 2005 at 08:34:43PM +0100, Michael Riess wrote: > Well, we tried switching to daily VACUUM ANALYZE and weekly VACUUM FULL, > but the database got considerably slower near the end of the week. If you have your FSM configured correctly and you are vacuuming tables often enough for yo
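Andrew's point about configuring the FSM correctly is especially relevant with 15,000 tables, because the 8.1 default for max_fsm_relations (1000) cannot even track them all. A sketch with illustrative, untuned numbers:

```sql
-- postgresql.conf fragment (8.1); values illustrative, not tuned
-- max_fsm_relations = 20000   -- must exceed the count of vacuumed tables + indexes
-- max_fsm_pages = 500000      -- compare against the numbers VACUUM VERBOSE reports
```

Running VACUUM VERBOSE on the whole database prints the actual free-space-map demand at the end of its output, which is the usual way to check whether these limits are high enough.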

Re: [PERFORM] 15,000 tables

2005-12-02 Thread Jan Wieck
On 12/1/2005 2:34 PM, Michael Riess wrote: VACUUM FULL was probably always overkill, unless "always" includes versions prior to 7.3... Well, we tried switching to daily VACUUM ANALYZE and weekly VACUUM FULL, but the database got considerably slower near the end of the week. This indicates th

Re: [PERFORM] 15,000 tables

2005-12-02 Thread Alex Stapleton
On 2 Dec 2005, at 14:16, Alex Stapleton wrote: On 1 Dec 2005, at 16:03, Tom Lane wrote: Michael Riess <[EMAIL PROTECTED]> writes: (We NEED that many tables, please don't recommend to reduce them) No, you don't. Add an additional key column to fold together different tables of the same

Re: [PERFORM] 15,000 tables

2005-12-02 Thread Alex Stapleton
On 1 Dec 2005, at 16:03, Tom Lane wrote: Michael Riess <[EMAIL PROTECTED]> writes: (We NEED that many tables, please don't recommend to reduce them) No, you don't. Add an additional key column to fold together different tables of the same structure. This will be much more efficient than

Re: [PERFORM] 15,000 tables

2005-12-02 Thread Michael Stone
On Fri, Dec 02, 2005 at 03:15:00AM -0500, Ron wrote: I've seen enough HD failures that I tend to be paranoid and log the metadata of fs dedicated to WAL as well, but that may very well be overkill. Especially since it wouldn't gain anything. Journalling doesn't give you any advantage whatsoever

Re: [PERFORM] 15,000 tables

2005-12-02 Thread Ron
Agreed, and I apologize for the imprecision of my post below. I should have written: "Best practice seems to be to use a journaling fs and log metadata only and put it on separate dedicated spindles." I've seen enough HD failures that I tend to be paranoid and log the metadata of fs dedicated

Re: [PERFORM] 15,000 tables

2005-12-01 Thread David Lang
On Thu, 1 Dec 2005, Craig A. James wrote: So say I need 10,000 tables, but I can create tablespaces. Wouldn't that solve the performance problem caused by Linux's (or ext2/3's) problems with large directories? For example, if each user creates (say) 10 tables, and I have 1000 users, I could

Re: [PERFORM] 15,000 tables

2005-12-01 Thread Craig A. James
So say I need 10,000 tables, but I can create tablespaces. Wouldn't that solve the performance problem caused by Linux's (or ext2/3's) problems with large directories? For example, if each user creates (say) 10 tables, and I have 1000 users, I could create 100 tablespaces, and assign groups o
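Craig's grouping idea can be sketched in SQL. All names and paths here are hypothetical; the point is just that each tablespace maps a group of users to its own directory, so no single directory accumulates tens of thousands of files:

```sql
-- Hypothetical: one tablespace per group of ~10 users (100 tablespaces
-- for 1000 users), keeping per-directory file counts manageable.
CREATE TABLESPACE ts_group01 LOCATION '/var/lib/pgsql/ts/group01';

CREATE TABLE user0001_documents (
    id     serial PRIMARY KEY,
    title  text,
    body   text
) TABLESPACE ts_group01;
```

Whether this actually helps depends on the filesystem; the ext2/3 large-directory slowdown discussed in this thread is the scenario it targets.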

Re: [PERFORM] 15,000 tables

2005-12-01 Thread Alvaro Herrera
[EMAIL PROTECTED] wrote: > What I noticed is autovacuum not working as it should. I had 8.1 > running with autovacuum for just 2 days or so and got warnings in pgadmin > that my tables would need a vacuum. Hum, so how is autovacuum's documentation lacking? Please read it critically a

Re: [PERFORM] 15,000 tables

2005-12-01 Thread Scott Marlowe
On Thu, 2005-12-01 at 13:34, Michael Riess wrote: > > Michael Riess <[EMAIL PROTECTED]> writes: > >>> On 12/1/05, Michael Riess <[EMAIL PROTECTED]> wrote: > we are currently running a postgres server (upgraded to 8.1) which > has one large database with approx. 15,000 tables. Unfortunatel

Re: [PERFORM] 15,000 tables

2005-12-01 Thread Merlin Moncure
> we are currently running a postgres server (upgraded to 8.1) which has > one large database with approx. 15,000 tables. Unfortunately performance > suffers from that, because the internal tables (especially that which > holds the attribute info) get too large. > > (We NEED that many tables, plea

Re: [PERFORM] 15,000 tables

2005-12-01 Thread Jaime Casanova
On 12/1/05, Michael Riess <[EMAIL PROTECTED]> wrote: > > Michael Riess <[EMAIL PROTECTED]> writes: > >>> On 12/1/05, Michael Riess <[EMAIL PROTECTED]> wrote: > we are currently running a postgres server (upgraded to 8.1) which > has one large database with approx. 15,000 tables. Unfortuna

Re: [PERFORM] 15,000 tables

2005-12-01 Thread Michael Riess
Michael Riess <[EMAIL PROTECTED]> writes: On 12/1/05, Michael Riess <[EMAIL PROTECTED]> wrote: we are currently running a postgres server (upgraded to 8.1) which has one large database with approx. 15,000 tables. Unfortunately performance suffers from that, because the internal tables (especiall

Re: [PERFORM] 15,000 tables

2005-12-01 Thread Gavin M. Roy
Heh looks like I left a trailing thought... My post wasn't saying don't use journaled filesystems, but rather that it can be slower than non-journaled filesystems, and I don't consider recovery time from a crash to be a factor in determining the speed of reads and writes on the data. That

Re: [PERFORM] 15,000 tables

2005-12-01 Thread Tom Lane
Ron <[EMAIL PROTECTED]> writes: > Agreed. Also the odds of fs corruption or data loss are higher in a > non journaling fs. Best practice seems to be to use a journaling fs > but to put the fs log on dedicated spindles separate from the actual > fs or pg_xlog. I think we've determined that bes

Re: [PERFORM] 15,000 tables

2005-12-01 Thread Gavin M. Roy
Here's a fairly recent post on reiserfs (and performance): http://archives.postgresql.org/pgsql-novice/2005-09/msg7.php I'm still digging into the performance of ext2 vs. journaled filesystems, as I know I've seen it before. Gavin My point was not in doing an fsck, but rather in On Dec 1, 20

Re: [PERFORM] 15,000 tables

2005-12-01 Thread Ron
Agreed. Also the odds of fs corruption or data loss are higher in a non journaling fs. Best practice seems to be to use a journaling fs but to put the fs log on dedicated spindles separate from the actual fs or pg_xlog. Ron At 01:40 PM 12/1/2005, Tino Wildenhain wrote: Am Donnerstag, den 0
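On ext3, the best practice Ron describes (journaling fs, metadata-only logging, journal on dedicated spindles) looks roughly like the commands below. Device names and mount point are hypothetical, and data=writeback gives metadata-only journaling:

```sql
-- Illustrative shell commands, shown here as a sketch (device names hypothetical):
--   mke2fs -O journal_dev /dev/sdd1           -- dedicated external journal device
--   mke2fs -j -J device=/dev/sdd1 /dev/sdc1   -- data filesystem using that journal
--   mount -o data=writeback,noatime /dev/sdc1 /pgdata
```

As the thread notes, this protects filesystem metadata after a crash; PostgreSQL's own WAL, not the filesystem journal, is what protects the data itself.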

Re: [PERFORM] 15,000 tables

2005-12-01 Thread Tino Wildenhain
On Thursday, 2005-12-01 at 10:07 -0800, Gavin M. Roy wrote: > Hi Michael, > > I'm a fan of ReiserFS, and I could be wrong, but I believe using a > journaling filesystem for the PgSQL database could be slowing things > down. Have a 200G+ database, someone pulling the power plug or a regular

Re: [PERFORM] 15,000 tables

2005-12-01 Thread Gavin M. Roy
Hi Michael, I'm a fan of ReiserFS, and I could be wrong, but I believe using a journaling filesystem for the PgSQL database could be slowing things down. Gavin On Dec 1, 2005, at 6:51 AM, Michael Riess wrote: Hi David, with 15,000 tables you are talking about a LOT of files to hold thes

Re: [PERFORM] 15,000 tables

2005-12-01 Thread Chris Browne
Michael Riess <[EMAIL PROTECTED]> writes: >> On 12/1/05, Michael Riess <[EMAIL PROTECTED]> wrote: >>> we are currently running a postgres server (upgraded to 8.1) which >>> has one large database with approx. 15,000 tables. Unfortunately >>> performance suffers from that, because the internal table

Re: [PERFORM] 15,000 tables

2005-12-01 Thread me
hi michael Have you ANALYZEd your database? VACUUMed? Of course ... before 8.1 we routinely did a vacuum full analyze each night. As of 8.1 we use autovacuum. What I noticed is autovacuum not working as it should. I had 8.1 running with autovacuum for just 2 days or so and got w
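One common reason 8.1's integrated autovacuum appears to do nothing is that it silently requires the row-level statistics collector. A minimal postgresql.conf sketch (the naptime value is illustrative):

```sql
-- postgresql.conf fragment (8.1): autovacuum needs row-level stats
-- stats_start_collector = on
-- stats_row_level = on
-- autovacuum = on
-- autovacuum_naptime = 60              -- seconds between checks; illustrative
-- autovacuum_vacuum_scale_factor = 0.4 -- 8.1 default; lower it for busier tables
```

If stats_row_level is off, autovacuum has no per-table activity counters to act on, which matches the symptom of tables still showing vacuum warnings after days of uptime.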

Re: [PERFORM] 15,000 tables

2005-12-01 Thread Michael Riess
Hi Tom, Michael Riess <[EMAIL PROTECTED]> writes: (We NEED that many tables, please don't recommend reducing them) No, you don't. Add an additional key column to fold together different tables of the same structure. This will be much more efficient than managing that key at the filesystem

Re: [PERFORM] 15,000 tables

2005-12-01 Thread Guido Neitzer
On 01.12.2005, at 17:04, Michael Riess wrote: No. Our database contains tables for web content management systems. The server hosts approx. 500 cms applications, and each of them has approx. 30 tables. Just out of curiosity: Are the "about 30 tables" of similar schemas or do they dif

Re: [PERFORM] 15,000 tables

2005-12-01 Thread Michael Riess
Hi, On 12/1/05, Michael Riess <[EMAIL PROTECTED]> wrote: Hi, we are currently running a postgres server (upgraded to 8.1) which has one large database with approx. 15,000 tables. Unfortunately performance suffers from that, because the internal tables (especially that which holds the attribut

Re: [PERFORM] 15,000 tables

2005-12-01 Thread Jaime Casanova
On 12/1/05, Tom Lane <[EMAIL PROTECTED]> wrote: > Michael Riess <[EMAIL PROTECTED]> writes: > > (We NEED that many tables, please don't recommend reducing them) > > No, you don't. Add an additional key column to fold together different > tables of the same structure. This will be much more effic

Re: [PERFORM] 15,000 tables

2005-12-01 Thread Tom Lane
Michael Riess <[EMAIL PROTECTED]> writes: > (We NEED that many tables, please don't recommend reducing them) No, you don't. Add an additional key column to fold together different tables of the same structure. This will be much more efficient than managing that key at the filesystem level, whic
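Tom's suggestion can be sketched in SQL. Table and column names here are hypothetical; the pattern is one shared table per schema shape, keyed by application, instead of one table per application:

```sql
-- Hypothetical fold of 500 per-application "documents" tables into one.
CREATE TABLE documents (
    app_id  integer NOT NULL,  -- replaces the per-application table
    doc_id  integer NOT NULL,
    title   text,
    body    text,
    PRIMARY KEY (app_id, doc_id)
);

-- Each application's queries simply add the key column:
SELECT title FROM documents WHERE app_id = 42 AND doc_id = 7;
```

With 500 applications x 30 table shapes this reduces 15,000 tables to 30, which also shrinks the system catalogs (pg_attribute in particular) that the original post identified as the bottleneck.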

Re: [PERFORM] 15,000 tables

2005-12-01 Thread Jaime Casanova
On 12/1/05, Michael Riess <[EMAIL PROTECTED]> wrote: > Hi, > > we are currently running a postgres server (upgraded to 8.1) which has > one large database with approx. 15,000 tables. Unfortunately performance > suffers from that, because the internal tables (especially that which > holds the attrib

Re: [PERFORM] 15,000 tables

2005-12-01 Thread Michael Riess
Hi David, with 15,000 tables you are talking about a LOT of files to hold these (30,000 files with one index each, and each database being small enough not to need more than one file to hold it), on linux ext2/3 this many files in one directory will slow you down horribly. We use ReiserFS,

Re: [PERFORM] 15,000 tables

2005-12-01 Thread Michael Riess
Hi David, incidentally: The directory which holds our database currently contains 73883 files ... do I get a prize or something? ;-) Regards, Mike ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings

Re: [PERFORM] 15,000 tables

2005-12-01 Thread David Lang
On Thu, 1 Dec 2005, Michael Riess wrote: Hi, we are currently running a postgres server (upgraded to 8.1) which has one large database with approx. 15,000 tables. Unfortunately performance suffers from that, because the internal tables (especially that which holds the attribute info) get too