On Tue, 2005-03-15 at 02:14, Paul Newman wrote:
> Hi,
>
> I'm a convert from Firebird, so I consider myself a newbie to
> PostgreSQL. We have a requirement to host 400-600 companies' data
> inside a single database, for connection pooling and scalability
> reasons as well as our business logic requirements. We have therefore
> been very busy adding a company id to each relevant table and
> adjusting all our queries to be company-specific, such that if a
> company says "show me all my clients" we would use a query such as:
>
>     SELECT * FROM client WHERE comp_id = 'CompA';
>
> But I've just discovered PostgreSQL schemas...
>
> If I were to create a schema for each company, and therefore remove
> the comp_id from our tables and SQL, would this work? Could we have
> 600 schemas in the db? Would performance be hindered? If this is OK,
> what is the best way to maintain all the db structures? In other
> words, if I have an update script, do I need to run it against each
> schema?
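For illustration, the two approaches contrasted above look roughly like this (the schema, table, and column definitions here are hypothetical, not taken from the poster's actual setup):

```sql
-- Single-table approach: every query must filter on comp_id.
SELECT * FROM client WHERE comp_id = 'CompA';

-- Per-schema approach: one schema per company, each holding its own tables.
CREATE SCHEMA compa;
CREATE TABLE compa.client (id integer PRIMARY KEY, name text);

-- Point the session at the company's schema; queries then need no comp_id.
SET search_path TO compa;
SELECT * FROM client;  -- resolves to compa.client
```

With `search_path` set per connection, the application's queries stay identical across companies, which is what makes the per-schema design attractive here.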
I just set up a simple test that created 500 or so schemas, and the performance seemed fine to me. I imagine your system catalogs will be a fair bit bigger than if you had it all in one table, but the payoff is that when you're looking for one customer's data you don't have to go through a huge table of 599 other customers just to get at it. I'm betting multiple schemas will be a win as long as you aren't needing to union a bunch of schemas together all the time.

And yes, an update script would have to hit each schema.

If the data between different companies aren't related, then schemas would seem a nice solution to me.
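Hitting each schema with the same update can be automated by looping over the catalogs. A hedged sketch, assuming the schemas follow a hypothetical `comp%` naming convention and each contains a `client` table (note `DO` blocks, `format()`, and `ADD COLUMN IF NOT EXISTS` come from PostgreSQL releases much newer than the 2005-era server discussed here):

```sql
-- Sketch: apply one DDL change to every company schema.
DO $$
DECLARE
    s text;
BEGIN
    FOR s IN
        SELECT nspname FROM pg_namespace WHERE nspname LIKE 'comp%'
    LOOP
        -- %I safely quotes the schema name as an identifier.
        EXECUTE format(
            'ALTER TABLE %I.client ADD COLUMN IF NOT EXISTS phone text', s);
    END LOOP;
END;
$$;
```

On versions without `DO`, the equivalent is a shell loop invoking psql once per schema with the script and an appropriate `search_path`.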