[PERFORM] Performance for relatively large DB
Hi. The company I'm working for is surveying the jungle of DBMSs, since we are due to implement the next generation of our system. The company's business relies on the DBMS to store data that is accessed through the web during the daytime (only SELECTs, sometimes with joins, etc.). The data is a collection of objects that are for sale; it consists of basic text information about these objects together with some group information, etc. The data is updated once every night.

There are about 4M posts in the database (one table), and it is expected to grow by at least 50% over a reasonably long time. How well would PostgreSQL fit our needs? We are using Pervasive SQL today and suspect that it is much too small for us. We have some problems with latency, especially when updating information, with complicated conditions in SELECTs, and under concurrent usage.

Best Regards
Robert Bengtsson
Project Manager

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings
Re: [PERFORM] Performance for relatively large DB
Hi Chris. Thanks for the answer. Sorry that I was a bit unclear.

1) We update around 20,000 posts per night.

2) What I meant was that we suspect the DBMS we are using today, Pervasive SQL, is much too small for us. That's why we're looking for alternatives. Today we base our solution largely on query-specific tables created at night: instead of running queries directly against the "post" table (with 4-6M rows) during the daytime, we have the data pre-aligned in several much smaller tables. This is done just to make the current DBMS cope with our amount of data. What I am particularly interested in is whether we can expect to run all our SELECT queries directly against the "post" table with PostgreSQL.

3) How well does PostgreSQL work in load-balancing environments? Is it built in?

Best Regards
Robert Bengtsson
Project Manager
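The nightly "pre-aligned table" workaround described in point 2 can be sketched as below. This is a minimal illustration using Python's sqlite3 module as a stand-in for the real DBMS; all table and column names (post, post_by_group, group_name, price) are hypothetical, not taken from the poster's schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# The large base table that daytime queries would otherwise hit directly.
cur.execute(
    "CREATE TABLE post (id INTEGER PRIMARY KEY, title TEXT,"
    " group_name TEXT, price INTEGER)"
)
cur.executemany(
    "INSERT INTO post (title, group_name, price) VALUES (?, ?, ?)",
    [("Chair", "furniture", 40), ("Table", "furniture", 120), ("Lamp", "lighting", 25)],
)

# Nightly batch job: rebuild a smaller, query-specific table so that
# daytime SELECTs avoid scanning the full post table.
cur.execute("DROP TABLE IF EXISTS post_by_group")
cur.execute(
    "CREATE TABLE post_by_group AS "
    "SELECT group_name, COUNT(*) AS n_posts, MIN(price) AS min_price "
    "FROM post GROUP BY group_name"
)

# Daytime query runs against the pre-aligned table instead.
rows = cur.execute(
    "SELECT group_name, n_posts, min_price FROM post_by_group ORDER BY group_name"
).fetchall()
print(rows)  # → [('furniture', 2, 40), ('lighting', 1, 25)]
```

In PostgreSQL the same pattern could be expressed as a nightly `CREATE TABLE ... AS SELECT` run from cron; whether such pre-aligned tables would still be necessary, which is what the poster is asking, depends on indexing and the planner's handling of direct queries on the full table.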