Re: [PERFORM] extremely low memory usage

2005-08-21 Thread Marko Ristola
...ling option. I looked at your problem. One of the problems is that you need to keep certain data cached in memory all the time. That could be solved by running SELECT COUNT(*) FROM to_be_cached; as a cron job: it loads the whole table into the Linux kernel's page cache. Marko Ristola
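
A minimal sketch of such a cache-warming job, assuming a database named mydb and the table name from the message:

    -- Forces a full sequential scan, pulling every page of the table
    -- through the kernel's page cache.
    SELECT count(*) FROM to_be_cached;
    -- Scheduled from cron, e.g. every 10 minutes (hypothetical entry):
    --   */10 * * * *  psql -d mydb -c "SELECT count(*) FROM to_be_cached;"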

Re: [PERFORM] extremely low memory usage

2005-08-20 Thread Marko Ristola
...and data=writeback for Ext3. Only writeback risks data integrity. Ext3 is the only journaled filesystem I know of that fulfills these fundamental data-integrity guarantees. Personally, I like such filesystems, even though they mean less speed. Marko Ristola
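
A hedged sketch of the mount options in question; the device and mount point below are made up:

    # /etc/fstab: data=ordered keeps ext3's guarantee that file data
    # reaches disk before the metadata that references it; data=writeback
    # journals only metadata and is faster, but can expose stale data
    # after a crash.
    /dev/sdb1  /var/lib/pgsql  ext3  noatime,data=ordered  0  2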

Re: [HACKERS] [PERFORM] Bad n_distinct estimation; hacks suggested?

2005-04-28 Thread Marko Ristola
...um values and the histogram of the 100 distinct values). Marko Ristola Greg Stark wrote: "Dave Held" <[EMAIL PROTECTED]> writes: Actually, it's more to characterize how large a sample we need. For example, if we sample 0.005 of disk pages, and get an estimate, and then...
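
A rough way to play with sample size versus estimate quality in a modern PostgreSQL (TABLESAMPLE postdates this thread by a decade; the table and column names are hypothetical):

    -- Sample roughly 0.5% of the table's disk pages and estimate
    -- the distinct count from that sample.
    SELECT count(DISTINCT customer_id)
    FROM orders TABLESAMPLE SYSTEM (0.5);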

Re: [HACKERS] [PERFORM] Bad n_distinct estimation; hacks suggested?

2005-04-24 Thread Marko Ristola
...find it out by other means than checking at least two million rows? This means that the user should be able to specify a lower bound on the number of rows to sample. Regards, Marko Ristola Tom Lane wrote: Josh Berkus writes: Overall, our formula is inherently conservative...
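
The knob PostgreSQL actually exposes for this is the per-column statistics target, which controls how many rows ANALYZE samples; a minimal sketch with hypothetical table and column names:

    -- A larger statistics target makes ANALYZE sample more rows.
    ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 1000;
    ANALYZE orders;
    -- Inspect the resulting estimate:
    SELECT n_distinct FROM pg_stats
    WHERE tablename = 'orders' AND attname = 'customer_id';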

Re: [ODBC] [PERFORM] Joel's Performance Issues WAS : Opteron vs Xeon

2005-04-24 Thread Marko Ristola
...decreased with UseDeclareFetch=1 by increasing the Fetch parameter to 2048: with Fetch=1 you get bad performance with lots of rows, but if you fetch data from the server once per 2048 rows, the network latency is paid only once per 2048-row block. Regards, Marko Ristola Joel Fradkin wrote: Hate...
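
What UseDeclareFetch=1 with Fetch=2048 amounts to at the SQL level, sketched with hypothetical cursor and table names:

    BEGIN;
    DECLARE rows_cur CURSOR FOR SELECT * FROM big_table;
    FETCH 2048 FROM rows_cur;  -- one network round trip per 2048-row block
    -- ...repeat the FETCH until it returns no rows...
    CLOSE rows_cur;
    COMMIT;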

Re: [PERFORM] Bad n_distinct estimation; hacks suggested?

2005-04-22 Thread Marko Ristola
...random numbers. So, for example, if you have one million pages but the upper bound for the random numbers is one hundred thousand pages, the statistics can end up skewed. Or some random number generator has, for example, only 32000 different values. Regards, Marko Ristola Josh Berkus wrote: Tom, An...
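
A small sketch of the failure mode: if the generator's draws are capped below the table's page count, the high pages are simply never visited:

    -- No matter how many draws we make, a generator capped at 32767
    -- never reaches a page number above that bound.
    SELECT max((random() * 32767)::int) AS highest_reachable_page
    FROM generate_series(1, 100000);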

Re: [PERFORM] Foreign key slows down copy/insert

2005-04-14 Thread Marko Ristola
...foreign key checks are okay, complete the COMMIT operation. 4. If a foreign key check fails, go into the ROLLBACK NEEDED state. Maybe Tom Lane meant the same. set option delayed_foreign_keys=true; BEGIN; insert 1000 rows; COMMIT; Regards, Marko Ristola Christopher Kings-Lynne wrote: My problem...
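
PostgreSQL's built-in form of delayed foreign-key checking is deferrable constraints; a minimal sketch with hypothetical table and constraint names:

    ALTER TABLE detail ADD CONSTRAINT detail_master_fk
      FOREIGN KEY (master_id) REFERENCES master (id)
      DEFERRABLE INITIALLY IMMEDIATE;

    BEGIN;
    SET CONSTRAINTS detail_master_fk DEFERRED;
    -- ...insert 1000 rows here...
    COMMIT;  -- all deferred FK checks run now; a failure aborts the transaction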