ling option.
I looked at your problem.
One of the problems is that you need to keep certain data
cached in memory all the time.
That could be solved by running
SELECT COUNT(*) FROM to_be_cached;
as a cron job. The sequential scan pulls the whole table into the Linux
kernel's page cache.
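As a rough sketch (the database name mydb and the ten-minute interval are
placeholders, and psql must be able to connect non-interactively), the
crontab entry could look like this:

  */10 * * * *  psql -d mydb -c 'SELECT COUNT(*) FROM to_be_cached;' >/dev/null 2>&1

This is only a workaround: it keeps the table's pages warm in the OS cache
by rescanning the table periodically, at the cost of the repeated
sequential scans.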
Marko Ristola
Tom Lane
and
data=writeback for Ext3. Only the writeback risks data integrity.
Ext3 is the only journaled filesystem I know of that fulfills
these fundamental data integrity guarantees. Personally, I like
such filesystems, even though they mean less speed.
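As an illustration (the device and mount point below are placeholders),
the trade-off shows up directly in the /etc/fstab mount options:
data=ordered (the ext3 default) keeps the ordering guarantee, while
data=writeback gives some of it up for speed:

  /dev/sda3  /var/lib/pgsql  ext3  data=ordered    0 2
  # faster, but with data=writeback's weaker crash guarantees:
  # /dev/sda3  /var/lib/pgsql  ext3  data=writeback  0 2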
Marko Ristola
--
um values and the histogram of the 100 distinct values).
Marko Ristola
Greg Stark wrote:
"Dave Held" <[EMAIL PROTECTED]> writes:
Actually, it's more to characterize how large of a sample
we need. For example, if we sample 0.005 of disk pages, and
get an estimate, and then
find it out by other means than checking at least two million
rows?
This means that the user should have a way to specify a lower
bound on the number of rows used for sampling.
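For comparison, the closest existing knob is the per-column statistics
target, which indirectly raises the number of rows ANALYZE samples
(roughly 300 rows per unit of the target). A sketch, with made-up table
and column names:

  ALTER TABLE big_table ALTER COLUMN some_col SET STATISTICS 1000;
  ANALYZE big_table;

That is only an indirect control, not the explicit row-count lower bound
proposed above.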
Regards,
Marko Ristola
Tom Lane wrote:
Josh Berkus writes:
Overall, our formula is inherently conservati
decreased with UseDeclareFetch=1 by increasing the Fetch parameter,
e.g. Fetch=2048. With Fetch=1 you get bad performance with lots of rows,
but if you fetch data from the server once per 2048 rows, the network
latency is paid only once per 2048-row block.
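As a sketch of where these settings live (the DSN name, host and database
are placeholders), an odbc.ini entry for the psqlODBC driver could contain
something like:

  [mydsn]
  Driver          = PostgreSQL
  Servername      = dbhost
  Database        = mydb
  UseDeclareFetch = 1
  Fetch           = 2048

UseDeclareFetch=1 makes the driver run the query through a cursor
(DECLARE/FETCH) behind the scenes, and Fetch sets how many rows each
FETCH retrieves.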
Regards,
Marko Ristola
Joel Fradkin wrote:
Hate
om numbers.
So, for example, if you have one million pages but the upper bound for
the random numbers is one hundred thousand pages, the statistics might get
skewed, because the sample only ever comes from the first part of the table.
Or some random number generator produces only about 32000 distinct values
(RAND_MAX can be as low as 32767); with such a generator, rand() % 1000000
can never pick a page beyond the first 32768.
Regards,
Marko Ristola
Josh Berkus wrote:
Tom,
An
foreign key checks are okay, complete the COMMIT operation.
4. If a foreign key check fails, go into the ROLLBACK NEEDED state.
Maybe Tom Lane meant the same thing. The usage might look like this:
set option delayed_foreign_keys=true;   -- a hypothetical option
BEGIN;
-- insert 1000 rows
COMMIT;
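For what it's worth, plain SQL already gets close to this when the foreign
keys are declared DEFERRABLE; a minimal sketch with made-up table names
(this defers the checks to COMMIT, though not necessarily making them any
cheaper):

  CREATE TABLE parent (id integer PRIMARY KEY);
  CREATE TABLE child (
      parent_id integer REFERENCES parent (id)
          DEFERRABLE INITIALLY IMMEDIATE
  );

  BEGIN;
  SET CONSTRAINTS ALL DEFERRED;   -- postpone FK checks until COMMIT
  INSERT INTO parent VALUES (1);
  INSERT INTO child
      SELECT 1 FROM generate_series(1, 1000);   -- 1000 child rows
  COMMIT;                         -- FK checks run here; a failure aborts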
Regards,
Marko Ristola
Christopher Kings-Lynne wrote:
My problem