On Wed, 21 Mar 2007, David Boyes wrote:

>> - First, have a default limit of the number of records that will be
>> inserted
>> in any one batch request.  This should guarantee that an out of memory
>> problem will not normally occur.
>
> Can we calculate this based on available memory at execution time?
> General tuning wisdom for the commercial databases (DB/2, Oracle, etc)
> target about 60% of real memory as the goal for this kind of
> segmentation. That allows some query optimization overhead, and a little
> wiggle room if the optimizer guesses wrong.

In MySQL: /etc/my.cnf

====
# If your system supports the memlock() function call, you might want to
# enable this option while running MySQL to keep it locked in memory and
# to avoid potential swapping out in case of high memory pressure. Good
# for performance.
memlock
====

Comment out the memlock line. If MySQL runs out of RAM it will hit swap and slow 
down, but it will not crash.
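If you want to confirm whether the running server is actually locking itself in 
memory, recent MySQL versions expose this as the locked_in_memory variable (this 
is from memory, so check your version's docs):

====
-- Shows ON if mysqld was started with memlock, OFF otherwise
SHOW VARIABLES LIKE 'locked_in_memory';
====

Remember that mysqld has to be restarted after editing my.cnf for the change to 
take effect.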


Some time back I sent Kern some updated queries for dbcheck which reduced the 
load and sped things up a LOT, but I can't find them now.

(Basically, SELECT COUNT(*) instead of a plain SELECT, etc.)

There's no need to pull all the rows back to the client and count them there 
when the SQL database can do it for you.
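To illustrate the general idea (these are NOT the actual dbcheck queries, just a 
sketch using the Bacula File/Filename catalog tables as an example):

====
-- Slow: fetch every orphaned row to the client just to count them
SELECT Filename.FilenameId
  FROM Filename
  LEFT OUTER JOIN File ON (Filename.FilenameId = File.FilenameId)
 WHERE File.FilenameId IS NULL;

-- Faster: let the database server do the counting and return a single row
SELECT COUNT(*)
  FROM Filename
  LEFT OUTER JOIN File ON (Filename.FilenameId = File.FilenameId)
 WHERE File.FilenameId IS NULL;
====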


