On 10/05/11 10:25, Uwe Schuerkamp wrote:
> On Wed, Oct 05, 2011 at 02:37:37PM +0100, Rory Campbell-Lange wrote:
>>
>> I've been using non-batch insertion with postgres (following your dare, I 
>> think, Phil) for about a year. Backups are only about 8TB, but it works 
>> extremely well for us. 
>>
> 
> Hi folks,
> 
> thanks for your recommendations and thoughts. I've also noticed long
> wait times for jobs with status "Dir inserting attributes", and it
> looks like the new compile (5.0.3) was configured without batch
> insertion by default. 
> 
> I'm still looking for MySQL optimizations, or do you think it best to
> leave my.cnf at the default values (we don't do many restores) and use
> the maximum amount of RAM for the fs buffer cache as needed? 
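
On the "Dir inserting attributes" point: if the 5.0.3 build really was
configured without batch insert, rebuilding with it enabled should help
a lot. A rough sketch of the rebuild, assuming a source build against
the MySQL catalog (everything besides --enable-batch-insert is
illustrative; keep whatever other options you already use):

    ./configure --with-mysql --enable-batch-insert [your other options]
    make && make install

If I remember right, the summary configure prints at the end tells you
whether batch insert actually ended up enabled, so check there before
rebuilding the whole tree.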

I'd look at it with something like mysqltuner and tune for best
performance, just as you would any other MySQL workload.
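
As a starting point, and purely as a sketch (the exact numbers depend on
how much RAM you can spare after the fs cache, and mysqltuner's own
recommendations should take precedence over these), something along
these lines:

    # run mysqltuner against the catalog server
    perl mysqltuner.pl --host localhost

    # my.cnf fragment -- illustrative values only
    [mysqld]
    # MyISAM catalog: index cache for the File/Filename/Path tables
    key_buffer_size         = 512M
    # InnoDB catalog: the buffer pool matters more than anything else
    innodb_buffer_pool_size = 1G
    innodb_flush_log_at_trx_commit = 2
    # per-session buffer used by the large attribute-insert sorts
    sort_buffer_size        = 4M

Change one thing at a time and run a few full backup cycles between
changes so you're comparing like with like.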


-- 
  Phil Stracchino, CDK#2     DoD#299792458     ICBM: 43.5607, -71.355
  ala...@caerllewys.net   ala...@metrocast.net   p...@co.ordinate.org
  Renaissance Man, Unix ronin, Perl hacker, SQL wrangler, Free Stater
                 It's not the years, it's the mileage.
