On 04.11.2013 15:58, compdoc wrote:
>>> we switched to "/usr/share/mysql/my-huge.cnf"
> I used 'mysqltuner.pl' to help with the tweaking. They may not be perfect,
> but I think those sample .cnf files are a good start to rolling your own.
I just fiddled a little bit with mysqltuner.pl.
As an
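For anyone who wants to try it, mysqltuner.pl just needs to be pointed at a
running server; a minimal invocation (assuming the script sits in the current
directory and the given credentials are valid) looks roughly like this:

    # run against a local MySQL server with a privileged account
    perl mysqltuner.pl --user root --pass 'secret'

It only prints recommendations; it does not change anything in my.cnf by itself.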
On 11/4/2013 6:55 PM, Dimitri Maziuk wrote:
> On 11/04/2013 04:17 PM, Phil Stracchino wrote:
> ... In at least one of the
>> cases I know about, though, the problem was not a failure of DRBD per
>> se, it was that someone accidentally started up mysqld on the second
>> node, which normally would
Phil Stracchino wrote:
> On 11/04/13 08:15, Ralf Brinkmann wrote:
>> Changing the database options might help; for MySQL there are some
>> predefined sample configuration files:
>>
>> ./usr/share/mysql/my-medium.cnf
>> ./usr/share/mysql/my-huge.cnf
>> ./usr/share/mysql/my-large.cnf
>> ./usr/shar
On 11/04/13 18:55, Dimitri Maziuk wrote:
> On 11/04/2013 04:17 PM, Phil Stracchino wrote:
> ... In at least one of the
>> cases I know about, though, the problem was not a failure of DRBD
>> per se, it was that someone accidentally started up mysqld on the
>> second node,
>
> Ah, the "active-acti
On 11/04/2013 04:17 PM, Phil Stracchino wrote:
... In at least one of the
> cases I know about, though, the problem was not a failure of DRBD per
> se, it was that someone accidentally started up mysqld on the second
> node, which normally would not be allowed to happen because the second
> instan
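One simple guard against that particular accident, assuming a classic
init-script setup where the cluster manager (Pacemaker/Heartbeat) is the only
thing that should ever start mysqld: make sure the service is not enabled at
boot on either node, for example:

    # RHEL/CentOS 6 style
    chkconfig mysqld off
    # Debian/Ubuntu style
    update-rc.d mysql disable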
On 11/04/13 16:10, Josh Fisher wrote:
> On 11/4/2013 1:27 PM, Phil Stracchino wrote:
>> Honestly, based upon experience as a DBA at a hosting company that hosts
>> MANY customers using MySQL, my first advice on using MySQL on top of
>> DRBD would be "Just don't." I could cite lists of customers wh
On 11/04/2013 03:10 PM, Josh Fisher wrote:
>
> On 11/4/2013 1:27 PM, Phil Stracchino wrote:
>> Honestly, based upon experience as a DBA at a hosting company that hosts
>> MANY customers using MySQL, my first advice on using MySQL on top of
>> DRBD would be "Just don't."
> ... I have been using M
On 11/4/2013 1:27 PM, Phil Stracchino wrote:
> On 11/04/13 13:00, Josh Fisher wrote:
>> I would add that it is critical (IMO) to place the DB storage on
>> different physical drives than those holding the Bacula spool area. At
>> the end of a job Bacula SD must read the spooled attributes and upda
On 11/4/2013 1:22 PM, Dimitri Maziuk wrote:
> On 11/04/2013 12:00 PM, Josh Fisher wrote:
>
>> As for clustering using DRBD, the catalog and spool area should still be
>> on different spindles.
> Though DRBD will limit you to the network speed at some point, so I
> expect SSDs would be a waste of m
On 11/04/13 13:00, Josh Fisher wrote:
> I would add that it is critical (IMO) to place the DB storage on
> different physical drives than those holding the Bacula spool area. At
> the end of a job Bacula SD must read the spooled attributes and update
> the catalog. If spooled attributes and cata
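For reference, the spool location is a per-Device setting in bacula-sd.conf
and spooling itself is switched on per Job in bacula-dir.conf; a rough sketch
(names, path and size below are made up, the point is just to put the Spool
Directory on a different spindle than the catalog):

    # bacula-sd.conf -- Device resource
    Device {
      Name = LTO-Drive                       # hypothetical
      ...                                    # other required directives omitted
      Spool Directory = /var/spool/bacula    # on its own disk(s)
      Maximum Spool Size = 200 GB
    }

    # bacula-dir.conf -- Job (or JobDefs) resource
    Job {
      Name = nas-backup                      # hypothetical
      ...
      Spool Data = yes
      Spool Attributes = yes
    }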
On 11/04/2013 12:00 PM, Josh Fisher wrote:
>
> As for clustering using DRBD, the catalog and spool area should still be
> on different spindles.
DRBD will limit you to the network speed at some point, though, so I
expect SSDs would be a waste of money if you use it...
--
Dimitri Maziuk
Program
On 11/4/2013 12:15 PM, Phil Stracchino wrote:
> On 11/04/13 09:58, compdoc wrote:
>> By the way, it wasn't enough to enable InnoDB - I had to create the bacula
>> database after it was enabled for the tables to use this engine. (it was a
>> new install)
>>
>> I don't know if it's possible to conve
On 11/04/13 09:58, compdoc wrote:
> By the way, it wasn't enough to enable InnoDB - I had to create the bacula
> database after it was enabled for the tables to use this engine. (it was a
> new install)
>
> I don't know if it's possible to convert the tables after enabling InnoDB,
> but I would
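Converting the tables afterwards is possible with plain MySQL, no reinstall
needed; roughly (assuming the default catalog table names, and noting that
rebuilding a large File table can take a long time):

    -- run against the bacula catalog database
    ALTER TABLE File     ENGINE=InnoDB;
    ALTER TABLE Filename ENGINE=InnoDB;
    ALTER TABLE Path     ENGINE=InnoDB;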
>> we switched to "/usr/share/mysql/my-huge.cnf"
> Honestly, the truth is that all of those sample configuration files
> are all but worthless.
I can't speak to all versions of bacula, but I used the my-huge.cnf from
Version 5.2.13 to create my own my.cnf file. I had to disable a couple of
lines
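The handful of lines that usually matter most for a Bacula catalog are the
InnoDB ones; as a rough starting point only (example values that have to be
sized to the machine's RAM, not settings taken from my-huge.cnf):

    # my.cnf fragment -- example values, size to your hardware
    [mysqld]
    innodb_buffer_pool_size        = 2G
    innodb_log_file_size           = 256M   # on MySQL 5.5 and older, remove the
                                            # old ib_logfile* files after a clean
                                            # shutdown before changing this
    innodb_flush_log_at_trx_commit = 2
    query_cache_size               = 32M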
Here's an example of a job with about 17 million files even smaller
than yours (it's only about 300 GB):
Full  17648183  314.48 GB  2013-11-01 15:20:09  2013-11-02 14:35:41  23:15:32  3.85 TB
Cheers,
Uwe
--
NIONEX --- A company of Bertelsmann SE & Co. KGaA
On 11/04/13 08:15, Ralf Brinkmann wrote:
> Changing the database options might help; for MySQL there are some
> predefined sample configuration files:
>
> ./usr/share/mysql/my-medium.cnf
> ./usr/share/mysql/my-huge.cnf
> ./usr/share/mysql/my-large.cnf
> ./usr/share/mysql/my-small.cnf
>
> we swit
On 11/04/13 05:44, Christian Manal wrote:
>> 1. How long does the backup take?
>
> The last full run took 1 day 1 hour 33 mins 25 secs, incrementals and
> differentials take around 4 to 6 hours.
>
>
>> 2. Do you use compression?
>
> Yes, but not Bacula's. The backups go to a ZFS pool and LTO tapes,
Hi Bacula-Users,
thank you for the feedback. Then I will create the job and follow your
recommendations. If I have any problems, I'll let you know.
Regards - Willi
On 04.11.2013 14:15, Ralf Brinkmann wrote:
> On 04.11.2013 10:45, Willi Fehler wrote:
>> Hi Bacula-Users,
>>
>> we want to
On 04.11.2013 10:45, Willi Fehler wrote:
> Hi Bacula-Users,
>
> we want to back up our central NAS server.
>
> disk-usage: 885G
> approx. 10 million small files
>
> In the past the old IT colleagues tried to use Bacula but the backup
> crashed. Does anybody have experience with many small files and Bacula
Do you have Heartbeat Interval configured? I had problems with long-running
jobs without this directive.
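For reference, the directive can be set in the client's FileDaemon resource
(there are matching directives in the Director and Storage daemon configs as
well); a minimal sketch with a made-up resource name:

    # bacula-fd.conf on the client
    FileDaemon {
      Name = nas-fd                # hypothetical
      ...                          # other required directives omitted
      Heartbeat Interval = 60      # send a keepalive roughly every minute
    }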
On 11/04/2013 11:19 AM, Willi Fehler wrote:
> Unfortunately, no. We will get a ticket this week. Then I will give it
> a try and run this job on Bacula.
>
> Regards - Willi
>
> On 04.11.2013
> 1. How long does the backup take?
The last full run took 1 day 1 hour 33 mins 25 secs, incrementals and
differentials take around 4 to 6 hours.
> 2. Do you use compression?
Yes, but not Bacula's. The backups go to a ZFS pool and LTO tapes, which
do their own compression.
> 3. Do you use incremen
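In case it helps anyone sizing a similar setup: compression on the ZFS side is
just a dataset property, e.g. (pool/dataset name made up; use gzip instead of
lz4 if the pool doesn't support it):

    # enable compression on the backup dataset and check how well it does
    zfs set compression=lz4 tank/bacula
    zfs get compressratio tank/bacula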
Hi Christian,
1. How long does the backup take?
2. Do you use compression?
3. Do you use incremental backups?
Regards - Willi
On 04.11.2013 11:22, Christian Manal wrote:
> On 04.11.2013 10:45, Willi Fehler wrote:
>> Hi Bacula-Users,
>>
>> we want to back up our central NAS server.
>>
>> disk-usage:
On 04.11.2013 10:45, Willi Fehler wrote:
> Hi Bacula-Users,
>
> we want to back up our central NAS server.
>
> disk-usage: 885G
> approx. 10 million small files
>
> In the past the old IT colleagues tried to use Bacula but the backup
> crashed. Does anybody have experience with many small files and Bacula
Unfortunately, no. We will get a ticket this week. Then I will give it
a try and run this job on Bacula.
Regards - Willi
On 04.11.2013 10:57, Juraj Sakala wrote:
> Do you have more specific information? For example, output from the
> unsuccessful job?
>
> On 11/04/2013 10:45 AM, Willi Fehler wrote
Do you have more specific information? For example, output from the
unsuccessful job?
On 11/04/2013 10:45 AM, Willi Fehler wrote:
> Hi Bacula-Users,
>
> we want to back up our central NAS server.
>
> disk-usage: 885G
> approx. 10 million small files
>
> In the past the old IT colleagues tried to use Bacu
Hi Bacula-Users,
we want to back up our central NAS server.
disk-usage: 885G
approx. 10 million small files
In the past the old IT colleagues tried to use Bacula but the backup
crashed. Does anybody have experience with many small files and Bacula, or
know a good alternative? I think if we try it again, we