The FileSet catalog records are created when the first job with the new FileSet is actually run.
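So a quick way to make the records appear is simply to run a job that uses the new FileSet. A bconsole sketch (the job name is a placeholder for whichever job references the new FileSet):

```
* run job=NewServerJob yes
* wait
* list filesets
```

After the job completes, the new FileSet should show up in `list filesets` and in bat.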
Best regards,
Kern
On 11/04/2013 08:15 PM, Juan Pablo Lorier wrote:
Hi,
I've created new filesets to back up new
Phil Stracchino schrieb:
> On 11/04/13 08:15, Ralf Brinkmann wrote:
>> Changing the data base options might help, for MySql there are some
>> predefined sample configuration files:
>>
>> ./usr/share/mysql/my-medium.cnf
>> ./usr/share/mysql/my-huge.cnf
>> ./usr/share/mysql/my-large.cnf
>> ./usr/shar
On 11/04/13 18:55, Dimitri Maziuk wrote:
> On 11/04/2013 04:17 PM, Phil Stracchino wrote:
> ... In at least one of the
>> cases I know about, though, the problem was not a failure of DRBD
>> per se, it was that someone accidentally started up mysqld on the
>> second node,
>
> Ah, the "active-acti
On 11/04/2013 04:17 PM, Phil Stracchino wrote:
... In at least one of the
> cases I know about, though, the problem was not a failure of DRBD per
> se, it was that someone accidentally started up mysqld on the second
> node, which normally would not be allowed to happen because the second
> instan
On 10/29/13 12:42 PM, Dan Langille wrote:
> On 2013-10-27 19:33, David Newman wrote:
>> On 10/27/13 11:31 AM, Dan Langille wrote:
>>
>> On Oct 22, 2013, at 3:00 PM, David Newman wrote:
>>
>>
>>
>> On 10/19/13 11:40 PM, Kern Sibbald wrote:
>> Hello,
>>
>> From what I can see -- first "signal 0", and
On 11/04/13 16:10, Josh Fisher wrote:
> On 11/4/2013 1:27 PM, Phil Stracchino wrote:
>> Honestly, based upon experience as a DBA at a hosting company that hosts
>> MANY customers using MySQL, my first advice on using MySQL on top of
>> DRBD would be "Just don't." I could cite lists of customers wh
On 11/04/2013 03:10 PM, Josh Fisher wrote:
>
> On 11/4/2013 1:27 PM, Phil Stracchino wrote:
>> Honestly, based upon experience as a DBA at a hosting company that hosts
>> MANY customers using MySQL, my first advice on using MySQL on top of
>> DRBD would be "Just don't."
> ... I have been using M
On 11/4/2013 1:27 PM, Phil Stracchino wrote:
> On 11/04/13 13:00, Josh Fisher wrote:
>> I would add that it is critical (IMO) to place the DB storage on
>> different physical drives than those holding the Bacula spool area. At
>> the end of a job Bacula SD must read the spooled attributes and upda
On 11/4/2013 1:22 PM, Dimitri Maziuk wrote:
> On 11/04/2013 12:00 PM, Josh Fisher wrote:
>
>> As for clustering using DRBD, the catalog and spool area should still be
>> on different spindles.
> Though DRBD will limit you to the network speed at some point, so I
> expect SSDs would be a waste of m
On 11/04/2013 2:01 pm, Dan Langille wrote:
> On Oct 30, 2013, at 4:48 PM, dweimer wrote:
>
>
> I have no idea. But I have one suggestion, just for kicks.
>
> I've long been skeptical of multiple run before/after scripts. I've
> always preferred
> to have just one script. Is it worth combinin
On Oct 30, 2013, at 4:48 PM, dweimer wrote:
> On 10/16/2013 5:43 pm, David Newman wrote:
>> On 10/16/13 12:44 PM, dweimer wrote:
>>> On 10/16/2013 2:13 pm, David Newman wrote:
On 10/14/13 2:44 AM, Martin Simmons wrote:
>> On Sun, 13 Oct 2013 18:25:07 -0700, David Newman said:
>>
I've been following the discussion on database performance and such. We
currently have both the SD and DIRECTOR on one server (17TB Raid Array).
Backups seem to be fine as far as time taken. We back up 16 servers with about
120-200GB incrementally nightly and it takes about 1.5 hours. So I'm
Hi,
I've created new filesets to back up new servers, but although they are
in bacula-dir.conf and I've reloaded the config, the
filesets are not being created in the database and thus are not
shown in bconsole or bat. Other kinds of records, like clients,
back
On 11/04/13 13:00, Josh Fisher wrote:
> I would add that it is critical (IMO) to place the DB storage on
> different physical drives than those holding the Bacula spool area. At
> the end of a job Bacula SD must read the spooled attributes and update
> the catalog. If spooled attributes and cata
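The separation Josh describes is set per Device in the SD configuration; a sketch (resource name and paths are placeholders, with the spool directory on a different physical disk from the database datadir):

```
# bacula-sd.conf sketch -- names and paths are examples
Device {
  Name = FileStorage
  Media Type = File
  Archive Device = /backup/volumes
  Spool Directory = /spool/bacula   # dedicated spindle, not the DB disk
}
```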
On 11/04/2013 12:00 PM, Josh Fisher wrote:
>
> As for clustering using DRBD, the catalog and spool area should still be
> on different spindles.
DRBD will limit you to the network speed at some point, though, so I
expect SSDs would be a waste of money if you use it...
--
Dimitri Maziuk
Program
On 11/4/2013 12:15 PM, Phil Stracchino wrote:
> On 11/04/13 09:58, compdoc wrote:
>> By the way, it wasn't enough to enable InnoDB - I had to create the bacula
>> database after it was enabled for the tables to use this engine. (it was a
>> new install)
>>
>> I don't know if it's possible to conve
On 11/04/13 09:58, compdoc wrote:
> By the way, it wasn't enough to enable InnoDB - I had to create the bacula
> database after it was enabled for the tables to use this engine. (it was a
> new install)
>
> I don't know if it's possible to convert the tables after enabling InnoDB,
> but I would
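To compdoc's question: existing tables can usually be converted in place with `ALTER TABLE`. A hedged sketch (database and table names assume the stock Bacula schema; back up the catalog before converting):

```shell
# Build one ALTER statement per table; pipe the result into mysql.
alter_stmt() {
    printf 'ALTER TABLE %s ENGINE=InnoDB;\n' "$1"
}

# Example invocation (commented out -- requires a live catalog):
# for t in $(mysql -N -e 'SHOW TABLES' bacula); do
#     alter_stmt "$t"
# done | mysql bacula

alter_stmt File    # prints: ALTER TABLE File ENGINE=InnoDB;
```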
Hi Brad,
Your client bacula-fd is not able to talk to your storage daemon at
192.168.0.234:9103.
Have you tested telnet connections between client and storage, client and
director, and director and storage? All of them should be able to connect
to each other.
Regards,
Ana
On Sat, Nov 2, 2013 at 2:26 AM, bradphillips
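Ana's telnet test can be scripted; a minimal sketch using bash's `/dev/tcp` pseudo-device (the address is from the original post, and 9101/9102/9103 are the standard director/FD/SD ports -- adjust to your setup):

```shell
# Return 0 if a TCP connection to host:port can be opened within 2 s.
check_port() {
    local host=$1 port=$2
    if timeout 2 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
        echo "$host:$port reachable"
    else
        echo "$host:$port NOT reachable"
        return 1
    fi
}

# Example from the original post (run from the client):
check_port 192.168.0.234 9103 || true
```

Run it from each machine toward the others to find which of the three connections is blocked.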
On 11/2/2013 1:26 AM, bradphillips wrote:
> I am having some issues with my bacula set-up since moving the director to a
> remote location on a different network than my clients. I adjusted the
> settings so I could connect to the clients, but when I try to do a backup I
> get the following as
>> we switched to "/usr/share/mysql/my-huge.cnf"
> Honestly, the truth is that all of those sample configuration files are
> all but worthless.
I can't speak to all versions of Bacula, but I used the my-huge.cnf from
version 5.2.13 to create my own my.cnf file. I had to disable a couple of
lines
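For what it's worth, a minimal InnoDB-oriented my.cnf fragment (the values are illustrative starting points only, not tuned recommendations -- size the buffer pool to your RAM):

```
[mysqld]
default-storage-engine = InnoDB
innodb_buffer_pool_size = 1G
innodb_log_file_size    = 256M
innodb_flush_log_at_trx_commit = 2   # trades a little durability for speed
```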
Here's an example of a job with about 17 million files, even smaller
than yours (it's only about 300 GB):
Full 17648183 314.48 GB 2013-11-01 15:20:09
2013-11-02 14:35:41 23:15:32 3.85 TB
Cheers,
Uwe
--
NIONEX --- A company of Bertelsmann SE & Co. KGaA
On 11/04/13 08:15, Ralf Brinkmann wrote:
> Changing the data base options might help, for MySql there are some
> predefined sample configuration files:
>
> ./usr/share/mysql/my-medium.cnf
> ./usr/share/mysql/my-huge.cnf
> ./usr/share/mysql/my-large.cnf
> ./usr/share/mysql/my-small.cnf
>
> we swit
On 11/04/13 05:44, Christian Manal wrote:
>> 1. How long does the backup take?
>
> The last full run took 1 day 1 hour 33 mins 25 secs, incrementals and
> differentials take around 4 to 6 hours.
>
>
>> 2. Do you use compression?
>
> Yes, but not Bacula's. The backups go to a ZFS pool and LTO tapes,
Hi Bacula-Users,
thank you for the feedback. Then I will create the job and follow
your recommendations. If I have any problems, I'll let you know.
Regards - Willi
On 04.11.2013 14:15, Ralf Brinkmann wrote:
> On 04.11.2013 10:45, Willi Fehler wrote:
>> Hi Bacula-Users,
>>
>> we want to
On 04.11.2013 10:45, Willi Fehler wrote:
> Hi Bacula-Users,
>
> we want to back up our central NAS server.
>
> disk usage: 885G
> ca. 10 million small files
>
> In the past the previous IT colleagues tried to use Bacula, but the backup
> crashed. Anybody have experience with many small files and Bacula
Do you have Heartbeat Interval configured? I had problems with
long-running jobs without this directive.
On 11/04/2013 11:19 AM, Willi Fehler wrote:
> Unfortunately no. We will get a ticket this week. Then I will try
> running this job on Bacula.
>
> Regards - Willi
>
> On 04.11.2013
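For reference, the directive mentioned above goes into the daemon resources; a sketch for the client side (the name and value are placeholders -- the value is in seconds):

```
# bacula-fd.conf sketch
FileDaemon {
  Name = nas-fd
  Heartbeat Interval = 300
}
```

The same directive can also be set in the director and storage daemon configurations so the idle TCP connections are kept alive during long-running jobs.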
> 1. How long does the backup take?
The last full run took 1 day 1 hour 33 mins 25 secs, incrementals and
differentials take around 4 to 6 hours.
> 2. Do you use compression?
Yes, but not Bacula's. The backups go to a ZFS pool and LTO tapes, which
do their own compression.
> 3. Do you use incremen
Hi Christian,
1. How long does the backup take?
2. Do you use compression?
3. Do you use incremental backups?
Regards - Willi
On 04.11.2013 11:22, Christian Manal wrote:
> On 04.11.2013 10:45, Willi Fehler wrote:
>> Hi Bacula-Users,
>>
>> we want to back up our central NAS server.
>>
>> disk-usage:
On 04.11.2013 10:45, Willi Fehler wrote:
> Hi Bacula-Users,
>
> we want to back up our central NAS server.
>
> disk usage: 885G
> ca. 10 million small files
>
> In the past the previous IT colleagues tried to use Bacula, but the backup
> crashed. Anybody have experience with many small files and Bacula
Unfortunately no. We will get a ticket this week. Then I will try
running this job on Bacula.
Regards - Willi
On 04.11.2013 10:57, Juraj Sakala wrote:
> Do you have more specific information? For example, the output from an
> unsuccessful job?
>
> On 11/04/2013 10:45 AM, Willi Fehler wrote
Do you have more specific information? For example, the output from an
unsuccessful job?
On 11/04/2013 10:45 AM, Willi Fehler wrote:
> Hi Bacula-Users,
>
> we want to back up our central NAS server.
>
> disk usage: 885G
> ca. 10 million small files
>
> In the past the previous IT colleagues tried to use Bacu
Hi Bacula-Users,
we want to back up our central NAS server.
disk usage: 885G
ca. 10 million small files
In the past the previous IT colleagues tried to use Bacula, but the backup
crashed. Anybody have experience with many small files and Bacula or
know a good alternative? I think if we try it again, we