Zitat von Dimitri Maziuk :
> On 08/14/2013 01:13 PM, azurIt wrote:
>>> Where's my mistake? Could you suggest some optimizations?
>>
>> Hi,
>>
>> only one little suggestion from me - upgrade to MySQL 5.5.
>
> I didn't have much luck with postgres on a single-cpu (I think 6-core or
> maybe 4) and 8 or 12GB RAM
On 08/14/2013 02:54 PM, Josh Fisher wrote:
> We put the db on Intel 710 series SSD a while ago and it made a HUGE
> difference even without upgrading cpu or ram.
I expect it would. Still, if you're looking at JOINs on millions of
rows, you'd want a grown-up db engine with enough resources to do
On 8/14/2013 2:41 PM, Dimitri Maziuk wrote:
> On 08/14/2013 01:13 PM, azurIt wrote:
>>> Where's my mistake? Could you suggest some optimizations?
>> Hi,
>>
>> only one little suggestion from me - upgrade to MySQL 5.5.
> I didn't have much luck with postgres on a single-cpu (I think 6-core or
> maybe 4) and 8 or 12GB RAM
On 08/14/2013 01:34 PM, Martin Simmons wrote:
...
> You work around it with something similar to the fileset below. The
> first four wilddir options include the parents, the fifth wilddir option backs
> up the wanted files and the exclude option prevents other files from being
> included.
>
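The workaround Martin describes might be sketched as a FileSet resource like the one below. This is an illustration, not his actual config: the paths come from the directory list discussed earlier in the thread, and the resource name is made up. The first four WildDir lines include the parent directories, the fifth matches the wanted subtree, and the second Options block excludes everything else.

```
FileSet {
  Name = "WantedDirsOnly"          # hypothetical name
  Include {
    Options {
      # Parents must be included so Bacula can descend into them
      WildDir = "/some"
      WildDir = "/some/place"
      WildDir = "/some/place/else"
      WildDir = "/some/place/else/dir2"
      # The subtree actually being backed up
      WildDir = "/some/place/else/dir2/*"
    }
    Options {
      # Everything not matched above is excluded
      Exclude = yes
      Wild = "*"
    }
    File = /
  }
}
```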
Oh, I don't have any 'other' product that works so magically. I am
learning bacula on home/family gear. So in my particular case I'm
trying to do a full on 150GB wirelessly to an old box running CentOS
with zfs-fuse on 4 drives. Everything about this is slow, but zfs has
given new life to 4
On 08/14/2013 01:13 PM, azurIt wrote:
>> Where's my mistake? Could you suggest some optimizations?
>
> Hi,
>
> only one little suggestion from me - upgrade to MySQL 5.5.
I didn't have much luck with postgres on a single-cpu (I think 6-core or
maybe 4) and 8 or 12GB RAM (it's off at the moment
On Wed, Aug 14, 2013 at 2:28 PM, azurIt wrote:
Hi,
is it somehow possible to run two concurrent jobs on one file daemon?
Thnx.
azur
On Wed, Aug 14, 2013 at 2:38 PM, John Drescher wrote:
> Yes. Set maximum concurrent jobs in the client resource of bacula-dir.conf
>
Sorry for the top post.
Yes. Set maximum concurrent jobs in the client resource of bacula-dir.conf
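A minimal sketch of the Client resource John refers to — the client name, address, and password here are placeholders:

```
# bacula-dir.conf — Client resource (names/addresses are placeholders)
Client {
  Name = azur-fd
  Address = client.example.com
  FDPort = 9102
  Catalog = MyCatalog
  Password = "changeme"
  Maximum Concurrent Jobs = 2   # allow two simultaneous jobs on this FD
}
```

Note that concurrency is also limited by the Maximum Concurrent Jobs settings in the Director and Storage resources, so those may need raising as well.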
On Wed, Aug 14, 2013 at 2:28 PM, azurIt wrote:
> Hi,
>
> is it somehow possible to run two concurrent jobs on one file daemon?
> Thnx.
>
> azur
>
>
>
> On Tue, 13 Aug 2013 11:46:14 -0500, Dimitri Maziuk said:
>
> Recap:
> my fileset includes a list of directories to back up:
> /some/place/dir1
> /some/place/else/dir2
> /some/place/dir3
>
> In the backup, dir1, dir2, and dir3, and files in them have correct
> sizes, ownership, permissions,
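For reference, a fileset listing plain directories like the recap above is typically written along these lines (the resource name and signature option are illustrative):

```
FileSet {
  Name = "DirList"                 # hypothetical name
  Include {
    Options {
      signature = MD5              # checksum each file
    }
    File = /some/place/dir1
    File = /some/place/else/dir2
    File = /some/place/dir3
  }
}
```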
Hi,
is it somehow possible to run two concurrent jobs on one file daemon? Thnx.
azur
>Where's my mistake? Could you suggest some optimizations?
Hi,
only one little suggestion from me - upgrade to MySQL 5.5.
azur
> On Wed, 14 Aug 2013 09:16:33 -0500, Barak Griffis said:
>
> That's not friendly with unstable/slow links... Does anyone have any
> other insightful ideas on how to handle this sort of situation? If I
> can't ever get a full because of unstable links then bacula is useless
> in my particular setup.
Hi Barak,
>> If I can't ever get a full because of unstable links then bacula
>> is useless in my particular setup.
Out of curiosity, what backup product are you currently using that _does_
cleanly handle loss of communication between the client and the backup
server, resuming the next backup at
Hi all,
recently I've run into trouble with Bacula's DB (MySQL 5.0.77). When
trying to restore a file from a very large backup (500GB and 846,532
files), InnoDB occupies one core for many hours before letting me
select the target files.
Over the past few days I've built indexes for File, Filename and Path,
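The commonly posted remedy for slow restore-tree building is to index the File table on the columns the restore queries join on. A sketch — the index names are arbitrary and the column sets should be checked against your catalog schema version before running:

```sql
-- Index names are arbitrary; verify columns against your Bacula schema.
CREATE INDEX file_jobid_idx ON File (JobId);
CREATE INDEX file_jpf_idx   ON File (JobId, PathId, FilenameId);

-- Refresh optimizer statistics after building the indexes:
ANALYZE TABLE File, Filename, Path;
```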
On 08/14/13 10:23, John Drescher wrote:
> Small volumes also help in the case of recycling. Remember that bacula
> recycles a whole volume at a time; it cannot reclaim space from
> individual jobs inside a volume until the whole volume can be
> recycled.
Very good point which I forgot to include i
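Volume size is controlled in the Pool resource. A minimal sketch of a pool with small volumes so they become recyclable sooner — the name, size, and retention values are examples, not recommendations:

```
Pool {
  Name = FilePool
  Pool Type = Backup
  Maximum Volume Bytes = 5G    # keep volumes small so each recycles sooner
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 30 days
}
```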
Does anyone do something like this already? How can I handle the eSATA
drives? Is there a better way?
>>>
>>>
>>> Hi Steven... I believe that you can accomplish what you are looking for
>>> using
>>> Bacula's "Copy Jobs", Josh Fisher's excellent "vchanger" add-on, and a
>>> little
>>> h
That's not friendly with unstable/slow links... Does anyone have any
other insightful ideas on how to handle this sort of situation? If I
can't ever get a full because of unstable links then bacula is useless
in my particular setup.
On 08/14/2013 08:59 AM, John Drescher wrote:
> Feel free to direct me to a URL, since this seems like an obvious newb
> question, but I don't see an obvious search result on the webs.
>
> If a job gets interrupted (say network drops out midway through a
> full). What happens the next time? does it pick up where it left off
> or does it start over?
Feel free to direct me to a URL, since this seems like an obvious newb
question, but I don't see an obvious search result on the webs.
If a job gets interrupted (say network drops out midway through a
full). What happens the next time? does it pick up where it left off
or does it start over?
On 08/13/13 11:10, Steven Haigh wrote:
> On 03/08/13 23:50, Bill Arlofski wrote:
>> On 08/01/13 22:33, Steven Haigh wrote:
>>> Hi all,
>>>
>>> I'm looking at migrating my existing TSM5.5 backup solution to Bacula.
>>> I'm hoping some people could point me in the right direction.
>>>
>>> I currently
20 matches