vacuumdb is a utility for cleaning a PostgreSQL database. vacuumdb will also
generate internal statistics used by the PostgreSQL query optimizer.
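For example, a minimal invocation might look like this (a sketch only, assuming
the catalog database is called "bacula" and the user running it is allowed to
connect to it):

    # refresh planner statistics and mark dead-row space as reusable
    vacuumdb --analyze --verbose bacula

Note that a plain VACUUM only makes dead space reusable inside the database
files; to actually shrink the files on disk you would need "vacuumdb --full
bacula", which rewrites the tables, takes exclusive locks and temporarily needs
extra free space, so it is risky on a nearly full file system.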
15.03.2017, 19:57, "James Chamberlain" <jam...@exa.com>:
Hi all,
I’m getting a touch concerned about the size of my Bacula database, and was
wondering what I can do to prune it, compress it, or otherwise keep it at a
manageable size.
> jobs or clients, it can be a quick way to compact the database.
>
> Note also, your retention periods are quite long, so if you have lots of jobs
> (more than 100) that run every night, you will need a big database.
> Best regards,
>
> Kern
>
> On 03/16/2017 03:17 PM, James Chamberlain wrote:
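Regarding the retention periods mentioned above, this is roughly where they are
set (a sketch only; the directive names are standard Bacula directives, but the
resource names and values here are invented for illustration):

    # bacula-dir.conf
    Client {
      Name = hawking-fd            # hypothetical client name
      ...
      AutoPrune = yes              # prune expired Job/File records automatically
      File Retention = 60 days     # how long File records stay in the catalog
      Job Retention = 6 months     # how long Job records stay in the catalog
    }

    Pool {
      Name = Full-Pool             # hypothetical pool name
      ...
      Volume Retention = 1 year    # how long Volumes are protected from recycling
    }

File Retention is usually what drives catalog size, because the File table is
by far the largest table in the catalog.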
> On Mar 16, 2017, at 3:29 AM, Mikhail Krasnobaev wrote:
>
>> 15.03.2017, 19:57, "James Chamberlain" <jam...@exa.com>:
>>
>> Hi all,
>>
>> I’m getting a touch concerned about the size of my Bacula database, and was
>> wondering what I can do to prune it, compress it, or otherwise keep it at a
>> manageable size.
> On Mar 15, 2017, at 10:17 PM, Josip Deanovic
> wrote:
>
> On Wednesday 2017-03-15 12:57:33 James Chamberlain wrote:
>> Hi all,
>>
>> I’m getting a touch concerned about the size of my Bacula database, and
>> was wondering what I can do to prune it, compress it, or otherwise keep it
>> at a manageable size.
Hi all,
I’m getting a touch concerned about the size of my Bacula database, and was
wondering what I can do to prune it, compress it, or otherwise keep it at a
manageable size. The database itself currently stands at 324 GB, and is using
90% of the file system it’s on. I’m running Bacula 7.4.
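One way to confirm where the space is going (a sketch, assuming the catalog is
a PostgreSQL database named "bacula"; adjust the name and connection options to
your installation):

    psql bacula -c "
      SELECT relname,
             pg_size_pretty(pg_total_relation_size(relid)) AS total_size
        FROM pg_catalog.pg_statio_user_tables
       ORDER BY pg_total_relation_size(relid) DESC
       LIMIT 10;"

In most Bacula catalogs the File table and its indexes account for nearly all
of the size, so retention settings and pruning have far more effect than
anything else.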
> I'm not entirely sure myself and
> it’s hard to find in the manual, but I think the default unit is seconds.
>
>
>
> From: James Chamberlain [mailto:jam...@exa.com]
> Sent: 07 July 2015 1:11
> To: bacula-users@lists.sourceforge.net
> Subject: [Bacula-users] Incrementals not happening
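For what it's worth, Bacula duration directives accept explicit units, and a
bare number is taken as seconds, so spelling the unit out avoids surprises.
A sketch (the directive shown is a real Job directive, the value is just an
example):

    Job {
      Name = hawking-backup          # hypothetical job name
      ...
      Max Full Interval = 30 days    # a bare "2592000" would mean the same thing, in seconds
    }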
Hi all,
I'm trying to figure out what's wrong with my configuration. If I run a full
backup on the system Hawking, and then try to run an incremental against
Hawking, the job gets promoted to full and I don't know why. If I do a restore
against Hawking, the most recent full backup is found and Bacula
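In my experience an Incremental is upgraded to Full when Bacula cannot find a
prior successful Full for the same Job name, Client and FileSet, or when the
FileSet has changed since that Full ran. A rough way to check from bconsole,
with a made-up jobid:

    list jobs
    llist jobid=1234    # the Full you expected the Incremental to build on

and verify that the Full completed OK and that the FileSet has not been edited
since it ran.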
Hi Bacula Users,
Does anyone have experience setting up Storage Daemons at different
sites from the Director? I'm thinking that it would be really nice to
have a central console where I can see and control the backups of
other sites, but I don't have the bandwidth to send all the data to
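A Director can point at a Storage Daemon running at another site; a sketch with
placeholder names, address and password (the directive names themselves are
standard):

    # bacula-dir.conf
    Storage {
      Name = remote-site-sd
      Address = sd.remote-site.example.com   # must be reachable from the clients at that site
      SD Port = 9103
      Password = "changeme"                  # must match the Director resource in the remote bacula-sd.conf
      Device = FileStorage                   # device defined on the remote SD
      Media Type = File-Remote
    }

With this layout the clients at the remote site write their data to their local
SD, so only control and catalog traffic crosses the WAN to the Director.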
On Apr 14, 2009, at 10:04 AM, Josh Fisher wrote:
> James Chamberlain wrote:
>> On Tue, 14 Apr 2009, Martin Simmons wrote:
>>>>>>>> On Mon, 13 Apr 2009 17:41:00 -0400, James Chamberlain said:
>>>>>>>>
>>>> The basic problem for me
On Tue, 14 Apr 2009, Martin Simmons wrote:
>>>>>> On Mon, 13 Apr 2009 17:41:00 -0400, James Chamberlain said:
>>
>> The basic problem for me is that I've hit the 8 TB file system size
>> limit with ext3, and I don't have ext4 available to me yet. Wi
>> Why would you ever want such a pool? The only reason I can think
>> of is if
>> you have more pools than backup devices;
>
> Exactly what you said. I have 20 pools and 2 backup devices with my 2
> drive 24 slot autochanger.
Why so many pools? Are you doing one per client?
>> but that's the
>> You do not understand the idea of a scratch pool. This pool is, literally
>> speaking, a kind of trash bin for volumes that have been recycled. You
>> cannot use volumes while they are in the scratch pool; they are grabbed from
>> it and placed into the pool which needs new media, so adding new storage for
>> the scratch pool
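To make that concrete, the usual wiring looks roughly like this (pool names are
illustrative; Scratch Pool and RecyclePool are real Pool directives):

    Pool {
      Name = Scratch
      Pool Type = Backup
    }

    Pool {
      Name = Servers               # one of the normal backup pools
      Pool Type = Backup
      Scratch Pool = Scratch       # pull volumes from Scratch when this pool runs out
      RecyclePool = Scratch        # return volumes to Scratch when they are recycled
      ...
    }

Volumes are never written while they sit in Scratch; they are moved into a pool
such as Servers on demand and written there.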
Hi Bacula Users,
I'm having trouble with scratch pools. I have three main backup
pools configured in Bacula (Desktops, Infrastructure, Servers). Each
corresponds to a separate RAID device (disk0, disk1, disk2), for disk-
to-disk backups. I have added a fourth RAID device (disk3) which I
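For reference, a new disk device is typically declared along these lines (the
path and names here are guesses; only the directive names are standard):

    # bacula-sd.conf
    Device {
      Name = disk3-dev
      Media Type = File3           # a distinct Media Type per disk keeps jobs on the intended device
      Device Type = File
      Archive Device = /srv/disk3  # hypothetical mount point of the new RAID device
      Label Media = yes
      Random Access = yes
      Automatic Mount = yes
      Removable Media = no
      Always Open = no
    }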