I deleted the volume with VolStatus='Recycle', and then all the problems were
solved.
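For the record, a rough sketch of the steps involved (the volume name is the
one from the listing quoted below; bconsole prompts for confirmation before
deleting):

   # find volumes sitting in VolStatus 'Recycle' via the catalog
   mysql bacula -e "SELECT MediaId, VolumeName FROM Media WHERE VolStatus='Recycle';"
   # then, inside bconsole:
   #   delete volume=san-full1855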
Everything is running fine now. Case closed.
/hans
On 09/29/2014 03:51 PM, Hans Schou wrote:
> I still have this problem, but maybe there is a clue here:
>
> 29-Sep 14:36 bohr-sd JobId 93320: Committ
ol=SanFullPool | bconsole | grep san-full1855
| 1,855 | san-full1855 | Recycle | 1 | 1 | 0 | 1,209,600 | 1 | 0 | 0 | SanFullFile | 2014-08-30 05:42:40 |
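(Assuming this is standard "list media" output, the columns should be MediaId,
VolumeName, VolStatus, Enabled, VolBytes, VolFiles, VolRetention, Recycle,
Slot, InChanger, MediaType and LastWritten; a VolRetention of 1,209,600
seconds is 14 days.)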
Any help is much appreciated.
--
Kind regards
Hans
Case closed for now, and thanks for the help.
--
Kind regards
Hans Schou
MOC ⅍
Grønningen 19 st th
1270 København K
http://moc.net/
tel:+4546923438
Pool Type = Backup
Next Pool = RemoteTape
Recycle = yes
AutoPrune = yes
Maximum Volume Bytes = 5368709120   # 5 GiB
Volume Retention = 7 days
LabelFormat = "admhuset-"
Maximum Volumes = 180
Storage = Admhuset
}
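After editing a Pool resource like this, the catalog's copy of the pool also
needs refreshing; roughly, with the pool name guessed from the LabelFormat:

   echo "reload" | bconsole
   echo "update pool=admhuset" | bconsole   # pool name guessed from LabelFormat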
--
Kind regards
Hans Schou
mysql bacula -e "SELECT COUNT(*) FROM Media WHERE PoolId=3 AND MediaId
  NOT IN (SELECT MediaId FROM JobMedia);"
+----------+
| COUNT(*) |
+----------+
|      328 |
+----------+
Is that query wrong?
I am beginning to run low on disk space, so I cannot just keep adding
volumes.
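To list the volumes themselves rather than just the count (same assumption
that PoolId 3 is the pool that is filling up):

   mysql bacula -e "SELECT VolumeName, VolStatus, LastWritten FROM Media
     WHERE PoolId=3 AND MediaId NOT IN (SELECT MediaId FROM JobMedia)
     ORDER BY LastWritten;"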
Any help is much appreciated.
--
On 07/23/2014 04:30 PM, Thomas Lohman wrote:
>> "StartTime" does not get updated when migrating a job. Is this a bug or
>> is it the way it is supposed to be?
>>
> the start time of the new job gets set to the start time
> of the copied/migrated job or, in the case of a Virtual Full, to the start
> t
Hi
"StartTime" does not get updated when migrating a job. Is this a bug or
is it the way it is supposed to be?
mysql bacula -e "SELECT JobId, PriorJobId, SchedTime, StartTime,
  EndTime, RealEndTime FROM Job WHERE JobId IN (90018,89565)"
erv0047'
AND Client.ClientId=Job.ClientId
AND Job.JobId=File.JobId
AND File.FileIndex > 0
AND Path.PathId=File.PathId
AND Filename.FilenameId=File.FilenameId
AND Filename.Name='resolv.conf'
ORDER BY
StartTime DESC
LIMIT 20;
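For context, this looks like the tail of the usual catalog query for finding
which jobs backed up a given file; a full sketch, with the SELECT list and the
client name assumed since the original is cut off:

   mysql bacula -e "
     SELECT Job.JobId, Job.StartTime, Path.Path, Filename.Name
     FROM Job, Client, File, Path, Filename
     WHERE Client.Name='serv0047'   -- client name assumed; original truncated
       AND Client.ClientId=Job.ClientId
       AND Job.JobId=File.JobId
       AND File.FileIndex > 0
       AND Path.PathId=File.PathId
       AND Filename.FilenameId=File.FilenameId
       AND Filename.Name='resolv.conf'
     ORDER BY StartTime DESC
     LIMIT 20;"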
--
Kind regards
My spool size is set to 50G, but the actual server, which runs alone, has
120G of data.
I will try to set the spool size to more than the amount of data the biggest
client will send, and then I guess that will solve it.
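The knob in question lives in the Device resource in bacula-sd.conf; a sketch
with made-up names and paths:

   Device {
     Name = FileStorage                  # name assumed
     Media Type = File
     Archive Device = /srv/bacula        # path assumed
     Spool Directory = /var/spool/bacula
     Maximum Spool Size = 150G           # larger than the biggest client's data
   }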
--
Kind regards
(Before you ask: the Gantt chart is from a quick-and-dirty Perl script.)
--
Kind regards
Hans Schou
OK, thanks. I got it working now.
I feared that I would have to use two directors, but one can handle it all.
> Another solution is to wait for the SD->SD migration functionality. It is
> on the developers' table.
That would be nice. It is a simpler setup than the NFS solution.
--
Kind regards
Hans Schou
remote location.
My plan was to migrate the oldest full backup each Monday, but it seems
that it is not possible to migrate from one SD to another SD.
Any suggestions are much appreciated.
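(For later readers: migration between two pools on the same SD does work; a
minimal sketch of such a job, with every name assumed:)

   Job {
     Name = "MigrateOldestFull"          # all names in this sketch are assumed
     Type = Migrate
     Pool = SanFullPool                  # source pool; its "Next Pool" is the destination
     Client = bohr-fd                    # required by the syntax, not used for selection
     FileSet = "Full Set"
     Messages = Standard
     Selection Type = OldestVolume
   }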
--
Kind regards
Hans Schou
tel:46923438
Thanks, it works. I thought it could be built in a more elegant way, but it
will do.
/hans
On 09/13/2012 02:57 PM, Gary R. Schmidt wrote:
> On 13/09/2012 10:01 PM, Hans Schou wrote:
>> Hi
>>
>> Although Bacula is quite easy to operate, I have a small problem.
>>
Command = "/etc/backup/dump_all_db"
}
The "FailJobOnError" makes the backup continue despite the error, but
the error report says "OK".
How can I get failure report and also a backup of the data which is OK?
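A sketch of the RunScript block that fragment presumably comes from (the
surrounding directives are assumed; "FailJobOnError = no" is the setting that
lets the job continue when the script fails):

   RunScript {
     RunsWhen = Before                   # assumed; original is cut off
     FailJobOnError = no                 # job keeps running even if the dump script fails
     Command = "/etc/backup/dump_all_db"
   }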
--
Kind regards