I think that just did the trick, Rob.
I really appreciate your persistence; common sense is never common,
especially the first time one tries to accomplish something.
I changed the ArchiveDevice parameter to point to my
/path/to/bacula/archive directory and launched a backup run.
This looks
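For context, a minimal sketch of what a file-backed Storage Daemon Device resource pointing at that directory might look like; only the Archive Device path comes from the message above, while the resource name and the other directives are assumptions:

Device {
  Name = CloudFile                        # assumed name
  Media Type = File
  Archive Device = /path/to/bacula/archive
  Label Media = yes
  Random Access = yes
  Automatic Mount = yes
  Removable Media = no
  Always Open = no
}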
(resending; I deleted the quoted text from previous messages so my message will
stay under the 40 KB limit)
Myles,
1. Basically, I suspect rclone filled its cache and bacula stopped the
backup at that time. My guess is that if you were to run a backup of less
than 1GiB right now in bacula, it would succeed.
Maybe Dropbox or rclone or some combination of the two are limiting you to
1GiB file sizes?
In fact, I see your rclone process has a 1GB cache size limit:
"--vfs-cache-max-size 1G". I bet that in the case of the dd command you ran, we
filled the write cache and then dd exited. If the cache was l
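If that 1G write cache is indeed the bottleneck, a hedged option would be to remount with a larger cache, roughly along these lines (the remote name, mount point, and sizes are assumptions, not taken from the thread):

rclone mount dropbox: /path/to/bacula/archive \
    --vfs-cache-mode writes \
    --vfs-cache-max-size 60G \
    --vfs-cache-max-age 24h \
    --daemon

Note that the VFS cache lives on local disk, so the server's own filesystem needs enough free space to hold whatever gets cached during a backup.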
Myles,
Some thoughts (apologies if I missed something obvious in your GitHub post):
1. I recommend testing your setup to verify that a 50GB file can be stored
the way you think it can. Maybe storage is full. Maybe it is rate-limiting
you. Maybe there is a maximum file size set somewhere. To test
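One plausible way to run that kind of test (the mount path and file name here are only placeholders) is to write a single large file straight onto the rclone mount and watch whether it completes:

dd if=/dev/zero of=/path/to/bacula/archive/bigfile.test bs=1M count=51200 status=progress
ls -lh /path/to/bacula/archive/bigfile.test

If dd stalls or exits around 1 GiB, that would point at the rclone cache or the Dropbox side rather than at Bacula itself.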
Well, I have one file in my Dropbox that is 29.3 GB in length and that synced
around to all my client machines without problem.
On 2023-12-04 5:31 p.m., Chris Wilkinson wrote:
> Does Dropbox have a file size upload limit?
>
> -Chris-
>
> On Mon, 4 Dec 2023, 22:23 MylesDearBusiness via Bacula-u
Does Dropbox have a file size upload limit?
-Chris-
On Mon, 4 Dec 2023, 22:23 MylesDearBusiness via Bacula-users, <
bacula-users@lists.sourceforge.net> wrote:
>
>
> Ok, here goes ...
>
>
> root@c1:~# find / -path /mnt -prune -o -type f -print | grep "Vol-0"
> root@c1:~#
>
>
> root@c1:~# df -h
Hi, Rob,
Thanks for the response.
1.
I'm only using 25% of my 2TB Dropbox account, so I don't expect storage
to be full.
This particular cloud server is tiny, just a single CPU, 50GB storage,
2GB RAM.
The biggest file I managed to write successfully to my rclone/Dropbox
mount is 1GB:
When I
Ok, here goes ...
root@c1:~# find / -path /mnt -prune -o -type f -print | grep "Vol-0"
root@c1:~#
root@c1:~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 941M 0 941M 0% /dev
tmpfs 198M 1.6M 196M 1% /run
/dev/vda1 49G 19G 30G 39% /
tmpfs 986M 20K 986M 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run
On Mon, Jan 26, 2009 at 7:43 PM, Steve Hood wrote:
> Hello everyone-
> I just installed Bacula 2.4.4 on Fedora 9 (64-bit).
> The problem I seem to be having is that when I use bconsole to "run" a job, I
> get the following error messages:
> 26-Jan 04:31 bacula-dir JobId 14: No prior Full backup Job record
> A00046 is in the changer. Its status is Recycle. When I check the Storage
> status I see
>
> Device status:
> Autochanger "Magnum_224" with devices:
>"Ultrium-TD3" (/dev/rmt/0cbn)
> Device "Ultrium-TD3" (/dev/rmt/0cbn) open but no Bacula volume is currently
> mounted.
> Device is BLOCKED
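When a device sits BLOCKED waiting for media even though a Recycle volume (A00046 here) is in the changer, a common first step, sketched below with the storage name taken from the status output, is to refresh the changer's slot information and ask Bacula to mount the drive from bconsole:

*update slots storage=Magnum_224
*mount storage=Magnum_224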
On Mon, Sep 18, 2006 at 02:23:03PM -0400, Yanik Doucet wrote:
> 18-Sep 14:06 bconsole: Fatal error: bnet.c:502 TLS host certificate
> verification failed. Host 192.168.100.6 did not match presented
> certificate
The name on the certificate has to match the hostname that you're telling
bacula to co
I don't think the Windows version of Bacula has TLS compiled in.
Anyway, the problem you have there is that the host address in your
client config is 192... while your certificate has a hostname as its
CN; the host address in your director config must match the CN of the
certificate.
I wrote some
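As a hedged illustration of that point (the resource name, hostname, and file paths below are made up, not from the thread), the Client resource in the Director's configuration would address the FD by the same name that appears in the certificate's CN rather than by its IP:

Client {
  Name = winbox-fd
  Address = winbox.example.com          # must match the CN in the FD's certificate, not 192.168.100.6
  FD Port = 9102
  Catalog = MyCatalog
  Password = "changeme"
  TLS Enable = yes
  TLS Require = yes
  TLS CA Certificate File = /etc/bacula/ssl/ca.pem
}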
Sorry for replying only now...
On 3/29/2006 11:42 AM, david robert wrote:
> Thanks for your reply. I am getting the following error in between the jobs
> >
> > 29-Mar 02:40 backup-sd: NightlySave.2006-03-29_02.40.00 Error:
> > block.c:552 Write error at 8:1 on device /dev/nst0. ERR=Input/output
Thanks for your reply.
> Is MySQL running? When this error occurs, can you connect to the database via mysql(1)?
Yes, MySQL is running; I can connect to the database using mysql, and when I use bconsole with status and select 1, I get only the lines and nothing appears.
> Is this an intermittent
On 27 Apr 2006 at 8:48, david robert wrote:
> Hi Guys,
>
> I am trying to check the director status in my Bacula backup. When I
> use status and select 1 for director, I am getting the following
>
>
> backup1-dir Version: 1.36.2 (28 February 2005) i386-pc-linux-gnu debian 3.1
> Daemon started 2
This is the error message I am getting, exactly:
30-Mar 12:14 backup01-sd: NightlySave01.2006-03-30_11.50.00 Error: block.c:552 Write error at 6:6478 on device /dev/nst0. ERR=Input/output error.
30-Mar 12:14 backup01-sd: NightlySave01.2006-03-30_11.50.00 Error: Backspace record at EOT failed. ERR=
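A write error followed by "Backspace record at EOT failed" is usually worth checking at the drive level; one hedged diagnostic (assuming the usual SD configuration path) is Bacula's btape utility run against the same device:

# stop bacula-sd first so btape gets exclusive access to the drive
btape -c /etc/bacula/bacula-sd.conf /dev/nst0
# then, at the btape prompt:
*test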
Thanks for your help. I don't think it is overriding any job; below is my job definition:
Job {
  Name = "Nightlybackup"
  Type = Backup
  Client = bacman-fd
  FileSet = "Full Set"
  Schedule = "WeeklyCycle"
  Storage = DLT
  Messages = Standard
  Pool
Hello,
On 3/30/2006 9:32 AM, david robert wrote:
I really appreciate your help, and my SD configuration is as follows. I
am new to Bacula; please help me fix this problem, or tell me whether I need to
change any setting in the SD configuration. I am using a Dell DLT 114T tape
drive on Debian Sarge 3.1, kernel ve
I really appreciate your help, and my SD configuration is as follows. I am new to Bacula; please help me fix this problem, or tell me whether I need to change any setting in the SD configuration. I am using a Dell DLT 114T tape drive on Debian Sarge 3.1, kernel version 2.4.27. The tape status will change to Full after 5
Hi,
On 3/29/2006 11:42 AM, david robert wrote:
Thanks for your reply. I am getting the following error in between the jobs
>
> 29-Mar 02:40 backup-sd: NightlySave.2006-03-29_02.40.00 Error:
> block.c:552 Write error at 8:1 on device /dev/nst0. ERR=Input/output
> error.
This is the reason B
Thanks for your reply. I am getting the following error in between the jobs:
29-Mar 02:40 backup-sd: NightlySave.2006-03-29_02.40.00 Error: block.c:552 Write error at 8:1 on device /dev/nst0. ERR=Input/output error.
29-Mar 02:40 backup-sd: NightlySave.2006-03-29_02.40.00 Error: Error writi
Hello,
On 3/29/2006 9:37 AM, david robert wrote:
Hi,
I am running backups for 10 clients, and my volume status is showing Full
after running 3 or 5 jobs. This is a brand new tape, and I don't know why it
is showing Full in the volume status and asking for the next volume to
mount. Please, someone he
On 29 Mar 2006 at 8:37, david robert wrote:
> Hi,
>
> I am running backups for 10 clients, and my volume status is showing
> Full after running 3 or 5 jobs. This is a brand new tape, and I don't
> know why it is showing Full in the volume status and it is asking for
> the next volume to mount. Please so
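One way to start investigating why the volume was marked Full (the pool and volume names below are placeholders) is to look at the volume record from bconsole; a VolBytes figure far below the tape's capacity, or a status of Error, would suggest write errors or a Maximum Volume Bytes limit rather than a genuinely full tape:

*list volumes pool=Default
*llist volume=YourTapeLabel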