Yes, there is no workaround.  I suggest you contact Bacula Systems about it,
because they might not know about the problem.
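
If you do end up rebuilding, a quick sanity check on the resulting driver is to
repeat the earlier objdump test and also look at its dynamic dependencies, e.g.
(libs3 may be linked statically depending on the build, so treat the ldd check
as indicative only):

   objdump -t /opt/bacula/plugins/bacula-sd-cloud-driver-9.6.3.so | grep s3_driver
   ldd /opt/bacula/plugins/bacula-sd-cloud-driver-9.6.3.so | grep -i s3

A correctly built driver should print many s3_driver lines for the first
command.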

__Martin


>>>>> On Thu, 14 May 2020 16:38:06 +0000, Rick Tuk said:

> Hi Martin,
> 
> I installed Bacula through the Ubuntu repository provided by Bacula:
> 
> deb http://bacula.org/packages/<key>/debs/9.6.3/bionic/amd64 bionic main
> 
> Other than building Bacula myself or waiting for a new release, there is
> probably not much that I can do, right?
> 
> Met vriendelijke groet / With kind regards,
> Rick
> 
> > On May 14, 2020, at 6:20 PM, Martin Simmons <mar...@lispworks.com> wrote:
> > 
> > OK, so your bacula-sd-cloud-driver-9.6.3.so is built without S3 support for
> > some reason (there should be many lines mentioning s3_driver).  The output of
> > objdump is also strangely mangled, because all of the references to fil_driver
> > should be file_driver.
> > 
> > It looks like libs3 was not detected when Bacula was built.
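> > 
> > If you do build it yourself, libs3 must be visible at configure time.  A
> > minimal sketch, assuming the libs3 development files are installed and that
> > your version's configure accepts a --with-s3 path option (check
> > ./configure --help; the prefix below is only illustrative):
> > 
> >    ./configure --with-s3=/usr
> >    make && sudo make install
> > 
> > The configuration summary printed at the end should then indicate whether
> > libs3 was found.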
> > 
> > __Martin
> > 
> >>>>>> On Thu, 14 May 2020 15:08:09 +0000, Rick Tuk said:
> >> 
> >> Hi Martin,
> >> 
> >> The Plugin Directory is /opt/bacula/plugins:
> >> 
> >> Storage {
> >>    Name = soteria-sd
> >>    SD Address =  soteria.local.domain
> >>    SD Port = 9103
> >>    Maximum Concurrent Jobs = 20
> >> 
> >>    Working Directory = /opt/bacula/working
> >>    Pid Directory = /opt/bacula/working
> >> 
> >>    Plugin Directory = "/opt/bacula/plugins"
> >> }
> >> 
> >> The output of objdump -t /opt/bacula/plugins/bacula-sd-cloud-driver-9.6.3.so | grep _driver is:
> >> 
> >> 0000000000000000 l    df *ABS*     0000000000000000 s3_driver.c
> >> 0000000000000000 l    df *ABS*     0000000000000000 file_driver.c
> >> 0000000000014f60 l     O .rodata   000000000000004a _ZZN11fil_driver22get_cloud_volumes_listEP3DCRP5alistRPcE19__PRETTY_FUNCTION__
> >> 0000000000014fc0 l     O .rodata   000000000000005c _ZZN11fil_driver27get_cloud_volume_parts_listEP3DCRPKcP5ilistRPcE19__PRETTY_FUNCTION__
> >> 0000000000015100 l     O .rodata   000000000000004c _ZZN11fil_driver10put_objectEP8transferPKcS3_P7bwlimitE19__PRETTY_FUNCTION__
> >> 00000000000150a0 l     O .rodata   0000000000000049 _ZZN11fil_driver19make_cloud_filenameERPcPKcjE19__PRETTY_FUNCTION__
> >> 0000000000015060 l     O .rodata   000000000000003e _ZZN11fil_driver24copy_cache_part_to_cloudEP8transferE19__PRETTY_FUNCTION__
> >> 0000000000015020 l     O .rodata   000000000000003e _ZZN11fil_driver24copy_cloud_part_to_cacheEP8transferE19__PRETTY_FUNCTION__
> >> 0000000000012b90 g     F .text     000000000000010e _ZN11file_driver24copy_cache_part_to_cloudEP8transfer
> >> 0000000000011490 g     F .text     0000000000000088 _ZN11file_driver4initEP3JCRP9cloud_devP6DEVRES
> >> 0000000000012960 g     F .text     000000000000022c _ZN11file_driver21truncate_cloud_volumeEP3DCRPKcP5ilistRPc
> >> 0000000000012090 g     F .text     00000000000007f6 _ZN11file_driver10put_objectEP8transferPKcS3_P7bwlimit
> >> 0000000000217d20  w    O .data.rel.ro      0000000000000068 _ZTV11fil_driver
> >> 0000000000011a60 g     F .text     000000000000062f _ZN11file_driver27get_cloud_volume_parts_listEP3DCRPKcP5ilistRPc
> >> 000000000000f7a0  w    F .text     0000000000000040 _ZN11file_driverD0Ev
> >> 000000000000f7e0  w    F .text     000000000000002c _ZN11file_driverD2Ev
> >> 0000000000010950 g     F .text     0000000000000185 _ZN16transfer_manager8get_xferEmPFPvP8transferEPcPKcjP12cloud_driverP3DCRP11cloud_proxy
> >> 0000000000011580 g     F .text     00000000000004d5 _ZN11file_driver22get_cloud_volumes_listEP3DCRP5alistRPc
> >> 0000000000011520 g     F .text     0000000000000006 _ZN11file_driver10end_of_jobEP3DCR
> >> 0000000000010250 g     F .text     0000000000000154 _ZN8transferC2EmPFPvPS_EPKcS5_jP12cloud_driverP3DCRP11cloud_proxy
> >> 0000000000011520 g     F .text     0000000000000006 _ZN11file_driver4termEP3DCR
> >> 0000000000011530 g     F .text     0000000000000047 _ZN11file_driver12start_of_jobEP3DCR
> >> 0000000000012ca0 g     F .text     00000000000000d3 _ZN11file_driver24copy_cloud_part_to_cacheEP8transfer
> >> 000000000000f7e0  w    F .text     000000000000002c _ZN11file_driverD1Ev
> >> 00000000000128a0 g     F .text     00000000000000bb _ZN11file_driver19make_cloud_filenameERPcPKcj
> >> 0000000000012890 g     F .text     000000000000000c _ZN11file_driver16get_cloud_objectEP8transferPKcS3_
> >> 0000000000010250 g     F .text     0000000000000154 _ZN8transferC1EmPFPvPS_EPKcS5_jP12cloud_driverP3DCRP11cloud_proxy
> >> 0000000000217948  w    O .data.rel.ro      0000000000000068 _ZTV12cloud_driver
> >> 
> >> 
> >> Met vriendelijke groet / With kind regards,
> >> Rick
> >> 
> >>> On May 14, 2020, at 3:19 PM, Martin Simmons <mar...@lispworks.com> wrote:
> >>> 
> >>> It looks to me like S3 support is missing.
> >>> 
> >>> What is the PluginDirectory in your bacula-sd.conf?
> >>> 
> >>> Find the bacula-sd-cloud-driver-9.6.3.so in that directory and post the output of:
> >>> 
> >>> objdump -t /...path.../...to.../bacula-sd-cloud-driver-9.6.3.so | grep _driver
> >>> 
> >>> __Martin
> >>> 
> >>> 
> >>>>>>>> On Thu, 14 May 2020 06:24:53 +0000, Rick Tuk said:
> >>>> 
> >>>> LS,
> >>>> 
> >>>> I read Martin Simmons' reply to Phillip Dale’s message.  The traceback I
> >>>> found did not have much information in it, so I installed gdb and changed
> >>>> the btraceback script to run gdb as root.
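> >>>> 
> >>>> The change amounts to running the debugger with root privileges; the gdb
> >>>> call inside btraceback ends up roughly like this (the script path is from
> >>>> my install, and this is only a sketch of my edit, not the stock script):
> >>>> 
> >>>>    sudo gdb -quiet -batch -x /opt/bacula/scripts/btraceback.gdb $1 $2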
> >>>> 
> >>>> Opening bconsole and listing all volumes in the cloud triggers the same
> >>>> SD crash every time.
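> >>>> 
> >>>> For reference, this is roughly the bconsole command that triggers it
> >>>> (exact arguments as per the cloud command's help; quoted from memory):
> >>>> 
> >>>>    cloud list storage=Full
> >>>> 
> >>>> The traceback: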
> >>>> 
> >>>> [New LWP 5942]
> >>>> [New LWP 6084]
> >>>> [Thread debugging using libthread_db enabled]
> >>>> Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
> >>>> 0x00007ff1e39ca03f in select () from /lib/x86_64-linux-gnu/libc.so.6
> >>>> $1 = "14-May-2020 08:18:11\000\000\000\000\000\000\000\000\000"
> >>>> $2 = 0x56298dfb0ee0 <my_name> "soteria.svc.mostwanted.io-sd"
> >>>> $3 = 0x56298ee4d0e8 "bacula-sd"
> >>>> $4 = 0x56298ee4d128 "/opt/bacula/bin/bacula-sd"
> >>>> $5 = 0x0
> >>>> $6 = '\000' <repeats 49 times>
> >>>> $7 = 0x7ff1e42ad55b "9.6.3 (09 March 2020)"
> >>>> $8 = 0x7ff1e42ad53a "x86_64-pc-linux-gnu"
> >>>> $9 = 0x7ff1e42ad533 "ubuntu"
> >>>> $10 = 0x7ff1e42ad555 "18.04"
> >>>> $11 = "soteria", '\000' <repeats 42 times>
> >>>> $12 = 0x7ff1e42ad54e "ubuntu 18.04"
> >>>> Environment variable "TestName" not defined.
> >>>> #0  0x00007ff1e39ca03f in select () from /lib/x86_64-linux-gnu/libc.so.6
> >>>> #1  0x00007ff1e4269618 in bnet_thread_server (addrs=<optimized out>, max_clients=41, client_wq=0x56298dfb1020 <dird_workq>, handle_client_request=0x56298dd99ee0 <handle_connection_request(void*)>) at bnet_server.c:166
> >>>> #2  0x000056298dd9126a in main (argc=<optimized out>, argv=<optimized out>) at stored.c:326
> >>>> 
> >>>> Thread 3 (Thread 0x7ff1e1e6f700 (LWP 6084)):
> >>>> #0  0x00007ff1e403f23a in waitpid () from /lib/x86_64-linux-gnu/libpthread.so.0
> >>>> #1  0x00007ff1e429453e in signal_handler (sig=11) at signal.c:233
> >>>> #2  <signal handler called>
> >>>> #3  0x00007ff1e0c64787 in cloud_dev::get_cloud_volumes_list (this=<optimized out>, dcr=0x7ff1dc00a138, volumes=0x7ff1e1e6ec50, err=@0x7ff1e1e6ec48: 0x7ff1dc001330 "") at cloud_dev.h:110
> >>>> #4  0x000056298dd95829 in cloud_list_cmd (jcr=<optimized out>) at dircmd.c:815
> >>>> #5  0x000056298dd9a394 in handle_connection_request (arg=0x56298ee57428) at dircmd.c:242
> >>>> #6  0x00007ff1e429f518 in workq_server (arg=0x56298dfb1020 <dird_workq>) at workq.c:372
> >>>> #7  0x00007ff1e40346db in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
> >>>> #8  0x00007ff1e39d488f in clone () from /lib/x86_64-linux-gnu/libc.so.6
> >>>> 
> >>>> Thread 2 (Thread 0x7ff1e166e700 (LWP 5942)):
> >>>> #0  0x00007ff1e403af85 in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib/x86_64-linux-gnu/libpthread.so.0
> >>>> #1  0x00007ff1e429eb56 in watchdog_thread (arg=<optimized out>) at watchdog.c:299
> >>>> #2  0x00007ff1e40346db in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
> >>>> #3  0x00007ff1e39d488f in clone () from /lib/x86_64-linux-gnu/libc.so.6
> >>>> 
> >>>> Thread 1 (Thread 0x7ff1e4b59300 (LWP 5938)):
> >>>> #0  0x00007ff1e39ca03f in select () from /lib/x86_64-linux-gnu/libc.so.6
> >>>> #1  0x00007ff1e4269618 in bnet_thread_server (addrs=<optimized out>, max_clients=41, client_wq=0x56298dfb1020 <dird_workq>, handle_client_request=0x56298dd99ee0 <handle_connection_request(void*)>) at bnet_server.c:166
> >>>> #2  0x000056298dd9126a in main (argc=<optimized out>, argv=<optimized out>) at stored.c:326
> >>>> #0  0x00007ff1e39ca03f in select () from /lib/x86_64-linux-gnu/libc.so.6
> >>>> No symbol table info available.
> >>>> #1  0x00007ff1e4269618 in bnet_thread_server (addrs=<optimized out>, max_clients=41, client_wq=0x56298dfb1020 <dird_workq>, handle_client_request=0x56298dd99ee0 <handle_connection_request(void*)>) at bnet_server.c:166
> >>>> 166      bnet_server.c: No such file or directory.
> >>>> maxfd = 7
> >>>> sockset = {fds_bits = {128, 0 <repeats 15 times>}}
> >>>> newsockfd = <optimized out>
> >>>> stat = <optimized out>
> >>>> clilen = 16
> >>>> clientaddr = {ss_family = 2, __ss_padding = "\262\354\n`\bd\000\000\000\000\000\000\000\000\000\352٠\375\177\000\000ݲ\233\343\361\177\000\000\060\346٠\375\177\000\000\036\260\224\344\361\177\000\000\220\245\265\344\361\177\000\000\060\346٠\375\177\000\000\240\361\377\177\003\000\000\000\307c\224\344\361\177\000\000`\346٠\375\177\000\000\030\352٠\375\177\000\000X\232W\342\361\177\000\000X\232!\000\000\000\000\000\000\200W\342\361\177\000", __ss_align = 1}
> >>>> tlog = <optimized out>
> >>>> turnon = 1
> >>>> request = {fd = 8, user = '\000' <repeats 127 times>, daemon = "soteria.svc.mostwanted.io-sd", '\000' <repeats 99 times>, pid = "5938\000\000\000\000\000", client = {{name = '\000' <repeats 127 times>, addr = '\000' <repeats 127 times>, sin = 0x7ff1e38b27e0, unit = 0x0, request = 0x7ffda0d9e5a0}}, server = {{name = '\000' <repeats 127 times>, addr = '\000' <repeats 127 times>, sin = 0x7ff1e38b2760, unit = 0x0, request = 0x7ffda0d9e5a0}}, sink = 0x0, hostname = 0x7ff1e36aeb30 <sock_hostname>, hostaddr = 0x7ff1e36aeae0 <sock_hostaddr>, cleanup = 0x0, config = 0x0}
> >>>> addr = <optimized out>
> >>>> fd_ptr = 0x0
> >>>> buf = "10.96.8.100", '\000' <repeats 116 times>
> >>>> sockfds = {<SMARTALLOC> = {<No data fields>}, head = 0x7ffda0d9e3d0, tail = 0x7ffda0d9e3d0, loffset = 0, num_items = 1}
> >>>> allbuf = "hM\266\344\361\177\000\000\210\227\224\344\361\177\000\000\020J\266\344\361\177\000\000\020J\266\344\361\177\000\000\000\000\000\000\000\000\000\000\320L%\344\361\177\000\000\033p\351\003\000\000\000\000$iL\344\361\177\000\000\300EL\344\361\177\000\000X\311$\344\361\177\000\000\020J\266\344\361\177\000\000\000\000\000\000\361\177\000\000p\353٠\375\177\000\000\003\000\000\000\361\177\000\000`\353٠\375\177\000\000\000\000\000\000\375\177\000\000ج\265\344\361\177\000\000\000\000\000\000\000\000\000\000\020\000\000\000\000\000\000\000\001\000\000\000\223T`\275ج\265\344\361\177\000\000\301\006\\\372\000\000\000\000\330\301\344\216)V\000\000\326}'\344\361\177\000\000\330\301\344\216)V\000\000\300\313*\344"...
> >>>> #2  0x000056298dd9126a in main (argc=<optimized out>, argv=<optimized out>) at stored.c:326
> >>>> 326      stored.c: No such file or directory.
> >>>> ch = <optimized out>
> >>>> no_signals = <optimized out>
> >>>> thid = 140676853856000
> >>>> uid = 0x0
> >>>> gid = 0x0
> >>>> #0  0x0000000000000000 in ?? ()
> >>>> No symbol table info available.
> >>>> #0  0x0000000000000000 in ?? ()
> >>>> No symbol table info available.
> >>>> #0  0x0000000000000000 in ?? ()
> >>>> No symbol table info available.
> >>>> #0  0x0000000000000000 in ?? ()
> >>>> No symbol table info available.
> >>>> #0  0x0000000000000000 in ?? ()
> >>>> No symbol table info available.
> >>>> Attempt to dump current JCRs. njcrs=1
> >>>> threadid=0x7ff1e1e6f700 JobId=0 JobStatus=C jcr=0x7ff1dc0008f8 
> >>>> name=*System*
> >>>>  use_count=1 killable=1
> >>>>  JobType=I JobLevel=
> >>>>  sched_time=14-May-2020 08:18 start_time=01-Jan-1970 01:00
> >>>>  end_time=01-Jan-1970 01:00 wait_time=01-Jan-1970 01:00
> >>>>  db=(nil) db_batch=(nil) batch_started=0
> >>>> dcr=*None*
> >>>> List plugins. Hook count=0
> >>>> 
> >>>> Met vriendelijke groet / With kind regards,
> >>>> Rick Tuk 
> >>>> 
>>>>> On May 7, 2020, at 1:32 PM, Rick Tuk <r...@mostwanted.io> wrote:
>>>>> 
>>>>> LS,
>>>>> 
>>>>> I am trying to get Bacula 9.6.3 up and running on Ubuntu 18.04, using the
>>>>> bacula-cloud-storage package to store the backups on a Ceph cluster via
>>>>> its S3 interface.
>>>>> All services are running.  When I manually try to run a backup job (in
>>>>> this case a backup of the same host), the job fails with: Fatal error:
>>>>> job.c:3011 Comm error with SD. bad response to Append Data. ERR=No data
>>>>> available
>>>>> When this happens, the SD daemon crashes with the following error: Bacula
>>>>> interrupted by signal 11: Segmentation violation
>>>>> 
>>>>> Configs related to this setup that might be relevant:
>>>>> 
>>>>> bacula-dir:
>>>>> 
>>>>> Storage {
>>>>>  Name = Full
>>>>>  Address = soteria.local.domain
>>>>>  SD Port = 9103
>>>>>  Password = "removed-for-security"
>>>>>  Device = Full
>>>>>  Media Type = CloudType
>>>>> }
>>>>> 
>>>>> Storage {
>>>>>  Name = Diff
>>>>>  Address = soteria.local.domain
>>>>>  SD Port = 9103
>>>>>  Password = "removed-for-security"
>>>>>  Device = Diff
>>>>>  Media Type = CloudType
>>>>> }
>>>>> 
>>>>> Storage {
>>>>>  Name = Inc
>>>>>  Address = soteria.local.domain
>>>>>  SD Port = 9103
>>>>>  Password = "removed-for-security"
>>>>>  Device = Diff
>>>>>  Media Type = CloudType
>>>>> }
>>>>> 
>>>>> Pool {
>>>>>  Name = Daily
>>>>>  Pool Type = Backup
>>>>>  Recycle = yes
>>>>>  AutoPrune = yes
>>>>>  Storage = Inc
>>>>>  File Retention = 1 months
>>>>>  Job Retention = 1 months
>>>>>  Volume Retention = 1 months
>>>>>  Maximum Volume Bytes = 10G
>>>>>  Label Format = daily-
>>>>> }
>>>>> 
>>>>> bacula-sd:
>>>>> 
>>>>> Cloud {
>>>>>  Name = Ceph-S3
>>>>>  Driver = "S3"
>>>>>  HostName = "s3.local.domain"
>>>>>  BucketName = "bacula"
>>>>>  AccessKey = "removed-for-security"
>>>>>  SecretKey = "removed-for-security"
>>>>>  Protocol = HTTPS
>>>>>  UriStyle = Path
>>>>>  Truncate Cache = No
>>>>>  Upload = EachPart
>>>>> }
>>>>> 
>>>>> Device {
>>>>>  Name = Full
>>>>>  Cloud = Ceph-S3
>>>>>  Archive Device = /bacula/backup/full
>>>>>  Device Type = Cloud
>>>>>  Media Type = CloudType
>>>>>  Maximum Part Size = 10 MB
>>>>>  Label Media = yes
>>>>>  Random Access = yes
>>>>>  Automatic Mount = yes
>>>>>  Removable Media = no
>>>>>  Always Open = no
>>>>> }
>>>>> 
>>>>> Device {
>>>>>  Name = Diff
>>>>>  Cloud = Ceph-S3
>>>>>  Archive Device = /bacula/backup/diff
>>>>>  Device Type = Cloud
>>>>>  Media Type = CloudType
>>>>>  Maximum Part Size = 10 MB
>>>>>  Label Media = yes
>>>>>  Random Access = yes
>>>>>  Automatic Mount = yes
>>>>>  Removable Media = no
>>>>>  Always Open = no
>>>>> }
>>>>> 
>>>>> Device {
>>>>>  Name = Inc
>>>>>  Cloud = Ceph-S3
>>>>>  Archive Device = /bacula/backup/inc
>>>>>  Device Type = Cloud
>>>>>  Media Type = CloudType
>>>>>  Maximum Part Size = 10 MB
>>>>>  Label Media = yes
>>>>>  Random Access = yes
>>>>>  Automatic Mount = yes
>>>>>  Removable Media = no
>>>>>  Always Open = no
>>>>> }
>>>>> 
>>>>> If any additional information is required, please let me know; I’m really
>>>>> hoping to get this to work soon.
>>>>> 
>>>>> Met vriendelijke groet / With kind regards,
>>>>> 
>>>>> Rick
> >>>> 
> >>>> 
> >>>> _______________________________________________
> >>>> Bacula-users mailing list
> >>>> Bacula-users@lists.sourceforge.net
> >>>> https://lists.sourceforge.net/lists/listinfo/bacula-users
> >>>> 
> >> 
> >> 
> 
> 


_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
