Hi Heitor and Kern, 

I'm using the S3 compatibility API of Oracle Cloud 
(https://docs.cloud.oracle.com/iaas/Content/Object/Tasks/s3compatibleapi.htm). 
It appeared to work: volume chunks were uploading, the bconsole cloud commands 
were working, and so on. 
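
For reference, these are the kinds of bconsole cloud commands that appeared to 
work (a sketch from memory, using the storage and volume names that appear 
later in the thread): 
--- 
* cloud list storage=ghost-changer 
* cloud list storage=ghost-changer volume=Offsite-0183 
* cloud upload storage=ghost-changer volume=Offsite-0183 
--- 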

I'll have a look at stored/s3_driver.c and see if I can spot a use of an API 
not listed on Oracle's compatibility page. 

Cheers,
Mike

On Sun, May 26, 2019, at 23:22, Heitor Faria wrote:
> Hello All,
> Kern's report is accurate.
> Oracle, Google Storage, Azure and Swift-based clouds have their own storage 
> standards. The Bacula Enterprise version has specific plugins for each one 
> of these.
> S3 is compatible with AWS, Huawei Cloud, CEPH-based storage (using the S3 
> API) and Backblaze. The last one requires an adapter, MinIO.
> Regards,
> On May 26, 2019, at 7:17 AM, Kern Sibbald <k...@sibbald.com> wrote:
>> Hello Mike, 
>> 
>>  Sorry, but I am 100% sure that Oracle cloud is not compatible with Amazon 
>> S3. Bacula was explicitly tested on Oracle (not by me) and it doesn't work. 
>> It requires a special Oracle driver that the community does not have at this 
>> time. 
>> 
>>  Best regards, 
>>  Kern 
>> 
>> 
>> On 5/26/19 12:08 PM, Mike wrote: 
>>> Morning Kern, 
>>> 
>>> Thanks for the quick response. 
>>> 
>>> 1) Oracle Cloud 
>>> 2) I have not tried a restore, but I see this repeatedly, so I have doubts 
>>> about the consistency of the backups. I have verified that the sd service 
>>> is not restarting during the upload. I only copy backups from the local 
>>> stores to the remote stores once a week, but I see it every time; it has 
>>> happened on each run over the past three weeks. 
>>> It feels like a timing issue. 
>>> 
>>> I'm happy to enable any debugging you suggest. As you imply, the files are 
>>> being uploaded, but the driver does not appear to recognize all of the 
>>> uploads (hence the low VolParts). 
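>>> 
>>> For example, I could start with something like this in bconsole (the debug 
>>> level here is an arbitrary guess on my part): 
>>> --- 
>>> * setdebug level=200 trace=1 storage=ghost-changer 
>>> --- 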
>>> 
>>> Appreciate the help, 
>>> Mike
>>> 
>>> On Sun, May 26, 2019, at 19:37, Kern Sibbald wrote: 
>>>> Hello, 
>>>> 
>>>> I have two questions: 
>>>> 
>>>> 1. What cloud server are you using? We support Amazon, or 100% 
>>>> Amazon-compatible, implementations. Not all S3 implementations are the 
>>>> same or compatible with Amazon S3. 
>>>> 
>>>> 2. Have you tried doing a restore? Most of those errors look like they 
>>>> *may* be harmless, in that they indicate that the Volume has more parts 
>>>> than the Catalog reports. This can happen when some of the parts have not 
>>>> been uploaded to the cloud (i.e. they are still in the cache). It could 
>>>> occur if you shut down the SD while uploads are still in progress. 
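>>>> 
>>>> If parts are stuck in the cache, you can normally push them up by hand 
>>>> with the bconsole cloud command, along these lines (substitute your own 
>>>> storage and volume names): 
>>>> --- 
>>>> * cloud upload storage=ghost-changer volume=Offsite-0183 
>>>> --- 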
>>>> 
>>>> Best regards, 
>>>> Kern 
>>>> 
>>>> 
>>>> On 5/26/19 3:39 AM, Mike wrote: 
>>>>> Morning Folks, 
>>>>> 
>>>>> I was keen to use the new cloud storage feature, so I've configured it 
>>>>> as an offsite repository for some backup jobs: the source Pool has a 
>>>>> 'Next Pool' parameter, and a Copy-type job runs with Selection Type = 
>>>>> PoolUncopiedJobs. 
>>>>> 
>>>>> I start a job and it seems to run OK, but I'm seeing dozens of errors 
>>>>> similar to the following, and the data from 'list volumes' is very 
>>>>> different from what is actually present in the S3 bucket. 
>>>>> 
>>>>> --- 
>>>>> 10-May 07:06 ghost-sd JobId 12751: Warning: cloud_dev.c:1748 For Volume 
>>>>> "Offsite-0183": 
>>>>> The Parts do not match! Metadata Volume=16915 Catalog=16889. 
>>>>> Correcting Catalog 
>>>>> --- 
>>>>> Sometimes a single job will get multiple warnings: 
>>>>> --- 
>>>>> 07-May 11:04 ghost-sd JobId 12608: Warning: cloud_dev.c:1748 For Volume 
>>>>> "Offsite-0182": 
>>>>> The Parts do not match! Metadata Volume=6985 Catalog=6711. 
>>>>> The Cloud Parts do not match! Metadata Volume=1817 Catalog=1574. 
>>>>> 
>>>>> Correcting Catalog 
>>>>> 07-May 11:47 ghost-sd JobId 12608: Warning: cloud_dev.c:1748 For Volume 
>>>>> "Offsite-0182": 
>>>>> The Parts do not match! Metadata Volume=10080 Catalog=9986. 
>>>>> The Cloud Parts do not match! Metadata Volume=1953 Catalog=1574. 
>>>>> 
>>>>> Correcting Catalog 
>>>>> 07-May 11:54 ghost-sd JobId 12608: Warning: cloud_dev.c:1748 For Volume 
>>>>> "Offsite-0182": 
>>>>> The Parts do not match! Metadata Volume=10521 Catalog=10480. 
>>>>> The Cloud Parts do not match! Metadata Volume=1976 Catalog=1574. 
>>>>> 
>>>>> Correcting Catalog 
>>>>> --- 
>>>>> 
>>>>> Director: 9.4.2 on Solaris 
>>>>> Storage: 9.4.2 on Solaris (disk based), 9.4.2 on Linux (cloud based) 
>>>>> Clients: 9.4.2 on Solaris, Windows and Linux 
>>>>> 
>>>>> Linux sd (ghost) is using Bacula community RPMs 
>>>>> bacula-libs-9.4.2-1.el7.x86_64 
>>>>> bacula-cloud-storage-9.4.2-1.el7.x86_64 
>>>>> bacula-mysql-9.4.2-1.el7.x86_64 
>>>>> 
>>>>> I expect it's a bug in the new plugin, but I thought I'd see if anyone 
>>>>> had suggestions on how to proceed; it could also be a misconfiguration 
>>>>> on my part. 
>>>>> Note that I have Truncate Cache = AfterUpload, since I already have a 
>>>>> local copy of each job and restoring from the cloud would be a last 
>>>>> resort. 
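>>>>> If I understand the cloud command correctly, the cache for a volume can 
>>>>> also be truncated by hand once its parts have been uploaded, e.g.: 
>>>>> --- 
>>>>> * cloud truncate storage=ghost-changer volume=Offsite-0183 
>>>>> --- 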
>>>>> Thanks for the help 
>>>>> 
>>>>> * list volumes 
>>>>> <SNIP> 
>>>>> Pool: Offsite 
>>>>> +---------+--------------+-----------+---------+---------------+----------+--------------+---------+------+-----------+-----------+---------+----------+---------------------+------------+
>>>>> | MediaId | VolumeName   | VolStatus | Enabled | VolBytes      | VolFiles | VolRetention | Recycle | Slot | InChanger | MediaType | VolType | VolParts | LastWritten         | ExpiresIn  |
>>>>> +---------+--------------+-----------+---------+---------------+----------+--------------+---------+------+-----------+-----------+---------+----------+---------------------+------------+
>>>>> |     181 | Offsite-1    | Full      |       1 |   262,087,761 |        0 |   15,552,000 |       1 |    0 |         0 | CloudType |      14 |    2,866 | 2019-05-05 11:43:59 | 15,011,184 |
>>>>> |     182 | Offsite-0182 | Full      |       1 | 1,073,696,800 |        0 |   15,552,000 |       1 |    0 |         0 | CloudType |      14 |   13,922 | 2019-05-07 12:44:02 | 15,187,587 |
>>>>> |     183 | Offsite-0183 | Append    |       1 |   630,979,806 |        0 |   15,552,000 |       1 |    0 |         0 | CloudType |      14 |   17,827 | 2019-05-11 16:59:43 | 15,548,528 |
>>>>> |     184 | Offsite-0184 | Append    |       1 |             0 |        0 |   15,552,000 |       1 |    0 |         0 | CloudType |      14 |        0 | NULL                | NULL       |
>>>>> |     185 | Offsite-0185 | Append    |       1 |             0 |        0 |   15,552,000 |       1 |    0 |         0 | CloudType |      14 |        0 | NULL                | NULL       |
>>>>> |     186 | Offsite-0186 | Append    |       1 |             0 |        0 |   15,552,000 |       1 |    0 |         0 | CloudType |      14 |        0 | NULL                | NULL       |
>>>>> |     187 | Offsite-0187 | Full      |       1 | 1,073,729,432 |        0 |   15,552,000 |       1 |    0 |         0 | CloudType |      14 |   14,859 | 2019-05-06 18:51:26 | 15,123,231 |
>>>>> +---------+--------------+-----------+---------+---------------+----------+--------------+---------+------+-----------+-----------+---------+----------+---------------------+------------+
>>>>> 
>>>>> Firstly, the VolBytes is very wrong: on the S3 side it is 439.633 GBytes 
>>>>> (472,052,843,913 bytes), which is what I'd expect based on the Copy jobs. 
>>>>> Secondly, the VolParts counts differ from what is actually in the bucket: 
>>>>> 13916 Offsite-0182 
>>>>> 17722 Offsite-0183 
>>>>> 14855 Offsite-0187 
>>>>> 724 Offsite-1 
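>>>>> 
>>>>> (Those per-volume part counts came from listing the part objects in the 
>>>>> bucket itself. If I understand the cloud command correctly, the same 
>>>>> per-volume detail can be pulled from bconsole with the volume= argument: 
>>>>> --- 
>>>>> * cloud list storage=ghost-changer volume=Offsite-0182 
>>>>> --- 
>>>>> The plain "cloud list" below shows only the volume summaries.) 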
>>>>> 
>>>>> * cloud list 
>>>>> <SNIP> 
>>>>> +--------------------+-----------+----------------------+----------------------+---------------+
>>>>> | Volume Name        | Status    | Media Type           | Pool                 | VolBytes      |
>>>>> +--------------------+-----------+----------------------+----------------------+---------------+
>>>>> | Offsite-0182       | Full      | CloudType            | Offsite              | 1.073 GB      |
>>>>> | Offsite-0183       | Append    | CloudType            | Offsite              | 630.9 MB      |
>>>>> | Offsite-0187       | Full      | CloudType            | Offsite              | 1.073 GB      |
>>>>> | Offsite-1          | Full      | CloudType            | Offsite              | 262.0 MB      |
>>>>> +--------------------+-----------+----------------------+----------------------+---------------+
>>>>> 
>>>>> 
>>>>> --- sd --- 
>>>>> Device { 
>>>>>  Name = CloudStorage 
>>>>>  Device Type = Cloud 
>>>>>  Cloud = RealS3 
>>>>>  Archive Device = /opt/bacula/cloud_backups 
>>>>>  Maximum Part Size = 10 MB 
>>>>>  Media Type = CloudType 
>>>>>  LabelMedia = yes 
>>>>>  Random Access = Yes 
>>>>>  AutomaticMount = yes 
>>>>>  RemovableMedia = no 
>>>>>  AlwaysOpen = no 
>>>>> } 
>>>>> Cloud { 
>>>>>  Name = RealS3 
>>>>>  Driver = "S3" 
>>>>>  HostName = <SNIP> 
>>>>>  BucketName = <SNIP> 
>>>>>  AccessKey = <SNIP> 
>>>>>  SecretKey = <SNIP> 
>>>>>  Protocol = HTTPS 
>>>>>  UriStyle = Path 
>>>>>  Truncate Cache = AfterUpload 
>>>>>  Upload = EachPart 
>>>>>  Region = "eu-frankfurt-1" 
>>>>>  MaximumUploadBandwidth = 35MB/s 
>>>>> } 
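>>>>> 
>>>>> The HostName is snipped above; for Oracle's S3 compatibility API it 
>>>>> should, as far as I can tell from their docs, look like the following, 
>>>>> with <namespace> being the tenancy's Object Storage namespace (left as 
>>>>> a placeholder here): 
>>>>> --- 
>>>>> HostName = <namespace>.compat.objectstorage.eu-frankfurt-1.oraclecloud.com 
>>>>> --- 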
>>>>> 
>>>>> --- dir --- 
>>>>> Job { 
>>>>>  Name = job.copyjob.full 
>>>>>  Type = Copy 
>>>>>  Pool = Full-Pool 
>>>>>  Selection Type = PoolUncopiedJobs 
>>>>>  Messages = Standard 
>>>>>  Client = client.fake 
>>>>>  FileSet="none" 
>>>>>  Maximum Concurrent Jobs = 2 
>>>>> } 
>>>>> 
>>>>> Pool { 
>>>>>  Name = Offsite 
>>>>>  Pool Type = Backup 
>>>>>  Recycle = yes 
>>>>>  AutoPrune = yes 
>>>>>  Storage = ghost-changer 
>>>>>  Maximum Volume Jobs = 1 
>>>>>  Volume Retention = 6 months 
>>>>>  Maximum Volumes = 80 
>>>>>  Label Format = Offsite- 
>>>>> } 
>>>>> Pool { 
>>>>>  Name = Full-Pool 
>>>>>  Pool Type = Backup 
>>>>>  Recycle = yes 
>>>>>  AutoPrune = yes 
>>>>>  Volume Retention = 3 months 
>>>>>  Maximum Volume Jobs = 1 
>>>>>  Label Format = Full- 
>>>>>  Maximum Volumes = 60 
>>>>>  Next Pool = "Offsite" 
>>>>>  Storage = File1 
>>>>> } 
>>>>> 
>>>>> Autochanger { 
>>>>>  Name = ghost-changer 
>>>>>  Address = ghost.<SNIP> 
>>>>>  SDPort = 9103 
>>>>>  Password = <SNIP> 
>>>>>  Device = CloudStorage 
>>>>>  Media Type = CloudType 
>>>>>  Autochanger = ghost-changer 
>>>>>  Maximum Concurrent Jobs = 10 
>>>>>  TLS Enable = yes 
>>>>>  TLS Require = no 
>>>>>  TLS CA Certificate File = /etc/bacula/certs/cacert.pem 
>>>>>  TLS Certificate = "<SNIP>" 
>>>>>  TLS Key = "<SNIP>" 
>>>>> } 
>>>>> Autochanger { 
>>>>>  Name = File1 
>>>>>  Address = otherhost.<SNIP> 
>>>>>  SDPort = 9103 
>>>>>  Password = "<SNIP>" 
>>>>>  Device = FileChgr1 
>>>>>  Media Type = File1 
>>>>>  Maximum Concurrent Jobs = 10 
>>>>>  Autochanger = File1 
>>>>>  TLS Enable = yes 
>>>>>  TLS Require = yes 
>>>>>  TLS CA Certificate File = /etc/bacula/certs/cacert.pem 
>>>>>  TLS Certificate = "<SNIP>" 
>>>>>  TLS Key = "<SNIP>" 
>>>>> } 
>>>>> 
>>>>> 
>>>>> 
_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
