Hi Oliver,

To run `s3cmd ws-create`, you need to point it at an RGW that has the
following settings:

rgw_enable_static_website = true
rgw_enable_apis = s3, s3website

You can enable these settings temporarily if you only need to apply that config
once, or leave them enabled indefinitely; a rough sketch of the temporary route
is below. We run our webhosting cluster with s3 and s3website enabled on all
RGWs, largely because we have all our RGW instances behind one VIP, and do so
for customer convenience.
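
For the temporary route, something like the following could work. This is only
a sketch: it assumes a release where `ceph config set` applies to RGW options
(otherwise set them in ceph.conf), and the daemon name client.rgw.gw1, the
hostname gw1.example.com, and the bucket mybucket are placeholders.

# enable the website bits on one RGW instance (placeholder daemon name)
ceph config set client.rgw.gw1 rgw_enable_static_website true
ceph config set client.rgw.gw1 rgw_enable_apis "s3, s3website"
# restart that RGW so the new API list is picked up, then create the
# website configuration on the bucket (path-style, with your usual credentials)
s3cmd --host=gw1.example.com --host-bucket=gw1.example.com ws-create s3://mybucket
# revert the two settings and restart again if you don't want to keep them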

It is possible to expose a bucket that belongs to a tenant as a static website;
however, it cannot be accessed via virtual-hosted-style addressing, so you must
use path-style addressing instead.

Virtual-hosted-style addressing: <bucketname>.<rgw_hostname>
Path-style addressing: <rgw_hostname>/<bucketname>, or for a tenant bucket:
<rgw_hostname>/<tenant>:<bucketname>
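
As a concrete (made-up) example, fetching a page from a tenant bucket exposed
as a website via path style would just be:

# hostname, tenant, bucket and object name are placeholders
curl http://rgw.example.com/testtenant:mybucket/index.html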

Hope this helps.
-Ben

On 10/24/19, 7:38 PM, "Oliver Freyermuth" <freyerm...@physik.uni-bonn.de> 
wrote:

    Dear Cephers,
    
    I have a question concerning static websites with RGW. 
    To my understanding, it is best to run >=1 RGW client for "classic" S3 and,
    in addition, operate >=1 RGW client for website serving (potentially with
    HAProxy or its friends in front; a rough sketch follows below) to prevent a
    mix-up of requests between the two APIs.
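    
    As a rough illustration of that split (hostnames, addresses and backend names
    are all invented), the HAProxy side could look something like:
    
     frontend fe_rgw
         mode http
         bind *:80
         # send known website hostnames to the website-serving RGWs,
         # everything else to the classic S3 RGWs
         acl is_website hdr(host) -i site1.example.com site2.example.com
         use_backend be_rgw_website if is_website
         default_backend be_rgw_s3
    
     backend be_rgw_website
         mode http
         server web-rgw1 192.0.2.10:7480
    
     backend be_rgw_s3
         mode http
         server s3-rgw1 192.0.2.20:7480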
    
    I'd prefer to avoid "*.example.com" entries in DNS if possible. 
    So my current setup has these settings for the "web" RGW client:
     rgw_enable_static_website = true
     rgw_enable_apis = s3website
     rgw_dns_s3website_name = some_value_unused_when_A_records_are_used_pointing_to_the_IP_but_it_needs_to_be_set
    and I create simple A records for each website pointing to the IP of this
    "web" RGW node.
    
    I can easily upload content for those websites to the other RGW instances
    that serve S3, so the S3 and s3website APIs are cleanly separated across
    instances.
    
    However, one issue remains: How do I run
     s3cmd ws-create
    once on each website bucket?
    I can't do that against the "classic" S3-serving RGW nodes: that gives me a
    405 (Method Not Allowed), since they do not have rgw_enable_static_website
    enabled.
    I also cannot run it against the "web S3" nodes, since they do not have the
    S3 API enabled.
    Of course I could enable that, but then the RGW node couldn't cleanly
    disentangle S3 and website requests, since I use A records.
    
    Does somebody have a good idea on how to solve this issue? 
    Setting "rgw_enable_static_website = true" on the S3-serving RGW nodes 
would solve it, but does that have any bad side-effects on their S3 operation? 
    
    Also, if there's an expert on this: exposing a bucket under a tenant as a
    static website is not possible, since the colon (:) can't appear in a DNS
    name, right?
    
    
    In case somebody also wants to set something like this up, here are the 
best docs I could find:
    https://gist.github.com/robbat2/ec0a66eed28e5f0e1ef7018e9c77910c
    and of course:
    
https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/2/html-single/object_gateway_guide_for_red_hat_enterprise_linux/index#configuring_gateways_for_static_web_hosting
    
    
    Cheers,
        Oliver
    
    

_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
