Hello, 

That was indeed my suggestion.
The alternative would be to make sure your Ceph traffic can be routed through 
your public network. But it’s not my infrastructure, and I don’t know what 
data you store, etc…
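
For illustration only, a minimal sketch of that routing, with made-up 
addresses (say the Ceph public network is 10.0.20.0/24 and the next hop on 
your public network is 192.0.2.1):

    # hypothetical addresses; adjust to your own subnets
    ip route add 10.0.20.0/24 via 192.0.2.1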

In either case, you’re making it possible for your tenants to access part of 
your infra (the Ceph cluster that’s also used by OpenStack), so you should 
think twice about the implications (bad neighbours, security intrusions…).
 
Best regards,
JP


On 29/05/2017, 10:27, "fabrice grelaud" <fabrice.grel...@u-bordeaux.fr> wrote:

    Thanks for the answer.
    
    My use case is a file-hosting system like "Seafile", which can use a Ceph 
backend (Swift too, but we don’t deploy Swift on our infra).
    
    The network configuration of our infra is identical to the one in your 
OSA documentation, so on our compute nodes we have two bonded interfaces 
(bond0 and bond1).
    The Ceph VLAN is currently propagated on bond0 (where br-storage is 
attached), which gives our OpenStack its Ceph backend.
    And on bond1, among others, we have br-vlan for our provider VLANs.
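    
    For illustration, that wiring is roughly equivalent to the sketch below 
(VLAN ID 20 stands in for our real Ceph VLAN ID; OSA normally renders this 
via /etc/network/interfaces rather than by hand):
    
        # Ceph VLAN subinterface on bond0, bridged into br-storage
        ip link add link bond0 name bond0.20 type vlan id 20
        ip link add br-storage type bridge
        ip link set bond0.20 master br-storage
        ip link set bond0.20 up && ip link set br-storage up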
    
    If I understood correctly, the solution is to also propagate the Ceph 
VLAN on bond1 at the switch level, and then to create, through Neutron, a 
provider network that our file-hosting software can reach from the tenant.
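    
    To make that concrete, a sketch with the openstack CLI (the network name 
ceph-net, VLAN ID 20, the physical network label "vlan" and the subnet range 
are all placeholders that depend on the local Neutron configuration):
    
        openstack network create ceph-net \
            --provider-network-type vlan \
            --provider-physical-network vlan \
            --provider-segment 20 \
            --no-share
        openstack subnet create ceph-subnet --network ceph-net \
            --subnet-range 10.0.20.0/24 --gateway none --no-dhcp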
    
    Regarding security, would using the Neutron RBAC mechanism to share this 
provider network only with the tenant in question be sufficient?
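    
    In CLI terms, that would be roughly (the target project ID is a 
placeholder, and ceph-net is the hypothetical network from the sketch above):
    
        openstack network rbac create \
            --target-project <seafile_project_id> \
            --action access_as_shared \
            --type network ceph-net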
    
    I’m all ears ;-) if you have another alternative.
    
    Regards,
    Fabrice
    
    
    > On 25 May 2017 at 14:01, Jean-Philippe Evrard <jean-philippe.evr...@rackspace.co.uk> wrote:
    > 
    > I doubt many people have tried this, because 1) cinder/nova/glance 
    > probably do the job well in a multi-tenant fashion, and 2) you’re 
    > poking holes in your Ceph cluster’s security.
    > 
    > Anyway, if you still want it, you would (I guess) have to create a 
    > provider network that is allowed to access your Ceph network.
    > 
    > You can either route it from your current public network, or create 
    > another network. It’s 100% up to you, and not OSA-specific.
    > 
    > Best regards,
    > JP
    > 
    > On 24/05/2017, 15:02, "fabrice grelaud" <fabrice.grel...@u-bordeaux.fr> wrote:
    > 
    >    Hi osa team,
    > 
    >    I have a multinode openstack-ansible deployment, Ocata 15.1.3, with 
    >    Ceph as the backend for Cinder (on our own Ceph infra).
    > 
    >    After creating an instance with a root volume, I would like to mount 
    >    a Ceph block device or CephFS directly in the VM (not a Cinder 
    >    volume). So I want to attach to the VM a new interface that sits in 
    >    the Ceph VLAN.
    >    How can I do that?
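    > 
    >    In other words, once that interface exists, the aim inside the guest 
    >    would be something like this (monitor address, pool, image and 
    >    credential names are placeholders):
    > 
    >        # map an RBD image from inside the guest
    >        rbd map mypool/myimage --name client.seafile
    >        # or mount CephFS
    >        mount -t ceph 10.0.20.10:6789:/ /mnt/cephfs \
    >            -o name=seafile,secretfile=/etc/ceph/seafile.secret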
    > 
    >    We have our Ceph VLAN propagated on the bond0 interface (bond0.xxx 
    >    and br-storage configured as documented) for the OpenStack 
    >    infrastructure.
    > 
    >    Should I propagate this VLAN on the bond1 interface where my br-vlan 
    >    is attached?
    >    Or should I use the existing br-storage where the Ceph VLAN is 
    >    already propagated (bond0.xxx)? And how do I create the Ceph VLAN 
    >    network in Neutron (via Neutron directly, or via Horizon)?
    > 
    >    Has anyone ever experienced this?
    > 
    
    
    

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
