[ceph-users] Re: Out of memory

2019-09-09 Thread Konstantin Shalygin
On 9/2/19 5:32 PM, Sylvain PORTIER wrote:
> Hi, Thank you for your response. I am using Nautilus version. Sylvain PORTIER.

You should decrease OSD memory usage via the `osd_memory_target` option. The default is 4GB.

k
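The advice above can be sketched as shell commands (a minimal sketch, assuming Nautilus's centralized config store; the 2 GiB value is illustrative, not a recommendation):

```shell
# Lower the per-OSD memory target from the 4 GiB default to 2 GiB (value in bytes).
ceph config set osd osd_memory_target 2147483648

# Confirm the value as seen by a specific daemon (osd.0 is illustrative).
ceph config get osd.0 osd_memory_target
```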

[ceph-users] Bucket policies with OpenStack integration and limiting access

2019-09-09 Thread shubjero
Good day,

We have a Ceph cluster, make use of object storage, and integrate with OpenStack. Each OpenStack project/tenant is given a radosgw user, which allows all Keystone users of that project to access the object storage as that single radosgw user. The radosgw user is the project id of the Op
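One way to limit or grant access across the shared-user model described above is an S3 bucket policy, which radosgw supports. A hedged sketch — the bucket name, user id, and the use of s3cmd are assumptions for illustration, not from the thread:

```shell
# Hypothetical policy granting read access on "shared-bucket" to a single
# radosgw user ("other-user" is a placeholder for a real radosgw uid).
cat > policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"AWS": ["arn:aws:iam:::user/other-user"]},
    "Action": ["s3:GetObject", "s3:ListBucket"],
    "Resource": ["arn:aws:s3:::shared-bucket", "arn:aws:s3:::shared-bucket/*"]
  }]
}
EOF

# Attach the policy with s3cmd (any S3 client that can set bucket policies works).
s3cmd setpolicy policy.json s3://shared-bucket
```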

[ceph-users] Re: ceph-iscsi and tcmu-runner RPMs for CentOS?

2019-09-09 Thread Jason Dillaman
On Sat, Sep 7, 2019 at 11:31 AM Robert Sander wrote:
>
> Hi,
>
> In the Documentation on
> https://docs.ceph.com/docs/nautilus/rbd/iscsi-target-cli/ it is stated
> that you need at least CentOS 7.5 with at least kernel 4.16 and to
> install tcmu-runner and ceph-iscsi "from your Linux distribution'
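The prerequisites named in the quoted docs can be checked and installed roughly as follows (a sketch; whether these package names resolve in stock CentOS repos, and which repo to pull them from, is exactly the open question in this thread):

```shell
# The iscsi-target-cli docs call for CentOS 7.5+ with kernel >= 4.16.
uname -r

# Package names as used in the upstream documentation; their availability
# in your distribution's repositories is an assumption.
yum install -y tcmu-runner ceph-iscsi
```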

[ceph-users] Unable to replace OSDs deployed with ceph-volume lvm batch

2019-09-09 Thread Burkhard Linke
Hi,

we had a failing hard disk; I replaced it and now want to create a new OSD on it, but ceph-volume fails under these circumstances. In the original setup, the OSDs were created with ceph-volume lvm batch using a bunch of drives and an NVMe device for the bluestore db. The batch mode uses

[ceph-users] Re: Unable to replace OSDs deployed with ceph-volume lvm batch

2019-09-09 Thread Robert Sander
Hi,

On 09.09.19 17:39, Burkhard Linke wrote:
> # ceph-volume lvm create --bluestore --data /dev/sda --block.db
> /dev/ceph-block-dbs-ea684aa8-544e-4c4a-8664-6cb50b3116b8/osd-block-db-a8f1489a-d97b-479e-b9a7-30fc9fa99cb5

When using an LV on a VG omit the leading /dev/ in the command line arg
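Applied to the command from the thread, the corrected invocation would look like this (line continuation added for readability):

```shell
# Reference the block.db logical volume as vg/lv, without the leading /dev/:
ceph-volume lvm create --bluestore --data /dev/sda \
    --block.db ceph-block-dbs-ea684aa8-544e-4c4a-8664-6cb50b3116b8/osd-block-db-a8f1489a-d97b-479e-b9a7-30fc9fa99cb5
```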

[ceph-users] Re: Unable to replace OSDs deployed with ceph-volume lvm batch

2019-09-09 Thread Burkhard Linke
Hi,

On 9/9/19 5:55 PM, Robert Sander wrote:
> Hi,
>
> On 09.09.19 17:39, Burkhard Linke wrote:
>> # ceph-volume lvm create --bluestore --data /dev/sda --block.db
>> /dev/ceph-block-dbs-ea684aa8-544e-4c4a-8664-6cb50b3116b8/osd-block-db-a8f1489a-d97b-479e-b9a7-30fc9fa99cb5
>
> When using an LV on a VG omit th

[ceph-users] Re: vfs_ceph and permissions

2019-09-09 Thread Konstantin Shalygin
On 9/7/19 8:59 PM, ceph-us...@dxps31.33mail.com wrote:
> [data2]
> browseable = yes
> force create mode = 0660
> force directory mode = 0660
> valid users = @"Domain Users", @"Domain Admins", @"Domain Admins"
> read list =
> write list = @"Domain Users", @"Domain Admins"
> admi

[ceph-users] Re: ceph - openstack - kolla-ansible deployed using docker containers - One OSD is down out of 4 - how can I bring it up

2019-09-09 Thread Reddi Prasad Yendluri
Hi All,

I have implemented a solution/procedure for adding/deploying a new node into the Ceph storage cluster using Kolla-ansible for your existing OpenStack cloud. You can contact me for any further support in this regard.

Thanks,
Reddi Prasad YENDLURI
Cloud Specialist
M +65 8345 9599 | D +65