[ceph-users] Full OSDs on cephfs_metadata pool

2020-03-18 Thread Robert Ruge
Hi All. Nautilus 14.2.8. I came in this morning to find that six of my eight NVMe OSDs housing the cephfs_metadata pool had mysteriously filled up and crashed overnight, and they won't come back up. These OSDs are all single-logical-volume devices with no separate WAL or DB. I have
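For anyone hitting the same situation, a rough first-pass diagnosis might look like the following (a sketch only; it assumes the mons are still reachable and uses the pool name mentioned above):

    # overall pool usage, including the metadata pool
    ceph df detail
    # per-OSD raw usage and which OSDs are down or full
    ceph osd df tree
    # confirm which OSDs actually hold the metadata PGs
    ceph pg ls-by-pool cephfs_metadata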

[ceph-users] Ceph pool quotas

2020-03-18 Thread Stolte, Felix
Hey guys, a short question about pool quotas. Do they apply to the stats attribute “stored” or to “bytes_used”, i.e. is replication counted or not? Regards Felix IT-Services Telefon 02461 61-9243 E-Mail: f.sto...@fz-juelich.de --
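As an illustration of where the quota is set and where the two counters show up (a hedged sketch; "mypool" is a placeholder):

    # set a 100 GiB quota on the pool
    ceph osd pool set-quota mypool max_bytes 107374182400
    # show the configured quotas
    ceph osd pool get-quota mypool
    # compare the STORED vs USED columns the question refers to
    ceph df detail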

[ceph-users] OSDs continuously restarting under load

2020-03-18 Thread huxia...@horebdata.cn
Hello, folks, I am trying to add a ceph node to an existing ceph cluster. Once the reweight of a newly-added OSD on the new node exceeds roughly 0.4, the OSD becomes unresponsive and keeps restarting, eventually going down. What could be the problem? Any suggestion would be highly appreciated. best
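A cautious way to bring a new OSD in, assuming the crashes are load-related, is to raise the CRUSH weight in small steps and let the cluster settle in between (a sketch; osd.42 and the weights are placeholders):

    ceph osd crush reweight osd.42 0.2
    ceph -s                          # wait for backfill to finish
    ceph osd crush reweight osd.42 0.4
    ceph -s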

[ceph-users] Re: New Ceph Cluster Setup

2020-03-18 Thread Eugen Block
Hi, 1) Will create the ceph cluster with two OSD nodes by setting "osd pool default size = 2"; after that we can add the third node to the live cluster and change the replication factor of the pool from 2 to 3 by doing "ceph osd pool set <pool> size 3". I hope it's just a test cluster with size 2, don't
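For reference, the size change described above would look roughly like this once the third node is in (a sketch; "mypool" is a placeholder):

    ceph osd pool set mypool size 3
    ceph osd pool set mypool min_size 2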

[ceph-users] New Ceph Cluster Setup

2020-03-18 Thread adhobale8
Dear Team, We are creating a new ceph cluster with three OSD nodes; each node will have 38TB of disk space. Unfortunately we only have two servers at the moment; the third server will be delivered after 3 weeks. I need your help to plan the cluster setup; please suggest which approach will be the right one to init
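A minimal sketch of the bootstrap settings being discussed, assuming the cluster really does start with only two OSD nodes (values are illustrative, not a recommendation):

    [global]
    osd pool default size = 2      # two copies while only two nodes exist
    osd pool default min_size = 1  # risky: allows I/O with a single remaining copy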

[ceph-users] March Ceph Science User Group Virtual Meeting

2020-03-18 Thread Kevin Hrpcek
Hello, We will be having a Ceph science/research/big cluster call on Wednesday March 25th. If anyone wants to discuss something specific they can add it to the pad linked below. If you have questions or comments you can contact me. This is an informal open call of community members mostly fr

[ceph-users] Re: bluefs enospc

2020-03-18 Thread Derek Yarnell
Hi Igor, I just want to thank you for taking the time to help with this issue. On 3/18/20 5:30 AM, Igor Fedotov wrote: >>> Most probably you will need an additional 30GB of free space per OSD >>> if going this way. So please let me know if you can afford this. >> Well I had already increased 70
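On an LVM-backed OSD, the "additional 30GB per OSD" step would look roughly like this (a sketch assuming the volume group has free extents; the VG/LV names and the OSD id are placeholders):

    systemctl stop ceph-osd@681                    # OSD must not be running
    lvextend -L +30G /dev/ceph-vg/osd-block-681    # grow the backing LV
    ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-681
    systemctl start ceph-osd@681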

[ceph-users] Object storage multisite

2020-03-18 Thread Ignazio Cassano
Hello, I have two openstack installations on different sites. They do not share any services: each one has its own keystone repository and its own ceph cluster with object storage and block storage. I read about object storage multisite and I could modify my object storages to enable multisite active-active.
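Enabling multisite on an existing RGW setup boils down to defining a shared realm and zonegroup and one zone per site, roughly like this on the master site (a sketch; realm, zonegroup and zone names and the endpoints are placeholders, and existing buckets need a careful migration plan):

    radosgw-admin realm create --rgw-realm=myrealm --default
    radosgw-admin zonegroup create --rgw-zonegroup=global --master --default --endpoints=http://rgw-site1:8080
    radosgw-admin zone create --rgw-zonegroup=global --rgw-zone=site1 --master --default --endpoints=http://rgw-site1:8080
    radosgw-admin period update --commit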

[ceph-users] ceph object storage client gui

2020-03-18 Thread Ignazio Cassano
Hello All, I am looking for a free/opensource object storage client GUI (Linux and Windows) for end users. I tried SwiftStack but it is only for personal use. Help, please? Ignazio

[ceph-users] Re: bluefs enospc

2020-03-18 Thread Igor Fedotov
Hi Derek, On 3/16/2020 7:17 PM, Derek Yarnell wrote: Hi Igor, On 3/16/20 10:34 AM, Igor Fedotov wrote: I can suggest the following non-straightforward way for now: 1) Check osd startup log for the following line: 2020-03-15 14:43:27.845 7f41bb6baa80  1 bluestore(/var/lib/ceph/osd/ceph-681) _

[ceph-users] Re: v14.2.8 Nautilus released

2020-03-18 Thread Dietmar Rieder
On 2020-03-17 14:28, Jake Grimmett wrote: >> Is it possible to reconfigure a filesystem with a default EC pool to a >> default replicated pool? >> > > Patrick Donnelly answered this question on 3/4/20 > > "You must create a new file system at this time. Someday we would like > to change this but
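Following Patrick's answer, creating a second filesystem with a replicated default data pool would look roughly like this (a sketch; pool names, PG counts and the fs name are placeholders):

    ceph fs flag set enable_multiple true --yes-i-really-mean-it
    ceph osd pool create cephfs2_metadata 32
    ceph osd pool create cephfs2_data 128
    ceph fs new cephfs2 cephfs2_metadata cephfs2_data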