Re: [ceph-users] Were fixed CephFS lock ups when it's running on nodes with OSDs?

2019-04-23 Thread Dan van der Ster
On Mon, 22 Apr 2019, 22:20 Gregory Farnum wrote: > On Sat, Apr 20, 2019 at 9:29 AM Igor Podlesny wrote: > > > > I remember seeing reports in this regard, but it's been a while now. > > Can anyone tell? > > No, this hasn't changed. It's unlikely it ever will; I think NFS > resolved the issue but it

Re: [ceph-users] Were fixed CephFS lock ups when it's running on nodes with OSDs?

2019-04-23 Thread Patrick Hein
I am running a Ceph cluster on 5 servers, all with a single OSD and acting as a (kernel) client, for nearly half a year now and haven't encountered a lockup yet. Total storage is 3.25TB with about 600GB raw storage used, if that matters. Dan van der Ster wrote on Tue., 23 Apr. 2019, 09:33: > On

Re: [ceph-users] Osd update from 12.2.11 to 12.2.12

2019-04-23 Thread Marc Roos
I have only this in the default section; I think it is related to not having any configuration for some of these OSDs. I 'forgot' to add the [osd.x] sections for the most recently added node. But in any case, nothing that afaik should make them behave differently. [osd] osd journal size = 1024 osd p
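
A minimal ceph.conf sketch of the layout being described, shared [osd] defaults plus a per-OSD override section (the osd.24 id and the values shown are placeholders, not taken from the original message):

  [osd]
  # defaults applied to every OSD that has no more specific section
  osd journal size = 1024

  [osd.24]
  # per-OSD override on a recently added node; OSDs without such a
  # section simply fall back to the [osd] defaults above
  osd recovery max active = 2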

Re: [ceph-users] Ceph inside Docker containers inside VirtualBox

2019-04-23 Thread Marc Roos
I am not sure about your background knowledge of Ceph, but if you are just starting: maybe first try to get Ceph working in a virtual environment; that should not be too much of a problem. Then try migrating it to your container. Right now you are probably fighting too many issues at the same time.

Re: [ceph-users] Bluestore with so many small files

2019-04-23 Thread Frédéric Nass
Hi, You probably forgot to recreate the OSD after changing bluestore_min_alloc_size. Regards, Frédéric. - On 22 Apr 19, at 5:41, 刘 俊 wrote: > Hi All , > I still see this issue with the latest Ceph Luminous 12.2.11 and 12.2.12. > I have set bluestore_min_alloc_size = 4096 before the tes
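
bluestore_min_alloc_size is only read at OSD creation (mkfs) time, so changing it in ceph.conf has no effect on existing OSDs. A hedged sketch of one possible recreate cycle on a Luminous-era deployment (osd id 7 and /dev/sdX are placeholders):

  # set the new value before recreating the OSD
  [osd]
  bluestore_min_alloc_size = 4096

  # then destroy and recreate the OSD so mkfs picks it up
  ceph osd out 7
  systemctl stop ceph-osd@7
  ceph osd purge 7 --yes-i-really-mean-it
  ceph-volume lvm zap /dev/sdX --destroy
  ceph-volume lvm create --bluestore --data /dev/sdX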

[ceph-users] Recovery 13.2.5 Slow

2019-04-23 Thread Andrew Cassera
Hello, I have a cluster with 6 OSD nodes, each with 10 SATA 8TB drives. Node 6 was just added. All nodes are on a 10Gbps network with jumbo frames. S3 application access is working as expected but recovery is extremely slow. Based on past posts I attempted to do the following: Alter the osd_r
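
The preview cuts off, but the usual knobs for recovery speed on a 13.2.x cluster are osd_max_backfills and osd_recovery_max_active. A hedged sketch of raising them at runtime (the values are examples, not recommendations):

  # raise recovery/backfill parallelism at runtime
  ceph tell osd.* injectargs '--osd_max_backfills=4 --osd_recovery_max_active=8'
  # on 13.2.x the central config store also works and persists across restarts
  ceph config set osd osd_max_backfills 4
  ceph config set osd osd_recovery_max_active 8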

Re: [ceph-users] Default Pools

2019-04-23 Thread David Turner
You should be able to see all pools in use in an RGW zone from the radosgw-admin command. This [1] is probably overkill for most, but I deal with multi-realm clusters so I generally think like this when dealing with RGW. Running this as is will create a file in your current directory for each zone
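
The script itself is cut off in the preview, but a minimal sketch of the same idea, dumping the pool names a zone references (the zone name "default" is a placeholder):

  # enumerate the zones, then pick out the pool names from each zone's config
  radosgw-admin zone list
  radosgw-admin zone get --rgw-zone=default | grep pool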

Re: [ceph-users] showing active config settings

2019-04-23 Thread solarflow99
Thanks, but does this not work on Luminous maybe? I am on the mon hosts trying this:
  # ceph config set osd osd_recovery_max_active 4
  Invalid command: unused arguments: [u'4']
  config set : Set a configuration option at runtime (not persistent)
  Error EINVAL: invalid command
  # ceph daemon osd.0
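
The centralized "ceph config set" store only arrived in Mimic, so on Luminous this command is expected to fail. A hedged sketch of the Luminous-era equivalents (the value 4 and osd.0 are examples):

  # apply to all OSDs at runtime via injectargs
  ceph tell osd.* injectargs '--osd_recovery_max_active=4'
  # or per daemon, via the admin socket on the OSD host
  ceph daemon osd.0 config set osd_recovery_max_active 4
  # read the current value back
  ceph daemon osd.0 config get osd_recovery_max_active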

Re: [ceph-users] ceph-iscsi: problem when discovery auth is disabled, but gateway receives auth requests

2019-04-23 Thread Mike Christie
On 04/18/2019 06:24 AM, Matthias Leopold wrote: > Hi, > > the Ceph iSCSI gateway has a problem when receiving discovery auth > requests when discovery auth is not enabled. Target discovery fails in > this case (see below). This is especially annoying with oVirt (KVM > management platform) where yo
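
The gateway-side discussion is truncated here; for reproducing the problem from the initiator side, a hedged sketch with open-iscsi, comparing discovery with and without CHAP (the gateway IP and credentials are placeholders):

  # plain discovery, no CHAP offered
  iscsiadm -m discovery -t sendtargets -p 192.168.0.10:3260
  # discovery with CHAP, to compare how the gateway responds
  iscsiadm -m discoverydb -t sendtargets -p 192.168.0.10:3260 -o new
  iscsiadm -m discoverydb -t sendtargets -p 192.168.0.10:3260 \
      -o update -n discovery.sendtargets.auth.authmethod -v CHAP
  iscsiadm -m discoverydb -t sendtargets -p 192.168.0.10:3260 \
      -o update -n discovery.sendtargets.auth.username -v myuser
  iscsiadm -m discoverydb -t sendtargets -p 192.168.0.10:3260 \
      -o update -n discovery.sendtargets.auth.password -v mypassword
  iscsiadm -m discoverydb -t sendtargets -p 192.168.0.10:3260 --discover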

[ceph-users] How to minimize IO starvations while Bluestore try to delete WAL files

2019-04-23 Thread I Gede Iswara Darmawan
Hello, Recently I have had an issue: when Bluestore tries to delete WAL files (indicated by the OSD log), the IO of the disk (HDD, spinning) reaches 100% and introduces slow requests to the cluster. Is there any way to throttle this operation down or completely disable it? Thanks Regards, I Ge
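
Before tuning anything, a hedged way of confirming that the IO spikes line up with BlueFS/RocksDB activity on the affected OSD (osd id 3 is a placeholder, and jq is assumed to be installed):

  # BlueFS and RocksDB counters from the admin socket
  ceph daemon osd.3 perf dump | jq '.bluefs, .rocksdb'
  # current bluefs/bluestore throttle-related settings
  ceph daemon osd.3 config show | grep -E 'bluefs|bluestore_throttle'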

Re: [ceph-users] Ceph inside Docker containers inside VirtualBox

2019-04-23 Thread Varun Singh
On Tue, Apr 23, 2019 at 2:58 PM Marc Roos wrote: > > > > I am not sure about your background knowledge of Ceph, but if you are > just starting: maybe first try to get Ceph working in a virtual environment; > that should not be too much of a problem. Then try migrating it to your > container. Right now you ar

[ceph-users] getting pg inconsistent periodly

2019-04-23 Thread Zhenshi Zhou
Hi, I've been running a cluster for a period of time. Recently I find the cluster usually runs into an unhealthy state. With 'ceph health detail', one or two PGs are inconsistent. What's more, the PGs in the wrong state are not on the same disk each day, so I don't think it's a disk problem. The cluster
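
A minimal sketch of the usual triage steps for an inconsistent PG (the pg id 2.1f is a placeholder):

  # identify the inconsistent PG(s)
  ceph health detail | grep inconsistent
  # see which objects and shards the scrub flagged, and on which OSDs
  rados list-inconsistent-obj 2.1f --format=json-pretty
  # once the cause is understood, repair the PG
  ceph pg repair 2.1f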