[ceph-users] Re: Recovering from a Failed Disk (replication 1)

2019-10-18 Thread vladimir franciz blando
Even when the OSD is down I can still access its contents; looks like I need to check out ceph-objectstore-tool.

# id  weight  type name             up/down  reweight
-1    98.44   root default
-2    32.82       host ceph-node-1
 0     3.64           osd.0         up       1
 1     3.64
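As a rough sketch (not from the thread) of what inspecting a down OSD with ceph-objectstore-tool can look like, assuming the usual data path and a made-up PG id, and with the OSD service stopped first:

  systemctl stop ceph-osd@0
  # list the PGs still present on the down OSD's object store
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --op list-pgs
  # export one PG to a file so it can later be imported into a healthy OSD
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 1.2f --op export --file /tmp/pg1.2f.export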

[ceph-users] rgw multisite failover

2019-10-18 Thread Frank R
I am looking to change an RGW multisite deployment so that the secondary will become the master. This is meant to be a permanent change. Per https://docs.ceph.com/docs/mimic/radosgw/multisite/ I need to: 1. Stop the RGW daemons on the current master end. Then, on a secondary RGW node: 2. radosgw-admin zone
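For reference, the documented failover procedure boils down to roughly the following on the secondary zone (the zone name is a placeholder; treat this as a sketch rather than the thread's exact commands):

  # promote the secondary zone to master and make it the default zone
  radosgw-admin zone modify --rgw-zone=us-west --master --default
  # commit the updated period so the change propagates
  radosgw-admin period update --commit
  # restart the RGW daemons in the newly promoted zone
  systemctl restart ceph-radosgw.target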

[ceph-users] ceph balancer do not start

2019-10-18 Thread Jan Peters
Hello, I use Ceph 12.2.12 and would like to activate the ceph balancer. Unfortunately, no redistribution of the PGs is started:

ceph balancer status
{
    "active": true,
    "plans": [],
    "mode": "crush-compat"
}

ceph balancer eval
current cluster score 0.023776 (lower is better)

ceph conf
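If crush-compat mode never produces a plan, one commonly suggested alternative is upmap mode; a sketch of switching over, assuming all clients are Luminous or newer:

  ceph osd set-require-min-compat-client luminous
  ceph balancer mode upmap
  ceph balancer on
  ceph balancer status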

[ceph-users] Re: CephFS and 32-bit Inode Numbers

2019-10-18 Thread Darrell Enns
Does your 32-bit application actually use the inode numbers? Or is it just trying to read other metadata (such as filenames in a directory, file sizes, etc)? If it's the latter, you could use LD_PRELOAD to wrap the calls and return fake/mangled inode numbers (since the application doesn't care a
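A minimal sketch of how such a wrapper would be used, assuming a hypothetical shim source fake_ino.c that intercepts the stat() family and truncates inode numbers to 32 bits (the file and application names here are invented for illustration):

  # build the shim as a 32-bit shared object
  gcc -m32 -shared -fPIC -o fake_ino.so fake_ino.c -ldl
  # run the legacy 32-bit application with the shim preloaded
  LD_PRELOAD=$PWD/fake_ino.so ./legacy-32bit-app /mnt/cephfs/data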

[ceph-users] OSD node suddenly slow to responding cmd

2019-10-18 Thread Amudhan P
Hi, I am using a Ceph Nautilus cluster and found that one of my OSD nodes, running 3 OSD services, suddenly went down and became very slow to respond to typed commands. I killed the ceph-osd processes, the system returned to normal, and I started all the OSD services again. After that it behaved normally. I figured out that due to low memory the sys
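If memory pressure is the culprit, one knob worth checking on Nautilus (a sketch, not necessarily the fix for this node) is the per-OSD memory target:

  # show the current target for one OSD (the default is 4 GiB)
  ceph config get osd.0 osd_memory_target
  # lower it for all OSDs, e.g. to 2 GiB, on memory-constrained hosts
  ceph config set osd osd_memory_target 2147483648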

[ceph-users] mds log showing msg with HANGUP

2019-10-18 Thread Amudhan P
Hi, I am getting the below error message in my Ceph Nautilus cluster; do I need to worry about this?

Oct 14 06:25:02 mon01 ceph-mds[35067]: 2019-10-14 06:25:02.209 7f55a4c48700 -1 received signal: Hangup from killall -q -1 ceph-mon ceph-mgr ceph-mds ceph-osd ceph-fuse
Oct 14 06:25:02 mon01 ceph-mds[35067]:
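That SIGHUP normally comes from log rotation rather than a fault: the killall line in the message is the postrotate hook shipped with the Ceph packages. Assuming the stock packaging, this should confirm where it originates:

  grep -A5 postrotate /etc/logrotate.d/ceph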

[ceph-users] Re: Change device class in EC profile

2019-10-18 Thread Frank Schilder
Hi Maks, thanks for looking at this. Unfortunately, this does not answer the question. After steps 1-3 you are exactly in the same situation as I am, namely that the profile attached to the pool is outdated and, therefore, contains invalid information that will be confusing. To check, execute
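The truncated check is presumably along these lines (the pool name is a placeholder; this is a guess at the intent, not the author's exact commands):

  # which EC profile the pool still references
  ceph osd pool get sr-ec-pool erasure_code_profile
  # the profile contents, which still carry the old crush-device-class
  ceph osd erasure-code-profile get sr-ec-6-2-hdd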

[ceph-users] Re: iscsi gate install

2019-10-18 Thread Fyodor Ustinov
Hi! Thank you very much! It remains to understand why this link is not in the documentation. :)

- Original Message -
> From: "Torben Hørup"
> To: "Fyodor Ustinov"
> Cc: "ceph-users"
> Sent: Friday, 18 October, 2019 15:03:09
> Subject: Re: [ceph-users] iscsi gate install
> Take a look

[ceph-users] Re: iscsi gate install

2019-10-18 Thread Torben Hørup
Take a look at https://shaman.ceph.com/repos/tcmu-runner/

/Torben

On 18.10.2019 11:55, Fyodor Ustinov wrote:
> Hi! The Ceph documentation requires "tcmu-runner-1.4.0 or newer package", but I cannot find this package for CentOS. Maybe someone knows where to download this package? WBR, Fyodor.

[ceph-users] Change device class in EC profile

2019-10-18 Thread Frank Schilder
I recently moved an EC pool from HDD to SSD by changing the device class in the crush rule. I would like to complete this operation by cleaning up a dirty trail. The EC profile attached to this pool is called sr-ec-6-2-hdd and it is easy enough to rename that to sr-ec-6-2-ssd. However, the profi
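One way to get a profile under the new name is to create it from scratch with the SSD device class; a sketch, with k/m taken from the profile name and the failure domain assumed to be host:

  ceph osd erasure-code-profile set sr-ec-6-2-ssd k=6 m=2 crush-device-class=ssd crush-failure-domain=host
  ceph osd erasure-code-profile get sr-ec-6-2-ssd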

[ceph-users] iscsi gate install

2019-10-18 Thread Fyodor Ustinov
Hi! The Ceph documentation requires "tcmu-runner-1.4.0 or newer package", but I cannot find this package for CentOS. Maybe someone knows where to download this package? WBR, Fyodor.

[ceph-users] Re: Monitor unable to join existing cluster, stuck at probing

2019-10-18 Thread Mathijs Smit
Thank you for taking the time to reply to my issue. I have increased the log level to 10/10 for both the messenger and monitor debug and see the following pattern return in the logs. However, I do not understand the very verbose log output well enough to deduce the problem. May I again ask for adv
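For context, raising those debug levels on the probing monitor is typically done either in its ceph.conf or through the admin socket, roughly like this (the monitor id is a placeholder):

  # /etc/ceph/ceph.conf on the stuck monitor
  [mon]
      debug mon = 10
      debug ms = 10

  # or at runtime via the admin socket
  ceph daemon mon.node4 config set debug_mon 10
  ceph daemon mon.node4 config set debug_ms 10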