Re: [ceph-users] kraken-bluestore 11.2.0 memory leak issue

2017-03-29 Thread nokia ceph
Hello, we manually fixed the issue and below is our analysis. Due to high CPU utilisation we stopped ceph-mgr on all our clusters. On one of our clusters we saw high memory usage by OSDs, some greater than 5 GB, causing OOM and resulting in process kills. The memory was released immediately when the

[ceph-users] (no subject)

2017-03-29 Thread Лузин Дмитрий
help ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] MDS Read-Only state in production CephFS

2017-03-29 Thread John Spray
On Wed, Mar 29, 2017 at 12:59 AM, Brady Deetz wrote: > That worked for us! > > Thank you very much for throwing that together in such a short time. > > How can I buy you a beer? Bitcoin? No problem, I appreciate the testing. John > > On Mar 28, 2017 4:13 PM, "John Spray" wrote: >> >> On Tue, M

Re: [ceph-users] kraken-bluestore 11.2.0 memory leak issue

2017-03-29 Thread John Spray
I think it could be because of this: http://tracker.ceph.com/issues/19407 The clients were meant to stop trying to send reports to the mgr when it goes offline, but the monitor may not have been correctly updating the mgr map to inform clients that the active mgr had gone offline. John On Wed, M

Re: [ceph-users] MDS Read-Only state in production CephFS

2017-03-29 Thread John Spray
Yes, it should just be a question of deleting them. When I tried it here, I found that nothing in the deletion path objected to the non-existence of the data pool, so it shouldn't complain. If you want to make sure it's safe to subsequently install jewel releases that might not have the fix, then

Re: [ceph-users] cephfs and erasure coding

2017-03-29 Thread Wido den Hollander
> On 29 March 2017 at 8:54, Konstantin Shalygin wrote: > > > Hello. > > How are your tests going? I'm looking at CephFS with EC to save space on > replicas for many small files (dovecot mailboxes). I wouldn't use CephFS for so many small files. Dovecot will do a lot of locking, opening and closing

[ceph-users] Ceph 12.0.0/master + DPDK 16.11.1 -> compilation failed

2017-03-29 Thread Aynur Shakirov
Hello all! Meta: OS: Ubuntu 16.04.1 up-to-date Kernel: 4.8.0-42-generic GCC: 5.4.0 (ubuntu) Compiler flags: -O2, march=native or broadwell, -j 4 Ceph: 12.0.0, master (from git) DPDK: 16.11.1 (ubuntu or upstream, not a submodule) Description. I want to use the DPDK messenger for Ceph, but upstre

Re: [ceph-users] cephfs and erasure coding

2017-03-29 Thread Konstantin Shalygin
Thanks for the notice. On the dovecot mailing list, https://dovecot.org/pipermail/dovecot/2016-August/105210.html reports successful usage of CephFS for 30-40k users, with replicas, not EC. On 03/29/2017 08:19 PM, Wido den Hollander wrote: I wouldn't use CephFS for so many small files. Dovecot will do

[ceph-users] CephFS: ceph-fuse segfaults

2017-03-29 Thread Andras Pataki
Below is a crash we had on a few machines with the ceph-fuse client on the latest Jewel release 10.2.6. A total of 5 ceph-fuse processes crashed more or less the same way at different times. The full logs are at http://voms.simonsfoundation.org:50013/9SXnEpflYPmE6UhM9EgOR3us341eqym/ceph-20170

Re: [ceph-users] radosgw global quotas - how to set in jewel?

2017-03-29 Thread Casey Bodley
Hi Graham, you're absolutely right. In jewel, these settings were moved into the period, but radosgw-admin doesn't have any commands to modify them. I opened a tracker issue for this at http://tracker.ceph.com/issues/19409. For now, it looks like you're stuck with the 'default quota' settings i
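As context for the 'default quota' workaround Casey mentions, here is a minimal ceph.conf sketch of the jewel-era default-quota options. The section name and values are illustrative assumptions, not from the thread; check your release's documentation for the size units before relying on them.

```ini
; ceph.conf fragment — hypothetical gateway instance name
[client.rgw.gateway]
; applied to newly created buckets/users; negative values disable the quota
rgw bucket default quota max objects = 100000
rgw bucket default quota max size = 10737418240
rgw user default quota max objects = 1000000
rgw user default quota max size = 107374182400
```

Unlike the period-level global quotas, these defaults only take effect for users and buckets created after the setting is in place.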

[ceph-users] Client's read affinity

2017-03-29 Thread Alejandro Comisario
Guys, hi. I have a Jewel cluster divided into two racks, which is configured in the crush map. I have clients (openstack compute nodes) that are closer to one rack than to the other. I would love (if possible) to specify in some way that the clients read first from the nodes on a specific rack t

[ceph-users] Troubleshooting incomplete PG's

2017-03-29 Thread nokia ceph
Hello, Env:- 5 node, EC 4+1 bluestore kraken v11.2.0, RHEL7.2. As part of our resiliency testing with kraken bluestore, we found many PGs in the incomplete+remapped state. We tried to repair each PG using "ceph pg repair " with no luck. Then we planned to remove the incomplete PGs using the below proc
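A hedged sketch of the usual triage sequence for this situation; the PG id, OSD number, and paths below are placeholders, not taken from the thread, and the commands require a live cluster.

```shell
# Inspect why the PG is incomplete before doing anything destructive:
ceph pg 1.2f query | less
# Note: "ceph pg repair" targets inconsistent PGs (scrub errors);
# it generally cannot fix an incomplete PG, which is missing data/history:
ceph pg repair 1.2f
# If a surviving copy exists on a down OSD, it can sometimes be salvaged
# with ceph-objectstore-tool (stop the OSD daemon first):
systemctl stop ceph-osd@7
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-7 \
    --pgid 1.2f --op export --file /tmp/pg1.2f.export
```

Exporting a PG copy before removing it preserves a last-resort recovery path if the removal turns out to be the wrong call.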

Re: [ceph-users] ceph-rest-api's behavior

2017-03-29 Thread Dan Mick
It looks like, while the mon allows 'get_command_descriptions' with no privilege (other than basic auth), the same is not true of osd or mds. I don't know if that's the only thing that would prevent a 'readonly' ceph-rest-api (or ceph CLI or other programs that use the mon_command/osd_command inter

Re: [ceph-users] cephfs and erasure coding

2017-03-29 Thread Christian Balzer
Hello, On Wed, 29 Mar 2017 21:09:23 +0700 Konstantin Shalygin wrote: > Thanks for the notice. On the dovecot mailing list, > https://dovecot.org/pipermail/dovecot/2016-August/105210.html reports > successful usage of CephFS for 30-40k users, with replicas, not EC. > If you read that whole thread, you w

Re: [ceph-users] Troubleshooting incomplete PG's

2017-03-29 Thread Brad Hubbard
On Thu, Mar 30, 2017 at 4:53 AM, nokia ceph wrote: > Hello, > > Env:- > 5 node, EC 4+1 bluestore kraken v11.2.0, RHEL7.2 > > As part of our resiliency testing with kraken bluestore, we found many PGs > in the incomplete+remapped state. We tried to repair each PG using "ceph pg > repair " still

[ceph-users] 回复:how to get radosgw ops log

2017-03-29 Thread 码云
Hi all, I have configured "rgw enable ops log = true" in ceph.conf, and now it seems to be stored in the pool "default.rgw.log". But its content can't be displayed in a human-readable format. Is there any decode method or API to get the rgw ops log?

Re: [ceph-users] 回复:how to get radosgw ops log

2017-03-29 Thread Tianshan Qu
try radosgw-admin usage show 2017-03-30 12:02 GMT+08:00 码云 : > > Hi all, > > I have configured "rgw enable ops log = true" in ceph.conf, > > and now it seems to be stored in the pool "default.rgw.log". > > But its content can't be displayed in a human-readable format. > > Is there any
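A hedged sketch of the two commands commonly mixed up here; the uid and object name are placeholders, and these require a running radosgw. The usage log (per-user operation counters) and the ops log (per-request records in the log pool) are distinct features read with different subcommands:

```shell
# Usage log: aggregated per-user op counters, shown as readable JSON:
radosgw-admin usage show --uid=johndoe --show-log-entries=true
# Ops log: the objects in the log pool can be listed and decoded
# with the "log" subcommands rather than read raw via rados:
radosgw-admin log list
radosgw-admin log show --object=<object-name-from-log-list>
```

This would explain why reading the "default.rgw.log" pool objects directly shows binary-looking content: radosgw-admin does the decoding.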

Re: [ceph-users] cephfs and erasure coding

2017-03-29 Thread Konstantin Shalygin
My use case - from past ages /mail is block device for kvm vm. Now I need more space for messages, but I don't want use 3x raw space for replicas. What is your reccomendations? Create an RBD image on an erasure coded pools when a replicated pool tier set a cache tier? Thanks. On 03/30/2017

[ceph-users] how to get radosgw ops log

2017-03-29 Thread 码云
Hi all, I have configured "rgw enable ops log = true" in ceph.conf, and now it seems to be stored in the pool "default.rgw.log". But its content can't be displayed in a human-readable format. Is there any decode method or API to get the rgw ops log?

Re: [ceph-users] how to get radosgw ops log

2017-03-29 Thread Pritha Srivastava
- Original Message - > From: "码云" > To: "ceph-users" > Sent: Thursday, March 30, 2017 9:25:54 AM > Subject: [ceph-users] how to get radosgw ops log > > Hi all, > I have configured "rgw enable ops log = true" in ceph.conf, > and now it seems to be stored in the pool "defau