Hello,
We manually fixed the issue and below is our analysis.
Due to high CPU utilisation we stopped ceph-mgr on all our clusters.
On one of our clusters we saw high memory usage by OSDs, some greater than 5GB,
causing OOM and resulting in the processes being killed.
The memory was released immediately when the
On Wed, Mar 29, 2017 at 12:59 AM, Brady Deetz wrote:
> That worked for us!
>
> Thank you very much for throwing that together in such a short time.
>
> How can I buy you a beer? Bitcoin?
No problem, I appreciate the testing.
John
>
> On Mar 28, 2017 4:13 PM, "John Spray" wrote:
>>
>> On Tue, M
I think it could be because of this:
http://tracker.ceph.com/issues/19407
The clients were meant to stop trying to send reports to the mgr when
it goes offline, but the monitor may not have been correctly updating
the mgr map to inform clients that the active mgr had gone offline.
John
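For anyone checking whether they are hitting the same thing, the mgr map the
monitors are publishing can be inspected directly. A minimal sketch, assuming a
Kraken-or-newer cluster where the mgr commands are available (field names may
vary by version):

    # Dump the MgrMap as the monitors currently see it; fields such as
    # "active_name" and "available" show whether an active mgr is still
    # being advertised to clients.
    ceph mgr dump

If the map still advertises the stopped daemon as active, clients will keep
trying to send reports to it, which matches the behaviour described in
http://tracker.ceph.com/issues/19407.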
On Wed, M
Yes, it should just be a question of deleting them. When I tried it
here, I found that nothing in the deletion path objected to the
non-existence of the data pool, so it shouldn't complain.
If you want to make sure it's safe to subsequently install jewel
releases that might not have the fix, then
> On 29 March 2017 at 8:54, Konstantin Shalygin wrote:
>
>
> Hello.
>
> How are your tests going? I'm looking at CephFS with EC to save space on
> replicas for many small files (dovecot mailboxes).
I wouldn't use CephFS for so many small files. Dovecot will do a lot of
locking, opening and closing
Hello all!
Meta:
OS: Ubuntu 16.04.1 up-to-date
Kernel: 4.8.0-42-generic
GCC: 5.4.0 (ubuntu)
Compiler flags: -O2, -march=native or -march=broadwell; build with -j 4
Ceph: 12.0.0, master (from git)
DPDK: 16.11.1 (ubuntu or upstream, not a submodule)
Description.
I want to use the DPDK messenger for Ceph, but upstre
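For reference, a rough sketch of the build flag and messenger settings
involved. The option names below are from the master-branch DPDK messenger
around that time and should be verified against the tree actually being built;
the addresses, coremask and port id are placeholders:

    # Build Ceph with the DPDK messenger enabled (cmake option in the tree).
    cmake -DWITH_DPDK=ON ..

    # ceph.conf sketch: switch the async messenger to the DPDK stack.
    [global]
        ms_type = async+dpdk
        ms_dpdk_port_id = 0
        ms_dpdk_coremask = 0xF
        ms_dpdk_host_ipv4_addr = 192.168.0.10
        ms_dpdk_gateway_ipv4_addr = 192.168.0.1
        ms_dpdk_netmask_ipv4_addr = 255.255.255.0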
Thanks for the notice. The dovecot mailing list has a report
(https://dovecot.org/pipermail/dovecot/2016-August/105210.html) about
successful usage of CephFS for 30-40k users, with replica, not EC.
On 03/29/2017 08:19 PM, Wido den Hollander wrote:
I wouldn't use CephFS for so many small files. Dovecot will do
Below is a crash we had on a few machines with the ceph-fuse client on
the latest Jewel release 10.2.6. A total of 5 ceph-fuse processes
crashed more or less the same way at different times. The full logs are
at
http://voms.simonsfoundation.org:50013/9SXnEpflYPmE6UhM9EgOR3us341eqym/ceph-20170
Hi Graham, you're absolutely right. In jewel, these settings were moved
into the period, but radosgw-admin doesn't have any commands to modify
them. I opened a tracker issue for this at
http://tracker.ceph.com/issues/19409. For now, it looks like you're
stuck with the 'default quota' settings i
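Those 'default quota' settings, for reference, are the rgw defaults in
ceph.conf. A sketch using the Jewel-era option names; the instance section
name and values are placeholders, and the defaults only apply to newly created
users and buckets:

    [client.rgw.gateway-1]
        rgw user default quota max size = 10737418240
        rgw user default quota max objects = 1000000
        rgw bucket default quota max size = 10737418240
        rgw bucket default quota max objects = 1000000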
Guys, hi.
I have a Jewel cluster divided into two racks, which is configured in
the crush map.
I have clients (openstack compute nodes) that are closer to one rack
than to the other.
I would love (if it is possible) to specify in some way that the clients
read first from the nodes on a specific rack t
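For context, the kind of rack split described above is usually expressed in
the CRUSH hierarchy along these lines (the bucket and host names are made up):

    # Create two rack buckets and move hosts under them.
    ceph osd crush add-bucket rack1 rack
    ceph osd crush add-bucket rack2 rack
    ceph osd crush move rack1 root=default
    ceph osd crush move rack2 root=default
    ceph osd crush move node-a rack=rack1
    ceph osd crush move node-b rack=rack2

Note that RADOS reads are served by the PG primary, so per-client read
locality is not something the CRUSH layout alone provides.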
Hello,
Env:-
5 node, EC 4+1 bluestore kraken v11.2.0 , RHEL7.2
As part of our resiliency testing with kraken bluestore, we found that many
PGs were in the incomplete+remapped state. We tried to repair each PG using
"ceph pg repair", still with no luck. Then we planned to remove the incomplete
PGs using the below proc
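For reference, a sketch of the repair attempt mentioned above (not the removal
procedure, which is cut off); the pg id is a placeholder:

    # See which PGs are incomplete / stuck.
    ceph health detail | grep incomplete
    ceph pg dump_stuck

    # Query a specific incomplete PG for peering details, then retry repair.
    ceph pg 1.2f query
    ceph pg repair 1.2f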
It looks like, while the mon allows 'get_command_descriptions' with no
privilege (other than basic auth), the same is not true of osd or mds.
I don't know if that's the only thing that would prevent a 'readonly'
ceph-rest-api (or ceph CLI or other programs that use the
mon_command/osd_command inter
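For illustration, the kind of read-only credential being discussed would look
roughly like this; the client name and keyring path are assumptions:

    # Create a key limited to read-only caps on mon/osd/mds.
    ceph auth get-or-create client.restapi \
        mon 'allow r' osd 'allow r' mds 'allow r' \
        -o /etc/ceph/ceph.client.restapi.keyring

    # Run the REST API as that identity.
    ceph-rest-api -n client.restapi

As noted above, the osd and mds may still refuse 'get_command_descriptions'
to such a client, so parts of the API could fail even with these caps.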
Hello,
On Wed, 29 Mar 2017 21:09:23 +0700 Konstantin Shalygin wrote:
> Thanks for the notice. The dovecot mailing list has a report
> (https://dovecot.org/pipermail/dovecot/2016-August/105210.html) about
> successful usage of CephFS for 30-40k users, with replica, not EC.
>
If you read that whole thread, you w
On Thu, Mar 30, 2017 at 4:53 AM, nokia ceph wrote:
> Hello,
>
> Env:-
> 5 node, EC 4+1 bluestore kraken v11.2.0 , RHEL7.2
>
> As part of our resiliency testing with kraken bluestore, we found that many
> PGs were in the incomplete+remapped state. We tried to repair each PG using
> "ceph pg repair", still
Hi all, I have configured "rgw enable ops log = true" in ceph.conf, and now it
seems to be stored in the pool "default.rgw.log". But its content can't be
displayed in a human-readable format. Is there any decode method or API to get
the rgw ops log?
try radosgw-admin usage show
2017-03-30 12:02 GMT+08:00 码云 :
>
> Hi all,
>
> I have configured "rgw enable ops log = true" in ceph.conf,
>
> and now it seems to be stored in the pool "default.rgw.log".
>
> But its content can't be displayed in a human-readable format.
>
> Is there any
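To put that suggestion in context: radosgw-admin usage show reads the usage
log rather than the ops log, so the usage log needs to be enabled as well. A
minimal sketch; the instance section name, uid and dates are placeholders:

    # ceph.conf on the rgw host
    [client.rgw.gateway-1]
        rgw enable usage log = true

    # Then, after some traffic:
    radosgw-admin usage show --uid=johndoe \
        --start-date=2017-03-01 --end-date=2017-03-30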
My use case: for ages /mail has been a block device for a KVM VM. Now I
need more space for messages, but I don't want to use 3x raw space for
replicas.
What is your recommendation? Create an RBD image on an erasure coded
pool with a replicated pool set as a cache tier?
Thanks.
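A rough sketch of the pre-Luminous pattern asked about here, an RBD image on
an erasure coded base pool with a replicated cache tier in front of it. The
pool names, profile name and sizes are made up, and the cache sizing needs
real tuning:

    # 4+1 EC profile and base pool.
    ceph osd erasure-code-profile set mail-ec-profile k=4 m=1
    ceph osd pool create mail-ec 64 64 erasure mail-ec-profile

    # Replicated cache pool layered on top in writeback mode.
    ceph osd pool create mail-cache 64 64
    ceph osd tier add mail-ec mail-cache
    ceph osd tier cache-mode mail-cache writeback
    ceph osd tier set-overlay mail-ec mail-cache
    ceph osd pool set mail-cache hit_set_type bloom
    ceph osd pool set mail-cache target_max_bytes 107374182400

    # RBD image created against the EC pool; I/O goes through the cache tier.
    rbd create mail-image --pool mail-ec --size 102400   # size in MB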
On 03/30/2017
- Original Message -
> From: "码云"
> To: "ceph-users"
> Sent: Thursday, March 30, 2017 9:25:54 AM
> Subject: [ceph-users] how to get radosgw ops log
>
> Hi all,
> I have configured "rgw enable ops log = true" in ceph.conf,
> and now it seems to be stored in the pool "defau