Hi all,
we currently have some problems with monitor quorum after shutting down all
cluster nodes for migration to another location.
mon_status gives us the following output:
{
  "name": "mon01",
  "rank": 0,
  "state": "electing",
  "election_epoch": 20345,
  "quorum": [],
  "features": {
    "r
On 07/26/2018 10:12 AM, Benjamin Naber wrote:
> Hi all,
>
> we currently have some problems with monitor quorum after shutting down all
> cluster nodes for migration to another location.
>
> mon_status gives us the following output:
>
> {
> "name": "mon01",
> "rank": 0,
> "state":
Cool, then it's time to upgrade to Mimic.
Thanks for the info!
On Wed, Jul 25, 2018 at 6:37 PM, Casey Bodley wrote:
>
> On 07/25/2018 08:39 AM, Elias Abacioglu wrote:
>
>> Hi
>>
>> I'm wondering why LZ4 isn't built by default for newer Linux distros like
>> Ubuntu Xenial?
>> I understand that i
NFS Ganesha certainly works with CephFS. I would investigate that also.
http://docs.ceph.com/docs/master/cephfs/nfs/
Regarding Active Directory, I have done a lot of work recently with sssd.
Not entirely relevant to this list; please send me a mail off-list.
Not sure if this is of any direct use:
http
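For reference, a minimal nfs-ganesha export block for CephFS (FSAL_CEPH); the Export_Id, Path and Pseudo values here are placeholders, not anything from the thread:

    EXPORT {
        Export_Id = 1;
        Path = /;                  # path inside CephFS to export
        Pseudo = /cephfs;          # pseudo-root the NFS clients mount
        Access_Type = RW;
        Squash = No_Root_Squash;
        FSAL {
            Name = CEPH;           # the CephFS FSAL
        }
    }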
Hi Wido,
thanks for your reply.
Time is also in sync; I forced a time sync again to be sure.
Kind regards
Ben
> Wido den Hollander wrote on 26 July 2018 at 10:18:
>
>
>
>
> On 07/26/2018 10:12 AM, Benjamin Naber wrote:
> > Hi all,
> >
> > we currently have some problems with monitor
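As a side note, clock agreement across the mon hosts can be double-checked per host; a small sketch, assuming either ntpd or chrony is in use:

    ntpq -p            # ntpd: list peers and current offsets
    chronyc tracking   # chrony: current offset from the NTP source

The mons warn on skew above mon_clock_drift_allowed (0.05 s by default), so even small offsets matter here.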
Hi everyone, we run a cluster for our customer, Ubuntu 16.04, Ceph
Luminous 12.2.4 - its use is exclusively for CephFS now. We use multiple
CephFS filesystems (I'm aware it's an experimental feature, but it works fine so far)
for our storage purposes. The data pools for all the CephFS filesystems
are erasure
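For reference, a sketch of that kind of layout on Luminous; the flag and commands are the documented ones, but the pool and filesystem names are made up for the example:

    ceph fs flag set enable_multiple true --yes-i-really-mean-it
    ceph osd pool create fs1_meta 32
    ceph osd pool create fs1_data 64 64 replicated
    ceph fs new fs1 fs1_meta fs1_data
    # erasure-coded pool attached as an additional data pool:
    ceph osd pool create fs1_ec 64 64 erasure
    ceph osd pool set fs1_ec allow_ec_overwrites true   # BlueStore OSDs only
    ceph fs add_data_pool fs1 fs1_ec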
On 07/26/2018 10:33 AM, Benjamin Naber wrote:
> Hi Wido,
>
> thanks for your reply.
> Time is also in sync; I forced a time sync again to be sure.
>
Try setting debug_mon to 10 or even 20 and check the logs for what the
MONs are saying.
debug_ms = 10 might also help to get some more information.
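Since injectargs needs a reachable cluster, with quorum down these settings usually go in through the local admin socket, or ceph.conf plus a daemon restart; a minimal sketch for mon01:

    ceph daemon mon.mon01 config set debug_mon 10
    ceph daemon mon.mon01 config set debug_ms 10

or, in ceph.conf before restarting the monitor:

    [mon]
    debug mon = 10
    debug ms = 10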
Hi Wido,
I got the following output since I changed the debug setting:
2018-07-26 11:46:21.004490 7f819e968700 10 -- 10.111.73.1:6789/0 >> 10.111.73.3:0/1033315403 conn(0x55aa46c4a800 :6789 s=STATE_OPEN pgs=71 cs=1 l=1)._try_send sent bytes 9 remaining bytes 0
2018-07-26 11:46:21.004520 7f81a19
On 07/26/2018 11:50 AM, Benjamin Naber wrote:
> Hi Wido,
>
> I got the following output since I changed the debug setting:
>
This is only debug_ms it seems?
debug_mon = 10
debug_ms = 10
Those two should be set; debug_mon will tell more about the election
process.
Wido
> 2018-07-26 11:
Hi Wido,
I now have one monitor online. I have removed the two others from the monmap.
How can I proceed to reset those mon hosts and add them as new monitors to
the monmap?
Kind regards
Ben
> Wido den Hollander wrote on 26 July 2018 at 11:52:
>
>
>
>
> On 07/26/2018 11:50 AM, Benjami
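For the archives, a sketch of the usual wipe-and-re-add procedure; the mon IDs (mon02, mon03) and paths are assumed defaults, not taken from the thread:

    # on the surviving mon, which now has quorum of one:
    ceph mon getmap -o /tmp/monmap
    ceph auth get mon. -o /tmp/mon.keyring

    # on each mon host being re-added, e.g. mon02:
    systemctl stop ceph-mon@mon02
    rm -rf /var/lib/ceph/mon/ceph-mon02
    ceph-mon -i mon02 --mkfs --monmap /tmp/monmap --keyring /tmp/mon.keyring
    chown -R ceph:ceph /var/lib/ceph/mon/ceph-mon02
    systemctl start ceph-mon@mon02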
Hi Wido,
after adding the hosts back to the monmap, the following error occurs in the
ceph-mon log:
e5 ms_verify_authorizer bad authorizer from mon 10.111.73.3:6789/0
I tried to copy the mon keyring to all other nodes, but the problem still exists.
Kind regards
Ben
> Benjamin Naber wrote on 26 July 2018
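A bad authorizer between mons usually points at mismatched mon. keys or clock skew (cephx tickets are time-sensitive); a quick check, assuming default data paths and a mon ID of mon02:

    # on every mon host, the key value in this file must be identical:
    cat /var/lib/ceph/mon/ceph-*/keyring
    # restart the mon after replacing a keyring so it is re-read:
    systemctl restart ceph-mon@mon02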
Hello Ceph users!
I have a question regarding Ceph data usage and RADOS Gateway
multisite replication.
Our test cluster has the following setup:
* 3 monitors
* 12 OSDs (raw size: 5 GB, journal size 1 GB, colocated on the same drive)
* osd pool default size is set to 2, min size to 1
*
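For reference, the pool defaults named above map onto ceph.conf like this (a sketch of just those two settings):

    [global]
    osd pool default size = 2
    osd pool default min size = 1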
I can comment on that Docker image: we built it to bake in a
certain amount of config regarding nfs-ganesha serving CephFS and
using LDAP to do idmap lookups (example LDAP entries are in the readme).
At least as we use it, the server-side uid/gid information is pulled
from sssd using a config file on
On Thu, Jul 26, 2018 at 1:55 AM Alex Gorbachev wrote:
>
> On Wed, Jul 25, 2018 at 7:07 PM, Alex Gorbachev
> wrote:
> > On Wed, Jul 25, 2018 at 6:07 PM, Alex Gorbachev
> > wrote:
> >> On Wed, Jul 25, 2018 at 5:51 PM, Jason Dillaman
> >> wrote:
> >>>
> >>>
> >>> On Wed, Jul 25, 2018 at 5:41 PM
On Thu, Jul 26, 2018 at 9:21 AM, Ilya Dryomov wrote:
> On Thu, Jul 26, 2018 at 1:55 AM Alex Gorbachev
> wrote:
>>
>> On Wed, Jul 25, 2018 at 7:07 PM, Alex Gorbachev
>> wrote:
>> > On Wed, Jul 25, 2018 at 6:07 PM, Alex Gorbachev
>> > wrote:
>> >> On Wed, Jul 25, 2018 at 5:51 PM, Jason Dillama
On Thu, Jul 26, 2018 at 1:07 AM Alex Gorbachev wrote:
>
> On Wed, Jul 25, 2018 at 6:07 PM, Alex Gorbachev
> wrote:
> > On Wed, Jul 25, 2018 at 5:51 PM, Jason Dillaman wrote:
> >>
> >>
> >> On Wed, Jul 25, 2018 at 5:41 PM Alex Gorbachev
> >> wrote:
> >>>
> >>> I am not sure this is related to RBD,
On Thu, Jul 26, 2018 at 9:49 AM, Ilya Dryomov wrote:
> On Thu, Jul 26, 2018 at 1:07 AM Alex Gorbachev
> wrote:
>>
>> On Wed, Jul 25, 2018 at 6:07 PM, Alex Gorbachev
>> wrote:
>> > On Wed, Jul 25, 2018 at 5:51 PM, Jason Dillaman
>> > wrote:
>> >>
>> >>
>> >> On Wed, Jul 25, 2018 at 5:41 PM Al
Hi,
We currently segregate Ceph pool PG allocation using the CRUSH device class
ruleset as described here:
https://ceph.com/community/new-luminous-crush-device-classes/
simply using the following command to define the rule: ceph osd crush
rule create-replicated default host
However, we noticed tha
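For reference, the full Luminous syntax is ceph osd crush rule create-replicated <name> <root> <failure-domain> <class>; the rule and pool names below are placeholders:

    ceph osd crush rule create-replicated fast default host ssd
    ceph osd crush rule create-replicated slow default host hdd
    ceph osd pool set mypool crush_rule fast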
On Thu, Jul 26, 2018 at 4:57 PM Benoit Hudzia
wrote:
> Hi,
>
> We currently segregate ceph pool PG allocation using the crush device
> class ruleset as described:
> https://ceph.com/community/new-luminous-crush-device-classes/
> simply using the following command to define the rule : ceph osd cr
Hello, just to report:
it looks like changing the messenger type (ms type) to simple helped to avoid the memory leak.
Just about a day later the memory is still OK:
1264 ceph 20 0 12,547g 1,247g 16652 S 3,3 8,2 110:16.93 ceph-mds
The memory usage is more than 2x the MDS limit (512 MB), but maybe it is the
da
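For reference, the two settings in play here as a ceph.conf sketch; the 512 MB value comes from the limit mentioned above:

    [global]
    ms type = simple                      # fall back from the async to the simple messenger

    [mds]
    mds cache memory limit = 536870912    # 512 MB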
You are correct, the PGs are stale (not allocated).
[root@stratonode1 /]# ceph status
  cluster:
    id:     ea0df043-7b25-4447-a43d-e9b2af8fe069
    health: HEALTH_WARN
            Reduced data availability: 256 pgs inactive, 256 pgs peering,
            256 pgs stale
  services:
    mon: 3 daemons, quorum
    s
Sorry, missing the pg dump:
2.1  0  0  0  0  0  0  0  0  stale+peering  2018-07-26 19:38:13.381673  0'0  125:9  [3]  3  [3]  3  0'0  2018-07-26 15:20:08.965357  0'0  2018-07-26 15:20:08.965357  0
2.0  0
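Stuck PGs like these are usually inspected with the following; the pg id 2.1 and osd.3 are taken from the dump above:

    ceph pg dump_stuck stale
    ceph pg 2.1 query          # detailed peering state for one PG
    ceph osd tree              # confirm osd.3 (the only acting OSD) is up and in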