Ronny,
While letting the cluster replicate (it looks like this might be a while), I decided
to look into where those PGs are missing. From "ceph health detail" I found the
PGs that are unfound, then found the directories that had those PGs, pasted to
the right of that detail message below.. pg 2.35 is
Hmm.. I hope I don't really need anything from osd.0. =P
# ceph-objectstore-tool --op export --pgid 2.35 --data-path /var/lib/ceph/osd/ceph-0 --journal-path /var/lib/ceph/osd/ceph-0/journal --file 2.35.export
Failure to read OSD superblock: (2) No such file or directory
# ceph-objectstore-tool --
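The "Failure to read OSD superblock" above suggests /var/lib/ceph/osd/ceph-0 no
longer contains a usable OSD data directory, so nothing can be exported from it.
As a rough sketch (osd.2 is only a placeholder; substitute an OSD whose data
directory is still intact, and stop that OSD before running the tool), the usual
way to find out who is supposed to hold pg 2.35 and to export a surviving copy is:

# Which OSDs does the cluster map pg 2.35 to?
ceph pg map 2.35
ceph pg 2.35 query

# Which PGs does a given (stopped) OSD's store actually contain?
ceph-objectstore-tool --op list-pgs \
    --data-path /var/lib/ceph/osd/ceph-2 \
    --journal-path /var/lib/ceph/osd/ceph-2/journal

# Export pg 2.35 from that OSD if it is listed there
ceph-objectstore-tool --op export --pgid 2.35 \
    --data-path /var/lib/ceph/osd/ceph-2 \
    --journal-path /var/lib/ceph/osd/ceph-2/journal \
    --file 2.35.export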
Hi, everyone.
I’m trying to enable the mgr dashboard on Luminous. However, when I modified
the configuration and restarted ceph-mgr, the following error came up:
Sep 4 17:33:06 rg1-ceph7 ceph-mgr: 2017-09-04 17:33:06.495563 7fc49b3fc700 -1
mgr handle_signal *** Got signal Terminated ***
Sep 4 17:33
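For what it's worth, a minimal sketch of enabling the dashboard on Luminous,
assuming the standard module name and the default port 7000 (the config-key
names below may vary slightly between point releases):

# Enable the dashboard module on the mgr
ceph mgr module enable dashboard

# Optionally pin the bind address and port (defaults: :: and 7000)
ceph config-key set mgr/dashboard/server_addr 0.0.0.0
ceph config-key set mgr/dashboard/server_port 7000

# Confirm the module is active and where it is serving
ceph mgr module ls
ceph mgr services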
On Mon, Sep 4, 2017 at 10:38 AM, 许雪寒 wrote:
> Hi, everyone.
>
> I’m trying to enable the mgr dashboard on Luminous. However, when I modified
> the configuration and restarted ceph-mgr, the following error came up:
>
> Sep 4 17:33:06 rg1-ceph7 ceph-mgr: 2017-09-04 17:33:06.495563 7fc49b3fc700
> -1 mgr
Thanks for your quick reply :-)
I checked the open ports and 7000 is not listening, and all of my machines have
SELinux disabled.
Can there be other causes? Thanks :-)
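A few quick checks that might narrow it down, assuming systemd units and the
default dashboard port (the mgr unit name is usually the short hostname, so
adjust as needed):

# Is an active mgr reported, and is the dashboard module listed?
ceph -s
ceph mgr module ls

# Is the mgr process listening on anything at all?
ss -tlnp | grep ceph-mgr

# The mgr log usually says why the dashboard failed to start
journalctl -u ceph-mgr@$(hostname -s) -n 50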
-----Original Message-----
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of 许雪寒
Sent: September 4, 2017 17:38
To: ceph-users@lists.ceph.
Hello,
I'm building a 5-server cluster across three rooms/racks. Each server has 8 x
960 GB SSDs used as BlueStore OSDs. Ceph version 12.1.2 is used.
rack1: server1(mon) server2
rack2: server3(mon) server4
rack3: server5(mon)
The crushmap was built this way:
ceph osd
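For comparison, this is roughly how such a layout is usually expressed with the
CLI, using the rack and server names from above and assuming the default root
(only a sketch, not necessarily how the original map was built):

# Create rack buckets under the default root
ceph osd crush add-bucket rack1 rack
ceph osd crush add-bucket rack2 rack
ceph osd crush add-bucket rack3 rack
ceph osd crush move rack1 root=default
ceph osd crush move rack2 root=default
ceph osd crush move rack3 root=default

# Place the hosts under their racks
ceph osd crush move server1 rack=rack1
ceph osd crush move server2 rack=rack1
ceph osd crush move server3 rack=rack2
ceph osd crush move server4 rack=rack2
ceph osd crush move server5 rack=rack3

# Replicated rule that puts each replica in a different rack
ceph osd crush rule create-replicated replicated_racks default rack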
Hello!
I'm validating IO performance of CephFS vs. NFS.
To do this, I have mounted both filesystems on the same client.
Then I start fio with the following parameters:
action = randwrite randrw
blocksize = 4k 128k 8m
rwmixread = 70 50 30
32 jobs run in parallel
The NFS share is stripin
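For concreteness, one combination of the parameters above translates into an
fio invocation roughly like this (the mount point, file size and runtime are
placeholders; the same job would be repeated against the NFS mount):

# 4k random read/write, 70% reads, 32 parallel jobs, on the CephFS mount
fio --name=cephfs-randrw \
    --directory=/mnt/cephfs \
    --rw=randrw --rwmixread=70 --bs=4k \
    --ioengine=libaio --direct=1 \
    --numjobs=32 --size=1G \
    --runtime=60 --time_based --group_reporting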
Hi,
For VDI (Windows 10) use case... is there any document about the
recommended configuration with rbd?
Thanks a lot!
2017-08-18 15:40 GMT+02:00 Oscar Segarra :
> Hi,
>
> Yes, you are right, the idea is cloning a snapshot taken from the base
> image...
>
> And yes, I'm working with the current
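In case it is useful to others on the thread, the usual clone-from-a-protected-
snapshot flow for a VDI base image looks roughly like this (pool, image and
snapshot names are placeholders):

# Snapshot the prepared base image and protect the snapshot
rbd snap create vdi-pool/win10-base@gold
rbd snap protect vdi-pool/win10-base@gold

# Create a copy-on-write clone per desktop
rbd clone vdi-pool/win10-base@gold vdi-pool/desktop-001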
On Mon, Sep 4, 2017 at 4:27 PM, wrote:
> Hello!
>
> I'm validating IO performance of CephFS vs. NFS.
>
> To do this, I have mounted both filesystems on the same client.
> Then I start fio with the following parameters:
> action = randwrite randrw
> blocksize = 4k 128k 8m
> rwmixread = 7
I am unaware of any way to have one pool span all 3 racks and another pool
span only 2 of them. If you could put the same OSD in 2
different roots or have a crush rule choose from 2 different roots, then
this might work out. To my knowledge neither of these is possible.
What is your rea
Hello,
On Mon, 04 Sep 2017 15:27:34 + c.mo...@web.de wrote:
> Hello!
>
> I'm validating IO performance of CephFS vs. NFS.
>
Well, at this point you seem to be comparing apples to bananas.
You're telling us results, but your mail lacks crucial information required
to give you a qualified an
Hi,
I'm still having trouble with the above issue.
Has anybody else run into the same issue or resolved it?
Thanks.
2017-08-21 22:51 GMT+09:00 Hyun Ha :
> Thanks for response.
>
> I can understand why a size of 2 and a min_size of 1 is not acceptable in
> production,
> but I just want to make the situat
I don't know that it's still clear what you're asking for. You understand
that this scenario is going to have lost data that you cannot
get back, correct? Some of the information for the RBD might have been in
the PGs that you no longer have any copy of. Any RBD that has objects that
are n
Thank you for the response.
Yes, I know that we can lose data in this scenario and cannot guarantee
recovering it.
But, in my opinion, we need to make the Ceph cluster healthy in spite of the
data loss.
In this scenario, the Ceph cluster has some stuck+stale PGs and goes to an
error state.
From the perspective of op
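For reference, these are the commands usually pointed at when the goal is to
get back to HEALTH_OK by declaring the missing data gone; they irreversibly
discard objects and the exact names vary by release, so this is only a sketch:

# Declare a dead OSD permanently lost so peering stops waiting for it
ceph osd lost <osd-id> --yes-i-really-mean-it

# For PGs with unfound objects: roll back to older copies, or drop the objects
ceph pg <pgid> mark_unfound_lost revert
ceph pg <pgid> mark_unfound_lost delete

# For PGs with no surviving copy at all: recreate them empty
# (newer releases use: ceph osd force-create-pg <pgid>)
ceph pg force_create_pg <pgid>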
Hey Cephers,
Sorry for the short notice, but the Ceph on ARM meeting scheduled for
today (Sep 5) has been cancelled.
Kindest regards,
Leo
--
Leonardo Vaz
Ceph Community Manager
Open Source and Standards Team
Hi!
Thanks for the pointer about leveldb_compact_on_mount. It took a while
to get everything compacted, but after that the deep scrub of the
offending PG went smoothly, without any suicides. I'm considering using
the compact-on-mount feature for all our OSDs in the cluster since
they're kind of large
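For anyone searching the archives later: the option referred to above is set in
ceph.conf and takes effect on the next OSD restart. Depending on the release the
name may carry an osd_leveldb_ prefix, so check
"ceph daemon osd.N config show | grep compact" first. A sketch:

[osd]
    # Compact the OSD's leveldb omap store when the OSD starts
    leveldb_compact_on_mount = true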
Hello,
How do I replace an OSD's journal created with dmcrypt, moving it from one
drive to another, in case the current journal drive fails?
Thanks
Swami
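The generic (non-dmcrypt) journal replacement sequence is below; with dmcrypt
the new journal partition additionally has to be encrypted and mapped (e.g. via
ceph-disk) before --mkjournal, so treat this only as a rough sketch:

# Stop the OSD; flush the old journal if the failing drive is still readable
systemctl stop ceph-osd@<id>
ceph-osd -i <id> --flush-journal

# Point the OSD at the new journal partition (dmcrypt: the mapped device)
ln -sf /dev/disk/by-partuuid/<new-journal-uuid> /var/lib/ceph/osd/ceph-<id>/journal

# Create the new journal and bring the OSD back
ceph-osd -i <id> --mkjournal
systemctl start ceph-osd@<id>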