Hi
I am assuming that you do not have any near full osd (either before or during
the pg splitting process) and that your cluster is healthy.
To minimize the impact on the clients during recovery or operations like pg
splitting, it is good to set the following configs. Obviously the whole
operation ...
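For what it's worth, the throttles usually meant in this context are the
recovery/backfill ones; the values below are only illustrative and the option
names assume a Firefly-or-later cluster, so adjust to taste:

    [osd]
    osd max backfills = 1
    osd recovery max active = 1
    osd recovery op priority = 1

or, applied at runtime to all OSDs:

    ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1 --osd-recovery-op-priority 1'

It can also help to pause scrubbing while the new PGs peer and backfill:

    ceph osd set noscrub
    ceph osd set nodeep-scrub
    # and unset both once the cluster is back to HEALTH_OK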
Hi...
I think you are seeing an issue we saw some time ago. Your segfault seems the
same as the one we had, but please confirm against the info in
https://github.com/ceph/ceph/pull/10027
We solved it by recompiling ceph with the patch described above.
I think it should be solved in the next bugfix release ...
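In case it is useful to others hitting the same trace, one way to pull in a
GitHub pull request as a patch before rebuilding is sketched below; the tag is
just an example, use whatever matches your running version, and the final
build step depends on how you normally package Ceph:

    git clone https://github.com/ceph/ceph.git && cd ceph
    git checkout v10.2.2                  # example tag, match your own version
    git submodule update --init --recursive
    curl -L https://github.com/ceph/ceph/pull/10027.patch | git am
    # then rebuild/package as usual, e.g. ./install-deps.sh followed by your
    # distro's packaging scripts for a jewel-era tree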
Hello Goncalo,
afaik the authoritative shard is determined based on deep-scrub object
checksums, which were included in Hammer. Is this in line with your
experience? If yes, is there any other method of determining the
auth shard besides object timestamps for ceph < jewel?
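(For reference, on a pre-jewel filestore OSD the copies can also be compared
by hand; the PG id, object name and paths below are made-up examples:)

    # on each OSD host holding a copy of PG 3.7f, locate the object's file
    find /var/lib/ceph/osd/ceph-*/current/3.7f_head -name '*rbd_data.1234*'
    # then compare checksum, size and mtime of the copies across hosts
    md5sum <path-to-object-file>
    stat -c '%y %s %n' <path-to-object-file>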
Kostis
On 13 September
If you are aware of the problematic PGs and they are exportable, then
ceph-objectstore-tool is a viable solution. If not, then running gdb
and/or collecting logs at a higher debug osd level may prove useful (to
understand more about the problem, or to gather info to ask for more help in
ceph-devel).
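For the archives, a PG export with ceph-objectstore-tool looks roughly like
this (the OSD must be stopped first; the OSD id, PG id and paths are
placeholders):

    systemctl stop ceph-osd@12        # or 'service ceph stop osd.12' on sysvinit hosts
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 \
        --journal-path /var/lib/ceph/osd/ceph-12/journal \
        --pgid 3.7f --op export --file /tmp/pg.3.7f.export

and debug logging can be raised on a running OSD with something like:

    ceph tell osd.12 injectargs '--debug-osd 20 --debug-ms 1'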
On 13 September 2016
Hi, thanks for your reply.
Yes, I don't have any near full osd.
The problem is not the rebalancing process but the process of creating new
pgs.
I have only 2 hosts running Ceph Firefly, with 3 SSDs each for journaling.
During the creation of new pgs, all the attached volumes stop reading or writing ...
I added debug journal = 20 and got some new lines in the log, which I appended
at the end of this email.
Can any of you make something out of them?
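(For anyone who wants the same output: the setting can either go into
ceph.conf under [osd] as 'debug journal = 20', or be injected into a running
OSD, e.g.:

    ceph tell osd.0 injectargs '--debug-journal 20'

the OSD id is of course a placeholder.)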
kind regards
Ronny Aasen
On 18.09.2016 18:59, Kostis Fardelas wrote:
If you are aware of the problematic PGs and they are exportable, then
ceph-objectstore-tool is a viable solution ...
Thank you for your reply.
I will recompile the code and test whether it works.
I will let you know if it does.
At 2016-09-18 19:18:18, "Goncalo Borges" wrote:
>Hi...
>
>I think you are seeing an issue we saw some time ago. Your segfault seems the
>same as the one we had, but please confirm against the info
Has nobody run into this situation? Can somebody help me solve the issue,
please? Thanks!
2016-09-16 13:02 GMT+08:00 Brian Chang-Chien :
> Does anyone know about this problem? Please help me look into it.
>
> On 13 September 2016 at 5:58 PM, "Brian Chang-Chien" wrote:
>
>> Hi naga.b,
>>
>> I use Ceph jewel 10.2.2
>> my ceph.conf
How many PGs do you have, and how many are you increasing it to?
Increasing PG counts can be disruptive if you are increasing by a large
proportion of the initial count, because of all the PG peering involved. If you
are doubling the number of PGs, it might be good to do it in stages to minimize
p ...
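A staged increase usually means bumping pg_num and then pgp_num in a few
smaller steps rather than in one jump; the pool name and numbers below are
only an example:

    # e.g. go from 512 to 1024 via an intermediate step
    ceph osd pool set rbd pg_num 768
    ceph osd pool set rbd pgp_num 768
    # wait until all PGs are active+clean again, then
    ceph osd pool set rbd pg_num 1024
    ceph osd pool set rbd pgp_num 1024

    # progress can be followed with
    ceph -s
    ceph osd pool get rbd pg_num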
Hi, everyone.
I'm a newbie to Ceph. According to Sage A. Weil's paper, Ceph was using EBOFS as
the file system for its OSDs. However, I looked into the source code of Ceph
and could hardly find any code for EBOFS. Is Ceph still using EBOFS, or has it
opted to use other types of file system for a single ...
Thanks, sir:-)
At 2016-09-19 13:00:18, "Ian Colle" wrote:
Some use xfs, others btrfs, and still others use (gasp) zfs and ext4.
Upstream automated testing currently only runs on xfs, if that gives you a
sense of the community's comfort level, but there are strong advocates for each
of the ...
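If you want to check what a given cluster is actually running on, the OSDs
report it in their metadata (the OSD id is a placeholder, and the exact field
names vary a bit between releases):

    ceph osd metadata 0 | grep -E 'objectstore|backend'
    # or simply look at the mount on the OSD host
    df -T /var/lib/ceph/osd/ceph-0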