Hi cephers,
Recently there has been a big problem in our production ceph
cluster. It had been running very well for one and a half years.
The RBD client network and the ceph public network are different,
communicating through a router.
Our ceph version is 0.94.5. Our IO transport uses SimpleMessenger.
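For reference, a rough sketch of what this kind of split-network setup looks like in ceph.conf; the subnets below are made-up placeholders, not our real ones:

[global]
    public network = 192.168.1.0/24     # ceph public network (mon/osd side)
    cluster network = 192.168.2.0/24    # osd replication/heartbeat traffic
    ms type = simple                    # SimpleMessenger, the default transport in hammer

# RBD clients sit on a third subnet and reach the public network through a
# router; that only needs IP routing, no extra ceph.conf setting.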
Hi cephers,
Recently I have met a problem with leveldb, which is used as the monitor store
by default.
My ceph version is 0.94.5.
I have a disk formatted as xfs, mounted at /var/lib/ceph/mon/mon.,
and its size is 100GB.
The monitor store size is increasing by 1GB per hour and never seems
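For anyone hitting the same growth, this is roughly how to check the store size and force a leveldb compaction; the mon ID "a" below is just a placeholder:

du -sh /var/lib/ceph/mon/ceph-a/store.db    # current size of the mon store
ceph tell mon.a compact                     # trigger an online compaction

Compaction can also be requested at every monitor start via ceph.conf:

[mon]
    mon compact on start = true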
Dear cephers,
I met a problem when using ceph-fuse with quota enabled.
My ceph version is:
ceph version 10.2.5 (c461ee19ecbc0c5c330aca20f7392c9a00730367).
I have two ceph-fuse processes on two different hosts (node1 and node2).
One ceph-fuse is mounted with the root directory on /mnt/cephfs on n
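For completeness, this is roughly how quotas are set up; if I remember the Jewel-era behaviour correctly, ceph-fuse only enforces quotas when client quota is enabled, so treat that option name as an assumption:

[client]
    client quota = true    # assumed Jewel-era option: client-side quota enforcement

setfattr -n ceph.quota.max_bytes -v 10000000000 /mnt/cephfs/somedir   # ~10 GB cap
setfattr -n ceph.quota.max_files -v 100000 /mnt/cephfs/somedir        # file-count cap

(/mnt/cephfs/somedir is a placeholder directory.)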
http://tracker.ceph.com/issues/17270
Cheers,
xiangyang
At 2016-09-13 18:08:19, "John Spray" wrote:
>On Tue, Sep 13, 2016 at 2:12 PM, yu2xiangyang wrote:
>> Hello everyone,
>>
>> I have met a ceph-fuse crash when I add an OSD to the OSD pool.
>>
>> I am writing data
>From: ceph-users [ceph-users-boun...@lists.ceph.com] on behalf of yu2xiangyang
>[yu2xiangy...@163.com]
>Sent: 18 September 2016 12:14
>To: ceph-users@lists.ceph.com
>Subject: [ceph-users] cephfs-client Segmentation fault with not-root mount
> point
>
>My environment
My environment is displayed below:
The ceph-fuse client is 10.2.2, and the ceph osd is 0.94.3, details below:
[root@localhost ~]# rpm -qa | grep ceph
libcephfs1-10.2.2-0.el7.centos.x86_64
python-cephfs-10.2.2-0.el7.centos.x86_64
ceph-common-0.94.3-0.el7.x86_64
ceph-fuse-10.2.2-0.el7.centos.x86_6
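To double-check what the running daemons report, as opposed to the installed packages, something like this should work; osd.0 is just an example ID:

ceph --version            # version of the local client tools
ceph tell osd.0 version   # version reported by a running OSD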
I have tried all the Jewel packages and they run correctly, so I think the
problem is in osdc at ceph-0.94.3.
There must be some commits between hammer and Jewel that solved the problem.
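One way to hunt for the fixing commit is to walk the osdc history between the two releases; this assumes a git checkout of the ceph repository:

git log --oneline v0.94.3..v10.2.2 -- src/osdc/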
I have submitted the issue at http://tracker.ceph.com/issues/17270.
At 2016-09-13 17:01:09, "John Spray" wrote:
>On Tue, Sep 13, 2016 at 2:12 PM, yu2xiangyang wrote:
>> Hello everyone,
>>
>> I have met a ceph-fuse crash when I add an OSD to the OSD pool.
, "John Spray" wrote:
>On Tue, Sep 13, 2016 at 2:12 PM, yu2xiangyang wrote:
>> Hello everyone,
>>
>> I have met a ceph-fuse crash when i add osd to osd pool.
>>
>> I am writing data through ceph-fuse,then i add one osd to osd pool, after
>>
Hello everyone,
I have met a ceph-fuse crash when I add an OSD to the OSD pool.
I am writing data through ceph-fuse; then I add one OSD to the pool, and after
less than 30 s the ceph-fuse process crashes.
The ceph-fuse client is 10.2.2, and the ceph osd is 0.94.3, details below:
[root@localhost ~]# rp
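For anyone trying to reproduce it, the sequence is roughly the following; the mount point and device are placeholders, and the OSD is added with the hammer-era ceph-disk tooling:

# client side: keep writing through the ceph-fuse mount
ceph-fuse /mnt/cephfs
dd if=/dev/zero of=/mnt/cephfs/testfile bs=1M count=4096 &

# cluster side: bring one new OSD in while the write is running
ceph-disk prepare /dev/sdb
ceph-disk activate /dev/sdb1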
I have found the MDS restarting several times, failing over between two MDS
processes in ACTIVE and BACKUP mode, when I run smallfile to create lots of
files (3 clients, each with 8 threads creating 1 files). Has anyone
encountered the same problem? Is there any configuration I can set? Thank you for a
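One configuration that might matter here is the monitor-side beacon grace, since an MDS that misses beacons under heavy metadata load gets failed over; the value below is only an example:

[global]
    mds beacon grace = 60    # default is 15s; give a busy MDS more time to beacon

The failover history can be watched with:

ceph mds stat
ceph -w    # look for messages about the MDS being laggy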
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com