Hello,
On Tue, 29 Mar 2016 14:00:44 +0800 lin zhou wrote:
> Hi, Christian.
> When I re-add these OSDs (0,3,9,12,15), the high latency occurs again.
> The default reweight of these OSDs is 0.0.
>
That makes no sense; at a crush weight (not reweight) of 0 they should not
get used at all.
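To illustrate the difference (a sketch only; the OSD id and weight below
are examples, not taken from your cluster):

  # crush weight decides data placement; at 0 no PGs map to the OSD
  ceph osd crush reweight osd.0 2.73
  # reweight is a 0.0-1.0 override applied on top of the crush weight
  ceph osd reweight 0 1.0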
When you deleted
Which version are you using?
--
Regards,
Mohd Zainal Abidin Rabani
Technical Support
-- Original Message --
From: "lin zhou"
To: "Christian Balzer"
Cc: "Ceph-User"
Sent: 29/03/2016 14:00:44
Subject: Re: [ceph-users] how to re-add a deleted osd device as a osd
with data
Hi, Christian.
When I re-add these OSDs (0,3,9,12,15), the high latency occurs again.
The default reweight of these OSDs is 0.0.
root@node-65:~# ceph osd tree
# id    weight  type name      up/down reweight
-1      103.7   root default
-2      8.19        host node-65
18      2.73
Thanks. I tried this method just as the Ceph documentation says.
But I only tested osd.6 this way, and the leveldb of osd.6 is
broken, so it cannot start.
When I tried this for the other OSDs, it worked.
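The method in question is, roughly, the manual add procedure from the
Ceph docs (sketched below; the id, weight, host and keyring path are
illustrative, taken from this thread rather than verified):

  # re-register the OSD id with the cluster
  ceph osd create
  # restore the OSD's authentication key
  ceph auth add osd.6 osd 'allow *' mon 'allow rwx' \
      -i /var/lib/ceph/osd/ceph-6/keyring
  # put it back into the crush map with a non-zero weight
  ceph osd crush add osd.6 2.73 host=node-65
  # then start the ceph-osd daemon via the init system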
2016-03-29 8:22 GMT+08:00 Christian Balzer :
> On Mon, 28 Mar 2016 18:36:14 +0800 lin zhou wrote:
>
>> > Hello,
>
Hello,
Can anybody let me know if the Ceph team is working on a port of librbd
to OpenSolaris, like it did for librados?
Thanks
sumit
Any suggestions to fix this issue? We are using Ceph with Proxmox, and VMs
won't start due to these Ceph errors.
This in turn prevents any VM from starting up. This is a live server, please
advise.
Mar 28 22:01:22 pm3 systemd[1]: Unit ceph.service entered failed state.
Mar 28 22:09:00 pm3 system
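Without the rest of the log it is hard to say more, but a reasonable
first step is to pull the full unit status and recent journal (standard
systemd commands, nothing Ceph-specific assumed):

  systemctl status ceph.service
  journalctl -u ceph.service -b --no-pager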
On Mon, 28 Mar 2016 18:36:14 +0800 lin zhou wrote:
> > Hello,
> >
> > On Sun, 27 Mar 2016 13:41:57 +0800 lin zhou wrote:
> >
> > > Hi, guys.
> > > Some days ago, one OSD showed a large latency in ceph osd
> > > perf, and this device gave the node a high CPU await.
> > The thing to do at that point
Christian Balzer writes:
>> New problem (unsure, but probably not observed in Hammer, definitely in
>> Infernalis): copying large (tens of GB) files into kernel CephFS (from
>> outside the cluster, bare metal - non-VM, preempt kernel) makes slow
>> requests on some OSDs (a repeated range) - mostly 3 Gbps chan
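A quick way to see which requests are stuck and on which OSDs (standard
Ceph tooling; run the second command on the OSD's host and substitute
the real id for N):

  ceph health detail
  ceph daemon osd.N dump_historic_ops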
Sage Weil writes:
> The first release candidate for Jewel is now available!
Cool!
[...]
> Packages for aarch64 will also be posted shortly!
According to the announcement, Ubuntu Xenial should now be supported
instead of Precise; but I don't see Xenial packages on
download.ceph.com. Will those a
Hi everyone,
The first release candidate for Jewel is now available!
Jewel is the next major release of Ceph, and will be the foundation for
the next long-term stable release. There have been many major changes
since the Infernalis (9.2.x) and Hammer (0.94.x) releases, and the upgrade
process
> Hello,
>
> On Sun, 27 Mar 2016 13:41:57 +0800 lin zhou wrote:
>
> > Hi, guys.
> > Some days ago, one OSD showed a large latency in ceph osd perf, and
> > this device gave the node a high CPU await.
> The thing to do at that point would have been to look at things with atop
> or iostat to verify t
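For example, to correlate a slow OSD with its backing disk (generic
Linux sysstat plus Ceph's own counters; nothing here is specific to
this cluster):

  iostat -x 5     # per-device await, utilisation and queue depth
  ceph osd perf   # commit/apply latency per OSD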