On Fri, Sep 29, 2017 at 5:56 PM, Yoann Moulin wrote:
> Hello,
>
> I'm doing some tests with radosgw on Luminous (12.2.1), and I have a few
> questions.
>
> In the documentation[1], there is a reference to "radosgw-admin region get"
> but it no longer seems to be available.
> It should be "radosg
Hi,
We have similar issues.
After upgrading from Hammer to Jewel, the tunable "chooseleaf_stable"
was introduced. If we activate it, nearly all data will be moved. The
cluster has 2400 OSDs on 40 nodes across two datacenters and is filled with
2.5 PB of data.
We tried to enable it but the backfillin
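For reference, enabling that tunable works out to roughly the following
(file names below are just placeholders; double-check against the docs for
your release):

  ceph osd getcrushmap -o crushmap.bin
  crushtool -d crushmap.bin -o crushmap.txt
  # edit crushmap.txt and set: tunable chooseleaf_stable 1
  crushtool -c crushmap.txt -o crushmap.new
  ceph osd setcrushmap -i crushmap.new

As soon as the new map is injected the big rebalance starts, which is
exactly the problem described above.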
Hi,
I'm reading this document:
http://storageconference.us/2017/Presentations/CephObjectStore-slides.pdf
I have 3 questions:
1. Does BlueStore write data (to the raw block device) and metadata (to
RocksDB) simultaneously, or sequentially?
2. In my opinion, the performance of BlueStore cannot compar
Hi Everyone,
Is there a guide/tutorial on how to set up a Ceph monitoring system using
collectd / grafana / graphite? Other suggestions are welcome as well!
I found some GitHub solutions but not much documentation on how to implement.
Thanks.
Regards,
Ossi
On Thu, Sep 28, 2017 at 05:46:30PM +0200, Abhishek wrote:
> This is the first bugfix release of Luminous v12.2.x long term stable
> release series. It contains a range of bug fixes and a few features
> across CephFS, RBD & RGW. We recommend that all users of the 12.2.x series
> update.
>
> For more det
Hey Christian,
On 29 Sep 2017 12:32 a.m., "Christian Wuerdig"
> wrote:
>
>> I'm pretty sure the orphan find command does exactly that -
>> finding orphans. I remember some emails on the dev list where Yehuda
>> said he wasn't 100% comfortable with automating the delete just yet.
>> So the purp
In addition to the points that you made :
I noticed on RAID0 disks that read IO errors are not always trapped by
Ceph, leading to unexpected behaviour of the impacted OSD daemon.
On both RAID0 and non-RAID disks, an IO error is logged in /var/log/messages:
Oct 2 15:20:37 os-ceph05 kernel: sd 0:
Please file a tracker ticket with all the info you have for stuff like
this. They’re a lot harder to lose than emails are. ;)
On Sat, Sep 30, 2017 at 8:31 AM Marc Roos wrote:
> Is this useful for someone?
>
>
>
> [Sat Sep 30 15:51:11 2017] libceph: osd5 192.168.10.113:6809 socket
> closed (con st
If you take Ceph out of your search string you should find loads of
tutorials on setting up the popular collectd/influxdb/grafana stack. Once
you've got that in place, the Ceph bit should be fairly easy. There are Ceph
collectd plugins out there, or you could write your own.
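For the shipping part, the usual pattern is to point collectd at InfluxDB's
collectd listener; a minimal sketch (the hostname below is a placeholder,
25826 is InfluxDB's default collectd port):

  # collectd.conf on the Ceph nodes
  LoadPlugin network
  <Plugin network>
    Server "influxdb.example.com" "25826"
  </Plugin>

  # influxdb.conf: enable the collectd input
  [[collectd]]
    enabled = true
    bind-address = ":25826"
    database = "collectd"
    typesdb = "/usr/share/collectd/types.db"

Grafana then reads from the "collectd" database via the InfluxDB data source.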
On Mon, Oct 2, 2017 at
Hi,
According to regulations in China, we, the mirror site mirrors.ustc.edu.cn,
are no longer able to serve the domain cn.ceph.com, which has no ICP
license[1].
Please either disable the CNAME record of cn.ceph.com or change it to a
mirror like hk.ceph.com.
People can still access our mirro
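For reference, on the DNS side that would just be a record change along the
lines of (illustrative BIND zone syntax):

  cn.ceph.com.    3600    IN    CNAME    hk.ceph.com.

or removing the cn.ceph.com entry altogether.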
Prometheus has a nice data exporter built in Go; you can then send the data to
Grafana or any other tool:
https://github.com/digitalocean/ceph_exporter
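A rough sketch of wiring it up (image name and port are from memory of that
repo's README, so adjust as needed):

  # run the exporter on a host with a working ceph.conf and keyring
  docker run -d --net=host -v /etc/ceph:/etc/ceph:ro digitalocean/ceph_exporter

  # prometheus.yml: scrape job for the exporter (9128 is its default port)
  scrape_configs:
    - job_name: 'ceph'
      static_configs:
        - targets: ['ceph-node:9128']

Grafana then just uses Prometheus as a data source.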
*German*
2017-10-02 8:34 GMT-03:00 Osama Hasebou :
> Hi Everyone,
>
> Is there a guide/tutorial about how to setup Ceph monitoring system using
> co
On 02/10/17 12:34, Osama Hasebou wrote:
> Hi Everyone,
>
> Is there a guide/tutorial about how to setup Ceph monitoring system
> using collectd / grafana / graphite ? Other suggestions are welcome as
> well !
We just installed the collectd plugin for Ceph and pointed it at our
graphite server;
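The relevant collectd.conf bits look roughly like this (daemon names, socket
paths and the graphite host below are illustrative; one <Daemon> block per
admin socket on the host):

  LoadPlugin ceph
  <Plugin ceph>
    <Daemon "osd.0">
      SocketPath "/var/run/ceph/ceph-osd.0.asok"
    </Daemon>
    <Daemon "mon.a">
      SocketPath "/var/run/ceph/ceph-mon.a.asok"
    </Daemon>
  </Plugin>

  LoadPlugin write_graphite
  <Plugin write_graphite>
    <Node "graphite">
      Host "graphite.example.com"
      Port "2003"
      Protocol "tcp"
      Prefix "collectd."
    </Node>
  </Plugin>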
On Mon, Oct 2, 2017 at 11:55 AM, Matthew Vernon wrote:
> On 02/10/17 12:34, Osama Hasebou wrote:
>> Hi Everyone,
>>
>> Is there a guide/tutorial about how to setup Ceph monitoring system
>> using collectd / grafana / graphite ? Other suggestions are welcome as
>> well !
>
> We just installed the c
Hello everyone,
What is the safest way to decrease the number of PGs in the cluster? Currently
I have too many per OSD.
Thanks
As someone currently running a collectd/influxdb/grafana stack for monitoring, I
am curious if anyone has seen issues moving Jewel -> Luminous.
I thought I remembered reading that collectd wasn’t working perfectly in
Luminous, likely not helped by the MGR daemon.
Also thought about trying teleg
You cannot;
On 02/10/2017 21:43, Andrei Mikhailovsky wrote:
> Hello everyone,
>
> what is the safest way to decrease the number of PGs in the cluster.
> Currently, I have too many per osd.
>
> Thanks
>
Adding more OSDs or deleting/recreating pools that have too many PGs are
your only two options to reduce the number of PGs per OSD. It is on the
Ceph roadmap, but it is not a currently supported feature. You can
alternatively adjust the warning threshold, but it is still
a problem you
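If you just need to quiet the warning in the meantime, the threshold can be
raised on the mons; the option name depends on the release, so treat this as
a sketch:

  # ceph.conf on the monitors
  [mon]
  mon_pg_warn_max_per_osd = 400    # Jewel/Kraken
  # mon_max_pg_per_osd = 400       # Luminous 12.2.x

It can also be injected at runtime, e.g.
ceph tell mon.* injectargs '--mon_pg_warn_max_per_osd 400'.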
On Thu, Sep 28, 2017 at 5:16 AM, Micha Krause wrote:
> Hi,
>
> I had a chance to catch John Spray at the Ceph Day, and he suggested that I
> try to reproduce this bug in Luminous.
Did you edit the code before trying Luminous? I also noticed from your
original mail that it appears you're using mult
Can anyone help me?
On Oct 2, 2017 17:56, "Sam Huracan" wrote:
> Hi,
>
> I'm reading this document:
> http://storageconference.us/2017/Presentations/CephObjectStore-slides.pdf
>
> I have 3 questions:
>
> 1. BlueStore writes both data (to raw block device) and metadata (to
> RockDB) simultaneous
Hey Cephers,
My apologies for the short notice, but the Ceph on ARM meeting scheduled
for tomorrow (Oct 3) has been canceled.
Kindest regards,
Leo
--
Leonardo Vaz
Ceph Community Manager
Open Source and Standards Team
yes, at least that's how I'd interpret the information given in this
thread:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-February/016521.html
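If I remember the subcommands right, the workflow is roughly (pool name and
job id below are just placeholders):

  radosgw-admin orphans find --pool=default.rgw.buckets.data --job-id=orphans1
  radosgw-admin orphans list-jobs
  # after reviewing the results, clean up the scan state:
  radosgw-admin orphans finish --job-id=orphans1

The find step only reports suspected orphan RADOS objects; actually deleting
them is still left to you.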
On Tue, Oct 3, 2017 at 1:11 AM, Webert de Souza Lima
wrote:
> Hey Christian,
>
>> On 29 Sep 2017 12:32 a.m., "Christian Wuerdig"
>> wrote:
>
On Mon, Oct 02, 2017 at 11:47:47PM +0800, Shengjing Zhu wrote:
> Hi,
>
> According to the regulation in China, we, the mirror site of
> mirrors.ustc.edu.cn, is no longer able to serve the domain
> cn.ceph.com, which has no ICP license[1].
>
> Please either disable the CNAME record of cn.ceph.com
Hi,
On 02/10/2017 13:34, Osama Hasebou wrote:
> Hi Everyone,
>
> Is there a guide/tutorial about how to setup Ceph monitoring system
> using collectd / grafana / graphite ? Other suggestions are welcome as
> well !
>
> I found some GitHub solutions but not much documentation on how to
> implement