Hi there,
On 06/06/2016 09:56 PM, Patrick McGarry wrote:
> So we have gone from not having a Ceph Tech Talk this month…to having
> two! As a part of our regularly scheduled Ceph Tech Talk series, Lenz
> Grimmer from OpenATTIC will be talking about the architecture of their
> ma
Hi all,
FYI, a few days ago, we released openATTIC 2.0.13 beta. On the Ceph
management side, we've made some progress with the cluster and pool
monitoring backend, which lays the foundation for the dashboard that
will display graphs generated from this data. We also added some more
RBD management
Hi Alexander,
sorry for the late reply, I've been on vacation for a bit.
On 08/11/2016 07:16 AM, Александр Пивушков wrote:
> and what does not suit calamari?
Thank you for your comment! openATTIC has a somewhat different scope: we
aim at providing a versatile storage management system that supp
Hi,
On 08/16/2016 02:16 PM, Lenz Grimmer wrote:
> I blogged about the state of Ceph support a few months ago [1], a
> followup posting is currently in the works.
>
> [1]
> https://blog.openattic.org/posts/update-the-state-of-ceph-support-in-openattic/
FWIW, the update has bee
Hi Sage,
On 06/30/2017 05:21 AM, Sage Weil wrote:
> The easiest thing is to
>
> 1/ Stop testing filestore+btrfs for luminous onward. We've recommended
> against btrfs for a long time and are moving toward bluestore anyway.
Searching the documentation for "btrfs" does not really give a user an
Hi Sage,
On 06/30/2017 06:48 PM, Sage Weil wrote:
> Ah, crap. This is what happens when devs don't read their own
> documentation. I recommend against btrfs every time it ever comes
> up, the downstream distributions all support only xfs, but yes, it
> looks like the docs never got updated...
Hi,
On 06/27/2017 11:54 PM, Daniel K wrote:
> Is there anywhere that details the various compression settings for
> bluestore backed pools?
>
> I can see compression in the list of options when I run ceph osd pool
> set, but can't find anything that details what valid settings are.
>
> I've tr
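For reference, the per-pool BlueStore compression settings can be sketched roughly as follows (the pool name "mypool" is made up; check the documentation for your release before relying on these):

```shell
# Valid compression_mode values: none, passive, aggressive, force
ceph osd pool set mypool compression_mode aggressive
# Valid compression_algorithm values: none, snappy, zlib, zstd, lz4
ceph osd pool set mypool compression_algorithm snappy
# Only keep the compressed blob if it is at most 87.5% of the original size:
ceph osd pool set mypool compression_required_ratio .875
```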
On 07/17/2017 10:15 PM, Alvaro Soto wrote:
> The second part, never mind, now I see that the solution is to use
> the TCMU daemon. I was thinking of an out-of-the-box iSCSI endpoint
> directly from CEPH; sorry, I don't have too much expertise in this area.
There is no "native" iSCSI support built i
Hi,
On 10/03/2017 08:37 AM, Jasper Spaans wrote:
> Now to find or build a pretty dashboard with all of these metrics. I
> wasn't able to find something in the grafana supplied dashboards, and
> haven't spent enough time on openattic to extract a dashboard from
> there. Any pointers appreciated!
On 10/05/2017 12:15 PM, Jasper Spaans wrote:
> Thanks for the pointers - I guess I'll need to find some time to change
> those dashboards to use the ceph-mgr metrics names (at least, I'm unsure
> if the DO exporter uses the same names as ceph-mgr.) To be continued..
Not sure about that; AFAIK the
Hi,
FYI, DeepSea, the Salt-based framework to deploy and manage a Ceph
cluster, now has experimental support on CentOS 7.
Thanks to Ricardo Dias for getting this up and running as a SUSE
Hackweek project.
You can see a demo here: https://asciinema.org/a/147812
RPM packages can be found here:
ht
Hi Harry,
On 12/12/2017 02:18 AM, DHD.KOHA wrote:
> After managing to install ceph, with all possible ways that I could
> manage on 4 nodes, 4 osd and 3 monitors, with ceph-deploy and later
> with ceph-ansible, I thought to give a try to install CALAMARI on
> UBUNTU 14.04 ( another separate
Hi,
On 12/15/2017 11:53 AM, Falk Mueller-Braun wrote:
> since we upgraded to Luminous (12.2.2), we use the internal Ceph
> exporter for getting the Ceph metrics to Prometheus. At random times we
> get an Internal Server Error from the Ceph exporter, with python having a
> key error with some rando
Hi Dan,
On 12/15/2017 10:13 AM, Dan van der Ster wrote:
> As we are starting to ramp up our internal rgw service, I am wondering
> if someone already developed some "open source" high-level admin tools
> for rgw. On the one hand, we're looking for a web UI for users to create
> and see their cred
On 01/09/2018 07:46 PM, Karun Josy wrote:
> We have a user "testuser" with below permissions :
>
> $ ceph auth get client.testuser
> exported keyring for client.testuser
> [client.testuser]
> key = ==
> caps mon = "profile rbd"
> caps osd = "profile rbd pool=ecpool, pr
Hi Massimiliano,
On 01/11/2018 12:15 PM, Massimiliano Cuttini wrote:
> _*3) Management complexity*_
> Ceph is amazing, but is just too big to have everything under control
> (too many services).
> Now there is a management console, but as far as I read this management
> console just show basic da
Ciao Massimiliano,
On 01/23/2018 01:29 PM, Massimiliano Cuttini wrote:
>> https://www.openattic.org/features.html
>
> Oh god THIS is the answer!
:)
> Lenz, if you need help I can also join development.
You're more than welcome - we have a lot of work ahead of us...
Feel free to join our Free
Just curious, is anyone aware of $SUBJECT? As Prometheus provides a
built-in alert mechanism [1], are there any custom rules that people use
to receive notifications about critical situations in a Ceph cluster?
Would it make sense to collect these and have them included in a git
repo under the Ce
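As a starting point, a minimal rule of the kind I have in mind could look like this (a sketch, assuming the ceph_health_status metric exported by the mgr prometheus module, where 0 means HEALTH_OK; thresholds and labels are examples):

```yaml
groups:
  - name: ceph.rules
    rules:
      - alert: CephHealthNotOK
        # ceph_health_status: 0 = HEALTH_OK, 1 = HEALTH_WARN, 2 = HEALTH_ERR
        expr: ceph_health_status != 0
        for: 5m
        labels:
          severity: warning
        annotations:
          description: "Ceph cluster health has not been OK for 5 minutes"
```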
Hi all,
On 02/08/2018 11:23 AM, Martin Emrich wrote:
> I just want to thank all organizers and speakers for the awesome Ceph
> Day at Darmstadt, Germany yesterday.
>
> I learned of some cool stuff I'm eager to try out (NFS-Ganesha for RGW,
> openATTIC, ...). Organization and food were great, too.
On 02/16/2018 07:16 AM, Kai Wagner wrote:
> yes there are plans to add management functionality to the dashboard as
> well. As soon as we've covered all the existing functionality to create
> the initial PR we'll start with the management stuff. The big benefit
> here is, that we can profit what w
On 08/18/2016 12:42 AM, Brad Hubbard wrote:
> On Thu, Aug 18, 2016 at 1:12 AM, agung Laksono
> wrote:
>>
>> Is there a way to make the compiling process be faster? something
>> like only compile a particular code that I change.
>
> Sure, just use the same build directory and run "make" again af
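For a cmake-based build tree, an incremental rebuild along those lines is roughly (the target name is an example; a plain "make" rebuilds everything that changed):

```shell
# Reuse the existing build directory; make only recompiles what changed.
cd ceph/build
make -j"$(nproc)" ceph-osd
```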
Hi,
if you're running a Ceph cluster and would be interested in trying out a
new tool for managing/monitoring it, we've just released version 2.0.14
of openATTIC that now provides a first implementation of a cluster
monitoring dashboard.
This is work in progress, but we'd like to solicit your inp
Hi,
On 09/22/2016 03:03 PM, Matteo Dacrema wrote:
> has someone ever tried to run a ceph cluster on two different versions
> of the OS?
> In particular I’m running a ceph cluster half on Ubuntu 12.04 and half
> on Ubuntu 14.04 with Firefly version.
> I’m not seeing any issues.
> Are there some ki
On 11/03/2016 06:52 AM, Tim Serong wrote:
> I thought I should make a little noise about a project some of us at
> SUSE have been working on, called DeepSea. It's a collection of Salt
> states, runners and modules for orchestrating deployment of Ceph
> clusters. To help everyone get a feel for i
Hi Swami,
On 11/25/2016 11:04 AM, M Ranga Swami Reddy wrote:
> Can you please confirm, if the DeepSea works on Ubuntu also?
Not yet, as far as I can tell, but testing/feedback/patches are very
welcome ;)
One of the benefits of using Salt is that it supports multiple
distributions. However, curr
Hi all,
(replying to the root of this thread, as the discussions between
ceph-users and ceph-devel have somewhat diverged):
On 11/03/2016 06:52 AM, Tim Serong wrote:
> I thought I should make a little noise about a project some of us at
> SUSE have been working on, called DeepSea. It's a collec
On 07/24/2018 07:02 AM, Satish Patel wrote:
> My 5 node ceph cluster is ready for production, now i am looking for
> good monitoring tool (Open source), what majority of folks using in
> their production?
There are several, using Prometheus with the Ceph Exporter Manager
module is a popular choic
On 08/22/2018 08:57 PM, David Turner wrote:
> My initial reaction to this PR/backport was questioning why such a
> major update would happen on a dot release of Luminous. Your
> reaction to keeping both dashboards viable goes to support that.
> Should we really be backporting features into a dot
Hi all,
JFYI, the team working on the Ceph Manager Dashboard has a bi-weekly
conference call that discusses the ongoing development and gives an
update on recent improvements/features.
Today, we plan to give a demo of the new dashboard landing page (See
https://tracker.ceph.com/issues/24573 and
h
On 08/24/2018 10:59 AM, Lenz Grimmer wrote:
> JFYI, the team working on the Ceph Manager Dashboard has a bi-weekly
> conference call that discusses the ongoing development and gives an
> update on recent improvements/features.
>
> Today, we plan to give a demo of the new dashboa
On 08/24/2018 02:00 PM, Lenz Grimmer wrote:
> On 08/24/2018 10:59 AM, Lenz Grimmer wrote:
>
>> JFYI, the team working on the Ceph Manager Dashboard has a bi-weekly
>> conference call that discusses the ongoing development and gives an
>> update on recent improvements/f
Great news. Welcome Mike! I look forward to working with you, let me
know if there is anything I can help you with.
Lenz
On 08/29/2018 03:13 AM, Sage Weil wrote:
> Please help me welcome Mike Perez, the new Ceph community manager!
>
> Mike has a long history with Ceph: he started at DreamHost
Hi Hendrik,
On 09/18/2018 12:57 PM, Hendrik Peyerl wrote:
> we just deployed an Object Gateway to our CEPH Cluster via ceph-deploy
> in an IPv6 only Mimic Cluster. To make sure the RGW listens on IPv6 we
> set the following config:
> rgw_frontends = civetweb port=[::]:7480
>
> We now tried to en
On 02/25/2018 01:18 PM, Massimiliano Cuttini wrote:
> Is upgrading the kernel to a major version on a distribution a bad idea?
> Or is it just as safe as upgrading any other package?
> I prefer ultra stable releases instead of the latest higher package.
> I prefer ultra stables release instead of latest higher package.
In that case it's probably best to stick with
On 02/28/2018 11:51 PM, Sage Weil wrote:
> On Wed, 28 Feb 2018, Dan Mick wrote:
>
>> Would anyone else appreciate a Google Calendar invitation for the
>> CDMs? Seems like a natural.
>
> Funny you should mention it! I was just talking to Leo this morning
> about creating a public Ceph Events cal
Hi all,
a month has passed since the Dashboard v2 was merged into the master
branch, so I thought it might be helpful to write a summary/update (with
screenshots) of what we've been up to since then:
https://www.openattic.org/posts/ceph-dashboard-v2-update/
Let us know what you think!
Cheers,
Hi Marc,
On 04/21/2018 11:34 AM, Marc Roos wrote:
> I wondered if there are faster ways to copy files to and from a bucket,
> like eg not having to use the radosgw? Is nfs-ganesha doing this faster
> than s3cmd?
I have doubts that putting another layer on top of S3 will make it
faster than
On 05/08/2018 07:21 AM, Kai Wagner wrote:
> Looks very good. Is it anyhow possible to display the reason why a
> cluster is in an error or warning state? Thinking about the output from
> ceph -s if this could be shown in case there's a failure. I think this
> will not be provided by default but wo
On 06/12/2018 07:14 PM, Max Cuttins wrote:
> it's an honor for me to contribute to the main repo of ceph.
We appreciate your support! Please take a look at
http://docs.ceph.com/docs/master/start/documenting-ceph/ for guidance on
how to contribute to the documentation.
> Just a thought, is it wise havi
On 06/13/2018 02:01 PM, Sean Purdy wrote:
> Me too. I picked ceph luminous on debian stretch because I thought
> it would be maintained going forwards, and we're a debian shop. I
> appreciate Mimic is a non-LTS release, I hope issues of debian
> support are resolved by the time of the next LTS.
On 06/18/2018 08:38 PM, Michael Kuriger wrote:
> Don’t use the installer scripts. Try yum install ceph
I'm not sure I agree. While running "make install" is of course of
somewhat limited use on a distributed cluster, I would expect that it at least
installs all the required components on the lo
Hi Leo,
On 06/20/2018 01:47 AM, Leonardo Vaz wrote:
> We created the following etherpad to organize the calendar for the
> future Ceph Tech Talks.
>
> For the Ceph Tech Talk of June 28th our fellow George Mihaiescu will
> tell us how Ceph is being used on cancer research at OICR (Ontario
> Insti
On 06/20/2018 05:42 PM, Kevin Hrpcek wrote:
> The ceph mgr dashboard is only enabled on the mgr daemons. I'm not
> familiar with the mimic dashboard yet, but it is much more advanced than
> luminous' dashboard and may have some alerting abilities built in.
Not yet - see http://docs.ceph.com/docs/
Hi,
On 01/13/2017 05:34 PM, Tu Holmes wrote:
> I remember seeing one of the openATTIC project people on the list
> mentioning that.
>
> My initial question is, "Can you configure openATTIC just to monitor an
> existing cluster without having to build a new one?"
Yes, you can - when you install
Hi,
On 01/30/2017 12:18 PM, Matthew Vernon wrote:
> On 28/01/17 23:43, Marc Roos wrote:
>
>> Is there a doc that describes all the parameters that are published by
>> collectd-ceph?
>
> The best I've found is the Redhat documentation of the performance
> counters (which are what collectd-ceph i
Hi,
On 05/18/2017 02:28 PM, Shambhu Rajak wrote:
> I want to deploy ceph-cluster as a backend storage for openstack, so I
> am trying to find the best tool available for deploying ceph cluster.
>
> Few are in my mind:
>
> https://github.com/ceph/ceph-ansible
>
> https://github.com/01org/virtua
Hi Serkan,
On 11/16/18 11:29 AM, Serkan Çoban wrote:
> Does anyone know if slides/recordings will be available online?
Unfortunately, the presentations were not recorded. However, the slides
are usually made available on the corresponding event page,
https://ceph.com/cephdays/ceph-day-berlin/ in
Hi Alexander,
On 11/13/18 12:37 PM, Kasper, Alexander wrote:
> As I am not sure how to correctly use tracker.ceph.com, I'll post my
> report here:
>
> Using the dashboard to delete a rbd image via gui throws an error when
> the image name ends with a whitespace (user input error led to this
>
Hi Ashley,
On 11/29/18 7:16 AM, Ashley Merrick wrote:
> After rebooting a server that hosts the MGR Dashboard I am now unable to
> get the dashboard module to run.
>
> Upon restarting the mgr service I see the following :
>
> ImportError: No module named ordered_dict
> Nov 29 07:13:14 ceph-m01
On 11/29/18 10:28 AM, Ashley Merrick wrote:
> Sorry missed the basic info!!
>
> Latest Mimic 13.2.2
>
> Ubuntu 18.04
Thanks. So it worked before the reboot and did not afterwards? What
changed? Did you perform an OS update?
Would it be possible for you to paste the entire mgr log file messages
On 11/29/18 11:29 AM, Ashley Merrick wrote:
> Yeah had a few OS updates, but not related directly to CEPH.
But they seem to be the root cause of the issue you're facing. Thanks
for sharing the entire log entry.
> The full error log after a reboot is :
>
> 2018-11-29 11:24:22.494 7faf046a1700 1
Hi Ashley,
On 11/29/18 11:41 AM, Ashley Merrick wrote:
> Managed to fix the issue with some googling from the error above.
>
> There is a bug with urllib3 1.24.1 which breaks the module ordered_dict (1)
Good spotting!
> I rolled back to a working version "pip install urllib3==1.23" and
> resta
Hi,
On 1/30/19 2:02 PM, PHARABOT Vincent wrote:
> I have my cluster set up correctly now (thank you again for the help)
What version of Ceph is this?
> I am seeking now a way to get cluster health thru API (REST) with curl
> command.
>
> I had a look at manager / RESTful and Dashboard but none
On 30 January 2019, 19:33:14 CET, PHARABOT Vincent wrote:
>Thanks for the info
>But, nope, on Mimic (13.2.4) /api/health ends in 404 (/api/health/full,
>/api/health/minimal also...)
On which node did you try to access the API? Did you enable the Dashboard
module in Ceph manager?
Lenz
--
D
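For what it's worth, the kind of check I had in mind looks roughly like this (a sketch only; the hostname, port and endpoint path are assumptions and vary by release):

```shell
# Enable the dashboard module on the mgr first:
ceph mgr module enable dashboard
# Then query the REST endpoint on the active mgr
# (-k because the dashboard typically uses a self-signed certificate):
curl -k https://mgr-host:8443/api/health/minimal
```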
Hi Ashley,
On 2/9/19 4:43 PM, Ashley Merrick wrote:
> Any further suggestions, should i just ignore the error "Failed to load
> ceph-mgr modules: telemetry" or is this my root cause for no realtime
> I/O readings in the Dashboard?
I don't think this is related. If you don't plan to enable the t
On 2/21/19 4:30 PM, Hayashida, Mami wrote:
> I followed the documentation
> (http://docs.ceph.com/docs/mimic/mgr/dashboard/) to enable the dashboard
> RGW management, but am still getting the 501 error ("Please consult the
> documentation on how to configure and enable the Object Gateway... ").
>
On 2/6/19 11:52 AM, Junk wrote:
> I was trying to set my mimic dashboard cert using the instructions
> from
>
> http://docs.ceph.com/docs/mimic/mgr/dashboard/
>
> and I'm pretty sure the lines
>
>
> $ ceph config-key set mgr mgr/dashboard/crt -i dashboard.crt
> $ ceph config-key set mgr mgr/d
Hi Wes,
On 4/4/19 9:23 PM, Wes Cilldhaire wrote:
> Can anyone at all please confirm whether this is expected behaviour /
> a known issue, or give any advice on how to diagnose this? As far as
> I can tell my mon and mgr are healthy. All rbd images have
> object-map and fast-diff enabled.
My g
Hi Fyodor,
(Cc:ing Alfonso)
On 8/13/19 12:47 PM, Fyodor Ustinov wrote:
> I have ceph nautilus (upgraded from mimic, if it is important) and in
> dashboard in "PG Status" section I see "Clean (2397%)"
>
> It's a bug?
Huh, that might be possible - sorry about that. We'd be grateful if you
could
On 8/22/19 9:38 PM, Wesley Dillingham wrote:
> I am interested in keeping a revision history of ceph-iscsi's
> gateway.conf object for any and all changes. It seems to me this may
> come in handy to revert the environment to a previous state. My question
> is are there any existing tools which do
Hi Jake,
On 8/27/19 3:22 PM, Jake Grimmett wrote:
> That exactly matches what I'm seeing:
>
> when iostat is working OK, I see ~5% CPU use by ceph-mgr
> and when iostat freezes, ceph-mgr CPU increases to 100%
Does this also occur if the dashboard module is disabled? Just wondering
if this is
On 9/24/19 1:37 PM, Miha Verlic wrote:
> I've got slightly different problem. After a few days of running fine,
> dashboard stops working because it is apparently seeking for wrong
> certificate file in /tmp. If I restart ceph-mgr it starts to work again.
Does the restart trigger the creation of
> On 24. 09. 19 14:53, Lenz Grimmer wrote:
>> On 9/24/19 1:37 PM, Miha Verlic wrote:
>>
>>> I've got slightly different problem. After a few days of running fine,
>>> dashboard stops working because it is apparently seeking for wrong
>>> certificate fi
Hi Thoralf,
there have been several reports about Ceph mgr modules (not just the
dashboard) experiencing hangs and freezes recently. The thread "mgr
daemons becoming unresponsive" might give you some additional insight.
Is the "device health metrics" module enabled on your cluster? Could you
try
Hi Matt,
On 1/6/20 4:33 PM, Matt Dunavant wrote:
> I was hoping there was some update on this bug:
> https://tracker.ceph.com/issues/39140
>
> In all recent versions of the dashboard, the RBD image page takes
> forever to populate due to this bug. All our images have fast-diff
> enabled, so it