[ceph-users] What if etcd is lost

2019-07-15 Thread Oscar Segarra
Hi, I'm planning to deploy a Ceph cluster using etcd as the KV store. I'm planning to deploy a stateless etcd Docker container to store the data. I'd like to know whether the Ceph cluster will be able to boot when the etcd container restarts (and loses all data written to it). If the etcd container restarts when the ceph

[ceph-users] Returning to the performance in a small cluster topic

2019-07-15 Thread Drobyshevskiy, Vladimir
Dear colleagues, I would like to ask you for help with a performance problem on a site backed by a Ceph storage backend. Cluster details are below. I've got a big problem with PostgreSQL performance. It runs inside a VM on a virtio-scsi Ceph RBD image, and I see constant ~100% disk load with up t

Re: [ceph-users] Changing the release cadence

2019-07-15 Thread Sage Weil
On Mon, 15 Jul 2019, Kaleb Keithley wrote: > On Wed, Jun 5, 2019 at 11:58 AM Sage Weil wrote: > > > ... > > > > This has mostly worked out well, except that the mimic release received > > less attention than we wanted due to the fact that multiple downstream > > Ceph products (from Red Hat and SU

Re: [ceph-users] Changing the release cadence

2019-07-15 Thread Thore Bödecker
Hey, On 15.07.19 09:58, Kaleb Keithley wrote: > Speaking as (one of) the Ceph packager(s) in Fedora: Arch Linux packager for Ceph here o/ > If Octopus is really an LTS release like all the others, and you want > bleeding edge users to test/use it and give early feedback, then Fedora is > probabl

Re: [ceph-users] Changing the release cadence

2019-07-15 Thread Sage Weil
On Mon, 15 Jul 2019, Kaleb Keithley wrote: > On Mon, Jul 15, 2019 at 10:10 AM Sage Weil wrote: > > > On Mon, 15 Jul 2019, Kaleb Keithley wrote: > > > > > > If Octopus is really an LTS release like all the others, and you want > > > bleeding edge users to test/use it and give early feedback, then

Re: [ceph-users] Returning to the performance in a small cluster topic

2019-07-15 Thread Jordan Share
We found shockingly bad committed IOPS/latencies on ceph. We could get roughly 20-30 IOPS when running this fio invocation from within a vm: fio --name=seqwrite --rw=write --direct=1 --ioengine=libaio --bs=32k --numjobs=1 --size=2G --runtime=60 --group_reporting --fsync=1 For non-committed IO
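
For reference, here is the command quoted above, reformatted as it would be run inside the guest (same flags, nothing added; the per-write fsync is what makes this a committed-IOPS test):

    # Sequential 32 KiB writes, O_DIRECT, single job, with an fsync after
    # every write so each IO must reach stable storage before the next one.
    fio --name=seqwrite --rw=write --direct=1 --ioengine=libaio --bs=32k \
        --numjobs=1 --size=2G --runtime=60 --group_reporting --fsync=1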

Re: [ceph-users] What if etcd is lost

2019-07-15 Thread Frank Schilder
Hi Oscar, ceph itself does not use etcd for anything. Hence, a deployed and operational cluster will not notice the presence or absence of an etcd store. How much a loss of etcd means for your work depends on what you plan to store in it. If you look at the ceph/daemon container on docker, the
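
For readers unfamiliar with that image, the sketch below shows roughly how the ceph/daemon container is pointed at an external KV store. The KV_TYPE/KV_IP/KV_PORT variables and the populate_kvstore scenario are recalled from the ceph-container project and should be checked against the image version actually deployed; the address is a placeholder.

    # Hypothetical sketch only -- variable names and the populate_kvstore
    # scenario are assumptions about the ceph/daemon image, not verified here.
    # Write the bootstrap configuration into etcd once...
    docker run --rm --net=host \
        -e KV_TYPE=etcd -e KV_IP=192.168.0.20 -e KV_PORT=2379 \
        ceph/daemon populate_kvstore
    # ...then the daemons read it back from etcd on every start.
    docker run -d --net=host \
        -e KV_TYPE=etcd -e KV_IP=192.168.0.20 -e KV_PORT=2379 \
        ceph/daemon mon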

Re: [ceph-users] Returning to the performance in a small cluster topic

2019-07-15 Thread Paul Emmerich
You are effectively measuring the latency with jobs=1 here (which is appropriate considering that the WAL of a DB is effectively limited by latency) and yeah, a networked file system will always be a little bit slower than a local disk. But I think you should be able to get a higher performance he

[ceph-users] enterprise support

2019-07-15 Thread Void Star Nill
Hello, Other than Red Hat and SUSE, are there other companies that provide enterprise support for Ceph? Thanks, Shridhar

Re: [ceph-users] enterprise support

2019-07-15 Thread Eddy Castillon
Hi Void, Canonical offers a very interesting option: https://ubuntu.com/openstack/storage On Mon, Jul 15, 2019 at 2:53 PM, Void Star Nill ( void.star.n...@gmail.com) wrote: > Hello, > > Other than Red Hat and SUSE, are there other companies that provide > enterprise support for Ceph

Re: [ceph-users] enterprise support

2019-07-15 Thread Brady Deetz
https://www.mirantis.com/software/ceph/ On Mon, Jul 15, 2019 at 2:53 PM Void Star Nill wrote: > Hello, > > Other than Red Hat and SUSE, are there other companies that provide > enterprise support for Ceph? > > Thanks, > Shridhar

[ceph-users] Nautilus, RBD-Mirroring & Cluster Names

2019-07-15 Thread DHilsbos
All; I'm digging deeper into the capabilities of Ceph, and I ran across this: http://docs.ceph.com/docs/nautilus/rbd/rbd-mirroring/ Which seems really interesting, except... This feature seems to require custom cluster naming to function, which is deprecated in Nautilus, and not all commands adh

Re: [ceph-users] enterprise support

2019-07-15 Thread Robert LeBlanc
We recently used Croit (https://croit.io/) and they were really good. Robert LeBlanc PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1 On Mon, Jul 15, 2019 at 12:53 PM Void Star Nill wrote: > Hello, > > Other than Red Hat and SUSE, are there other companies that

Re: [ceph-users] Nautilus, RBD-Mirroring & Cluster Names

2019-07-15 Thread Paul Emmerich
No worries, that's just the names of the config files/keyrings on the mirror server which needs to access both clusters and hence two different ceph.conf files. Paul -- Paul Emmerich Looking for help with your Ceph cluster? Contact us at https://croit.io croit GmbH Freseniusstr. 31h 81247 Münc

Re: [ceph-users] Nautilus, RBD-Mirroring & Cluster Names

2019-07-15 Thread DHilsbos
Paul; If I understand you correctly: I will have 2 clusters, each named "ceph" (internally). As such, each will have a configuration file at: /etc/ceph/ceph.conf I would copy the other cluster's configuration file to something like: /etc/ceph/remote.conf Then the commands (run on the local
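
A minimal sketch of what that layout implies (the pool name "rbd" and client.admin are placeholder assumptions; the "remote" cluster name only selects /etc/ceph/remote.conf and its keyring on this host, it is not a name known to the remote cluster itself):

    # Assumed files on the host running the commands / rbd-mirror daemon:
    #   /etc/ceph/ceph.conf    + /etc/ceph/ceph.client.admin.keyring    (local)
    #   /etc/ceph/remote.conf  + /etc/ceph/remote.client.admin.keyring  (remote)
    rbd mirror pool enable rbd pool                     # local cluster
    rbd --cluster remote mirror pool enable rbd pool    # remote cluster
    # Point the local cluster at its peer; "remote" is just the conf prefix.
    rbd mirror pool peer add rbd client.admin@remote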

Re: [ceph-users] Nautilus, RBD-Mirroring & Cluster Names

2019-07-15 Thread Michel Raabe
Hi, On 15.07.19 22:42, dhils...@performair.com wrote: Paul; If I understand you correctly: I will have 2 clusters, each named "ceph" (internally). As such, each will have a configuration file at: /etc/ceph/ceph.conf I would copy the other cluster's configuration file to something like:

Re: [ceph-users] What if etcd is lost

2019-07-15 Thread Oscar Segarra
Hi Frank, Thanks a lot for your quick response. Yes, the use case that concerns me is the following: 1.- I bootstrap a complete cluster (mons, osds, mgr, mds, nfs, etc.) using etcd as a key store 2.- There is an electric blackout, all nodes of my cluster go down, and all data in my etcd is lost

Re: [ceph-users] Returning to the performance in a small cluster topic

2019-07-15 Thread Marc Roos
Isn't that why you're supposed to test up front? So you do not have shocking surprises? You can find some performance references in the mailing list archives as well. I think it would be good to publish some performance results on the ceph.com website. Can't be too difficult to put some default scen

Re: [ceph-users] Returning to the performance in a small cluster topic

2019-07-15 Thread Jordan Share
All "normal" VM usage is about what you'd expect, since a lot of apps or system software is still written from the days of spinning disks, when this (tens of ops) is the level of committed IOPS you can get from them. So they let the OS cache writes and only sync when needed. Some applications

Re: [ceph-users] Nautilus, RBD-Mirroring & Cluster Names

2019-07-15 Thread Jason Dillaman
On Mon, Jul 15, 2019 at 4:50 PM Michel Raabe wrote: > > Hi, > > > On 15.07.19 22:42, dhils...@performair.com wrote: > > Paul; > > > > If I understand you correctly: > > I will have 2 clusters, each named "ceph" (internally). > > As such, each will have a configuration file at: /etc/ceph/ceph

Re: [ceph-users] Ceph performance IOPS

2019-07-15 Thread Christian Wuerdig
Option 1 is the official way, option 2 will be a lot faster if it works for you (I was never in a situation requiring this, so I can't say), and option 3 is for filestore and not applicable to bluestore. On Wed, 10 Jul 2019 at 07:55, Davis Mendoza Paco wrote: > What would be the most appropriate pr