Re: [ceph-users] ceph-volume failed after replacing disk

2019-07-05 Thread Erik McCormick
If you create the OSD without specifying an ID it will grab the lowest available one. Unless you have other gaps somewhere, that ID would probably be the one you just removed. -Erik On Fri, Jul 5, 2019, 9:19 AM Paul Emmerich wrote: > > On Fri, Jul 5, 2019 at 2:17 PM Alfredo Deza wrote: > >> On
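For context, ceph-volume also lets you pin the replacement to a specific ID rather than taking the lowest free one; a rough sketch with placeholder device and ID (not taken from the thread):

    # reuse the ID of the OSD that was just removed (assumed to be 12 here)
    ceph-volume lvm create --data /dev/sdX --osd-id 12
    # or omit --osd-id and let the cluster assign the lowest available ID
    ceph-volume lvm create --data /dev/sdX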

Re: [ceph-users] Ceph-volume ignores cluster name from ceph.conf

2019-06-28 Thread Erik McCormick
On Fri, Jun 28, 2019, 10:05 AM Alfredo Deza wrote: > On Fri, Jun 28, 2019 at 7:53 AM Stolte, Felix > wrote: > > > > Thanks for the update Alfredo. What steps need to be done to rename my > cluster back to "ceph"? > > That is a tough one, the ramifications of a custom cluster name are > wild - it

Re: [ceph-users] IMPORTANT : NEED HELP : Low IOPS on hdd : MAX AVAIL Draining fast

2019-04-27 Thread Erik McCormick
On Sat, Apr 27, 2019, 3:49 PM Nikhil R wrote: > We have baremetal nodes 256GB RAM, 36core CPU > We are on ceph jewel 10.2.9 with leveldb > The osd’s and journals are on the same hdd. > We have 1 backfill_max_active, 1 recovery_max_active and 1 > recovery_op_priority > The osd crashes and starts o

Re: [ceph-users] Glance client and RBD export checksum mismatch

2019-04-11 Thread Erik McCormick
On Thu, Apr 11, 2019, 8:53 AM Jason Dillaman wrote: > On Thu, Apr 11, 2019 at 8:49 AM Erik McCormick > wrote: > > > > > > > > On Thu, Apr 11, 2019, 8:39 AM Erik McCormick > wrote: > >> > >> > >> > >> On Thu,

Re: [ceph-users] Glance client and RBD export checksum mismatch

2019-04-11 Thread Erik McCormick
On Thu, Apr 11, 2019, 8:39 AM Erik McCormick wrote: > > > On Thu, Apr 11, 2019, 12:07 AM Brayan Perera > wrote: > >> Dear Jason, >> >> >> Thanks for the reply. >> >> We are using python 2.7.5 >> >> Yes. script is based on opensta

Re: [ceph-users] Glance client and RBD export checksum mismatch

2019-04-11 Thread Erik McCormick
On Thu, Apr 11, 2019, 12:07 AM Brayan Perera wrote: > Dear Jason, > > > Thanks for the reply. > > We are using python 2.7.5 > > Yes. script is based on openstack code. > > As suggested, we have tried chunk_size 32 and 64, and both giving same > incorrect checksum value. > The value of rbd_store_

Re: [ceph-users] Bluestore WAL/DB decisions

2019-03-29 Thread Erik McCormick
On Fri, Mar 29, 2019 at 1:48 AM Christian Balzer wrote: > > On Fri, 29 Mar 2019 01:22:06 -0400 Erik McCormick wrote: > > > Hello all, > > > > Having dug through the documentation and reading mailing list threads > > until my eyes rolled back in my head, I am left

[ceph-users] Bluestore WAL/DB decisions

2019-03-28 Thread Erik McCormick
Hello all, Having dug through the documentation and reading mailing list threads until my eyes rolled back in my head, I am still left with a conundrum: do I separate the DB / WAL or not? I had a bunch of nodes running filestore with 8 x 8TB spinning OSDs and 2 x 240 GB SSDs. I had put the OS on
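In ceph-volume terms the decision comes down to whether a separate --block.db device is passed; a hedged sketch with placeholder devices (the WAL lands on the DB device by default when no separate --block.wal is given):

    # DB (and therefore WAL) on an SSD/NVMe partition, data on the spinner
    ceph-volume lvm create --data /dev/sdb --block.db /dev/nvme0n1p1
    # everything colocated on the data device
    ceph-volume lvm create --data /dev/sdb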

Re: [ceph-users] Commercial support

2019-01-23 Thread Erik McCormick
Suse as well https://www.suse.com/products/suse-enterprise-storage/ On Wed, Jan 23, 2019, 6:01 PM Alex Gorbachev On Wed, Jan 23, 2019 at 5:29 PM Ketil Froyn wrote: > > > > Hi, > > > > How is the commercial support for Ceph? More specifically, I was > recently pointed in the direction of the ve

Re: [ceph-users] Ceph on Azure ?

2018-12-23 Thread Erik McCormick
Dedicated links are not that difficult to come by anymore. It's mainly done with SDN. I know Megaport, for example, lets you provision virtual circuits to dozens of providers including Azure, AWS, and GCP. You can run several virtual circuits over a single cross-connect. I look forward to hearin

Re: [ceph-users] nfs-ganesha version in Ceph repos

2018-10-09 Thread Erik McCormick
On Tue, Oct 9, 2018 at 2:55 PM Erik McCormick wrote: > > > > On Tue, Oct 9, 2018, 2:17 PM Kevin Olbrich wrote: >> >> I had a similar problem: >> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-September/029698.html >> >> But even the recent 2

Re: [ceph-users] nfs-ganesha version in Ceph repos

2018-10-09 Thread Erik McCormick
pe [-Werror] (xdrproc_t) xdr_entry4)) I'm guessing I am missing some newer version of a library somewhere, but not sure. Any tips for successfully getting it to build? -Erik > Kevin > > > Am Di., 9. Okt. 2018 um 19:39 Uhr schrieb Erik McCormick < > emccorm...@cirru

Re: [ceph-users] nfs-ganesha version in Ceph repos

2018-10-09 Thread Erik McCormick
On Tue, Oct 9, 2018, 1:48 PM Alfredo Deza wrote: > On Tue, Oct 9, 2018 at 1:39 PM Erik McCormick > wrote: > > > > On Tue, Oct 9, 2018 at 1:27 PM Erik McCormick > > wrote: > > > > > > Hello, > > > > > > I'm trying to set up

Re: [ceph-users] nfs-ganesha version in Ceph repos

2018-10-09 Thread Erik McCormick
On Tue, Oct 9, 2018 at 1:27 PM Erik McCormick wrote: > > Hello, > > I'm trying to set up an nfs-ganesha server with the Ceph FSAL, and > running into difficulties getting the current stable release running. > The versions in the Luminous repo is stuck at 2.6.1, where

[ceph-users] nfs-ganesha version in Ceph repos

2018-10-09 Thread Erik McCormick
Hello, I'm trying to set up an nfs-ganesha server with the Ceph FSAL, and running into difficulties getting the current stable release running. The version in the Luminous repo is stuck at 2.6.1, whereas the current stable version is 2.6.3. I've seen a couple of HA issues in pre 2.6.3 versions th
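For reference, a minimal ganesha.conf export using the Ceph FSAL looks roughly like this (IDs and paths are illustrative, not taken from the thread):

    EXPORT {
        Export_ID = 1;
        Path = "/";
        Pseudo = "/cephfs";
        Access_Type = RW;
        FSAL {
            Name = CEPH;
        }
    }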

Re: [ceph-users] list admin issues

2018-10-09 Thread Erik McCormick
Without an example of the bounce response itself it's virtually impossible to troubleshoot. Can someone with mailman access please provide an example of a bounce response? All the attachments on those rejected messages are just HTML copies of the message (which are not on the list of filtered atta

Re: [ceph-users] list admin issues

2018-10-06 Thread Erik McCormick
This has happened to me several times as well. This address is hosted on gmail. -Erik On Sat, Oct 6, 2018, 9:06 AM Elias Abacioglu < elias.abacio...@deltaprojects.com> wrote: > Hi, > > I'm bumping this old thread cause it's getting annoying. My membership get > disabled twice a month. > Between

Re: [ceph-users] network architecture questions

2018-09-18 Thread Erik McCormick
On Tue, Sep 18, 2018, 7:56 PM solarflow99 wrote: > thanks for the replies, I don't know that cephFS clients go through the > MONs, they reach the OSDs directly. When I mentioned NFS, I meant NFS > clients (ie. not cephFS clients) This should have been pretty straight > forward. > Anyone doing HA

Re: [ceph-users] New Ceph community manager: Mike Perez

2018-08-28 Thread Erik McCormick
Wherever I go, there you are ;). Glad to have you back again! Cheers, Erik On Tue, Aug 28, 2018, 10:25 PM Dan Mick wrote: > On 08/28/2018 06:13 PM, Sage Weil wrote: > > Hi everyone, > > > > Please help me welcome Mike Perez, the new Ceph community manager! > > > > Mike has a long history with C

Re: [ceph-users] [Ceph-deploy] Cluster Name

2018-08-08 Thread Erik McCormick
Thode Jocelyn wrote: > Hi, > > > > We are still blocked by this problem on our end. Glen did you or someone > else figure out something for this ? > > > > Regards > > Jocelyn Thode > > > > From: Glen Baars [mailto:g...@onsitecomputers.com.au] > S

Re: [ceph-users] [Ceph-deploy] Cluster Name

2018-08-01 Thread Erik McCormick
Don't set a cluster name. It's no longer supported. It really only matters if you're running two or more independent clusters on the same boxes. That's generally inadvisable anyway. Cheers, Erik On Wed, Aug 1, 2018, 9:17 PM Glen Baars wrote: > Hello Ceph Users, > > Does anyone know how to set t

[ceph-users] Multiple Rados Gateways with different auth backends

2018-06-12 Thread Erik McCormick
Hello all, I have recently had need to make use of the S3 API on my Rados Gateway. We've been running just Swift API backed by Openstack for some time with no issues. Upon trying to use the S3 API I discovered that our combination of Jewel and Keystone renders AWS v4 signatures unusable. Apparent
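The usual shape of such a setup is two radosgw instances against the same cluster, one authenticating Swift through Keystone and one serving S3 with local RGW users; a rough ceph.conf sketch (section names, ports, and URL are made up for illustration, and this is not necessarily what was done here):

    [client.rgw.swift-gw]
        rgw_frontends = "civetweb port=8080"
        rgw_keystone_url = http://keystone.example.com:5000
        rgw_s3_auth_use_keystone = false

    [client.rgw.s3-gw]
        rgw_frontends = "civetweb port=8081"
        # no Keystone options here; S3 users are created with radosgw-admin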

Re: [ceph-users] civetweb: ssl_private_key

2018-05-29 Thread Erik McCormick
On Tue, May 29, 2018, 11:00 AM Marc Roos wrote: > > I guess we will not get this ssl_private_key option unless we upgrade > from Luminous? > > > http://docs.ceph.com/docs/master/radosgw/frontends/ > > That option is only for Beast. For civetweb you just feed it ssl_certificate with a combined PEM
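A minimal sketch of that civetweb setup, assuming the key and certificate are concatenated into a single PEM file (paths are illustrative):

    cat server.key server.crt > /etc/ceph/private/rgw.pem
    # then in ceph.conf, in the rgw client section:
    #   rgw_frontends = "civetweb port=443s ssl_certificate=/etc/ceph/private/rgw.pem"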

Re: [ceph-users] Ceph iSCSI is a prank?

2018-02-28 Thread Erik McCormick
On Feb 28, 2018 10:06 AM, "Max Cuttins" wrote: On 28/02/2018 15:19, Jason Dillaman wrote: > On Wed, Feb 28, 2018 at 7:53 AM, Massimiliano Cuttini > wrote: > >> I was building ceph in order to use with iSCSI. >> But I just see from the docs that need: >> >> CentOS 7.5 >> (which is not ava

Re: [ceph-users] Luminous v12.2.2 released

2017-12-05 Thread Erik McCormick
On Dec 5, 2017 10:26 AM, "Florent B" wrote: On Debian systems, upgrading packages does not restart services ! You really don't want it to restart services. Many small clusters run mons and osds on the same nodes, and auto restart makes it impossible to order restarts. -Erik On 05/12/2017 16:22
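The manual ordering on such colocated nodes is typically mons first, then OSDs, checking health in between; sketched here with the stock systemd targets (assuming systemd-based Luminous packaging):

    systemctl restart ceph-mon.target
    ceph -s            # wait for quorum / HEALTH_OK before continuing
    systemctl restart ceph-osd.target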

Re: [ceph-users] Is the 12.2.1 really stable? Anybody have production cluster with Luminous Bluestore?

2017-11-16 Thread Erik McCormick
I was told at the Openstack Summit that 12.2.2 should drop "In a few days." That was a week ago yesterday. If you have a little leeway, it may be best to wait. I know I am, but I'm paranoid. There was also a performance regression mentioned recently that's supposed to be fixed. -Erik On Nov 16

Re: [ceph-users] removing cluster name support

2017-11-07 Thread Erik McCormick
On Nov 8, 2017 7:33 AM, "Vasu Kulkarni" wrote: On Tue, Nov 7, 2017 at 11:38 AM, Sage Weil wrote: > On Tue, 7 Nov 2017, Alfredo Deza wrote: >> On Tue, Nov 7, 2017 at 7:09 AM, kefu chai wrote: >> > On Fri, Jun 9, 2017 at 3:37 AM, Sage Weil wrote: >> >> At CDM yesterday we talked about removing t

Re: [ceph-users] removing cluster name support

2017-11-06 Thread Erik McCormick
On Fri, Jun 9, 2017 at 12:30 PM, Sage Weil wrote: > On Fri, 9 Jun 2017, Erik McCormick wrote: >> On Fri, Jun 9, 2017 at 12:07 PM, Sage Weil wrote: >> > On Thu, 8 Jun 2017, Sage Weil wrote: >> >> Questions: >> >> >> >> - Does anybody on the l

Re: [ceph-users] Creating a custom cluster name using ceph-deploy

2017-10-15 Thread Erik McCormick
Do not, under any circumstances, make a custom named cluster. There be pain and suffering (and dragons) there, and official support for it has been deprecated. On Oct 15, 2017 6:29 PM, "Bogdan SOLGA" wrote: > Hello, everyone! > > We are trying to create a custom cluster name using the latest cep

Re: [ceph-users] Ceph monitoring

2017-10-02 Thread Erik McCormick
On Mon, Oct 2, 2017 at 11:55 AM, Matthew Vernon wrote: > On 02/10/17 12:34, Osama Hasebou wrote: >> Hi Everyone, >> >> Is there a guide/tutorial about how to setup Ceph monitoring system >> using collectd / grafana / graphite ? Other suggestions are welcome as >> well ! > > We just installed the c

Re: [ceph-users] removing cluster name support

2017-06-09 Thread Erik McCormick
On Fri, Jun 9, 2017 at 12:07 PM, Sage Weil wrote: > On Thu, 8 Jun 2017, Sage Weil wrote: >> Questions: >> >> - Does anybody on the list use a non-default cluster name? >> - If so, do you have a reason not to switch back to 'ceph'? > > It sounds like the answer is "yes," but not for daemons. Seve

Re: [ceph-users] Ceph Giant Repo problem

2017-03-30 Thread Erik McCormick
Try setting obsoletes=0 in /etc/yum.conf and see if that doesn't make it happier. The package is clearly there and it even shows it available in your log. -Erik On Thu, Mar 30, 2017 at 8:55 PM, Vlad Blando wrote: > Hi Guys, > > I encountered some issue with installing ceph package for giant,
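For completeness, the suggested change is a one-line edit followed by a retry (the package name is just an example):

    # in /etc/yum.conf
    obsoletes=0
    # then
    yum clean all && yum install ceph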

[ceph-users] Change ownership of objects

2016-12-07 Thread Erik McCormick
Hello everyone, I am running Ceph (firefly) Radosgw integrated with Openstack Keystone. Recently we built a whole new Openstack cloud and created users in that cluster. The names were the same, but the UUIDs are not. Both clouds are using the same Ceph cluster with their own RGW. I have managed
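The usual tooling for this kind of re-ownership is radosgw-admin's bucket link/unlink; a hedged sketch with placeholder names (the message is truncated, so this may not be the exact approach used):

    # reassign a bucket, and the objects in it, to the new user
    radosgw-admin bucket unlink --uid=old-user-uuid --bucket=mybucket
    radosgw-admin bucket link --uid=new-user-uuid --bucket=mybucket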

Re: [ceph-users] 10Gbit switch advice for small ceph cluster upgrade

2016-10-27 Thread Erik McCormick
On Oct 27, 2016 3:16 PM, "Oliver Dzombic" wrote: > > Hi, > > i can recommand > > X710-DA2 > We also use this NIC for everything. > 10G Switch is going over our bladeinfrastructure, so i can't recommand > something for you there. > > I assume that the usual juniper/cisco will do a good job. I t

Re: [ceph-users] hadoop on cephfs

2016-04-30 Thread Erik McCormick
I think what you are thinking of is the driver that was built to actually replace hdfs with rbd. As far as I know that thing had a very short lifespan on one version of hadoop. Very sad. As to what you proposed: 1) Don't use Cephfs in production pre-jewel. 2) running hdfs on top of ceph is a mas

Re: [ceph-users] Rename Ceph cluster

2015-08-18 Thread Erik McCormick
I've got a custom named cluster integrated with Openstack (Juno) and didn't run into any hard-coded name issues that I can recall. Where are you seeing that? As to the name change itself, I think it's really just a label applying to a configuration set. The name doesn't actually appear *in* the co

Re: [ceph-users] QEMU Venom Vulnerability

2015-05-19 Thread Erik McCormick
Sorry, I made the assumption you were on 7. If you're on 6 then I defer to someone else ;) If you're on 7, go here. http://ftp.redhat.com/pub/redhat/linux/enterprise/7Server/en/RHEV/SRPMS/ On May 19, 2015 2:47 PM, "Georgios Dimitrakakis" wrote: > Erik, > > are you talking about the ones here :

Re: [ceph-users] QEMU Venom Vulnerability

2015-05-19 Thread Erik McCormick
You can also just fetch the rhev SRPMs and build those. They have rbd enabled already. On May 19, 2015 12:31 PM, "Robert LeBlanc" wrote: > -BEGIN PGP SIGNED MESSAGE- > Hash: SHA256 > > You should be able to get the SRPM, extract the SPEC file and use that > to build a new package. You sh
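Rebuilding such an SRPM is the standard rpmbuild workflow; roughly (the package name and version are placeholders):

    yum install rpm-build yum-utils
    yum-builddep qemu-kvm-rhev-<version>.src.rpm
    rpmbuild --rebuild qemu-kvm-rhev-<version>.src.rpm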

Re: [ceph-users] Rados Gateway and keystone

2015-04-13 Thread Erik McCormick
I haven't really used the S3 stuff much, but the credentials should be in keystone already. If you're in horizon, you can download them under Access and Security->API Access. Using the CLI you can use the openstack client like "openstack credential " or with the keystone client like "keystone ec2-c
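For reference, the CLI equivalents look roughly like this (the legacy keystone subcommand is from memory and may vary by release):

    # modern client
    openstack ec2 credentials create
    openstack ec2 credentials list
    # legacy keystone client
    keystone ec2-credentials-list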

Re: [ceph-users] ceph and glance... permission denied??

2015-04-06 Thread Erik McCormick
Glance needs some additional permissions including write access to the pool you want to add images to. See the docs at: http://ceph.com/docs/master/rbd/rbd-openstack/ Cheers, Erik On Apr 6, 2015 7:21 AM, wrote: > Hi, first off: long time reader, first time poster :).. > I have a 4 node ceph clu
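The cap set from that doc page looks roughly like this (the pool name 'images' is the conventional default and may differ per deployment):

    ceph auth get-or-create client.glance \
        mon 'allow r' \
        osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'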

Re: [ceph-users] Ceph and Openstack

2015-04-02 Thread Erik McCormick
unk_size or self.get_size(location)) > > > This all looks correct, so any slowness isn't the bug I was thinking of. > > QH > > On Thu, Apr 2, 2015 at 10:06 AM, Erik McCormick < > emccorm...@cirrusseven.com> wrote: > >> The RDO glance-store package had a

Re: [ceph-users] Ceph and Openstack

2015-04-02 Thread Erik McCormick
13:20:50.449 1266 DEBUG glance.common.config [-] >>>> glance_store.rbd_store_ceph_conf = /etc/ceph/ceph.conf log_opt_values >>>> /usr/lib/python2.7/site-packages/oslo/config/cfg.py:2004 >>>> api.log:2015-04-02 13:20:50.449 1266 DEBUG glance.common.config [-] >>>

Re: [ceph-users] Ceph and Openstack

2015-04-02 Thread Erik McCormick
>>> glance_store.rbd_store_chunk_size = 8 log_opt_values >>> /usr/lib/python2.7/site-packages/oslo/config/cfg.py:2004 >>> api.log:2015-04-02 13:20:50.449 1266 DEBUG glance.common.config [-] >>> glance_store.rbd_store_pool= images log_opt_values >>> /usr/l

Re: [ceph-users] Ceph and Openstack

2015-04-01 Thread Erik McCormick
Can you both set Cinder and / or Glance logging to debug and provide some logs? There was an issue with the first Juno release of Glance in some vendor packages, so make sure you're fully updated to 2014.2.2 On Apr 1, 2015 7:12 PM, "Quentin Hartman" wrote: > I am conincidentally going through the
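Turning on debug output is a single oslo.config flag in each service; for example (restart the service afterwards and check its api.log):

    # /etc/glance/glance-api.conf (cinder.conf takes the same flag)
    [DEFAULT]
    debug = True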

[ceph-users] Giant on Centos 7 with custom cluster name

2015-01-17 Thread Erik McCormick
Hello all, I've got an existing Firefly cluster on CentOS 7 which I deployed with ceph-deploy. The latest version of ceph-deploy refuses to handle commands issued with a cluster name. [ceph_deploy.install][ERROR ] custom cluster names are not supported on sysvinit hosts This is a producti