If you create the OSD without specifying an ID, it will grab the lowest
available one. Unless you have other gaps somewhere, that ID would probably
be the one you just removed.
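For example (a rough sketch, not verified here; assumes OSD 12 was the one
you removed and /dev/sdc is the replacement disk):

  # remove the old OSD and free its ID
  ceph osd purge 12 --yes-i-really-mean-it
  # recreate without an ID: the lowest free ID (here 12) gets reused
  ceph-volume lvm create --data /dev/sdc
  # or pin the ID explicitly if you want to be sure
  ceph-volume lvm create --osd-id 12 --data /dev/sdc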
-Erik
On Fri, Jul 5, 2019, 9:19 AM Paul Emmerich wrote:
>
> On Fri, Jul 5, 2019 at 2:17 PM Alfredo Deza wrote:
>
>> On
On Fri, Jun 28, 2019, 10:05 AM Alfredo Deza wrote:
> On Fri, Jun 28, 2019 at 7:53 AM Stolte, Felix
> wrote:
> >
> > Thanks for the update Alfredo. What steps need to be done to rename my
> cluster back to "ceph"?
>
> That is a tough one, the ramifications of a custom cluster name are
> wild - it
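For anyone attempting it anyway, the rough shape on a systemd host is
something like the sketch below (names and paths are assumptions: custom
cluster name "mycluster", RPM-style /etc/sysconfig/ceph; untested, and the
keyring and data-path fallout is exactly where it gets hairy):

  # stop all daemons first
  systemctl stop ceph.target
  # rename the conf and keyrings back to the default cluster name
  mv /etc/ceph/mycluster.conf /etc/ceph/ceph.conf
  mv /etc/ceph/mycluster.client.admin.keyring /etc/ceph/ceph.client.admin.keyring
  # drop the CLUSTER override used by the unit/init scripts
  sed -i 's/^CLUSTER=.*/CLUSTER=ceph/' /etc/sysconfig/ceph   # /etc/default/ceph on Debian
  # data dirs are /var/lib/ceph/<type>/<cluster>-<id> and need renaming too
  for d in /var/lib/ceph/osd/mycluster-* /var/lib/ceph/mon/mycluster-*; do
      mv "$d" "${d/mycluster-/ceph-}"
  done
  systemctl start ceph.target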
On Sat, Apr 27, 2019, 3:49 PM Nikhil R wrote:
> We have bare-metal nodes with 256GB RAM and 36-core CPUs.
> We are on ceph jewel 10.2.9 with leveldb.
> The OSDs and journals are on the same HDD.
> We have 1 backfill_max_active, 1 recovery_max_active and 1
> recovery_op_priority
> The osd crashes and starts o
On Thu, Apr 11, 2019, 8:53 AM Jason Dillaman wrote:
> On Thu, Apr 11, 2019 at 8:49 AM Erik McCormick
> wrote:
> >
> >
> >
> > On Thu, Apr 11, 2019, 8:39 AM Erik McCormick
> wrote:
> >>
> >>
> >>
> >> On Thu,
On Thu, Apr 11, 2019, 8:39 AM Erik McCormick
wrote:
>
>
> On Thu, Apr 11, 2019, 12:07 AM Brayan Perera
> wrote:
>
>> Dear Jason,
>>
>>
>> Thanks for the reply.
>>
>> We are using python 2.7.5
>>
>> Yes. script is based on opensta
On Thu, Apr 11, 2019, 12:07 AM Brayan Perera
wrote:
> Dear Jason,
>
>
> Thanks for the reply.
>
> We are using python 2.7.5
>
> Yes, the script is based on openstack code.
>
> As suggested, we have tried chunk_size 32 and 64, and both give the same
> incorrect checksum value.
>
The value of rbd_store_
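For a quick cross-check (placeholders assumed: the image sits in the
'images' pool under its Glance UUID), stream the image out of rbd and hash
it independently, then compare with what Glance recorded. Keep in mind that
glance_store's rbd_store_chunk_size is expressed in MB, so a value of 8
means reads of 8 * 1024 * 1024 bytes:

  rbd export images/<image-uuid> - | md5sum      # hash the image as rbd stores it
  openstack image show <image-uuid> -c checksum  # what Glance recorded at upload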
On Fri, Mar 29, 2019 at 1:48 AM Christian Balzer wrote:
>
> On Fri, 29 Mar 2019 01:22:06 -0400 Erik McCormick wrote:
>
> > Hello all,
> >
> > Having dug through the documentation and reading mailing list threads
> > until my eyes rolled back in my head, I am left
Hello all,
Having dug through the documentation and reading mailing list threads
until my eyes rolled back in my head, I am left with a conundrum
still. Do I separate the DB / WAL or not.
I had a bunch of nodes running filestore with 8 x 8TB spinning OSDs
and 2 x 240 GB SSDs. I had put the OS on
Suse as well
https://www.suse.com/products/suse-enterprise-storage/
On Wed, Jan 23, 2019, 6:01 PM Alex Gorbachev wrote:
> On Wed, Jan 23, 2019 at 5:29 PM Ketil Froyn wrote:
> >
> > Hi,
> >
> > How is the commercial support for Ceph? More specifically, I was
> recently pointed in the direction of the ve
Dedicated links are not that difficult to come by anymore. It's mainly done
with SDN. I know Megaport, for example, lets you provision virtual
circuits to dozens of providers including Azure, AWS, and GCP. You can run
several virtual circuits over a single cross-connect.
I look forward to hearin
On Tue, Oct 9, 2018 at 2:55 PM Erik McCormick
wrote:
>
>
>
> On Tue, Oct 9, 2018, 2:17 PM Kevin Olbrich wrote:
>>
>> I had a similar problem:
>> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-September/029698.html
>>
>> But even the recent 2
pe [-Werror]
(xdrproc_t) xdr_entry4))
I'm guessing I am missing some newer version of a library somewhere, but
not sure. Any tips for successfully getting it to build?
-Erik
> Kevin
>
>
> Am Di., 9. Okt. 2018 um 19:39 Uhr schrieb Erik McCormick <
> emccorm...@cirru
On Tue, Oct 9, 2018, 1:48 PM Alfredo Deza wrote:
> On Tue, Oct 9, 2018 at 1:39 PM Erik McCormick
> wrote:
> >
> > On Tue, Oct 9, 2018 at 1:27 PM Erik McCormick
> > wrote:
> > >
> > > Hello,
> > >
> > > I'm trying to set up
On Tue, Oct 9, 2018 at 1:27 PM Erik McCormick
wrote:
>
> Hello,
>
> I'm trying to set up an nfs-ganesha server with the Ceph FSAL, and
> running into difficulties getting the current stable release running.
> The version in the Luminous repo is stuck at 2.6.1, where
Hello,
I'm trying to set up an nfs-ganesha server with the Ceph FSAL, and
running into difficulties getting the current stable release running.
The version in the Luminous repo is stuck at 2.6.1, whereas the
current stable version is 2.6.3. I've seen a couple of HA issues in
pre 2.6.3 versions th
Without an example of the bounce response itself it's virtually impossible
to troubleshoot. Can someone with mailman access please provide an example
of a bounce response?
All the attachments on those rejected messages are just HTML copies of the
message (which are not on the list of filtered atta
This has happened to me several times as well. This address is hosted on
gmail.
-Erik
On Sat, Oct 6, 2018, 9:06 AM Elias Abacioglu <
elias.abacio...@deltaprojects.com> wrote:
> Hi,
>
> I'm bumping this old thread cause it's getting annoying. My membership gets
> disabled twice a month.
> Between
On Tue, Sep 18, 2018, 7:56 PM solarflow99 wrote:
> thanks for the replies. I don't know that cephFS clients go through the
> MONs; they reach the OSDs directly. When I mentioned NFS, I meant NFS
> clients (i.e. not cephFS clients). This should have been pretty
> straightforward.
> Anyone doing HA
Wherever I go, there you are ;). Glad to have you back again!
Cheers,
Erik
On Tue, Aug 28, 2018, 10:25 PM Dan Mick wrote:
> On 08/28/2018 06:13 PM, Sage Weil wrote:
> > Hi everyone,
> >
> > Please help me welcome Mike Perez, the new Ceph community manager!
> >
> > Mike has a long history with C
Thode Jocelyn wrote:
> Hi,
>
>
>
> We are still blocked by this problem on our end. Glen did you or someone
> else figure out something for this ?
>
>
>
> Regards
>
> Jocelyn Thode
>
>
>
> From: Glen Baars [mailto:g...@onsitecomputers.com.au]
> S
Don't set a cluster name. It's no longer supported. It really only matters
if you're running two or more independent clusters on the same boxes.
That's generally inadvisable anyway.
Cheers,
Erik
On Wed, Aug 1, 2018, 9:17 PM Glen Baars wrote:
> Hello Ceph Users,
>
> Does anyone know how to set t
Hello all,
I have recently had need to make use of the S3 API on my Rados
Gateway. We've been running just Swift API backed by Openstack for
some time with no issues.
Upon trying to use the S3 API I discovered that our combination of
Jewel and Keystone renders AWS v4 signatures unusable. Apparent
On Tue, May 29, 2018, 11:00 AM Marc Roos wrote:
>
> I guess we will not get this ssl_private_key option unless we upgrade
> from Luminous?
>
>
> http://docs.ceph.com/docs/master/radosgw/frontends/
>
> That option is only for Beast. For civetweb you just feed it
ssl_certificate with a combined PEM
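A minimal civetweb sketch (the paths and the rgw section name are
placeholders): concatenate the certificate, chain, and key into one PEM,
point ssl_certificate at it, and use the 's' suffix on the port:

  cat server.crt chain.crt server.key > /etc/ceph/private/keyandcert.pem
  chmod 600 /etc/ceph/private/keyandcert.pem

  # in ceph.conf, under the rgw client section:
  # [client.rgw.gateway1]
  # rgw_frontends = civetweb port=443s ssl_certificate=/etc/ceph/private/keyandcert.pem

  systemctl restart ceph-radosgw@rgw.gateway1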
On Feb 28, 2018 10:06 AM, "Max Cuttins" wrote:
On 28/02/2018 15:19, Jason Dillaman wrote:
> On Wed, Feb 28, 2018 at 7:53 AM, Massimiliano Cuttini
> wrote:
>
>> I was building ceph in order to use with iSCSI.
>> But I just see from the docs that need:
>>
>> CentOS 7.5
>> (which is not ava
On Dec 5, 2017 10:26 AM, "Florent B" wrote:
On Debian systems, upgrading packages does not restart services !
You really don't want it to restart services. Many small clusters run mons
and osds on the same nodes, and auto restart makes it impossible to order
restarts.
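The manual ordering on a mixed mon/osd node looks roughly like this (a
sketch assuming systemd targets and doing one node at a time):

  ceph osd set noout                      # keep OSDs from being marked out while they bounce
  systemctl restart ceph-mon.target       # mons first; wait for quorum
  ceph quorum_status | grep quorum_names
  systemctl restart ceph-osd.target       # then this node's OSDs
  ceph osd unset noout                    # once everything is back up and in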
-Erik
On 05/12/2017 16:22
I was told at the Openstack Summit that 12.2.2 should drop "In a few days."
That was a week ago yesterday. If you have a little leeway, it may be
best to wait. I know I am, but I'm paranoid.
There was also a performance regression mentioned recently that's supposed
to be fixed.
-Erik
On Nov 16
On Nov 8, 2017 7:33 AM, "Vasu Kulkarni" wrote:
On Tue, Nov 7, 2017 at 11:38 AM, Sage Weil wrote:
> On Tue, 7 Nov 2017, Alfredo Deza wrote:
>> On Tue, Nov 7, 2017 at 7:09 AM, kefu chai wrote:
>> > On Fri, Jun 9, 2017 at 3:37 AM, Sage Weil wrote:
>> >> At CDM yesterday we talked about removing t
On Fri, Jun 9, 2017 at 12:30 PM, Sage Weil wrote:
> On Fri, 9 Jun 2017, Erik McCormick wrote:
>> On Fri, Jun 9, 2017 at 12:07 PM, Sage Weil wrote:
>> > On Thu, 8 Jun 2017, Sage Weil wrote:
>> >> Questions:
>> >>
>> >> - Does anybody on the l
Do not, under any circumstances, make a custom named cluster. There be pain
and suffering (and dragons) there, and official support for it has been
deprecated.
On Oct 15, 2017 6:29 PM, "Bogdan SOLGA" wrote:
> Hello, everyone!
>
> We are trying to create a custom cluster name using the latest cep
On Mon, Oct 2, 2017 at 11:55 AM, Matthew Vernon wrote:
> On 02/10/17 12:34, Osama Hasebou wrote:
>> Hi Everyone,
>>
>> Is there a guide/tutorial about how to setup Ceph monitoring system
>> using collectd / grafana / graphite ? Other suggestions are welcome as
>> well !
>
> We just installed the c
On Fri, Jun 9, 2017 at 12:07 PM, Sage Weil wrote:
> On Thu, 8 Jun 2017, Sage Weil wrote:
>> Questions:
>>
>> - Does anybody on the list use a non-default cluster name?
>> - If so, do you have a reason not to switch back to 'ceph'?
>
> It sounds like the answer is "yes," but not for daemons. Seve
Try setting
obsoletes=0
in /etc/yum.conf and see if that doesn't make it happier. The package is
clearly there and it even shows it as available in your log.
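Roughly (assuming the stock /etc/yum.conf layout):

  # per the above: disable obsoletes processing (add obsoletes=0 under [main] if absent)
  sed -i 's/^obsoletes=1/obsoletes=0/' /etc/yum.conf
  yum clean all
  yum install ceph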
-Erik
On Thu, Mar 30, 2017 at 8:55 PM, Vlad Blando wrote:
> Hi Guys,
>
> I encountered some issue with installing ceph package for giant,
Hello everyone,
I am running Ceph (firefly) Radosgw integrated with Openstack
Keystone. Recently we built a whole new Openstack cloud and created
users in that cluster. The names were the same, but the UUID's are
not. Both clouds are using the same Ceph cluster with their own RGW.
I have managed
On Oct 27, 2016 3:16 PM, "Oliver Dzombic" wrote:
>
> Hi,
>
> I can recommend
>
> X710-DA2
>
We also use this NIC for everything.
> Our 10G switching goes over our blade infrastructure, so I can't recommend
> something for you there.
>
> I assume that the usual juniper/cisco will do a good job. I t
I think what you are thinking of is the driver that was built to actually
replace hdfs with rbd. As far as I know that thing had a very short
lifespan on one version of hadoop. Very sad.
As to what you proposed:
1) Don't use Cephfs in production pre-jewel.
2) running hdfs on top of ceph is a mas
I've got a custom named cluster integrated with Openstack (Juno) and didn't
run into any hard-coded name issues that I can recall. Where are you seeing
that?
As to the name change itself, I think it's really just a label applying to
a configuration set. The name doesn't actually appear *in* the
co
Sorry, I made the assumption you were on 7. If you're on 6 then I defer to
someone else ;)
If you're on 7, go here.
http://ftp.redhat.com/pub/redhat/linux/enterprise/7Server/en/RHEV/SRPMS/
On May 19, 2015 2:47 PM, "Georgios Dimitrakakis"
wrote:
> Erik,
>
> are you talking about the ones here :
You can also just fetch the rhev SRPMs and build those. They have rbd
enabled already.
On May 19, 2015 12:31 PM, "Robert LeBlanc" wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA256
>
> You should be able to get the SRPM, extract the SPEC file and use that
> to build a new package. You sh
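The rebuild is roughly along these lines (a rough sketch; the actual SRPM
name, spec file, and build dependencies will differ):

  yum install rpm-build yum-utils
  rpm -ivh qemu-kvm-rhev-*.src.rpm       # unpacks sources into ~/rpmbuild (name assumed)
  yum-builddep ~/rpmbuild/SPECS/*.spec   # pull in the build dependencies
  rpmbuild -ba ~/rpmbuild/SPECS/*.spec
  # resulting RPMs end up under ~/rpmbuild/RPMS/x86_64/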
I haven't really used the S3 stuff much, but the credentials should be in
keystone already. If you're in horizon, you can download them under Access
and Security->API Access. Using the CLI you can use the openstack client
like "openstack credential " or with
the keystone client like "keystone ec2-c
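With a reasonably recent openstack client, pulling the EC2-style keypair
that radosgw's S3 API expects looks roughly like this (a sketch):

  openstack ec2 credentials create   # create a set if none exist yet
  openstack ec2 credentials list
  # the Access and Secret columns are what go into your S3 client config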
Glance needs some additional permissions including write access to the pool
you want to add images to. See the docs at:
http://ceph.com/docs/master/rbd/rbd-openstack/
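The caps from that doc look roughly like this (pool name 'images' assumed):

  ceph auth get-or-create client.glance \
      mon 'allow r' \
      osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
  # drop the resulting key on the glance host as
  # /etc/ceph/ceph.client.glance.keyring, readable by the glance user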
Cheers,
Erik
On Apr 6, 2015 7:21 AM, wrote:
> Hi, first off: long time reader, first time poster :)..
> I have a 4 node ceph clu
unk_size or self.get_size(location))
>
>
> This all looks correct, so any slowness isn't the bug I was thinking of.
>
> QH
>
> On Thu, Apr 2, 2015 at 10:06 AM, Erik McCormick <
> emccorm...@cirrusseven.com> wrote:
>
>> The RDO glance-store package had a
13:20:50.449 1266 DEBUG glance.common.config [-]
>>>> glance_store.rbd_store_ceph_conf = /etc/ceph/ceph.conf log_opt_values
>>>> /usr/lib/python2.7/site-packages/oslo/config/cfg.py:2004
>>>> api.log:2015-04-02 13:20:50.449 1266 DEBUG glance.common.config [-]
>>>
>>> glance_store.rbd_store_chunk_size = 8 log_opt_values
>>> /usr/lib/python2.7/site-packages/oslo/config/cfg.py:2004
>>> api.log:2015-04-02 13:20:50.449 1266 DEBUG glance.common.config [-]
>>> glance_store.rbd_store_pool= images log_opt_values
>>> /usr/l
Can you both set Cinder and/or Glance logging to debug and provide some
logs? There was an issue with the first Juno release of Glance in some
vendor packages, so make sure you're fully updated to 2014.2.2
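On an RDO-style install, flipping debug on looks roughly like this (service
names are assumptions; crudini ships with openstack-utils):

  crudini --set /etc/glance/glance-api.conf DEFAULT debug True
  crudini --set /etc/cinder/cinder.conf DEFAULT debug True
  systemctl restart openstack-glance-api openstack-cinder-volume
  tail -f /var/log/glance/api.log /var/log/cinder/volume.log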
On Apr 1, 2015 7:12 PM, "Quentin Hartman"
wrote:
> I am conincidentally going through the
Hello all,
I've got an existing Firefly cluster on Centos 7 which I deployed with
ceph-deploy. In the latest version of ceph-deploy, it refuses to handle
commands issued with a cluster name.
[ceph_deploy.install][ERROR ] custom cluster names are not supported on
sysvinit hosts
This is a producti