of new and old buckets, and the behavior always stays the same: nothing with a slash syncs, but everything without one does.
Thanks in advance,
-Matt Dunavant
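For anyone digging into a sync issue like this one, the usual starting points are the multisite sync-status commands; a rough sketch, with the bucket name as a placeholder:

# overall multisite sync state, run against the secondary zone's gateway
radosgw-admin sync status
# per-bucket view for one of the affected buckets
radosgw-admin bucket sync status --bucket=test-bucket
# anything the sync threads have logged as errors
radosgw-admin sync error list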
nd cluster worked.
Thanks for the help!
____
From: Matt Dunavant
Sent: Monday, August 22, 2022 1:56:21 PM
To: Casey Bodley
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] Problem adding secondary realm to rados-gw
Thanks, that got me past that issue. However I'
2022-08-22T13:54:38.628-0400 7ff280858c80 0 ERROR: failed to start notify service ((22) Invalid argument
2022-08-22T13:54:38.628-0400 7ff280858c80 0 ERROR: failed to init services (ret=(22) Invalid argument)
couldn't init storage provider
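For anyone hitting the same startup failure, a few generic places to look; this is only a sketch of the usual checks, not a confirmed fix for this particular error:

# confirm the pulled realm and period actually landed and are current
radosgw-admin realm list
radosgw-admin period get
# make sure the zone this gateway should serve exists and has sane endpoints
radosgw-admin zone get --rgw-zone=$ZONE
# if the zone or zonegroup was just changed, the period has to be committed again
radosgw-admin period update --commit
# and the gateway's ceph.conf client section should name the same realm/zone, e.g.
#   rgw_realm = $REALM
#   rgw_zone = $ZONE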
From: Casey Bodley
Sent:
Hello,
I'm trying to add a secondary realm to my ceph cluster but I'm getting the
following error after running a 'radosgw-admin realm pull --rgw-realm=$REALM
--url=http://URL:80 --access-key=$KEY --secret=$SECRET':
request failed: (5) Input/output error
Nothing on Google seems to help with
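For context, the full secondary-zone bootstrap normally looks roughly like the sequence below (URLs, realm/zonegroup/zone names and keys are placeholders; the access key and secret must belong to the primary's system user). A (5) Input/output error from realm pull is often just the HTTP request to the primary failing, so the URL and credentials are worth double-checking first.

# pull the realm and its current period from the primary
radosgw-admin realm pull --rgw-realm=$REALM --url=http://PRIMARY:80 \
    --access-key=$KEY --secret=$SECRET
radosgw-admin period pull --url=http://PRIMARY:80 --access-key=$KEY --secret=$SECRET
radosgw-admin realm default --rgw-realm=$REALM
# create the secondary zone inside the existing zonegroup
radosgw-admin zone create --rgw-zonegroup=$ZONEGROUP --rgw-zone=$SECONDARY_ZONE \
    --endpoints=http://SECONDARY:80 --access-key=$KEY --secret=$SECRET
# commit the change, then (re)start the gateways
radosgw-admin period update --commit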
Hi all,
We are currently using a ceph cluster for block storage on version 14.2.16. We
would like to start experimenting with object storage but the ceph
documentation doesn't seem to cover a lot of the installation or configuration
of the RGW piece. Does anybody know where I may be able to fi
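For what it's worth, on Nautilus a bare-bones manual RGW bring-up is roughly the following (the gateway name gw1, port, and paths are placeholders; ceph-deploy rgw create does essentially the same steps):

# ceph.conf section on the gateway host
[client.rgw.gw1]
    rgw frontends = beast port=80

# cephx identity and data directory for the gateway
mkdir -p /var/lib/ceph/radosgw/ceph-rgw.gw1
ceph auth get-or-create client.rgw.gw1 mon 'allow rw' osd 'allow rwx' \
    -o /var/lib/ceph/radosgw/ceph-rgw.gw1/keyring
chown -R ceph:ceph /var/lib/ceph/radosgw/ceph-rgw.gw1

# start it; RGW creates its default pools on first start
systemctl enable --now ceph-radosgw@rgw.gw1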
Hi all,
We've recently run into an issue where our single ceph rbd pool is throwing errors for nearfull OSDs. The OSDs themselves vary in PGs/%full, with a low of 64/78% and a high of 73/86%. Are there any suggestions on how to get this to balance a little more cleanly? Currently we have 360 driv
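On Nautilus, the built-in balancer in upmap mode is usually the cleanest way to even out PG counts; a minimal sketch, assuming all clients are Luminous or newer:

ceph osd set-require-min-compat-client luminous
ceph balancer mode upmap
ceph balancer on
ceph balancer status
# older fallback if upmap isn't an option: reweight the fullest OSDs
ceph osd reweight-by-utilization 110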
Gotcha, one more question: during the process, data will still be available, right? Just performance will be impacted by the rebalancing, correct?
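If client impact during the rebalance is a concern, the usual knobs for throttling recovery and backfill look something like this (values are only examples):

ceph config set osd osd_max_backfills 1
ceph config set osd osd_recovery_max_active 1
ceph config set osd osd_recovery_sleep 0.1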
My replica size on the pool is 3, so I'll use that to test. There is no other
type in my map like dc, rack, etc.; just servers. Do you know what a successful
run of the test command looks like? I just ran it myself and it spits out a
number of crush rules (in this case 1024) and then ends with:
Thanks for the reply! I've pasted what I believe are the applicable parts of
the crush map below. I see that the rule id is 0, but what is num-rep?
# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
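(On the num-rep question: it is simply the number of replicas CRUSH is asked to place during the test, i.e. the pool's size.) A rough sketch of the offline mapping test, with placeholder file names:

ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt     # human-readable copy, optional
crushtool -i crushmap.bin --test --rule 0 --num-rep 3 \
    --show-mappings --show-bad-mappings --show-statistics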
Hi all,
We have a 12 OSD node cluster in which I just recently found out that
'osd_crush_chooseleaf_type = 0' made its way into our ceph.conf file,
probably from previous testing. I believe this is the reason a recent
maintenance on an OSD node caused data to stop flowing. In researching how
Hello all,
Thanks for the help. I believe we traced this down to an issue with the crush rules. It seems somehow osd_crush_chooseleaf_type = 0 got placed into
our configuration. This caused ceph osd crush rule dump to include this line '
"op": "choose_firstn",' instead of 'chooseleaf_firstn
Yeah, the VMs didn't die completely but they were all inaccessible during the
maintenance period. Once the node under maintenance came back up, data started flowing again.
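For anyone else who ends up with an osd-level rule like that, one way back to a host failure domain is to create a fresh replicated rule and switch the pools over (the rule and pool names below are placeholders, and the switch does trigger data movement):

ceph osd crush rule dump                      # confirm what the current rule does
ceph osd crush rule create-replicated replicated_host default host
ceph osd pool set <pool> crush_rule replicated_host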
Hi all,
We just completed maintenance on an OSD node and we ran into an issue where all
data seemed to stop flowing while the node was down. We couldn't connect to any
of our VMs during that time. I was under the impression that by setting the
'noout' flag, you would not get the rebalance of t
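For comparison, a sketch of the usual pre-maintenance steps. noout only keeps OSDs from being marked out; I/O on a PG still blocks if the surviving replicas drop below the pool's min_size, which is what a bad failure domain can cause when a whole host goes down.

ceph osd set noout
ceph osd set norebalance      # optional: also suppress data movement
ceph osd pool get <pool> min_size
# ... perform the maintenance, then:
ceph osd unset norebalance
ceph osd unset noout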
Jason Dillaman wrote:
> On Fri, Mar 13, 2020 at 11:36 AM Matt Dunavant wrote:
> > Jason Dillaman wrote:
> > > On Fri, Mar 13, 2020 at 11:17 AM Matt Dunavant wrote:
> > > > I'm not sure of the last known good release of the rbd
Jason Dillaman wrote:
> On Fri, Mar 13, 2020 at 11:17 AM Matt Dunavant wrote:
> > I'm not sure of the last known good release of the rbd CLI where this
> > worked. I just
> > ran the sha1sum against the images and they always come up as different.
> > Might
I'm not sure of the last known good release of the rbd CLI where this worked. I
just ran the sha1sum against the images and they always come up as different.
It might be worth knowing that this is a volume that's provisioned at 512GB (with much
less actually used) but after export, it only shows up as
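One way to narrow this down is to checksum straight out of RBD on both clusters instead of hashing the exported file; if those two hashes match, the difference is probably in how the file on disk was written (sparseness, truncation) rather than in the image data itself. Image names below are placeholders.

# on the source cluster
rbd export pool/vm-disk - | sha1sum
# on the destination cluster, after the import
rbd export pool/vm-disk - | sha1sum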
Should have mentioned, the VM is always off. We are not using snapshots either.
-Matt
Hello,
I think I've been running into an rbd export/import bug and wanted to see if
anybody else had any experience.
We're using rbd images for VM drives both with and without custom stripe sizes.
When we try to export/import the drive to another ceph cluster, the VM always
comes up in a buste
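In case striping turns out to be a factor: as far as I can tell, the raw export stream does not carry striping metadata, so a plain import recreates the image with default striping unless told otherwise. A sketch of carrying the parameters across by hand (the stripe values and the --cluster name are placeholders; check the source image with rbd info first):

rbd info pool/vm-disk          # note stripe_unit / stripe_count on the source
rbd export pool/vm-disk - | \
    rbd --cluster destcluster import --image-format 2 \
        --stripe-unit 65536 --stripe-count 16 - pool/vm-disk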