pipermail/ceph-users-ceph.com/2018-October/030791.html
> https://tracker.ceph.com/issues/37942
>
> For now we are cancelling these resharding jobs since they seem to be
> causing performance issues with the cluster, but this is an untenable
> solution. Does anyone know what is causing t
add my initial analysis to it.
But the threads do seem to be stuck, at least for a while, in
get_obj_data::flush despite a lack of traffic. And sometimes it self-resolves,
so it’s not a true “infinite loop”.
Thank you,
Eric
> On Aug 22, 2019, at 9:12 PM, Eric Ivancich wrote:
>
>
to check with others who’re more familiar with this code path.
> Begin forwarded message:
>
> From: Vladimir Brik
> Subject: Re: [ceph-users] radosgw pegging down 5 CPU cores when no data is
> being transferred
> Date: August 21, 2019 at 4:47:01 PM EDT
> To: "J. Eric Ivanc
ed on? Are you using lifecycle? And
garbage collection is another background task.
And just to be clear -- sometimes all 3 of your rados gateways are
simultaneously in this state?
But the call graph would be incredibly helpful.
Thank you,
Eric
--
J. Eric Ivancich
he/him/his
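For anyone wanting to capture that call graph: a minimal sketch, assuming gdb
(plus radosgw debuginfo) and perf are available on the gateway host, and that
the process is named radosgw:

  # dump backtraces of every thread in the running radosgw
  gdb -p $(pidof radosgw) -batch -ex 'thread apply all bt' > radosgw-bt.txt

  # or sample where the CPU time is actually going
  perf top -p $(pidof radosgw)

Either view should show whether the busy threads really are sitting in
get_obj_data::flush.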
ddly (with some help from Canonical, who I think are working on
> patches).
>
There was a recently merged PR that addressed bucket deletion with
missing shadow objects:
https://tracker.ceph.com/issues/40590
Thank you for reporting your experience w/ rgw,
Eric
--
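For context, the bucket deletion being discussed is normally driven from the
admin side. A hedged sketch, with a placeholder bucket name:

  # remove a bucket and purge its objects; this is the path that could trip
  # over already-missing shadow (multipart/tail) objects, per the tracker above
  radosgw-admin bucket rm --bucket=mybucket --purge-objects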
s this a good idea or am I missing something?
>
> IO would be reduced by a factor of 100 for that particular
> pathological case. I've
> unfortunately seen a real-world setup that I think hits a case like that.
Eric
--
J. Eric Ivancich
he/him/his
Red Hat Storage
Ann Arbor, Michigan, USA
ult.
> Is this a good idea or am I missing something?
On its face it looks good. I'll raise this with other RGW developers. I do know
that there was a related bug that was recently addressed with this PR:
https://github.com/ceph/ceph/pull/28192
uld the exact strategy look in a
> multisite setup to resync e.g. a single bucket where one zone got
> corrupted and must be brought back into a synchronous state?
Be aware that there are full syncs and incremental syncs. Full syncs just copy
every object. Incremental syncs use
mimic, and nautilus:
http://tracker.ceph.com/issues/40526
Eric
--
J. Eric Ivancich
he/him/his
Red Hat Storage
Ann Arbor, Michigan, USA
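Before forcing any kind of resync, it can help to see where the bucket stands.
A minimal sketch, with a placeholder bucket name:

  # overall metadata/data sync state of this zone
  radosgw-admin sync status

  # per-bucket view: shows whether the bucket is caught up, still in full
  # sync, or doing incremental sync
  radosgw-admin bucket sync status --bucket=mybucket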
s situation:
https://github.com/ceph/ceph/pull/28724
Eric
--
J. Eric Ivancich
he/him/his
Red Hat Storage
Ann Arbor, Michigan, USA
04
> gc.26: 42
> gc.28: 111292
> gc.17: 111314
> gc.12: 111534
> gc.31: 111956
Casey Bodley mentioned to me that he's seen similar behavior to what
you're describing when RGWs are upgraded but not all OSDs are upgraded
as well. Is it possible that the OSDs hosting gc.13,
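A hedged way to check that, assuming the gc objects live in the usual
default.rgw.log pool:

  # per-daemon version breakdown across the cluster
  ceph versions

  # which OSDs serve a particular gc shard object
  ceph osd map default.rgw.log gc.13

If the acting set for the slow gc shards lands on not-yet-upgraded OSDs, that
would match the behavior Casey described.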
On 6/4/19 7:37 AM, Wido den Hollander wrote:
> I've set up a temporary machine next to the 13.2.5 cluster with the
> 13.2.6 packages from Shaman.
>
> On that machine I'm running:
>
> $ radosgw-admin gc process
>
> That seems to work as intended! So the PR seems to have fixed it.
>
> Should be f
Hi Wido,
When you run `radosgw-admin gc list`, I assume you are *not* using the
"--include-all" flag, right? If you're not using that flag, then
everything listed should be expired and be ready for clean-up. If after
running `radosgw-admin gc process` the same entries appear in
`radosgw-admin gc l
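For reference, the commands under discussion, as a quick sketch:

  # list only entries whose expiration has passed (ready for cleanup)
  radosgw-admin gc list

  # list every gc entry, expired or not
  radosgw-admin gc list --include-all

  # run garbage collection now instead of waiting for the next cycle
  radosgw-admin gc process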
Hi Manuel,
My response is interleaved below.
On 5/8/19 3:17 PM, EDH - Manuel Rios Fernandez wrote:
> Eric,
>
> Yes we do :
>
> time s3cmd ls s3://[BUCKET]/ --no-ssl and we get nearly 2 min 30 sec to list
> the bucket.
We're adding an --allow-unordered option to `radosgw-admin bucket list`.
Tha
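Once that option lands, the usage should look roughly like this sketch (bucket
name is a placeholder):

  # unordered listing on the admin side, skipping the cross-shard merge
  radosgw-admin bucket list --bucket=mybucket --allow-unordered

If I recall correctly, RGW also exposes the same idea to S3 clients as a
non-standard allow-unordered=true query parameter on a GET Bucket request, for
clients that can pass raw query parameters.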
show more 0B. Is it correct?
I am having difficulty understanding that sentence. Would you be so kind as to
rewrite it? I don’t want to create confusion by guessing.
Eric
> Thanks for your response.
>
>
> -----Original Message-----
> From: J. Eric Ivancich
> Sent: Wednesday
Hi Manuel,
My response is interleaved.
On 5/7/19 7:32 PM, EDH - Manuel Rios Fernandez wrote:
> Hi Eric,
>
> This looks like something the software developer must do, not something that
> the storage provider must allow, no?
True -- so you're using `radosgw-admin bucket list --bucket=XYZ` to list
th
On 5/7/19 11:24 AM, EDH - Manuel Rios Fernandez wrote:
> Hi Casey
>
> ceph version 13.2.5 (cbff874f9007f1869bfd3821b7e33b2a6ffd4988) mimic
> (stable)
>
> Is resharding something that prevents our customers from listing the index?
>
> Regards
Listing of buckets with a large number of objects is notoriously
So I do not think the mclock_client queue works the way you're hoping it does.
For categorization purposes it joins the operation class and the client
identifier, with the intent that operations will be serviced among clients more
evenly (i.e., it won't favor one client over another).
However, it w
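For context, the queue being discussed is selected on the OSDs. A sketch of
the relevant ceph.conf setting, assuming the default wpq queue otherwise:

  [osd]
  # one of: wpq (default), prioritized, mclock_opclass, mclock_client
  osd_op_queue = mclock_client

With mclock_client, the queue keys its scheduling buckets on the combination
of operation class and client, which is what spreads service among clients
rather than favoring any one of them.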
w-admin metadata list bucket.instance | jq -r '.[]' | sort
>
>
>
> Give that a try and see if you see the same problem. It seems that once
> you remove the old bucket instances the omap dbs don't reduce in size
> until you compact them.
>
>
>
> Bryan
>
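A sketch of the cleanup sequence being described, assuming the stale instances
have already been identified with the pipeline quoted above:

  # list bucket instance metadata (compare against existing buckets to find
  # stale instances)
  radosgw-admin metadata list bucket.instance | jq -r '.[]' | sort

  # after removing old instances, the omap databases only shrink once the
  # index OSDs are compacted
  ceph tell osd.<id> compact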
On 11/29/18 6:58 PM, Bryan Stillwell wrote:
> Wido,
>
> I've been looking into this large omap objects problem on a couple of our
> clusters today and came across your script during my research.
>
> The script has been running for a few hours now and I'm already over 100,000
> 'orphaned' object
On 12/17/18 9:18 AM, Josef Zelenka wrote:
> Hi everyone, I'm running a Luminous 12.2.5 cluster with 6 hosts on
> Ubuntu 16.04 - 12 HDDs for data each, plus 2 SSD metadata OSDs (three
> nodes have an additional SSD I added to have more space to rebalance the
> metadata). Currently, the cluster is use
I did make an inquiry and someone here does have some experience w/ the
mc command -- minio client. We're curious how "ls -r" is implemented
under mc. Does it need to get a full listing and then do some path
parsing to produce nice output? If so, it may be playing a role in the
delay as well.
Eric
The numbers you're reporting strike me as surprising as well. Which version are
you running?
In case you're not aware, listing a bucket is not a very efficient operation
given that the listing is required to return objects in lexical order.
They are distributed across the shards via a ha
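A toy illustration of why the ordering requirement hurts -- this is not RGW
code, just a sketch of the idea that an ordered listing has to touch and merge
every index shard, while an unordered listing can drain shards independently:

  import heapq

  # hypothetical per-shard key lists; each shard is individually sorted
  shards = [
      ["apple", "mango", "zebra"],
      ["banana", "kiwi"],
      ["cherry", "pear", "quince"],
  ]

  # ordered listing: merge across every shard to honor lexical order
  ordered = list(heapq.merge(*shards))

  # unordered listing: a single shard's contents can be returned as-is
  unordered_page = shards[0]

  print(ordered)
  print(unordered_page)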