[pool stats excerpt for .rgw.buckets.index; the column layout was lost in the archive snippet]
Thanks,
Ryan Leimenstoll
University of Maryland Institute for Advanced Computer Studies
candidate phase, I
haven’t seen much mention of it. For some time now we have been experiencing
blocked requests when deep scrubbing PGs in our bucket index, so this could be
quite useful for us.
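In the meantime we have been looking at throttling scrub impact with the standard OSD scrub options (a rough sketch; the option names are the usual ones, but the values below are only illustrative and not tuned for our workload):

  # slow down scrub I/O so client requests are not starved
  ceph tell osd.* injectargs '--osd_scrub_sleep 0.1'
  # confine scrubs to an off-peak window
  ceph tell osd.* injectargs '--osd_scrub_begin_hour 1 --osd_scrub_end_hour 7'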
Thanks,
Ryan Leimenstoll
rleim...@umiacs.umd.edu
University of Maryland Institute for Advanced Computer Studies
on processing
returned error r=-22
Can anyone advise on the best path to stop the in-progress resharding operations
and avoid this going forward?
Some other details:
- 3 rgw instances
- Ceph Luminous 12.2.1
- 584 active OSDs, rgw bucket index is on Intel NVMe OSDs
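The reshard queue commands I have been looking at for this (a sketch based on the radosgw-admin reshard subcommands that came with dynamic resharding; please correct me if cancel is not safe on 12.2.1):

  # list resharding operations currently queued or in progress
  radosgw-admin reshard list
  # check the state of a specific bucket
  radosgw-admin reshard status --bucket=<bucket>
  # drop the pending reshard entry for a bucket
  radosgw-admin reshard cancel --bucket=<bucket>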
Thanks,
Ryan Leimenstoll
are somewhat nervous about reenabling dynamic sharding, as it seems to have
contributed to this problem.
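For now we have it switched off explicitly (assuming the standard option name; set on the rgw hosts and restart the radosgw instances for it to take effect):

  # ceph.conf, in the rgw client sections
  rgw_dynamic_resharding = false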
Thanks,
Ryan
> On Oct 9, 2017, at 5:26 PM, Yehuda Sadeh-Weinraub wrote:
>
> On Mon, Oct 9, 2017 at 1:59 PM, Ryan Leimenstoll
> wrote:
>> Hi all,
>>
>> We recently upg
there any good rule of
thumb or guidance for getting an estimate on this before purchasing hardware? We
are expecting upwards of 800T usable capacity at the start.
Thanks for any insight!
Ryan Leimenstoll
rleim...@umiacs.umd.edu
University of Maryland Institute for Advanced Computer Studies
radosgw-admin user stats --uid=USER --sync-stats.
While we can no longer replicate the issue since that patch, is there a
suggested path forward to rectify the existing user stats that may have been
skewed by this bug before the patched release?
[0] http://tracker.ceph.com/issues/14507
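The only workaround we have come up with so far is re-syncing stats for every user, roughly like this (a sketch; assumes radosgw-admin user list returns a JSON array of uids and that jq is available):

  for u in $(radosgw-admin user list | jq -r '.[]'); do
      radosgw-admin user stats --uid="$u" --sync-stats
  done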
Thanks!
Ryan Leimenstoll
though this is now prohibited by
Amazon in US-East and seemingly all of their other regions [0]. Since clients
typically follow Amazon’s direction, should RGW be rejecting underscores in
these names to be in compliance? (I did notice it already rejects uppercase
letters.)
Thanks much!
Ryan Leimenstoll
!
Best,
Ryan
[0] https://tracker.ceph.com/issues/36293
> On Oct 2, 2018, at 6:08 PM, Robin H. Johnson wrote:
>
> On Tue, Oct 02, 2018 at 12:37:02PM -0400, Ryan Leimenstoll wrote:
>> I was hoping to get some clarification on what
helpful to have the ability to do this on
the radosgw backend. This is especially useful for large buckets/datasets where
copying the objects out and into radosgw could be time consuming.
Is this something that is currently possible within radosgw? We are running
Ceph 12.2.2.
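At least for the ownership part, the commands we were hoping would cover this are roughly the following (a sketch, not yet verified against 12.2.2; some versions may also want --bucket-id on the link step):

  # detach the bucket from its current owner
  radosgw-admin bucket unlink --uid=<old-user> --bucket=<bucket>
  # attach it to the new owner
  radosgw-admin bucket link --uid=<new-user> --bucket=<bucket>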
Thanks,
Ryan
hood.
Thanks,
Ryan
> On Mar 6, 2018, at 2:54 PM, Robin H. Johnson wrote:
>
> On Tue, Mar 06, 2018 at 02:40:11PM -0500, Ryan Leimenstoll wrote:
>> Hi all,
>>
>> We are trying to move a bucket in radosgw from one user to another in an
>> effort to both change ownership
/docs/luminous/cephfs/disaster-recovery/#recovery-from-missing-metadata-objects
Thanks much,
Ryan Leimenstoll
rleim...@umiacs.umd.edu
University of Maryland Institute for Advanced Computer Studies
of safety on an offline system? (not sure how long it would
take, data pool is ~100T large w/ 242 million objects, and downtime is a big
pain point for our users with deadlines).
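If the slow part is the cephfs-data-scan pass: my understanding is that it can be parallelised with the worker_n/worker_m options, e.g. something like the sketch below, with the data pool name filled in; I have not verified the option names against our release.

  POOL=<data pool>
  # illustrative: split the extents scan across 4 workers run in parallel
  for i in 0 1 2 3; do
      cephfs-data-scan scan_extents --worker_n $i --worker_m 4 "$POOL" &
  done
  wait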
Thanks,
Ryan
> On May 8, 2018, at 5:05 AM, John Spray wrote:
>
> On Mon, May 7, 2018 at 8:50 PM, Ryan Leimenstoll wrote:
6 kernel driver.
My read here would be that the MDS is sending too large a message to the OSD;
however, my understanding was that the MDS should be using osd_max_write_size to
determine the size of that message [0]. Is this maybe a bug in how this is
calculated on the MDS side?
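In case it is useful for comparison against the message size in the error: the value the running MDS actually has can be checked over the admin socket, and I assume it can also be lowered at runtime, e.g.:

  # value (in MB) the running MDS is using for osd_max_write_size
  ceph daemon mds.<id> config get osd_max_write_size
  # assumption: injectargs takes effect without an MDS restart
  ceph tell mds.<id> injectargs '--osd_max_write_size 64'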
Thanks!
Ryan Leimenstoll
reason that this wouldn’t be a HEALTH_ERR
condition since it represents a significant service degradation?
Thanks!
Ryan
> On May 22, 2019, at 4:20 AM, Yan, Zheng wrote:
>
> On Tue, May 21, 2019 at 6:10 AM Ryan Leimenstoll
> wrote:
>>
>> Hi all,
>>
>> We