Replication works at the OSD layer; RGW is an HTTP frontend for objects. If you
write an object via librados directly, RGW will not be aware of it.
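For example (just a sketch; the pool name default.rgw.buckets.data is the
usual default and may differ on your cluster):

  # write an object straight into the RGW data pool, bypassing RGW
  rados -p default.rgw.buckets.data put direct-object ./somefile
  # the bucket index knows nothing about it, so a listing will not show it:
  radosgw-admin bucket list --bucket=<some-bucket>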
k
> On 22 Feb 2021, at 18:52, Cary FitzHugh wrote:
>
> Question is - do files which are written directly to an OSD get rep
OMAP with keys works like database-style replication: new keys/updates come to
the acting set as a data stream, not as a full object.
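You can see those keys directly on the bucket index shard objects, for example
(a sketch; the pool and the .dir.<bucket-id> naming are the usual defaults):

  rados -p default.rgw.buckets.index ls | head
  # one omap key per object in the bucket
  rados -p default.rgw.buckets.index listomapkeys .dir.<bucket-id>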
k
> On 22 Feb 2021, at 17:13, Benoît Knecht wrote:
>
> Is recovery faster for OMAP compared to the equivalent number of RADOS
> objects?
Hi,
Is there a way to clean up the sync shards and start from scratch?
Thank you
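(For reference, the commands usually mentioned for re-initializing multisite
sync from scratch look like this; a sketch only, and whether it applies here
depends on which sync shards are meant:)

  radosgw-admin sync status
  radosgw-admin metadata sync init
  radosgw-admin data sync init --source-zone=<other-zone>
  # then restart the radosgw daemons so sync restarts from the new markers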
Hi All,
We've been dealing with what seems to be a pretty annoying bug for a while
now. We are unable to delete a customer's bucket that seems to have an
extremely large number of aborted multipart uploads. I've had $(radosgw-admin
bucket rm --bucket=pusulax --purge-objects) running in a screen se
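(For context, the stale uploads can also be listed and aborted through the S3
API before retrying the bucket removal; a sketch, assuming working credentials
for the bucket named above and a placeholder endpoint:)

  aws s3api list-multipart-uploads --bucket pusulax \
      --endpoint-url http://<rgw-endpoint>
  # abort one upload returned by the listing
  aws s3api abort-multipart-upload --bucket pusulax --key <key> \
      --upload-id <upload-id> --endpoint-url http://<rgw-endpoint>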
I increased the debug level to 20. There isn't anything additional being
written:
2021-02-23 16:26:38.736642 7f2c45f3700 -1 Initialization timeout, failed to
initialize
2021-02-23 16:26:38.931400 7f4d7bf4a000 0 deferred set uid:gid to 167:167
(ceph:ceph)
2021-02-23 16:26:38.931707 7f4d7bf4a000
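(For reference, the knobs usually involved look like this; a sketch only, the
section and daemon name depend on how the gateway was deployed:)

  # ceph.conf, in the gateway's section, e.g. [client.rgw.<hostname>]:
  #   debug rgw = 20
  #   debug ms = 1
  # or at runtime via the admin socket:
  ceph daemon client.rgw.<hostname> config set debug_rgw 20/20
  # rgw_init_timeout can also be raised if initialization is genuinely slow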
We have a Red Hat installation of Luminous (full packages version:
12.2.8-128.1). We're experiencing an issue where the ceph-radosgw service will
time out during initialization and cycle through attempts every five minutes
until it seems to just give up. Every other ceph service starts successfu
On Tue, 23 Feb 2021 at 16:53, Mathew Snyder wrote:
>
> We have a Red Hat installation of Luminous (full packages version:
> 12.2.8-128.1). We're experiencing an issue where the ceph-radosgw service
> will time out during initialization and cycle through attempts every five
> minutes until it se
On 2/21/21 9:51 AM, Frank Schilder wrote:
Hi Stefan,
thanks for the additional info. Dell will put me in touch with their deployment
team soonish and then I can ask about matching abilities.
It turns out that the problem I observed might have a much more mundane reason.
I saw really long peri
Hello,
Recently I deployed a small ceph cluster using cephadm.
In this cluster, I have 3 OSD nodes with 8 Hitachi HDDs (9.1 TiB), 4
Micron_9300 NVMes (2.9 TiB), and 2 Intel Optane P4800X NVMes (375 GiB). I want
to use the spinning disks for the data block, the 2.9 TiB NVMes for block.DB,
and the Intel Opta
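(For reference, that kind of layout is usually expressed as an OSD service
spec for cephadm; a sketch only, the exact spec layout differs a bit between
releases, and the model/size filters below are guesses based on the drives
listed above:)

  # osd_spec.yml
  service_type: osd
  service_id: hdd_data_nvme_db
  placement:
    host_pattern: '*'
  data_devices:
    rotational: 1          # the Hitachi HDDs
  db_devices:
    model: Micron_9300     # the 2.9 TiB NVMes
  wal_devices:
    size: ':400G'          # the 375 GiB Optane devices

  # apply it with:
  ceph orch apply osd -i osd_spec.yml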
I don't think there are people here advising the use of consumer grade
SSDs/NVMes. The enterprise SSDs often have a higher endurance rating (DWPD),
and are just stable under high constant load.
My 1.5-year-old SM863a still has a 099 wear level and 097 power-on hours, some
other SM863a of 3.8 years has a 099 wear level and
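(Those look like the normalized SMART values; a quick way to read them, with a
placeholder device name:)

  smartctl -a /dev/sdX | grep -i -E 'wear|power_on'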
>>> Hello,
>>> We have a functional Ceph swarm with a pair of S3 RGWs in front that is
>>> accessed via the A.B.C.D domain.
>>>
>>> Now a new client asks for access using the domain E.C.D, but to the
>>> already existing buckets. This is not a scenario discussed in the docs.
>>> Apparently, looking
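(One common way to serve an extra domain on the same buckets is to add it to
the zonegroup's hostnames list; a sketch only, using the placeholder domains
from the message and assuming the default zonegroup:)

  radosgw-admin zonegroup get > zonegroup.json
  # edit "hostnames" so it lists both domains, e.g.
  #   "hostnames": ["A.B.C.D", "E.C.D"],
  radosgw-admin zonegroup set < zonegroup.json
  radosgw-admin period update --commit   # if a realm/period is configured
  # then restart the radosgw daemons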