Hi,
I have a Ceph Nautilus (14.2.9) cluster with 10 nodes. Each node has
19x16TB disks attached.
I created the radosgw pools. The secondaryzone.rgw.buckets.data pool is
configured as EC 8+2 (jerasure).
ceph df shows 2.1 PiB MAX AVAIL.
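That MAX AVAIL roughly matches raw capacity times k/(k+m): 10 x 19 x 16 TB is
about 2.7 PiB raw, and 8/10 of that is ~2.2 PiB. For reference, the profile and
pool were created with something along these lines (profile name, PG count and
failure domain below are from memory, so treat them as illustrative only):

  # EC profile and data pool, roughly as configured here
  ceph osd erasure-code-profile set ec-8-2 k=8 m=2 plugin=jerasure crush-failure-domain=host
  ceph osd pool create secondaryzone.rgw.buckets.data 1024 1024 erasure ec-8-2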
Then I configured radosgw as a secondary zone and 100TiB of S3 d
I have firsthand experience migrating multiple clusters from Ubuntu to RHEL,
preserving the OSDs along the way, with no loss or problems.
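For anyone attempting the same: with LVM-based OSDs, once the new OS has
matching Ceph packages plus ceph.conf and the keyrings back in place,
reactivating the existing OSDs is typically just:

  ceph-volume lvm activate --all   # rescans LVM tags and starts every existing OSD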
It’s not like you’re talking about OpenVMS ;)
> On Jan 25, 2021, at 9:14 PM, Szabo, Istvan (Agoda)
> wrote:
>
> Hi,
>
> Is there anybody running a cluster with a different OS?
Of course, Ceph's original mission is independence from distros and hardware.
Just match your package versions.
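A quick way to confirm that from any node:

  ceph versions        # shows which release each daemon type is running
  ceph health detail   # should stay HEALTH_OK while you swap distros node by node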
Cheers,
k
Sent from my iPhone
> On 26 Jan 2021, at 08:15, Szabo, Istvan (Agoda)
> wrote:
>
> Is there anybody running a cluster with a different OS?
> Due to the CentOS 8 change I might
Hello Everyone,
We seem to be having a problem on one of our Ceph clusters after the OS
patch and reboot of one of the nodes. The three other nodes are showing
OSD fill rates of 77%-81%, but the 60 OSDs contained in the host that was
just rebooted have been varying between 64% and 90% since the reboot o
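For anyone wanting to reproduce the picture, per-OSD utilisation and the
balancer state can be checked with the standard tooling; the balancer change at
the end is only an idea we are considering, not something we have applied:

  ceph osd df tree        # per-OSD utilisation, grouped by host
  ceph balancer status    # whether the balancer is on, and in which mode
  # possible follow-up if the spread does not settle (untested here):
  ceph balancer mode upmap
  ceph balancer on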
Docs for permissions are super vague. What does each flag do?
What is 'x' permitting?
What's the difference between class-write and write?
And the last question: can we limit a user to reading/writing only
existing objects in the pool?
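For context, this is the kind of cap string I have been experimenting with
(client and pool names are placeholders). My current understanding, which may
well be wrong, is that 'x' permits calling object class methods, 'w' covers
plain object writes, and class-write only covers writes done through class
methods:

  # read-only client for one pool:
  ceph auth get-or-create client.reader mon 'allow r' osd 'allow r pool=mypool'
  # read/write including object class calls ('x'):
  ceph auth get-or-create client.writer mon 'allow r' osd 'allow rwx pool=mypool'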
Thanks!
Hey all,
We will be having a Ceph science/research/big cluster call on Wednesday
January 27th. If anyone wants to discuss something specific they can add
it to the pad linked below. If you have questions or comments you can
contact me.
This is an informal open call of community members mostl
I upgraded our Ceph cluster (6 bare-metal nodes, 3 rgw VMs) from v13.2.4 to
v15.2.8. The mon, mgr, mds and osd daemons were all upgraded successfully,
and everything looked good.
After the radosgw daemons were upgraded, they refused to work; the log
messages are at the end of this e-mail.
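In case it is useful, the usual way to get more detail out of the gateway is to
raise its log level before reproducing the failure (the section name below is
only an example; use whatever your rgw instance is actually called):

  # in ceph.conf on the rgw host, then restart the radosgw service
  [client.rgw.gateway1]
      debug rgw = 20
      debug ms = 1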
Here are the things
Debugging a bit more shows many stale instances in all sites which can't be
removed due to a multisite limitation ☹ in Octopus 15.2.7.
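For reference, this is the pair of commands involved; the rm step is the one
the multisite limitation blocks:

  radosgw-admin reshard stale-instances list   # lists leftover bucket index instances
  radosgw-admin reshard stale-instances rm     # cleanup, but not supported with multisite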
-----Original Message-----
From: Szabo, Istvan (Agoda)
Sent: Monday, January 25, 2021 11:51 AM
To: ceph-users@ceph.io
Subject: [ceph-users] Re: Multisite bu
Hmm,
Looks like attached screenshots are not allowed, so: in HKG we have 19 million
objects, in ash we have 32 million.
-----Original Message-----
From: Szabo, Istvan (Agoda)
Sent: Monday, January 25, 2021 11:44 AM
To: ceph-users@ceph.io
Subject: [ceph-users] Multisite bucket data inconsistency
Hi,
We have bucket sync enabled and it seems to be inconsistent ☹
This is the master zone sync status on that specific bucket:
realm 5fd28798-9195-44ac-b48d-ef3e95caee48 (realm)
zonegroup 31a5ea05-c87a-436d-9ca0-ccfcbad481e3 (data)
zone 9213182a-14ba-48ad-bde9-289a1c0
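For completeness, this is the kind of output radosgw-admin bucket sync status
prints; the zone-wide view comes from sync status (bucket name redacted here):

  radosgw-admin sync status                            # overall zone sync status
  radosgw-admin bucket sync status --bucket=<bucket>   # per-bucket view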