Hi,
Just pinging to check if this issue was understood yet?
Cheers, Dan
On Mon, Apr 12, 2021 at 9:12 PM Jonas Jelten wrote:
>
> Hi Igor!
>
> I have plenty of OSDs to lose, as long as the recovery works well afterward,
> so I can go ahead with it :D
>
> What debug flags should I activate? osd=
Hi!
Unfortunately no, I've done some digging but didn't find a cause or solution
yet.
Igor, what's your suggestion for how we should look for a solution?
-- Jonas
On 27/04/2021 09.47, Dan van der Ster wrote:
> Hi,
>
> Just pinging to check if this issue was understood yet?
>
> Cheers, Dan
>
> O
Hi friends,
We've recently deployed a few all-flash OSD nodes to improve both bandwidth
and IOPS for active data processing in CephFS, but before taking them into
active production we've been tuning to see how far we can push the
performance in practice - it would be interesting to hear your exper
Update on this issue:
If multiple public networks are indeed allowed (and I saw some docs
mentioning they are), then it seems to be a bug in the _apply_service()
method in src/pybind/mgr/cephadm/serve.py.
It just takes whatever is stored in the "public_network" config variable and
tries to match it aga
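For illustration, here is a minimal Python sketch, not the actual cephadm
code, of how a comma-separated "public_network" value could be matched
against a host address with the standard ipaddress module; the helper name
and the example addresses below are made up.

import ipaddress

def host_in_public_networks(host_ip: str, public_network: str) -> bool:
    # Hypothetical helper: treat "public_network" as a comma-separated list
    # of subnets rather than a single one, and match the host against each.
    addr = ipaddress.ip_address(host_ip)
    for net in public_network.split(","):
        net = net.strip()
        if net and addr in ipaddress.ip_network(net, strict=False):
            return True
    return False

# Example with two public networks configured at once:
print(host_in_public_networks("10.1.2.3", "10.1.0.0/16, 192.168.0.0/24"))    # True
print(host_in_public_networks("172.16.0.5", "10.1.0.0/16, 192.168.0.0/24"))  # False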
Hi, I hit the same errors when doing multisite sync between luminous and
octopus, but what I found is that my sync errors were mainly on old
multipart and shadow objects, at the "rados level" if I may say so
(leftovers from luminous bugs).
So check at the "user level", using s3cmd/awscli and t
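To make the "user level" check concrete, here is a rough boto3 sketch; the
endpoint, credentials and bucket name are placeholders, not values from this
thread. It counts the objects the S3 API reports and lists incomplete
multipart uploads, which can then be compared with a rados-level listing.

import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:8080",  # placeholder endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Count objects as the S3 API ("user level") sees them.
total = 0
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="mybucket"):
    total += page.get("KeyCount", 0)
print(f"S3-visible objects: {total}")

# Incomplete multipart uploads keep their parts around at the rados level.
for upload in s3.list_multipart_uploads(Bucket="mybucket").get("Uploads", []):
    print("incomplete multipart upload:", upload["Key"], upload["UploadId"])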
Hi all,
In 14.2.20, when re-creating a mixed OSD after device replacement,
ceph-volume batch is no longer able to find any available space for a
block_db.
Below I have shown a zap [1], which frees up the HDD and one LV on the
block-dbs VG.
But when we then try to recreate, none of the block-dbs are
Hi Sebastian!!
(solution below)
This is weird, because we had previously tested the ceph-volume
refactor and it looked ok.
Anyway, here is the inventory output: https://pastebin.com/ADFeuNZi
And the ceph-volume log is here: https://termbin.com/i8mk
I couldn't work out why it was rejected.
I believ
Hi Team,
We have set up a two-node Ceph cluster using the *Native CephFS Driver* with
*details as follows*:
- 3 Node / 2 Node MDS Cluster
- 3 Node Monitor Quorum
- 2 Node OSD
- 2 Nodes for Manager
Cephnode3 has only Mon and MDS (only for test cases 4-7); the other two nodes,
i.e. cephnode1 and cephnode2
2x10G for cluster + Public
2x10G for Users
lacp = 802.3ad
On Mon, 26 Apr 2021 at 17:25, Smart Weblications GmbH wrote:
>
> Hi,
>
>
> On 25.04.2021 at 03:58, by morphin wrote:
> > Hello.
> >
> > We're running 1000 VMs on 28 nodes with 6 SSDs (no separate DB device) and
> > these VMs are Mos
So,
maybe somebody can answer the following question for me:
I have ~150M objects in the Ceph cluster (ceph status shows: objects:
152.68M objects, 316 TiB).
How can
radosgw-admin bucket radoslist --bucket BUCKET
produce an output of 252677729 lines, and still growing?
On Tue, 27 Apr 2021 at 06:59
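As a rough way to see where the extra lines come from, the radoslist output
could be classified by name pattern. This is only a sketch and assumes the
tail objects carry "__multipart_" and "__shadow_" markers in their rados
names, as the earlier reply about multipart and shadow leftovers suggests.

import sys
from collections import Counter

# Read rados object names (one per line) from stdin and tally them by the
# assumed multipart/shadow markers versus everything else.
counts = Counter()
for line in sys.stdin:
    name = line.strip()
    if "__multipart_" in name:
        counts["multipart tail"] += 1
    elif "__shadow_" in name:
        counts["shadow tail"] += 1
    else:
        counts["head/other"] += 1

for kind, n in counts.items():
    print(f"{kind}: {n}")

It could be fed with something like
radosgw-admin bucket radoslist --bucket BUCKET | python3 classify_radoslist.py
(the script name is arbitrary).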
Hello,
Thank you very much for picking up the question, and sorry for the late
response.
Yes, we are sending it in cleartext, also when using HTTPS, but how should it
be sent if not like this?
Also somewhat connected to this issue: when we subscribe a bucket to a
non-ACL Kafka topic, any operations (
On Tue, Apr 27, 2021 at 1:59 PM Szabo, Istvan (Agoda) <
istvan.sz...@agoda.com> wrote:
> Hello,
>
> Thank you very much for picking up the question, and sorry for the late
> response.
>
> Yes, we are sending it in cleartext, also when using HTTPS, but how should
> it be sent if not like this?
>
>
if you send the u
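For context, here is a hedged boto3 sketch of how such a subscription is
typically wired up, assuming RGW's SNS-compatible topic API and its
"push-endpoint" attribute; endpoint, credentials, broker and names are
placeholders. The kafka:// URI with an embedded user:password is exactly
where the cleartext concern above comes from.

import boto3

endpoint = "http://rgw.example.com:8080"   # placeholder radosgw endpoint
creds = dict(aws_access_key_id="ACCESS_KEY",
             aws_secret_access_key="SECRET_KEY")

sns = boto3.client("sns", endpoint_url=endpoint, region_name="default", **creds)
s3 = boto3.client("s3", endpoint_url=endpoint, **creds)

# Create the notification topic; the broker URI (credentials included) goes
# into the "push-endpoint" attribute.
topic = sns.create_topic(
    Name="mytopic",
    Attributes={"push-endpoint": "kafka://user:password@kafka.example.com:9092"},
)

# Subscribe the bucket so object operations generate events to the topic.
s3.put_bucket_notification_configuration(
    Bucket="mybucket",
    NotificationConfiguration={
        "TopicConfigurations": [{
            "Id": "notif-1",
            "TopicArn": topic["TopicArn"],
            "Events": ["s3:ObjectCreated:*", "s3:ObjectRemoved:*"],
        }]
    },
)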
Hi,
I have 3 pools, which I use exclusively for RBD images. Two of them are
mirrored and one is erasure-coded. It turns out that today I received a
warning that a PG in the erasure pool was inconsistent, so I ran
"ceph pg repair ". After that, the entire cluster
became ext
Hello,
We’ve got some issues when uploading S3 objects with a double slash (//) in
the name, and were wondering if anyone else has observed this when uploading
objects to the radosgw.
When connecting to the cluster to upload an object with the key
‘test/my//bucket’, the request returns with
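A minimal reproduction sketch, assuming boto3 against a radosgw S3 endpoint;
the endpoint URL, credentials and bucket name below are placeholders.

import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:8080",  # placeholder radosgw endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Upload an object whose key contains a double slash, as described above.
s3.put_object(Bucket="testbucket", Key="test/my//bucket", Body=b"hello")

# List it back to check whether the key survived as sent.
resp = s3.list_objects_v2(Bucket="testbucket", Prefix="test/my/")
for obj in resp.get("Contents", []):
    print(obj["Key"])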
For archival purposes, this is the correct YAML file:
kind: SecurityContextConstraints
apiVersion: v1
metadata:
  name: custom-scc
allowPrivilegedContainer: true
allowHostDirVolumePlugin: true
allowHostIPC: false
allowHostNetwork: true
allowHostPID: true
allowHostPorts: false
allowPrivilegeEscalat
Hi,
the glance option "show_image_direct_url" has been marked as
deprecated for quite some time because it's a security issue, but
without it the interaction between glance and ceph didn't work very
well; I can't quite remember what the side effects were. It seems that
they now actually t