Hi again.
I've now solved my issue with help from people in this group. Thank you for
helping out.
I thought the process was a bit complicated, so I created a short video
describing it.
https://youtu.be/Ds4Wvvo79-M
I hope this helps someone else, and again thank you.
Best regards
Daniel
Hi,
It’s hard to explain as the issue is no longer present; if it happens again, the
output of “ceph pg x.y query” could be useful.
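For reference, a minimal sketch of how to capture that next time (the PG id 2.1a is just a placeholder):
ceph health detail                      # lists the affected PGs
ceph pg 2.1a query > pg-2.1a-query.txt  # save the query output for one of them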
I don’t think you went too fast or removed too many disks in a single step.
As you only have 3 nodes, Ceph would have noticed the degraded PGs right away
but could not do much about them.
You did
Hey Burkhard, Chris, all,
On 16/08/2021 10:48, Chris Palmer wrote:
It's straightforward to add multiple DNS names to an endpoint. We do
this for the sort of reasons you suggest. You then don't need separate
rgw instances (not for this reason anyway).
Assuming default:
* radosgw-admin zonegr
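Presumably the truncated step continues along these lines; a sketch only, assuming the default zonegroup name "default" and placeholder hostnames:
radosgw-admin zonegroup get --rgw-zonegroup=default > zg.json
# edit zg.json and add every DNS name to the "hostnames" list, then:
radosgw-admin zonegroup set --rgw-zonegroup=default < zg.json
radosgw-admin period update --commit
# restart the rgw daemons so they pick up the new hostnames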
Hi all,
Can we enable the rbd-mirror feature in a production environment? If not, are
there any known issues?
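For context, a rough sketch of what enabling it involves (the pool name "rbd" and snapshot-based mirroring are assumptions, not from the original post):
rbd mirror pool enable rbd image          # enable per-image mirroring on the pool
rbd mirror image enable rbd/myimage snapshot
# an rbd-mirror daemon must be running on the cluster receiving the replicas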
Thanks,
Zhen
Hi Christian
I don't have much experience with multisite so I'll let someone else
answer that aspect. But each RGW will only accept requests where the
Host header matches one of the "hostnames" configured as below.
Otherwise the client will simply get an error response. So, as someone
else su
Hi all,
first, apologies for my written English :)
I installed a Ceph system with 3 servers :
- server 1 : all services
- server 2 : all services
- server 3 : no osd, only monitor
I put files on CephFS: all is good and the Ceph monitor indicates 2
replicas.
But when I down server 2, my
Is your min_size also 2? Change that to 1.
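A sketch of checking/changing it (the pool name is a placeholder; note the rest of the thread on why min_size=1 is risky):
ceph osd pool get <pool> min_size
ceph osd pool set <pool> min_size 1   # allows I/O with a single copy, at the risk of data loss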
> -----Original Message-----
> Sent: Tuesday, 17 August 2021 12:16
> To: ceph-users@ceph.io
> Subject: [ceph-users] Raid redundance not good
>
> Hi all,
>
> first, apologies for my written English :)
>
> I installed a Ceph system with 3 servers
Hi Etienne,
Thanks for your answer. I actually had to remove the class first. So for
example this 2-step process works:
ceph osd crush rm-device-class osd.0
ceph osd crush set-device-class sdd osd.0
The osd tree now correctly reports sdd as the class. Funnily enough, "ceph orch device
ls" still reports
On Tue 17 Aug 2021 at 11:46, Chris Palmer wrote:
>
> Hi Christian
>
> I don't have much experience with multisite so I'll let someone else
> answer that aspect. But each RGW will only accept requests where the
> Host header matches one of the "hostnames" configured as below.
> Otherwise the clien
On Tue 17 Aug 2021 at 12:17, Network Admin
wrote:
> Hi all,
> first, apologies for my written English :)
> I installed a Ceph system with 3 servers :
> - server 1 : all services
> - server 2 : all services
> - server 3 : no osd, only monitor
> I put files on CephFS : all is good and ceph monitor
Hello List,
I am running Proxmox on top of Ceph 14.2.20 on the nodes, with size 3, min_size 2.
Last week I wanted to swap the HDDs for SSDs on one node.
Since I have 3 nodes with size 3, min_size 2, I did the following:
1.) ceph osd set noout
2.) Stopped all OSDs on that one node
3.) I set the OSDs to out
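For reference, steps 2 and 3 in command form would be roughly (OSD ids are placeholders):
systemctl stop ceph-osd@<id>   # repeat for each OSD on that node
ceph osd out <id>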
Maybe an instance of https://tracker.ceph.com/issues/46847 ?
Next time you see this problem, you can try the new "repeer" command on the
affected PGs. The "ceph pg x.y query" output mentioned by Etienne will provide a
clue as to whether it's due to this bug.
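For reference, a sketch with a placeholder PG id:
ceph pg 2.1a query    # look at the peering/acting-set details first
ceph pg repeer 2.1a   # then force the PG to re-peer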
Best regards,
=
Frank Schilder
AIT Risø C
Hi, after some trial and error I got it working, so users will get synced.
However, if I try to create a bucket via s3cmd I receive the following
error:
s3cmd --access_key=XX --secret_key=YY --host=HOST mb s3://test
ERROR: S3 error: 403 (InvalidAccessKeyId)
When I try the same with ls I just get
Hi,
I figured I should follow up on this discussion, not with the intention of
bashing any particular solution, but to point out at least one current major
challenge with cephadm.
As I wrote earlier in the thread, we previously found it ... challenging to
debug things running in cephadm. Earlier t
>
> Again, this is meant as hopefully constructive feedback rather than
> complaints, but the feeling I get after having had fairly smooth
> operations with raw packages (including fixing previous bugs leading to
> severe crashes) and lately grinding our teeth a bit over cephadm is that
> it has h
Hi all ,
Going to deploy a Ceph cluster in production with a replica size of 2. Are
there any drawbacks on the service side? I am going to change the
default (3) to 2.
Please advise.
Regards.
Michel
>
> going to deploy a test cluster and successfully deployed my first
> monitor (hurray!).
>
> Now trying to add the first osd host following instructions at:
> https://docs.ceph.com/en/latest/install/manual-deployment/#bluestore
>
ceph-volume lvm zap --destroy /dev/sdb
ceph-volume lvm create --data /dev/sdb
If you have to ask (and don't give crucial details like ssd/hdd etc), then I
would recommend just following the advice of more experienced and knowledgeable
people here and stick to 3 (see archive).
> -Original Message-
> From: Michel Niyoyita
> Sent: Tuesday, 17 August 2021 16:29
> T
There are only two ways that size=2 can go:
A) You set min_size=1 and risk data loss
B) You set min_size=2 and your cluster stops every time you lose a
drive or reboot a machine
Neither of these is a good option for most use cases, but there's
always an edge case. You should stay with size=3, min_
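For completeness, the usual settings in command form (pool name is a placeholder):
ceph osd pool set <pool> size 3
ceph osd pool set <pool> min_size 2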
There are certain sequences of events that can result in Ceph not knowing
which copy of a PG (if any) has the current information. That’s one way you
can effectively lose data.
I ran into it myself last year on a legacy R2 cluster.
If you *must* have a 2:1 raw:usable ratio, you’re better off
Hi,
Whether containers are good or not is a separate discussion where I suspect
there won't be consensus in the near future.
However, after just having looked at the documentation again, my main point
would be that when a major stable open source project recommends a specific
installation meth
Hello,
I get: 1 pools have many more objects per pg than average
detail: pool cephfs.backup.data objects per pg (203903) is more than
20.307 times cluster average (10041)
I set pg_num and pgp_num from 32 to 128, but my autoscaler seems to set
them back to 32 again :-/
For details please see:
htt
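One way to stop that tug-of-war is to either disable the autoscaler for the pool or give it a size hint; a sketch (the ratio value is only a placeholder):
ceph osd pool set cephfs.backup.data pg_autoscale_mode off
ceph osd pool set cephfs.backup.data pg_num 128
# or keep the autoscaler and tell it how large the pool is expected to get:
ceph osd pool set cephfs.backup.data target_size_ratio 0.2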
Hi all,
going to deploy a test cluster and successfully deployed my first
monitor (hurray!).
Now trying to add the first osd host following instructions at:
https://docs.ceph.com/en/latest/install/manual-deployment/#bluestore
I have to note, however, that:
1.
--
copy /var/lib/ceph/boo
Hi,
I’m coming at this from the position of a newbie to Ceph. I had some
experience of it as part of Proxmox, but not as a standalone solution.
I really don’t care whether Ceph is containerized or not, I don’t have the depth of
knowledge or experience to argue it either way. I can see that contai
Hey David,
In case this wasn't answered off list already:
It looks like you have only added a single OSD to each new host?
You specified 12*10T on osd{1..5}, and 12*12T on osd{6,7}.
Just as a word of caution, the added 24T is more or less going to be wasted on
osd{6,7} assuming that your crush
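A quick way to compare per-host weights and utilization (no assumptions beyond a standard cluster):
ceph osd df tree
ceph osd crush tree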
> On Aug 17, 2021, at 12:28 PM, Francesco Piraneo G.
> wrote:
>
> # ceph-volume lvm create --data /dev/sdb --dmcrypt --cluster euch01
Your first message indicated a default cluster name; this one implies a
non-default name.
Whatever else you do, avoid custom cluster names. They will only
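In other words, stick with the default cluster name "ceph"; a sketch of the earlier command without the flag:
ceph-volume lvm create --data /dev/sdb --dmcrypt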
going to deploy a test cluster and successfully deployed my first
monitor (hurray!).
Now trying to add the first osd host following instructions at:
https://docs.ceph.com/en/latest/install/manual-deployment/#bluestore
ceph-volume lvm zap --destroy /dev/sdb
ceph-volume lvm create --data /dev/sdb
Hello everyone,
We have a ceph cluster with version Pacific v16.2.4
We are trying to implement the ceph module snap-schedule from this document
https://docs.ceph.com/en/latest/cephfs/snap-schedule/
It works if you have, say, an hourly schedule and a retention of h 3
ceph fs snap-schedule add /volumes/user1/vo
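For reference, the kind of commands the linked documentation describes (path and values are placeholders, not the poster's actual ones):
ceph fs snap-schedule add /volumes/user1/<subvol> 1h
ceph fs snap-schedule retention add /volumes/user1/<subvol> h 3
ceph fs snap-schedule status /volumes/user1/<subvol>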
Hello,
about four weeks ago I upgraded my cluster (144 4TB hdd-OSDs, 9
hosts) from 14.2.16 to 14.2.22. The upgrade did not cause any trouble.
The cluster is healthy. One thing, however, is new since the upgrade and
somewhat irritating:
each weekend, in the night from Saturday to Sunday, I now se
On 17/08/2021 13:37, Janne Johansson wrote:
Don't forget that v4 auth bakes in the client's idea of what the
hostname of the endpoint was, so it's not only about changing headers.
If you are not using v2 auth, you will not be able to rewrite the
hostname on the fly.
Thanks for the heads up in thi
Yes,
I want to open up a new DC where people can store their objects, but I want
the bucket names and users to be unique across both DCs.
After some reading I found that I need one realm with multiple zonegroups,
each containing only one zone.
No sync of actual user data, but metadata like users or used bu
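Roughly, that layout would be created along these lines; a sketch with placeholder names/endpoints, and the exact flags depend on your release:
radosgw-admin realm create --rgw-realm=myrealm --default
radosgw-admin zonegroup create --rgw-zonegroup=dc1 --endpoints=https://rgw.dc1.example.com --rgw-realm=myrealm --master --default
radosgw-admin zone create --rgw-zonegroup=dc1 --rgw-zone=dc1-zone --endpoints=https://rgw.dc1.example.com --master --default
radosgw-admin period update --commit
# repeat the zonegroup/zone creation (without --master) for the second DC, then commit the period again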