Hello,
etherpad history lost
need a way to recover from DB or find another way to back things up
discuss the quincy/dashboard-v3 backports? was tabled from 11/1
https://github.com/ceph/ceph/pull/54252
agreement is to not backport breaking features to stable branches.
18.2.1
LRC upgrade
Hi everyone,
I'd just like to know your opinion about the reliability of erasure
coding.
Of course I can understand that if we want the «best of the best of the best»
;-) I can choose the replica method.
I have heard in many places that “replica” is more reliable, “replica” is more
efficient, etc...
I use EC 4+2 on the backup site; the production site is running replica 3.
We run 8 servers on the backup side and 12 on the production side;
the number of OSDs per server is 16 on all of them.
Production has LACP-bonded 25G networking for the public and cluster networks;
backup has just 10G networking with no redundancy.
Yes, lots of people are using EC.
Which is more “reliable” depends on what you need. If you need to survive 4
failures, there are scenarios where RF=3 won’t do it for you.
You could in such a case use an EC 4,4 profile, 8,4, etc.
It’s a tradeoff between write speed and the raw:usable ratio efficiency.
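For example, a profile that tolerates four failures could be created roughly
like this (an untested sketch; the profile and pool names, PG count and failure
domain are placeholders to adapt to your cluster):

    ceph osd erasure-code-profile set ec44 k=4 m=4 crush-failure-domain=host
    ceph osd pool create ecpool44 128 128 erasure ec44
    ceph osd pool get ecpool44 min_size    # typically defaults to k+1 = 5

Keep in mind that 4+4 halves the usable:raw ratio compared to 4+2 (~50% vs
~67%), which is exactly the tradeoff mentioned above.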
Hi everyone.
Still me with my newbie question, sorry.
I'm using cephadm to deploy my Ceph cluster, but when I search the
documentation «docs.ceph.com» I see in some places, like
https://docs.ceph.com/en/latest/rados/configuration/pool-pg-config-ref/,
that I need to change something in the /etc/ceph/ceph.conf.
> to change something in the /etc/ceph/ceph.conf.
Central config was introduced with Mimic. Since both central config and
ceph.conf work and are supported, explicitly mentioning both in the docs every
time is a lot of work (and awkward). One day we’ll sort out an effective means
to generalize this.
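As a quick illustration (the option and value here are only an example), the
same setting from that page can be expressed either way:

    # central config, stored on the mons and visible via "ceph config dump"
    ceph config set global osd_pool_default_size 3

    # legacy ceph.conf equivalent
    [global]
        osd_pool_default_size = 3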
Hi Albert,
You should never edit any file in the containers; cephadm takes care of
it. Most of the parameters described in the doc you mentioned are better
managed with the "ceph config" command in the Ceph configuration database.
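For example (the option here is only a placeholder), from an admin node or
inside "cephadm shell" something like this writes and reads a value:

    cephadm shell                              # if the ceph CLI is not installed on the host
    ceph config set osd osd_max_backfills 2    # store the value in the config database
    ceph config get osd osd_max_backfills      # read it back
    ceph config dump                           # list everything stored centrally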
If you want to run the ceph command on a Ceph machine outside a
container
On 23/11/2023 at 09:32:49 -0500, Anthony D'Atri wrote:
>
Hi,
Thanks for your answer.
>
> > to change something in the /etc/ceph/ceph.conf.
>
> Central config was introduced with Mimic. Since both central config and
> ceph.conf work and are supported, explicitly mentioning both in the docs
>>>
>>> Should I modify the ceph.conf (vi/emacs) directly ?
>>
>> vi is never the answer.
>
> WTF ? You break my dream ;-) ;-)
Let line editors die.
>>
> You're right.
>
> Currently I'm testing
>
> 17.2.7 quincy.
>
> So in my daily life, how would I know if I should use ceph config or
> ceph.conf?
>
On 23/11/2023 at 15:35:25 +0100, Michel Jouvin wrote:
Hi,
>
> You should never edit any file in the containers, cephadm takes care of it.
> Most of the parameters described in the doc you mentioned are better managed
> with "ceph config" command in the Ceph configuration database. If you want
> t
Hi,
Please note that there are cases where the use of ceph.conf inside a
container is justified. For example, I was unable to set the monitor's
mon_rocksdb_options by any means except for providing them in the monitor's own
ceph.conf within the container; all other attempts to pass these settings
were ignored.
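Concretely, that means something roughly along these lines in the mon's own
ceph.conf (the option values here are purely illustrative):

    [mon]
        mon_rocksdb_options = write_buffer_size=33554432,compression=kNoCompression

The mon then has to be restarted so the file is re-read on startup.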
> Now, 25 years later, a lot of people recommend using replicas, so if I buy X TB
> I'm only going to have X/3 TB (vs raidz2 where I lose 2 disks out of 9-12
> disks).
As seen from other answers, it changes which performance and space
usage tradeoff you get, but there are other factors too. Replica = 3
(
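To put rough numbers on the space side alone (a back-of-the-envelope sketch,
ignoring full ratios and other overhead):

    replica 3            : usable ≈ raw x 1/3   ≈ 33%
    EC 4+2               : usable ≈ raw x 4/6   ≈ 67%
    raidz2 over 12 disks : usable ≈ raw x 10/12 ≈ 83%  (single host, no node-level redundancy)

So the X/3 figure above only applies to replica 3; EC narrows the gap to raidz2
while still spreading the data across hosts.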
Hi,
> On Nov 23, 2023, at 16:10, Nizamudeen A wrote:
>
> RCs for reef, quincy and pacific
> for next week when there is more time to discuss
Just a little noise: is pacific ready? 16.2.15 should be the last release (at least
that was the last plan), but [1] is still not merged. Why is the ticket now closed?
[1] was closed by mistake. Reopened.
On 11/23/2023 7:18 PM, Konstantin Shalygin wrote:
Hi,
On Nov 23, 2023, at 16:10, Nizamudeen A wrote:
RCs for reef, quincy and pacific
for next week when there is more time to discuss
Just a little noise: is pacific ready? 16.2.15 should be the last release
Hi Mailing-Lister's,
I am reaching out for assistance regarding a deployment issue I am facing
with Ceph on a 4-node RKE2 cluster. We are attempting to deploy Ceph via
the Rook Helm chart, but we are encountering an issue that seems
related to a known bug (https://tracker.ceph.com/issue
Hi,
please, is it better to reduce the default object size from 4 MB to some
smaller value for an RBD image that will hold a lot of small mail
and webhosting files?
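If it helps, the setting would be applied at image creation time, something
like this (the pool/image names and sizes are just placeholders):

    rbd create --size 100G --object-size 1M mailpool/mailimage
    rbd info mailpool/mailimage | grep order    # confirm the object size / order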
Thanks
Svoboda Miroslav
Hello guys.
I see many docs and threads talking about OSD failures. I have a question:
how many nodes in a cluster can fail?
I am using EC 8+2 (10 OSD nodes), and when I shut down 2 nodes my
cluster crashes; it cannot write anymore.
Thank you. Regards
Nguyen Huu Khoi
Hi,
basically, with EC pools you usually have a min_size of k + 1 to
prevent data loss. There was a thread about that just a few days ago
on this list. So in your case your min_size is probably 9, which makes
IO pause in case two chunks become unavailable. If your crush failure
domain is host, shutting down two of your ten nodes does exactly that.
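You can check what the pool is actually using with something like this
(replace <pool> and <profile> with your pool and EC profile names):

    ceph osd pool get <pool> size                   # k+m, i.e. 10 for an 8+2 profile
    ceph osd pool get <pool> min_size               # typically k+1 = 9
    ceph osd pool get <pool> erasure_code_profile
    ceph osd erasure-code-profile get <profile>

With 10 hosts and k+m = 10, every PG spans all 10 hosts, so two hosts down
leaves only 8 chunks per PG, one below min_size, and IO pauses until a chunk
comes back.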
Hi,
I don't have an idea yet why that happens, but could you increase the
debug level to see why it stops? What is the current ceph status?
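If it is the MDS that stops, something along these lines should give more
detail (the debug values are just examples; remember to revert them afterwards):

    ceph config set mds debug_mds 20
    ceph config set mds debug_ms 1
    ceph fs status
    ceph -s

and then check the log of the MDS rank that fails.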
Thanks,
Eugen
Quoting Denis Polom:
Hi
running Ceph Pacific 16.2.13.
We had a full CephFS filesystem and after adding new HW we tried to
start it
Hello.
I am reading.
Thank you for information.
Nguyen Huu Khoi
On Fri, Nov 24, 2023 at 1:56 PM Eugen Block wrote:
> Hi,
>
> basically, with EC pools you usually have a min_size of k + 1 to
> prevent data loss. There was a thread about that just a few days ago
> on this list. So in your case y
Hello,
How many nodes do you have?
> -Original Message-
> From: Nguyễn Hữu Khôi
> Sent: Friday, 24 November 2023 07:42
> To: ceph-users@ceph.io
> Subject: [ceph-users] [CEPH] Ceph multi nodes failed
>
> Hello guys.
>
> I see many docs and threads talking about OSD failures. I have a question:
Hello.
I have 10 nodes. My goal is to ensure that I won't lose data if 2 nodes
fail.
Nguyen Huu Khoi
On Fri, Nov 24, 2023 at 2:47 PM Etienne Menguy
wrote:
> Hello,
>
> How many nodes do you have?
>
> > -Original Message-
> > From: Nguyễn Hữu Khôi
> > Sent: Friday, 24 November 2023 07:42