Hi,
so there currently is a section on how to configure nova [0], but it
refers to the client-side ceph.conf, not the rbd details in nova.conf,
as Ilya already pointed out. I'll just add what I have in one of my
test clusters in the [libvirt] section of the nova.conf (we use it
identically in
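For reference, a minimal sketch of the rbd-related settings that usually live in the [libvirt] section of nova.conf; the pool name, cephx user and secret UUID below are placeholders, not values from this thread:

# placeholder values; adjust pool, user and secret to your deployment
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = <libvirt secret UUID>
disk_cachemodes = "network=writeback"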
On Thu, 25 Jan 2024 at 03:05, Henry lol wrote:
>
> Do you mean the object's location (OSDs) is initially calculated only
> using its name and the crushmap,
> and then the result is reprocessed with the map of the PGs?
>
> And I'm still skeptical about computation on the client side.
> Is it possible to obt
I have to say that not including a fix for a serious issue in the last
minor release of Pacific is a rather odd decision.
/Z
On Thu, 25 Jan 2024 at 09:00, Konstantin Shalygin wrote:
> Hi,
>
> The backport to Pacific was rejected [1]; you may switch to Reef when [2]
> is merged and released.
>
>
Hi,
The backport to Pacific was rejected [1]; you may switch to Reef when [2]
is merged and released.
[1] https://github.com/ceph/ceph/pull/55109
[2] https://github.com/ceph/ceph/pull/55110
k
Sent from my iPhone
> On Jan 25, 2024, at 04:12, changzhi tan <544463...@qq.com> wrote:
>
> Is there an
I found that quickly restarting the affected mgr every two days is an okay
kludge. The restart takes less than a second, and that way the mgr never
grows to dangerous sizes, which is what happens when it randomly starts
ballooning.
/Z
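A minimal sketch of how such a periodic restart could be scheduled, assuming a cron-driven "ceph mgr fail" to push the active mgr over to a standby; the schedule and file name are illustrative, not from this thread:

# /etc/cron.d/ceph-mgr-failover (hypothetical)
# fail over the active mgr every two days at 03:00 so it never grows too large;
# older releases need the active mgr's name as an argument to 'ceph mgr fail'
0 3 */2 * *  root  /usr/bin/ceph mgr fail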
On Thu, 25 Jan 2024, 03:12 changzhi tan, <544463...@qq.com> wrote:
> Is there any way to sol
Do you mean the object's location (OSDs) is initially calculated only using its
name and the crushmap,
and then the result is reprocessed with the map of the PGs?
And I'm still skeptical about computation on the client side.
Is it possible to obtain the object location without computation on the client,
because
Is there any way to solve this problem? Thanks
You guys can just respond here and I’ll add your responses to the docs.
Zac
Sent from Proton Mail for iOS
On Thu, Jan 25, 2024 at 05:52, Ilya Dryomov <idryo...@gmail.com> wrote:
> On Wed, Jan 24, 2024 at 7:31
On Wed, Jan 24, 2024 at 8:52 PM Ilya Dryomov wrote:
>
> On Wed, Jan 24, 2024 at 7:31 PM Eugen Block wrote:
> >
> > We do like the separation of nova pools as well, and we also heavily
> > use ephemeral disks instead of boot-from-volume instances. One of the
> > reasons being that you can't detach
On Wed, Jan 24, 2024 at 7:31 PM Eugen Block wrote:
>
> We do like the separation of nova pools as well, and we also heavily
> use ephemeral disks instead of boot-from-volume instances. One of the
> reasons is that you can't detach a root volume from an instance.
> It helps in specific maintena
We do like the separation of nova pools as well, and we also heavily
use ephemeral disks instead of boot-from-volume instances. One of the
reasons is that you can't detach a root volume from an instance.
It helps in specific maintenance cases, so +1 for keeping it in the
docs.
Quoting
Hi everyone,
Stupid question about
ceph fs volume create
How can I specify the metadata pool and the data pool?
I was able to create a CephFS «manually» with something like
ceph fs new vo cephfs_metadata cephfs_data
but as I understand the documentation, with this method I need to dep
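For comparison, a sketch of the fully manual route alluded to above, with the pool names from the message and illustrative pg counts; as far as I know, "ceph fs volume create" creates and names its pools itself and takes no explicit pool arguments:

# create the pools by hand, then tie them together into a filesystem
ceph osd pool create cephfs_metadata 16
ceph osd pool create cephfs_data 64
ceph fs new vo cephfs_metadata cephfs_data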
Hi,
The client calculates the location (PG) of an object from its name and the
crushmap.
This is what makes it possible to parallelize the flows directly from the
client.
The client also has the map of the PGs which are relocated to other OSDs
(upmap, temp, etc.)
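A quick way to see the result of that calculation without writing any client code is "ceph osd map"; the pool and object names below are placeholders:

# show the PG and acting set an object name maps to; this is the same
# CRUSH calculation a client performs locally from the osdmap/crushmap
ceph osd map rbd my-object
# illustrative output:
# osdmap e1234 pool 'rbd' (2) object 'my-object' -> pg 2.7f9ca063 (2.63)
#   -> up ([3,1,5], p3) acting ([3,1,5], p3)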
On Wed, Jan 24, 2024 at 10:02 AM Murilo Morais
wrote:
> Good afternoon everybody!
>
> I have a question regarding the documentation... I was reviewing it and
> realized that the "vms" pool is not being used anywhere in the configs.
>
> The first mention of this pool was in commit 2eab1c1 and, in
Hello, I'm new to Ceph and sorry in advance for the naive questions.
1.
As far as I know, CRUSH utilizes the cluster map, which consists of the PG
map and others.
I don't understand why CRUSH computation is required on the client side,
even though the PG-to-OSD mapping can be acquired from the PG map.
2.
how
- Build/package PRs: who is best placed to review these?
- Example: https://github.com/ceph/ceph/pull/55218
- Idea: create a GitHub team specifically for these types of PRs
  (https://github.com/orgs/ceph/teams)
- Laura will try to organize people for the group
- Pacific 16.2.15 status
Murilo,
I'm looking into it.
Zac Dover
Upstream Documentation
Ceph Foundation
On Thursday, January 25th, 2024 at 1:01 AM, Murilo Morais
wrote:
>
>
> Good afternoon everybody!
>
> I have a question regarding the documentation... I was reviewing it and
> realized that the "vms" pool is no
Good afternoon everybody!
I have a question regarding the documentation... I was reviewing it and
realized that the "vms" pool is not being used anywhere in the configs.
The first mention of this pool was in commit 2eab1c1 and, in e9b13fa, the
configuration section of nova.conf was removed, but
Hi,
Hector also claims that he observed an incomplete acting set after *adding* an
OSD. Assuming that the cluster was HEALTH_OK before that, this should not
happen in theory. In practice it has been observed with certain crush map
definitions. There is, for example, the issue with "choose" and
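One known pitfall in this area (possibly what the truncated sentence refers to) is the difference between "choose" and "chooseleaf" in a rule; a sketch with illustrative names and id:

rule replicated_hosts {
    id 5
    type replicated
    step take default
    # 'step choose ... type host' followed by 'step choose ... type osd' can
    # return fewer OSDs than the pool size when a selected host has no usable
    # OSD; 'chooseleaf' descends and retries, which avoids that
    step chooseleaf firstn 0 type host
    step emit
}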
Hi,
this question has come up once in the past [0] afaict, but it was kind of
inconclusive so I'm taking the liberty of bringing it up again.
I'm looking into implementing a key rotation scheme for Ceph client keys. As it
potentially takes some non-zero amount of time to update key material ther
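One possible approach (not from this thread; entity names and caps are placeholders) is to rotate by creating a parallel client identity, switching consumers over at their own pace, and retiring the old one afterwards:

# create the replacement identity with the same caps as the current one
ceph auth get-or-create client.app-v2 mon 'allow r' osd 'allow rw pool=app-pool'
# distribute the new keyring, switch all clients over, verify nothing still
# authenticates as the old identity, then remove it
ceph auth del client.app-v1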
On 23.01.2024 18:19, Albert Shih wrote:
Just like to know if it's a very bad idea to do an rsync of /etc/ceph from
the «_admin» server to the other Ceph cluster servers.
In fact I add something like
for host in `cat /usr/local/etc/ceph_list_noeuds.txt`
do
  /usr/bin/rsync -av /etc/ceph/ceph* $host:/etc/ceph/
done
Hi,
Confirmed that this happens to me as well.
After upgrading from 18.2.0 to 18.2.1, OSD metrics like ceph_osd_op_*
are missing from ceph-mgr.
The Grafana dashboard also doesn't display all graphs correctly.
ceph-dashboard/Ceph - Cluster : Capacity used, Cluster I/O, OSD Capacity
Utilizatio
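A quick way to confirm whether the exporter still emits those series is to scrape the active mgr's prometheus endpoint directly; the host is a placeholder and 9283 is the module's default port:

# count the ceph_osd_op_* series currently exposed by the mgr prometheus module
curl -s http://<active-mgr-host>:9283/metrics | grep -c '^ceph_osd_op'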
Hi all,
I need to list the contents of the stray buckets on one of our MDSes. The MDS
reports 772674 stray entries. However, if I dump its cache and grep for stray I
get only 216 hits.
How can I get to the contents of the stray buckets?
Please note that Octopus is still hit by https://tracker.
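One way to look at the stray buckets without relying on a cache dump (which only shows what is currently in memory) is to list the omap keys of the stray directory objects in the metadata pool; the pool name is a placeholder, and the object names assume rank 0's unfragmented stray dirs (inodes 0x600-0x609):

# each stray directory of rank 0 is stored as an object 600.00000000 .. 609.00000000;
# its omap keys are the stray dentries
for i in 0 1 2 3 4 5 6 7 8 9; do
  rados -p cephfs_metadata listomapkeys 60${i}.00000000
done | wc -l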
> [...] After a few days, I have on our OSD nodes around 90MB/s
> read and 70MB/s write while 'ceph -s' shows client io as
> 2.5MB/s read and 50MB/s write. [...]
This is one of my pet-peeves: that a storage system must have
capacity (principally IOPS) to handle both a maintenance
workload and a use
On 24/01/2024 at 10:33:45 +0100, Robert Sander wrote:
Hi,
>
> On 1/24/24 10:08, Albert Shih wrote:
>
> > 99.99% because I'm a newbie with Ceph and don't clearly understand how
> > the authorisation works with CephFS ;-)
>
> I strongly recommend that you ask an experienced Ceph consultant to help
Hi,
On 1/24/24 10:08, Albert Shih wrote:
99.99% because I'm a newbie with Ceph and don't clearly understand how
the authorisation works with CephFS ;-)
I strongly recommend that you ask an experienced Ceph consultant to help
you design and set up your storage cluster.
It looks like you try
On 24/01/2024 at 10:23:20 +0100, David C. wrote:
Hi,
>
> In this scenario, it is more consistent to work with subvolumes.
Ok. I will do that.
>
> Regarding security, you can use namespaces to isolate access at the OSD level.
Hum... I currently have no idea what you just said, but that's
Hi Albert,
In this scenario, it is more consistent to work with subvolumes.
Regarding security, you can use namespaces to isolate access at the OSD
level.
What Robert emphasizes is that creating pools dynamically is not without
effect on the number of PGs and (therefore) on the architecture (PG
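A sketch of what the subvolume/namespace suggestion can look like in practice, with placeholder filesystem, subvolume and client names:

# create a subvolume whose objects are confined to their own RADOS namespace
ceph fs subvolume create myfs app1 --namespace-isolated
# grant a client access to that subvolume's path only
ceph fs authorize myfs client.app1 "$(ceph fs subvolume getpath myfs app1)" rw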
On 24/01/2024 at 09:45:56 +0100, Robert Sander wrote:
Hi
>
> On 1/24/24 09:40, Albert Shih wrote:
>
> > Given that I have two classes of OSD (hdd and ssd), and I need ~20-30
> > CephFS filesystems (currently, and that number will increase over time).
>
> Why do you need 20 - 30 separate CephFS instances
Hi,
On 1/24/24 09:40, Albert Shih wrote:
Given that I have two classes of OSD (hdd and ssd), and I need ~20-30
CephFS filesystems (currently, and that number will increase over time).
Why do you need 20 - 30 separate CephFS instances?
and put all my cephfs inside two of them. Or should I create for
Hi,
this topic pops up every now and then, and although I don't have
definitive proof for my assumptions I still stand by them. ;-)
As the docs [2] already state, it's expected that PGs become degraded
after some sort of failure (setting an OSD "out" falls into that
category IMO):
It is
Hi everyone,
I'd like to know how many pools I should create for multiple CephFS
filesystems.
Given that I have two classes of OSD (hdd and ssd), and I need ~20-30
CephFS filesystems (currently, and that number will increase over time).
Should I create
one cephfs_metadata_replicated
one cephfs_data_replicated
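For reference, a sketch of how shared per-device-class pools are usually set up before being handed to a filesystem; the rule, pool and filesystem names are placeholders and the pg counts are illustrative:

# one crush rule per device class
ceph osd crush rule create-replicated replicated_ssd default host ssd
ceph osd crush rule create-replicated replicated_hdd default host hdd
# metadata on ssd, data on hdd, then one filesystem on top of the pair
ceph osd pool create cephfs_metadata 16
ceph osd pool set cephfs_metadata crush_rule replicated_ssd
ceph osd pool create cephfs_data 128
ceph osd pool set cephfs_data crush_rule replicated_hdd
ceph fs new myfs cephfs_metadata cephfs_data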