Hm, the same test worked for me with version 16.2.13... I mean, I only
do a few writes from a single client, so this may be an invalid test,
but I don't see any interruption.
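Roughly along these lines, just as an illustration (the /mnt mount
point, file names and iteration count are arbitrary):
# single client writing a small file once a second during the maintenance
i=0
while [ "$i" -lt 60 ]; do
    echo "test $i $(date)" > /mnt/writetest.$i || echo "write $i failed"
    i=$((i+1))
    sleep 1
done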
Quoting Eugen Block:
I just tried to reproduce the behaviour but failed to do so. I have
a Reef (18.2.2) cluster
Then you were clearly paying more attention than me. ;-) We had some
maintenance going on during that talk, so I couldn't really focus
entirely on listening. But thanks for clarifying!
Quoting Frédéric Nass:
Hi Eugen,
During the talk you've mentioned, Dan said there's a hard coded
lim
I just tried to reproduce the behaviour but failed to do so. I have a
Reef (18.2.2) cluster with multi-active MDS. Don't mind the hostnames,
this cluster was deployed with Nautilus.
# mounted the FS
mount -t ceph nautilus:/ /mnt -o name=admin,secret=,mds_namespace=secondfs
# created and
On Thu, 21 Nov 2024 at 19:18, Andre Tann wrote:
> > This post seems to show that, except they have their root named "nvme"
> > and they split on rack and not dc, but that is not important.
> >
> > https://unix.stackexchange.com/questions/781250/ceph-crush-rules-explanation-for-multiroom-racks-setu
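For reference, a rule along the lines of the linked post can be created
with the stock CLI; a minimal sketch, assuming a CRUSH root named nvme
and rack as the failure domain (the rule name is made up):
# create a replicated rule that picks the "nvme" root and spreads across racks
ceph osd crush rule create-replicated nvme-by-rack nvme rack
# inspect the generated rule
ceph osd crush rule dump nvme-by-rack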
Hi,
I don't see how that would currently be possible. The OSD creation is
handled by ceph-volume, which activates each OSD separately:
[2024-11-22 14:03:08,415][ceph_volume.main][INFO ] Running command:
ceph-volume activate --osd-id 0 --osd-uuid
aacabeca-9adb-465c-88ee-935f06fa45f7 --no-s
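Outside of cephadm, ceph-volume does at least offer a batch activation
entry point; a minimal sketch, assuming LVM-based OSDs:
# activate every OSD found on this host in one invocation instead of one call per OSD
ceph-volume lvm activate --all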
Casey,
Is there any way to 'speed up' full sync? While I (now) understand that full
sync prioritizes older objects first, a use case of frequently reading/using
'recent' objects cannot be met when older objects are pulled first.
In my current scenario it appears
that after a ti
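In case it is useful, the sync progress can be watched from the pulling
side; a minimal sketch (the source zone name "primary" is just a
placeholder, replace it with your actual zone):
# overall multisite sync state for the local zone
radosgw-admin sync status
# per-source data sync detail
radosgw-admin data sync status --source-zone=primary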
Hi,
Thank you. The Ceph cluster is running smoothly so far. However, during our
testing, we re-installed it multiple times and observed that the
ceph-volume command took over a minute to activate the OSD.
In the activation stage, ceph-volume called "ceph-bluestore-tool
show-label".
It appears th
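For what it's worth, that call can be reproduced and timed by hand; a
small sketch, assuming a bluestore OSD whose data device is /dev/sdb
(adjust the device path to your layout):
# run the same label read manually and time it
time ceph-bluestore-tool show-label --dev /dev/sdb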
Hi,
I can remember that, with the tools before cephadm, it somehow took 8 hours
to deploy a Ceph cluster with more than 2000 OSDs.
But I also know that CBT has a much faster approach to installing a Ceph
cluster.
Just an idea: maybe you can look at CBT's approach to make cephadm
faster.
Regard
Hi
this is the tenth unsubscribe mail I have sent, and after a few minutes I
receive another email.
Please, could some admin remove my email address from the mailing list?
thanks
Hi,
we recently migrated an 18.2.2 Ceph cluster from ceph-deploy to cephadm
(Ubuntu with Docker).
The RGWs are separate VMs.
We noticed that syslog grew a lot due to the RGW access logs being sent to it.
And because we log ops, a huge ops log file on
/var/log/ceph/cluster-id/ops-log-ceph-client.rgw.hostn
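A minimal sketch of the settings involved, assuming the goal is to keep
RGW logging out of syslog; verify the options against your release
before changing anything:
# keep daemon logging in a file instead of syslog
ceph config set client.rgw log_to_syslog false
ceph config set client.rgw log_to_file true
# the ops log can be disabled entirely if it is not needed
ceph config set client.rgw rgw_enable_ops_log false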
Well... only because I had this discussion in the back of my mind when I
watched the video yesterday. ;-)
Cheers,
Frédéric.
- On 22 Nov 24, at 8:59, Eugen Block ebl...@nde.ag wrote:
> Then you were clearly paying more attention than me. ;-) We had some
> maintenance going on during that t
As previously disclosed, the list currently has issues. This is the first
request visible to me. I will get you unsubscribed tonight.
> On Nov 22, 2024, at 8:28 AM, Debian 108 wrote:
>
> Hi
> this is the tenth unsubscribe mail I have sent, and after a few minutes I
> receive another email.