Hi!
We have read https://docs.ceph.com/en/latest/man/8/mount.ceph, and would
like to see our expectations confirmed (or denied) here. :-)
Suppose we build a three-node cluster (three monitors, three MDSs, etc.)
in order to export a CephFS to multiple client nodes.
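Based on our reading of the man page, we expect that listing all three monitors
on each client gives us failover if one node is down; roughly like this
(monitor addresses, client name and secretfile path are just placeholders):

  # expected client mount with all three monitors listed
  mount -t ceph 192.0.2.11,192.0.2.12,192.0.2.13:/ /mnt/cephfs \
    -o name=webclient,secretfile=/etc/ceph/webclient.secret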
On the (RHEL8) clients (web
I can't remember that much of it; AFAICR this is more of a Kubernetes plugin, and
whatever functionality was lacking in Kubernetes they tried to work around with the
plugin. So I have problems creating pre-provisioned volumes. AFAICR you need the
driver to create a volume, so when you use the driver from t
On 10/25/22 17:08, Simon Oosthoek wrote:
At this point, one of us noticed that a strange IP address was mentioned:
169.254.0.2. It turns out that a recently added package (openmanage) and
some configuration had added this interface and address to hardware
nodes from Dell. For us, our single inte
> On 26 Oct 2022, at 10:11, mj wrote the following:
>
> Hi!
>
> We have read https://docs.ceph.com/en/latest/man/8/mount.ceph, and would like
> to see our expectations confirmed (or denied) here. :-)
>
> Suppose we build a three-node cluster (three monitors, three MDSs, etc.) in
Hi Team,
Facing an issue while installing Grafana and related containers while
deploying with ceph-ansible.
Error:
t_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit
status from master 0\r\n")
fatal: [storagenode1]: FAILED! => changed=false
invocation:
module_args:
d
I use this very example, with a few more servers. I have no outage windows for my
Ceph deployments, as they support several production environments.
MDS is your focus: there are many knobs, but MDS is the key to the client
experience. In my environment, MDS failover takes 30-180 seconds,
depending on how mu
On 10/19/22 00:20, Laura Flores wrote:
Hello Ceph users,
The *Ceph User + Dev Monthly Meeting* is happening this *Thursday, October
20th @ 2:00 pm UTC.* The meeting will be on this link:
https://meet.jit.si/ceph-user-dev-monthly. Please feel free to add any
topics you'd like to discuss to the mo
On 26/10/2022 10:57, Stefan Kooman wrote:
On 10/25/22 17:08, Simon Oosthoek wrote:
At this point, one of us noticed that a strange IP address was mentioned:
169.254.0.2. It turns out that a recently added package (openmanage)
and some configuration had added this interface and address to
hardwa
Just one comment on the standby-replay setting: it really depends on
the use case; it can make things worse during failover. Just recently
we had a customer where disabling standby-replay made failovers even
faster and cleaner in a heavily used cluster. With standby-replay they
had to manua
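For what it's worth, standby-replay is a per-filesystem flag, so it is cheap to
test both ways (replace "cephfs" with your filesystem name):

  # disable standby-replay daemons for this filesystem
  ceph fs set cephfs allow_standby_replay false
  # or turn it back on if failover behaves better with it
  ceph fs set cephfs allow_standby_replay true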
Dear list,
I'm looking for some guide or pointers on how people upgrade the
underlying host OS in a Ceph cluster (if this is even the right way to
proceed, I don't know...).
Our cluster is nearing 4.5 years of age, and our Ubuntu 18.04 is
nearing its end-of-support date. We have a mixe
lab issues blocking CentOS container builds and teuthology testing:
* https://tracker.ceph.com/issues/57914
* delays testing for 16.2.11
upcoming events:
* Ceph Developer Monthly (APAC) next week, please add topics:
https://tracker.ceph.com/projects/ceph/wiki/CDM_02-NOV-2022
* Ceph Virtual 2022 st
Hi Simon,
You can just dist-upgrade the underlying OS. Assuming that you installed
the packages from https://download.ceph.com/debian-octopus/, just change
bionic to focal in all apt-sources, and dist-upgrade away.
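Something along these lines should do it (assuming the Ceph repo entry lives
under /etc/apt/sources.list.d/; adjust paths to your hosts):

  # switch every apt source from bionic to focal, then upgrade in place
  sed -i 's/bionic/focal/g' /etc/apt/sources.list /etc/apt/sources.list.d/*.list
  apt update && apt dist-upgrade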
—
Mark Schouten, CTO
Tuxis B.V.
m...@tuxis.nl
-- Original Message --
Hi Gr. Stefan,
I'll reply to the whole list in case anyone else has the same question.
Regarding 16.2.11, there is currently no ETA since we are experiencing some
issues in our testing lab. As soon as the testing lab is fixed, which is
the main priority at the moment, we plan to resume getting in
You should be able to `do-release-upgrade` from bionic/18 to focal/20.
Octopus/15 is shipped for both dists by Ceph.
It's been a while since I did this; the release upgrader might disable the ceph
repo and uninstall the ceph* packages.
However, the OSDs should still be there; re-enable the ceph
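From memory it was roughly the following, so treat it as a sketch and check the
repo file name on your systems:

  # upgrade the host OS (this may comment out third-party repos, including Ceph)
  do-release-upgrade
  # afterwards, re-enable the Ceph repo and reinstall the packages
  sed -i 's/^# *deb/deb/' /etc/apt/sources.list.d/ceph.list
  apt update && apt install ceph-osd ceph-common   # plus whatever else the node needs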
Hi all,
Thanks for the interesting discussion. Actually, it's a bit disappointing
to see that even CephFS with multiple MDS servers is not as HA as we
would like it to be.
I also read that failover time depends on the number of clients. We will
only have three, and they will not do heavy IO. So that
Hey guys,
I ran into a weird issue; I hope you can explain what I'm observing. I'm
testing *Ceph 16.2.10* on *Ubuntu 20.04* in *Google Cloud VMs*. I created 3
instances and attached 4 persistent SSD disks to each instance. I can see
these disks attached as `/dev/sdb, /dev/sdc, /dev/sdd, /dev/sde` de
We've done 14.04 -> 16.04 -> 18.04 -> 20.04 all at various stages of our
ceph cluster life.
The latest 18.04 to 20.04 was painless and we ran:
apt update && apt dist-upgrade -y -o Dpkg::Options::="--force-confdef" -o
Dpkg::Options::="--force-confold"
do-release-upgrade --allow-third-party -f D
Hi,
On Mon, Oct 24, 2022 at 11:22, Satoru Takeuchi wrote:
...
> Could you tell me how to fix this problem and what is the `...rgw.opt` pool.
I understand that the "...rgw.otp" pool is for MFA. In addition, I
consider this behavior a bug and have opened a new issue:
pg autoscaler of rgw pools doesn't work after creatin
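For anyone checking whether they are affected, the autoscaler's per-pool view
can be inspected with:

  # show each pool's size, target ratio, pg_num and autoscale mode
  ceph osd pool autoscale-status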
On 10/26/22 16:14, Simon Oosthoek wrote:
Dear list,
I'm looking for some guide or pointers on how people upgrade the
underlying host OS in a Ceph cluster (if this is even the right way to
proceed, I don't know...).
Our cluster is nearing 4.5 years of age, and our Ubuntu 18.04 is
neari