I think I found the reason.
The cephadm script uses the Ubuntu repo instead of the Ceph repo,
so I get the older version 15 ...
root@node1:~# ./cephadm -v add-repo --release pacific
Could not locate podman: podman not found
Installing repo GPG key from
https://download.ceph.com/keys/release.asc.
Hi,
On 24.06.21 09:34, Jana Markwort wrote:
>
> I think I found the reason.
> The cephadm script uses the Ubuntu repo instead of the Ceph repo,
> so I get the older version 15 ...
>
> root@node1:~# ./cephadm -v add-repo --release pacific
> Could not locate podman: podman not found
> Installing repo GPG key from
> https://download.ceph.com/keys/release.asc.
Dear List,
since my update yesterday from 14.2.18 to 14.2.20 I got an unhealthy
cluster. As far as I remember, it appeared after rebooting the second
server. There are 7 missing objects from PGs of a cache pool (pool 3).
This pool has now been changed from writeback to proxy and I'm not able to
flush all
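What I am running is roughly the flush/evict sequence from the cache
tiering docs; "cachepool" below is only a placeholder for the actual name
of pool 3:

# non-blocking variant: skips objects that are currently locked or in use
rados -p cachepool cache-try-flush-evict-all
# full variant: flushes dirty objects and evicts everything it can
rados -p cachepool cache-flush-evict-all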
OK, the problem is the GPG key:
root@node1:~# ./cephadm -v add-repo --release pacific
Could not locate podman: podman not found
Installing repo GPG key from
https://download.ceph.com/keys/release.asc...
Installing repo file at /etc/apt/sources.list.d/ceph.list...
...
W: https://download.ceph.c
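As a workaround I would add the key and repo by hand, roughly like this
(the release codename "focal" is an assumption for Ubuntu 20.04, adjust as
needed; the key URL is the one from the cephadm output above):

# fetch the release key that cephadm tried to install and register it with apt
wget -q -O- 'https://download.ceph.com/keys/release.asc' | apt-key add -
# point apt at the pacific repo instead of the distro packages
echo "deb https://download.ceph.com/debian-pacific/ focal main" > /etc/apt/sources.list.d/ceph.list
apt update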
If I understand the documentation for the placements in "ceph orch
apply" correctly, I can place the daemons by number or on specific
hosts. But what I want is:
"Start 3 mgr services, and one of them should be started on node ceph01."
How can I achieve this?
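For reference, the placement forms I found in the docs look roughly like
this (host names other than ceph01 are just examples):

# count only: the orchestrator picks any 3 hosts
ceph orch apply mgr --placement="3"
# explicit hosts: run the daemons exactly on these nodes
ceph orch apply mgr --placement="ceph01 ceph02 ceph03"
# count plus host list: 3 daemons, chosen from the listed hosts
ceph orch apply mgr --placement="3 ceph01 ceph02 ceph03"

As far as I can tell, none of these says "3 daemons, and one of them pinned
to ceph01".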
Thanks!
On Wed, Jun 23, 2021 at 6:39 PM Daniel Iwan wrote:
> this looks like a bug, the topic should be created in the right tenant.
>> please submit a tracker for that.
>>
>
> Thank you for confirming.
> Created here https://tracker.ceph.com/issues/51331
>
>
thanks
> yes. topics are owned by the tena
Hi everyone,
Today is the final day for Ceph Month! Here's today's schedule:
9:00 ET / 15:00 CEST Evaluating CephFS Performance vs. Cost on
High-Density Commodity Disk Servers [Dan van der Ster]
9:30 ET / 15:30 CEST Ceph Market Development Working Group BoF
10:10 ET / 16:10 CEST Ceph Community Am
Hi Marc,
We can look into that for future events. For this event, we
recommended people subscribe to the Ceph Community Calendar which does
display the times in your local time.
https://calendar.google.com/calendar/embed?src=9ts9c7lt7u1vic2ijvvqqlfpo0%40group.calendar.google.com
On Tue, Jun 22,
Dear Patrick,
thanks for letting me know.
Could you please consider making this a ceph client mount option, for example,
'-o fast_move', that enables a code path that enforces an mv to be a proper
atomic mv with the risk that in some corner cases the target quota is overrun?
With this option
I notice on
https://docs.ceph.com/en/latest/rbd/iscsi-initiator-esx/
that it lists a requirement of
"VMware ESX 6.5 or later using Virtual Machine compatibility 6.5 with VMFS 6."
Could anyone enlighten me as to why this specific limit is in place?
Officially knowing something like, "you have to
Dear Ceph Folks,
Does anyone have real experience with using rbd mirroring for disaster recovery
over 1000 miles away?
I am planning to use the Ceph rbd mirroring feature for DR, and have no real
experience. Could anyone share good or bad experiences here? I am thinking of
using iSCSI over rbd-nbd m
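What I have in mind is roughly the standard two-cluster setup from the
rbd-mirror docs (pool, image and site names below are placeholders):

# on both clusters: enable per-image mirroring on the pool
rbd mirror pool enable mypool image
# on site-a: create a bootstrap token and copy it to site-b
rbd mirror pool peer bootstrap create --site-name site-a mypool > token
# on site-b, where an rbd-mirror daemon runs: import the token
rbd mirror pool peer bootstrap import --site-name site-b mypool token
# per image: enable snapshot-based mirroring
rbd mirror image enable mypool/myimage snapshot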
Hi Philip,
Part of it will be down to VMFS supporting features for iSCSI and then that is
chained to specific ESXi and VM levels.
Andrew Ferris
Network & System Management
UBC Centre for Heart & Lung Innovation
St. Paul's Hospital, Vancouver
http://www.hli.ubc.ca
>>> Philip Brown 6/24/2
I would appreciate it if anyone could call out specific features involved here.
"upgrade because it's better" doesnt usually fly in cost justification writeups.
- Original Message -
From: "Andrew Ferris"
To: "ceph-users" , "Philip Brown"
Sent: Thursday, June 24, 2021 1:13:02 PM
Subjec
On Sat, Jun 19, 2021 at 3:43 PM Nico Schottelius
wrote:
> Good evening,
>
> as an operator running Ceph clusters based on Debian and later Devuan
> for years and recently testing ceph in rook, I would like to chime in to
> some of the topics mentioned here with short review:
>
> Devuan/OS package:
On Sun, Jun 20, 2021 at 9:51 AM Marc wrote:
> Remarks about your cephadm approach/design:
>
> 1. I am not interested in learning podman, rook or kubernetes. I am using
> mesos which is also on my osd nodes to use the extra available memory and
> cores. Furthermore your cephadm OC is limited to o
On Tue, Jun 22, 2021 at 11:58 AM Martin Verges wrote:
>
> > There is no "should be", there is no one answer to that, other than 42.
> Containers have been there before Docker, but Docker made them popular,
> exactly for the same reason as why Ceph wants to use them: ship a known
> good version (CI
On Tue, Jun 22, 2021 at 1:25 PM Stefan Kooman wrote:
> On 6/21/21 6:19 PM, Nico Schottelius wrote:
> > And while we are at claiming "on a lot more platforms", you are at the
> > same time EXCLUDING a lot of platforms by saying "Linux based
> > container" (remember Ceph on FreeBSD? [0]).
>
> Indeed
Hello.
Today we've experienced a complete CEPH cluster outage - total loss of
power in the whole infrastructure.
6 osd nodes and 3 monitors went down at the same time. CEPH 14.2.10
This resulted in unfound objects, which were "reverted" in a hurry with
ceph pg mark_unfound_lost revert
In retrosp
On 6/24/21 5:34 PM, Frank Schilder wrote:
Please, in such situations where developers seem to have to make a definite
choice, consider the possibility of letting operators choose the
alternative that suits their use case best. Adding further options seems far
better than limiting function
Followup. This is what's written in logs when I try to fix one PG:
ceph pg repair 3.60
primary osd log:
2021-06-25 01:07:32.146 7fc006339700 -1 log_channel(cluster) log [ERR] :
repair 3.53 3:cb4336ff:::rbd_data.e2d302dd699130.69b3:6aa5 : is
an unexpected clone
2021-06-25 01:07:32.146 7
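Before and after the repair attempts I am inspecting the PG roughly like
this (3.60 as an example, the same PG as in the repair command above):

ceph health detail
# objects the scrub flagged as inconsistent in that PG
rados list-inconsistent-obj 3.60 --format=json-pretty
# "is an unexpected clone" points at snapshot metadata, so the snapset view too
rados list-inconsistent-snapset 3.60 --format=json-pretty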
Hi Stefan,
> Isn't that where LazyIO is for? See ...
Yes, it is, to some extent. However, there are many large HPC applications that
will not start using exotic libraries for IO. A parallel file system offers
everything that is needed with standard OS library calls. This is better solved
on th
I've actually had rook-ceph not proceed with something that I would have
continued on with. Turns out I was wrong and it was right. Its checking was
more thorough than mine. Thought that was pretty cool. It eventually cleared
itself and finished up.
For a large ceph cluster, the orchestration is
I bumped into this recently:
https://samuel.karp.dev/blog/2021/05/running-freebsd-jails-with-containerd-1-5/
:)
Kevin
From: Sage Weil
Sent: Thursday, June 24, 2021 2:06 PM
To: Stefan Kooman
Cc: Nico Schottelius; Kai Börnert; Marc; ceph-users
Subject: [ce