Hi,
Which video talks about that?
Thanks.
On Mon, May 15, 2023 at 9:51 PM Michal Strnad
wrote:
> Hi,
>
>
> thank you for the response. That sounds like a reasonable solution.
>
> Michal
>
> On 5/15/23 14:15, Konstantin Shalygin wrote:
> > Hi,
> >
> >> On 15 May 2023, at 14:58, Michal Strnad
>
Hi Yaarit,
On Fri, May 12, 2023 at 7:23 PM Yaarit Hatuka wrote:
>
> Hi everyone,
>
> Over this weekend we will run a sync between telemetry crashes and Redmine
> tracker issues.
> This might affect your inbox, depending on your Redmine email notification
> setup. You can set up filters for these
Okay, thanks for verifying that bit, sorry to have gone about it so long. I
guess we could look at connection issues next. I wrote a short python
script that tries to connect to hosts using asyncssh closely to how
cephadm does it (
https://github.com/adk3798/testing_scripts/blob/main/asyncssh-conn
Hi,
I followed this documentation:
https://docs.ceph.com/en/pacific/cephadm/adoption/
This is the error I get when trying to enable cephadm.
ceph mgr module enable cephadm
Error ENOENT: module 'cephadm' reports that it cannot run on the active
manager daemon: loading remoto library:No module na
I just checked every single host. The only processes of cephadm running
where "cephadm shell" from debugging. I closed all of them, so now I can
verify, there's not a single cephadm process running on any of my ceph
hosts. (and since I found the shell processes, I can verify I didn't
have a typ
If it persisted through a full restart, it's possible the conditions that
caused the hang are still present after the fact. The two known causes I'm
aware of are lack of space in the root partition and hanging mount points.
Both would show up as processes in "ps aux | grep cephadm" though. The
latt
As you've already seem to have figured out, "ceph orch device ls" is
populated with the results from "ceph-volume inventory". My best guess to
try and debug this would be to manually run "cephadm ceph-volume --
inventory" (the same as "cephadm ceph-volume inventory", I just like to
separate the cep
This is why I even tried a full cluster shutdown. All Hosts were out, so
there's not a possibility that there's any process hanging. After I
started the nodes, it's just the same as before. All refresh times show
"4 weeks". Like it stopped simoultanously on all nodes.
Some time ago we had a sm
On 5/15/23 13:03, Daniel Baumann wrote:
On 5/15/23 12:11, Frank Schilder wrote:
Because more often than not it isn't.
Sadly, I have to agree. We basically gave up after luminous, where every
update (on our test-ceph cluster) was a major pain. Until then, we
always updated after one week of a n
Hi,
thank you for the response. That sounds like a reasonable solution.
Michal
On 5/15/23 14:15, Konstantin Shalygin wrote:
Hi,
On 15 May 2023, at 14:58, Michal Strnad wrote:
at Cephalocon 2023, it was mentioned several times that for service
tasks such as data deletion via garbage colle
Patrick,
Sorry for delayed response. This seems to be the limit of assistance I’m
capable of providing. My deployments are all ubuntu and bootstrapped (or
upgraded) according to this starting doc:
https://docs.ceph.com/en/quincy/cephadm/install/#cephadm-deploying-new-cluster
It is very confus
On 5/15/23 12:11, Frank Schilder wrote:
> Because more often than not it isn't.
Sadly, I have to agree. We basically gave up after luminous, where every
update (on our test-ceph cluster) was a major pain. Until then, we
always updated after one week of a new release.
To add one more point..
The
This is sort of similar to what I said in a previous email, but the only
way I've seen this happen in other setups is through hanging cephadm
commands. The debug process has been, do a mgr failover, wait a few
minutes, see in "ceph orch ps" and "ceph orch device ls" which hosts have
and have not be
Hi all,
I have a problem with exporting 2 different sub-folder ceph-fs kernel mounts
via nfsd to the same IP address. The top-level structure on the ceph fs is
something like /A/S1 and /A/S2. On a file server I mount /A/S1 and /A/S2 as two
different file systems under /mnt/S1 and /mnt/S2 using
Dear Xiubo,
I uploaded the cache dump, the MDS log and the dmesg log containing the
snaptrace dump to
ceph-post-file: 763955a3-7d37-408a-bbe4-a95dc687cd3f
Sorry, I forgot to add user and description this time.
A question about trouble shooting. I'm pretty sure I know the path where the
error
Hi all, to avoid a potentially wrong impression I would like to add some words.
Slightly out of order:
> By the way, regarding performance I recommend the Cephalocon
> presentations by Adam and Mark. There you can learn what efforts are
> made to improve ceph performance for current and future ve
Hi,
I tried a lot of different approaches but I didn't have any success so far.
"ceph orch ps" still doesn't get refreshed.
Some examples:
mds.mds01.ceph06.huavsw ceph06 starting -
---
mds.mds01.ceph06.rrxmks ceph06 error
Hi Marc,
I planned to put it on-line. The hold-back is that the main test is un-taring a
nasty archive and this archive might contain personal information, so I can't
just upload it as is. I can try to put together a similar archive from public
sources. Please give me a bit of time. I'm also a
Adam & Mark topics: bluestore and bluestore v2
https://youtu.be/FVUoGw6kY5k
https://youtu.be/7D5Bgd5TuYw
___
ceph ambassador DACH
ceph consultant since 2012
Clyso GmbH - Premier Ceph Foundation Member
https://www.clyso.com/
Am 15.05.23 um 16:47 schrieb Jens G
Don't know if it helps, but we have also experienced something similar
with osd images. We changed the image tag from version to sha and it did
not happen again.
___
ceph ambassador DACH
ceph consultant since 2012
Clyso GmbH - Premier Ceph Foundation Member
htt
https://www.youtube.com/playlist?list=PLrBUGiINAakPd9nuoorqeOuS9P9MTWos3
-Original Message-
From: Marc
Sent: Monday, May 15, 2023 4:42 PM
To: Joachim Kraftmayer - ceph ambassador ; Frank
Schilder ; Tino Todino
Cc: ceph-users@ceph.io
Subject: [ceph-users] Re: CEPH Version choice
>
>
>
> By the way, regarding performance I recommend the Cephalocon
> presentations by Adam and Mark. There you can learn what efforts are
> made to improve ceph performance for current and future versions.
>
Link?
___
ceph-users mailing list -- ceph-user
Hi,
I know the problems that Frank has raised. However, it should also be
mentioned that many critical bugs have been fixed in the major versions.
We are working on the fixes ourselves.
We and others have written a lot of tools for ourselves in the last 10
years to improve migration/update
CEPH configurations can be forced to use multipath but my experience is
that it is painful and manual at best. The orchestrator design criteria
supports low-cost/commodity hardware and multipath is a sophistication
not yet addressed. The orchestrator sees all of the available device paths
with no a
Hi,
Pacific and quincy still supports barematel deloyed setup?
Istvan Szabo
Staff Infrastructure Engineer
---
Agoda Services Co., Ltd.
e: istvan.sz...@agoda.com
---
-Original Message-
From: Il
I have upgraded dozens of clusters 14 -> 16 using the methods described in
the docs, and when followed precisely no issues have arisen. I would
suggest moving to a release that is receiving backports still (pacific or
quincy). The important aspects are only doing one system at a time. In the
case o
On 12/15/22 15:31, Stolte, Felix wrote:
Hi Patrick,
we used your script to repair the damaged objects on the weekend and it went
smoothly. Thanks for your support.
We adjusted your script to scan for damaged files on a daily basis, runtime is
about 6h. Until thursday last week, we had exactly
I think with the `config set` commands there is logic to notify the
relevant mgr modules and update their values. That might not exist with
`config rm`, so it's still using the last set value. Looks like a real bug.
Curious what happens if the mgr restarts after the `config rm`. Whether it
goes bac
Hi,
> On 15 May 2023, at 14:58, Michal Strnad wrote:
>
> at Cephalocon 2023, it was mentioned several times that for service tasks
> such as data deletion via garbage collection or data replication in S3 via
> zoning, it is good to do them on dedicated radosgw gateways and not mix them
> with
Hi all,
at Cephalocon 2023, it was mentioned several times that for service
tasks such as data deletion via garbage collection or data replication
in S3 via zoning, it is good to do them on dedicated radosgw gateways
and not mix them with gateways used by users. How can this be achieved?
How
Hello.
I think i found a bug in cephadm/ceph orch:
Redeploying a container image (tested with alertmanager) after removing
a custom `mgr/cephadm/container_image_alertmanager` value, deploys the
previous container image and not the default container image.
I'm running `cephadm` from ubuntu 22.
>
> We set up a test cluster with a script producing realistic workload and
> started testing an upgrade under load. This took about a month (meaning
> repeating the upgrade with a cluster on mimic deployed and populated
Hi Frank, do you have such scripts online? On github or so? I was thinking o
> What are the main reasons for not upgrading to the latest and greatest?
Because more often than not it isn't.
I guess when you write "latest and greatest" you talk about features. When we
admins talk about "latest and greatest" we talk about stability. The times that
one could jump with a pro
>
> I've been reading through this email list for a while now, but one thing
> that I'm curious about is why a lot of installations out there aren't
> upgraded to the latest version of CEPH (Quincy).
>
> What are the main reasons for not upgrading to the latest and greatest?
If you are starting
Hi,
> On 15 May 2023, at 11:37, Tino Todino wrote:
>
> What are the main reasons for not upgrading to the latest and greatest?
One of the main reasons - "just can't", because your Ceph-based products will
get worse at real (not benchmark) performance, see [1]
[1]
https://lists.ceph.io/hyper
Hi all,
I've been reading through this email list for a while now, but one thing that
I'm curious about is why a lot of installations out there aren't upgraded to
the latest version of CEPH (Quincy).
What are the main reasons for not upgrading to the latest and greatest?
Thanks.
Tino
This E-m
why are you still not on 14.2.22?
>
> Yes, the documents show an example of upgrading from Nautilus to
> Pacific. But I'm not really 100% trusting the Ceph documents, and I'm
> also afraid of what if Nautilus is not compatible with Pacific in some
> operations of monitor or osd =)
___
37 matches
Mail list logo