Hi!
Adam! Big thanx!
"ceph config rm osd.91 container_image" completly solve this trouble.
I don't understand why this happened, but at least now everything works.
Thank you so much again!
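In case anyone else hits the same thing, a rough sketch of how such a stray
override can be spotted and removed (osd.91 is just the daemon from this thread):

ceph config dump | grep container_image   # list any overrides mentioning a container image
ceph config get osd.91 container_image    # inspect the value for the affected daemon
ceph config rm osd.91 container_image     # and drop it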
- Original Message -
> From: "Fyodor Ustinov"
> To: "Ad
questions:
1. How did it get there?
2. How do I delete it? As far as I understand, this field is not editable.
- Original Message -
> From: "Adam King"
> To: "Fyodor Ustinov"
> Cc: "ceph-users"
> Sent: Tuesday, 1 February, 2022 17:45:13
> Subject:
Hi!
No more ideas? :(
- Original Message -
> From: "Fyodor Ustinov"
> To: "Adam King"
> Cc: "ceph-users"
> Sent: Friday, 28 January, 2022 23:02:26
> Subject: [ceph-users] Re: cephadm trouble
> Hi!
>
>> Hmm, I'm not
podman pull s-8-2-1:/dev/bcache0
/usr/bin/podman: stderr Error: invalid reference format
ERROR: Failed command: /usr/bin/podman pull s-8-2-1:/dev/bcache0
>
> Thanks,
>
> - Adam King
>
> On Thu, Jan 27, 2022 at 7:06 PM Fyodor Ustinov wrote:
>
>> Hi!
>>
>> I
2.18.1
quay.io/prometheus/node-exporter:v0.18.1
quay.io/prometheus/alertmanager:v0.20.0
quay.io/ceph/ceph-grafana:6.7.4
docker.io/library/haproxy:2.3
docker.io/arcts/keepalived
>
> Thanks,
>
> - Adam King
Thanks a lot!
WBR,
Fyodor.
>
> On Thu, Jan 27, 2022 at 9:10 AM Fyo
Hi!
I rebooted the nodes running mgr and now I see the following in the cephadm.log:
As I understand it, cephadm is trying to execute some failed command of mine
(I wonder which one); it does not succeed, but it keeps retrying over and over.
How do I stop it from retrying?
2022-01-27 16:02:58,123
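If it is the orchestrator re-queuing a failed action, one blunt way to stop the
loop (a sketch, not a verified fix for this exact case) is to pause cephadm's
background work and resume it once things are cleaned up:

ceph orch ps        # what cephadm currently thinks it manages
ceph orch pause     # stop cephadm background activity, including the retries
ceph orch resume    # re-enable it afterwards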
Hi!
I restarted the mgr - it didn't help. Or do you mean something else?
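For what it's worth, restarting the mgr daemon and failing it over are not
quite the same thing; an explicit failover would look roughly like this, where
<active-mgr> is a placeholder for whatever "ceph -s" reports as active:

ceph -s | grep mgr           # note the active mgr instance
ceph mgr fail <active-mgr>   # force a standby mgr to take over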
> Hi,
>
> have you tried to failover the mgr service? I noticed similar
> behaviour in Octopus.
>
>
> Zitat von Fyodor Ustinov :
>
>> Hi!
>>
>> No one knows how to fix it?
Hi!
No one knows how to fix it?
- Original Message -
> From: "Fyodor Ustinov"
> To: "ceph-users"
> Sent: Tuesday, 25 January, 2022 11:29:53
> Subject: [ceph-users] How to remove stuck daemon?
> Hi!
>
> I have Ceph cluster version 16.2.7 wi
Hi!
I have Ceph cluster version 16.2.7 with this error:
root@s-26-9-19-mon-m1:~# ceph health detail
HEALTH_WARN 1 failed cephadm daemon(s)
[WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
daemon osd.91 on s-26-8-2-1 is in error state
But I don't have that osd anymore. I deleted it.
r
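In case it helps someone searching the archives later: such a leftover daemon
record is usually cleared through the orchestrator, roughly like this (osd.91
and s-26-8-2-1 are the names from this thread; <cluster-fsid> is a placeholder):

ceph orch ps s-26-8-2-1              # what cephadm still thinks runs on that host
ceph orch daemon rm osd.91 --force   # drop the dead daemon record
# or, if the orchestrator refuses, directly on the host:
cephadm rm-daemon --name osd.91 --fsid <cluster-fsid> --force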
Hi!
> Btw, the first interesting find: I enabled 'rbd_balance_parent_reads' on
> the clients, and single-thread reads now scale much better, I routinely get
> similar readings from a single disk doing 4k reads with 1 thread:
It seems to me that this function should not give any gain in "real" loa
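For reference, the option from the quoted message is a client-side librbd
setting; enabling it typically looks like this in the client's ceph.conf
(whether it helps a "real" workload is exactly the question above):

[client]
rbd_balance_parent_reads = true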
Hi!
Yes. You're right. Ganesha does. But Ceph doesn't use all of Ganesha's
functionality.
In the Ceph dashboard there is no way to enable NFSv3, only NFSv4.
- Original Message -
> From: "Marc"
> To: "Fyodor Ustinov"
> Cc: "ceph-users&quo
Hi!
I think Ceph only supports NFSv4?
- Original Message -
> From: "Marc"
> To: "Fyodor Ustinov" , "ceph-users"
> Sent: Monday, 4 October, 2021 12:44:38
> Subject: RE: nfs and showmount
> I can remember asking the same some time ago. I thi
Hi!
As I understand it, the built-in NFS server does not support the
"showmount -e" command?
WBR,
Fyodor.
Hi!
It looks exactly the same as the problem I had.
Try the `cephadm ls` command on the `rhel1.robeckert.us` node.
- Original Message -
> From: "Robert W. Eckert"
> To: "ceph-users"
> Sent: Monday, 20 September, 2021 18:28:08
> Subject: [ceph-users] Getting cephadm "stderr:Inferring
Hi!
> no problem. Maybe you played around and had this node in the placement
> section previously? Or did it have the mon label? I'm not sure, but
> the important thing is that you can clean it up.
Yes! I did play with another cluster before and forgot to completely clear that
node! And the fsid
Hi!
> Was there a MON running previously on that host? Do you see the daemon
> when running 'cephadm ls'? If so, remove it with 'cephadm rm-daemon
> --name mon.s-26-9-17'
Hmm. 'cephadm ls' running directly on the node does show that there is a mon. I
don't quite understand where it came from and I
Hi!
After upgrading to version 16.2.6, my cluster is in this state:
root@s-26-9-19-mon-m1:~# ceph -s
cluster:
id: 1ef45b26-dbac-11eb-a357-616c355f48cb
health: HEALTH_WARN
failed to probe daemons or devices
In logs:
9/17/21 1:30:40 PM[ERR]cephadm exited with an error co
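A hedged pointer for digging further into that: the full cephadm error text
usually lands in the cephadm cluster log channel, e.g.:

ceph log last cephadm
ceph health detail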
Hi!
> Correction: Containers live at https://quay.io/repository/ceph/ceph now.
>
As I understand it, the command
ceph orch upgrade start --ceph-version 16.2.6
is broken and will not be able to update Ceph?
root@s-26-9-19-mon-m1:~# ceph orch upgrade start --ceph-version 16.2.6
Initiating upgrade
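If the version shorthand cannot resolve the image any more (the containers
moved to quay.io around that release), a common workaround is to point the
upgrade at the image explicitly, something like:

ceph orch upgrade start --image quay.io/ceph/ceph:v16.2.6
ceph orch upgrade status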
Hi!
It's not my report - https://tracker.ceph.com/issues/45009
- Original Message -
> From: "Hans van den Bogert"
> To: "ceph-users"
> Sent: Tuesday, 27 July, 2021 11:20:48
> Subject: [ceph-users] Re: we're living in 2005.
>> Try to install a completely new ceph cluster from scratch o
Hi!
>>> docs.ceph.io ? If there’s something that you’d like to see added there,
>>> you’re
>>> welcome to submit a tracker ticket, or write to me privately. It is not
>>> uncommon for documentation enhancements to be made based on mailing list
>>> feedback.
>> Documentation...
>> Try to install
Hi!
> docs.ceph.io ? If there’s something that you’d like to see added there,
> you’re
> welcome to submit a tracker ticket, or write to me privately. It is not
> uncommon for documentation enhancements to be made based on mailing list
> feedback.
Documentation...
Try to install a completely
Hi!
I have a freshly installed Pacific
root@s-26-9-19-mon-m1:~# ceph version
ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific (stable)
I managed to bring it to this state:
root@s-26-9-19-mon-m1:~# ceph health detail
HEALTH_ERR Module 'cephadm' has failed: dashboard iscsi-gate
Hi!
Thanks a lot for your help!
The problem turned out to be that the Prometheus container does not use the
hosts file, only DNS - and all my servers were listed only in the hosts file.
- Original Message -
> From: "Ernesto Puerta"
> To: "Fyodor Ustinov"
Hi!
I installed a fresh 16.2.4 cluster
as described in https://docs.ceph.com/en/latest/cephadm/#cephadm
Everything works except for one thing: graphs appear only under Hosts /
Overall Performance (and only CPU and network); everywhere else it just says
"no data".
What could I h
Hi!
> I really do not care about these 1-2 days in between, why are you? Do
> not install it, configure yum to lock a version, update your local repo
> less frequent.
I already asked this question - what should those who decide to install Ceph
for the first time today do?
ceph-deploy instal
Hi!
Again. A new version in the repository without an announcement.
:(
I wonder whom I should write to and complain to, so that an announcement always
comes first and only then the new version appears in the repository?
WBR,
Fyodor.
Hi!
Thank you very much!
It remains to understand why this link is not in the documentation. :)
- Original Message -
> From: "Torben Hørup"
> To: "Fyodor Ustinov"
> Cc: "ceph-users"
> Sent: Friday, 18 October, 2019 15:03:09
> Subject:
Hi!
The Ceph documentation requires "tcmu-runner-1.4.0 or newer package", but I cannot
find this package for CentOS.
Maybe someone knows where to download this package?
WBR,
Fyodor.
Hi!
The recommendation is: 1 GB of RAM per 1 TB of disk space, plus 1-2 GB for each OSD.
In any case I recommend reading this page:
https://docs.ceph.com/docs/master/start/hardware-recommendations
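As a rough illustration of that rule (not a sizing guarantee): a node with
12 x 8 TB OSDs would want about 12 x 8 = 96 GB for the data plus another
12-24 GB for the daemons, i.e. on the order of 110-120 GB of RAM.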
- Original Message -
> From: "Amudhan P"
> To: "Ashley Merrick"
> Cc: "ceph-users"
> Sent: Sunday, 22 September
Hi!
No. I am looking for information on https://ceph.io/ about why I should not install
version 14.2.4 even though it is in the repository, and how it differs from 14.2.3.
- Original Message -
> From: "Bastiaan Visser"
> To: ceph-users@ceph.io
> Sent: Tuesday, 17 September, 2019 10:52:54
> Subject: [cep
Hi!
Fine! Maybe you know what a new user should do - one who does not yet have a local
copy of the repository and is now trying to install the latest version?
- Original Message -
> From: "Ronny Aasen"
> To: ceph-users@ceph.io
> Sent: Tuesday, 17 September, 2019 09:46:03
> Subject: [ceph-
Hi!
I created bug https://tracker.ceph.com/issues/41832
Has anyone else encountered this problem?
WBR,
Fyodor.
Hi!
Cache tiering is a great solution if the cache size is larger than the hot
data. Even better if the data can cool quietly in the cache. Otherwise, it’s
really better not to do this.
- Original Message -
> From: "Wido den Hollander"
> To: "Eikermann, Robert" , ceph-users@ceph.io
> S
Hi!
Can anybody help me - if I turn on bluestore_default_buffered_write, will I get
WriteBack or WriteThrough behaviour?
I could not figure this out from the documentation.
And the second question - is there, in general, an analog of writeback in
the OSD (I perfectly understand the danger of s
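For anyone who wants to experiment anyway, the option can be inspected and
changed at runtime roughly like this (a sketch; the semantics should be
double-checked against your release before trusting it with data):

ceph config get osd bluestore_default_buffered_write
ceph config set osd bluestore_default_buffered_write true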
Hi!
The problem is not so much that you or I cannot update.
The problem is that now you cannot install nautilus in at least one of the
standard ways.
- Original Message -
> From: "EDH - Manuel Rios Fernandez"
> To: "Fyodor Ustinov" , "ceph-users"
-
> From: "Fyodor Ustinov"
> To: "ceph-users"
> Sent: Wednesday, 4 September, 2019 13:18:55
> Subject: [ceph-users] CEPH 14.2.3
> Dear CEPH Developers!
>
> We all respect you for your work.
>
> But I have one small request.
>
> Please, m
Hi!
"oflag=sync" or in some cases "direct,sync" avoid any cache usage.
- Original Message -
> From: "Vitaliy Filippov"
> To: "Fyodor Ustinov"
> Cc: "EDH - Manuel Rios Fernandez" , "ceph-users"
>
> Sent: We
Dear CEPH Developers!
We all respect you for your work.
But I have one small request.
Please, make an announcement about the new version and prepare the
documentation before posting the new version to the repository.
It is very, very, very necessary.
WBR,
Fyodor.
>> but not when power cycling which would reinforce a hardware component being
>> the
>> culprit in this case.
>>
>> Greetings
>> Fabian
>>
>> Am Dienstag, den 03.09.2019, 14:13 +0300 schrieb Fyodor Ustinov:
>>> Hi!
>>>
>>> I unde
Hi!
In this case, using dd is quite acceptable.
- Original Message -
> From: vita...@yourcmc.ru
> To: "Fyodor Ustinov"
> Cc: "EDH - Manuel Rios Fernandez" , "ceph-users"
>
> Sent: Tuesday, 3 September, 2019 15:18:23
> Subject: Re: [cep
optional. Power cycle enough.
> Yes indeed very funny case, are you sure sdd/sdc etc are not being
> reconnected(renumbered) to different drives because of some bus reset or
> other failure? Or maybe some udev rule is messing things up?
>
>
>
> -Original Message-
&
Hi!
Micron_1100_MTFD
But it is not only that the SSD is "too slow". The HDD is also "too fast".
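For comparing the two devices, the usual single-queue sync-write test looks
something like this (a generic sketch; /dev/sdX is a placeholder and the test
destroys data on it):

fio --name=synctest --filename=/dev/sdX --direct=1 --sync=1 --rw=write \
    --bs=4k --iodepth=1 --numjobs=1 --runtime=60 --time_based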
> Hi Fyodor
>
> Whats the model of SSD?
>
> Regards
>
>
> -----Original Message-----
> From: Fyodor Ustinov
> Sent: Tuesday, 3 September 2019, 13:13
Hi!
I understand that this question is not quite for this mailing list, but
nonetheless, experts who may have encountered this have gathered here.
I have 24 servers, and on each of them, after six months of operation, the
following began to happen:
[root@S-26-5-1-2 cph]# uname -a
Linux S-26-5-1-2 5.2.11-1.el