[ceph-users] Re: Questions RE: Ceph/CentOS/IBM

2021-03-03 Thread Radoslav Milanov
+1 On 3.3.2021 at 11:37, Marc wrote: Secondly, are we expecting IBM to "kill off" Ceph as well? Stop spreading rumors! Really! One can take it further and say kill product x, y, z until none exist! This is natural / logical thinking; the only one to blame here is IBM/redhat. If you have no

[ceph-users] Cephadm upgrade to Pacific problem

2021-04-14 Thread Radoslav Milanov
Hello, The cluster is 3 nodes on Debian 10. Started a cephadm upgrade on a healthy 15.2.10 cluster. The managers were upgraded fine, then the first monitor went down for upgrade and never came back. Looking at the unit files, the container fails to run because of an error: root@host1:/var/lib/ceph/97d9f40e-9d33-
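
A minimal troubleshooting sketch for a stalled cephadm mon upgrade; the fsid is truncated above, so a placeholder is used, and daemon/unit names follow standard cephadm conventions and may differ on your cluster:

  # Pause the upgrade and see what cephadm thinks is happening
  ceph orch upgrade status
  ceph orch upgrade pause
  # On the mon host, inspect the daemon cephadm deployed and its logs
  cephadm ls
  cephadm logs --name mon.host1
  journalctl -u ceph-<fsid>@mon.host1.service -n 100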

[ceph-users] Re: [External Email] Cephadm upgrade to Pacific problem

2021-04-14 Thread Radoslav Milanov
t on if we want to run containers. -Dave -- Dave Hall Binghamton University kdh...@binghamton.edu On Wed, Apr 14, 2021 at 12:51 PM Radoslav Milanov <radoslav.mila...@gmail.com> wrote: Hello, Cluster is 3 nodes Debian 10. Started ceph

[ceph-users] Issues upgrading to 16.2.1

2021-04-20 Thread Radoslav Milanov
Hello, Tried a cephadm upgrade from 16.2.0 to 16.2.1. The managers were updated first, then the process halted on the first monitor being upgraded. The monitor fails to start: root@host3:/var/lib/ceph/c8ee2878-9d54-11eb-bbca-1c34da4b9fb6/mon.host3# /usr/bin/docker run --rm --ipc=host --net=host --entrypoint
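
A rough sketch for reproducing the failure by hand, assuming the standard cephadm directory layout shown in the prompt above; the full docker arguments live in the generated unit.run file, and the image registry/tag and redeploy syntax below are examples, not values from the thread:

  cd /var/lib/ceph/c8ee2878-9d54-11eb-bbca-1c34da4b9fb6/mon.host3
  cat unit.run   # the full docker run command cephadm generated for this mon
  journalctl -u ceph-c8ee2878-9d54-11eb-bbca-1c34da4b9fb6@mon.host3.service -n 100
  # If the 16.2.1 image itself is the problem, the daemon can be pointed back
  # at the previous image (argument syntax varies between cephadm releases):
  ceph orch daemon redeploy mon.host3 quay.io/ceph/ceph:v16.2.0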

[ceph-users] Re: upgrading from Nautilus on CentOS7 to Octopus on Ubuntu 20.04.2

2021-06-30 Thread Radoslav Milanov
If stream is so great, why is RHEL different? On 30.6.2021 at 03:49, Teoman Onay wrote: For similar reasons, CentOS 8 stream, as opposed to every other CentOS released before, is very experimental. I would never go into production with CentOS 8 stream. Experimental?? Looks like you still

[ceph-users] Re: upgrading from Nautilus on CentOS7 to Octopus on Ubuntu 20.04.2

2021-06-30 Thread Radoslav Milanov
-stream releases. Stream is one minor release ahead of RHEL, which means it already contains part of the fixes that will be released a few months later in RHEL. It could be considered even more stable, as it already contains part of the fixes. On Wed, 30 Jun 2021, 15:15 Radoslav Milanov

[ceph-users] Re: Error: UPGRADE_FAILED_PULL: Upgrade: failed to pull target image

2021-09-21 Thread Radoslav Milanov
There is a problem upgrading ceph-iscsi from 16.2.5 to 16.2.6: 2021-09-21T12:43:58.767556-0400 mgr.nj3231.wagzhn [ERR] cephadm exited with an error code: 1, stderr: Redeploy daemon iscsi.iscsi.nj3231.mqeari ... Creating ceph-iscsi config... Write file: /var/lib/ceph/c6c8bc66-1716-11ec-b029-1c34da
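
A hedged sketch for digging into the failed iscsi redeploy; the daemon name is taken from the error above, and the commands assume a cephadm-managed Pacific cluster:

  # The full cephadm error text usually ends up in the cluster log
  ceph log last cephadm
  # State of the iscsi daemons as the orchestrator sees them
  ceph orch ps --daemon-type iscsi
  # Retry the redeploy of the failing daemon once the underlying issue is fixed
  ceph orch daemon redeploy iscsi.iscsi.nj3231.mqeari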

[ceph-users] Re: [IMPORTANT NOTICE] Potential data corruption in Pacific

2021-10-29 Thread Radoslav Milanov
Not everyone is subscribed to low-traffic MLs. Something like this should be posted on all lists, I think. On 29.10.2021 at 05:43, Daniel Poelzleithner wrote: On 29/10/2021 11:23, Tobias Fischer wrote: I would propose to either create a separate mailing list for this kind of information from t

[ceph-users] Re: Ceph-Dokan Mount Caps at ~1GB transfer?

2021-11-01 Thread Radoslav Milanov
Have you tried this with the native client under Linux? Could it just be slow cephfs? On 1.11.2021 at 06:40, Mason-Williams, Gabryel (RFI,RAL,-) wrote: Hello, We have been trying to use Ceph-Dokan to mount cephfs on Windows. When transferring any data below ~1GB the transfer speed is as
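
A quick comparison sketch on a Linux client using the kernel cephfs mount; the monitor address, credentials and paths below are placeholders, not values from the thread:

  sudo mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs \
      -o name=admin,secretfile=/etc/ceph/admin.secret
  # Write well past the ~1GB mark to see whether throughput drops the same way
  dd if=/dev/zero of=/mnt/cephfs/testfile bs=4M count=1024 oflag=direct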