ceph is a cluster, so reboots aren't an issue (we do set noout during a
planned serial reboot of all machines of the cluster).
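For reference, a minimal sketch of that sequence (the health check
between reboots is an assumption about the procedure, not something
stated above):

    # stop CRUSH from marking rebooted OSDs "out" and rebalancing
    ceph osd set noout
    # ... reboot each node in turn, waiting for recovery in between ...
    ceph osd unset noout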
personally i don't think the hassle of live patching is worth it. it's a
very gross hack that only works well in very specific niche cases. ceph
(as every proper cluster) is [...]
I do it in production
On Thu, Apr 26, 2018, 2:47 AM John Hearns wrote:
> Ronny, talking about reboots, has anyone had experience of live kernel
> patching with CEPH? I am asking out of simple curiosity.
>
>
> On 25 April 2018 at 19:40, Ronny Aasen wrote:
>
> >> the difference in cost between 2 and 3 servers is not HUGE. [...]
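None of the posts above name a live-patching tool; on Ubuntu (which
this thread later turns out to involve) that would typically be
Canonical Livepatch, e.g.:

    sudo canonical-livepatch enable <token>   # token from ubuntu.com/livepatch
    canonical-livepatch status --verbose      # kernel fixes applied live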
Hi Ronny,
Thanks for the detailed answer. It's much appreciated! I will keep this
in the back of my mind, but for now the cost is prohibitive, as we're
using these servers not as storage-only boxes but as full-fledged servers
(i.e. Ceph is mounted locally, and there's a webserver and a database). And 2 [...]
Ronny, talking about reboots, has anyone had experience of live kernel
patching with CEPH? I am asking out of simple curiosity.
On 25 April 2018 at 19:40, Ronny Aasen wrote:
> the difference in cost between 2 and 3 servers is not HUGE. but the
> reliability difference between a size 2/1 pool and a 3/2 pool is
> massive. [...]
the difference in cost between 2 and 3 servers is not HUGE, but the
reliability difference between a size 2/1 pool and a 3/2 pool is
massive. a 2/1 pool is just a single fault during maintenance away from
data loss, but it takes multiple simultaneous faults, and very bad
luck, to break a 3/2 pool.
On 25/04/18 10:52, Ranjan Ghosh wrote:
And, yes, we're running a "size:2 min_size:1" because we're on a very
tight budget. If I understand correctly, this means: changes are made
on one server and *eventually* copied to the other. I hope this
*eventually* means after a few minutes.
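For concreteness: the replication settings being discussed are per
pool and can be inspected and raised on a live cluster ("mypool" is a
placeholder name):

    ceph osd pool get mypool size       # current replica count
    ceph osd pool get mypool min_size   # replicas needed to accept I/O
    ceph osd pool set mypool size 3     # the 3/2 Ronny recommends
    ceph osd pool set mypool min_size 2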
On Wed, 2018-04-25 at 11:52 +0200, Ranjan Ghosh wrote:
> Thanks a lot for your detailed answer. The problem for us, however,
> was
> that we use the Ceph packages that come with the Ubuntu distribution.
> If
> you do an Ubuntu upgrade, all packages are upgraded in one go and the
> server is rebooted. [...]
Thanks a lot for your detailed answer. The problem for us, however, was
that we use the Ceph packages that come with the Ubuntu distribution. If
you do an Ubuntu upgrade, all packages are upgraded in one go and the
server is rebooted. You cannot influence anything or start/stop services
one-by-one. [...]
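One workaround (an assumption about what would help here, not
something suggested in the thread) is to hold the Ceph packages so a
distribution upgrade cannot restart them all at once; exact package
names vary by release:

    sudo apt-mark hold ceph ceph-base ceph-mon ceph-osd ceph-mds
    sudo apt full-upgrade      # upgrades everything except the held packages
    # later, unhold and upgrade Ceph deliberately, service by service
    sudo apt-mark unhold ceph ceph-base ceph-mon ceph-osd ceph-mds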
On Thu, Apr 12, 2018 at 5:05 AM, Mark Schouten wrote:
> On Wed, 2018-04-11 at 17:10 -0700, Patrick Donnelly wrote:
>> No longer recommended. See:
>> http://docs.ceph.com/docs/master/cephfs/upgrading/#upgrading-the-mds-cluster
>
> Shouldn't docs.ceph.com/docs/luminous/cephfs/upgrading include that too?
On Wed, 2018-04-11 at 17:10 -0700, Patrick Donnelly wrote:
> No longer recommended. See:
> http://docs.ceph.com/docs/master/cephfs/upgrading/#upgrading-the-mds-cluster
Shouldn't docs.ceph.com/docs/luminous/cephfs/upgrading include that too?
Hello Ronny,
On Wed, Apr 11, 2018 at 10:25 AM, Ronny Aasen wrote:
> mds: restart the mds's one at a time. you will notice the standby mds taking
> over for the mds that was restarted. do both.
No longer recommended. See:
http://docs.ceph.com/docs/master/cephfs/upgrading/#upgrading-the-mds-cluster
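The gist of the linked procedure, paraphrased (check the page for the
exact steps on your release; "cephfs" is a placeholder filesystem
name): shrink to a single active MDS, upgrade it, then scale back up.

    ceph fs set cephfs max_mds 1   # reduce to one active rank
    # wait for extra ranks to stop, stop the standby daemons, upgrade
    # and restart the remaining active MDS, restart the standbys, then:
    ceph fs set cephfs max_mds 2   # restore the previous value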
ceph upgrades are usually not a problem:
ceph has to be upgraded in the right order. normally, when each service
is on its own machine, this is not difficult.
but when you have mon, mgr, osd, mds, and clients on the same host you
have to do it a bit carefully.
i tend to have a terminal open with [...]
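On such a converged host the order above translates roughly into
(systemd unit names as shipped by the packages; hostname-as-instance-id
is an assumption, check with "systemctl list-units 'ceph*'"):

    systemctl restart ceph-mon@$(hostname -s)
    systemctl restart ceph-mgr@$(hostname -s)
    systemctl restart ceph-osd.target    # all OSDs on this host
    systemctl restart ceph-mds@$(hostname -s)
    # wait for HEALTH_OK between each step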
Ah, nevermind, we've solved it. It was a firewall issue. The only thing
that's weird is that it became an issue immediately after an update.
Perhaps it has something to do with monitor nodes shifting around or
something. Well, thanks again for your quick support, though. It's much
appreciated.
BR
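For anyone hitting the same thing: the ports Ceph needs open (per the
Ceph network-configuration docs of this era) are 6789/tcp for monitors
and 6800-7300/tcp for OSDs and MDSs; a minimal iptables sketch:

    iptables -A INPUT -p tcp --dport 6789 -j ACCEPT       # mon
    iptables -A INPUT -p tcp --dport 6800:7300 -j ACCEPT  # osd/mds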
Thank you for your answer. Do you have any specifics on which thread
you're talking about? I'd be very interested to read about a success
story, because I fear that if I update the other node the whole
cluster will come down.
On 11.04.2018 at 10:47, Marc Roos wrote:
I think you have to update all osd's, mon's etc. [...]
I think you have to update all osd's, mon's etc. I remember running
into a similar issue. You should be able to find more about this in the
mailing list archive.
-----Original Message-----
From: Ranjan Ghosh [mailto:gh...@pw6.de]
Sent: Wednesday, 11 April 2018 16:02
To: ceph-users
Subject: [ceph-users] [...]
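A quick way to check whether daemons are running mixed versions after
a partial upgrade (available since luminous):

    ceph versions    # per-daemon-type counts of running versions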