On Wed, Jun 02, 2021 at 01:55:36PM -0500, Harry G. Coin via FreeIPA-users wrote:
> Long-time FreeIPA users have faced a certain 'fragility' that FreeIPA
> has inherited, mostly as a result of FreeIPA being the 'band director'
> over a number of distinct subsystems maintained by various groups
> across the world.
> 
> This or that 'little upgrade' in a seemingly small sub-part of FreeIPA
> 'suddenly breaks' major things, like not being able to install a
> replica.  There's quite a list, and it's been going on for at least a
> few years to my knowledge.  Usually one expects new features to have
> bugs, but not bugs that disrupt core existing functionality.
> 
> I wonder whether a solution would be for FreeIPA to take a look at how
> a 'similar-feeling' multi-host, multi-subsystem architecture appears
> to have solved this puzzle: Ceph's 'containers' and 'orchestrator'
> concept, i.e. cephadm / 'ceph orch'.
> 
> For some time, like FreeIPA, Ceph relied on native packages and
> 'dependency hell management' to operate across hosts connected on an
> internal network.  Then, in a very effective shift, they treated 'the
> contents of a container' as 'one thing owned entirely by and released
> by Ceph', and tested it as one thing -- each container housing
> known-good versions of dependent and third-party modules as well as
> Ceph's own code -- to the point of providing their own tool to
> download containers and manage upgrades in the proper sequence across
> the hosts providing this-or-that functionality.
> 
> You might imagine a FreeIPA orchestrator upgrading masters and
> replicas in the correct order, with FreeIPA devs knowing for certain
> that no 'dnf upgrade' on the host will disrupt the setup that passed
> QA in the container, and that no 'sync' will corrupt a database with
> content that one version understood but another did not, etc.
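> 
> Purely hypothetically -- no such tool exists today, and these command
> names are made up by analogy with 'ceph orch' -- it might look like:
> 
>     ipa-orch upgrade start --version 4.9.6   # hypothetical command
>     ipa-orch upgrade status                  # masters first, then replicas
> 
> One tested image, one upgrade path, no dependency surprises.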
> 
> Over these many months, while FreeIPA has struggled to provide
> consistent service and value, Ceph has been working nearly flawlessly
> across many upgrade cycles.  I think that's because Ceph controls the
> versions of the subsystems in the containers -- and that improves QA
> and dramatically limits the 'surprise breakages' that lead to the
> feeling of 'always catching up' under time pressure from down
> services, with this or that distro's hundred package maintainers
> deciding when/if to include this/that patch, when to publish which
> new version, which updates are 'security updates', which are 'bug
> fixes', etc.  If the FreeIPA server came in a container that was
> tested and QA'd as a container and deployed as a container, perhaps
> the 'fragility factor' would improve by 10x.
> 
> My $0.02

Hi Harry,

There is a current effort (still in early stages) to implement
something like what you describe: FreeIPA in OpenShift managed via
an OpenShift Operator.
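
The general shape would be the usual Kubernetes Operator pattern: you
declare the desired state in a custom resource and the operator
reconciles toward it, including upgrade ordering.  As a minimal sketch
-- the API group, kind, and fields here are hypothetical illustrations,
not our actual design:

    cat <<EOF | oc apply -f -
    apiVersion: idm.example.org/v1alpha1   # hypothetical API group/version
    kind: FreeIPA                          # hypothetical kind
    metadata:
      name: idm
    spec:
      version: "4.9"   # the operator rolls masters/replicas in order
      replicas: 2
    EOF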

Can't say much else because we still have a lot of technical and
policy challenges to solve.  Certainly can't give a timeframe.  But
rest assured we are aware of the potential benefits of container
orchestration.  We are also aware that it is not a panacea and that
the engineering costs are orders of magnitude greater than 2c ^_^

Cheers,
Fraser
_______________________________________________
FreeIPA-users mailing list -- freeipa-users@lists.fedorahosted.org
To unsubscribe send an email to freeipa-users-le...@lists.fedorahosted.org
Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedorahosted.org/archives/list/freeipa-users@lists.fedorahosted.org
Do not reply to spam on the list, report it: https://pagure.io/fedora-infrastructure