> "complexity, OMG!!!111!!!" is not enough of a statement. You have to explain
> what complexity you gain and what complexity you reduce.
> Installing SeaweedFS consists of the following: `cd seaweedfs/weed && make
> install`
> This is the type of problem that Ceph is trying to solve, and starting
Dear Nico,
do you think it is sensible and precise to say that "we can't reduce
complexity by adding a layer of complexity"?
Containers always add a so-called layer, but people keep using them, and in
some cases they offload complexity from another side.
Claiming the
>
> We are using cephadm and think it is OK. We also use Kubernetes, and
> some manual “docker run” command at the same time, on the same set of
> hosts. They work fine together. I think it should be fine to have
> multiple OC systems, and take the best of each one.
Oh yes, and how do you manage
> On 19 Nov 2021, at 02:51, Marc wrote:
>
>
>>
>> We also use containers for ceph and love it. If for some reason we
>> couldn't run ceph this way any longer, we would probably migrate
>> everything to a different solution. We are absolutely committed to
>> containerization.
>
> I wonder if you are r
> In this context, I find it quite disturbing that nobody is willing even to
> discuss lengthening the release cycle from, say, 2 to 4 years. What is so
> important about pumping out one version after the other that real issues
> caused by this speed are ignored?
One factor I think is that
>
> If you're building a Ceph cluster, the state of a single node shouldn't
> matter. Docker crashing should not be a showstopper.
>
You remind me of that senior software engineer at Red Hat who told me it was
not that big a deal that ceph.conf got deleted and the root fs was mounted via
a bin
>
> Please remember, free software still comes with a price. You cannot
> expect someone to work on your individual problem while being cheap about
> your highly critical data. If your data has value, then you should
> invest in ensuring data safety. There are companies out there paying Ceph
> developers
>
> docker itself is not the problem,
I would even argue the opposite. If the Docker daemon crashes, it takes down
all containers. Sorry, but these days that is really not necessary given the
alternatives.
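For what it's worth, the single-daemon failure mode is tunable. A minimal
sketch, assuming a systemd-managed dockerd (live-restore is a documented Docker
option; verify the behaviour on your version):

    # /etc/docker/daemon.json: keep containers running while dockerd is down
    # { "live-restore": true }
    sudo systemctl reload docker            # SIGHUP makes dockerd re-read daemon.json
    docker info | grep -i 'live restore'    # should print: Live Restore Enabled: true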
> On 17.11.21 at 20:14, Marc wrote:
> >> a good choice. It lacks RBD encryption and read leases. But for us
> >> upgrading from N to O or P is currently not
> >>
> > what about just using osd encryption with N?
>
>
> That would be Data at Rest encryption only. The keys for the OSDs are
> stored
The weighted category prioritization clearly identifies reliability as the top
priority.
Daniel
> On 18.11.2021 at 15:32, Sasha Litvak wrote:
>
> Perhaps I missed something, but does the survey conclude that users don't
> value reliability improvements at all? This would explain why devel
Perhaps I missed something, but does the survey conclude that users don't
value reliability improvements at all? This would explain why the developer
team wants to concentrate on performance and ease of management.
On Thu, Nov 18, 2021, 07:23 Stefan Kooman wrote:
> On 11/18/21 14:09, Maged Mokht
Hello Cephers,
I too am for LTS releases, or for some kind of middle ground like a longer
release cycle and/or having even-numbered releases designated for
production like before. We all use LTS releases for the base OS when
running Ceph, yet in reality we depend much more on the Ceph code than
th
On 17.11.21 at 20:14, Marc wrote:
a good choice. It lacks RBD encryption and read leases. But for us
upgrading from N to O or P is currently not
what about just using osd encryption with N?
That would be Data at Rest encryption only. The keys for the OSDs are stored on
the mons. Data is tr
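For reference, a minimal sketch of enabling that on a new OSD; the device path
is a placeholder:

    # Create a dmcrypt-encrypted OSD. ceph-volume stores the LUKS key in the
    # mon config-key store, which is why the mons end up holding the OSD keys.
    ceph-volume lvm create --dmcrypt --data /dev/sdb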
On Wed, 17 Nov 2021 at 18:41, Dave Hall wrote:
>
> The real point here: From what I'm reading in this mailing list it appears
> that most non-developers are currently afraid to risk an upgrade to Octopus
> or Pacific. If this is an accurate perception then THIS IS THE ONLY
> PROBLEM.
You might
docker itself is not the problem, it's super nice. It's just that adm/orch
is yet another deployment tool, and yet again not reliable enough. It's
easy to break, and adds additional errors like the one you can see in my
screenshot. I have a collection of them ;).
We are talking about storage, meant to s
On 11/17/21 8:19 PM, Martin Verges wrote:
There are still alternative solutions without the need for useless
containers and added complexity. Stay away from that crap and you won't
have a hard time. 😜
I don't have a problem with the containers *at all*. And with me
probably a lot of users. But
On Wed, Nov 17, 2021 at 6:10 PM Janne Johansson wrote:
>
> > * I personally wouldn't want to run an LTS release based on ... what would
> > that be now.. Luminous + security patches??. IMO, the new releases really
> > are much more performant, much more scalable. N, O, and P are really much
> > mu
> [2]: https://ceph.io/en/community/team/
Is this everyone who is working full time on Ceph?
> > And it looks like I'll have to accept the move to containers even
> though I have serious concerns about operational maintainability due to
> the inherent opaqueness of container solutions.
>
> There are still alternative solutions without the need for useless
> containers and added complexity
> And it looks like I'll have to accept the move to containers even though
I have serious concerns about operational maintainability due to the
inherent opaqueness of container solutions.
There are still alternative solutions without the need for useless
containers and added complexity. Stay away
>
> a good choice. It lacks RBD encryption and read leases. But for us
> upgrading from N to O or P is currently not
>
what about just using osd encryption with N?
Hello Dave,
> The potential to lose or lose access to millions of files/objects or
petabytes of data is enough to keep you up at night.
> Many of us out here have become critically dependent on Ceph storage, and
probably most of us can barely afford our production clusters, much less a
test cluste
On 17.11.21 at 18:09, Janne Johansson wrote:
>> * I personally wouldn't want to run an LTS release based on ... what would
>> that be now.. Luminous + security patches??. IMO, the new releases really
>> are much more performant, much more scalable. N, O, and P are really much
>> much *much* better
> * I personally wouldn't want to run an LTS release based on ... what would
> that be now.. Luminous + security patches??. IMO, the new releases really
> are much more performant, much more scalable. N, O, and P are really much
> much *much* better than previous releases. For example, I would not
>
> > Yeah, generally there is not much enthusiasm about supporting that
> among developers.
>
> I guess it's because none of them administers any large production
> installation
Exactly!
> The actual implied upgrade period is every 2 years and every
> 4 years as an exception. For storage
> features" per se -- one which comes to mind is the fix related to
> detecting
> network binding addrs, for example -- something that would reasonably
> have
> landed in and broken LTS clusters.)
> * I personally wouldn't want to run an LTS release based on ... what
> would
> that be now.. Luminou
On 17/11/2021 15:19, Marc wrote:
The CLT is discussing a more feasible alternative to LTS, namely to
publish an RC for each point release and involve the user community to
help test it.
How many users even have a 'test cluster' available?
The Sanger has one (3 hosts), which was a re
>
> The demand for LTS - at least in our case - does not stem from
> unprofessionalism or biased opinion.
> It's the desire to stay up to date on security patches as much as
> possible while maintaining a well tested and stable environment.
Is this not the definition of Long Term Stable? ;)
> Bo
> The CLT is discussing a more feasible alternative to LTS, namely to
> publish an RC for each point release and involve the user community to
> help test it.
How many users even have a 'test cluster' available?
Just as a friendly reminder:
1) No one prevents you from hiring developers to work on Ceph in a way you
like.
2) I personally dislike the current release cycle and would like to change
that a bit.
3) There is a reason companies like our own prevent users from using latest
as "production", we tag them
> >
> > But since when do developers decide? Do you know any factory where
> factory workers decide what product they are going to make and not the
> product management???
>
> You might want to check out [1] and [2]. There are different
> stakeholders with different interests. All these parties ha
My 2 cents:
* the best solution to ensure ceph's future rock solid stability is to
continually improve our upstream testing. We have excellent unit testing to
avoid regressions on specific bugs, and pretty adequate upgrade testing,
but I'd like to know if we're missing some high level major upgrade
Oh yes, I have been telling Gillette for years to stop producing so many
different plastic model razors, but I still see racks full of them. I have also
been telling BMW not to share so many parts between models, but they are still
doing this. I have also been telling Microsoft about the many s
The demand for LTS - at least in our case - does not stem from
unprofessionalism or biased opinion.
It's the desire to stay up to date on security patches as much as possible
while maintaining a well tested and stable environment.
Both Pacific and Octopus (we’re currently on Nautilus) have some
The CLT is discussing a more feasible alternative to LTS, namely to
publish an RC for each point release and involve the user community to
help test it.
This can be discussed at the user-dev meeting tomorrow.
https://pad.ceph.com/p/ceph-user-dev-monthly-minutes
(BTW I just restored that etherpad --
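For anyone who does have a spare host, trying an RC needs little setup. A
hedged sketch for a throwaway Debian/Ubuntu box (repo layout per the
docs.ceph.com "get packages" page; verify the path for your distro):

    echo "deb https://download.ceph.com/debian-testing/ $(lsb_release -sc) main" |
      sudo tee /etc/apt/sources.list.d/ceph-testing.list
    sudo apt update && sudo apt install -y ceph-common
    ceph --version    # should report the release-candidate build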
First of all, this is open source, so developers tend to have a higher
influence on decision making.
And you can replace "among developers" with "among CLT" in my previous post...
Hopefully this position can be shifted if there is a wide "feature
request" from the field, hence please try to share
But since when do developers decide? Do you know any factory where factory
workers decide what product they are going to make, and not the product
management??? IT is becoming such a refuge for undetected unprofessionals.
>
> Yeah, generally there is not much enthusiasm about supporting that a
Yeah, generally there is not much enthusiasm about supporting that among
developers. But it would be nice to hear points from the user side anyway...
Igor
On 11/17/2021 2:29 PM, Peter Lieven wrote:
On 17.11.21 at 12:20, Igor Fedotov wrote:
Hi Peter,
sure, why not...
See [1]. I read it that it
On 17.11.21 at 12:20, Igor Fedotov wrote:
> Hi Peter,
>
> sure, why not...
See [1]. I read it as saying that this is not wanted by upstream developers. If
we want it, the community has to do it.
Nevertheless, I have put it on the list.
Peter
[1]
https://lists.ceph.io/hyperkitty/list/d...@ceph.io/thr
Hi Peter,
sure, why not...
Thanks,
Igor
On 11/17/2021 10:48 AM, Peter Lieven wrote:
On 09.11.2021 at 00:01, Igor Fedotov wrote:
Hi folks,
having an LTS release cycle could be a great topic for the upcoming "Ceph User
+ Dev Monthly meeting".
The first one is scheduled on November 18, 2021
> > having an LTS release cycle could be a great topic for the upcoming "Ceph
> User + Dev Monthly meeting".
> >
> > The first one is scheduled on November 18, 2021, 14:00-15:00 UTC
> >
> > https://pad.ceph.com/p/ceph-user-dev-monthly-minutes
> >
> > Any volunteers to extend the agenda and advocate the
> On 09.11.2021 at 00:01, Igor Fedotov wrote:
>
> Hi folks,
>
> having an LTS release cycle could be a great topic for the upcoming "Ceph User +
> Dev Monthly meeting".
>
> The first one is scheduled on November 18, 2021, 14:00-15:00 UTC
>
> https://pad.ceph.com/p/ceph-user-dev-monthly-minute
On 08.11.21 at 23:59, Igor Fedotov wrote:
> Hi folks,
>
> having an LTS release cycle could be a great topic for the upcoming "Ceph User +
> Dev Monthly meeting".
>
> The first one is scheduled on November 18, 2021, 14:00-15:00 UTC
>
> https://pad.ceph.com/p/ceph-user-dev-monthly-minutes
>
> Any volu
Hi folks,
having an LTS release cycle could be a great topic for the upcoming "Ceph
User + Dev Monthly meeting".
The first one is scheduled on November 18, 2021, 14:00-15:00 UTC
https://pad.ceph.com/p/ceph-user-dev-monthly-minutes
Any volunteers to extend the agenda and advocate the idea?
Thank
I have the idea that the choice for cephadm and the release schedule is more
fuelled by market-acquisition aspirations. How can you reason with that?
Hi Franck,
I totally agree with your point 3 (also with 1 and 2, indeed). Generally
speaking, the release cycle of much software tends to become faster and
faster (not only Ceph, but also OpenStack etc.), and it's really
hard and tricky to keep an infrastructure up to date in such
co
> Setting up cephadm was pretty straightforward and doing the upgrade was
> also "easy". But I was not fond of it at all, as I felt that I lost
> control.
> I had set up a couple of machines with different hardware profiles to
> run
> various services on each, and when I put hosts into the cluster
> From: Erik Lindahl <erik.lind...@gmail.com>
> Sent: 17 August 2021 16:01
> To: Marc <m...@f1-outsourcing.eu>
> Cc: Nico Schottelius <nico.schottel...@ungleich.ch>; Kai Börnert
> <kai.boern...@posteo.de>; ceph-users <ceph-users@ceph.io>
> Subject: [ceph-users] Re: Why you might want packages not containers for Ceph
> deployments
Hi,
Whether containers are good or not is a separate discussion where I suspect
there won't be consensus in the near future.
However, after just having looked at the documentation again, my main point
would be that when a major stable open source project recommends a specific
installation meth
>
> Again, this is meant as hopefully constructive feedback rather than
> complaints, but the feeling I get, after having had fairly smooth
> operations with raw packages (including fixing previous bugs leading to
> severe crashes) and lately grinding our teeth a bit over cephadm, is that
> it has h
Hi,
I figured I should follow up on this discussion, not with the intention of
bashing any particular solution, but pointing to at least one current major
challenge with cephadm.
As I wrote earlier in the thread, we previously found it ... challenging to
debug things running in cephadm. Earlier t
On Fri, Jun 25, 2021 at 10:27 AM Nico Schottelius
wrote:
> Hey Sage,
>
> Sage Weil writes:
> > Thank you for bringing this up. This is in fact a key reason why the
> > orchestration abstraction works the way it does--to allow other
> > runtime environments to be supported (FreeBSD!
> > sysvinit/
GCC, the whole toolchain, myriad dependencies, the ways that Python has
patterned itself after Java. Add in the way that the major Linux distributions
are moving targets and building / running on just one of them is a huge task,
not to mention multiple versions of each. And the way that system
What I am getting from reading between the lines is that they want to create
something that is easier to install for a broader target audience.
And instead of just saying this, for whatever reason, other arguments have
been put forward which are questionable and therefore raise a discussion
On 25.06.21 at 17:13, Nico Schottelius wrote:
> *If* this is really about complexity of package building, why did you
> not shout out to the community and ask for help? I assume that one or
> the other party on this mailing list is open for helping out.
If I may chime in, package building and
Sent: Wednesday, June 2, 2021 2:26 PM
To: Matthew Vernon; ceph-users@ceph.io
Subject: [ceph-users] Re: Why you might want packages not containers for Ceph
deployments
Hi,
that's also a +1 from me — we also use containers h
> Orchestration is hard, especially with every permutation. The devs have
> implemented what they feel is the right solution for their own needs
> from the sound of it. The orchestration was made modular to support non
> containerized deployment. It just takes someone to step up and implement
>
Hey Sage,
Sage Weil writes:
> Thank you for bringing this up. This is in fact a key reason why the
> orchestration abstraction works the way it does--to allow other
> runtime environments to be supported (FreeBSD!
> sysvinit/Devuan/whatever for systemd haters!)
I would like you to stop labeli
Hey Sage,
thanks for the reply.
Sage Weil writes:
> Rook is based on kubernetes, and cephadm on podman or docker. These
> are well-defined runtimes. Yes, some have bugs, but our experience so
> far has been a big improvement over the complexity of managing package
> dependencies across even
> The security issue (50 containers -> 50 versions of openssl to patch)
> also still stands — the earlier question on this list (when to expect
> patched containers for a CVE affecting a library)
I assume they use the default el7/el8 as a base layer, so when that is updated,
you will get the upda
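That assumption is easy to check directly; a sketch, with the image tag as an
example (quay.io/ceph/ceph is the official registry):

    podman run --rm quay.io/ceph/ceph:v16 rpm -q openssl           # openssl baked into the image
    podman run --rm quay.io/ceph/ceph:v16 head -2 /etc/os-release  # base layer it ships on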
> rgw, grafana, prom, haproxy, etc are all optional components. The
Is this Prometheus stateful? Where is this data stored?
> Early on the team building the container images opted for a single
> image that includes all of the daemons for simplicity. We could build
> stripped down images for eac
On 18.06.21 at 20:42, Sage Weil wrote:
Following up with some general comments on the main container
downsides and on the upsides that led us down this path in the first
place.
[...]
Thanks, Sage, for the nice and concise summary on the Cephadm benefits, and the
reasoning on why the path was
> but our experience so
> far has been a big improvement over the complexity of managing package
> dependencies across even just a handful of distros
Do you have some charts or docs that show this complexity problem? Because I
have trouble understanding it.
This is very likely due to the fact that my un
>
> This thread would not be so long if docker/containers solved the
> problems, but they did not. They solved some, but introduced new ones. So we
> cannot really say it's better now.
The only thing I can deduce from this thread is the necessity to create a
solution for e.g. 'dentists' to install
This thread would not be so long if docker/containers solved the problems,
but they did not. They solved some, but introduced new ones. So we cannot
really say it's better now.
Again, I think the focus should be more on a working Ceph with clean
documentation, while leaving software management and packages to adm
Subject: [ceph-users] Re: Why you might want packages not containers for Ceph
deployments
On Tue, Jun 22, 2021 at 1:25 PM Stefan Kooman wrote:
> On 6/21/21 6:19 PM, Nico Schottelius wrote:
> > And while we are at claiming
ation is very nice.
Thanks,
Kevin
From: Sage Weil
Sent: Thursday, June 24, 2021 1:46 PM
To: Marc
Cc: Anthony D'Atri; Nico Schottelius; Matthew Vernon; ceph-users@ceph.io
Subject: [ceph-users] Re: Why you might want packages not containers for Ceph
deployments
On Tue, Jun 22, 2021 at 1:25 PM Stefan Kooman wrote:
> On 6/21/21 6:19 PM, Nico Schottelius wrote:
> > And while we are at claiming "on a lot more platforms", you are at the
> > same time EXCLUDING a lot of platforms by saying "Linux based
> > container" (remember Ceph on FreeBSD? [0]).
>
> Indeed
On Tue, Jun 22, 2021 at 11:58 AM Martin Verges wrote:
>
> > There is no "should be", there is no one answer to that, other than 42.
> Containers have been there before Docker, but Docker made them popular,
> exactly for the same reason as why Ceph wants to use them: ship a known
> good version (CI
On Sun, Jun 20, 2021 at 9:51 AM Marc wrote:
> Remarks about your cephadm approach/design:
>
> 1. I am not interested in learning podman, rook or kubernetes. I am using
> mesos which is also on my osd nodes to use the extra available memory and
> cores. Furthermore your cephadm OC is limited to o
On Sat, Jun 19, 2021 at 3:43 PM Nico Schottelius
wrote:
> Good evening,
>
> as an operator running Ceph clusters based on Debian and later Devuan
> for years and recently testing ceph in rook, I would like to chime in to
> some of the topics mentioned here with short review:
>
> Devuan/OS package:
On 6/18/21 8:42 PM, Sage Weil wrote:
We've been beat up for years about how complicated and hard Ceph is.
Rook and cephadm represent two of the most successful efforts to
address usability (and not just because they enable deployment
management via the dashboard!), and taking advantage of conta
On 6/21/21 6:19 PM, Nico Schottelius wrote:
And while we are at claiming "on a lot more platforms", you are at the
same time EXCLUDING a lot of platforms by saying "Linux based
container" (remember Ceph on FreeBSD? [0]).
Indeed, and that is a more fundamental question: how easy it is to make
On 6/22/21 6:56 PM, Martin Verges wrote:
> There is no "should be", there is no one answer to that, other than
42. Containers have been there before Docker, but Docker made them
popular, exactly for the same reason as why Ceph wants to use them: ship
a known good version (CI tests) of the soft
On 6/21/21 7:37 PM, Marc wrote:
I have seen no arguments for using containers other than trying to make it "easier" for new Ceph people.
I advise to read the whole thread again, especially Sage's comments,
as there are other benefits. It would free up resources that can be
dedicated t
>
> >
> > I have seen no arguments for using containers other than trying to
> make it "easier" for new Ceph people.
>
> I advise to read the whole thread again, especially Sage's comments,
> as there are other benefits. It would free up resources that can be
> dedicated to (arguably) more pr
> There is no "should be", there is no one answer to that, other than 42.
Containers have been there before Docker, but Docker made them popular,
exactly for the same reason as why Ceph wants to use them: ship a known
good version (CI tests) of the software with all dependencies, that can be
run "a
>
> I think 2 things need to be clarified here:
>
> > [...]
> > Again, clean orchestration, being able to upgrade each daemon without
> > influencing running ones, this is just not possible with the native
> > packages.
>
> If a daemon is running on an operating system, it does not reload shar
> -Original Message-
> Sent: Monday, 21 June 2021 16:44
> Subject: Re: [ceph-users] Re: Why you might want packages not containers
> for Ceph deployments
>
>
> > I think the primary goal of a container environment is resource
> isolation. At least when I rea
I think 2 things need to be clarified here:
> [...]
> Again, clean orchestration, being able to upgrade each daemon without
> influencing running ones, this is just not possible with the native
> packages.
If a daemon is running on an operating system, it does not reload shared
libraries or bin
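That is easy to demonstrate on a running cluster; a sketch, assuming a
package-installed ceph-osd:

    # After a package upgrade, the running daemon still maps the old, now
    # deleted libraries; /proc marks those mappings "(deleted)".
    pid=$(pidof ceph-osd | awk '{print $1}')
    grep -c '(deleted)' /proc/$pid/maps   # non-zero: restart needed to pick up new libs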
-Original Message-
Sent: Monday, 21 June 2021 01:21
To: ceph-users@ceph.io
Subject: [ceph-users] Re: Why you might want packages not containers for
Ceph deployments
Because all of this reads way too negative regarding containers to me, I
wanted to give a different perspective.
Coming from a day
> -Original Message-
> Sent: Sunday, 20 June 2021 21:34
> To: ceph-users@ceph.io
> Subject: *SPAM* [ceph-users] Re: Why you might want packages not
> containers for Ceph deployments
>
>
> > 3. Why is systemd still being talked about in this cephadm?
> -Original Message-
> Sent: Monday, 21 June 2021 01:21
> To: ceph-users@ceph.io
> Subject: [ceph-users] Re: Why you might want packages not containers for
> Ceph deployments
>
> Because all of this reads way too negative regarding containers to me, I
> wanted to give a different perspective.
Because all of this reads way too negative regarding containers to me, I
wanted to give a different perspective.
Coming from a day-to-day job that heavily utilizes Kubernetes for its
normal environment, I found cephadm quite the godsend;
instead of having to deal with a lot of pesky detail
> 3. Why is systemd still being talked about in this cephadm? Your orchestrator
> should handle restarts, namespaces and failed tasks, no? There should be no
> need for a systemd dependency; at least I have not seen any container
> images relying on this.
Podman uses systemd to manage containers
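Concretely, cephadm wraps each container in a per-daemon systemd unit, so
systemd does the restarting rather than a central daemon; a sketch (the fsid is
a placeholder):

    systemctl list-units 'ceph-*'                 # one ceph-<fsid>@<daemon>.service per daemon
    systemctl status 'ceph-<fsid>@osd.3.service'  # restart policy lives here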
Thanks for answering these. I have been using Ceph since Kraken and am now on
Nautilus. Before joining this discussion I thought I'd watch this video[1] on
cephadm, but it seems to be more about which console commands to type. So please
indulge my rookie comments.
> the cephadm
> team isn't yet
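For other rookies, the console commands in question boil down to a short flow;
a hedged sketch with placeholder IPs and hostnames:

    cephadm bootstrap --mon-ip 192.168.1.10          # first mon + mgr + dashboard
    ceph orch host add node2 192.168.1.11            # enroll further hosts
    ceph orch apply osd --all-available-devices      # turn free disks into OSDs
    ceph orch upgrade start --ceph-version 16.2.6    # rolling, daemon-by-daemon upgrade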
Good evening,
as an operator running Ceph clusters based on Debian and later Devuan
for years and recently testing ceph in rook, I would like to chime in to
some of the topics mentioned here with short review:
Devuan/OS package:
- Over all the years changing from Debian to Devuan, changing the
Hello Sage,
> ...I think that part of this comes down to a learning curve...
> ...cephadm represent two of the most successful efforts to address
usability...
Somehow it does not look right to me.
There is much more to operating a Ceph cluster than just deploying software.
Of course that helps on
Thanks, Sage. This is a terrific distillation of the challenges and benefits.
FWIW here are a few of my own perspectives, as someone experienced with Ceph
but with limited container experience. To be very clear, these are
*perceptions* not *assertions*; my goal is discussion not argument. Fo
Following up with some general comments on the main container
downsides and on the upsides that led us down this path in the first
place.
Aside from a few minor misunderstandings, it seems like most of the
objections to containers boil down to a few major points:
> Containers are more complicated
On Wed, Jun 2, 2021 at 9:01 AM Daniel Baumann wrote:
> > * Ceph users will benefit from both approaches being supported into the
> > future
>
> this is rather important for us as well.
>
> we use systemd-nspawn based containers (that act and are managed like
> traditional VMs, just without the ov
On Thu, Jun 3, 2021 at 2:18 AM Marc wrote:
> Not using cephadm, I would also question other things like:
>
> - If it uses docker and the docker daemon fails, what happens to your containers?
This is an obnoxious feature of docker; podman does not have this problem.
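A sketch of why: podman is daemonless, and each container is supervised by its
own conmon process, so there is no single process whose crash stops them all:

    podman ps --format '{{.Names}} {{.Status}}'   # containers run independently
    pgrep -a conmon                               # one conmon per container, no central daemon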
> - I assume the ceph-osd containers
I'm arriving late to this thread, but a few things stood out that I
wanted to clarify.
On Wed, Jun 2, 2021 at 4:28 PM Oliver Freyermuth
wrote:
> To conclude, I strongly believe there's no one size fits all here.
>
> That was why I was hopeful when I first heard about the Ceph orchestrator
> idea
Some of the VMs host containers.
Cheers
-Original Message-
From: Eneko Lacunza
Sent: Friday, 4 June 2021 15:49
To: ceph-users@ceph.io
Subject: *SPAM* [ceph-users] Re: Why you might want packages
not containers for Ceph deployments
Hi,
We operate a few Ceph hyperconverged
> On 4 Jun 2021, at 21:51, Eneko Lacunza wrote:
>
> Hi,
>
> We operate a few Ceph hyperconverged clusters with Proxmox, which provides a
> custom ceph package repository. They do great work; and deployment is a
> breeze.
>
> So, even as currently we would rely on Proxmox packages/distribution and no
Hi,
We operate a few Ceph hyperconverged clusters with Proxmox, which
provides a custom ceph package repository. They do great work; and
deployment is a breeze.
So, even as currently we would rely on Proxmox packages/distribution and
not upstream, we have a number of other projects deployed