The discussion is a little bit like what Henry Ford called "a faster horse".
Henry Ford said that if he had asked his customers what they wanted, they would have told him "a faster horse". But what Ford did was give them something they did not know they could ask for. It is common for users to want only slight incremental changes over what they have; almost no one ever asks for a revolutionary product.

If you are redoing this CNC controller for use this century, you cut out the need for real-time Linux. Once you can place all of EMC in plain old userspace, you can manage upgrades using the distribution's native package system: you simply click "upgrade" and type in a password. How would this work? You simply move all the real-time stuff off of the Linux-based system and onto cheap hardware.

On Thu, Apr 2, 2020 at 8:54 PM Rafael Skodlar <[email protected]> wrote:

> On 2020-04-02 07:13, R C wrote:
> > my 2 cnts;
> >
> > I work in/with HPC, and run into that stuff all the time, and it is
> > unavoidable.
>
> what stuff? It would be much easier to read old thread if related lines
> were in related place.
>
> > Since HPCs run diskless, and boot in/from a network, we simply build a
> > complete new image, (and keep
> > the older ones around). We never even update an image, we simply build a
> > new one from scratch, since
>
> And stop services when Saltstack, fabric, or mush could be used to
> update software without much downtime ... Nonstop rebuilds are not a
> solution to everything.
>
> > an update on an existing system never works and it is easier to rebuild
> > a repo (at least in RHEL it is).
> >
> > Libraries etc, specific to applications either get relocated, or are
> > merged with the OS ones on a virtual file system.
> >
> > Of course that is pretty much undo-able, impractical, unaffordable to do
> > at home, so what I do: I use different drives with
>
> That's possible since 1990s when we could buy first removable drives for
> PCs. Some were IDE, other SCSI based.
> Same idea as in IBM, DEC, HP, and
> other mainframe computers.
>
> > separate installs (I use these now very inexpensive CRU data trays to
> > swap drives, and SSDs are really inexpensive now)
> >
> > And indeed, let's not even get started on "rolling back" within an image.
> >
> > containers; that's one of these things that don't seem to work
> > consistently yet. I know people (at work) that are working
>
> Ever heard of Google? See more below
>
> > with it, developing in it, but I have not seen it work reliably/stable
> > yet. It will definitely go there, but as of yet, at least
>
> ??? It's being used in so many places that would spin your head.
>
> https://www.sdxcentral.com/articles/news/t-mobile-to-slash-30m-in-cloud-costs-with-kubernetes/2020/04/
>
> > at scale, it is not working. (there are lots of issues that come down to
> > latency/timing and rdma issues and we don't
> > even use real time kernels etc. most of what I do is based on RHEL and
> > application specific RHEL 'flavors')
> >
> > as I said, just my 2 cts,
> >
> > Ron
>
> I study technologies while you watch sports...
>
> --
> Rafael
>
> _______________________________________________
> Emc-users mailing list
> [email protected]
> https://lists.sourceforge.net/lists/listinfo/emc-users

--
Chris Albertson
Redondo Beach, California
