Hi Aaron,

I'm taking the unusual step of top-posting because your post is
so wide-ranging. And yes, I know I'm replying to a reply to you...

First, I'm 99% sure this is not, and never has been, a ctwm problem.

Second, as an elder in the Church Of The Known State, I find the
wide-ranging set of modifications you've made breathtaking. In my
opinion, even if you manage to get everything to work, your setup will
be so fragile as to be a worry for as long as you keep it.

Third, your many attempts to get a computer that serves your purposes
have repeatedly failed.

Therefore, my recommendation is that you thoroughly back up your data,
wipe the disk, partition the disk only into a swap partition and a root
partition, and fresh install. I recommend you not use Redhat or any of
its derivatives because they go all out to depend on systemd to the
maximum possible extent. I recommend you not use any Ubuntu because 1)
it failed for you this time, 2) it uses systemd, which might be part of
your problem, 3) it uses that silly new packaging technique that has
been causing problems for everything, and 4) it's so "we do it all
for you" that it's hard to diagnose your machine.

Normally, I'd recommend you use Void Linux like I do. But you
absolutely need Zoom, and Void and Zoom don't play well together.
Therefore I recommend you install Devuan, which is a no-systemd
derivative of Debian. When asked to pick a wm/de (Window Manager or
Desktop Environment), pick the simplest one they offer, probably
Openbox, which is very similar to ctwm. Configure your Devuan to boot
to the command line, so you can start X with startx. Starting with
startx gives you more troubleshooting testpoints, if you need them.
Then install ctwm, back up your ~/.xinitrc, and change ~/.xinitrc to
run ctwm instead of whatever you picked.
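A minimal sketch of that ~/.xinitrc swap, run here against a scratch
directory so nothing real is touched (the openbox-session line stands
in for whatever your distro installed; adjust names to your system):

```shell
# Work in a throwaway directory instead of the real $HOME.
HOME_SKETCH=$(mktemp -d)

# Pretend this is the distro-provided ~/.xinitrc (assumption).
printf 'exec openbox-session\n' > "$HOME_SKETCH/.xinitrc"

# Back it up first, so you can fall back to the original wm/de.
cp "$HOME_SKETCH/.xinitrc" "$HOME_SKETCH/.xinitrc.bak"

# Replace it: helpers go first (backgrounded), ctwm is exec'd last
# so X exits when ctwm exits.
cat > "$HOME_SKETCH/.xinitrc" <<'EOF'
#!/bin/sh
# Session helpers (xsetroot, clipboard manager, etc.) go here, e.g.:
# xsetroot -solid grey &
exec ctwm
EOF

cat "$HOME_SKETCH/.xinitrc"
```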

Please remember, the more complex wm/de you pick, the more likely that
your audio will depend on your wm/de and cease to work when you switch
to ctwm.

You should ask on the Devuan mailing list how many people have had
success initting with runit instead of sysvinit. Runit is much better
than sysvinit, which is itself much better than systemd. If you find
several have had success with runit, choose runit for your init. If
not, choose sysvinit.
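Whichever you pick, it's easy to confirm which init you actually ended
up with: on Linux, PID 1 is the init process, and its name is readable
without root:

```shell
# Prints the name of PID 1 -- e.g. "runit", "init" (sysvinit),
# or "systemd".
cat /proc/1/comm
```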

Also: If this install fails to give you all you want and a little
troubleshooting doesn't find success, you can wipe it and
install a different distro. I learned Linux in 1998 by installing
Redhat (back when it was good) 50 times. With modern machines it's fast
and easy.

One more thing. You don't say if this machine is a laptop or a desktop,
but if it's a desktop I'd recommend you spend $100 for a 1TB SSD or
NVMe drive and boot to that, bind-mounting all other partitions to
fast-changing directories like /home and /var. Leave /usr on the
SSD/NVMe.
Finally, try installing as an MBR boot instead of GPT on the 1TB
SSD/NVMe. MBR works fine on all disks below 2TB. GPT is easy to get
wrong and harder to back up. So if your motherboard still supports MBR,
I'd go for MBR on a 1TB SSD/NVMe, with things like /home and /var bind
mounted from your current (I assume) spinning rust drive.
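As a sketch, those bind mounts could look like this in /etc/fstab. The
device names and the /mnt/rust staging mountpoint are assumptions, so
adjust them to your actual layout:

```
# SSD/NVMe root stays on the fast drive:
/dev/sda1        /           ext4   defaults   0  1
# Spinning-rust partition mounted once, then bind-mounted where needed:
/dev/sdb1        /mnt/rust   ext4   defaults   0  2
/mnt/rust/home   /home       none   bind       0  0
/mnt/rust/var    /var        none   bind       0  0
```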

HTH 

SteveT

Steve Litt 
Autumn 2022 featured book: Thriving in Tough Times
http://www.troubleshooters.com/bookstore/thrive.htm

Rhialto said on Fri, 3 Feb 2023 15:55:40 +0100

>> [I'll understand if anyone finds all that unintelligible...]  
>
>Well surely it sounds like the various installers got confused about
>what to put where. I won't pretend to be able to diagnose the situation
>fully, but I do think I know some tools that may help you with that. In
>particular, which partitions exist exactly and who is using them.
>
>For example, I have a suspicion that you have a single /boot or
>/boot/efi partition which has been overwritten or changed by multiple
>installers.
>
>First, the command "lsblk" should show you all "block devices" your
>current kernel sees, and where they are mounted. If they are not
>mounted, they likely belong to some other Linux install.
>
>Likely you will have a /boot or /boot/efi partition (at least, Ubuntu
>likes to have it that way). In there there is likely a directory with a
>grubx64.efi and grub.cfg: those are likely the boot loader and its
>config file as started by your UEFI firmware. In the Ubuntu I'm looking
>at, the grub.cfg file merely defers to another grub.cfg file, something
>like ($root)/boot/grub/grub.cfg.
>
>If you can still boot all the different Linux installs you made (I
>don't recall if you mentioned that), likely there are multiple
>directories with grub inside /boot/efi, or the second-stage grub.cfg
>file has multiple entries for the different systems. In any case, each
>of those would show a place to look for things. In particular they
>would indicate which partition to use as a root partition.
>
>Maybe it will show you that there is no place that the CentOS stuff
>could be hiding. Perhaps Ubuntu overwrote (part of) it. If CentOS
>installed itself with separate / and /usr partitions (I have no idea if
>it would) then some of it may still be available. I suspect its /
>partition to be re-used by Ubuntu though, given the name of centos-root
>which you see in Ubuntu.
>
>Second, you mention "/dev/mapper/centos-root". The first part of that,
>"/dev/mapper" usually refers to the Logical Volume Manager. To get an
>overview of which logical volumes exist, issue the "lvs" command as
>root. There is also "pvs" to list Physical Volumes (in case you have
>multiple disks) and "vgs" to show Volume Groups. There might be
>multiple LVs around for different Linux versions. 
>
>Hopefully this will at least help you to find data which may still be
>available on your disk even though you don't see it, so that you may
>save or discard it as you wish.
>
>-Olaf.
>-- 
>___ "Buying carbon credits is a bit like a serial killer paying
>\X/  someone else to have kids to make his activity cost neutral."
>     -The BOFH    falu.nl@rhialto
