On 11/24/24 17:56, Charlie Gibbs wrote:
I have a 20-year-old box which was nonetheless powerful enough to run
Debian Bookworm (12.5) - but the video card, equipped with an Nvidia
GeForce 610 GPU, was too old. I was getting messages on boot saying
that the card is only supported by drivers up to version 390, while
Bookworm doesn't support drivers that old.
You have not mentioned backups or archives. Please do so, if you have
not already.
A key decision you must make is whether to have one computer
(all-in-one) or to have two computers (workstation and file server).
Each approach has its advantages and disadvantages. My SOHO
installation has a file server, a backup server, many client devices,
Gigabit Ethernet, and 802.11ac Wi-Fi. I have an 8-port KVM switch at my
primary work area.
A newer used video card could make that old motherboard/ CPU/ memory
useful again. I would put the new parts in a new case, put the old
parts in the old case, buy a KVM switch, and have two working computers.
This gives you more options.
I recently rebuilt my file server and my backup server using Fractal
Design cases and power supplies. I have been very impressed with the
quietness and cooling:
https://www.fractal-design.com/products/cases/define/define-r5/black/
https://www.fractal-design.com/products/power-supplies/ion/ion-2-platinum-660w/black/
https://www.fractal-design.com/products/fans/venturi/venturi-hf-14/white/
The box was getting flaky on boot anyway,
How so?
so I figured it was time to spring for a new motherboard,
Make and model?
complete with an AMD Ryzen 5 processor,
Model?
32GB of RAM, and GeForce 1030 video card.
I was getting nothing on the screen when I first fired it up, but a
friend and I eventually tracked it down to a RAM module that wasn't
properly seated. Once we corrected that, the machine happily came up,
found the existing hard drive and everything on it, and was fully
operational. Things really have progressed since the bad old days.
But here's the catch. Since I was laying out the bucks for lots of new
hardware anyway, the salesman talked me into throwing in a 1TB NVMe SSD.
Make and model?
What the heck, might as well really speed things up. However, I want
to keep my existing hard drive; it's a fairly new 4TB unit
Make and model?
and /home
contains large archives of music and video files. What I'd like to
do is move everything to the SSD - including the /home partition but
without the music and video files, which I'd leave on the spinning rust
in a renamed set of directories mounted elsewhere.
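rsync handles exactly that kind of selective copy. A sketch, assuming
the new /home partition is nvme0n1p4 and the media directories are
named "Music" and "Videos" (the names and mount point are placeholders):

    sudo mkdir -p /mnt/newhome
    sudo mount /dev/nvme0n1p4 /mnt/newhome
    # -aHAX preserves permissions, hard links, ACLs, and xattrs;
    # the excludes skip the media trees, which stay on the HDD.
    sudo rsync -aHAX --exclude='*/Music/' --exclude='*/Videos/' \
        /home/ /mnt/newhome/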
Rather than doing a full re-install and copying massive amounts of data
back and forth, I'm trying to take a shortcut - which may or may not be
a good idea, but I'll let you guys judge.
Here's the output of lsblk:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 3.6T 0 disk
├─sda1 8:1 0 1M 0 part
├─sda2 8:2 0 27.9G 0 part /
├─sda3 8:3 0 7.5G 0 part [SWAP]
└─sda4 8:4 0 3.6T 0 part /home
sdb 8:16 1 0B 0 disk
sdc 8:32 1 0B 0 disk
sdd 8:48 1 0B 0 disk
sde 8:64 1 0B 0 disk
sr0 11:0 1 1024M 0 rom
nvme0n1 259:0 0 931.5G 0 disk
├─nvme0n1p1 259:5 0 1M 0 part
├─nvme0n1p2 259:6 0 30G 0 part
├─nvme0n1p3 259:7 0 8G 0 part
└─nvme0n1p4 259:8 0 893.5G 0 part
As you can see, I've duplicated the partitions on the SSD. I also
copied the 30GB / partition to the SSD with dd, and changed the
UUID of the copy to avoid conflicts due to the cloning. I mounted
/dev/nvme0n1p2 (which I hope to make the new / partition) and
changed the UUIDs in its copy of /etc/fstab to point to the
partitions on the SSD.
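For the archives, the UUID change and fstab edit described above are
typically done like this for ext4 (a sketch; other file systems have
their own tools):

    sudo tune2fs -U random /dev/nvme0n1p2   # give the clone a fresh UUID
    sudo blkid /dev/nvme0n1p2               # read back the new UUID
    sudo mkswap /dev/nvme0n1p3              # re-create swap; a new UUID is generated
    sudo mount /dev/nvme0n1p2 /mnt
    sudoedit /mnt/etc/fstab                 # point /, /home, and swap at the SSD UUIDs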
I think my problem is getting GRUB to go to the SSD. I tried the
following:
sudo grub-install /dev/nvme0n1
The following messages came out (with a delay of several seconds between
them):
Installing for i386-pc platform.
Installation finished. No error reported.
(Is that first message correct? That sounds like old hardware.)
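That first message is correct, and it is telling: "i386-pc" means GRUB
installed its legacy-BIOS image. It says nothing about the hardware's
age, but it does mean the machine is not booting via UEFI. You can
check which mode the running system booted in:

    # If this directory exists, the current boot was via UEFI;
    # if not, it was legacy BIOS/CSM.
    [ -d /sys/firmware/efi ] && echo UEFI || echo BIOS

    # A UEFI install would instead need an EFI system partition and
    # something like (sketch only):
    #   sudo grub-install --target=x86_64-efi --efi-directory=/boot/efi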
When re-booting, I went into the BIOS screen, and saw that the SSD was
first in the boot order. However, this probably doesn't mean much if
I didn't get it set up properly. The machine boots, but apparently
falls back to the hard drive. The first two lines of dmesg are:
[ 0.000000] Linux version 6.1.0-23-amd64
(debian-ker...@lists.debian.org) (gcc-12 (Debian 12.2.0-14) 12.2.0, GNU
ld (GNU Binutils for Debian) 2.40) #1 SMP PREEMPT_DYNAMIC Debian
6.1.99-1 (2024-07-15)
[ 0.000000] Command line: BOOT_IMAGE=/boot/vmlinuz-6.1.0-23-amd64
root=UUID=fb2c9cb9-1737-4bbf-b3e8-c5e88b40877e ro quiet
According to blkid, that UUID corresponds to /dev/sda2, i.e. the /
partition on the hard drive. I'm obviously missing an incantation
to make the machine go to the SSD instead. In /boot/grub/grub.cfg
I find all sorts of references to the UUID of /dev/sda2, but the
file starts with a big scary "DO NOT EDIT THIS FILE" message.
I've been looking up GRUB documentation, but my eyes are starting to
glaze over. I get the feeling that I'm close, but don't quite have
the GRUB fu. Could someone provide some pointers?
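The missing incantation is probably to regenerate grub.cfg from the SSD
itself rather than editing it by hand. A sketch, using the partitions
from your lsblk output:

    sudo mount /dev/nvme0n1p2 /mnt
    for d in dev proc sys; do sudo mount --bind /$d /mnt/$d; done
    sudo chroot /mnt
    grub-install /dev/nvme0n1   # reinstall GRUB from inside the SSD system
    update-grub                 # rewrites /boot/grub/grub.cfg
    exit

Run inside the chroot, update-grub sees the SSD's root file system, so
the generated root= UUID points at nvme0n1p2 instead of sda2.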
Every time I tried multi-boot, I found it to be brittle. Similarly so
for multiple bootable drives in the same computer. My solution was hard
drive mobile racks, extra drives, and one OS instance per drive.
Throwing money at the problem lowers the knowledge and skill
requirements, simplifies system administration, and prevents service
interruptions:
https://www.startech.com/en-us/hdd/drw150satbk
https://www.startech.com/en-us/hdd/hsb220sat25b
https://www.startech.com/en-us/hdd/s25slotr
Given the various high-quality "free" virtualization solutions that are
now available, I use virtual machines whenever they make sense.
I suggest that you disconnect all drives except the optical drive and
the NVMe SSD, configure the new motherboard firmware for UEFI, boot the
Debian Installer, go to a rescue shell, zero-fill or secure-erase the
NVMe SSD, reboot the Debian Installer, and do a fresh install of Debian
onto the NVMe SSD. I do not use LVM. I keep the boot, swap, and root
partitions small, to facilitate imaging, disaster preparedness, and
disaster recovery (1 GB, 1 GB, and 13 GB, respectively). If the SSD has
spare empty space, I create a partition and file system for scratch
files, virtual machines, etc.
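As a concrete sketch of that layout (device name and GPT type codes
assumed; this wipes the disk):

    sudo sgdisk --zap-all /dev/nvme0n1
    sudo sgdisk -n1:0:+1G  -t1:ef00 /dev/nvme0n1   # boot (EFI system partition)
    sudo sgdisk -n2:0:+1G  -t2:8200 /dev/nvme0n1   # swap
    sudo sgdisk -n3:0:+13G -t3:8300 /dev/nvme0n1   # root
    sudo sgdisk -n4:0:0    -t4:8300 /dev/nvme0n1   # scratch/VM space (rest of disk)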
For the all-in-one approach, reconnect the 4 TB HDD, disable the
bootable flag on the appropriate partition, and mount the data
partition(s)/ file system(s).
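For example (partition number and mount point are placeholders; note
that on a GPT disk the "boot" flag marks the EFI system partition
rather than a classic bootable flag):

    sudo parted /dev/sda set 1 boot off
    # Then add the data file system to /etc/fstab, using its UUID from blkid:
    #   UUID=<from blkid>  /srv/media  ext4  defaults  0  2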
For the workstation and server approach, share the 4 TB HDD data file
system(s) from the server and mount them on the workstation.
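With NFS, for instance, that looks roughly like this (paths, host name,
and subnet are placeholders):

    # On the server, in /etc/exports:
    /srv/media  192.168.1.0/24(ro,root_squash)

    # then: sudo exportfs -ra

    # On the workstation, in /etc/fstab:
    server:/srv/media  /srv/media  nfs  defaults  0  0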
In both cases, think about getting another 4 TB HDD and implementing
RAID1 (mirror). HDD sectors become unreadable over time, and your
files, directories, file systems, and/or partition tables will become
corrupt if you do not have RAID. Automatic repair by RAID is far easier
than restoring from backups/ archives or forensic disk drive data
recovery. Your motherboard firmware may provide hardware RAID, or you
can buy a hardware RAID card. Linux offers several choices of software
RAID.
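With Linux md, for example, a two-disk mirror is created roughly like
this (device names are placeholders, and creating an array destroys
what is on the member partitions; to keep existing data, create a
degraded mirror with "missing", copy the data over, then add the old
disk):

    sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
    sudo mkfs.ext4 /dev/md0
    cat /proc/mdstat    # watch the initial sync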
David