On 1/18/24 03:57, David Christensen wrote:
> On 1/17/24 22:44, gene heskett wrote:
>> On 1/18/24 00:50, David Christensen wrote:
>> The migration took two passes because udev can't make up its alleged
>> mind, so I was finally forced to use rescue mode to edit fstab to
>> mount it by UUID, and that worked; I've got /home on the copy right now.
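For reference, a by-UUID fstab entry has this general shape; the UUID below is a placeholder (the real one comes from blkid), and the device, filesystem type, and options are examples, not taken from Gene's system:

```
# Get the real UUID with:  blkid /dev/sda3    (device is an example)
# Then the /etc/fstab line looks like (UUID is a placeholder):
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /home  ext4  defaults  0  2
```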
> Congratulations! :-)
>> and I took the 60G of swap out too, since I've never used more than
>> 20G with any gfx program, so I figure 47G on /dev/sda is enough.
> 1 GB swap works for me. When a memory leak gets out of control, I do
> not have to wait long for the lock-up.
>> So now none of the raid is mounted, but the 30+ second lag when
>> opening a write path is still there, so I was erroneously blaming the
>> raid. So I've narrowed the problem
> Good to know.
>> but w/o a good clue what to do next.
> Find the needle in the haystack or do a fresh install. I prefer the
> latter, because I can estimate the effort and I am reasonably confident
> of the outcome.
>> One thing that bothers me is there is no way the installer's parted
>> shows partition names for non-RAID disks. To me that is a serious bug.
>> It appears from the help that it can LABEL a partition but can't read
>> that LABEL back.
> When installing to UEFI/GPT, I am able to label partitions in the Debian
> Installer, the labels are visible in the installer, and the labels
> persist on disk after installation is complete.
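For what it's worth, GPT partition names can also be set and read back after install. A sketch follows; the device, partition number, and name are examples only, and the commands are stored and printed rather than executed, so it is safe to run as-is (run the printed commands by hand, as root):

```shell
# Examples only: /dev/sda, partition 1, and the name "home" are placeholders.
set_name="parted /dev/sda name 1 home"   # give GPT partition 1 the name "home"
read_names="lsblk -o NAME,PARTLABEL"     # PARTLABEL column shows GPT names
printf '%s\n' "$set_name" "$read_names"
```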
>> parted, when asked to "print all", does that just fine, but the pipe
>> doesn't send it to less, so the top 60% of a "print all" listing
>> scrolls off screen at some fraction of c. Not exactly helpful. I
>> have other things to do while I cogitate on what to do next.
> The following works as expected on my machine:
>
> 2024-01-18 00:34:41 root@laalaa ~
> # parted -l | less
>> Many thanks to all who helped.
> YW. :-)
> If you use rsync(1), I suggest using some kind of integrity checking
> tool to verify that the source and destination file systems are
> identical. I prefer BSD mtree(8):
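A lighter-weight stand-in for mtree(8) is a sha256 manifest. A minimal, self-contained sketch using throwaway directories (the real check would generate the manifest on the old RAID mount and run the "-c" half against the new /home; none of these paths are Gene's actual mount points):

```shell
# Minimal integrity check with a sha256 manifest (stand-in for mtree).
set -e
src=$(mktemp -d)        # stands in for the old RAID /home
dst=$(mktemp -d)        # stands in for the new /home on /dev/sda
manifest=$(mktemp)

echo "some file contents" > "$src/file.txt"
cp -a "$src/file.txt" "$dst/file.txt"     # stands in for the rsync copy

# Build a manifest of the source, then verify it against the destination:
( cd "$src" && find . -type f -print0 | xargs -0 sha256sum ) > "$manifest"
( cd "$dst" && sha256sum -c "$manifest" )   # reports OK per matching file
```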
I assume I'd have to remount the raid somewhere, like /raid?
Whew! That's got more arguments than rsync...
> The old /home RAID10 still has its metadata on disk. I would install
> the "mdadm" package, edit /etc/fstab, copy and rework the old /home line
> (new mount point, add option "ro"), create the mount point, and mount.
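The steps David describes could look something like the following. The array device and mount point are assumptions, not read from Gene's machine, and the commands are stored and printed rather than executed, so the sketch is safe to run anywhere (run the printed commands by hand, as root):

```shell
# Sketch only: /dev/md0 and /mnt/oldhome are assumed names.
assemble="mdadm --assemble --scan"            # assemble arrays from on-disk metadata
mkmnt="mkdir -p /mnt/oldhome"                 # scratch mount point
mount_ro="mount -o ro /dev/md0 /mnt/oldhome"  # read-only, so the old data can't change
printf '%s\n' "$assemble" "$mkmnt" "$mount_ro"
```

Mounting read-only makes the old array a safe reference copy for the integrity comparison.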
I believe mdadm is already installed. At least enough to assemble and
mount this RAID10 and use it for /home for the last nearly 2 years.
Now after all this folderol, all 4 of the SSDs are reporting read
errors at very high LBAs.
All 4 drives report the same POH, 21027 hours, for the occurrence
of the error; that sounds like it could be just one crash or dirty power
down, in which case it should be repairable.
Do we have a repair utility that will force the drive to reallocate a
spare sector and fix those?
I have issued a smartctl -t long on all 4 drives; results in about 3 hours.
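When the long tests finish, something like the following would show the verdict and the sector counters (a sketch; /dev/sda stands for each of the four drives in turn, and the commands are printed rather than executed). On the repair question: a pending sector is normally remapped by the drive when it is next written, so if the self-test log pinpoints an LBA, deliberately writing that sector (for example with hdparm's --write-sector, which destroys the data in that one sector) is the usual way to force reallocation; worth double-checking against the hdparm man page before trying it.

```shell
# Sketch: repeat per drive (sda..sdd); printed, not executed. Run as root.
selftest_log="smartctl -l selftest /dev/sda"  # pass/fail and first failing LBA
attributes="smartctl -A /dev/sda"             # Reallocated_Sector_Ct, Current_Pending_Sector
printf '%s\n' "$selftest_log" "$attributes"
```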
> David
Cheers, Gene Heskett.
--
"There are four boxes to be used in defense of liberty:
soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author, 1940)
If we desire respect for the law, we must first make the law respectable.
- Louis D. Brandeis