On Sat, 11 Jan 2025, Michael Stone wrote:
> > root@titan ~ mdadm --misc /dev/md4 --stop
>
> This is incorrect syntax, and a no-op (so the array did not stop). You want
> `mdadm --misc --stop /dev/md4`. The --misc is implied so you can just use
> `mdadm --stop /dev/md4`
I ran the command
root@t
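Following the syntax correction quoted above, a minimal sketch of the stop-and-verify sequence (device name as in the thread; run as root):
# --misc is implied, so the short form is enough:
mdadm --stop /dev/md4
# confirm md4 has disappeared from the kernel's RAID status:
cat /proc/mdstat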
On Sat, Jan 11, 2025 at 12:11:39PM +0100, Roger Price wrote:
I am unable to erase an unwanted RAID 1 array. Command cat /proc/mdstat
reported
md4 : active raid1 sdb7[0]
20970368 blocks super 1.0 [2/1] [U_]
bitmap: 1/1 pages [4KB], 65536KB chunk
I understand that the array has to
> Sent: Saturday, January 11, 2025 at 9:51 AM
> From: "Roger Price"
> To: "debian-user Mailing List"
> Subject: Re: Removing an unwanted RAID 1 array
>
> On Sat, 11 Jan 2025, Greg Wooledge wrote:
>
> > On Sat, Jan 11, 2025 at 13:10:51 +010
On Sat, 11 Jan 2025, Greg Wooledge wrote:
> On Sat, Jan 11, 2025 at 13:10:51 +0100, Roger Price wrote:
> > On Sat, 11 Jan 2025, Michel Verdier wrote:
> >
> > > If I remember well you have to first set the device as faulty with --fail
> > > before --remove could be accepted.
> >
> > No luck :
>
On Sat, Jan 11, 2025 at 13:10:51 +0100, Roger Price wrote:
> On Sat, 11 Jan 2025, Michel Verdier wrote:
>
> > If I remember well you have to first set the device as faulty with --fail
> > before --remove could be accepted.
>
> No luck :
>
> root@titan ~ mdadm --fail /dev/md4 --remove /dev/sdb7
> But if this is the last device you can erase the partition to remove RAID
> information.
I intend to erase the partition, but I hoped for something cleaner from an mdadm RAID management point of view.
Roger
he device as faulty with --fail
before --remove could be accepted. But if this is the last device you can
erase the partition to remove RAID information.
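For the record, the cleaner mdadm-level removal usually suggested in this situation is to stop the array and then wipe the member's superblock; mdadm will typically refuse to --fail/--remove the only remaining member of a running array, which is why that route gave "No luck" above. A minimal sketch (device names as in this thread):
mdadm --stop /dev/md4                  # the array must be inactive first
mdadm --zero-superblock /dev/sdb7      # erase the md metadata on the partition
# also delete any ARRAY line for md4 from /etc/mdadm/mdadm.conf, then:
update-initramfs -u                    # so the initramfs stops looking for md4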
I am unable to erase an unwanted RAID 1 array. Command cat /proc/mdstat
reported
md4 : active raid1 sdb7[0]
20970368 blocks super 1.0 [2/1] [U_]
bitmap: 1/1 pages [4KB], 65536KB chunk
I understand that the array has to be inactive before it can be removed, so I
stopped it, but
On Tue, 24 Dec 2024 15:45:31 +0100 (CET)
Roger Price wrote:
> File /proc/mdstat indicates a dying RAID device with an output
> section such as
>
> md3 : active raid1 sdg6[0]
> 871885632 blocks super 1.0 [2/1] [U_]
> bitmap: 4/7 pages [16KB], 65536KB chun
On Tue, 24 Dec 2024, Greg Wooledge wrote:
On Tue, Dec 24, 2024 at 15:45:31 +0100, Roger Price wrote:
md3 : active raid1 sdg6[0]
871885632 blocks super 1.0 [2/1] [U_]
bitmap: 4/7 pages [16KB], 65536KB chunk
Note the [U-].
There isn't any [U-] in that output. There is [U_].
Hi,
On Tue, Dec 24, 2024 at 03:45:31PM +0100, Roger Price wrote:
> I would like to scan /proc/mdstat and set a flag if [U-], [-U] or [--]
> occur.
Others have pointed out your '-' vs '_' confusion. But are you sure you
wouldn't rather just rely on the "mdadm --monitor" command that emails
you whe
Roberto C. Sánchez (12024-12-24):
> I think that '==' is the wrong tool.
string1 == string2
string1 = string2
True if the strings are equal. = should be used with the test
command for POSIX conformance. When used with the [[ command,
On Tue, Dec 24, 2024 at 10:37:29 -0500, Roberto C. Sánchez wrote:
> I think that '==' is the wrong tool. That is testing for string
> equality, whilst you are looking for a partial match. This is what I was
> able to get working after hacking on it for a minute or two:
>
> #! /bin/bash -u
> set -x
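The quoted script is cut off by the archive preview; a minimal sketch of one way to do the partial match it is after (my reconstruction, not Roberto's actual code):
#!/bin/bash
# Set a flag if any md status field shows a missing member: [U_], [_U], [__], ...
if grep -q '\[[U_]*_[U_]*\]' /proc/mdstat; then
    degraded=1
else
    degraded=0
fi
echo "degraded=$degraded"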
Hi Roger,
On Tue, Dec 24, 2024 at 03:45:31PM +0100, Roger Price wrote:
> File /proc/mdstat indicates a dying RAID device with an output section such
> as
>
> md3 : active raid1 sdg6[0]
> 871885632 blocks super 1.0 [2/1] [U_]
> bitmap: 4/7 pages [16KB], 65536KB c
> File /proc/mdstat indicates a dying RAID device with an output section such
> as
>
> md3 : active raid1 sdg6[0]
> 871885632 blocks super 1.0 [2/1] [U_]
> bitmap: 4/7 pages [16KB], 65536KB chunk
>
> Note the [U-].
I can't see a "[U-]", only a "[U_]"
Stefan
On Tue, Dec 24, 2024 at 15:45:31 +0100, Roger Price wrote:
> File /proc/mdstat indicates a dying RAID device with an output section such
> as
>
> md3 : active raid1 sdg6[0]
> 871885632 blocks super 1.0 [2/1] [U_]
> bitmap: 4/7 pages [16KB], 65536KB chunk
>
>
Roger Price (12024-12-24):
> File /proc/mdstat indicates a dying RAID device with an output section such
> as
Maybe try to find a more script-friendly source for that information in
/sys/class/block/md127/md/?
Regards,
--
Nicolas George
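A sketch of what that script-friendly source provides (md127 is just the name from Nicolas's example; these attributes are part of the kernel's md sysfs interface):
# number of missing members; 0 means the array is complete:
cat /sys/class/block/md127/md/degraded
# overall array state, e.g. "clean" or "active":
cat /sys/class/block/md127/md/array_state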
File /proc/mdstat indicates a dying RAID device with an output section
such as
md3 : active raid1 sdg6[0]
871885632 blocks super 1.0 [2/1] [U_]
bitmap: 4/7 pages [16KB], 65536KB chunk
Note the [U-]. The "-" says /dev/sdh is dead. I would like to scan /proc/mdstat
and
On 09/10/24 at 21:10, Jochen Spieker wrote:
Andy Smith:
Hi,
On Wed, Oct 09, 2024 at 08:41:38PM +0200, Franco Martelli wrote:
Do you know whether MD is clever enough to send an email to root when it
fails the device? Or have I to keep an eye on /proc/mdstat?
For more than a decade mdadm has s
Andy Smith:
> Hi,
>
> On Wed, Oct 09, 2024 at 08:41:38PM +0200, Franco Martelli wrote:
>> Do you know whether MD is clever enough to send an email to root when it
>> fails the device? Or have I to keep an eye on /proc/mdstat?
>
> For more than a decade mdadm has shipped with a service that runs i
Hi,
On Wed, Oct 09, 2024 at 08:41:38PM +0200, Franco Martelli wrote:
> Do you know whether MD is clever enough to send an email to root when it
> fails the device? Or have I to keep an eye on /proc/mdstat?
For more than a decade mdadm has shipped with a service that runs in
monitor mode to do thi
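A sketch of the Debian-side setup that monitor service relies on (the MAILADDR value and the test invocation are the standard ones; adjust to taste):
# /etc/mdadm/mdadm.conf -- where the monitor sends Fail/DegradedArray/
# SparesMissing notifications:
MAILADDR root
# one-off test that generates a TestMessage event for every array,
# confirming that local mail delivery actually works:
mdadm --monitor --scan --oneshot --test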
On 08/10/24 at 20:40, Andy Smith wrote:
Hi,
On Tue, Oct 08, 2024 at 04:58:46PM +0200, Jochen Spieker wrote:
Why is the RAID still considered healthy? At some point I
would expect the disk to be kicked from the RAID.
This will happen when/if MD can't compensate by reading data from
That is exactly what was confusing me here.
> What I would not do at this point is subject it to more physical
> stress than unavoidable. Unless you absolutely must, do not physically
> unplug or remove that disk before the RAID array has resilvered onto
> the new disk. It's currently
e...@gmx.us:
> On 10/8/24 16:07, Jochen Spieker wrote:
>>| Oct 06 14:27:11 jigsaw kernel: I/O error, dev sdb, sector 9361257600 op
>>0x0:(READ) flags 0x0 phys_seg 150 prio class 3
>>| Oct 06 14:27:30 jigsaw kernel: I/O error, dev sdb, sector 9361275264 op
>>0x0:(READ) flags 0x4000 phys_seg 161 pr
would definitely question a value of 0 for failed
(current pending and offline uncorrectable) _and_ reallocated sectors
for a disk that's reporting I/O errors, for example. _At least_ one of
those should be >0 for a truthful storage device in that situation.
What I would not do at this
On 10/8/24 16:07, Jochen Spieker wrote:
| Oct 06 14:27:11 jigsaw kernel: I/O error, dev sdb, sector 9361257600 op
0x0:(READ) flags 0x0 phys_seg 150 prio class 3
| Oct 06 14:27:30 jigsaw kernel: I/O error, dev sdb, sector 9361275264 op
0x0:(READ) flags 0x4000 phys_seg 161 prio class 3
| Oct 06 1
ncern right now though.
>
> Here is a thing I wrote about it quite some time ago:
>
>
> https://strugglers.net/~andy/mothballed-blog/2015/11/09/linux-software-raid-and-drive-timeouts/#how-to-check-set-drive-timeouts
Thanks a lot again.
>> Do you think I should do remove the dr
| Oct 06 14:37:20 jigsaw kernel: I/O error, dev sdb, sector 9400871680 op
0x0:(READ) flags 0x0 phys_seg 160 prio class 3
… and so on. On the second RAID check, the numbers are not the same, but
in the same range.
> If the disk is a few days away from being replaced, I would not
> bother sh
so when there's issues. The
kernel SCSI layer will try several times, so the drive's timeout is
multiplied. Only if this ends up exceeding 30s will you get a read
error, and the message from MD about rescheduling the sector.
> The data is still readable from the other disk in the RAID, ri
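A sketch of the timeout checks the linked article and the explanation above refer to (device name illustrative; many desktop drives do not support SCT ERC at all):
# how long the drive itself retries a bad sector (SCT Error Recovery Control);
# "Disabled" means it may retry far longer than the kernel is willing to wait:
smartctl -l scterc /dev/sdb
# tell the drive to give up after 7 seconds (value is in tenths of a second),
# so MD gets a read error it can repair from the other mirror:
smartctl -l scterc,70,70 /dev/sdb
# the kernel-side SCSI command timeout the 30s figure above refers to:
cat /sys/block/sdb/device/timeout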
Jochen Spieker wrote:
> I have two disks in a RAID-1:
>
> | $ cat /proc/mdstat
> | Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5]
> [raid4] [raid10]
> | md0 : active raid1 sdb1[2] sdc1[0]
> | 5860390400 blocks super 1.2 [2/2] [UU]
> |
Hey,
please forgive me for posting a question that is not Debian-specific,
but maybe somebody here can explain this to me. Ten years ago I would
have posted to Usenet instead.
I have two disks in a RAID-1:
| $ cat /proc/mdstat
| Personalities : [raid1] [linear] [multipath] [raid0] [raid6
Hi Marc,
On 20/05/24 at 14:35, Marc SCHAEFER wrote:
3. grub BOOT FAILS IF ANY LV HAS dm-integrity, EVEN IF NOT LINKED TO /
if I reboot now, grub2 complains about rimage issues, clears the screen,
and then I am at the grub2 prompt.
Booting is only possible with Debian rescue, disabling the dm-int
ld record the exact address
where the kernel & initrd was, regardless of abstractions layers :->)
Recently, I have been playing with RAID-on-LVM (I was mostly using LVM
on md before, which worked with grub), and it works too.
Where grub fails, is if you have /boot on the same LVM volume g
> I found this [1], quoting: "I'd also like to share an issue I've
> discovered: if /boot's partition is a LV, then there must not be a
> raidintegrity LV anywhere before that LV inside the same VG. Otherwise,
> update-grub will show an error (disk `lvmid/.../...' not found) and GRUB
> cannot boot.
Hello,
On Wed, May 22, 2024 at 10:13:06AM +, Andy Smith wrote:
> metadata tags to some PVs prevented grub from assembling them,
grub is indeed very fragile if you use dm-integrity anywhere on any of
your LVs on the same VG where /boot is (or at least if in the list
of LVs, the dm-integrity pr
Hello,
On Wed, May 22, 2024 at 08:57:38AM +0200, Marc SCHAEFER wrote:
> I will try this work-around and report back here. As I said, I can
> live with /boot on RAID without dm-integrity, as long as the rest can be
> dm-integrity+raid protected.
I'm interested in how you get on.
Hello,
On Wed, May 22, 2024 at 08:57:38AM +0200, Marc SCHAEFER wrote:
> I will try this work-around and report back here. As I said, I can
> live with /boot on RAID without dm-integrity, as long as the rest can be
> dm-integrity+raid protected.
So, enable dm-integrity on all LVs,
ity enabled.
I will try this work-around and report back here. As I said, I can
live with /boot on RAID without dm-integrity, as long as the rest can be
dm-integrity+raid protected.
[1]
https://unix.stackexchange.com/questions/717763/lvm2-integrity-feature-breaks-lv-activation
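For reference, a sketch of how integrity is switched on or off per RAID LV with LVM's own tooling, which is what the work-around of leaving /boot's LV (and anything before it) without dm-integrity comes down to (vg1/root is named in this thread; "vg1/data" is only an example):
# enable dm-integrity on a raid1 LV that grub never has to read:
lvconvert --raidintegrity y vg1/data
# disable it again on the LV holding / (or /boot) if grub refuses to boot:
lvconvert --raidintegrity n vg1/root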
egritysetup (from LUKS), but LVM RAID PVs -- I don't use
LUKS encryption anyway on that system
2) the issue is not the kernel not supporting it, because when the
system is up, it works (I have done tests to destroy part of the
underlying devices, they get detected and fixed correctly)
On 20/05/24 at 14:35, Marc SCHAEFER wrote:
Any idea what could be the problem? Any way to just make grub2 ignore
the rimage (sub)volumes at setup and boot time? (I could live with / aka
vg1/root not using dm-integrity, as long as the data/docker/etc volumes
are integrity-protected) ? Or how to
Hello,
1. INITIAL SITUATION: WORKS (no dm-integrity at all)
I have an up-to-date Debian bookworm system that boots correctly with
kernel 6.1.0-21-amd64.
It is setup like this:
- /dev/nvme1n1p1 is /boot/efi
- /dev/nvme0n1p2 and /dev/nvme1n1p2 are the two LVM physical volumes
- a volume g
files that glow blue. ;-)
My files glow Greene so I am safe
>
>
>>> On 12/13/23 10:42, Pocket wrote:
>>>> After removing raid, I completely redesigned my network to be more in line
>>>> with the howtos and other information.
>>>
>>> Plea
), restoring from the snapshot should
produce a set of files that work correctly.
Radioactive I see
Do not eat files that glow blue. ;-)
On 12/13/23 10:42, Pocket wrote:
After removing raid, I completely redesigned my network to be more in line with
the howtos and other information.
Please
Sent from my iPad
> On Dec 14, 2023, at 4:09 AM, David Christensen
> wrote:
>
> On 12/13/23 08:51, Pocket wrote:
>> I gave up using raid many years ago and I used the extra drives as backups.
>> Wrote a script to rsync /home to the backup drives.
>
>
>
On 12/13/23 08:51, Pocket wrote:
I gave up using raid many years ago and I used the extra drives as
backups.
Wrote a script to rsync /home to the backup drives.
While external HDD enclosures can work, my favorite is mobile racks:
https://www.startech.com/en-us/hdd/drw150satbk
https
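A minimal sketch of the kind of rsync-to-backup-drive script mentioned above (paths and options are illustrative, not Pocket's actual script):
#!/bin/bash
# Mirror /home onto a mounted backup drive, keeping permissions, hard links,
# ACLs and xattrs; --delete makes the copy an exact mirror of the source.
rsync -aHAX --delete /home/ /mnt/backup/home/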
On 7/2/23 13:11, Mick Ab wrote:
On 19:58, Sun, 2 Jul 2023 David Christensen
On 7/2/23 10:23, Mick Ab wrote:
I have a software RAID 1 array of two hard drives. Each of the two disks
contains the Debian operating system and user data.
I am thinking of changing the motherboard because of
On 02.07.2023 22:23, Mick Ab wrote:
I have a software RAID 1 array of two hard drives. Each of the two
disks contains the Debian operating system and user data.
I am thinking of changing the motherboard because of problems that
might be connected to the current motherboard. The new
On 19:58, Sun, 2 Jul 2023 David Christensen
> On 7/2/23 10:23, Mick Ab wrote:
> > I have a software RAID 1 array of two hard drives. Each of the two disks
> > contains the Debian operating system and user data.
> >
> > I am thinking of changing the motherboard becaus
On 7/2/23 10:23, Mick Ab wrote:
I have a software RAID 1 array of two hard drives. Each of the two disks
contains the Debian operating system and user data.
I am thinking of changing the motherboard because of problems that might be
connected to the current motherboard. The new motherboard
On Sun, 2 Jul 2023 18:23:31 +0100
Mick Ab wrote:
> I am thinking of changing the motherboard because of problems that
> might be connected to the current motherboard. The new motherboard
> would be the same make and model as the current motherboard.
>
> Would I need to recreate t
I have a software RAID 1 array of two hard drives. Each of the two disks
contains the Debian operating system and user data.
I am thinking of changing the motherboard because of problems that might be
connected to the current motherboard. The new motherboard would be the same
make and model as
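The md metadata lives in superblocks on the member partitions, not in the motherboard or controller, so a like-for-like board swap normally only needs the arrays to be re-assembled. A sketch of checking that after the swap (member names illustrative; run from the installed system or a rescue shell):
# show the superblock each member carries with it (array UUID, role, state):
mdadm --examine /dev/sda1 /dev/sdb1
# assemble every array found by scanning those superblocks, then verify:
mdadm --assemble --scan
cat /proc/mdstat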
Tim Woodall (12023-03-17):
> Yes. It's possible. Took me about 5 minutes to work out the steps. All
> of which are already mentioned upthread.
All of them, except one.
> mdadm --build ${md} --level=raid1 --raid-devices=2 ${d1} missing
Until now, all suggestions with mdadm starte
ly fail a disk then store it in a safe deposit box or
> > > > something as
> > > > a backup, but I have not gotten around to it.
> > > >
> > > > It sounds to me like adding an iSCSI volume (e.g. from AWS) to the RAID
> > > > as
>
(plus a hot
spare). On top of that is LUKS, and on top of that is LVM. I keep meaning
to manually fail a disk then store it in a safe deposit box or something as
a backup, but I have not gotten around to it.
It sounds to me like adding an iSCSI volume (e.g. from AWS) to the RAID as
an additional
> spare). On top of that is LUKS, and on top of that is LVM. I keep meaning
> > to manually fail a disk then store it in a safe deposit box or something as
> > a backup, but I have not gotten around to it.
> >
> > It sounds to me like adding an iSCSI volume (e.g. from AWS)
On 3/17/23 12:36, Gregory Seidman wrote:
On Fri, Mar 17, 2023 at 06:00:46PM +0300, Reco wrote:
[...]
PS There's that old saying, "RAID is not a substitute for a backup".
What you're trying to do sounds suspiciously similar to an old "RAID
split-mirror" backup
d
umount /mnt/fred
mdadm --build ${md} --level=raid1 --raid-devices=2 ${d1} missing
echo "Mounting single disk raid"
mount ${md} /mnt/fred
ls -al /mnt/fred
mdadm ${md} --add ${d2}
sleep 10
echo "Done sleeping - sync had better be done!"
mdadm ${md} --fail ${d2}
mdadm ${md
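The quoted script is cut off by the archive; a self-contained sketch of the same split-mirror sequence, using mdadm --wait instead of a fixed sleep (device and mount-point values are placeholders):
#!/bin/bash
set -eu
md=/dev/md0
d1=/dev/sdX1     # disk that already holds the filesystem
d2=/dev/sdY1     # disk that will receive the mirror copy
mnt=/mnt/fred
umount "$mnt"
# --build creates a superblock-less RAID1 on top of the existing data:
mdadm --build "$md" --level=raid1 --raid-devices=2 "$d1" missing
mount "$md" "$mnt"
mdadm "$md" --add "$d2"
mdadm --wait "$md" || true   # wait for the resync onto $d2 (returns 1 if
                             # there was nothing left to wait for)
# Split the mirror: the removed disk now holds a point-in-time copy.
mdadm "$md" --fail "$d2"
mdadm "$md" --remove "$d2"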
Gregory Seidman wrote:
> On Fri, Mar 17, 2023 at 06:00:46PM +0300, Reco wrote:
> [...]
> > PS There's that old saying, "RAID is not a substitute for a backup".
> > What you're trying to do sounds suspiciously similar to an old "RAID
> > split-m
Nicolas George (12023-03-17):
> It is not vagueness, it is genericness: /dev/something is anything and
> contains anything, and I want a solution that works for anything.
Just to be clear: I KNOW that what I am asking, the ability to
synchronize an existing block device onto another over the netwo
Greg Wooledge (12023-03-17):
> > I have a block device on the local host /dev/something with data on it.
^^^
There. I have data, therefore, any solution that assumes the data is not
there can only be proposed by somebody who di
On Fri, Mar 17, 2023 at 06:00:46PM +0300, Reco wrote:
[...]
> PS There's that old saying, "RAID is not a substitute for a backup".
> What you're trying to do sounds suspiciously similar to an old "RAID
> split-mirror" backup technique. Just saying.
This t
On Fri, Mar 17, 2023 at 05:01:57PM +0100, Nicolas George wrote:
> Dan Ritter (12023-03-17):
> > If Reco didn't understand your question, it's because you are
> > very light on details.
>
> No. Reco's answers contradict the very first sentence of my first
> e-mail.
The first sentence of your first
On Fri, 17 Mar 2023, Nicolas George wrote:
Dan Ritter (12023-03-17):
If Reco didn't understand your question, it's because you are
very light on details.
No. Reco's answers contradict the very first sentence of my first
e-mail.
Is this possible?
How can Reco's answers contradict that?
Re
Dan Ritter (12023-03-17):
> If Reco didn't understand your question, it's because you are
> very light on details.
No. Reco's answers contradict the very first sentence of my first
e-mail.
--
Nicolas George
Nicolas George wrote:
> Reco (12023-03-17):
> > Well, theoretically you can use Btrfs instead.
>
> No, I cannot. Obviously.
>
> > What you're trying to do sounds suspiciously similar to an old "RAID
> > split-mirror" backup technique.
>
>
Reco (12023-03-17):
> Well, theoretically you can use Btrfs instead.
No, I cannot. Obviously.
> What you're trying to do sounds suspiciously similar to an old "RAID
> split-mirror" backup technique.
Absolutely not.
If you do not understand the question, it is okay to no
nclusion, implementing mdadm + iSCSI + ext4 would be probably the
best way to achieve whatever you want to do.
PS There's that old saying, "RAID is not a substitute for a backup".
What you're trying to do sounds suspiciously similar to an old "RAID
split-mirror" backup technique. Just saying.
Reco
Reco (12023-03-17):
> Yes, it will destroy the contents of the device, so backup
No. If I accepted to have to rely on an extra copy of the data, I would
not be trying to do something complicated like that.
--
Nicolas George
ool resilvering"
(syncronization between mirror sides) concerns only actual data residing
in a zpool. I.e. if you have 1Tb mirrored zpool which is filled to 200Gb
you will resync 200Gb.
In comparison, mdadm RAID resync will happily read 1Tb from one drive
and write 1Tb to another *unless* yo
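The preview cuts off mid-sentence; if the point is the usual one about write-intent bitmaps (an assumption on my part -- a bitmap only shortens the resync when a previous member is re-added, not when a brand-new disk is added), a sketch of managing one:
# add an internal write-intent bitmap to an existing array:
mdadm --grow /dev/md0 --bitmap=internal
# drop it again if the small write overhead is unwanted:
mdadm --grow /dev/md0 --bitmap=none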
/md0 --level=mirror --force --raid-devices=1 \
> --metadata=1.0 /dev/local_dev missing
>
> --metadata=1.0 is highly important here, as it's one of the few mdadm
> metadata formats that keeps said metadata at the end of the device.
Well, I am sorry to report that you did not rea
sor architecture restrictions, and somewhat unusual design
decisions for the filesystem storage.
So let's keep it on MDADM + iSCSI for now.
> What I want to do:
>
> 1. Stop programs and umount /dev/something
>
> 2. mdadm --create /dev/md0 --level=mirror --force --raid-devices=1 \
&g
).
What I want to do:
1. Stop programs and umount /dev/something
2. mdadm --create /dev/md0 --level=mirror --force --raid-devices=1 \
--metadata-file /data/raid_something /dev/something
→ Now I have /dev/md0 that is an exact image of /dev/something, with
changes on it synced instantaneously.
3
On 2/23/23 11:05, Tim Woodall wrote:
On Wed, 22 Feb 2023, Nicolas George wrote:
Is there a solution to have a whole-disk RAID (software, mdadm) that is
also partitioned in GPT and bootable in UEFI?
I've wanted this ...
I think only hardware raid where the bios thinks it's a s
On Wed, 22 Feb 2023, Nicolas George wrote:
Hi.
Is there a solution to have a whole-disk RAID (software, mdadm) that is
also partitioned in GPT and bootable in UEFI?
I've wanted this but settled for using dd to copy the start of the disk,
fdisk to rewrite the GPT properly then mda
Hello,
I have seen some installations with following setup:
GPT
sda1 sdb1 bios_grub md1 0.9
sda2 sdb2 efi md2 0.9
sda3 sdb3 /boot md3 0.9
sda4 sdb4 / md? 1.1
on such installations it's important that the grub installation is done
with "grub-install --removable"
I mean there were some grub bugs about
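A sketch of what that layout implies in practice (mount point and target are the usual ones for amd64 UEFI; adjust to the actual system):
# metadata 0.9 (or 1.0) sits at the end of the partition, so the firmware
# still sees a plain FAT filesystem on each disk's ESP member:
mount /dev/md2 /boot/efi
# --removable installs GRUB at the fallback path EFI/BOOT/BOOTX64.EFI,
# so booting does not depend on NVRAM entries tied to one particular disk:
grub-install --target=x86_64-efi --efi-directory=/boot/efi --removable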
On 22.02.2023 at 17:07, Nicolas George wrote:
> Unfortunately, that puts the partition table
> and EFI partition outside the RAID: if you have to add/replace a disk,
> you need to partition and reinstall GRUB, that makes a few more
> manipulations on top of syncing the RAID.
Yes, i g
Nicolas George wrote:
> Hi.
>
> Is there a solution to have a whole-disk RAID (software, mdadm) that is
> also partitioned in GPT and bootable in UEFI?
Not that I know of. An EFI partition needs to be FAT32 or VFAT.
What I think you could do:
Partition the disks with GPT: 2 par
o make an USB
stick that was bootable in legacy mode, bootable in UEFI mode and usable
as a regular USB stick (spoiler: it worked, until I tried it with
Windows.)
But it will not help for this issue.
> The only issue, i have had a look at, was the problem to have a raid,
> that is bootable
up (not use them at all) and unfortunately that
applies to standard GPT tools as well, but the dual bootability can
solve some problems.
The only issue I have had a look at was the problem of having a RAID that
is bootable no matter which one of the drives initially fails, a problem
that can
Hi.
Is there a solution to have a whole-disk RAID (software, mdadm) that is
also partitioned in GPT and bootable in UEFI?
What I imagine:
- RAID1, mirroring: if you ignore the RAID, the data is there.
- The GPT metadata is somewhere not too close to the beginning of the
drive nor too close
computers.
When I boot the flash drive in a Dell Precision 3630 Tower that has Windows
11 Pro installed on the internal NVMe drive, the internal PCIe NVMe drive is
not visible to Linux:
The work-around is to change CMOS Setup -> System Configuration -> SATA
Operation from "RAID On" to "
acity storage costs to a
minimum."
I believe that is marketing speak for "the computer supports Optane
Memory", not "every machine comes with Optane Memory".
I believe that's the pseudo-RAID you are seeing in the UEFI setup screen.
Maybe you can see the physical
8:41 11.2G 0 part
> `-sda4_crypt 254:00 11.2G 0 crypt /
> sr0 11:01 1024M 0 rom
>
> 2022-12-23 18:46:19 root@laalaa ~/laalaa.tracy.holgerdanske.com
> # l /dev/n*
> /dev/null /dev/nvram
>
> /dev/net:
> ./ ../ tun
>
>
> The w
s/dfb/p/precision-3630-workstation/pd,
the machine has Optane. I believe that's the pseudo-RAID you are
seeing in the UEFI setup screen.
Maybe you can see the physical drives using raid utilities.
Jeff
boot the flash drive in a Dell Precision 3630 Tower that has
Windows 11 Pro installed on the internal NVMe drive, the internal PCIe
NVMe drive is not visible to Linux:
The work-around is to change CMOS Setup -> System Configuration -> SATA
Operation from "RAID On" to "AHCI".
.2G 0 part
>`-sda4_crypt 254:00 11.2G 0 crypt /
> sr0 11:01 1024M 0 rom
>
> 2022-12-23 18:46:19 root@laalaa ~/laalaa.tracy.holgerdanske.com
> # l /dev/n*
> /dev/null /dev/nvram
>
> /dev/net:
> ./ ../ tun
>
>
> The work-around is to c
254:00 11.2G 0 crypt /
sr0 11:01 1024M 0 rom
2022-12-23 18:46:19 root@laalaa ~/laalaa.tracy.holgerdanske.com
# l /dev/n*
/dev/null /dev/nvram
/dev/net:
./ ../ tun
The work-around is to change CMOS Setup -> System Configuration -> SATA
Operation from "RAID On" to
On 10.11.2022 at 14:40, Curt wrote:
(or maybe a RAID array is
conceivable over a network and a distance?).
Not only conceivable, but indeed practicable: Linbit DRBD
Hi Gary,
On Mon, Aug 22, 2022 at 10:00:34AM -0400, Gary Dale wrote:
> I'm running Debian/Bookworm on an AMD64 system. I recently added a second
> drive to it for use in a RAID1 array.
What was the configuration of the array before you added the new
drive? Was it a RAID-1 with one mis
7813768832 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/3] [UUU]
bitmap: 0/30 pages [0KB], 65536KB chunk
unused devices: <none>
root@hawk:~#
You may notice from my output that I have raid on 0, 1, 3, and 4. 4 is
the spare. 3 and 4 are not in numeric order. And there is no 2. So I'm
not s
On 8/22/22, Gary Dale wrote:
> I'm running Debian/Bookworm on an AMD64 system. I recently added a
> second drive to it for use in a RAID1 array. However I'm now getting
> regular messages about "SparesMissing event on...".
>
> cat /proc/mdstat shows the problem: active raid1 sda1[0] sdb1[2] - the
I'm running Debian/Bookworm on an AMD64 system. I recently added a
second drive to it for use in a RAID1 array. However I'm now getting
regular messages about "SparesMissing event on...".
cat /proc/mdstat shows the problem: active raid1 sda1[0] sdb1[2] - the
newly added drive is showing up as
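A sketch of the usual first check for "SparesMissing" (an educated guess, not a confirmed diagnosis for this system): the ARRAY line in mdadm.conf promises a spare that the running array does not have.
# what the running array actually contains (device name illustrative):
mdadm --detail /dev/md0 | grep -E 'Raid Devices|Spare Devices'
# what the config promises; a stale "spares=1" here triggers SparesMissing:
grep '^ARRAY' /etc/mdadm/mdadm.conf
# print fresh ARRAY lines to paste into /etc/mdadm/mdadm.conf, then rebuild
# the initramfs so the change is picked up at boot:
mdadm --detail --scan
update-initramfs -u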
Many Thanks for the very helpful reply, Reco!
--
Felix Natter
debian/rules!
Thanks Sven!
--
Felix Natter
debian/rules!
Many Thanks for the very helpful reply Andy!
--
Felix Natter
debian/rules!
Darac Marjal writes:
> On 11/09/2021 17:55, Felix Natter wrote:
>> hello fellow Debian users,
>>
>> I have an SSD for the root filesystem, and two HDDs using RAID1 for
>> /storage running Debian10. Now I need a plan B in case the upgrade
>> fails.
>
> Just want to check that you've not missed som
hi Andrei,
Andrei POPESCU writes:
thank you for your answer.
> On Sb, 11 sep 21, 18:55:56, Felix Natter wrote:
>> hello fellow Debian users,
>>
>> I have an SSD for the root filesystem, and two HDDs using RAID1 for
>> /storage running Debian10. Now I need a plan B in case the upgrade
>> fails.
On Tuesday 14 September 2021 12:55:41 Dan Ritter wrote:
> Gene Heskett wrote:
> > This is interesting and I will likely do it when I install the
> > debian-11.1-net-install I just burnt.
> >
> > But, I have installed 4, 1 terabyte samsung SSD's on a separate
>
Gene Heskett wrote:
> This is interesting and I will likely do it when I install the
> debian-11.1-net-install I just burnt.
>
> But, I have installed 4, 1 terabyte samsung SSD's on a separate non-raid
> controller card which I intend to use as a software raid-6 or 10
up (disk-wise),
> and found out that when reinstalling Debian11, the d-i does recognize
> the RAID1 (/storage) and can reuse it while keeping the data.
>
> My question is: How does d-i know how the individual HDDs were combined
> into a RAID1? For all that "sudo fdisk -l" s
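To the question of how d-i knows: each member partition carries an md superblock recording the array's UUID, level and the member's role, and the installer (via mdadm) simply scans for those. A sketch of looking at the same information by hand (partition name illustrative):
# dump one member's superblock: array UUID, RAID level, device role, state:
mdadm --examine /dev/sdb1
# group members by UUID across all block devices and print ARRAY lines:
mdadm --examine --scan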