On Mon, 2020-06-08 at 23:25 -0400, Jon LaBadie wrote:
> On Tue, Jun 02, 2020 at 10:57:10AM +0100, Patrick O'Callaghan wrote:
> > I have a powered USB dock with a couple of SATA drives configured as
> > RAID1, and used only for nightly backups. The (minimal) manual for the
> > dock tells me it will power down after 30 minutes idle time, however I
> > don't see this happening.
On Fri, 2020-06-05 at 14:58 -0700, Samuel Sieb wrote:
> On 6/5/20 2:30 PM, Patrick O'Callaghan wrote:
> > On Fri, 2020-06-05 at 13:02 -0700, Samuel Sieb wrote:
> > > You don't need an mdadm.conf file or anything. The mdraid system will
> > > automatically build an array when it sees the drives appear.
On Sat, 2020-06-06 at 08:03 +1000, Cameron Simpson wrote:
> > It's very possible (indeed likely) that I'm stopping the array in the
> > wrong way, but I don't see any other way to do it. The mdadm man page
> > mentions '-A' as the way to start an array, but doesn't talk about how
> to stop it.
Before we get into this, two things:
I'd parse the array device members from /proc/mdstat unless there's
something easier to parse.
I'd parse the location of drives-by-partition-id from "lsblk -bfr". That
shows mount points too.
So that lets you figure out:
lsblk: mount-point -> mounted device
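The /proc/mdstat half of that can be sketched in shell. This is a minimal sketch under the assumption of the usual mdstat layout (fields 5 onward on an "mdN :" line are the members); it reads a here-doc sample so it runs anywhere, and `parse_mdstat < /proc/mdstat` would be the live form:

```shell
# Sketch: extract each array's member devices from /proc/mdstat.
parse_mdstat() {
  awk '/^md/ {
    printf "%s:", $1
    # fields 5..NF are members like "sdb1[0]"; strip the [role] suffix
    for (i = 5; i <= NF; i++) {
      dev = $i
      sub(/\[.*/, "", dev)
      printf " %s", dev
    }
    print ""
  }'
}

parse_mdstat <<'EOF'
Personalities : [raid1]
md127 : active raid1 sdc1[1] sdb1[0]
      976630464 blocks super 1.2 [2/2] [UU]
unused devices: <none>
EOF
# prints: md127: sdc1 sdb1
```

Cross-reference the member names against "lsblk -bfr" output to map them back to mount points and partition UUIDs.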
On 6/5/20 2:30 PM, Patrick O'Callaghan wrote:
On Fri, 2020-06-05 at 13:02 -0700, Samuel Sieb wrote:
You don't need an mdadm.conf file or anything. The mdraid system will
automatically build an array when it sees the drives appear. And if you
are using UUIDs in any mount descriptions, that will
On Fri, 2020-06-05 at 13:02 -0700, Samuel Sieb wrote:
> On 6/5/20 4:25 AM, Patrick O'Callaghan wrote:
> > On Thu, 2020-06-04 at 23:15 +0100, Patrick O'Callaghan wrote:
> > > > The ancient standard is "- - -" and has worked whenever I have used
> > > > it, but google may be able to confirm.
On 6/5/20 4:25 AM, Patrick O'Callaghan wrote:
On Thu, 2020-06-04 at 23:15 +0100, Patrick O'Callaghan wrote:
The ancient standard is "- - -" and has worked whenever I have used
it, but google may be able to confirm.
OK, thanks.
Right, that seems to work, however when I bring the drives back online
they now have different numbers.
It is the uuid of the md device.
The sd device will change when you plug and unplug a usb device
(delete and rescan) and cannot be counted on to be the same. You
might have to script something to read out the mdstat config file,
stop the md device, and then clean up the underlying sd* devices it
was using.
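A sketch of that scripted stop sequence, shown as a dry run that only prints the commands instead of executing them (the device names are illustrative; on a real system the commands run as root, and the `run` wrapper is dropped):

```shell
run() { echo "+ $*"; }   # dry run: print each command instead of executing

run umount /raid               # nothing may hold the filesystem open
run mdadm --stop /dev/md127    # stop the array, releasing its members
# detach the (illustrative) member disks so the dock can power them down
run sh -c 'echo 1 > /sys/block/sdb/device/delete'
run sh -c 'echo 1 > /sys/block/sdc/device/delete'
```

`mdadm --stop` (`-S`) is the counterpart to the `-A`/`--assemble` mentioned earlier in the thread.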
On Fri, 2020-06-05 at 07:07 -0500, Roger Heflin wrote:
> The sdX should change.
>
> If you have an mdadm.conf file with a uuid in it, the md device
> definition will always be the same.
>
> Something like this:
>
> MAILADDR root
> AUTO +imsm +1.x -all
> ARRAY /dev/md13 metadata=1.2 level=raid6 num-devices=7
> name=localhost.localdomain:11 UUID=a54550f7:da200f3e:90606715
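Rather than typing the ARRAY line by hand, mdadm can emit it from the running array. A dry-run sketch (prints the commands; on a real system they run as root, without the `run` wrapper):

```shell
run() { echo "+ $*"; }   # dry run: print instead of execute

run mdadm --detail --scan      # prints "ARRAY ... UUID=..." for each array
# append the generated lines to the config so assembly is stable by UUID:
run sh -c 'mdadm --detail --scan >> /etc/mdadm.conf'
```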
On Thu, 2020-06-04 at 23:15 +0100, Patrick O'Callaghan wrote:
> > The ancient standard is "- - -" and has worked whenever I have used
> > it, but google may be able to confirm.
>
> OK, thanks.
Right, that seems to work, however when I bring the drives back online
they now have different numbers.
On Thu, 2020-06-04 at 12:16 -0500, Roger Heflin wrote:
> If you have an md device running overtop of it or LVM that would need
> to be stopped or disabled before device deletion.
Yes, I've done that.
> The scans are on the pci hardware that is over the device,
> in /sys/devices/pci0*
hdparm -S: read the man page to understand how to
set the values; they are not linear.
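The non-linearity is documented in hdparm(8): -S values 1 to 240 count in units of 5 seconds, 241 to 251 in units of 30 minutes, and 0 disables the spindown timer. A small decoder as a sketch:

```shell
# Decode an hdparm -S argument into a timeout in seconds, per hdparm(8):
#   0        = spindown timer disabled
#   1..240   = value * 5 seconds
#   241..251 = (value - 240) * 30 minutes
#   (252, 253 and 255 have special meanings, not handled here)
spindown_seconds() {
  v=$1
  if [ "$v" -eq 0 ]; then echo 0
  elif [ "$v" -le 240 ]; then echo $((v * 5))
  elif [ "$v" -le 251 ]; then echo $(( (v - 240) * 1800 ))
  else echo "special"
  fi
}

spindown_seconds 120   # 600  (10 minutes)
spindown_seconds 242   # 3600 (1 hour)
```

So `hdparm -S 242 /dev/sdX` would request a one-hour spindown, assuming the drive (and the USB bridge) honors the command.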
On Thu, 2020-06-04 at 14:21 +0200, Bob Marcan wrote:
> On Tue, 02 Jun 2020 10:57:10 +0100
> Patrick O'Callaghan wrote:
> > I have a powered USB dock with a couple of SATA drives configured as
> > RAID1, and used only for nightly backups.
On Thu, 2020-06-04 at 12:16 +0100, Patrick O'Callaghan wrote:
> > you might be able to do this:
> > echo 1 > /sys/block/sdX/device/delete
> > that will remove the sdX device; to get it back you would need to
> > reboot or echo "- - -" to the "scan" device under the device that controls it
On Wed, 2020-06-03 at 18:04 -0500, Roger Heflin wrote:
you might be able to do this:
echo 1 > /sys/block/sdX/device/delete
that will remove the sdX device; to get it back you would need to
reboot or do an
echo "- - -" to the "scan" device under the device that controls it to
bring it back.
You may need to also use smartctl and/or hdparm (if this usb
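The bring-back half of the cycle, sketched as a dry run (prints the command instead of touching sysfs). The "scan" file is commonly reached via /sys/class/scsi_host rather than walking /sys/devices/pci0*; host2 is a placeholder for whichever host owned the deleted disk:

```shell
run() { echo "+ $*"; }   # dry run: print instead of execute

# Rescan the controlling SCSI host; "- - -" means any channel,
# any target, any LUN, so the deleted disk is rediscovered.
run sh -c 'echo "- - -" > /sys/class/scsi_host/host2/scan'
```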
On Wed, 2020-06-03 at 11:24 +0100, Patrick O'Callaghan wrote:
> On Wed, 2020-06-03 at 08:33 +1000, Cameron Simpson wrote:
> > On 02Jun2020 10:57, Patrick O'Callaghan wrote:
> > > I have a powered USB dock with a couple of SATA drives configured as
> > > RAID1, and used only for nightly backups.
On Wed, Jun 03, 2020 at 11:37:51AM -0500, Roger Heflin wrote:
>
> It is not much of a useful security measure, since root can su - user
> and take a look see anyway.
It's more for protecting against root-level services and not a
malicious admin running as root.
--
Jonathan Billings
On Wed, Jun 3, 2020 at 10:13 AM Jonathan Billings wrote:
>
> On Tue, Jun 02, 2020 at 04:38:17PM +0100, Patrick O'Callaghan wrote:
> > $ sudo lsof /raid
> > lsof: WARNING: can't stat() fuse.gvfsd-fuse
On 2020-06-03 17:11, Jonathan Billings wrote:
This is just a red herring. lsof, running as root, can't poke around
in the gvfs mounts for a user. The FUSE mounts for gvfs are locked
down so only users can get at them, as a security measure. lsof looks
at all processes, including the gvfs ones.
On Wed, 2020-06-03 at 11:11 -0400, Jonathan Billings wrote:
> On Tue, Jun 02, 2020 at 04:38:17PM +0100, Patrick O'Callaghan wrote:
> > $ sudo lsof /raid
> > lsof: WARNING: can't stat() fuse.gvfsd-fuse file system /run/user/1000/gvfs
> > Output information may be incomplete.
> > lsof: WARNING: can't stat() fuse.portal file system /run/user/1000/doc
On Tue, Jun 02, 2020 at 04:38:17PM +0100, Patrick O'Callaghan wrote:
> $ sudo lsof /raid
> lsof: WARNING: can't stat() fuse.gvfsd-fuse file system /run/user/1000/gvfs
> Output information may be incomplete.
> lsof: WARNING: can't stat() fuse.portal file system /run/user/1000/doc
> Output information may be incomplete.
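The warnings themselves suggest the workaround: lsof's -e option exempts a filesystem from the stat() calls it cannot perform. A dry-run sketch using the paths from the warnings (prints the command; run it for real as root):

```shell
run() { echo "+ $*"; }   # dry run: print instead of execute

# Exempt the per-user FUSE mounts that root's lsof cannot stat():
run lsof -e /run/user/1000/gvfs -e /run/user/1000/doc /raid
```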
On Wed, 2020-06-03 at 08:33 +1000, Cameron Simpson wrote:
> On 02Jun2020 10:57, Patrick O'Callaghan wrote:
> > I have a powered USB dock with a couple of SATA drives configured as
> > RAID1, and used only for nightly backups.
On Wed, 2020-06-03 at 06:12 +0800, Ed Greshko wrote:
> On 2020-06-02 23:38, Patrick O'Callaghan wrote:
> > On Tue, 2020-06-02 at 10:17 -0500, Roger Heflin wrote:
> > > you might want to run a lsof and see if anything has open files
> > > on it, and if nothing has open files on it
On 2020-06-02 23:38, Patrick O'Callaghan wrote:
> On Tue, 2020-06-02 at 10:17 -0500, Roger Heflin wrote:
>> you might want to run a lsof and see if anything has open files on
>> it, and if nothing has open files on it, then it may be something
>> monitoring, say, space.
> $ sudo lsof /raid
> lsof: WARNING: can't stat() fuse.gvfsd-fuse file system /run/user/1000/gvfs
On Tue, 2020-06-02 at 11:51 -0500, Roger Heflin wrote:
> I have not dug too far into those tools, but the machine I have a
> desktop running on does give me messages about low free disk space, so
> something in the gui side of things is doing the call that df uses to
> get data. df will
On Tue, 2020-06-02 at 10:45 -0600, Greg Woods wrote:
> On Tue, Jun 2, 2020 at 9:27 AM Patrick O'Callaghan wrote:
> > (the only change is in the last two fields, where you had 1 1 rather
> > than my 0 0, but it made no difference, not that I thought it would). I
> > also did the daemon-reload thing just in case.
I have not dug too far into those tools, but the machine I have a
desktop running on does give me messages about low free disk space, so
something in the gui side of things is doing the call that df uses to
get data. df will spin things up, and if it samples often enough
would prevent it from being idle long enough to spin down.
On Tue, Jun 2, 2020 at 9:27 AM Patrick O'Callaghan wrote:
>
> (the only change is in the last two fields, where you had 1 1 rather
> than my 0 0, but it made no difference, not that I thought it would). I
> also did the daemon-reload thing just in case.
>
I have found that if you change an fstab
On Tue, 2020-06-02 at 10:17 -0500, Roger Heflin wrote:
> you might want to run a lsof and see if anything has open files on
> it, and if nothing has open files on it, then it may be something
> monitoring, say, space.
$ sudo lsof /raid
lsof: WARNING: can't stat() fuse.gvfsd-fuse file system /run/user/1000/gvfs
On Tue, 2020-06-02 at 20:24 +0800, Ed Greshko wrote:
> On 2020-06-02 20:07, Ed Greshko wrote:
> > On 2020-06-02 19:29, Patrick O'Callaghan wrote:
> > > Doesn't seem to do anything. I've tested with this line in /etc/fstab:
> > > UUID=6cb66da2-147a-4f3c-a513-36f6164ab581 /raid ext4
> > > rw,x-systemd.automount,x-systemd.device-timeout=1,x-systemd.idle-timeout=60
you should not need any mount options.
I have a mounted md array and have gone to great lengths to make sure
nothing usually accesses it, and the drives do spin down and stay spun
down until something accesses it. Making sure nothing was accessing
it was not the easiest. A "df" with no options will spin things up.
On Tue, Jun 02, 2020 at 11:57:13AM +0100, Patrick O'Callaghan wrote:
> Thanks Ed. I managed to find the man pages under systemd-mount(1) but
> it took a while. The man page for mount(1) should have a reference but
> doesn't.
They aren't really mount options, but something that the
systemd-fstab-generator understands.
On 2020-06-02 20:07, Ed Greshko wrote:
> On 2020-06-02 19:29, Patrick O'Callaghan wrote:
>> Doesn't seem to do anything. I've tested with this line in /etc/fstab:
>>
>> UUID=6cb66da2-147a-4f3c-a513-36f6164ab581 /raid ext4
>> rw,x-systemd.automount,x-systemd.device-timeout=1,x-systemd.idle-timeout=60
On 2020-06-02 19:29, Patrick O'Callaghan wrote:
> Doesn't seem to do anything. I've tested with this line in /etc/fstab:
>
> UUID=6cb66da2-147a-4f3c-a513-36f6164ab581 /raid ext4
> rw,x-systemd.automount,x-systemd.device-timeout=1,x-systemd.idle-timeout=60
> 0 0
>
> followed by 'mount -a'.
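After editing the fstab line, systemd also has to regenerate and restart the units it derives from it; 'mount -a' alone does not pick up the x-systemd.* changes. A dry-run sketch (prints the commands; the unit name raid.automount follows from the /raid mount point):

```shell
run() { echo "+ $*"; }   # dry run: print instead of execute

run systemctl daemon-reload           # regenerate units from /etc/fstab
run systemctl restart raid.automount  # automount unit derived from /raid
run systemctl status raid.automount   # inspect the unit and its timeout
```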
On Tue, 2020-06-02 at 11:57 +0100, Patrick O'Callaghan wrote:
> On Tue, 2020-06-02 at 18:18 +0800, Ed Greshko wrote:
> > On 2020-06-02 17:57, Patrick O'Callaghan wrote:
> > > I have a powered USB dock with a couple of SATA drives configured as
> > > RAID1, and used only for nightly backups.
On Tue, 2020-06-02 at 18:18 +0800, Ed Greshko wrote:
> On 2020-06-02 17:57, Patrick O'Callaghan wrote:
> > I have a powered USB dock with a couple of SATA drives configured as
> > RAID1, and used only for nightly backups.
On 2020-06-02 17:57, Patrick O'Callaghan wrote:
> I have a powered USB dock with a couple of SATA drives configured as
> RAID1, and used only for nightly backups.
I have a powered USB dock with a couple of SATA drives configured as
RAID1, and used only for nightly backups. The (minimal) manual for the
dock tells me it will power down after 30 minutes idle time, however I
don't see this happening. I presume that something (such as the md
system) is touching the drives.