On Tue, 2020-06-23 at 19:17 -0500, Roger Heflin wrote:
The UUID= mount, I believe, requires a link that udev creates when the
device gets created and udev scans it. That link may or may not exist
yet when mdadm returns 0, as the udev link creation is not in the
mdadm code path.

So wait a second before you mount it. In reality it will probably
happen much faster.
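
A minimal sketch of that wait, polling for the by-uuid link instead of
sleeping a fixed second (the UUID and mount point are placeholders, not
from the actual script):

#!/bin/bash
# Wait up to 10 seconds for the udev-created by-uuid symlink to appear
# before mounting. UUID and mount point are example values.
UUID="123e4567-e89b-42d3-a456-426614174000"
LINK="/dev/disk/by-uuid/$UUID"

for i in $(seq 1 20); do
    [ -e "$LINK" ] && break
    sleep 0.5
done

if [ -e "$LINK" ]; then
    mount "UUID=$UUID" /mnt/backup
else
    echo "udev link $LINK never appeared" >&2
    exit 1
fi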
On Tue, 2020-06-23 at 09:23 -0700, Doug H. wrote:
> I'm assuming that once mdadm returns 0 the array is up and running.

Perhaps a simpler way to ensure that it is running is to check for a
static file that you place on the raid. If it is there then the raid
is up. It would show the file [...]
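
A sketch of that check, assuming a marker file was created on the array
once, ahead of time (.raid-online and the mount point are made-up
names):

#!/bin/bash
# Wait until the filesystem on the array is mounted and readable by
# polling for a marker file. Path and file name are example values.
MARKER=/mnt/backup/.raid-online

for i in $(seq 1 30); do
    [ -f "$MARKER" ] && break
    sleep 1
done

[ -f "$MARKER" ] || { echo "array not up, aborting backup" >&2; exit 1; }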
On Tue, 2020-06-23 at 06:12 -0500, Roger Heflin wrote:
How are you mounting the filesystem? (I don't remember details from
the prior discussion.) Directly with /dev/mdXXX, or are you using some
other link? If you are using some other link then udev needs time to
get the original mdXXX creation event and then create the other links
pointing to mdXXX.
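
One way to wait for those links is udevadm settle, which blocks until
the pending udev event queue has been processed (a sketch; /dev/md127
and the mount point are stand-ins for the real names):

# Assemble the array, then block until udev has handled the resulting
# events and created its symlinks. Device and mount point are examples.
mdadm --assemble --scan
udevadm settle --timeout=30
mount /dev/md127 /mnt/backup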
On Mon, 2020-06-22 at 18:35 +0200, Roberto Ragusa wrote:
> On 2020-06-21 18:54, Patrick O'Callaghan wrote:
> > 0 3 * * * root /usr/local/bin/dock up && /usr/bin/borgmatic ;
> > /usr/local/bin/dock down

I would suggest avoiding multiple commands on a cron entry; better to
have a simple /usr/local/bin/do_backup with the three commands inside.
There you can log [...]
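
Along these lines (do_backup is Roberto's suggested name; the log path
is an assumption):

#!/bin/bash
# /usr/local/bin/do_backup -- wrap the three steps so the cron entry
# stays trivial and everything is logged in one place (assumed path).
LOG=/var/log/do_backup.log
{
    echo "=== backup started $(date) ==="
    if /usr/local/bin/dock up; then
        /usr/bin/borgmatic
    fi
    /usr/local/bin/dock down
    echo "=== backup finished $(date) ==="
} >>"$LOG" 2>&1

The cron entry then shrinks to:

0 3 * * * root /usr/local/bin/do_backup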
On Sun, 2020-06-21 at 18:06 -0500, Roger Heflin wrote:
> It would appear that the md stop is still running when you remove
> the disk.
>
> So in both cases you run the exact same script but from cron it fails?

Yes.

> I would think you either need a loop validating that the md stopped,
> or just put a simple few-second sleep delay between the md stop and
> the disk stop.
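
A sketch of such a validation loop, checking /proc/mdstat after
mdadm --stop (md127 is again a stand-in for the real device):

#!/bin/bash
# Stop the array, then wait until it actually disappears from
# /proc/mdstat before cutting power to the dock.
umount /mnt/backup
mdadm --stop /dev/md127

for i in $(seq 1 10); do
    grep -q '^md127 ' /proc/mdstat || break
    sleep 1
done

if grep -q '^md127 ' /proc/mdstat; then
    echo "md127 still active, not powering off the dock" >&2
    exit 1
fi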
I have a backup script (using Borgmatic) I run every night, where the
target is a RAID1 array connected by a USB dock. The dock is normally
off, so the script turns it on, does the backup, then turns it off
again. This is the cron entry:

# cat /etc/cron.d/borgmatic
# Run borgmatic every day at 3am
0 3 * * * root /usr/local/bin/dock up && /usr/bin/borgmatic ; /usr/local/bin/dock down
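
The dock helper itself is never shown in the thread; a hypothetical
skeleton that pulls together the suggestions above (device names, mount
point, and the power-control steps are all assumptions) might look
like:

#!/bin/bash
# /usr/local/bin/dock -- hypothetical reconstruction; the real script
# was never posted. Device names and mount point are placeholders.
case "$1" in
    up)
        # hardware-specific: power on the USB dock here, then:
        mdadm --assemble --scan
        udevadm settle --timeout=30
        mount /dev/md127 /mnt/backup
        ;;
    down)
        umount /mnt/backup
        mdadm --stop /dev/md127
        sleep 5   # give md a moment to fully stop, per the thread
        # hardware-specific: power off the USB dock here
        ;;
    *)
        echo "usage: $0 up|down" >&2
        exit 2
        ;;
esac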