Re: [CentOS] Problem with mdadm, raid1 and automatically adds any disk to raid

2019-02-26 Thread Christofer C. Bell
On Mon, Feb 25, 2019 at 11:54 PM Simon Matter via CentOS 
wrote:

> >
> > What makes you think this has *anything* to do with systemd? Bitching
> > about systemd every time you hit a problem isn't helpful.  Don't.
>
> If it's not systemd, who else does it? Can you elaborate, please?
>

I'll wager it's the mdadm.service unit.  You're seeing systemd in the log
because systemd has a unit loaded that's managing your md devices.  The
package mdadm installs these files:

/usr/lib/systemd/system/mdadm.service
/usr/lib/systemd/system/mdmonitor-takeover.service
/usr/lib/systemd/system/mdmonitor.service

Perhaps if you turn off these services, you'll be able to manage your disks
without interference.  I do not use mdadm on my system; I'm just looking at
the contents of the rpm file on rpmfind.net.  That said, systemd isn't the
culprit here.  It's doing what it's supposed to do (starting a managed
service on demand).
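
A minimal sketch of how one might do that, assuming the unit names above are
what your system actually has (check first, and note that mdmonitor is the
piece that mails you about array failures, so you may want to keep it):

# see which of the md units exist and are active on this box
systemctl status mdmonitor.service mdadm.service
# stop a unit and keep it from being started again at boot
systemctl stop mdadm.service
systemctl disable mdadm.service
# or mask it so nothing can start it at all
systemctl mask mdadm.service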

I do concede the logs are confusing.  For example, this appears in my logs:

Feb 26 05:10:03 demeter systemd: Starting This service automatically renews
any certbot certificates found...

While there is no indication in the log, this is being started by:

[cbell@demeter log]$ systemctl status certbot-renew.timer
● certbot-renew.timer - This is the timer to set the schedule for automated renewals
   Loaded: loaded (/usr/lib/systemd/system/certbot-renew.timer; enabled; vendor preset: disabled)
   Active: active (waiting) since Thu 2019-02-21 17:54:43 CST; 4 days ago

Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.
[cbell@demeter log]$

And you can see where the log message comes from by querying the service unit
with journalctl:

[cbell@demeter log]$ journalctl -u certbot-renew.service | grep "This service" | tail -1
Feb 26 05:10:07 demeter.home systemd[1]: Started This service automatically renews any certbot certificates found.
[cbell@demeter log]$

You can see there's no indication in /var/log/messages that it's the
certbot-renew service (started by its timer) that's logging this.  So it's
easy to misinterpret where the messages are coming from, like your mdadm
messages.  Perhaps having the journal indicate which service or timer is
logging a message is a feature request for Lennart!
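
For what it's worth, the journal already records which unit a message belongs
to in its structured fields; the default short output just doesn't show it.  A
sketch, reusing the unit from the example above:

# show the structured fields (including UNIT= / _SYSTEMD_UNIT=) for each entry
journalctl -o verbose -u certbot-renew.service | tail -20
# systemd's own "Starting/Started" lines carry UNIT=; filter on it directly
journalctl UNIT=certbot-renew.service _PID=1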

Hope this helps!

-- 
Chris

"If you wish to make an apple pie from scratch, you must first invent the
Universe." -- Carl Sagan
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Problem with mdadm, raid1 and automatically adds any disk to raid

2019-02-26 Thread Simon Matter via CentOS
> On Mon, Feb 25, 2019 at 11:54 PM Simon Matter via CentOS
> 
> wrote:
>
>> >
>> > What makes you think this has *anything* to do with systemd? Bitching
>> > about systemd every time you hit a problem isn't helpful.  Don't.
>>
>> If it's not systemd, who else does it? Can you elaborate, please?
>>
>
> I'll wager it's the mdadm.service unit.  You're seeing systemd in the log
> because systemd has a unit loaded that's managing your md devices.  The
> package mdadm installs these files:
>
> /usr/lib/systemd/system/mdadm.service
> /usr/lib/systemd/system/mdmonitor-takeover.service
> /usr/lib/systemd/system/mdmonitor.service

I'm not sure what your box runs, but it's at least not CentOS 7.

CentOS 7 contains these md related units:
/usr/lib/systemd/system/mdadm-grow-continue@.service
/usr/lib/systemd/system/mdadm-last-resort@.service
/usr/lib/systemd/system/mdadm-last-resort@.timer
/usr/lib/systemd/system/mdmonitor.service
/usr/lib/systemd/system/mdmon@.service
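
To check what your own mdadm package ships (a sketch; the output naturally
differs between releases):

# list the systemd units and udev rules installed by the mdadm package
rpm -ql mdadm | grep -E '/systemd/|/udev/'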

The only md-related daemon running besides systemd is mdadm. I've never seen
such behavior with EL6 and the mdadm shipped there, so I don't think mdadm
itself would ever do such things.

The message produced comes from mdadm-last-resort@.timer. Whatever triggers
it, it's either systemd or something like systemd-udevd.
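
A sketch of how to inspect that timer and what pulls it in (the md2 instance
name is taken from the log message quoted earlier in this thread):

# show the unit files behind the template timer and the service it starts
systemctl cat mdadm-last-resort@.timer mdadm-last-resort@.service
# show the state of the instance that fired for md2
systemctl status 'mdadm-last-resort@md2.timer'
# the udev rules that request the timer when RAID members appear
grep -rn 'last-resort' /usr/lib/udev/rules.d/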

How is it not systemd doing it? Such things didn't happen with pre-systemd
distributions.

Regards,
Simon

___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


[CentOS] hplips

2019-02-26 Thread mark
Yeah, about that... Back in '12, we got a nice HP poster printer. They
don't support Linux, but a co-worker got the .ppd out of the Mac support
file.

I tried it, then I looked at the file in vi... and found it *only*
supported the 24" printer, *not* the 44" printer that we had.

Well, we just bought a replacement, since the old printer was EOL'd. This
time, I went to hplips, d/l and built the current one, and sure enough,
there's a .ppd for the DesignJet Z9.

Any bets on what I found? No? There was *NO* support for the 44" printer.
I saw *one* setting for something 30x42 (the 100' roll of paper is 42"
wide).

I joined and tried asking about that on the hplips site. That was last
week. Not a single response.

HP no longer does software support for their printers (I got that from the
engineer - we do have contract support). Had a friend d/l and install the
Mac ps driver... no .ppd.

Anyone know someone who's involved with hplips? Otherwise, I'm going to
have to spend half a day or a day hacking the damn .ppd.
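
In case it helps with the PPD hacking, a sketch of where to look (the keywords
are standard PPD entries; the filename here is just a placeholder for whatever
hplips generated):

# list the declared page sizes and maximum media dimensions
grep -E '^\*(PageSize|PageRegion|MaxMediaWidth|MaxMediaHeight|CustomPageSize)' hp-designjet_z9.ppd
# PPD dimensions are PostScript points (72 per inch), so a 44" roll needs
# MaxMediaWidth of at least 3168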

Btw, the server it's attached to is C6. One of these days I'll upgrade it
to C7, but the *only* thing it's used for now is these poster printers.

  mark

___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] hplips

2019-02-26 Thread Jon LaBadie
On Tue, Feb 26, 2019 at 01:48:19PM -0500, mark wrote:
> Yeah, about that... Back in '12, we got a nice HP poster printer. They
> don't support Linux, but a co-worker got the .ppd out of the Mac support
> file.
> 
> I tried it, then I looked at the file in vi... and found it *only*
> supported the 24" printer, *not* the 44" printer that we had.
> 
> Well, we just bought a replacement, since the old printer was EOL'd. This
> time, I went to hplips, d/l and built the current one, and sure enough,
> there's a .ppd for the DesignJet Z9.
> 
> Any bets on what I found? No? There was *NO* support for the 44" printer.
> I saw *one* setting for something 30x42 (the 100' roll of paper is 42"
> wide).

Might 30" be the maximum "height" (or length of paper for one print job).
If yes, the maximum "width" might be the 42 of 30x42.

Is 42" supported under windows?  If yes, the ppd file is likely compatible.

Jon
-- 
Jon H. LaBadie j...@jgcomp.com
 11226 South Shore Rd.  (703) 787-0688 (H)
 Reston, VA  20190  (703) 935-6720 (C)
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Problem with mdadm, raid1 and automatically adds any disk to raid

2019-02-26 Thread Jobst Schmalenbach
On Mon, Feb 25, 2019 at 05:24:44PM -0800, Gordon Messmer 
(gordon.mess...@gmail.com) wrote:
> On 2/24/19 9:01 PM, Jobst Schmalenbach wrote:
>
[snip]
> 
> What makes you think this has *anything* to do with systemd? Bitching about
> systemd every time you hit a problem isn't helpful.  Don't.

Because of this.

Feb 25 15:38:32 webber systemd: Started Timer to wait for more drives before activating degraded array md2..




-- 
When you want a computer system that works, just choose Linux; When you want a 
computer system that works, just, choose Microsoft.

  | |0| |   Jobst Schmalenbach, General Manager
  | | |0|   Barrett & Sales Essentials
  |0|0|0|   +61 3 9533 , POBox 277, Caulfield South, 3162, Australia
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Problem with mdadm, raid1 and automatically adds any disk to raid

2019-02-26 Thread Jobst Schmalenbach
On Tue, Feb 26, 2019 at 03:37:34PM +0100, Simon Matter via CentOS 
(centos@centos.org) wrote:
> > On Mon, Feb 25, 2019 at 11:54 PM Simon Matter via CentOS
> > 
> > wrote:
> >> > What makes you think this has *anything* to do with systemd? Bitching
> >> > about systemd every time you hit a problem isn't helpful.  Don't.
> >>
> >> If it's not systemd, who else does it? Can you elaborate, please?
> 
> How is it not systemd doing it? Such things didn't happen with pre-systemd
> distributions.

I just had a hardware failure of a RAID controller (well, they fail, that's
why we have backups).
This means that after putting the drives onto a new controller I have to
(re-)format them.

In CentOS 6 times this took me under an hour to fix, mostly due to the
rsyncing time.
Yesterday it took me over 6 hours to move a system.

Jobst


-- 
Why don't sheep shrink when it rains?

  | |0| |   Jobst Schmalenbach, General Manager
  | | |0|   Barrett & Sales Essentials
  |0|0|0|   +61 3 9533 , POBox 277, Caulfield South, 3162, Australia
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Problem with mdadm, raid1 and automatically adds any disk to raid

2019-02-26 Thread Jobst Schmalenbach
On Mon, Feb 25, 2019 at 11:23:12AM +, Tony Mountifield (t...@softins.co.uk) 
wrote:
> In article <20190225050144.ga5...@button.barrett.com.au>,
> Jobst Schmalenbach  wrote:
> > Hi.
> > CENTOS 7.6.1810, fresh install - use this as a base to create/upgrade 
> > new/old machines.
> > 
> > I was trying to setup two disks as a RAID1 array, using these lines
> > 
> >   mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1
> >   mdadm --create --verbose /dev/md1 --level=0 --raid-devices=2 /dev/sdb2 /dev/sdc2
> >   mdadm --create --verbose /dev/md2 --level=0 --raid-devices=2 /dev/sdb3 /dev/sdc3
> > 
> > then I did a lsblk and realized that I used --level=0 instead of --level=1 
> > (spelling mistake)
> 
> So I believe you need to do:
> 
> mdadm --zero-superblock /dev/sdb1
> mdadm --zero-superblock /dev/sdb2
>

I actually deleted the partitions, at first using fdisk, then parted (after
reading a few ideas on the internet).
From the second try onwards I also changed the partition sizes and
filesystems.
I also tried with one disk missing (either sda or sdb).
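
Deleting and re-creating the partitions doesn't touch the md superblock stored
inside them, which is why the arrays keep coming back (Gordon demonstrates
this further down).  A sketch of how to check for and clear the metadata
(device names as used in this thread; double-check before wiping anything):

# show whether an md superblock is still present in a partition
mdadm --examine /dev/sdb1
# clear it, per Tony's suggestion, or remove all signatures with wipefs
mdadm --zero-superblock /dev/sdb1
wipefs -a /dev/sdb1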


Jobst




-- 
If proof denies faith, and uncertainty denies proof, then uncertainty is proof 
of God's existence.

  | |0| |   Jobst Schmalenbach, General Manager
  | | |0|   Barrett & Sales Essentials
  |0|0|0|   +61 3 9533 , POBox 277, Caulfield South, 3162, Australia
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Problem with mdadm, raid1 and automatically adds any disk to raid

2019-02-26 Thread Gordon Messmer

On 2/26/19 6:37 AM, Simon Matter via CentOS wrote:

> How is it not systemd doing it? Such things didn't happen with pre-systemd
> distributions.



The following session is from a CentOS 6 system.  I created RAID devices on
two drives.  I then stopped the RAID devices and dd'd over the beginning of
each drive.  I then re-partitioned the drives.


At that point, the RAID devices auto-assemble.  They actually partially fail
below, but the behavior that this thread discusses is absolutely not
systemd-specific.


What you're seeing is that you're wiping the partition table, but not the
RAID metadata inside the partitions.  When you remove and then re-create the
partitions, you're hot-adding RAID components to the system.  They
auto-assemble, as they have (or should have) for a long time.  It's probably
more reliable under newer revisions, but this is long-standing behavior.
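
The mechanism behind the auto-assembly is udev: when a block device carrying
md metadata appears (here, the re-created partitions, whose 1.0 superblocks
sit near the end of each partition and so were never touched by the dd over
the start of the disk), udev hands it to mdadm in incremental mode.  A sketch
of where to see that on an EL system (paths differ between EL6 and EL7):

# find the md-related udev rules
ls /lib/udev/rules.d/*md* /usr/lib/udev/rules.d/*md* 2>/dev/null
# they invoke incremental assembly, roughly equivalent to doing this by hand:
mdadm --incremental /dev/vdb1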


The problem isn't systemd.  The problem is that you're not wiping what 
you think you're wiping.  You need to use "wipefs -a" on each partition 
that's a RAID component first, and then "wipefs -a" on the drive itself 
to get rid of the partition table.
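
Putting that together for the drives in the session below (a sketch; adjust
the device names, and remember wipefs removes signatures but does not zero
the data):

# stop any arrays that have auto-assembled before wiping their members
mdadm --stop /dev/md0 /dev/md1 /dev/md2
# remove md/filesystem signatures inside each member partition first
for p in /dev/vdb1 /dev/vdb2 /dev/vdb3 /dev/vdc1 /dev/vdc2 /dev/vdc3; do
  wipefs -a "$p"
done
# then remove the partition-table signatures on the drives themselves
wipefs -a /dev/vdb /dev/vdc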



[root@localhost ~]# dd if=/dev/zero of=/dev/vdb bs=512 count=1024
1024+0 records in
1024+0 records out
524288 bytes (524 kB) copied, 0.0757563 s, 6.9 MB/s
[root@localhost ~]# dd if=/dev/zero of=/dev/vdc bs=512 count=1024
1024+0 records in
1024+0 records out
524288 bytes (524 kB) copied, 0.0385181 s, 13.6 MB/s
[root@localhost ~]# kpartx -a /dev/vdb
  Warning: Disk has a valid GPT signature but invalid PMBR.
  Assuming this disk is *not* a GPT disk anymore.
  Use gpt kernel option to override.  Use GNU Parted to correct disk.
[root@localhost ~]# kpartx -a /dev/vdc
  Warning: Disk has a valid GPT signature but invalid PMBR.
  Assuming this disk is *not* a GPT disk anymore.
  Use gpt kernel option to override.  Use GNU Parted to correct disk.
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
unused devices: &lt;none&gt;
[root@localhost ~]# parted /dev/vdb -s mklabel gpt mkpart primary ext4 1M 200M mkpart primary ext4 200M 1224M mkpart primary ext4 1224M 100%
[root@localhost ~]# parted /dev/vdc -s mklabel gpt mkpart primary ext4 1M 200M mkpart primary ext4 200M 1224M mkpart primary ext4 1224M 100%

[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
unused devices: &lt;none&gt;
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 vdc3[1] vdb3[0]
  19775360 blocks super 1.0 [2/2] [UU]

md1 : active raid1 vdb2[0]
  999360 blocks super 1.0 [2/1] [U_]

md0 : active raid1 vdb1[0]
  194496 blocks super 1.0 [2/1] [U_]

unused devices: &lt;none&gt;

___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos