Re: [CentOS] Centos 7.6 & ether-wake

2019-02-25 Thread Kay Diederichs
On 2/22/19 11:04 PM, Gregory P. Ennis wrote:
> Everyone,
> 
> I have not been able to get ether-wake to work waking up other CentOS 7.6
> machines after the upgrade to CentOS 7.6. Has anyone else had this problem,
> and if so, any luck with a fix?
> 
> Greg Ennis
> 

We have no problem with ether-wake under CentOS 7.6. It has been working
for us just as it did under earlier versions.

Years ago, we had to replace some RHEL-provided ethernet driver modules
with ElRepo-provided ones for WOL to work. So check out ElRepo for newer
drivers for your cards.
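
A quick way to check whether the driver supports WOL and has it enabled
(a sketch; "em1" is a placeholder for your interface name, and ethtool
needs root):

  # show supported and currently enabled wake-on modes
  ethtool em1 | grep -i wake-on
  # enable magic-packet wake-up until the next reboot
  ethtool -s em1 wol g

"Supports Wake-on" should include "g", and "Wake-on" should read "g"
rather than "d", for ether-wake's magic packets to work.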

HTH,

Kay



[CentOS] Restarting nfs-server using systemctl resulted in error

2019-02-25 Thread Nurdiyana Ali
Hi All,

I am having strange issues with the NFS server on CentOS 7.2. I am unable
to restart the nfs-server service:

[root@hostname ~]# systemctl restart nfs-server

** (pkttyagent:20603): WARNING **: Unable to register authentication agent:
Timeout was reached
Error registering authentication agent: Timeout was reached
(g-io-error-quark, 24)
Failed to restart nfs-server.service: Connection timed out

Googling doesn't seem to yield answers. How do I restart this service?

Sincerely,
Nurdiyana Ali.


Re: [CentOS] Problem with mdadm, raid1 and automatically adds any disk to raid

2019-02-25 Thread Tony Mountifield
In article <20190225050144.ga5...@button.barrett.com.au>,
Jobst Schmalenbach  wrote:
> Hi.
> 
> CentOS 7.6.1810, fresh install - I use this as a base to create/upgrade
> new/old machines.
> 
> I was trying to set up two disks as a RAID1 array, using these lines:
> 
>   mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1
>   mdadm --create --verbose /dev/md1 --level=0 --raid-devices=2 /dev/sdb2 /dev/sdc2
>   mdadm --create --verbose /dev/md2 --level=0 --raid-devices=2 /dev/sdb3 /dev/sdc3
> 
> Then I did an lsblk and realized that I had used --level=0 instead of
> --level=1 (a typo). The size was reported as double because I had created
> a striped set by mistake, when I wanted a mirrored one.
> 
> Here my problem starts: I cannot get rid of the /dev/mdX devices no matter
> what I do (or try to do).
> 
> I tried to delete the mdX devices: I removed the disks by failing them,
> then removed each array md0, md1 and md2.
> I also did
> 
>   dd if=/dev/zero of=/dev/sdX bs=512 seek=$(($(blockdev --getsz /dev/sdX)-1024)) count=1024
>   dd if=/dev/zero of=/dev/sdX bs=512 count=1024
>   mdadm --zero-superblock /dev/sdX
> 
> Then I wiped each partition of the drives using fdisk.

The superblock is a property of each partition, not just of the whole disk.

So I believe you need to do:

mdadm --zero-superblock /dev/sdb1
mdadm --zero-superblock /dev/sdb2
mdadm --zero-superblock /dev/sdb3
mdadm --zero-superblock /dev/sdc1
mdadm --zero-superblock /dev/sdc2
mdadm --zero-superblock /dev/sdc3
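
And if any of the arrays are still assembled, they have to be stopped
before mdadm will let you zero the superblocks (a sketch, assuming the
md0-md2 names from your create commands):

cat /proc/mdstat        # check what is currently assembled
mdadm --stop /dev/md0
mdadm --stop /dev/md1
mdadm --stop /dev/md2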

Cheers
Tony
-- 
Tony Mountifield
Work: t...@softins.co.uk - http://www.softins.co.uk
Play: t...@mountifield.org - http://tony.mountifield.org


Re: [CentOS] Geany 1.34

2019-02-25 Thread H
On 02/14/2019 10:20 PM, Stephen John Smoogen wrote:
> On Thu, 14 Feb 2019 at 15:18, H  wrote:
>> On 02/14/2019 08:19 PM, Stephen John Smoogen wrote:
>>> On Thu, 14 Feb 2019 at 13:25, Stephen John Smoogen  wrote:
 On Thu, 14 Feb 2019 at 12:47, H  wrote:
> On 02/14/2019 05:58 PM, Tate Belden wrote:
>> FWIW, on Fedora 29, I'm running Geany 1.34.1 and didn't have to enable
>> anything other than the default repositories. So, it'd appear to at least
>> be in the stream.
>>
>> geany-1.34.1-2.fc29.x86_64
>>
>> On Thu, Feb 14, 2019 at 8:53 AM H  wrote:
>>
>>> On 01/04/2019 04:17 AM, H wrote:
>>>> Does anyone know if Geany 1.34 is headed to one of the repositories? I
>>>> am running version 1.31 and interested in 1.33+ since the markdown
>>>> plugin has apparently been updated.
>>> I raised this issue on the bugtracker for Fedora EPEL several weeks ago
>>> but have not seen any response. What can I do to raise this issue?
>>>
>>> Appreciate suggestions.
>>>
> Yes, I have seen it available for Fedora but was hoping it would be made 
> available for both Centos 6 and 7 as well.
>
 I am testing to see how much work needs to be done. I got it to build
 the geany part easily for EL7. Just do the following

 fedpkg clone geany
 cd geany
 fedpkg srpm
 fedpkg switch-branch epel7
 vi geany.spec
 # change 1.32 to 1.34.1
 fedpkg mockbuild

 el6 looks like it does not have new enough versions of vte and some
 other parts to work.

>>> The major problem with el6 is the lack of a new enough c++ compiler
>>> for the code. Trying to fix that is way more than I have time to work
>>> on.
>>>
>>>
 --
 Stephen J Smoogen.
>>>
>> That sounds great! Btw, might be time to install CentOS 7 on the one 
>> remaining CentOS 6 machine...
>>
>> If you finish it for CentOS 7, any chance it could be made available in the 
>> EPEL repository so a yum update works?
>>
> That will be up to the package maintainer versus me. I will contact
> them and find out.
>
>
Stephen, did you hear back from the maintainers? I have not seen any update on 
my request...



[CentOS] Policy issue: C7 and motion

2019-02-25 Thread mark
Not sure whose package let an error slip in, but I don't believe I've had
this issue before: SELinux is preventing /usr/bin/motion from map access
on the chr_file /dev/video1.

Yes, that should be allowed by default.
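
Until the policy package is fixed, a local module generated from the audit
log should work around it (a sketch using the standard audit2allow
workflow; the module name "my-motion" is arbitrary):

  # build a local policy module from the logged denials for motion
  ausearch -c 'motion' --raw | audit2allow -M my-motion
  # install it
  semodule -i my-motion.pp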

mark



Re: [CentOS] Centos 7.6 & ether-wake

2019-02-25 Thread eliezer
Can you be more specific about the hardware?
I have a setup with a Dell desktop, a Dell server, a SuperMicro server and
a couple of other devices.
From a CGI script on one server I use the following to wake the other:
/usr/bin/sudo /usr/sbin/ether-wake "XY::XY" -b && echo 1

All of my servers have Intel PRO 1 Gbit Ethernet NICs (1, 2 or 4 ports per
machine).

To make the desktop wake up I had to do the following:
- Reset the BIOS settings to default
- Reconfigure the BIOS to allow remote wake-up
- Disable a couple of power-related special sleep settings (on the desktop)
- Make sure that the switch can handle a 10 Mbps connection (since most of
these NICs stay at low power and low speed while waiting for WOL packets)
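
On the OS side, the WOL mode can also be made persistent across reboots in
the interface config (a sketch for the classic initscripts network service;
"em1" is a placeholder for the real interface name, and NetworkManager
setups may need a different approach):

  # in /etc/sysconfig/network-scripts/ifcfg-em1
  # the network service runs "ethtool -s em1 wol g" when the interface comes up
  ETHTOOL_OPTS="wol g"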

This works for me on at least three machines with CentOS 7.6:
# lsb_release -a
LSB Version:    :core-4.1-amd64:core-4.1-noarch
Distributor ID: CentOS
Description:    CentOS Linux release 7.6.1810 (Core)
Release:        7.6.1810
Codename:       Core

Let me know if you need some help,
Eliezer


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


-----Original Message-----
From: CentOS  On Behalf Of Gregory P. Ennis
Sent: Thursday, February 14, 2019 03:41
To: CentOS mailing list 
Subject: [CentOS] Centos 7.6 & ether-wake

Everyone,

I have not been able to get ether-wake to work waking up other CentOS 7.6
machines after the upgrade to CentOS 7.6. Has anyone else had this problem,
and if so, any luck with a fix?

Greg Ennis



Re: [CentOS] Restarting nfs-server using systemctl resulted in error

2019-02-25 Thread Gordon Messmer

On 2/25/19 3:18 AM, Nurdiyana Ali wrote:

> I am having strange issues with NFS server on CentOS 7.2.

(Obligatory: "7.2" means you haven't applied patches in a very long
time, and probably have a large number of security vulnerabilities on
this system as well as bugs you're likely to hit and then ask about
here. Please update, for your sake and ours.)

> [root@hostname ~]# systemctl restart nfs-server
>
> ** (pkttyagent:20603): WARNING **: Unable to register authentication agent:
> Timeout was reached

I think that's the sort of thing you'd see if you updated the dbus 
packages.  You'd need to restart the dbus service, which will break some 
things.  You're better off rebooting this time, if a dbus update is the 
cause.
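
A quick way to check that theory (a sketch; it compares the last dbus
update with the last boot):

  # when were the dbus packages last installed or updated?
  rpm -q --last dbus dbus-libs
  # when did the system last boot?
  who -b

If the dbus update is more recent than the boot, a reboot (or a carefully
planned dbus restart) is the way out.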





Re: [CentOS] Problem with mdadm, raid1 and automatically adds any disk to raid

2019-02-25 Thread Gordon Messmer

On 2/24/19 9:01 PM, Jobst Schmalenbach wrote:

> I tried to delete the mdX devices: I removed the disks by failing them,
> then removed each array md0, md1 and md2. I also did
>
>   dd if=/dev/zero of=/dev/sdX bs=512 seek=$(($(blockdev --getsz /dev/sdX)-1024)) count=1024

Clearing the initial sectors doesn't do anything to clear the data in 
the partitions.  They don't become blank just because you remove them.


Partition your drives, and then use "wipefs -a /dev/sd{b,c}{1,2,3}"
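
To see what it would erase first: wipefs without options is read-only and
just lists the signatures it finds, e.g.

  # list signatures (RAID superblock, filesystem magic) without erasing
  wipefs /dev/sdb1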



> I do NOT WANT this to happen, it creates the same "SHIT" (the incorrect
> array) over and over again (systemd frustration).



What makes you think this has *anything* to do with systemd? Bitching 
about systemd every time you hit a problem isn't helpful.  Don't.





Re: [CentOS] Centos 7.6 & ether-wake

2019-02-25 Thread Simon Matter via CentOS
> Can you be more specific about the hardware?
> I have a setup with a Dell desktop, a Dell server, a SuperMicro server and
> a couple of other devices.
> From a CGI script on one server I use the following to wake the other:
> /usr/bin/sudo /usr/sbin/ether-wake "XY::XY" -b && echo 1

Does it work if you do

  ether-wake -i <interface> "XY::XY"

instead? ether-wake sends on eth0 by default, so on machines with the newer
consistent interface names an explicit -i is usually needed.

Regards,
Simon



Re: [CentOS] Problem with mdadm, raid1 and automatically adds any disk to raid

2019-02-25 Thread Simon Matter via CentOS
> On 2/24/19 9:01 PM, Jobst Schmalenbach wrote:
>> I tried to delete the MDX, I removed the disks by failing them, then
>> removing each array md0, md1 and md2.
>> I also did
>>
>>   dd if=/dev/zero of=/dev/sdX bs=512 seek=$(($(blockdev --getsz /dev/sdX)-1024)) count=1024
>
>
> Clearing the initial sectors doesn't do anything to clear the data in
> the partitions.  They don't become blank just because you remove them.
>
> Partition your drives, and then use "wipefs -a /dev/sd{b,c}{1,2,3}"
>
>
>> I do NOT WANT this to happen, it creates the same "SHIT" (the incorrect
>> array) over and over again (systemd frustration).
>
>
> What makes you think this has *anything* to do with systemd? Bitching
> about systemd every time you hit a problem isn't helpful.  Don't.

If it's not systemd, what else does it? Can you elaborate, please?

Regards,
Simon
