Re: [CentOS] DHCP max-lease-time maximum

2016-07-07 Thread Götz Reinicke - IT Koordinator
On 06.07.16 at 18:19, John R Pierce wrote:
> On 7/6/2016 1:27 AM, Götz Reinicke - IT Koordinator wrote:
>> :)  ... the long lease is for some access points which we don't like to
>> configure statically, just plug in and run.
>
> why not configure reservations for those access points?
>
> the downside of a really long lease time is if you have to change
> something like DNS, gateway, whatever, the clients with a really long
> lease will not 'see' the change until 50% of the lease time expires, as
> that's the default refresh.
>
>
Hi, what do you mean by "reservations"? The APs in question just need
an IP for connecting to the management server, which is in the same subnet.

Thanks for your feedback. Götz


___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] DHCP max-lease-time maximum

2016-07-07 Thread Eero Volotinen
Static MAC/IP mapping on the DHCP server?

Eero
On 7.7.2016 at 12:38 PM, "Götz Reinicke - IT Koordinator" <
goetz.reini...@filmakademie.de> wrote:

On 06.07.16 at 18:19, John R Pierce wrote:
> On 7/6/2016 1:27 AM, Götz Reinicke - IT Koordinator wrote:
>> :)  ... the long lease is for some access points which we don't like to
>> configure statically, just plug in and run.
>
> why not configure reservations for those access points?
>
> the downside of a really long lease time is if you have to change
> something like DNS, gateway, whatever, the clients with a really long
> lease will not 'see' the change until 50% of the lease time expires, as
> that's the default refresh.
>
>
Hi, what do you mean by "reservations"? The APs in question just need
an IP for connecting to the management server, which is in the same subnet.

Thanks for your feedback. Götz



___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] How to have more than one SELinux context on a directory

2016-07-07 Thread Fabian Arrotin
On 06/07/16 21:17, Bernard Fay wrote:
> I can access /depot/tftp from a tftp client but unable to do it from a
> Windows client as long as SELinux is enforced.  If SELinux is permissive I
> can access it then I know Samba is properly configured.
> 
> # getenforce
> Enforcing
> # ls -dZ /depot/tftp/
> drwxrwxrwx. root root system_u:object_r:tftpdir_rw_t:s0 /depot/tftp/
> 
> 
> And if I do it the other way around, give the directory a type
> samba_share_t then the tftp clients are unable to push files.
> 
> # getenforce
> Enforcing
> [root@CTSFILESRV01 depot]# ls -ldZ tftp/
> drwxrwxrwx. root root system_u:object_r:samba_share_t:s0 tftp/
> 
> 
> I would then need to either create my own type or add the missing access
> rules, as you suggest. Unfortunately, that will have to wait until I have
> time, which I don't have at the moment.
> 
> Thanks for your help
> 

Don't forget that it's about process type and context.
If you need multiple processes/domain types accessing files with the same
context, you'd probably just need a common context/label.


man -k _selinux => will show you the man pages for everything regarding
SELinux and domains/processes/contexts


=> man tftpd_selinux
=> search for samba and you'll find:

If you want to share files with multiple domains (Apache, FTP, rsync,
Samba), you can set a file context of public_content_t and
public_content_rw_t. These contexts allow any of the above domains to
read the content.
If you want a particular domain to write to the public_content_rw_t
domain, you must set the appropriate boolean.


But read the whole tftpd_selinux and samba_selinux man pages (they share
almost the same content in their "Sharing files" sections). :-)
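
A minimal command sketch of that shared-context approach, using the path
from this thread (boolean names can differ between releases, so check
"getsebool -a | grep anon_write" first):

   # label the shared tree so several domains may use it
   semanage fcontext -a -t public_content_rw_t "/depot/tftp(/.*)?"
   restorecon -Rv /depot/tftp

   # then allow the individual domains to write to public_content_rw_t
   setsebool -P tftp_anon_write 1
   setsebool -P smbd_anon_write 1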

-- 
Fabian Arrotin
The CentOS Project | http://www.centos.org
gpg key: 56BEC54E | twitter: @arrfab



___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] DHCP max-lease-time maximum

2016-07-07 Thread John R Pierce

On 7/7/2016 2:37 AM, Götz Reinicke - IT Koordinator wrote:

Hi, what do you mean with "reservations"? the APs in question just need
an IP for connecting to the managemnt server which is in the same subnet.


a DHCP reservation is when you convert a lease to a static reservation,
so each time that MAC/identifier asks for an IP, you give it one that's
preconfigured. Typically you can just cut/paste an entry from
/var/state/dhcpd.leases into /etc/dhcpd.conf and tweak it to something
like...



   host myhost {
      hardware ethernet 00:01:08:00:ad:33;
      fixed-address 192.168.0.249;
      option host-name "myhost";
   }


or if you're clever, write a bit of perl to do that automatically.
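
If it helps, here is a rough shell/awk sketch of that idea (shell/awk
instead of perl); it assumes the stock ISC lease-file layout, and you
should review the output before pasting it into /etc/dhcpd.conf:

   #!/bin/sh
   # print a host declaration for every lease currently in the file
   awk '
   /^lease /           { ip = $2 }
   /hardware ethernet/ { mac = $3; sub(";", "", mac) }
   /^}/ && ip != "" && mac != "" {
       name = ip; gsub(/\./, "-", name)
       printf "host ap-%s {\n  hardware ethernet %s;\n  fixed-address %s;\n}\n", name, mac, ip
       ip = ""; mac = ""
   }' /var/state/dhcpd.leases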

--
john r pierce, recycling bits in santa cruz

___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] DHCP max-lease-time maximum

2016-07-07 Thread Götz Reinicke - IT Koordinator
Thanks, I did know that and we are using this in other situations. But
as written in my third reply:

Too much work; every MAC I don't have to type counts.

Regards, Götz



On 07.07.16 at 11:44, Eero Volotinen wrote:
> Static MAC/IP mapping on the DHCP server?
>
> Eero
> On 7.7.2016 at 12:38 PM, "Götz Reinicke - IT Koordinator" <
> goetz.reini...@filmakademie.de> wrote:
>
> On 06.07.16 at 18:19, John R Pierce wrote:
>> On 7/6/2016 1:27 AM, Götz Reinicke - IT Koordinator wrote:
>>> :)  ... the long lease is for some access points which we don't like to
>>> configure statically, just plug in and run.
>> why not configure reservations for those access points?
>>
>> the downside of a really long lease time is if you have to change
>> something like DNS, gateway, whatever, the clients with a really long
>> lease will not 'see' the change until 50% of the lease time expires, as
>> that's the default refresh.
>>
>>
> Hi, what do you mean by "reservations"? The APs in question just need
> an IP for connecting to the management server, which is in the same subnet.
>
> Thanks for your feedback. Götz
>


___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


[CentOS] CentOS-announce Digest, Vol 137, Issue 2

2016-07-07 Thread centos-announce-request
Send CentOS-announce mailing list submissions to
centos-annou...@centos.org

To subscribe or unsubscribe via the World Wide Web, visit
https://lists.centos.org/mailman/listinfo/centos-announce
or, via email, send a message with subject or body 'help' to
centos-announce-requ...@centos.org

You can reach the person managing the list at
centos-announce-ow...@centos.org

When replying, please edit your Subject line so it is more specific
than "Re: Contents of CentOS-announce digest..."


Today's Topics:

   1. Announcing Release for Gluster 3.8 on CentOS  Linux 7 x86_64
  (Niels de Vos)
   2. Announcing Release for Gluster 3.8 on CentOS  Linux 6 x86_64
  (Niels de Vos)
   3. CEEA-2016:1388 CentOS 5 tzdata Enhancement Update (Johnny Hughes)
   4. CEEA-2016:1388 CentOS 7 tzdata Enhancement Update (Johnny Hughes)
   5. CEEA-2016:1388 CentOS 6 tzdata Enhancement Update (Johnny Hughes)


--

Message: 1
Date: Wed, 6 Jul 2016 16:03:34 +0200
From: Niels de Vos 
To: centos-annou...@centos.org
Subject: [CentOS-announce] Announcing Release for Gluster 3.8 on
CentOS  Linux 7 x86_64
Message-ID: <20160706140334.gf15...@ndevos-x240.usersys.redhat.com>
Content-Type: text/plain; charset="us-ascii"

We are happy to announce the General Availability of Gluster 3.8 for
CentOS Linux 7. Earlier versions (3.6 and 3.7) are still available and
will keep receiving updates until the upstream Gluster Community marks
them End-Of-Life.

GlusterFS 3.8 brings many improvements and new functionalities. These
are documented in the 3.8.0 release notes:
  
https://github.com/gluster/glusterfs/blob/release-3.8/doc/release-notes/3.8.0.md

To install GlusterFS 3.8, only two commands are needed:

  # yum install centos-release-gluster
  # yum install glusterfs-server

The centos-release-gluster content comes from the
centos-release-gluster38 package delivered via the CentOS Extras repos. This
contains all the metadata and dependency information needed to install
GlusterFS 3.8.

Deployments that have centos-release-gluster36 or -gluster37 installed
will not automatically upgrade to version 3.8. These installations will
continue to stick to their current stable version with minor updates.

The existing quickstart guide will still work with the new version of
Gluster. In fact, we do not expect to see any issues with use-cases that
work on 3.7 already. Installation and configuration is the same in the
new version:
  https://wiki.centos.org/SpecialInterestGroup/Storage/gluster-Quickstart

More details about the packages that the Gluster project provides in the
Storage SIG are available in the documentation:
  https://wiki.centos.org/SpecialInterestGroup/Storage/Gluster

The centos-release-gluster* repositories offer additional packages that
enhance the usability of Gluster itself. Users can request additional
tools and applications to be provided; just send us an email with your
suggestions. The current list of packages that are (or are planned to
become) available can be found here:
  https://wiki.centos.org/SpecialInterestGroup/Storage/Gluster/Ecosystem-pkgs

These Gluster repositories and packages are provided through the Storage
SIG. General information about the SIG can be read in the wiki:
  https://wiki.centos.org/SpecialInterestGroup/Storage

We welcome all feedback, comments and contributions. You can get in
touch with the CentOS Storage SIG on the centos-devel mailing list
( https://lists.centos.org ) and with the Gluster developer and user
communities at https://www.gluster.org/mailman/listinfo . We are also
available on IRC in #gluster on irc.freenode.net, and on Twitter at
@gluster .

Cheers,
Niels de Vos
Storage SIG member & Gluster maintainer


--

Message: 2
Date: Wed, 6 Jul 2016 16:03:57 +0200
From: Niels de Vos 
To: centos-annou...@centos.org
Subject: [CentOS-announce] Announcing Release for Gluster 3.8 on
CentOS  Linux 6 x86_64
Message-ID: <20160706140357.gg15...@ndevos-x240.usersys.redhat.com>
Content-Type: text/plain; charset="us-ascii"

We are happy to announce the General Availability of Gluster 3.8 for
CentOS Linux 6. Earlier versions (3.6 and 3.7) are still available and
will keep receiving updates until the upstream Gluster Community marks
them End-Of-Life.

GlusterFS 3.8 brings many improvements and new functionalities. These
are documented in the 3.8.0 release notes:
  
https://github.com/gluster/glusterfs/blob/release-3.8/doc/release-notes/3.8.0.md

To install GlusterFS 3.8, only two commands are needed:

  # yum install centos-release-gluster
  # yum install glusterfs-server

The centos-release-gluster content c

[CentOS] Help sought for email problem

2016-07-07 Thread Timothy Murphy
My home server is running CentOS-7.1.
I'm running postfix and dovecot on it.
I collect email from a few sources with fetchmail
and move it to ~/Maildir/cur/ with procmail.

Or at least, I did do this.
For some reason procmail has stopped doing its job,
and the email that I collect is ending up in /var/spool/mail/tim/.

I can't work out what has caused this,
or what the cure is.
I haven't changed postfix or dovecot config files,
or .procmailrc.
I did stop and re-start postfix and dovecot on the server,
which I haven't done for some time,
so it is possible some earlier upgrade has come into play.

Any advice or elucidation gratefully received.



-- 
Timothy Murphy  
gayleard /at/ eircom.net
School of Mathematics, Trinity College, Dublin


___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] DHCP max-lease-time maximum

2016-07-07 Thread Sylvain CANOINE

> Thanks, I did know that and we are using this in other situations. But
> as written in my third reply:
> 
> Too much work; every MAC I don't have to type counts.
Take some time to write a script, and you'll save a lot of time afterwards. But it's a
really, really bad idea to configure leases that are too long.
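
(For reference, the relevant dhcpd.conf knobs look roughly like this; the
values are purely illustrative, and clients only try to renew at about 50%
of whatever lease they got:)

   default-lease-time 86400;    # one day, used when the client asks for nothing specific
   max-lease-time 604800;       # one week, upper bound on what a client may request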

Sylvain.
Think of the ENVIRONMENT: print only if necessary

___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


[CentOS] CentOS 6, iscsi,

2016-07-07 Thread m . roth
Hi, folks,

   I installed the iscsi packages, and I *thought* that the first thing to
do was configure /etc/tgt/target.conf. Am I wrong? Did I have to
configure iSNS on the RAID appliance?

   All that I find on the web, incl. upstream docs, is to install it, then
start the service.

   What am I misunderstanding?

 mark, aka "Confused"

___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] CentOS 6, iscsi, followup question

2016-07-07 Thread m . roth
m.r...@5-cent.us wrote:
> Hi, folks,
>
>I installed the iscsi packages, and I *thought* that the first thing to
> do was configure /etc/tgt/target.conf. Am I wrong? Did I have to
> configure iSNS on the RAID appliance?
>
>All that I find on the web, incl. upstream docs, is to install it, then
> start the service.
>
>What am I misunderstanding?
>
>  mark, aka "Confused"
>

One more thing - not a VM, not ESX; should I use direct store or backing
store?

mark

___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] CentOS 6, iscsi,

2016-07-07 Thread Denniston, Todd A CIV NAVSURFWARCENDIV Crane
Thu Jul 7 14:47:23 UTC 2016, m.roth at 5-cent.us wrote:
> Hi, folks,
> 
>I installed the iscsi packages, and I *thought* that the first thing to
> do was configure /etc/tgt/target.conf. Am I wrong? Did I have to
> configure iSNS on the RAID appliance?

One thing to make sure you have straight is Target vs. Initiator [2]; for me it
originally felt like the names were backwards.  I would expect the physical
RAID machine to be the Target. Was that your expectation too? It is a little
unclear from the paragraph above.

> 
>All that I find on the web, incl. upstream docs, is to install it, then
> start the service.

It might have helped if you had pointed at which upstream docs... some are 
better than others on the same subject.  


When I did a test of it, I found that starting with the Target creation doc [1]
and following with the Initiator doc [3] (each on different RH/CentOS boxes) helped
me to figure it out.  BTW, while [1] uses /dev/vdb, it works just as well with
real /dev/sd* devices or logical volumes; I don't recall _trying_ with PVs.

[1] 
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Storage_Administration_Guide/ch-iscsi.html

[2] https://en.wikipedia.org/wiki/ISCSI#Initiator

[3] 
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Storage_Administration_Guide/iscsi-api.html
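
(For what it's worth: if you end up building the target on a CentOS box as
in [1], a scsi-target-utils /etc/tgt/targets.conf entry looks roughly like
the sketch below; the IQN and device path are purely illustrative.)

   <target iqn.2016-07.com.example:jetstor.disk1>
       backing-store /dev/sdb        # export the device as a plain block device
       # direct-store /dev/sdb       # alternative: pass through the device's own SCSI inquiry data
       initiator-address 192.168.0.0/24
   </target>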

Even when this disclaimer is not here:
I am not a contracting officer. I do not have authority to make or modify the 
terms of any contract.
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] CentOS 6, iscsi,

2016-07-07 Thread m . roth
Hi, Todd,

Denniston, Todd A CIV NAVSURFWARCENDIV Crane wrote:
> Thu Jul 7 14:47:23 UTC 2016, m.roth at 5-cent.us wrote:
>>
>>I installed the iscsi packages, and I *thought* that the first thing
>> to do was configure /etc/tgt/target.conf. Am I wrong? Did I have to
>> configure iSNS on the RAID appliance?
>
> One thing to make sure you have straight is Target vs. Initiator [2], for
> me it originally felt like the names were backwards.  I would expect the
> physical RAID machine to be the Target. Was that your expectation too, as
> it is a little unclear from the para above?
>>
>>All that I find on the web, incl. upstream docs, is to install it,
>> then start the service.
>
> It might have helped if you had pointed at which upstream docs... some are
> better than others on the same subject.
>
I was just googling various sets of search terms... The one upstream doc
was RH documentation on setting up iSCSI for RHEL5/6.
>
> When I did a test of it, I found starting with the Target creation doc [1]
>  and following with the Initiator doc[3] (each on different RH/CentOS
> boxes) helped me to figure it out.  BTW while in [1] they use /dev/vdb it
> works as well with real /dev/sd* or Logical Volumes, I don't recall
> _trying_ with PVs.

Yeah, well, about that... my RAID is an appliance - it's a JetStor, from
AC&NC (yes, that's an endorsement, I really like their hardware - it's
very reliable; we've got some, hell, there are two we need to replace,
that are Ultra320 SCSI for real, to give you an idea of the reliability,
and their prices are very good). I've got docs for the JetStor, which
include setting up the target - I'm still trying to figure out the
identifier name.
>
Thanks very much for the links - I'll be looking at them in a minute.

   mark
> [1]
> https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Storage_Administration_Guide/ch-iscsi.html
>
> [2] https://en.wikipedia.org/wiki/ISCSI#Initiator
>
> [3]
> https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Storage_Administration_Guide/iscsi-api.html
>
> Even when this disclaimer is not here:
> I am not a contracting officer. I do not have authority to make or modify
> the terms of any contract.


___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] CentOS 6, iscsi,

2016-07-07 Thread Denniston, Todd A CIV NAVSURFWARCENDIV Crane
Thu Jul 7 17:31:05 UTC 2016, m.roth at 5-cent.us wrote:
> I've got docs for the JetStor, which
> include setting up the target - I'm still trying to figure the identifier
> name.

Although the identifier name format is shown in [1], it is better explained in 
[2] & [3].

[1] 
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Storage_Administration_Guide/ch-iscsi.html

[2] 
http://www.intel.com/content/www/us/en/ethernet-products/converged-network-adapters/iscsi-quick-connect-red-hat-linux-guide.html

[3] https://en.wikipedia.org/wiki/ISCSI#Addressing
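
(In short, an IQN is "iqn." + roughly the year-month in which the naming
authority registered its domain + that domain in reverse + an optional
":label" of your choosing, e.g. this purely illustrative name:)

   iqn.2016-07.com.example:jetstor.raid1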

Even when this disclaimer is not here:
I am not a contracting officer. I do not have authority to make or modify the 
terms of any contract.
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Help sought for email problem

2016-07-07 Thread Alexander Dalloz

On 07.07.2016 at 15:19, Timothy Murphy wrote:

My home server is running CentOS-7.1.
I'm running postfix and dovecot on it.
I collect email from a few sources with fetchmail
and move it to ~/Maildir/cur/ with procmail.

Or at least, I did do this.
For some reason procmail has stopped doing its job,
and the email that I collect is ending up in /var/spool/mail/tim/.

I can't work out what has caused this,
or what the cure is.
I haven't changed postfix or dovecot config files,
or .procmailrc.
I did stop and re-start postfix and dovecot on the server,
which I haven't done for some time,
so it is possible some earlier upgrade has come into play.

Any advice or elucidation gratefully received.


You will have to give us more details about your setup. Can we guess
that fetchmail directly hands the mail to procmail as the MDA? If so,
verify that your procmail recipe is really used. Enable verbose logging
by setting, in your .procmailrc:


VERBOSE=yes
LOGABSTRACT=all
LOGFILE=/desired/path/to/procmail.log

If your fetchmail setup involves postfix or dovecot, then please provide 
details.
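
(For the "fetchmail hands mail straight to procmail" case, the relevant
.fetchmailrc line would look roughly like this; the path is illustrative:)

   mda "/usr/bin/procmail -d %T"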


Things do not start to behave differently without a system change, so
think about what you might have changed. Any updates applied? While we
are at it: you are hopefully running the latest CentOS 7, not just 7.1.


Regards

Alexander


___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


[CentOS] update clamav to 0.99.2

2016-07-07 Thread Helmut Drodofsky

Hello,

The update is in the EPEL repository.

On startup, clamd no longer creates clamd.sock and clamd.pid.

The clamd service stops without any message - even in debug mode.

It's a nightmare.

Helmut

--
Kind regards
Helmut Drodofsky
 
Internet XS Service GmbH

Heßbrühlstraße 15
70565 Stuttgart
  Management:
Dr.-Ing. Roswitha Hahn-Drodofsky
HRB 21091 Stuttgart
USt.ID: DE190582774
Tel. 0711 781941 0
Fax: 0711 781941 79
Mail: i...@internet-xs.de
www.internet-xs.de


___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


[CentOS] NetworkManager creates extra bonds; is this a bug?

2016-07-07 Thread Joe Smithian
Hi All,

I see unexpected behaviour from NetworkManager on CentOS 7.1.
Using the nmcli tool, I create a bond with two slaves as explained in the Red
Hat 7.1 Networking guide. I enable the slaves and the master; the bond works as
expected.
When I restart NetworkManager, it creates a new bond with the same name but
not connected to any device. Two bonds with the same name are confusing for
my other monitoring scripts.
I'm wondering why a second bond is created. Is it a bug in NetworkManager?


#Create a bond with two slaves
nmcli con add autoconnect no type bond con-name bond0 ifname bond0
nmcli con mod bond0 ipv6.method ignore ipv4.method manual ipv4.addresses \
    ${BOND_IP}/${BOND_CIDR} ${BOND_GW} ${BOND_DNS} ${BOND_DNS_SEARCH} \
    ipv4.never-default no ipv4.ignore-auto-dns no
nmcli con add autoconnect no type bond-slave con-name bond-slave-eth0 \
    ifname eth0 master bond0
nmcli con add autoconnect no type bond-slave con-name bond-slave-eth1 \
    ifname eth1 master bond0

#Enable bond
nmcli con mod bond-slave-eth0 connection.autoconnect yes
nmcli con up bond-slave-eth0

nmcli con mod bond-slave-eth1 connection.autoconnect yes
nmcli con up bond-slave-eth1

nmcli con mod bond0 connection.autoconnect yes
nmcli con up bond0

systemctl restart NetworkManager
systemctl restart iptables

nmcli con | grep bond
bond0            9942bdc6-df72-4723-b2ed-47a78e3a5c59  bond            bond0
bond-slave-eth0  8b0fbbe1-a7f0-448c-8005-46d11599f57a  802-3-ethernet  eth0
bond-slave-eth1  333dd1b9-15a4-4119-8e42-55ac3621a85d  802-3-ethernet  eth1
bond0            460dd9e8-bc0b-473e-9c89-41facda98b66  bond            --     # Why has this extra bond connection been created?


I'd appreciate your comments and suggestions to fix the issue.


Thanks,

Joe
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] NetworkManager creates extra bonds; is this a bug?

2016-07-07 Thread Digimer
On 07/07/16 05:21 PM, Joe Smithian wrote:
> Hi All,
> 
> I see unexpected behaviour from NetworkManager on CentOS 7.1.
> Using the nmcli tool, I create a bond with two slaves as explained in the Red
> Hat 7.1 Networking guide. I enable the slaves and the master; the bond works as
> expected.
> When I restart NetworkManager, it creates a new bond with the same name but
> not connected to any device. Two bonds with the same name are confusing for
> my other monitoring scripts.
> I'm wondering why a second bond is created. Is it a bug in NetworkManager?
> 
> 
> #Create a bond with two slaves
> nmcli con add autoconnect no type bond con-name bond0 ifname bond0
> nmcli con mod bond0 ipv6.method ignore ipv4.method manual  ipv4.addresses
> ${BOND_IP}/${BOND_CIDR} ${BOND_GW} ${BOND_DNS} ${BOND_DNS_SEARCH}
> ipv4.never-default no ipv4.ignore-auto-dns no
> nmcli con add autoconnect no type bond-slave con-name bond-slave-eth0
> ifname eth0 master bond0
> nmcli con add autoconnect no type bond-slave con-name bond-slave-eth1
> ifname eth1 master bond0
> 
> #Enable bond
> nmcli con mod bond-slave-eth0 connection.autoconnect yes
> nmcli con up bond-slave-eth0
> 
> nmcli con mod bond-slave-eth1 connection.autoconnect yes
> nmcli con up bond-slave-eth1
> 
> nmcli con mod bond0 connection.autoconnect yes
> nmcli con up bond0
> 
> systemctl restart NetworkManager
> systemctl restart iptables
> 
> nmcli con | grep bond
> bond0            9942bdc6-df72-4723-b2ed-47a78e3a5c59  bond            bond0
> bond-slave-eth0  8b0fbbe1-a7f0-448c-8005-46d11599f57a  802-3-ethernet  eth0
> bond-slave-eth1  333dd1b9-15a4-4119-8e42-55ac3621a85d  802-3-ethernet  eth1
> bond0            460dd9e8-bc0b-473e-9c89-41facda98b66  bond            --     # Why has this extra bond connection been created?
> 
> 
> I'd appreciate your comments and suggestions to fix the issue.
> 
> 
> Thanks,
> 
> Joe

To this day, on EL6, creating bonds always generates a spurious 'bond0'
interface with no slaved interfaces. It was reported to red hat bugzilla
ages ago but the issue was closed without resolution (sorry, I've been
looking for the rhbz# but haven't found it yet).

digimer

-- 
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without
access to education?
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] NetworkManager creates extra bonds; is this a bug?

2016-07-07 Thread Digimer
On 07/07/16 05:36 PM, Digimer wrote:
> On 07/07/16 05:21 PM, Joe Smithian wrote:
>> Hi All,
>>
>> I see unexpected behaviour from NetworkManager on CentOS 7.1.
>> Using the nmcli tool, I create a bond with two slaves as explained in the Red
>> Hat 7.1 Networking guide. I enable the slaves and the master; the bond works as
>> expected.
>> When I restart NetworkManager, it creates a new bond with the same name but
>> not connected to any device. Two bonds with the same name are confusing for
>> my other monitoring scripts.
>> I'm wondering why a second bond is created. Is it a bug in NetworkManager?
>>
>>
>> #Create a bond with two slaves
>> nmcli con add autoconnect no type bond con-name bond0 ifname bond0
>> nmcli con mod bond0 ipv6.method ignore ipv4.method manual  ipv4.addresses
>> ${BOND_IP}/${BOND_CIDR} ${BOND_GW} ${BOND_DNS} ${BOND_DNS_SEARCH}
>> ipv4.never-default no ipv4.ignore-auto-dns no
>> nmcli con add autoconnect no type bond-slave con-name bond-slave-eth0
>> ifname eth0 master bond0
>> nmcli con add autoconnect no type bond-slave con-name bond-slave-eth1
>> ifname eth1 master bond0
>>
>> #Enable bond
>> nmcli con mod bond-slave-eth0 connection.autoconnect yes
>> nmcli con up bond-slave-eth0
>>
>> nmcli con mod bond-slave-eth1 connection.autoconnect yes
>> nmcli con up bond-slave-eth1
>>
>> nmcli con mod bond0 connection.autoconnect yes
>> nmcli con up bond0
>>
>> systemctl restart NetworkManager
>> systemctl restart iptables
>>
>> nmcli con | grep bond
>> bond0            9942bdc6-df72-4723-b2ed-47a78e3a5c59  bond            bond0
>> bond-slave-eth0  8b0fbbe1-a7f0-448c-8005-46d11599f57a  802-3-ethernet  eth0
>> bond-slave-eth1  333dd1b9-15a4-4119-8e42-55ac3621a85d  802-3-ethernet  eth1
>> bond0            460dd9e8-bc0b-473e-9c89-41facda98b66  bond            --     # Why has this extra bond connection been created?
>>
>>
>> I'd appreciate your comments and suggestions to fix the issue.
>>
>>
>> Thanks,
>>
>> Joe
> 
> To this day, on EL6, creating bonds always generates a spurious 'bond0'
> interface with no slaved interfaces. It was reported to red hat bugzilla
> ages ago but the issue was closed without resolution (sorry, I've been
> looking for the rhbz# but haven't found it yet).
> 
> digimer

Found it:

https://bugzilla.redhat.com/show_bug.cgi?id=1245440


Neil Horman 2015-07-22 14:30:57 EDT

inserting the bonding module always creates the first bond interface,
thats always how its been, and isn't a bug.

Status: NEW → CLOSED
Resolution: --- → NOTABUG
Last Closed: 2015-07-22 14:30:57
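
If that stray interface really gets in the way, one commonly suggested
workaround (untested here) is to load the bonding module with max_bonds=0
so it creates no interface at load time, e.g.:

   # /etc/modprobe.d/bonding.conf
   options bonding max_bonds=0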


digimer

-- 
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without
access to education?
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] update clamav to 0.99.2

2016-07-07 Thread Jon LaBadie
On Thu, Jul 07, 2016 at 10:19:04PM +0200, Helmut Drodofsky wrote:
> Hello,
> 
> The update is in the EPEL repository.
> 
> On startup, clamd no longer creates clamd.sock and clamd.pid.
> 
> The clamd service stops without any message - even in debug mode.
> 
> It's a nightmare.

No help to offer.  I updated it on July 2 and have seen no problems.

Jon
-- 
Jon H. LaBadie j...@jgcomp.com
 11226 South Shore Rd.  (703) 787-0688 (H)
 Reston, VA  20190  (703) 935-6720 (C)
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos