[CentOS] links browser update, now needs xWindows?

2016-10-24 Thread Richard Beels

hi there.

The new update for links in EPEL takes it from 2.8-2 to 2.13-1.  But 
yum includes 21 xWindows dependencies that weren't required before.


I'd rather not install them - it's a headless server.  Was this intentional?




 Package                  Arch    Version                Repository  Size

Updating:
 links                    x86_64  1:2.13-1.el7           epel        2.8 M

Installing for dependencies:
 cairo                    x86_64  1.14.2-1.el7           base        711 k
 fontconfig               x86_64  2.10.95-7.el7          base        228 k
 fontpackages-filesystem  noarch  1.44-8.el7             base        9.9 k
 graphite2                x86_64  1.3.6-1.el7_2          updates     112 k
 harfbuzz                 x86_64  0.9.36-1.el7           base        156 k
 libXdamage               x86_64  1.1.4-4.1.el7          base         20 k
 libXext                  x86_64  1.3.3-3.el7            base         39 k
 libXfixes                x86_64  5.0.1-2.1.el7          base         18 k
 libXft                   x86_64  2.3.2-2.el7            base         58 k
 libXrender               x86_64  0.9.8-2.1.el7          base         25 k
 libXxf86vm               x86_64  1.1.3-2.1.el7          base         17 k
 libevent                 x86_64  2.0.21-4.el7           base        214 k
 librsvg2                 x86_64  2.39.0-1.el7           base        123 k
 libthai                  x86_64  0.1.14-9.el7           base        187 k
 libxshmfence             x86_64  1.2-1.el7              base        7.2 k
 mesa-libEGL              x86_64  10.6.5-3.20150824.el7  base         74 k
 mesa-libGL               x86_64  10.6.5-3.20150824.el7  base        184 k
 mesa-libgbm              x86_64  10.6.5-3.20150824.el7  base         40 k
 mesa-libglapi            x86_64  10.6.5-3.20150824.el7  base         39 k
 pango                    x86_64  1.36.8-2.el7           base        287 k
 pixman                   x86_64  0.32.6-3.el7           base        254 k


Transaction Summary

Install ( 21 Dependent packages)
Upgrade  1 Package

Cheers!

___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] NFS help

2016-10-24 Thread Larry Martell
On Fri, Oct 21, 2016 at 11:42 AM,   wrote:
> Larry Martell wrote:
>> On Fri, Oct 21, 2016 at 11:21 AM,   wrote:
>>> Larry Martell wrote:
 We have 1 system running CentOS 7 that is the NFS server. There are 50
 external machines that FTP files to this server fairly continuously.

 We have another system running Centos6 that mounts the partition the
 files
 are FTP-ed to using NFS.
>>> 
>>> What filesystem?
>>
>> Sorry for being dense, but I am not a sys admin, I am a programmer and
>> we have no sys admin. I don't know what you mean by your question. I
>> am NFS mounting to what ever the default filesystem would be on a
>> CentOS6 system.
>
> This *is* a sysadmin issue. Each partition is formatted as a specific type
> of filesystem. The standard Linux filesystems for Upstream-descended
> distros have been ext3, then ext4, and now xfs. Tools to manipulate xfs
> will not work
> with extx, and vice versa.
>
> cat /etc/fstab on the systems, and see what they are. If either is xfs,
> and assuming that the systems are on UPSes, then the fstab which controls
> drive mounting on a system should have, instead of "defaults",
> nobarrier,inode64.

The server is xfs (the client is nfs). The server does have inode64
specified, but not nobarrier.
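
For reference, an fstab entry with both options would look something like the following sketch (the device, mount point, and dump/fsck fields are placeholders, not taken from the thread):

```
# /etc/fstab -- xfs data partition with write barriers disabled
# (only appropriate on UPS-backed hardware, per the advice above)
/dev/sdb1  /export/ftp  xfs  nobarrier,inode64  0 0
```

It should also be possible to test the option on a live mount without a reboot, e.g. `mount -o remount,nobarrier /export/ftp`.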

> Note that the inode64 is relevant if the filesystem is > 2TB.

The file system is 51TB.

> The reason I say this is that when we started rolling out CentOS 7, we tried
> to put one of our user's home directories on one, and it was a disaster.
> 100% repeatedly, untarring a 100M tarfile onto an nfs-mounted drive took
> seven minutes, where before, it had taken 30 seconds. Timed. It took us
> months to discover that NFS 4 tries to make transactions atomic, which is
> fine if you're worrying about losing power or connectivity. If you're on a
> UPS, and hardwired, adding the nobarrier immediately brought it down to 40
> seconds or so.

We are not seeing a performance issue - do you think nobarrier would
help with our lock up issue? I wanted to try it but my client did not
want me to make any changes until we got the bad disk replaced.
Unfortunately that will not happen until Wednesday.


Re: [CentOS] NFS help

2016-10-24 Thread Larry Martell
On Sun, Oct 23, 2016 at 9:02 AM, Larry Martell  wrote:
> Hi Matt-
>
> Thank you for this very detailed and thoughtful reply.
>
> On Fri, Oct 21, 2016 at 4:43 PM, Matt Garman  wrote:
>> On Fri, Oct 21, 2016 at 4:14 AM, Larry Martell  
>> wrote:
>>> We have 1 system running CentOS 7 that is the NFS server. There are 50
>>> external machines that FTP files to this server fairly continuously.
>>>
>>> We have another system running Centos6 that mounts the partition the files
>>> are FTP-ed to using NFS.
>>>
>>> There is a python script running on the NFS client machine that is reading
>>> these files and moving them to a new dir on the same file system (a mv not
>>> a cp).
>>
>> To be clear: the python script is moving files on the same NFS file
>> system?  E.g., something like
>>
>> mv /mnt/nfs-server/dir1/file /mnt/nfs-server/dir2/file
>>
>> where /mnt/nfs-server is the mount point of the NFS server on the
>> client machine?
>
> Correct.
>
>> Or are you moving files from the CentOS 7 NFS server to the CentOS 6 NFS 
>> client?
>
> No the files are FTP-ed to the CentOS 7 NFS server and then processed
> and moved on the CentOS 6 NFS client.
>
>> If the former, i.e., you are moving files to and from the same system,
>> is it possible to completely eliminate the C6 client system, and just
>> set up a local script on the C7 server that does the file moves?  That
>> would cut out a lot of complexity, and also improve performance
>> dramatically.
>
> The problem doing that is the files are processed and loaded to MySQL
> and then moved by a script that uses the Django ORM, and neither
> django, nor any of the other python packages needed are installed on
> the server. And since the server does not have an external internet
> connection (as I mentioned in my reply to Mark) getting it set up
> would require a large amount of effort.
>
> Also, we have this exact same setup on over 10 other systems, and it
> is only this one that is having a problem. The one difference with
> this one is that the server is CentOS 7 - on all the other systems both
> the NFS server and client are CentOS 6.
>
>> Also, what is the size range of these files?  Are they fairly small
>> (e.g. 10s of MB or less), medium-ish (100s of MB) or large (>1GB)?
>
> Small - They range in size from about 100K to 6M.
>
>>> Almost daily this script hangs while reading a file - sometimes it never
>>> comes back and cannot be killed, even with -9. Other times it hangs for 1/2
>>> hour then proceeds on.
>>
>> Timeouts relating to NFS are the worst.
>>
>>
>>> Coinciding with the hanging I see this message on the NFS server host:
>>>
>>> nfsd: peername failed (error 107)
>>>
>>> And on the NFS client host I see this:
>>>
>>> nfs: V4 server returned a bad sequence-id
>>> nfs state manager - check lease failed on NFSv4 server with error 5
>>
>> I've been wrangling with NFS for years, but unfortunately those
>> particular messages don't ring a bell.
>>
>> The first thing that came to my mind is: how does the Python script
>> running on the C6 client know that the FTP upload to the C7 server is
>> complete?  In other words, if someone is uploading "fileA", and the
>> Python script starts to move "fileA" before the upload is complete,
>> then at best you're setting yourself up for all kinds of confusion,
>> and at worst file truncation and/or corruption.
>
> The python script checks the modification time of the file, and only
> if it has not been modified in more than 2 minutes does it process it.
> Otherwise it skips it and waits for the next run to potentially
> process it. Also, the script can tell if the file is incomplete in a
> few different ways. So if it has not been modified in more than 2
> minutes, the script starts to process it, but if it finds that it's
> incomplete it aborts the processing and leaves it for next time.
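>
> For what it's worth, that mtime check can be sketched in a few lines of
> Python (the function name and threshold constant are illustrative, not
> from the actual script):
>
> ```python
> import os
> import time
>
> STABLE_SECONDS = 120  # "not modified in more than 2 minutes"
>
> def is_settled(path, now=None):
>     """True if the file's mtime is older than the threshold, i.e. the
>     FTP upload has probably finished writing it."""
>     if now is None:
>         now = time.time()
>     return now - os.path.getmtime(path) > STABLE_SECONDS
> ```
>
> As noted above, this is only a heuristic; the script still has to verify
> the file is actually complete before trusting it.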
>
>> Making a pure guess about those particular errors: is there any chance
>> there is a network issue between the C7 server and the C6 client?
>> What is the connection between those two servers?  Are they physically
>> adjacent to each other and on the same subnet?  Or are they on
>> opposite ends of the globe connected through the Internet?
>
> Actually both the client and server are virtual machines running on
> one physical machine. The physical machine is running CentOS7. There
> is nothing else running on the physical machine other than the 2 VMs.
>
>> Clearly two machines on the same subnet, separated only by one switch
>> is the simplest case (i.e. the kind of simple LAN one might have in
>> his home).  But once you start crossing subnets, then routing configs
>> come into play.  And maybe you're using hostnames rather than IP
>> addresses directly, so then name resolution comes into play (DNS or
>> /etc/hosts).  And each switch hop you add requires that not only your
>> server network config needs to be correct, but also your switch config
>> needs to be correct as well.  And if you're going over the Internet,
>> well... I'd pro

Re: [CentOS] Disk near failure

2016-10-24 Thread Alessandro Baggi

Il 21/10/2016 17:20, m.r...@5-cent.us ha scritto:

John R Pierce wrote:

On 10/21/2016 2:03 AM, Alessandro Baggi wrote:


My ssds are failing?


SSD's wear out based on writes per block.   they distribute those
writes, but once each block has been written X number of times, they are
no longer reliable.

they appear to still be working perfectly, but they are beyond their
design life.   soon or later, if you continue the amount of writes
you've been doing, you'll get back errors or bad data.

I would plan on replacing those drives sooner rather than later.   5
years was a good run.


1. Especially if they're consumer grade.
2. And that's a fairly early large (for SSD) drive.
3. We've got a RAID appliance that takes actual SCSI that's still running,
though we're now in the process of replacing these 10 yr old RAIDs
4. SATA is a *lot* cheaper for *much* larger capacity drives...

  mark



Hey there,
I've run smartctl -l xerror/error /dev/sda but I get:

smartctl 6.2 2013-07-26 r3841 [x86_64-linux-3.10.0-327.36.2.el7.x86_64] 
(local build)

Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF READ SMART DATA SECTION ===
SMART Error Log not supported


I've noticed this also with smartctl -a /dev/sda




Re: [CentOS] links browser update, now needs xWindows?

2016-10-24 Thread Leonard den Ottolander
Hello Richard,

On Mon, 2016-10-24 at 03:39 -0400, Richard Beels wrote:
> The new update for links in EPEL takes it from 2.8-2 to 2.13-1.  But 
> yum includes 21 xWindows dependencies that weren't required before.
> 
> I'd rather not install them - it's a headless server.  Was this intentional?

Very valid question but you should direct your question at the EPEL
maintainer of this package:

https://lists.fedoraproject.org/archives/list/epel-de...@lists.fedoraproject.org/

Regards,
Leonard.

-- 
mount -t life -o ro /dev/dna /genetic/research




Re: [CentOS] NFS help

2016-10-24 Thread mark

On 10/24/16 03:52, Larry Martell wrote:

On Fri, Oct 21, 2016 at 11:42 AM,   wrote:

Larry Martell wrote:

On Fri, Oct 21, 2016 at 11:21 AM,   wrote:

Larry Martell wrote:

We have 1 system running CentOS 7 that is the NFS server. There are 50
external machines that FTP files to this server fairly continuously.

We have another system running Centos6 that mounts the partition the
files are FTP-ed to using NFS.



What filesystem?



cat /etc/fstab on the systems, and see what they are. If either is xfs,
and assuming that the systems are on UPSes, then the fstab which controls
drive mounting on a system should have, instead of "defaults",
nobarrier,inode64.


The server is xfs (the client is nfs). The server does have inode64
specified, but not nobarrier.


Note that the inode64 is relevant if the filesystem is > 2TB.


The file system is 51TB.


The reason I say this is that when we started rolling out CentOS 7, we tried
to put one of our user's home directories on one, and it was a disaster.
100% repeatedly, untarring a 100M tarfile onto an nfs-mounted drive took
seven minutes, where before, it had taken 30 seconds. Timed. It took us
months to discover that NFS 4 tries to make transactions atomic, which is
fine if you're worrying about losing power or connectivity. If you're on a
UPS, and hardwired, adding the nobarrier immediately brought it down to 40
seconds or so.


We are not seeing a performance issue - do you think nobarrier would
help with our lock up issue? I wanted to try it but my client did not
want me to make any changes until we got the bad disk replaced.
Unfortunately that will not happen until Wednesday.


Absolutely add nobarrier, and see what happens.

mark



Re: [CentOS] Disk near failure

2016-10-24 Thread Leonard den Ottolander
Hi,

On Mon, 2016-10-24 at 12:07 +0200, Alessandro Baggi wrote:
> === START OF READ SMART DATA SECTION ===
> SMART Error Log not supported

I reckon there's a line missing between those lines. The line right after
the first should read something like:

SMART overall-health self-assessment test result: PASSED

or "FAILED" for that matter. If not try running

smartctl -t short /dev/sda

, wait for the indicated time to expire, then check the output of
smartctl -a (or -x) again.

Regards,
Leonard.

-- 
mount -t life -o ro /dev/dna /genetic/research




Re: [CentOS] NFS help

2016-10-24 Thread Gordon Messmer

On 10/24/2016 04:51 AM, mark wrote:
Absolutely add nobarrier, and see what happens. 



Using "nobarrier" might increase overall write throughput, but it 
removes an important integrity feature, increasing the risk of 
filesystem corruption on power loss.  I wouldn't recommend doing that 
unless your system is on a UPS, and you've tested and verified that it 
will perform an orderly shutdown when the UPS is on battery power and 
its charge is low.




Re: [CentOS] NFS help

2016-10-24 Thread m . roth
Gordon Messmer wrote:
> On 10/24/2016 04:51 AM, mark wrote:
>> Absolutely add nobarrier, and see what happens.
>
> Using "nobarrier" might increase overall write throughput, but it
> removes an important integrity feature, increasing the risk of
> filesystem corruption on power loss.  I wouldn't recommend doing that
> unless your system is on a UPS, and you've tested and verified that it
> will perform an orderly shutdown when the UPS is on battery power and
> its charge is low.
>
As I noted in my original post, it needs to be on a UPS. And to repeat
myself: untarring a 107MB tarfile on an xfs filesystem mounted over NFS
took ->seven minutes<-, 100% repeatable, while after we added nobarrier
and remounted it, it took about ->40 seconds<-. That's *hugely*
significant.

  mark



Re: [CentOS] CVE-2016-5195 “DirtyCOW”: Critical Linux Kernel Flaw

2016-10-24 Thread Gilbert Sebenste

On Sat, 22 Oct 2016, Valeri Galtsev wrote:


On Sat, October 22, 2016 7:49 pm, Valeri Galtsev wrote:

Dear All,

I guess we all have to urgently apply a workaround, following, say, this:

https://gryzli.info/2016/10/21/protect-cve-2016-5195-dirtycow-centos-7rhel7cpanelcloudlinux/

At least those of us who still have important multi user machines running
Linux.


I should have said CentOS 7. Older ones (CentOS 6 and 5) are not vulnerable.


Patch is out on RHEL side:

https://rhn.redhat.com/errata/RHSA-2016-2098.html

***
Gilbert Sebenste
(My opinions only!)  **
***


[CentOS] SElinux suggestions needed: migrating backup service

2016-10-24 Thread Leon Fauster
Hi folks,

normally I don't have much to do with SELinux, but I expected to have to
deal with it sooner or later :-)

I migrated a backup system from EL5 to EL6, and the rsync backup process is
complaining about SELinux attrs now.

client <-> server (fetches via rsync -aHAX)

client# sestatus 
SELinux status: disabled


server# sestatus
SELinux status: enabled
SELinuxfs mount:/selinux
Current mode:   permissive
Mode from config file:  permissive
Policy version: 24
Policy from config file:targeted


for example, no label for this file on client side: 

client# ls -laZ /usr/share/zoneinfo/Africa/Bissau 
-rw-r--r--  root root  
/usr/share/zoneinfo/Africa/Bissau


but on server side: 

rsync: rsync_xal_clear: 
lremovexattr("usr/share/zoneinfo/Africa/.Bissau.WaE4wj","security.selinux") 
failed: Permission denied (13)

and 

server# ls -laZ /BACKUP/usr/share/zoneinfo/Africa/Bissau 
-rw-r--r--. root root unconfined_u:object_r:locale_t:s0 
usr/share/zoneinfo/Africa/Bissau


the local (server) destination is mounted like:

server# cat /proc/mounts |grep BACKUP
/dev/sdc1 /BACKUP ext3 
rw,seclabel,nosuid,nodev,noatime,nodiratime,errors=continue,acl,barrier=1,data=ordered
 0 0

this partition comes from the former system (EL5 productively used without 
labeling it and with SElinux disabled).

I have started enabling SELinux (permissive) on new systems, so disabling
SELinux as was done on the former system is not an option.

Any suggestions to avoid the default labeling 
"unconfined_u:object_r:locale_t:s0"? 


--
Thanks,
LF




Re: [CentOS] NFS help

2016-10-24 Thread Matt Garman
On Sun, Oct 23, 2016 at 8:02 AM, Larry Martell  wrote:
>> To be clear: the python script is moving files on the same NFS file
>> system?  E.g., something like
>>
>> mv /mnt/nfs-server/dir1/file /mnt/nfs-server/dir2/file
>>
>> where /mnt/nfs-server is the mount point of the NFS server on the
>> client machine?
>
> Correct.
>
>> Or are you moving files from the CentOS 7 NFS server to the CentOS 6 NFS 
>> client?
>
> No the files are FTP-ed to the CentOS 7 NFS server and then processed
> and moved on the CentOS 6 NFS client.


I apologize if I'm being dense here, but I'm more confused on this
data flow now.  Your use of "correct" and "no" seems to be
inconsistent with your explanation.  Sorry!

At any rate, what I was looking at was seeing if there was any way to
simplify this process, and cut NFS out of the picture.  If you need
only to push these files around, what about rsync?

> The problem doing that is the files are processed and loaded to MySQL
> and then moved by a script that uses the Django ORM, and neither
> django, nor any of the other python packages needed are installed on
> the server. And since the server does not have an external internet
> connection (as I mentioned in my reply to Mark) getting it set up
> would require a large amount of effort.

...right, but I'm pretty sure rsync should be installed on the server;
I believe it's default in all except the "minimal" setup profiles.
Either way, it's trivial to install, as I don't think it has any
dependencies.  You can download the rsync rpm from mirror.centos.org,
then scp it to the server, then install via yum.  And Python is
definitely installed (requirement for yum) and Perl is probably
installed as well, so with rsync plus some basic Perl/Python scripting
you can create your own mover script.

Actually, rsync may not even be necessary, scp may be sufficient for
your purposes.  And scp should definitely be installed.


> Also, we have this exact same setup on over 10 other systems, and it
> is only this one that is having a problem. The one difference with
> this one is that the server is CentOS 7 - on all the other systems both
> the NFS server and client are CentOS 6.

From what you've described so far, with what appears to be a
relatively simple config, C6 or C7 "shouldn't" matter.  However, under
the hood, C6 and C7 are quite different.

> The python script checks the modification time of the file, and only
> if it has not been modified in more then 2 minutes does it process it.
> Otherwise it skips it and waits for the next run to potentially
> process it. Also, the script can tell if the file is incomplete in a
> few different ways. So if it has not been modified in more then 2
> minutes, the script starts to process it, but if it finds that it's
> incomplete it aborts the processing and leaves it for next time.

This script runs on C7 or C6?

> The hardware is new, and is in a rack in a server room with adequate
> and monitored cooling and power. But I just found out from someone on
> site that there is a disk failure, which happened back on Sept 3. The
> system uses RAID, but I don't know what level. I was told it can
> tolerate 3 disk failures and still keep working, but personally, I
> think all bets are off until the disk has been replaced. That should
> happen in the next day or 2, so we shall see.

OK, depending on the RAID scheme and how it's implemented, there could
be disk timeouts causing things to hang.


> I've been watching and monitoring the machines for 2 days and neither
> one has had a large CPU load, not has been using much memory.

How about iostat?  Also, good old "dmesg" can suggest if the system
with the failed drive is causing timeouts to occur.


> None on the client. On the server it has 1 dropped Rx packet.
>
>> Do
>>> "ethtool " on both machines to make sure both are linked up
>>> at the correct speed and duplex.
>
> That reports only "Link detected: yes" for both client and server.

OK, but ethtool should also say something like:

...
Speed: 1000Mb/s
Duplex: Full
...

For a 1gbps network.  If Duplex is reported as "half", then that is
definitely a problem.  Using netperf is further confirmation of
whether or not your network is functioning as expected.


> sar seems to be running, but I can only get it to report on the
> current day. The man page shows start and end time options, but is
> there a way to specify the start and end date?

If you want to report on a day in the past, you have to pass the file
argument, something like this:

sar -A -f /var/log/sa/sa23 -s 07:00:00 -e 08:00:00

That would show you yesterday's data between 7am and 8am.  The files
in /var/log/sa/saXX are the files that correspond to the day.  By
default, XX will be the day of the month.


[CentOS] Shutdown/Poweroff

2016-10-24 Thread Chris Olson
Is there a good source of information about how Linux distributions
and installers identify and interact with the hardware devices in
a system?  We are particularly interested in the shutdown process
that leads to complete power-off.  Thanks.



Re: [CentOS] Shutdown/Poweroff

2016-10-24 Thread m . roth
Chris Olson wrote:
> Is there a good source of information about how Linux distributions
> and installers identify and interact with the hardware devices in
> a system?  We are particularly interested in the shutdown process
> that leads to complete power-off.  Thanks.

You mean something more than the shutdown command, or poweroff?

The only hardware thing that might conflict with that is a BIOS setting,
that offers the option, on powerloss, of restart/stay off/last state.

  mark



Re: [CentOS] NFS help

2016-10-24 Thread Larry Martell
On Mon, Oct 24, 2016 at 1:32 PM, Matt Garman  wrote:
> On Sun, Oct 23, 2016 at 8:02 AM, Larry Martell  
> wrote:
>>> To be clear: the python script is moving files on the same NFS file
>>> system?  E.g., something like
>>>
>>> mv /mnt/nfs-server/dir1/file /mnt/nfs-server/dir2/file
>>>
>>> where /mnt/nfs-server is the mount point of the NFS server on the
>>> client machine?
>>
>> Correct.
>>
>>> Or are you moving files from the CentOS 7 NFS server to the CentOS 6 NFS 
>>> client?
>>
>> No the files are FTP-ed to the CentOS 7 NFS server and then processed
>> and moved on the CentOS 6 NFS client.
>
>
> I apologize if I'm being dense here, but I'm more confused on this
> data flow now.  Your use of "correct" and "no" seems to be
> inconsistent with your explanation.  Sorry!

I thought you were asking: "Are you doing A: moving files on the same
NFS filesystem, or B: moving them across filesystems?"

And I replied: correct, I am doing A; no, I am not doing B.

The script moves the files from /mnt/nfs-server/dir1/file to
/mnt/nfs-server/dir2/file.
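
A move within one filesystem like that is just a rename, which a short sketch makes explicit (the function name is illustrative; on an NFS mount this becomes a single server-side RENAME rather than a copy plus delete):

```python
import errno
import os

def move_same_fs(src, dst):
    """Rename src to dst. Only works when both paths live on the same
    mounted filesystem; across filesystems os.rename raises EXDEV."""
    try:
        os.rename(src, dst)
    except OSError as e:
        if e.errno == errno.EXDEV:
            raise RuntimeError(
                "%s and %s are on different filesystems" % (src, dst))
        raise
```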

> At any rate, what I was looking at was seeing if there was any way to
> simplify this process, and cut NFS out of the picture.  If you need
> only to push these files around, what about rsync?

It's not just moving files around. The files are read, and their
contents are loaded into a MySQL database.

>> The problem doing that is the files are processed and loaded to MySQL
>> and then moved by a script that uses the Django ORM, and neither
>> django, nor any of the other python packages needed are installed on
>> the server. And since the server does not have an external internet
>> connection (as I mentioned in my reply to Mark) getting it set up
>> would require a large amount of effort.
>
> ...right, but I'm pretty sure rsync should be installed on the server;
> I believe it's default in all except the "minimal" setup profiles.
> Either way, it's trivial to install, as I don't think it has any
> dependencies.  You can download the rsync rpm from mirror.centos.org,
> then scp it to the server, then install via yum.  And Python is
> definitely installed (requirement for yum) and Perl is probably
> installed as well, so with rsync plus some basic Perl/Python scripting
> you can create your own mover script.
>
> Actually, rsync may not even be necessary, scp may be sufficient for
> your purposes.  And scp should definitely be installed.

This site is not in any way connected to the internet, and you cannot
bring in any computers, phones, or media of any kind. There is a
process to get machines or files in, but it is onerous and time
consuming. This system was set up and configured off site and then
brought on site.

To run the script on the C7 NFS server instead of the C6 NFS client
many python libs will have to installed. I do have someone off site
working on setting up a local yum repo with what I need, and then we
are going to see if we can zip and email the repo and get it on site.
But none of us are sys admins and we don't really know what we're
doing so we may not succeed and it may take longer then I will be here
in Japan (I am scheduled to leave Saturday).

>> Also, we have this exact same setup on over 10 other systems, and it
>> is only this one that is having a problem. The one difference with
>> this one is that the server is CentOS 7 - on all the other systems both
>> the NFS server and client are CentOS 6.
>
> From what you've described so far, with what appears to be a
> relatively simple config, C6 or C7 "shouldn't" matter.  However, under
> the hood, C6 and C7 are quite different.
>
>> The python script checks the modification time of the file, and only
>> if it has not been modified in more then 2 minutes does it process it.
>> Otherwise it skips it and waits for the next run to potentially
>> process it. Also, the script can tell if the file is incomplete in a
>> few different ways. So if it has not been modified in more then 2
>> minutes, the script starts to process it, but if it finds that it's
>> incomplete it aborts the processing and leaves it for next time.
>
> This script runs on C7 or C6?

C6

>
>> The hardware is new, and is in a rack in a server room with adequate
>> and monitored cooling and power. But I just found out from someone on
>> site that there is a disk failure, which happened back on Sept 3. The
>> system uses RAID, but I don't know what level. I was told it can
>> tolerate 3 disk failures and still keep working, but personally, I
>> think all bets are off until the disk has been replaced. That should
>> happen in the next day or 2, so we shall see.
>
> OK, depending on the RAID scheme and how it's implemented, there could
> be disk timeouts causing things to hang.

Yes, that's why when I found out about the disk failure I wanted to hold
off doing anything until the disk gets replaced. But as that is not
happening until Wednesday afternoon I think I want to try Mark's
nobarrier config option today.

>> I've been watching and monito

Re: [CentOS] NFS help

2016-10-24 Thread Matt Garman
On Mon, Oct 24, 2016 at 2:42 PM, Larry Martell  wrote:
>> At any rate, what I was looking at was seeing if there was any way to
>> simplify this process, and cut NFS out of the picture.  If you need
>> only to push these files around, what about rsync?
>
> It's not just moving files around. The files are read, and their
> contents are loaded into a MySQL database.

On what server does the MySQL database live?


> This site is not in any way connected to the internet, and you cannot
> bring in any computers, phones, or media of any kind. There is a
> process to get machines or files in, but it is onerous and time
> consuming. This system was set up and configured off site and then
> brought on site.

But clearly you have a means to log in to both the C6 and C7 servers,
right?  Otherwise, how would you be able to see these errors, or check
top/sar/free/iostat/etc.?

And if you are logging in to both of these boxes, I assume you are
doing so via ssh?

Or are you actually physically sitting in front of these machines?

If you have ssh access to these machines, then you can trivially copy
files to/from them.  If ssh is installed and working, then scp should
also be installed and working.  Even if you don't have scp, you can
use tar over ssh to the same effect.  It's ugly, but doable, and there
are examples online for how to do it.
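
The tar-over-ssh pattern can be sketched as follows; the ssh form is shown in a comment with placeholder host names, and the pipe itself is demonstrated locally:

```shell
#!/bin/sh
# Over ssh the pattern is (user and host are placeholders):
#   tar -C /srcdir -cf - . | ssh user@c6-client 'tar -C /dstdir -xf -'
# The same pipe, demonstrated locally between two temp directories:
src=$(mktemp -d)
dst=$(mktemp -d)
echo "payload" > "$src/file.txt"
tar -C "$src" -cf - . | tar -C "$dst" -xf -
ls "$dst/file.txt"
```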

Also: you made a couple of comments about these machines; it looks like
the C7 box (FTP server + NFS server) is running bare metal (i.e. not a
virtual machine), while the C6 instance (NFS client) is virtualized. What
is hosting the C6 instance?

Is the C6 instance running under the C7 instance?  I.e., are both
machines on the same physical hardware?  If that is true, then your
"network" (at least the one between C7 and C6) is basically virtual,
and to have issues like this on the same physical box is certainly
indicative of a mis-configuration.


> To run the script on the C7 NFS server instead of the C6 NFS client
> many python libs will have to installed. I do have someone off site
> working on setting up a local yum repo with what I need, and then we
> are going to see if we can zip and email the repo and get it on site.
> But none of us are sys admins and we don't really know what we're
> doing so we may not succeed and it may take longer then I will be here
> in Japan (I am scheduled to leave Saturday).

Right, but my point is you can write your own custom script(s) to copy
files from C7 to C6 (based on rsync or ssh), do the processing on C6
(DB loading, whatever other processing), then move back to C7 if
necessary.  You said yourself you are a programmer not a sysadmin, so
change the nature of the problem from a sysadmin problem to a
programming problem.

I'm certain I'm missing something, but the fundamental architecture
doesn't make sense to me given what I understand of the process flow.

Were you able to run some basic network testing tools between the C6
and C7 machines?  I'm interested specifically in netperf, which does
round trip packet testing, both TCP and UDP.  I would look for packet
drops with UDP, and/or major performance outliers with TCP, and/or any
kind of timeouts with either protocol.

How is name resolution working on both machines?  Do you address
machines by hostname (e.g., "my_c6_server"), or explicitly by IP
address?  Are you using DNS or are the IPs hard-coded in /etc/hosts?
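A few spot checks along these lines can be run on either box. The netperf invocations are left commented out because they need netserver running on the peer, and the IP shown is a placeholder:

```shell
# Name-resolution and hosts-file sanity checks (runnable locally):
getent hosts localhost                 # resolves via nsswitch (files, then DNS)
grep -v '^#' /etc/hosts | head -n 3    # what is hard-coded locally
# Round-trip tests against the other host (placeholder IP):
#   netperf -H 192.168.1.7 -t TCP_RR   # TCP request/response latency
#   netperf -H 192.168.1.7 -t UDP_RR   # UDP round trips; look for drops
```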

To me it still "smells" like a networking issue...

-Matt
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


[CentOS] Problems with VPN Connection

2016-10-24 Thread Macmor Mach
Good afternoon, and best regards.

I'm having trouble with a CentOS release 5.10 server that my users connect
to through an intranet-type VPN. I could not find a solution, so I would
appreciate any help. Below are logs from when it worked and from now, when
it does not.

Log when it was working:

Mon Oct 10 13:50:02 2016 193.60.90.72:23683 Re-using SSL/TLS context
Mon Oct 10 13:50:02 2016 193.60.90.72:23683 LZO compression initialized
Mon Oct 10 13:50:02 2016 193.60.90.72:23683 Control Channel MTU parms [
L:1543 D:163 EF:66 EB:3 ET:3 EL:0 ]
Mon Oct 10 13:50:02 2016 193.60.90.72:23683 Data Channel MTU parms [ L:1543
D:1453 EF:43 EB:133 ET:3 EL:0 AF:3/1 ]
Mon Oct 10 13:50:02 2016 193.60.90.72:23683 Local Options hash (VER=V4):
'31aebcbb'
Mon Oct 10 13:50:02 2016 193.60.90.72:23683 Expected Remote Options hash
(VER=V4): '3ebe1e45'
Mon Oct 10 13:50:03 2016 193.60.90.72:23683 CRL CHECK OK:
/C=CO/ST=Valle/L=CALI/O=IMAWEBS_-_IMAWE
... mawebs.com

Mon Oct 10 13:50:03 2016 193.60.90.72:23683 VERIFY OK: depth=1,
/C=CO/ST=Valle/L=CALI/O=IMAWEBS_-_IMAWE
... mawebs.com

Mon Oct 10 13:50:03 2016 193.60.90.72:23683 CRL CHECK OK:
/C=CO/ST=Valle/O=IMAWEBS_-_IMAWEBS/OU=S
... mawebs.com

Mon Oct 10 13:50:03 2016 193.60.90.72:23683 VERIFY OK: depth=0,
/C=CO/ST=Valle/O=IMAWEBS_-_IMAWEBS/OU=S
... mawebs.com

Mon Oct 10 13:50:03 2016 193.60.90.72:23683 TLS: Username/Password
authentication succeeded for username 'usuario6'
Mon Oct 10 13:50:03 2016 193.60.90.72:23683 Data Channel Encrypt: Cipher
'DES-EDE-CBC' initialized with 128 bit key
Mon Oct 10 13:50:03 2016 193.60.90.72:23683 Data Channel Encrypt: Using 160
bit message hash 'SHA1' for HMAC authentication
Mon Oct 10 13:50:03 2016 193.60.90.72:23683 Data Channel Decrypt: Cipher
'DES-EDE-CBC' initialized with 128 bit key
Mon Oct 10 13:50:03 2016 193.60.90.72:23683 Data Channel Decrypt: Using 160
bit message hash 'SHA1' for HMAC authentication
Mon Oct 10 13:50:03 2016 193.60.90.72:23683 Control Channel: TLSv1, cipher
TLSv3/SSLv3 DHE-RSA-AES256-SHA, 1024 bit RSA
Mon Oct 10 13:50:03 2016 193.60.90.72:23683 [imawebs_common_cert] Peer
Connection Initiated with 193.60.90.72:23683

Log now that it is failing:

Mon Oct 24 10:46:31 2016 193.60.90.72:60903 Re-using SSL/TLS context
Mon Oct 24 10:46:31 2016 193.60.90.72:60903 LZO compression initialized
Mon Oct 24 10:46:31 2016 193.60.90.72:60903 Control Channel MTU parms [
L:1543 D:163 EF:63 EB:3 ET:3 EL:0 ]
Mon Oct 24 10:46:31 2016 193.60.90.72:60903 Data Channel MTU parms [ L:1543
D:1453 EF:43 EB:133 ET:3 EL:0 AF:3/1 ]
Mon Oct 24 10:46:31 2016 193.60.90.72:60903 Local Options hash (VER=V4):
'31aebcbb'
Mon Oct 24 10:46:31 2016 193.60.90.72:60903 Expected Remote Options hash
(VER=V4): '3ebe1e45'
Mon Oct 24 10:46:32 2016 193.60.90.72:60903 CRL CHECK OK:
/C=CO/ST=Valle/L=CALI/O=IMAWEBS_-_IMAWE
... mawebs.com

Mon Oct 24 10:46:32 2016 193.60.90.72:60903 VERIFY OK: depth=1,
/C=CO/ST=Valle/L=CALI/O=IMAWEBS_-_IMAWE
... mawebs.com

Mon Oct 24 10:46:32 2016 193.60.90.72:60903 CRL CHECK OK:
/C=CO/ST=Valle/O=IMAWEBS_-_IMAWEBS/OU=S
... mawebs.com

Mon Oct 24 10:46:32 2016 193.60.90.72:60903 VERIFY OK: depth=0,
/C=CO/ST=Valle/O=IMAWEBS_-_IMAWEBS/OU=S
... mawebs.com

Mon Oct 24 10:46:34 2016 193.60.90.72:60903 WARNING: Failed running command
(--auth-user-pass-verify): external program exited with error status: 1
Mon Oct 24 10:46:34 2016 193.60.90.72:60903 TLS Auth Error: Auth
Username/Password verification failed for peer
Mon Oct 24 10:46:34 2016 193.60.90.72:60903 Control Channel: TLSv1, cipher
TLSv3/SSLv3 DHE-RSA-AES256-SHA, 1024 bit RSA
Mon Oct 24 10:46:34 2016 193.60.90.72:60903 [imawebs_common_cert] Peer
Connection Initiated with 193.60.90.72:60903

-- 
MAch


Re: [CentOS] NFS help

2016-10-24 Thread Gordon Messmer

On 10/24/2016 07:29 AM, m.r...@5-cent.us wrote:

As I noted in my original repost, that it needs to be on a UPS



Ah.  I see that now.  Still, may I suggest that whenever we recommend 
remedies that eliminate reliability measures, such as mounting with 
"nobarrier", we also repeat the caveats, so that users who find these 
conversations in search results later don't miss them?  I think it's 
important to note that the system should be on a UPS, *and* that it has 
been verified that the system will perform an orderly shut-down before 
the UPS loses charge. "nobarrier" shouldn't be used without performing 
such a test.
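For anyone checking their own systems, the current barrier setting is visible in the mount options; /proc/mounts is the authoritative list, and "/" below is just an example mount point:

```shell
# Show mount point and options for the first few filesystems:
awk '{print $2, $4}' /proc/mounts | head -n 5
# ext3/ext4 show "nobarrier" or "barrier=0" when barriers are disabled:
awk '$4 ~ /nobarrier|barrier=0/ {print $2 " has barriers disabled"}' /proc/mounts
```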





Re: [CentOS] NFS help

2016-10-24 Thread Matt Garman
Another alternative idea: you probably won't be comfortable with this,
but check out systemd-nspawn.  There are lots of examples online, and
I even wrote about how I use it:
http://raw-sewage.net/articles/fedora-under-centos/

This is unfortunately another "sysadmin" solution to your problem.
nspawn is the successor to chroot, if you are at all familiar with
that.  It's kinda-sorta like running a system-within-a-system, but
much more lightweight.  The "slave" systems share the running kernel
with the "master" system.  (I could say the "guest" and "host"
systems, but those are virtual machine terms, and this is not a
virtual machine.)  For your particular case, the main benefit is that
you can natively share filesystems, rather than use NFS to share
files.

So, it's clear you have network capability between the C6 and C7
systems.  And surely you must have ssh installed on both systems.
Therefore, you can transfer files between C6 and C7.  So here's a way
you can use systemd-nspawn to get around trying to install all the
extra libs you need on C7:

1. On the C7 machine, create a systemd-nspawn container.  This
container will "run" C6.
2. You can source everything you need from the running C6 system
directly.  Heck, if you have enough disk space on the C7 system, you
could just replicate the whole C6 tree to a sub-directory on C7.
3. When you configure the C6 nspawn container, make sure you pass
through the directory structure with these FTP'ed files.  Basically
you are substituting systemd-nspawn's bind/filesystem pass-through
mechanism in place of NFS.
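The steps above could be sketched as follows. All paths and hostnames are hypothetical, and the commands are printed via a heredoc rather than executed, since they need root and a real, populated C6 tree:

```shell
# Print a sketch of the nspawn steps to a scratch file and display it.
cat > /tmp/nspawn_steps.txt <<'EOF'
# 1-2. replicate the running C6 system into a directory on the C7 host
rsync -aHAX --exclude={/proc,/sys,/dev,/run} root@c6box:/ /var/lib/machines/c6root/
# 3. enter it, bind-mounting the FTP directory in place of the NFS mount
systemd-nspawn -D /var/lib/machines/c6root --bind=/data/ftp:/data/ftp /bin/bash
EOF
cat /tmp/nspawn_steps.txt
```

`-D` points systemd-nspawn at the container's root directory and `--bind=src:dst` passes the host directory straight through, which is what replaces the NFS mount here.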

With that setup, you can "probably" run all the C6 native stuff under
C7.  This isn't guaranteed to work, e.g. if your C6 programs require
hooks into the kernel, it could fail, because now you're running on a
different kernel... but if you only use userspace libraries, you'll
probably be OK.  But I was actually able to get HandBrake, compiled
for bleeding-edge Ubuntu, to work within a C7 nspawn container.

That probably trades one bit of complexity (NFS) for another
(systemd-nspawn).  But just throwing it out there if you're completely
stuck.


Re: [CentOS] SElinux suggestions needed: migrating backup service

2016-10-24 Thread Gordon Messmer

On 10/24/2016 09:53 AM, Leon Fauster wrote:

Any suggestions to avoid the default labeling 
"unconfined_u:object_r:locale_t:s0"?



Not off the top of my head.  I think you need to either a) not try to 
preserve the labels or b) run the backup as a user which can manage 
labels.  What is the rsync command you are currently using, and what 
user does rsync run as on the backup server?




Re: [CentOS] NFS help

2016-10-24 Thread Larry Martell
On Mon, Oct 24, 2016 at 5:25 PM, Matt Garman  wrote:
> On Mon, Oct 24, 2016 at 2:42 PM, Larry Martell  
> wrote:
>>> At any rate, what I was looking at was seeing if there was any way to
>>> simplify this process, and cut NFS out of the picture.  If you need
>>> only to push these files around, what about rsync?
>>
>> It's not just moving files around. The files are read, and their
>> contents are loaded into a MySQL database.
>
> On what server does the MySQL database live?

The C6 host, same one that the script runs on. We can of course access
the MySQL server from the C7 host, assuming the needed packages are
there.

>> This site is not in any way connected to the internet, and you cannot
>> bring in any computers, phones, or media of any kind. There is a
>> process to get machines or files in, but it is onerous and time
>> consuming. This system was set up and configured off site and then
>> brought on site.
>
> But clearly you have a means to log in to both the C6 and C7 servers,
> right?  Otherwise, how would be able to see these errors, check
> top/sar/free/iostat/etc?
>
> And if you are logging in to both of these boxes, I assume you are
> doing so via ssh?
>
> Or are you actually physically sitting in front of these machines?

The machines are on a local network. I access them with PuTTY from a
Windows machine, but I have to be at the site to do that.

> If you have ssh access to these machines, then you can trivially copy
> files to/from them.  If ssh is installed and working, then scp should
> also be installed and working.  Even if you don't have scp, you can
> use tar over ssh to the same effect.  It's ugly, but doable, and there
> are examples online for how to do it.
>
> Also: you made a couple comments about these machines, it looks like
> the C7 box (FTP server + NFS server) is running bare metal (i.e. not a
> virtual machine).  The C6 instance (NFS client) is virtualized.

Correct.

> What host is the C6 instance?
>
> Is the C6 instance running under the C7 instance?  I.e., are both
> machines on the same physical hardware?  If that is true, then your
> "network" (at least the one between C7 and C6) is basically virtual,
> and to have issues like this on the same physical box is certainly
> indicative of a mis-configuration.

Yes, the C6 instance is running on the C7 machine. What could be
mis-configured? What would I check to find out?

>> To run the script on the C7 NFS server instead of the C6 NFS client
>> many python libs will have to installed. I do have someone off site
>> working on setting up a local yum repo with what I need, and then we
>> are going to see if we can zip and email the repo and get it on site.
>> But none of us are sys admins and we don't really know what we're
>> doing so we may not succeed and it may take longer than I will be here
>> in Japan (I am scheduled to leave Saturday).
>
> Right, but my point is you can write your own custom script(s) to copy
> files from C7 to C6 (based on rsync or ssh), do the processing on C6
> (DB loading, whatever other processing), then move back to C7 if
> necessary.  You said yourself you are a programmer not a sysadmin, so
> change the nature of the problem from a sysadmin problem to a
> programming problem.

Yes, that is a potential solution I had not thought of. The issue with
this is that we have the same system installed at many, many sites,
and they all work fine. It is only this site that is having an issue.
We really do not want to have different SW running at just this one
site. Running the script on the C7 host is a change, but at least it
will be the same software as every place else.

> I'm certain I'm missing something, but the fundamental architecture
> doesn't make sense to me given what I understand of the process flow.
>
> Were you able to run some basic network testing tools between the C6
> and C7 machines?  I'm interested specifically in netperf, which does
> round trip packet testing, both TCP and UDP.  I would look for packet
> drops with UDP, and/or major performance outliers with TCP, and/or any
> kind of timeouts with either protocol.

netperf is not installed.

> How is name resolution working on both machines?  Do you address
> machines by hostname (e.g., "my_c6_server"), or explicitly by IP
> address?  Are you using DNS or are the IPs hard-coded in /etc/hosts?

Everything is by IP address.

> To me it still "smells" like a networking issue...


Re: [CentOS] SElinux suggestions needed: migrating backup service

2016-10-24 Thread Leon Fauster
Am 24.10.2016 um 23:44 schrieb Gordon Messmer :
> On 10/24/2016 09:53 AM, Leon Fauster wrote:
>> Any suggestions to avoid the default labeling 
>> "unconfined_u:object_r:locale_t:s0"?
> 
> 
> Not off the top of my head.  I think you need to either a) not try to 
> preserve the labels or b) run the backup as a user which can manage labels.  
> What is the rsync command you are currently using, and what user does rsync 
> run as on the backup server?


Plain rsync -aHAX with some excludes and executed as root on the backup system.

Doing so I get:

rsync: rsync_xal_clear: 
lremovexattr("lib/modules/2.6.18-412.el5/modules.alias","security.selinux") 
failed: Permission denied (13)
rsync: rsync_xal_clear: 
lremovexattr("lib/modules/2.6.18-412.el5/modules.ccwmap","security.selinux") 
failed: Permission denied (13)
rsync: rsync_xal_clear: 
lremovexattr("lib/modules/2.6.18-412.el5/modules.dep","security.selinux") 
failed: Permission denied (13)
rsync: rsync_xal_clear: 
lremovexattr("lib/modules/2.6.18-412.el5/modules.ieee1394map","security.selinux")
 failed: Permission denied (13)
rsync: rsync_xal_clear: 
lremovexattr("lib/modules/2.6.18-412.el5/modules.inputmap","security.selinux") 
failed: Permission denied (13)
rsync: rsync_xal_clear: 
lremovexattr("lib/modules/2.6.18-412.el5/modules.isapnpmap","security.selinux") 
failed: Permission denied (13)
rsync: rsync_xal_clear: 
lremovexattr("lib/modules/2.6.18-412.el5/modules.ofmap","security.selinux") 
failed: Permission denied (13)
rsync: rsync_xal_clear: 
lremovexattr("lib/modules/2.6.18-412.el5/modules.pcimap","security.selinux") 
failed: Permission denied (13)

 

The thing is that files which don't have a label on the source system get a
new one on the destination system, so some kind of inheritance is in place.

client# ls -laZ /lib/modules/2.6.18-412.el5/modules.*
-rw-r--r--  root root  
/lib/modules/2.6.18-412.el5/modules.alias
-rw-r--r--  root root  
/lib/modules/2.6.18-412.el5/modules.ccwmap
-rw-r--r--  root root  
/lib/modules/2.6.18-412.el5/modules.dep
-rw-r--r--  root root  
/lib/modules/2.6.18-412.el5/modules.ieee1394map
-rw-r--r--  root root  
/lib/modules/2.6.18-412.el5/modules.inputmap
-rw-r--r--  root root  
/lib/modules/2.6.18-412.el5/modules.isapnpmap
-rw-r--r--  root root  
/lib/modules/2.6.18-412.el5/modules.ofmap
-rw-r--r--  root root  
/lib/modules/2.6.18-412.el5/modules.pcimap
-rw-r--r--  root root  
/lib/modules/2.6.18-412.el5/modules.seriomap
-rw-r--r--  root root  
/lib/modules/2.6.18-412.el5/modules.symbols
-rw-r--r--  root root  
/lib/modules/2.6.18-412.el5/modules.usbmap


backupserver# ls -laZ daily.0/ee-sl1/lib/modules/2.6.18-412.el5/modules.*
-rw-r--r--. root root unconfined_u:object_r:modules_object_t:s0 
daily.0/ee-sl1/lib/modules/2.6.18-412.el5/modules.alias
-rw-r--r--. root root unconfined_u:object_r:modules_object_t:s0 
daily.0/ee-sl1/lib/modules/2.6.18-412.el5/modules.ccwmap
-rw-r--r--. root root unconfined_u:object_r:modules_object_t:s0 
daily.0/ee-sl1/lib/modules/2.6.18-412.el5/modules.dep
-rw-r--r--. root root unconfined_u:object_r:modules_object_t:s0 
daily.0/ee-sl1/lib/modules/2.6.18-412.el5/modules.ieee1394map
-rw-r--r--. root root unconfined_u:object_r:modules_object_t:s0 
daily.0/ee-sl1/lib/modules/2.6.18-412.el5/modules.inputmap
-rw-r--r--. root root unconfined_u:object_r:modules_object_t:s0 
daily.0/ee-sl1/lib/modules/2.6.18-412.el5/modules.isapnpmap
-rw-r--r--. root root unconfined_u:object_r:modules_object_t:s0 
daily.0/ee-sl1/lib/modules/2.6.18-412.el5/modules.ofmap
-rw-r--r--. root root unconfined_u:object_r:modules_object_t:s0 
daily.0/ee-sl1/lib/modules/2.6.18-412.el5/modules.pcimap
-rw-r--r--. root root unconfined_u:object_r:modules_object_t:s0 
daily.0/ee-sl1/lib/modules/2.6.18-412.el5/modules.seriomap
-rw-r--r--. root root unconfined_u:object_r:modules_object_t:s0 
daily.0/ee-sl1/lib/modules/2.6.18-412.el5/modules.symbols
-rw-r--r--. root root unconfined_u:object_r:modules_object_t:s0 
daily.0/ee-sl1/lib/modules/2.6.18-412.el5/modules.usbmap


Using rsync -aHA (without -X) circumvents the output, but it is still unclear
what exactly triggers it.  Next weekend seems to be reserved for an SELinux
deep dive, though ...

--
LF



Re: [CentOS] Problems with VPN Connection

2016-10-24 Thread Leon Fauster
Am 24.10.2016 um 23:38 schrieb Macmor Mach :
> I'm having trouble with a CentOS server release 5.10, so that my users
> connect via VPN Intranet type, I could not find a solution, if I can
> collaborate appreciate them, attached logs when it worked and now.
> 
> Log when it was working:
> 
> Mon Oct 10 13:50:02 2016 193.60.90.72:23683 Re-using SSL/TLS context
> Mon Oct 10 13:50:02 2016 193.60.90.72:23683 LZO compression initialized
> Mon Oct 10 13:50:02 2016 193.60.90.72:23683 Control Channel MTU parms [
> L:1543 D:163 EF:66 EB:3 ET:3 EL:0 ]
> Mon Oct 10 13:50:02 2016 193.60.90.72:23683 Data Channel MTU parms [ L:1543
> D:1453 EF:43 EB:133 ET:3 EL:0 AF:3/1 ]
> Mon Oct 10 13:50:02 2016 193.60.90.72:23683 Local Options hash (VER=V4):
> '31aebcbb'
> Mon Oct 10 13:50:02 2016 193.60.90.72:23683 Expected Remote Options hash
> (VER=V4): '3ebe1e45'
> Mon Oct 10 13:50:03 2016 193.60.90.72:23683 CRL CHECK OK:
> /C=CO/ST=Valle/L=CALI/O=IMAWEBS_-_IMAWE
> ... mawebs.com
> 
> Mon Oct 10 13:50:03 2016 193.60.90.72:23683 VERIFY OK: depth=1,
> /C=CO/ST=Valle/L=CALI/O=IMAWEBS_-_IMAWE
> ... mawebs.com
> 
> Mon Oct 10 13:50:03 2016 193.60.90.72:23683 CRL CHECK OK:
> /C=CO/ST=Valle/O=IMAWEBS_-_IMAWEBS/OU=S
> ... mawebs.com
> 
> Mon Oct 10 13:50:03 2016 193.60.90.72:23683 VERIFY OK: depth=0,
> /C=CO/ST=Valle/O=IMAWEBS_-_IMAWEBS/OU=S
> ... mawebs.com
> 
> Mon Oct 10 13:50:03 2016 193.60.90.72:23683 TLS: Username/Password
> authentication succeeded for username 'usuario6'
> Mon Oct 10 13:50:03 2016 193.60.90.72:23683 Data Channel Encrypt: Cipher
> 'DES-EDE-CBC' initialized with 128 bit key
> Mon Oct 10 13:50:03 2016 193.60.90.72:23683 Data Channel Encrypt: Using 160
> bit message hash 'SHA1' for HMAC authentication
> Mon Oct 10 13:50:03 2016 193.60.90.72:23683 Data Channel Decrypt: Cipher
> 'DES-EDE-CBC' initialized with 128 bit key
> Mon Oct 10 13:50:03 2016 193.60.90.72:23683 Data Channel Decrypt: Using 160
> bit message hash 'SHA1' for HMAC authentication
> Mon Oct 10 13:50:03 2016 193.60.90.72:23683 Control Channel: TLSv1, cipher
> TLSv3/SSLv3 DHE-RSA-AES256-SHA, 1024 bit RSA
> Mon Oct 10 13:50:03 2016 193.60.90.72:23683 [imawebs_common_cert] Peer
> Connection Initiated with 193.60.90.72:23683
> 
> Log now that it is failing:
> 
> Mon Oct 24 10:46:31 2016 193.60.90.72:60903 Re-using SSL/TLS context
> Mon Oct 24 10:46:31 2016 193.60.90.72:60903 LZO compression initialized
> Mon Oct 24 10:46:31 2016 193.60.90.72:60903 Control Channel MTU parms [
> L:1543 D:163 EF:63 EB:3 ET:3 EL:0 ]
> Mon Oct 24 10:46:31 2016 193.60.90.72:60903 Data Channel MTU parms [ L:1543
> D:1453 EF:43 EB:133 ET:3 EL:0 AF:3/1 ]
> Mon Oct 24 10:46:31 2016 193.60.90.72:60903 Local Options hash (VER=V4):
> '31aebcbb'
> Mon Oct 24 10:46:31 2016 193.60.90.72:60903 Expected Remote Options hash
> (VER=V4): '3ebe1e45'
> Mon Oct 24 10:46:32 2016 193.60.90.72:60903 CRL CHECK OK:
> /C=CO/ST=Valle/L=CALI/O=IMAWEBS_-_IMAWE
> ... mawebs.com
> 
> Mon Oct 24 10:46:32 2016 193.60.90.72:60903 VERIFY OK: depth=1,
> /C=CO/ST=Valle/L=CALI/O=IMAWEBS_-_IMAWE
> ... mawebs.com
> 
> Mon Oct 24 10:46:32 2016 193.60.90.72:60903 CRL CHECK OK:
> /C=CO/ST=Valle/O=IMAWEBS_-_IMAWEBS/OU=S
> ... mawebs.com
> 
> Mon Oct 24 10:46:32 2016 193.60.90.72:60903 VERIFY OK: depth=0,
> /C=CO/ST=Valle/O=IMAWEBS_-_IMAWEBS/OU=S
> ... mawebs.com
> 
> Mon Oct 24 10:46:34 2016 193.60.90.72:60903 WARNING: Failed running command
> (--auth-user-pass-verify): external program exited with error status: 1
> Mon Oct 24 10:46:34 2016 193.60.90.72:60903 TLS Auth Error: Auth
> Username/Password verification failed for peer
> Mon Oct 24 10:46:34 2016 193.60.90.72:60903 Control Channel: TLSv1, cipher
> TLSv3/SSLv3 DHE-RSA-AES256-SHA, 1024 bit RSA
> Mon Oct 24 10:46:34 2016 193.60.90.72:60903 [imawebs_common_cert] Peer
> Connection Initiated with 193.60.90.72:60903



It looks like OpenVPN, right? Which version?

Your auth-user-pass-verify script is not authenticating your users; the
external program is exiting with error status 1.

What does your configuration look like?

--
LF




Re: [CentOS] Shutdown/Poweroff

2016-10-24 Thread Joseph L. Casale
> Is there a good source of information about how Linux distributions and 
> installers identify
> and interact with the hardware devices in a system?  We are particularly 
> interested in the
> shutdown process that leads to complete power-off.  Thanks.

None other I assume than from the horse's mouth, http://www.uefi.org/acpi/specs

jlc