Can anyone point me to a how to or beginners guide to setting up LDAP
authentication on CentOS5 with replication?
On Jan 10, 2008 6:38 PM, Craig White <[EMAIL PROTECTED]> wrote:
> On Thu, 2008-01-10 at 14:40 -0600, Sean Carolan wrote:
> > Can anyone point me to a how to or beginners guide to setting up LDAP
> > authentication on CentOS5 with replication?
>
> well, i
sure, I use Webmin's LDAP Users and Groups module on every network
server that I maintain. It's perfect for my needs.
Yes, this is exactly what I'm trying to do. It would be perfect for our
needs too.
The first question that occurs to me is whether you did all that. When you do
'getent passwd'
not really, have you run system-config-authentication? That also
configures PAM and NSS, which are necessary items.
Yes, I have and unfortunately when the 'ldap' tags are added to
/etc/nsswitch.conf the system won't allow me to authenticate, su or sudo
at all!
If each user shows only once AN
Thanks for your patience, Craig. So I took your advice and started
with a fresh install of CentOS 5, and followed the instructions in the
documentation exactly as they are written. I got this far:
[EMAIL PROTECTED] migration]# ./migrate_all_online.sh
Enter the X.500 naming context you wish to i
> Just so we're clear here, you are actually trying to learn two distinct
> things simultaneously: how to use LDAP and how to use LDAP to
> authenticate. They are not the same thing. If you knew how to use LDAP,
> adding authentication to the knowledge base would be relatively trivial.
> Likewise,
> sure but for less than $20 and 2-3 hours, you can master LDAP and be the
> envy of all the guys in your office and the object of affection for all
> the ladies.
>
> ;-)
>
> kerberos is actually a more secure authentication system because
> passwords don't continually cross the network.
I do plan
Could somebody please repost the solution or point me at the correct
resource.
I would also appreciate advice on how to do this on a RHEL4 server being
updated with up2date.
Is it safe just to delete the old kernel and initrd files from the boot
partition and the grub conf file?
Unless you
Scott Ehrlich wrote:
On an ext3 filesystem, what would cause the system to claim it is out of
disk space for a program writing information to disk, when df -h shows
ample GB available and the file is being written to local disk rather
than an nfs-mounted filesystem?
Hi Scott:
I had this exac
Maybe there's an NTP expert out there who can help me with this. I have an NTP server serving our local network. It is
set up to use pool.ntp.org servers for its upstream sync. ntpq -p reveals that the server is stuck at stratum 16,
which I understand means "not synced". The clients are unab
The zeros in the "reach" column indicate that the server has been unable to
receive any packets from the upstream servers.
Is your server inside a firewall? If so, perhaps it is blocking NTP traffic.
You need to have it allow UDP port 123 in both directions. You don't need
port forwarding from ou
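For reference, a minimal iptables sketch of that rule (the chain name
assumes the stock CentOS RH-Firewall-1-INPUT setup; adjust to match your
ruleset):
# allow NTP (UDP 123) in; with the usual state rules, replies to our own
# outbound queries are already covered by ESTABLISHED
iptables -A RH-Firewall-1-INPUT -p udp --dport 123 -j ACCEPT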
This is almost certainly incorrect unless you're running a very, very
old RHEL/CentOS release. I believe /var/lib/ntp is the canonical
directory for the drift file in 4.x and 5.x. I doubt ntpd is allowed to
write to /etc/ntp, especially if SELinux is enabled.
Good observation, Paul. That conf
I am using CentOS 5.0 on my desktop workstation. Are there any deeply
compelling reasons to upgrade to version 5.1? I read through the release
notes but didn't see any whiz-bang new features. Perhaps some of you can
share your personal experience letting us know if you have noticed any
diffe
Want to have security updates?
That depends. If the security update is for a local vulnerability on my own
single-user workstation then I may think twice before installing it. In other
words, if the security risk is minimal then it may not be worth the hassle of
upgrading my kernel and havi
You can set that as an option in yum.conf. However, you do run the risk of
running out of space in /boot if you get too many kernels piled up there. The
default is to keep the last 2 (or 3?) kernels and delete the older ones.
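If it helps, a sketch of the knob in question. The option name below is
installonly_limit, which is what newer yum releases use; early CentOS 5
handled this through the yum-installonlyn plugin instead, so treat the
exact spelling as an assumption for your version:
[main]
installonly_limit=3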
I wonder why it is trying to delete a newer kernel than the one I
So, yes there are deeply compelling reasons to upgrade. If you want to
have patches for several kernel buffer exploits, as well as many other
security and functionality patches, you need to do one thing:
yum upgrade, and answer yes.
Or even easier:
yum -y upgrade
When I have some time to re
If you want to keep your existing kernel for a while, just change the
grub default back after the update installs the new one. Then you can
switch, reboot, and rebuild the necessary stuff whenever you have a chance.
Thanks, this is probably what I will end up doing. I tend to err on the side
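For anyone wanting the concrete change, a minimal sketch of the grub edit
being suggested, assuming the stock /boot/grub/grub.conf where updates
prepend the new kernel at the top:
# default counts title entries from 0, so default=1 keeps booting
# the previous kernel after an update adds a new entry above it
default=1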
Can anyone recommend an enterprise-class monitoring system for both
Linux and Windows servers? Here are my requirements:
SNMP trap collection, ability to import custom MIBs
Up/down monitoring of ports and daemons
Server health monitors (CPU, Disk, Memory, etc)
SLA reporting with nice graphs
P
> You might take a look at OpenNMS and ZenOSS. I'm not sure if either
> could do everything you're asking for out of the box however.
Thanks, ZenOSS just might fit the bill.
> I tried to use Zenoss for monitoring a small network (about 5 subnets)
> and I had a really hard time with relationships (a version from Sept 2007).
Did you use the 'enterprise' or the OS version?
I understand that Red Hat has purchased and open-sourced (well sort
of) what was formerly known as Netscape Directory Server. I am
looking for version 6 of netscape directory server, does anyone know
if this is available somewhere?
http://www.redhat.com/docs/manuals/dir-server/release-notes/ds60r
> I believe the only thing you can download is the code that was audited
> for a suitable GPL license, which is what is known as Fedora Directory
> Server...
>
> http://directory.fedoraproject.org/wiki/Download
I figured as much. I have an old version of Netscape Directory Server
which I was hoping
Hi all:
I'm doing an audit of some Linux machines, and have used this awk
one-liner to find accounts with uid > 499:
awk -F: '{if ($3 > 499) print $0}' < /etc/passwd
It works great if you run it on a host directly, but if I try to ssh
to a remote host and run the command it fails:
mybox$ ssh se
> > ssh servername awk -F: "'{if (\$3 > 499) print \$0}'" < /etc/passwd
>
> ssh servername awk -F: "'{if (\$3 > 499) print \$0}' < /etc/passwd"
>
> otherwise '< /etc/passwd' happens on the client.
Awesome, thanks!
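For what it's worth, an equivalent way to sidestep the nested quoting is
to hand awk the file directly and quote the whole remote command once (the
backslash keeps the local shell from expanding $3; awk prints matching
lines by default):
ssh servername "awk -F: '\$3 > 499' /etc/passwd"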
I just customized my prompt with a PS1= variable. Since updating my
.bashrc this way, when I try to run commands remotely with ssh I get
this:
[EMAIL PROTECTED]:~]$ ssh server pwd
No value for $TERM and no -T specified
/home/scarolan
No value for $TERM and
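The usual fix (my assumption, not something settled in the thread) is to
keep the prompt and terminal setup out of non-interactive shells by
guarding the top of ~/.bashrc; the PS1 value here is just a placeholder:
# bail out early for non-interactive shells (ssh host cmd, scp, etc.)
[ -z "$PS1" ] && return
PS1='[\u@\h:\w]\$ '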
Keychain is quite a useful tool for automating SSH logins without
having to use password-less keys:
http://www.gentoo.org/proj/en/keychain/
Normally it is used like this to set the SSH_AUTH_SOCK and
SSH_AGENT_PID variables:
source ~/.keychain/hostname-sh
(This is what's in hostname-sh)
SSH_AUTH_SO
> If the variables are exported when you call Perl, they will be
> available inside Perl. And, when you call "system", they will be
> available for those processes as well. Are you having any specific
> problems doing this? If you can't make it work, please send more
> details on what exactly
> The variables are not exported when I call Perl. This is what I am
> trying to do. How do I get those variables to be available to the
> bash "system" commands within the script?
Just so it is clear what I am trying to do, there are some scp and ssh
commands like:
system ("/usr/bin/scp /tm
> system("source ~/.keychain/hostname-sh; cmd");
>
> Is this what you're looking for?
Yes, this works. Is there a way to only source the file once? There
are a bit over a dozen scp and ssh commands in the script.
Unfortunately this is not my script, otherwise I'd have just done this
all in bas
> One solution would be to "source ~/.keychain/hostname-sh" in the shell
> before calling the perl script. That should work.
Ok, can't do this because end-users will not like the extra step.
> Another one would be to source it before calling scp:
>
> system ("source ~/.keychain/hostname-sh; /
We have a directory full of installation and configuration scripts
that are updated on a fairly regular basis. I would like to implement
some sort of version control for these files. I have used SVN and CVS
in the past, but I thought I'd ask if anyone can recommend a simple,
easy-to-use tool that
> I don't really think you can get much easier than CVS if you need
> centralized management over a network. If it never gets off the
> machine then there is RCS. If those aren't simple enough... I don't
> think any of the others are going to help.
Thanks for the pointers, it looks like we will
I have run into a snag with my CVS installation:
[EMAIL PROTECTED]:~]$ cvs co -P installfiles
cvs checkout: Updating installfiles
cvs [checkout aborted]: out of memory; can not allocate 1022462837 bytes
Unfortunately we have a couple of large binary .tgz files in the
repository. I was able to ch
> Try upping your ulimit.
> What does "ulimit -a" give?
[EMAIL PROTECTED]:~]$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
max locked memory (kbytes, -l) 4
max memory size (kbytes, -m) unlimited
> > Checking in binary files into CVS or any repository control system is
> > usually a broken thing. You want to either check in the stuff inside
> > the tarball separately (if it's going to change), or just copy it into
> > the archive by updating CVSROOT/cvswrappers
This comes back to the p
> Because these tools are meant to deal with source code files and deal
> with diffs of such files. You are cramming a gigabyte of compressed
> bits at it and it's trying to make sure it can give you a diff of it
> later on. I don't have any idea why you would want to store it in a
> CVS ty
"just copy it into the archive by updating CVSROOT/cvswrappers"
*.tar -k 'b' -m 'COPY'
*.tbz -k 'b' -m 'COPY'
*.tgz -k 'b' -m 'COPY'
This worked great. Thank you, Stephan. The enormous .tar.gz is now
easily swallowed by the CVS snake.
> Thank you, Stephan. The enormous .tar.gz is now
> easily swallowed by the CVS snake.
I misspelled your name, Stephen; my bad.
Ok, I can't quite figure out how to make this work. I want to
simultaneously log everything for facility local5 to a local file and
to a remote syslog-ng server. local7 is working fine getting the
boot.log log entries transferred over to the syslog-ng server, but not
so much with local5. Local log
UPDATE:
The problem seems to be on the client side, because when I do this:
logger -p local5.info test
the file does show up properly on the syslog-ng host. Anyone have an
idea why the other processes that write to local5 on the client are
not logging to the remote host?
> local5.*
I have also found that there are a small handful of hosts that seem to
spit out a line or two of log output once in a while on the server,
but I have not yet identified a pattern.
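For comparison, the intended client-side setup would look roughly like
this in /etc/syslog.conf (stock sysklogd assumed; the server name is a
placeholder):
local5.*                /var/log/local5.log
local5.*                @syslogng-server.example.com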
I have a virus and spam filter device that can do VRFY commands to
reject invalid email before it gets to the next mail hop. How can I
configure the SMTP server to only allow VRFY commands from one
particular IP address, and nowhere else? I don't want spammers to be
able to hammer on the gateway
> Block the outside world from reaching anything but the filter by firewall or
> other means. Otherwise the spammers will find it and go around your filter.
I think this will probably work. I believe this server does not need
to be open to the 'net.
I have a bunch of these scattered through /var/log/messages on a
couple of servers:
I/O error: dev 08:20, sector 0
The server functions just fine. Anyone have an idea what might be
causing this error?
I really like GNU screen and use it every day, but there's one thing
that is a bit inconvenient, and that's the odd line wrapping and
terminal size issues that seem to pop up. The problem crops up when I
type or paste a really long command, and then go back and try to edit
it; the text starts to wra
> You wouldn't by any chance be using PuTTY to access the session? If
> so, you may need to play around with the terminal settings including
> the scroll type so that it displays correctly. I don't recall the
> specifics but a similar thing happened to me.
Actually, no I'm using gnome-terminal o
> I tried to forget the incompatibilities in different old terminal types
> after about everything settled on xterm compatibility. Instead of
> running screen, can you run a desktop session under freenx on a server
> somewhere and run everything in terminal windows there (even ssh
> sessions that
> In this case, you might want to conditionally assign some reasonable
> value on failure. Say:
>
> tput -T $TERM init >/dev/null 2>&1 || export TERM=xterm
>
> 'tset -q' is another test which can be used.
The remote host's $TERM variable is in fact xterm. When I connect to
the screen session
>> The remote host's $TERM variable is in fact xterm. When I connect to
>> the screen session the $TERM variable is 'screen'.
>
> Are you running screen locally or remotely?
Remotely. My work machine is a laptop, which is not powered on all
the time. Hence I use a remote box as a jumping-off po
Can anyone point out reasons why it might be a bad idea to put this
sort of line in your /etc/hosts file, eg, pointing the FQDN at the
loopback address?
127.0.0.1   hostname.domain.com hostname localhost localhost.localdomain
> First, if your host is actually communicating with any kind of ip-based
> network, it is quite certain that 127.0.0.1 simply isn't its IP
> address. And, at least for me, that's a fairly good reason.
Indeed. It does seem like a bad idea to have a single host using
loopback, while the rest of t
> (Make sure you pick .dummy so as not to interfere with any other DNS.)
>
> In theory you could leave off .dummy, but then you risk hostname being
> completed with the search domain in resolv.conf, which creates the
> problems already mentioned with putting hostname.domain.com in
> /etc/hosts. (I
I have an odd situation here, maybe one of you can help. We have a
script that runs via a cron job. Its purpose is to decrypt
PGP-encrypted files in a certain directory. I have tried the command
two different ways, both fail with the same error message:
gpg --decrypt $file > ${file%.txt}.decry
On Mon, Oct 19, 2009 at 2:41 PM, Spiro Harvey wrote:
> Is the cron job running as a different user? eg; are you running gpg as
> a non-privileged user and the cronjob as root?
The cronjob script runs from /etc/crontab. Let me try root's personal
crontab instead.
> Typically this type of problem is caused by environment variables
> that are set in a login shell, but are missing or different than
> those set for jobs running under cron.
You nailed it, Bill. Running the cron from root's personal crontab
worked fine. Must have been environment-variable related.
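For the record, an alternative to moving the job is to set the variables
gpg expects directly in the crontab entry. A sketch with hypothetical
paths, in root's personal crontab:
# give gpg the environment it would get in a login shell
HOME=/root
GNUPGHOME=/root/.gnupg
0 2 * * * /usr/local/bin/decrypt-files.sh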
I have an SSH server that was set up for a client, and every time we
try to upload large files via SFTP or scp, the transfer speed quickly
slows to zero and gives a - stalled - status message, then
disconnects. Here is an example:
ftp> put iTunesSetup.exe iTunesSetup.exe
Uploading iTunesSetup.ex
On Mon, Dec 21, 2009 at 7:06 PM, 唐建伟 wrote:
> I've hit the same problem, but in my case it was always due to a bad network connection.
>
I should probably provide some more information, the server is a VMware
guest running CentOS 5.3. It's using the vmxnet driver for the eth0
connection. IPv6 is disabled.
> I'm not sure what would cause that, but I'd use rsync over ssh instead of sftp
> anyway - and use the -P option to permit restarting.
If it were up to me, we'd take that route. The software the client is
using is WinSCP which does have a restart feature, however it's not
working for us. I'm wo
> Tell him to switch WinSCP to SCP mode.
>
> Kai
Tried that, it still fails the same way. Here's the short list of
what I've tried to troubleshoot this:
Used SCP via the gui and command line
Used SFTP via the gui and command line
Ran yum update to bring all packages up to date
Tried stock CentOS
> Just an idea or thought on it. You never said what the file size was, did
> you? My thought is: isn't there a file size limitation on transfers to
> and from the server? I thought there was. Check your vsftpd.conf, or
> whatever FTP server you're running, for the size limitation. Maybe s
> Load balancer... is that set up to maintain connections, or will it, like
> IBM's
> WebSeal, go to whichever server is next/least used in the middle of a
> connection?
It's set to use "least connection" but there is only one server behind
the virtual IP at the moment.
I'm reasonably sure at t
>> # Turn off SACK
>> net.ipv4.tcp_sack = 0
>
> and execute "sysctl -p" to apply it. You can also use "sysctl -w
> net.ipv4.tcp_sack=0" to turn it off temporarily. Our file transfers worked
> just fine after the change.
>
> I realize there are differences our situation and yours and this might no
I have a large group of Linux servers that I inherited from a previous
administrator. Unfortunately there is no single sign-on configured so
each server has its own local accounts with local authentication.
Normally I use ssh keys and a handy shell script to change passwords
on all these machines
> If your script change passwords via ssh and usermod, why not at
> the same time do a chage -d number username?
Thank you, I may end up doing it this way at least until we can
configure AD or LDAP authentication.
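A sketch of what that combined pass might look like; hostlist and
username are placeholders, and chage -d with today's date marks the
password as freshly changed:
for host in $(cat hostlist); do
    ssh root@"$host" "chage -d $(date +%F) username"
done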
Maybe one of you can help. We have set up a CentOS server so that
each user who logs in via sftp will be jailed in their home directory.
Here's the relevant sshd_config:
# override default of no subsystems
Subsystem sftp internal-sftp -f LOCAL2 -l INFO
Match Group sftponly
Chro
> I solved a similar issue with jail and syslog adding a "-a
> /home/jail/dev/log" parameter to syslog startup.
In our environment the chroot jail is /home/username. Does this mean
we need a /home/username/dev/log for each and every user? If the
daemon is chroot'd to /home/username wouldn't thi
> I believe you will need:
> syslogd -a "/home/username01/dev/log" -a "/home/username02/dev/log"
> -a "/home/username03/dev/log" -a "/home/username04/dev/log" - or
> something like this. I don't know the syntax for multiple "-a" options...
This seems very impractical, both from a security standpoint an
I have set up entries in /etc/hosts.allow and /etc/hosts.deny as follows:
/etc/hosts.allow
sendmail : 10.0.0.0/255.0.0.0
sendmail : LOCAL
/etc/hosts.deny
sendmail : ALL
When I try to connect to port 25 from an Internet host via telnet, the
server still responds as usual. The only difference I s
> $ ldd /usr/sbin/sendmail.sendmail | grep wrap
> libwrap.so.0 => /usr/lib/libwrap.so.0 (0x00319000)
>
> tcp_wrappers never sees the connection directly. sendmail handles it
> from start to end.
Thanks for this info. I will set up an iptables rule to block this access.
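A sketch of that rule, mirroring what the hosts.allow entries were meant
to do (the chain and position are assumptions about your existing
ruleset; loopback traffic is normally accepted earlier on lo):
iptables -A INPUT -p tcp --dport 25 -s 10.0.0.0/8 -j ACCEPT
iptables -A INPUT -p tcp --dport 25 -j REJECT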
> I'm confused. I'd expect the above symbol listing to show that sendmail is
> in fact using the libwrap library and it should be doing what the allow/deny
> files say.
>
> Regardless, the simple way to tell sendmail what you want to permit is to
> use the /etc/mail/access file.
My goal was to
Is there a way to force rsync to set a specific owner and group on
destination files? I have managed to get the permissions set up the
way I want, but the owner and group are still defaulting to a numeric
id instead of the correct owner and group. I suppose I could add a
manual "chown -R owner:gr
> Do your user and group names on both your source and destination
> systems have matching numeric values?
No. The source system is a Windows machine running cygwin-rsyncd.
> Linux/UNIX systems carry the numeric values and look up the text
> values in /etc/passwd and /etc/group for display. If
> What rsync options are you using? rsync has options to preserve owner
> and group, if you exclude those options, then won't the files assume
> the user and group of the user account on the destination machine? I
> haven't tested this, but it looks good on paper.
Currently the script runs as root
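Two hedged options, depending on the rsync at each end. The --chown flag
only exists in rsync 3.1 and later, which is newer than stock CentOS
ships, so the manual chown may be the practical answer; the user and
group names are placeholders:
# rsync >= 3.1 on the receiving side
rsync -av --chown=appuser:appgroup source/ dest/
# older rsync: copy, then fix ownership in a second pass
rsync -av source/ dest/ && chown -R appuser:appgroup dest/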
In our environment we have many legacy application servers running
apache/jserv. There is a web server front end, then a couple of
load-balanced java servers on the backside. One of the problems we
are faced with is hung or stuck jvms. I have looked at the java
process with the ps command, and t
> How about setting up a cron to monitor it and auto restart if it's not
> responding?
>
> wget -q --timeout=30 http://localhost:8008/ -O /dev/null || (command to
> restart jserv)
I tried pulling up port 8008 in a web browser, but it doesn't work
quite like that. Apache is configured with mod_jse
> Check
> http://support.hyperic.com/confluence/display/hypcomm/HyperForge/#HyperFORGE-pluginforge
> for existing plugins.
> Perhaps what you want can be done with a JMX plugin ?
Hyperic looks interesting, but anytime someone claims "Zero-Touch
Systems Management" I have to raise a skeptical eyebr
> Sounds similar to the mod_jk connector in apache to connect to
> tomcat. When I had to deal with this I setup a dedicated apache
> instance on each system running tomcat whose sole purpose for
> existence was for testing that connector.
>
> So say setup an apache instance on another port, and hav
>>> Sounds similar to the mod_jk connector in apache to connect to
>>> tomcat. When I had to deal with this I setup a dedicated apache
>>> instance on each system running tomcat whose sole purpose for
>>> existence was for testing that connector.
We have decided to take this tactic and set up a de
> [EMAIL PROTECTED]:~/ApacheJServ-1.1.2]$ ./configure
> --with-jdk-home=/usr/local/mercury/Sun/jdk1.5.0_01
> --with-JSDK=/usr/local/mercury/Sun/JSDK2.0/lib/jsdk.jar
> --with-apache-src=/usr/include/httpd/
If I run the configure command without --with-apache-src here is what I get:
configure: erro
> This seems to indicate that it wants the apache header files, which
> are installed in /usr/include/httpd. Anyway if someone has an idea
> how I can get a working mod_jserv module for CentOS3 let me know.
Ok, so after doing some more reading it appears that you can simply
build the mod_jserv.so
> mod_jserv is really old, are you sure it can be compiled against apache
> 2?
> If you need a jk connector, use mod_jk. You can find the source rpm in
> the RHWAS repository (I didn't check if CentOS has a binary version
> somewhere).
>
> ciao
> ad
Hi Andrea, thanks for your reply. I know mod_js
> Hi Andrea, thanks for your reply. I know mod_jserv is ancient, but we
> have to support it because it's still being used on production
> machines. Will mod_jk connect in the same way that mod_jserv does?
I have mod_jk module properly loaded now, how would I duplicate this
function of jserv wit
> I have mod_jk module properly loaded now, how would I duplicate this
> function of jserv with mod_jk?
>
>
>ApJServMount /servlets ajpv12://servername.com:8008/root
>ApjServAction .html /servlets/gnujsp
>
>
I should add that "servername.com" is localhost, so this could
certainly be a lo
I found this on the mod_jk howto from the apache site:
For example the following directives will send all requests ending in
.jsp or beginning with /servlet to the "ajp13" worker, but jsp
requests to files located in /otherworker will go to "remoteworker".
JkMount /*.jsp
Andrea thank you again for your help. I think I have almost got this
set up right. I copied your workers.properties file and the
appropriate entries from mod_jk.conf and now I can connect, but get a
400 error. I only have the default Apache site configured on this
box, and my mod_jk.conf file in
>> JkOptions +ForwardKeySize +ForwardURICompat -ForwardDirectories
>
> I'm not too familiar with those JkOptions but looking at my old
> mod_jk configs I have no JkOptions defined, try removing them and
> see if anything changes? My old configs were ajp13, so perhaps
> they might be needed with ajp
> I guess what I'm not clear on is how you replace mod_jserv's configuration:
>
>ApJServMount /servlets ajpv12://host.domain.com:8008/root
>
> with the equivalent version using JkMount.
>
On the old server running mod_jserv our configuration looks like this:
ApJServMount /servl
> Might it be
>
> JkMount /*.html ajp12
>
> assuming ajp12 is the name of your worker in worker.properties
>
Yea, I tried that and even just a simple wildcard like this:
JkMount /* ajp12
but no dice. If I can't solve this then I may have to just install
apache 1.3 everywher
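In case it helps to see the whole picture, a sketch of a minimal mod_jk
setup standing in for the old ApJServMount line. The paths and the worker
name are assumptions, but the worker type must stay ajp12 to talk to a
JServ-era listener on port 8008:
# workers.properties
worker.list=ajp12
worker.ajp12.type=ajp12
worker.ajp12.host=servername.com
worker.ajp12.port=8008
# httpd.conf
JkWorkersFile /etc/httpd/conf/workers.properties
JkLogFile /var/log/httpd/mod_jk.log
JkMount /servlets/* ajp12
JkMount /*.html ajp12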
This is a bit naive and childish:
"how terribly shocking...I suggest also blocking China, 'cause they're
commies, and France because they eat frogs"
The OP is not discriminating against Africa because of government systems,
skin color, or diet. He is trying to reduce lost revenue, credit card
re
>
>
> Ever heard of the Western Union scam?
Yes, it usually goes something like this:
Scammer emails an online business asking if he can over-pay you with a
check. The check looks just like any other business check and is often
printed with the name of a real bank. The scammer then asks you to
This awk command pulls URLs from an apache config file, where $x is
the config filename.
awk '/:8008\/root/ {printf $3 "\t"}' $x
The URL that is output by the script looks something like this:
ajpv12://hostname.network.company.com:8008/root
Is there a way to alter the output so it only shows "h
>> The URL that is output by the script looks something like this:
>>
>> ajpv12://hostname.network.company.com:8008/root
>>
>> Is there a way to alter the output so it only shows "hostname" by
>> itself? Do I need to pipe this through awk again to clean it up?
>
> awk '/:8008\/root/ {printf $3 "\t
The awk output that was piped into the sed command looks like this:
ajpv12://host1.domain.company.com:8008/root
ajpv12://host2.domain.company.com:8008/root
ajpv12://host3.domain.company.com:8008/root
Those are supposed to be tab-separated URLs, all on one line.
> If 'ajpv12://' and ':8008/root' are always going to be the same:
>
> awk '/:8008\/root/ {printf $3 "\t"}' $x | sed 's/ajpv12:\/\///g' | sed
> 's/:8008\/root//g'
>
> If these change then you're going to need either a more complex awk,
> or more complex sed expression.
>
> -Ross
Marvelous. Thanks
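For what it's worth, the two sed passes can also be folded into the awk
itself, under the same assumption that the prefix and suffix are fixed:
awk '/:8008\/root/ {sub(/^ajpv12:\/\//, "", $3); sub(/:8008\/root$/, "", $3); printf "%s\t", $3}' $x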
Will there be a BIND patch available for this vulnerability, for CentOS 3.9?
http://www.kb.cert.org/vuls/id/800113
I'm attempting to block access to port 53 from internet hosts for an
internal server. This device is behind a gateway router so all
traffic appears to come from source ip 10.100.1.1. Here are my
(non-working) iptables rules:
-A RH-Firewall-1-INPUT -s 10.100.1.1 -m tcp -p tcp --dport 53 -j REJECT
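One likely reason the rule never fires: DNS queries normally travel over
UDP, and the rule above only matches TCP. A hedged sketch covering both,
with the caveat that if the gateway NATs everything to 10.100.1.1, this
box cannot tell internet sources from internal ones, and the filtering
really has to happen on the gateway itself:
iptables -A RH-Firewall-1-INPUT -s 10.100.1.1 -p udp --dport 53 -j REJECT
iptables -A RH-Firewall-1-INPUT -s 10.100.1.1 -p tcp --dport 53 -j REJECT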
> CRITICAL : [ipv6_test] Kernel is not compiled with IPv6 support
> [ OK ]
> FATAL: Module off not found.
> CRITICAL : [ipv6_test] Kernel is not compiled with IPv6 support
Try looking inside /etc/modprobe.conf for these lines:
alias net-pf
I've used the guide on mantic.org before, worked well for me:
http://www.mantic.org/wiki/Installing_BackupPC
We use BackupPC extensively where I work, once you get it settled down
and in a steady state it is invaluable.
> Yep. They are there. So what is the 'proper' method to get them out (other
> than using vi and deleting the lines)?
>
I would comment them out and add another comment like this:
# Un-comment these to disable ipv6
#alias net-pf-10 off
#alias ipv6 off
You will need to reboot the server to enab