x is always playing catch-up on toys produced for
commercial OSes.
Sean
Parshwa Murdia wrote:
> But at least work could be done in Fedora too; without
> going into the technical details, at least multimedia could be used,
> secure bank transactions could be done, prints can be taken a
m.r...@5-cent.us wrote:
Lessee, FC10->FC13 ... but gnome is completely
broken, and you can't log
in, then find that gnome is hostile to window manager switching ...
At least you got to late-FC before that one ... still UNFIXED since
RH8! ...(so KDE since for m
months ago either Clonezilla or Gparted (or both) did not.
Sean
ng worked on here:
http://flattr.com/
But, apart from being slightly experimental, it may not be appropriate
for your particular dilemma?
Sean
giving money, pay
> the bill(s).
Core issue, I think, is the rights, privileges etc (the 'ownership'
attributes -- whether explicit or implicit) which attach to making
payments under most models. If/when my own little earner project fails
to earn, it disappears, and
(e) call up 'konquerorsu.desktop' (root-konqueror with embedded root-Kterm)
(f) have normal cron scheduling
...... maybe more,
but that's a start.
Thanks for listening.
Sean
Ah, a reminder that it is always dangerous to unveil the vague? Sorry
... I should have pre-read 6000 pages from Redhat ... (but maybe I did!).
Sean
Michael R. Dilworth wrote:
> I'm sorry (I know don't feed the trolls), but recently
> there have been quite a few remarks resembli
ut thanks for the thought.
Sean
> On Fri,
> 17 Dec 2010, Sean wrote:
>
>> To: centos@centos.org
>> From: Sean
>> Subject: [CentOS] two cents or not two cents
>>
>> Hello Producers
>>
>> "Longevity of Support" is an attractive drawcard fo
Les Mikesell wrote:
> On
> 12/17/10 2:12 PM, Sean wrote:
>> Interesting, and probably worth a play with indeed, although I tend to
>> steer clear of Bash (which I'm unhappy with) whenever possible and do the
>> same in Perl (which I'm happy with). I imagine there is machine level stuff inv
Les Mikesell wrote:
> On
> 12/18/10 3:24 PM, Sean wrote:
>>
>>>
>>> Or, you might move to java for a more self-contained, OS/distribution
>>> independent way of doing things.
>>>
>> Why Perl? Because writing/maintaining 20,000 lines of te
and since I
gather that a new major is imminent, maybe it will support the new Google
Chrome (along with Seamonkey, Opera-11+)? I wonder if there is a list of
packages somewhere. If the repo web-page for CentOS provided the actual
repo-address I was going to try to direct my FC4-yum there for list
.
(including my own, which are a sort of dark grey!?).
Sean
would certainly
get overwrites, though not quite random.
Sean
Les Mikesell wrote:
> On
> 12/21/2010 1:06 PM, Sean wrote:
>>
>>>If you can treat something as a black box and trust it, the size of
>>> the component isn't that important.
>> "If" or "IFF" ..(IF AND ONLY IF)..? A deep sceptici
Not sure exactly what you are trying to do, but Tie::File might be worth
a look at if you haven't done so already?
Sean
ken wrote:
> Given an HTML file which looks like this:
>
> - begin snippet -
>
>> > > We've Lied to You…> >
Hi all,
RH published the advisory 2 weeks ago, according to
https://access.redhat.com/errata/RHSA-2018:0980. The main repo does not
appear to have the packages noted yet -
http://mirror.centos.org/centos/7/updates/x86_64/Packages/
We've been waiting on a few of these bugs to be fixed for some ti
vendors to receive broken drives back from GOV/MIL clients securely so that
failure methods can be researched.
Dell and EMC have been presenting this to us at storage briefs for a couple
of years now.
--Sean
On Thu, May 10, 2018 at 8:00 AM wrote:
> From: m.r...@5-cent.us
> To: CentOS mailin
Is there a way to track CentOS's progress on RHSA-2018-2113?
https://access.redhat.com/errata/RHSA-2018:2113
Thanks!
Re: [CentOS] Firefox 60.0.1.0 ESR Progress?
> On 07/02/2018 06:57 AM, Sean wrote:
> > Is there a way to track CentOS's progress on RHSA-2018-2113?
> >
> > https://access.redhat.com/errata/RHSA-2018:2113
> >
> > Thanks!
l list.
I appreciate you taking the time to answer this thread! Thanks for
your hard work!
From: Johnny Hughes
To: centos@centos.org
Cc:
Bcc:
Date: Thu, 5 Jul 2018 06:16:14 -0500
Subject: Re: [CentOS] Firefox 60.0.1.0 ESR Progress?
On 07/03/2018 09:04 AM, Sean wrote:
> Thanks for the idea, I'm
don't doubt that if I ditched NetworkManager and went for eth0:0 and
eth0:1 for the IP interfaces, all would be well. I'd just like to see if
anyone has some input on the issue.
--Sean
Before I load the proprietary driver on all the problematic systems, I
was hoping someone on the list might have some insight or suggestions.
Thanks!
--Sean
Hi Leon,
I don't have access to a CentOS 6.10 system handy, but it looks like a
policy issue. If I take your ausearch output and pipe it to
audit2allow on my CentOS 7.6 system, I get the following:
#= httpd_t ==
# This avc is allowed in the current policy
allow htt
t CentOS is my
platform so I'm not sure if it's a distribution specific configuration
or functional change to Gnome. I tried searching through
gitlab.gnome.org to see if I can dig up any issues, release notes and
such, but I didn't find an
herwise noting a
"design" change between Gnome versions.
--Sean
On Mon, Feb 18, 2019 at 12:40 PM James Pearson
wrote:
>
> Sean wrote:
> >
> > It seems that with CentOS 7.6 and Gnome 3.28, a clean install of a
> > Workstation package profile does not build th
like a reasonable task for a Gnome user to do without
escalating privilege. I can't explain why growisofs needs getattr on
all those disk devices, or why it "should" be denied. I have not
tested extensively outside of the current scenario, but I do believe
if the user is unconfined the burn process works as expected. There
is a very old Fedora bug suggesting similar, but not identical
behavior: https://bugzilla.redhat.com/show_bug.cgi?id=479014
--Sean
. So I made a project just to play
with this stuff, if you want to check it out [4].
[1] http://shellcheck.net
[2] https://github.com/koalaman/shellcheck
[3] https://github.com/bats-core/bats-core
[4] https://gitlab.com/salderma/bash-spec-test
--Sean
>
> From: Jerry Geis
> To: CentO
Can anyone point me to a how to or beginners guide to setting up LDAP
authentication on CentOS5 with replication?
On Jan 10, 2008 6:38 PM, Craig White <[EMAIL PROTECTED]> wrote:
> On Thu, 2008-01-10 at 14:40 -0600, Sean Carolan wrote:
> > Can anyone point me to a how to or beginners guide to setting up LDAP
> > authentication on CentOS5 with replication?
>
> well, i
sure, I use webmin's LDAP Users and Groups module on every network
server that I maintain. It's perfect for my needs.
Yes, this is exactly what I'm trying to do. It would be perfect for our
needs too.
The first question that occurs to me is if you did all that. When you do
'getent passwd'
not really, have you run system-config-authentication ? That also
configures pam & nss which are necessary items.
Yes, I have and unfortunately when the 'ldap' tags are added to
/etc/nsswitch.conf the system won't allow me to authenticate, su or sudo
at all!
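For what it's worth, the usual safeguard here is to keep "files" ahead of
"ldap" in nsswitch.conf so local accounts (including root) still resolve
even if the LDAP lookup fails or hangs. A minimal sketch of the relevant
lines (your full file will list more databases):

  # /etc/nsswitch.conf -- consult local files first, then LDAP
  passwd:   files ldap
  shadow:   files ldap
  group:    files ldap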
If each user shows only once AN
Thanks for your patience, Craig. So I took your advice and started
with a fresh install of CentOS 5, and followed the instructions in the
documentation exactly as they are written. I got this far:
[EMAIL PROTECTED] migration]# ./migrate_all_online.sh
Enter the X.500 naming context you wish to i
new users and authenticate them,
manage tickets, etc. Now I understand what you meant about LDAP not
being designed for authentication. Thank you again for your time,
Craig. This was a good learning experience for me.
thanks
Sean
> sure but for less than $20 and 2-3 hours, you can master LDAP and be the
> envy of all the guys in your office and the object of affection for all
> the ladies.
>
> ;-)
>
> kerberos is actually a more secure authentication system because
> passwords don't continually cross the network.
I do plan
Could somebody please repost the solution or point me at the correct
resource.
I would also appreciate advice on how to do this on a RHEL4 server being
updated with up2date.
Is it safe just to delete the old kernel and initrd files from the boot
partition and the grub conf file?
Unless you
exact same problem with a machine just a couple days ago. In
my case, unmounting the file system and running e2fsck -f on the
partition fixed the problem. At least it might be worth a try.
thanks
Sean
Maybe there's an ntp expert out there who can help me with this. I have an NTP server serving our local network. It is
set up to use pool.ntp.org servers for its upstream sync. ntpq -p reveals that the server is stuck on stratum 16,
which I understand means "not synced". The clients are unab
The zeros in the "reach" column indicate that the server has been unable to
receive any packets from the upstream servers.
Is your server inside a firewall? If so, perhaps it is blocking NTP traffic.
You need to have it allow UDP port 123 in both directions. You don't need
port forwarding from ou
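If the firewall is the suspect, a quick check is to query one of the pool
servers directly from the NTP host and, if that fails, open UDP 123; a
minimal sketch (the pool hostname and iptables rules are illustrative,
adjust to your own policy):

  # Query a pool server without setting the clock;
  # "no server suitable" suggests blocked UDP 123
  ntpdate -q 0.pool.ntp.org

  # Allow NTP traffic in both directions (example rules only)
  iptables -A OUTPUT -p udp --dport 123 -j ACCEPT
  iptables -A INPUT  -p udp --sport 123 -j ACCEPT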
This is almost certainly incorrect unless you're running a very, very
old RHEL/CentOS release. I believe /var/lib/ntp is the canonical
directory for the drift file in 4.x and 5.x. I doubt ntpd is allowed to
write to /etc/ntp, especially if SELinux is enabled.
Good observation, Paul. That conf
I am using CentOS 5.0 on my desktop workstation. Are there any deeply
compelling reasons to upgrade to version 5.1? I read through the release
notes but didn't see any whiz-bang new features. Perhaps some of you can
share your personal experience letting us know if you have noticed any
diffe
Want to have security updates?
That depends. If the security update is for a local vulnerability on my own
single-user workstation then I may think twice before installing it. In other
words, if the security risk is minimal then it may not be worth the hassle of
upgrading my kernel and havi
You can set that as an option in yum.conf . However, you do run the chance of
running out of space in /boot if you get too many kernels piled up there. The
default is to keep the last 2 (or 3?) kernels and delete the older ones.
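For reference, the yum.conf knob being described is installonly_limit; a
minimal sketch (the value is just an example):

  # /etc/yum.conf
  [main]
  installonly_limit=3   # keep at most three kernels; older ones are removed on update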
I wonder why it is trying to delete a newer kernel than the one I
So, yes there are deeply compelling reasons to upgrade. If you want to
have patches for several kernel buffer exploits, as well as many other
security and functionality patches, you need to do one thing:
yum upgrade, and answer yes.
Or even easier:
yum -y upgrade.
When I have some time to re
If you want to keep your existing kernel for a while, just change the
grub default back after the update installs the new one. Then you can
switch, reboot, and rebuild the necessary stuff whenever you have a chance.
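Concretely, on a CentOS 5 style setup that just means pointing default= in
grub.conf at the stanza you want; a sketch (the entry number is illustrative):

  # /boot/grub/grub.conf -- stanzas are numbered from 0, top to bottom
  default=1   # keep booting the second-listed (older) kernel until you are ready to switch
  timeout=5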
Thanks, this is probably what I will end up doing. I tend to err on the side
Can anyone recommend an enterprise-class monitoring system for both
Linux and Windows servers? Here are my requirements:
SNMP trap collection, ability to import custom MIBs
isup/isdown monitoring of ports and daemons
Server health monitors (CPU, Disk, Memory, etc)
SLA reporting with nice graphs
P
> You might take a look at OpenNMS and ZenOSS. I'm not sure if either
> could do everything you're asking for out of the box however.
Thanks, ZenOSS just might fit the bill.
> I tried to use Zenoss for monitoring a small network (about 5 subnets)
> and i had really a hard time with relationships (a version of sept 2007).
Did you use the 'enterprise' or the OS version?
I understand that Red Hat has purchased and open-sourced (well sort
of) what was formerly known as Netscape Directory Server. I am
looking for version 6 of netscape directory server, does anyone know
if this is available somewhere?
http://www.redhat.com/docs/manuals/dir-server/release-notes/ds60r
> I believe the only thing you can download is the code that was audited
> for suitable GPL License which is what is known as Fedora Directory
> Server...
>
> http://directory.fedoraproject.org/wiki/Download
I figured as much. I have an old version of Netscape Directory Server
which I was hoping
nt to put this into a for loop so I can grab the info from
multiple machines.
thanks
Sean
> > ssh servername awk -F: "'{if (\$3 > 499) print \$0}'" < /etc/passwd
>
> ssh servername awk -F: "'{if (\$3 > 499) print \$0}' < /etc/passwd"
>
> otherwise '< /etc/passwd' happens on the client.
Awesome, thanks!
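To fold that into the for loop mentioned earlier, one sketch (the hostnames
are placeholders) is to let awk read /etc/passwd as an argument, which
sidesteps the remote-redirect quoting entirely:

  for host in server1 server2 server3; do
      echo "== $host =="
      ssh "$host" "awk -F: '\$3 > 499 {print}' /etc/passwd"
  done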
I just customized my prompt with a PS1= variable. Since updating my
.bashrc this way, when I try to run commands remotely with ssh I get
this:
[EMAIL PROTECTED]:~]$ ssh server pwd
No value for $TERM and no -T specified
/home/scarolan
No value for $TERM and
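A common workaround for this (a sketch, not necessarily what was done here)
is to let ~/.bashrc bail out early for non-interactive shells, so the prompt
and terminal setup never runs for 'ssh host command':

  # ~/.bashrc -- skip prompt/terminal setup when the shell is not interactive
  case $- in
      *i*) ;;          # interactive: continue to the customisations below
      *)   return ;;   # non-interactive (e.g. 'ssh server pwd'): stop here
  esac
  PS1='[\u@\h \W]\$ '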
Keychain is quite a useful tool for automating SSH logins without
having to use password-less keys:
http://www.gentoo.org/proj/en/keychain/
Normally it is used like this to set the SSH_AUTH_SOC and
SSH_AGENT_PID variables:
source ~/.keychain/hostname-sh
(This is what's in hostname-sh)
SSH_AUTH_SO
> If the variables are exported when you call Perl, they will be
> available inside Perl. And, when you call "system", they will be
> available for those processes as well. Are you having any specific
> problems doing this? If you can't make it work, please send more
> details on what exactly
> The variables are not exported when I call Perl. This is what I am
> trying to do. How do I get those variables to be available to the
> bash "system" commands within the script?
Just so it is clear what I am trying to do, there are some scp and ssh
commands like:
system ("/usr/bin/scp /tm
> system("source ~/.keychain/hostname-sh; cmd");
>
> Is this what you're looking for?
Yes, this works. Is there a way to only source the file once? There
are a bit over a dozen scp and ssh commands in the script.
Unfortunately this is not my script, otherwise I'd have just done this
all in bas
> One solution would be to "source ~/.keychain/hostname-sh" in the shell
> before calling the perl script. That should work.
Ok, can't do this because end-users will not like the extra step.
> Another one would be to source it before calling scp:
>
> system ("source ~/.keychain/hostname-sh; /
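One way to avoid both the extra user step and a dozen repeated source calls
is a tiny wrapper that sources the keychain file once and then exec's the
existing Perl script, so every system() call inherits SSH_AUTH_SOCK and
SSH_AGENT_PID; a sketch (the script path is a placeholder, and the keychain
file name follows the hostname-sh convention above):

  #!/bin/sh
  # wrapper.sh -- load the agent variables once, then hand off to the Perl script
  . "$HOME/.keychain/$(hostname)-sh"
  exec /usr/local/bin/transfer-files.pl "$@"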
We have a directory full of installation and configuration scripts
that are updated on a fairly regular basis. I would like to implement
some sort of version control for these files. I have used SVN and CVS
in the past, but I thought I'd ask if anyone can recommend a simple,
easy-to-use tool that
> I dont really think you can get much easier than CVS if you need
> centralized management over a network. If it never gets off the
> machine then there is RCS. If those aren't simple enough... I don't
> think any of the others are going to help.
Thanks for the pointers, it looks like we will
I have run into a snag with my CVS installation:
[EMAIL PROTECTED]:~]$ cvs co -P installfiles
cvs checkout: Updating installfiles
cvs [checkout aborted]: out of memory; can not allocate 1022462837 bytes
Unfortunately we have a couple of large binary .tgz files in the
repository. I was able to ch
> Try upping your ulimit.
> What does "ulimit -a" give.
[EMAIL PROTECTED]:~]$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
max locked memory (kbytes, -l) 4
max memory size (kbytes, -m) unlimited
> > Checking in binary files into CVS or any repository control system is
> > usually a broken thing. You want to either check in the stuff inside
> > the tar ball seperately (if its going to change), or just copy it into
> > the archive by updating CVSROOT/cvswrappers
This comes back to the p
> Because these tools are meant to deal with source code files and deal
> with diffs of such files. You are cramming a 1 gigabyte of compressed
> bits at it and its trying to make sure it could give you a diff of it
> later on. I don't have any idea why you would want to store it in a
> CVS ty
"just copy it into the archive by updating CVSROOT/cvswrappers"
*.tar -k 'b' -m 'COPY'
*.tbz -k 'b' -m 'COPY'
*.tgz -k 'b' -m 'COPY'
This worked great. Thank you, Stephan. The enormous .tar.gz is now
easily swallowed by the CVS snake.
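If you want to confirm the wrapper actually took effect on a given file,
cvs status should show the binary keyword mode as a sticky option; a sketch
(the path is a placeholder):

  cvs status installfiles/bigdump.tgz | grep 'Sticky Options'
  # expect something like:  Sticky Options:  -kb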
> Thank you, Stephan. The enormous .tar.gz is now
> easily swallowed by the CVS snake.
I mis-spelled your name, Stephen, my bad.
Ok, I can't quite figure out how to make this work. I want to
simultaneously log everything for facility local5 in a local file and
a remote syslog-ng server. local7 is working fine getting the
boot.log log entries transferred over to the syslog-ng server, but not
so much with local5. Local log
UPDATE:
The problem seems to be on the client side, because when I do this:
logger -p local5.info test
the file does show up properly on the syslog-ng host. Anyone have an
idea why the other processes that write to local5 on the client are
not logging to the remote host?
> local5.*
I have also found that there are a small handful of hosts that seem to
spit out a line or two of log output once in a while on the server,
but have not yet identified a pattern.
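For comparison, the client-side configuration for "local5 to a local file
and to the remote collector" is normally just two selector lines in
syslog.conf; a sketch (the log path and hostname are placeholders, and
syslogd needs a restart afterwards):

  # /etc/syslog.conf
  local5.*    /var/log/local5.log
  local5.*    @syslog-ng.example.com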
I have a virus and spam filter device that can do VRFY commands to
reject invalid email before it gets to the next mail hop. How can I
configure the SMTP server to only allow VRFY commands from one
particular IP address, and nowhere else? I don't want spammers to be
able to hammer on the gateway
> Block the outside world from reaching anything but the filter by firewall or
> other means. Otherwise the spammers will find it and go around your filter.
I think this will probably work. I believe this server does not need
to be open to the 'net.
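In iptables terms that restriction is short; a sketch (192.0.2.10 stands in
for the filtering appliance's address):

  # Accept SMTP only from the filter, drop it from everyone else
  iptables -A INPUT -p tcp --dport 25 -s 192.0.2.10 -j ACCEPT
  iptables -A INPUT -p tcp --dport 25 -j DROP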
I have a bunch of these scattered through /var/log/messages on a
couple of servers:
I/O error: dev 08:20, sector 0
The server functions just fine. Anyone have an idea what might be
causing this error?
On 1/19/11 11:49 AM, Rudi Ahlers wrote:
> On Wed, Jan 19, 2011 at 9:46 PM, Joshua Baker-LePain wrote:
>> On Wed, 19 Jan 2011 at 11:44am, Bob Eastbrook wrote
>>
>>> By default, CentOS v5 requires a user's password when the system wakes
>>> up from the screensaver. This can be disabled by each us
Memory Utilization is 92.02%, crossed warning (80) or critical (90)
> threshold.
>
> Since the server has 128 GB RAM and only 1 application, I really don't believe
> that. Is there some way to check memory utilization?
>
What is
print $6 " " $7 " " $9}' | sort -r
Not using ls:
To take that input and sort it you'd have to do some hashing to translate
the months to a sortable format (like numbers), I think. Alternatively,
you could use the listed date to generate a UTC date via the date command.
~S
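Another route that avoids month-name hashing altogether is to have stat
print an epoch timestamp and sort on that instead of parsing ls output; a
sketch using GNU stat (the directory is a placeholder):

  # "epoch  name" per entry, newest first, then drop the epoch column
  stat -c '%Y %n' /some/dir/* | sort -rn | cut -d' ' -f2-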
I really like gnu screen and use it everyday but there's one thing
that is a bit inconvenient, and that's the odd line wrapping and
terminal size issues that seem to pop up. The problem crops up when I
type or paste a really long command, and then go back and try to edit
it; the text starts to wra
> You wouldn't by any chance be using PuTTY to access the session? If
> so, you may need to play around with the terminal settings including
> the scroll type so that it displays correctly. I don't recall the
> specifics but a similar thing happened to me.
Actually, no I'm using gnome-terminal o
> I tried to forget the incompatibilities in different old terminal types
> after about everything settled on xterm compatibility. Instead of
> running screen, can you run a desktop session under freenx on a server
> somewhere and run everything in terminal windows there (even ssh
> sessions that
e rich. Requires hardware
to go with it. http://www.zeus.com/products/load-balancer/
IPVS or LVS can work as a really simple/free solution:
http://www.linuxvirtualserver.org/software/ipvs.html
Round robin DNS would balance load, but will cause problems if one of
them goes down.
You could also set up apache or squid to do proxying...
Cheers,
Sean
>>> Hi Sean,
>>>
>>> Can you explain as I may be planning this for a site.
>>>
>>> So if I have 2 identical servers, each with their own IP, how will
>>> one
>>> of them going down cause issues?
>>>
>>> I'
> In this case, you might want to conditionally assign some reasonable
> value on failure. Say:
>
> tput -T $TERM init >/dev/null 2>&1 || export TERM=xterm
>
> 'tset -q' is another test which can be used.
The remote host's $TERM variable is in fact xterm. When I connect to
the screen session
>> The remote host's $TERM variable is in fact xterm. When I connect to
>> the screen session the $TERM variable is 'screen'.
>
> Are you running screen locally or remotely?
Remotely. My work machine is a laptop, which is not powered on all
the time. Hence I use a remote box as a jumping-off po
Can anyone point out reasons why it might be a bad idea to put this
sort of line in your /etc/hosts file, eg, pointing the FQDN at the
loopback address?
127.0.0.1   hostname.domain.com hostname localhost localhost.localdomain
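For contrast, the conventional layout keeps the loopback line generic and
maps the FQDN to the machine's real address; a sketch (192.0.2.5 stands in
for the actual IP):

  # /etc/hosts
  127.0.0.1    localhost localhost.localdomain
  192.0.2.5    hostname.domain.com hostname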
> First, if your host is actually communicating with any kind of ip-based
> network, it is quite certain, that 127.0.0.1 simply isn't his IP
> address. And, at least for me, that's a fairly good reason.
Indeed. It does seem like a bad idea to have a single host using
loopback, while the rest of t
> (Make sure you pick .dummy so as not to interfere with any other DNS.)
>
> In theory you could leave off .dummy, but then you risk hostname being
> completed with the search domain in resolv.conf, which creates the
> problems already mentioned with putting hostname.domain.com in
> /etc/hosts. (I
I have an odd situation here, maybe one of you can help. We have a
script that runs via a cron job. Its purpose is to decrypt
PGP-encrypted files in a certain directory. I have tried the command
two different ways, both fail with the same error message:
gpg --decrypt $file > ${file%.txt}.decry
On Mon, Oct 19, 2009 at 2:41 PM, Spiro Harvey wrote:
> Is the cron job running as a different user? eg; are you running gpg as
> a non-privileged user and the cronjob as root?
The cronjob script runs from /etc/crontab. Let me try root's personal
crontab instead.
> Typically this type of problem is caused by environment variables
> that are set in a login shell, but are missing or different than
> those set for jobs running under cron.
You nailed it, Bill. Running the cron from root's personal crontab
worked fine. Must have been environment variable rela
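If the job ever needs to move back to /etc/crontab, another option is to
hand it the environment gpg expects explicitly; a sketch (paths, schedule
and script name are placeholders):

  # /etc/crontab -- give gpg an explicit HOME/GNUPGHOME instead of relying on cron's defaults
  HOME=/root
  GNUPGHOME=/root/.gnupg
  15 * * * *  root  /usr/local/bin/decrypt-incoming.sh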
I have an SSH server that was set up for a client, and every time we
try to upload large files via SFTP or scp, the transfers speed quickly
slows to zero and gives a - stalled - status message, then
disconnects. Here is an example:
ftp> put iTunesSetup.exe iTunesSetup.exe
Uploading iTunesSetup.ex
On Mon, Dec 21, 2009 at 7:06 PM, 唐建伟 wrote:
> I met the same as you, but always due to the bad network connection.
>
I should probably provide some more information, the server is a VMware
guest running CentOS 5.3. It's using the vmxnet driver for the eth0
connection. IPv6 is disabled.
> I'm not sure what would cause that, but I'd use rsync over ssh instead of sftp
> anyway - and use the -P option to permit restarting.
If it were up to me, we'd take that route. The software the client is
using is WinSCP which does have a restart feature, however it's not
working for us. I'm wo
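For the record, the rsync-over-ssh variant with restart support is a
one-liner; a sketch (user@server and the target path are placeholders):

  # -P = --partial --progress: keep partial files so an interrupted copy can resume
  rsync -avP iTunesSetup.exe user@server:/incoming/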
> Tell him to switch WinSCP to SCP mode.
>
> Kai
Tried that, it still fails the same way. Here's the short list of
what I've tried to troubleshoot this:
Used SCP via the gui and command line
Used SFTP via the gui and command line
Ran yum update to bring all packages up to date
Tried stock CentOS
> Just an idea or thought on it. You never said what the file size was, did
> you? My idea is: isn't there a file size limitation on transfers to
> and from the server? I thought there was. Check your vsftpd.conf, or
> whatever ftp server you're running, for the size limitation. Maybe s
> Load balancer... is that set up to maintain connections, or will it, like
> IBM's
> WebSeal, go to whichever server is next/least used in the middle of a
> connection?
It's set to use "least connection" but there is only one server behind
the virtual IP at the moment.
I'm reasonably sure at t
>> # Turn off SACK
>> net.ipv4.tcp_sack = 0
>
> and execute "sysctl -p" to apply it. You can also use "sysctl -w
> net.ipv4.tcp_sack=0" to turn it off temporarily. Our file transfers worked
> just fine after the change.
>
> I realize there are differences our situation and yours and this might no
dea for a
workaround?
Thanks
Sean
> If your script changes passwords via ssh and usermod, why not at
> the same time do a chage -d number username?
Thank you, I may end up doing it this way at least until we can
configure AD or LDAP authentication.
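As a sketch of that combined step (the hostname and username are
placeholders), the aging clock can be reset right after the remote
password change:

  # Reset the "last changed" date to today so the freshly set password is not flagged as expired
  ssh server1 "chage -d \$(date +%F) someuser"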
Maybe one of you can help. We have set up a CentOS server so that
each user who logs in via sftp will be jailed in their home directory.
Here's the relevant sshd_config:
# override default of no subsystems
Subsystem sftp internal-sftp -f LOCAL2 -l INFO
Match Group sftponly
Chro
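For readers without the full file, a generic chroot-sftp stanza (this is a
textbook example, not necessarily the poster's exact configuration) looks
like:

  Subsystem sftp internal-sftp -f LOCAL2 -l INFO
  Match Group sftponly
      ChrootDirectory %h
      ForceCommand internal-sftp
      AllowTcpForwarding no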
> I solved a similar issue with jail and syslog adding a "-a
> /home/jail/dev/log" parameter to syslog startup.
In our environment the chroot jail is /home/username. Does this mean
we need a /home/username/dev/log for each and every user? If the
daemon is chroot'd to /home/username wouldn't thi
> I believe you will need:
> syslogd -a "/home/username01/dev/log" -a "/home/username02/dev/log"
> -a "/home/username03/dev/log" -a "/home/username04/dev/log" - or
> something like this. I don't know the syntax for multiples "-a"...
This seems very impractical, both from a security standpoint an
firewall policy worked out.
thanks
Sean