On Sun, Oct 27, 2013 at 08:15:43PM -0600, Bob Proulx wrote:
> Reco wrote:
> > Oh. You mean that HP suddenly transformed to good fairies and stopped
> > charging extra for aCC? Or IBM received an encrypted signal from their
> > supervisors from Mars and did the same to vacc? And don't even mention
> > Sun, those guys managed to build their base system with two different C
> > compilers at once (gcc and that thing they put in Sun Studio instead
> > of C compiler).
> 
> Wait.  You mean the first thing you compile on a new system isn't gcc?
> Sometimes it would be 'make' first.  Then gcc, binutils, and the rest
> of the support chain.  The make again using gcc.  Then a hundred
> others!

Yep. On Solaris I use vendor packages with gcc, gmake and the rest of the
GNU toolchain. On AIX I use the Linux Compatibility toolkit, and it
provides the GNU toolchain too.
Luckily I don't have to compile anything for HP-UX. I've heard someone
built gcc for it; I haven't needed it so far.

I once bootstrapped the GNU toolchain on Solaris (it was x86, so it was
relatively fast), and I have no desire to repeat that process on, say, a
T2000.
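For reference, that bootstrap looks roughly like the sketch below: seed
compiler first, then binutils, then gcc, then gmake. All versions, paths
and configure flags here are illustrative, not a record of what I actually
ran:

```shell
# Hypothetical bootstrap sketch: seed compiler -> binutils -> gcc -> gmake.
PREFIX=/opt/gnu
export PATH=$PREFIX/bin:$PATH

# 1. Build GNU binutils with the vendor compiler as the seed.
cd binutils-2.23 && ./configure --prefix=$PREFIX CC=cc && make && make install

# 2. Build gcc itself; 'make bootstrap' rebuilds the compiler three times
#    to verify that the new gcc reproduces its own object code.
cd ../gcc-4.7 && ./configure --prefix=$PREFIX --enable-languages=c,c++ \
    --with-gnu-as --with-gnu-ld && make bootstrap && make install

# 3. Build GNU make with the freshly built gcc.
cd ../make-3.82 && ./configure --prefix=$PREFIX CC=gcc && make && make install
```

On something like a T2000, with its slow single-thread cores, the `make
bootstrap` step alone can take many hours, which is exactly why I don't
want to repeat it.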

> 
> > As for 'solid base'... C'mon, treating openssh as a third-party tool? No
> > meaningful firewall in default install? Telnet and FTP (root is allowed
> > by default) enabled by default and are listening 0.0.0.0? Mandatory
> > access control as a paid feature? Clearly our definitions of 'solid
> > base' are different.
> 
> By solid base I mean the Unix kernel.  Have you ever needed to rescue
> a system suffering under a fork-bomb?

Well, there was that incident with Solaris projects and limiting LWPs
through them, when I thought it would be a good idea to test the limit
with a Perl fork bomb. That particular project was configured the wrong
way :(
The bugger ate all memory just as happily as it would have on Linux.
Forking any new process became impossible as a result, so the server was
bounced.
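For the curious, the resource control in question is the Solaris projects
framework and its `project.max-lwps` control. A correctly configured
project looks something like this (the project name and limit are made up
for illustration):

```shell
# Create a project capping the number of LWPs for user 'webapp' at 200.
# The 'deny' action makes thread/process creation fail once the cap is hit.
projadd -K 'project.max-lwps=(privileged,200,deny)' user.webapp

# Verify the control against the running project:
prctl -n project.max-lwps -i project user.webapp
```

The catch, as I found out, is that `max-lwps` caps threads, not memory: a
fork bomb that also allocates will still exhaust RAM unless memory caps
(e.g. via `rcapd`) are configured as well.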


> Under the Linux kernel with
> defaults you will need to power cycle it.  Even if you were already
> logged into it at best you would rather quickly get "Connection closed
> by foreign host."  But I have been able to log into HP-UX systems
> while under such stress and was able to kill the offending processes.
> That is what I meant by a solid base.  It has a solid kernel.  That is
> the base of the operating system.

I haven't tested fork bombs on HP-UX (that's something I'll probably do
in the future). If it uses optimistic memory allocation, it'll be an
interesting experience.
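On Linux, at least, the defaults can be tightened so a fork bomb hits a
per-user process cap instead of taking the whole box with it. A minimal
sketch, with illustrative limit values:

```shell
# /etc/security/limits.conf - per-user process caps (illustrative values).
# The soft limit can be raised by the user up to the hard limit; the hard
# limit is absolute.
#   @users  soft  nproc  1024
#   @users  hard  nproc  2048

# The same cap for the current shell session only:
ulimit -u 1024

# With the cap in place, fork() fails with EAGAIN once the limit is
# reached, and root (whose limit is separate) can still log in and clean up.
```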


> The other things you mention I
> place in another layer above it.  Most are policy decisions about
> telnet, ftp, and others wide open you can affect and change when it is
> your system to maintain.  There isn't any reason not to turn off
> telnet and ftp entirely for example.

That's a legitimate point of view. But I prefer systems where I don't
have to turn off anything unneeded (ideally, where I don't have to
install anything I don't need in the first place).
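To be fair, turning those services off is a one-liner or two per
platform; roughly (exact service names vary by release):

```shell
# HP-UX / classic inetd: comment out the services, then reread the config.
#   vi /etc/inetd.conf      # comment out the 'telnet' and 'ftp' lines
#   inetd -c                # tell inetd to reload its configuration

# Solaris 10 and later (SMF):
svcadm disable telnet
svcadm disable ftp

# Debian with standalone daemons: just purge the packages.
#   apt-get purge telnetd ftpd
```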


> But I agree about the security aspect.  When I have needed to put one
> of those legacy systems on the net I usually protected it by putting
> it behind a separate firewall box.  Because of some of the problems
> you mention.  Using a separate proxy box for just the task needed made
> the security easier.  But that doesn't make the machine less reliable
> for running large loads with an uptime of years.

There's nothing you wrote here I'd disagree with.


> And one must be careful of throwing stones.  For example Debian does
> not provide a firewall by default.  And it is debatable if it needs
> one.  Many people don't configure one.  Many people do.  It all
> depends upon many things about the use case.  I don't put one on
> internal machines.  But I do put one on front facing machines.

That's Debian's fault, indeed. But at least they don't include any
network services worth speaking of (should we count the NFS portmapper or
not?) in an installation produced by netboot.
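And adding a basic firewall yourself is only a few lines of iptables. A
minimal default-deny sketch for a front-facing box (the allowed port is
illustrative):

```shell
# Default-deny inbound; allow loopback, established traffic, and SSH.
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT

# Persist across reboots, e.g. with the iptables-persistent package or an
# if-up.d script that restores a saved ruleset.
```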


> > > You left the large "unless local sysadmins care about security" escape
> > > clause there.  But what about if the local admin *does* care about
> > > security?  In that case you can have a system with _better_ security
> > > than that provided by the vendor.
> > 
> > If local sysadmin cares about security then that site is truly blessed.
> > No irony. See, I earn my salary for solving problems with certain
> > proprietary cross-platform software. As part of the job, I visit many
> > different places, and what do I see there?
> 
> No need to try to convince me.  I have seen many horrors.  But I don't
> think this problem is specific to the legacy Unix vendors.

Of course not; that's something I admitted in the same mail. UNIXes just
make managing useful third-party software harder, that's all.

> > Not that UNIXes are that bad. It happens for any OS, GNU/Linux included.
> 
> And that is exactly my point.  The biggest place I see problems today
> are companies that have full paid support for RHEL.  But they are
> running very old and outdated software.  I ask them why they are
> running RHEL and the answer is invariably because that was a
> commercially supported platform for them.  Then I ask them if they are
> actually getting support.  The answer is invariably no, but that is
> the way "corporate" set them up.

Or they got 'lucky' with hardware that requires proprietary blobs
(available for RHEL only) just to be able to boot. Or they are using that
expensive SAN, which requires proprietary blobs (available for RHEL only)
to do proper multipathing. Or they have some proprietary software that is
supported on RHEL only.
A good example of vendor lock-in.


> That is an exaggeration.  For one it would need to be a local exploit
> for sudo to come in play.

OK, let's say … CVE-2010-0427. Somewhat old, but possible.

> Therefore it would require a local user to
> attack it.  A local access attack.

SSH or telnet access, given to such a user for any legitimate purpose,
will do just fine.

> The password on a t-shirt would
> simply require someone who could walk by the admin and see it
> to gain remote access.

Hmm. Usually they keep developers, end users and sysadmins separate here.
So it's basically the same access complexity.

> Most of the users on such machines are all working for the same
> company.  The first person to break into the machine probably is doing
> so in order to fix something while the official admin is not
> available.  Or not knowledgeable enough.  Everyone in the group will
> know it and it won't be a concern.  And very likely that person will
> very soon be nominated into becoming the next officially designated
> admin for the machine!  Been there.  Done that. :-)

I won't argue with this, but I've seen different endings to this story.
Maybe I've just gotten used to attributing to malice even what can be
adequately explained by incompetence.


> I agree with you that those machines are hard to argue as being
> secure.  Especially today.  Most vendors have outsourced their deep
> maintenance three shell companies deep overseas.  Things happen but
> not with any real understanding.  Bugs don't get fixed anymore.  They
> don't even get documented anymore.  It isn't a happy ecosystem.
> 
> But when they are being used today they are mostly buried deep behind
> layers of networking.  They are not available for attack from the
> outside world.  The inside world is all working for them and they
> don't need the protection from them.  The reliability of the kernel is
> more important.  They actually could operate without a root password
> and it won't be a problem.  And I mean that seriously.  Therefore I
> can't get excited about an outdated sudo on an old Unix system.

Existing installations - yes. New ones (and there's strong demand for
AIX where I live, for example) - hardly.

And sudo isn't that important. There are always Swiss-cheese
web interfaces today :)

Reco


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/20131028143303.GC23316@x101h