Alex Bligh - linux-kernel wrote:
>In debugging why my (unloaded) IMAP server takes many seconds
>to open folders, I discovered what looks like a problem
>in 2.4's feeding of entropy into /dev/random. When there
>is insufficient entropy in the random number generator,
>reading from /dev/random blo
Linus Torvalds wrote:
>Ehh.. I will bet you $10 USD that if libc allocates the next file
>descriptor on the first "malloc()" in user space (in order to use the
>semaphores for mm protection), programs _will_ break.
>
>You want to take the bet?
Good point. Speaking of which:
ioctl(fd, UIOCATTA
Jeff Garzik wrote:
>Then you make your local random pool vulnerable to external
>manipulation, to a certain extent...
Adding more bits to the pool should never hurt; the cryptographic
mixing ensures this. What _can_ hurt is adding predictable bits but
(erroneously) bumping up the entropy counte
Horst von Brand wrote:
>Adding stuff that adds no entropy (or at least doesn't add to the estimated
>entropy pool) is just a waste of effort, AFAIKS.
Adding stuff that has no entropy is a waste of effort.
Adding stuff that probably has entropy, but where you don't bump
the entropy counter, *doe
Mike Coleman wrote:
>My limited mental abilities notwithstanding, I think this is one more reason
>to ditch ptrace for a better method of process tracing/control. It's served
>up to this point, but ptrace has a fundamental flaw, which is that it tries to
>do a lot of interprocess signalling and
Abel Muñoz Alcaraz wrote:
> I have replaced the execve() kernel [syscall]
> with my own implementation but it doesn't work well.
In Linux, hooking into sys_call_table[] is a pretty painful way
to interpose on system calls. Unfortunately, there's no other
way to do it (in Linux) that I know of.
David S. Miller wrote:
>Linux should not honor the incorrect sequence number. If the sequence
>number is incorrect, the RST could legitimately be for another
>connection.
How could it be for another connection, if it has source and destination
port numbers? I thought the sequence number was the
Andi Kleen wrote:
>On Fri, Oct 06, 2000 at 09:06:31PM +0000, David Wagner wrote:
>> David S. Miller wrote:
>> >Linux should not honor the incorrect sequence number. If the sequence
>> >number is incorrect, the RST could legitimately be for another
>> >co
David S. Miller wrote:
> From: [EMAIL PROTECTED] (David Wagner)
>
> How could it be for another connection, if it has source and
> destination port numbers?
>
>Consider previously existing connections with the same src/dst/ports
>and the effects of massive packet
IV's should never be repeated (per key). If you are using CBC mode,
they should not be just a counter, either (for different reasons).
A simple implementation technique is simply to use the encryption of
a block number / sector number / counter as your IV. This ensures that
IV's don't repeat an
kernel wrote:
>There are some who believe that "not unique" IVs (across multiple
>filesystems) facilitates some methods of cryptanalysis.
It's not a matter of "belief"; it's a matter of fact.
The weakness is that the first block of ciphertext depends
only on the IV and the first block of plai
Marc Mutz wrote:
>> There are some who believe that "not unique" IVs (across multiple
>> filesystems) facilitates some methods of cryptanalysis.
>
>Do you have a paper reference?
There's no paper, because it's too trivial to appear in a paper.
But you can find this weakness described in any good
Ingo Rohloff wrote:
>> There is a paper about why it is a bad idea to use
>> sequence numbers for CBC IV's. I just have to find the reference to it.
>
>Does this mean sequence as in 0,1,2,3,4 ... or does this mean
>any pre-calculate-able sequence ? In the former case we might just use
>a simple
Ingo Rohloff wrote:
>-snip---
> As an example, it is not true that CBC encryption
> can use an arbitrary nonce initialization vector: it is essential
> that the IV be unpredictable by the adversary. (To see this, suppose
> t
Marc Mutz wrote:
>David Wagner wrote:
>> (However, it does get one
>> thing wrong: it claims that it's ok to use a serial number for your
>> IV. This is not correct, and I can give a reference for this latter,
>> subtler point, if you like.)
>>
Helge Hafting wrote:
>So, no reason for a firewall author to check these bits.
You don't think like a firewall designer! :-)
Practice being really, really paranoid. Think: You're designing a
firewall; you've got some reserved bits, currently unused; any future code
that uses them could behave
Matt Mackall wrote:
>While it may have some good properties, it lacks
>some that random.c has, particularly robustness in the face of failure
>of crypto primitives.
It's probably not a big deal, because I'm not worried about the
failure of standard crypto primitives, but--
Do you know of any ana
>First, a reminder that the design goal of /dev/random proper is
>information-theoretic security. That is, it should be secure against
>an attacker with infinite computational power.
I am skeptical.
I have never seen any convincing evidence for this claim,
and I suspect that there are cases in wh
Jean-Luc Cooke wrote:
>Info-theoretic randomness is a strong desire of some/many users, [..]
I don't know. Most of the time that I've seen users say they want
information-theoretic randomness, I've gotten the impression that those
users didn't really understand what information-theoretic randomn
Theodore Ts'o wrote:
>With a properly set up set of init scripts, /dev/random is initialized
>with seed material for all but the initial boot [...]
I'm not so sure. Someone posted on this mailing list several months
ago examples of code in the kernel that looks like it could run before
those ini
linux wrote:
>/dev/urandom depends on the strength of the crypto primitives.
>/dev/random does not. All it needs is a good uniform hash.
That's not at all clear. I'll go farther: I think it is unlikely
to be true.
If you want to think about cryptographic primitives being arbitrarily
broken, I t
linux wrote:
>David Wagner wrote:
>>linux wrote:
>>> First, a reminder that the design goal of /dev/random proper is
>>> information-theoretic security. That is, it should be secure against
>>> an attacker with infinite computational power.
>
>> I am s
Hacksaw wrote:
>What I would expect the kernel to do is this:
>
>system_call_data_prep (userdata, size){ [...]
> for each page from userdata to userdata+size
> {
> if the page is swapped out, swap it in
> if the page is not owned by the user process, return -ENOWAYMAN
linux wrote:
>3) Fortuna's design doesn't actually *work*. The authors' analysis
> only works in the case that the entropy seeds are independent, but
> forgot to state the assumption. Some people reviewing the design
> don't notice the omission.
Ok, now I understand your objection. Yup, t
linux wrote:
>Thank you for pointing out the paper; Appendix A is particularly
>interesting. And the [BST03] reference looks *really* nice! I haven't
>finished it yet, but based on what I've read so far, I'd like to
>*strongly* recommend that any would-be /dev/random hackers read it
>carefully. I
Jean-Luc Cooke wrote:
>The part which suggests choosing an irreducible poly and a value "a" in the
>preprocessing stage ... last I checked the value for a and the poly need to
>be secret. How do you generate poly and a, Catch-22? Perhaps I'm missing
>something and someone can point it out.
I do
Mikulas Patocka wrote:
>If you are checking permissions on server, read/execute have no security
>meaning.
This seems a bit too strong. If I try to exec a file that has read
permission enabled but not execute permission, I'd like this to fail.
You can just imagine sysadmins who turn off exec bi
>Its a linux kernel modification, that allows to decide wich uid, pid or
>file can open a tcp socket in listening state.
- Putting access control on listen() [rather than socket()/bind()]
seems like a really bad idea. In particular, in some cases one can
bind to a port and receive messages o
David Madore wrote:
>This does not tell me, then, why CAP_SETPCAP was globally disabled by
>default, nor why passing of capabilities across execve() was entirely
>removed instead of being fixed.
I do not know of any good reason. Perhaps the few folks who knew enough
to fix it properly didn't fee
David Madore wrote:
>I intend to add a couple of capabilities which are normally available
>to all user processes, including capability to exec(), [...]
Once you have a mechanism that lets you prevent the untrusted program
from exec-ing a setuid/setgid program (such as your bounding set idea),
I
Lorenzo Hernández García-Hierro wrote:
>El lun, 18-04-2005 a las 15:05 -0400, Dave Jones escribió:
>> This is utterly absurd. You can find out anything thats in /proc/cpuinfo
>> by calling cpuid instructions yourself.
>> Please enlighten me as to what security gains we achieve
>> by not allowing
Matt Mackall wrote:
>On Sat, Apr 16, 2005 at 01:08:47AM +0000, David Wagner wrote:
>> http://eprint.iacr.org/2005/029
>
>Unfortunately, this paper's analysis of /dev/random is so shallow that
>they don't even know what hash it's using. Almost all of section 5.3
Theodore Ts'o wrote:
>For one, /dev/urandom and /dev/random don't use the same pool
>(anymore). They used to, a long time ago, but certainly as of the
>writing of the paper this was no longer true. This invalidates the
>entire last paragraph of Section 5.3.
Ok, you're right, this is a serious f
Tetsuo Handa writes:
>When I attended at Security Stadium 2003 as a defense side,
>I was using devfs for /dev directory. The files in /dev directory
>were deleted by attackers and the administrator was unable to login.
If the attacker gets full administrator-level access on your machine,
there are
David Wagner wrote:
> If the attacker gets full administrator-level access on your machine,
> there are a gazillion ways the attacker can prevent other admins from
> logging on. This patch can't prevent that. It sounds like this patch
> is trying to solve a fundamentally un
>For those systems that have everything on one big partition, you can often
>do stuff like:
>
>ln /etc/passwd /tmp/
>
>and wait for /etc/passwd to get clobbered by a cron job run by root...
How would /etc/passwd get clobbered? Are you thinking that a tmp
cleaner run by cron might delete /tmp/what
>The attack is to hardlink some tempfile name to some file you want
>over-written. This usually involves just a little bit of work, such as
>recognizing that a given root cronjob uses an unsafe predictable filename
>in /tmp (look at the Bugtraq or Full-Disclosure archives, there's plenty).
>Then y
Andrea Arcangeli wrote:
>On Sun, Jan 23, 2005 at 07:34:24AM +0000, David Wagner wrote:
>> [...Ostia...] The jailed process inherit an open file
>> descriptor to its jailor, and is only allowed to call read(), write(),
>> sendmsg(), and recvmsg(). [...]
>
>Why to ca
H. Peter Anvin wrote:
>By author: Jorgen Cederlof <[EMAIL PROTECTED]>
>> If we only allow user chroots for processes that have never been
>> chrooted before, and if the suid/sgid bits won't have any effect under
>> the new root, it should be perfectly safe to allow any user to chroot.
>
>Safe,
Paul Menage wrote:
>It could potentially be useful for a network daemon (e.g. a simplified
>anonymous FTP server) that wanted to be absolutely sure that neither it
>nor any of its libraries were being tricked into following a bogus
>symlink, or a "/../" in a passed filename. After initialisation,
It is not rocket science to populate a chroot environment with enough
files to make many interesting applications work. Don't expect a general
solution---chroot is not a silver bullet---but it is useful. (Note also
that whether you can populate a chroot environment sufficiently is roughly
indepe
Mohammad A. Haque wrote:
>Why do this in the kernel when it's available in userspace?
Because the userspace implementations aren't equivalent.
In particular, it is not so easy for them to enforce the following
restriction:
(*) If a non-root user requested the chroot, then setuid/setgid
bi
Jesse Pollard wrote:
>2. Any penetration is limited to what the user can access.
Sure, but in practice, this is not a limit at all.
Once a malicious party gains access to any account on your
system (root or non-root), you might as well give up, on all
but the most painstakingly careful configur
>More interestingly, it changes the operation of SAK in two ways:
>(a) It does less, namely will not kill processes with uid 0.
I think this is bad for security.
(I assume you meant euid 0, not ruid 0. Using the real uid
for access control decisions is a very odd thing to do.)
Chris Wright wrote:
>Only difference is in number of context switches, and number of running
>processes (and perhaps ease of determining policy for which syscalls
>are allowed). Although it's not really seccomp, it's just restricted
>syscalls...
There is a simple tweak to ptrace which fixes that
Chris Wright wrote:
>* David Wagner ([EMAIL PROTECTED]) wrote:
>> There is a simple tweak to ptrace which fixes that: one could add an
>> API to specify a set of syscalls that ptrace should not trap on. To get
>> seccomp-like semantics, the user program could specify {read,
Samium Gromoff wrote:
>This patch removes the dropping of ADDR_NO_RANDOMIZE upon execution of setuid
>binaries.
>
>Why? The answer consists of two parts:
>
>Firstly, there are valid applications which need an unadulterated memory map.
>Some of those which do their memory management, like lisp syst
Samium Gromoff wrote:
>the core of the problem are the cores which are customarily
>dumped by lisps during the environment generation (or modification) stage,
>and then mapped back, every time the environment is invoked.
>
>at the current step of evolution, those core files are not relocatable
>in
Samium Gromoff wrote:
>[...] directly setuid root the lisp system executable itself [...]
Like I said, that sounds like a bad idea to me. Sounds like a recipe for
privilege escalation vulnerabilities. Was the lisp system executable
really implemented to be secure even when you make it setuid ro
Phillip Susi wrote:
>Why are non root users allowed write access in the first place? Can't
>they pollute the entropy pool and thus actually REDUCE the amount of good
>entropy?
Nope, I don't think so. If they could, that would be a security hole,
but /dev/{,u}random was designed to try to make
Phillip Susi wrote:
>David Wagner wrote:
>> Nope, I don't think so. If they could, that would be a security hole,
>> but /dev/{,u}random was designed to try to make this impossible, assuming
>> the cryptographic algorithms are secure.
>>
>> After all,
Warning: tangent with little practical relevance follows:
Kyle Moffett wrote:
>Actually, our current /dev/random implementation is secure even if
>the cryptographic algorithms can be broken under traditional
>circumstances.
Maybe. But, I've never seen any careful analysis to support this or
Continuing the tangent:
Henrique de Moraes Holschuh wrote:
>On Mon, 27 Nov 2006, Ben Pfaff wrote:
>> [EMAIL PROTECTED] (David Wagner) writes:
>> > Well, if you want to talk about really high-value keys like the scenarios
>> > you mention, you probably shouldn't b
Stephen Smalley wrote:
>On Thu, 2007-06-21 at 21:54 +0200, Lars Marowsky-Bree wrote:
>> And now, yes, I know AA doesn't mediate IPC or networking (yet), but
>> that's a missing feature, not broken by design.
>
>The incomplete mediation flows from the design, since the pathname-based
>mediation doe
Stephen Smalley wrote:
>That would certainly help, although one might quibble with the use of
>the word "confinement" at all wrt AppArmor (it has a long-established
>technical meaning that implies information flow control, and that goes
>beyond even complete mediation - it requires global and pers
Stephen Smalley wrote:
>On Fri, 2007-06-22 at 01:06 -0700, John Johansen wrote:
>> No the "incomplete" mediation does not flow from the design. We have
>> deliberately focused on doing the necessary modifications for pathname
>> based mediation. The IPC and network mediation are a wip.
>
>The fa
I've heard four arguments against merging AA.
Argument 1. SELinux does it better than AA. (Or: SELinux dominates AA.
Or: SELinux can do everything that AA can.)
Argument 2. Object labeling (or: information flow control) is more secure
than pathname-based access control.
Argument 3. AA isn't com
James Morris wrote:
>The point is that the pathname model does not generalize, and that
>AppArmor's inability to provide adequate coverage of the system is a
>design issue arising from this.
I don't see it. I don't see why you call this a design issue. Isn't
this just a case where they haven'
James Morris wrote:
>A. Pathname labeling - applying access control to pathnames to objects,
>rather than labeling the objects themselves.
>
>Think of this as, say, securing your house by putting a gate in the street
>in front of the house, regardless of how many other possible paths there
>are
[EMAIL PROTECTED] wrote:
> no, this won't help you much against local users, [...]
Pavel Machek wrote:
>Hmm, I guess I'd love "it is useless on multiuser boxes" to become
>standard part of AA advertising.
That's not quite what david@ said. As I understand it, AppArmor is not
focused on preventi
[EMAIL PROTECTED] writes:
>Experience over on the Windows side of the fence indicates that "remote bad
>guys get some local user first" is a *MAJOR* part of the current real-world
>threat model - the vast majority of successful attacks on end-user boxes these
>days start off with either "Get user t
Pavel Machek wrote:
> David Wagner wrote:
>> There was no way to follow fork securely.
>
>Actually there is now. I did something similar called subterfugue and
>we solved this one.
Yes, I saw that. I thought subterfugue was neat. The way that
subterfugue was a clever hack --
Karl MacMillan wrote:
>I don't think that the ease-of-use issue is clear cut. The hard part of
>understanding both SELinux policies and AppArmor profiles is
>understanding what access should be allowed. [...]
>Whether the access is allowed with the SELinux or
>AppArmor language seems like a small
Karl MacMillan wrote:
>My private ssh keys need to be protected regardless
>of the file name - it is the "bag of bits" that make it important not
>the name.
I think you picked a bad example. That's a confidentiality policy.
AppArmor can't make any guarantees about confidentiality. Neither can
S
James Morris wrote:
>I would challenge the claim that AppArmor offers any magic bullet for
>ease of use.
There are, of course, no magic bullets for ease of use.
I would not make such a strong claim. I simply stated that it
is plausible that AppArmor might have some advantages in some
deployment
James Morris wrote:
>On Tue, 17 Apr 2007, David Wagner wrote:
>> Maybe you'd like to confine the PHP interpreter to limit what it can do.
>> That might be a good application for something like AppArmor. You don't
>> need comprehensive information flow control
James Morris wrote:
>This is not what the discussion is about. It's about addressing the many
>points in the FAQ posted here which are likely to cause misunderstandings,
>and then subsequent responses of a similar nature.
Thank you. Then I misunderstood, and I owe you an apology. Thank you
f
James Morris wrote:
>On Wed, 18 Apr 2007, Crispin Cowan wrote:
>> How is it that you think a buffer overflow in httpd could allow an
>> attacker to break out of an AppArmor profile?
>
>Because you can change the behavior of the application and then bypass
>policy entirely by utilizing any mechani
Stephen Smalley wrote:
>Confinement in its traditional sense (e.g. the 1973 Lampson paper, ACM
>Vol 16 No 10) means information flow control, which you have agreed
>AppArmor does not and cannot provide.
Right, that's how I understand it, too.
However, I think some more caveats are in order. In
Crispin Cowan wrote:
> How is it that you think a buffer overflow in httpd could allow an
> attacker to break out of an AppArmor profile?
James Morris wrote:
> [...] you can change the behavior of the application and then bypass
> policy entirely by utilizing any mechanism other than direct file
Stephen Smalley wrote:
>Integrity protection requires information flow control; you can't
>protect a high integrity process from being corrupted by a low integrity
>process if you don't control the flow of information. Plenty of attacks
>take the form of a untrusted process injecting data that wi
Pavel Machek wrote:
>You can do the same with ptrace. If that's not fast enough... improve
>ptrace?
I did my Master's thesis on a system called Janus that tried using ptrace
for this goal. The bottom line is that ptrace sucks for this purpose.
It is a kludge. It is not the right approach. I do
Indan Zupancic wrote:
>On Thu, April 12, 2007 11:35, Satyam Sharma wrote:
>> 1. First, sorry, I don't think an RSA implementation not conforming to
>> PKCS #1 qualifies to be called RSA at all. That is definitely a *must*
>> -- why break strong crypto algorithms such as RSA by implementing them
>>