Nick Coghlan <[EMAIL PROTECTED]> writes:

> Fair cop on the C thing, but that example otherwise illustrates my
> point perfectly.
I'm not sure what point you mean.

> Unpickling untrusted data is just as dangerous as evaluating or
> executing untrusted data.
>
> This is *still* dangerous, because there *is no patch* to fix the
> problem.

Pickle is now documented as being unsafe for untrusted data. It's just
like eval now. Nobody is going to make a patch for eval that makes it
safe for untrusted data. It would be nice if there were a pickle
alternative that's safe to use with untrusted data, but that's sort of
a separate issue (see the marshal doc thread referenced earlier).

> There are only documentation changes to highlight the security risks
> associated with unpickling,

and I would say that unpickle's feature set actually changed
incompatibly, since (see the analysis in the SF bug thread) unpickle
was originally designed to be safe.

> Deprecation Warnings on the Cookie classes which use this unsafe
> feature.

Yes, that means as soon as someone uses Cookie.Cookie, their
application will throw a DeprecationWarning, and they'll have to fix
the error before the app can run.

> So, the only effective mechanism is to get the word out to Python
> *users* that the feature is unsafe, and should be used with care,
> which basically requires telling the world about the problem.

That's true, but the problem still has to be analyzed and a
recommendation formulated, which can take a little while.

> Any time Python has a problem of this sort, there is going to be at
> least one solution, and only possibly two:
>
> 1. Avoid the feature that represents a security risk
> 2. Eliminate the security risk in a maintenance update.

You forgot:

3. Install a patch as soon as you become aware of the problem, without
   waiting for a maintenance update.

> By keeping the process public, and clearly identifying the problematic
> features, application developers can immediately start working on
> protecting themselves, in parallel with the CPython developers
> (possibly) working on a new maintenance release.
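As a minimal sketch of why unpickling is as dangerous as eval: pickle's
`__reduce__` protocol lets a serialized payload name any importable
callable, and `pickle.loads` will invoke it. The `Payload` class and the
harmless `print` call here are illustrative stand-ins; a real attacker
would name something like `os.system` instead.

```python
import pickle

class Payload:
    """A hypothetical malicious object for demonstration purposes."""
    def __reduce__(self):
        # __reduce__ returns (callable, args); the callable is invoked
        # at unpickle time. Here it's just print, but it could be any
        # importable function, e.g. os.system.
        return (print, ("arbitrary code ran during unpickling",))

data = pickle.dumps(Payload())
# Unpickling the bytes calls print -- never do this with untrusted input.
pickle.loads(data)
```

Note that the receiving side doesn't even need the `Payload` class
defined: the pickle stream only references the builtin `print`, which is
exactly what makes the attack surface so broad.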
The hope is that during the short period in which there's a confidential
bug report in the system, the number of exploits in the wild won't
change. Either attackers knew about the bug already and had exploits out
before the bug was even reported, or they don't know about it yet.
Either way, random application developers get the bug report at the same
time as attackers. So the choice is between app developers getting a raw
bug report and having to figure out a solution while attackers who saw
the same announcement are starting to launch new exploits, or app
developers getting the bug report along with a bunch of analysis from
the Python developers, which can help them decide what to do next. I
think they benefit from the analysis, if they can get it.

Keep in mind also that the submitters of bug reports often don't see the
full implications, which the app developers also might not see, but
which attackers are likely to figure out instantly. So again, it helps
if the Python developers can supply some analysis of their own.

Finally, some reports of security bugs turn out not to be real bugs
(I've submitted a few myself that turned out that way). That kind of
report can panic an application developer into shutting down a service
unnecessarily while figuring out what to do next, often at a cost of
kilobucks or worse per minute of downtime, or at least into some lesser
fire drill to establish that the problem is a non-problem. Better to let
the Python developers explain the problem and close the bug before
publishing it.

> To go with the 72 hours + 8 example you gave - what if you could work
> around the broken feature in 6?

If 6 hours from seeing the raw bug report are enough to analyze the
problem and develop a workaround, then given not only the raw bug
report but also 72 hours' worth of analysis and recommendations/fixes
from the developers, I should need even less than 6 hours to install a
patch.
> I suspect we'll have to agree to disagree on this point. Where we can
> agree is that I certainly wouldn't be unhappy if SF had a feature like
> Bugzilla's security flag.

I do have to say that developer responsiveness to security issues
varies from one program to another. It's excellent for OpenBSD and
reasonably good for Mozilla; but for Python, it's something of a weak
spot, as we're seeing.

--
http://mail.python.org/mailman/listinfo/python-list