For what it’s worth, I have dealt with Apple’s security team twice. On one occasion I was very unimpressed by the response and ended up disclosing to the public one month after the deadline I’d given Apple (sixty days); on the other occasion I got a straight and immediate reply confirming the behaviour as intended, which impressed me greatly. As usual the culprit is Apple’s absurd veil of secrecy: unless your vulnerability is important enough for Apple to talk to the press about, which in practice means the researcher has to go to the press with at least a description of the issue, you can forget any hope of forward progress, because what Apple does with the information is internal to Apple from the moment you inform them. My vulnerability was finally resolved in an AirPort firmware release nine months later, by which time I’d simply stopped using the feature. Apple seems to deal with security the same way they deal with every other bug: if it’s a problem for their public image, fix it; otherwise, get around to it when priorities allow. I don’t think that’s an acceptable posture.

The reason I think full disclosure is increasingly the right way to do vulnerability disclosure is simply that market-driven, unaccountable companies need an incentive to design software that is secure. If preventing security problems from appearing in software is a seemingly intractable challenge, then efforts at containment and mitigation, or investment in development practices likely to catch security problems, need to be employed. I don’t think it’s fair for users of software to be kept from information that would help them choose products or vendors based on their security record, or deal with current vulnerabilities, merely because the vendor has more to lose from the embarrassment of yet another vulnerability than the user does from not knowing about it for some period of time (60–90 days, typically). During that window, if you believe the vendors, the non-availability of exploits keeps the user safe; but it of course leaves open the frequently demonstrated possibility that the bad guys already know about the flaws. The default state for software is insecure; the exception is security-sensitive software designed with security in mind. We must change that, so that secure software is the rule and clearly insecure software the exception. That will only happen if companies have the fear of God put into them and don’t want to end up on next week’s CVE listings.

I’m generally not a fan of bug bounties and don’t see why any company should have to run one, but I’d agree that researchers need to get paid, and if companies do their bit within the current landscape and support the exposure of holes in their software, then the situation is improving. And better a bug bounty than trading on the black (or grey) vulnerability market, which frequently incentivises all the wrong people for all the wrong reasons.
