[Infowarrior] - Full Disclosure and why Vendors Hate it
Richard Forno
rforno at infowarrior.org
Sat May 31 19:14:19 UTC 2008
Full Disclosure and why Vendors Hate it
May 2008
http://www.zdziarski.com/papers/fulldisclosure.html
I did a talk recently at O'Reilly's Ignite Boston party about the
exciting iPhone forensics community emerging in law enforcement
circles. With all of the excitement came shame, however - not for me,
but for everyone in the audience who had bought an iPhone and put
something otherwise embarrassing or private on it. Very few people, it
seemed, were fully aware of just how much personal data the iPhone
retains, in spite of the fact that Apple has known about it for quite
some time. Despite the impressive quantities of beer consumed at
Tommy Doyle's, I was surprised to find that many people were
sober enough to turn their epiphany about privacy into a discussion
about full disclosure. This has been a hot topic in the iPhone
development community lately, and I have spent much time pleading with
the different camps to return to embracing the practice of full
disclosure.
The iPhone is shrouded in secrecy on both sides - Apple (of course)
uses their secrets to instill hype (and gloss over many otherwise
obvious privacy flaws), while the iPhone development community uses
their secrets to ensure they can exploit future versions of the
firmware to find these flaws (along with all the other fun stuff we
do). The secrets on both sides appear not only to have hurt the
product, but also to run the risk of turning an otherwise amazing
device into the next surveillance fear. With the military and federal
agencies testing the iPhone for possible use, some of the long-held
secrets surrounding the iPhone even run the risk of affecting national
security.
Secrecy and Hype
Secrecy is nothing new, especially with Apple. One of Apple's greatest
marketing strengths is their ability to build hype around their
products by piquing the curiosity of the common geek. When it comes to
a device as amazing as the iPhone, Apple seems remarkably tolerant of
grassroots hacking - tolerant enough to let iPhone hackers come and
give talks about it in their store. It almost seems counter-intuitive:
the more padlocks Apple places on the iPhone, the more hackers show up
to pick them, and the more phones are sold.
Obviously it isn't just hackers buying iPhones, or the community would
be much bigger. Part of what Apple is selling is the hacker image - an
image that they ingeniously didn't even have to invent. By simply
locking up the device and attracting the right audiences, every tech
store cashier within a thousand-mile radius can buy an iPhone and feel
like they are in the same class of uber-hacker as the ones who
originally wrote the tools they're using. With more secrets come more
hype, and ultimately more people who buy the product to feel like
they're doing something "unsanctioned" or "cool" with it. Apple wants
you to think that buying an iPhone is bucking the system - and all
they had to do was lock it down. It is estimated that over a third of
all iPhones sold have been jailbroken and unlocked, supporting at the
very least the claim that a lot of people are unlocking their iPhones
just because Apple said they can't. Apple has proven that secrets
really can sell products.
Secrecy and Privacy
The problem with too many secrets is that they frequently rub against
the notion of privacy. One would think that secrets and privacy track
together, but more often than not, secrets only mean that you don't
know your enemy, or what weapons they have to use against you. Secrets
can be a hindrance to privacy because they leave the consumer
exposed, not knowing whether their home is secure or about to be
broken into. If you knew that the lock on your front door was broken,
you'd
probably be less inclined to leave a diamond ring lying on the foyer
table. More dangerous is the idea that you have no right to know about
your broken front door lock until after the locksmith fixes it.
Everyone agrees that security flaws should be fixed; the looming issue
is whether full disclosure is appropriate, or whether the "vendor
first" approach is more responsible.
The thing with secrets is that someone always has one, and when it
comes to protecting your data, a well-informed public is often better
equipped to protect themselves than an ignorant one. In the digital
world, the locks belong to the vendor, but the data is typically
within either the customer's or the consumer's control; and if not
the data, then certainly lawyers from hell are within reach.
Longstanding arguments have been made that the vendor should be the
first to be notified, and that the owner of the data should remain in
ignorance until
the front door lock has been fixed. Ironically, this is an argument I
only ever hear coming from vendors (or those indoctrinated by
vendors). Some vendors take this philosophy so seriously that they
attempt to legally bar their own customers from releasing information
about vulnerabilities to the public.
The inherent flaw in the "vendor first" argument is this: if you know
about a particular vulnerability, chances are the bad guy already does
too, and probably knew about it before you did. The bad guy is far
more dangerous when the public doesn't know what he knows, leaving
the vendor's customers and consumers alike unaware that any risk
exists, or that an appropriate response to safeguard data is necessary.
It is the customer and the consumer who have the most to lose from a
breach, and bear the most liability should one occur. It seems that
these two groups would also be best suited to choose how the risk
should be mitigated in the short term, and ultimately what procedures
for auditing data should be followed after the fact.
If indeed the bad guy knows about the vulnerability, they are
certainly already exploiting it, leaving one to wonder what the
advantage is in keeping it secret from the public. If anything, it is
a rather large disadvantage if no one is given the knowledge to do
anything about it. The logic is quite simple:
* Full Disclosure Scenario: Vendor screws up grocery chain
software. The grocery chain and consumers are notified by the
newspaper. The grocery chain's customers switch to cash, with a minor
loss in business. The grocery chain suffers far smaller losses than it
would have had it been sued by credit card companies over a breach.
* Vendor First Scenario: Vendor screws up grocery chain software.
The vendor is notified and takes two months to patch the security
vulnerability. Three grocery chains experience data breaches, with a
fourth breach occurring while the first three figure out what
happened. All four grocery chains are sued by credit card companies.
Consumers and grocery chains suffer. The vendor has a disclaimer and
pays nothing.
Just who is the beneficiary of the "vendor first" concept exactly?
Full disclosure ultimately protects the consumer, whereas "vendor
first" only protects the vendor. Full disclosure safeguards the
consumer by getting people away from the dam until the leak is
plugged. Take this more real-world scenario for example:
* Full Disclosure Scenario: I announced last week that
refurbished iPhones may contain previous customer data, and provided
some blurred screenshots to show evidence of it. Both Apple and AT&T
are suddenly listing refurbished iPhones as unavailable. Apple revises
their refurbishing practices, and until the dam is permanently
plugged, the flood of refurbished iPhones with customer data has been
turned off.
* Vendor First Scenario: Had I reported the problem to Apple
directly, they may have decided to quietly fix their internal
practices while still selling refurbished units. Additional units are
sold with customer data on them, and no one is any the wiser (except
for the people stealing the data). In the time it takes Apple to
revise their refurbishing practices, X additional phones containing
customer data are leaked. The consumer loses, and might not even know
it.
Plausible Deniability
The advantage that vendors gain in keeping secrets from customers is
simply having plausible deniability. When a vulnerability is actually
fixed, a vendor may deny the privacy flaw ever existed, or at least
severely downplay any risk. This can be (and has been) used to sweep
aside any concern, with the side effect of also downplaying any
inclination to audit for a security breach. After all, it's bad for a
vendor to have to admit to a security flaw, but entirely disastrous
for their image should anyone discover an actual breach occurred. As
far as the vendor is concerned, 'tis best not to check.
I ran into this shortly after I discovered a flaw in Verizon's online
billing services some years ago, which allowed me to view other
customers' billing information through Verizon's web portal. I'll not
likely forget the famous last words of the Verizon security
technician, "Thanks for not telling anybody about this." It was the
next day that I talked to the Washington Post, with Verizon denying
and/or downplaying each claim. I doubt the leak ever would have come
to light otherwise, and it most definitely would never have been
audited. My screenshots were the only proof that there ever was a
problem, and at that point it comes down to mere credibility.
Plausible deniability is one of a vendor's greatest advantages when
the "vendor first" approach is used instead of full disclosure. By
fixing things privately, there is no way (in some cases at least) to
verify that the vulnerability ever existed, or by the time the vendor
releases information about the vulnerability, it may well be too late
to check for a privacy breach. When this happens, it is the word of
the person reporting the vulnerability against a team of corporate
engineers who will all insist it isn't as bad as it sounds.
The full disclosure approach solves the problem of corporate
accountability by ensuring that the informed public (specifically,
security professionals) can verify and assess the situation. Full
disclosure gives the public a window of opportunity to not only verify
the vulnerability, but to see just how deep the rabbit hole goes;
something the vendor is almost guaranteed to either intentionally
ignore or cover up. The bad guy is already going to test and exploit
these vulnerabilities long before the public even discovers them - the
good guys ought to have a crack at verifying it too.
Public Outcry
Just how large that window of opportunity is depends on the vendor,
and presents another reason why "vendor first" doesn't work. Vendors
can be slow about fixing things - and many have a track record of
lethargy. Some software vendors lag months behind. In spite of what
you may think, the goal of the vendor is not to produce a quality
product; it is to sell product. And in selling product, selling
support agreements comes with the turf. Carefully timing security
updates so that they span certain contractual intervals is one way to
ensure that customers keep buying into a product's maintenance fees.
The average MTTR (mean time to repair) for some of the most widely
used operating systems and other popular software is on the order of
3-6 months! So if you're following along with the reasoning laid out
here, that means 3-6 months during which unknown bad guys may be
exploiting these vulnerabilities and stealing personal information
that could otherwise have been protected at the customer or consumer
level.
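To make the arithmetic behind a figure like that concrete, here is a
minimal sketch (in Python, with entirely invented report and patch
dates) of how a mean-time-to-repair number is typically derived:
average the gap between the date each flaw is reported and the date
its patch ships.

    # Hypothetical sketch: derive an MTTR figure from report/patch dates.
    # The advisories below are invented purely for illustration.
    from datetime import date

    advisories = [
        (date(2008, 1, 10), date(2008, 4, 22)),  # (reported, patched)
        (date(2008, 2, 3), date(2008, 7, 15)),
        (date(2008, 3, 19), date(2008, 6, 2)),
    ]

    days_open = [(patched - reported).days for reported, patched in advisories]
    mttr_days = sum(days_open) / float(len(days_open))
    print("MTTR: %.0f days (roughly %.1f months)" % (mttr_days, mttr_days / 30.0))

Measured this way, a vendor's lethargy stops being an impression and
becomes a number anyone can check against its own disclosure history.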
There is, however, one way to ensure a vendor fixes a flaw quickly,
and that is public outcry. I find some otherwise slow vendors respond
quite snappily when five million consumers are banging down their
door and threatening a class action lawsuit. Public outcry has become
the QA filter for many vendors whose response times have
become ridiculously poor in recent years. It lets the vendor know what
bugs are going to hurt their bottom line - and those are the ones that
are quite likely to receive the most attention. It is certainly
advantageous for the vendor to push the "vendor first" approach when
it means removing the pressure to repair critical flaws. It is public
pressure that has the power to change governments - certainly, it can
be an effective tool for fixing security flaws.
Over-Fixing
Of course, over-fixing things is the fear many development teams have
with vendors, and it is an issue I've experienced firsthand with
Apple, Verizon, and a few other vendors. Before you report a security
vulnerability privately to a vendor, pretend the vendor is going to
read it its Miranda rights, because essentially your vulnerability can
(and likely will) be used against you. Not to incriminate you, per se,
but rather to handicap your ability to follow up.
As an example, the open source community has built up a significant
arsenal. We've built a solid base of iPhone developers as well as a
community distribution mechanism for software. Apple came along a
little later (due to public outcry) and decided to build their own
solid developer base and their own distribution mechanism,
embarrassingly trying to copy the open source community. Apple has
effectively positioned themselves as a competitor of the open
development community for the iPhone. As is the case with other
similar vendors, privately disclosing a vulnerability to them is a
technological death wish; the technique you used to find the
vulnerability in the first place will likely be "fixed" so that you
won't have access to find such a vulnerability again. Make no mistake
- this is not to better secure the product; this is to quiet the noise
you've generated and ensure that they don't have to hear from you again.
Once again, full disclosure presents a window. This window of
opportunity allows others to collaborate with you by picking up where
your work left off. Over-fixes are likely going to happen, but by the
time they do, the public will have given the product a thorough
proctological exam and likely uncovered many additional exploits you
may have missed.
Litmus Test
Not to suggest that all vendors are evil, lazy, or financially
motivated, but in a capitalist society, it is the consumer's
responsibility to hold a corporation accountable. This is not possible
if the corporation is controlling the flow of information.
If you're interviewing vendors, ask them where you can find a manifest
of security flaws accompanied by dates reported, dates patches were
released, and a report of all associated breaches. If this information
is available publicly, you've stumbled across a rare breed of
responsible vendor.
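To make that litmus test concrete, here is a minimal sketch (in
Python, with invented entries and field names rather than any real
vendor's records) of the kind of public manifest described above, and
of how a customer could compute patch turnaround directly from it:

    # Hypothetical sketch of a public security-flaw manifest; every entry
    # and field name below is invented for illustration only.
    from datetime import date

    manifest = [
        {"flaw": "Session tokens not invalidated on logout",
         "reported": date(2008, 1, 8), "patched": date(2008, 3, 30),
         "breaches": "none confirmed; audit report published"},
        {"flaw": "Billing records readable across accounts",
         "reported": date(2008, 2, 14), "patched": date(2008, 5, 1),
         "breaches": "one confirmed incident; affected customers notified"},
    ]

    for entry in manifest:
        open_days = (entry["patched"] - entry["reported"]).days
        print("%s: patched in %d days; breaches: %s"
              % (entry["flaw"], open_days, entry["breaches"]))

A vendor willing to publish even this much has already accepted the
premise of full disclosure: the people who bear the risk get to see
the dates.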
The bottom line is this: a company that is afraid to tell the customer
about a security risk until after it's fixed is both dangerous and
irresponsible. The best litmus test when selecting a vendor is to find
vendors who embrace full disclosure in such a way that vulnerabilities
are reported quickly to their downstream customers, and if privacy-
related, the consumer. Full disclosure is the key to privacy. If your
goal is to have security flaws fixed, rather than covered up, full
disclosure is the only way to guarantee that the flaws you find will
be thoroughly tested and patched; what's more, it is the only way to
ensure that the vendor is held accountable in an age of privacy
breaches and litigation.