[Infowarrior] - Debunking Google's vuln disclosure propaganda

Richard Forno rforno at infowarrior.org
Mon Oct 27 18:21:33 UTC 2008


Debunking Google's security vulnerability disclosure propaganda
Posted by Chris Soghoian

http://news.cnet.com/8301-13739_3-10075488-46.html?part=rss&subj=news&tag=2547-1_3-0-20

Question: You're a multi-billion-dollar tech giant, and you've launched a  
new phone platform to much media fanfare. Then a security  
researcher finds a flaw in your product within days of its release.  
Worse, the vulnerability exists because you shipped old (and  
known-to-be-flawed) software on the phones. What should you do? Issue  
an emergency update, warn users, or perhaps even issue a recall? If  
you're Google, the answer is simple -- attack the researcher.

With the news of a flaw in Google's Android phone platform making the  
New York Times on Friday, the search giant quickly ramped up the spin  
machine. After first downplaying the damage to which the flaw  
exposed users, anonymous Google executives attempted to discredit  
the security researcher, Charlie Miller, a former NSA employee turned  
security consultant. Miller, the unnamed Googlers argued, acted  
irresponsibly by going to the New York Times to announce his  
vulnerability instead of giving the Big G a few weeks or months to  
fix the flaw:

     Google executives said they believed that Mr. Miller had violated  
an unwritten code between companies and researchers that is intended  
to give companies time to fix problems before they are publicized.

What the Googlers are describing is the idea of "responsible  
disclosure," one method of disclosing security vulnerabilities in  
software products. While researchers frequently follow this approach,  
it is not the only method available, and in spite of the wishes of  
the companies whose products are frequently analyzed, it is by no  
means the "norm" for the industry.

Another frequently used method is "full disclosure" -- in  
which a researcher posts complete details of a vulnerability to a  
public forum (typically a mailing list dedicated to security topics).  
Researchers often use this approach when they have discovered a  
flaw in a product made by a company with a poor track record of  
working with researchers -- or worse, of threatening to sue them. For  
example, some researchers refuse to provide Apple with any advance  
notification, due to its past behavior.

A third method involves selling information on the vulnerabilities to  
third parties (such as TippingPoint and iDefense) -- who pass that  
information on to their own customers, or perhaps keep it for  
themselves. Charlie Miller, the man who discovered the Android flaw,  
has followed this path in the past, most notably when he sold details  
of a flaw in the Linux kernel to the US National Security Agency for  
$50,000.

Google's poor track record

First, consider that disclosure is a two-way street. If Google  
wants researchers to come to it first with vulnerability information,  
it is only fair to expect Google to be forthcoming with the  
community (and the general public) once a flaw has been fixed.  
Google's approach in this area is one of total secrecy -- not  
acknowledging flaws, and certainly not notifying users that a  
vulnerability existed or has been fixed. Google's CIO admitted as much  
in a 2007 interview with the Wall Street Journal:

     Regarding security-flaw disclosure, Mr. Merrill says Google  
hasn't provided much because consumers, its primary users to date,  
often aren't tech-savvy enough to understand security bulletins and  
find them "distracting and confusing." Also, because fixes Google  
makes on its servers are invisible to the user, notification hasn't  
seemed necessary, he says.

Second, companies do not have a right to expect "responsible  
disclosure." It is a mutual compromise, in which researchers provide  
the company with advance notification in exchange for some form of  
assurance that the company will act reasonably, keep the lines of  
communication open, and give the researcher full credit once the  
vulnerability is fixed.

Google's track record in this area leaves much to be desired. Many  
top-tier researchers have not been credited for disclosing flaws, and  
in some cases Google has repeatedly dragged its feet in fixing them.  
The end result is that many frustrated researchers have opted for  
the full disclosure path after hitting a brick wall when trying to  
provide Google with advance notice.

I can personally attest to this, having discovered a fairly  
significant flaw in a number of commercial Firefox toolbars back in  
2007. While Mozilla and Yahoo replied to my initial email within a day  
or so, and kept the lines of communication open, Google repeatedly  
stonewalled me, and I didn't hear anything from the company for weeks  
at a time. Google eventually fixed the flaw a day or two after I went  
public with the vulnerability -- 45 days after I had originally given  
the company private notice. As a result, I have extreme sympathy for  
those in the research community who have written Google off.

A rather unimpressive vulnerability

Once we actually look into the details of the vulnerability and  
Miller's disclosure, the situation looks even worse for Google.

A known vulnerability: The Android platform is built on top of more  
than 80 open source libraries and programs. This particular flaw had  
been known for some time and had already been fixed in the current  
versions of the affected open source software. The flaw in Google's  
product exists only because the company shipped out-of-date software  
that was known to be vulnerable.
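
To make the point concrete: once a vendor keeps a list of the open  
source components it bundles, checking them against the versions in  
which known flaws were fixed is a trivial exercise. Below is a minimal  
sketch of such an audit -- this is not Google's actual build tooling,  
and the component names and version numbers are purely hypothetical:

    # audit_components.py -- hypothetical illustration; the component
    # names and version numbers are made up, not Android's real manifest.

    # Minimum upstream versions in which the known flaws were fixed.
    FIXED_VERSIONS = {
        "webkit": (525, 0),
        "libfoo": (1, 4, 2),
    }

    # Versions actually bundled into the (hypothetical) platform image.
    BUNDLED_VERSIONS = {
        "webkit": (522, 11),  # older than the fixed release -> vulnerable
        "libfoo": (1, 4, 2),
    }

    def audit(bundled, fixed):
        """Return components shipping below their known-fixed version."""
        # Python compares version tuples element by element, so
        # (522, 11) < (525, 0) correctly flags the older release.
        return [name for name, version in bundled.items()
                if name in fixed and version < fixed[name]]

    if __name__ == "__main__":
        for name in audit(BUNDLED_VERSIONS, FIXED_VERSIONS):
            print("WARNING: %s is older than the release that fixed "
                  "its known flaws" % name)

Trivial as this check is, it only helps if the vendor actually runs it  
before shipping -- which is precisely what appears not to have happened  
here.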

Advance notice: While the anonymous Google executives criticized  
Miller for not following responsible disclosure practices, it is worth  
noting that the researcher did provide Google with early notice,  
informing the company on October 20. It is also important to  
note that Miller and his colleagues have yet to provide full  
information on the vulnerability, or a working proof-of-concept  
exploit, to the security community. Thus, it can hardly be said that  
Miller followed the full disclosure path.

If Google can criticize Miller at all, it is not for failing to warn  
the company, but perhaps for not providing it with enough warning.  
However, given that Google shipped known-vulnerable software to  
hundreds of thousands of users, and that fixed versions of the  
vulnerable software packages have been available for some time, it is  
difficult for this blogger to sympathize with the poor folks in  
Mountain View.

Furthermore, given Mr. Miller's previous mercenary history of  
selling software vulnerabilities to the National Security Agency  
(which presumably used the flaws to break into foreign government  
computers, not to fix the vulnerable software), we should  
be happy that he is at least now sharing the existence of this flaw  
with the public. At least this way, developers have a good chance of  
finding and fixing it.

Disclosure: In the summer of 2006, I worked as an intern for the  
Application Security Team at Google. Furthermore, between 2003 and  
2005, I was a student at Johns Hopkins University, advised by Prof.  
Avi Rubin, one of the founders of Independent Security Evaluators,  
the company that employs Charlie Miller. A couple of my former  
colleagues also now work for ISE. I have not spoken with them (or  
anyone at Google) about this article.

Christopher Soghoian delves into the areas of security, privacy,  
technology policy, and cyber-law. He is a student fellow at Harvard  
University's Berkman Center for Internet and Society, and a PhD  
candidate at Indiana University's School of Informatics. His academic  
work and contact information can be found at www.dubfire.net/chris/.  
He is a member of the CNET Blog Network, and is not an employee of  
CNET.


