The world of computer security has developed a wicked game of politically correct cat-and-mouse. The game is played out when security professionals contact software vendors to report security vulnerabilities that affect a large portion of their customers. As with a large percentage of professional interaction, the supposed need to act within certain guidelines, to be politically correct (PC), comes into play. The side effect of this game comes in the form of slower patches and upgrades to address the problems.

What is Full Disclosure?

When a bug is found in a piece of software, the person who finds the bug has two courses of action. The first is to notify the software vendor so they may fix the problem in future revisions. In some cases, these bugs pose serious security problems and warrant patches to existing versions. The second course of action is to post the full details of the bug to a public mail list or Usenet newsgroup. In doing so, the discoverer is sharing the full details of potentially serious bugs that could affect millions of people. Posting the information is done so that administrators around the world can understand the problem and figure out how to respond to it. With anything less than full disclosure of the problem, administrators may react incorrectly or be unable to convince users and management that the problem is serious.

Most individuals or companies who discover bugs choose a mix of both courses of action. They opt to give the software vendor a set amount of time to address the problem before they release full details to the public. This is done out of ethical concern, as some of these bugs pose a serious threat to many organizations. Imagine if a company released full details to the public on a bug that left thousands of government and military systems vulnerable. If the information led to hackers penetrating these systems, it would raise questions of responsibility on the part of the group reporting the bug. Giving software vendors a heads-up about the problem enables them to avert disaster and protect their clients.

Why the Rush?

The logical course would seem to be giving vendors as much time as possible to fix the bug before going public. While this may sound reasonable, in reality it is not always the best course of action. Every day the vendor spends working out the problem, it becomes more likely that other individuals will find the same bug. And with each additional person who finds the bug, the chances grow that one of them will not be as ethical and responsible with the information as the others. The goal of responsible security professionals is to address as many security concerns as possible in the shortest amount of time.

Gratitude (or lack thereof)

Many software vendors spend all their time and energy creating their products. In this process there is a noticeable lack of care or attention spent on proactively auditing the products for security vulnerabilities. Customers purchase these products expecting secure software but often find it lacking. They turn to security companies to help them establish a secure posture for their corporate networks. In doing so, many security professionals run across vulnerabilities and bugs that could affect their clients. In essence, this translates into continued and diverse free security auditing for many software vendors, the type of auditing that often costs clients up to $500/hr. Software vendors must be grateful, right?

Of course not. Speaking from past experience, I can assure you many vendors take bug reports as a personal affront and insult. Initial responses to bug reports have been downright hostile in the past. It amazes me that these vendors show anything less than 110 percent gratitude and respect for people reporting security problems. While the software vendors advertise secure platforms, it is their poor coding and inadequate security auditing that led to these problems in the first place.

http://www.sun.com/solaris/overview.html

"The Solaris Operating Environment has been carefully engineered to deliver a reliable, high-performance, scalable, and secure platform on which to develop and deploy desktop and server applications."

http://www.microsoft.com/ntworkstation/

[In a colorful bar chart] "35 percent Lower TCO, 30 percent Faster, 1/3 Fewer helpdesk calls, Strong Security."

Killing the proverbial messenger does not solve the problem or help the situation. To expect bug reporters to send vendors the information, let alone wait up to half a year for a fix, is overly presumptuous and egotistical. Companies that share vulnerability information with vendors before releasing it to the public should be regarded as heroes, nothing less.

Delays: Why Vendors Take So Long

Software vendors often have a legitimate reason for asking bug reporters not to go public right away. Oftentimes consumers aren't aware of all of the conditions surrounding patching a vulnerability, which leads to cries for faster release of information and quicker patches and fixes. The trick is deciding on a fair amount of time to give these vendors, as they often do stall unnecessarily on releases (a statement based on a long case history). Some of the valid reasons vendors delay, reasons that may escape some people:

Regression Testing: When a bug is reported for a specific version of a piece of software, the vendor is obligated to test and fix all affected versions. Once all vulnerable versions are identified, patches must be developed. Next, full tests must be run on previous versions of the software with the new patches to ensure the changes don't break other aspects of the software.

Architecture: In today's software world, dozens of different hardware platforms exist. Patches for the PC platform are different from those for Sparc hardware. NT patches for the Alpha versus the PC could include a wide variety of alterations.

Additional bugs: A handful of software vendors have learned through trial by fire that a hasty fix is not always a quality fix. Days after the initial bug is reported and fixed, slight variations of the same bug surface, providing a healthy dose of mud to the vendor's face.

Bureaucracy: Software companies are often big and full of internal procedure, and departments don't always play well with each other.

Incentive Not to Report Bugs

Going beyond the lack of appreciation vendors show, there are several other reasons not to report bugs after discovering them.

In one case, a security company provided a large Unix vendor with a full technical write-up, working (and commented) exploit code, and a half-hour explanation of the bug over the phone. Despite all of this, the technicians at the vendor still could not figure out how it all worked. A year later, the vendor was still seeking assistance in getting the exploit to work.

In 1998, the same security company worked with another large Unix vendor on a nasty remote exploit that gave attackers full access. During the exchange of information, the security company asked if they could get a "thanks" or some kind of credit in the advisory the vendor would eventually release. Despite releasing their own advisory carefully timed to coincide with the vendor's, and despite hours of tech support, the vendor still would not give the company credit. Subsequent security newsletters quoted the vendor, not the security company, as the original poster of the bug information.

After a miscommunication between security company eEye and software vendor Microsoft over the public dissemination of vulnerability information, Microsoft representatives attempted to place all blame on eEye at a public convention. With no eEye employees present, a Microsoft spokesperson stood up during the Full Disclosure track at the Black Hat Briefings and proclaimed that Microsoft was the wounded dog, kicked by big bad eEye Security. The miscommunication? A twenty-four-hour difference over when to post the information to public forums.

Two Examples

Two specific examples of past dealings between security companies and vendors illustrate these points very well. Each demonstrates that vendors have more control over the issue than they admit. In our examples, both security companies maintained the same stance on full disclosure. One held firm with its intention to go public at a set time. The other adhered to the timetable set by the vendor and watched the issue drag from weeks to months.

Please Repent? (No Firm Stance)

In the middle of 1998, a security company called RepSec (RSI) found several vulnerable functions in one of the Solaris libraries. Upon contacting Sun Microsystems, they were informed it would take some time to fix the bugs and issue patches. RepSec offered Sun two weeks before they would release their advisory to the public. Sun countered, asking for at least a full month to resolve the issue. Mail went back and forth trying to work out a time frame acceptable to both parties. When the two weeks were up, RepSec did not go through with the release of their advisory, instead waiting for Sun to give the go-ahead.

After more weeks of delays, Sun still wasn't ready. Almost two full months had passed, with more requests for RepSec to hold off on releasing their advisory. Excuses came from Sun: regression testing, disputes over which functions were vulnerable, and more. RepSec held back, worried that the information hitting public forums would give hackers dangerous new vulnerability information that could be used to break into more machines. A little more than two full months passed, and customers of RepSec were getting frustrated. They had seen the advisory and questioned why it had not been released publicly. When given the answer that Sun was delaying further, one of the customers took matters into their own hands.

An unknown customer of RepSec decided Sun was being irresponsible and posted the information to Bugtraq. After two months of stalling on a fix, with no estimate on completion of patches, Sun miraculously posted a full patch and information on the vulnerability. To many security professionals, this proved that full disclosure was the most effective and speediest solution to a problem affecting many people. Had RepSec held firm with their original plan to go public in two weeks, no doubt Sun would have followed a day later with a patch. Subsequent advisories from RepSec on Sun products saw the same long delays before patches or advisories were issued. By not holding firm to release dates, RepSec allowed the vendor to control vital security fixes to widely used operating systems.

Do Things Our Way, It's Better (Firm Stance)

In June of 1999, a relatively new security company, eEye Security, found a severe vulnerability in Microsoft's IIS. Because of the widespread popularity of Windows NT and the large install base of IIS, this vulnerability posed a serious threat to millions of companies worldwide. eEye notified Microsoft of the vulnerability along with their intention to make details public in a week's time. Microsoft immediately wanted more time, and asked for it in a less than polite fashion.

Holding true to their word, eEye released the details of the vulnerability in a public advisory as scheduled, along with links to Microsoft's timely patch. In seven days, Microsoft was able to assess the problem, release their own advisory, and create a working, regression-tested patch. Had eEye not held firm on their intended release date, Microsoft would have taken as much time as possible before releasing a patch. Past history shows they would rather avoid (or downplay) any potential egg on their face, especially when it comes to security matters.

Since their initial dealings with Microsoft, eEye has received nothing but quick and courteous responses. Says Firas Bushnaq of eEye, "We have noticed a big improvement in Microsoft's handling of security related issues in the past few months. Response time is down to hours compared to days. Someone must have sent an internal memo."

Because of their diligence in releasing vital information to the public on their own schedule and managing the expectations of the software vendor, eEye's policy on full disclosure has led to a more responsive, open, and attentive software vendor.

Case in Point?

Looking at the history of vendor-released security advisories, a pattern emerges that suggests a firm stance on public dissemination of vulnerability information has its merits. A vendor like Sun Microsystems shows no pattern of continued releases, despite new vulnerabilities being brought to light each month on full disclosure security mail lists. Microsoft, by contrast, not only shows a regular release of information but a sharp increase shortly after the eEye/Microsoft releases about the IISHACK vulnerability, as the table of monthly advisory counts below shows. Is the increase in advisories due to companies like eEye holding firm on their promise of full disclosure? Sure seems like it.

             Sun   Microsoft
Nov 1998      2        1
Dec 1998      3        3
Jan 1999               2
Feb 1999      3        5
Mar 1999               3
Apr 1999      2
May 1999               6
June 1999     2        5
July 1999              3
Aug 1999      1*       6
Sep 1999      1        9
Oct 1999      1

* Reprint of CERT Advisory
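As a rough sanity check on the "sharp increase" claim, the short Python sketch below tabulates the Microsoft counts from the table and compares the monthly averages before and after June 1999, when the eEye IISHACK advisory went out. The numbers are simply transcribed from the table above (months with no entry treated as zero), and the placement of single-entry months in one column or the other is an inference from context, not hard data.

    # Monthly Microsoft advisory counts, transcribed from the table above.
    # NOTE: column placement of single-entry months is inferred, not certain;
    # blank months are counted as zero.
    counts = [
        ("Nov 1998", 1), ("Dec 1998", 3), ("Jan 1999", 2), ("Feb 1999", 5),
        ("Mar 1999", 3), ("Apr 1999", 0), ("May 1999", 6),   # before IISHACK
        ("Jun 1999", 5), ("Jul 1999", 3), ("Aug 1999", 6), ("Sep 1999", 9),
    ]

    before = [n for month, n in counts[:7]]   # Nov 1998 through May 1999
    after = [n for month, n in counts[7:]]    # Jun 1999 through Sep 1999

    print("avg before: %.1f advisories/month" % (float(sum(before)) / len(before)))
    print("avg after:  %.1f advisories/month" % (float(sum(after)) / len(after)))

On those numbers, Microsoft's output roughly doubles (about 2.9 to about 5.8 advisories per month) after the IISHACK release, which is consistent with, though certainly not proof of, the argument above.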

Arguments Against

There are at least two arguments suggesting that security companies have little to do with the trend of vendors moving toward full disclosure.

The first argument lies in the fact that Microsoft is relatively new to security advisories. Their first advisory was released January 21st, 1999, putting them now into their first year of full disclosure. Older vendors have been around much longer; Sun Microsystems, for example, released its first advisory on September 5, 1990. But despite Microsoft being relatively new to the advisory game, they have the luxury of learning from nearly a decade of other vendors' experiences.

Others may argue that the earlier example of a security company dealing with a large Unix vendor had little effect in the long run: that even when faced with a security company threatening public release of vulnerability information, the vendor reverted to delayed responses and little public concern. The flaw in this argument is that the security company gave in to the desires of the vendor and followed the vendor's recommendations for release dates. This is further backed by the fact that the vendor provided a patch and advisory within days of the information being released by a third party.

While this is not definitive proof, it weighs heavily in favor of the idea that a company policy of firm dates on full disclosure is a good thing.

One man's rant? Opinions on Full Disclosure

How do security professionals and operating system vendors view full disclosure and security vulnerabilities?

Al Huger (Security Focus, point of contact (POC) for two security companies dealing with reporting bugs to vendors):

"Full disclosure is, in and of itself, a means to an end. Many people participate in it, not out of malice but out of the hope that, with enough public scrutiny vendors, they will finally take responsibility for the software they write.

Make no mistake about it: Full disclosure is ugly. People get hurt. However, I see no reasonable alternative. For every responsible bug reporter out there, it's likely he/she has a counterpart who will keep the information for themselves and use it to ends that we would all rather avoid.

In a perfect world, vendors would perform rigorous security audits prior to market release. In a perfect world we would still have buggy software (this is something we will never lose), but we would also have vendors who make security a pre-emptive consideration as opposed to a forced reaction."

Aleph1 (moderator of Bugtraq, the most popular and active full disclosure security mail list):

"Let me clarify our disclosure policy. Some people get the impression we are full disclosure extremist. We are not.

First, we would rather you work with the vendor to create a patch or fix for the problem. If the vendor is responsive and making a good faith effort to release a fix in a timely manner, we'd rather you keep the existence of the vulnerability secret until such time as the vendor has the fix ready. You can then release both the vulnerability information and the patch at the same time. The reasoning behind this is twofold. First, in our experience, saying a vulnerability exists but not releasing full information seldom stops attackers from obtaining details of the vulnerability. Attackers will research the problem and either come up with the information on their own or hack their way to someone with details of the vulnerability.

Second, releasing both the details and the fix at the same time minimizes the time attackers have to find the vulnerability on their own. Next, we like people to post full details of the vulnerability once the vendor has released a fix or a patch. The reason behind this is that once patches are out, attackers can easily reverse engineer them to figure out what the vulnerability was. Thus you are only keeping the good guys in the dark. Knowing the vulnerability details allows people to verify that the fix indeed fixes the vulnerability (we have seen many cases where it doesn't, or does so in a bad way), it allows people to look for similar vulnerabilities in other systems, and it allows people to learn from the mistakes that enabled the vulnerability."

Firas Bushnaq (eEye Digital Security, POC for reporting bugs to Microsoft):

"The adoption of full disclosure is an ethic, our responsibility and the duty of every security professional is to disclose the facts. How and when we disclose the facts is, on the other hand, the most crucial part of full disclosure. Many factors come into play, the seriousness and implications of the bug, how long has it been since it was discovered, how responsive is the vendor and how dependant are we on the vendor for a patch.

We look at a network security breach as a violation of our safety and privacy and we will continue to tactically plan and execute to make sure that we address the issues in the shortest amount of time possible. Vendors are becoming more aware, users are becoming more informed and our networks are becoming more secure."

Erik Berls (NetBSD Team, Vendor POC for receiving bug reports):

"We try to aggressively pursue any security bug, verify it within our operating system, issue an immediate temporary fix and release a correct solution as well as a security advisory as soon as possible. It tends to operate on a timeframe based on hours not weeks."

Past, Present and Future

The history of security bugs is rather bleak. For years, full disclosure was not practiced by security professionals or vendors. Small groups of interested parties exchanged information amongst themselves, unwilling to disclose it to the masses. As these bugs were slowly found by others or passed on to vendors, they eventually got fixed. This era of security through obscurity did little for the overall perception of secure computing.

The present finds us in a new frame of mind, one that sees full disclosure as a viable and important way of dealing with vulnerabilities that can lead to disastrous effects. Slowly, vendors are learning that security is a growing concern to more and more people, and that responding to these concerns in short order helps everyone in the long run.

So what does the future hold? I hope that all software vendors acknowledge the success of full disclosure and adjust their own procedures to synchronize with it, and that quick, open responses to the security companies and individuals reporting these bugs become the standard rather than a few success stories.

Brian Martin is a security consultant and a former employee of RepSec (RSI). He can be reached via e-mail at bmartin@attrition.org.

