[VIM] r0t on "bugtraqs @ all"
Steven M. Christey
coley at linus.mitre.org
Mon Jun 19 20:07:45 EDT 2006
Figured I'd answer this one on the list, since most of us probably deal
with this question at one point or another.
On Mon, 19 Jun 2006, security curmudgeon wrote:
> : Just out of curiosity - in retrospect, do you think that
> : "split-by-executable" has worked well for OSVDB? It's a clear rule and
> : easily understood, which is a big win.
> I can easily argue both methods as far as VDBs go. I have one entry in my
> queue that I groan at and push to another day over and over, because it is
> a huge split (remote file inclusion).
> Does it benefit our users to have 70 entries for this product? Our rules
> state we should split them out, but does it actually benefit anyone?
We've faced this problem in CVE on several occasions; off the top of my
head, we have:
- the PROTOS project disclosures, from SNMP in 2002 and beyond. PROTOS-style
analyses *should* get dozens or hundreds of CVEs, but sometimes the data
isn't there, and other times... well, it could literally take a
week of labor. I'm slowly settling on a middle ground, but still...
- somewhat recently, maybe last year, both Ethereal and Oracle had
large-scale disclosures. Formal CVE content decisions are very clear that
if bug 1 and bug 2 are in different sets of versions, then they should be
split. The Ethereal advisory was very specific about the *starting*
versions for each affected range, which varied widely, but the *ending*
version was the same across the board; the Oracle advisory was similar. But the big
question here was - "is it USABLE" to anybody? CVE gets used in Red Hat
advisories, for example. How is it usable to Red Hat consumers to have an
advisory that has 70 CVEs in it when there's only one patch? (Linux
kernel updates get up there around 20, but that's more understandable than
for a single little app.)
On the other hand, just giving up and giving a single CVE to such an
advisory is not useful relative to some of the ways in which CVE is used.
We don't have a single focused audience such as sysadmins; we have
multiple audiences, including IDS/tool vendors, academic researchers, etc.,
and it's been a personal goal of mine to have CVE be useful in
quantitative analysis. So, we try to balance the needs of all of the
audiences and come up with something that is at least moderately workable.
In this particular case, we weakened the application of the CDs in
large-scale disclosures, and have settled on a repeatable process for managing
them that seems to work. So when an Ethereal or Oracle advisory comes out
with about 70 bugs in it, it'll probably result in 20 to 30 CVEs, which
seems to be a reasonable compromise: the numbers are still high, roughly
capturing how many bugs there were, but they aren't ridiculous, so
sysadmins don't necessarily have to wade through mountains of entries.
Taken to its logical conclusion, the disparity in levels of detail for
each disclosure does not bode well for absolute consistency in metrics, no
matter how much we might try otherwise. So I'm hoping for repeatability,
(relative) simplicity, and clarity of bias, so when people interpret
results they can at least understand how they might be skewed (well, for
the handful of people who care about that kind of thing).
> While it may clutter up the database a bit, the obvious advantage comes at
> analysis/statistics time, when we can be sure we followed standards,
> making interpretation of said stats less debatable.
I would guess that this is one of the big issues for any vuln DB that
wants to be correct about the raw number of vulnerabilities *and* be
useful to sysadmins.