[Dataloss] CEOs deserve jail for data breaches [LONG]

Rich Kulawiec rsk at gsp.org
Wed Apr 9 16:36:32 UTC 2008


Sorry, too much coffee.

On Wed, Apr 09, 2008 at 09:26:33AM -0400, Allan Friedman wrote:
> The only reason to advocate this sort of measure is if we have
> concrete proof that the personal-punishment type laws are more
> effective than the other alternatives that have been discussed on this
> list, including *effective* liability models or a shared culture of
> openness and communication to prevent future breaches.

That's a really good point.   Which I'm now about to disagree with. ;-)
Well, in part at least.  Let me give it my best shot.

I'll argue that unless people are held *personally* accountable --
in both a civil and criminal sense -- there's no incentive for them
to make any effort.  Why should the executives of TJX (to re-use my
favorite example) make the slightest effort to address security issues
when they know, up front, that in the worst (likely) scenario, they
can walk away -- with their bloated salaries, their obscene bonuses,
their golden parachutes -- and start doing it again somewhere else?
All they have to do is use the word processor macro that prints out
"we take the privacy of our customers seriously", look suitably grave
at the press conference, and quietly slink off.

The people at the top of these companies are the ones who have
accepted full personal responsibility for everything that happens in
those companies on their watch.  And we're letting them (mostly) off
the hook, so we shouldn't be surprised that they repeat the behavior:
it's very profitable.

After all: it's not *their* data.  Why should they care?

(Let me note in passing that expecting Cxx level executives to
take on this responsibility is expecting a lot.  But that's the
deal: anyone who's not up to that is free to decline such a position.
But IF they accept it, and IF they accept the enormous rewards that
typically go with it, then they have also -- whether they realize it
or not -- accepted the responsibility.  Nobody rides for free.)


As I watch this list, and the "Pogo Was Right" web site, and all the
other resources that we probably all watch, and I consider the impact
that data protection legislation and enforcement have had to date,
I'm reminded of a line from Marcus Ranum's brilliant "The Six Dumbest
Ideas in Security":

	 "If it was going to work, it would have worked by now."

I submit that what we (the collective societal "we") are doing isn't
working, and that it's probably not going to work.  If you buy that
assertion (and I'm sure some do, some don't), then the question arises:
"okay, fine, let's do something different...what?"

Which brings me back to:

	"Somebody's got to go to prison."
		Agent Sadusky, "National Treasure"

which is starting to be my favorite quote of the month.  I do recognize,
though (to your point and that of others), that vigorous criminal
prosecution will probably cause problems with disclosure.  I'll counter
that by saying we already have those problems: disclosures are delayed,
minimized, obfuscated, wallpapered, and everything else possible to
pretend they don't exist or don't have significant impact.  And that
laundry list just accounts for intentional actions: some disclosures
don't happen because the organizations don't know...or don't want to
know.  So I
conclude that voluntary disclosure is at best a weak mechanism -- nice
to have, but not one we should primarily rely on.

So let me suggest a couple of possible approaches around this, approaches
which I think address the problem that increased personal accountability
could well decrease the inclination to be open.

I'll begin by citing the Kulawiec Iceberg Principle: For every
breach reported by an organization, there are ten more they're aware of.
For every breach an organization's aware of, there are ten more they
don't know about.  (Hey, I thought it up, I'm sticking my name on it.
Get your own. ;-)  More seriously, if someone beat me to it, let me 
know and I'll pummeXXXXgive them appropriate credit.)
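
Taken at face value, the principle implies some rough arithmetic.  (This
is just my own back-of-the-envelope sketch, with N standing in for
whatever number of breaches actually gets reported; the two 10x
multipliers are the assumption, not data.)

	reported                   =    N
	known but not reported     ~  10 N       (ten more per reported breach)
	known, total               ~  11 N
	unknown                    ~  10 x 11 N  (ten more per known breach)
	everything that happened   ~ 121 N

In other words, if the principle is anywhere near right, the breaches
we read about are on the order of one percent of the breaches that
actually occur.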

I came up with that because it seems to match observations.  The repeated
pattern of disclosures which start at X, escalate to 2X, then 5X, then 10X,
argues for the first part.  The non-disclosures in cases where there are
obviously severe problems argue for the second.  A timely example:

	Flawed Security Lets Sprint Accounts Get Easily Hijacked 
	http://consumerist.com/376845/flawed-security-lets-sprint-accounts-get-easily-hijacked

followed up by (and thanks to Paul Ferguson for noting this):

	Flawed Sprint Security Worse Than We Thought
	http://consumerist.com/377617/flawed-sprint-security-worse-than-we-thought

both of which very likely relate back to:

	Sprint Twiddles Thumbs While 12-Year Customers Get Scammed For $2,500
	http://consumerist.com/374199/sprint-twiddles-thumbs-while-12+year-customers-get-scammed-for-2500

No disclosure yet from Sprint.  But there's obviously a problem
here, and I'd be really surprised if it *hadn't* been exploited.

The fact that this is emerging from discussions on a consumer advocacy
web site and not from Sprint corporate is telling.  Either they know
(in which case they're not being forthright) or they don't know (in
which case they may not be very good at what they're doing).

My point?  My point in going through all this is that we need mechanisms
which *do not* rely on voluntary disclosure, because it doesn't work
very well.  (This explains why I'm not very worried about a decrease
in voluntary disclosure as a result of prosecutions.)

I can think of three things that might have a fighting chance,
so I'll propose them in order to let you all promptly shred them. ;-)

1. Not just increased safeguards for whistleblowers,  but rewards.
It's worth it to us as a society, and if we do it right, it won't
cost us anything.  If J. Random System Admin inside Foo Corp notes
that data is being shipped on unencrypted CDs in violation of
law and/or best practice, and JRSA drops a dime to tell someone,
then when it's all sorted out, JRSA is compensated for due diligence.
(Where does the money come from?  Cxx executives. They have plenty
to spare.  Besides, if they're not doing their jobs correctly,
why should they be well-paid?)  We need a structure that actively
encourages the people who have firsthand knowledge of this to
rat out their bosses, preferably *before* breaches occur.  To do
that, we have to counter their company loyalty, their job security
fears, etc.: hence the need for safeguards plus rewards.

2. We need markedly higher penalties for bonehead moves. These
might include:

	- acquiring data that you really have no reason to have
	- storing data that you shouldn't have
	- storing data longer than you need it
	- throwing data in the dumpster out back
	- shipping data without encrypting it

etc.  What I mean here is that we all know there are a kazillion
threats, and the really really clever ones are probably not on
our radar...so it's hard to fault someone if they get nailed by
one of those that nobody's ever seen before.  But the easy ones?
C'mon, there's no reason to let those happen.  Even people who just
skim the glossy pages of CSO magazine should know these by now.

3. We need a mechanism to correlate independent reports of data
theft *symptoms*.  Go back to the Sprint example above: those CAN'T
be the only people affected by this problem.  There must be others.
But they may well think they're alone -- they may not realize that
they're just another data point in a pattern.  We need a way for
victims of identity theft, credit card fraud, etc. to safely
(and there's the catch!) pool information so that someone can step
back and look at the mosaic and say "hmmm...that's odd..."

I'll skip over how such a mechanism might work because I'll probably
get it wrong anyway.  (Some back-of-the-envelope scribbling suggests
that this is a hard problem.)  But I think we need it, because I
think it will give us a chance to uncover the 9 of 10 that companies
know about and aren't announcing, and the 9 of 10 that they haven't
detected yet.  It could do this because it doesn't rely on them --
and we shouldn't rely on them, because *they're the problem*.

I can tell you that it shouldn't be run (a) by any government
or (b) by any commercial institution.  All of those already have
way too much data and can't keep track of it, and besides, we
already know they have miserable security.  (The GAO keeps handing
out F grades in security to US government agencies.  I presume
this is because there is no lower grade available.)  So if they
ran it, the most likely outcome would be that it made things worse.
That said...who?   I don't know.  A non-profit with very very serious
security and privacy clue?

I'm sure that there are problems with all three of these.  I'm equally
sure that 4,5,6... are out there and may well be better.  But I think
this is worth debating because I don't think voluntary disclosure is
working very well.

---Rsk

