[Infowarrior] - Interview with Bill Cheswick

Richard Forno rforno at infowarrior.org
Mon Jan 22 09:10:10 EST 2007



Original URL: 
http://www.theregister.co.uk/2007/01/22/bill_cheswick_interview/
Net security from one of the fathers of the biz
By Federico Biancuzzi, SecurityFocus
Published Monday 22nd January 2007 12:28 GMT

Interview Many people have seen internet maps on walls and in various
publications over the years. Federico Biancuzzi interviewed Bill Cheswick,
who started the Internet Mapping Project that grew into software to map
corporate and government networks. They discussed firewalling, logging, NIDS
and IPS, how to fight DDoS, and the future of BGP and DNS.

Could you introduce yourself?

Bill Cheswick: I am known for my work in internet security, starting with
work on early firewalls and honeypots at Bell Labs in the late 80s. I coined
the word "proxy" in its current usage in a paper I published in 1990. I
co-authored the first full book (http://www.wilyhacker.com/) on internet
security in 1994 with Steve Bellovin. This sold very well and arrived in
time to train the first generation of network managers.

In the late 1990s Hal Burch and I did some seminal research on IP traceback,
and then started the Internet Mapping Project. This grew into software to
map corporate and government networks. We were two of seven people who
co-founded Lumeta (http://www.lumeta.com/), a spin-off from Bell Labs, to
commercialise these capabilities. You have probably seen our internet maps
(http://www.cheswick.com/ches/map/gallery/index.html) on walls and in
various publications over the years. I served as chief scientist at Lumeta
from September 2000 to September 2006.

I am an internationally known speaker on computers, the internet, and
security.

You wrote a famous book entitled "Firewalls and Internet Security
(http://www.wilyhacker.com/)", so I'd like to ask you for a couple of
technical suggestions on firewalls. What type of policy do you prefer for
filtered TCP ports? Returning an RST or dropping packets silently?

Bill Cheswick: I prefer the silent drops: it makes an attacker wait for a
timeout, and you can't use spoofed packets to point RSTs elsewhere.
Returning an RST reveals information that really doesn't need to be
disclosed.

I don't think choosing one way or the other is a big deal, however.
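
To make the difference concrete from a prober's point of view, here is a
minimal sketch in Python using scapy (assuming the library is installed and
the script runs with raw-socket privileges; the address and port are made-up
examples). A silently dropped SYN just times out, while a rejected one comes
straight back with an RST:

    # Sketch: classify a port by sending one SYN and watching what comes back.
    # Requires scapy and raw-socket privileges; 192.0.2.10 is a documentation
    # address used purely as an example.
    from scapy.all import IP, TCP, sr1

    def probe(dst, port, timeout=5):
        resp = sr1(IP(dst=dst) / TCP(dport=port, flags="S"),
                   timeout=timeout, verbose=0)
        if resp is None:
            return "no reply: silently dropped, the prober waits out the timeout"
        if not resp.haslayer(TCP):
            return "non-TCP reply (e.g. ICMP unreachable)"
        flags = int(resp[TCP].flags)
        if flags & 0x04:                    # RST bit set
            return "RST returned: immediate answer, and information disclosed"
        if flags & 0x12 == 0x12:            # SYN+ACK
            return "SYN+ACK: the port is open"
        return "unexpected response"

    print(probe("192.0.2.10", 23))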

I was thinking of the fact that if you drop TCP packets for a particular
port or range of ports, an attacker could spoof your IP. In fact, he would
be able to send SYN packets to the victim, which will send SYN+ACKs to your
IP, but since your firewall will drop those packets instead of returning an
RST, the attacker will be able to send his ACK storm undisturbed...

Bill Cheswick: It's true, but that trick will also work with any unassigned
or idle IP addresses, and there are many.

In any case, these bounced packets don't offer any amplification, so it
isn't clear why they would bother. Also, I understand that with the botnets
so common, a lot of attackers don't bother spoofing packets.

What type of logging would you suggest for a firewall filtering an internet
connection? If the aim of a firewall is to block undesired packets, why
should we log them?

Bill Cheswick: Back in the early 90s I used to log all the probes, and often
send out emails warning the owners of probing machines that they might be
compromised. Over time this became as pointless as counting bugs on a
windshield, and I stopped.

The information is not entirely useless, and the firewall can become a small
packet telescope. Most of the information revealed is statistical: worm
infection rates, etc. But you can imagine combining information about
firewall probes with other information about an attack on a company that
could yield some additional information about the attack.

Disk space is cheap, and these logs aren't needed for very long, nor do they
typically require being backed up. I like to put such logs into a large,
cheap drop-safe, and make sure that if the safe fills up, the firewall still
functions.
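
As a rough illustration of that "packet telescope" idea, a short script can
reduce a drop log to the kind of statistics he mentions. This is only a
sketch; the whitespace-separated log format (timestamp, source address,
destination port) is a made-up example, not any particular firewall's output:

    # Sketch: summarise a firewall drop log as a small packet telescope.
    # Usage: python telescope.py droplog.txt
    # Assumed (hypothetical) format per line: <timestamp> <src_ip> <dst_port>
    from collections import Counter
    import sys

    sources, ports = Counter(), Counter()
    with open(sys.argv[1]) as log:
        for line in log:
            try:
                _ts, src, dport = line.split()[:3]
            except ValueError:
                continue                    # skip malformed lines
            sources[src] += 1
            ports[dport] += 1

    print("distinct probing hosts:", len(sources))
    print("top probed ports:", ports.most_common(5))
    print("noisiest sources:", sources.most_common(5))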

You didn't mention NIDS when talking about analysing data and discovering
threats. What is your opinion about the core idea and current technology of
Network Intrusion Detection Systems?

Bill Cheswick: It makes a lot of sense to watch your own network and
interconnections to keep an eye on what's going on. The problem is that
there is such volume and variety of data and protocols (a strength of the
internet) that it is really hard for a human to understand his network
traffic, unless it is highly constrained (in other words, "we only allow web
traffic on this subnet...").

Not only is it hard to really monitor what's going on, but subtle, slow
stealth attacks and probes over, say, a period of months are almost
impossible to separate from the hue and cry of momentary traffic. Most people don't try,
but that's where the real pros can eat your lunch.

NIDS are an ongoing attempt to watch the network. They all try to watch the
net, summarise traffic, report anomalies, etc. They all have problems with
false negatives and false positives. False positives quickly become a
monotonous drumbeat, and tend to quash interest in the tool and its results.
When a salesman tells you about a NIDS, or you read a paper about some new
NIDS technology, always find out the details of false positive rates, and
what they miss.
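
The arithmetic behind that monotonous drumbeat is the base-rate problem. A
back-of-the-envelope sketch with illustrative, made-up numbers shows how even
a seemingly low false-positive rate swamps the alert queue when real attacks
are rare:

    # Sketch: base-rate arithmetic for NIDS alerts (all numbers illustrative).
    events_per_day  = 1_000_000   # events the NIDS examines
    attack_fraction = 1e-5        # fraction of events that are truly malicious
    detection_rate  = 0.95        # chance a real attack raises an alert
    false_pos_rate  = 0.01        # chance a benign event raises an alert

    attacks = events_per_day * attack_fraction
    benign = events_per_day - attacks
    true_alerts = attacks * detection_rate
    false_alerts = benign * false_pos_rate

    print(f"alerts per day: {true_alerts + false_alerts:,.0f}")
    print(f"fraction of alerts that are real: "
          f"{true_alerts / (true_alerts + false_alerts):.3%}")
    # About 10,000 alerts a day, of which only ~0.1% point at real attacks.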

Another problem is that the NIDS themselves may be subverted. We have seen buffer
overflow attacks on the monitoring host, packets that were intended to
subvert the eavesdropping software! This can turn your NIDS against you.

Deep down, network monitors have what Matt Blaze calls the "eavesdropper's
dilemma." Is the eavesdropping software seeing the same data, and
interpreting it the same way, as the destination hosts? This is a hard
problem: perhaps packets don't make it all the way to the destination, or
the end operating system can interpret overlapping data in two ways. The
eavesdropper has to understand this, and state-of-the-art implementations
actually understand the local network topology and actively probe endpoints
to determine their operating system and version. It seems to me that this
particular arms race will end badly.
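
A toy example makes the overlapping-data ambiguity concrete. The two
reassembly policies below are illustrative only, not any particular operating
system's documented behaviour; the point is that a monitor guessing the wrong
policy reconstructs a different byte stream than the destination host sees:

    # Sketch: the same overlapping TCP segments, reassembled two different ways.
    segments = [(0, b"GET /ind"), (4, b"XXXX"), (8, b"ex.html ")]

    def reassemble(segs, favor_new):
        buf = {}
        for offset, data in segs:
            for i, byte in enumerate(data):
                pos = offset + i
                if favor_new or pos not in buf:   # "last wins" vs "first wins"
                    buf[pos] = byte
        return bytes(buf[i] for i in sorted(buf))

    print(reassemble(segments, favor_new=False))  # b'GET /index.html '
    print(reassemble(segments, favor_new=True))   # b'GET XXXXex.html '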

This same problem exists for law enforcement and military, only on a much
grander scale. They need to extract specific, small bits of data from vast
torrents of data.

What do you think about reactive firewalls, also known as IPS (Intrusion
Prevention Systems)?

Bill Cheswick: Reactive security is an idea that keeps popping up. It seems
logical. Why not send out a virus to cure a virus, for example? How about
having an attacked host somehow stifle the attacker, or tell a firewall to
block the noxious packets?

These are very tricky things to do, and the danger is always that an
attacker can make you DOS yourself or someone else. As an attacker, I can
make you shut down connections by making them appear to misbehave. This is
often easier than launching the original attack that the reactive system was
designed to suppress (by the way, this happens a lot in biological immune
systems as well. There are a number of diseases that trigger dangerous or
fatal immune system responses).

So I am skeptical about these systems. They may work out, but I want to keep
an eye on the actual user experiences with these.

What is the state of research in network security? What attracts funds? What
is considered a promising technology?

Bill Cheswick: A lot of the easy stuff has been done, and even beaten to
death commercially. I have been intrigued by new work in a few areas.

    * There is a lot of activity on virtual machines of various sorts, like
VMware and Xen, for example. I think these have a lot of potential,
especially with better hardware support. VMs are a nice sandbox for
necessary but dangerous client software, like browsers and mail readers.
They can be used to improve testing of operating systems, which I would like
to see more of.
    * Google for "strider honey monkeys". This is a nice paper about a
proactive project at Microsoft research to go find browser exploits on evil
sites (http://www.securityfocus.com/news/11273). It has found a number of
day-zero and other exploits, which they fed into the developers and legal
department. I understand this work has been turned over to production. A
nice job.
    * I was excited by the SANE paper at Usenix from some crackerjack folk
at Stanford. It is a rethinking of intranet design, completely replacing the
end-to-end principle with centralised control. This is bad for research and
new internet technologies, but it may be exactly what a military network
needs, and maybe useful for corporate deployment. There are open questions,
but it is quite promising.

I am not well enough connected with current funding streams to answer that
question well.

How will the internet change with the increasing resources that common
people have access to? For example, a blind spoofing attack could become
more feasible with broadband access to the internet, and there are some
countries where you can easily and cheaply get a 100Mbps connection. The
same goes for DDoS via botnets, if each host had a 100Mbps connection...

Bill Cheswick: This has already happened some time ago. Parts of the Far
East have efficient home wiring, and computers there are often used in
staging attacks because they have high bandwidth. This has become such a
problem that some people just drop all email from China, since it can be a
major source of spam connections, and many people don't know anyone there.

Spoofing of attacks continues, but I am told that the spoofing rates are
down. For DDoS, why spoof when there are tens of thousands of source
addresses?

For almost all users, the computer and the network have far more capacity
than they actually use almost all of the time. Common computers have clock
speeds six times higher than the million-dollar Cray we had at Bell
Labs in the early 90s. The Cray still wins in some performance areas, but in
many it does not. What does an average user do with this compute power?
Powerpoint and word processing don't need nearly this much power. Some
multimedia and many games do use this power.

So miscreants use the computer and the network connections of average users
for their own uses, being careful not to bother the owner. That's why
viruses these days don't tend to do nasty things like erase hard drives,
though they certainly could if they wished.

These compromised machines are very useful for making money, through spam
delivery, phishing sites, DDoS extortion attacks, etc. The incentives are
strong, and I expect this misuse to continue. I hope the population of
susceptible machines will decline as Vista gets deployed and the early kinks
get ironed out.

The big change in the internet is going to be greatly increased multimedia
delivery. An hour-long television show at 720p is about 5GB. People are going to
want to share these with friends, and providers are grappling with new
delivery mechanisms, perhaps permanently replacing broadcast TV.
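
Taking that 5GB figure at face value, the implied sustained rate per viewer is
on the order of 10Mbps, which is why the delivery mechanism matters:

    # Sketch: sustained bitrate implied by a 5 GB, one-hour show.
    size_bits = 5 * 10**9 * 8      # 5 GB in bits
    seconds = 3600                 # one hour
    print(f"{size_bits / seconds / 1e6:.1f} Mbit/s")   # about 11 Mbit/s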

What is the more promising path to fight DDoS?

Bill Cheswick: I have no definitive answer for this. I can imagine a world
of robust, worm-free software. Engineering, experience, and the right
economic motives can bring this about. But any public server can be abused
by the public. Are the flood of queries to CNN the result of breaking news,
or a focused DDoS attack? Even if it is breaking news, I could imagine that
the news might be created explicitly to flood the site. How would we know?

I see no theoretical possibility of doing anything more than mitigating
attacks, and ultimately throwing large amounts of computing and network
capacity at the problem, which is what all the most popular targets do.

Do you think that we could use some mapping software to fight these types of
attacks, just like weather people study the movement and shape of tornadoes
with satellites?

Bill Cheswick: I don't think it's likely to be useful, because the sources of
DDoS attacks are widespread and generally not hidden. It doesn't help me if
I know the location of 10,000 attacking hosts: I can't possibly track them
down (using traceback, traffic analysis, or whatever) and shut them all
down. These days I am told that the attackers often don't even bother to
spoof the attacking addresses.

If there is a particular attacking stream of interest, then, yes, this
technology may be helpful, combined with others. I mentioned traffic
analysis: this is one area where I conjecture that the spooks may be well
ahead of the public literature.

There are certainly researchers examining packet traceback, flood
suppression, etc., using these tools, including my data.

It seems that net neutrality is under fire in the US. What is your opinion
from a security standpoint? Could we see some security improvements if
carriers had the right to filter the traffic on their networks?

Bill Cheswick: Short answer: some carriers do filter some traffic, and that
sometimes is a benefit to their customers. As the Chinese would tell you if
free to do so, it is actually quite hard to suppress all the unwanted
traffic, given world-class encryption and a massive traffic flow in which to
hide.

The USENIX Magazine (http://www.usenix.org/publications/login/) published an
article [PDF (http://www.cs.columbia.edu/~smb/papers/v6worms.pdf)] titled
Worm Propagation Strategies in an IPv6 Internet that you co-authored. It
seems that IPv6 could help us in fighting worms thanks to its huge address
space. What type of other indirect security advantages could IPv6 provide?

Bill Cheswick: That paper points out that it doesn't help us that much. IPv6
is a good idea, but it shouldn't be sold as a palliative for worms.

The job of hunting for hosts on a network also has legitimate motivations.
Corporate auditors are keen to find and track their assets. I think they are
going to have to talk to the routers more. Hopefully, the worms will be
excluded from these conversations.

At present, I don't see much economic pressure for corporations to switch
their intranets to IPv6. There is a lot of work involved, and I don't see
the benefits.
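
The naive argument for IPv6 as a worm defence is simple arithmetic: random
scanning of even one subnet becomes hopeless. The paper's point is that worms
will respond by finding targets some other way. A rough sketch of the
arithmetic, with an assumed (generous) probe rate:

    # Sketch: brute-force scanning one IPv6 /64 subnet (probe rate is assumed).
    addresses = 2 ** 64              # possible addresses in a single /64
    probes_per_sec = 1_000_000       # a generously fast scanner
    seconds_per_year = 365 * 24 * 3600

    years = addresses / probes_per_sec / seconds_per_year
    print(f"{years:,.0f} years to sweep one /64")   # roughly 585,000 years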

The internet runs on two fragile technologies: BGP connections among
routers, and a bunch of root DNS servers deployed around the planet. How
much longer do you think this setup can remain effective?

Bill Cheswick: For quite a while, actually, though there are obvious,
well-known weaknesses with both systems. The DNS root servers appear to be
13 hosts, but are actually many more. They have been under varying,
continual, low-level attacks for many years, a process that tends to toughen
the defenses and make them quite robust. A few years ago there was a strong
attack on the root servers, taking 9 of the 13 down at some point.

The heterogeneity of the root server management was part of the underlying
robustness. For example, Paul Vixie's servers (F.ROOT-SERVERS.NET) had many
hosts hiding behind that single IP address. I understand they did not go
down. In this case, the statelessness of the UDP protocol underlying the DNS
system was a strength (it is a weakness in other ways, allowing a variety of
attacks, including some new ones recently).

There are other root servers, of course. Anyone can run one; it is just a
question of getting people to use it. I understand that China is proceeding
with root servers of their own. DNSSEC is a way to get the right DNS answer,
but its deployment has had problems for at least 10 years.

BGP is certainly another network issue. Where should my routers forward
packets to? BGP distributes this information throughout the internet. There
are two problems here: 1) is the distribution working correctly, and 2) are
the other players sending the correct information in the first place? This
is usually an easy problem between an ISP and their customer. The customer
is only allowed to announce certain routes, and the ISP filters these
announcements to enforce the restriction. This is easy with a short list of
announcements.
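
Conceptually, that customer-facing filter is just a prefix allow-list. A
minimal sketch follows, with made-up example prefixes; real routers express
this with prefix-lists or route filters rather than code:

    # Sketch: accept a customer's BGP announcement only if it falls inside the
    # address space assigned to that customer (prefixes here are examples).
    import ipaddress

    ALLOWED = [ipaddress.ip_network(p)
               for p in ("198.51.100.0/24", "203.0.113.0/24")]

    def accept_announcement(prefix):
        net = ipaddress.ip_network(prefix)
        return any(net.subnet_of(allowed) for allowed in ALLOWED)

    print(accept_announcement("198.51.100.0/25"))   # True: inside assigned block
    print(accept_announcement("192.0.2.0/24"))      # False: not this customer's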

But at the peering point with other ISPs, this becomes hard, because there
are hundreds of thousands of routes, and it isn't clear which is which.
Should I forward packets for Estonia to router A or router B? We are far
removed from the places where these answers are known.

There are proposals to grab ahold of all this information using
cryptographic signatures. SBGP is one ongoing proposal, but there are lots
of problems with it, and lots of routers to change (we identify almost
200,000 routers a day worldwide in the internet mapping project,
http://www.cheswick.com/ches/map/index.html).

And BGP announcements are misused. Evil nets will pop up for a little while,
emit bad packets, and then unannounce themselves, confounding the job of
tracking them down. Other attacks can divert packets from the proper
destinations. There have been many cases of this, both accidental and
intentional.

For all these problems, and others in the past, I have been impressed with
the response of the network community. These problems, and others like
security weaknesses, security exploits, etc., usually get dealt with in a
few days. For example, the SYN packet DOS attacks in 1996 quickly brought
together ad hoc teams of experts, and within a week, patches with new
mitigations were appearing from the vendors. You can take the internet down,
but probably not for very long.

This article originally appeared in Security Focus
(http://www.securityfocus.com/columnists/429?ref=rss).

Copyright © 2007, SecurityFocus (http://www.securityfocus.com/)




