Note: This was the second article I did for Ex-Game magazine (print mag in Japan). It ran under
my name and was labeled "Original Document".
In the past few years, Japan has seen very few incidents of web sites being defaced. From 1995 to January of 2000, there were
only 27 recorded defacements (http://www.attrition.org/mirror/attrition/jp.html) of Japanese web sites, very few of which were
government owned. Beginning around January 24th, a brief but intensive wave of web defacements occurred on Japanese web
servers, most owned and run by the government. Among these sites were Japan Science and Technology Agency (www.sta.go.jp),
Japanese Management and Coordination Agency (www.somucho.go.jp), and Japanese Statistics Bureau (www.stat.go.jp). Shortly
after the first few attacks, officials with the Japanese government responded by declaring the attacks a serious threat to the
operation of their information infrastructure. Within days they had asked the United States government for assistance in
dealing with the attacks. Not only did government officials ask for help in recovering from the attacks, they
also asked for assistance in preventing similar incidents from happening again.
Because of the small but intense wave of defacements plaguing the Japanese government, more and more people are questioning
the skill required to perform such feats. Is the government facing computer masterminds intent on destroying the credibility
and integrity of government information? Or are the intruders nothing more than unskilled, malicious teenagers with a little
luck and a lot of bravery (or is it stupidity)? Perhaps it is a little of each, rolled into a less sinister and less
proficient person or persons. Accomplished hackers intent upon exploration typically do nothing that would draw undue
attention to their actions. Public, media or law enforcement scrutiny is often counterproductive to their goal of
uninterrupted learning and discovery. Unskilled kids who run scripts they can barely comprehend typically have no message
worth reading, and do not understand the potential consequences of their actions, or the seriousness of what they do.
What has become an old and foolhardy debate is whether or not defacing a web page does damage to a company (or the
government). Some argue that by changing a few lines of HTML, no real damage is done to the system. Since it does not disrupt
the flow of information for more than a few hours, and since it does not prevent people from using the system, many say claims
of damage are often inflated for selfish reasons such as financial gain or public sympathy. On the other hand, some argue that
simply undermining the integrity of and confidence in a system is damage enough unto itself. With the intrusion comes the
time required to assess and repair the damage, examine the security posture of the compromised machine(s), write reports
detailing the incident, and more. All of this adds up to lost time that administrators could have spent on projects that
earn money for the company. Then again, some would argue that maintaining security was part of the administrators' duties
in the first place, and that such incidents are the result of those duties not being performed.
How It Is Done
There are two basic categories of web defacement. The first involves vulnerabilities in the web server which allow a
remote attacker to alter the content of the page without logging into the server. These exploits typically involve the
intruder overwriting or appending to the existing web page. The second type of attack involves compromising the underlying
operating system in order to gain full access to the machine, and therefore access to the web pages. Once this type of
compromise has occurred, the intruder can interactively edit the existing web page, replace it with his/her own page, and a
lot more. Most Windows NT servers that experience web defacements fall into the first category, since NT isn't designed
around multiple users logging in via interactive interfaces. Most Unix (Solaris, Linux, BSD, etc.) defacements
occur after the intruder has gained "root" access to the machine, giving them full administrative rights.
Windows NT comes with its own web server prepackaged for customer convenience. Internet Information Server (IIS) is the second
most common web server found running on machines across the net (the most common on NT machines). According to Netcraft
(www.netcraft.com/survey/), 22.92% of machines surveyed in January 2000 were running Windows NT and IIS. IIS is no
exception to Microsoft's tradition of buggy and insecure software.
One of the most widely exploited bugs found on Windows NT systems is called the RDS/MDAC vulnerability. Through this
"feature", a third party can easily execute remote commands on a target system. What makes this bug a real threat is that the
attacker does not need any prior access to the machine. Remote Data Service (RDS) is a component of Microsoft Data
Access Components (MDAC) which is installed by default with the Windows NT 4.0 Option Pack. RDS components are designed to
allow controlled access to remote data resources through Internet Information Server (IIS). One component of RDS called the
DataFactory object is exploitable by untrusted attackers. The DataFactory object was originally designed as a server-based
object that handles client requests for information and provides read and write access to specific data sources.
Using exploit code widely available on the Internet, an attacker can use a single program to obtain all the information needed
to exploit the vulnerability. This same script will then prompt the attacker with "Please type the NT commandline you want to
run (cmd /c assumed):", allowing them to easily execute commands on the remote machine. Because of the ease with which this
can be exploited, combined with the large number of vulnerable servers, it is believed that the RDS/MDAC vulnerability is
responsible for thousands of web pages being defaced in the last six months. Because of the ease of exploitation and the lack
of knowledge required to utilize the attack, anyone and everyone who fancies himself a hacker has used this vulnerability to
deface web pages. This is somewhat evident by the childish and lame web pages that are put up in place of the original pages.
For more information on the RDS/MDAC attack, Rain Forest Puppy has written an excellent advisory outlining explicit technical
detail about the vulnerability (http://www.wiretrip.net/rfp/p/doc.asp?id=1&iface=2). Microsoft has released two security
advisories outlining details and patch information for the RDS/MDAC problem.
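The heart of Microsoft's published workaround is to stop IIS from launching the vulnerable objects at all. As a sketch (the registry paths below are reproduced from Microsoft's RDS guidance; verify them against the current advisory before touching a production server), deleting these keys prevents the DataFactory and related objects from being invoked through the web server:

```
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W3SVC\Parameters\ADCLaunch\RDSServer.DataFactory
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W3SVC\Parameters\ADCLaunch\AdvancedDataFactory
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W3SVC\Parameters\ADCLaunch\VbBusObj.VbBusObjCls
```

If RDS functionality is not needed at all, removing the /msadc virtual directory from IIS is a simpler and more complete fix.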
Protecting against attacks that allow direct access to a machine is rather simple for the most part. Staying abreast of newly
discovered vulnerabilities is the single most important thing. As new bugs are found, the vendor should address the problem
with patches or upgraded software. Staying up to date on these patches will typically keep you secure from a majority of the
hackers poking around on the Internet. While this will keep you safe for the most part, there always exists a small chance
that you will be exploited by a new vulnerability before you can patch the system. This is something that is virtually
impossible to protect against, and something that all administrators must deal with.
Unix servers have been designed around the idea of allowing multiple users to access the machine without any loss of
privileges or ability. There are few instances where an administrator must be sitting at the machine to effect a change or
alter the configuration of the system. Because of this philosophy, users must log into the system to add or edit web pages
(among other things). Intruders intent on defacing a web page must therefore first find a way onto the system.
By exploiting bugs in the various services run by Unix systems, it is sometimes possible to gain remote access to the machine.
Through remote buffer overflows (http://www.fc.net/phrack/files/p49/p49-14), sniffing attacks
(http://www.robertgraham.com/pubs/sniffing-faq.html), or more crude attacks like brute forcing a login and password, attackers
are able to spawn interactive shells on a target machine. In many cases, these shells are run with the highest privileges
('root' access), and the attacker can alter any file on the system. In other cases the privileges are those of a
normal user, forcing the attacker to use additional exploits to gain more access to the machine.
In the past year, vulnerabilities in various Remote Procedure Call (RPC) services have been a consistent entry point into
thousands of Unix servers. Some of the more commonly exploited RPC services include rpc.statd, rpc.mountd, and rpc.ttdb, one
of which can be found on almost every flavor of Unix distributed today. Because security has only recently become a
widespread concern, it took software vendors over a decade to realize the seriousness of the problem, and only in the last
year or two have they begun to address these vulnerabilities. With scripts readily available all over the Internet, even
the most novice of
hackers (often called script kiddies) can exploit these holes in systems worldwide.
Once interactive shell access has been gained to a Unix machine, even a rudimentary understanding of the Unix operating system
is all it takes to find and edit the system web page. Using find and vi, a competent intruder can walk through the system and
assume complete control over it. Changing a web page is actually the least of the damage that could be done to a vulnerable
system. However, such defacements are typically the most publicly embarrassing incident a company can face. Because of this,
security of a system is often focused on the web server and related components. This focus can quickly create gaping holes in
the underlying operating system and allow intruders to waltz right in.
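As an illustration of just how little is required once shell access is gained, the following sketch shows how an intruder might locate the web content to edit. The document-root paths shown are common defaults and assumptions, not universal:

```shell
# Illustrative only: locate candidate web pages on a compromised host.
# Search the whole filesystem for the standard default page name,
# discarding errors from unreadable directories.
find / -name 'index.html' -type f 2>/dev/null

# Common document roots worth checking directly (paths vary by vendor):
ls -ld /usr/local/apache/htdocs /var/www /home/httpd 2>/dev/null
```

From there, changing the page is a single vi invocation away.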
Protecting against intruders who target the operating system rather than the web server is typically straightforward. The
key to security is maintaining a consistent and proactive security posture. Rather than waiting for an embarrassing incident
to prompt your staff to implement better security measures, continual monitoring and updates should be performed from day one.
Once the machine is set up, administrators should take steps to improve its default security posture, as most
installations are notoriously insecure. Turning off unneeded remote services, removing the SUID bit from extraneous files,
and setting up better group control are just a few things administrators should do. Once done, you should check the web site
of the vendor of your operating system. These sites will contain updated information and security patches that address the
latest vulnerabilities known and that have been made public.
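A first hardening pass along the lines described above might be sketched as follows. This is a minimal example, not an exhaustive checklist; service names and file locations vary between Unix flavors, and every change should be tested before being made on a live system:

```shell
# Illustrative hardening sketch, not an exhaustive checklist.

# 1. Inventory SUID binaries; any that are not needed can have the
#    bit stripped with: chmod u-s /path/to/binary
find / -type f -perm -4000 2>/dev/null

# 2. Disable unneeded network services by commenting them out of
#    /etc/inetd.conf, then tell inetd to re-read its configuration.
#    (Shown as comments to avoid altering a live system.)
# sed 's/^telnet/#telnet/' /etc/inetd.conf > /etc/inetd.conf.new
# mv /etc/inetd.conf.new /etc/inetd.conf
# kill -HUP `cat /var/run/inetd.pid`
```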
Japan and the U.S.
Looking at the wave of recent Japanese Government defacements between January 24th and February 2nd, it is interesting to note
that at least six of the servers were running Sun Microsystems Solaris Operating System while only a single instance of
Microsoft Windows NT was found. At the time of the defacements, five of the machines could not be identified. Compare this
information with a list of United States Government servers that have been defaced
(http://www.attrition.org/mirror/attrition/gov.html), and you can see the heavy use of Windows NT.
Without more statistics showing the number of machines running in each government, it is difficult to draw accurate
conclusions about whether one operating system is more secure than another. The figures above do, however, begin to paint a
picture of each government's preference in operating platforms. The wide-scale deployment of Windows NT servers throughout
the United States Government has left it vulnerable to attackers, as is evident from the long list of defaced servers.
What may be more important is the reaction from the administrators of each system as well as the reaction from Government
officials. Public statements about U.S. servers being hacked and defaced were slow to come. It took over a year of repeated
embarrassing defacements before President William Clinton took a firm stance, calling for more security in government and
military web sites as well as a better response from the Federal Bureau of Investigation (www.fbi.gov) in tracking these
online vandals. Throughout the past year or more, several different U.S. agencies have asked Congress for more funds in order
to put a stop to these attacks. Despite additional funding being granted, virtually nothing has changed and U.S. servers
continue to be defaced. As recently as February 19th, three more U.S. government servers (all running Windows NT) were
defaced. NOAA Nauticus site (www.nauticus.noaa.gov), National Ocean Service Map Finder (mapfinder.nos.noaa.gov), and the
Office of the Speaker of the House (www.speaker.gov) were the latest casualties.
Unlike the slow U.S. reaction, Japanese Government officials quickly met with law enforcement and requested help from
the U.S. Government (http://news.bbc.co.uk/hi/english/world/asia-pacific/newsid_619000/619139.stm). This call for help is
ironic in that the U.S. has repeatedly demonstrated that it cannot protect its own information assets and web sites. Luckily
for both governments, attacks on their web sites have slowed down in the last few weeks. The question now is: will it continue?
Brian Martin (firstname.lastname@example.org)