From rforno at infowarrior.org Sun Nov 1 05:14:26 2009 From: rforno at infowarrior.org (Richard Forno) Date: Sun, 1 Nov 2009 01:14:26 -0400 Subject: [Infowarrior] - 1600 are suggested daily for FBI's list Message-ID: 1,600 are suggested daily for FBI's list Number of names on terrorist watch list at 400,000, agency says By Walter Pincus Washington Post Staff Writer Sunday, November 1, 2009 http://www.washingtonpost.com/wp-dyn/content/article/2009/10/31/AR2009103102141_pf.html Newly released FBI data offer evidence of the broad scope and complexity of the nation's terrorist watch list, documenting a daily flood of names nominated for inclusion to the controversial list. During a 12-month period ended in March this year, for example, the U.S. intelligence community suggested on a daily basis that 1,600 people qualified for the list because they presented a "reasonable suspicion," according to data provided to the Senate Judiciary Committee by the FBI in September and made public last week. FBI officials cautioned that each nomination "does not necessarily represent a new individual, but may instead involve an alias or name variant for a previously watchlisted person." The ever-churning list is said to contain more than 400,000 unique names and over 1 million entries. The committee was told that over that same period, officials asked each day that 600 names be removed and 4,800 records be modified. Fewer than 5 percent of the people on the list are U.S. citizens or legal permanent residents. Nine percent of those on the terrorism list, the FBI said, are also on the government's "no fly" list. This information, and more about the FBI's wide-ranging effort against terrorists, came in answers from FBI Director Robert S. Mueller III to Senate Judiciary Committee members' questions. The answers were first made public last week in Steven Aftergood's Secrecy News. Sen. Russell D. Feingold (D-Wis.), who has shown concern over some of the FBI's relatively new investigative techniques assessing possible terrorist, criminal or foreign intelligence activities, drew new information from the agency. Before the attacks of Sept. 11, 2001, the FBI needed initial information that a person or group was engaged in wrongdoing before it could open a preliminary investigation. Under current practice, no such information is needed. That led Feingold to ask how many "assessments" had been initiated and how many had led to investigations since new guidelines were put into effect in December 2008. The FBI said the answer was "sensitive" and would be provided only in classified form. Feingold was given brief descriptions of the types of assessments that can be undertaken: The inquiries can be opened by individual agents "proactively," meaning on his or her own or in response to a lead about a threat. Other assessments are undertaken to identify or gather information about potential targets or terrorists, to gather information to aid intelligence gathering and related to matters of foreign intelligence interest. Feingold pointed to a November 2008 Justice Department inspector general audit showing that in 2006, approximately 219,000 tips from the public led to the FBI's determination that there were 2,800 counterterrorism threats and suspicious incidents that year. "Regardless of the reporting source, FBI policy requires that each threat or suspicious incident should receive some level of review and assessment to determine the potential nexus to terrorism," the audit said. 
In a different vein, the FBI was asked why it is losing new recruits as special agents and support personnel at a time when terrorist investigations are increasing. The FBI responded that failed polygraph tests rather than other factors, such as the length of time for getting security clearances, are the main reason recruits are ending their efforts to join the bureau. In the past year, polygraphs were the cause of roughly 40 percent of special-agent applicants dropping out, the records showed. From rforno at infowarrior.org Mon Nov 2 13:37:12 2009 From: rforno at infowarrior.org (Richard Forno) Date: Mon, 2 Nov 2009 08:37:12 -0500 Subject: [Infowarrior] - Newsday Columnist Quits Over Paywall, Wants To Be Read Message-ID: <76055E63-FCC3-41DC-B8BE-484AA2984D50@infowarrior.org> Newsday Columnist Quits Over Paywall, Wants To Be Read from the as-he-should dept http://techdirt.com/articles/20091101/1842486752.shtml One of the reasons why the NY Times eventually did away with its old "paywall" was that its big-name columnists started complaining that fewer and fewer people were reading them. We've suggested in the past that newspapers that decide to put up a paywall may find that their best reporters decide to go elsewhere, knowing that locking up their own content isn't a good thing in terms of career advancement. So, with Cablevision deciding to put Newsday behind a paywall, it didn't take long for some of its columnists to start bailing. The NY Times is reporting that Newsday columnist Saul Friedman quit and did so while publishing an open letter on why paywalls are a bad idea, while also telling the NY Times that he knew his column was popular with people outside of Newsday's footprint, and he was upset that those people would not be able to read his column and that he wouldn't be able to send out links to his columns. Oh, one other thing? Mr. Friedman is 80 years old and worked for newspapers for over 50 years. In other words, he's not just some "young kid who thinks everything online should be free," as we're so often told is the real problem. News organizations that lock up their content are increasingly going to discover that it's more and more difficult to attract top talent when compared to publications that actually help raise the journalists' profiles.
The United States has drafted the chapter under enormous secrecy, with selected groups granted access under strict non-disclosure agreements and other countries (including Canada) given physical, watermarked copies designed to guard against leaks. Despite the efforts to combat leaks, information on the Internet chapter has begun to emerge (just as it did with the other elements of the treaty). Sources say that the draft text, modeled on the U.S.-South Korea free trade agreement, focuses on the following five issues:
1. Baseline obligations inspired by Article 41 of the TRIPS Agreement, which focuses on the enforcement of intellectual property.
2. A requirement to establish third-party liability for copyright infringement.
3. Restrictions on limitations to third-party liability (i.e., limited safe harbour rules for ISPs). For example, in order for ISPs to qualify for a safe harbour, they would be required to establish policies to deter unauthorized storage and transmission of IP-infringing content. Provisions are modeled on the U.S.-Korea Free Trade Agreement, namely Article 18.10.30. They include policies to terminate subscribers in appropriate circumstances. Notice-and-takedown, which is not currently the law in Canada nor a requirement under WIPO, would also be an ACTA requirement.
4. Anti-circumvention legislation that establishes a WIPO+ model by adopting both the WIPO Internet Treaties and the language currently found in U.S. free trade agreements that goes beyond the WIPO treaty requirements. For example, the U.S.-South Korea free trade agreement specifies the permitted exceptions to anti-circumvention rules. These follow the DMCA model (reverse engineering, computer testing, privacy, etc.) and do not include a fair use/fair dealing exception. Moreover, the free trade agreement clauses also include a requirement to ban the distribution of circumvention devices. The current draft does not include any obligation to ensure interoperability of DRM.
5. Rights management provisions, also modeled on U.S. free trade treaty language.
If accurate (and these provisions are consistent with the U.S. approach for the past few years in bilateral trade negotiations), the combined effect of these provisions would be to dramatically reshape Canadian copyright law and to eliminate sovereign choice on domestic copyright policy. Canada has just concluded a national copyright consultation in which these issues were at the heart of thousands of submissions. If Canada agrees to these ACTA terms, flexibility in WIPO implementation (as envisioned by the treaty) would be lost and Canada would be forced to implement a host of new reforms (this is precisely what U.S. lobbyists have said they would like to see happen). In other words, the very notion of a made-in-Canada approach to copyright would be gone. The Internet chapter raises two additional issues. On the international front, it provides firm confirmation that the treaty is not a counterfeiting treaty but a copyright treaty. These provisions involve copyright policy, as no reasonable definition of counterfeiting would include these kinds of provisions. On the domestic front, it raises serious questions about the Canadian negotiation mandate. Negotiators from Foreign Affairs are typically constrained by either domestic law, a bill before the House of Commons, or the negotiation mandate letter.
Since these provisions dramatically exceed current Canadian law and are not found in any bill presently before the House, Canadians should be asking whether the negotiation mandate letter has envisioned such dramatic changes to domestic copyright law. When combined with the other chapters that include statutory damages, search and seizure powers for border guards, anti-camcording rules, and mandatory disclosure of personal information requirements, it is clear that there is no bigger IP issue today than the Anti-Counterfeiting Trade Agreement being negotiated behind closed doors this week in Korea. Update: Further coverage from IDG and Numerama. From rforno at infowarrior.org Wed Nov 4 00:52:06 2009 From: rforno at infowarrior.org (Richard Forno) Date: Tue, 3 Nov 2009 19:52:06 -0500 Subject: [Infowarrior] - MPAA: If We Don't Stop Piracy, The Internet Will Die Message-ID: MPAA Tells The FCC: If We Don't Stop Piracy, The Internet Will Die from the moral-panic dept Never let it be said that the folks in Hollywood aren't good at coming up with a totally fictional horror story. I just have a problem when they use it not to entertain, but to create a moral panic to push the government to pass laws in their favor. In discussing the recent 60 Minutes piece that was really nothing more than an MPAA scare tactic, some suggested that it was really just a first step in the process of getting the government to make sure net neutrality rules had a special Hollywood exception. So, it's interesting to note that just before that 60 Minutes episode aired (and just before Halloween), the MPAA sent a "scary" filing to the FCC warning that the US would always be a broadband laggard if it didn't stomp out piracy. The full filing (warning: PDF) claims, repeatedly, that piracy is sucking up all our bandwidth and that getting rid of it would somehow make it cheaper to install faster internet connections. (much more) http://techdirt.com/articles/20091103/0434486780.shtml From rforno at infowarrior.org Wed Nov 4 18:50:57 2009 From: rforno at infowarrior.org (Richard Forno) Date: Wed, 4 Nov 2009 13:50:57 -0500 Subject: [Infowarrior] - NY files antitrust suit against Intel Message-ID: <1AB80587-83BE-46B0-8DB8-55991FE1FEAE@infowarrior.org> November 5, 2009 Cuomo Files Federal Antitrust Lawsuit Against Intel By ASHLEE VANCE http://www.nytimes.com/2009/11/05/technology/companies/05chip.html?_r=1&ref=technology&pagewanted=print Following the lead of foreign regulators, New York's attorney general, Andrew M. Cuomo, filed a federal antitrust lawsuit Wednesday against Intel, the world's largest chip maker. The lawsuit charges that Intel violated state and federal laws by abusing its dominant position in the chip market to keep its main rival, Advanced Micro Devices, at bay. Intel has faced similar lawsuits in Asia and Europe, and in May the European Commission fined the company a record $1.45 billion for antitrust violations. These cases have largely revolved around deals Intel had struck with computer makers and retailers that, regulators said, pressured them into picking the company's microprocessors - which serve as the central chip inside personal computers and servers - instead of competing products from A.M.D. "Rather than compete fairly, Intel used bribery and coercion to maintain a stranglehold on the market," Mr. Cuomo said in a statement. "Intel's actions not only unfairly restricted potential competitors, but also hurt average consumers who were robbed of better products and lower prices."
Intel has denied the charges and has filed an appeal against the European Commission's ruling. The New York attorney general's suit is the first formal antitrust action against Intel by any government agency in the United States in more than a decade. The Federal Trade Commission has been investigating Intel since 2008, but has not begun formal proceedings against the company. Intel, based in Santa Clara, Calif., also faces a four-year-old antitrust lawsuit filed by A.M.D. in Delaware. That suit is scheduled to go to trial in late March. An Intel spokesman, Chuck Mulloy, said the company will contest the New York suit. "Neither consumers, who have consistently benefitted from lower prices and increased innovation, nor justice are being served by the decision to file a case now," he said. "Intel will defend itself." Intel shares were up about 1 percent, to $18.68, in midday trading Wednesday. The New York move increases the chances that the F.T.C. will take action against Intel, according to a person who was familiar with the state's investigation but was not authorized to discuss it. Mr. Cuomo's staff, this person said, regularly communicates and cooperates with the commission's staff. "These are separate investigations, but it would be very surprising for New York State to go off on its own without being fairly confident the F.T.C. would pursue Intel as well," the person said. At a news conference announcing the lawsuit, Mr. Cuomo said, "We have been cooperating with the F.T.C. We have a good, productive dialogue on this matter." A spokeswoman for the trade commission, Claudia Bourne Farrell, would not comment beyond saying that the commission's investigation was continuing. Intel and Microsoft have long been the personal computer industry's two most dominant players. About 90 percent of all PCs rely on Microsoft's Windows software, while Intel's chips go into about 80 percent of the PCs and computer servers sold every year. Both companies have caught the attention of antitrust regulators in the past, although Microsoft's legal battles have been far more confrontational and enduring. In 1993, the F.T.C. dropped a two-year investigation into Intel's business practices, saying it lacked evidence to back a lawsuit, and the agency ended a second investigation in 2000. In 1995, Intel settled a number of cases with A.M.D., including one involving antitrust charges. "Intel has been more willing to negotiate with the government and less bellicose than Microsoft," said Harry First, a professor at the New York University School of Law and the former chief of the New York attorney general's antitrust bureau. "Frankly, I think they've been smarter litigants and have escaped more than Microsoft." Over the past couple of years, however, regulators have dug in and secured victories against Intel. In 2005, Japanese regulators determined that Intel had violated antitrust laws in that country, and South Korean antitrust authorities followed suit in a similar case. But the verdict in May by the European Commission was by far the biggest blow against the company. During the press conference, New York prosecutors said Intel abused its monopoly power "as a central business strategy" rather than just in isolated incidents. In addition, the attorney general's office claimed to have evidence linking Intel's top executives to these abusive actions. "We intend to stop them," Mr. Cuomo said. In the 80-page lawsuit, Mr.
Cuomo appeared to be piggybacking, in part, on extensive e-mail evidence gathered during Europe's investigation into Intel's business practices. In the statement, the attorney general pointed to e-mail messages that detail Intel's interactions with companies like Hewlett-Packard, Dell and I.B.M. that he said support the case against Intel. For example, Intel is accused of paying I.B.M. $130 million to hold back on selling a server based on A.M.D.'s Opteron chip, while also threatening to curtail joint projects if I.B.M. marketed A.M.D.'s products. "The question is, can we afford to accept the wrath of Intel?" an I.B.M. executive wrote in a 2005 e-mail message, according to Mr. Cuomo's office. A similar e-mail message from an unnamed Hewlett-Packard executive talks about Intel planning to "punish" the company for selling products based on A.M.D.'s chips. Most of the past antitrust cases in Europe and Asia have centered on Intel's actions in the PC market. But Mr. Cuomo's case seems to place substantial emphasis on the company's server-chip business as well. In 2003, A.M.D. released a chip called Opteron that thrust the company into the mainstream server market for the first time. For about four years, the product was hailed by analysts and hardware makers as superior to Intel's Xeon chip. With Opteron on its side, A.M.D. for the first time managed to attract Hewlett-Packard, I.B.M., Dell and Sun Microsystems as server-chip customers. A.M.D. executives, however, have long contended that Intel thwarted the adoption of Opteron through abusive practices and blunted A.M.D.'s ability to capitalize on the product. They have also argued that Intel has blocked A.M.D.'s attempts to place its chips in computers purchased by businesses. Both the server and business PC markets tend to generate higher profits than the consumer computer market. I.B.M., which competes with Intel in the server chip market, and A.M.D. have invested billions of dollars in chip plants in New York. Along with e-mail messages from hardware makers, Mr. Cuomo has presented messages exchanged between Intel executives in which they express antitrust concerns. "Let's talk more on the phone as it's so difficult for me to write or explain without considering antitrust issue," one Intel executive was said to have written in an April 2006 message. In interviews, a number of antitrust experts found similar e-mail exchanges presented by the European Commission unconvincing and said the regulators' evidence was thin on details. "I look at it, and I don't have a lot of confidence in what the E.C. alleges," said John E. Lopatka, a professor and antitrust expert at Penn State's Dickinson School of Law. "It does smack of a brief more than an objective and honest recitation of the results of investigation." The issue of so-called loyalty discounts provided by a supplier to its customers remains a murky area in United States law. "European law is much harsher on loyalty discounts," Mr. First said. "There is still a lot of debate here among commentators and the courts on this issue, and the Supreme Court has not spoken on how we should treat this sort of loyalty pricing." Mr. Cuomo's office said it had examined millions of pages of e-mail messages and documents during its 23-month investigation into Intel and taken testimony from dozens of witnesses. Mr. Cuomo contends that consumers would have benefited from lower prices and better products had there been an even playing field in the chip market.
His lawsuit seeks to stop Intel from continuing what he called its anticompetitive practices and to recover damages and penalties. Steve Lohr contributed reporting. From rforno at infowarrior.org Thu Nov 5 03:21:39 2009 From: rforno at infowarrior.org (Richard Forno) Date: Wed, 4 Nov 2009 22:21:39 -0500 Subject: [Infowarrior] - Rise of the stealthy traffic camera fuels drivers' disgust Message-ID: Shudder speed Rise of the stealthy traffic camera fuels drivers' disgust By Neely Tucker Washington Post Staff Writer Thursday, November 5, 2009 You rip open the envelope and there it is: Another darned photo-enforcement traffic ticket. The photograph, the zoom-in on the tag, it's you, baby. Your car. Two weeks ago. Forty-one in a 30-mph zone. It's from your favorite municipality. You can pay $40 now or $80 later. You can also contest it, the infraction letter says, and that's a laugh. You remember seeing that the folks who went down to fight their automated tickets in Montgomery County got convicted 99.7 percent of the time. Like a Soviet election, you think, a sham, a joke, and you, the chump in the parade. There's something that doesn't smell right about these tickets, but you're not quite sure what. Is it the huge profits the government and their cohorts, the camera manufacturers, make on them? The District doubling the number of tickets it issued just two years ago, raking in $36 million last fiscal year? The fact that Redflex, one of the big manufacturers of these cameras, posted a 48 percent jump in revenue last year while the rest of the economy tanked? People get worked up. Put these cyborgs on a ballot, and the voters beat them to the pavement. Three cities Tuesday -- two in Ohio, one in Texas -- voted to rip the things down. In College Station, Tex., the camera manufacturer and its subcontractors reportedly spent $60,000 campaigning to keep them in place, more than five times the amount raised by the opposition, and lost anyway. Voters in Chillicothe, Ohio, went against the cameras at a rate of 72 percent. In Heath, Ohio, the mayor got caught removing anti-camera campaign signs from an intersection. He, and the cameras, got sent packing. < - > http://www.washingtonpost.com/wp-dyn/content/article/2009/11/04/AR2009110404747_pf.html From rforno at infowarrior.org Thu Nov 5 10:38:57 2009 From: rforno at infowarrior.org (Richard Forno) Date: Thu, 5 Nov 2009 05:38:57 -0500 Subject: [Infowarrior] - Verizon to double its cancellation fees Message-ID: Verizon to double its cancellation fees With a whole new line of smart phones coming onto the market, Verizon Wireless said that starting November 15 it is doubling to $350 the penalty fees for subscribers who leave their contracts early. James Gerace, a spokesman for Verizon Wireless, the nation's largest cell phone service, said the fees are for subscribers in one- and two-year contracts on an "advanced device" and that those fees will be pro-rated $10 a month. That means, if you've made it halfway through your two-year contract, the $350 penalty will be reduced by $120, to $230. Gerace said the company decided to raise fees because its highest-end phones and small wireless laptops are getting more expensive. The firm subsidizes the cost of its most tricked-out phones to make their prices more appealing to consumers. Verizon is set to release the Droid this Friday for $199 with a two-year contract. The phone is meant to take on Apple's iPhone, which runs exclusively on AT&T.
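(For those keeping score, the pro-rating works out as sketched below. This is a rough illustration using only the $350 fee and $10-per-month credit quoted above; the function name and its defaults are hypothetical, not Verizon's actual billing formula.)

    # Illustrative sketch of the pro-rated early termination fee described above.
    # Assumes a flat $350 fee reduced by $10 for each full month of the contract
    # already completed; actual contract terms may differ.
    def early_termination_fee(months_completed, base_fee=350, monthly_credit=10):
        remaining = base_fee - monthly_credit * months_completed
        return max(remaining, 0)

    # Halfway through a two-year contract (12 months in): $350 - $120 = $230
    print(early_termination_fee(12))   # -> 230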
The move could revive debates about early termination fees, the penalties wireless service providers charge users for leaving contracts early. Consumer groups have pushed for legislation to end such fees or at least set rules for the practice. The issue died down as major wireless providers decided to pro-rate fees based on length of contract. http://www.washingtonpost.com/wp-dyn/content/article/2009/11/04/AR2009110404720.html?hpid=sec-tech From rforno at infowarrior.org Thu Nov 5 10:50:22 2009 From: rforno at infowarrior.org (Richard Forno) Date: Thu, 5 Nov 2009 05:50:22 -0500 Subject: [Infowarrior] - Lifestyle Hackers Message-ID: <62D2F8BF-5544-4181-9894-DFE21BC80E15@infowarrior.org> Lifestyle Hackers Jim Routh and Gary McGraw examine why twenty-somethings skateboard right past security controls, and what it means for employers (i.e. you!) Jim Routh and Gary McGraw, CSO November 02, 2009 http://www.csoonline.com/article/print/506309 The insider threat, the bane of computer security and a topic of worried conversation among CSOs, is undergoing significant change. Over the years, the majority of insider threats have carried out attacks in order to line their pockets, punish their colleagues, spy for the enemy or wreak havoc from within. Today's insider threats may have something much less insidious in mind: multitasking and social networking to get their jobs done. There's a growing risk within most organizations today that is clearly an insider threat but is also clearly not caused by a disgruntled or disillusioned employee. In fact, the new insider threat is more likely to manifest itself as a gung-ho new employee or contractor. And more often than not, the new insider threat is a recently hired twenty-something. We've coined the term "lifestyle hacker" to refer to this new cadre of insider threats. The lifestyle hacker does not have malicious intent. Nevertheless, the lifestyle hacker is highly successful at skirting various corporate controls put in place to protect security-related websites and critical endpoints. The most interesting and ironic aspect of the lifestyle hacker is that he is motivated by the pursuit of productivity, often the very same motivation driving the implementation of various corporate controls (including but not limited to Web proxies, DLP solutions, firewalls, etc.). Tightly managed organizations (especially huge financial corporations) often block access to Web 2.0 capabilities in order to "promote productivity of staff." However, this very same staff often desires to utilize Web 2.0 capabilities (including social networking, external IM, Skype, Twitter, etc.) in the name of enhancing personal productivity. And never the twain shall meet! This conundrum exists as the inherent conflict between those who make the rules and those who break the rules, both of whom are driven by the exact same motivation: being more productive in the work environment. There are two fascinating and problematic aspects of this situation worth mentioning: 1. The population of lifestyle hackers is growing in size and diversity as demographics of new hires shift toward those people who grew up on the Internet. 2. Neither the corporate decision makers who make the rules nor the lifestyle hackers understand the security ramifications of emerging and evolving Web 2.0 capabilities (see McGraw's article "Twitter Security" at www.informit.com/articles/article.aspx?p=1350268).
To get a handle on the growth of the lifestyle hacking problem, consider this: One Wall Street firm we're both very familiar with estimated that 45 percent of all security incidents in the past two years were lifestyle hacks. A quick look at demographics reveals what's going on. The root of the problem is that newly minted staff members being hired today were generally born in the late '80s; their managers and rule-imposers are of the Baby Boom generation (born between 1947 and 1961). Baby Boomers were brought up with television as the dominant household technology, while the Net Generation (as Don Tapscott calls them in Growing Up Digital: The Rise of the Net Generation) was exposed to the Internet as early as they can remember (and some even earlier than that). Television is a mostly passive broadcast medium. By contrast, the Internet promotes widespread collaboration. This difference engenders significant divergence in behavior for the two generations. Baby Boomers focus on a single task when under pressure, while the Net Generation prefers multitasking. Baby Boomers don't even like listening to music while they work. Net Gen'ers listen to music (sometimes even watching music videos) while browsing a website or six, instant-messaging with whoever is around, sending text messages and pecking at a Microsoft Office file. The University of Oregon Library published a study that showed that the average Net Gen'er, by the age of 21, has been exposed to:
- 10,000 hours of video games
- 200,000 e-mails
- 20,000 hours of TV
- 10,000 hours of cell phone conversation
- Less than 5,000 hours reading books
Some demographers bifurcate the Net Generation into Generation X and Y, but for the purposes of understanding the lifestyle hacker, Net Gen says it all. As Internet-facing technology became ubiquitous and leaped from the home to the mobile device, the Net Generation adapted by incorporating new technology into its very social fabric. The Net Generation prefers SMS texting and using instant messaging in many social situations. (Organizing a particular time and place to meet is rather silly if the people doing the meeting all have cell phones and a vague plan.) Utilizing a texting system as an essential productivity tool in a professional environment is a natural extension of normal Net Gen social behavior. The same can be said for social networks such as Facebook, which offer excellent tools for collaborating on complex problem solving and building effective relationships. Unfortunately, many Baby Boomers have never used Web 2.0 tools at work. Such tools simply did not exist when they entered the work force. As a result, they often view such tools as distractions from doing "real" work. One high-tech firm did a study on the primary reason for undergraduate offer rejections by prospective new hires and discovered that the number-one reason for rejection was that access to Facebook was blocked. The firm now offers access to Facebook. Along the same lines, but without a solution to the problem, FS-ISAC survey results from April 2009 indicated that over 90 percent of financial service firms block access to social networking sites. The number-one reason for blocking access is a concern over productivity, not security. Ninety-five percent of the firms responding to the survey have no plans to change policies to allow access to social networking sites. You can see the storm clouds gathering.
To restate the conundrum, leaders believe that social networking, instant messaging and using SMS constantly in the work environment will lead to lower overall productivity, so they block access. Net Gen'ers believe that Web 2.0 technologies are essential for collaboration and relationship management and that they improve productivity. Impasse. Enter the lifestyle hacker. To sidestep the impasse, a growing number of Net Gen'ers are using their technical savvy to find creative ways of bypassing controls so they can leverage Web 2.0 capabilities. Perhaps an example can make this clear. Dylan (not his real name) was an intern working in the technology department doing server administration for two years while he completed graduate school. He then applied for and was hired as an analyst working in the operational risk department. Dylan established himself as an effective contributor to the department over a period of six months. One day, the corporate security staff noticed a spike in network traffic coming from Dylan's workstation. The large volume of data transfer indicated the possibility of a security breach in which company information was being shoveled off to an outside party. The security staff initiated an investigation. They eventually approached Dylan and completed a forensic analysis of his computer. What they uncovered was that Dylan had constructed a secure tunnel by exploiting a vulnerability in the company's Web proxy, and he was connecting his workstation to his ISP at home. This allowed Dylan to watch pirated movies running on his home PC while he was streaming music from sites no longer filtered by the proxy. As it turns out, Dylan was also modifying a sensitive risk report at the same time. When Dylan's boss was told what was going on, Dylan was asked to leave the firm. His boss was disappointed, since Dylan was one of her most productive employees. Note that Dylan was not malicious and in fact did not intend to break established policies and federal laws. His actions were motivated purely by his desire to multitask, unfettered by the standard controls that all other employees had to live with. The question is, how many "Dylans" work in your organization? And what are you to do if you're the CSO trying to safeguard your firm while also enabling business growth? As usual for computer security, there are no easy answers here, just as there are no simple Web 2.0 technology controls ready for prime-time implementation. Upon reflection, we believe the most important thing to do is to educate staff about the security and brand risks associated with unfettered use of Web 2.0 capabilities while exploring ways to offer tools with collaborative capabilities with a level of control that the organization can manage effectively. This solution is likely to necessitate updating your security policies as well as communications and marketing policies governing publication of the firm's information. In addition, the firm's IT strategy should clearly define a road map for Web 2.0 implementation over time that provides for increased collaboration outside the firm. The right approach for each organization must, of course, be driven by its respective business model, since business and security risks always differ. The good news is that the problem of the lifestyle hacker provides a clear opportunity for innovative leadership by the CIO and the CSO. 
What is clear is that the technology frontier has moved well beyond the workstation to an increasing constellation of mobile devices and distributed software (some of it already in the cloud). As more processing capability emerges in PDAs, there will be no avoiding them or their distributed software as a work platform. Collaborative technology is here to stay. Solving the Net Gen productivity problem in order to avoid lifestyle hacking is thus a critical aspect of the CSO's job. Finding the right balance for your organization will require innovation, education and, most importantly, courage. We certainly can't hold back Web 2.0 in the name of security! At least not for long. Gary McGraw is chief technology officer at Cigital. Jim Routh is CISO of KPMG. © CXO Media Inc. From rforno at infowarrior.org Thu Nov 5 13:04:51 2009 From: rforno at infowarrior.org (Richard Forno) Date: Thu, 5 Nov 2009 08:04:51 -0500 Subject: [Infowarrior] - Copyright Treaty Is Policy Laundering at Its Finest Message-ID: Copyright Treaty Is Policy Laundering at Its Finest By David Kravets, November 4, 2009, 7:59 pm http://www.wired.com/threatlevel/2009/11/policy-laundering/ The blogosphere is abuzz over an apparently leaked document showing the United States trying to push its controversial DMCA-style notice-and-takedown process on the world. But since Threat Level already lives in the land of the DMCA, we're more bothered by the fact that the U.S. proposal goes far beyond that 1998 law, and would require Congress to alter the DMCA in a manner even more hostile to consumers. At issue is the internet section of the Anti-Counterfeiting Trade Agreement being developed under a cloak of secrecy by dozens of countries. The leaked document is a three-page European Commission memo written by an unnamed EU official, which purports to summarize a private briefing given in September by U.S. trade officials. The language in the Sept. 30 memo shows the United States wants ISPs around the world to punish suspected, repeat downloaders with a system of "graduated response" - code for a three-strikes policy that results in the customer eventually being disconnected from the internet, with the ISP alone deciding what constitutes infringement and fair use. While the proposal specifically says that three strikes wouldn't be mandated, it might as well be. That's because companies that refused to implement the policy would be ejected from the "safe harbor" that otherwise protects them from copyright infringement lawsuits over the actions of their customers. Currently, the DMCA grants immunity or a "safe harbor" to internet companies that promptly remove allegedly infringing content at the request of the copyright holder. Only if they fail to do so can they be held liable in court, and face up to $150,000 in damages per infringement. Under the U.S. proposal described in the memo, removing infringing content would no longer be enough to qualify for safe harbor. Companies would have to actively work to combat the flow of unauthorized copyrighted material through their pipes, and specifically implement the "graduated response" program. Here is the key paragraph: "On the limitations from 3rd party liability: to benefit from safe-harbours, ISPs need to put in place policies to deter unauthorized storage and transmission of IP infringing content (ex: clauses in customers' contracts allowing, inter alia, a graduated response). From what we understood, the US will not propose that authorities need to create such systems.
Instead, they require some self-regulation by ISPs." Threat Level obtained the document on condition it not be posted, and we haven't independently verified its authenticity, or that it accurately reflects the positions of the U.S. trade representatives. The document indicates the U.S. refused to turn over anything in writing itself, and briefed EU representatives on the plan orally in the hope of avoiding leaks. The Obama administration has been obsessively secretive about the draft ACTA treaty - even, at one point, claiming national security could be jeopardized if the proposed treaty's working documents were disclosed to the public. Now, it seems, we know what the administration is hiding. Obama hasn't asked Congress to implement a three-strikes policy, which could anger consumers and watchdog groups. But if the administration gets three strikes written into ACTA, and the United States signs and ratifies the treaty, Congress would be obliged to change the DMCA to comply with it, while the administration throws its hands in the air and says, "It wasn't our idea! It's that damn treaty!" That practice is common enough to have a name: policy laundering. Language in the leaked text throws open the door to ISP filtering for unauthorized content, though there's no way for filters to know whether the material constitutes fair use. That plan is similar to a proposal by the Motion Picture Association of America, which wants ISPs to filter for unauthorized motion pictures. The three-strikes language would be gold to companies like MediaSentry, which browse peer-to-peer networks for infringing content, and identify a user's IP address and ISP. MediaSentry's work was crucial in the RIAA's 6-year-long litigation campaign that amounted to about 30,000 copyright lawsuits against individual file sharers using Kazaa, Limewire and other services. Until today, the most alarming thing in the proposed ACTA treaty has been the secrecy surrounding it. But now the threat level is higher. It seems the executive branch would rather negotiate with other nations, instead of its own elected officials, about the future of a free and open internet. From rforno at infowarrior.org Thu Nov 5 13:11:36 2009 From: rforno at infowarrior.org (Richard Forno) Date: Thu, 5 Nov 2009 08:11:36 -0500 Subject: [Infowarrior] - Google launches privacy Dashboard service Message-ID: Google launches privacy Dashboard service What does Google know about me? By John Leyden Posted in Applications, 5th November 2009 12:30 GMT http://www.theregister.co.uk/2009/11/05/google_privacy_dashboard/ Google has launched a Dashboard service that's designed to show how much the search engine giant knows about its users' online activities. The service provides a summary of data associated with a specified Google account. Users gain the ability to view and manage data, which ranges from search engine queries and emails sent through Gmail through to videos viewed on YouTube, and much else besides. Users will usually have already consented to allow Google to keep tabs on their activities online, but the search engine giant's tentacles reach so far that it's tough to know how much information it holds on each of us. Google Dashboard - which is designed to address privacy concerns over the search engine giant's propensity to catalogue data - is accessed by logging into a Google account. Surfers get a list of the number of items held on particular services (Calendar, Blogger, Shopper, Chat, Gmail, etc.)
linking to the data repositories of these services for more detailed information. Although the Dashboard service goes some way towards answering the question of what Google knows about our lives online, it doesn't really provide many clues about how Google uses this information. In addition, one thing not included in the run-down is cookie-based data Google collects via its huge online ad-serving business. Even so, Google Dashboard holds a lot of potentially sensitive data, providing yet another good reason for users to use hard-to-guess (strong) passwords on their Gmail or other Google accounts. From rforno at infowarrior.org Fri Nov 6 12:22:50 2009 From: rforno at infowarrior.org (Richard Forno) Date: Fri, 6 Nov 2009 07:22:50 -0500 Subject: [Infowarrior] - Kellogg's Marketing to Fear Message-ID: (the photo of the box is amusing --- it screams 'IMMUNITY', too!! -rf) Marketing to Fear: Cocoa Krispies Boost Your Kids' Immunity? In the middle of the H1N1 influenza epidemic, Kellogg is marketing Cocoa Krispies, Froot Loops and other sugary cereals with claims on the box that the cereal "now helps support your child's immunity." The word "immunity" is printed on the box in a huge font, almost as big as the name of the cereal. San Francisco City Attorney Dennis Herrera wrote a letter to Kellogg's CEO demanding evidence that the cereals really do help support children's immunity. Herrera points out that the company is playing to public fears about the H1N1 flu epidemic, saying "The immunity claims may also mislead parents into believing that serving this sugary cereal will actually boost their child's immunity ..." Kellogg argues that critics of its immunity campaign are wrong, pointing out that the cereals contain increased amounts of vitamins A, C and E, which play a role in the immune system, and saying that "Kellogg developed this product in response to consumers expressing a need for more positive nutrition." Marion Nestle, a nutrition professor at New York University, author of Food Politics and other books about the food industry and nutrition, says, "The idea that eating Cocoa Krispies will keep a kid from getting swine flu, or from catching a cold, doesn't make sense. Yes, these nutrients are involved in immunity, but I can't think of a nutrient that isn't involved in the immune system." Professor Nestle wrote to the U.S. Food and Drug Administration protesting Kellogg's claims in August, but hasn't heard back, and FDA officials are not permitted to discuss specific cases under consideration. http://www.prwatch.org/node/8672
This would put the mighty machine out of action for a few days while it was restarted, but there would be no repeat of the catastrophic damage suffered last September. On that occasion, an electrical connection in the circuit itself failed violently, causing a massive liquid-helium leak and knock-on damage along hundreds of metres of magnets. Reg readers alerted us yesterday to the temperature rises in the LHC's Sector 81, which began in the early hours of Tuesday morning: most of the collider's operational data can be viewed on the web for all to see. Initial enquiries to CERN press staff led to assurances that the rises were the result of routine tests. However Dr Mike Lamont, who works at the CERN control centre and describes himself as "LHC Machine Coordinator and General Dogsbody" later confirmed that there had indeed been a problem. Lamont, briefing reporters at the control room yesterday, told the Reg that machinery on the surface - the LHC accelerator circuit itself is buried deep beneath the Franco-Swiss border outside Geneva - had suffered a fault caused by "a bit of baguette on the busbars", thought perhaps to have been dropped by a bird. As a result, temperatures in part of the LHC's circuit climbed to almost 8 Kelvin - significantly higher than the normal operating temperature of 1.9, and close to the temperature at which the LHC's niobium-titanium magnets are likely to "quench", or cease superconducting and become ordinary "warm" magnets - by no means up to the task imposed on them. Dr Tadeusz Kurtyka, a CERN engineer, told the Reg that this can happen unpredictably at temperatures above 9.6 K. An uncontrolled quench would be bad news with the LHC in operation, possibly leading to serious damage of the sort which crippled the machine last September. At the moment there are no beams of hadrons barrelling around the huge magnetic doughnut at close to light speed, but when there are, each of the two beams has as much energy in it as an aircraft carrier underway (http://lhc-machine-outreach.web.cern.ch/lhc-machine-outreach/beam.htm ). If the LHC suddenly lost its ability to keep the beam circling around its vacuum pipe, all that energy would have to go somewhere - with results on the same scale as being rammed by an aircraft carrier. About to get hit by an aircraft carrier? You need a Dump But there's no cause for concern, according to Lamont. The LHC's monitoring and safety systems have always been capable of coping with an incident of this sort, and have been hugely upgraded since last September. Had this week's feathered baguette-packing saboteur struck in coming months, with a brace of beams roaring round the LHC's magnetic motorway, the climbing temperatures would have been noted and the beams diverted - rather in the fashion that a runaway truck or train can be - into "dump caverns" lying a little off the main track of the LHC. In these large artificial caves, each beam would power into a "dump core (http://lhc-machine-outreach.web.cern.ch/lhc-machine-outreach/components/beam-dump.htm )", a massive 7m-long graphite block encased in steel, water cooled and then further wrapped in 750 tonnes of concrete and iron shielding. The dump core would become extremely hot and quite radioactive, but it has massive shielding and scores of metres of solid granite lie between the cavern and the surface. Nobody up top, except the control room staff, would even notice. 
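(Rough numbers behind that aircraft-carrier comparison, for the curious. The sketch below uses CERN's nominal design figures - 2,808 bunches of roughly 1.15e11 protons per beam at 7 TeV - and an assumed 100,000-tonne carrier; it is an illustrative back-of-the-envelope estimate, not anything taken from the article or CERN's page.)

    # Back-of-the-envelope check of the "energy of an aircraft carrier underway" claim.
    # Nominal LHC design values are used; the carrier mass is an assumed round number.
    EV_TO_JOULES = 1.602e-19
    protons_per_beam = 2808 * 1.15e11                  # bunches x protons per bunch
    energy_per_proton_joules = 7e12 * EV_TO_JOULES     # 7 TeV per proton, in joules
    beam_energy_joules = protons_per_beam * energy_per_proton_joules
    print("Stored energy per beam: %.0f MJ" % (beam_energy_joules / 1e6))   # ~360 MJ

    carrier_mass_kg = 1.0e8                            # assumed 100,000-tonne carrier
    speed_m_s = (2 * beam_energy_joules / carrier_mass_kg) ** 0.5
    print("Carrier speed with the same kinetic energy: %.1f m/s (~%.0f knots)"
          % (speed_m_s, speed_m_s / 0.514))            # ~2.7 m/s, about 5 knots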
This whole process would be over in a trice, well before the birdy bread-bomber's shenanigans could warm the main track up to anywhere near quench temperature. Should the magnets then quench, no carrier-wreck catastrophe would result. According to Lamont, provided the underlying fault didn't take too long to rectify, the LHC could be up and beaming again "within, say, three days" following such an incident. We asked if more such incidents would occur, once the Collider is up and running for real from later this month. "It's inevitable," the particle-wrangling doc told the Reg. "This thing is so complicated and so big, it's bound to have problems sometimes." Meanwhile, it would seem that this particular snag has been solved, as the Sector 81 temperatures are now headed back down (http://hcc.web.cern.ch/hcc/cryo_main/cryo_main.php?region=Sector81 ) to their proper 1.9 K. From rforno at infowarrior.org Fri Nov 6 14:14:12 2009 From: rforno at infowarrior.org (Richard Forno) Date: Fri, 6 Nov 2009 09:14:12 -0500 Subject: [Infowarrior] - PIP: Social Isolation and New Technology Message-ID: <639997AB-49D2-4827-BD77-E964AB3AF74D@infowarrior.org> Press Release: Social Isolation and New Technology Nov 4, 2009 http://www.pewinternet.org/Press-Releases/2009/Social-Isolation-and-New-Technology.aspx# (Washington) People who use modern information and communication technologies have larger and more diverse social networks, according to new national survey findings that for the first time explore how people use the internet and mobile phones to interact with key family and friends. These new findings challenge fears that use of new technologies has contributed to a long-term increase in social isolation in the United States. The new findings from the Pew Internet & American Life Project show that, on average, the size of people's discussion networks - those with whom people discuss important matters - is 12% larger amongst mobile phone users, 9% larger for those who share photos online, and 9% bigger for those who use instant messaging. The diversity of people's core networks - their closest and most significant confidants - tends to be 25% larger for mobile phone users, 15% larger for basic internet users, and even larger for frequent internet users, those who use instant messaging, and those who share digital photos online. The survey was conducted by researchers from the University of Pennsylvania's Annenberg School for Communication, led by Keith N. Hampton, Ph.D., Assistant Professor of Communication and the Pew Internet Project. The survey also probed larger issues related to the extent of social isolation in America: At one level, the results challenge previous work. The Pew Internet survey found that Americans are not as isolated as has been previously reported and social isolation has hardly changed since 1985. Only 6% of the adult population has no one with whom they can discuss important matters or who they consider to be "especially significant" in their life. At another level, the findings confirm that Americans' discussion networks have shrunk by about a third since 1985 and have become less diverse because they contain fewer non-family members. However, contrary to the widespread speculation that the new technology is tied to shrinking social networks and declining network diversity, the Pew Internet study finds that ownership of a mobile phone and participation in a variety of internet activities are associated with larger and more diverse core discussion networks.
"There is a tendency by critics to blame technology first when social change occurs," argued Prof. Keith Hampton, the lead author of the Pew Internet report, Social Isolation and New Technology. "This is the first research that actually explores the connection between technology use and social isolation and we find the opposite. It turns out that those who use the internet and mobile phones have notable social advantages. People use the technology to stay in touch and share information in ways that keep them socially active and connected to their communities." Here are some of the other key findings in the Pew Internet report:
- Some have worried that internet use limits people's participation in their local communities, but the Pew Internet report finds that most internet activities have little or a positive relationship to local activity. For instance, internet users are as likely as anyone else to visit with their neighbors in person. Cell phone users, those who use the internet frequently at work, and bloggers are more likely to belong to a local voluntary association, such as a youth group or a charitable organization. However, we find some evidence that use of social networking services (e.g., Facebook, MySpace, LinkedIn) substitutes for some neighborhood involvement.
- Challenging the assumption that internet use encourages social contact across vast distances, this study shows that many internet technologies are used as much for local contact as they are for distant communication.
- Internet use does not pull people away from public places. Rather, use is associated with frequent visits to places such as parks, cafes, and restaurants, the kinds of locales where research shows that people are likely to encounter a wider array of people and diverse points of view. Indeed, internet access has become a common component of people's experiences within many public spaces. For instance, of those Americans who have been in a library within the past month, 38% logged on to the internet while they were there; 18% have done so in a café or coffee shop.
- People's mobile phone use outpaces their use of landline phones as a primary method of staying in touch with their closest family and friends, but face-to-face contact still trumps all other methods. On average in a typical year, people have in-person contact with their core network ties on about 210 days; they have mobile-phone contact on 195 days of the year; landline phone contact on 125 days; text-messaging contact on the mobile phone 125 days; email contact 72 days; instant messaging contact 55 days; contact via social networking websites 39 days; and contact via letters or cards on 8 days.
- Social media activities are associated with several beneficial social activities, including having discussion networks that are more likely to contain people from different backgrounds. For instance, frequent internet users and those who maintain a blog are much more likely to confide in someone who is of another race. Those who share photos online are more likely to report that they discuss important matters with someone who is a member of another political party.
- While participation in traditional social settings, like neighborhoods, voluntary organizations, and public spaces, remains the strongest predictor for the overall diversity of people's social networks, internet use, and specifically use of social networking services like Facebook, is also associated with knowing more people from a wider variety of backgrounds.
"All the evidence points in one direction," said Prof. Hampton. "People's social worlds are enhanced by new communication technologies. It is a mistake to believe that internet use and mobile phones plunge people into a spiral of isolation." From rforno at infowarrior.org Fri Nov 6 19:11:49 2009 From: rforno at infowarrior.org (Richard Forno) Date: Fri, 6 Nov 2009 14:11:49 -0500 Subject: [Infowarrior] - Fort Hood: A First Test for Twitter Lists Message-ID: <1D786D9C-CC60-40B1-B3F8-7965DB2545FF@infowarrior.org> Behind the News, The News Frontier - November 06, 2009 11:14 AM Fort Hood: A First Test for Twitter Lists In the aftermath of violence, lists suggest the benefits of collaboration By Megan Garber http://www.cjr.org/the_news_frontier/fort_hood_a_first_test_for_twi.php Journalism and curation: it's becoming increasingly difficult to determine where the one ends and the other begins. The chicken/egg relationship between the two solidified into conventional wisdom during the aftermath of the Iranian election this summer, when journalists, mostly barred from shoe-leather reporting and other, more traditional methods of newsgathering, were forced to play the role of social-media editors. In the dizzying tumult of reporting-by-proxy, mainstream media discovered what Web-native journalism has always taken for granted: that journalism tends to become richer, more compelling, and generally better when it is the result of collaboration. We saw this again yesterday, after an Army major named Nidal Malik Hasan opened fire at Fort Hood, Texas, killing thirteen people and injuring thirty. Only, this time, added to the real-time coverage of the shootings was a new mechanism for breaking-news updates: Twitter lists. In the immediate aftermath of the shootings, news outlets from The New York Times to The Huffington Post to The Today Show created lists that aggregated the Twitter feeds of, among others, national breaking-news sources (CNN, the AP), official sources (the U.S. Army, the Red Cross, the office of Texas governor Rick Perry), local news organizations, and local individuals. The lists, which offer a running stream of information, updates, and commentary from the aggregated feeds, represent a vast improvement over the previous means of following breaking news in real time. In place of free-for-all hashtags - which, valuable as they are in creating an unfiltered channel for communication, are often cluttered with ephemera, re-tweets, and other noise - they give us editorial order. And in place of dubious sources - users who may or may not be who they say they are, and who may or may not be worthy of our trust - the lists instead return to one of the foundational aspects of traditional newsgathering: reliable sources. Lists locate authority in a Twitter feed's identity - in, as it were, its brand: while authority in hashtagged coverage derives, largely but not entirely, from the twin factors of volume and noise (who tweets the most, who tweets the loudest), authority in list-ed coverage derives from a tweeter's prior record, making lists trustworthy in a way that hashtagged coverage simply is not. Lists, of course, whose architecture mimics Twitter's "stream-of-news," real-time information flow, still feature Twitter's trademark cognitive deluge: "I'm finding Twitter's small factoids on Ft. Hood to be torturous," the Minnesota-based blogger Bob Collins tweeted last night. "Think I'll just pick up the Times tomorrow and take mine with context." But they also, New Media Landscape-wise, increase reporters'
ability to provide that very context. As the Times reporter Michael Luo put it last night: "Seeing various Ft Hood locals who were tweeting abt shooting being contacted by reporters. Digital form of pack journalism." But they also represent a new - or, more precisely, a newly facilitated - way for news organizations to collaborate: they allow news outlets essentially to co-opt others' reporting. But in a good way - to the benefit of the news organizations in question and, of course, their audiences. So The New York Times gets to provide its users real-time information from Waco's NewsChannel 25 - and NewsChannel 25, in turn, gets to have its reporting amplified to the readers of the paper of record. Win and win. (And, taking the audience into account: win again.) And while, sure, in the immediate sense, lists narrow the "democratization of information" - those who followed Twitter lists got a less freewheelingly diverse treatment of the shootings than did those who followed the #FortHood hashtag - still, in the longer-term sense, the lists actually expand the democratizing aspects of micromessaging. They empower Twitter as a media platform, helping users to find, per the increasingly apt metaphor, the signal in the noise. Yes, there was overlap and redundancy in yesterday's coverage - the "Fort Hood" lists all generally contained the same local news outlets, the same official sources, etc. - but, then, that's the case whenever different media outlets cover the same events. And while cable outlets were filling their air with equal parts information and speculation, Twitter lists were tempering conjecture with the wisdom-of-crowds brand of mediation that is built into their multi-channel approach. It's what we saw with the Iran coverage - cross-platform, cross-outlet curation - only more streamlined. And more institutionalized. And, in some ways, more meaningful. As Mashable's Adam Ostrow put it: "What's really interesting here from a media perspective is that we're seeing news organizations that compete vigorously for breaking news turning to real-time curation to help tell the story. And the result is certainly a win for media consumers - rather than searching far and wide for local news from Fort Hood, it's all being aggregated for us by news organizations we trust. It certainly might be a glimpse of what's to come from the Twitter Lists feature." I agree. But I'd also take it a step farther. What we saw play out yesterday wasn't just about the Twitter lists as a general feature; it was about Twitter lists as a general future. Which is to say: it was about what Twitter lists suggest for all media. One of the core elements that has defined journalism, essentially since the first newspapers sprang up in the 17th century, has been competition - and with it, implicitly, the assumption that news is a commodity that news outlets are in a race to capture, create, package, sell. The Web has been utterly disruptive to that dynamic - not merely because it has changed the atomic infrastructure of news and its consumption, but also because it has, in more logistical terms, merged media platforms that used to be physically separate. "The Internet" is, in its way, a one-stop news shop; and through, in particular, the deceptively simple innovation that is the hyperlink, news outlets are increasingly defined by connection rather than separation. (Thus, the "Web.") And that, in turn - fundamentally, if not completely - topples the competitive underpinnings of newsgathering as a profession. Do what you do best, and link to the rest. 
Twitter lists suggest the institutionalization of this connected mentality, a kind of rudimentary codification of the media's increasing openness to - and reliance upon - collaboration. They suggest the myriad benefits of cross-pollination. Even at this nascent stage, they represent, simultaneously, both the collapse and the expansion of the journalistic brand - and the recognition that, increasingly, brands are at their strongest when their owners prove willing to weaken them. From rforno at infowarrior.org Sat Nov 7 16:43:32 2009 From: rforno at infowarrior.org (Richard Forno) Date: Sat, 7 Nov 2009 11:43:32 -0500 Subject: [Infowarrior] - Cisco MARS shuts out new third-party security devices Message-ID: This story appeared on Network World at http://www.networkworld.com/news/2009/110609-cisco-mars.html Cisco MARS shuts out new third-party security devices Focus on supporting Cisco devices; some claim it's the beginning of the end for Cisco Security MARS as a multivendor offering By Ellen Messmer, Network World, 11/06/2009 Cisco has finally publicly acknowledged it won't add support for new third-party devices to its security information and event monitoring appliance, ending months of speculation about the future of its Monitoring, Analysis and Response System. Some claim it's the beginning of the end for MARS as a multi-vendor SIEM device. "MARS customers can expect non-Cisco network device data and signature updates to continue for currently supported third-party systems, but no new third-party devices will be added," Cisco declared in a statement, noting that "Cisco MARS continues to focus on supporting Cisco devices for threat identification and mitigation." MARS is used by about 4,000 customers, and Cisco is regarded as the largest SIEM vendor. Cisco had been privately briefing at least some of them on its intentions to effectively freeze third-party device support, but until now had refrained from a public statement. Since SIEM equipment is typically used to consolidate alert and event data from multiple vendor sources, the fact that MARS won't be supporting any new non-Cisco equipment suggests customers must now consider migrating from it if third-party vendor support is their chief concern. Analysts from Gartner and Enterprise Strategy Group are advocating that very thing. "Cisco deserves credit for coming clean on MARS support," said Jon Oltsik, analyst with Enterprise Strategy Group (ESG). "That said, rumors of product, customer support and field sales have been circulating for more than a year. In the future, I would hope that Cisco would be more forward and clear on its product plans and address issues like these in a timely manner. The priority here must be on improved security and not proprietary business agenda." Cisco's SIEM competitors this week have eagerly grabbed at the topic of Cisco MARS freezing third-party support because of a Gartner research memo published Oct. 29 in which analyst Mark Nicolett stated, "Cisco has quietly begun informing its customers of a decision to freeze support for most non-Cisco event sources with its [MARS]." In the research note Nicolett said, "Although Cisco has not formally announced its intention to exit the SIEM market, the Cisco sales force is encouraging its MARS customers to find an alternative for log collection and event analysis of non-Cisco event sources." In Gartner's view, the effect of all this is that MARS can no longer be viewed as a viable SIEM for anyone looking for third-party vendor support in the future. 
"Organizations that need support of non-Cisco event sources should plan to move to a viable SIEM solution," the Gartner research note states. Nicolett says he issued the research note because of what he initially picked up from discussions he happened to have with Gartner customers using MARS, not Cisco directly, though Cisco did confirm the change in strategy when asked about it. Since Cisco had been included in Gartner's influential "Magic Quadrant report on SIEM this spring, when Cisco had provided "no hint of change in strategy," Nicolett says he thought it important to immediately inform Gartner clients on what he had found out. MARS has never been particularly wide in its support for third-party security devices, Nicolett says, but now it can no longer be considered in that role for the future. Gartner isn't going to go back and revise the SIEM Magic Quadrant, but its Oct. 29 research note has to be considered its current findings when it comes to MARS as a SIEM for other than Cisco-related gear. "That note seems to have caused a lot of concern to MARS customers," says Rick Caccia, vice president of product marketing at ArcSight, a SIEM vendor that supports 300 products, including MARS, with a connector toolkit for 1,500 others. Cisco is considered the largest SIEM vendor in the market, but Gartner "threw a bomb in the market with that note," Caccia says. All contents copyright 1995-2009 Network World, Inc. http://www.networkworld.com From rforno at infowarrior.org Sun Nov 8 18:44:39 2009 From: rforno at infowarrior.org (Richard Forno) Date: Sun, 8 Nov 2009 13:44:39 -0500 Subject: [Infowarrior] - Kindle readers beware - big Amazon is watching you read 1984 Message-ID: (nothing really new, but bears repeating ... ---rf) Kindle readers beware - big Amazon is watching you read 1984 The ebook reader may have advantages over unwieldy printed tomes, but it has unexpected drawbacks John Naughton The Observer, Sunday 8 November 2009 http://www.guardian.co.uk/technology/2009/nov/08/amazon-kindle-licence-orwell CHRISTMAS IS coming and you're wondering what to put on your wish list. How about an Amazon Kindle ? the gizmo that enables you to download books, magazines and newspapers and read them on the move? According to the publicity blurb, this cool device "can hold 1,500 books and be read for up to two weeks on a single charge. Its electronic-ink display looks and reads like real paper and has no glare, even in bright sunshine". Sounds good, doesn't it? No more worrying about whether the piles of hardbacks you want to bring to Provence/Tuscany will fit within the miserly Ryanair baggage allowance. And if you ever find yourself stuck for something to read in the train, you can wirelessly order a book from the Amazon store and be reading the opening paragraph in just over a minute. And all for just under ?170. At Amazon.co.uk you find that the Kindle is now available in the UK. If you order today, you can have it in a couple of days. Hooray! Add it to your basket and head on over to checkout. You're just about to click the "Place my order" button when a small, niggling thought pops up. Wasn't there something about Amazon and George Orwell a few months ago? Some kind of a row about consumer rights? Google those words and the first result is a Guardian story headlined "Amazon Kindle users surprised by 'Big Brother' move". Ah, yes: now you remember. 
The report reads: "Owners of Amazon's Kindle electronic book reader have received a nasty surprise, after discovering that copies of books by George Orwell had been deleted from their gadgets without their knowledge. The books - downloaded from Amazon.com by American Kindle users - were remotely deleted after what the US company now says was a rights issue regarding the publisher, MobileReference.com." It seems that Amazon refunded the cost of the books, but told affected customers they could no longer read the books and that the titles were no longer available. Here's the translation: you go to Waterstone's, buy a copy of Orwell's 1984 and take it home. Two days later you get up and find that agents of Waterstone's have entered the house during the night and removed the offending volume. They've left a terse note explaining what they've done and enclosing a credit note for the cost of the book. Enraged, you phone the manager of Waterstone's, who explains that everything is in accordance with the service agreement you accepted when you bought the book. You don't have to be a lawyer to know that this would not be tolerated in the real world of physical objects. Yet it's commonplace - indeed universal - in the world of information goods. And what makes it possible is the "End User Licence Agreement" (EULA) that most of us click to accept when we first use hardware, software or online services. The Kindle EULA is a good example. Section 3, which deals with "Digital Content" (such as downloaded books), says that "Unless specifically indicated otherwise, you may not sell, rent, lease, distribute, broadcast, sublicense or otherwise assign any rights to the Digital Content or any portion of it to any third party, and you may not remove any proprietary notices or labels on the Digital Content." In other words, you are forbidden to lend or sell the book you've just "bought". In real-world terms, you can't lend your copy of 1984 to a friend or donate it to the school jumble sale. Under the subsection on "Use of Digital Content", the Kindle EULA says: "Amazon grants you the non-exclusive right to keep a permanent copy of the applicable Digital Content and to view, use, and display such Digital Content an unlimited number of times, solely on the Device or as authorized by Amazon as part of the Service and solely for your personal, non-commercial use." Translation: you can't back up your electronic books on to any other device - which means that if your Kindle packs up, or if Amazon moves on to another technical standard, you're screwed: your entire digital library has effectively been vaporised. Then you look round your house and note the number of electronic devices that no longer work. I could go on, but you get the point. Verily, technology giveth, but also it taketh away. And sometimes we don't realise until it's too late. Caveat emptor. From rforno at infowarrior.org Sun Nov 8 22:08:08 2009 From: rforno at infowarrior.org (Richard Forno) Date: Sun, 8 Nov 2009 17:08:08 -0500 Subject: [Infowarrior] - EC summary memo on ACTA leaked Message-ID: European Commission "advance warning" summary on ACTA Internet Chapter, 30 Sep 2009 From Wikileaks http://www.wikileaks.org/wiki/European_Commission_%22advance_warning%22_summary_on_ACTA_Internet_Chapter%2C_30_Sep_2009 Released November 6, 2009 Summary The paper presents what appears to be a short summary of the "Internet chapter" of the Anti-Counterfeiting Trade Agreement (ACTA). 
The summary is supposed to inform member states about the part of ACTA that will deal with internet enforcement and will be discussed at the Seoul meeting of ACTA. Notably, the "Internet chapter" is being drafted by USTR, the lead US negotiating group for ACTA, and its creation process, even within the ACTA working group itself, remains obscure. While the US has given a "detailed oral description" of the drafted internet chapter, the draft documents themselves are subject to "confidentiality clauses" between "government agencies" and a "number of private stakeholders". The presented summary is to give "advance-warning" and a "preliminary indication of the content" of the upcoming US proposal. The content of the summary reflects details that have appeared in the media recently, including rights management and especially related enforcement, as well as the US-centric development of enforcement on the Internet. The Seoul meeting has just ended, having taken place from the 4th to the 6th of November 2009. The US proposal seems to have been debated there more widely. http://www.wikileaks.org/wiki/European_Commission_%22advance_warning%22_summary_on_ACTA_Internet_Chapter%2C_30_Sep_2009 From rforno at infowarrior.org Mon Nov 9 14:09:37 2009 From: rforno at infowarrior.org (Richard Forno) Date: Mon, 9 Nov 2009 09:09:37 -0500 Subject: [Infowarrior] - Why kids don't program Message-ID: <55C75F7F-1391-4D35-88C6-4A58EC484D03@infowarrior.org> Interesting blog post about the role of computers and programming today: http://johnlawrenceaspden.blogspot.com/2009/11/behold-in-its-full-glory-program-that-i.html (I remember doing the exact same things on my old Apple IIe) The last few paragraphs are particularly disturbing and I suspect a logical consequence of the Information Revolution -- "The reason that I mention this is that 27 years ago, a twelve year old child with a new toy and a shiny orange paperback that told him how to use it could draw the graph of sin, and then play with the program to get more interesting programs. Whereas now, in 2009, a forty year old professional computer programmer sitting in front of a box at least 1000 times more powerful with a screen one hundred times the size that can display millions of colours is having to google for how to do it." From rforno at infowarrior.org Mon Nov 9 15:42:56 2009 From: rforno at infowarrior.org (Richard Forno) Date: Mon, 9 Nov 2009 10:42:56 -0500 Subject: [Infowarrior] - Illegal copies of Microsoft Cofee spills onto the web Message-ID: (c/o KM) Illegal copies of Microsoft Cofee spills onto the web Author: Karl Flinders Posted: 10:42 09 Nov 2009 http://www.computerweekly.com/Articles/2009/11/09/238474/illegal-copies-of-microsoft-cofee-spills-onto-the-web.htm Microsoft software that is designed to help the police access encrypted data is loose on the web. The software, known as Computer Online Forensic Evidence Extractor (Cofee), has been put on a file-sharing site, according to reports on the web. It is illegal for unauthorised people to use the software or download it. The software helps law enforcement agencies access details about crimes before criminals can wipe the information. "Cofee brings together a number of common digital forensics capabilities into a fast, easy-to-use, automated tool for first responders. And Cofee is being provided [free] to law enforcement around the world," said Microsoft. Police officers with basic computer skills can be taught to use the software in less than 10 minutes. 
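For readers wondering what a push-button "first responder" collection tool actually does, here is a minimal sketch of the general idea - emphatically not Cofee itself, just a few standard Windows commands whose output is saved to the responder's USB stick. The drive letter, command list and file layout are illustrative assumptions only.

# Minimal sketch of a "volatile evidence" collector of the kind described
# above. This is NOT Cofee; it simply runs a few standard Windows commands
# and saves their output with a timestamp. Assumptions: a Windows host and
# a USB stick mounted at E:\ (both hypothetical).
import subprocess
import datetime
import pathlib

COMMANDS = {
    "processes": ["tasklist", "/v"],    # running processes
    "network":   ["netstat", "-ano"],   # open connections and listening ports
    "config":    ["ipconfig", "/all"],  # network configuration
    "sessions":  ["query", "user"],     # logged-on users
}

def collect(output_dir: str = r"E:\evidence") -> None:
    out = pathlib.Path(output_dir)
    out.mkdir(parents=True, exist_ok=True)
    stamp = datetime.datetime.now().isoformat()
    for name, cmd in COMMANDS.items():
        try:
            result = subprocess.run(cmd, capture_output=True, text=True, timeout=60)
            (out / f"{name}.txt").write_text(
                f"collected {stamp}\n{result.stdout}\n{result.stderr}")
        except (OSError, subprocess.TimeoutExpired) as exc:
            (out / f"{name}.err").write_text(f"{stamp}: {exc}")

if __name__ == "__main__":
    collect()

The real tool reportedly bundles many more captures than this, but the basic shape - insert the device, run a fixed battery of commands, save the results - is the same.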
"This enables the officer to take advantage of the same common digital forensics tools used by experts to gather important volatile evidence, while doing little more than simply inserting a USB device into the computer," said Microsoft. Graham Cluley, senior technology consultant at Sophos said ?The genie is out of the bottle.? ?Microsoft and the computer crime authorities will be mightily upset that this was leaked onto the internet for anyone to download via file- sharing sites.? ?Some will be worried that unauthorised users may now have access to such a tool for their own nefarious purposes, but I would also worry that computer criminals could analyse Cofee, and write code that would identify that it is trying to run on their computer and intercept it, securely wiping incriminating data from their hard drives.? From rforno at infowarrior.org Tue Nov 10 03:19:06 2009 From: rforno at infowarrior.org (Richard Forno) Date: Mon, 9 Nov 2009 22:19:06 -0500 Subject: [Infowarrior] - Brazilian Blackout Traced to Sooty Insulators, Not Hackers Message-ID: Threat Level Privacy, Crime and Security Online Brazilian Blackout Traced to Sooty Insulators, Not Hackers ? By Marcelo Soares ? November 9, 2009 | ? 6:15 pm | ? Categories: Cybarmageddon! http://www.wired.com/threatlevel/2009/11/brazil_blackout/ SAO PAULO, Brazil ? A massive 2007 electrical blackout in Brazil has been newly blamed on computer hackers, but was actually the result of a utility company?s negligent maintenance of high voltage insulators on two transmission lines. That?s according to reports from government regulators and others who investigated the incident for more than a year. In a broadcast Sunday night, the CBS newsmagazine 60 Minutes cited unnamed sources in making the extraordinary claim that a two-day outage in the Atlantic state of Espirito Santo was triggered by hackers targeting a utility company?s control systems. The blackout affected 3 million people. Hackers also caused another, smaller blackout north of Rio de Janeiro in January 2005, the network claimed. Brazilian government officials disputed the report over the weekend, and Raphael Mandarino Jr., director of the Homeland Security Information and Communication Directorate, told the newspaper Folha de S. Paulo that he?s investigated the claims and found no evidence of hacker attacks, adding that Brazil?s electric control systems are not directly connected to the internet. The utility company involved, Furnas Centrais El?tricas, told Threat Level on Monday, it ?has no knowledge of hackers acting in Furnas? power transmission system.? A review of official reports from the utility, the country?s independent systems operator group and its energy regulatory agency turns up nothing to support the hacking claim. The earliest explanation for the blackout came from Furnas two days after the Sept. 26, 2007, incident began. The company announced that the outage was caused by deposits of dust and soot from burning fields in the Campos region of Espirito Santo. ?The concentration of these residues would have been exacerbated by the lack of rain in the region for eight months,? the company said. Brazil?s independent systems operator group later confirmed that the failure of a 345-kilovolt line ?was provoked by pollution in the chain of insulators due to deposits of soot? (.pdf). 
And the National Agency for Electric Energy, Brazil's energy regulatory agency, concluded its own investigation in January 2009 and fined Furnas $3.27 million (.pdf) for failing to maintain the high-voltage insulators on its transmission towers. Cascading electrical failures like the one in Espirito Santo often have a number of contributing factors, and it's possible that the poorly maintained insulators were only the most conspicuous element in the 2007 incident. Reports that hackers triggered at least one blackout outside the United States first got wide attention last year, based on comments made by the CIA's chief cybersecurity officer, Tom Donahue. He declined, however, to identify any country or the specifics of the alleged attacks. The blackout claim even made it into a speech given by President Obama in May. "In other countries cyberattacks have plunged entire cities into darkness," Obama said, not mentioning the cities. In an interview with Threat Level last month, former cybersecurity czar Richard Clarke named Brazil as a hack-attack blackout victim, but didn't provide verifiable details. In some versions of the story, the hackers were trying to extort money from the utility. The 60 Minutes broadcast this week - which cited six unnamed sources in the intelligence, military and cybersecurity communities - was the first to peg the story to specific blackouts. CBS did not repeat the extortion claim, reporting instead that the location and motives of the hackers are a mystery. Fallout from the story kept telephones ringing in Brazil's electricity sector Monday. "Everyone's been calling us all day about it," said a beleaguered spokesman with the National Operator of the Electric System. From rforno at infowarrior.org Tue Nov 10 13:38:01 2009 From: rforno at infowarrior.org (Richard Forno) Date: Tue, 10 Nov 2009 08:38:01 -0500 Subject: [Infowarrior] - Breaking Bing - enter DMCA! Message-ID: <28E023A9-EEFF-4D3E-B374-3681242F1586@infowarrior.org> Not sure how long it will last, but Microsoft appears to be seeking the Streisand Effect by going after the guy who first pointed this little feature out with a DMCA notice. ---rick Slashdot: http://yro.slashdot.org/story/09/11/09/2319233/Microsoft-Tries-To-Censor-Bing-Vulnerability?from=rss Original Post from Bing's Cache: http://cc.bingj.com/cache.aspx?d=4879267570255838&mkt=en-CA&setlang=en-US&w=90157511,9ea4ebc5 Breaking Bing Cashback Posted November 4th, 2009 by Samir I've never bought anything using Bing Cashback, but the balance of my account is $2080.06. Apparently, I placed two $1 orders on January 24th of this year, and spent another $104,000 on October 24th. Let's see how these transactions might have "accidentally" got credited to my account. First, we need to try to figure out how transactions get into Bing Cashback. Microsoft posted some documentation here. The explanation of how a merchant reports transactions to Bing starts on page 20. Merchants have a few options for reporting, but Bing suggests using a tracking pixel. Basically, the merchant adds a tracking pixel to their order confirmation page, which will report the transaction details back to Bing. The request for the tracking pixel looks something like this: https://ssl.search.live.com/cashback/pixel/index?jftid=0&jfoid=&jfmid=&m[0]=&p[0]=&q[0]= This implementation, while easy for the merchant, has an obvious flaw. Anyone can simulate the tracking pixel requests, and post fake transactions to Bing. 
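To make the reporting mechanism concrete, here is a rough sketch of the merchant-side step described above: the order-confirmation page assembles a pixel URL carrying the transaction details. The parameter names are the ones shown in the quoted request (whose placeholder values appear to have been stripped in transit); what each parameter means, and the helper function itself, are assumptions for illustration - and the sketch makes the flaw plain, since nothing in the request is authenticated.

# Illustrative sketch of building the Cashback tracking-pixel URL for an
# order-confirmation page. Parameter names (jftid, jfoid, jfmid, m[0], p[0],
# q[0]) come from the example request quoted in the post; their meanings
# (order ID, merchant ID, model, price, quantity) are assumed, and all the
# values below are fake.
from urllib.parse import quote_plus

PIXEL_ENDPOINT = "https://ssl.search.live.com/cashback/pixel/index"

def build_pixel_url(order_id: str, merchant_id: str, items: list) -> str:
    params = {"jftid": "0", "jfoid": order_id, "jfmid": merchant_id}
    for i, item in enumerate(items):
        params[f"m[{i}]"] = item["model"]            # model / SKU
        params[f"p[{i}]"] = f"{item['price']:.2f}"   # unit price
        params[f"q[{i}]"] = str(item["quantity"])    # quantity
    query = "&".join(f"{key}={quote_plus(str(value))}" for key, value in params.items())
    return f"{PIXEL_ENDPOINT}?{query}"

# Example: the URL the confirmation page would embed as a 1x1 image.
print(build_pixel_url("123456", "9999",
                      [{"model": "ABC-1", "price": 104000.00, "quantity": 1}]))

The confirmation page would then embed the returned URL as an image tag, and the transaction would be recorded when the pixel is fetched - which is exactly why any client that can issue the same request can report a "purchase".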
I'm not going to explain exactly how to generate the fake requests so that they actually post, but it's not complicated. Bing doesn't seem to be able to detect these fake transactions, at least not right away. The six cents I earned in January have "cleared," and I'm guessing the remaining $2080 will clear on schedule, unless there is some manual intervention. Even if Bing detects these fake transactions at some point in the future, the current implementation might have another interesting side effect. I haven't done enough work to say it with confidence, but a malicious user might be able to block another user's legitimate purchases from being reported correctly by Bing (I only tried this once, but it seemed to work). Posting a transaction to Bing requires sending them an order ID in the request. Bing performs a reasonable sanity check on the order ID, and will not post a transaction that repeats a previously reported order ID. When a store uses predictable order IDs (e.g. sequential), a malicious user can "use up" all the future order IDs, and cause legitimate transactions to be ignored. Reporting would be effectively down for days, causing a customer service nightmare for both Bing and the merchant. Based on what I've found, I wouldn't implement Bing Cashback if I were a merchant. And, as an end user and bargain hunter, it does not seem smart to rely on Bing Cashback for savings. In our next blog post, I'll demonstrate some other subtle but important reasons to avoid using Bing Cashback. From rforno at infowarrior.org Tue Nov 10 13:41:09 2009 From: rforno at infowarrior.org (Richard Forno) Date: Tue, 10 Nov 2009 08:41:09 -0500 Subject: [Infowarrior] - CORRECTION: Breaking Bing - enter DMCA! References: <28E023A9-EEFF-4D3E-B374-3681242F1586@infowarrior.org> Message-ID: <8352314E-4471-40E0-9ED8-E0F58C77AD12@infowarrior.org> My goof -- Microsoft's lawyers didn't cite DMCA, they cited everything else as reason for the removal of the Bing information from the Internet. I jumped the gun[1] -- guess I need more coffee this morning. My apologies! --rf [1] Usually such 'takedown notices' are DMCA-based. Begin forwarded message: > From: Richard Forno > Date: November 10, 2009 8:38:01 AM EST > To: Infowarrior List > Subject: [Infowarrior] - Breaking Bing - enter DMCA! > Reply-To: rforno at infowarrior.org > > Not sure how long it will last, but Microsoft appears to be seeking > the Streisand Effect by going after the guy who first pointed this > little feature out with a DMCA notice. ---rick > > Slashdot: > http://yro.slashdot.org/story/09/11/09/2319233/Microsoft-Tries-To-Censor-Bing-Vulnerability?from=rss > > Original Post from Bing's Cache: > http://cc.bingj.com/cache.aspx?d=4879267570255838&mkt=en-CA&setlang=en-US&w=90157511,9ea4ebc5 
From rforno at infowarrior.org Tue Nov 10 13:42:20 2009 From: rforno at infowarrior.org (Richard Forno) Date: Tue, 10 Nov 2009 08:42:20 -0500 Subject: [Infowarrior] - DOJ asked for news site's visitor lists Message-ID: Justice Dept. asked for news site's visitor lists by Declan McCullagh http://news.cnet.com/8301-13578_3-10394026-38.html?part=rss&subj=news&tag=2547-1_3-0-20 In a case that raises questions about online journalism and privacy rights, the U.S. Department of Justice sent a formal request to an independent news site ordering it to provide details of all reader visits on a certain day. The grand jury subpoena also required the Philadelphia-based Indymedia.us "not to disclose the existence of this request" unless authorized by the Justice Department, a gag order that presents an unusual quandary for any news organization. Kristina Clair, a 34-year-old Linux administrator living in Philadelphia who provides free server space for Indymedia.us, said she was shocked to receive the Justice Department's subpoena. (The Independent Media Center is a left-of-center amalgamation of journalists and advocates that, according to their principles of unity and mission statement, work toward "promoting social and economic justice" and "social change.") The subpoena (PDF) from U.S. 
Attorney Tim Morrison in Indianapolis demanded "all IP traffic to and from www.indymedia.us" on June 25, 2008. It instructed Clair to "include IP addresses, times, and any other identifying information," including e-mail addresses, physical addresses, registered accounts, and Indymedia readers' Social Security numbers, bank account numbers, credit card numbers, and so on. "I didn't think anything we were doing was worthy of any (federal) attention," Clair said in a telephone interview on Monday. After talking to other Indymedia volunteers, Clair ended up calling the Electronic Frontier Foundation in San Francisco, which represented her at no cost. From rforno at infowarrior.org Tue Nov 10 18:23:55 2009 From: rforno at infowarrior.org (Richard Forno) Date: Tue, 10 Nov 2009 13:23:55 -0500 Subject: [Infowarrior] - Pentagon chiefs buy net-security early warning system Message-ID: <8F2E46CB-D27D-4673-ACF9-72BEDF51F719@infowarrior.org> Pentagon chiefs buy net-security early warning system http://www.theregister.co.uk/2009/11/10/raytheon_netops_sa_deal/ By Lewis Page, posted in Science, 10th November 2009 15:52 GMT US weapons megacorp Raytheon is chuffed to announce that it and allied firms have landed a $28m deal from the Pentagon to provide an early-warning system for defence against cyber attacks on military networks. The programme in question is referred to by the Defence Information Systems Agency (DISA) as "Network Operations Situational Awareness" or NetOps SA. NetOps SA is just one of the netwar systems the agency intends to procure in coming years, under the general slogan "Arming the Cyber Warrior". (Powerpoint presentation here.) According to DISA, NetOps SA will primarily use two "classified thin client web applications" known as the Global Information Grid Customizable Operational Picture (GIGCOP) and the User-Defined Operational Picture (CND UDOP). The new NetOps SA deal with Raytheon will see these tools integrated into one system and further developed. "Our work providing end-to-end cybersecurity to the GIG networks is a valuable guide as we design a robust system to protect the DoD's sensitive information," said Raytheon netwar chief Andy Zogg. It seems that the UDOP app at least was originally developed by General Dynamics, who are now part of the NetOps SA consortium along with Raytheon, SAIC, Eye Street Software and BCMC. Overall the purpose of NetOps SA seems to be to act as a kind of early-warning screen for network-defence sysadmin cyberwarriors, letting them know at once when the hats of a different colour begin to probe and meddle with their networks. Or as Raytheon put it, NetOps SA will "enable the Department of Defense to quickly detect network intrusions and assess the overall health of its network". Raytheon seem to be having a bit of a push into cyberwar gear at the moment, having recently unveiled their SureView "government insider threat management" spy- or mole-sniffer tech to an astonished world. From rforno at infowarrior.org Wed Nov 11 15:11:43 2009 From: rforno at infowarrior.org (Richard Forno) Date: Wed, 11 Nov 2009 10:11:43 -0500 Subject: [Infowarrior] - Ohio: Single Movie Download Forces Wi-Fi Network Shutdown Message-ID: Sure, Sony can complain, but you gotta hand it to the IT department for totally overreacting, eh? 
--rf Single Movie Download Forces Wi-Fi Network Shutdown http://freakbits.com/single-movie-download-forces-wi-fi-network-shutdown-1110 November 10, 2009, by enigmax8 A free Wi-Fi service said to be used by hundreds of people has been completely disabled after a complaint was made to the operating ISP by Sony Pictures. OneCommunity is an award-winning, non-profit organization serving Northern Ohio by connecting public and non-profit institutions to their fiber-optic network, fostering economic development. One project was completed around five years ago, offering a free and open Wi-Fi network serving the area outside Coshocton's County Courthouse. Last week, the network - said to be used by hundreds of people - was completely shut down by the county's Information Technology Department, after Sony Pictures tracked one user sharing a single movie and issued a complaint. Aside from use by the public, Coshocton's Sheriff's department is said to have used the network to file incident reports, with traders using it to check the validity of customer credit cards. Of course, the MPAA was quick to offer its usual hard-luck stories for the article in the Coshocton Tribune. Taking down the entire network over a single infringement is a ridiculous knee-jerk reaction which amounts to collective punishment, but in the face of intimidating threats by a powerful company like Sony, the county obviously felt it was left with little choice. Let's hope they don't ban all cars from the street outside if someone exceeds the speed limit. From rforno at infowarrior.org Thu Nov 12 00:01:26 2009 From: rforno at infowarrior.org (Richard Forno) Date: Wed, 11 Nov 2009 19:01:26 -0500 Subject: [Infowarrior] - HP to Acquire 3Com for $2.7 Billion Message-ID: HP to Acquire 3Com for $2.7 Billion Will create networking industry powerhouse with a proven, edge-to-data center set of solutions and global reach PALO ALTO, Calif., and MARLBOROUGH, Mass., Nov. 11, 2009 http://www.hp.com/hpinfo/newsroom/press/2009/091111xa.html HP and 3Com Corporation (NASDAQ: COMS) ("3Com") today announced that they have entered into a definitive agreement under which HP will purchase 3Com, a leading provider of networking switching, routing and security solutions, at a price of $7.90 per share in cash, or an enterprise value of approximately $2.7 billion. The terms of the transaction have been approved by the HP and 3Com boards of directors. This combination will transform the networking industry and underscore HP's next-generation data center strategy built on the convergence of servers, storage, networking, management, facilities and services. The resulting business outcome will help customers simplify the network, deploy a unique and innovative edge-to-core network fabric for the enterprise and improve IT service delivery capabilities, all delivered with best-in-class price-performance. "Companies are looking for ways to break free from the business limitations imposed by a networking paradigm that has been dominated by a single vendor," said Dave Donatelli, executive vice president and general manager, Enterprise Servers and Networking, HP. "By acquiring 3Com, we are accelerating the execution of our Converged Infrastructure strategy and bringing disruptive change to the networking industry. By combining HP ProCurve offerings with 3Com's extensive set of solutions, we will enable customers to build a next-generation network infrastructure that supports customer needs from the edge of the network to the heart of the data center." 
"Our extensive product line and innovative technology together with HP's breadth and scale will expand our global opportunity," said Bob Mao, chief executive officer, 3Com. "3Com's networking products are based on a modern architecture which has been designed to offer better performance, require less power and eliminate administrative complexity when compared against current network offerings. Our products are enterprise proven and widely deployed in the world's largest banks, manufacturers, Internet service providers, public utilities and retailers." The acquisition of 3Com will dramatically expand HP's Ethernet switching offerings, add routing solutions and significantly strengthen the company's position in China - one of the world's fastest-growing markets - via the H3C offerings. In addition, the combination will add a large and talented research and development team in China that will drive the acceleration of innovations to HP's networking solutions. 3Com also brings to HP best-of-breed network security capabilities through its TippingPoint portfolio. For the past four years, TippingPoint has been the leader in Gartner's "Magic Quadrant" in its evaluation of leading network security products. Approximately 30 percent of the Fortune 1000 companies have already deployed TippingPoint intrusion prevention systems. "We are confident that we can run our entire global business of 300,000-plus employees, including our next-generation data centers, entirely on the new HP networking solutions," said Randy Mott, executive vice president and chief information officer, HP. "Based on our experience and extensive testing of 3Com's products, we are planning to undertake a global rollout within HP as soon as possible after the completion of the acquisition." Under the terms of the merger agreement, 3Com stockholders will receive $7.90 for each share of 3Com common stock that they hold at the closing of the merger. The acquisition is subject to customary closing conditions, including the receipt of domestic and foreign regulatory approvals and the approval of 3Com's stockholders. The transaction is expected to close in the first half of calendar 2010. HP anticipates that the transaction will be slightly dilutive to fiscal 2010 non-GAAP earnings. Audio webcast This afternoon HP will conduct an audio webcast for financial analysts and stockholders to discuss HP's agreement to acquire 3Com. Audio webcast for financial analysts and stockholders: 5 p.m. ET / 2 p.m. PT, hosted by Dave Donatelli, executive vice president and general manager of Enterprise Servers and Networking at HP. Access the live audio webcast at www.hp.com/investor/hpwebcast. From rforno at infowarrior.org Thu Nov 12 00:32:51 2009 From: rforno at infowarrior.org (Richard Forno) Date: Wed, 11 Nov 2009 19:32:51 -0500 Subject: [Infowarrior] - Microsoft Patents Sudo? Message-ID: Microsoft Patents Sudo?!! Wednesday, November 11 2009 @ 10:36 AM EST Lordy, lordy, lordy. They have no shame. It appears that Microsoft has just patented sudo, a personalized version of it. Here it is, patent number 7617530. Thanks, USPTO, for giving Microsoft, which is already a monopoly, a monopoly on something that's been in use since 1980 and wasn't invented by Microsoft. Here's Wikipedia's description of sudo, which you can meaningfully compare to Microsoft's description of its "invention". This is why what the US Supreme Court does about software patents means so much. Hopefully they will address the topic in their decision on Bilski. 
Sudo is an integral part of the functioning of GNU/Linux systems, and you use it in Mac OSX also. Maybe the Supreme Court doesn't know that, and maybe the USPTO didn't realize it. But do you believe Microsoft knows it? Perhaps Microsoft would like everyone in the world to pay them a toll at least, even if they don't want to use Microsoft's software? Like SCO, but with more muscle behind the request? Or maybe it might be used as a barrier to competition? What do you personally believe Microsoft wants patents on things like sudo for? To make sure innovative new companies can compete on an even playing field with Microsoft? And how do you like the final wording of the patent? < -- > http://www.groklaw.net/article.php?story=20091111094923390 From rforno at infowarrior.org Thu Nov 12 01:18:07 2009 From: rforno at infowarrior.org (Richard Forno) Date: Wed, 11 Nov 2009 20:18:07 -0500 Subject: [Infowarrior] - Did Fox News alter footage of a conservative rally? Message-ID: (Not being political by posting this - rather raising the issue of perceived reality in the media's presentation of events as something to think about. I'm sure it's happened elsewhere, too. --rf) Did Fox News alter footage of a conservative rally? http://news.yahoo.com/s/ynews/20091111/ts_ynews/ynews_ts977 Wed Nov 11, 4:52 pm ET "The Daily Show" host Jon Stewart pointed out inconsistencies in alternating "Hannity Show" shots of a recent conservative rally on the steps of the Capitol Building. This has led to accusations of Fox News splicing video footage shot at a larger Glenn Beck rally held in Washington two months ago with video shot at last week's rally, thus falsifying footage to make the more recent protest appear bigger than it was. Watch "The Daily Show" clip: The controversy stems from an anti-health-care reform rally held last Thursday on the steps of the Capitol Building in Washington, D.C., which Rep. Michelle Bachmann termed the "Super Bowl of Freedom." The Washington Post estimated that the rally drew roughly 10,000 attendees, but Fox News commentator Sean Hannity claimed on his show that the event drew 20,000 people, while Bachmann, appearing as a guest on Hannity's show, estimated that number to be much higher, up to 45,000. In the process of lauding the turnout "in the middle of the day on a Thursday," Hannity showed footage of scores of people assembled in protest, presumably at last week's rally. But "The Daily Show" pointed out an oddity in the footage during their broadcast last night: The condition of the sky and the coloring of the leaves lacked consistency in some of the shots, leading Stewart to insinuate that Hannity incorporated footage from coverage of Beck's "9/12" rally into his show's segment on Bachmann's event, in order to bolster the perception of its turnout. Stewart and "The Daily Show" have been quite critical of Fox News in the past, often using its vast archive of past video clips to lampoon the cable news channel's famous "Fair and Balanced" claims, with Fox News hosts like Bill O'Reilly occasionally allocating airtime to fighting back, claiming that "The Daily Show" often takes what he and his colleagues at Fox News say out of context to make them appear hypocritical and to paint their news coverage as partisan. A Comedy Central spokesperson says that they weren't tipped off to the footage shown on Hannity's show, and that it was simply the work of one of "the eagle-eyed staffers" on "The Daily Show" who noticed the discrepancies during a routine check of Fox News programming. 
When contacted by Yahoo! News about the matter, a Fox News spokesperson declined to comment, but added that Hannity will address the issue on his show airing Wednesday night. -- Brett Michael Dykes is a contributor to the Yahoo! News Blog From rforno at infowarrior.org Thu Nov 12 12:38:38 2009 From: rforno at infowarrior.org (Richard Forno) Date: Thu, 12 Nov 2009 07:38:38 -0500 Subject: [Infowarrior] - How to DDOS a federal wiretap Message-ID: (paper link @ http://micah.cis.upenn.edu/papers/calea.pdf) How to DDOS a federal wiretap By Robert McMillan November 11, 2009 08:40 PM ET http://www.computerworld.com/s/article/9140717/How_to_DDOS_a_federal_wiretap?taxonomyId=17 IDG News Service - Researchers at the University of Pennsylvania say they've discovered a way to circumvent the networking technology used by law enforcement to tap phone lines in the U.S. The flaws they've found "represent a serious threat to the accuracy and completeness of wiretap records used for both criminal investigation and as evidence in trial," the researchers say in their paper, set to be presented Thursday at a computer security conference in Chicago. Following up on earlier work on evading analog wiretap devices called loop extenders, the Penn researchers took a deep look at the newer technical standards used to enable wiretapping on telecommunication switches. They found that while these newer devices probably don't suffer from many of the bugs they'd found in the loop extender world, they do introduce new flaws. In fact, wiretaps could probably be rendered useless if the connection between the switches and law enforcement is overwhelmed with useless data, something known as a denial of service (DOS) attack. Four years ago, the University of Pennsylvania team made headlines after hacking an analog loop extender device they'd bought on eBay. This time, the team wanted to look at newer devices, but they couldn't get a hold of a switch. So instead they took a close look at the telecommunication industry standard -- ANSI Standard J-STD-025 -- that defines how switches should transmit wiretapped information to authorities. This standard was developed in the 1990s to spell out how telecommunications companies could comply with the 1994 Communications Assistance for Law Enforcement Act (CALEA). "We asked ourselves the question of whether this standard is sufficient to have reliable wiretapping," said Micah Sherr, a post-doctoral researcher at the university and one of the paper's co-authors. Eventually they were able to develop some proof-of-concept attacks that would disrupt devices. According to Sherr, the standard "really didn't consider the case of a wiretap subject who is trying to thwart or confuse the wiretap itself." It turns out that the standard sets aside very little bandwidth -- 64K bits per second -- for keeping track of information about phone calls being made on the tapped line. When a wiretap is on, the switch is supposed to set up a 64Kbps Call Data Channel to send this information between the telco and the law enforcement agency doing the wiretap. Normally this channel has more than enough bandwidth for the whole system to work, but if someone tries to flood it with information by making dozens of SMS messages or VoIP (voice over Internet protocol) phone calls simultaneously, the channel could be overwhelmed and simply drop network traffic. That means that law enforcement could lose records of who was called and when, and possibly miss entire call recordings as well, Sherr said. 
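The arithmetic behind flooding a 64Kbps Call Data Channel is easy to sketch. The per-record size below is an assumption (the article does not give the J-STD-025 message sizes); the point is simply how modest a message rate can fill a channel of that capacity.

# Back-of-the-envelope check on saturating a 64 kbit/s Call Data Channel.
# The ~200-byte call-data record size is an assumption made for illustration
# only; the article does not give the standard's actual message sizes.
CHANNEL_BPS = 64_000                 # Call Data Channel capacity (bits/second)
RECORD_BYTES = 200                   # assumed size of one call-data record
RECORD_BITS = RECORD_BYTES * 8

saturation_rate = CHANNEL_BPS / RECORD_BITS   # records/second that fill the channel
print(f"~{saturation_rate:.0f} records per second saturate the channel")

# Under this assumption, roughly 40 records per second fill the pipe - the
# same order of magnitude as the event rates the Penn team reports below.
# A single call can generate several records (setup, answer, release), so a
# lower call rate can have the same effect as a higher SMS rate.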
Back in 2005, the FBI downplayed the Penn team's loop extender research, saying that it applied to only about 10 percent of wiretaps. The J-standard studied in this paper is much more widely used, however, Sherr said. An FBI representative did not return messages seeking comment for this story. The researchers wrote a program that connected to a server over Sprint's 3G wireless network 40 times per second, enough to flood the Call Data Channel. They say that they could get the same results by programming a computer to make seven VoIP calls per second or to fire off 42 SMS messages per second. These techniques would work on mobile phones or VoIP systems, but not on analog devices, Sherr said. Because the researchers weren't able to test their techniques on real-world systems, they don't know for certain that they could thwart a wiretap. But Sherr believes that "there are definitely dangers" in the way the standard is written. "Because it's a black-box system, we don't know for sure." Of course, criminals have plenty of easier ways to dodge police surveillance. They can use cash to buy prepaid mobile phones anonymously, or reach out to their accomplices with encrypted Skype calls, said Robert Graham, CEO with Errata Security. Luckily for the cops, criminals usually don't take their communications security that seriously. "Most criminals are stupid," he said. "They just use their same cell phone." From rforno at infowarrior.org Thu Nov 12 13:17:40 2009 From: rforno at infowarrior.org (Richard Forno) Date: Thu, 12 Nov 2009 08:17:40 -0500 Subject: [Infowarrior] - OT: Catholic 'Church' politics Message-ID: <837DC1B1-A11D-4512-AA92-006039079EC4@infowarrior.org> (if political views on religion bother you, move on now.) If anyone had any doubts the Catholic 'Church' [1] is first and foremost a political organisation interested ONLY in its own future, here are two current examples from this week. The first example is particularly hypocritical given the 'Church's' belief of love-thy-neighbor and much-pontificated work for social programs. If the 'Church' wants to lobby the government, it needs to shed its tax-exempt status. As the Bloomberg commentator in the second article writes, "It's also worth recalling that those who oppose a public policy on religious or any other grounds are free to do so in this pluralistic country of ours. But forcing others to adhere to specific religious beliefs who don't share them by writing them into law is un-American." I'm gathering "love-thy-neighbor" is to be replaced with "love-thy-neighbor, but only if they share your social views." What Would Jesus Do, people? (Note: as with my political views, I have been a religious independent since 2002) We now return you to your regularly scheduled morning. -rf [1] I use quotes in referring to the Catholic 'Church' markedly. Given the organisation's actions and history in recent years, you really have to wonder. Source: http://www.washingtonpost.com/wp-dyn/content/article/2009/11/11/AR2009111116943.html?hpid=topnews "The Catholic Archdiocese of Washington said Wednesday that it will be unable to continue the social service programs it runs for the District if the city doesn't change a proposed same-sex marriage law, a threat that could affect tens of thousands of people the church helps with adoption, homelessness and health care." 
and in a Bloomberg article on the House health care vote regarding how Pelosi needed to offer an abortion amendment to curry favour with conservative Democrats: Source: http://www.bloomberg.com/apps/news?pid=20601039&sid=aZK3pEl6wih0 "That would be the Catholic church, the main force behind the anti-abortion amendment sponsored by Michigan Democrat Bart Stupak. Pelosi found herself negotiating with the U.S. Conference of Catholic Bishops, whose members had been lobbying Congress for health reform and against abortion and building support for their position through churches around the country." From rforno at infowarrior.org Thu Nov 12 13:19:56 2009 From: rforno at infowarrior.org (Richard Forno) Date: Thu, 12 Nov 2009 08:19:56 -0500 Subject: [Infowarrior] - Latest Taser Could Zap Farther, Shock Longer, Hurt Kids Message-ID: <7E82C77C-ED81-41F9-BDC9-D8C30BD9ADFE@infowarrior.org> Latest Taser Could Zap Farther, Shock Longer, Hurt Kids By David Hambling, November 11, 2009, 8:58 am http://www.wired.com/dangerroom/2009/11/latest-taser-could-zap-farther-shock-longer-hurt-kids/ A new electroshock weapon being developed by Taser could zap people up to 175 feet away - and keep on applying pain for as long as three minutes in a row. Which is pretty tough to take, since it only takes a second or two of shocks to make most people cry out in agony. The new 40mm projectile resembles a super-sized version of the shotgun-fired XREP Taser projectile. And like the XREP, it will attach itself to the target and incapacitate him or her with a series of electric jolts. But this one will have some notable differences - from how far it flies to the dangers it might pose. (I describe the project in New Scientist magazine.) X26 Tasers already with the military have a range of about 35 feet. "This project will likely increase the standoff range by at least a factor of five over already fielded electromuscular devices," says Wes Burgei, a project engineer at the U.S. military's Joint Non-lethal Weapons Directorate, which has given Taser $2.5 million to work on the weapon. The initial version of the 40mm Taser should have a range in the region of 175 feet - almost doubling the XREP's 100-foot range. The problem is making it safe. A projectile that is accurate out to that distance may have a dangerous impact force at close range. According to Burgei, much of the project is focusing on the design of the nose, which is likely to crumple or otherwise disperse the impact force. Steve Wright, a specialist in nonlethal weapons at Leeds Metropolitan University in the UK, is concerned that the projectile may be particularly dangerous to children. "Children especially are vulnerable to a condition known as 'commotio cordis' - a sudden and often fatal disturbance in heart rhythm sometimes caused by a blunt impact to the precordial region of the chest which is transmitted to the heart," he tells Danger Room. The problem was highlighted with a similar 40mm electroshock projectile in the 1990s, the Jaycor Sticky Shocker. It had a much less ambitious range of just 30 feet. This Human Effects Advisory Report noted that "the Sticky Shocker's blunt impact could cause commotio cordis, which will cause death. It also could cause serious injuries similar to those caused by sports projectiles such as a baseball. These include contusions, concussions, fractures, internal injuries, eye injuries and dental injuries. There also may be a low probability of liver fracture." Work on the Sticky Shocker was discontinued. 
Mitigating the impact force will clearly be a major issue with the new 40mm projectile. Long-range effectiveness may be a trade-off against a minimum range, and it may be that the weapon cannot be safely used against targets less than (say) ten meters away. The duration of the shock is also likely to be a significant issue. In Taser training, a one-second shock is administered. Danger Room's editor Noah tried this earlier this year. "It was brutal - like sticking your finger in a socket over and over and over again. I screamed in pain as he zapped me. I screamed some more after it was over," he whined. The standard police Taser gives a five-second shock for each trigger pull. The XREP projectile produces a twenty-second shock. The greater standoff range means that it will take that much longer to reach the suspect and apprehend them, so the longer period is required. The new projectile might continue shocking for as long as three minutes, according to the JNLWD's own reference book. Burgei says that the military and the manufacturer haven't yet agreed on the period of incapacitation, though. "The actual required incapacitation time is still to be determined. However, the projectile is designed such that the output is controlled by an onboard integrated circuit. As such, when requirements become solidified, the incapacitation time can be adjusted to meet them." It might be less than three minutes - but it might be more. The proposed uses include "military law enforcement, detainee operations, vessel boarding, and access control." For vessel boarding in particular, it might be desirable to have a longer incapacitation period to ensure that it lasts long enough for boarders to take control. The standard Taser does not shock continuously but produces nineteen short shocks per second. The effects of continuing this for a prolonged period do not seem to have been well explored - understandable, since human testing would amount to torture. However, a Taser patent application does mention one of the major issues involved: "Because the strike stage and hold stage may immobilize by interfering with skeletal muscle control by the target's nervous system, a rest stage may allow the target to take a breath." This suggests there may be issues with incapacitation for an extended period. Taser International spends quite a bit of time on safety testing, and the JNLWD have a very thorough vetting process, too. However, once the technology is out there it is likely to inspire imitators who are less committed to the safety and well-being of the target. From rforno at infowarrior.org Thu Nov 12 19:19:48 2009 From: rforno at infowarrior.org (Richard Forno) Date: Thu, 12 Nov 2009 14:19:48 -0500 Subject: [Infowarrior] - Hotmail imposes tracking cookies for logout Message-ID: <431183CB-61D8-46AA-AE71-C282CB076033@infowarrior.org> Hotmail imposes tracking cookies for logout http://www.theregister.co.uk/2009/11/12/hotmail_cookies/ By Chris Williams, 12th November 2009 15:19 GMT Hotmail users are now unable to log out of their account if the browser they are using does not accept third party cookies. The move by Microsoft raises security concerns, particularly as PCs on corporate networks and in cybercafes and libraries are often set to reject cookies. The error screen* that greets users who try to log out tells them they must re-enable third party cookies or close every browser window. Third party cookies are most commonly used by advertising networks to track surfers across the web. 
We've asked Microsoft what is behind its demand that they be enabled, and whether it has considered the potential security implications. We'll update this story when it gets back to us. Thanks to Reg reader Phil for spotting the change.
From rforno at infowarrior.org Thu Nov 12 19:53:57 2009 From: rforno at infowarrior.org (Richard Forno) Date: Thu, 12 Nov 2009 14:53:57 -0500 Subject: [Infowarrior] - EU likely to hand over SWIFT data to US Message-ID: <682E3D1F-1F50-4369-A56E-8914A8B52910@infowarrior.org> EU draft council decision on sharing of banking data with the US and restructuring of SWIFT, 10 Nov 2009 http://www.wikileaks.org/wiki/EU_draft_council_decision_on_sharing_of_banking_data_with_the_US_and_restructuring_of_SWIFT%2C_10_Nov_2009 From Wikileaks Released November 12, 2009 Summary The CIA and other intelligence agencies have long been interested in the Society for Worldwide Interbank Financial Telecommunication, or SWIFT. The Society, headquartered in Belgium, is the primary system used for international, and some national, bank transfers. Whoever controls SWIFT has access to the full details of millions of yearly bank transfers, including banks, time, names, amount and account numbers. In 2002 the US government entered into a secret agreement to acquire SWIFT records. Data handed over each year [to the CIA] by the Society for Worldwide Interbank Financial Telecommunication, or Swift, includes the details of an estimated 4.6 million British banking transactions.[1][2] This document (see below) presents a new classified draft Council of the European Union decision on the "processing and transfer of Financial Messaging Data" from the EU to the US, as part of the "Terrorist Finance Tracking Programme". The 24-page draft, dated 10th of November 2009, if agreed to, will have a substantial impact on the European SWIFT banking system and the privacy of European financial data.
From rforno at infowarrior.org Thu Nov 12 21:55:02 2009 From: rforno at infowarrior.org (Richard Forno) Date: Thu, 12 Nov 2009 16:55:02 -0500 Subject: [Infowarrior] - TSA Changes Rules On Airport Searches Message-ID: <8415128A-9DFA-4AAA-A148-0E0566FACFA8@infowarrior.org> (c/o DaveL) TSA Changes Rules On Airport Searches ... Very Quietly Thu, 12 Nov '09 Searches Must Be Related To Airline Safety http://www.aero-news.net/index.cfm?ContentBlockID=615e45f8-4a6e-4e58-a2d7-84971d404419 TSA has changed two rules about airport searches after an aide to Congressman Ron Paul recorded an incident on his iPhone. The rules changes have prompted the ACLU to drop legal action against TSA on behalf of Steve Bierfeldt. Bierfeldt was detained in March while attempting to board a plane at Lambert-St. Louis International Airport carrying $4,700 in cash. TSA agents spent half an hour questioning him about why he was carrying so much cash, and Bierfeldt recorded the exchange on his iPhone. Bierfeldt is the director of development for 'Campaign for Liberty', a group formed by Congressman Ron Paul after his failed presidential bid. Bierfeldt attempted to send a metal box with the cash and checks through a metal detector at the airport, precipitating the confrontation. The Washington Times reports that Bierfeldt questioned under what authority TSA detained him for carrying the cash. At one point, a TSA officer asked Bierfeldt "Are you from this planet?" and accused him of acting like a child for questioning his authority.
TSA spokeswoman Lauren Gaches said the new "internal directives" stipulate that TSA may not question why someone is carrying large amounts of cash through the airport. The new rules say "screening may not be conducted to detect evidence of crimes unrelated to transportation security" and that large amounts of cash do not comprise a threat to an airliner. The second directive says "traveling with large amounts of cash is not illegal." However, TSA said it would not release copies of the directives without a Freedom of Information request. The ACLU had filed the suit on behalf of Bierfeldt because "We had been hearing of so many reports of TSA screeners engaging in wide-ranging fishing expeditions for illegal activities," Ben Wizner, a staff lawyer for the ACLU, told the paper. The new rules do not, however, affect a situation where a TSA agent might come across illegal drugs, for instance, during the course of a routine screening.
From rforno at infowarrior.org Fri Nov 13 13:17:53 2009 From: rforno at infowarrior.org (Richard Forno) Date: Fri, 13 Nov 2009 08:17:53 -0500 Subject: [Infowarrior] - The chart RIAA doesn't want you to see Message-ID: <23645F9E-98FB-4B94-B819-163481FDAAC2@infowarrior.org> (As the OP posting to BoingBoing said, "Which raises the question: is it really copyright law's job to make sure that last year's winners keep on winning? Or is it enough to ensure that there will always be winners?") Do music artists fare better in a world with illegal file-sharing? http://labs.timesonline.co.uk/blog/2009/11/12/do-music-artists-do-better-in-a-world-with-illegal-file-sharing/ This is the graph the record industry doesn't want you to see. It shows the fate of the three main pillars of music industry revenue - recorded music, live music, and PRS revenues (royalties collected on behalf of artists when their music is played in public) over the last 5 years. We've broken each category into two sub-categories so that, for any chunk of revenue - recorded music sales, for instance - you can see the percentage that goes to the artist, and the percentage that goes elsewhere. (In the case of recorded music, the lion's share of revenue goes to the record label; in the case of live, the promoter takes a cut etc.) Hopefully, this analysis - and there's more on the nuts and bolts of our method below - sheds some factual light on the claims and counter-claims that are paranoically sweeping across the music industry establishment, not least that put forward by the singer Lily Allen in this paper recently - and the BPI - that artists are losing out as a result of the fall in sales of recorded music. The most immediate revelation, of course, is that at some point next year revenues from gigs payable to artists will for the first time overtake revenues accrued by labels from sales of recorded music. Why live revenues have grown so stridently is beyond the scope of this article, but our data - compiled from a PRS for Music report and the BPI - make two things clear: one, that the growth in live revenue shows no signs of slowing and two, that live is by far and away the most lucrative section of industry revenue for artists themselves, because they retain such a big percentage of the money from ticket sales. (It's often claimed that live revenues are only/mostly benefitting so-called "heritage acts". Unfortunately, the data doesn't shed any light on this because live revenues are not broken down by type of act, gig size or ticket price.)
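To make the split model above concrete, here is a minimal sketch, in Python, of the arithmetic this piece relies on - the 90/10 label/artist split for recorded music, the 90/10 artist/promoter split for live revenue, and an assumed 10 per cent venue share, all of which are spelled out in the methods notes further down. The function name and the input figures are purely illustrative, not the article's actual dataset.

def artist_share(recorded, live, prs,
                 recorded_artist_pct=0.10,  # assumed 90/10 label/artist split on recorded music
                 live_artist_pct=0.90,      # assumed 90/10 artist/promoter split on live revenue
                 venue_pct=0.10):           # assumed venue share taken off gross live revenue first
    """Rough estimate of revenue flowing directly to artists under the
    split model described in the article; PRS royalties are treated here
    as going to artists, a simplification of the chart's sub-categories."""
    live_after_venue = live * (1.0 - venue_pct)
    return (recorded * recorded_artist_pct
            + live_after_venue * live_artist_pct
            + prs)

# Illustrative figures only: rerunning with a 20 per cent venue share
# reproduces the sensitivity check the authors describe.
for venue_cut in (0.10, 0.20):
    print(venue_cut, artist_share(recorded=1000.0, live=1400.0, prs=600.0, venue_pct=venue_cut))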
An even more striking thing, perhaps, emerges in this second graph, namely that revenues accrued by artists themselves have in fact risen over the past 5 years, despite the fall in record sales. (All the blue bars in the chart above represent revenues that go directly to artists. As you can see, the "blue total" has risen noticeably.) This is mostly because of live revenues, but also because of the growing amount collected by the PRS on behalf of artists, which accounts for a much bigger chunk of industry revenues than most people realise. (PRS revenues in fact break down into 4 categories - Broadcast and Online, Public Performance, Mechanical, International. You can explore this in more detail in this spreadsheet, which contains all our data.) It's interesting too that, overall, industry revenues have grown in the period - though admittedly not by much - which arguably adds strength to the notion that, when the BPI releases its annual report claiming how much "the music industry" has suffered from the growth in illegal file-sharing, what it perhaps should be saying is how much the record labels have suffered. For other people in the industry, not least artists, the future arguably holds more promise. A couple of notes about our methods: the data, as pointed out, comes from the PRS and the BPI. We are grateful to the PRS in particular for helping us with a model to work out what percentage of a particular chunk of industry revenue was likely to be returned to artists. In the case of recorded music, we used an average 90/10 per cent split between labels/artists. In the case of live we used a 90/10 split between artists/promoters. We hit one major snag. The PRS report gives a figure for annual live music revenues but it does not indicate what percentage of that goes to venues. (Before doing the split for live music revenues between artist and promoter, you first need to take out the percentage that goes to the venue.) We asked several big concert promoters and venue managers - AEG Europe, Carling Academy, and the PRS itself - what percentage of gig revenue one could reasonably assume, on average, went to the venue, and none would make an estimate. The closest we came to an answer was a remark from a senior industry source, who said "only a small percentage of live goes to venues". That's the best we had to work with. We've therefore done the above calculations on the assumption that 10 per cent of live revenues go to the venue, but in these two graphs, we show how the situation would change if that figure rose to 20 per cent. We would welcome any feedback on a more accurate figure to use for the venue's share of live revenues, and any more general feedback on our methods.
From rforno at infowarrior.org Fri Nov 13 15:36:05 2009 From: rforno at infowarrior.org (Richard Forno) Date: Fri, 13 Nov 2009 10:36:05 -0500 Subject: [Infowarrior] - Copyright overreach takes a world tour Message-ID: <37ABF64F-E0E6-463E-8E49-930BBC25C265@infowarrior.org> Copyright overreach takes a world tour By Rob Pegoraro Sunday, November 15, 2009 http://www.washingtonpost.com/wp-dyn/content/article/2009/11/13/AR2009111300852_pf.html Some of the better ideas in the computing industry never make it into stores, and not because of expensive hardware, complex code or inadequate bandwidth. You can only blame laws that keep otherwise desirable products off the market. Want a program to copy a DVD to your iPod? You can't pick one up in a shop, thanks to the 1998-vintage Digital Millennium Copyright Act.
The DMCA bans that software, along with other tools that might help you use movie downloads, e-books and other "protected" files in ways not specifically allowed by their vendors. We may be watching a sequel to the DMCA story today. An international copyright agreement, negotiated under unusual secrecy, could impose a further round of restrictions on our use of digital technology. This Anti-Counterfeiting Trade Agreement, "ACTA" for short, represents an attempt by the United States and other countries to set common rules for violations of intellectual-property laws. The United States hopes to use ACTA to export its laws, but in the process it might have to import others. It's difficult to talk about ACTA without using words like "may," "could" and "might" because of how little has been disclosed about it. The nations involved -- including Japan, South Korea, Canada and the members of the European Union -- have agreed to post little beyond topics that the agreement should address. The Bush and Obama administrations have cited national-security concerns to justify withholding further information. They have revealed ACTA texts only to selected individuals (few representing consumer interests) who sign non-disclosure agreements. Congress, meanwhile, has no say, as ACTA is being written as an "executive agreement," not a treaty that would require Senate ratification. Much information about ACTA has come from leaked documents posted to such sites as http://wikileaks.org; other details have been pried out through Freedom of Information Act requests by such groups as the Electronic Frontier Foundation and Knowledge Ecology International. You can weave these scraps into a frightening future in which a site like YouTube couldn't stay in business and allegations of file sharing could lead your Internet provider to kick you offline. The reality of ACTA doesn't seem that bad: Many of the provisions attacked by critics are part of U.S. law and such bilateral free-trade deals as the pending agreement between the U.S. and South Korea. But it's bad enough. ACTA's first problem lies in the idea of making the DMCA a world standard. Globalizing that law's ban on circumventing digital locks would do no favors for people elsewhere. But it would also cement the DMCA's status at home, making it harder to fix its user-hostile provisions. More important, the ACTA talks would be a rare negotiation if they resulted in one state giving up nothing in return for every other state accepting its ideas. A U.S. trade official who would not speak for attribution emphasized that the government's proposals for ACTA only color within the lines of existing U.S. laws. But trying to globalize them invites fine-print changes in emphasis or detail that could lead to changes in their enforcement here. Consider the notion of "graduated response," the idea that Internet providers should discipline or disconnect users who keep sharing movies or music. The DMCA allows for that, but ACTA could increase incentives for Internet providers to act as copyright cops. We can't know for sure without seeing ACTA's text. Free-trade agreements and ACTA's agenda include stopping counterfeit copyrighted works at borders. That trade official said the U.S. wants to exempt travelers from mandatory searches of iPods or laptops -- let's set aside the risk of overreach by law enforcement at airports -- but we won't know how much privacy to expect without seeing ACTA's text. 
And free-trade deals and ACTA call for more rigorous mandatory penalties, including jail time. But it's tough to gauge how inflexible those punishments might be without seeing ACTA's text. The U.S. trade official, when asked how ACTA's secrecy squared with President Obama's pledges of open government, said the administration is working to open the process further. But more official blog posts and news releases won't suffice. An agreement this sweeping won't get far without public debate. Just ask one of the biggest advocates of stronger copyright laws, the Motion Picture Association of America. In a statement sent by its press office, chief policy officer Greg Frazier wrote that "clearly anything other than making the text public will not satisfy the ACTA naysayers," adding that "we will encourage the U.S. government" to do so. The fair way to craft a deal like this in the daylight. When the United Nations' World Intellectual Property Organization proposes to write new treaties, it does so in public, publishing drafts and allowing outside parties to report on the progress of talks. If ACTA's ground rules don't allow that, it's time to scrap this project -- before we inflict a 1.1 version of the DMCA on ourselves and the world. From rforno at infowarrior.org Fri Nov 13 18:35:41 2009 From: rforno at infowarrior.org (Richard Forno) Date: Fri, 13 Nov 2009 13:35:41 -0500 Subject: [Infowarrior] - The Cyberwar Plan Message-ID: <3F7B025A-9EF1-4039-95FD-D4AB7538E115@infowarrior.org> The Cyberwar Plan It's not just a defensive game; cyber-security includes attack plans too, and the U.S. has already used some of them successfully. by Shane Harris Saturday, Nov. 14, 2009 http://www.nationaljournal.com/njmagazine/cs_20091114_3145.php In May 2007, President Bush authorized the National Security Agency, based at Fort Meade, Md., to launch a sophisticated attack on an enemy thousands of miles away without firing a bullet or dropping a bomb. At the request of his national intelligence director, Bush ordered an NSA cyberattack on the cellular phones and computers that insurgents in Iraq were using to plan roadside bombings. The devices allowed the fighters to coordinate their strikes and, later, post videos of the attacks on the Internet to recruit followers. According to a former senior administration official who was present at an Oval Office meeting when the president authorized the attack, the operation helped U.S. forces to commandeer the Iraqi fighters' communications system. With this capability, the Americans could deceive their adversaries with false information, including messages to lead unwitting insurgents into the fire of waiting U.S. soldiers. Former officials with knowledge of the computer network attack, all of whom requested anonymity when discussing intelligence techniques, said that the operation helped turn the tide of the war. Even more than the thousands of additional ground troops that Bush ordered to Iraq as part of the 2007 "surge," they credit the cyberattacks with allowing military planners to track and kill some of the most influential insurgents. The cyber-intelligence augmented information coming in from unmanned aerial drones as well as an expanding network of human spies. A Pentagon spokesman declined to discuss the operation. Bush's authorization of "information warfare," a broad term that encompasses computerized attacks, has been previously reported by National Journal and other publications. 
But the details of specific operations that specially trained digital warriors waged through cyberspace aren't widely known, nor has the turnaround in the Iraq ground war been directly attributed to the cyber campaign. The reason that cyber techniques weren't used earlier may have to do with the military's long-held fear that such warfare can quickly spiral out of control. Indeed, in the months before the U.S. invasion of Iraq in March 2003, military planners considered a computerized attack to disable the networks that controlled Iraq's banking system, but they backed off when they realized that those networks were global and connected to banks in France. By early 2007, however, two senior officials with experience and faith in the power of cyber-warfare to discreetly target an adversary stepped into top military and intelligence posts. Mike McConnell, a former director of the National Security Agency, took over as director of national intelligence in February of that year. And only weeks earlier, Army Gen. David Petraeus became the commander of all allied forces in Iraq. McConnell, who presented the request to Bush in the May 2007 Oval Office meeting, had established the first information warfare center at the NSA in the mid-1990s. Petraeus, a devotee of counterinsurgency doctrine, believed that cyberwar would play a crucial role in the strategy he had planned as part of the surge. In September 2007, the general told Congress, "This war is not only being fought on the ground in Iraq but also in cyberspace." Some journalists have obliquely described the effectiveness of computerized warfare against the insurgents. In The War Within, investigative reporter Bob Woodward reports that the United States employed "a series of top-secret operations that enable [military and intelligence agencies] to locate, target, and kill key individuals in extremist groups such as Al Qaeda, the Sunni insurgency, and renegade Shia militias. ... " The former senior administration official said that the actions taken after Bush's May 2007 order were the same ones to which Woodward referred. (At the request of military and White House officials, Woodward withheld "details or the code word names associated with these groundbreaking programs.") Woodward wrote that the programs began "in about May 2006." But the former administration official emphasized that the specific operations that turned the advantage back to U.S. forces came a year later. Published reports suggest that military commanders were eyeing cyber-warfare techniques in advance of Bush's 2007 order. In an October 2005 article in Aviation Week & Space Technology, reporter David Fulghum noted, "Computer network attack and exploitation... are also now the primary tools in combating what senior U.S. Army officials identify as their No. 1 target -- the wireless communications networks used by insurgents and terrorists." In 2005, military planners focused their efforts largely on sensors that could intercept wireless signals in the combat zone, not on the penetration of the cellular phone network itself. Pursuing the latter would be a far more ambitious and riskier maneuver that, by law, would require presidential authorization. It would also call upon the secret skills of the NSA's computer hackers. The lessons of the 2007 cyberwar are instructive today, as the director of the NSA, Army Lt. Gen. Keith Alexander, is expected to take over the Defense Department's new Cyber Command.
The command will be the vanguard of the Obama administration's cyberwar efforts, as well as the front-line defender of military computer networks. U.S. networks, like those of the Iraqi fighters, are also vulnerable to outside attack, and an increasing number of penetrations over the past two years have led Defense officials to put cyber-security at the top of their agenda. Cyber-defenders know what to prepare themselves for because the United States has used the kinds of weapons that now target the Pentagon, federal agencies, and American corporations. They are designed to steal information, disrupt communications, and commandeer computer systems. The U.S. is forming a cyberwar plan based largely on the experience of intelligence agencies and military operations. It is still in nascent stages, but it is likely to support the conduct of conventional war for generations to come. Some believe it may even become the dominant force. A New Way Of War Senior military leaders didn't come of age in a digital world, and they've been skeptical of computerized attacks. Mostly younger officers, who received their early combat education through video games and Dungeons & Dragons, wage these battles. To them, digital weapons are as familiar and useful as rifles and grenades. Over the past few years, however, the cyber-cohort has gained influence among the ranks of military strategists, thanks in large part to the ascendancy of Gen. Petraeus. The man widely credited with rescuing the U.S. mission in Iraq is also a devotee of "information operations," a broad military doctrine that calls for defeating an enemy through deception and intimidation, or by impairing its ability to make decisions and understand the battlefield. In past conflicts, the military has jammed enemy communication systems with electromagnetic waves or dropped ominous leaflets from planes warning enemy forces of imminent destruction. Today, cyber-warriors use the global telecommunications network to commandeer an adversary's phones or shut down its Web servers. This activity is a natural evolution of the information war doctrine, and Petraeus has elevated its esteem. Computerized tools to penetrate an enemy's phone system are only one part of the cyberwar arsenal. And they are perhaps the least worrisome. Alarmed national security officials, and the president himself, are paying more attention than ever to devastating computer viruses and malicious software programs that can disable electrical power systems, corrupt financial data, or hijack air traffic control systems. In 2007, after McConnell got Bush's sign-off for the cyber campaign in Iraq, he warned the president that the United States was vulnerable to such attacks. Then-Treasury Secretary Henry Paulson Jr., who was present at the meeting, painted a chilling scenario for Bush. He said that in his former position as the CEO of Goldman Sachs, his biggest fear was that someone would gain access to the networks of a major financial institution and alter or corrupt its data. Imagine banks unable to reconcile transactions and stock exchanges powerless to close trades. Confidence in data, Paulson explained, supported the entire financial system. Without it, the system would collapse. The following year, when a lack of confidence in the accuracy of Bear Stearns's accounts threatened to bring down that major bank, McConnell tried to use the experience as a teaching opportunity. 
He privately warned other senior administration officials that a cyberattack could cause the same painful consequences, and he began studying what an attack on the system that clears market trades might look like. According to The New York Times, officials were halfway through their research when the credit markets froze. A senior intelligence official remarked, "We looked at each other and said, 'Our market collapse has just given every cyber-warrior out there a playbook.' " Bush's response to cyber-threats took the form of a multibillion- dollar defense plan, known as the Comprehensive National Cybersecurity Initiative. In its initial stages, the plan was classified, and critics later complained that the administration had cut itself off from valuable expertise and debate. But according to McConnell, who spoke about the initiative at a recent panel discussion at the International Spy Museum in Washington, the initiative was classified because it involved an "attack," or offensive, component. McConnell, an authority on cyberwar, chose his words deliberately, and it was a telling admission. "Computer network attack" is a technical term, describing an action designed to cause real-world consequences for an adversary -- such as those that Paulson and McConnell warned the president about in the Oval Office, and such as those that the U.S. used in Iraq. The United States' cyber strategy, in other words, encompassed defensive tactics and an offensive plan. The Obama administration inherited the CNCI and has enhanced it with the creation of a national cyber-security coordinator, a White House official who is supposed to ensure that the defensive and offensive sides work together. Cyber-Forces Already Deployed As the White House vets candidates for the "cyber-czar" post, the military and intelligence agencies are honing their cyber skills and have already marshaled their forces. "We have U.S. warriors in cyberspace that are deployed overseas and are in direct contact with adversaries overseas," said Bob Gourley, who was the chief technology officer for the Defense Intelligence Agency and is a board member of the Cyber Conflict Studies Association. These experts "live in adversary networks," Gourley said, conducting reconnaissance on foreign countries without exchanging salvos of destructive computer commands. "Like two ships in the same waters, aware of each other's presences, it doesn't mean they're bumping or firing on each other." President Obama confirmed that cyber-warriors have aimed at American networks. "We know that cyber-intruders have probed our electrical grid," he said at the White House in May, when he unveiled the next stage of the national cyber-security strategy. The president also confirmed, for the first time, that the weapons of cyberwar had claimed victims. "In other countries, cyberattacks have plunged entire cities into darkness." With every attack, network defenders learn new techniques, which in turn make them better warriors. If they are fortunate enough to capture the weapon itself, they can pick apart its command codes -- its digital DNA -- and appropriate them. "You can analyze the attack code, change it, and then use it or counter the next attack," said Dave Marcus, the director of security research and communications for McAfee Labs, which dissects cyber-threats for government agencies. 
The same expertise required to build a virus or an attack program to knock down an opponent's firewall can be put to work building more- sophisticated virus detection systems and stronger firewalls. "Our defense is informed by our offense," Gourley said. Because the United States has studied how attacks are waged, "we certainly would know how to cause these effects," said Sami Saydjari, the president and founder of the Cyber Defense Agency, a private security company, and a former Defense Department employee. "If the president gave an order, we'd have cadres of people who'd know how to do that." The Man-Made Battlefield Military officers describe cyberspace as the fifth domain of war, after land, sea, air, and space. But cyberspace is unique in one important respect -- it's the only battlefield created by humans. "We have invented this, and it cuts across those other four," said retired Air Force Lt. Gen. Harry Raduege, who ran the Defense Information Systems Agency from 2000 to 2005. He was responsible for the defense and operation of the Pentagon's global information network. "Cyberspace has no boundaries," Raduege said. "It's just everywhere, and it permeates everything we do.... We continue to improve our capabilities, but so do the adversaries." No nation dominates the cyber-battlefield today. "Military forces fight for the ownership of that domain," said Matt Stern, a retired lieutenant colonel who commanded the Army's 2nd Information Operations Battalion and who now works in the private sector as the director of cyber accounts for General Dynamics Advanced Information Systems. "But because of the ubiquitous nature of cyberspace -- and anyone's ability to access it -- military forces must not only contend with the threats within their operational environment, they must also fight against threats in cyberspace that are global in nature." Cyberspace is also the domain that, as of now, the United States stands the greatest chance of ceding to another nation. In July, an independent study of the overall federal cyber-workforce described it as fragmented and understaffed. The study blamed a hiring process that takes too long to vet security clearances, low salaries, and the lack of a unified hiring strategy. "You can't win the cyberwar if you don't win the war for talent," said Max Stier, the president of the Partnership for Public Service, an advocacy group that helped write the study. The co-author was Booz Allen Hamilton, the government contracting firm where former intelligence Director McConnell now runs the cyber-security business. The Defense Department graduates only about 80 students per year from schools devoted to teaching cyber-warfare. Defense Secretary Robert Gates has said that the military is "desperately short" of cyber- warriors and that the Pentagon wants four times as many graduates to move through its teaching programs over the next two years. That will be difficult, considering that the military and intelligence agencies compete directly with industry for top talent. Beltway contractors have been on a hiring spree ever since the Bush administration began the comprehensive cyber-security plan. Raytheon, which has assisted Pentagon special-operations forces using advanced cyber-technology, posted an ad to its website earlier this year titled "Cyber Warriors Wanted." The company announced 250 open positions -- more than three times as many as the Defense Department is moving through its education programs. 
Despite a relative shortage of skilled warriors, the military services have charged vigorously into cyberspace. The Army, Navy, Air Force, and Marines all have their own cyber-operations groups, which handle defense and offense, and they've competed with one another to control the military's overall strategy. It now appears that the individual service components will report to the new Cyber Command, which will be led by a four-star general. (NSA Director Alexander, the presumptive candidate, has three stars, and his promotion would require the Senate's approval.) The military may be organizing for a cyberwar, but it's uncertain how aggressive a posture it will take. Some have argued for creating an overt attack capability, the digital equivalent of a fleet of bombers or a battalion of tanks, to deter adversaries. In a 2008 article in Armed Forces Journal, Col. Charles Williamson III, a legal adviser for the Air Force Intelligence, Surveillance, and Reconnaissance Agency, proposed building a military "botnet," an army of centrally controlled computers to launch coordinated attacks on other machines. Williamson echoed a widely held concern among military officials that other nations are building up their cyber-forces more quickly. "America has no credible deterrent, and our adversaries prove it every day by attacking everywhere," he wrote. Williamson titled his essay, "Carpet Bombing in Cyberspace." Responding to critics who say that by building up its own offensive power, the United States risks starting a new arms race, Williamson said, "We are in one, and we are losing." A Fight For First Other experts concur that the United States cannot claim to be the world's dominant cyber-force. Kevin Coleman, a senior fellow with the security firm Technolytics and the former chief strategist for the Web pioneer Netscape, said that China's and Russia's abilities to defend and attack are just as good as America's. "Basically, it's a three-way tie for first." China has proved its prowess largely by stealing information from U.S. officials and corporate executives. Last year, the head of counterintelligence for the government told National Journal that Chinese cyber-spies routinely pilfer strategy information from American businesspeople in advance of their meetings in China. And a computer security expert who consults for the government said that during a trip to Beijing in December 2007, U.S. intelligence officials discovered spyware programs designed to clandestinely remove information from personal computers and other electronic equipment on devices used by Commerce Secretary Carlos Gutierrez and possibly other members of a U.S. trade delegation. (See NJ, 5/31/08, p. 16.) But it is the Russian government that has done the most to stoke fears of a massive cyberwar between nations. Most experts believe that Russian sources launched a major attack in April 2007 against government, financial, and media networks in Estonia. It came on the heels of a controversy between Estonian and Russian officials over whether to move a statue honoring Soviet-era war dead. Estonia, one of the most "wired" nations on Earth, is highly dependent upon access to the Internet to conduct daily business, and the cyberattack was crippling. A year later, many security experts accused Moscow of launching a cyberattack on Georgia as conventional Russian military forces poured into the country. 
The assault was aimed at the Georgian centers of official command and public communication, including websites for the Georgian president and a major TV network. The suspected Russian attacks startled military and civilian cyber- experts around the globe because of their scale and brazenness. "Estonia was so interesting because it was the first time anyone ever saw an entire country knocked out," said Ed Amoroso, the chief security officer for AT&T. "The whole place is like a little mini- version of what our federal government has aspired to" in terms of conducting so much business online. "It scared the heck out of people." The attacks also underscored one of the most befuddling aspects of cyberwar. Not all of the computers that attacked Estonia were in Russia. The machines, in fact, were scattered throughout 75 countries and were probably hijacked by a central master without their owners' knowledge. Many of the soldier-machines in this global botnet were in the United States, an Estonian ally. To launch a counteroffensive, Estonia would have had to attack American computers as well as those in other friendly countries. On May 5 of this year, lawmakers on the House Armed Services Subcommittee on Terrorism and Unconventional Threats and Capabilities asked the NSA's Alexander whether the attacks on Estonia and Georgia met the definition of cyberwar. "On those, you're starting to get closer to what would be [considered war]," he said. "The problem you have there is who -- the attribution." Although it was obvious to most experts that the culprits were Russian, it's easy for attackers to mask their true location. The anonymity of the Internet provides many alibis. Furthermore, it's hard to know whether the Russian government committed the attack, hired cyber-mercenaries to do it, or simply looked the other way as patriotic hackers turned their sights on rival countries. Over the Fourth of July weekend this year, a series of attacks struck websites used by the White House, the Homeland Security Department, the Secret Service, the NSA, and the State and Defense departments, as well as sites for the New York Stock Exchange and NASDAQ. The attacks also hit sites in South Korea, and suspicion immediately turned to North Korea. But again, the inability to attribute the source with certainty impeded any response. The attacks appear to have emanated from about 50,000 computers still infected with an old computer virus, which means that their owners probably had no idea they were participating in a cyber-offensive. Some of those machines were inside the United States, said Tom Conway, the director of federal business development for McAfee. "So what are you going to do, shoot yourself?" Holding Fire The pitfalls of cyberwar are one reason that the United States has been reluctant to engage in it. The U.S. conducted its first focused experiments with cyberattacks during the 1999 bombing of Yugoslavia, when it intervened to stop the slaughter of ethnic Albanians in Kosovo. An information operations cell was set up as part of the bombing campaign. The cell's mission was to penetrate the Serbian national air defense system, published accounts and knowledgeable officials said, and to make fake signals representing aircraft show up on Serbian screens. The false signals would have confused the Serbian response to the invasion and perhaps destroyed commanders' confidence in their own defenses. 
According to a high-level military briefing that Federal Computer Week obtained in 1999, the cyber-operation "could have halved the length of the [air] campaign." Although "all the tools were in place ... only a few were used." The briefing concluded that the cyber-cell had "great people," but they were from the "wrong communities" and "too junior" to have much effect on the overall campaign. The cyber-soldiers were young outsiders, fighting a new kind of warfare that, even the briefing acknowledged, was "not yet understood." War planners fear unleashing a cyber-weapon that could quickly escape their control, a former military officer experienced in computer network operations said. These fears hark back to the first encounter with a rampant Internet virus, in 1988. A Cornell University student named Robert Morris manufactured a program that was intended to measure the size of the Internet but ended up replicating itself massively, infecting machines connected to the network. The military took a lesson from the so-called Morris worm, the former officer said. Only four years after the war in Yugoslavia, planners again held off on releasing a potentially virulent weapon against Iraq. In the plan to disable the Iraqi banking network in advance of the U.S. invasion, the Pentagon determined that it might also bring down French banks and that the contagion could spread to the United States. "It turns out that their computer systems extend well outside Iraq," a senior Air Force official told Aviation Week & Space Technology in March 2003. "We're also finding out that Iraq didn't do a good job of partitioning between the military and civilian networks. Their telephone and Internet operations are all intertwined. Planners thought it would be easy to get into the military through the telephone system, but it's all mixed in with the civilian [traffic]. It's a mess." This official said that to penetrate the military systems, the United States would risk what planners began calling "collateral computer network attack damage." Because of the widespread damage that cyber-weapons can cause, military and intelligence leaders seek presidential authorization to use them. "They're treated like nuclear weapons, so of course it takes presidential approval," the former military officer said. McConnell, the ex-intelligence director, has compared the era of cyberwar to "the atomic age" and said that a coordinated attack on a power grid or transportation or banking systems "could create damage as potentially great as a nuclear weapon over time." Unlike atomic bombs, however, cyber-weapons aren't destroyed in the attack. "Once you introduce them to the battlefield, it's trivially easy for the other side to capture your artillery, as it were, and then use it against you if you're not already inoculated against it, and then against other friendlies," said Ed Skoudis, a co-founder of the research and consulting firm InGuardians and an instructor with the SANS Institute, which trains government employees in cyber-security. The risk of losing control of a weapon provides a powerful incentive not to use it. But until a new computer virus is spotted in the wilds of the Internet, no one can be certain how to repel it. That gives every aggressor the advantage of surprise. "Why would you expect an adversary to lay their cards on the table until it counts?" said Tom McDermott, a former deputy director of information security at the NSA. "Why would you expect to have seen the bad stuff yet?" 
The Case For Restraint During his subcommittee testimony in May, Gen. Alexander was asked whether the United States needed the cyber-equivalent of the Monroe Doctrine, a set of clearly defined interests and the steps the government would take to protect them. Without offering any specific proposals, Alexander responded simply, "I do." The Obama administration's former White House chief of cyber-security, Melissa Hathaway, has called for international cyberspace agreements. In a number of speeches in 2008 while still with the Bush administration, Hathaway proposed a Law of the Sea Treaty for the Internet, which, she said, is the backbone of global commerce and communications, just as the oceans were centuries ago. The odds for a broad international framework aren't good, however. The Russian government has proposed a treaty limiting the use of cyber- weapons, but the State Department has rejected the idea, preferring to focus on improving defenses and prosecuting cyberattacks as crimes. Officials are also wary of any strategy by the Russian government to constrain other nations' ability to attack. In September, a panel of national security law experts convened by the American Bar Association and the National Strategy Forum, a Chicago-based research institute, concluded that the prospects for any multinational agreement are bleak. "The advantages of having a cyber-warfare capacity are simply too great for many international actors to abjure its benefits," the panel stated. Students of cyberwar find parallels between the present day and the early 1960s, when the advent of intercontinental missiles ushered in not only the space age but also an arms race. Like outer space then, cyberspace is amorphous and opaque to most, and inspires as much awe as dread. In this historical analogy, experts have embraced a Cold War deterrent to prevent the cyber-Armageddon that military and intelligence officials have been warning about -- mutually assured destruction. Presumably, China has no interest in crippling Wall Street, because it owns much of it. Russia should be reluctant to launch a cyberattack on the United States because, unlike Estonia or Georgia, the U.S. could fashion a response involving massive conventional force. The United States has already learned that it makes no sense to knock out an enemy's infrastructure if it disables an ally's, and possibly America's own. If nations begin attacking one another's power grids and banks, they will quickly exchange bombs and bullets. Presumably, U.S. war planners know that. And it may be the most compelling reason to keep their cyber-weapons sharp but use them sparingly. From rforno at infowarrior.org Fri Nov 13 18:38:14 2009 From: rforno at infowarrior.org (Richard Forno) Date: Fri, 13 Nov 2009 13:38:14 -0500 Subject: [Infowarrior] - NASA: Water found in moon crater Message-ID: <9A5643C7-3083-466D-B415-C39DE9794EB1@infowarrior.org> NASA: Water found in moon crater By ALICIA CHANG The Associated Press Friday, November 13, 2009; 1:31 PM http://www.washingtonpost.com/wp-dyn/content/article/2009/11/13/AR2009111301986.html?hpid=topnews LOS ANGELES -- It turns out there's plenty of water on the moon - at least near the lunar south pole, scientist said Friday. "Indeed, yes, we found water. And we didn't find just a little bit, we found a significant amount," said Anthony Colaprete, a principal project investigator at NASA's Ames Research Center. The discovery came from an analysis of data from a spacecraft NASA intentionally crashed into the moon last month. 
Colaprete estimated the impact kicked up at least 25 gallons of water. Significant water would make it easier to set up a base camp for astronauts. Previous spacecraft have detected the presence of hydrogen in lunar craters near the poles. In September, scientists reported finding tiny amounts of water mixed into the lunar soil all over the lunar surface. The mission actually involved two moon shots. First, an empty rocket hull slammed into Cabeus crater. The shepherding spacecraft recorded the drama live before it also crashed into the same spot minutes later. --- On the Net: LCROSS mission: http://www.nasa.gov/mission_pages/LCROSS/main/index.html
From rforno at infowarrior.org Sat Nov 14 00:53:59 2009 From: rforno at infowarrior.org (Richard Forno) Date: Fri, 13 Nov 2009 19:53:59 -0500 Subject: [Infowarrior] - EFF: Copyright Watch Launched Message-ID: <53E822A6-FFC9-4196-B29E-AECE1CFBE7CE@infowarrior.org> Copyright Watch: http://www.copyright-watch.org/ Electronic Frontier Foundation Media Release For Immediate Release: Friday, November 13, 2009 International Activists Launch New Website to Gather and Share Copyright Knowledge Anyone Can Track National Copyright Laws Globally with Copyright Watch San Francisco - The Electronic Frontier Foundation (EFF), Electronic Information for Libraries (eIFL.net), and other international copyright experts joined together today to launch Copyright Watch -- a public website created to centralize resources on national copyright laws at www.copyright-watch.org. "Copyright laws are changing across the world, and it's hard to keep track of these changes, even for those whose daily work is affected by them," said Teresa Hackett, Program Manager at eIFL.net. "A law that is passed in one nation can quickly be taken up by others, bilateral trade agreements, regional policy initiatives, or international treaties. With Copyright Watch, people can learn about the similarities and differences in national copyright laws, and they can use that information to more easily spot patterns and emerging trends." Copyright Watch is the first comprehensive and up-to-date online repository of national copyright laws. To find links to national and regional copyright laws, users can choose a continent or search using a country name. The site will be updated over time to include proposed amendments to laws, as well as commentary and context from national copyright experts. Copyright Watch will help document how legislators around the world are coping with the challenges of new technology and new business models. "Balanced and well-calibrated copyright laws are extremely important in our global information society," said Gwen Hinze, International Policy Director at EFF. "Small shifts in the balance between the rights of copyright owners and the limitations and exceptions relied on by those who use copyrighted content can destroy or enable business models, criminalize or liberate free expression and everyday behavior, and support the development of new technologies that facilitate access to knowledge for all the world's citizens. We hope that Copyright Watch will encourage comparative research and help to highlight more and less flexible copyright regimes." "Details of copyright law used to be important only for a few people in creative industries," added Danny O'Brien, International Outreach Coordinator at EFF. "But now, with the growth of the Internet and other digital tools, we are all authors, publishers, and sharers of copyrighted works.
Copyright Watch was created so citizens of the world can share and compare information about their countries' laws." Funding to create Copyright Watch was generously provided by the Open Society Institute. Copyright Watch: http://www.copyright-watch.org For this release: http://www.eff.org/press/archives/2009/11/13 From rforno at infowarrior.org Sat Nov 14 05:06:07 2009 From: rforno at infowarrior.org (Richard Forno) Date: Sat, 14 Nov 2009 00:06:07 -0500 Subject: [Infowarrior] - DNS Problem Linked to DDoS Attacks Gets Worse Message-ID: <144719A5-D126-4AB3-A2CB-3C3D5A31A85A@infowarrior.org> DNS Problem Linked to DDoS Attacks Gets Worse By Robert McMillan, IDG News Service - Fri Nov 13, 2009 4:20PM EST http://tech.yahoo.com/news/pcworld/20091113/tc_pcworld/dnsproblemlinkedtoddosattacksgetsworse Internet security experts say that misconfigured DSL and cable modems are worsening a well-known problem with the Internet's DNS (domain name system), making it easier for hackers to launch distributed denial-of-service (DDoS) attacks against their victims. According to research set to be released in the next few days, part of the problem is blamed on the growing number of consumer devices on the Internet that are configured to accept DNS queries from anywhere, what networking experts call an "open recursive" or "open resolver" system. As more consumers demand broadband Internet, service providers are rolling out modems configured this way to their customers said Cricket Liu, vice president of architecture with Infoblox, the DNS appliance company that sponsored the research. "The two leading culprits we found were Telefonica and France Telecom," he said. In fact, the percentage of DNS systems on the Internet that are configured this way has jumped from around 50 percent in 2007, to nearly 80 percent this year, according to Liu. Though he hasn't seen the Infoblox data, Georgia Tech Researcher David Dagon agreed that open recursive systems are on the rise, in part because of "the increase in home network appliances that allow multiple computers on the Internet." "Almost all ISPs distribute a home DSL/cable device," he said in an e- mail interview. "Many of the devices have built-in DNS servers. These can sometimes ship in 'open by default' states." Because modems configured as open recursive servers will answer DNS queries from anyone on the Internet, they can be used in what's known as a DNS amplification attack. In this attack, hackers send spoofed DNS query messages to the recursive server, tricking it into replying to a victim's computer. If the bad guys know what they're doing, they can send a small 50 byte message to a system that will respond by sending the victim as much as 4 kilobytes of data. By barraging several DNS servers with these spoofed queries, attackers can overwhelm their victims and effectively knock them offline. DNS experts have known about the open recursive configuration problem for years, so it's surprising that the numbers are jumping up. However, according to Dagon, a more important issue is the fact that many of these devices do not include patches for a widely publicized DNS flaw discovered by researcher Dan Kaminsky last year. That flaw could be used to trick the owners of these devices into using Internet servers controlled by hackers without ever realizing that they've been duped. Infoblox estimates that 10 percent of the open recursive servers on the Internet have not been patched. 
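As a rough, defensive illustration of the "open recursive" behaviour described above, the sketch below - assuming Python 3 and nothing beyond the standard library - builds a minimal DNS query with the recursion-desired bit set and reports whether a server replies with recursion available; the helper name and the example.com query are invented for illustration, and it should only ever be pointed at resolvers you operate yourself. The amplification the article describes is simply the size mismatch at work: a query of roughly 50 bytes drawing a reply of up to 4 kilobytes is on the order of an 80-fold gain.

import socket
import struct

def answers_recursive_queries(server_ip, qname="example.com", timeout=3.0):
    """Send one recursive DNS query (A record) to server_ip and report whether
    the reply has the Recursion Available (RA) flag set, i.e. whether the
    server acts as an open resolver toward this client. Test only servers
    you are responsible for."""
    txid = 0x1234
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)   # flags 0x0100 = RD bit set
    qname_wire = b"".join(
        bytes([len(label)]) + label.encode("ascii") for label in qname.split(".")
    ) + b"\x00"
    query = header + qname_wire + struct.pack(">HH", 1, 1)      # QTYPE=A, QCLASS=IN

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        sock.sendto(query, (server_ip, 53))
        reply, _ = sock.recvfrom(4096)
    except socket.timeout:
        return False                                            # no answer for outside clients
    finally:
        sock.close()

    if len(reply) < 12 or reply[:2] != query[:2]:               # too short or wrong transaction ID
        return False
    (flags,) = struct.unpack(">H", reply[2:4])
    recursion_available = bool(flags & 0x0080)                  # RA bit
    refused = (flags & 0x000F) == 5                             # RCODE 5 = REFUSED
    return recursion_available and not refused

# Hypothetical usage against a resolver you run: answers_recursive_queries("192.0.2.1")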
The Infoblox survey was conducted by The Measurement Factory, which gets its data by scanning about 5 percent of the IP addresses on the Internet. The data will be posted here in the next few days. According to Measurement Factory President Duane Wessels, DNS amplification attacks do occur, but they're not the most common form of DDoS attack. "Those of us that track these and are aware of it tend to be a little bit surprised that we don't see more attacks that use open resolvers," he said. "It's kind of a puzzle." Wessels believes that the move toward the next-generation IPv6 standard may be inadvertently contributing to the problem. Some of the modems are configured to use DNS server software called Trick Or Treat Daemon (TOTd) -- which converts addresses between IPv4 and IPv6 formats. Often this software is configured as an open resolver, Wessels said.
From rforno at infowarrior.org Sat Nov 14 05:17:49 2009 From: rforno at infowarrior.org (Richard Forno) Date: Sat, 14 Nov 2009 00:17:49 -0500 Subject: [Infowarrior] - Terms of Digital Book Deal With Google Revised Message-ID: <240D1E87-1521-4C03-A3A1-44F7C2E34E9C@infowarrior.org> Terms of Digital Book Deal With Google Revised By BRAD STONE and MIGUEL HELFT Published: November 13, 2009 http://www.nytimes.com/2009/11/14/technology/internet/14books.html?hp SAN FRANCISCO - Google and groups representing book publishers and authors filed a modified version of their controversial books settlement with a federal court on Friday. The changes would pave the way for other companies to license Google's vast digital collection of copyrighted out-of-print books, and might resolve Google's conflicts with European governments. The settlement, for a 2005 lawsuit over Google's ambitious plan to digitize books from major American libraries, outlined a plan to create a comprehensive database of in-print and out-of-print works. But the original agreement, primarily between Google, the Authors Guild and the Association of American Publishers, drew much criticism. The Justice Department and others said Google was potentially violating copyright law, setting itself up to unfairly control access to electronic versions of older books and depriving authors and their heirs of proper compensation. The revisions to the settlement primarily address the handling of so-called orphan works, the millions of books whose rights holders are unknown or cannot be found. The changes call for the appointment of an independent fiduciary, or trustee, who will be solely responsible for decisions regarding orphan works. The trustee, with Congressional approval, can grant licenses to other companies who also want to sell these books, and will oversee the pool of unclaimed funds that they generate. If the money goes unclaimed for 10 years, according to the revised settlement, it will go to philanthropy and to an effort to locate rights holders. In the original settlement, unclaimed funds reverted to known rights holders after five years. The changes also restrict the Google catalog to books published in the United States, Britain, Australia or Canada. That move is intended to resolve objections from the French and German governments, which complained that the settlement did not abide by copyright law in those countries.
The revised settlement could make it easier for other companies to compete with Google in offering their own digitized versions of older library books because it drops a provision that was widely interpreted as ensuring that no other company could get a better deal with authors and publishers than the one Google had struck. "We're disappointed that we won't be able to provide access to as many books from as many countries through the settlement as a result of our modifications, but we look forward to continuing to work with rightsholders from around the world to fulfill our longstanding mission of increasing access to all the world's books," the engineering director for Google Book Search, Dan Clancy, wrote in a blog post on the company's Web site on Friday evening. In the next week, Judge Denny Chin of the United States District Court for the Southern District of New York is expected to set a date for a fairness hearing, where arguments from both sides will be heard about whether or not to approve the settlement. The changes are not likely to please all the opponents of the original settlement. But the parties are hoping they will placate the concerns raised by the Justice Department, which in September asked a federal judge to reject the original $125 million agreement. While the decision on whether to approve the deal will be in the hands of Judge Chin, the Justice Department's opinion is an important factor. Gina Talamona, a spokeswoman for the Justice Department, said that the department would review the filing, and that its investigation into possible anticompetitive practices involving the rights to digital books was continuing. Google and its partners had hailed the original agreement, signed in October 2008, as a public good. They said it would allow Google to create an immense digital library that would expand access to millions of out-of-print books, while creating new ways for authors and publishers to profit from digital versions of their works. Google's library would be searchable online, and users would have free access to 20 percent of the text in each book. Google would also sell subscriptions to the entire collection to universities and other institutions. Every public library in the United States would be able to offer its patrons free access to the full collection at one terminal. Users would be able to buy access to full texts at home. Google, authors and publishers would split all revenue generated through the system. As part of the settlement, Google would pay to establish a Books Rights Registry, to be run by representatives of authors and publishers, that would administer payments. But earlier this year, academics, legal scholars and some librarians expressed concern that the settlement would grant Google a virtual monopoly over orphan works, making it nearly impossible for anyone else to build a comprehensive digital library. Some librarians feared that without competition, Google would be free to raise prices arbitrarily. Other critics said the agreement turned copyright law on its head by granting Google the license to profit from works unless rights holders objected. Some argued that orphan works authors and foreign authors were not properly represented by the Authors Guild. The proposed settlement prompted several hundred filings with the court, the vast majority opposing all or parts of the deal. In a Sept. 18 filing, the Justice Department echoed many of the concerns.
While saying that the settlement provided many benefits, it urged Judge Chin to reject it, saying it raised antitrust, class-action and copyright issues. But the Justice Department also encouraged the parties to work to modify the agreement to salvage its benefits and overcome its problems. The Justice Department filing prompted the parties to withdraw the original agreement and revise it. From rforno at infowarrior.org Sat Nov 14 22:57:58 2009 From: rforno at infowarrior.org (Richard Forno) Date: Sat, 14 Nov 2009 17:57:58 -0500 Subject: [Infowarrior] - 14 tech firms form cybersecurity alliance for government Message-ID: <5B20E5F7-665C-4610-A8EA-4B1491927976@infowarrior.org> 14 tech firms form cybersecurity alliance for government Lockheed Martin, top suppliers launch initiative for government market By Wyatt Kash, Nov 12, 2009 Thirteen leading technology providers, together with Lockheed Martin, today announced the formation of a new cybersecurity technology alliance. The announcement coincided with the opening of a new NexGen Cyber Innovation and Technology Center in Gaithersburg, Md., designed to test and develop new information and cybersecurity solutions for government and commercial customers. The alliance represents a significant commitment on the part of competing technology companies to work collaboratively on new ways to detect and protect against cyber threats and develop methods that could automatically repair network systems quickly after being attacked. The companies participating in the Cyber Security Alliance include APC by Schneider Electric, CA, Cisco, Dell, EMC Corp. and its RSA security division, HP, Intel, Juniper Networks, McAfee, Microsoft, NetApp, Symantec and VMware. Art Coviello, EMC executive vice president and president of RSA, speaking on behalf of the new alliance at the center's dedication ceremony, highlighted the importance of combining the strengths of the companies at the NexGen center. "Our adversaries operate in sophisticated criminal ecosystems that enable and enhance their attacks," he said. To defend against such attacks, "we need to build effective security ecosystems based on collaboration, knowledge sharing and industry best practices." "One of the challenges in moving from being reactive to being predictive," said Lockheed Martin chairman, president and chief executive officer, Robert Stevens, "is the need to model real-world attacks and develop resilient cyber defenses to keep networks operating while they're under attack." That and the ability to test solutions from end-to-end across a variety of hardware and software technologies are among the primary goals of the new cyber innovation and technology center. Nearly $10 million worth of software and equipment was contributed to the NexGen center by members of the Cyber Security Alliance, according to Charles Croom, vice president of Cyber Security Solutions for Lockheed Martin Information Systems & Global Services. The 25,000-square-foot design and collaboration center is co-located with Lockheed Martin's new global cyber innovation range and the corporation's network defense center. The network defense center routinely handles 4 million e-mail messages and about 10T of data per day en route to and from Lockheed Martin's 140,000 employees. Analysts there look continually for malicious activity and data patterns, such as executable software code embedded in a PDF attachment. 
The new NexGen facility will be able to tap into the defense center's data feeds, or simulate government agency computing environments, and test various approaches to mitigate cyberattacks, according to Richard Johnson, chief technology officer for Lockheed Martin Information Systems & Global Services. It can also be used to test ways of improving operating efficiencies, he said. The center includes seven collaboration areas as well as high definition video teleconferencing capabilities. The new center also features dedicated distributed cloud computing and virtualization capabilities. Those capabilities would permit an agency to simulate a network under attack and test various responses. For instance, analysts could replicate an operating network and freeze it on a second virtual location, in order to study the nature of the attack, while still supporting the primary network. "We face significant known and unknown threats to our critical infrastructure," Croom said. "We not only need solid defenses but also the right technologies to predict and prevent future threats." Croom said the new Cyber Security Alliance, and in particular the ability for experts from participating companies to work jointly on some of the harder problems agencies face, is one of the elements that distinguishes the NexGen from other testing facilities. About the Author Wyatt Kash is editor in chief for Government Computer News. From rforno at infowarrior.org Mon Nov 16 00:23:48 2009 From: rforno at infowarrior.org (Richard Forno) Date: Sun, 15 Nov 2009 19:23:48 -0500 Subject: [Infowarrior] - New Law to Bar Misuse of Genetic Testing by Employers Message-ID: <518089FF-BACF-4331-BF3A-9D1A84D3E543@infowarrior.org> November 16, 2009 New Law to Bar Misuse of Genetic Testing by Employers By STEVEN GREENHOUSE http://www.nytimes.com/2009/11/16/business/16genes.html?_r=1&hp=&pagewanted=print A landmark antidiscrimination law -- the Genetic Information Nondiscrimination Act -- will take effect in the nation's workplaces next weekend, prohibiting employers from requesting genetic testing or considering someone's genetic background in hiring, firing or promotions. The act also prohibits health insurers and group plans from requiring such testing or using genetic information -- like a family history of heart disease -- to deny coverage or set premiums or deductibles. "It doesn't matter who's asking for genetic information, if it's the employer or the insurer, the point is you can't ask for it," said John C. Stivarius Jr., a trial lawyer based in Atlanta who advises businesses about the new law. The biggest change resulting from the law is that it will -- except in a few circumstances -- prohibit employers and health insurers from asking employees to give their family medical histories. The law also bars group health plans from the common practice of rewarding workers, often with lower premiums or one-time payments, if they give their family medical histories when completing health risk questionnaires. "Genetic information is very broad," said J. D. Piro, a principal in the Health Care Law Group at Hewitt Associates. "It doesn't simply include my own genetic information, such as do I have a risk for cancer. It also includes my family medical history -- do I have any relatives who have had cancer or leukemia." Genetic tests help determine whether someone is at risk of developing an inherited disease or medical condition. These tests identify variations in people's genes, like whether a woman has a predisposition for ovarian cancer. 
Such testing can help determine which course of treatment might work best for fighting a specific cancer or for helping a patient's body process a specific drug. The new law (commonly called GINA) was passed by Congress last year because many Americans feared that if they had a genetic test, their employers or health insurers would discriminate against them, perhaps by firing them or denying coverage. In a nationwide survey, 63 percent of respondents said they would not have genetic testing if employers could access the results. "The message to employees is they should now be able to get whatever genetic counseling or testing they need and be less fearful about doing so," said Peggy R. Mastroianni, associate legal counsel for the federal Equal Employment Opportunity Commission. The act takes effect on Nov. 21 for all companies with 15 or more employees. It applies to group health insurers whose plan years begin on or after Dec. 7, and it took effect for individual health insurance plans last May. The act does not apply to life insurers. While the act makes it illegal for employers to intentionally acquire genetic information, it includes a "water cooler" exception, as in a case where a manager overhears one employee telling another that his father had a stroke. Under the act, it is legal for a manager to learn from an obituary that an employee's mother died of breast cancer. And if a manager asks why a worker took off a week to care for his father under the Family Medical Leave Act, it generally will not be considered illegal if the employer learns that the worker's father has pancreatic cancer. The act nonetheless prohibits use of such inadvertent knowledge to alter the terms, conditions or privileges of someone's employment. "The challenge becomes what if down the road, the employee has a lot of absences or his performance is off, and you discipline the employee," said Michael P. Aitken, director of governmental affairs for the Society of Human Resource Management. "The employee could come back and say, 'That's because you knew I had a genetic marker.'" The act and its accompanying regulations allow group health plans to request family medical histories to help determine whether an employee should be placed in a disease management or wellness program to combat, for instance, high blood pressure. The regulations stress that employees must give that information voluntarily and that the group plan cannot request such information before health plan enrollment or use it in any way for underwriting. Under the regulations, group health plans, in seeking information for wellness programs, cannot attach a request for family medical history to any penalty or, as is far more common, any benefit. But wellness programs can request family medical history if there is no financial benefit attached. "This can be a big deal," said Mr. Stivarius, the Atlanta lawyer. "A lot of people incentivize employees to provide this family medical information. They give them some extra paid time off if they participate in surveys. Now they can't do that." Mr. Piro of Hewitt said many employers were only now realizing that their health risk questionnaires might violate the law. "A lot of employers have to scramble to scrub this information out of their health risk questionnaires," he said. "The alternative is to modify their reward structures so that they're not considered to be purchasing or requiring the genetic information." 
Susan Pisano, a spokeswoman for America?s Health Insurance Plans, said the new rules were a challenge for insurers because they were taking effect during the open enrollment period. She said her industry group disagreed with the federal agencies? interpretation that the law bars incentives to encourage employees to fill out family medical histories. From rforno at infowarrior.org Mon Nov 16 14:30:55 2009 From: rforno at infowarrior.org (Richard Forno) Date: Mon, 16 Nov 2009 09:30:55 -0500 Subject: [Infowarrior] - A 'feel-good' label for 'at-risk' kids? Message-ID: <3BDB8F57-5963-4017-A1D8-3B34B3AEB248@infowarrior.org> OFFS, right?? -rf A 'feel-good' label for 'at-risk' kids? By Jay Mathews Monday, November 16, 2009 http://www.washingtonpost.com/wp-dyn/content/article/2009/11/15/AR2009111502189_pf.html I sympathize with those who might not be comfortable with the latest plan to rid our schools of at-risk kids. Several educators across the country, including Alexandria Superintendent Morton Sherman, have decided not to call them that anymore. Henceforth they will be known as "at-promise" children. "We use the term 'at-promise' in Alexandria City Public Schools to describe children who have the potential to achieve at a higher rate than they are currently achieving," Sherman said in a July 23 op-ed in the Alexandria Gazette Packet. "Really, all children are at-promise, because we, as educators, have made a promise to each and every child that we will work toward higher achievement for all." Cathy David, Alexandria schools deputy superintendent, said at a School Board meeting last December: "The previous 'at-risk' model was a deficit model that identified and categorized children by criteria such as low income, special education, ethnicity or English language proficiency, with the assumption that if the criteria fit the child, then the child must have some sort of deficit. The 'at-promise' model comes from strengths." Word of this change has spread slowly. I first heard it a few days ago from a teacher. I sought reaction from people I know who stay current on educational trends. They weren't thrilled. "This is a perfect example of school systems concentrating on feel- good language instead of admitting that part of the problem of low achievement is caused by the lack of motivation and effort on the student's part," said Vern Williams, a nationally recognized math teacher in Fairfax County. Abigail Thernstrom, a McLean-based education scholar and vice chairwoman of the U.S. Commission on Civil Rights, said, "The schools can change the rhetoric, but at the end of the day, all that counts is what they actually accomplish." Former Arlington County School Board chairman David Foster said "at- promise" is "a politically correct term that conveys no meaning." "The wordsmithing of 'at risk' vs. 'at promise' is an example of K-12 gobbledygook at its worst -- not only a distinction without a difference but a really awkward phrasing at that," said J. Martin Rochester, a political science professor at the University of Missouri- St. Louis. Still, Sherman and David are exemplary educators who are thoughtful about their jobs. They knew they were going to get slammed for this, as did other teachers who have adopted it. But they taught their students to strive for clarity in speaking and writing, and "at risk" wasn't doing that for them. "At promise" has been floating around for at least a decade. 
The earliest media reference I could find was in a September 1997 Associated Press article about a mentoring program for junior high students in Norfolk, Neb. The term did not find fertile soil until 2004, when motivational speaker and educational consultant Larry Bell used it often in a speech to a San Diego conference sponsored by SIATech, a nonprofit group that runs 14 Job Corps training centers in four states. Two SIATech officials, Eileen Holmes and Linda Dawson, were so inspired that they started holding at-promise conferences. Then they established the Reaching At-Promise Students Association to spread the notion that every child has potential to improve. Attention-grabbing labels frequently blossom in the education world, then wither away. This might be just one more. But that does not mean that the people embracing it are wrong. Educators accept without thinking many concepts that encourage unhealthy policies. What about our focus on the achievement gap, which urges improvement for minority students but implies that white kids are doing well enough? Why not seek more achievement for all? Isn't that what the at-promise concept means? The educators who have adopted this buzz phrase will be getting more than their share of taunting e-mails. At-promise students may lose the label before they knew they had it. But the teachers I know who do the most for kids are positive thinkers, just like the at-promise people. Maybe we should be, too. From rforno at infowarrior.org Mon Nov 16 16:59:55 2009 From: rforno at infowarrior.org (Richard Forno) Date: Mon, 16 Nov 2009 11:59:55 -0500 Subject: [Infowarrior] - SLR Paper: Deep Secrecy Message-ID: <2CA62DC9-724D-4012-AFF3-07390D6F0F7E@infowarrior.org> Deep Secrecy David Pozen http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1501803 Stanford Law Review, Forthcoming Abstract: This Article offers a new way of thinking and talking about government secrecy. In the vast literature on the topic, little attention has been paid to the structure of government secrets, as distinct from their substance or function. Yet these secrets differ systematically depending on how many people know of their existence, what sorts of people know, how much they know, and how soon they know. When a small group of similarly situated officials conceals from outsiders the fact that it is concealing something, the result is a deep secret. When members of the general public understand they are being denied particular items of information, the result is a shallow secret. Every act of state secrecy can be located on a continuum ranging between these two poles. Attending to the depth of state secrets, the Article shows, can make a variety of conceptual and practical contributions to the debate on their usage. The deep/shallow distinction provides a vocabulary and an analytic framework with which to describe, assess, and compare secrets, without having to judge what they conceal. It sheds light on how secrecy is employed and experienced, which types are likely to do the most damage, and where to focus reform efforts. And it gives more rigorous content to criticisms of Bush administration practices. Elaborating these claims, the Article also mines new constitutional territory - providing an original account of the role of state secrecy generally, as well as deep secrecy specifically, in our constitutional order. 
Keywords: State Secrets, Executive Power, Constitutional Theory, Separation of Powers, Democratic Deliberation, Bureaucratic Culture, National Security, Mosaic Theory, Black Holes, Church Committee, Bush Administration, Rumsfeld, CIA, FISA, FOIA From rforno at infowarrior.org Tue Nov 17 00:50:46 2009 From: rforno at infowarrior.org (Richard Forno) Date: Mon, 16 Nov 2009 19:50:46 -0500 Subject: [Infowarrior] - Viacom's top lawyer: suing P2P users "felt like terrorism" Message-ID: Viacom's top lawyer: suing P2P users "felt like terrorism" Michael Fricklas, Viacom's general counsel, tells a group of Yale Law students that he's a huge fan of fair use, doesn't want to take down your YouTube mashup, and has no plans to start suing P2P users in federal courts -- but he still loves DRM and "three strikes" laws. By Nate Anderson http://arstechnica.com/tech-policy/news/2009/11/viacoms-top-lawyer-suing-p2p-users-felt-like-terrorism.ars Michael Fricklas is Viacom's general counsel, and it's his job to oversee the company's legal efforts, including its $1 billion lawsuit against YouTube. When people talk about Big Content, they're talking about people like Fricklas. So it might be surprising to watch him tell a class of Yale law students this month that suing end users for online copyright infringement is "expensive, and it's painful, and it feels like bullying." While the recording industry was big on this approach for a while, Fricklas certainly understands the way it came across to the public when some college student went up against "very expensive lawyers and unlimited resources and it felt like terrorism." Customers "need to be treated with respect," he added, and that respect extends even to DRM -- much of which has been "really bad." When it comes to Big Content's copyright stances, Fricklas is on board with some of the criticisms leveled at the content industries -- and he doesn't want to take your mashup down. "Even as part of a big company, and as a consumer, and as a guy who loves technology and loves gadgets and all the interesting things that are happening on the Internet, I kind of agree with [the criticisms]," he said. "I actually care a lot about fair use... What we're really focused on in our business right now is the exact copy." Fricklas points to the recent MTV music awards, where Kanye West rushed the stage, grabbed the mic, and delivered his Internet-meme-producing line, "I'mma let you finish, but..." Viacom quickly uploaded the evening's footage into the content recognition engines of sites like YouTube, which can then block exact uploads of the same footage or allow rightsholders to monetize it with ads. Viacom used the tool to block copies of the clip, but not without offering a solution of its own: the clip was hosted on Viacom websites and was embeddable and linkable. The company wanted the clip to go viral and wanted people sticking it on their blogs -- but it wanted them to use the official Viacom-hosted version, and it made it as easy as possible for people to do so. (Viacom was happy to link to parodies of the clip in question, even when they were hosted on different sites and used bits of the original clip.) Fair use, not suing your customers, providing the content people want in the way that they want it -- it sounds pretty good. So why are we in the middle of what copyright scholar William Patry calls the "Copyright Wars"? 
Kinder, gentler, but still lovin' DRM Part of the answer is that "Big Content" is of course a convenient fiction; every creator and company has a different outlook, is staffed by different individuals, and relies more or less heavily on exclusive rights under the Copyright Act. Viacom, for instance, creates copyrighted works every day, but it's also a heavy "fair user." Consider The Daily Show, for instance, and think about just how much of its daily show relies on video footage from other organizations. Fricklas even showed a spoof movie poster that Viacom had done years ago -- for which it was sued by famous photographer Annie Leibovitz -- and with which it eventually prevailed in court, claiming parodic fair use. The company also runs various user-generated content sites of its own, so it has a direct stake in many of these copyright issues from both sides of the question. There are plenty of copyright maximalists still in the business, those whose mantra is "more copyright is always better," but Fricklas insists he's not one of these. But he's also no copyfighter, and he remains a vigorous backer of tools like DRM and graduated response. While his brief talk was hardly a detailed explication of his thought on all issues copyright-related, it did illustrate why tensions exist between consumers and even forward-thinking content creators. DRM While bashing the experience of many earlier DRM schemes, Fricklas is a firm believer in the basic concept, saying that it allows consumers to have experiences they could not have without DRM (or not at the same prices). The classic cases are 1) online content rental (usually movies) and 2) online streaming (audio and video). While DRM has largely vanished on paid audio downloads, it still exists in many streaming and subscription services. Record labels aren't keen to allow users to pay for a month of music, download 80,000 tracks, and then stop subscribing. Movie rentals and on-demand streaming (iTunes, Hulu, Netflix, Epix, etc.) pose similar challenges, and all use some form of encryption to keep a bit of control over content. Sure, it's all available on the Intarwebs, but some percentage of people won't be willing to locate and grab all the same files from P2P, even though they might be willing to run a simple, local streamripper. Fricklas argues that DRM is essential to these kinds of rental models, and we're willing to concede the general point, when it's done well. (Despite using Netflix and Hulu regularly, I have yet to be impeded by any sort of encryption or DRM, and there's no real issue about making backup copies when the content lives in the cloud.) But consumer frustration with DRM isn't generally about rentals; it's about ownership, and video producers have been unwilling to remove DRM either on physical media (in fact, Blu-ray's gotten much tougher) or digital downloads. This certainly isn't a "new" business model in any way, and DRM on these products does in fact butt up against consumer rights (fair use) and expectations in obvious ways. Ripping a DVD to an iPod, using an external Blu-ray drive to load a film onto a PC for a long trip, making backup copies of those expensive Disney films your kids love, using a film clip in a mashup or piece of criticism -- these are all rendered difficult or impossible to do legally by DRM. What is content protection "enabling" here? 
One argument sometimes heard from rightsholders is that DRM applied to ownership models still "enables" other models like rental because unencrypted Blu-ray discs (for instance) would be easily pirated. And once pirated, they exist all over the Internet, and people can simply download them for free instead of dropping $3 on an online rental. But the films inevitably make their way to the 'Net regardless of such protections (and often in advance of the "protected" versions even being offered for sale at all), so it's hard to see how this applies. A comparison with the music business is instructive here; after pushing hard for DRM, the industry eventually abandoned it once it realized that the system made it overly dependent on the dominant DRM provider (in this case, Apple). And the digital physical format for music, the compact disc, has gone unprotected for a couple of decades. The result? Streaming and subscription models continue to proliferate at places like Rhapsody, Spotify, Last.fm, Lala, and the Zune store. The "DRM enables new business models" idea may have some truth to it, but the movie and video businesses are more than happy to apply tough DRM to their old-style ownership models, long after even music has abandoned the practice. As the examples above indicate, DRM also goes far beyond copyright law in restricting what buyers can do with things like Blu-ray discs. In this sense, code trumps law, and it's a criticism that people like Fricklas recognize (it appeared on one of his slides, but was not discussed in the talk). Their answer -- do things like offer digital, computer-ready copies of films on Blu-ray and DVDs -- is helpful, though it simply swaps one DRM scheme for another. Graduated response Another area of tension between consumers and rightsholders is graduated response, sometimes referred to as "three-strikes" policies that sanction those accused of repeat copyright infringement online. While the content industries like to tout graduated response as a kinder, gentler way to handle these issues, the worldwide public hasn't been sold on the plan. The European Parliament voted several times to ban such schemes unless they had judicial oversight, while France's attempt at passing a graduated response law was defeated once in the legislature and once by the Constitutional Council before finally being passed. New Zealand had to scrap its three-strikes plan and start over after resistance from users and ISPs, and the UK is in the midst of a furious row over the idea. Graduated response has never been introduced in Congress, and no major ISP has agreed to adopt the approach voluntarily. Still, Fricklas is big on the idea. It's definitely a saner solution to the issue than hauling college kids into federal court, and features sanctions "more proportional to the harm." (This is certainly debatable when it comes to France-style disconnections and blacklists, however, especially on family accounts.) And Fricklas wants to make sure that there are rights of appeal, since the process can sometimes be a bit too "guilty until proven innocent." But he'd like to see it handled in a "non-court way" through an ombudsman or arbiter, not through a judge. Being able to appeal the issue to a judge would certainly increase everyone's costs and could result in more of the same spectacle that Fricklas hopes to avoid, but Internet access has become a fundamental utility. 
When sanctions, and especially disconnection, are on the table, issues of due process and law become critical (and even France's scheme now has judicial oversight of the final step, disconnection). This is especially true given what Fricklas said earlier in his talk when he was bashing the record industry for suing individuals: IP addresses can be spoofed, mistakes can be made, and even with an IP address it's often not possible to tell who actually did the sharing. For all these reasons, a mandatory graduated response with Internet disconnections and no judicial right of appeal will remain one of the areas on which consumers and rightsholders just can't see eye-to-eye. The whole talk is worth watching (it's 37 minutes) if you want to better understand Big Content's copyright perspective from one of its top practitioners. Of special note is the segment on Viacom's "innovation," where Fricklas defends the company against charges often made against copyright owners that they are all but incapable of doing anything new and interesting. It's also a good reminder of places where consumers and the content industry part ways, and why the Copyright Wars continue to be fought. From rforno at infowarrior.org Tue Nov 17 04:01:01 2009 From: rforno at infowarrior.org (Richard Forno) Date: Mon, 16 Nov 2009 23:01:01 -0500 Subject: [Infowarrior] - Network Shuttered By MPAA Re-Opens Message-ID: <419FC6FF-6614-4E62-B9D8-D0BBD0B3D60D@infowarrior.org> Key takeaway for the MPAA and their bretheren is in the final paragraph. Think they'll learn? Survey says.....NO. --rf Wi-Fi Network Shuttered By MPAA Re-Opens After week of bad press, Sony suddenly feels cooperative 05:21PM Friday Nov 13 2009 by Karl Bode http://www.dslreports.com/shownews/WiFi-Network-Shuttered-By-MPAA-ReOpens-105492 Earlier this week we reported how a free, tiny (1,000 feet total) municipal Wi-Fi network in Ohio was forced to shut down after an MPAA legal warning. A network user had apparently transferred a file copyrighted by Sony Pictures, and instead of risking a costly legal fight, the network decided to simply shut down. The news quickly spread across the Internet, something that apparently didn't make Sony all that comfortable. One local user sends us this local NBC affiliate report that says the network has been turned back on after a request by Sony: Levine says the news of the shut down spread very quickly from D.C. to California in less than a week, and people from across the country bombarded Sony Pictures Entertainment with complaints about big companies picking on small towns. Finally, Levine explains that Jim Kennedy, SVP of Corporate Communications for Sony Pictures Entertainment, e-mailed the county and asked them to turn the wi-fi service back on because of the complaints. Sony says they'll kindly "help the county identify ways to prevent similar offenses from happening in the future." Of course if the MPAA and Sony had approached the network owners like human beings in the first place -- instead of engaging in the kind of scorched earth tactics they've employed for several years now -- they probably wouldn't have gotten the bad press to begin with. 
From rforno at infowarrior.org Tue Nov 17 12:43:59 2009 From: rforno at infowarrior.org (Richard Forno) Date: Tue, 17 Nov 2009 07:43:59 -0500 Subject: [Infowarrior] - "Unfriend" is US dictionary's 2009 word of the year Message-ID: http://www.washingtonpost.com/wp-dyn/content/article/2009/11/16/AR2009111603964.html?hpid=sec-nation Unfriend is US dictionary's 2009 word of the year The Associated Press Monday, November 16, 2009; 9:58 PM NEW YORK -- What word sums up 2009? How about unfriend? That's the New Oxford American Dictionary's 2009 Word of the Year. It means to remove someone as a friend on a social networking Web site such as Facebook. Each year Oxford University Press tracks how the English language is changing and chooses a word that best reflects the mood of the year. Oxford lexicographer Christine Lindberg says unfriend has "real lex appeal." Finalists for 2009 also included netbook, which is a small laptop, and sexting, which is sending sexually explicit texts and pictures by cell phone. From rforno at infowarrior.org Tue Nov 17 13:56:04 2009 From: rforno at infowarrior.org (Richard Forno) Date: Tue, 17 Nov 2009 08:56:04 -0500 Subject: [Infowarrior] - US Struggles with 'Electronic Fratricide' in Afghanistan Message-ID: <8DD3EE0E-FF43-4229-B2CD-19ECBFA67F4B@infowarrior.org> U.S. Struggles with 'Electronic Fratricide' in Afghanistan By Nathan Hodge, November 16, 2009, 1:13 pm http://www.wired.com/dangerroom/2009/11/us-struggles-with-electronic-fratricide-in-afghanistan/ In Afghanistan, western militaries use radio frequency jammers to keep troops safe from remotely-detonated bombs. But those jammers and other gadgets have contributed to a "pollution" of the airwaves so severe that over 200 systems at Afghanistan's main air base can't talk to one another. This problem of so-called "electronic fratricide" first appeared in Iraq, where jammers made it tough to control drones and ground robots. Last year, for instance, Commander William Guarini, the head of the U.S. Navy's Riverine Squadron 1, publicly complained that the service's Silver Fox drone was "very susceptible" to electromagnetic interference. "In particular with our convoys, with our electronic countermeasure systems going off, they really degrade our range," he said. "And then we have a problem recovering [the drone]." As the military sends more gear to Afghanistan, members of the military's tight-knit electronic warfare community are worrying about a repeat of the problem. Writing in the latest issue of Aviation Week & Space Technology (subscription only, sorry), reporters Dave Fulghum and Robert Wall describe concerns recently aired at a meeting of the Association of Old Crows, the professional organization for electronic warfare specialists. Equipment is flowing into the main bases at Kandahar and Bagram (where the classified Area 84 is growing exponentially) at a rate that alarms some U.S. Army officials. They have publicly complained (at the recent Old Crows Assn. show) that at Bagram Air Base alone there are 200 systems that cannot communicate with one another. Critics predict the polluted electronic environment around Baghdad -- which has slashed the range of data links and foiled the coverage of some radars and improvised explosive device-jammers -- is quickly being duplicated in Afghanistan. 
Further complicating matters, Afghanistan has a complex, mountainous terrain that can often make it more difficult to operate many of the sensors that were used successfully in Iraq. Quoting an anonymous senior defense official, Fulghum and Wall report that the buildup of drones and manned aircraft in Afghanistan was "being crippled by a lack of aviation ramp space, personnel and sensors that can deal with terrain that bears almost no resemblance to Iraq." From rforno at infowarrior.org Wed Nov 18 00:34:52 2009 From: rforno at infowarrior.org (Richard Forno) Date: Tue, 17 Nov 2009 19:34:52 -0500 Subject: [Infowarrior] - Cybersecurity: GAO Says It All Again Message-ID: <75FF9C5C-0830-4003-AE33-5ECE5488FE21@infowarrior.org> While I've not read it yet, a quick skim suggests this is More of The Same Stuff (tm) we've seen for the past 15 years. Any bets on whether this would qualify as a cut-and-paste document from 1999? 2002? 2005? 2008? --rf Continued Efforts Are Needed to Protect Information Systems from Evolving Threats Statement of Gregory C. Wilshusen, Director Information Security Issues David A. Powner, Director Information Technology Management Issues For Release on Delivery Expected at 10:00 a.m. EST Tuesday, November 17, 2009 PDF Report at http://cryptome.org/gao-10-230t.pdf From rforno at infowarrior.org Wed Nov 18 22:57:54 2009 From: rforno at infowarrior.org (Richard Forno) Date: Wed, 18 Nov 2009 17:57:54 -0500 Subject: [Infowarrior] - The Accessibility Paradox Message-ID: <00B3E51B-EA20-4561-8CF8-97DB57658BA7@infowarrior.org> The Accessibility Paradox | Peer to Peer Review Barbara Fister, Gustavus Adolphus College, St. Peter, MN -- Library Journal, 10/29/2009 http://www.libraryjournal.com/article/CA6704324.html?nid=2673&source=title&rid=1950725787 The book world has been harrumphing about a battle among big box stores to sell the season's biggest books at the cheapest price. In order to draw customers into their stores, Target and Wal-Mart are making ten bestselling authors' books available for under ten bucks. (Wisconsin is missing all the excitement -- they have a law against dumping goods below wholesale prices -- but Amazon has joined in the fray, so Wisconsinites can still go online and pre-order bestsellers at low-low prices.) The American Booksellers Association has even asked the Department of Justice to intervene. I'm somewhat bemused to see a Barbara Kingsolver book among the discounted books -- attention shoppers! Critique of corporate greed and US imperialism on sale in aisle three! But I'm also taken aback by the horrified response of the book industry. I thought the big crisis was that nobody reads. Now it turns out the problem is that books are so popular with the masses they're being used as bait to draw in shoppers. Come on, guys, get your story straight! Which is it? Information wants to be valued It strikes me that this issue is somewhat parallel to the love-hate relationship that many academic librarians have had with the Internet. Although our complicated relationship is improving, there are still some silly assumptions floating around. Oh no, our reference stats are down! Hurrah! People are able to find answers without our help. That's awesome! Anybody can publish on the web, unlike scholarly journals which are peer-reviewed. Fine, but don't tell me all peer-reviewed journal articles are shining examples of reason and academic brilliance. A lot of them are finely-sliced research rehashing the same findings, or are closely examined and exquisitely detailed trivia. 
Besides, there are plenty of examples of peer review failing in spectacular ways -- and examples of wonderful peer-reviewed journals that were born free online. But this is my favorite: Unlike information you find on the web, we pay for the information in our databases, and you get what you pay for. No, actually, with what you pay for you get a lot of junk that you don't even want, but you have no choice. You want this journal? You have to subscribe to this pricey bundle. Either that, or you purchase one article at a time for your users, something more and more libraries are doing. You spend less, but the information never visits the library -- it goes straight from the publisher to the desk of one user. All the library gets is the bill. Apart from failing on its merits, the argument that paid is better than free is self-contradicting. We can't tell students that purchased information is by definition better than free and, at the same time, beg faculty to recognize how broken the current system is and please, please, please make their work open access. Fortunately, most librarians have gotten used to the fact that the Internet is a tremendous boon to researchers and that free information is a fantastic idea. Sure, we haven't yet reallocated our organizational resources to recognize this fact -- our staff time is much more likely to be devoted to acquiring and messing about with purchased information than in making good information from our archives, our labs, or the web more easily available. That work is "other duties as assigned" or soft-funded special projects that suffer from lack of support. But at least we've reached the point where most librarians no longer have a kneejerk resistance to Wikipedia or insist that information acquired through the library is the only legitimate kind. We're even getting past being numbed by the realization that the OPAC sucks no matter how much money we spend on it and are exploring open source solutions that are better and that cost far less. But we still worry about our obsolescence. If information is given away for free, or if remote users don't realize we paid for it, when the traditional work of libraries is shifted to things other than buying and organizing physical containers of information, how will we justify our existence? Being a purchasing agent for faculty really isn't my idea of a fun future, or a sustainable one. To frame the issue another way: What value will the library have for academia if we actually succeed and have an open access future? Will the library, like the perfect Communist state, wither away? We need to separate our value -- the way we curate information, champion its availability in the face of intolerance of unpopular ideas and economic disparity, and create conditions for learning how to find and use good information -- from the amount of money it takes to acquire stuff on the not-so-open market. We need to be quite clear that good information is good information, no matter how it's funded. And we need to find creative ways to partner with those who add value to information and find sustainable models for the editorial work that can make good academic work better. Cheap thrills The other day at the grocery store I was tickled to see a spinner near the checkout lane filled with Little Golden Books. The Pokey Little Puppy! Scuffy the Tugboat! They don't cost 35 cents anymore, but for under three bucks I could buy the same titles I talked my mother into buying me a long time ago. That's cheap! 
Yet nobody seems to think having these inexpensive books in grocery stores cheapens books. Independent booksellers, quite honestly, are not threatened by ten popular books being treated as loss leaders. For years they've been disadvantaged by bestseller discounts at chains, discount stores, and Amazon; they haven't been able to compete on the sale of bestsellers for years, so successful indies have excelled at other things?being a community resource, capitalizing on their curatorial skills to fill niches, helping people discover books one-on-one. The major trade publishers whose business model is built around the Big Book, the out- of-the-ballpark hit, have more reason to worry. Their idea of success is selling lots of copies of a book to occasional readers rather than satisfying the varied tastes of avid readers. If the occasional reader thinks ten bucks is all they should pay, then the big-book theory of profits crumbles. But that doesn't mean there is no future for books. In fact, they are such a popular draw that they are being used as ammunition in a pitched battle for consumers' attention. Surely, that's reason for celebration. Somebody will have to publish books because readers obviously want them. The economics may shift, but at least the product is in demand. In the same way, we have come to realize that libraries will endure. Our professional roles will change, our relationship to our users and to the information they seek will evolve, but there is still a demand for high quality information and for assistance in learning how to make good choices. Creating, curating, and publishing are likely to start blending in interesting new ways. The value of what we do will still be there. We just have to figure out how to stop equating the price tag with our actual worth, roll up our sleeves, and find interesting new ways to make good information freely available. Isn't that what libraries have always been about? Barbara Fister is a librarian at Gustavus Adolphus College, St. Peter, MN, a contributor to ACRLog, and an author of crime fiction. Her next mystery, Through the Cracks, will be published by Minotaur Books in 2010. From rforno at infowarrior.org Thu Nov 19 03:29:14 2009 From: rforno at infowarrior.org (Richard Forno) Date: Wed, 18 Nov 2009 22:29:14 -0500 Subject: [Infowarrior] - Entertainment Industry Wants More People To Know About OpenBitTorrent Tracker Message-ID: <6D8A6BA4-6BF3-4E9D-AC17-CAFFA682E7B8@infowarrior.org> Entertainment Industry Wants More People To Know About OpenBitTorrent Tracker from the for-what-reason? dept http://techdirt.com/articles/20091118/1218246994.shtml The definition of insanity, the saying goes, is doing the same thing over and over again and expecting different results. For the past decade, the entertainment industry has sued one site or service after another that was used for unauthorized file sharing at some time. In every single case, the act of suing that site or service ended up only serving to massively increase attention and usage of those services. Suing Napster made Napster into the service to use. Ditto with Kazaa and Grokster. The Pirate Bay wasn't that big until Hollywood got Swedish authorities to raid the operations and confiscate the servers. So, here we go again -- except this time it's even more ridiculous. 
Entertainment industry representatives have filed a lawsuit against the OpenBitTorrent tracker's hosting company (Update: noting that the lawsuit is against the hosting company), which is not a file sharing site or service at all. It's just an open tracker. Now, I recognize that folks in the entertainment industry aren't particularly knowledgeable about how technology works, but at some point, aren't they supposed to at least understand the basics? The tracker alone is not responsible for anything here -- and even more ridiculous is that the OpenBitTorrent guys (despite not being in the US) set up a DMCA-like process for taking down any info_hash if they want (which, by the way, was the reason the industry claimed it didn't sue Google -- because it took down links on request -- but now that OpenBitTorrent does the same thing, it's a problem?). Either way, the rise of trackerless solutions means that even taking this site down won't much matter. Still, it makes you wonder what they're thinking over in the entertainment industry other than ways to increase their legal bills. From rforno at infowarrior.org Thu Nov 19 15:18:23 2009 From: rforno at infowarrior.org (Richard Forno) Date: Thu, 19 Nov 2009 10:18:23 -0500 Subject: [Infowarrior] - Resource: Finding the laws that govern us Message-ID: <60DB10A3-3B75-4658-BB61-101235D1874C@infowarrior.org> (I can't wait for Westlaw or LexisNexis to start screaming over this! --rf) http://googleblog.blogspot.com/2009/11/finding-laws-that-govern-us.html Finding the laws that govern us 11/17/2009 09:05:00 AM As many of us recall from our civics lessons in school, the United States is a common law country. That means when judges issue opinions in legal cases, they often establish precedents that will guide the rulings of other judges in similar cases and jurisdictions. Over time, these legal opinions build, refine and clarify the laws that govern our land. For average citizens, however, it can be difficult to find or even read these landmark opinions. We think that's a problem: Laws that you don't know about, you can't follow -- or make effective arguments to change. Starting today, we're enabling people everywhere to find and read full text legal opinions from U.S. federal and state district, appellate and supreme courts using Google Scholar. You can find these opinions by searching for cases (like Planned Parenthood v. Casey), or by topics (like desegregation) or other queries that you are interested in. For example, go to Google Scholar, click on the "Legal opinions and journals" radio button, and try the query separate but equal. Your search results will include links to cases familiar to many of us in the U.S. such as Plessy v. Ferguson and Brown v. Board of Education, which explore the acceptability of "separate but equal" facilities for citizens at two different points in the history of the U.S. But your results will also include opinions from cases that you might be less familiar with, but which have played an important role. We think this addition to Google Scholar will empower the average citizen by helping everyone learn more about the laws that govern us all. To understand how an opinion has influenced other decisions, you can explore citing and related cases using the Cited by and Related articles links on search result pages. As you read an opinion, you can follow citations to the opinions to which it refers. You can also see how individual cases have been quoted or discussed in other opinions and in articles from law journals. 
Browse these by clicking on the "How Cited" link next to the case title. See, for example, the frequent citations for Roe v. Wade, for Miranda v. Arizona (the source of the famous Miranda warning) or for Terry v. Ohio (a case which helped to establish acceptable grounds for an investigative stop by a police officer). As we worked to build this feature, we were struck by how readable and accessible these opinions are. Court opinions don't just describe a decision but also present the reasons that support the decision. In doing so, they explain the intricacies of law in the context of real- life situations. And they often do it in language that is surprisingly straightforward, even for those of us outside the legal profession. In many cases, judges have gone quite a bit out of their way to make complex legal issues easy to follow. For example, in Korematsu v. United States, the Supreme Court justices present a fascinating and easy-to-follow debate on the legality of internment of natural born citizens based on their ancestry. And in United States v. Ramirez- Lopez, Judge Kozinski, in his dissent, illustrates the key issue of the case using an imagined good-news/bad-news dialogue between the defendant and his attorney. We would like to take this opportunity to acknowledge the work of several pioneers, who have worked on making it possible for an average citizen to educate herself about the laws of the land: Tom Bruce (Cornell LII), Jerry Dupont (LLMC), Graham Greenleaf and Andrew Mowbray (AustLII), Carl Malamud (Public.Resource.Org), Daniel Poulin (LexUM), Tim Stanley (Justia), Joe Ury (BAILII), Tim Wu (AltLaw) and many others. It is an honor to follow in their footsteps. We would also like to acknowledge the judges who have built this cathedral of justice brick by brick and have tried to make it accessible to the rest of us. We hope Google Scholar will help all of us stand on the shoulders of these giants. Posted by Anurag Acharya, Distinguished Engineer From rforno at infowarrior.org Thu Nov 19 15:33:39 2009 From: rforno at infowarrior.org (Richard Forno) Date: Thu, 19 Nov 2009 10:33:39 -0500 Subject: [Infowarrior] - UK to create "Pirate Finder General" Message-ID: (Hrmmm Medieval Law Merchants are coming back??? --rick) BREAKING: Leaked UK government plan to create "Pirate Finder General" with power to appoint militias, create laws http://www.boingboing.net/2009/11/19/breaking-leaked-uk-g.html A source close to the British Labour Government has just given me reliable information about the most radical copyright proposal I've ever seen. Secretary of State Peter Mandelson is planning to introduce changes to the Digital Economy Bill now under debate in Parliament. These changes will give the Secretary of State (Mandelson -- or his successor in the next government) the power to make "secondary legislation" (legislation that is passed without debate) to amend the provisions of Copyright, Designs and Patents Act (1988). What that means is that an unelected official would have the power to do anything without Parliamentary oversight or debate, provided it was done in the name of protecting copyright. Mandelson elaborates on this, giving three reasons for his proposal: 1. The Secretary of State would get the power to create new remedies for online infringements (for example, he could create jail terms for file-sharing, or create a "three-strikes" plan that costs entire families their internet access if any member stands accused of infringement) 2. 
The Secretary of State would get the power to create procedures to "confer rights" for the purposes of protecting rightsholders from online infringement. (for example, record labels and movie studios can be given investigative and enforcement powers that allow them to compel ISPs, libraries, companies and schools to turn over personal information about Internet users, and to order those companies to disconnect users, remove websites, block URLs, etc) 3. The Secretary of State would get the power to "impose such duties, powers or functions on any person as may be specified in connection with facilitating online infringement" (for example, ISPs could be forced to spy on their users, or to have copyright lawyers examine every piece of user-generated content before it goes live; also, copyright "militias" can be formed with the power to police copyright on the web) Mandelson is also gunning for sites like YouSendIt and other services that allow you to easily transfer large files back and forth privately (I use YouSendIt to send podcasts back and forth to my sound-editor during production). Like Viacom, he's hoping to force them to turn off any feature that allows users to keep their uploads private, since privacy flags can be used to keep infringing files out of sight of copyright enforcers. This is as bad as I've ever seen, folks. It's a declaration of war by the entertainment industry and their captured regulators against the principles of free speech, privacy, freedom of assembly, the presumption of innocence, and competition. This proposal creates the office of Pirate-Finder General, with unlimited power to appoint militias who are above the law, who can pry into every corner of your life, who can disconnect you from your family, job, education and government, who can fine you or put you in jail. More to follow, I'm sure, once Open Rights Group and other activist organizations get working on this. In the meantime, tell every Briton you know. If we can't stop this, it's beginning of the end for the net in Britain. From rforno at infowarrior.org Thu Nov 19 20:30:27 2009 From: rforno at infowarrior.org (Richard Forno) Date: Thu, 19 Nov 2009 15:30:27 -0500 Subject: [Infowarrior] - An introduction to the FBI's anti-cyber crime network Message-ID: An introduction to the FBI's anti-cyber crime network The FBI explained how its anti-cyber crime task force works at a Congressional hearing this week, and outlined the Bureau's latest accomplishments, which include catching the masterminds of a coordinated raid on over 1,000 ATM machines. But nobody thinks the United States is prepared to stop a really bad attack through cyberspace on our financial or physical networks. By Matthew Lasar | Last updated November 19, 200 http://arstechnica.com/web/news/2009/11/an-introduction-to-the-fbis-anti-cybercrime-network.ars?utm_source=rss&utm_medium=rss&utm_campaign=rss The Federal Bureau of Investigation told Congress this week that when it comes to cyber crime, terrorist groups like Al Qaeda aren't the sharpest pencils in the cup, but they're not out of the game either. "It is always worth remaining mindful that terrorists do not require long term, persistent network access to accomplish some or all of their goals," Steven R. Chabinsky, one of the Bureau's Cyber Division directors, explained to a Senate Judiciary Subcommittee. "Rather, a compelling act of terror in cyberspace could take advantage of a limited window of opportunity to access and then destroy portions of our networked infrastructure." 
And there are lots of such windows, Chabinsky added, since, "we, as a nation, continue to deploy new technologies without having in place sufficient hardware or software assurance schemes, or sufficient security processes that extend through the entire lifecycle of our networks." Thus the FBI has set up its own network to respond to whatever comes down the pike. Time will tell, and probably soon, how effective it is, but Chabinsky laid it out all the parts at the hearing. They include a division within the bureau, an inter-federal task force, an alliance with state, local, and industry enforcers, and a consumer complaint center. Big news Before unpacking these components, it should be noted that cyber crime is big news these days, with top officials repeatedly warning that the United States is not prepared for a major attack through the net on its financial or physical structures. "The architecture of the Nation?s digital infrastructure, based largely upon the Internet, is not secure or resilient," the White House concluded in its recent Cyberspace Policy Review. Millions of Americans got a sense of the global situation on a recent 60 Minutes feature, which noted that a cyber attack probably took out the power in several cities in Brazil between 2005 and 2007. Then they learned about our "electronic Pearl Harbor," described by Jim Lewis of the Center for Strategic and International Studies: "Some unknown foreign power, and honestly, we don't know who it is," Lewis explained to 60 Minutes' Steve Kroft, "broke into the Department of Defense, to the Department of State, the Department of Commerce, probably the Department of Energy, probably NASA. They broke into all of the high tech agencies, all of the military agencies, and downloaded terabytes of information." And last November some sleuths, possibly just by leaving thumbnail drives around, managed to get into the U.S. Central Command network (CENTCOM). Thumbnail drives are now banned from use at the agency. That is why the White House cyberspace assessment concluded that the Federal government "is not organized to address this growing problem effectively now or in the future." And that's why we're seeing Capitol Hill hearings on the extant structure and how to improve it. Here's how the FBI is fitted to deal with the problem at this point. Phish fries The FBI's first line of defense against cyber crime is its Cyber Division. It has about 2,000 special agents who have received some kind of instruction in this field, and another 1,000 with more advanced training. The Cyber Division's most noted recent accomplishment was a raid completed in October dubbed "Operation Phish Fry." The 100 people caught in this sting are accused of stealing about $1.5 million from U.S. bank account holders via phony email solicitations?complete with links to bogus bank websites. About half the defendants are Egyptian citizens who sent out the phishing messages and broke into the bank accounts. The other half hail from Nevada, California, and North Carolina. They're accused of transferring the ill-gotten money to US bank accounts, then siphoning it out of the country. What was significant about Phish Fry was that it involved an unprecedented partnership with Egyptian police. Catching up with these kind of assaults isn't easy. It took about a year for the Cyber Division to collar the Eastern European masterminds of a massive simultaneous heist of 2,100 ATMs in 280 cities in the US, Canada, Japan, the Ukraine, and Hong Kong. 
The Great ATM Robbery was quite an operation, which involved penetrating a credit/debit card processing company, identifying PINs, then coordinating a global network of baddies who strolled over to ATMs and collectively helped themselves to $9 million in cash.

But the ultimate goal is stopping these virtual raiders before they strike. The FBI's Operation Dark Market seems to be the closest step towards that Holy Grail. The agency claims the so-named online network was a kind of exclusive stock exchange for crooks, where they bought and sold stolen financial data. Dark Market had 2,500 registered members. An FBI operative managed to talk his way into a job as a systems administrator for the cabal. The end result was 56 collars around the world.

Infragard

Then there's Infragard. Coordinated by the FBI, it is a fellowship of federal, state, local, industry, and academic cybercrook catchers and watchers. Infragard has about 33,000 participants in almost 90 cities around the country, and you can apply to become a member yourself. The point is to build an accessible community for the FBI to contact on any given cyber-crime problem, especially in the private sector, where IT managers and policy folk are understandably touchy about this stuff.

"No governmental entity should be involved in monitoring private communications networks as part of a cybersecurity initiative," warned Gregory T. Nojeim of the Center for Democracy and Technology, speaking before that Senate hearing.

Mindful of these concerns, Infragard hangs out around the margins between government and the private sector, "to promote ongoing timely dialogue," in the FBI's own words. Its chapters work with FBI Field Offices in the same geographic area. Infragardians conference on the latest technology and hold hacking contests.

Here's the deal, as far as we can tell. You join Infragard and become part of the FBI's information cohort. In exchange, you get the following cool stuff:

- "Network with other companies that help maintain our national infrastructure. Quick Fact: 350 of our nation's Fortune 500 have a representative in InfraGard.
- Gain access to an FBI secure communication network complete with VPN encrypted website, webmail, listservs, message boards and much more.
- Learn time-sensitive, infrastructure related security information from government sources such as Department of Homeland Security and the FBI."

Needless to say, this makes people nervous. The Progressive magazine ran an exposé about Infragard in 2008 titled "The FBI Deputizes Business." The piece suggested that the organization may have given its members authority to "shoot to kill" in national emergencies. The FBI strongly denies this. "Patently false," FBI Cyber Division director Shawn Henry called the assertion. But it's likely that civil-liberties-minded observers will continue to squint at Infragard for the foreseeable future.

Complain complain complain

Then there's the Internet Crime Complaint Center, a collaboration between the FBI, the National White Collar Crime Center, and the Bureau of Justice Assistance (BJA). The point of IC3, as it's called, is to provide a place for victims of online theft to make complaints, a centralized system for the government to take them, and a means to learn what the bad guys are up to this week. IC3 received almost 280,000 complaints last year and did something about over 70,000 of them. In many instances it referred them to state and local law enforcement agencies.
IC3 also issues regular advisories on the latest mischief. These include alerts on the latest social networking fraud techniques, tips for SQL programmers on protecting their sites from hackers, and even warnings about e-mails pretending to be FBI warnings about Al Qaeda.

The FBI, it should be noted, is just one component of the National Cyber Investigative Joint Task Force, which it leads, and which consists of representatives from 19 government agencies that struggle with cyber crime. But it's unclear to what extent that coalition is going to have any obvious impact on the ground war against large scale roguery on the Internet. The spotlight will more likely continue to shine on the Bureau and Department of Justice's efforts in this regard -- success measured by results to some, or judged by others by their impact on the nation's civil liberties.

From rforno at infowarrior.org Thu Nov 19 23:56:29 2009 From: rforno at infowarrior.org (Richard Forno) Date: Thu, 19 Nov 2009 18:56:29 -0500 Subject: [Infowarrior] - Stopping the ACTA Juggernaut Message-ID:

November 19th, 2009
Stopping the ACTA Juggernaut
Legislative Analysis by Eddan Katz
http://www.eff.org/deeplinks/2009/11/stopping-acta-juggernaut

The ACTA juggernaut continues to roll ahead, despite public indignation about an agreement supposedly about counterfeiting that has turned into a regime for global Internet regulation. The Office of the United States Trade Representative (USTR) has already announced that the next round of Anti-Counterfeiting Trade Agreement (ACTA) negotiations will take place in January -- with the aim of concluding the deal "as soon as possible in 2010."

For the rest of us, with access to only leaks and whispers of what ACTA is about, there are many troubling questions. How can such a radical proposal legally be kept so secret from the millions of Net users and companies whose rights and freedoms stand to be affected? Who decides what becomes the law of the land and by what influence? Where is the public oversight for an agreement that would set the legal rules for the knowledge economy? And what can be done to fix this runaway process?

We wrestle with these questions in an essay on "The Impact of ACTA on the Knowledge Economy" (PDF here) in the Yale Journal of International Law (November 2009 edition). We explain how ACTA got this far, in this form, and propose four mechanisms for USTR transparency reforms that will give the public a voice in ACTA, if U.S. citizens -- and their elected officials -- speak loudly and quickly enough.

In brief, the ACTA process has been deliberately more secretive than customary practices in international decision-making bodies to evade the debates about intellectual property (IP) at established multilateral institutions. The Office of the USTR has chosen to negotiate ACTA as a sole executive agreement. Because of a loophole in democratic accountability on sole executive agreements, the Office of the USTR can sign off on an IP Enforcement agenda without any formal congressional involvement at all. But the negotiations do not have to be secret, and the sole executive agreement process does have mechanisms for oversight: they have not been used in ACTA, but can and should be.

The excuse for using sole executive agreements is that ACTA will be fully respectful of U.S. law. But the constraint of coloring within the lines of US law, as one anonymous trade official described it, is a fragile linchpin upon which the weight of public trust and democratic legitimacy is bearing down.
In an interview with "Inside U.S. Trade" for its June 19, 2009 edition (paywall link here), the USTR was far less confident: when pressed whether the U.S. would be open to any negotiated difference from U.S. law in the ACTA, the official said that the goal of the U.S. "is to stick as closely to U.S. law as possible."

How can the USTR negotiate an international agreement that sets new global IP enforcement norms requiring changes to U.S. law and policy as an Executive Agreement, without the knowledge or involvement of Congress? Having failed to get a similar proposal adopted via the World Customs Organization, the USTR conceived ACTA as a plurilateral agreement, avoiding the checks and balances of existing multilateral norm-setting bodies.

After the announcement of ACTA but prior to commencing formal negotiations, the USTR had prepared a confidentiality agreement that it asked all negotiating countries to accept, which explicitly bars the negotiating partners from public disclosure. The USTR has exploited this as the justification for classifying all correspondence between negotiating countries in the interest of national security under Executive Order 12958. The Mexican IP Office, which is hosting the next ACTA negotiations, still gave indications that the documents will not be made available to the public. The Internet Chapter was reportedly delivered to negotiating partners in physical, watermarked copies designed to guard against leaks.

If the traditional justification for secrecy in trade negotiations is to safeguard details of sensitive US positions in negotiations for diplomatic advantage over other foreign governments, then why is this confidentiality agreement being used to prevent disclosure of ACTA texts to its own citizens?

Upon the expiration of Trade Promotion Authority in 2007, the USTR chose to negotiate ACTA as a sole executive agreement. As a result, ACTA will not require congressional advice and approval, which is integral to the constitution's delicate balance of executive and legislative powers. Even as staunch a defender of executive privilege as John Yoo once convincingly argued that leaving the executive's power to negotiate foreign agreements on intellectual property matters unchecked would deprive the House of its constitutional function.

From early on, civil society has protested ACTA's secrecy, and despite continued public pressure, the USTR's transparency theater rehearsals of internal review have concluded that showing a select few Washington insiders the Internet provisions under non-disclosure agreements would satisfy the demands of openness, transparency, and oversight.

Sole executive agreements are not meant to be unaccountable. There are in fact systems in place to stop our executive (and private interests) from having untrammeled power to change the law. We've outlined four ways that Congress, or an Administration sincere about transparency, could put their house in order.

Reform trade advisory committees for more diverse representation

Input to U.S. trade negotiators on IP needs to reflect the views of all stakeholders in the U.S. knowledge economy to counterbalance the disproportionate influence of lobbyists for incumbent industries. This requires reform of the current trade advisory committee system to include civil society and technology industry participation in the tier 3 industry trade advisory committee on intellectual property, ITAC-15, or the creation of new equivalent level advisory committees.
Public interest values such as health and consumer protection should play an important role in the new bipartisan trade policy for the knowledge economy.

Strengthen congressional oversight and negotiating objectives

Congressional oversight of foreign trade negotiations, especially agreements affecting areas of non-trade domestic policy, should require the USTR to comply with negotiating objectives that reflect the interests of all stakeholders in the U.S. economy. In addition to the labor and environmental standards articulated in proposed bills like the TRADE Act (H.R. 3012), IP enforcement provisions in agreements must not undermine internationally agreed upon commitments on public health, and flexibilities that protect citizens' access to knowledge, nor obstruct IP exceptions and limitations appropriate for the digital age. In addition, the Congressional Oversight Group, a statutory supervisory group comprising members of the House and the Senate designed to liaise with the Trade Representative, could conduct a thorough review and certify that the new negotiating objectives have been met before a trade agreement could be brought for a congressional vote.

Institutionalize transparency guidelines for trade negotiations

Given the significance of the substantive provisions being debated to Internet users, the ACTA process especially should enable citizens to participate and provide input on the public policy impacts, as in other negotiations where it is customary practice to make documents available. The Office of the USTR, in incorporating these reforms, should heed the Attorney General's instruction to adopt a presumption in favor of disclosure to usher in the President's new era of open Government. At a minimum, negotiating texts, when distributed to all negotiating countries, should be made public.

Implement the State Department's Circular 175 procedure

Finally, the State Department plays an important role in checking the unfettered power of the USTR through its Circular 175 Procedure. These are the regulations that "ensure the proper exercise of the treaty-making power." The State Department Foreign Affairs Manual goes into great detail on the Legal Advisor's criteria for review of international agreements. There are multiple procedures on hand, including formal congressional consultation, when there is a serious question regarding the type of agreement being negotiated. [11 FAM 723.4(b)] It is also made clear that the approval of authorization to negotiate does not constitute advance approval of the text or authorization to enter into the agreement. [11 FAM 724.2] The State Department investigates whether the proposed agreement is "in conflict with other international agreements or U.S. law" [11 FAM 722(2)] and whether it follows the "general international practice as to similar agreements." [11 FAM 723.3(8)]

Most significantly for the public's stake in Internet freedom, the Circular 175 declares that: The interest of the public be taken into account and, where in the opinion of the Secretary of State or his or her designee the circumstances permit, the public be given an opportunity to comment. [11 FAM 725.1(6)]

The Office of the USTR's transparency practices must be reformed, and the office has failed at reforming them itself. Now that the leaked documents confirm everything we feared, it is time to take a look at how we might hold USTR Ambassador Kirk and Assistant McCoy, the lead ACTA negotiator, to account for their promises:
- On diverse representation for advice on trade: "I can assure you that I am committed to working very closely with Congress and all interested stakeholders on all of our trade agreements and negotiations, including ACTA." (Ronald Kirk Confirmation Hearings, March 9, 2009)

- On congressional oversight and legislative power: "Q: Will the ACTA rewrite U.S. law? A: No. Only the U.S. Congress can change U.S. law." (ACTA Fact Sheet, August 4, 2008)

- On transparency practices: President Obama's trade officials met with several civil society groups and promised a thorough review of the USTR policies regarding transparency. The review is expected to be completed within a few months. The process will include a meeting within a month to discuss initial specific proposals for openness and transparency. Citizens and NGOs are encouraged to think about the specific areas where openness and transparency can be enhanced and how. (USTR Transparency Review KEI Report, March 19, 2009 - as reviewed by Daniel Sepulveda, Assistant USTR for Congressional Affairs)

- On public participation: The ACTA negotiations "[p]articipants also discussed the importance of transparency including the availability of opportunities for stakeholders and the public in general to provide meaningful input into the negotiating process." (USTR Press Release, November 6, 2009)

Such accountability is available in the U.S. system, but it cannot come from the Office of the USTR alone. If ACTA is going to regulate the global Internet, we believe that should warrant the opportunity for public comment.

From rforno at infowarrior.org Fri Nov 20 13:20:39 2009 From: rforno at infowarrior.org (Richard Forno) Date: Fri, 20 Nov 2009 08:20:39 -0500 Subject: [Infowarrior] - Resource: Online Media Legal Network Message-ID: <8FCAC86D-0EF9-4E5F-8654-EA4E2A9B51EF@infowarrior.org>

(Disclosure: I am not affiliated with nor compensated by the OMLN for this post ---rick)

http://www.omln.org/aboutus

What We Do

The Online Media Legal Network (OMLN) is a legal referral service that connects qualifying online journalism ventures and digital media creators with lawyers willing to provide legal services on a pro bono or reduced-fee basis. OMLN supports promising ventures and innovative thinkers in online and digital media by providing access to legal help that would otherwise be unavailable. Lawyers participating in the network can assist qualifying clients with a broad range of legal issues, including business formation and governance, copyright licensing and fair use, access to government information, pre-publication review of content, and representation in litigation.

Who We Are

The OMLN is an initiative of the Citizen Media Law Project (CMLP). CMLP is jointly affiliated with the Berkman Center for Internet & Society at Harvard University and the Center for Citizen Media at Arizona State University. We are grateful for the generous financial support of the John S. and James L. Knight Foundation.

From rforno at infowarrior.org Fri Nov 20 13:27:35 2009 From: rforno at infowarrior.org (Richard Forno) Date: Fri, 20 Nov 2009 08:27:35 -0500 Subject: [Infowarrior] - Some Courts Raise Bar on Reading Employee Email Message-ID: <77DCF4E6-6364-4BB6-87C4-41014F68F904@infowarrior.org>

Some Courts Raise Bar on Reading Employee Email
By DIONNE SEARCEY
http://online.wsj.com/article/SB125859862658454923.html?mod=rss_Today%27s_Most_Popular

Big Brother is watching. That is the message corporations routinely send their employees about using email.
But recent cases have shown that employees sometimes have more privacy rights than they might expect when it comes to the corporate email server. Legal experts say that courts in some instances are showing more consideration for employees who feel their employer has violated their privacy electronically. Driving the change in how these cases are treated is a growing national concern about privacy issues in the age of the Internet, where acquiring someone else's personal and financial information is easier than ever.

"Courts are more inclined to rule based on arguments presented to them that privacy issues need to be carefully considered," said Katharine Parker, a lawyer at Proskauer Rose who specializes in employment issues.

In past years, courts showed sympathy for corporations that monitored personal email accounts accessed over corporate computer networks. Generally, judges treated corporate computers, and anything on them, as company property. Now, courts are increasingly taking into account whether employers have explicitly described to their employees how email is monitored. That was what happened in a case earlier this year in New Jersey, when an appeals court ruled that an employee of a home health-care company had a reasonable expectation that email sent on a personal account wouldn't be read.

And last year, a federal appeals court in San Francisco came down on the side of employee privacy, ruling employers that contract with an outside business to transmit text messages can't read them unless the worker agrees. The ruling came in a lawsuit filed by Ontario, Calif., police officers who sued after a wireless provider gave their department transcripts of an officer's text messages in 2002. The case is on appeal to the U.S. Supreme Court.

Lawyers for corporations argue that employers are entitled to take ownership of the keystrokes that occur on work property. In addition, employers fear productivity drops when workers spend too much time crafting personal email messages. "Employers are right to expect their employees when they are paid for their time at work are actually working," said Jane McFetridge, a lawyer who handles employment issues for the Chicago office of Jackson Lewis.

Many workers log in to personal email accounts from the office. In a 2009 study by the Ponemon Institute, a Traverse City, Mich.-based data-security research firm, 52% of employees surveyed said they access their personal email accounts from their work computer. Of those individuals, 60% said they send work documents or spreadsheets to their personal email addresses. Data security experts say such actions could invite viruses or security leaks.

More corporations are monitoring employees' email traffic. In a June survey of 220 large U.S. firms commissioned by Proofpoint Inc., a provider of email security and data loss prevention services, 38% of companies said they employ staff to read or otherwise analyze the content of outgoing email, up from 29% last year. More companies also say they are worried about information leaks: Thirty-four percent of respondents said their businesses had been affected by the exposure of sensitive or embarrassing information, up from 23% in 2008.

The growing concerns about security and privacy come as expanding technology muddies the waters between personal and professional.

"Computers are becoming recognized as being so much a part of the ongoing personal as well as professional life of employees and everyone else that courts are more sympathetic all the time to granting greater recognition to privacy," said Floyd Abrams, a First Amendment attorney at Cahill Gordon & Reindel LLP.

Employees often assume their communications on personal email accounts should stay private even if they are using work-issued computers or smart phones. But in most instances when using a work device, emails of all kinds are captured on a server and can be retrieved by an employer. Still, in some cases courts are finding that unless employers have explicitly told the employee they will monitor email, they don't have the legal right to do it -- even if the email in question was a personal one sent using a work account, rather than a personal address.

In a case earlier this year in New Jersey, a worker on the brink of resigning from her job at the Loving Care Agency Inc. used a personal, password-protected Yahoo account on a work laptop to email her lawyer to hash out the details of a workplace discrimination suit she was planning to file against the agency. After the employee, Marina Stengart, left her job and filed suit, her employer extracted the emails from the hard drive of her laptop computer.

A lower court found that the emails from Ms. Stengart were company property, because the company's internal policies had put her on sufficient notice that her emails would be viewed. But a New Jersey appellate court disagreed, ruling in her favor in June and ordering the company to turn over the emails to Ms. Stengart and delete them from its hard drives. The court's ruling went so far as to dissect the company's internal policies about employee communications and decided they offered "little to suggest that an employee would not retain an expectation of privacy in such [personal] emails."

"We reject the employer's claimed right to rummage through and retain the employee's emails to her attorney," the appellate court ruling said. Loving Care, which declined to comment, has appealed the ruling. The case is pending in the New Jersey Supreme Court.

In another case this year, Bonnie Van Alstyne, a former vice president of sales and marketing at Electronic Scriptorium Ltd., a data-management company, was in the thick of a testy legal battle in Virginia state court with the company over employment issues when it came to light that her former boss had been accessing and reading her personal AOL email account. The monitoring went on for more than a year, continuing after Ms. Van Alstyne left the company.

Ms. Van Alstyne sometimes used her personal email account for business purposes, and her supervisor said he was concerned that she was sharing trade secrets. The supervisor, Edward Leonard, had accessed her account "from home and Internet cafes, and from locales as diverse as London, Paris, and Hong Kong," according to legal filings in the case. Ms. Van Alstyne sued Mr. Leonard and the company for accessing her email without authorization. A jury sided with her, and the case eventually settled.

Nicholas Hantzes, a lawyer for the company and Mr. Leonard, said employers could learn from the case that to avoid legal tangles they "should do everything they can to discourage employees from using personal email for business purposes."

--Sarah Needleman contributed to this article.
Write to Dionne Searcey at dionne.searcey at wsj.com

Printed in The Wall Street Journal, page A17

From rforno at infowarrior.org Sun Nov 22 12:47:59 2009 From: rforno at infowarrior.org (Richard Forno) Date: Sun, 22 Nov 2009 07:47:59 -0500 Subject: [Infowarrior] - A Friend's Tweet Could Be an Ad Message-ID:

A Friend's Tweet Could Be an Ad
http://www.nytimes.com/2009/11/22/business/22ping.html
By BRAD STONE
Published: November 21, 2009

Tuesday was another typical day for John Chow, blogger and Internet entrepreneur in Vancouver, British Columbia. Mr. Chow treated his 50,000 Twitter followers to a photograph of his lunch (barbecued chicken and French fries), discussed the weather in Vancouver and linked to a new post on his Internet business blog. Then he earned $200 by telling his fans where they could buy M&M's with customized faces, messages and colors.

Mr. Chow is among a growing group of celebrities, bloggers and regular Internet users who are allowing advertisers to send commercial messages to their personal contacts on social networks. For the last month, he has used the services of Ad.ly, a start-up based in Los Angeles, and Izea, based in Orlando, Fla., to periodically surrender his Twitter stream to the likes of Charter Communications, the Make a Wish Foundation and an online seminar about working from home. In October, Mr. Chow's income from Twitter ads was around $3,000. "I get paid for pushing a button," he said.

It is perhaps the last frontier in advertising -- getting regular people to send a sentence or two of text, on behalf of paying advertisers, to their friends and admirers. The idea, according to the entrepreneurs who are developing such services for Twitter and other Web networks, is that people trust recommendations from those they know and respect, while they increasingly ignore nearly every other kind of ad message in print, on television and online.

Even the Internet giants are warming to the idea of harnessing informal chats between friends to promote their products and services. This month, Amazon.com said it would start paying commissions to individuals who refer buyers to the site via Twitter messages. (People must first sign up for Amazon Associates, a program in which Amazon pays Web publishers for referrals to its site.)

But the bigger opportunity may be in matching advertisers with so-called influencers -- the more popular users of services like Twitter. A number of start-ups, like Ad.ly, Izea and Peer2, a division of Creative Asylum, a Hollywood ad agency, are pursuing the opportunity to put persuasive messages into regular dialogue on social networks. "We don't want to create an army of spammers, and we are not trying to turn Facebook and Twitter into one giant spam network," said Joey Caroni, co-founder of Peer2. "All we are trying to do is get consumers to become marketers for us."

For the most popular celebrities and bloggers on Twitter, such advertising can generate a surprisingly sizable payday. Ad.ly and Izea, which runs a service called Sponsored Tweets, say celebrities like Kim Kardashian, Dr. Drew and the musician Ernie Halter can earn up to $10,000 by sending a single message to their hundreds of thousands of followers. (Sample ad Tweet from Mr. Halter, which included a link: "sponsored: yo! cheese doodles is giving away sweet prizes in the 'rock the cheese' video contest. Check it!")
Izea receives at least 15 percent of the advertiser's payment to more popular Twitter users, and up to half for the less distinguished. Ad.ly takes a 30 percent cut across the board. While both companies note their celebrity connections and the involvement of big advertisers like Microsoft and NBC, they really salivate at the prospect of marrying less notable Internet personalities with the huge pool of smaller advertisers. For example, an expert on cycling, with 1,000 Twitter followers, might agree to send an ad about a new bike helmet -- a message that might well be implicitly trusted by his followers.

One problem is that many Internet users eschew the idea of these ads, saying they commercialize authentic dialogue and undermine people's credibility. "It interferes with your relationship with your friends and your audience," said Robert Scoble, a technology blogger with more than 100,000 followers on Twitter, who says he "unfollows" people on Twitter who send him ads.

Facebook does not allow members to insert paid ads into status updates or profiles. "For us, it goes against the authenticity of the page," said Brandon McCormick, a Facebook spokesman. Peer2 gets around the ban by offering users points instead of dollars; points are redeemable for Amazon products.

Part of the unease with this emerging form of advertising is rooted in the past. Three years ago, with a service called PayPerPost, Izea paid bloggers to pitch products to their readers. The endorsements were not clearly labeled as ads, and the service kicked up a dust storm of criticism in the blogosphere. Ted Murphy, the C.E.O. of Izea, now a 30-person business backed by $10 million in venture capital, said the company initially "made a big mistake" by not setting disclosure standards for publishers and advertisers. Today, ad networks promote their standards; Izea's ads on Twitter are typically demarcated with signifiers like "#ad" or "#sponsor."

ONE new company trying to add transparency to the business is Likes.com of San Francisco, which plans to introduce its ad network in December. The company encourages bloggers and Twitter users to specify their tastes in restaurants, movies, books and other products, and then to publish those recommendations to their blogs and social network pages. Advertisers can then see who has favored their products in the past, and how effective their recommendations have been at getting people to click on links. Depending on the advertiser, bloggers and Tweeters will be paid for every ad they send out, or every time someone clicks on the link.

Every Likes.com ad is clearly labeled as such, and once people click on a link, they are taken to another page that is also clearly labeled as a sponsorship. People are limited to posting an ad from Likes.com once every other day. "We are trying to limit it, to prevent people from losing their following," said Bindu Reddy, a former Google product manager who started the company with her husband, Arvind Sundararajan, a former Google engineer. "We know people are queasy about this."

From rforno at infowarrior.org Sun Nov 22 13:38:22 2009 From: rforno at infowarrior.org (Richard Forno) Date: Sun, 22 Nov 2009 08:38:22 -0500 Subject: [Infowarrior] - OT: Five reasons we hate Goldman Sachs Message-ID: <45D8B938-698E-460A-A96A-0DCCB9BA6356@infowarrior.org>

November 19, 2009, 1:28 PM ET
GS a short?
And five reasons we hate Goldman Sachs
Cody Willard
http://blogs.marketwatch.com/cody/2009/11/19/gs-a-short-and-five-reasons-we-hate-goldman-sachs/tab/print/

Here are five reasons why we want Goldman Sachs destroyed and buried so we can dance on its grave and why these crony apologists are wrong when they say that the "populist outrage at Goldman Sachs is misplaced."

1. The AIG bailout was a covert bailout of Goldman and we want our money back. Every dime of it.

Goldman had been placing a bunch of bets against real estate derivatives at a casino called AIG. Goldman started to realize that AIG didn't have enough money to pay all the bets they'd taken, so sucked some $6 billion out of AIG in the weeks before AIG went belly up (a cash drain which indeed helped cause AIG to go belly up). But Goldman still had $13 billion in profitable bets that they'd placed at the AIG casino and without the cash they were due from those bets, Goldman would be insolvent and be forced into bankruptcy.

So Goldman called up the chairman at the NY Fed, one Stephen Friedman, and asked for welfare help. Stephen used to run Goldman before he decided to move over and run the NY Fed arm of Goldman -- I mean, the NY Fed arm of the Federal Reserve (which, come to think of it, is owned by Goldman and the other banks that it bailed out with your taxpayer money). Stephen promptly went out and bought tens of thousands of shares of Goldman Sachs stock to supplement the millions he already owned of it, and then had the NY Fed cover all the bets at the AIG casino in full with taxpayer money.

Yup, Goldman's former chairman used his power despite all those obvious conflicts of interest, and funneled a full $13 billion of taxpayer money to Goldman Sachs via the bailout of AIG. We want every dime of the AIG counterparty bailout back. We could buy 2.6 million Americans $5000 worth of insurance with the amount of money that Goldman got from AIG from the taxpayer.

2. Goldman became a "financial holding company" after it became a "bank holding company" after it realized it was going to be insolvent even after it got Stephen Friedman to write them a $13 billion check from AIG funded with taxpayer money.

Goldman had to lobby for special exemptions and all kinds of favoritism in order to get such a petition passed by all the bureaucracies who are supposed to be doing all kinds of due diligence in order to make us citizens believe that either "holding company" status means anything other than the fact that the "holding company" gets access to cheap welfare loans from the Fed and guarantees against losses for the holding company, which mean that the taxpayer is always left holding the bag.

Okay, and here's where we really get outraged by this "financial holding company" status crap. See, since Goldman's got that status (and since it's also "too big to fail" of course) it can go out and gamble tens of billions of dollars on currencies, commodities, bonds, Treasuries, stocks, derivatives, private equity, venture capital and anything else they want to gamble on -- and if they make money, they keep the profits and pay out bonuses, but if they, heaven forbid, actually lose money on that levered gambling addiction they have -- well, the taxpayer is going to eat the losses. Goldman is guaranteed privatized gains and socialized losses. We want that stopped now and we want every dime of profit they've made gambling this year applied against the government deficit.
3. We know for a fact that Goldman's executives get to talk to and even advise the Treasury and the Fed on how the Treasury and the Fed should be buying and selling in the Treasuries market, in the derivatives markets, in the overnights markets, in the CDO markets and so on. Does anybody reading this article actually believe that Goldman doesn't use all that information to place those bets that are resulting in all those record trading profits for Goldman this year? Come on. And not only are they screwing other private investors with such front-running, but it's usually you and me, the taxpayers, on the other side of these trades this year.

We want Goldman execs to have absolutely no private access to government officials. Given all the obvious and repeated conflicts of interest in such interactions with taxpayer funds and policies on the line, let's require Goldman and the Treasury/Fed to conduct all interactions completely in the public via webcam, conference calls, or even Op Eds. But no more calls or private meetings between Goldman dudes and government dudes.

4. Goldman was packaging and selling toxic derivatives for hundreds of billions of dollars to investors around the world, telling those investors that such derivatives were safe and smart bets. At the same time, Goldman was out at the AIG casino not just hedging their own exposure to the derivatives while they were packaging them, but Goldman was actually betting against those very products. They were literally selling products they were so confident would fail that they bet tens of billions of dollars of their own money at AIG against those products they were telling investors were safe. We want some perpwalks for this obvious fraud.

5. Goldman propaganda is insulting to anybody paying any attention.

- Goldman says: "We didn't want or need TARP money." Lie! They were so desperate for capital at that point, they took $10 billion in TARP funds and needed ANOTHER $5 billion in funds from Warren Buffett. Buffett put the screws on Goldman with onerous, expensive terms on that loan, and Goldman was so desperate they took it anyway.

- Goldman says: "We already paid back the taxpayer." Uh, like I said above, you're still gambling with my money and keeping the profits since you got lucky and front ran the taxpayer in a bull market for the last six months, and we still want every dime of the AIG bailout back too. Goldman and the taxpayer ain't even close to square.

- Goldman says: "We were just smart and have done nothing wrong." Oh, wait, Lloyd Blankfein, the CEO, finally admitted that the company "participated in things that were clearly wrong." Like I said, let's prosecute those clear wrongdoings!

- Goldman says: "We were hedged against any AIG losses even without the taxpayer." Lie -- those AIG bets would have been a $13 billion write off that Goldman would have been fighting for in a legal bankruptcy if Stephen Friedman, former Goldman chairman, hadn't orchestrated a complete bailout for Goldman via AIG when Stephen was buying Goldman stock behind the scenes while running the NY Fed. That's part of why everybody said it was a "credit crisis" at the time -- nobody had the money to cover all the bets and the counter bets and the hedges at places like Goldman.

Goldman begged for and got tons of help from the taxpayer, and even if you weren't against the Wall Street bailouts like I was from day one, you're probably livid at Goldman's arrogance and greed and denials.
Hey Goldman, if nothing else, how about a little gratitude for us saving your butt when you needed it.

The rage against Goldman isn't just populist. The rage against Goldman isn't just popular. The rage against Goldman is right. And unfortunately, the only thing you and I can do about it is to vote out every single incumbent who empowered Goldman and its ilk with all their bailouts, stimulus and other wealth redistribution policies.

I'd also look to short Goldman on strength now that the stock has finally dipped about 10% from its highs. I'm not sure what else this company can do to jack up its profits in the near term even as it destroys its brand for the future. I'd look at slowly but surely building a Goldman short position. Maybe even some long-dated put options -- say something at the $200 strike range out in 2011 or so. You'd pay a little premium once again, but you'd expose less capital and limit your losses by using the put instead of outright shorting the stock.

Regardless of Goldman as a trade or an investment -- but for the sake of our society: You tell me -- would America be better off without Goldman Sachs?

http://RevolutioNewsletter.com
http://twitter.com/codywillard

From rforno at infowarrior.org Sun Nov 22 13:43:28 2009 From: rforno at infowarrior.org (Richard Forno) Date: Sun, 22 Nov 2009 08:43:28 -0500 Subject: [Infowarrior] - OT: Revisiting a Fed Waltz With A.I.G. Message-ID:

November 22, 2009
Fair Game
Revisiting a Fed Waltz With A.I.G.
By GRETCHEN MORGENSON
http://www.nytimes.com/2009/11/22/business/22gret.html?hp=&pagewanted=print

A RAY of sunlight broke through the Washington fog last week when Neil M. Barofsky, special inspector general for the Troubled Asset Relief Program, published his office's report on the government bailout last year of the American International Group. It's must reading for any taxpayer hoping to understand why the $182 billion "rescue" of what was once the world's largest insurer still ranks as the most troubling episode of the financial disaster.

And it couldn't have come at a more pivotal moment. Many in Washington want to give more regulatory power to the Federal Reserve Board, the banking regulator that orchestrated the A.I.G. bailout. Through this prism, the actions taken in the deal by Treasury Secretary Timothy F. Geithner, who was president of the Federal Reserve Bank of New York at the time, grow curiouser and curiouser.

Of special note in the report: the Fed failed to develop a workable rescue plan when A.I.G., swamped by demands that it pay off huge insurance contracts that it couldn't make good on as the economy tanked, began to sink. The report takes the Fed to task for refusing to use its power and prestige to wrestle concessions from A.I.G.'s big, sophisticated and well-heeled trading partners when the government itself had to pay off the contracts.

The Fed, under Mr. Geithner's direction, caved in to A.I.G.'s counterparties, giving them 100 cents on the dollar for positions that would have been worth far less if A.I.G. had defaulted. Goldman Sachs, Merrill Lynch, Société Générale and other banks were in the group that got full value for their contracts when many others were accepting fire-sale prices.

On the question of whether this payout was what the report describes as a "backdoor bailout" of A.I.G.'s counterparties, Mr. Barofsky concluded: "The very design of the federal assistance to A.I.G. was that tens of billions of dollars of government money was funneled inexorably and directly to A.I.G.'s counterparties."
The report noted that this was money the banks might not otherwise have received had A.I.G. gone belly-up.

The report zaps Fed claims that identifying banks that benefited from taxpayer largess would have dire consequences. Fed officials had refused to disclose the identities of the counterparties or details of the payments, warning "that disclosure of the names would undermine A.I.G.'s stability, the privacy and business interests of the counterparties, and the stability of the markets," the report said. When the parties were named, "the sky did not fall," the report said.

Finally, Mr. Barofsky pokes holes in arguments made repeatedly over the past 14 months by Goldman Sachs, A.I.G.'s largest trading partner and recipient of $12.9 billion in taxpayer money in the bailout, that it had faced no material risk in an A.I.G. default -- that, in effect, had A.I.G. cratered, Goldman wouldn't have suffered damage.

In short, there's an awful lot jammed into this 36-page report.

Even before publishing this analysis, Mr. Barofsky had made a name for himself as one of the few truth tellers in Washington. While others estimate how much the taxpayer will make on various bailout programs, Mr. Barofsky has said that returns are extremely unlikely. His office has also opened 65 cases to investigate potential fraud in various bailout programs.

"When I first took office, I can't tell you how many times I'd be having a sit-down and warning about potential fraud in the program and I would hear a response basically saying, 'Oh, they're bankers, and they wouldn't put their reputations at risk by committing fraud,'" Mr. Barofsky told Bloomberg News a little over a week ago, adding: "I think we've done a good job of instilling a greater degree of skepticism that what comes from Wall Street isn't necessarily the holy grail."

Mr. Barofsky says the Fed failed to strong-arm the banks when it was negotiating payouts on the A.I.G. contracts. Rather than forcing the banks to accept a steep discount, or "haircut," the Fed gave the banks $27 billion in taxpayer cash and allowed them to keep an additional $35 billion in collateral already posted by A.I.G. That amounted to about $62 billion for the contracts, which the report describes as "far above their market value at the time."

Mr. Geithner, who oversaw those negotiations, said in an interview on Friday that the terms of the A.I.G. deal were the best he could get for taxpayers. He considered bailing out A.I.G. to be "offensive," he said, but deemed it necessary because a collapse would have undermined the financial system.

"We prevented A.I.G. from defaulting because our judgment was that the damage caused by failure would have been much more costly for the economy and the taxpayer," Mr. Geithner said. "To most Americans, this looked like a deeply unfair outcome and they find it hard to see any direct benefit. But in fact, their savings are more valuable and secure today."

The report said that while bailing out Goldman and other investment banks might not have been the intent behind the Fed's A.I.G. rescue, it certainly was its effect. "By providing A.I.G. with the capital to make these payments, Federal Reserve officials provided A.I.G.'s counterparties with tens of billions of dollars they likely would have not otherwise received had A.I.G. gone into bankruptcy," the report stated.

As Goldman prepares to pay out nearly $17 billion in bonuses to its employees in one of its most profitable years ever, it is important that an authoritative, independent voice like Mr. Barofsky's reminds us how the taxpayer bailout of A.I.G. benefited Goldman.
A Goldman spokesman, Lucas van Praag, said that Goldman believed "that a collapse of A.I.G. would have had a very disruptive effect on the financial system and that everyone benefited from the rescue of A.I.G." Regarding his firm's own dealings with A.I.G., Mr. van Praag said that Goldman believed that its "exposure was close to zero" because it insulated itself from a downturn in A.I.G.'s fortunes through hedges and collateral it had already received. (Goldman's complete response is here.)

The inspector noted in his report that Goldman made several arguments for why it believed it was not materially at risk in an A.I.G. default, but he is skeptical of the firm's reasoning. So is Janet Tavakoli, an expert in derivatives at Tavakoli Structured Finance, a consulting firm. "On Sept. 16, 2008, David Viniar, Goldman's chief financial officer, said that whatever the outcome at A.I.G., the direct impact of Goldman's credit exposure would be immaterial," she said. "That was false. The report states that if the New York Fed had negotiated concessions, Goldman would have suffered a loss."

The report says that Goldman would have had difficulty collecting on the hedges it used to insulate itself from an A.I.G. default because everyone's wallets would have been closing in a panic. "The prices of the collateralized debt obligations against which Goldman bought protection from A.I.G. were in sickening free fall, and the cost of replacing A.I.G.'s protection would have been sky-high," she said. "Goldman must have known this, because it underwrote some of those value-destroying C.D.O.'s."

Ms. Tavakoli argues that Goldman should refund the money it received in the bailout and take back the toxic C.D.O.'s now residing on the Fed's books -- and do so before it begins showering bonuses on its taxpayer-protected employees. "A.I.G., a sophisticated investor, foolishly took this risk," she said. "But the U.S. taxpayer never agreed to be a victim of investments that should undergo a rigorous audit."

Perhaps Mr. Barofsky will do that audit, and closely examine the securities that A.I.G. insured and that Wall Street titans like Goldman underwrote. Goldman contends that it had a contractual right to the funds it received in the A.I.G. bailout and that the securities it returned to the government in the deal have increased in value.

For his part, Mr. Geithner disputed many of the inspector general's findings. He also took issue with the conclusion that the Fed failed to develop a contingency plan for an A.I.G. rescue and largely depended on plans proffered by the banks themselves. He said the report's view that the Fed didn't use its might to get better terms in the rescue was unfair. "This idea that we were unwilling to use leverage to get better terms misses the central reality of the situation -- the choice we had was to let A.I.G. default or to prevent default," he said. "We could not enforce haircuts without causing selective defaults, and selective defaults would have brought down the company."

Mr. Geithner also said that the "perception that this decision by the government, not my decision alone, was made to protect any individual investment bank is unfounded."

Less than two weeks after the A.I.G. bailout, Mr. Geithner took the firm's side when he criticized a Sept. 28, 2008, article in The New York Times that I wrote about the A.I.G. bailout. That article included Goldman's statement that it wouldn't have been affected by an A.I.G. collapse.
Among other things, the article, like Mr. Barofsky's report, questioned Goldman's assertion. According to an e-mail message that Goldman sent to the New York Fed at the time, Mr. Geithner talked about the article with Mr. Viniar, Goldman's chief financial officer, before calling me. When Mr. Geithner called, he said that Goldman had no exposure to an A.I.G. collapse and that the article had left an incorrect impression about that. When I asked Mr. Geithner if he, as head of the regulatory agency overseeing Goldman, had closely examined the firm's hedges, he said he had not.

Mr. Geithner told me on Friday that he spoke with Mr. Viniar that day to ensure that Goldman's hedges were adequate. And, notwithstanding the inspector general's findings, he said he still believes Goldman was hedged.

Probing, in-depth analyses of regulatory responses to the financial meltdown are worth their weight in gold. Mr. Barofsky's certainly is. Yet in its rush to put financial reforms into effect, Congress seems uninterested in investigating or grappling with truths contained in such reports -- and until it does, our country's economic and financial system will continue to be at risk.

From rforno at infowarrior.org Sun Nov 22 13:46:01 2009 From: rforno at infowarrior.org (Richard Forno) Date: Sun, 22 Nov 2009 08:46:01 -0500 Subject: [Infowarrior] - Recent Air Force Law Review discusses Cyberlaw Message-ID:

Recent Air Force Law Review discusses Cyberlaw
Posted 11/20/2009 Updated 11/20/2009
http://www.maxwell.af.mil/news/story.asp?id=123178704
by Carl Bergquist
Air University Public Affairs

11/20/2009 - MAXWELL AIR FORCE BASE, Ala. -- Volume 64 of the Air Force Law Review is now available in hardcopy and online. Published this year, it is sub-titled the "Cyberlaw Edition." Largely the result of a symposium held at the Judge Advocate General School at Maxwell Air Force Base, the edition addresses many of the issues involving the cyber domain.

"About a year ago, we held a symposium here [Maxwell], and cyberlaw was discussed," Capt. Scott Hodges, a JAG School Professional Outreach Division instructor, said. "It was decided during the symposium to do some research and write some literature on the subject."

The captain said a "big focus" of the symposium was Russia using cyber warfare against the nation of Georgia before actually attacking the country. The Cyberlaw Edition investigates that and many other issues involving cyberspace, such as when does a computer attack equate to an act of war?

In addressing that question, Air Force Maj. Graham Todd, one of the Cyberlaw Edition authors, said the European Union's Convention on Cybercrime is an international treaty intended to create consistency in criminal laws related to internet activity. Specifically, the convention provides that parties involved will adopt laws that criminalize cyberspace crimes such as unlawful access, unlawful interception and interfering with data or systems.

"The U.S. Department of Defense defines a computer network attack as, 'Actions taken through the use of computer networks to disrupt, deny, degrade or destroy information resident in computers and computer networks, or the computers and networks themselves,'" he said in Volume 64. "Whether a cyberspace crime or a cyberspace attack, the goal is to affect someone else's data, or use data to affect property."

Air Force Lt. Col. Joshua Kastenberg, another author for the Cyberlaw Edition, brought up the ramifications of private U.S. companies allowing the country of Georgia to use their systems to help keep the country's communications links open. Would this bring those companies into the conflict?
He said the owner of TSHost, or Tulip Systems, an Atlanta, Ga.-based Web hosting company, offered the use of their systems to the government of Georgia, and Georgian officials transferred critical government internet services to Tulip servers in the United States. "In an admission, the TSHost chief executive officer stated the company had volunteered its servers to 'protect' the nation of Georgia's internet sites from malicious traffic," he said in the report. "TSHost further revealed that after it relocated Georgian Web sites to the United States, attacks traced to Moscow and St. Petersburg ensued against TSHost servers."

A third author, Air Force Lt. Col. Patrick Franzese, maintains that cyberspace is not a common domain, and countries throughout the world can and should regulate the domain to prevent cyberspace attacks. "The United States can choose to take the lead in recognizing and establishing state sovereignty in cyberspace," he stated. "By establishing state sovereignty in cyberspace, the United States, as well as other states, will develop the framework to consider other cyberspace issues."

Captain Hodges said one of the purposes of the articles in Volume 64 is to bring up issues for debate and thought, but he noted the articles don't always give answers to the issues addressed. He said copies of the report went to all law schools and all Air Force legal offices, and it is hoped Volume 64 will stimulate additional research and study on cyberlaw. "As for us at the JAG School, we want people to be aware the Cyberlaw Edition is out there for their use," he said. "Volume 64 of the Law Review can be obtained by emailing me at scott.hodges at maxwell.af.mil, and can also be found on the Web at www.afjag.af.mil/library/."

From rforno at infowarrior.org Sun Nov 22 13:46:46 2009 From: rforno at infowarrior.org (Richard Forno) Date: Sun, 22 Nov 2009 08:46:46 -0500 Subject: [Infowarrior] - Policing Net Neutrality Message-ID: <21E812B3-01E0-461C-B888-BA1CD55C2B3A@infowarrior.org>

Policing Net Neutrality
Enforcing the FCC's upcoming network-neutrality rules will be easier said than done
BY MITCHELL LAZARUS // NOVEMBER 2009
http://spectrum.ieee.org/telecom/internet/policing-net-neutrality?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed:+IeeeSpectrum+(IEEE+Spectrum)

The U.S. Federal Communications Commission wants its coming rules on network neutrality to revive the "free and open" Internet of the 1990s. Sorry, FCC. Those days are gone. Your rules won't bring them back.

The early Internet was free and open for two main reasons. The Internet was (and still is) based on a flexible set of protocols that anyone can use. And access to the Internet was (but is no more) afforded by hundreds of competitive Internet service providers (ISPs). Back in the dial-up days, nobody talked about net neutrality -- the concept that ISPs shouldn't be able to block or slow particular Internet applications or uses. There was no need. An ISP that tampered with customers' content would soon be gone.

ISP competition was itself the result of an FCC decision: the 1985 Computer III order, which promised every ISP access to essential telephone network facilities. To be sure, some phone companies were slow to comply, and FCC enforcement was uneven. But at least the ISPs had the law on their side. Not so for broadband.
Starting in 2002, the FCC relieved broadband Internet providers of any obligation to open their facilities to other ISPs. Now most users have access to only one or two broadband providers, giving those companies a throttlehold over customers' content. It was the abuse of this power that led to the present calls for network neutrality.

Some of the problems arose because most broadband ISPs are also in the telephone or cable-TV business and are threatened by less expensive services available on the Internet. The ISP operation has both motive and means to protect the parent company's interests. Recently, for example, cable ISP Comcast Corp. blocked content that might compete with its pay-per-view service, and wire-line phone company Madison River Communications Corp. blocked access to Vonage Holdings Corp., a provider of Voice over Internet Protocol. The FCC cited these instances, among others, in proposing the new rules.

Will those rules stop the abuses? Will they bring back the freewheeling, anything-goes atmosphere that fostered so much innovation back in the dial-up days? Probably not.

For one thing, the proposed rules are worded very generally. Take this example: "Subject to reasonable network management, a provider of broadband Internet access service must treat lawful content, applications, and services in a nondiscriminatory manner." The needed specifics are completely missing. This is like a traffic sign that reads, "Drive safely." Extreme cases aside, pulled-over drivers can always insist that they obeyed the law -- ISPs likewise.

Another problem is the FCC's stated intent to consider claimed violations of the rules on a case-by-case basis. This can be exceedingly slow -- and expensive to boot.

Flash forward a few years. The net-neutrality rules are in place. A smart programmer has come up with the next YouTube or Facebook or Twitter. Users flock to it by the millions. But the application threatens the business interests of a big cable-TV or phone company. Its affiliated ISP blocks subscribers from using the application.

But now there are rules against this sort of thing. Our innovator goes to the FCC. There he must demonstrate that the ISP violated the net-neutrality rules. The vague language of those rules leaves plenty of room for the ISP's lawyers to argue otherwise. Assuming the innovator wins that point, further dispute is likely over what the ISP must do to stop discriminating -- after which the ISP may appeal, first within the FCC and then in court.

Drawing up a complaint and fending off the ISP's defenses could run well into five figures. Appeals could easily take the total into six-digit territory. A company like Comcast or Verizon Communications could absorb this kind of expense without blinking, but few entrepreneurs can. Even fewer Internet entrepreneurs could consider it. Remember, the big innovations of the Internet came from college kids in dorm rooms, dropouts in their parents' garages, start-ups working in bare offices. These are the people the FCC should protect. By contrast, the ISP defending its actions is likely to be a telecommunications giant with a standing army of blue-suited lawyers.

Maybe the FCC will do the legal lifting on the innovator's behalf. I hope so, because the alternative would be to have a nice set of net-neutrality rules with no teeth. Even if the FCC itself were to prosecute rule-breaking ISPs, the process would probably be slow.
Expect a year or more for a first decision from the FCC; another year, at least, for an internal appeal; and two years in the courts. Most Internet entrepreneurs would likely run out of time and money in the meantime. And even if the innovator does manage to prevail, years down the road, odds are that by then the one-time killer app will have missed its chance to go viral.

The proposed rules will certainly help to keep the Internet safe for well-heeled content providers like Google. But unless the FCC makes the enforcement process fast and cheap - or can coerce the big ISPs into compliance, as I propose in a recent blog entry - it will do little for the creative community that developed the Internet from a nerdy plaything into the massive resource we all rely on today. Unless individuals have swift recourse against discrimination by an ISP, the heady days of Internet innovation will soon draw to a close.

About the Author
Mitchell Lazarus is a partner in the Washington, D.C., area law firm of Fletcher, Heald & Hildreth. He blogs regularly at www.commlawblog.com.

From rforno at infowarrior.org Mon Nov 23 00:53:35 2009
From: rforno at infowarrior.org (Richard Forno)
Date: Sun, 22 Nov 2009 19:53:35 -0500
Subject: [Infowarrior] - Microsoft and News Corp eye web pact
Message-ID:

Microsoft and News Corp eye web pact
http://www.ft.com/cms/s/0/a243c8b2-d79b-11de-b578-00144feabdc0.html
By Matthew Garrahan in Los Angeles, Richard Waters in San Francisco and Andrew Edgecliffe-Johnson in New York
Published: November 22 2009 23:01 | Last updated: November 22 2009 23:01

Microsoft has had discussions with News Corp over a plan that would involve the media company's being paid to "de-index" its news websites from Google, setting the scene for a search engine battle that could offer a ray of light to the newspaper industry.

The impetus for the discussions came from News Corp, owner of newspapers ranging from the Wall Street Journal of the US to The Sun of the UK, said a person familiar with the situation, who warned that talks were at an early stage. However, the Financial Times has learnt that Microsoft has also approached other big online publishers to persuade them to remove their sites from Google's search engine. News Corp and Microsoft, which owns the rival Bing search engine, declined to comment.

One website publisher approached by Microsoft said that the plan "puts enormous value on content if search engines are prepared to pay us to index with them". Microsoft's interest is being interpreted as a direct assault on Google because it puts pressure on the search engine to start paying for content. "This is all about Microsoft hurting Google's margins," said the web publisher who is familiar with the plan.

But the biggest beneficiary of the tussle could be the newspaper industry, which has yet to construct a reliable online business model that adequately replaces declining print and advertising revenues.

In a possible sign of negotiations to come, Google last week played down the importance of newspaper content. Matt Brittin, Google's UK director, told a Society of Editors conference that Google did not need news content to survive. "Economically it's not a big part of how we generate revenue," he said.

News Corp has been exploring online payment models for its newspapers and has taken an increasingly hard line against Google. Rupert Murdoch, News Corp chairman, has said that he would use legal methods to prevent Google "stealing stories" published in his papers.
Microsoft is desperate to catch Google in search and, after five years and hundreds of millions of dollars of losses, Bing, launched in June, marks its most ambitious attempt yet. Steve Ballmer, chief executive of Microsoft, has said that the company is prepared to spend heavily for many years to make Bing a serious rival to Google.

Microsoft has sought to differentiate Bing by drawing in material not found elsewhere, though it has not demanded exclusivity from content partners. Bing accounted for 9.9 per cent of searches in the US in October, up from 8.4 per cent at its launch, according to ComScore.

James Murdoch, chairman and chief executive of News Corp Europe and Asia, hinted last week that the company was making progress with its online plans. "We think that there's a very exciting marketplace, potentially a wholesale marketplace for digital journalism that we'll be developing," he said.

Copyright The Financial Times Limited 2009. You may share using our article tools. Please don't cut articles from FT.com and redistribute by email or post to the web.

From rforno at infowarrior.org Tue Nov 24 01:13:30 2009
From: rforno at infowarrior.org (Richard Forno)
Date: Mon, 23 Nov 2009 20:13:30 -0500
Subject: [Infowarrior] - Scribblings of the Czar of the Ministry of Information
Message-ID: <094D5CA2-CF6F-4A50-9BB0-DE1E67FAA5DE@infowarrior.org>

Scribblings of the Czar of the Ministry of Information
Marla Singer on 11/23/2009 19:22 -0500
http://www.zerohedge.com/article/scribblings-czar-ministry-information

The Office of Information and Regulatory Affairs was, in an irony that will quickly become apparent, created by the Paperwork Reduction Act of 1980. Last month, after some delay by meddling and petty Senators with the temerity to express concern over the nominee's political views, Cass R. Sunstein was confirmed by the Senate as OIRA's head, making him the current administration's latest "Czar." 20 days later Mr. Sunstein's book, "On Rumors: How Falsehoods Spread, Why We Believe Them, What Can Be Done," hit the stands. Good timing, probably.

Sunstein probably wasn't expecting to receive a confirmable appointment when he first started work on the piece - though at 88 pages of grade-school level prose he may well have begun writing quite a bit after the Wall Street Journal leaked his appointment in January of this year. Perhaps as recently as last month, actually. One would expect that a more public airing of the prose in Sunstein's work, which seems almost singularly focused on shutting up "members of the Republican Party spread[ing] rumors about the appointee of a Democratic president" (have anyone in particular in mind?) without causing a constitutional crisis might have caused problems.

Normally, such goings on would slip thankfully under the radar at Zero Hedge. Unfortunately, a newly emerging posture with respect to free speech evident in the stance of the United States seems to have erected itself since the current administration's swearing(s) in and, hence, such matters attract our attention. Particularly troubling are the following passages:

"On the Internet in particular, people might have a right to 'notice and take down.' Under this approach, modeled on the copyright provisions of the Digital Millennium Copyright Act, those who run websites would be obliged to take down falsehoods upon notice. It is true that this approach might be burdensome. It is also true that because of the nature of the Internet, notice and takedown cannot provide a complete solution.
But if it is taken down, it will not be in quite so many places, and at least the victim of the falsehood will be able to say that it was taken down."

And: "What would be so terrible about a requirement that people take down libelous material after they are given notice that it is libelous - at least if they do not have reason to believe that the material is accurate or at least supported by evidence?"

We cannot think of a worse model for "falsehood" regulation than the Digital Millennium Copyright Act, which effectively enables anyone with an internet connection to bluff U.S. (and many foreign) internet service providers into forcibly removing content that fails to possess even a passing resemblance to infringement. Zero Hedge, of course, experienced this first-hand more than once in the days before our technological exodus from the gentle regulatory ministrations of the United States.

In addition, answering the question "what would be so terrible" posed by Sunstein's second passage is quite simple. Shifting the burden of proof to the content provider "on notice" is a major shift from current practice (outside of the DMCA) and presents a difficult logical problem. "Prove the content is not libelous" is rather a troublesome contention. Since when do we reverse the burden of proof on content providers and require of them positive certifications of the non-libelous nature of their prose in order to be blessed with the right to publish? Who exactly will determine what constitutes "accurate or at least supported by evidence"? When will this finding be made? (Before or after the "take down"?) When will CNBC be forced to close up shop or move their studio to the Caymans? Will this impact Bartiromo's wardrobe significantly? (Please tell us it will not alter Gasparino's).

Actually, we think we know the answers to these questions already and therefore, quite obviously, Zero Hedge's Q2 2009 "short American free-speech" trade continues to provide some of our portfolio's most dramatically outsized risk-adjusted returns.

From rforno at infowarrior.org Wed Nov 25 01:51:46 2009
From: rforno at infowarrior.org (Richard Forno)
Date: Tue, 24 Nov 2009 20:51:46 -0500
Subject: [Infowarrior] - More Bing vulns revealed
Message-ID:

Negative Cashback from Bing Cashback
Posted November 23rd, 2009 by Samir

In a recent post that Microsoft made me take down, I wrote about some security holes in Bing Cashback. While technical flaws are somewhat interesting to me, most Bountii users don't really care about them. Starting today, I'll write about some non-technical flaws in Bing Cashback. My biggest problem with Bing Cashback is a hidden "feature" that I'm calling "negative cashback." Here's a quick demo:

< - >

http://bountii.com/blog/2009/11/23/negative-cashback-from-bing-cashback/

From rforno at infowarrior.org Thu Nov 26 02:13:33 2009
From: rforno at infowarrior.org (Richard Forno)
Date: Wed, 25 Nov 2009 21:13:33 -0500
Subject: [Infowarrior] - Obama Wants Computer Privacy Ruling Overturned
Message-ID:

Obama Wants Computer Privacy Ruling Overturned
By David Kravets | November 25, 2009
http://www.wired.com/threatlevel/2009/11/obama-wants-computer-privacy-ruling-overturned/

The Obama administration is seeking to reverse a federal appeals court decision that dramatically narrows the government's search-and-seizure powers in the digital age.

Solicitor General Elena Kagan and Justice Department officials are asking the 9th U.S.
Circuit Court of Appeals to reconsider its August ruling that federal prosecutors went too far when seizing 104 professional baseball players' drug results when they had a warrant for just 10.

The 9th U.S. Circuit Court of Appeals' 9-2 decision offered Miranda-style guidelines to prosecutors and judges on how to protect Fourth Amendment privacy rights while conducting computer searches.

Kagan, appointed solicitor general by President Barack Obama, joined several U.S. attorneys in telling the San Francisco-based court Monday that the guidelines are complicating federal prosecutions in the West. The circuit, the nation's largest, covers nine states: Alaska, Arizona, California, Hawaii, Idaho, Montana, Nevada, Oregon and Washington.

"In some districts, computer searches have ground to a complete halt," the authorities wrote. "Many United States Attorney's Offices have been chilled from seeking any new warrants to search computers." (.pdf)

The government is asking the court to review the case with all of its 27 judges, which it has never done. If the court agrees to a rehearing, a new decision is not expected for years, and the August decision would be set aside pending a new ruling. Either way, the U.S. Supreme Court has the final say.

The controversial decision, which the government said was contrary to Supreme Court precedent, outlined new rules on how the government may search computers. (.pdf) Ideally, when searching a computer's hard drive, the government should cull the specific data described in the search warrant, rather than copy the entire drive, the court said. When that is not possible, the authorities must use an independent third party under the court's supervision, whose job it would be to comb through the files for the specific information, and provide it, and nothing else, to the government, according to the ruling.

The government said the decision was already chilling at least one rape case in Washington State. "Federal agents received information from their counterparts in San Diego that two individuals had filmed themselves raping a 4-year-old girl and traded the images via the internet," the government wrote. "The agents did not obtain a warrant to search the suspects' computers, however, because of concerns that any evidence discovered about other potential victims could not be disclosed by the filter team. The agents therefore referred the case to state authorities."

The circuit's ruling came in a case that dates to 2004, when federal prosecutors probing a Northern California steroid ring obtained warrants to seize the results of urine samples of 10 pro baseball players at a Long Beach, California drug-testing facility. The players had been tested as part of a voluntary drug-deterrence program implemented by Major League Baseball.

Federal agents serving the search warrant on the Comprehensive Drug Testing lab wound up making a copy of a directory containing a Microsoft Excel spreadsheet with results of every player that was tested in the program. Then, back in the office, they scrolled freely through the spreadsheet, ultimately noting the names of all 104 players who tested positive. The government argued that the information was lawfully found in "plain sight," just like marijuana being discovered on a dining room table during a court-authorized weapons search of a home.
But the court noted that the agents actively scrolled to the right side of the spreadsheet to peek at all the players' test results, when they could easily have selected, copied and pasted only the rows listing the players named in the search warrant.

Four players whose names were seized, and who were not linked to the BALCO investigation, have had their names leaked to The New York Times. They are Alex Rodriguez, David Ortiz, Manny Ramirez and Sammy Sosa.

Two dissenting judges wrote that the majority was also sidestepping its own precedent in which the circuit court had denied the suppression of child pornography evidence found on a computer during a search for the production of false identification cards pursuant to a valid warrant.

In 2006, the 9th Circuit originally sided with the government in a 2-1 decision, which the court overturned with an 11-judge "en banc" panel in August.

From rforno at infowarrior.org Thu Nov 26 02:16:37 2009
From: rforno at infowarrior.org (Richard Forno)
Date: Wed, 25 Nov 2009 21:16:37 -0500
Subject: [Infowarrior] - MPAA to FCC: critics of video blocking proposals are lying
Message-ID: <0904EC6F-4293-42CA-BF8F-370D1AA88757@infowarrior.org>

MPAA to FCC: critics of video blocking proposals are lying
Hollywood is now resorting to calling critics of its analog stream-blocking proposal liars, while talking out of both sides of its mouth about DVD encryption and piracy. But the brunt of this accusation, Public Knowledge, still insists that shutting down the output to millions of HDTVs won't benefit consumers.
By Matthew Lasar | Last updated November 25, 2009 12:43 PM
http://arstechnica.com/tech-policy/news/2009/11/mpaa-to-fcc-critics-of-video-blocking-proposals-are-lying.ars

The movie studios have a new Holy Grail, it seems: Federal Communications Commission permission for cable companies to shut down the analog streams on video-on-demand movie programming. As Ars readers know, we've been covering this issue for a while. But the Motion Picture Association of America's latest letter to the FCC pulls out all the stops, rhetoric-wise, calling criticisms of this scheme "complete and utter nonsense that only can be intended to stir up baseless fears among consumers that their equipment will suddenly go dark and be unusable for any purpose."

These are "deplorable claims," the MPAA told the FCC on Monday. Plus they "distort the truth." They're also "simply and irrefutably untrue," the trade association adds (in case you didn't get it yet).

False untruthfulness

The main target of MPAA's outrage is the advocacy group Public Knowledge, one of whose spokespersons, Harold Feld, has an ongoing video series called "Five Minutes With Harold Feld," in which the aforementioned offers his takes on "incredibly boring and wonky things" and tries "to make them slightly less boring, because this stuff is important."

The allegedly offensive five-minute video in question deals with what MPAA wants, which is technically called "Selectable Output Control" - shutting down the analog stream to HDTVs and other devices because it is less secure (copyable) than digital streams, which can be scrambled. The FCC currently prohibits the practice. The studios say they want to plug the "analog hole" with SOC because it will allow them to offer the public pre-DVD VoD movie releases with less threat of piracy. The problem, as Feld's video on this subject points out, is that a considerable amount of analog-only connected equipment won't be able to receive these offerings.

"And for this," Feld skeptically declares, "we're going to break 25 million television sets, and break your TiVo, and break your Slingbox, and make sure you can't use it on VoD anymore, because [Feld looking especially skeptical here] it's so important to get these movies to video-on-demand earlier."

Feld's "deplorable claims" are "absolutely, 100 percent untrue," MPAA counters. "The use of SOC would have no impact whatsoever on the ability of existing television sets, TiVos, Slingboxes or any other consumer product to work in exactly the same fashion that such devices work today. While products with only unprotected outputs and inputs would not be able to receive the new early window offerings that would be made possible by the SOC waiver, no device would be broken. Nor would any consumer be unable to receive traditional VOD in the same way that he or she does today."

A considerable amount of time in this debate is being spent rather theatrically denouncing words that clearly function as metaphors. As we've pointed out, although SOC won't render analog-only HDTVs and other home theater equipment "broken," as in "physically damaged with wires poking out of the set," it will disable the ability of this gear to access what will immediately become the most valuable offering on television: pre-DVD release VoD movies.

What's the problem anyway?

The rest of MPAA's filing is a long list of ways that movies are copied and illegally distributed on the Internet - further proof positive that the studios need SOC. Among other claims, the filing insists that real-time duplication of HBO pay-per-view events on various websites represents clear evidence that "thieves steal this content through unprotected outputs."

From this litany, a disinterested reader might conclude that Hollywood's efforts to stop this activity have been amazingly unsuccessful, and the producers might want to reconsider their approach to the problem. MPAA, for example, decries the fact that "literally every DVD that MPAA member studios released for rental or purchase during the past year has been made available for unlawful downloading or streaming online."

If that is the case, why does the MPAA extol the virtues of its DVD Content Scramble System (CSS) before the United States Copyright Office? CSS and other "protection technologies" have allowed content producers to "distribute their valuable content in higher quality, more convenient digital formats," MPAA wrote in the office's latest proceeding on exemptions to the Digital Millennium Copyright Act. As a result, "DVDs have become one of the most widely adopted consumer electronics products in history, and the pace of adoption has been unprecedented. Consumers have greater access to movies and TV shows than ever before."

It doesn't seem to matter. In Hollywood-think on this issue, the solution to each apparent technology/regulation failure is a new tech fix that requires new rules and a new explanation of why it won't hurt consumers. TV watchers, MPAA notes, are already used to not seeing stuff on their cable boxes. So what's the problem? "A typical subscriber today already encounters numerous instances where a particular channel or service is not available. For example, a given consumer might not subscribe to a cable company's high-definition service or might not receive premium channels (such as HBO).
In either case, if consumers were to attempt to access one of these channels, they would receive an on-screen message advising them that their service does not include access to the requested content."

MPAA goes so far as to suggest that if SOC isn't granted, individual movie studios will begin releasing pre-DVD content on non-cable distribution alternatives - such as Sony's experiments with encrypted Internet streams sent directly to its Bravia HDTVs (Hancock already streams and Cloudy With a Chance of Meatballs is coming next). Thus, the trade group warns, "it is denial of the waiver that could result in a scenario in which millions of consumers would have to buy new equipment to receive new content offerings." That is, of course, assuming that consumers would bother to do so, given the enormous amount of content they can already get over the 'Net.

Not-so-subtle

Needless to say, Public Knowledge is taking strong exception to statements that pretty much call the group a pack of liars. MPAA's latest commentary "utterly fails to demonstrate that anybody steals content through the analog hole," PK's Gigi Sohn declared in a commentary published this morning. And "by attacking Public Knowledge and specifically Harold's integrity, it is a not-so-subtle effort to spin the debate over this waiver as 'copyleft' Public Knowledge versus 'reasonable' Hollywood, which only wants this itsy bitsy waiver so that it can provide the 'pro-consumer' benefit of making movies available on video on demand a few weeks earlier than they are now."

Sohn notes that a wide variety of organizations besides PK oppose SOC, including the Consumer Electronics Association and the Independent Film and Television Alliance. No one knows how much their concerns count with the new FCC, which has yet to take a stand on this controversial issue.

From rforno at infowarrior.org Thu Nov 26 02:18:01 2009
From: rforno at infowarrior.org (Richard Forno)
Date: Wed, 25 Nov 2009 21:18:01 -0500
Subject: [Infowarrior] - ICANN condemns registry DNS redirection
Message-ID: <58D1CAA9-800C-483A-B932-637040515CF0@infowarrior.org>

ICANN condemns registry DNS redirection
http://www.theregister.co.uk/2009/11/25/icann_dns_redirection_condemned/
Seeks ban on SiteFinder-like typo-squatting
By Dan Goodin in San Francisco
Posted in Networks, 25th November 2009 21:28 GMT

The group that oversees the internet's address system is taking a hard stance against domain name registries that redirect internet users to third-party sites when a non-existent URL is typed.

Earlier this week, the Internet Corporation for Assigned Names and Numbers (ICANN) said the practice - known as NXDOMAIN substitution and DNS redirection - threatens net stability and degrades the user experience. In a memorandum (PDF) published Tuesday, ICANN went on to reiterate that all managers of newly created top-level domains would be prohibited from following the practice under draft rules now being considered.

The proposed restriction is aimed at preventing the kind of controversy that was created in 2003 when VeriSign introduced a service that automatically redirected all mistyped addresses ending in .com and .net to a proprietary website. Internet purists howled in protest, arguing that VeriSign's SiteFinder breached time-honored practices for handling mistyped or non-existent addresses. (VeriSign soon dropped the service).
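The substitution is easy to observe from any connected host. As a rough, hedged illustration - not part of the Register article - a resolver that follows the DNS specification returns NXDOMAIN for a name that does not exist, which Python's standard library surfaces as a lookup error, while a resolver that substitutes the response hands back an IP address instead, typically one pointing at a search or advertising page. A minimal sketch, assuming the randomly generated label below is genuinely unregistered:

import socket
import uuid

def resolver_rewrites_nxdomain():
    """Best-effort probe: does the configured resolver return an address
    for a name that should not exist? Illustrative only."""
    # A long random label under .com is overwhelmingly unlikely to be
    # registered, so a spec-compliant resolver should fail the lookup.
    bogus_name = "nxdomain-probe-%s.com" % uuid.uuid4().hex
    try:
        answer = socket.gethostbyname(bogus_name)
    except socket.gaierror:
        # Lookup failed: the NXDOMAIN response was passed through intact.
        return False
    # An address came back for a non-existent name - the substitution
    # behaviour described above.
    print("Resolver answered %s with %s" % (bogus_name, answer))
    return True

if __name__ == "__main__":
    print("NXDOMAIN substitution suspected:", resolver_rewrites_nxdomain())

A probe of this sort only exercises whatever resolver the querying host happens to use (an ISP's, OpenDNS, or a local one). Registry-level wildcarding of the SiteFinder variety answers at the authoritative servers themselves and so affects every resolver equally, which is the case the draft rules for new top-level domains are aimed at.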
ICANN's prohibition is aimed at managers of so-called registry-class domain names, or RCDNs, better described as the registries that act as the gatekeepers for top-level domains such as .com, .info, or .biz.

"Normally if someone wants to make use of a domain, they have to register it (and pay a fee for the right to use it)," ICANN's memo states. "In the case of NXDOMAIN substitution in a RCDN, the registry would be making use (and perhaps profit) from all or a subset of the uninstantiated domains without having registered or paid for them."

It would appear that the prohibition, which was discussed in June during an ICANN meeting in Australia, has no effect on internet service providers and other services that redirect subscribers who type non-existent addresses. Services including Comcast, Verizon, and Virgin have been known to offer such services, often with no warning or easy way for users to turn it off.

Other services, most notably OpenDNS, have built an entire business off of the practice. What sets this last one apart from the rest is that it's entirely opt-in. That means users who want to prevent themselves from accidentally ending up at a harmful site because they mistyped a URL have to go through the trouble of configuring their systems to use the service. VeriSign's SiteFinder, by contrast, didn't. ®

From rforno at infowarrior.org Thu Nov 26 20:05:03 2009
From: rforno at infowarrior.org (Richard Forno)
Date: Thu, 26 Nov 2009 15:05:03 -0500
Subject: [Infowarrior] - Computer hacker Gary McKinnon to be extradited to US
Message-ID: <2D07A1B2-71F1-4DB5-B7B0-405701D00DAE@infowarrior.org>

Computer hacker Gary McKinnon to be extradited to US
Afua Hirsch, legal affairs correspondent
guardian.co.uk, Thursday 26 November 2009 19.50 GMT
http://www.guardian.co.uk/world/2009/nov/26/computer-hacker-gary-mckinnon-extradition/print

Computer hacker Gary McKinnon is at serious risk of suicide, relatives said today, after the home secretary rejected a last-ditch attempt to prevent his extradition to the US.

In a letter today Alan Johnson ordered McKinnon's removal to the US on charges of breaching US military and Nasa computers, despite claims by his lawyers that extradition would make the 43-year-old's death "virtually certain".

"The secretary of state is of the firm view that McKinnon's extradition would not be incompatible with his [human] rights", said the letter, dated 26 November. "His extradition to the United States must proceed forthwith".

The decision, described by lawyers as "callous", has prompted new fears about McKinnon's well-being. The letter rejected new expert medical evidence that the health of McKinnon, who has Asperger's syndrome, had deteriorated dramatically since losing his case in the high court in July, and meant that extradition would violate his right to life.

"Gary is at risk of suicide, I'm extremely worried about him", said McKinnon's mother Janis Sharp. "This government is terrified of speaking up to America, and now they are allowing vulnerable people to be pursued for non-violent crime when they should be going after terrorists. Why are they doing this?"

The decision is a final blow for McKinnon, from north London, who was accused in 2002 of using his home computer to hack into 97 US military and Nasa computers, causing damage which the US government claims will cost over $700,000 to repair.
Earlier this year the high court rejected arguments that the extradition would violate McKinnon's rights, after lawyers argued the prospect of up to 60 years' imprisonment in an American 'supermax' jail would cause mental harm because of his Asperger's syndrome and depressive illness.

The home secretary has insisted that he had received assurances from the US government, including a guarantee that McKinnon would be assessed by doctors and psychologists were he transferred to a US jail, and would receive "appropriate medical care and treatment", including counselling and medication, in a letter from the US Department of Justice this February, seen by the court.

Lawyers had also argued that the director of public prosecutions could prosecute McKinnon in the UK, on lesser charges of computer misuse, preventing his extradition. The charges are less serious in the UK than the US, where McKinnon faces a prison sentence of up to 60 years. "The CPS wanted to prosecute Gary, but they were told from the very top to stand aside and let America take him", said Sharp.

The case comes after sustained controversy over the US Extradition Treaty, designed to speed up extradition between the two countries but which critics insist works in favour of Americans and fails to adequately protect British people from extradition.

McKinnon's legal team had hoped to join his case to the case of Ian Norris, the retired business chief facing extradition to the US to face trial on charges of obstructing justice due to his alleged role in an illegal cartel, whose case will be heard in the supreme court on Monday. Lawyers had argued that McKinnon's case raised similar legal issues, and should have been considered by the 9-strong panel of Supreme Court Justices sitting next week. Attempts to be heard in the supreme court failed, however, although McKinnon's legal team said they would be seeking a judicial review of today's decision.

"The Americans have waited three years before requesting Gary's extradition, and the government is too terrified to say no", said Sharp. "What America wants, America gets", Sharp added. "I think it's disgusting".

"This is a hold over from Bush. We thought with Obama it would be different. Now the first person in the world to be extradited to the US for computer misuse is going to be a guy with Asperger's. All our lives have been ruined by this - the heart just sinks."

From rforno at infowarrior.org Sun Nov 29 14:06:47 2009
From: rforno at infowarrior.org (Richard Forno)
Date: Sun, 29 Nov 2009 09:06:47 -0500
Subject: [Infowarrior] - Olympic Ooops!
Message-ID: <2ECE64C0-0ED8-48BE-84E2-D64F081F0215@infowarrior.org>

(The 'kicker' is in the final paragraph of the extract below. Heh. -- rf)

Cartoon smut law to make life sucky for Olympic organisers
Internet + iffy logo = Ha ha!
By John Ozimek
Posted in Law, 29th November 2009 10:02 GMT

Government zeal in pursuing anyone suspected of harbouring paedophilic tendencies may shortly rebound - with unintended consequences for the 2012 Olympic logo (http://www.bbc.co.uk/london/content/images/2007/06/04/2012_logo_white_385x450.jpg ).

Earlier this month, the Coroners & Justice Bill 2009 (http://www.opsi.gov.uk/acts/acts2009/ukpga_20090025_en_5#pt2-ch2-pb1-l1g62 ) received the Royal Assent.
This Act was another of those portmanteau pieces of legislation for which the current government is famous, mixing up new regulations on the holding of inquests, driving offences, provocation in murder cases and, crucially, a new law making it a criminal offence to be found in possession of an indecent cartoon image of a child.

The horror facing the unpopular Olympics logo is that this is a strict liability offence. If an image is indecent, or held to be so by a jury, it is no good the Olympic Committee claiming that it was not intended as such.

Regular readers will be aware of the controversy that surrounded the current logo since the day it was launched. Critics were not impressed by the £400,000 that had allegedly been shelled out (http://news.bbc.co.uk/sport1/hi/other_sports/olympics_2012/6718243.stm ) to creative consultancy Wolff Olins to come up with the design.

However, it was the logo's perceived suggestiveness - with many sniggering that it appeared to show Lisa Simpson performing an act of fellatio - that excited internet controversy.

< - >

http://www.theregister.co.uk/2009/11/29/olympic_logo_lisa_simpson/

From rforno at infowarrior.org Sun Nov 29 14:49:01 2009
From: rforno at infowarrior.org (Richard Forno)
Date: Sun, 29 Nov 2009 09:49:01 -0500
Subject: [Infowarrior] - Seltzer: Anticircumvention Versus Open Innovation
Message-ID:

The Imperfect is the Enemy of the Good: Anticircumvention Versus Open Innovation
Wendy Seltzer
Harvard University - Berkman Center for Internet & Society; University of Colorado Law School
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1496058
Berkeley Technology Law Journal, Vol. 25, 2010

Abstract: Digital Rights Management, law-backed technological control of usage of copyrighted works, is clearly imperfect: It often fails to stop piracy and frequently blocks non-infringing uses. Yet the drive to correct these imperfections masks a deeper conflict, between the DRM system of anticircumvention and open development in the entire surrounding media environment. This conflict, at the heart of the DRM schema, will only deepen, even if other aspects of DRM can be improved. This paper takes a systemic look at the legal, technical, and business environment of DRM to highlight this openness conflict and its effects.

Scholars have described DRM's failures to protect copyright exceptions, its failures to stop unauthorized copying, and its impact on complementary innovation. This paper takes those debates as background to focus on the foreclosure of an entire mode of development and its opportunities for user innovation. Under an anticircumvention regime, the producers of media content can authorize or deny authorization to technologies for playing their works. Open source technologies and their developers cannot logically be authorized. "Open-source DRM" is a contradiction in terms, for open source encourages user modification (and copyleft requires its availability), while DRM compels "robustness" against those same user modifications. Since DRM aims to control use of content while permitting the user to see or hear it, it can be implemented only in software or hardware that is able to override its user's wishes - and can't be hacked to do otherwise. For a DRM implementation to make any sense, therefore, its barriers against user modification of the rights management must be at least as strong as those against user access to its protected content. I characterize a "DRM imperative"
and explore the technical incompatibilities between regulation by code and exploration of code. We see DRM centralizing development and forcing the black-boxing of complementary media technology, in a widening zone as it mandates that protected media be played only on compliant devices, that those may output media content only to other compliant devices, etc. The home media network is thus progressively closed to open-source development.

Foreclosing open development costs us technically, economically, and socially. We lose predicted technological improvements, those of user-innovators (von Hippel) or disruptive technologies (Christensen) from outside the incumbent-authorized set, that could offer new options for content creators and audiences (such as better playback, library, mixing, and commerce options). We lose social and cultural opportunities for commons-based peer production. In the full cost-benefit analysis of anticircumvention, the loss to open innovation would outweigh the gains from this imperfect mechanism of copyright enforcement. Treating code literally as law leaves the law with too many harmful side effects.

Keywords: copyright, anticircumvention, digital copyright, DRM, digital rights management, user innovation, free software, open innovation

Accepted Paper Series
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1496058

From rforno at infowarrior.org Sun Nov 29 14:55:00 2009
From: rforno at infowarrior.org (Richard Forno)
Date: Sun, 29 Nov 2009 09:55:00 -0500
Subject: [Infowarrior] - Spy agencies foil Obama plan for transparency
Message-ID: <545CF7C6-E1E1-419B-9971-728984F02EEE@infowarrior.org>

Boston Globe
November 29, 2009
Release Of Secret Reports Delayed
Spy agencies foil Obama plan for transparency
By Bryan Bender, Globe Staff
http://www.boston.com/news/nation/washington/articles/2009/11/29/declassification_of_secret_documents_to_be_delayed/

WASHINGTON - President Obama will maintain a lid of secrecy on millions of pages of military and intelligence documents that were scheduled to be declassified by the end of the year, according to administration officials.

The missed deadline spells trouble for the White House's promises to introduce an era of government openness, say advocates, who believe that releasing historical information enforces a key check on government behavior. They cite as an example the abuses by the Central Intelligence Agency during the Cold War, including domestic spying and assassinations of foreign officials, that were publicly outlined in a set of agency documents known as the "family jewels."

The documents in question - all more than 25 years old - were scheduled to be declassified on Dec. 31 under an order originally signed by President Bill Clinton and amended by President George W. Bush. But now Obama finds himself in the awkward position of extending the secrecy, despite his repeated pledges of greater transparency, because his administration has been unable to prod spy agencies into conformance.

Some of the agencies have thrown up roadblocks to disclosure, engaged in turf battles over how documents should be evaluated, and have reviewed only a fraction of the material to determine whether releasing them would jeopardize national security. In the face of these complications, the White House has given the agencies a commitment that they will get an extension beyond Dec. 31 of an undetermined length - possibly years, said the administration officials, who spoke on the condition they not be identified discussing internal deliberations.
It will be the third such extension: Clinton granted one in 2000 and Bush granted one in 2003.

The documents, dating from World War II to the early 1980s, cover the gamut of foreign relations, intelligence activities, and military operations - with the exception of nuclear weapons data, which remain protected by Congress. Limited to information generated by more than one agency, the records in question are held by the Central Intelligence Agency; the National Security Agency; the departments of Justice, State, Defense, and Energy; and other security and intelligence agencies. None of the agencies involved responded to requests for comment, saying they could not discuss internal deliberations.

"They never want to give up their authority," said Meredith Fuchs, general counsel at the National Security Archive, a research center at George Washington University that collects and publishes declassified information. "The national security bureaucracy is deeply entrenched and is not willing to give up some of the protections they feel they need for their documents."

The failure to meet the disclosure deadline "does not augur well for new, more ambitious efforts to advance classification reform," said Steven Aftergood, a specialist on government secrecy at the Federation of American Scientists in Washington. "If binding deadlines can be extended more or less at will, then any new declassification requirements will be similarly subject to doubt or defiance."

Obama laid out broad goals for reforming the system in May, when he ordered a 90-day review by the National Security Council. Government, he said, "must be as transparent as possible and must not withhold information for self-serving reasons or simply to avoid embarrassment."

The review is part of Obama's efforts to make all government operations more public, including his decision to release White House visitor logs and set up a new office to expedite the release of government files under the Freedom of Information Act. Among the revisions Obama said he wanted considered were the establishment of a National Declassification Center to coordinate and speed up the process, as well as new procedures to prevent what he called "over classification."

But officials said an executive order that has been drafted by the White House to replace a disclosure order that Bush signed in 2003 is meeting resistance from key national security and intelligence officials, delaying its approval.

"The next phase is most crucial," said William J. Bosanko, director of the Information Security Oversight Office at the National Archives and Records Administration, who was appointed by Obama in April 2008 to oversee the government classification system. "It is a bit of a test. You have an administration that has committed to certain things and tried to shape the direction but then you have the bureaucracy which is very adept at resisting change."
Obama aides want the Office of the Director of National Intelligence, set up in 2005 to oversee all spy agencies, to replace the CIA, much to the consternation of CIA officials, the officials said.

The White House is meeting even more resistance on its position that no information shall remain classified indefinitely. Depending on the type of information involved, the White House is proposing that virtually all classified information - not just some categories - be automatically released 25 years, 50 years, or in the case of records that involve intelligence sources, 75 years after they are created. The draft Obama guidelines, a copy of which was obtained by Aftergood, include an additional five-year extension for the most sensitive documents.

Defense and intelligence information undergoes a more rigorous review before being made public - often decades after it is generated - than more general government files that do not require officials to have special security clearances to handle them. The documents in question are considered part of the nation's permanent record, and therefore hold special historical significance. Only three percent of government records are so designated.

As the delays mount, so does the backlog of classified data to be reviewed. Aftergood and others worry that if automatic deadlines are not enforced, many documents will never reach the public because the agencies who have custody of them can continue to make the same arguments.

"The only way to get a handle on this is to allow classification to expire at some point," said Aftergood. "This is information that is not just from years ago, but generations ago. The new delay is discouraging because the innovations in the Clinton order are being subverted. That means even bolder reforms that some of us hope for will be that much more difficult."

Still, even if such information is eventually declassified, that doesn't mean that the public will get to see it in a timely manner. Officials estimate that there are 400 million pages of historical documents that have been declassified but remain in government records centers and have not been processed at the National Archives, where the public can view them.

From rforno at infowarrior.org Mon Nov 30 01:47:40 2009
From: rforno at infowarrior.org (Richard Forno)
Date: Sun, 29 Nov 2009 20:47:40 -0500
Subject: [Infowarrior] - USAF orders 2200 Sony PS3s
Message-ID: <5F454417-3F1F-4DA2-A7AD-4A2DCB07CA4B@infowarrior.org>

US Air Force orders 2200 Sony PS3s
Extending supercomputing Linux cluster
By James Sherwood
25th November 2009 13:19 GMT
http://www.reghardware.co.uk/2009/11/25/ps3_supercomputer/

The US Air Force plans to buy a whopping 2200 PlayStation 3 games consoles which it will use to expand an existing PS3-based supercomputer.

The current cluster of consoles contains 336 PS3s, each connected by their RJ45 ports to a common 24-port Gigabit Ethernet hub, Air Force online documentation states. The entire set-up runs on an in-house developed Linux-based OS.

However, the expanded PS3 supercomputer will be used to further the Air Force's "architectural studies" which "determine what software and hardware technologies are implemented [in] military systems". The Air Force hasn't said much more than this, preferring to keep its intentions close to its medal-bedecked chest.
However, it did describe one possible scenario where the PS3 supercomputer could be used to determine additional software and hardware requirements for advanced computing architectures and high-performance embedded computing applications.

The PS3 supercomputer has previously been used to test methods of processing multiple radar images into higher resolution composite images - known as Back Projection Synthetic Aperture Radar Imager formation - additional Air Force documents revealed.

It is unclear when the US Air Force hopes to have its 2536-strong PS3 supercomputer up and running. Presumably it's after the squaddies are done playing Call of Duty: Modern Warfare 2. ®

From rforno at infowarrior.org Mon Nov 30 15:28:05 2009
From: rforno at infowarrior.org (Richard Forno)
Date: Mon, 30 Nov 2009 10:28:05 -0500
Subject: [Infowarrior] - Secrecy in science is a corrosive force
Message-ID: <6B6F18E1-E03E-40A2-83FE-EA63B10B2CEE@infowarrior.org>

Secrecy in science is a corrosive force
By Michael Schrage
Published: November 27 2009 11:11 | Last updated: November 27 2009 11:11
http://www.ft.com/cms/s/0/8aefbf52-d9e1-11de-b2d5-00144feabdc0.html

With no disrespect to sausages and laws, Bismarck's most famous aphorism clearly requires updating. "Scientific research" is bidding furiously to make the global shortlist of things one should not see being made.

Understandably so. Sciences at the cutting edge of statistics and public policy can make blood sports seem genteel. Scientists aggressively promoting pet hypotheses often relish the opportunity to marginalise and neutralise rival theories and exponents. The malice, mischief and Machiavellian manoeuvrings revealed in the illegally hacked megabytes of emails from the University of East Anglia's prestigious Climatic Research Unit, for example, offer a useful paradigm of contemporary scientific conflict. Science may be objective; scientists emphatically are not.

This episode illustrates what too many universities, professional societies, and research funders have irresponsibly allowed their scientists to become. Shame on them all. The source of that shame is a toxic mix of institutional laziness and complacency. Too many scientists in academia, industry and government are allowed to get away with concealing or withholding vital information about their data, research methodologies and results. That is unacceptable and must change.

Only recently in America, for example, have academic pharmaceutical researchers been required to disclose certain financial conflicts of interest they might have. On issues of the greatest importance for public policy, science researchers are less transparent than they should be. That behaviour undermines science, policy and public trust.

Dubbed "climate-gate" by global warming sceptics, the most outrageous East Anglia email excerpts appear to suggest respected scientists misleadingly manipulated data and suppressed legitimate argument in peer-reviewed journals. These claims are forcefully denied, but the correspondents do little to enhance confidence in either the integrity or the professionalism of the university's climatologists. What is more, there are no denials around the researchers' repeated efforts to avoid meaningful compliance with several requests under the UK Freedom of Information Act to gain access to their working methods. Indeed, researchers were asked to delete and destroy emails. Secrecy, not privacy, is at the rotten heart of this bad behaviour by ostensibly good scientists.
Why should research funding institutions and taxpayers fund scientists who deliberately delay, obfuscate and deny open access to their research? Why should scientific journals publish peer-reviewed research where the submitting scientists have not made every reasonable effort to make their work - from raw data to sophisticated computer simulations - as transparent and accessible as possible? Why should responsible policymakers in America, Europe, Asia and Latin America make decisions affecting people's health, wealth and future based on opaque and inaccessible science? They should not.

The issue here is not about good or bad science; it is about insisting that scientists and their work be open and transparent enough so that research can be effectively reviewed by broader communities of interest. Open science minimises the likelihood and consequences of bad science.

Debilitating and even fatal side-effects of new drugs might have been detected sooner if pharmaceutical companies had been compelled to share data on all the trials they ran, not just favourable ones. Similarly, the flawed and successfully overturned 1999 child murder conviction of Sally Clark might never have occurred if the statistical errors made by expert witness pediatrician Sir Roy Meadow had been questioned earlier. Data withholding played a distortive and destructive role in the cold fusion frenzy 20 years ago, when two scientists announced they had produced energy by cold fusion, only to be widely and quickly denounced by the scientific community. Concealment and secrecy invite mischief; too many scientists seeking influence accept the invitation.

Achieving this is simple and inexpensive. It is not done by more rigorous enforcement of the Freedom of Information Act, although that would help. It comes from branding "openness" into every link of the scientific research value chain. Public or tax-deductible research funding should be contingent upon maximum transparency. Scientists and affiliated institutions that will not make the research process as transparent as the end result will be asked to return the money or risk denial of future funds.

University accreditation should be contingent not just upon faculty research and publication but also upon demonstrating policies and practices that champion data sharing. Professional societies and journals should make data sharing a condition of membership and publication. Researchers must be pushed to be more open at every step of their process. The Royal Society not only makes data sharing a precondition of publication, it provides up to 10 megabytes of free space for supplementary data on its website. Unfortunately, too many scientific societies and publishers are less than rigorous or insistent about openness. Strip them of their tax-deductible status. Make openness a condition of tax advantage.

Of course commercial and proprietary issues can influence the manner of data sharing and transparency. But the East Anglia emails represent an individual and institutional imperative to err on the side of minimal disclosure even as researchers sought to maximise the academic and political impact of their work. That is perverse. Public interest suggests scientists and their sponsoring institutions be made as legally, financially, professionally and ethically as uncomfortable as possible about concealing and withholding relevant research information.
If the University of East Anglia had been sharing more of its data and the computer models and statistical simulations running that data, the email hack would have been much ado about nothing. When doing important research about the potential future of the planet, scientists should have nothing to hide. Their obligation to the truth is an obligation to openness.

The writer researches the economics of innovation and technology transfer at MIT and is a visiting researcher at London's Imperial College.

Copyright The Financial Times Limited 2009. You may share using our article tools. Please don't cut articles from FT.com and redistribute by email or post to the web.

From rforno at infowarrior.org Mon Nov 30 20:09:32 2009
From: rforno at infowarrior.org (Richard Forno)
Date: Mon, 30 Nov 2009 15:09:32 -0500
Subject: [Infowarrior] - EU ACTA Analysis Leaks: Confirms Plans For Global DMCA
Message-ID: <686010CB-3542-4B1B-B38F-74625A47E206@infowarrior.org>

EU ACTA Analysis Leaks: Confirms Plans For Global DMCA, Encourage 3 Strikes Model
PDF http://www.michaelgeist.ca/content/view/4575/125/
Monday November 30, 2009

The European Commission analysis of ACTA's Internet chapter has leaked, indicating that the U.S. is seeking to push laws that extend beyond the WIPO Internet treaties and beyond current European Union law (the EC posted the existence of the document last week but refused to make it publicly available). The document contains detailed comments on the U.S. proposal, confirming the U.S. desire to promote a three-strikes and you're out policy, a Global DMCA, harmonized contributory copyright infringement rules, and the establishment of an international notice-and-takedown policy.

The document confirms that the U.S. proposal contains seven sections:

Paragraph 1 - General obligations. These focus on "effective enforcement procedures" with expeditious remedies that deter further infringement. The wording is similar to TRIPs Article 41; however, the EU notes that unlike the international treaty provisions, there is no statement that procedures shall be fair, equitable, and/or proportionate. In other words, it seeks to remove some of the balance in the earlier treaties.

Paragraph 2 - Third party liability. The third party liability provisions focus on copyright, though the EU notes that it could (should) be extended to trademark and perhaps other IP infringement. The goal of this section is to create an international minimum harmonization regarding the issue of what is called in some Member States "contributory copyright infringement". The U.S. proposal would include "inducement" in the standard, something established in the U.S. Grokster case, but not found in many other countries. This would result in a huge change in domestic law in many countries (including Canada), as the EU notes it goes beyond current EU law.

Paragraph 3 - Limitations on 3rd Party Liability. This section spells out how an ISP may qualify for a safe harbour from the liability established in the earlier section. These include an exemption for technical processes such as caching. As reported earlier, ACTA would establish a required notice-and-takedown system, which goes beyond Canadian law (and beyond current EU law). Moreover, ACTA clearly envisions opening the door to a three-strikes and you're out model, as the EU document states:

EU understands that footnote 6 provides for an example of a reasonable policy to address the unauthorized storage or transmission of protected materials.
However, the issue of termination of subscriptions and accounts has been subject to much debate in several Member States. Furthermore, the issue of whether a subscription or an account may be terminated without prior court decision is still subject to negotiations between the European Parliament and the Council of Telecoms Ministers regarding the Telecoms Package.

Paragraph 4 - Anti-circumvention Provisions. ACTA would require civil and criminal penalties associated with anti-circumvention provisions (legal protection for digital locks). The EU notes that this goes beyond the requirements of the WIPO Internet treaties and beyond current EU law, which "leaves a reasonable margin of discretion to Member States." The EU also notes that there is no link between the anti-circumvention provisions and copyright exceptions. The U.S. proposal also requires the anti-circumvention provisions to apply to TPMs that merely protect access to a work (rather than reproduction or making available). This would again go beyond current EU law to include protection against circumventing technologies like region coding on DVDs. From a Canadian perspective, none of this is currently domestic law. As previously speculated, the clear intent is to establish a Global DMCA.

Paragraph 5 - Civil and Criminal Enforcement of Anti-Circumvention. This section requires both civil and criminal provisions for the anti-circumvention rules, something not found in the WIPO Internet treaties. The anti-circumvention provisions are also designed to stop countries from establishing interoperability requirements (i.e., the ability for consumers to play purchased music on different devices). The EU notes that this is not consistent with its law, which states "Compatibility and interoperability of the different systems should be encouraged." Of course, one might reasonably ask why such a provision is even in ACTA.

Paragraph 6 - Rights Management Information protection. This section includes similar criminal and civil requirements for rights management information.

Paragraph 7 - Limitations to Rights Management Information protection.

In summary, the EU analysis confirms the earlier leak (though the Internet chapter has seven sections, rather than five). The fears about the U.S. intent with respect to ACTA are confirmed - extending the WIPO Internet treaties, creating a Global DMCA, promoting a three-strikes and you're out model, even stopping efforts to create interoperability mandates. ACTA would render current Canadian copyright law virtually unrecognizable, as the required changes go far beyond our current rules (and even those contemplated in prior reform bills). This raises the question of whether the Department of Foreign Affairs negotiation mandate letter really goes this far given the domestic changes that would be required.

This latest leak also reinforces the need for all governments to come clean - releasing both the ACTA text and government analysis of the treaty should be a condition of any further participation in the talks.

From rforno at infowarrior.org Mon Nov 30 20:13:56 2009
From: rforno at infowarrior.org (Richard Forno)
Date: Mon, 30 Nov 2009 15:13:56 -0500
Subject: [Infowarrior] - 10 Alternatives To Mininova
Message-ID:

10 Alternatives To Mininova
Written by TorrentWatcher on November 26, 2009
http://torrentfreak.com/10-alternatives-to-mininova-091126/

After nearly five years of loyal service, Mininova disabled access to over a million torrent files when it partly shut down its website.
Starting today, only approved publishers are able to upload files to the site, but luckily there are plenty of alternatives and potential replacements BitTorrent users can flock to.

With an impressive 175,820,430 visits and close to a billion page views in the last 30 days, Mininova set a record that it will be unable to break in the near future. Last August a Dutch court ruled that Mininova had to remove all links to "infringing" torrent files, with disastrous consequences.

Since it is technically unfeasible to pre-approve or filter every potentially infringing torrent file, the Mininova team decided to throw in the towel and only allow torrents to be submitted by approved uploaders. This move resulted in the deletion of more than a million torrents, many of which were not infringing any copyrights at all.

Thankfully, there are still plenty of alternatives for those BitTorrent users who are looking for the latest Ubuntu, OpenSUSE or Fedora release. Below we provide a random list of public torrent sites that are still open, but there are of course hundreds more sites we could have included. If your personal favorite is missing, feel free to post it in the comments below - preferably with your reasons why it should be included in any upcoming lists.

1. Torrentzap
2. Fenopy
3. ExtraTorrent
4. KickassTorrents
5. BTjunkie
6. Monova
7. isoHunt
8. yourBitTorrent
9. The Pirate Bay
10. ShareReactor

Update: The owner of Monova told TorrentFreak that he has reserved all Mininova usernames for people who want to make the switch to his site. The account names can be claimed here. Also, we replaced some sites in the original top 10 because they went down or started to serve trojans or viruses.