
Tuesday, April 29, 2014

Spring Edition of CCR's Massive Round-Up of New Law Articles on the CFAA, Cybercrime, Privacy, 4th Amendment, Surveillance, and more

Some impressive articles have been published since the last round-up I did in February; if you missed that post, see: Massive round-up of new law articles, covering privacy, Fourth Amendment, GPS, cell site, cybercrime, big data, revenge porn, drones, and more

New Legal Scholarship (with abstracts where available)

Orin S. Kerr, The Next Generation Communications Privacy Act, 162 U. Pa. L. Rev. 373 (2014)
In 1986, Congress enacted the Electronic Communications Privacy Act (ECPA) to regulate government access to Internet communications and records. ECPA is widely regarded as outdated, and ECPA reform is now on the Congressional agenda. At the same time, existing reform proposals retain the structure of the 1986 Act and merely tinker with a few small aspects of the statute. This Article offers a thought experiment about what might happen if Congress were to repeal ECPA and enact a new privacy statute to replace it. 
The new statute would look quite different from ECPA because overlooked changes in Internet technology have dramatically altered the assumptions on which the 1986 Act was based. ECPA was designed for a network world with high storage costs and only local network access. Its design reflects the privacy threats of such a network, including high privacy protection for real-time wiretapping, little protection for noncontent records, and no attention to particularity or jurisdiction. Today’s Internet reverses all of these assumptions. Storage costs have plummeted, leading to a reality of almost total storage. Even U.S.-based services now serve a predominantly foreign customer base. A new statute would need to account for these changes. 
This Article contends that a next generation privacy act should contain four features. First, it should impose the same requirement on access to all contents. Second, it should impose particularity requirements on the scope of disclosed metadata. Third, it should impose minimization rules on all accessed content. And fourth, it should impose a two-part territoriality regime with a mandatory rule structure for U.S.-based users and a permissive regime for users located abroad.

And a Response to Kerr's article: Ryan Calo, Communications Privacy for and by Whom?, 162 U. Pa. L. Rev. Online 231 (2014)
Andrea M. Matwyshyn, Privacy, the Hacker Way, 87 S. Cal. L. Rev. 1 (2014)
This Article seeks to clarify the relationship between contract law and promises of privacy and information security. It challenges three commonly held misconceptions in privacy literature regarding the relationship between contract and data protection—the propertization fatalism, the economic value fatalism, and the displacement fatalism—and argues in favor of embracing contract law as a way to enhance consumer privacy. Using analysis from Sorrell v. IMS Health Inc., marketing theory, and the work of Pierre Bourdieu, it argues that the value in information contracts is inherently relational: consumers provide “things of value”—rights of access to valuable informational constructs of identity and context—in exchange for access to certain services provided by the data aggregator. This Article presents a contract-based consumer protection approach to privacy and information security. Modeled on trade secret law and landlord-tenant law, it advocates for courts and legislatures to adopt a “reasonable data stewardship” approach that relies on a set of implied promises—nonwaivable contract warranties and remedies—to maintain contextual integrity of information and improve consumer privacy. 
Matthew F. Meyers, GPS “Bullets” and the Fourth Amendment, 4 Wake Forest L. Rev. Online 18 (2014) (No Abstract)

From the Fordham Law Review, April 2014 | Vol. 82, No. 5:
Peter Margulies, The NSA in Global Perspective: Surveillance, Human Rights, and International Counterterrorism (No Abstract)
Casey J. McGowan, The Relevance of Relevance: Section 215 of the USA PATRIOT Act and the NSA Metadata Collection Program
In June 2013, a National Security Agency (NSA) contractor, Edward Snowden, leaked classified documents exposing a number of secret government programs. Among these programs was the “telephony metadata” collection program under which the government collects records from phone companies containing call record data for nearly every American. News of this program created considerable controversy and led to a wave of litigation contesting the validity of the program. 
The legality of the metadata collection program has been challenged on both constitutional and statutory grounds. The program derives its authority from Section 215 of the USA PATRIOT Act, codified as 50 U.S.C. § 1861. The statute requires that there be reasonable grounds to believe the data collected is “relevant to an authorized investigation.” The government deems all these records “relevant” based on the fact that they are used to find patterns and connections in preventing terrorist activity. Critics of the program, however, assert that billions of records cannot possibly be relevant when a negligible portion of those records are actually linked to terrorist activity. This Note examines the conflicting interpretations of “relevant,” and concludes that while the current state of the law permits bulk data collection, the power of the NSA to collect records on such a large scale must be reined in.
Thomas Rosso, Unlimited Data?: Placing Limits on Searching Cell Phone Data Incident to a Lawful Arrest 
The “search incident to arrest exception” is one of several exceptions to the general requirement that police must obtain a warrant supported by probable cause before conducting a search. Under the exception, an officer may lawfully search an arrestee’s person and the area within the arrestee’s immediate control without a warrant or probable cause, so long as the search is conducted contemporaneously with the lawful arrest. The U.S. Supreme Court has justified the exception based on the need for officers to discover and remove any weapons or destructible evidence that may be within the arrestee’s reach. Additionally, the Court has held that, under the exception, police may search any containers found on the arrestee’s person without examining the likelihood of uncovering weapons or evidence related to the arrestee’s offense. In light of these principles, should the exception permit officers to search the data of a cell phone found on an arrestee’s person? 
In January 2014, the Supreme Court granted certiorari to review two appellate rulings and resolve a split among the circuits and state courts on this question. This Note examines three approaches courts have taken to resolve the issue: a broad approach, a middle approach, and a narrow approach. This Note argues that the Supreme Court should adopt the narrow approach and prohibit warrantless searches of cell phone data under the exception.
Stephen Moor, Cyber Attacks and the Beginnings of an International Cyber Treaty, North Carolina Journal of International Law and Commercial Regulation (Fall 2013) (No Abstract)

Katherine Booth Wellington, Cyberattacks on Medical Devices and Hospital Networks: Legal Gaps and Regulatory Solutions, 30 Santa Clara High Tech. L.J. 139 (2014)
Cyberattacks on medical devices and hospital networks are a real and growing threat. Malicious actors have the capability to hack pacemakers and insulin pumps, shut down hospital networks, and steal personal health information. This Article analyzes the laws and regulations that apply to cyberattacks on medical devices and hospital networks and argues that the existing legal structure is insufficient to prevent these attacks. While the Computer Fraud and Abuse Act and the Federal Anti-Tampering Act impose stiff penalties for cyberattacks, it is often impossible to identify the actor behind a cyberattack—greatly decreasing the deterrent power of these laws. Few laws address the role of medical device manufacturers and healthcare providers in protecting against cyberattacks. While HIPAA incentivizes covered entities to protect personal health information, HIPAA does not apply to most medical device manufacturers or cover situations where malicious actors cause harm without accessing personal health information. Recent FDA draft guidance suggests that the agency has begun to impose cybersecurity requirements on medical device manufacturers. However, this guidance does not provide a detailed roadmap for medical device cybersecurity and does not apply to healthcare providers. Tort law may fill in the gaps, although it is unclear if traditional tort principles apply to cyberattacks. New legal and regulatory approaches are needed. One approach is industry self-regulation, which could lead to the adoption of industry-wide cybersecurity standards and lay the groundwork for future legal and regulatory reform. A second approach is to develop a more forward-looking and flexible FDA focus on evolving cybersecurity threats. A third approach is a legislative solution. Expanding HIPAA to apply to medical device manufacturers and to any cyberattack that causes patient harm is one way to incentivize medical device manufacturers and healthcare providers to adopt cybersecurity measures. All three approaches provide a starting point for considering solutions to twenty-first century cybersecurity threats.
Merritt Baer, Who is the Witness to an Internet Crime: The Confrontation Clause, Digital Forensics, and Child Pornography, 30 Santa Clara High Tech. L.J. 31 (2014)
The Sixth Amendment’s Confrontation Clause guarantees the accused the right to confront witnesses against him. In this article I examine child pornography prosecution, in which we must apply this constitutional standard to digital forensic evidence. I ask, “Who is the witness to an Internet crime?” 
The Confrontation Clause proscribes the admission of hearsay. In Ohio v. Roberts, the Supreme Court stated that the primary concern was reliability and that hearsay might be admissible if the reliability concerns were assuaged. Twenty-four years later, in Crawford v. Washington, the Supreme Court repositioned the Confrontation Clause of the Sixth Amendment as a procedural right. Even given assurances of reliability, “testimonial” evidence requires a physical witness. 
This witness production requirement could have been sensible in an era when actions were physically tied to humans. But in an Internet age, actions may take place at degrees removed from any physical person. 
The hunt for a witness to digital forensic evidence involved in child pornography prosecution winds through a series of law enforcement protocols, on an architecture owned and operated by private companies. Sentencing frameworks associated with child pornography similarly fail to reflect awareness of the way that actions occur online, even while they reinforce what is at stake. 
The tensions I point to in this article are emblematic of emerging questions in Internet law. I show that failing to link the application of law and its undergirding principles to a digital world does not escape the issue, but distorts it. This failure increases the risk that our efforts to preserve Constitutional rights are perverted or made impotent.
Yana Welinder, Facing Real-Time Identification in Mobile Apps & Wearable Computers, 30 Santa Clara High Tech. L.J. 89 (2014)
The use of face recognition technology in mobile apps and wearable computers challenges individuals’ ability to remain anonymous in public places. These apps can also link individuals’ offline activities to their online profiles, generating a digital paper trail of their every move. The ability to go off the radar allows for quiet reflection and daring experimentation—processes that are essential to a productive and democratic society. Given what we stand to lose, we ought to be cautious with groundbreaking technological progress. It does not mean that we have to move any slower, but we should think about potential consequences of the steps that we take. 
This article maps out the recently launched face recognition apps and some emerging regulatory responses to offer initial policy considerations. With respect to current apps, app developers should consider how the relevant individuals could be put on notice given that the apps will not only be using information about their users, but also about the persons being identified. They should also consider how the apps could minimize their data collection and retention and keep the data secure. Today’s face recognition apps mostly use photos from social networks. They therefore call for regulatory responses that consider the context in which users originally shared the photos. Most importantly, the article highlights that the Federal Trade Commission’s first policy response to consumer applications that use face recognition did not follow the well-established principle of technology neutrality. The article argues that any regulation with respect to identification in real time should be technology neutral and narrowly address harmful uses of computer vision without hampering the development of useful applications. 
Valerie Redmond, Note, I Spy with My Not So Little Eye: A Comparison of Surveillance Law in the United States and New Zealand, 37 Fordham Int’l L.J. 733 (2014) (No Abstract)

Lawrence Rosenthal, Binary Searches and the Central Meaning of the Fourth Amendment, 22 Wm. & Mary Bill Rts. J. 881 (2014) (No Abstract)

Jason P. Nance, School Surveillance and the Fourth Amendment, 2014 Wisc. L. Rev. 79 (2014)
In the aftermath of several highly publicized incidents of school violence, public school officials have increasingly turned to intense surveillance methods to promote school safety. The current jurisprudence interpreting the Fourth Amendment generally permits school officials to employ a variety of strict measures, separately or in conjunction, even when their use creates a prison-like environment for students. Yet, not all schools rely on such strict measures. Recent empirical evidence suggests that low-income and minority students are much more likely to experience intense security conditions in their schools than other students, even after taking into account factors such as neighborhood crime, school crime, and school disorder. These empirical findings are problematic on two related fronts. First, research suggests that students subjected to these intense surveillance conditions are deprived of quality educational experiences that other students enjoy. Second, the use of these measures perpetuates social inequalities and exacerbates the school-to-prison pipeline.    
Under the current legal doctrine, students have almost no legal recourse to address conditions creating prison-like environments in schools. This Article offers a reformulated legal framework under the Fourth Amendment that is rooted in the foundational Supreme Court cases evaluating students’ rights under the First, Fourth, and Fourteenth Amendments. The historical justification courts invoke to abridge students’ constitutional rights in schools, including their Fourth Amendment rights, is to promote the educational interests of the students. This justification no longer holds true when a school creates a prison-like environment that deteriorates the learning environment and harms students’ educational interests. This Article maintains that in these circumstances, students’ Fourth Amendment rights should not be abridged but strengthened.
Meredith Mays Espino, Sometimes I Feel Like Somebody’s Watching Me . . . Read?: A Comment On The Need For Heightened Privacy Rights For Consumers Of Ebooks, 30 J. Marshall J. Info. Tech. & Privacy L. 281 (2013)

Emily Katherine Poole, Hey Girls, Did You Know? Slut-Shaming on the Internet Needs to Stop, 48 USF L. Rev. 221 (2013)
When it comes to sexual expression, females are denied the freedoms enjoyed by males. Even though sexual acts often take both a male and a female, it is the girl that faces society’s judgment when her behavior is made public. The Internet has created a forum for such "slut shaming" to occur on a whole new level. Now when a girl is attacked for her sexuality, her attackers can be spread across the U.S., or even the world. The Internet is an incredible resource for sharing and gaining information, but it is also allowing attacks on female sexuality to flourish.  
While slut shaming can and does occur to females of all ages, this Article focuses on its prevalence among teen and preteen girls, falling under the umbrella of cyberbullying. Because actions and legislation that address cyber slut-shaming can also remedy other types of cyberbullying, the problems and proposed solutions elaborated in this Article can be expanded to include all types of cyberbullying. I chose to focus on one specific and pervasive harm — that caused by sexual shaming — to help bring attention to both the repercussions of cyberbullying and to the broader problem of gender inequality that persists in forums and social networking sites across the Internet.
Sprague, Robert, No Surfing Allowed: A Review and Analysis of Legislation Prohibiting Employers from Demanding Access to Employees’ and Job Applicants’ Social Media Accounts (January 31, 2014). Albany Law Journal of Science and Technology, Vol. 24, 2014
This article examines recent state legislation prohibiting employers from requesting username and password information from employees and job applicants in order to access restricted portions of those employees’ and job applicants’ personal social media accounts. This article raises the issue of whether this legislation is even needed, from both practical and legal perspectives, focusing on: (a) how prevalent the practice is of requesting employees’ and job applicants’ social media access information; (b) whether alternative laws already exist which prohibit employers from requesting employees’ and job applicants’ social media access information; and (c) whether any benefits can be derived from this legislative output. After analyzing the potential impact of this legislation on employees, job applicants, and employers, this article concludes that such legislation is but an answer seeking a problem and raises more questions than it answers.
From the Washington Law Review, Volume 89 | Number 1 | March 2014
Danielle Keats Citron & Frank Pasquale, The Scored Society: Due Process for Automated Predictions, 89 Wash. L. Rev. 1 
Big Data is increasingly mined to rank and rate individuals. Predictive algorithms assess whether we are good credit risks, desirable employees, reliable tenants, valuable customers—or deadbeats, shirkers, menaces, and “wastes of time.” Crucial opportunities are on the line, including the ability to obtain loans, work, housing, and insurance. Though automated scoring is pervasive and consequential, it is also opaque and lacking oversight. In one area where regulation does prevail—credit—the law focuses on credit history, not the derivation of scores from data.  
Procedural regularity is essential for those stigmatized by “artificially intelligent” scoring systems. The American due process tradition should inform basic safeguards. Regulators should be able to test scoring systems to ensure their fairness and accuracy. Individuals should be granted meaningful opportunities to challenge adverse decisions based on scores miscategorizing them. Without such protections in place, systems could launder biased and arbitrary data into powerfully stigmatizing scores. 
Elizabeth E. Joh, Policing by Numbers: Big Data and the Fourth Amendment, 89 Wash. L. Rev. 35 
The age of “big data” has come to policing. In Chicago, police officers are paying particular attention to members of a “heat list”: those identified by a risk analysis as most likely to be involved in future violence. In Charlotte, North Carolina, the police have compiled foreclosure data to generate a map of high-risk areas that are likely to be hit by crime. In New York City, the N.Y.P.D. has partnered with Microsoft to employ a “Domain Awareness System” that collects and links information from sources like CCTVs, license plate readers, radiation sensors, and informational databases. In Santa Cruz, California, the police have reported a dramatic reduction in burglaries after relying upon computer algorithms that predict where new burglaries are likely to occur. Unlike the data crunching performed by Target, Walmart, or Amazon, the introduction of big data to police work raises new and significant challenges to the regulatory framework that governs conventional policing. This article identifies three uses of big data and the questions that these tools raise about conventional Fourth Amendment analysis. Two of these examples, predictive policing and mass surveillance systems, have already been adopted by a small number of police departments around the country. A third example — the potential use of DNA databank samples — presents an untapped source of big data analysis. While seemingly quite distinct, these three examples of big data policing suggest the need to draw new Fourth Amendment lines now that the government has the capability and desire to collect and manipulate large amounts of digitized information. 
Lawrence B. Solum, Artificial Meaning, 89 Wash. L. Rev. 69  (No Abstract)
Harry Surden, Machine Learning and the Law, 89 Wash. L. Rev. 87 (No Abstract)
David C. Vladeck, Machines Without Principals: Liability Rules and Artificial Intelligence, 89 Wash. L. Rev. 117 (No Abstract)
All of Volume 40, Issue 2 of the William Mitchell Law Review: Legal Issues in a World of Electronic Data, which includes the following articles:
Roland L. Trope and Stephen J. Humes, Before Rolling Blackouts Begin: Briefing Boards on Cyber Attacks That Target and Degrade the Grid 
Damien Riehl and Jumi Kassim, Is “Buying” Digital Content Just “Renting” for Life? Contemplating a Digital First-Sale Doctrine 
Stephen T. Middlebrook and Sarah Jane Hughes, Regulating Cryptocurrencies in the United States: Current Issues and Future Directions 
Nathan Newman, The Costs of Lost Privacy: Consumer Harm and Rising Economic Inequality in the Age of Google
Slobogin, Christopher, Panvasive Surveillance, Political Process Theory and the Nondelegation Doctrine (April 23, 2014). Georgetown Law Journal, Vol. 102, 2014; Vanderbilt Public Law Research Paper No. 14-13 (SSRN)
Using the rise of the surveillance state as its springboard, this Article makes a new case for the application of administrative law principles to law enforcement. It goes beyond asserting, as scholars of the 1970s did, that law enforcement should be bound by the types of rules that govern other executive agencies, by showing how the imperative of administrative regulation flows from a version of John Hart Ely’s political process theory and principles derived from the closely associated nondelegation doctrine. Part I introduces the notion of panvasive law enforcement — large-scale police actions that are not based on individualized suspicion — and exposes the incoherence of the Supreme Court’s “special needs” treatment of panvasive investigative techniques under the Fourth Amendment. It then contrasts the Court’s jurisprudence, and the variations of it proposed by scholars, to the representation-reinforcing alternative suggested by Ely’s work, which would require that panvasive searches and seizures be approved by a body that is representative of the affected group and be applied evenly. Part II explores the impact of political process theory on panvasive surveillance that is not currently considered a search or seizure under the Fourth Amendment, using fusion centers, camera surveillance, drone flights and the NSA’s metadata program as examples. Part III mines administrative law principles to show how the rationale underlying the nondelegation doctrine — if not the (supposedly moribund) doctrine itself — can help ensure that the values of representative democracy and transparency are maintained even once control over panvasive surveillance is largely ceded to the Executive Branch.

Kerr, Orin S., The Fourth Amendment and the Global Internet (April 23, 2014). Stanford Law Review, Vol. 65, 2015, Forthcoming (SSRN)
This article considers how Fourth Amendment law should adapt to the increasingly worldwide nature of Internet surveillance. It focuses on two types of problems not yet addressed by courts. First, the Supreme Court’s decision in United States v. Verdugo-Urquidez prompts several puzzles about how the Fourth Amendment treats monitoring on a global network where many lack Fourth Amendment rights. For example, can online contacts help create those rights? What if the government mistakenly believes that a target lacks Fourth Amendment rights? How does the law apply to monitoring of communications between those who have and those who lack Fourth Amendment rights? The second category of problems follows from different standards of reasonableness that apply outside the United States and at the international border. Does the border search exception apply to purely electronic transmission? And if reasonableness varies by location, is the relevant location the search, the seizure, or the physical person?  
The article explores and answers each of these questions through the lens of equilibrium-adjustment. Today’s Fourth Amendment doctrine is heavily territorial. The article aims to adapt existing principles for the transition from a domestic physical environment to a global networked world in ways that maintain the preexisting balance of Fourth Amendment protection. On the first question, it rejects online contacts as a basis for Fourth Amendment protection; allows monitoring when the government wrongly but reasonably believes that a target lacks Fourth Amendment rights; and limits monitoring between those who have and those who lack Fourth Amendment rights. On the second question, it contends that the border search exception should not apply to electronic transmission and that reasonableness should follow the location of data seizure. The Internet requires search and seizure law to account for the new facts of international investigations. The solutions offered in this article offer a set of Fourth Amendment rules tailored to the reality of global computer networks.
Marthews, Alex and Tucker, Catherine, Government Surveillance and Internet Search Behavior (March 24, 2014) (SSRN) 
This paper uses data from Google Trends on search terms from before and after the surveillance revelations of June 2013 to analyze whether Google users' search behavior shifted as a result of an exogenous shock in information about how closely their internet searches were being monitored by the U.S. government. We use data from Google Trends on search volume for 282 search terms across eleven different countries. These search terms were independently rated for their degree of privacy-sensitivity along multiple dimensions. Using panel data, our results suggest that cross-nationally, users were less likely to search using search terms that they believed might get them in trouble with the U.S. government. In the U.S., this was the main subset of search terms that were affected. However, internationally there was also a drop in traffic for search terms that were rated as personally sensitive. These results have implications for policy makers in terms of understanding the actual effects on search behavior of disclosures relating to the scale of government surveillance on the Internet and their potential effects on international competitiveness.
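For readers curious what that empirical setup looks like in practice, here is a minimal difference-in-differences sketch of the kind of panel regression the abstract describes. This is my own toy illustration, not the authors' code: the data, column names, and country labels below are all invented.

import pandas as pd
import statsmodels.formula.api as smf

# Toy panel: search volume per (term, country) cell, before/after June 2013.
df = pd.DataFrame({
    "volume":    [80, 75, 60, 58, 82, 81, 79, 80],
    "post_june": [0, 0, 1, 1, 0, 0, 1, 1],   # 1 = after the June 2013 revelations
    "sensitive": [1, 1, 1, 1, 0, 0, 0, 0],   # 1 = term rated privacy-sensitive
    "country":   ["US", "DE"] * 4,
})

# The post_june:sensitive coefficient estimates the differential drop in
# search volume for privacy-sensitive terms after the revelations.
model = smf.ols("volume ~ post_june * sensitive + C(country)", data=df).fit()
print(model.params)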
Hollis, Duncan B., Re-Thinking the Boundaries of Law in Cyberspace: A Duty to Hack? (April 12, 2014). in Cyberwar: Law & Ethics for Virtual Conflicts (J. Ohlin et al., eds., Oxford University Press, 2014 Forthcoming) (SSRN)
Warfare and boundaries have a symbiotic relationship. Whether as its cause or effect, States historically used war to delineate the borders that divided them. Laws and borders have a similar relationship. Sometimes laws are the product of borders as when national boundaries delineate the reach of States’ authorities. But borders may also be the product of law; laws regularly draw lines between permitted and prohibited conduct or bound off required acts from permissible ones. Both logics are on display in debates over international law in cyberspace. Some characterize cyberspace as a unique, self-governing ‘space’ that requires its own borders and the drawing of tailor-made rules therein. For others, cyberspace is merely a technological medium that States can govern via traditional territorial borders with rules drawn ‘by analogy’ from pre-existing legal regimes.  
This chapter critiques current formulations drawing law from boundaries and boundaries from law in cyberspace with respect to (a) its governance; (b) the use of force; and (c) international humanitarian law (IHL). In each area, I identify theoretical problems that exist in the absence of any uniform theory for why cyberspace needs boundaries. At the same time, I elaborate functional problems with existing boundary claims – particularly by analogy – in terms of their (i) accuracy, (ii) effectiveness and (iii) completeness. These prevailing difficulties on whether, where, and why borders are needed in cyberspace suggest the time is ripe for re-appraising the landscape.
This chapter seeks to launch such a re-thinking project by proposing a new rule of IHL – a Duty to Hack. The Duty to Hack would require States to use cyber-operations in their military operations whenever they are the least harmful means available for achieving military objectives. Thus, if a State can achieve the same military objective by bombing a factory or using a cyber-operation to take it off-line temporarily, the Duty to Hack requires that State to pursue the latter course. Although novel, I submit the Duty to Hack more accurately and effectively accounts for IHL’s fundamental principles and cyberspace’s unique attributes than existing efforts to foist legal boundaries upon State cyber-operations by analogy. Moreover, adopting the Duty to Hack could constitute a necessary first step to resolving the larger theoretical and functional challenges currently associated with law’s boundaries in cyberspace.
Stopczynski, Arkadiusz and Greenwood, Dazza and Hansen, Lars Kai and Pentland, Alex, Privacy for Personal Neuroinformatics (April 21, 2014) (SSRN)
Human brain activity collected in the form of Electroencephalography (EEG), even with a low number of sensors, is an extremely rich signal raising legal and policy issues. Traces collected from multiple channels and with high sampling rates capture many important aspects of participants' brain activity and can be used as a unique personal identifier. The motivation for sharing EEG signals is significant, as a means to understand the relation between brain activity and well-being, or for communication with medical services. As the equipment for such data collection becomes more available and widely used, the opportunities for using the data are growing; at the same time, however, inherent privacy risks are mounting. The same raw EEG signal can be used, for example, to diagnose mental diseases, find traces of epilepsy, and decode personality traits. The current practice of the informed consent of the participants for the use of the data either prevents reuse of the raw signal or does not truly respect participants' right to privacy by reusing the same raw data for purposes much different than originally consented to. Here we propose an integration of a personal neuroinformatics system, Smartphone Brain Scanner, with a general privacy framework, openPDS. We show how raw high-dimensionality data can be collected on a mobile device, uploaded to a server, and subsequently operated on and accessed by applications or researchers, without disclosing the raw signal. Those extracted features of the raw signal, called answers, are of significantly lower dimensionality, and provide the full utility of the data in a given context, without the risk of disclosing the sensitive raw signal. Such an architecture significantly mitigates a very serious privacy risk related to raw EEG recordings floating around and being used and reused for various purposes.
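The core technical move the abstract describes is dimensionality reduction at the trust boundary: the raw multi-channel signal stays in the personal data store, and applications only ever receive low-dimensional "answers." A toy sketch of that idea follows; the band-power feature choice, the sampling rate, and every name in it are mine, not the paper's.

import numpy as np

FS = 128  # sampling rate in Hz (hypothetical headset)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_power_answers(raw_eeg):
    """raw_eeg: (n_channels, n_samples) array. Returns mean spectral power
    per band, averaged over channels: a handful of floats stands in for
    the full recording, which never leaves the data store."""
    freqs = np.fft.rfftfreq(raw_eeg.shape[1], d=1.0 / FS)
    power = np.abs(np.fft.rfft(raw_eeg, axis=1)) ** 2
    return {name: float(power[:, (freqs >= lo) & (freqs < hi)].mean())
            for name, (lo, hi) in BANDS.items()}

# Inside the store: compute once, disclose only the low-dimensional answers.
raw = np.random.randn(14, 10 * FS)   # 14 channels, 10 seconds of signal
print(band_power_answers(raw))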
Reeves, Shane R. and Johnson, William J., Autonomous Weapons: Are You Sure These are Killer Robots? Can We Talk About It? (April 30, 2014). The Army Lawyer 1 (April 2014) (SSRN)
The rise of autonomous weapons is creating understandable concern for the international community, as it is impossible to predict exactly what will happen with the technology. This uncertainty has led some to advocate for a preemptive ban on the technology. Yet the emergence of a new means of warfare is not a unique phenomenon and is assumed within the Law of Armed Conflict. Past attempts at prohibiting the use of emerging technologies as weapons — such as aerial balloons in Declaration IV of the 1899 Hague Convention — have failed, as a prohibitive regime denies the realities of warfare. Further, those exploring the idea of autonomous weapons are sensitive not only to their legal obligations, but also to the various ethical and moral questions surrounding the technology. Rather than attempting to preemptively ban autonomous weapons before understanding the technology's potential, efforts should be made to pool the collective intellectual resources of scholars and practitioners to develop a road forward. Perhaps this would be the first step to a more comprehensive and assertive approach to addressing the other pressing issues of modern warfare.
Timothy C. MacDonnell, Justice Scalia’s Fourth Amendment: Text, Context, Clarity, And Occasional Faint-Hearted Originalism (SelectedWorks) (2014)
Since joining the United States Supreme Court in 1986, Justice Scalia has been one of the most prominent voices on the Fourth Amendment, having written twenty majority opinions, twelve concurrences and eight dissents on the topic. Justice Scalia’s Fourth Amendment opinions have had a significant effect on the Court’s jurisprudence relative to the Fourth Amendment. Under his pen, the Court has altered its test for determining when the Fourth Amendment should apply; provided a vision for how technology’s encroachment on privacy should be addressed; and articulated the standard for determining whether government officials are entitled to qualified immunity in civil suits involving alleged Fourth Amendment violations. In most of Justice Scalia’s opinions, he has championed the originalist/textualist theory of constitutional interpretation. Based on that theory, he has advocated that the text and context of the Fourth Amendment should govern how the Court interprets most questions of search and seizure law. His Fourth Amendment opinions have also included an emphasis on clear, bright-line rules that can be applied broadly to Fourth Amendment questions. However, there are Fourth Amendment opinions in which Justice Scalia has strayed from these commitments; particularly in the areas of the special needs doctrine and qualified immunity. The article asserts that Justice Scalia’s non-originalist approach in these spheres threatens the cohesiveness of his Fourth Amendment jurisprudence, and could, if not corrected, unbalance the Fourth Amendment in favor of law enforcement interests.

Tuesday, February 4, 2014

Massive round-up of new law articles, covering privacy, Fourth Amendment, GPS, cell site, cybercrime, big data, revenge porn, drones, and more

Shaun B. Spencer, The Surveillance Society and the Third-Party Privacy Problem, 65 S.C. L. Rev. 373 (2014). Abstract:
This Article examines a question that has become increasingly important in the emerging surveillance society: Should the law treat information as private even though others know about it? This is the third-party privacy problem. Part II explores two competing conceptions of privacy — the binary and contextual conceptions. Part III describes two features of the emerging surveillance society that should change the way we address the third-party privacy problem. One feature, “surveillance on demand,” results from exponential increases in data collection and aggregation. The other feature, “uploaded lives,” reflects a revolution in the type and amount of information that we share digitally. Part IV argues that the binary conception cannot protect privacy in the surveillance society because it fails to account for the new realities of surveillance on demand and uploaded lives. Finally, Part V illustrates how courts and legislators can implement the contextual conception to deal with two emerging surveillance society problems — facial recognition technology and geolocation data.

Jane Bambauer, Is Data Speech?, 66 Stan. L. Rev. 57 (2014). Abstract:
Privacy laws rely on the unexamined assumption that the collection of data is not speech. That assumption is incorrect. Privacy scholars, recognizing an imminent clash between this long-held assumption and First Amendment protections of information, argue that data is different from the sort of speech the Constitution intended to protect. But they fail to articulate a meaningful distinction between data and other more traditional forms of expression. Meanwhile, First Amendment scholars have not paid sufficient attention to new technologies that automatically capture data. These technologies reopen challenging questions about what “speech” is. 
This Article makes two overdue contributions to the First Amendment literature. First, it argues that when the scope of First Amendment coverage is ambiguous, courts should analyze the government’s motive for regulating. Second, it highlights and strengthens the strands of First Amendment theory that protect the right to create knowledge. Whenever the state regulates in order to interfere with the creation of knowledge, that regulation should draw First Amendment scrutiny. 
In combination, these claims show clearly why data must receive First Amendment protection. When the collection or distribution of data troubles lawmakers, it does so because data has the potential to inform and to inspire new opinions. Data privacy laws regulate minds, not technology. Thus, for all practical purposes, and in every context relevant to privacy debates, data is speech.
Elizabeth E. Joh, Privacy Protests: Surveillance Evasion and Fourth Amendment Suspicion, 55 Ariz. L. Rev. 997 (2013). Abstract:
The police tend to think that those who evade surveillance are criminals. Yet the evasion may only be a protest against the surveillance itself. Faced with the growing surveillance capacities of the government, some people object. They buy “burners” (prepaid phones) or “freedom phones” from Asia that have had all tracking devices removed, or they hide their smartphones in ad hoc Faraday cages that block their signals. They use Tor to surf the internet. They identify tracking devices with GPS detectors. They avoid credit cards and choose cash, prepaid debit cards, or bitcoins. They burn their garbage. At the extreme end, some “live off the grid” and cut off all contact with the modern world. 
These are all examples of what I call privacy protests: actions individuals take to block or to thwart government surveillance for reasons unrelated to criminal wrongdoing. Those engaged in privacy protests do so primarily because they object to the presence of perceived or potential government surveillance in their lives. How do we tell the difference between privacy protests and criminal evasions, and why does it matter? Surprisingly scant attention has been given to these questions, in part because Fourth Amendment law makes little distinction between ordinary criminal evasions and privacy protests. This Article discusses the importance of these ordinary acts of resistance, their place in constitutional criminal procedure, and their potential social value in the struggle over the meaning of privacy.
Conor M. Reardon, Cell Phones, Police Recording, and the Intersection of the First and Fourth Amendments, 63 Duke Law Journal 735-779 (2013). Abstract:
In a recent spate of highly publicized incidents, citizens have used cell phones equipped with video cameras to record violent arrests. Oftentimes they post their recordings on the Internet for public examination. As the courts have recognized, this behavior lies close to the heart of the First Amendment. 
But the Constitution imperfectly protects this new form of government monitoring. Fourth Amendment doctrine generally permits the warrantless seizure of cell phones used to record violent arrests, on the theory that the recording contains evidence of a crime. The Fourth Amendment inquiry does not evaluate a seizing officer’s state of mind, permitting an official to seize a video for the very purpose of suppressing its contents. Moreover, Supreme Court precedent is typically read to ignore First Amendment interests implicated by searches and seizures. 
This result is perverse. Courts evaluating these seizures should stop to recall the Fourth Amendment’s origins as a procedural safeguard for expressive interests. They should remember, too, the Supreme Court’s jurisprudence surrounding seizures of obscene materials—an area in which the Court carefully shaped Fourth Amendment doctrine to protect First Amendment values. Otherwise reasonable seizures can become unreasonable when they threaten free expression, and seizures of cell phones used to record violent arrests are of that stripe. Courts should therefore disallow this breed of seizure, trusting the political branches to craft a substitute procedure that will protect law-enforcement interests without doing violence to First Amendment freedoms.
Elizabeth Friedler, Protecting the Innocent—the Need to Adapt Federal Asset Forfeiture Laws to Protect the Interests of Third Parties in Digital Asset Seizures, Cardozo Arts & Entertainment Law Journal, Volume 32, Issue 1 (2013).

Jana Sutton, Of Information, Trust, and Ice Cream: A Recipe for a Different Perspective on the Privacy of Health Information, 55 Ariz. L. Rev. 1171 (2014). Abstract:
The concept of privacy is inescapable in modern society. As technology develops rapidly and online connections become an integral part of our daily routines, the lines between what may or may not be acceptable continue to blur. Individual autonomy is important. We cannot, however, allow it to suffocate the advancement of technology in such vital areas as public health. Although this Note cannot lay out detailed instructions to balance the desire for autonomy and the benefits of free information, it attempts to provide some perspective on whether we are anywhere close to striking the right balance. When the benefits of health information technology are so glaring, and yet its progress has been so stifled, perhaps we have placed far too much value—at least in the health care context—on individual privacy.
Kevin S. Bankston & Ashkan Soltani, Tiny Constables and the Cost of Surveillance: Making Cents Out of United States v. Jones, 123 YALE L.J. ONLINE 335 (2014). Abstract:
In United States v. Jones, five Supreme Court Justices wrote that government surveillance of one’s public movements for twenty-eight days using a GPS device violated a reasonable expectation of privacy and constituted a Fourth Amendment search. Unfortunately, they didn’t provide a clear and administrable rule that could be applied in other government surveillance cases. In this Essay, Kevin Bankston and Ashkan Soltani draw together threads from the Jones concurrences and existing legal scholarship and combine them with data about the costs of different location tracking techniques to articulate a cost-based conception of the expectation of privacy that both supports and is supported by the concurring opinions in Jones.
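The essay's cost-based approach lends itself to back-of-the-envelope arithmetic: compare what a month of tracking costs by traditional tailing versus by newer techniques, and treat a collapse of several orders of magnitude as a signal that Fourth Amendment process should kick in. The sketch below illustrates only that arithmetic; the dollar figures are placeholders I made up, not Bankston and Soltani's actual estimates.

# Hypothetical hourly costs per tracking method (placeholder figures).
hourly_cost = {
    "covert car pursuit": 250.00,  # officers, vehicles, overtime
    "cell tower records":   5.00,  # carrier fees amortized per hour
    "GPS device":           0.40,  # hardware plus install/retrieval
}

DAYS = 28  # the tracking period at issue in Jones
for method, cost in hourly_cost.items():
    total = cost * 24 * DAYS
    print(f"{method}: ${total:,.2f} for {DAYS} days of continuous tracking")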
Schmitt, Michael N. and Vihul, Liis, The International Law of Attribution During Proxy 'Wars' in Cyberspace (January 30, 2014). 1 Fletcher Security Review (2014 Forthcoming). Abstract:
The article examines the use of non-State actors by States to conduct cyber operations against other States. In doing so, it examines attribution of a non-State actor's cyber operations to a State pursuant to the law of State responsibility, attribution of a non-State actor's cyber armed attack to a State for the purposes of a self-defense analysis, and attribution of cyber military operations to a State in the context of determining whether an international armed conflict has been initiated. These three very different legal inquiries are often confused with each other. The article seeks to deconstruct the issue of attribution into its various normative components.
Kate Crawford & Jason Schultz, Big Data and Due Process: Toward a Framework to Redress Predictive Privacy Harms, 55 B.C. L. Rev. 93 (2014). Abstract:
The rise of “Big Data” analytics in the private sector poses new challenges for privacy advocates. Through its reliance on existing data and predictive analysis to create detailed individual profiles, Big Data has exploded the scope of personally identifiable information (“PII”). It has also effectively marginalized regulatory schema by evading current privacy protections with its novel methodology. Furthermore, poor execution of Big Data methodology may create additional harms by rendering inaccurate profiles that nonetheless impact an individual’s life and livelihood. To respond to Big Data’s evolving practices, this Article examines several existing privacy regimes and explains why these approaches inadequately address current Big Data challenges. This Article then proposes a new approach to mitigating predictive privacy harms—that of a right to procedural data due process. Although current privacy regimes offer limited nominal due process-like mechanisms, a more rigorous framework is needed to address their shortcomings. By examining due process’s role in the Anglo-American legal system and building on previous scholarship about due process for public administrative computer systems, this Article argues that individuals affected by Big Data should have similar rights to those in the legal system with respect to how their personal data is used in such adjudications. Using these principles, this Article analogizes a system of regulation that would provide such rights against private Big Data actors.
Larkin, Paul J., 'Revenge Porn,' State Law, and Free Speech (January 14, 2014).  Abstract:
For most of our history, only celebrities — presidents, movie stars, professional athletes, and the like — were at risk of having their everyday exploits and activities photographed and shown to the world. But that day is gone. Today, we all face the risk of being made into a celebrity due to the ubiquity of camera-equipped cell phones and the ease of uploading photographs or videos onto the Internet. But a particularly troubling aspect of this phenomenon goes by the name of "revenge porn" — that is, the Internet posting of photographs of naked former wives and girlfriends, sometimes in intimate positions or activities. Revenge porn is an example of malicious conduct that injures the welfare of someone who mistakenly trusted an intimate partner. Tort law traditionally has allowed parties to recover damages for such violations of privacy, and criminal law also can prohibit such conduct, but there are several First Amendment defenses that the responsible parties can assert to fend off liability. This article argues that allowing a victim of revenge porn to recover damages for publication that breaches an implicit promise of confidentiality is faithful to tort and criminal law principles and will not punish or chill the legitimate expression of free speech.
Jonathan Olivito, Beyond the Fourth Amendment: Limiting Drone Surveillance Through the Constitutional Right to Informational Privacy, 74 Ohio St. L.J. 669 (2013). 

The entirety of Volume 74, Issue 6 in the Ohio State Law Journal; Symposium: The Second Wave of Global Privacy Protection (Titles Below)
Peter Swire, The Second Wave of Global Privacy Protection: Symposium Introduction, 74 Ohio St. L.J. 841 (2013). 
Ann Bartow, Privacy Laws and Privacy Levers: Online Surveillance Versus Economic Development in the People’s Republic of China, 74 Ohio St. L.J. 853 (2013). 
Andrew Clearwater & J. Trevor Hughes, In the Beginning . . . An Early History of the Privacy Profession, 74 Ohio St. L.J. 897 (2013). 
Claudia Diaz, Omer Tene & Seda Gürses, Hero or Villain: The Data Controller in Privacy Law and Technologies, 74 Ohio St. L.J. 923 (2013). 
A. Michael Froomkin, “PETs Must Be on a Leash”: How U.S. Law (and Industry Practice) Often Undermines and Even Forbids Valuable Privacy Enhancing Technology, 74 Ohio St. L.J. 965 (2013). 
Woodrow Hartzog, Social Data, 74 Ohio St. L.J. 995 (2013). 
Dennis D. Hirsch, In Search of the Holy Grail: Achieving Global Privacy Rules Through Sector-Based Codes of Conduct, 74 Ohio St. L.J. 1029 (2013). 
Gus Hosein & Caroline Wilson Palow, Modern Safeguards for Modern Surveillance: An Analysis of Innovations in Communications Surveillance Techniques, 74 Ohio St. L.J. 1071 (2013). 
Anil Kalhan, Immigration Policing and Federalism Through the Lens of Technology, Surveillance, and Privacy, 74 Ohio St. L.J. 1105 (2013). 
Bartosz M. Marcinkowski, Privacy Paradox(es): In Search of a Transatlantic Data Protection Standard, 74 Ohio St. L.J. 1167 (2013). 
Thomas Margoni & Mark Perry, Deep Pockets, Packets, and Harbors, 74 Ohio St. L.J. 1195 (2013). 
Omer Tene, Privacy Law’s Midlife Crisis: A Critical Assessment of the Second Wave of Global Privacy Laws, 74 Ohio St. L.J. 1217 (2013). 
Yofi Tirosh & Michael Birnhack, Naked in Front of the Machine: Does Airport Scanning Violate Privacy? 74 Ohio St. L.J. 1263 (2013). 
Yang Wang, Pedro Giovanni Leon, Xiaoxuan Chen, Saranga Komanduri, Gregory Norcie, Kevin Scott, Alessandro Acquisti, Lorrie Faith Cranor & Norman Sadeh, From Facebook Regrets to Facebook Privacy Nudges, 74 Ohio St. L.J. 1307 (2013). 
Tal Z. Zarsky & Norberto Nuno Gomes de Andrade, Regulating Electronic Identity Intermediaries: The “Soft eID” Conundrum, 74 Ohio St. L.J. 1335 (2013).
The entirety of Volume 14, Issue 1 of the Journal of High Technology Law (2014) (Titles Below).
After Jones, The Deluge: The Fourth Amendment's Treatment Of Information, Big Data And The Cloud, Lon A. Berk, 14 J. High Tech. L. 1 (2014). 
The Legislative Response To Employers' Requests For Password Disclosure, Jordan M. Blanke, 14 J. High Tech. L. 42 (2014). 
A Shot In The Dark: An Analysis Of The SEC's Response To The Rise Of Dark Pools, Edwin Batista, 14 J. High Tech. L. 83 (2014). 
Privacy Protections Left Wanting: Looking At Doctrine And Safeguards On Law Enforcement's Use Of GPS Tracking And Cell Phone Records With A Focus On Massachusetts, Lloyd Chebaclo, 14 J. High Tech. L. 120 (2014).

Friday, January 10, 2014

CFAA amendments, new criminal statute proposed in Senator Leahy’s bill


On Wednesday, Senator Patrick Leahy (D-Vt.) introduced the Personal Data Privacy and Security Act of 2014. Senator Leahy’s bill, first introduced back in 2005, intends to "better protect[] Americans from the growing threats of data breaches and identity theft,” according to a press release issued by the Senator.

Included within the bill are amendments to the Computer Fraud and Abuse Act (18 U.S.C. § 1030). Senator Leahy stated that the bill “includes the Obama administration’s proposal [full text] to update the Computer Fraud and Abuse Act, so that attempted computer hacking and conspiracy to commit computer hacking offenses are subject to the same criminal penalties, as the underlying offenses.” The bulk of Senator Leahy’s amendments to the CFAA occur in Title I: Enhancing Punishment for Identity Theft and Other Violations of Data Privacy and Security (Sections 101 through 110). These changes would include adding the CFAA under the Racketeer Influenced and Corrupt Organizations (RICO) Act (Section 101),  maximizing penalties under the CFAA (Section 103), and clarifying that both "conspiracy" and "attempt" to commit a computer hacking offense are subject to the same penalties as completed, substantive offenses (Section 105), just to name a few.

Also added within the bill would be a new criminal statute: 18 U.S.C. § 1041, Concealment of security breaches involving sensitive personally identifiable information. According to Senator Leahy, the new statute would provide "tough criminal penalties for anyone who would intentionally and willfully conceal the fact that a data breach has occurred when the breach causes economic damage to consumers.” According to the section-by-section summary, the new statute
makes it a crime for a person who knows of a security breach which requires notice to individuals under Title II of this Act, and who is under obligation to provide such notice, to intentionally and willfully conceal the fact of, or information related to, that security breach.
So in addition to adding a strict security breach notification law (Sections 211-221), Senator Leahy's bill would create criminal penalties for intentionally and willfully concealing the security breach or "information related to" that breach.

Overall, the bill contains a number of amendments that would be of interest to anyone in the information privacy or security field. Senator Leahy has made a section-by-section outline of the bill available, as well as the bill's full text.

Friday, November 1, 2013

Exiting CTO who copied source code and company files wins dismissal of CFAA claim; Thoughts on the CFAA post-Nosal

Viral Tolat, ex-CTO of Integral Development Company, is accused by his former company of copying gigabytes of source code and confidential files on his way out the door to a position with another company. He copied the source code to multiple places and uploaded some of the data to his personal Google Docs account. In Integral's First Amended Complaint, it alleged, inter alia, that Tolat violated the CFAA (and the analogous California statute) by misappropriating Integral data in derogation of the company's confidentiality policy and Tolat's employment agreement; Integral also alleged that Tolat "exceeded authorized access" because he had no "legitimate reason" to copy the source code (Tolat knew next to nothing about programming).

A federal judge in the N.D. Cal. did not buy Integral's allegations of "hacking" and granted Tolat's motion to dismiss those claims; the court's holding was based on United States v. Nosal's narrow reading of the CFAA. The court reiterated the premise in Nosal that the CFAA was meant to criminalize unauthorized access to information, not the misappropriation of information obtained through authorized access. 

The order granting Tolat's motion to dismiss the hacking claims is here: Integral Dev. Co. v. Tolat, No: 3:12-CV-06575-JSW (N.D. Cal. Oct. 25, 2013).

In holding, as a matter of law, that the CFAA did not apply to Tolat's conduct, the court stated:
The Ninth Circuit has rejected the contention that the terms "exceeds authorized access" within the meaning of the CFAA applies where someone has access to a computer's information but is limited in permissible use of that information. The plain language of the CFAA "target[s] the unauthorized procurement or alteration of information, not its misuse or misappropriation."
Integral does not and cannot allege that Tolat gained improper or unauthorized access to Integral's computers for illegitimate purpose. Rather, Integral alleges that Tolat "copied, downloaded and removed numerous Integral source code files . . . when he clearly had no legitimate reason to do so." Integral does not allege that Tolat used improper methods to gain access to the source code, but rather concedes, as it must, that at the time of the alleged acquisition of the materials, Tolat was working for Integral and had access to virtually all of Integral's trade secret information and confidential and proprietary intellectual property. (citations in entire quote omitted)  
Integral argued strenuously, in its brief opposing Tolat's Motion to Dismiss, that the company had a written confidentiality policy that Tolat was aware of and clearly violated when he uploaded company files and source code (trade secrets) to "the cloud" (i.e., his personal Google Docs). And thus, the argument continued, the existence of the policy and the knowing violation by Tolat were sufficient to create civil liability under the CFAA. The court's opinion, which ultimately held the CFAA inapplicable, summarily rejected Integral's argument by simply ignoring it altogether. I interpret the court's failure to even touch the merits of this argument as an implicit rejection of the "wide" interpretation of the CFAA Integral attempted to forward.

Wide interpretations of the CFAA have, in the most general sense, attempted to define liability (civil or criminal) for "hacking" by tying the statute (or defining the scope of it, at least in part) to the policies or terms of service drafted by private parties. The fundamental flaw in the wide approach is the unmooring of the CFAA from its original legislative purpose - real hacking; a wide interpretation also injects fluctuation into the law (or, perhaps, constitutes a "slippery slope"), allowing a serious federal crime to evolve whenever corporate policies or terms of service change (often at the whim of in-house counsel or in response to information technology changes).

Conversely, narrow interpretations of the CFAA reject (correctly, I would argue) any attempt to expand the scope of the CFAA beyond the purpose for which it was enacted. This is the interpretation of the CFAA I have consistently argued for and is the one adopted by the 9th Circuit in Nosal (an opinion that, to be clear, was binding on the court here).

The CFAA has become a flawed statute through no fault of its own. It is merely an antiquated remnant of a different era, poorly suited to an area of law (and technology) that is constantly evolving at an incredible pace. The CFAA is, by analogy, the abacus in a room full of iPhone 5Ss. Attempting to fix the CFAA through ever wider interpretations of its scope is nothing more than the judiciary answering the CFAA's anachronism with acquiescence. This acquiescence is not innocuous, however. It carries with it a dangerous and misguided solution: granting legislative fiat over the CFAA's scope to private entities instead of Congress.

The rest of the documents for the case:

First Amended Complaint

Defendant's Motion to Dismiss, inter alia, the hacking claims

Plaintiff's Opposition to the MTD

Defendant's Reply to the Plaintiff's Opposition


Monday, October 28, 2013

New CFAA Case: Complaint alleges "crippling, simultaneous, mass departure," along with destruction of documents and data

On 10/8/13, an interesting new complaint was filed alleging, inter alia, violations of the CFAA. An order in the case was recently issued, summarizing the dispute as follows: "the very core of Cunningham Lindsey’s claims and request for a preliminary injunction is that Vericlaim intentionally and unlawfully focused its efforts at recruiting and encouraging a mass exodus of Cunningham Lindsey employees during the late summer of 2013."

The Complaint outlines more fully Cunningham Lindsey's claims against Bonanni et al. (including Vericlaim), describing the mass exodus and the destruction of files (where the CFAA claim arises). Below are some excerpts from the complaint:

The case is Cunningham Lindsey v. Bonanni, No. 1:13-CV-2528 (M.D. Pa. Oct. 22, 2013); the previous link is to the Oct. 22nd order.

Complaint - Filed 10/8/13

Pl.'s Motion for a Temporary Restraining Order - Filed 10/8/13

Friday, October 18, 2013

Recent Journal of Criminal Law & Criminology issue focuses on cybercrime

Volume 103, Issue 3 of the Journal of Criminal Law & Criminology, a student-run publication at Northwestern University School of Law, features a variety of articles tackling the complexities of cybercrime. The issue is the culmination of a Symposium held at Northwestern University on February 1, 2013. As the Symposium Editor, Lily Katz, states in her Foreword, the Symposium was intended to address the "important conceptual, doctrinal, and empirical legal questions" raised by cybercrime.

The issue features a great line-up of authors addressing a variety of topics. For instance, Professor David Thaw, a visiting Assistant Professor at the University of Connecticut School of Law, "examines the tension" between the two differing viewpoints on "whether private contracts, such as website terms of use or organizational acceptable use policies should be able to define the limits of authorization and access for purposes of criminal sanctions under the CFAA." The piece authored by Professor Derek Bambauer, an Associate Professor at the University of Arizona James E. Rogers College of Law, takes a somewhat broad look at the interests of privacy and security. Professor Bambauer argues that "security and privacy can, and should, be treated as distinct concerns" and that "separating privacy from security has important practical consequences."

The recent issue of the Journal of Criminal Law & Criminology provides some great articles worth checking out. Here are the links to the articles:

Lily Katz, Foreword, 103 J. Crim. L. & Criminology 663 (2013) 

Derek E. Bambauer, Privacy Versus Security, 103 J. Crim. L. & Criminology 667 (2013)

Thomas P. Crocker, Order, Technology, and the Constitutional Meanings of Criminal Procedure, 103 J. Crim. L. & Criminology 685 (2013)

David Gray, Danielle Keats Citron, & Liz Clark Rinehart, Fighting Cybercrime After United States v. Jones, 103 J. Crim. L. & Criminology 745 (2013)

David Thaw, Criminalizing Hacking, not Dating: Reconstructing the CFAA Intent Requirement, 103 J. Crim. L. & Criminology 907 (2013)

Jessica E. Notebaert, Comment, The Search For a Constitutional Justification For The Noncommercial Prong of 18 U.S.C. § 2423(C), 103 J. Crim. L. & Criminology 949 (2013)

Wednesday, October 16, 2013

CFAA claim dismissed in Givaudan Fragrances Corp. v. Krivda

On September 26, 2013, the court in Givaudan Fragrances Corp. v. Krivda issued an order dismissing Givaudan's claim that one of its former employees, James Krivda, violated the Computer Fraud and Abuse Act (18 U.S.C. § 1030). This dismissal, granted by Judge Peter Sheridan of the District of New Jersey, provides yet another example of a court distinguishing between "unauthorized use of information" and "unauthorized access to information" when interpreting the CFAA.

According to the court order (and this 2009 opinion, which provides a much more detailed factual summary), Krivda was a perfumer at Givaudan Fragrances, “a manufacturer of fragrances for consumer products and the fine fragrance industry.” In April of 2008, Krivda resigned from Givaudan to take a position with a Givaudan competitor, MANE International. Givaudan alleged that, prior to his departure, Krivda printed over 500 confidential fragrance formulas from Givaudan’s management database. Displeased with Krivda’s explanation as to why he printed the formulas just days prior to his departure from the company, Givaudan filed a complaint against Krivda.

Specifically, Givaudan’s complaint alleged, among other claims, that Krivda violated § 1030(a)(4) of the CFAA, which holds liable a person who
knowingly and with intent to defraud, accesses a protected computer without authorization, or exceeds authorized access, and by means of such conduct furthers the intended fraud and obtains anything of value, unless the object of the fraud and the thing obtained consists only of the use of the computer and the value of such use is not more than $5,000 in any 1-year period
Krivda moved for partial summary judgment to dismiss the CFAA claim. Krivda argued that, as a perfumer for Givaudan, he was provided access to the formula management database and therefore did not "access a protected computer without authorization" or "exceed authorized access." Givaudan argued that, while Krivda had access to the database, he was not authorized to "review and print" formulas maintained on their database.

In granting Krivda’s motion, the court stated that “the Computer Fraud and Abuse Act § 1030(a)(4), prohibits the unauthorized access to information rather than unauthorized use of such information.” “The inquiry,” the court stated, “depends not on the employee's motivation for accessing the information, but rather whether the access to that information was authorized.” The court concluded that Krivda’s authorization to access the database ended its CFAA inquiry.
Here, Krivda was authorized to access that information, namely, Givaudan's computerized formula management database system, a fact Givaudan does not dispute. . . . [T]he term "exceeds authorized access," refers to one who had access to part of a system and then accessed other parts of the computer system to which he had no permissible access. Here Krivda had permissible access to the formula management database system. Givaudan's proposition that Krivda could not "review and print" does not fall within the definition of exceeds authorized access. In applying the summary judgment standard and utilizing Givaudan's version of the facts, it is clear that Krivda had access to the computerized formula management system, and Krivda entered areas to which he had access. Summary judgment is granted . . . .
The “use” vs. “access” distinction has been a common point of discussion among courts faced with interpreting the CFAA, particularly in the employment context. A few months back, I wrote a post discussing a Southern District of New York case, JBCHoldings v. Pakter, in which the court determined that the plain meaning of “without authorization” and “exceeds authorized access” “plainly speaks to permitted access, not permitted use.” However, I also discussed how some circuits have adopted a broader interpretation of the CFAA, under which an employee's misuse of information would satisfy the statute's terminology. In the criminal context, I touched on this issue a bit when discussing United States v. Vargas, where an NYPD officer was charged under the CFAA for, among other charges, conducting improper searches on the precinct’s NCIC system to gain information on fellow NYPD officers.

For a more detailed look at the issue, I would suggest this recent Comment by J.D. candidate Alden Anderson, The Computer Fraud and Abuse Act: Hacking Into The Authorization Debate, published in this summer's issue of Jurimetrics: The Journal of Law, Science, and Technology.

Tuesday, October 15, 2013

District court holds that parody social media accounts do not violate the CFAA

In Matot v. CH, No. 6:13-cv-153 (D. Or. 2013), the district court held that the creation of parody social media accounts does not violate the Computer Fraud and Abuse Act (CFAA).

Last year, the Ninth Circuit adopted a reading of the CFAA that does not allow for the law to be applied to the violation of a website's terms of service. United States v. Nosal, 676 F.3d 854 (9th Cir. 2012). A broad reading would allow such violations (for example, falsifying your age on a dating website) to be punishable under the CFAA through criminal and civil action. Some courts have adopted the broad reading (United States v. Rodriguez, 628 F.3d 1258 (11th Cir. 2010); United States v. John, 597 F.3d 263 (5th Cir. 2010); Int’l Airport Ctrs., LLC v. Citrin, 440 F.3d 418 (7th Cir. 2006)).

In Matot, the plaintiff argued that the "defendants created false social media profiles in his name and likeness," violating the "without authorization" provision of the CFAA. The district court, however, found the argument to go against the Ninth Circuit's interpretation of the CFAA and the rule of lenity.

Sunday, October 6, 2013

Federal Ct. in web scraping case: accusations of "hacking" and "theft" could be defamatory, but privileged under the facts


Can accusing someone of harvesting data from a publicly accessible webpage, by referring to that conduct as "hacking" and/or "theft," be defamatory? Under the facts noted below, a federal court just said "yes," but ultimately found the statements privileged. There is an interesting discussion in the opinion about "protecting" website data with an exclusion in robots.txt (although, as an aside, robots.txt doesn't actually protect much of anything), and whether that choice to exclude makes any legal difference. The court also discusses the unsettled state of CFAA law at the time the statements were made; to the court, the muddled precedent on whether scraping public web data violated the CFAA was germane to determining whether an accusation of "hacking" was accurate (i.e., whether a legal cause of action under the CFAA could be sustained).

As an initial matter, here is Merriam-Webster Online's definition of "hack":
intransitive verb ...
4 a : to write computer programs for enjoyment
  b : to gain access to a computer illegally
noun (1) ...
6 : a usually creative solution to a computer hardware or programming problem or limitation
From The American Heritage® Dictionary of the English Language, Fourth Edition, copyright ©2000 by Houghton Mifflin Company (updated 2009):
hack¹ (hăk)
v. hacked, hack·ing, hacks
v. tr. ...
3. a. Informal To alter (a computer program): hacked her text editor to read HTML.
   b. To gain access to (a computer file or network) illegally or without authorization: hacked the firm's personnel database.
v. intr. ...
a. To write or refine computer programs skillfully.
b. To use one's skill in computer programming to gain illegal or unauthorized access to a file or network: hacked into the company's intranet.
From the Collins English Dictionary – Complete and Unabridged, © HarperCollins Publishers 1991, 1994, 1998, 2000, 2003:
hack¹
vb ...
7. (Electronics & Computer Science / Computer Science) to manipulate a computer program skilfully, esp. to gain unauthorized access to another computer system
And from the Oxford English Dictionary, copyright © 2013 Oxford University Press:
"hacking"
1. ... d. The use of a computer for the satisfaction it gives; the activity of a hacker (hacker n. 3). colloq. (orig. U.S.).
1976   J. Weizenbaum, Computer Power & Human Reason iv. 118: The compulsive programmer spends all the time he can working on one of his big projects. ‘Working’ is not the word he uses; he calls what he does ‘hacking’.
1984   Times 7 Aug. 16/2: Hacking, as the practice of gaining illegal or unauthorized access to other people's computers is called.
1984   Sunday Times 9 Dec. 15/2: Hacking is totally intellectual—nothing goes boom and there are no sparks. It's your mind against the computer.
In Tamburo v. Dworkin, --- F. Supp. 2d --- (N.D. Ill. Sept. 26, 2013), Judge Joan B. Gottschall granted the motion for summary judgment of Kristen Henry (another named defendant); the causes of action against Henry were (1) tortious interference with a contractual relationship, (2) tortious interference with prospective economic advantage, (3) defamation per se, and (4) defamation per quod.

The court stated the facts as follows:
The essential facts in this 2004 case are undisputed. Defendant Kristen Henry, a dog breeder and computer programmer, spent almost five years creating an extensive database of dog pedigrees, which she made freely available for use by fellow breeders through her web site. Plaintiffs John Tamburo and Versity Corporation (“Versity”) used an automated web browser to harvest the data from Henry’s website. They incorporated it into software which they attempted to sell to dog breeders for a profit. Henry was outraged. When the plaintiffs spurned her requests to cease using her data, she reached out to the dog breeding community, through emails and online messages, for assistance in responding to the plaintiffs’ misappropriation of her work. This lawsuit arose from her statements.
Henry (defendant) accused Tamburo of "hacking" in a Freerepublic.com article, as well as in an email; Henry also made various statements to a dog enthusiast message board using the words "theft" and "steal." One of the statements read: "[Tamburo] has written an agent robot to go to these individual sites and steal certain files...that were not offered to them except through a query user interface for page by page query of a single dog’s pedigree at a time."

Addressing the defamation allegations, the court analyzed whether the statements were non-actionable because they were either substantially true or protected by privilege. The court first discussed the defendant's use (or lack thereof) of robots.txt, which the court refers to as "the Robot Exclusion Standard." The court stated:
The parties dispute whether Tamburo and Versity evaded security measures to access Bonchien.com. Henry contends that a user could access the data on her site only through a query based search, by entering an individual dog name and the generations of ancestry to be displayed. Tamburo, however, states that the data could also be accessed through the site’s URL. He states in his affidavit that Henry admitted during her deposition that the URL used by the Data Mining Robot to access the web site was plainly visible, and that her allegations that the plaintiffs accessed data from non-public areas of the web site were false. 
Henry states in an affidavit that she did not give Tamburo or Versity express permission to access and gather the data on her website by any automated means, such as the Data Mining Robot. She contends that she placed a “robots.txt” header on the site to keep robots from indexing the site. The robots.txt protocol, or Robot Exclusion Standard, is a convention “to instruct cooperating web crawlers not to access all or part of a website that is publicly viewable. If a website owner uses the robots.txt file to give instructions about its site to web crawlers, and a crawler honors the instruction, then the crawler should not visit any pages on the website. The protocol can also be used to instruct web crawlers to avoid just a portion of the website that is segregated into a separate directory.”
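To make the protocol concrete, here is a minimal sketch (in Python, using only the standard library) of how a cooperating crawler is supposed to consult robots.txt before harvesting pages. The domain, paths, and user-agent string below are hypothetical stand-ins, not details from the case, and note the caveat from above: compliance is entirely voluntary, which is why robots.txt is an instruction to polite robots rather than a security measure.

    # A robots.txt file like the following tells compliant crawlers to stay
    # out of the /pedigrees/ directory (the protocol can exclude all or part
    # of a site):
    #
    #     User-agent: *
    #     Disallow: /pedigrees/
    #
    # Python's standard library can fetch and evaluate such a file:
    import urllib.robotparser

    rp = urllib.robotparser.RobotFileParser()
    rp.set_url("http://www.example-pedigrees.com/robots.txt")  # hypothetical site
    rp.read()  # download and parse the site's robots.txt

    page = "http://www.example-pedigrees.com/pedigrees/dog123.html"
    if rp.can_fetch("ExampleHarvestBot", page):
        print("robots.txt permits fetching", page)   # a polite crawler proceeds
    else:
        print("robots.txt disallows fetching", page)  # a polite crawler skips it
        # Nothing enforces this result: a crawler that ignores robots.txt can
        # still request the page, which is why the exclusion "protects" little.

Whether the file was even visible on Henry's site when the Data Mining Robot ran was itself disputed, which matters because the standard only works if the crawler looks for it and chooses to obey.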
As for the allegedly defamatory statements about stealing data, the court found them "substantially true." Here is the court's logic:
Tamburo argues that Henry’s statements that he stole from her are false because he did not commit theft. He did not delete or remove data from Henry’s site (thus depriving her of her property), Henry had made her data freely available, and no robots.txt file was visible on her site at the time the Data Mining Robot copied information on the site. According to Tamburo, because the data was not protected, either legally or by security protections on Henry’s web site, he could not have committed theft by appropriating it. 
Even so, the court concludes that no reasonable jury could find that Henry’s statements were not substantially true. ... It may be true that Tamburo could not be prosecuted or held liable for his actions because the data was publicly available and not protected by adequate security measures. But Tamburo’s argument relies on a narrow legal meaning of “theft.” Under Illinois law, the court must consider whether Henry’s use of the word “theft” is reasonably susceptible to a non-defamatory construction. (citation omitted) It is. To a lay person such as Henry, “theft” can also mean the wrongful act of taking the property of another person without permission. The data Henry had collected could be reasonably understood as her property—she had collected it, and it was her work in compiling it that gave it value. She did not give Tamburo permission to copy it and sell access to it. Although Henry might not be able to successfully sue Tamburo for using her data in this way, the gist of her statements was true: he took the data without her permission.
I can't say I agree with this -- the holding, in essence, means that anyone copying and pasting data from another individual's website is "stealing" that data if pre-approved permission isn't obtained. To me, the choice to post information on the internet, available to anyone in the world, means you assume the risk that your now-"public" information will be used by others. You can't steal what is given away for free. And theft normally involves some deprivation of a property interest; what was the website owner deprived of, other than control of the information? Control that was given up the moment the data was posted on the web.

Ultimately, the court held the statements were covered by privilege because "they related to her interests in protecting the substantial time and effort spent accumulating her data and in making it freely available to the community of Schipperke breeders, to promote the health of the breed. She also had an interest in ensuring that her data was presented in a certain way and in controlling the manner in which it could be accessed. Furthermore, the statements were published to people who likewise had an interest in the way in which the dog pedigree data was made available, and they involved a public interest in how access to information available on the internet is regulated."

Also, relating to privilege, the court discussed the state of CFAA law as it stood at the time the statements were made:
...Henry was a lay person, and the record shows plainly that as of May 4 and 5, 2004, when she made the statements that her data was “stolen,” Henry believed that Tamburo had stolen her data and was attempting to determine whether the law afforded her any protection against that theft. 
Moreover, even had Henry immediately consulted with an attorney, no such actual knowledge that Tamburo’s actions were lawful would have been revealed. Rather, in May 2004, the law governing the automated harvesting of data from web sites was unsettled. For example, a number of courts had held that website owners might have a remedy under the Computer Fraud and Abuse Act (“CFAA”) against defendants who had accessed information on their websites using automated harvesting. (citation omitted) In 2003, the First Circuit reversed a district court that had issued an injunction pursuant to the CFAA against a company using an automated “web scraper” to copy pricing information from a travel website. The district court had relied in part on “the fact that the website was configured to allow ordinary visitors to the site to view only one page at a time.” (citation omitted) The First Circuit disagreed and noted, “It is . . . of some use for future litigation . . . in this circuit to indicate that, with rare exceptions, public website providers ought to say just what non-password protected access they purport to forbid.” ...
The First Circuit’s opinion suggests that it is unlikely Henry could have pursued a CFAA claim, given the state of the law, and Tamburo is correct that a collection of data is normally not subject to copyright protections. See Feist Publ’ns v. Rural Tel. Serv. Co., 499 U.S. 340, 364 (1991) (noting that “copyright rewards originality, not effort”). Even so, further investigation on Henry’s part would not have revealed that Tamburo’s actions were undisputedly legal or illegal. Thus, even if Henry’s lawyer advised her that Tamburo had acted legally and that she did not have a remedy against him, such advice is not dispositive as to whether she abused the qualified privilege in making the statements in question. Henry was entitled to disagree with the lawyer about whether Tamburo had any right to access her database, another lawyer might have held a different opinion, and her statements were made as part of her efforts to seek help in protecting her interests. Thus, the fact that the law has evolved in a way that does not protect Henry’s years of work is not evidence that she made the statements about Tamburo’s theft with “a high degree of awareness of the[ir] probable falsity or entertaining serious doubts as to [their] truth.” (citation omitted)
Finally, addressing hacking, the court stated:
Tamburo argues that Henry’s statement that he committed “hacking” and that he took data from non-public areas of her website are defamatory because they imply illegal activity. He claims that the statements are false because he did not evade any security measures employed on Henry’s site, and no prohibition on robotic browsing was visible on the site.
The statements that Tamburo “wrote an agent robot to take specific files off of specific sites” and that the “files were not in a public venue” are substantially true and thus not actionable. Although Tamburo argues that the files were accessible to him through a URL, it is undisputed that Henry’s site was designed to allow the user to search manually for the pedigree of an individual dog. Nothing in the record indicates that Henry intended to make the entire database available to the public. The “gist” of the statements is therefore true. (citation omitted)
As to the word “hacking,” Henry argues that the term is susceptible to innocent construction because “the term has positive connotations,” implying the development of “a creative solution” to a computer problem. (citation omitted) The innocent construction rule “requires a court to consider the statement in context and give the words of the statement, and any implications arising from them, their natural and obvious meaning.” (citation omitted) Courts “are to interpret the words of the statement as they appear to have been used and according to the idea they were intended to convey to a reader of reasonable intelligence,” and “should avoid straining” to give a term an innocent meaning. (citation omitted) Although Henry proposes that the word “hacking” can be used to convey an innocent meaning, it is clear from the context of her statement that she meant to imply that the way Tamburo accessed her database was unethical or illegal, not “creative.” Thus, the word, as used by Henry, was defamatory.
Even so, the statement is protected by the same qualified privilege that renders Henry’s statements about theft non-actionable. Tamburo has presented no evidence showing that Henry abused the privilege. Although she admitted during her deposition that Tamburo had not evaded any security measures on her site, nothing in the record indicates that, at the time she made the statement about “hacking,” on May 5, 2004, she had serious doubts about the truth of the statement. Rather, the evidence shows that Henry designed her website to make data available to the public through a query search, which would provide information about one dog pedigree at a time. There is no dispute that this was the way Henry intended the site to be used, and that Tamburo instead accessed the site in a way that allowed him to copy Henry’s entire database.