Thursday, June 26, 2014

Initial Reactions to Riley v. California

Yesterday, in Riley v. California, the Supreme Court unanimously ruled that police must obtain a warrant before searching the cell phones of people they arrest. In an opinion widely heralded as a resounding victory for privacy in the digital age, Chief Justice Roberts wrote:
Our answer to the question of what police must do before searching a cell phone seized incident to an arrest is accordingly simple—get a warrant.
Much has already been written about the landmark decision. Here are some initial reactions to Riley from the law and technology community:

Twitter also weighed in on the case. Below are some thoughts on Riley in 140 characters or less: 
I will continue to update this post with additional writings as they are published. If I have missed any, please comment here or reach out to me on Twitter @natnicol.

This post was updated on June 26, 2014 at 1:59 p.m. MDT, and again at 2:21 p.m. MDT.
This post was updated on June 27, 2014 at 1:15 p.m. MDT.

Tuesday, April 29, 2014

Spring Edition of CCR's Massive Round-Up of New Law Articles on the CFAA, Cybercrime, Privacy, 4th Amendment, Surveillance, and more

Some impressive articles have been published since the last round-up I did in February; if you missed that post, see: Massive round-up of new law articles, covering privacy, Fourth Amendment, GPS, cell site, cybercrime, big data, revenge porn, drones, and more

New Legal Scholarship (with abstracts where available)

Orin S. Kerr, The Next Generation Communications Privacy Act, 162 U. Pa. L. Rev. 373 (2014)
In 1986, Congress enacted the Electronic Communications Privacy Act (ECPA) to regulate government access to Internet communications and records. ECPA is widely regarded as outdated, and ECPA reform is now on the Congressional agenda. At the same time, existing reform proposals retain the structure of the 1986 Act and merely tinker with a few small aspects of the statute. This Article offers a thought experiment about what might happen if Congress were to repeal ECPA and enact a new privacy statute to replace it. 
The new statute would look quite different from ECPA because overlooked changes in Internet technology have dramatically altered the assumptions on which the 1986 Act was based. ECPA was designed for a network world with high storage costs and only local network access. Its design reflects the privacy threats of such a network, including high privacy protection for real-time wiretapping, little protection for noncontent records, and no attention to particularity or jurisdiction. Today’s Internet reverses all of these assumptions. Storage costs have plummeted, leading to a reality of almost total storage. Even U.S.-based services now serve a predominantly foreign customer base. A new statute would need to account for these changes. 
This Article contends that a next generation privacy act should contain four features. First, it should impose the same requirement on access to all contents. Second, it should impose particularity requirements on the scope of disclosed metadata. Third, it should impose minimization rules on all accessed content. And fourth, it should impose a two-part territoriality regime with a mandatory rule structure for U.S.-based users and a permissive regime for users located abroad.

And a response to Kerr's article: Ryan Calo, Communications Privacy for and by Whom?, 162 U. Pa. L. Rev. Online 231 (2014)
Andrea M. Matwyshyn, Privacy, the Hacker Way, 87 S. Cal. L. Rev. 1 (2014)
This Article seeks to clarify the relationship between contract law and promises of privacy and information security. It challenges three commonly held misconceptions in privacy literature regarding the relationship between contract and data protection—the propertization fatalism, the economic value fatalism, and the displacement fatalism—and argues in favor of embracing contract law as a way to enhance consumer privacy. Using analysis from Sorrell v. IMS Health Inc., marketing theory, and the work of Pierre Bourdieu, it argues that the value in information contracts is inherently relational: consumers provide “things of value”—rights of access to valuable informational constructs of identity and context—in exchange for access to certain services provided by the data aggregator. This Article presents a contract-based consumer protection approach to privacy and information security. Modeled on trade secret law and landlord-tenant law, it advocates for courts and legislatures to adopt a “reasonable data stewardship” approach that relies on a set of implied promises—nonwaivable contract warranties and remedies—to maintain contextual integrity of information and improve consumer privacy. 
Matthew F. Meyers, GPS “Bullets” and the Fourth Amendment, 4 Wake Forest L. Rev. Online 18 (2014) (No Abstract)

From the Fordham Law Review, April 2014 | Vol. 82, No. 5:
Peter Margulies, The NSA in Global Perspective: Surveillance, Human Rights, and International Counterterrorism (No Abstract)
Casey J. McGowan, The Relevance of Relevance: Section 215 of the USA PATRIOT Act and the NSA Metadata Collection Program
In June 2013, a National Security Agency (NSA) contractor, Edward Snowden, leaked classified documents exposing a number of secret government programs. Among these programs was the “telephony metadata” collection program under which the government collects records from phone companies containing call record data for nearly every American. News of this program created considerable controversy and led to a wave of litigation contesting the validity of the program. 
The legality of the metadata collection program has been challenged on both constitutional and statutory grounds. The program derives its authority from Section 215 of the USA PATRIOT Act, codified as 50 U.S.C. § 1861. The statute requires that there be reasonable grounds to believe the data collected is “relevant to an authorized investigation.” The government deems all these records “relevant” based on the fact that they are used to find patterns and connections in preventing terrorist activity. Critics of the program, however, assert that billions of records cannot possibly be relevant when a negligible portion of those records are actually linked to terrorist activity. This Note examines the conflicting interpretations of “relevant,” and concludes that while the current state of the law permits bulk data collection, the power of the NSA to collect records on such a large scale must be reined in.
Thomas Rosso, Unlimited Data?: Placing Limits on Searching Cell Phone Data Incident to a Lawful Arrest 
The “search incident to arrest exception” is one of several exceptions to the general requirement that police must obtain a warrant supported by probable cause before conducting a search. Under the exception, an officer may lawfully search an arrestee’s person and the area within the arrestee’s immediate control without a warrant or probable cause, so long as the search is conducted contemporaneously with the lawful arrest. The U.S. Supreme Court has justified the exception based on the need for officers to discover and remove any weapons or destructible evidence that may be within the arrestee’s reach. Additionally, the Court has held that, under the exception, police may search any containers found on the arrestee’s person without examining the likelihood of uncovering weapons or evidence related to the arrestee’s offense. In light of these principles, should the exception permit officers to search the data of a cell phone found on an arrestee’s person? 
In January 2014, the Supreme Court granted certiorari to review two appellate rulings and resolve a split among the circuits and state courts on this question. This Note examines three approaches courts have taken to resolve the issue: a broad approach, a middle approach, and a narrow approach. This Note argues that the Supreme Court should adopt the narrow approach and prohibit warrantless searches of cell phone data under the exception.
Stephen Moor, Cyber Attacks and the Beginnings of an International Cyber Treaty, North Carolina Journal of International Law and Commercial Regulation (Fall 2013) (No Abstract)

Katherine Booth Wellington, Cyberattacks on Medical Devices and Hospital Networks: Legal Gaps and Regulatory Solutions, 30 Santa Clara High Tech. L.J. 139 (2014)
Cyberattacks on medical devices and hospital networks are a real and growing threat. Malicious actors have the capability to hack pacemakers and insulin pumps, shut down hospital networks, and steal personal health information. This Article analyzes the laws and regulations that apply to cyberattacks on medical devices and hospital networks and argues that the existing legal structure is insufficient to prevent these attacks. While the Computer Fraud and Abuse Act and the Federal Anti-Tampering Act impose stiff penalties for cyberattacks, it is often impossible to identify the actor behind a cyberattack—greatly decreasing the deterrent power of these laws. Few laws address the role of medical device manufacturers and healthcare providers in protecting against cyberattacks. While HIPAA incentivizes covered entities to protect personal health information, HIPAA does not apply to most medical device manufacturers or cover situations where malicious actors cause harm without accessing personal health information. Recent FDA draft guidance suggests that the agency has begun to impose cybersecurity requirements on medical device manufacturers. However, this guidance does not provide a detailed roadmap for medical device cybersecurity and does not apply to healthcare providers. Tort law may fill in the gaps, although it is unclear if traditional tort principles apply to cyberattacks. New legal and regulatory approaches are needed. One approach is industry self-regulation, which could lead to the adoption of industry-wide cybersecurity standards and lay the groundwork for future legal and regulatory reform. A second approach is to develop a more forward-looking and flexible FDA focus on evolving cybersecurity threats. A third approach is a legislative solution. Expanding HIPAA to apply to medical device manufacturers and to any cyberattack that causes patient harm is one way to incentivize medical device manufacturers and healthcare providers to adopt cybersecurity measures. All three approaches provide a starting point for considering solutions to twenty-first century cybersecurity threats.
Merritt Baer, Who is the Witness to an Internet Crime: The Confrontation Clause, Digital Forensics, and Child Pornography, 30 Santa Clara High Tech. L.J. 31 (2014)
The Sixth Amendment’s Confrontation Clause guarantees the accused the right to confront witnesses against him. In this article I examine child pornography prosecution, in which we must apply this constitutional standard to digital forensic evidence. I ask, “Who is the witness to an Internet crime?” 
The Confrontation Clause proscribes the admission of hearsay. In Ohio v. Roberts, the Supreme Court stated that the primary concern was reliability and that hearsay might be admissible if the reliability concerns were assuaged. Twenty-four years later, in Crawford v. Washington, the Supreme Court repositioned the Confrontation Clause of the Sixth Amendment as a procedural right. Even given assurances of reliability, “testimonial” evidence requires a physical witness. 
This witness production requirement could have been sensible in an era when actions were physically tied to humans. But in an Internet age, actions may take place at degrees removed from any physical person. 
The hunt for a witness to digital forensic evidence involved in child pornography prosecution winds through a series of law enforcement protocols, on an architecture owned and operated by private companies. Sentencing frameworks associated with child pornography similarly fail to reflect awareness of the way that actions occur online, even while they reinforce what is at stake. 
The tensions I point to in this article are emblematic of emerging questions in Internet law. I show that failing to link the application of law and its undergirding principles to a digital world does not escape the issue, but distorts it. This failure increases the risk that our efforts to preserve Constitutional rights are perverted or made impotent.
Yana Welinder, Facing Real-Time Identification in Mobile Apps & Wearable Computers, 30 Santa Clara High Tech. L.J. 89 (2014)
The use of face recognition technology in mobile apps and wearable computers challenges individuals’ ability to remain anonymous in public places. These apps can also link individuals’ offline activities to their online profiles, generating a digital paper trail of their every move. The ability to go off the radar allows for quiet reflection and daring experimentation—processes that are essential to a productive and democratic society. Given what we stand to lose, we ought to be cautious with groundbreaking technological progress. It does not mean that we have to move any slower, but we should think about potential consequences of the steps that we take. 
This article maps out the recently launched face recognition apps and some emerging regulatory responses to offer initial policy considerations. With respect to current apps, app developers should consider how the relevant individuals could be put on notice given that the apps will not only be using information about their users, but also about the persons being identified. They should also consider how the apps could minimize their data collection and retention and keep the data secure. Today’s face recognition apps mostly use photos from social networks. They therefore call for regulatory responses that consider the context in which users originally shared the photos. Most importantly, the article highlights that the Federal Trade Commission’s first policy response to consumer applications that use face recognition did not follow the well-established principle of technology neutrality. The article argues that any regulation with respect to identification in real time should be technology neutral and narrowly address harmful uses of computer vision without hampering the development of useful applications. 
Valerie Redmond, Note, I Spy with My Not So Little Eye: A Comparison of Surveillance Law in the United States and New Zealand, 37 Fordham Int’l L.J. 733 (2014) (No Abstract)

Lawrence Rosenthal, Binary Searches and the Central Meaning of the Fourth Amendment, 22 Wm. & Mary Bill Rts. J. 881 (2014) (No Abstract)

Jason P. Nance, School Surveillance and the Fourth Amendment, 2014 Wisc. L. Rev. 79 (2014)
In the aftermath of several highly publicized incidents of school violence, public school officials have increasingly turned to intense surveillance methods to promote school safety. The current jurisprudence interpreting the Fourth Amendment generally permits school officials to employ a variety of strict measures, separately or in conjunction, even when their use creates a prison-like environment for students. Yet, not all schools rely on such strict measures. Recent empirical evidence suggests that low-income and minority students are much more likely to experience intense security conditions in their schools than other students, even after taking into account factors such as neighborhood crime, school crime, and school disorder. These empirical findings are problematic on two related fronts. First, research suggests that students subjected to these intense surveillance conditions are deprived of quality educational experiences that other students enjoy. Second, the use of these measures perpetuates social inequalities and exacerbates the school-to-prison pipeline.    
Under the current legal doctrine, students have almost no legal recourse to address conditions creating prison-like environments in schools. This Article offers a reformulated legal framework under the Fourth Amendment that is rooted in the foundational Supreme Court cases evaluating students’ rights under the First, Fourth, and Fourteenth Amendments. The historical justification courts invoke to abridge students’ constitutional rights in schools, including their Fourth Amendment rights, is to promote the educational interests of the students. This justification no longer holds true when a school creates a prison-like environment that deteriorates the learning environment and harms students’ educational interests. This Article maintains that in these circumstances, students’ Fourth Amendment rights should not be abridged but strengthened.
Meredith Mays Espino, Sometimes I Feel Like Somebody’s Watching Me . . . Read?: A Comment On The Need For Heightened Privacy Rights For Consumers Of Ebooks, 30 J. Marshall J. Info. Tech. & Privacy L. 281 (2013)

Emily Katherine Poole, Hey Girls, Did You Know? Slut-Shaming on the Internet Needs to Stop, 48 USF L. Rev. 221 (2013)
When it comes to sexual expression, females are denied the freedoms enjoyed by males. Even though sexual acts often take both a male and a female, it is the girl that faces society’s judgment when her behavior is made public. The Internet has created a forum for such "slut shaming" to occur on a whole new level. Now when a girl is attacked for her sexuality, her attackers can be spread across the U.S., or even the world. The Internet is an incredible resource for sharing and gaining information, but it is also allowing attacks on female sexuality to flourish.  
While slut shaming can and does occur to females of all ages, this Article focuses on its prevalence among teen and preteen girls, falling under the umbrella of cyberbullying. Because actions and legislation that address cyber slut-shaming can also remedy other types of cyberbullying, the problems and proposed solutions elaborated in this Article can be expanded to include all types of cyberbullying. I chose to focus on one specific and pervasive harm — that caused by sexual shaming — to help bring attention to both the repercussions of cyberbullying and to the broader problem of gender inequality that persists in forums and social networking sites across the Internet. 
Sprague, Robert, No Surfing Allowed: A Review and Analysis of Legislation Prohibiting Employers from Demanding Access to Employees’ and Job Applicants’ Social Media Accounts (January 31, 2014). Albany Law Journal of Science and Technology, Vol. 24, 2014
This article examines recent state legislation prohibiting employers from requesting username and password information from employees and job applicants in order to access restricted portions of those employees’ and job applicants’ personal social media accounts. This article raises the issue of whether this legislation is even needed, from both practical and legal perspectives, focusing on: (a) how prevalent the practice is of requesting employees’ and job applicants’ social media access information; (b) whether alternative laws already exist which prohibit employers from requesting employees’ and job applicants’ social media access information; and (c) whether any benefits can be derived from this legislative output. After analyzing the potential impact of this legislation on employees, job applicants, and employers, this article concludes that such legislation is but an answer seeking a problem and raises more questions than it answers.
From the Washington Law Review, Volume 89 | Number 1 | March 2014
Danielle Keats Citron & Frank Pasquale, The Scored Society: Due Process for Automated Predictions, 89 Wash. L. Rev. 1 
Big Data is increasingly mined to rank and rate individuals. Predictive algorithms assess whether we are good credit risks, desirable employees, reliable tenants, valuable customers—or deadbeats, shirkers, menaces, and “wastes of time.” Crucial opportunities are on the line, including the ability to obtain loans, work, housing, and insurance. Though automated scoring is pervasive and consequential, it is also opaque and lacking oversight. In one area where regulation does prevail—credit—the law focuses on credit history, not the derivation of scores from data.  
Procedural regularity is essential for those stigmatized by “artificially intelligent” scoring systems. The American due process tradition should inform basic safeguards. Regulators should be able to test scoring systems to ensure their fairness and accuracy. Individuals should be granted meaningful opportunities to challenge adverse decisions based on scores miscategorizing them. Without such protections in place, systems could launder biased and arbitrary data into powerfully stigmatizing scores. 
Elizabeth E. Joh, Policing by Numbers: Big Data and the Fourth Amendment, 89 Wash. L. Rev. 35 
The age of “big data” has come to policing. In Chicago, police officers are paying particular attention to members of a “heat list”: those identified by a risk analysis as most likely to be involved in future violence. In Charlotte, North Carolina, the police have compiled foreclosure data to generate a map of high-risk areas that are likely to be hit by crime. In New York City, the N.Y.P.D. has partnered with Microsoft to employ a “Domain Awareness System” that collects and links information from sources like CCTVs, license plate readers, radiation sensors, and informational databases. In Santa Cruz, California, the police have reported a dramatic reduction in burglaries after relying upon computer algorithms that predict where new burglaries are likely to occur. Unlike the data crunching performed by Target, Walmart, or Amazon, the introduction of big data to police work raises new and significant challenges to the regulatory framework that governs conventional policing. This article identifies three uses of big data and the questions that these tools raise about conventional Fourth Amendment analysis. Two of these examples, predictive policing and mass surveillance systems, have already been adopted by a small number of police departments around the country. A third example — the potential use of DNA databank samples — presents an untapped source of big data analysis. While seemingly quite distinct, these three examples of big data policing suggest the need to draw new Fourth Amendment lines now that the government has the capability and desire to collect and manipulate large amounts of digitized information. 
Lawrence B. Solum, Artificial Meaning, 89 Wash. L. Rev. 69  (No Abstract)
Harry Surden, Machine Learning and the Law, 89 Wash. L. Rev. 87 (No Abstract)
David C. Vladeck, Machines Without Principals: Liability Rules and Artificial Intelligence, 89 Wash. L. Rev. 117 (No Abstract)
All of Volume 40, Issue 2 of the William Mitchell Law Review: Legal Issues in a World of Electronic Data, which includes the following articles:
Roland L. Trope and Stephen J. Humes, Before Rolling Blackouts Begin: Briefing Boards on Cyber Attacks That Target and Degrade the Grid 
Damien Riehl and Jumi Kassim, Is “Buying” Digital Content Just “Renting” for Life? Contemplating a Digital First-Sale Doctrine 
Stephen T. Middlebrook and Sarah Jane Hughes, Regulating Cryptocurrencies in the United States: Current Issues and Future Directions 
Nathan Newman, The Costs of Lost Privacy: Consumer Harm and Rising Economic Inequality in the Age of Google
Slobogin, Christopher, Panvasive Surveillance, Political Process Theory and the Nondelegation Doctrine (April 23, 2014). Georgetown Law Journal, Vol. 102, 2014; Vanderbilt Public Law Research Paper No. 14-13 (SSRN)
Using the rise of the surveillance state as its springboard, this Article makes a new case for the application of administrative law principles to law enforcement. It goes beyond asserting, as scholars of the 1970s did, that law enforcement should be bound by the types of rules that govern other executive agencies, by showing how the imperative of administrative regulation flows from a version of John Hart Ely’s political process theory and principles derived from the closely associated nondelegation doctrine. Part I introduces the notion of panvasive law enforcement — large-scale police actions that are not based on individualized suspicion — and exposes the incoherence of the Supreme Court’s “special needs” treatment of panvasive investigative techniques under the Fourth Amendment. It then contrasts the Court’s jurisprudence, and the variations of it proposed by scholars, to the representation-reinforcing alternative suggested by Ely’s work, which would require that panvasive searches and seizures be approved by a body that is representative of the affected group and be applied evenly. Part II explores the impact of political process theory on panvasive surveillance that is not currently considered a search or seizure under the Fourth Amendment, using fusion centers, camera surveillance, drone flights and the NSA’s metadata program as examples. Part III mines administrative law principles to show how the rationale underlying the nondelegation doctrine — if not the (supposedly moribund) doctrine itself — can help ensure that the values of representative democracy and transparency are maintained even once control over panvasive surveillance is largely ceded to the Executive Branch.

Kerr, Orin S., The Fourth Amendment and the Global Internet (April 23, 2014). Stanford Law Review, Vol. 67, 2015, Forthcoming (SSRN)
This article considers how Fourth Amendment law should adapt to the increasingly worldwide nature of Internet surveillance. It focuses on two types of problems not yet addressed by courts. First, the Supreme Court’s decision in United States v. Verdugo-Urquidez prompts several puzzles about how the Fourth Amendment treats monitoring on a global network where many lack Fourth Amendment rights. For example, can online contacts help create those rights? What if the government mistakenly believes that a target lacks Fourth Amendment rights? How does the law apply to monitoring of communications between those who have and those who lack Fourth Amendment rights? The second category of problems follows from different standards of reasonableness that apply outside the United States and at the international border. Does the border search exception apply to purely electronic transmission? And if reasonableness varies by location, is the relevant location the search, the seizure, or the physical person?  
The article explores and answers each of these questions through the lens of equilibrium-adjustment. Today’s Fourth Amendment doctrine is heavily territorial. The article aims to adapt existing principles for the transition from a domestic physical environment to a global networked world in ways that maintain the preexisting balance of Fourth Amendment protection. On the first question, it rejects online contacts as a basis for Fourth Amendment protection; allows monitoring when the government wrongly but reasonably believes that a target lacks Fourth Amendment rights; and limits monitoring between those who have and those who lack Fourth Amendment rights. On the second question, it contends that the border search exception should not apply to electronic transmission and that reasonableness should follow the location of data seizure. The Internet requires search and seizure law to account for the new facts of international investigations. The solutions offered in this article offer a set of Fourth Amendment rules tailored to the reality of global computer networks.
Marthews, Alex and Tucker, Catherine, Government Surveillance and Internet Search Behavior (March 24, 2014) (SSRN) 
This paper uses data from Google Trends on search terms from before and after the surveillance revelations of June 2013 to analyze whether Google users' search behavior shifted as a result of an exogenous shock in information about how closely their internet searches were being monitored by the U.S. government. We use data from Google Trends on search volume for 282 search terms across eleven different countries. These search terms were independently rated for their degree of privacy-sensitivity along multiple dimensions. Using panel data, our results suggest that cross-nationally, users were less likely to search using search terms that they believed might get them in trouble with the U.S. government. In the U.S., this was the main subset of search terms that were affected. However, internationally there was also a drop in traffic for search terms that were rated as personally sensitive. These results have implications for policy makers in terms of understanding the actual effects on search behavior of disclosures relating to the scale of government surveillance on the Internet and their potential effects on international competitiveness. 
Hollis, Duncan B., Re-Thinking the Boundaries of Law in Cyberspace: A Duty to Hack? (April 12, 2014). in Cyberwar: Law & Ethics for Virtual Conflicts (J. Ohlin et al., eds., Oxford University Press, 2014 Forthcoming) (SSRN)
Warfare and boundaries have a symbiotic relationship. Whether as its cause or effect, States historically used war to delineate the borders that divided them. Laws and borders have a similar relationship. Sometimes laws are the product of borders as when national boundaries delineate the reach of States’ authorities. But borders may also be the product of law; laws regularly draw lines between permitted and prohibited conduct or bound off required acts from permissible ones. Both logics are on display in debates over international law in cyberspace. Some characterize cyberspace as a unique, self-governing ‘space’ that requires its own borders and the drawing of tailor-made rules therein. For others, cyberspace is merely a technological medium that States can govern via traditional territorial borders with rules drawn ‘by analogy’ from pre-existing legal regimes.  
This chapter critiques current formulations drawing law from boundaries and boundaries from law in cyberspace with respect to (a) its governance; (b) the use of force; and (c) international humanitarian law (IHL). In each area, I identify theoretical problems that exist in the absence of any uniform theory for why cyberspace needs boundaries. At the same time, I elaborate functional problems with existing boundary claims – particularly by analogy – in terms of their (i) accuracy, (ii) effectiveness and (iii) completeness. These prevailing difficulties on whether, where, and why borders are needed in cyberspace suggest the time is ripe for re-appraising the landscape.  
This chapter seeks to launch such a re-thinking project by proposing a new rule of IHL – a Duty to Hack. The Duty to Hack would require States to use cyber-operations in their military operations whenever they are the least harmful means available for achieving military objectives. Thus, if a State can achieve the same military objective by bombing a factory or using a cyber-operation to take it off-line temporarily, the Duty to Hack requires that State to pursue the latter course. Although novel, I submit the Duty to Hack more accurately and effectively accounts for IHL’s fundamental principles and cyberspace’s unique attributes than existing efforts to foist legal boundaries upon State cyber-operations by analogy. Moreover, adopting the Duty to Hack could constitute a necessary first step to resolving the larger theoretical and functional challenges currently associated with law’s boundaries in cyberspace.
Stopczynski, Arkadiusz and Greenwood, Dazza and Hansen, Lars Kai and Pentland, Alex, Privacy for Personal Neuroinformatics (April 21, 2014) (SSRN)
Human brain activity collected in the form of Electroencephalography (EEG), even with a low number of sensors, is an extremely rich signal raising legal and policy issues. Traces collected from multiple channels and with high sampling rates capture many important aspects of participants' brain activity and can be used as a unique personal identifier. The motivation for sharing EEG signals is significant, as a means to understand the relation between brain activity and well-being, or for communication with medical services. As the equipment for such data collection becomes more available and widely used, the opportunities for using the data are growing; at the same time, however, inherent privacy risks are mounting. The same raw EEG signal can be used for example to diagnose mental diseases, find traces of epilepsy, and decode personality traits. The current practice of the informed consent of the participants for the use of the data either prevents reuse of the raw signal or does not truly respect participants' right to privacy by reusing the same raw data for purposes much different than originally consented to. Here we propose an integration of a personal neuroinformatics system, Smartphone Brain Scanner, with a general privacy framework openPDS. We show how raw high-dimensionality data can be collected on a mobile device, uploaded to a server, and subsequently operated on and accessed by applications or researchers, without disclosing the raw signal. Those extracted features of the raw signal, called answers, are of significantly lower dimensionality, and provide the full utility of the data in a given context, without the risk of disclosing the sensitive raw signal. Such an architecture significantly mitigates a very serious privacy risk related to raw EEG recordings floating around and being used and reused for various purposes.
Reeves, Shane R. and Johnson, William J., Autonomous Weapons: Are You Sure These are Killer Robots? Can We Talk About It? (April 30, 2014). The Army Lawyer 1 (April 2014) (SSRN)
The rise of autonomous weapons is creating understandable concern for the international community as it is impossible to predict exactly what will happen with the technology. This uncertainty has led some to advocate for a preemptive ban on the technology. Yet the emergence of a new means of warfare is not a unique phenomenon and is assumed within the Law of Armed Conflict. Past attempts at prohibiting emerging technologies' use as weapons — such as aerial balloons in Declaration IV of the 1899 Hague Convention — have failed as a prohibitive regime denies the realities of warfare. Further, those exploring the idea of autonomous weapons are sensitive not only to their legal obligations, but also to the various ethical and moral questions surrounding the technology. Rather than attempting to preemptively ban autonomous weapons before understanding the technology's potential, efforts should be made to pool the collective intellectual resources of scholars and practitioners to develop a road forward. Perhaps this would be the first step to a more comprehensive and assertive approach to addressing the other pressing issues of modern warfare.
Timothy C. MacDonnell, Justice Scalia’s Fourth Amendment: Text, Context, Clarity, And Occasional Faint-Hearted Originalism (SelectedWorks) (2014)
Since joining the United States Supreme Court in 1986, Justice Scalia has been one of the most prominent voices on the Fourth Amendment, having written twenty majority opinions, twelve concurrences and eight dissents on the topic. Justice Scalia's Fourth Amendment opinions have had a significant effect on the Court's jurisprudence relative to the Fourth Amendment. Under his pen, the Court has altered its test for determining when the Fourth Amendment should apply; provided a vision for how technology's encroachment on privacy should be addressed; and articulated the standard for determining whether government officials are entitled to qualified immunity in civil suits involving alleged Fourth Amendment violations. In most of Justice Scalia's opinions, he has championed the originalist/textualist theory of constitutional interpretation. Based on that theory, he has advocated that the text and context of the Fourth Amendment should govern how the Court interprets most questions of search and seizure law. His Fourth Amendment opinions have also included an emphasis on clear, bright-line rules that can be applied broadly to Fourth Amendment questions. However, there are Fourth Amendment opinions in which Justice Scalia has strayed from these commitments, particularly in the areas of the special needs doctrine and qualified immunity. The article asserts that Justice Scalia's non-originalist approach in these spheres threatens the cohesiveness of his Fourth Amendment jurisprudence, and could, if not corrected, unbalance the Fourth Amendment in favor of law enforcement interests.

Thursday, April 24, 2014

Must Read Law Review Article -- Personal Curtilage: Fourth Amendment Security in Public

Andrew Guthrie Ferguson has a new law review article in the April 2014 issue (Vol. 55, No. 4) of William & Mary Law Review, entitled: Personal Curtilage: Fourth Amendment Security in Public. The abstract is below:
Do citizens have any Fourth Amendment protection from sense-enhancing surveillance technologies in public? This Article engages a timely question as new surveillance technologies have redefined expectations of privacy in public spaces. It proposes a new theory of Fourth Amendment security based on the ancient theory of curtilage protection for private property. Curtilage has long been understood as a legal fiction that expands the protection of the home beyond the formal structures of the house. Based on custom and law protecting against both nosy neighbors and the government, curtilage was defined by the actions the property owner took to signal a protected space. In simple terms, by building a wall around one's house, the property owner marked out an area of private control. So, too, the theory of personal curtilage turns on persons being able to control the protected areas of their lives in public by similarly signifying that an area is meant to be secure from others. 
This Article develops a theory of personal curtilage built on four overlapping foundational principles. First, persons can build a constitutionally protected space secure from governmental surveillance in public. Second, to claim this space as secure from governmental surveillance, the person must affirmatively mark that space in some symbolic manner. Third, these spaces must be related to areas of personal autonomy or intimate connection, be it personal, familial, or associational. Fourth, these contested spaces, like traditional curtilage, will be evaluated by objectively balancing these factors to determine if a Fourth Amendment search has occurred. Adapting the framework of traditional trespass, an intrusion by sense-enhancing technologies into this protected personal curtilage would be a search for Fourth Amendment purposes. The Article concludes that the theory of personal curtilage improves and clarifies the existing Fourth Amendment doctrine and offers a new framework for future cases. It also highlights the need for a new vision of trespass to address omnipresent sense-enhancing surveillance technologies.

Wednesday, April 23, 2014

Supreme Court News: Reply Briefs Filed (Apr. 22nd) in Fourth Amendment Cell Phone Cases (Wurie and Riley); Oral Argument Next Week

In Riley v. California, Petitioner David Leon Riley has filed his reply brief. The case is summarized by SCOTUSblog as follows:
Issue: Whether evidence admitted at petitioner's trial was obtained in a search of petitioner's cell phone that violated petitioner's Fourth Amendment rights.
In United States v. Wurie, Petitioner United States has filed its reply brief. SCOTUSblog's summary:
Issue: Whether the Fourth Amendment permits the police, without obtaining a warrant, to review the call log of a cellphone found on a person who has been lawfully arrested.
Both cases are scheduled for oral argument on April 29th.

Monday, April 21, 2014

Privacy, Hacking, and Information Security Tools: A Primer for Legal Professionals (Part I)

I thought it might be useful to describe some commonly used tools in the information security sphere that should be on every attorney's radar, for myriad reasons. Perhaps you are defending a client who has used such a tool, or you wish to uphold your obligations under the Model Rules to make your attorney-client communications truly confidential.

This may become a multi-part post, given the plethora of tools out there (and further posts will, to some extent, depend on whether people find this post to be useful - so feedback would be great).

1.   To start, a tool used by hackers, privacy enthusiasts, and others is Tails, "The Amnesic Incognito Live System." It is a LiveCD/bootable OS that comes packed with baked-in privacy tools; its most important feature is that the network configuration forces all traffic through the Tor network. From the Tails page, the OS allows you to:
-use the Internet anonymously and circumvent censorship;
-all connections to the Internet are forced to go through the Tor network;
-leave no trace on the computer you are using unless you ask it explicitly;
-use state-of-the-art cryptographic tools to encrypt your files, emails and instant messaging.
So, you can boot with the LiveCD, do all of your surfing anonymously in the Tails OS (a modified Linux), and then restart back into your regular operating system without leaving forensic tidbits on the hard drive; the OS operates in running memory, so upon reboot the memory is wiped (RAM does not persist across a reboot, with some caveats). The "Warning" page gives a good synopsis of various gotchas that can limit your anonymity and/or complicate the goal of covering your tracks.

Some people, like yours truly, use Tails in a bootable VM image. There are some drawbacks to that approach (it makes it easier to leave forensic artifacts). Thankfully, I'm not doing anything illegal, so I really don't care. It's a good way to get on Tor and ensure all traffic does indeed travel through onion routing.
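If you want to verify programmatically that your session really is exiting through Tor, the Tor Project runs a check service with a JSON endpoint. Here is a minimal Python sketch (the endpoint URL is an assumption based on the service as I know it; verify it against the Tor Project's current documentation before relying on it):

```python
import json
import urllib.request

# The Tor Project's check service reports whether the requesting IP
# is a known Tor exit. Run this from inside Tails (or any Tor'd setup).
# Endpoint is an assumption; confirm against torproject.org docs.
URL = "https://check.torproject.org/api/ip"

with urllib.request.urlopen(URL, timeout=10) as resp:
    info = json.load(resp)

print("Exit IP:", info.get("IP"))
print("Routed through Tor." if info.get("IsTor") else "NOT routed through Tor!")
```

If the script prints your real ISP-assigned address, something in your setup is bypassing the onion routing.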

Side note: most people are familiar, at least superficially, with Tor (given the press surrounding Silk Road). However, there are other closed/anonymous peer-to-peer networks out there, most notably I2P.

2. A lot of people are lulled into a false sense of security when they sign up for offshore or self-avowed "totally anonymous" VPN providers. HideMyAss, a popular VPN provider, didn't hide the ass of a LulzSec member, instead providing information to the FBI that assisted in his arrest. More nuanced yet: even if you use a VPN provider that does not keep logs (an assertion I always take with a grain of salt), VPN users often misconfigure their VPN tunnel and accidentally send DNS requests via their regular ISP. So, your traffic is going over the VPN, but if your DNS requests are still going to your ISP outside the tunnel, it is possible to track, at the very least, what sites you are going to (but not, to be sure, the actual content of the traffic itself). Enter the next tool: DNSLeakTest. This tool will run a test against your configuration to show whether or not you are actually using the DNS servers you want to/need to/assumed were set up. For example, when I run the Extended Test using my home internet connection, I receive, inter alia, the following result:

[Screenshot: DNSLeakTest extended test results showing Charter (my ISP) DNS servers in Wisconsin]

What this image shows is that my DNS is being routed to Charter (my provider) in Wisconsin. That is to be expected when I am surfing without attempting anonymity, but I would not want it to show up when I am trying to be anonymous. Using a common VPN provider, I receive the following results instead, showing my DNS queries are going through the provider's servers:

[Screenshot: DNSLeakTest results showing the VPN provider's DNS servers]

The key here is that if you are arguing that you never visited (insert site with criminal ties here), and there is a DNS request around the time of the specific activity, you've got a credibility (and evidentiary) problem that is hard to refute. Granted, you are once again trusting the anonymity ("short memory") of the VPN provider's DNS records.
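DNSLeakTest shows where your queries actually egress; you can also sanity-check which resolvers your machine is configured to use before trusting the tunnel. A minimal Python sketch for Linux-style systems (the VPN resolver range below is a hypothetical placeholder; substitute whatever range your provider documents):

```python
import ipaddress
from pathlib import Path

# Hypothetical: the block(s) your VPN provider says its resolvers occupy.
VPN_RESOLVER_NETS = [ipaddress.ip_network("10.8.0.0/24")]

def configured_nameservers(path="/etc/resolv.conf"):
    """Yield nameserver addresses from a resolv.conf-style file."""
    for line in Path(path).read_text().splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[0] == "nameserver":
            yield ipaddress.ip_address(parts[1])

for ns in configured_nameservers():
    inside = any(ns in net for net in VPN_RESOLVER_NETS)
    print(f"{ns}: {'VPN resolver' if inside else 'POSSIBLE LEAK -- not a VPN resolver'}")
```

This only inspects local configuration; queries can still leak through other paths, which is why a live test like DNSLeakTest remains the better evidence.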

3. When it comes to chatting, many users swear by Cryptocat. The app is described as follows:
Cryptocat is a fun, accessible app for having encrypted chat with your friends, right in your browser and mobile phone. Everything is encrypted before it leaves your computer. Even the Cryptocat network itself can't read your messages.
With the following caveats:
Cryptocat is not a magic bullet. Even though Cryptocat provides useful encryption, you should never trust any piece of software with your life, and Cryptocat is no exception.
Cryptocat does not anonymize you: While your communications are encrypted, your identity can still be traced since Cryptocat does not mask your IP address. For anonymization, we highly recommend using Tor. 
Cryptocat does not protect against key loggers: Your messages are encrypted as they go through the wire, but that doesn't mean that your keyboard is necessarily safe. Cryptocat does not protect against hardware or software key loggers which might be snooping on your keyboard strokes and sending them to an undesired third party. 
Cryptocat does not protect against untrustworthy people: Parties you're conversing with may still leak your messages without your knowledge. 
Cryptocat aims to make sure that only the parties you're talking to get your messages, but that doesn't mean these parties are necessarily trustworthy.
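Cryptocat's actual protocol is considerably more involved, but the core promise (messages are encrypted before they leave your computer) is easy to illustrate. A toy sketch using the third-party Python cryptography package; this is a generic symmetric-encryption illustration, not Cryptocat's actual scheme:

```python
# pip install cryptography   (third-party library, not in the stdlib)
from cryptography.fernet import Fernet

# In a real end-to-end chat, only the conversing parties would hold this key.
key = Fernet.generate_key()
cipher = Fernet(key)

token = cipher.encrypt(b"meet at the usual place")
print(token)                   # opaque ciphertext: all a relay server sees
print(cipher.decrypt(token))   # only a key holder recovers the plaintext
```

Note that the caveats above apply to any such scheme: a key logger captures the plaintext before encryption ever happens, and a counterparty can always leak what you said.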
4. With respect to mobile messaging apps, it also should be noted there are various other apps advertising the same anonymity. See the following:
  • Confide - "Your Off-the-Record Messenger" -- From the website: "Spoken words disappear after they're heard. But what you say online remains forever. With confidential messages that self-destruct, Confide takes you off the record."
5. On the hacking side of things, there are a few popular LiveCDs that bundle common hacking tools into an easy-to-use interface. The following distros are worth taking a look at:
  • Kali Linux - "The most advanced penetration testing distribution, ever" -- (formerly BackTrack) -- Kali is a LiveCD used by penetration testers, hackers, and information security professionals to streamline various hacking/recon/exploitation tasks. It includes Metasploit, the most widely used exploitation tool out there. Metasploit is the tool of choice for "script kiddies," essentially allowing exploitation of systems with no coding; a hacker normally need only provide a few parameters and choose a payload before the ownage of systems can commence.
6. Finally, much has been made of social engineering as the easiest, most effective, and hardest-to-defend-against method of enterprise infiltration. (In security, the weakest link is often the human element.) Social engineering has been used to hijack Twitter accounts (too many examples to note) and was central to the RSA breach, among other incidents. See this article from Dark Reading for more evidence: Socially Engineered Behavior To Blame For Most Security Breaches.

The toolkit of choice for script kiddies, penetration testers, and various others is TrustedSec's Social-Engineer Toolkit (SET). TrustedSec's website notes:
The Social-Engineer Toolkit has over 2 million downloads and is aimed at leveraging advanced technological attacks in a social-engineering type environment. TrustedSec believes that social-engineering is one of the hardest attacks to protect against and now one of the most prevalent. 
The Toolkit makes it trivial to create webpages that are identical to real enterprise websites requiring credentials (allowing login/password harvesting), and it also enables man-in-the-middle attacks in which the engineered website is passed off as a legitimate portal while the SSL traffic is stripped in the middle (allowing the "hacker" to obtain unencrypted credentials without alerting the user). The toolkit also automates phishing and includes various tools and tips to help trick enterprise users into giving up the keys to the kingdom.
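A defense-side footnote: stripping works because clients quietly fall back to an unprotected connection, whereas a client that insists on verified TLS fails loudly instead. A minimal Python sketch of strict certificate checking using only the standard library (the hostname is illustrative):

```python
import socket
import ssl

HOST, PORT = "example.com", 443  # illustrative target

# create_default_context() enables certificate-chain and hostname
# verification, so a forged certificate raises ssl.SSLError rather
# than silently handing credentials to a man in the middle.
ctx = ssl.create_default_context()

with socket.create_connection((HOST, PORT), timeout=10) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        print("Negotiated:", tls.version())
        print("Peer subject:", tls.getpeercert()["subject"])
```

None of this helps, of course, if the user types credentials into a plain-HTTP look-alike page, which is exactly the behavior SET preys on.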


Friday, April 18, 2014

Featured Article: Hacktivism and the First Amendment: Drawing the Line Between Cyber Protests and Crime

Volume 27 of the Harvard Journal of Law & Technology features a student Note by Xiang Li that addresses some of the First Amendment implications of "hacktivism," which Li broadly defined as the “combination of grassroots political protest with computer hacking through the nonviolent use of illegal or legally ambiguous digital tools [to pursue] political ends."

Li's Note, Hacktivism and the First Amendment: Drawing the Line Between Cyber Protests and Crime, argues that while hacktivist activities may not currently fit squarely within the purview of the First Amendment, over time these activities may evolve to a point at which a "categorical prohibition on all forms of hacktivism may sweep up socially productive uses of cyberattacks as a form of protest."

A portion of the Note, with footnotes redacted, appears below:
Does hacktivism constitute a legitimate instrument of protest in twenty-first century America? This Note examines the viability of invoking the First Amendment as a defense to the prosecution of  hacktivism, specifically in the form of cyberattacks, under the Computer Fraud and Abuse Act (“CFAA”). Although existing forms of cyberattacks are unlikely to merit First Amendment protection, this Note argues that hacktivism may evolve over time to fall within the purview of First Amendment protection. A categorical prohibition on all forms of hacktivism may sweep up socially productive uses of cyberattacks as a form of protest.

The argument proceeds in four parts. Section II describes the various forms of cyberattacks currently used by hacktivists, as well as the potential criminal liability for hacktivism under the CFAA. Section III examines the primary obstacle to, and secondary arguments against, invoking First Amendment protections for hacktivism as free speech. Section IV presents two of the central premises underlying the rise of hacktivism and discusses the need to reconceptualize what is currently a privatized cyberspace to make room for public forums that can provide specific access to a target’s online property. Additionally, Section IV discusses the possible evolution of hacktivism to include cyberattacks that generate pop-up windows to communicate protest messages. Such a mechanism could raise the possibility of First Amendment protection whereby the cyberattack constitutes protected speech and the pop-up window qualifies as a public forum, akin to a “cyber sidewalk” adjacent to the target’s online property. Section V concludes.

Friday, April 11, 2014

BREAKING: Third Circuit vacates conviction in United States v. Auernheimer due to improper venue

The United States Court of Appeals for the Third Circuit has just announced that the conviction of Andrew Auernheimer (known by many as Weev) has been reversed on venue grounds.

The opinion states (emphasis added):
This case calls upon us to determine whether venue for Andrew Auernheimer’s prosecution for conspiracy to violate the Computer Fraud and Abuse Act (“CFAA”), 18 U.S.C. § 1030, and identity fraud under 18 U.S.C. § 1028(a)(7) was proper in the District of New Jersey. Venue in criminal cases is more than a technicality; it involves “matters that touch closely the fair administration of criminal justice and public confidence in it.” United States v. Johnson, 323 U.S. 273, 276 (1944). This is especially true of computer crimes in the era of mass interconnectivity. Because we conclude that venue did not lie in New Jersey, we will reverse the District Court’s venue determination and vacate Auernheimer’s conviction.
More to come. . . 

Thursday, April 10, 2014

Featured Article: The Internet and the Constitution: A Selective Retrospective

The Honorable M. Margaret McKeown of the United States Court of Appeals for the Ninth Circuit has a rather interesting article appearing in volume 9 of the Washington Journal of Law, Technology & Arts.

In her article, The Internet and the Constitution: A Selective Retrospective, Judge McKeown examines the complexities of the Internet and its associated innovations from a legal perspective, from the many jurisdictional and due process challenges to the implications for the First Amendment and free speech. Judge McKeown's story of "institutional stability in the face of change," however, is one she believes has been lost in the all-too-common narrative: "the Internet is changing all the rules and the system can’t keep up."

I found the entire article fascinating, but for those looking for a cybercrime hook, the article's discussion on “The Fourth Amendment and Privacy,” beginning on page 161, may be of particular interest.

The abstract appears below:
Over the last two decades, the Internet and its associated innovations have rapidly altered the way people around the world communicate, distribute and access information, and live their daily lives. Courts have grappled with the legal implications of these changes, often struggling with the contours and characterization of the technology as well as the application of constitutional provisions and principles. Judge M. Margaret McKeown of the United States Court of Appeals for the Ninth Circuit has had a close-up view of many of these Internet-era innovations and the ways the courts have addressed them. In this Article, adapted from her October 2013 Roger L. Shidler Lecture at the University of Washington School of Law, Judge McKeown offers her retrospective thoughts on the ways courts have handled constitutional issues in Internet cases. She also discusses some of the challenges currently facing courts and legislators alike as the U.S. legal system incorporates and accommodates Internet-based technologies and the societal, commercial, governmental, and relational changes they spawn.

Tuesday, April 8, 2014

WI governor signs revenge porn and social media privacy bills into law; privacy bill raises questions

(Update 1: Included link and excerpt from Rep. Sargent's Op-Ed when the bill was introduced, and further comments - to provide some context)

Governor Scott Walker of Wisconsin signed 62 bills into law today, including SB223 (relating to social media privacy) and SB367 (revenge porn).

A full list of the bills he signed can be found here: At a glance: List of 62 bills Gov. Walker signed, and regarding the two bills mentioned above:
Senate Bill 223 – prohibits employers, educational institutions and landlords from requesting or requiring passwords or other protected access to personal internet accounts of students, employees, and tenants. Viewing, accessing and using information from internet accounts, including social media, in the public domain is allowed. Senator Glenn Grothman (R-West Bend) and Representative Garey Bies (R-Sister Bay) authored the bill which unanimously passed the Senate and passed the Assembly on a voice vote; it is Act 208.
Senate Bill 367 – modernizes Wisconsin’s law relating to disseminating private images and expands protections for victims who have their private images distributed without their consent. Senator Leah Vukmir (R-Wauwatosa) and Representative John Spiros (R-Marshfield) authored the bill which passed both the Senate and the Assembly on a voice vote; it is Act 243. 
I criticized the original revenge porn bill proposal in Wisconsin (see: Wisconsin's "revenge porn" bill goes too far. Hypos to ponder and why the legislature should look to Professor Franks); specifically, I labeled the original proposal as overbroad because the bill did not include a scienter requirement. In the final bill, after a substitute amendment was adopted, the statutory text has been narrowed with just such a requirement. The bill signed into law requires "knowledge":
942.09 (3m) (a) Whoever does any of the following is guilty of a Class A misdemeanor: 
1. Posts, publishes, or causes to be posted or published, a private representation if the actor knows that the person depicted does not consent to the posting or publication of the private representation. 
2. Posts, publishes, or causes to be posted or published, a depiction of a person that he or she knows is a private representation, without the consent of the person depicted.
The social media privacy bill signed by the governor will surely be lauded by privacy advocates as a win for individual autonomy (and freedom from employer/educational institution snooping). But I find the exceptions to the bill much more intriguing and noteworthy than the protections most will focus on. In particular, note the carve-outs below:
(2) Restrictions on employer access to personal Internet accounts.  
   (a) Except as provided in pars. (b), (c), and (d), no employer may do any of the       following:
1. Request or require an employee or applicant for employment, as a condition of employment, to disclose access information for the personal Internet account of the employee or applicant or to otherwise grant access to or allow observation of that account.
2. Discharge or otherwise discriminate against an employee for exercising the right under subd. 1. to refuse to disclose access information for, grant access to, or allow observation of the employee's personal Internet account, opposing a practice prohibited under subd. 1., filing a complaint or attempting to enforce any right under subd. 1., or testifying or assisting in any action or proceeding to enforce any right under subd. 1. 
3. Refuse to hire an applicant for employment because the applicant refused to disclose access information for, grant access to, or allow observation of the applicant's personal Internet account. 
   (b) Paragraph (a) does not prohibit an employer from doing any of the following:

2. Discharging or disciplining an employee for transferring the employer's proprietary or confidential information or financial data to the employee's personal Internet account without the employer's authorization.
3. Subject to this subdivision, conducting an investigation or requiring an employee to cooperate in an investigation of any alleged unauthorized transfer of the employer's proprietary or confidential information or financial data to the employee's personal Internet account, if the employer has reasonable cause to believe that such a transfer has occurred, or of any other alleged employment-related misconduct, violation of the law, or violation of the employer's work rules as specified in an employee handbook, if the employer has reasonable cause to believe that activity on the employee's personal Internet account relating to that misconduct or violation has occurred. In conducting an investigation or requiring an employee to cooperate in an investigation under this subdivision, an employer may require an employee to grant access to or allow observation of the employee's personal Internet account, but may not require the employee to disclose access information for that account.
So, an employer may not require you to provide access to your personal Internet account on a whim or a hunch. But if the employer can point to an Acceptable Use Policy or text in an employee handbook, or can establish reasonable cause to believe employment-related misconduct occurred, the employer can require such access. Sure, you don't have to provide your login/password, but under subdivision 3, above, you could be required to grant access (whatever that means).

The social media bill's carve-outs sound a lot like recent CFAA cases, as well as general social media prying lawsuits. How, then, is this bill a boon for employee/student privacy? Also, if my employer requested that I grant access to a personal account as part of an "investigation," I would almost assuredly deny that request absent a subpoena. I am very curious how employers will use these exceptions going forward.

Update 1: 

Rep. Sargent wrote an Op-Ed in the Milwaukee Journal Sentinel when she proposed the bill (with other representatives). See here: Bipartisan bill protects social media accounts

Later, after the bill made it out of the Senate on a 33-0 vote, Sargent issued a press release. See here: Social Media Protection Bill Passes Senate on a 33-0 Vote. An interesting quote from the release:
I’m pleased that this common sense, bi-partisan legislation advanced further through the legislative process today.  It makes sense that personal internet accounts should be given the same, 4th Amendment protections as other aspects of our daily lives.  People have a reasonable expectation of privacy when interacting with their friends and family on Facebook or other sites. An employer, university, or landlord should not have access to private communications on social media sites. As technology evolves, so must our legislative efforts to protect our citizen’s privacy. The current generation will write the laws on social media.  We must do it carefully and with respect for all parties involved.
There should, in my opinion, be an asterisk (*) after that paragraph, noting that the exceptions may indeed swallow a large chunk of the well-intentioned proposal. If the bill's intent was to prevent forced disclosure of account credentials, then the text should have narrowly reflected that (note that the exceptions stop short of requiring disclosure of credentials, but still allow an employer to demand that access be granted). Further, just as some courts have attempted to bring TOS/Acceptable Use Policies/Employee Handbooks within the ambit of CFAA liability, this bill allows varying employer-defined standards to dictate whether an employee must grant access to a social media/personal email account.

Hypo: If an employee handbook states no surfing the internet for personal reasons (or updating social media) during work hours and there is "reasonable cause" to believe that a violation occurred - must the employee grant access to the account to prove otherwise? How is that giving personal internet accounts "4th Amendment protections...[similar to those in] other aspects of our daily lives?" What if the employee refuses to grant access - is that grounds for termination?

More fundamentally, though, is this question: now that the bill has become law, who benefitted more from its enactment: employers, or employees?

Monday, April 7, 2014

Court Rules in Favor of FTC, Wyndham Must Face Suit Over Data Breach

Today, a ruling was issued in FTC v. Wyndham Worldwide Corp. The court denied Wyndham's motion to dismiss, rejecting its argument that the Federal Trade Commission does not have authority under Section 5 of the FTC Act to regulate data security practices across all industries.

The U.S. District Court for the District of New Jersey declined to carve out a data-security exception to the FTC's broad regulatory authority under Section 5. It also refused to require the FTC to promulgate data security regulations before bringing "unfairness" claims against companies based on their data security practices, noting that previous enforcement actions "'constitute a body of experience and informed judgment to which courts and litigants may properly resort for guidance.'"

U.S. District Judge Esther Salas made clear that "this decision does not give the FTC a blank check to sustain a lawsuit against every business that has been hacked." However, the ruling disposes of the only viable challenge to the FTC's authority to regulate data security practices.

FTC Chairwoman Edith Ramirez issued a statement on the ruling via Twitter. I wrote about the Wyndham litigation in a previous post, and I look forward to further analyzing Judge Salas' ruling in a future post.

Wednesday, April 2, 2014

Undeterred by Challenges to its Authority, FTC Settles Data Security Actions with Credit Karma and Fandango

The Federal Trade Commission (FTC) has settled two more enforcement actions with companies that failed to adequately safeguard consumers’ personal information, despite challenges to its authority to regulate data security practices.

Credit Karma and Fandango Settle FTC Charges

Last week, the FTC announced that credit monitoring service Credit Karma and movie ticket outlet Fandango entered into settlement agreements that will require the companies to submit to 20 years of independent security audits, improve security measures, and refrain from misrepresenting their security and privacy processes. The FTC had charged both companies with violating Section 5 of the FTC Act (Section 5), which prohibits “unfair or deceptive acts or practices in or affecting commerce.” The agency alleged that Fandango and Credit Karma had engaged in unfair business practices by failing to properly implement Secure Sockets Layer (SSL) encryption on their mobile apps, thus leaving users’ payment information and other sensitive data vulnerable to “man-in-the-middle” attacks. The FTC also alleged that Fandango and Credit Karma had misrepresented the security of their apps, thereby deceiving customers.
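For readers unfamiliar with the technical failure at issue, it is simple to illustrate. Below is a minimal, hypothetical Python sketch, not drawn from the FTC's complaints or from either company's actual mobile code (the host name is invented). It contrasts the vulnerable pattern alleged, disabling certificate validation, with the correct default behavior:

import socket
import ssl

HOST = "api.example.com"  # hypothetical endpoint, assumed to serve TLS on port 443

# Vulnerable pattern: certificate and hostname checks switched off. Anyone
# on the network path can present a bogus certificate and read or modify
# the "encrypted" session; this is the man-in-the-middle exposure the
# FTC alleged.
insecure = ssl.create_default_context()
insecure.check_hostname = False
insecure.verify_mode = ssl.CERT_NONE

# Correct pattern: the default context verifies the server's certificate
# chain and hostname against the system trust store.
secure = ssl.create_default_context()

with socket.create_connection((HOST, 443)) as sock:
    with secure.wrap_socket(sock, server_hostname=HOST) as tls:
        # The handshake fails here unless the certificate is valid for HOST.
        print("negotiated", tls.version())

The point of the sketch is that "using SSL" is not the same as implementing it properly: with validation turned off, the traffic is still encrypted, but possibly to whoever answered the connection rather than the intended server.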

Since 2002, the FTC has brought and settled more than 50 similar data security enforcement actions against companies including Twitter, Rite Aid, and Petco. The FTC claims that it has broad authority under Section 5 to investigate and censure the data security missteps of companies across all industries, even though there is currently no overarching federal law mandating minimum data security standards.

Until recently, the FTC’s authority to regulate data security practices under Section 5 had gone largely uncontested. But in a highly anticipated decision, a New Jersey federal court may soon provide guidance as to the extent of this authority.

FTC v. Wyndham Poses the First Serious Challenge to FTC Authority Over Data Security

In June 2012, the FTC filed a complaint against global hospitality company Wyndham Worldwide Corporation in federal district court, alleging that Wyndham “failed to provide reasonable and appropriate security” measures on its computer networks, which led to a series of large-scale breaches of personal information and more than $10.6 million in fraudulent charges to customers’ accounts.

Specifically, the FTC charged that Wyndham engaged in deceptive business practices in violation of Section 5 by misrepresenting in its privacy policies and elsewhere the security measures it employed to prevent the unauthorized access of customer data. The agency further alleged that Wyndham’s failure to maintain reasonable data security constituted an unfair business practice, also in violation of Section 5.

Wyndham responded by filing a motion to dismiss both the deception and the unfairness claims in the FTC’s complaint. Wyndham asserted, inter alia, that the FTC “has neither the expertise nor the statutory authority to establish data security standards for the private sector” under the “unfairness” prong of Section 5. Wyndham pointed out that the FTC has publicly acknowledged that it “lacks authority to require firms to adopt information practice policies,” and that it has repeatedly asked Congress to grant it broad, cross-industry authority to do so. Instead, Congress has enacted industry-specific legislation – such as the Health Insurance Portability and Accountability Act (HIPAA), the Gramm-Leach-Bliley Act (GLBA), and the Fair Credit Reporting Act (FCRA) – none of which authorized the FTC to bring an action against Wyndham.

In its reply, the FTC argued that Congress deliberately delegated broad authority to the FTC under Section 5 to “permit the FTC to protect consumers from unanticipated, unenumerated threats.” The FTC cited a range of uses of its Section 5 authority that were upheld by the courts, including the regulation of online check drafting and delivery, telephone billing practices, sales of telephone records, and sales of unsafe farm equipment.

In November 2013, Judge Esther Salas of the U.S. District Court for the District of New Jersey heard lengthy oral arguments on Wyndham’s motion to dismiss. Counsel for Wyndham argued that a lack of clear statutory authority for the FTC to regulate data security, coupled with the August 2013 release of a draft cybersecurity framework by the National Institute of Standards and Technology, demonstrated that Congress did not intend for the FTC to take the lead on data security enforcement.

At the conclusion of oral arguments, Judge Salas seemed poised to rule in favor of the FTC, denying a motion by Wyndham to stay discovery until she ruled on its motion to dismiss. In January, however, Judge Salas agreed to delay her ruling and allow supplemental briefing after an FTC Commissioner commented on the vagueness in the “unfairness” prong of the FTC’s Section 5 authority during congressional testimony.

A ruling is expected in the coming weeks. If Judge Salas rules in favor of Wyndham, she could seriously undermine the FTC’s authority over data security practices going forward. If she denies Wyndham’s motion to dismiss, the decision could pave the way for increased data security enforcement by the FTC.

After an Unsuccessful Challenge to FTC’s Authority, LabMD to Shut Down

Following Wyndham’s lead, another company challenged the FTC’s authority to regulate data security in an enforcement action brought by the FTC in August 2013. The FTC charged LabMD, a clinical health testing company, with violating Section 5 after the sensitive personal information of 9,300 people was exposed via a public file-sharing network, leading some to have their identities stolen.

In November 2013, LabMD filed a motion to dismiss, arguing that the FTC does not have authority to regulate data security practices with respect to patient health data under the “unfairness” prong of Section 5. LabMD claimed that because it provided cancer diagnoses to the patients of its physician-customers, its information practices are regulated under HIPAA, which it had not been accused of violating. In its response, the FTC argued that it shares concurrent authority with the Department of Health and Human Services over health information security. Once again, the FTC maintained that Section 5 gives it broad authority over “unfair” data security practices.

In January, the FTC issued an order denying LabMD’s motion to dismiss. It concluded that Congress delegated broad authority to the FTC to regulate “unfair acts or practices,” including those of HIPAA-covered entities. The FTC reiterated its argument in Wyndham that federal courts had upheld its Section 5 authority in a wide variety of contexts. 

Just days after the FTC’s order, LabMD announced that it would shut down, citing the “debilitating effects” of the FTC’s four-year investigation of the company and calling it an “abuse of power.”

LabMD has twice requested federal court review of the FTC’s actions, but those cases were subsequently dismissed or withdrawn. It is not clear whether the company will seek further review.

Thus, the Wyndham litigation presents the only viable challenge to the FTC’s data security enforcement efforts at this time.

Data Security is a Top FTC Priority

Though questions about the FTC’s authority to regulate data security practices remain, the FTC has made data security a “top priority” and shows no signs of slowing its enforcement efforts in this area. Accordingly, federal regulatory action is a very real threat to companies across all industries that fail to implement reasonable data security measures.

Cybercrime Review welcomes Natalie Nicol as a guest writer

I am excited to welcome Natalie Nicol as a guest writer for Cybercrime Review. She hopes to contribute to the blog regularly.

Natalie received her J.D. from University of California, Hastings College of the Law in 2013. During law school, Natalie worked at the Digital Media Law Project, a project of the Berkman Center for Internet & Society at Harvard University; the Electronic Frontier Foundation; and the First Amendment Project. She served as the symposium editor for the Hastings Communication and Entertainment Law Journal, and presented a day-long symposium on the Computer Fraud and Abuse Act last March. She is a graduate of the Walter Cronkite School of Journalism and Mass Communication at Arizona State University.

Natalie’s interests include Internet law, privacy, free expression, and intellectual property issues. In her free time, she enjoys live music and spending time with her dogs, Cleopatra and Penny. In her current role, she develops online content for lawyers and law firms across the country.
You can follow Natalie on Twitter at @natnicol.