Friday, June 29, 2012

Study details teenager habits on the Internet

McAfee recently released a study concerning the activities of teenagers online, entitled "The Digital Divide: How the Online Behavior of Teens is Getting Past Parents." The study details what teenagers do online, including the illegal activities they participate in, what their parents know, and how teenagers work to keep their parents from learning of their misdeeds.


Here are a few interesting results:
  • 70% of teens admit to hiding their online activities from their parents
  • Over 50% of teens have hacked someone's social networking account
  • 32% have accessed pornography online
    • 43% of those access pornography on a weekly basis

Wednesday, June 27, 2012

FBI arrests 24 in international carding scheme

Graphic courtesy of FBI
The FBI announced yesterday the arrest of 24 people in eight countries for involvement in a carding scheme. The Bureau estimates the scheme involved 400,000 potential victims and a potential loss of $205 million. Read more here.


Quick update: TCNiSO cable modem hacker DerEngel's vagueness motion dismissed

DerEngel (Ryan Harris) of TCNiSO, who became famous for hacking cable modems and writing a book on the subject, has lost a motion to dismiss his case on the grounds that the federal wire fraud statute is unconstitutionally vague. His motion was originally filed prior to his jury trial but was ruled premature, so after his conviction on seven of eight counts of wire fraud, he renewed the motion. For reference, the original indictment can be seen here. He also has a motion for judgment notwithstanding the verdict currently before the court, but I find it highly unlikely that a judge will nullify a jury verdict given the clear-cut explanation handed down yesterday in the order denying his vagueness motion.

Harris put forth an interesting argument for vagueness, but a quick reading shows its shakiness:
he asserts that if the wire fraud statute can be applied to punish his sale of cable modem hacking products, then it can also be applied to make criminal the conduct of other companies that create products which are readily susceptible to illegal use and known to be used unlawfully by many of their users. . . . Citing tort cases, the defendant asserts, in effect, that because the manufacturer of a product that is put to a harmful or illegal use is not ordinarily civilly liable for the consequences of that use, sellers of products are not on notice of any potential for criminal liability arising from a third party's use of a product. 
In essence, the argument is irrelevant. The wire fraud statute is intentionally broad, covering a multitude of such schemes - the focus being on the intent to defraud. It is hard to argue that Harris did not know that selling his products would result in a loss to ISPs, and harder still to believe that he could plausibly argue the products had other legitimate uses.

The court ruled along the same lines, stating that the statute has a broad scope for a reason, and that it has been previously held to cover many different situations, including stealing telephone and satellite service - and this factual scenario is not an analogical leap - it's less than a baby step. Therefore, the court concluded that "in convicting the defendant, the jury had to find that the defendant acted deliberately, with intent to defraud the internet service providers, and knew that his scheme was unlawful - findings that were amply supported by the evidence. These requirements provide further support for the conclusion that the wire fraud statute is not unconstitutionally vague as applied in this case."

An attempt to make the case for "hacking back"

Justin's recent post, "The illegality of striking back against hackers," raises a number of interesting issues regarding organizations hacking in retaliation against those who hack them first. Given the current state of our legal system, it seems only fair that such an act be allowed. But as Justin correctly states, allowing retaliation is not a clear-cut issue and should not be considered lightly.

Hacking cases are complex. Beyond the cases where hackers go to the Internet to boast about their actions, it can be very difficult for law enforcement and prosecutors to track down the perpetrators. Facing a lack of resources, cybercrime investigators tend to focus their attention on issues such as child pornography. Meanwhile, hacking cases, and the identity (or other) thefts that follow, harm millions of Americans each year.

Of course, there is a remedy for consumers - file a lawsuit. After LinkedIn's recent security breach, many quickly jumped at the chance to file. LinkedIn committed a grave error, and attention needed to be brought to the issue so that it would fix the problem and other companies would be warned as well. No amount of investment in security, however, will make a system perfect, nor will it make a company immune from lawsuits and reputational damage when breaches occur.

Likewise, there is also a solution for the hacking victim - file a lawsuit. The CFAA allows a civil suit to be brought for certain damages, but it carries with it a multitude of problems. Often, the hacker could only be found by an investigation that would, in turn, violate the CFAA (see Justin's point number 2). They may be located in another country. They may not have any money, and even if they do, there may be no legal process for getting to it. For these reasons (and many others), companies like LinkedIn are often required to take the beating from the press and users, spend a lot of money beefing up security, and keep their fingers crossed.

Until law enforcement and prosecutors make these cases more of a priority, American organizations (and therefore, consumers) will be left without a true means of protecting themselves. But suppose we modified the CFAA to allow a self defense-type approach. In some ways, being hacked is like being punched in the face. If you retaliate in either situation, it's possible that others will come in defense of the attacker (imagine a bar fight where all of your friends are already outside, and you're now facing five guys twice your size). Similarly, if you were in a crowd and weren't sure who the punch came from, you can't just start hitting everyone to get back at the true puncher. However, if you can find them and timely respond, you may be able to defend yourself from further harm.

There are a few ways in which such a modification would be helpful:
  1. Investigation - Allowing victims to hack back would allow them to collect the information that would be essential to any civil or criminal case - information like the IP address of the hacker.
  2. Security Improvement - Patching security issues is much easier if you know how the infiltration happened. Further, knowing what resources hackers are using would allow technology security teams to better plug the holes in their networks. Perhaps the statute could require mandatory reporting so that the government could collect data in an effort to study developing patterns in the hacking world.
  3. "Cathartic Chest Pounding" (Justin's words) - Billion dollar corporations have at least one thing that common hackers don't - a billion dollars. Not every business has the ability to dedicate essentially unlimited resources to protecting themselves, but these do. Hacking back may result in more attacks at first, but the right successes might turn hackers away. (The problem here, of course, is that if large companies make themselves essentially hack-proof, the market for unauthorized data will result in attacks on small business that have no such resources.)
Obviously, there's no easy solution to this problem, but rest assured - the CFAA is not likely to hinder everyone. Now we have the waiting game to see how prosecutors, Congress, and corporations will respond.

Tuesday, June 26, 2012

The illegality of striking back against hackers

Recent security publications have highlighted an interesting trend: companies "hacking back" against infiltrators and potential data exfiltrators. The concept sounds intriguing - if the internet is the wild wild west, then what better way to participate in it than to allow the tumbleweeds to shift in the wind as you and your foe see who can draw first, or, more accurately, get the last shot. However, the Computer Fraud and Abuse Act provides no escape hatch for such actions; there is no Castle Doctrine in federal statutes relating to hacking, and no such doctrine in state cybercrime laws, either. Any such activities are ill-advised, likely illegal, and do nothing but encourage the escalation of cybercrime.

It's certainly clear that this is a response to the plethora of recent attacks, but more tellingly, I think it stems from the evident embarrassment that permeates any large company's mea culpa when it admits a breach has occurred. The article linked above notes that for-hire counter-strike firms have popped up. While the notion fulfills the age-old revenge-story meme, and could even make hackers think twice about striking your company (if they knew you would take such measures), the legal realities are nothing even remotely so poetic. Here is a non-exclusive list of the problems I see with such a strategy:

1.  Such actions tread into legal no-man's-land - as far as I can tell, there is no legal precedent in support of them. Conversely, there is a ton of case law that is not on your side, stating bluntly that unauthorized access is just that, unauthorized - no matter who the party "hacking" is.

2.  Any sophisticated hacker that would attack a semi-large or multi-national corporation isn't going to be hacking from their Dell PC at home, sitting behind a poorly secured Linksys router. They will be hitting through proxies, utilizing Tor, or more likely executing strikes through already compromised machines. The implication of this is three-fold - (a) in striking back, you may end up attacking an unwitting third-party who is likely also a victim of a computer crime - therefore, you will have even less sympathy if litigation arises; (b) if the originating host is an already compromised third-party, you could accidentally cause greater damage to hosts that are specifically enumerated in the CFAA, such as government computers, those containing national security information, or systems involved in medical care or public health/safety (See the DOJ's Prosecuting Computer Crimes manual) - and end up with a significant felony; or (c) (assuming a world where hacking-back becomes common), end up irritating a non-interested party, motivating them to also attack you.

3. While such actions may embolden or vindicate a hacked entity, they also put a larger target on your forehead. More specifically, if I were a hacker and my goal was simply to exfiltrate data, and you then attack me back, I am highly likely to escalate my attacks quid pro quo. Accordingly, what might have been simply a small case of data loss may turn into full scale damage to your systems; instead of sneaking in and out, you are now susceptible to much more malicious attacks - Denial of Service attempts, deletion of sensitive or irreplaceable data, actual hardware damage, or "doxing" of company executives. This undoubtedly will raise the price of the incident exponentially.

4.  It is unclear to me what an entity stands to gain by hacking back, other than the cathartic chest pounding that may occur when one can say that they "lost the battle, but won the war." Is that really worth a potential prison sentence?  Yes, your efforts could assist law enforcement in tracking down who hacked you, but it won't be so cathartic when the tables are turned, post investigation, to then investigate you for your actions.

5. Lastly, in 2008 the CFAA was amended to include a conspiracy offense, so you may not even need to actually breach an attacker to run afoul of the law. Could a corporate agreement with a strike-back contractor be sufficient to violate 18 U.S.C. 1030(b)?  That is not clear - but I'm betting we are going to find out if this trend evolves into the norm.

Monday, June 25, 2012

Indiana law banning sex offenders from social networking sites upheld

In 2008, Indiana enacted a law that banned certain sex offenders from using social networking if the platform was also used by minors (statute available here). An Indiana resident challenged the statute as violating the First Amendment, suggesting the prohibition would forbid him from checking his child's accounts, making political speech online, advertising his business, and connecting with family and friends. The district court, however, held that no First Amendment violation exists. Doe v. Prosecutor, 2012 U.S. Dist. LEXIS 86862 (S.D. Ind. 2012).

The court reasoned that the statute is narrowly tailored because it "only precluded [the plaintiff] from using web sites where online predators have easy access to a nearly limitless pool of potential victims.... [and] the vast majority of the internet is still at Mr. Doe's fingertips." For example, the court noted, he may still use LinkedIn because users must be 18 or over.

Additionally, the court suggested that many alternative channels of communication exist such as civic meetings, radio shows, letters to the editor, e-mail, and blogging.

Thursday, June 21, 2012

Facebooking juror fails in asserting SCA claim after forced disclosure of trial-related posts

A California juror recently posted to Facebook about the trial while it was in progress. Upon learning of the act, the juror was required to consent to the court's review in camera of his Facebook postings. He argued that the order violated the Stored Communications Act, but the Court of Appeals of California disagreed. Juror No. One v. Superior Court of Sacramento County, No. C067309 (Cal. Ct. App. 2012).

After trial, one of the jurors told the court that another had posted comments to Facebook about the evidence in the case. That juror had not seen the comments during the trial, but another juror had "liked" one of the posts. The juror-author admitted he posted during the trial, but said the content had nothing to do with evidence. One of the parties in the case attempted to subpoena the juror's Facebook records, but Facebook refused to disclose, citing the SCA. The court later ordered the juror to provide the postings himself.

On appeal, the court held:
Juror Number One has provided this court with nothing, either by way of the petition or the supporting documentation, as to the general nature or specific operations of Facebook. Without such facts, we are unable to determine whether or to what extent the SCA is applicable to the information at issue in this case. For example, we have no information as to the terms of any agreement between Facebook and Juror Number One that might provide for a waiver of privacy rights in exchange for free social networking services. Nor do we have any information about how widely Juror Number One's posts are available to the public. 
But even assuming Juror Number One's Facebook postings are protected by the SCA, that protection applies only as to attempts by the court or real parties in interest to compel Facebook to disclose the requested information. Here, the compulsion is on Juror Number One, not Facebook.
The juror also suggested that the order violated the Fourth and Fifth Amendments but did not actually present any argument or citation to support those theories.

Tuesday, June 19, 2012

Congratulations to Justin for being cited in a brief to the Wisconsin Supreme Court

Congratulations to my co-blogger, Justin Webb, whose published case note was recently cited in a brief to the Wisconsin Supreme Court. In the brief, the state is, among other issues, responding to an argument that the use of real-time tracking via GPS was unconstitutional when the search warrant specified the use of a passive GPS device (one that records data and is retrieved at a later time to obtain the location information). The device was used for four days, as opposed to multiple weeks in Jones. The case is State v. Brereton, and the AG's brief is available here (2012 WL 2160408).

Justin's note, "Car-ving out the Notions of Privacy: The Impact of GPS Tracking and Why Maynard is a Move in the Right Direction" (95 Marq. L. Rev. 751), presents the different ways courts analyzed GPS-related decisions, and suggests that the DC Circuit's use of the mosaic theory in Maynard (later styled as Jones in the SCOTUS case) is the proper approach. Here's the abstract:
In a controversial decision in 2010, the D.C. Circuit held that warrantless GPS tracking of an automobile for an extended period of time violates the Fourth Amendment. The D.C. Circuit approached the issue in a novel way, using “mosaic theory” to assert that the aggregation of information about an individual's movements, over an extended period of time, violated an individual's reasonable expectation of privacy. This Note discusses how state and federal courts have dealt with warrantless GPS tracking, and ultimately asserts that the Maynard court's decision was correct, insofar as it takes account of the interaction of changing technology and shifting societal notions of privacy. This Note urges the Supreme Court to incorporate an approach similar to Maynard within its Fourth Amendment jurisprudence. This Note concludes that failure to do so will contract already-cramped notions of privacy in the digital age, and facilitate a normative shift in conceptions of privacy that may be detrimental and irreversible.

Massachusetts appellate court to rule on compelled password disclosure of encrypted drive

A Massachusetts trial court, dealing with an encrypted drive in a criminal case, has asked the Massachusetts Appeals Court how to act. The question presented to the appellate court is:
Can the defendant be compelled pursuant to the Commonwealth’s proposed protocol to provide his key to seized encrypted digital evidence, despite the rights and protections provided by the Fifth Amendment to the United States Constitution and Article Twelve of the Massachusetts Declaration of Rights?
The case is Commonwealth v. Gelfgatt, Suffolk Superior Court No. SUCR2010-10491. Read more here in an article by Tom Ralph, Chief of the Cybercrime Division of the Massachusetts Attorney General's Office, published in the Cybercrime Newsletter, a publication of the National Association of Attorneys General and the National Center for Justice and the Rule of Law.

Cybercrime Review has extensively covered encryption issues. Click here for our archive on the topic.

Saturday, June 16, 2012

House Financial Services Committee holds hearing on cyber threats to financial institutions

Image courtesy of stock.xchng
The House Committee on Financial Services recently held a hearing entitled "Cyber Threats to Capital Markets and Corporate Accounts" with witnesses from all major areas of the financial industry. The testimonies presented great information about cyber attacks on the industry.

The webcast and other testimonies are available here.

Friday, June 15, 2012

Tech Check: 1st Circuit errs in description of file hashing

In United States v. Farlow, 2012 U.S. App. LEXIS 11121 (1st Cir. Jun. 1, 2012), the First Circuit erred in its description of how changing a file affects its hash value. Judge Thompson stated:
The problem for Farlow is that we have rejected the idea that government agents should so narrowly restrict their searches of digital devices. "When searching digital media for 'chats' and other evidence of enticement" -- like the bodybuilder image -- "government agents cannot simply search certain folders or types of files for keywords." Crespo-Rios, 645 F.3d at 43 (emphasis added). The same goes for other specific identifying information -- like hash values. This is because computer files are highly manipulable. Id. at 43-44. A file can be mislabeled; its extension (a sort of suffix indicating the type of file) can be changed; it can actually be converted to a different filetype (just as a chat transcript can be captured as an image file, so can an image be inserted into a word-processing file and saved as such). See id. Any of these manipulations could change a document's hash value. And in any event a limited hash-value search would not have turned up any chat transcripts (which, again, can be saved as image files) or other evidence of Farlow's New York crimes. The government therefore reasonably executed a broad search that fell within the scope authorized by the valid warrant it obtained.
The highlighted/bolded portion is not, in fact, completely true. It is true that capturing a chat transcript as an image, or placing it in a different document, changes the hash value. But merely changing the name of a file, or changing its extension using regular file operations, does not change that file's hash value. A friendly example of that on OS X:

(Screenshot: a terminal session on OS X computing the MD5 hash of a test file, then renaming the file and changing its extension; the reported hash is identical each time.)

And for clarity's sake, a duplicate test on Windows, using "hashtest2.txt" from the OS X machine as a starting point. I have copied the file and renamed it, as well as copied it and changed the extension:

(Screenshot: the same MD5 hash test repeated on Windows with the copied and renamed files.)

Notice that the hash never changes, from OS X to Windows. It remains eb1a3227cdc3fedbaec2fe38bf6c044a.
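For readers who want to reproduce the test themselves, here is a minimal sketch in Python (not the tool used in the screenshots; the file name and contents below are made up for illustration):

```python
import hashlib
import os
import tempfile

def md5_of(path):
    """Return the MD5 hex digest of a file's contents."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# A throwaway file standing in for the "hashtest" files above.
tmpdir = tempfile.mkdtemp()
original = os.path.join(tmpdir, "hashtest.txt")
with open(original, "wb") as f:
    f.write(b"some sample contents\n")

before = md5_of(original)

# Rename the file AND change its extension -- ordinary file operations
# that never touch the file's bytes.
renamed = os.path.join(tmpdir, "renamed.jpg")
os.rename(original, renamed)
after = md5_of(renamed)

# The digest is computed over the file's bytes, not its name or
# extension, so it is identical before and after.
assert before == after
```

The same script produces the same digest on OS X, Windows, or Linux, because the hash depends only on the file's contents.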

I point this out merely to prevent this erroneous statement from being perpetuated. I do not think, on the whole, that it really makes too much difference in the case itself. I'm open to opinions otherwise.

As a caveat, let me also note that a file's extension can also change when a program (e.g., MS Paint) converts one file format to another (PNG to JPG, for example), and that conversion rewrites the file's bytes and therefore does change the hash value. I think the words in this decision are just a little unclear and ambiguous.
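The distinction can be shown in a few lines: an actual conversion produces a different sequence of bytes, so the digest changes even though the underlying content survives. A minimal sketch, using zlib compression as a stand-in for an image-format conversion:

```python
import hashlib
import zlib

original = b"chat transcript: hello, world\n" * 10

# A format conversion (PNG -> JPG, a transcript captured as an image,
# etc.) rewrites the file's bytes; compression stands in for that here.
# The content is still recoverable, but the bytes are different.
converted = zlib.compress(original)

digest_original = hashlib.md5(original).hexdigest()
digest_converted = hashlib.md5(converted).hexdigest()

# Different bytes yield a different digest -- this is the kind of
# manipulation the court was (correctly) worried about.
assert digest_original != digest_converted
assert zlib.decompress(converted) == original  # content itself intact
```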

Thursday, June 14, 2012

8th Circuit affirms conviction despite defendant's entrapment defense

In United States v. Shinn, 2012 U.S. App. LEXIS 11863 (8th Cir. 2012), the Eighth Circuit affirmed a conviction for attempting to induce a child to engage in criminal sexual activity, rejecting the defendant's entrapment defense.

The defendant had engaged in an adult romance chat room conversation with what he believed to be a 14-year-old girl, who was in fact a law enforcement officer. The defendant told her that if she were older, he would want to take her out to dinner but said "you're just too young. . . . you want to stay a virgin for as long as possible." The "girl" then indicated she was not a virgin. The two later discussed getting together once she turned 18. As they continued to chat over several months, the conversation progressed to sexual experiences, and the defendant sent her pictures of himself wearing only underwear.

Nearly three months later, the two finalized plans to meet at a hotel. After arriving, the defendant was arrested, and in his car were condoms and cameras, along with the girl's name and contact information. A search of his computer revealed no chats with minors nor evidence related to child pornography. He was convicted and sentenced to sixty-three months in prison.

At trial, the defendant argued inducement because the chat took place in an adult romance chat room, the officer initiated some of the chats, and the alleged girl was portrayed "as a sexually precocious teenager." He also argued there was no evidence of predisposition. However, as the Eighth Circuit noted, the defendant initiated the first five conversations and was the first to mention sex (referencing her virginity), and he continued to bring it up in later conversations. Further, predisposition was shown because he "readily availed himself of the opportunity to perpetrate the crime."

Wednesday, June 13, 2012

ICANN reveal of new TLDs shows plan to introduce .sucks and over 1,000 others

This morning ICANN revealed the list of newly applied-for top-level domains (TLDs), which are certain to change a great deal about the face of the Internet. Common TLDs like .com, .net, and .org have become crowded, and ICANN decided to offer new ones with an application fee of $185,000. Nearly 2,000 applications were received.

Many were sought by businesses, such as .abc, .bing, .nokia, and .polo, seeking to protect their names before someone else has a chance to get to them. Quite foreseeably, Google, Amazon, and Microsoft each applied for a large number of TLDs for their products.

More generic TLDs like .news, .music, .pizza, and .vodka were also claimed - mostly by holding companies - likely creating a huge market for domain names in the near future. Perhaps the most interesting was .sucks, which will undoubtedly have many fun applications in the future (perhaps I should plan to register jeffreybrown.sucks?).

The problem with new TLDs, as I've discussed before (here), is that organizations are essentially forced to buy up domain names (and now entire TLDs) in order to protect their images. When the .xxx TLD was launched, companies and universities bought up domains like "www.hoosiers.xxx" to prevent improper use. Many opposed ICANN's plan (including several in Congress).

Click here for the full list.

Tuesday, June 12, 2012

N.J. appellate court finds no reasonable expectation of privacy in cell phone number; distinguishes between "generated" and "assigned" information to reach result

In State v. DeFranco, 2012 N.J. Super. LEXIS 92 (App. Div. Jun. 8, 2012), a New Jersey appellate court held that under the New Jersey Constitution, an individual does not have a reasonable expectation of privacy in his or her cell phone number. This might not be head-turning (at least it wasn't for me), but I was fascinated by how the court reached such a result - by distinguishing between "assigned" information (i.e., your cell phone provider assigns you a number) and "generated" information (i.e., ISP records, bank records, and other records that would be generated by a third party). I don't think I am convinced by this dichotomy, but first, let's get to the facts.

The defendant pled guilty to first and second degree assault, as well as endangering the welfare of a child, arising out of an incident that had happened years beforehand. The majority of the evidence was obtained by having the victim call the defendant on his cell phone (a number that was obtained by a school resource officer (SRO) and provided to a separate law enforcement agency), and essentially have him allocute on the phone to his previous transgressions.

The defendant's major assertion was that his cell phone number was private, and that for the SRO to hand it over to law enforcement was a violation of his privacy. Unfortunately for the defendant, he had previously provided that number for a school directory and for a school trip. The directory noted that the numbers within it were private, especially those unlisted, but the defendant never corrected an error that failed to mark his number as unlisted. Based on this disclosure, the court found that even if it were to find a privacy interest in the cell phone number, the defendant had waived such an interest. But, on to the merits.

The defendant asserted that a cell phone number was similar to bank records, ISP records, and other information in which New Jersey courts had found a privacy interest. The defendant tried to assert that New Jersey subscribed to an "informational privacy" model, a model adopted by a New Jersey appellate court but never explicitly adopted by the New Jersey Supreme Court:
In this regard, we note that in the Appellate Division's opinion in Reid, the panel stated that "New Jersey appears to have recognized a right to what has been called 'informational privacy.'" The panel described informational privacy in the following terms:
 Informational privacy has been variously defined as "shorthand for the ability to control                        the acquisition or release of information about oneself," or "an individual's claim to control the terms under which personal information . . . is acquired, disclosed, and used." In general, informational privacy "encompasses any information that is identifiable to an individual. This includes both assigned information, such as a name, address, or social security number, and generated information, such as financial or credit card records, medical records, and phone logs. . . . [P]ersonal information will be defined as any information, no matter how trivial, that can be traced or linked to an identifiable individual." 
 We adopt this formulation.
But, the Supreme Court did not adopt this "informational privacy" formulation when they heard Reid on appeal, stating that "[t]he contours and breadth of the standard are not entirely clear, and we need not address those issues in resolving the narrower constitutional question before us."

Because the Supreme Court declined to adopt that approach, the court here rejected the defendant's attempt to squeeze cell phone numbers into such a privacy regime:
We perceive a significant difference between the "generated information" afforded protection by the New Jersey Supreme Court in its privacy decisions and the "assigned information" that defendant seeks to protect in this case. The ISP records, the long-distance billing information, the banking records, and the utility usage records of Reid, Hunt, McAllister, and Domicz, respectively, constituted the keys to the details of the lives of those to which the seemingly innocuous initial information pertained. While in some circumstances, knowledge of a telephone number might be equally revelatory, here it was not. The number was simply a number. In the circumstances of this case, we do not find that defendant's professed subjective expectation of privacy is one that society would be willing to recognize as reasonable.
Fascinating, but ultimately problematic. On the surface, this seems like a very good attempt to draw a true distinction between types of information and the amount of privacy each should receive. But there are many "assigned" pieces of information that one would argue should receive privacy protection, such as your social security number, your IP address (and I would argue that this case muddles Reid - can you really distinguish the assigned IP address from the generated information it could reveal?), and your credit card number. State statutes protecting these kinds of information attest to the protection of "assigned" information and make the distinction unconvincing. Another example would be a private encryption key assigned by an internet company. I'm sure readers can think of many more examples. While the N.J. Supreme Court did not adopt the "informational privacy" approach, I don't think it meant to throw all of the "assigned" information noted above out the window.

Instead of attempting to make arbitrary distinctions that will ultimately fail to be the catch-all the court would like, this case should have been resolved on the third-party doctrine alone, given that the defendant had handed over the information previously. While New Jersey has tightened privacy in the third-party sphere, a little judicial restraint - declining to make a sweeping judgment - would have been the better approach here. Is society really unwilling to recognize this privacy interest as reasonable? I'm not so sure, especially if you disclose that number only to a tight-knit circle of friends and relatives.

Tor prevents DOJ investigation of child pornography website

A recent Freedom of Information Act request has revealed that the Department of Justice was unable to investigate those involved in a child pornography website due to the use of Tor (read more about Tor here).
The website was not viewable through normal web browsers ... and [] the site and it's contents could only be viewed on the Tor Network....
[B]ecause everyone (all Internet traffic) connected to the TOR network is anonymous, there is not currently a way to trace the origin of the website. As such, no other investigative leads exist.
As Ars Technica reports, Tor has not always stopped law enforcement from investigating illegal activity, including the April bust of the Farmer's Market.

Seton Hall L. Rev. publishes CSLI comment

If you're interested in the debate over cell site location information, a recently published comment by Christopher Fox in the Seton Hall Law Review will provide you with excellent background and analysis of the legal issues. Here's an excerpt:
Part II of this Comment will explain the process cell phones use for sending and receiving calls, messages, and information, as well as how CSLI data is computed to produce an approximate location of a cell phone. Part III will provide the relevant Fourth Amendment jurisprudence, explain the language and protections provided under the SCA, and examine the Third Circuit’s interpretation and application of the statute to a § 2703(d) order request. Part IV will argue that the Third Circuit’s cursory analysis of historic CSLI in light of the relevant Fourth Amendment jurisprudence was incorrect, and will highlight the resulting failure to set forth guidelines for § 2703(d) order requests that would end the application of discrepant standards. Finally, Part V will propose an amendment to the statute that will eliminate the current discrepancy and clarify the requirements for compelled disclosure of historic CSLI.
Click here for the comment.

A pending Fifth Circuit case also addresses the issue. Click here to read the briefs at Volokh Conspiracy. For recent events concerning CSLI from our blog, click here.

Monday, June 11, 2012

Video demonstrates packet switching

Check out this video from the World Science Festival, titled "There and Back Again: A Packet's Tale - How Does the Internet Work?" It uses really nice graphics to demonstrate packet switching.


Friday, June 8, 2012

Colorado appellate court holds that warrantless search of call history was lawful search incident to arrest

In Colorado v. Taylor, 2012 Colo. App. LEXIS 926 (Jun. 7, 2012), a Colorado appellate court held that an officer's warrantless search of a defendant's phone (its call history), subsequent to his arrest, was a lawful search incident to arrest. The issue was one of first impression in the court, and lacking on-point guidance from the Supreme Court, the court analogized the cell phone to a "container," fitting it within precedent that allows the search of containers incident to arrest. However, the court was somewhat troubled by that equivalency (a false one, I believe), stating that:
We recognize that many modern cell phones, tablets, and other personal electronic devices, like computers, are capable of storing and accessing large amounts of personal information, and we are aware of the concerns of other courts regarding searches for information contained in these devices. 
There is a decent amount of case law on both sides of this issue - resolving it in a plethora of different ways - plain view, search incident to arrest, and even exigent circumstances if the facts can get you there. The court offered what I would call an olive branch to those who feel this is an illegal search:
We agree with the practical consideration proposed by Magistrate Judge Torres of the Southern District of Florida, who stated: Perhaps the better alternative is to find a technological answer to this technological problem. We don't have the answer, but a good place to start is by a user password protecting the electronic device. Short of that practical step, the solution does not lie with a revamped analysis of the search incident to arrest doctrine. 
I want to focus on this part for a second, because it is an interesting idea to ponder. First, Judge Torres is correct that individuals who do not want to run into this situation need merely lock the phone with a passcode, so that the search would not be possible. Such an act would bolster an individual's reasonable expectation of privacy in that information. However, there's a problem here: many courts have already stated that individuals have a reasonable expectation of privacy in that information, sans any sort of security protections. Boiled down, that means the onus is on individuals to bolster a privacy interest that is already recognized, instead of legally defining what the scope of search incident to arrest really is. I'll call that a "duct tape" argument - yep, it'll fix it, but has it really rooted out and resolved the underlying problem? Absolutely not. Additionally, what about older cell phones that lack a passcode-lock feature - there are still many of them in use. Are those individuals to be treated differently because of their choice of phone? A resolution that turns on the technological capability of your phone is untenable.

This is not to say that it is an easy question to answer - but privacy interests should not turn on the distinction above. I also think it is a farce to compare the common understanding of a "container" with a cell phone. The definition of "container" is:

Definition of CONTAINER

: one that contains: as
a : a receptacle (as a box or jar) for holding goods
b : a portable compartment in which freight is placed (as on a train or ship) for convenience of movement
(Merriam-Webster Online)

How you find a cell phone in there, or a smartphone for that matter, is amazing. But just as I say that, let me turn to the special concurrence by Judge Booras. He flatly disagrees with me, arguing that the fact that a cell phone can contain far more information than a personal effect or a traditional container does not transform it into a special class requiring different treatment under the Fourth Amendment:
The majority acknowledges the concerns expressed by other courts regarding searches for information by way of modern cell phones, tablets, and other personal electronic devices based on the capability of such devices for storing and accessing large amounts of personal information. I write separately simply to point out that many other courts reject the view that the potential volume of information in a cell phone changes its character as a personal effect that may store considerable evidence of the crime for which a suspect has been arrested, and which may be searched incident to arrest
Judge Booras goes on to state that to treat such devices differently would cause "problems":

Likewise, courts have noted problems that would be caused by limiting a search on the basis of the quantity and types of information a device might hold. See United States v. Murphy, 552 F.3d 405, 411 (4th Cir. 2009) (to require police officers to ascertain the storage capacity of a cell phone before conducting a search would be an unworkable and unreasonable rule); United States v. Gomez, 807 F. Supp. 2d 1134, 1149-50 (S.D. Fla. 2011) (crafting a "bright line" rule to guide the scope of a cell phone search is "very difficult," and "exacerbated by the continually advancing technology and computing capabilities of hardware, such as 'smart' phones"); Diaz, 244 P.3d at 509 (a quantitative approach would create difficult line-drawing problems for both courts and police in determining whether a particular item's storage capacity is constitutionally significant).

I think these arguments overstate the problem. No one is asking police to make on-the-spot calculations of a phone's processing speed, RAM, or storage capacity. The police must make only one judgment: is the device capable of containing information in which a person would have a subjective expectation of privacy, one that society would recognize as objectively reasonable? Instead of wrangling at the lowest level (i.e., over the capabilities of the device), as I believe courts are doing here, let the wrangling happen at a higher level: let's require a warrant for devices we reasonably understand to be capable of containing large amounts of personal information - computers, smartphones, PDAs, iPads, etc. We have to draw lines at higher levels of abstraction, or we reduce the arguments to the futile assessments above.

Jones in Review: "Second majority" application to cell site location information

This week, Cybercrime Review is featuring a series of posts that takes a look at how federal and state courts are applying the Supreme Court's decision in United States v. Jones (previously discussed here).

Now that GPS use will often require a search warrant, law enforcement has increasingly turned to cell site location information (CSLI) in investigations. Some courts read Jones no further than the trespassory interest detailed by Justice Scalia, holding that Jones did not alter the rules for obtaining CSLI. Other courts, however, read the concurring opinions by Justices Alito and Sotomayor to stretch much further - suggesting that any attempt by the government to track citizens long-term requires a search warrant.

United States v. Sereme, 2012 WL 1757702 (M.D. Fla. 2012)
Jones does not apply to the use of CSLI. Because there was no physical trespass, no constitutional violation occurred. 
The Supreme Court has not answered the broader question presented here which is whether the Government's monitoring of an individual's movements through their cell phone for a certain period of time constitutes a “search” within the meaning of the Fourth Amendment, and more importantly whether that “search” requires a warrant issued upon probable cause of some other level of suspicion, such as the traditional reasonable suspicion.
In re Application, 2012 WL 989638 (D. Mass. 2012)
Despite the concerns of Alito and Sotomayor in Jones, probable cause is not required for issuance of a 2703(d) order to obtain historical CSLI until the Supreme Court or Congress “definitively considers the matter.”

Commonwealth v. Pitt, 29 Mass.L.Rptr. 445 (Mass. Sup. Ct. 2012)
CSLI is protected by a reasonable expectation of privacy, and the failure to obtain a warrant in violation of the Fourth Amendment requires suppression.

Wednesday, June 6, 2012

LinkedIn's negligence in failing to adequately secure user passwords

As most of you are aware, LinkedIn has apparently been hacked, and the passwords of 6.5 million users were exposed (if you weren't aware, change your password); the likely attacker operated out of Russia. Take all I say with a grain of salt, as LinkedIn recently tweeted "[o]ur team continues to investigate, but at this time, we're still unable to confirm that any security breach has occurred. Stay tuned here." But I doubt that this is a false alarm, and for the uninitiated, let me translate that tweet into honest technology-speak: "We've realized a breach occurred, we are panicking in a board room, and we are attempting to spin this in the least damaging light possible."

In this day and age it is unsurprising that a large site has been owned by hackers; I think most would agree that this has become commonplace. But it appears that corporations are failing to evolve based on the failures of their compromised brethren. While LinkedIn should be applauded (quietly) for using SHA-1 hashes to store passwords, it should then immediately be criticized for failing to also salt the passwords, or to use a cryptographically stronger algorithm such as SHA-256 or SHA-512.

A quick explanation will make their negligence clear. Let us assume that the chance of disclosure of passwords is merely a function of exposure to the internet, multiplied by the traffic of (aka attacks on) the company, divided by the security measures in place to prevent data disclosure. The equation can be noted as EXP * TR / SEC = DISC(%). That equation is of course not scientific, but it helps to explain the current atmosphere of the internet. The variables EXP and TR are hard to control by any company that is out on the internet, and in fact, most companies interested in making a profit want those values to increase. The key to business viability, trust of the consumer (industry respect), and meeting the responsibility placed on you as a data steward is the company's SEC value. I would also argue that the more vital the service you are offering on the internet is, the more responsibility and obligation you have to increase your SEC value.
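The back-of-the-envelope equation above can be sketched as a toy function (the variable names mirror the post; this is illustration, not a real risk model):

```python
def disclosure_risk(exposure, traffic, security):
    """Toy model from the post: DISC = EXP * TR / SEC.

    exposure: how reachable the service is from the internet (EXP)
    traffic:  the volume of attention/attacks the service draws (TR)
    security: the strength of measures protecting stored data (SEC)
    """
    if security <= 0:
        raise ValueError("security must be positive")
    return exposure * traffic / security

# EXP and TR grow with popularity; SEC is the one lever a company controls.
print(disclosure_risk(exposure=1.0, traffic=100.0, security=2.0))   # 50.0
print(disclosure_risk(exposure=1.0, traffic=100.0, security=50.0))  # 2.0
```

The point of the sketch is simply that, for a popular site, driving SEC up is the only realistic way to drive the disclosure risk down.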

By using unsalted SHA-1 hashes, LinkedIn essentially conceded that the value of DISC would be enormous, and it did so by negligently failing to salt those passwords. I say negligently because it is commonly understood in the industry that use of a salt makes cracking passwords significantly harder. Take, for example, the NIST Enterprise Password Management Guide, which states:
The use of salts also makes cracking more difficult—for example, using 48-bit salting values effectively appends a 48-bit password hash to the original password hash, assuming that the attacker does not have access to the salting values and that the salting values are well-chosen. So a salted password might have the same effective length, and therefore be roughly as time-consuming to crack, as an unsalted password  that is several characters longer. Also, salts typically use the full range of possible values, unlike passwords that have limited character sets, so salts can strengthen the effective password complexity. Policies for password expiration, length, and complexity should take into account the use of salts.
The use of salts defeats, or at least slows down, the use of "rainbow tables" - tables of already-calculated hashes of passwords. So, if I know that your site uses unsalted SHA-1 hashing, I take a wordlist of X words and hash all of them into a database. Then, when a Russian hacker discloses all of your password hashes, I merely correlate the disclosed values with the values in my table to recover passwords. I may not get all of the passwords, because the dictionary file originally used normally does not contain every word or possible combination of letters, numbers, and symbols, but I am guaranteed to get a large portion because users typically have bad (or shall I say weak and predictable) passwords.
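A minimal sketch of that correlation attack against an unsalted SHA-1 dump (the wordlist and "leaked" values here are invented for illustration):

```python
import hashlib

def sha1_hex(password):
    """Unsalted SHA-1, as LinkedIn reportedly used."""
    return hashlib.sha1(password.encode("utf-8")).hexdigest()

# The attacker precomputes hashes for a wordlist once...
wordlist = ["password", "123456", "letmein", "dragon"]
lookup = {sha1_hex(w): w for w in wordlist}

# ...and then cracking a leaked dump of unsalted hashes is a dictionary lookup.
leaked = [sha1_hex("letmein"), sha1_hex("correct horse battery staple")]
cracked = [lookup.get(h) for h in leaked]
print(cracked)  # ['letmein', None] - hits recover instantly; misses stay unknown
```

Real rainbow tables are more space-efficient than a flat dictionary, but the economics are the same: the precomputation is paid once and reused against every unsalted dump.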

Salting defeats rainbow tables because, one hopes, the would-be cracker does not know the salt a particular site used to hash its passwords, so a traditional rainbow table is useless. The attacker would instead need to create a rainbow table for every possible value of the salt - an extremely time-consuming task, and wholly not worth it. In all of these password-cracking scenarios there is a race going on: the number of entrants drops sharply as the complexity and difficulty of the passwords that could be cracked increases (that is, as the value of SEC increases). As an internet company, you need not outrun the bear that is attempting to expose your security weaknesses; you merely need to run faster than the others around you.
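A sketch of per-user salting makes the point concrete (SHA-256 is chosen here as an example of a stronger algorithm; the function name is mine):

```python
import hashlib
import os

def salted_hash(password, salt=None):
    """Hash a password with a per-user salt; the salt is stored alongside the digest."""
    if salt is None:
        salt = os.urandom(16)  # 128-bit random salt
    digest = hashlib.sha256(salt + password.encode("utf-8")).hexdigest()
    return salt, digest

# The same password hashed for two users yields two different digests,
# so a single precomputed rainbow table cannot cover both.
_, digest_a = salted_hash("letmein")
_, digest_b = salted_hash("letmein")
assert digest_a != digest_b
```

In practice a deliberately slow scheme such as bcrypt or PBKDF2 is a better choice than a bare salted hash, but even this sketch shows why a salt alone breaks precomputed tables.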

It is no argument for LinkedIn to assert that it could not feasibly have implemented a salt on its SHA-1 hashes, nor is it an argument to assert that others also use SHA-1. It is widely known that SHA-1 has been significantly weakened, and the SHA-2 algorithms (SHA-256, SHA-512) are better alternatives - the federal government urged federal agencies to stop using SHA-1 in March 2006, and a competition to select SHA-3 has been running since 2007.

We must assume that password hashes are going to be disclosed, given the plethora of weaknesses in software currently deployed worldwide. What we should not accept is stewards of our data failing to exercise due diligence in protecting our information. Real-world accountability for preventable security failures is what drives the value of SEC up.

Update: As expected, LinkedIn has confirmed the breach.

Jones in Review: Limitations of the Scalia rule

This week, Cybercrime Review is featuring a series of posts that takes a look at how federal and state courts are applying the Supreme Court's decision in United States v. Jones (previously discussed here).

Several courts have refused to completely throw out evidence obtained as the result of GPS device use. One court did so after finding the situation more like Knotts and Karo, another after finding the taint of GPS use was cured, one under inevitable discovery and standing issues, and several others under the good faith exception.

United States v. Arrendondo, 2012 WL 1677055 (M.D. Fla. 2012)
Law enforcement placed a GPS device in a package being shipped through UPS containing cocaine. Once the package was picked up, law enforcement used the tracking to follow the defendant. The defendant argued that Jones made the GPS use unconstitutional, but the court held that Karo and Knotts allowed the use because law enforcement did not trespass on the defendant’s vehicle - they only opened the package and placed the device in it.

State v. Adams, 2012 WL 1416414 (S.C. Ct. App. 2012)
Police unlawfully placed a GPS device on a vehicle. Five days later, a policeman in another state was notified to be on the lookout for the vehicle and to "stop it if it violated any traffic laws." After seeing the car change lanes twice without signaling, he pulled it over, and drugs were found. Though the use of GPS violated the Fourth Amendment, the taint was cured by the intervening criminal acts.

United States v. Santillanes, 2012 U.S. Dist. LEXIS 40532 (E.D. Mich. 2012)
Defendants argued a motion to suppress after GPS was used without a search warrant in the investigation. No objection could be made to certain evidence because the defendants lacked standing. Inevitable discovery saved the other evidence because the GPS information was not necessary to finding it. Another standing case is United States v. Hanna, 2012 U.S. Dist. LEXIS 11385 (S.D. Fla. 2012).

United States v. Amaya, 2012 WL 1188456 (N.D. Iowa 2012)
Despite the use of GPS in violation of the Fourth Amendment, suppression was not required because the good faith exception applied under Davis. Thus, suppression is not required for pre-Jones GPS usage in the Seventh, Eighth, and Ninth Circuits. See also United States v. Fata, 2012 U.S. Dist. LEXIS 66759 (D. Nev. 2012); United States v. Aquilar, 2012 U.S. Dist. LEXIS 64139 (D. Idaho 2012); United States v. Leon, 2012 U.S. Dist. LEXIS 42737 (D. Haw. 2012). (Read more about this issue here.)

United States v. Illescas, 2012 U.S. Dist. LEXIS 74594 (N.D. Ala. 2012)
The court upheld the use of evidence obtained through pre-Jones GPS use because 1981 precedent on beepers (from the old Fifth Circuit) supported a good faith application. (Read more here.)

Tuesday, June 5, 2012

"Hacking Exposed" might get your internet yanked - if you're in prison

In Green v. Maye, 2012 U.S. Dist. LEXIS 76338 (June 1, 2012), a prisoner petitioned for a writ of habeas corpus after the Bureau of Prisons denied him access to the prison internet system for ordering the book Hacking Exposed. The prisoner asserted a denial of due process, a violation of equal protection, and a violation of the Administrative Procedure Act. The gist of his argument was that he should not be denied internet access for ordering a book about internet hacking if sexual offenders were still allowed to send emails. Pretty flimsy. Of course, there is a First Amendment issue lurking underneath here - i.e., the reason for the denial wasn't that he ordered a book, but that he ordered a book about computer hacking. But, as the court stated, it is working with rational-basis scrutiny, such that:
the decision to deny Green access to TRULINCS was a result of his attempt to educate himself about computer hacking. Green's actions showed he was a potential threat to the prison computer system and possibly the public. The Court finds the BOP's decision to exclude Green from computer access is rationally related to a legitimate peneological interests.
While this case was rote and simply resolved, I found it worth noting for a few reasons: (1) the case is almost analogous to buying a book that tells you how to escape from prison; but (2) isn't allowing a prisoner to educate himself a valiant goal, no matter the subject?; and (3) isn't allowing an inmate to educate himself through the study of legal books (surely allowed) another form of allowing him to "escape" prison? I suppose the former is done in an illegal manner and the latter through legal process, but I doubt it is so clearly black and white.

While I tend to view this question in the altruistic light that the inmate was trying to rehabilitate himself by learning about cybersecurity (furthering the penological goal of rehabilitation), that, of course, may be supremely naive. The court obviously felt an ulterior motive was afoot - namely, that an inmate with access to a computer system (albeit, one hopes, a severely locked-down one) was attempting to arm himself with the knowledge to affect that system negatively.

Yes, knowledge is power, and therefore dangerous. But is the onus for maintaining prison safety on the Bureau of Prisons, institutionally, or on the inmate, personally, to give up the pursuit of learning?

Monday, June 4, 2012

Secrecy and the ECPA - Empirical evidence and an insider's view

I'd like to draw attention to what I believe is a fantastic piece about the evolution of the Electronic Communications Privacy Act of 1986 (ECPA) - what it was intended for, how it is currently being used, and in what ways it could be improved. Here is the article: Stephen W. Smith, Gagged, Sealed & Delivered: Reforming ECPA's Secret Docket; it is currently on SSRN, but is forthcoming in the Harvard Law & Policy Review Vol. 6. It is written by a United States Magistrate Judge who has observed firsthand the "secrecy" in action. The abstract alone was sufficient to enthrall me:
Every year federal magistrate judges issue tens of thousands of orders (the exact number is not known) allowing law enforcement access to the electronic lives of our citizens -- who we call, where we go, when we text, what websites we visit, what emails we send. These electronic surveillance orders are authorized by the 1986 Electronic Communications Privacy Act, and are concealed from public view by a legal regime of secrecy which includes sealed court files, gag orders, and delayed-notice provisions. In effect these orders are written in invisible ink -- legible to the phone companies and electronic service providers who execute them, yet imperceptible to targeted individuals, the general public, and even other arms of government, including Congress and appellate courts. Such secrecy has many unhealthy consequences: citizens cannot be well informed about the extent of government intrusion into their electronic lives; Congress lacks accurate empirical data to monitor the effectiveness of the existing statutory scheme and adapt it to new technologies; and appellate courts are unable to give effective guidance to magistrate judges on how to interpret ECPA's complex provisions in light of changing technology. The result is a statutory scheme bereft of the normal process of refinement and correction by appellate or legislative review. With Congress on the sidelines, appellate courts not engaged, and the public in the dark, the balance between surveillance and privacy has shifted dramatically towards law enforcement, almost by default. While it is certainly time to update the substantive provisions of ECPA, it is equally important to make structural changes in the law to eliminate unnecessary secrecy. Such reforms should include the elimination of automatic gagging and sealing orders, as well as the adoption of a publicly available warrant cover sheet to capture basic information about every electronic surveillance order.

Jones in Review: A look at the application of the Supreme Court's GPS decision

Over the next week, Cybercrime Review will feature a series of posts that takes a look at how federal and state courts are applying the Supreme Court's decision in United States v. Jones (previously discussed here).

Application of the Scalia majority opinion
Many courts, predictably, are simply applying the relatively easy rule established by the majority opinion, authored by Justice Scalia - if law enforcement physically trespasses on personal property, installs a GPS device to acquire data, and then uses the information, a search has occurred.

United States v. Lee, 2012 U.S. Dist. LEXIS 71204 (E.D. Ky. 2012)
GPS device use without a search warrant was unconstitutional. It was used to track the defendant’s movements, and the defendant was stopped for not wearing a seatbelt. Suppression was required because circuit precedent did not expressly allow use of warrantless GPS tracking prior to Jones. (View a more detailed post here.)

State v. Zahn, 2012 WL 862707 (S.D. 2012)
The South Dakota Supreme Court held that use of a GPS device to monitor a person’s movements for an extended period of time (26 days) was a search under Katz, relying on the concurring opinions of Sotomayor and Alito. Additionally, the physical trespass was a search under Jones (Scalia). No warrant exception applies to GPS use, and thus the application here requires suppression. “[P]olice must obtain a warrant before they attach and use a GPS device to monitor an individual's activities over an extended period of time,” wrote the court.

United States v. Peter, 2012 U.S. Dist. LEXIS 72485 (N.D. Ind. 2012)
This case did not involve GPS, but instead consisted of law enforcement entering the curtilage with a drug dog. Entering the curtilage was a trespass (like installing GPS) and it was to acquire information, an analogous situation to Jones. The court held that under Jones, it was a Fourth Amendment search, but the intrusion did not require a search warrant.

Use of concurring opinions to invoke privacy rights
Nearly every court that discusses Jones goes into a detailed discussion of the concurring opinions authored by Justices Sotomayor and Alito. Beyond their possible recognition as a "second majority opinion," these concurring opinions are being used by privacy advocates to invoke rights far beyond those established by Scalia, and some courts have taken note as well.

State v. Dykes, 2012 WL 1677055 (S.C. 2012)
The South Carolina Supreme Court struck down a statute mandating lifetime satellite surveillance for child sex offenders without judicial review of their risk of reoffending, finding that mandatory monitoring for low-risk offenders violates due process. The court, at least in part, relied on the concurring opinions in Jones to reach its decision. 
[T]he Constitution guarantees a certain freedom from government intrusion into the day-to-day order of our lives which lies at the heart of a free society. In our opinion, ‘neither liberty nor justice would exist’ if the government could, without sufficient justification, monitor the precise location of an individual twenty-four hours a day until he dies.

Friday, June 1, 2012

Tech Watch: Services allow users to communicate via webcam with strangers; What could go wrong?

Screenshot of Chatroulette. Click to enlarge.
Give a person a webcam, some anonymity, and the ability to connect to strangers over the Internet, and wayward things are certain to happen. In fact, they happen quite often.

Many online services now do just that, including Stickcam and Chatroulette. Both of these services allow users to connect via webcam with total strangers from around the world at random. Users can talk through their microphone or use the text chat feature. Stickcam has other features including live broadcast, group chat (seven webcam spots), and chat rooms (up to 12 live webcams and others by text). Stickcam requires an account to broadcast and allows privacy settings, but Chatroulette does not require an account.

Both services allow users to report inappropriate behavior. Chatroulette claims that they report IP addresses and screenshots of offenders to law enforcement. As far as I am aware, it is unknown whether or not either service retains videos or chat logs by default.

In a recent Massachusetts case, the defendant is charged with production of child pornography (among other crimes) after meeting a 15-year-old girl on Stickcam. United States v. Ritter, 2012 U.S. Dist. LEXIS 74401 (D. Mass. 2012). The defendant requested discovery of the victim's communications with others on the site to prove that she "had a history of 'showing' to others" and "acted freely and voluntarily without [his] coercion." The magistrate concluded that this information is irrelevant to the crime itself, but is probative as to the victim's credibility and is "thus producible 21 days before trial."

UPDATE: Another related site is Omegle, which helps users with common interests connect for text or video chat. It also allows you to anonymously ask a question which you then get to see two people discuss. (Thanks to reader Priscilla for the tip!)