Sunday, April 28, 2013

How Can a Stingray Track Your Cell Phone?


A stingray is no longer just a flat-bodied fish feared for the venomous barb in its tail; it is also the name of a technology used by the FBI and other law enforcement agencies to track the location of cell phone users. A cousin to the Triggerfish tracking technology seen on the HBO television program “The Wire,” the Stingray broadcasts signals to nearby cell phones and tricks those phones into thinking it is a cell tower. Based on a summary from the Wall Street Journal, the technology is useful to law enforcement in two separate ways: a user can have a specific location in mind and use the Stingray to capture data on all the devices being used in that setting, or have a specific device in mind and use the system’s antenna power readings to triangulate that device’s location. According to private vendors of these products, the devices can capture not only limited “metadata” but also content sent to and from the phones, like text messages and call audio.
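For a rough sense of the second technique, here is a minimal sketch of how antenna power readings could be turned into a position estimate: convert each reading to a distance with a log-distance path-loss model, then solve the resulting circle equations by least squares. This is not any vendor's actual method, and every number in it (transmit power, path-loss exponent, measurement sites) is a made-up assumption.

```python
import numpy as np

def rssi_to_distance(rssi_dbm, tx_power_dbm=-30.0, path_loss_exp=2.7):
    """Invert the log-distance model: rssi = tx_power - 10*n*log10(d)."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

def trilaterate(sites, distances):
    """Subtract the last circle equation from the others to linearize,
    then solve the system for (x, y) with least squares."""
    (xn, yn), dn = sites[-1], distances[-1]
    A, b = [], []
    for (xi, yi), di in zip(sites[:-1], distances[:-1]):
        A.append([2 * (xn - xi), 2 * (yn - yi)])
        b.append(di**2 - dn**2 - xi**2 + xn**2 - yi**2 + yn**2)
    pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return pos  # estimated (x, y) of the handset

# Hypothetical signal readings (dBm) taken from three positions of a mobile unit:
sites = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
readings = [-72.0, -60.0, -75.0]
print(trilaterate(sites, [rssi_to_distance(r) for r in readings]))
```

With enough readings from enough positions, an estimate in this spirit can get very tight, which is consistent with the two-meter accuracy claims discussed below.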

The Fourth Amendment constitutionality of the Stingray has come into focus in United States v. Rigmaiden, a case in federal district court in Arizona featuring a defendant accused of being the ringleader of a $4 million tax fraud operation who was caught, in part, through law enforcement's use of a Stingray. The government had Verizon modify the defendant’s phone (by changing the settings on his “air card”), and then used the Stingray, acting as a fake cell tower, to track the phone's location. The government has relied on a court order directed to Verizon as fulfilling the requirements of the Fourth Amendment and ECPA in this case. Because the government has conceded that this was an intrusion requiring a warrant, the case now turns on whether that court order was sufficient to authorize use of the Stingray.

The Stingray technology raises some interesting Fourth Amendment questions. If the “content-catching” capability of the device is disabled, is it like a “trap and trace” device used only to capture pen register-type non-content information? While the government conceded the issue in Rigmaiden, it may argue this point in future uses of the Stingray. Based on Jones, I think the government will be hard-pressed to claim that a device that can track a suspect to within two meters and sends signals through protected areas is not an intrusion for Fourth Amendment purposes.

Another issue with the Stingray is that it captures data pertaining not just to the target, but also to any mobile phone within range that connects to the fake cell tower. The government claims that it deletes third party data not pertinent to the case, but the fact of that data’s collection and possible interference with innocent users’ cell phone service is problematic.

The bigger problem, well illustrated in Rigmaiden, is that the government alluded to “mobile tracking equipment” in its affidavit to the court but did not go into the specifics of how the Stingray operated. While these devices have been in use for almost twenty years, they are still relatively unknown and no definitive case law exists that governs their use. Courts need to be informed as to exactly what these devices can and cannot do in order to figure out whether Stingrays are effective and appropriate law enforcement tools or overbroad and invasive data collectors.

Monday, April 22, 2013

More on FERPA

Today we had an introductory look at FERPA. As a part of our discussion, we touched on some of the proposed changes and discussed the related report on issues arising during emergency situations. In September 2011, I wrote a piece on this as a Research Assistant at the Silha Center for the Study of Media Ethics and Law. If you are interested in reading more about the report, the proposed changes, and other litigation involving journalists being denied access to information based on FERPA, the link is below.

http://silha.umn.edu/news/Summer2011/SchoolPrivacyLawChanges.html

Thursday, April 18, 2013

Technology Evolving in Criminal Defendants' Favor: The Example of Blood Draws in DUI Cases

Yesterday the Supreme Court decided Missouri v. McNeely, No. 11-1425, 2013 WL 1628934 (U.S. Apr. 17, 2013), a case concerning the Fourth Amendment and law enforcement nonconsensually drawing the blood of someone suspected of driving under the influence. Because drawing someone's blood is a search, the police may do so without a warrant only if a recognized exception applies. One such exception is when the "exigencies of the situation make the needs of law enforcement so compelling that a warrantless search is objectively reasonable." The imminent destruction of evidence is such an exigency. The crux of this case was whether blood-alcohol content, which is inherently evanescent, presents an "imminent destruction" exigency such that the police could warrantlessly take blood samples.

The State of Missouri sought a per se rule that the natural dissipation of BAC always constitutes an exigency. (Many states, including Minnesota, have held this.) The Court rejected that argument: because BAC declines in a gradual and predictable fashion, a particularized, case-by-case inquiry is required. The inquiry is thus whether the police can reasonably procure a warrant before the evidence, well, self-destructs. Because the State didn't bother to argue that there were exigent circumstances in this particular case (it just wanted the per se rule), the Court affirmed the judgment below suppressing the blood test evidence.
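To see why the Court considered the decline "gradual and predictable," here is a back-of-the-envelope sketch. The elimination rate and starting BAC are illustrative assumptions (a roughly constant rate around 0.015 g/dL per hour is commonly cited), not anything from the McNeely record.

```python
def bac_after_delay(initial_bac, delay_minutes, elim_rate_per_hour=0.015):
    """Zero-order elimination: BAC falls roughly linearly until it hits zero."""
    return max(0.0, initial_bac - elim_rate_per_hour * delay_minutes / 60)

# If a telephonic or e-warrant takes ~30 minutes, a driver who tested at
# 0.10 would still be well above the 0.08 limit when blood is drawn.
print(bac_after_delay(0.10, 30))  # -> 0.0925
```

The point is that a half-hour warrant delay costs less than a hundredth of a point of BAC, which is why the evidence usually isn't "imminently" destroyed.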


Okay, so far this case isn't terribly remarkable--but what I found striking about McNeely is that, unlike in most Fourth Amendment cases we've studied, here technological advances actually help criminal defendants. In other contexts, new technology has enhanced the police's ability to investigate and charge people with crimes: Katz (wiretapping); Smith (pen registers); Kyllo (thermal sensors); Jones (long-term GPS surveillance). The defendants won in many of those cases, but a central conflict common to each was the encroachment of technology on privacy without magisterial review for probable cause. (And this is to be expected, as the Fourth Amendment is about state action--technology use benefiting defendants wouldn't really figure into this analysis.)

In this case, the majority takes special note of "technological developments that enable police officers to secure warrants more quickly, and do so without undermining the neutral magistrate judge’s essential role as a check on police discretion." What sorts of technological developments? The majority points to a 1977 rule change allowing magistrates to issue warrants telephonically, and discusses states where police and prosecutors can apply for search warrants via email and video conferencing. In a separate opinion Chief Justice Roberts notes two particularly innovative jurisdictions:

Utah has an e-warrant procedure where a police officer enters information into a system, the system notifies a prosecutor, and upon approval the officer forwards the information to a magistrate, who can electronically return a warrant to the officer. Judges have been known to issue warrants in as little as five minutes. And in one county in Kansas, police officers can e-mail warrant requests to judges’ iPads; judges have signed such warrants and e-mailed them back to officers in less than 15 minutes.
(citations omitted).

Before yesterday, in a per se exigency rule state like Minnesota, law enforcement could take the blood of DUI suspects without consent or a warrant. Now, law enforcement can still do this only if there is a true exigency, which can include delays in the warrant process. But evolving technology like these e-warrant procedures and iPad requests has moved many situations out of the "exigency" box, thus necessitating a warrant. This is not to say all suspects will get off scot-free, but at least a magistrate will give their situation due consideration. And that is better for defendants than the cops being able to say, "Oh, there's no possible time to talk to a magistrate, I get to take your blood right now."
(By the way, it also helps that there haven't been competing technological advances in the body's ability to more quickly eliminate alcohol from the system.)

Tuesday, April 16, 2013

The More You Know: Facebook Gets Active About Privacy

Another day, another Facebook privacy story (for more, see here, here, here, and here). In an apparent effort to counter these types of stories, Facebook announced yesterday, in conjunction with the National Association of Attorneys General (NAAG), that it is launching an online safety campaign. The campaign is "designed to provide teens and their parents with tools and tips to manage their privacy and visibility both on Facebook and more broadly on the Internet." The effort is being led by Maryland Attorney General Douglas Gansler, and was announced on the heels of a "Privacy in the Digital Age" summit in Maryland.

Components of the program include an "Ask the Safety Team" video series where Facebook staffers answer  "frequently asked questions" about privacy and safety concerns; a tip sheet listing the top ten tools for controlling information on Facebook; and state-specific public service announcements with participating attorneys general (19 have signed on so far).

It's hard to fault Facebook for trying to make its privacy features more accessible and transparent, but it already has some vocal critics, such as the Center for Digital Democracy ("Facebook's practices regarding teens, especially its data collection and ad targeting, require an investigation-- not just some glossy educational videos and tip sheets."). It is definitely worth wondering to what degree this is a PR move versus an actual attempt to provide clearer user privacy. Most of the materials posted so far simply describe the existing privacy mechanisms ("What is tagging?" or "How do I use lists to manage the audience that's seeing my updates?") or are vague and common-sense based (#6 on the tip sheet: "Check your privacy settings."). And while describing its privacy features in multiple formats might be effective, it probably says something about the clarity of Facebook's privacy features if such dumbed-down redundancies are necessary. And what happens the next time the company changes its privacy policies? Hyping its current features to such an extent might make the inevitable changes even more confusing and alarming for users.

It's also interesting to consider this campaign in light of Facebook's 2011 settlement with the FTC, which required the maintenance of a comprehensive privacy plan, clear and prominent notice of information that will be disclosed to third parties, and biennial privacy audits. This campaign likely gives Facebook an additional tool to show the FTC how clear and upfront it is being with users about privacy, but it also provides a lot more ways in which Facebook is making promises to its users. It will probably need to be careful not to preach privacy too strongly and end up stepping on its own toes with promises it didn't mean to make and doesn't want to keep, which is why much of the material posted so far seems a little shallow.

Nevertheless, I think it's noteworthy that Facebook has felt the privacy backlash strongly enough that it is joining forces with attorneys general to promote Internet privacy. And even if the information is nothing new, it never hurts to have things available in a variety of locations and formats. What do you guys think of these measures? Useless, shallow PR, or a sign that Facebook is really trying to be more proactive and transparent about privacy?


The IRS and ECPA Loopholes

The IRS may be bypassing Fourth Amendment warrant requirements for accessing e-mails that have been stored for longer than 180 days. The government agency claims to have the authority to access these e-mails without a probable cause warrant because ECPA only requires such a warrant before obtaining e-mails that have been in electronic storage for less than 180 days. In response to this perceived violation of civil liberties, the ACLU has been criticizing the IRS for its lack of transparency on this issue and for skirting around Constitutional protections.

We know that e-mails are protected from unreasonable searches and seizures because the Sixth Circuit found as much in United States v. Warshak. That court held that the government needs to obtain a probable cause warrant before gaining access to an individual's stored e-mails. The IRS has not been clear, however, about whether it is following Warshak on a national basis or only in a limited geographical area. In one e-mail exchange, an employee of the IRS Criminal Tax Division asked Special Counsel whether Warshak would impact warrant procedures at all. Counsel's response was, "I have not heard anything related to this opinion. We have always taken the position that a warrant is necessary when retrieving e-mails that are less than 180 days old." Later internal communications from 2011 indicate that some within the IRS believe it unwise to seek such older e-mails without a warrant but that the Warshak opinion technically only applies in the Sixth Circuit.

18 USC 2703(a), which is a section of ECPA labeled "Required disclosure of customer communications or records: Contents of Wire or Electronic Communications in Electronic Storage," states that a governmental entity may require an electronic communication service to disclose the contents of an electronic communication that has been in electronic storage for 180 days or less only pursuant to a warrant. The IRS has read this 180-day line to justify obtaining e-mails stored longer than that without first getting a probable cause warrant.
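Purely as an illustration of the statutory line the IRS is relying on, here is a minimal sketch of the 180-day rule as a decision function; the dates are hypothetical.

```python
from datetime import date, timedelta

def legal_process_required(stored_since: date, today: date) -> str:
    """ECPA's 180-day line under 18 USC 2703: a probable cause warrant
    for 'fresh' stored e-mail, lesser process for anything older."""
    if today - stored_since <= timedelta(days=180):
        return "probable cause warrant"          # 2703(a)
    return "subpoena or 2703(d) court order"     # the claimed loophole

# A seven-month-old e-mail falls on the far side of the line:
print(legal_process_required(date(2012, 9, 1), date(2013, 4, 16)))
```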

Is there really reason to think that e-mails that have been stored for longer than 180 days are less worthy of privacy protection? Certainly such communication is not less worthy of constitutional protection, as Warshak holds, but why should ECPA fail to provide additional protection to it? Why should the potentially very personal information contained in old e-mails be subject to seizure by the government without a showing of probable cause?

There are two steps that should be taken to remedy this missing link in existing privacy law. First, the IRS must be transparent about its existing warrant procedures. The ACLU has been at the forefront in calling for transparency in this area, and that is commendable. The American people should know how easily their government can access personal communications without probable cause, and the IRS owes it to them to make this information public. Second, Congress needs to modernize ECPA. ECPA was written in 1986, and technology has improved by leaps and bounds since then. In March, new legislation was proposed in Congress that would update ECPA by getting rid of the 180-day rule. This legislation would ensure that the government obtains a search warrant to access all e-mails (even old ones) and notifies the individual of such disclosure within 10 days. There should be a vote within the year. The legislation has so far received wide support and is a necessary step toward ensuring adequate privacy for all of us.


Google’s Privacy Policy Under Attack


Remember Google's new one-size-fits-all privacy policy [a single privacy policy that governs all of Google's services] that we went through earlier in class? That policy is now under attack in Europe.

On April 2, 2013, data protection authorities from six European Union countries (the UK, France, Germany, Spain, Italy, and the Netherlands) announced a joint action against Google's new privacy policy. The EU authorities claimed that the new policy does not allow users to figure out what information is kept, how it is used across Google's various services, and how long it is retained. They demanded that Google specify those points and present the policy more simply.

Does Google care about this action? Definitely. The fines have a limited effect on Google, but the public-relations fallout can seriously damage Google's business. Google's annual revenue in 2012 was $50 billion, and is projected to be $60 billion in 2013. On the other hand, the maximum fine for a privacy violation in the EU is $1.3 million, and each EU member country would probably impose additional fines (but, in general, those are less than $1 million). Thus, the fines are not likely to raise any significant concern for Google. The public-relations effect, however, is huge. On the same day the EU action was announced, Alma Whitten, Google's first privacy director, stepped down after three years in the job.

What defense does Google have? Theoretically, Google can defend itself by invoking the EU-US safe harbor. The safe harbor provides a streamlined and cost-effective means for US organizations to satisfy EU privacy law by complying with an "adequacy" requirement, a lower privacy standard compared to the EU's regular privacy regime. The "adequacy" requirements are specified in seven areas: notice, choice, onward transfer (transfers to third parties), access, security, data integrity, and enforcement. In a nutshell, the EU-US safe harbor is a reduced privacy standard that lets US companies operate in the EU. Interestingly, Google's privacy policy explicitly states that Google complies with the EU-US safe harbor. Even more interestingly, Microsoft updated its privacy policy in April 2013, and on the first page of the new policy posted a large "EU-US Safe Harbor" icon claiming compliance.

Learning from this event, it seems that the real "teeth" in a governmental privacy action are not the fine but the stigmatization: "you don't respect our privacy." In my opinion, if Google decides to go to court, it will likely prevail on the EU-US safe harbor. Here, however, Google's privacy director stepped down immediately, without Google seeking refuge in the safe harbor. What do you think? Is the concern about stigmatization so strong that it de facto moots the EU-US safe harbor? What benefit does the safe harbor offer in practice? Any other thoughts?

Cyber Security vs. Cyber Terrorism

In an article posted online (http://www.gsnmagazine.com/node/28918?c=cyber_security), a privacy group is pushing the government to define cybersecurity standards. After President Obama signed an executive order concerning cybersecurity, questions have arisen about how the new cybersecurity framework will work and what it is supposed to target. The group is the Electronic Privacy Information Center (EPIC): "EPIC, which also pushed for solid privacy and civil rights protections based on DHS privacy policies and the president's 'Fair Information Practices' (FIPs), said most cybersecurity issues amount to civilian crimes committed in cyberspace and are best handled by state and local law enforcement and not as matters of national security. Misappropriation of intellectual property, cyber-espionage, and hacktivism don't pose national security threats and should not be treated as such, it said."

Overall, EPIC has been pushing for the cybersecurity framework to stay focused on reducing risks to critical infrastructure. The privacy group is concerned that, because the framework's reach is long and its restrictions are vague, personal privacy will be infringed upon. The framework, according to EPIC, is supposed to focus on threats to infrastructure, but the government then makes statements suggesting that cybersecurity generally falls under national security.

When I originally read this article I thought: okay, this is just another crazy privacy group looking for any possible complaint to lodge. But in retrospect I think it is something to follow. What type of authority does EPIC have? It seems like the words "national security" have magical powers that trump privacy concerns.

I thought that I would leave with a quote from EPIC, “Too often claims of national security tip the transparency-secrecy scale towards secrecy; thus the Cybersecurity Framework should clearly define what encompasses national security threats. Even those aspects of the Cybersecurity Framework that do fall under national security should be transparent whenever possible.”

Does the privacy group have a justified concern?

Monday, April 15, 2013

Doublethink: How Washington state nearly missed the whole point of privacy laws.

Back in February, Emily Marshall blogged about the "Social Networking Online Protection Act" (or SNOPA) that was recently reintroduced to Congress. This bill would prevent employers from forcing employees to hand over their social networking passwords as a condition of employment. While there isn't any current federal protection against this, several states have taken the task upon themselves, with 14 states introducing and 6 states passing this type of law in 2012. I agree with Emily's contention that this is common sense legislation. My home state of Washington seems to disagree. They introduced SB 6637 in April of 2012 and it hasn't been heard of since. Continuing the tradition, they introduced the extraordinarily similar SB 5211 this January, which has now made it out of committee and might just become a real boy someday. 

Now you might be thinking that another state maybe, possibly, potentially passing a similar bill to a lot of other states is hardly a sexy topic for this blog. And you'd be right, if it were not Washington state. This past Thursday, techdirt let me know that Washington found a way to mess this up: amending SB 5211 to give employers a new right to request social networking passwords. Now, as it turns out, techdirt's reporting was a bit late; the amendment was withdrawn on April 3rd. Regardless, the mere fact that the amendment was proposed gives us an opportunity to study how privacy law could go bad (and to mourn for the alternate world in which this passed).

The text of SB 5211 makes it unlawful for pretty much anyone to require their employees to give them their personal social networking passwords or to let them access the employees' accounts. It creates a civil action awarding a $500 penalty plus any actual damages and attorneys' fees if the employee wins, and awarding the employer reasonable expenses and attorneys' fees if the suit turns out to be frivolous. The amendment added an exception: an employer conducting an investigation could go ahead and demand a password or access to an employee's personal account if:
The investigation is undertaken in response to receipt of specific information about the employee or prospective employee's activity on his or her personal account or profile;
The purpose of the investigation is to: ensure compliance with applicable laws, regulatory requirements, or prohibitions against work-related employee misconduct; or investigate an allegation of unauthorized transfer of an employer's proprietary information, confidential information, or financial data;
The employer informs the employee or prospective employee of the purpose of the investigation, describes the information for which the employer will search, and permits the employee or prospective employee to be present during the search;
The employer requires the employee or prospective employee to share the activity or content that was reported;
The scope of the search does not exceed the purpose of the investigation; and
The employer maintains any information obtained as confidential, unless the information may be relevant to a criminal investigation.
On the one hand, the amendment does attempt to appear reasonable. The scope is somewhat narrow, there has to be specific information, and the information is kept confidential. But on the other hand, the state would have directly endorsed employer intrusion into employees' private accounts as part of a bill designed to protect employees from exactly that. To use one of those terrible analogies that judges love: your employer cannot force you to let them into your home to rifle through your personal journals and written correspondence because they expect to find evidence of employee malfeasance, but this amendment would have let them do the same to your hidden group posts and messages on Facebook. If you didn't like it, you could quit or be fired, and you would have no recourse.

Aside from being terrible policy, such a provision could also violate federal law. The techdirt article points out that, if willingly violating a website's terms of service counts as accessing a protected computer without authorization/exceeding authorized access, this scheme could lead to rampant CFAA violations. Facebook, for example, includes in its terms of service that "[y]ou will not share your password (or in the case of developers, your secret key), let anyone else access your account, or do anything else that might jeopardize the security of your account." Is the CFAA violated when your employer proceeds to access your account in breach of Facebook's terms? United States v. Drew suggests that it might not, but that doesn't foreclose the possibility that this law would have thrown employers from the frying pan into the fire by giving state approval to an action resulting in federal violations.

Fortunately, the law is going forward without this amendment, but one has to raise one's eyebrows when one of the most liberal states in the nation even considers such a provision. If Washington can think about it, then perhaps someone else will actually do it. This may seem like a First World Problem™, but I can't imagine any worker anywhere would be all that thrilled about their employer having the right to listen in when that worker privately complains to their buddies.

Sunday, April 14, 2013

CFAA: Protector or Obstructor of Privacy?


The Computer Fraud and Abuse Act prohibits “intentionally access[ing] a computer without authorization.” The law has been turned on its head to support overreaching prosecutions by the U.S. Department of Justice in cases involving violations of terms of use agreements and, quite recently, a case that led to the highly publicized suicide of Aaron Swartz.

But it’s done little for the New York Times, the Washington Post, Twitter, and Apple, all of whom have been the victims of high-profile hacking attempts this year. The CFAA and relevant international law haven't done much to protect against hackers in China, and according to technology lawyer Stewart Baker, “our government seems unwilling or unable to stop the attacks or identify the attackers.”

The government’s failure to protect has led to a debate about private victims taking a proactive approach to their cyber-security efforts: “hacking back” (also referred to as “backhacking”). Hacking back doesn’t necessarily mean destructive retaliatory measures; it could also include attempts at intelligence gathering, such as the recent success of two private cyber-security entities in Luxembourg that uncovered the inner workings of a Chinese hacker group’s network. 

Baker says “[t]he same security weaknesses that bedevil our networks can be found on the systems used by our attackers. . . . [In other words:] ‘Our security sucks. But so does theirs.’ ” Since the government isn’t taking advantage of exploiting hacker networks to vindicate and protect private security and privacy interests, some private entities want to take matters into their own hands. Unfortunately, the Justice Department thinks hacking back may be just as illegal under CFAA as the attacks that prompt it.  

Backhacking could be viewed as an active defensive tactic. Like using a private investigator, backhacking could be used to determine not only the identities of hackers, but to analyze their methods and learn more about how to stop them, as the Luxembourg groups’ hackback demonstrates. But others take a different view, such as Orin Kerr, who finds an analogy in traditional property law: you don’t have a right to break into your neighbor’s house to take back something she took from you. 

Should CFAA protect the privacy of hackers from the “active defensive tactics” of private entities? If not, what limits should be set? Among the various ways to immunize hackbacks by amending the CFAA, which would work best (e.g., a specific intent requirement, affirmative defense, etc.)? Would a push for a governmental approach to cyber-security law enforcement more responsive to private victims be more appropriate? 

Given that the threat identified in the hackback example above is suspected to be a Chinese military unit, maybe the vindication of cyber security and privacy should take a back seat to foreign policy. And maybe the U.S. Government is engaging the Chinese cyber threat in ways that implicate stakes much greater than those of the blueprints for the iPhone 8 Nano or your App Store purchase history.

Read more:
Detailed report on debate at BNA—Bloomberg
Luxembourg Hackback Story—Stewart Baker at Volokh.com
Luxembourg Groups' Report (for the tech savvy)—Malware.lu
Hackback Debates—Orin Kerr, Stewart Baker, and Eugene Volokh at Steptoe Cyber Blog
Mandiant's Report on the APT1 hacker group—Mandiant

Saturday, April 13, 2013

Privacy Concerns Fuel Drone Restrictions

http://www.reuters.com/article/2013/04/12/us-usa-drones-idaho-idUSBRE93B03S20130412

Last week, Idaho legislators passed a bill that would restrict law enforcement use of drones, making Idaho the second state, after Virginia, to pass such legislation. The law requires police to obtain a warrant in order to use drones to collect evidence of suspected criminal activity. Further, police are prohibited from using drones to surveil individuals or their property without written consent. However, exceptions exist: if a drone is being used in connection with illegal drugs, public emergencies, or search and rescue missions, no warrant is required. Legislators cited "high-tech window peeping" as a primary concern in passing the bill. The bill adds these restrictions on top of federal rules that limit the number of drones that can fly in U.S. airspace.

The exception for illegal drugs makes sense but also creates concern. The article cites finding illegal marijuana fields as a benefit that could come from using drones. From a Fourth Amendment perspective this is not that troubling, considering precedent does not extend Fourth Amendment protection to open fields, so long as they are not "curtilage" of the property. Oliver v. United States, 466 U.S. 170 (1984). However, the exception does not seem constrained to searching for fields with drugs. Thus, one could imagine a scenario in which drones keep watch on a street corner suspected of drug dealing, or follow a suspected drug dealer continuously and relentlessly. Using drones for these purposes seems far more intrusive and goes directly against what the legislators wanted to prevent in passing the bill. High-tech window peeping and stalking seem to be OK with the legislature so long as it is a suspected drug dealer who is being spied upon.

The opportunity for abuse exists within the latter two exceptions as well. Search and rescue missions could turn into continuous surveillance of suspected kidnappers or of those believed to know valuable information. Further, the public emergency language is broad enough that it's not clear exactly when that exception would apply; when crime becomes troublesome enough to constitute a public emergency is anyone's guess. Police officers already can, and do, perform all three of these tasks without drones. However, the ease and convenience with which drones could complete them is especially concerning.

Further, it is unclear what type of warrant is required. No "super warrant" requirement is mentioned, but it would seem prudent to impose a stricter warrant requirement for drones, similar to what was done with ECPA. Nor does the bill create a private right of action. Without these higher standards, the warrant requirement does not seem like that big of a barrier for police to get around. If the legislators are truly concerned about privacy infringement by drones, modeling the bill on ECPA's requirements would make sense. Ultimately, the bill is a step in the right direction. While it is unclear exactly what effect it will have, these initial restrictions are necessary as law enforcement use of drones increasingly becomes a reality.

Thursday, April 11, 2013

CISPA Moves Forward


The latest version of the Cyber Intelligence Sharing and Protection Act was adopted by the House Intelligence committee and now moves to the House floor for a vote. Ironically, the House Intelligence committee discussed and voted on CISPA – a bill that could significantly diminish citizen privacy protections – in private.

What’s more interesting are the privacy protection amendments that were left out of the recently adopted bill. Two of the left-out amendments touch on issues we’ve seen in other privacy laws: imposing liability on private entities that use information to discriminate, and obligating particular entities to de-identify certain data.

I find it interesting that these two safeguards were overwhelmingly defeated. For a bill that is already contentious for its potential to undermine the current privacy law framework regarding personal information in cyberspace, I would think basic protections against unnecessary identification and unauthorized discrimination would be appropriate to include. Additionally, considering other laws like the Fair Credit Reporting Act and the Genetic Information Nondiscrimination Act, where similar discrimination and identification concerns are addressed, it would seem that these familiar safeguards would be welcomed. Interesting arguments have also been made that personally identifiable information would not even aid the purpose of CISPA. And it seems quite clear that any information obtained through CISPA's information sharing regime would be inappropriate to use for anything beyond threat identification. With all that said, a bill lacking a robust non-discrimination provision seems insufficient.

The reluctance toward including these two amendments likely stems from the burden of compliance and the difficulty of enforcement. De-identifying data would no doubt be costly and burdensome.  Non-discrimination violations are difficult to establish because the burden of proof inevitably lies with the aggrieved individual trying to establish how the alleged wrong stemmed from unauthorized access/use. Despite these reasonable justifications for excluding such provisions, I would think CISPA needed some extra juice so as not to suffer the same defeat in the Senate that it did last year.  No doubt Pres. Obama’s executive order addressing cyber security will also influence how the legislature deals with CISPA. The next few months seem likely to provide some interesting developments regarding the government’s access to personal information that is communicated to private entities. 
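To make the compliance burden a little more concrete, here is a minimal, purely illustrative sketch of what a de-identification obligation might require before a threat record is shared. The field names are hypothetical, and real de-identification is far harder than dropping a few obvious fields.

```python
import hashlib

DIRECT_IDENTIFIERS = {"name", "email", "account_id"}

def deidentify(record: dict, salt: str = "rotate-this-salt") -> dict:
    """Replace direct identifiers with one-way pseudonyms so shared
    threat records can still be correlated without naming anyone."""
    out = {}
    for field, value in record.items():
        if field in DIRECT_IDENTIFIERS:
            digest = hashlib.sha256((salt + str(value)).encode()).hexdigest()
            out[field] = digest[:12]
        else:
            out[field] = value  # keep the actual threat indicators
    return out

print(deidentify({"name": "Jane Doe", "email": "jdoe@example.com",
                  "src_ip": "203.0.113.7", "malware_family": "Zeus"}))
```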

Tuesday, April 9, 2013

Privacy or the Common Good? Creating Rules in the Face of Uncertain Risks and Rewards



Since we read Privacy in Atlantis, I’ve been thinking about the initial allocation of rights to information. In that article, the authors conclude pretty quickly that the rights to information about an individual ought to be initially allocated to that individual. But I think this conclusion deserves a more robust discussion.

At its core, this debate is really about the risks and rewards offered by the use of information related to individuals. There's a pretty good argument that the rewards of access to information are so substantial that the default position ought to be to allocate information rights to the public unless there is a reason not to. Jane Yakowitz has written persuasively that data, at least de-identified data, is a common good and that public access ought to be maintained in most situations. For example, Yakowitz notes that empirical research using public data was responsible for debunking racist theories that Caucasians are cognitively superior.

Economic theory tells us that the most efficient rule is the one that has the fewest exceptions, because carving exceptions out of the default rule is costly. So if we think it is better to treat data related to individuals as a public good most of the time, then we should default to allocating information rights to the public. On the other hand, if we think that most of the time data related to individuals presents a risk of harm, we ought to allocate rights to individual information to the individuals themselves. So it seems that, rather intuitively, we should make the initial allocation of rights based on whether the harms outweigh the public goods.

My feeling is that we need not make this decision for all kinds of information at once. We don’t need a single unifying theory of privacy for all pots of information. Even though it might be economically efficient to have a single rule with minimal exceptions, such a rule might not balance risks and rewards very well. A unified approach privileges theoretical consistency over reality, which seems a bit silly to me. So if I were running the world, I’d identify pots of information where the balance clearly goes in one direction. For example, health information and financial information are particularly sensitive, so the risks are high. For this reason, we ought to initially allocate rights to that information to the individual and robustly protect those rights. By contrast, information about individuals’ shopping or television viewing patterns has high economic value and presents relatively minimal risks, so it should be freely usable.

So the problem comes up at the margins—where the weighing of harms and benefits isn’t obviously tilted in one direction.

One problem with this analysis is that both the value of information in the public domain and the privacy risks posed by unauthorized uses of information are unpredictable. Given this, maybe it's not really a matter of deciding whether the risks or the rewards are larger, but a question of how we want to handle uncertainty. In environmental regulation, there's been a movement toward the precautionary principle, which dictates that when harms are uncertain, the best course of action is to assume the harms will materialize and protect against them. Some have suggested that the precautionary principle is a good model for privacy regulation. I'm inclined against it, at least when reflexively applied. It seems to me that a more careful and context-specific analysis of probable risks and rewards, even while costly to conduct because of the inherent uncertainty involved, will produce a better balance of individual rights and common goods.

Monday, April 8, 2013

Do Digital Currency Investigations Differ Any from Paper Currency Investigations?



The New York Times reported yesterday on Bitcoin, a peer-to-peer currency that does not rely on a centralized bank or government treasury. This year the collective value of all Bitcoins passed one billion dollars. The Times notes that Bitcoin “comes in pretty handy for people who do not want their transactions monitored.” A significant criticism of Bitcoin is how it facilitates illegal activity; for example, it thwarts enforcement of money laundering laws. Much Bitcoin activity occurs on the Silk Road, an online marketplace for illicit goods like narcotics and firearms.

What makes Bitcoin interesting from a privacy perspective is that all Bitcoin transactions are registered in a publicly available ledger. This ledger shows payments as made to Bitcoin “addresses”: strings of about 33 numbers and letters. Unless someone can link a Bitcoin address to you, people inspecting the public ledger have no idea who is conducting these transactions. In other words, the addresses themselves do not constitute personally identifiable information.
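To illustrate why a ledger entry reveals an address rather than an identity, here is a minimal sketch of the standard address derivation: hash the public key, add a version byte and a checksum, and Base58-encode the result. The example key is made up, and whether hashlib exposes ripemd160 depends on the local OpenSSL build.

```python
import hashlib

B58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58check(payload: bytes) -> str:
    """Append a 4-byte double-SHA256 checksum, then Base58-encode."""
    checksum = hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
    data = payload + checksum
    n = int.from_bytes(data, "big")
    out = ""
    while n > 0:
        n, rem = divmod(n, 58)
        out = B58_ALPHABET[rem] + out
    for byte in data:           # each leading zero byte encodes as '1'
        if byte != 0:
            break
        out = "1" + out
    return out

def pubkey_to_address(pubkey: bytes) -> str:
    """hash160 = RIPEMD-160(SHA-256(pubkey)); version byte 0x00 = mainnet."""
    sha = hashlib.sha256(pubkey).digest()
    h160 = hashlib.new("ripemd160", sha).digest()
    return base58check(b"\x00" + h160)

# A made-up public key; the resulting ~34-character string is all an
# observer of the public ledger ever sees.
print(pubkey_to_address(bytes.fromhex("02" + "11" * 32)))
```

The one-way hashes are the point: nothing about a name, account, or location survives into the address.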

But, Bitcoin isn’t so different from good old paper greenbacks. To track down a particular person using Bitcoin to commit crimes, law enforcement could contact the other participant in the transaction, and get that person to disclose what they know. After all, most of the goods bought with Bitcoin are physical property, and have to be shipped to a real world address at some point. Savvy criminal Bitcoin users can use anonymization programs or use decoy addresses (analogous to what savvy criminal cash users can do) to obfuscate their trail, but of course that takes time, effort, and expertise. And then there’s the cashout. Because (at this point) not all financial transactions can be conducted with digital currency, people will need to trade Bitcoin for government-backed money, which brings them back into a world of close government regulation.

The Fifth Amendment’s protection against self-incrimination might be one area of law that becomes more relevant with Bitcoin. In re Boucher, 2009 WL 424718 (D. Vt. Feb. 19, 2009), rejected a defendant’s Fifth Amendment claim against a grand jury subpoena that ordered him to turn over a password encrypting files the government suspected harbored child pornography. Because at the time of his arrest the defendant had accessed his hard drive in front of the arresting officers, and the officers were able to see the file names (which strongly hinted that the files would contain child porn), the court ruled that providing access to the files "adds little or nothing to the sum total of the Government's information about the existence and location of files that may contain incriminating information,” thus defeating any Fifth Amendment claim. The opposite result was reached in United States v. Kirschner, 823 F. Supp. 2d 665 (E.D. Mich. 2010); see also United States v. Doe, 670 F.3d 1335 (11th Cir. 2012) (finding a valid Fifth Amendment privilege because, unlike in Boucher, the government did not know what was in the encrypted files it sought decrypted). So if the government does not have a certain amount of evidence that illegal activity is going on, it likely can't subpoena someone to decrypt their Bitcoin wallet. Banks would probably be a lot more willing to disclose financial transactions than would other Bitcoin users (putting aside any statutory restrictions on banks). The government could probably try to ban Bitcoin outright, but I'll leave for another day how wise or effective that would be.

At this point, it is still relatively costly and time-intensive for the government to identify illegal activity with Bitcoin and then track down participants in this activity. Still, people enthusiastic about financial privacy should hesitate before uncritically plunging into the world of Bitcoin.

Sunday, April 7, 2013

Facebook Home - Giving Zuckerberg Even More Info

Facebook recently unveiled a new product: an app for the Android operating system called "Facebook Home." The announcement is already raising privacy concerns, at least partially because Facebook is notoriously bad at appeasing the desires of the privacy-concerned. Facebook attempted to alleviate those concerns with a preemptive strike, and to an extent the current limits on the data collected by "Home" are a good sign.

Most Facebook users are aware that their information is being collected, and most Facebook users don't care. Some may not realize the extent to which their information is collected and subsequently sold. It doesn't matter which camp an individual "Home" user falls into; the bottom line is that more information is going to be put into Facebook's hands. Perhaps for the users who choose "Home" this isn't a concern, but it does raise third party protection issues--i.e., how much more information about those who choose not to use "Home" is going to be collected? American users cannot prevent the Facebook app from collecting information about how often they call or message a "Home" user, and potentially what those messages contain. The entirety of Facebook's "Home" data collection intentions is not clear, but once Facebook has established a large installed base, the capability to collect insane amounts of data and sell it would be only a small "Data Use Policy" change away.

After all, Facebook has a history of letting its users down on the privacy front, specifically by making mandatory changes with short or minimal notice.

Try combining this with Google Glass, running Android, for bonus privacy erosion.

Are you too fat to work at CVS?

Rummaging around the internet, I found out that CVS is making waves by (kind of) requiring employees under its health plan to disclose their weight, body fat, glucose levels, and other health information.  Those who refuse to do so will be required to pay an extra $50 per month for the plan.  So basically you could call it a "fat tax" (I always imagined that was coming down the pike at some point).

CVS justifies this action on several levels.  First, it claims not to be viewing the information itself.  Second, it claims that this policy is no different from what many other companies do (which is actually true: Whole Foods, for example, offers an increased employee discount for being skinny, while other companies probe similar areas of employee health).  Finally, the company claims its goals are to help identify health problems, to the benefit of both the company and the workers.

Privacy groups, obviously, are not thrilled.  Patient Privacy Rights founder Dr. Deborah Peel went so far as to call it "incredibly coercive and invasive" (see above links).  And naturally, there is concern as to whether or not such a rule infringes more on the privacy rights of the poor, as those who are less able to afford the $50 "tax" will be less likely to refuse the screening process.

I'm of two minds on this issue.  The free-market libertarian in me thinks that employers should be able to ask for information like this, as it IS certainly relevant when it comes to company costs (a heartless way to look at employee health, I know).  If you don't want to give it up, the option is available to either quit your job or pay the tax.  As Alonzo Harris told Hoyt in Training Day, it ain't like someone put a gun to your head.

And then, of course, there's the "it's none of your business" side of me, which tends to fly off the handle any time someone is forced against their will to disclose information that is sensitive in nature.  And in a very real way, this is certainly forcing the hand of some people.  As a law student who is a few dollars away from eating dirt, I can testify to the power of $50 a month.  In some cases, this isn't a real choice at all, but a requirement.

At the end of the day we're stuck with a balancing act between employers' rights and something that makes us feel very, very uncomfortable.  It (as in, employers prying into their employees' private lives) started years ago with smoking.  Now it's moving on to weight.  Next year will it be, "Do you have kids?  Yes?  You'll have to pay an extra hundo a month to work here, then"?

Saturday, April 6, 2013

Google Agrees to Educate Consumers About Privacy in Street View Settlement


Last month, Google agreed to a settlement with 38 state attorneys general in a case they brought over privacy violations committed by Google during the course of its Street View data collection. Google’s street mapping cars (apparently inadvertently) collected e-mail addresses, passwords, and other personal information from the unsecured home networks of unsuspecting computer users.

Google is paying a $7 million fine, to be divided among the states involved in the settlement. But more interestingly for our purposes, Google has also agreed to some privacy initiatives that are unlike those we’ve seen before in the context of FTC consent decrees (the FTC apparently ended its investigation of the Street View issue without imposing a fine). The settlement requires Google to conduct more robust privacy training for its employees and to create and promote a public service announcement instructing consumers on the importance of securing home Wi-Fi networks.


Consumer groups are (predictably) unimpressed by the idea of having Google educate its users about privacy. But on the other hand, maybe there's some wisdom in requiring (or encouraging) companies like Google and Facebook that collect large amounts of consumer data online to educate consumers about data privacy. Consumers clearly are willing to trust these companies with their personal information; perhaps they would also take notice if these companies encouraged users to take privacy more seriously. In order for the notice-and-choice model to be even reasonably effective at permitting consumers to balance their competing desires for privacy and services, consumers need to be well informed about the trade-offs and how to go about preserving the amount of privacy they want. To the extent that Google's privacy education campaign helps consumers get some of that information, it could be a positive development and a useful new tool for privacy regulators.