Wednesday, May 22, 2013

Persona Parameters in the Age of Reality Stars and YouTube Celebrities


The privacy tort of “persona rights” often manifests as an appropriation issue: someone has value in their personhood that is wrongfully exploited by another, and the tortfeasor is liable for a violation of privacy. As technology evolves and categories like “fame,” once clearly demarcated, become muddled, it remains to be seen whether the law can adequately morph to accommodate society’s changing sensibilities. This post will examine the current law, societal changes, and recent examples of this phenomenon in an effort to paint the landscape in broad strokes. 

Over the last ten years or so, technology has democratized public speaking platforms. In the past, significant resources were needed to gain access to wide audiences. Today, however, the internet and television have made these audiences much more accessible to common people. While many use this opportunity to advocate for social justice or to educate, some have sought to capitalize on their celebrity aspirations. With the increased numbers of reality television stars and YouTube celebrities, it has become harder and harder to ascertain where celebrity ends and popularity begins. It is therefore equally hard to assess whether an internet sensation has the kind of value/property right that persona rights originally aimed to protect. 

This particular privacy action is designed to safeguard the individual’s right to the exclusive use of his identity for his own benefit. While the case law on this matter is jurisdiction specific, courts have held that just because you are not a “celebrity” does not mean that your identity has no commercial value; that value can lie within a specific group of people even if not within the public at large. Conversely, a common defense to a charge of (mis)appropriation of someone’s identity is the claim that the plaintiff is not a celebrity and therefore has no verifiable worth in their identity. 

The tensions discussed above get to the heart of this post. If you do not need to be a “celebrity,” can a quasi-celebrity count? How large a group is necessary for there to be commercial value? Is a YouTube following significant? What about members of reality television casts who enjoy a brief stint on a popular season of Survivor? What counts as proof of commercial value? 

Recently, reality television “star” turned business mogul Kim Kardashian settled a lawsuit against Old Navy for allegedly using a lookalike in an ad campaign in violation of Kardashian’s publicity rights. While Kim may demand big bank for her stamp of approval on a clothing retailer’s threads, other reality TV stars may not be so highly thought of. Perhaps the best approach is to examine whether the misappropriated use would have earned the plaintiff money had they used their identity in that way, or whether they generally engage in that kind of commercial use of their identity.

Wednesday, May 15, 2013

One Step Forward and Two Steps Back: Possible Changes to CISPA and ECPA on the Horizon


So I am not sure if anyone is still checking out the blog at this point, but since I finally have a minute, I thought I would post the last couple of articles I found before the semester officially ends. The first one is interesting because it covers how portions of CISPA would create corporate immunity for certain acts of information gathering that, as the law stands now, could expose companies to liability for various torts and statutory violations. What is interesting about this, perhaps more so than the unsurprising notion that corporations want to immunize themselves from liability as much as possible, is that CISPA defines “cyber security threats” in such loose language.
I find it interesting that, with all the debate over ECPA and its antiquated definitions and application, a bill proposed in 2013 still suffers from many of the same problems. The exemption seems to leave it up to the company to determine what is, or is not, a cyber security threat. Obviously, this version of the bill did not pass, but this seems to be a troubling pattern in recent arguments over bills like CISPA and SOPA: they use broad terms that would sweep in far too much information, which is troubling for proponents of internet privacy, and they would make obtaining damages against corporations for information-based harms even more difficult than it is now. The article can be found here: http://motherboard.vice.com/blog/cispas-immunity-provision-would-allow-corporate-hacking
The second article I am including is about some of the proposed changes to ECPA. ECPA has been consistently criticized as antiquated in its application to modern technology and to evolving notions of privacy and communication where the internet and email are concerned. In particular, the Stored Communications Act portion seems especially out of date insofar as it fails to protect information and methods of communication that are arguably as important today as landlines were twenty years ago (after all, how many of you still have landlines?).
This amendment to ECPA would require disclosure by law enforcement when an individual's email has been accessed as a result of a warrant. Though there are two exceptions (a national security “gag order,” and when disclosure would tip off a subject), it seems like a step in the right direction for privacy. After all, unlike when the police come to your door to search your house, many people are unaware not only of what data they consistently send out to the world, but also of whether a search has even taken place. This would at least serve to make such searches more visible. The article can be found here: http://www.zdnet.com/plans-to-end-warrantless-email-searches-pass-senate-committee-7000014527/
Anyways, for those of you still reading (hey Professor!), I hope you have a good summer! I hope you find these articles as interesting as I did, and congratulations to the 3Ls among us. 

Undeleted Snapchat Photos – Privacy Scandal, or Ho-Hum?



If you were intrigued by this blog's previous post about Snapchat, you may recall that it described the preferred teenage sexting application (150 million photos uploaded per day, a few of which may not be sexually explicit or potentially incriminating) as a system that “allows the user to send photos to friends that automatically delete after a specified period of time.” While the post went on to speculate about how the application might leave a “digital trail,” and how this evidence might be discovered for litigation, the presumption seemed to be that the images were, in fact, deleted from both user devices and Snapchat's servers after they had been viewed.

Well, earlier this week, digital forensics firm Decipher Forensics announced its discovery that the photographs are not permanently deleted from users' phones, and that they can be retrieved, at least from Android devices, by anyone who has 'root' access to the phones. Although most users do not intentionally root or jailbreak their phones, this discovery presents a substantial concern for those who do, or whose devices are compromised by malware or other surveillance or forensic tools. The allure of Snapchat is that images sent via the service are supposed to be ephemeral, and the company promotes this as the key advantage of the application, stating: “[t]hey'll have that long to view your message and then it disappears forever.” But that sentence is immediately followed by the disclaimer “We'll let you know if they take a screenshot!” Does this constitute an enforceable promise that the messages will be permanently deleted, or does it plainly disclaim perfect confidentiality by warning users that Snapchat does not guarantee that the images will not be retained by their recipients?

The question is complicated by several factors. First, it is not clear from media reports whether the images are “deleted” by the operating system, as the company suggests they are, or merely renamed with a “.NOMEDIA” file extension that prevents users from accessing the images via the Android user interface. What is clear is that the images are not encrypted, and that they are not securely deleted, or “wiped,” from user devices after they expire. These technical differences reflect a spectrum of meanings of “deleted,” ranging from the least secure, in which a hypothetical application might retain all messages but exclude them from the user interface of the application so that they “disappear” from the user's perspective but remain accessible to digital experts, to the most secure, in which messages are encrypted at the endpoint devices using a nonce or one-time pad, and both the encrypted image data and the keys are securely deleted after a single use.
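To make that spectrum concrete, here is a hypothetical Python sketch. This is not Snapchat's actual code (its internals are not public, and the file names and parameters here are invented); it simply contrasts the least-secure end of the spectrum, rename-based hiding in the style of a ".NOMEDIA" marker, with overwriting before deletion and with one-time-pad encryption, where destroying the key destroys the message:

```python
import os
import secrets
import tempfile

def hide_by_rename(path):
    """Least secure: rename the file with a .nomedia-style suffix so gallery
    apps skip it. The image bytes remain fully intact and recoverable on disk."""
    hidden = path + ".nomedia"
    os.rename(path, hidden)
    return hidden

def secure_wipe(path):
    """More secure: overwrite the file contents with random bytes before
    unlinking, so the original data is not left behind in the freed blocks.
    (On flash media and journaling filesystems this is only best-effort.)"""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        f.write(secrets.token_bytes(size))
        f.flush()
        os.fsync(f.fileno())
    os.remove(path)

def encrypt_once(data, key):
    """Most secure end of the spectrum: XOR with a single-use random key of
    the same length (a one-time pad). Destroying the key after one use
    renders the stored ciphertext useless."""
    assert len(key) == len(data)
    return bytes(a ^ b for a, b in zip(data, key))

# Demo: hide vs. wipe vs. encrypt-and-discard-key.
with tempfile.TemporaryDirectory() as d:
    photo = os.path.join(d, "snap.jpg")
    original = b"not-really-a-jpeg"
    with open(photo, "wb") as f:
        f.write(original)

    hidden = hide_by_rename(photo)
    with open(hidden, "rb") as f:
        assert f.read() == original  # "deleted" from the UI, intact on disk

    secure_wipe(hidden)
    assert not os.path.exists(hidden)

key = secrets.token_bytes(17)
ciphertext = encrypt_once(b"not-really-a-jpeg", key)
assert encrypt_once(ciphertext, key) == b"not-really-a-jpeg"
# Once the key is discarded, the ciphertext alone reveals nothing.
```

The point of the demo is that all three paths make the image "disappear" from the user's perspective; they differ only in what a forensic examiner can recover afterward.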


While there is no perfectly secure solution, surely enforcement of privacy promises must reflect the common interpretation of those promises, not merely the porous, technical interpretations that service providers might prefer. Snapchat makes it clear that message recipients can “capture” images by taking screenshots while the images are being viewed, but this seems to suggest that screenshots are the only way the images can be retained, and that Snapchat will notify users if their images are retained in this manner. In fact, the images are easily accessible to anyone with either an advanced understanding of file storage or money to spend on file-retrieval applications and services. While this might not surprise many people in our privacy class, do you think that the company has misled ordinary consumers about the security and confidentiality of messages that they send via the service, or do Snapchat users have ample notice and knowledge of the risks of sending images to third parties? Can a carefully-crafted privacy policy cure any potentially misleading statements made in other, more user-accessible contexts? Do the following screenshots from the Google Play store affect your opinion?


Snapchat application description.


Snapchat users also viewed and installed...

Thursday, May 2, 2013

Privacy in 2031

In case anyone hasn't seen it yet, I thought I would point out this xkcd comic:



mouseover: "2031: Google defends the swiveling roof-mounted scanning electron microscopes on its Street View cars, saying they 'don't reveal anything that couldn't be seen by any pedestrian scanning your house with an electron microscope.'" 

I thought this comic was particularly interesting, since in many ways it seems like privacy is a diminishing concern in the United States. This article by CNN questions whether, 20 years from now, anyone will care about online privacy at all. Because many of the most popular websites are offered for free and paid for through advertising, the author suggests that within our lifetimes privacy is likely to become a thing of the past. Indeed, a few years ago Mark Zuckerberg publicly announced that he believes privacy is no longer a social norm. From that point of view, the proposition that people would object to xkcd's imagined Google Earth of 2031 is questionable.

Still, I'm not convinced that society will let privacy go without a fight. Recently, Senator Jay Rockefeller of West Virginia stated that he will be introducing legislation this year to force advertisers to honor "Do Not Track" requests. Similarly, the White House released a Consumer Privacy Bill of Rights in February of this year. While this document does not have any legal effect at the moment, it is notable because it gives an official voice to privacy advocates, and opens the door for future legislation to prevent abuses of personal information. If nothing else, this document reinforces the importance of the FTC's role in ensuring consumers know how their information will be treated, whether or not privacy remains a social norm.

Sunday, April 28, 2013

How Can a Stingray Track Your Cell Phone?


A stingray is no longer just a flat-bodied fish feared for the poisonous barb in its tail; it is also the name of a technology used by the FBI and other law enforcement agencies to track the location of cell phone users. A cousin to the Triggerfish tracking technology seen on the HBO television program “The Wire,” the Stingray puts out mobile phone signals to nearby cell phones and tricks those phones into thinking it is a cell tower. Based on a summary of the technology from the Wall Street Journal, this technology is useful to law enforcement in two separate ways: a user could have a specific location in mind and use the Stingray to capture data on all the devices being used in that setting, or have a specific device in mind and use the system’s antenna power readings to triangulate that device’s location. According to private vendors of these products, they are also able to obtain not only limited “metadata,” but also content sent to and from the phones, like text messages and call audio.
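The triangulation half of that description can be illustrated with a toy model. The sketch below is an assumption-laden simplification, not how a Stingray actually works (its internals are not public): it supposes the handset's transmit power is known and that signal strength decays by a free-space path-loss law, then grid-searches for the point whose distances to three measurement positions best match the ranges implied by the power readings.

```python
import math

# Illustrative assumptions, not Stingray specifics: a known handset transmit
# power and a simple free-space path-loss model with exponent n.
TX_POWER_DBM = 30.0
PATH_LOSS_N = 2.0

def received_power(antenna, phone):
    """Model: p_rx = p_tx - 10 * n * log10(d)."""
    d = math.dist(antenna, phone)
    return TX_POWER_DBM - 10 * PATH_LOSS_N * math.log10(d)

def implied_range(p_rx_dbm):
    """Invert the path-loss model to turn a power reading into a range."""
    return 10 ** ((TX_POWER_DBM - p_rx_dbm) / (10 * PATH_LOSS_N))

def trilaterate(readings, span=200, step=1.0):
    """Return the grid point whose distances to the measurement positions
    best match the ranges implied by the power readings (least squares)."""
    ranges = [(pos, implied_range(p)) for pos, p in readings]
    best, best_err = None, float("inf")
    steps = int(span / step) + 1
    for i in range(steps):
        for j in range(steps):
            x, y = i * step, j * step
            err = sum((math.dist((x, y), pos) - r) ** 2 for pos, r in ranges)
            if err < best_err:
                best, best_err = (x, y), err
    return best

# Demo: a phone at (40, 25), observed from three antenna positions.
phone = (40.0, 25.0)
sites = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
readings = [(s, received_power(s, phone)) for s in sites]
estimate = trilaterate(readings)
assert math.dist(estimate, phone) < 1.0
```

Even this crude version localizes the "phone" to within the grid resolution, which gives some intuition for how antenna power readings alone, taken from a few positions, can pin a device down to a few meters.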

The Fourth Amendment constitutionality of the Stingray has come into focus in US v. Rigmaiden, a case in Arizona federal District Court featuring a defendant accused of being the ringleader of a $4 million tax fraud operation who was caught, in part, due to law enforcement use of a Stingray. The government had Verizon modify the defendant’s phone (by changing the settings on his “air card”), and then used the Stingray, acting as a fake cell tower, to track the location of the phone. The government has relied on a court order directed to Verizon as fulfilling the requirements of the Fourth Amendment and ECPA in this case. Because the government has conceded that this was an intrusion requiring a warrant, the case now revolves around whether this court order was sufficient to enable use of the Stingray.

The Stingray technology raises some interesting Fourth Amendment questions. If the “content-catching” capability of the device is disabled, is it like a “trap and trace” device used only to capture pen register-type non-content information? While the government conceded the issue in Rigmaiden, it may argue this point in future uses of the Stingray. Based on Jones, I think the government will be hard-pressed to claim that a device that can track a suspect to within two meters and sends signals through protected areas is not an intrusion for Fourth Amendment purposes.

 Another issue with the Stingray is that it captures data pertaining not just to the target, but also to any mobile phone within range that connects to the fake cell tower. The government claims that it deletes third party data not pertinent to the case, but the fact of that data’s collection and possible interference with innocent users’ cell phone service is problematic.

The bigger problem, well illustrated in Rigmaiden, is that the government made allusions to a “mobile tracking equipment” in its affidavit to the court but did not go into the specifics of how the Stingray operated. While these devices have been in use for almost twenty years, they are still relatively unknown and no definitive case law exists that governs their use. Courts need to be informed as to exactly what these devices can and cannot do in order to figure out whether Stingrays are effective and appropriate law enforcement tools or overbroad and invasive data collectors.

Monday, April 22, 2013

More on FERPA

Today we had an introductory look at FERPA. As a part of our discussion, we touched on some of the proposed changes and discussed the related report on issues arising during emergency situations. In September 2011, I wrote a piece as a Research Assistant at the Silha Center for the Study of Media Ethics and Law. If you are interested in reading more about the report, proposed changes, and other litigation related to journalists being rejected access to information based on FERPA, the link is below.

http://silha.umn.edu/news/Summer2011/SchoolPrivacyLawChanges.html

Thursday, April 18, 2013

Technology Evolving in Criminal Defendants' Favor: The Example of Blood Draws in DUI Cases

Yesterday the Supreme Court decided Missouri v. McNeely, No. 11-1425, 2013 WL 1628934 (U.S. Apr. 17, 2013), a case concerning the Fourth Amendment and law enforcement nonconsensually drawing the blood of someone suspected of driving under the influence. Because drawing someone's blood is a search, the police may do so without a warrant only if a recognized exception applies. One such exception is when the "exigencies of the situation make the needs of law enforcement so compelling that a warrantless search is objectively reasonable." The imminent destruction of evidence is such an exigency. The crux of this case was whether blood-alcohol content, which is inherently evanescent, presents an "imminent destruction" exigency such that the police could warrantlessly take blood samples.

The State of Missouri sought a per se rule that the natural dissipation of BAC always constitutes an exigency. (Many states, including Minnesota, have held this.) The Court rejected that argument: because BAC declines in a gradual and predictable fashion, a particularized, case-by-case inquiry is necessary. The inquiry is thus whether the police can reasonably procure a warrant before the evidence, well, self-destructs. Because the State didn't bother to argue that there were exigent circumstances in this particular case (it just wanted the per se rule), the Court affirmed the suppression of the evidence.


Okay, so far this case isn't terribly remarkable-- but what I found striking about McNeely is that unlike most Fourth Amendment cases we've studied, here technology advances actually help criminal defendants. In other contexts, new technology has enhanced the police's ability to investigate and charge people with crimes: Katz (wiretapping); Smith (pen register); Kyllo (thermal sensors); Jones (long-term GPS surveillance). The defendants won in many of those cases, but one of the central conflicts common to each was this encroachment of technology on privacy without magisterial review for probable cause. (And this is to be expected, as the Fourth Amendment is about state action--technology use benefiting defendants wouldn't really figure into this analysis.)

In this case, the majority takes special note of "technological developments that enable police officers to secure warrants more quickly, and do so without undermining the neutral magistrate judge’s essential role as a check on police discretion." What sorts of technological developments? The majority points to a 1977 rule change allowing magistrates to issue warrants telephonically, and discusses states where police and prosecutors can apply for search warrants via email and video conferencing. In a separate opinion Chief Justice Roberts notes two particularly innovative jurisdictions:

Utah has an e-warrant procedure where a police officer enters information into a system, the system notifies a prosecutor, and upon approval the officer forwards the information to a magistrate, who can electronically return a warrant to the officer. Judges have been known to issue warrants in as little as five minutes. And in one county in Kansas, police officers can e-mail warrant requests to judges’ iPads; judges have signed such warrants and e-mailed them back to officers in less than 15 minutes.
(citations omitted).

Before yesterday, in a per se exigency rule state like Minnesota, law enforcement could take the blood of DUI suspects without consent or a warrant. Now, if there is a true exigency (which can include delays in the warrant process), law enforcement can still do this. But evolving technology like these e-warrant procedures and iPad requests has moved many situations out of the "exigency" box, thus necessitating a warrant. This is not to say all suspects will get off scot-free, but at least a magistrate will give their situation due consideration. And that is better for defendants than the cops being able to say, "Oh there's no possible time to talk to a magistrate, I get to take your blood right now."
(By the way, it also helps that there haven't been competing technological advances in the body's ability to more quickly eliminate alcohol from the system.)

Tuesday, April 16, 2013

The More You Know: Facebook Gets Active About Privacy

Another day, another Facebook privacy story (for more, see here, here, here, and here). In an apparent effort to counter these types of stories, Facebook announced yesterday, in conjunction with the National Association of Attorneys General (NAAG), that it is launching an online safety campaign. The campaign is "designed to provide teens and their parents with tools and tips to manage their privacy and visibility both on Facebook and more broadly on the Internet." The effort is being led by Maryland Attorney General Douglas Gansler, and was announced on the heels of a "Privacy in the Digital Age" summit in Maryland.

Components of the program include an "Ask the Safety Team" video series where Facebook staffers answer  "frequently asked questions" about privacy and safety concerns; a tip sheet listing the top ten tools for controlling information on Facebook; and state-specific public service announcements with participating attorneys general (19 have signed on so far).

It's hard to fault Facebook for trying to make its privacy features more accessible and transparent, but it already has some vocal critics, such as the Center for Digital Democracy ("Facebook's practices regarding teens, especially its data collection and ad targeting, require an investigation-- not just some glossy educational videos and tip sheets."). It is definitely worth wondering to what degree this is a PR move versus an actual attempt to provide clearer user privacy. Most of the materials posted so far are simply descriptive of the existing privacy mechanisms ("What is tagging?" or "How do I use lists to manage the audience that's seeing my updates?") or are vague and common-sense based (#6 on the tip sheet: "Check your privacy settings."). And while describing its privacy features in multiple forms might be effective, it probably says something about the clarity of Facebook's privacy features if such dumbed-down redundancies are necessary. And what happens the next time the company changes its privacy policies? Hyping up its current features to such an extent might make the inevitable changes even more confusing and alarming for users.

It's also interesting to consider this campaign in light of Facebook's 2011 settlement with the FTC, which required the maintenance of a comprehensive privacy plan, clear and prominent notice of information that will be disclosed to third parties, and biennial privacy audits. This campaign likely gives Facebook an additional tool to show the FTC how clear and upfront it is being with users about privacy, but it also provides a lot more ways in which Facebook is making promises to its users. It will probably need to be careful not to preach privacy too strongly and end up stepping on its own toes with promises it didn't mean to make and doesn't want to keep, which is why much of the material posted so far seems a little shallow.

Nevertheless, I think it's noteworthy that Facebook has felt the privacy backlash strongly enough that it is joining forces with attorneys general to promote Internet privacy. And even if the information is nothing new, it never hurts to have things available in a variety of locations and formats. What do you guys think of these measures? Useless, shallow PR, or a sign that Facebook is really trying to be more proactive and transparent about privacy?


The IRS and ECPA Loopholes

The IRS may be bypassing Fourth Amendment warrant requirements for accessing e-mails that have been stored for longer than 180 days. The government agency claims to have the authority to access these e-mails without a probable cause warrant because ECPA only requires such a warrant before obtaining e-mails that have been in electronic storage for less than 180 days. In response to this perceived violation of civil liberties, the ACLU has been criticizing the IRS for its lack of transparency on this issue and for skirting around Constitutional protections.

We know that e-mails are protected from unreasonable searches and seizures because the Sixth Circuit found as much in United States v. Warshak. The court found that the government needs to obtain a probable cause warrant before gaining access to an individual's stored e-mails. The IRS has not been clear, however, about whether it is following Warshak on a national basis or only in a limited geographical area. In one e-mail exchange, an employee from the IRS Criminal Tax Division asked Special Counsel whether Warshak would impact warrant procedures at all. Counsel's response was "I have not heard anything related to this opinion. We have always taken the position that a warrant is necessary when retrieving e-mails that are less than 180 days old." Later internal communications from 2011 indicate that some within the IRS believe it unwise to seek such older e-mails without a warrant, but that the Warshak opinion technically only applies in the Sixth Circuit.

18 USC 2703(a), a section of ECPA labeled "Required disclosure of customer communications or records: Contents of wire or electronic communications in electronic storage," states that a governmental entity may require an electronic communication service to disclose the contents of an electronic communication that has been in electronic storage for 180 days or less only pursuant to a warrant. The IRS has read this to imply that e-mails stored longer than 180 days may be obtained without first getting a probable cause warrant. 
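The 180-day line drawn by § 2703(a) reduces to a simple date comparison, which makes its arbitrariness easy to see. A hypothetical sketch (the function names are mine, not from the statute or from any pending bill):

```python
from datetime import date, timedelta

# The statutory window in 18 USC 2703(a).
ECPA_WINDOW = timedelta(days=180)

def warrant_required_current(stored_on: date, today: date) -> bool:
    """The IRS's reading of current law: a probable cause warrant is
    required only while the e-mail has been in storage 180 days or less."""
    return (today - stored_on) <= ECPA_WINDOW

def warrant_required_proposed(stored_on: date, today: date) -> bool:
    """The proposed reform: a warrant for all stored e-mail, however old."""
    return True

# An e-mail stored just over six months ago falls outside the window...
assert warrant_required_current(date(2012, 10, 1), date(2013, 4, 15)) is False
# ...while a recent one is still protected...
assert warrant_required_current(date(2013, 3, 1), date(2013, 4, 15)) is True
# ...and under the reform, age would no longer matter.
assert warrant_required_proposed(date(2012, 10, 1), date(2013, 4, 15)) is True
```

Nothing about the message changes on day 181; only the government's burden does, which is exactly the oddity the reform legislation targets.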

Is there really reason to think that e-mails that have been stored for longer than 180 days are less worthy of privacy protection? Certainly such communications are not less worthy of constitutional protection, as Warshak holds, but why should ECPA fail to provide additional protection to them? Why should the potentially very personal information contained in old e-mails be subject to seizure by the government without a probable cause showing? 

There are two steps that should be taken to remedy this gap in existing privacy law. First, the IRS must be transparent about its existing warrant procedures. The ACLU has been at the forefront in calling for transparency in this area, and that is commendable. The American people should know how easily their government can access personal communications without probable cause, and the IRS owes it to them to make this information public. Second, Congress needs to update ECPA to modernize it. ECPA was written in 1986, and technology has improved by leaps and bounds since then. In March, new legislation was proposed in Congress that would update ECPA by getting rid of the 180-day clause. This legislation would make sure that the government obtains a search warrant to access all e-mails (even old ones) and also that the government notifies the individual of such disclosure within 10 days. There should be a vote within the year. This legislation has so far received wide support and is a necessary step to ensuring adequate privacy for all of us.


Google’s Privacy Policy Under Attack


Remember Google's new one-size-fits-all privacy policy [a single privacy policy that covers all of Google's services] that we went through earlier in class? That policy is now under attack in Europe.

On April 2nd, 2013, six countries in the European Union–the UK, France, Germany, Spain, Italy, and the Netherlands–announced a joint action against Google's new privacy policy. The EU data protection authorities claimed that the new policy does not allow users to figure out what information is kept, how it is used across Google's various services, and how long the data are retained. The authorities demand that Google address those issues and put up a simpler presentation of the privacy policy.

Does Google care about this action? Definitely. The fines have a limited effect on Google, but the public-relations fallout can seriously damage Google's business. Google's annual revenue in 2012 was $50 billion, and is projected to be $60 billion in 2013. On the other hand, the maximum fine for a privacy violation in the EU is $1.3 million, and each EU member country could impose additional fines (but, in general, these are less than $1 million). Thus, the fines are not likely to raise any significant concern for Google. The public relations effect, however, is huge. On the same day the EU action was announced, Alma Whitten, Google's first privacy director, stepped down after three years in the job.

What defense does Google have? Theoretically, Google can defend itself by invoking the EU-US safe harbor, which provides a streamlined and cost-effective means for US organizations to satisfy EU privacy law by complying with an “adequacy” requirement. The “adequacy” requirement is a lower privacy standard compared to the EU's regular privacy regime, and is specified in seven areas: notice, choice, onward transfer (transfers to third parties), access, security, data integrity, and enforcement. In a nutshell, the EU-US safe harbor is a reduced privacy standard that lets US companies operate in the EU. Interestingly, Google's privacy policy does explicitly state that Google complies with the EU-US safe harbor. Even more interestingly, Microsoft updated its privacy policy in April 2013, and on the first page of the new policy posted a very large “EU-US Safe Harbor” icon, claiming compliance. 

Learning from this event, it seems that the real “teeth” in a governmental privacy action is not the fine but the stigmatization: “you don’t respect our privacy.” In my opinion, if Google decides to go to court, it will likely prevail on the EU-US safe harbor. Here, however, Google's privacy director stepped down immediately, without the company seeking justification under the safe harbor. What do you think? Is the concern of stigmatization so strong that it de facto moots the EU-US safe harbor? What benefit does the safe harbor offer in practice? Any other thoughts?

Cyber Security vs. Cyber Terrorism

In an article posted online (http://www.gsnmagazine.com/node/28918?c=cyber_security), a privacy group is pushing the government to define cyber security standards. After President Obama signed an executive order concerning cyber security, questions have arisen about how the resulting cybersecurity framework will work and what it is supposed to target. The Electronic Privacy Information Center (EPIC), "which also pushed for solid privacy and civil rights protections based on DHS privacy policies and the president’s “Fair Information Practices” (FIPs), said most cyber security issues amount to civilian crimes committed in cyberspace and are best handled by state and local law enforcement and not as matters of national security. Misappropriation of intellectual property, cyber-espionage, and hacktivism don’t pose national security threats and should not be treated as such, it said." 

Overall, EPIC has been pushing for a cyber security framework that works by reducing risks to critical infrastructure. The privacy group is concerned that because the framework's reach is long and its restrictions are vague, personal privacy will be infringed upon. According to EPIC, the framework nominally focuses on threats to infrastructure, but then makes statements suggesting that cyber security falls under national security. 

When I originally read this article I thought -- okay, this is just another crazy privacy group looking for any possible complaint to lodge. But in retrospect I think it is something worth following. How much weight does EPIC's objection actually carry? It seems like the words "national security" have magical powers that trump privacy concerns. 

I thought that I would leave you with a quote from EPIC: “Too often claims of national security tip the transparency-secrecy scale towards secrecy; thus the Cybersecurity Framework should clearly define what encompasses national security threats. Even those aspects of the Cybersecurity Framework that do fall under national security should be transparent whenever possible.”

Does the privacy group have a justified concern?

Monday, April 15, 2013

Doublethink: How Washington state nearly missed the whole point of privacy laws.

Back in February, Emily Marshall blogged about the "Social Networking Online Protection Act" (or SNOPA) that was recently reintroduced to Congress. This bill would prevent employers from forcing employees to hand over their social networking passwords as a condition of employment. While there isn't any current federal protection against this, several states have taken the task upon themselves, with 14 states introducing and 6 states passing this type of law in 2012. I agree with Emily's contention that this is common sense legislation. My home state of Washington seems to disagree. They introduced SB 6637 in April of 2012 and it hasn't been heard of since. Continuing the tradition, they introduced the extraordinarily similar SB 5211 this January, which has now made it out of committee and might just become a real boy someday. 

Now you might be thinking that another state maybe, possibly, potentially passing a similar bill to a lot of other states is hardly a sexy topic for this blog. And you'd be right, if it were not Washington state. This past Thursday, techdirt let me know that Washington found a way to mess this up: amending SB 5211 to give employers a new right to request social networking passwords. Now, as it turns out, techdirt's reporting was a bit late; the amendment was withdrawn on April 3rd. Regardless, the mere fact that the amendment was proposed gives us an opportunity to study how privacy law could go bad (and to mourn for the alternate world in which this passed).

The text of SB 5211 makes it unlawful for pretty much any employer to require employees to hand over their personal social networking passwords or to let the employer access their accounts. It creates a civil action awarding a $500 penalty plus any actual damages and attorneys' fees if the employee wins, and awarding the employer reasonable expenses and attorneys' fees if the suit turns out to be frivolous. The amendment added an exception: if it is conducting an investigation, an employer can go ahead and demand a password or access to an employee's personal account if:
The investigation is undertaken in response to receipt of specific information about the employee or prospective employee's activity on his or her personal account or profile;
The purpose of the investigation is to: ensure compliance with applicable laws, regulatory requirements, or prohibitions against work-related employee misconduct; or investigate an allegation of unauthorized transfer of an employer's proprietary information, confidential information, or financial data;
The employer informs the employee or prospective employee of the purpose of the investigation, describes the information for which the employer will search, and permits the employee or prospective employee to be present during the search;
The employer requires the employee or prospective employee to share the activity or content that was reported;
The scope of the search does not exceed the purpose of the investigation; and
The employer maintains any information obtained as confidential, unless the information may be relevant to a criminal investigation.
On the one hand, it does attempt to appear reasonable. The scope is somewhat narrow, there has to be specific information, and the information is kept confidential. But on the other hand, the state would have directly endorsed employer intrusion into employees' private accounts as part of a bill designed to protect employees from exactly that. To use one of those terrible analogies that judges love: your employer cannot force you to let them into your home to rifle through your personal journals and written correspondence because they expect to find evidence of employee malfeasance, but this law would have let them do the same to your hidden group posts and messages on Facebook. If you don't like it, you can quit or be fired, with no recourse.

Aside from being terrible policy, such a provision could also violate federal law. The techdirt article points out that, if willingly violating a website's terms of service counts as accessing a protected computer without authorization/exceeding authorized access, this scheme could lead to rampant CFAA violations. Facebook, for example, includes in its terms of service that "[y]ou will not share your password (or in the case of developers, your secret key), let anyone else access your account, or do anything else that might jeopardize the security of your account." Is the CFAA violated when your employer proceeds to access your account in breach of Facebook's terms? United States v. Drew suggests that it might not, but that doesn't foreclose the possibility that this law would have thrown employers from the frying pan into the fire by giving state approval to an action resulting in federal violations.

Fortunately, the law is going forward without this amendment, but one has to raise one's eyebrows when one of the most liberal states in the nation even considers such a bill. If Washington can think about it, then perhaps someone else will actually do it. This may seem like a First World Problem™, but I can't imagine any worker anywhere would be all that thrilled about their employer having the right to listen in when that worker privately complained to their buddies.

Sunday, April 14, 2013

CFAA: Protector or Obstructor of Privacy?


The Computer Fraud and Abuse Act prohibits “intentionally access[ing] a computer without authorization.” The law has been turned on its head to support overreaching prosecutions by the U.S. Department of Justice in cases involving violations of terms of use agreements and, quite recently, a case that led to the highly publicized suicide of Aaron Swartz.

But it’s done little for the New York Times, the Washington Post, Twitter, and Apple, all of whom have been the victims of high-profile hacking attacks this year. The CFAA and relevant international law haven’t done much to protect against hackers in China, and according to technology lawyer Stewart Baker, “our government seems unwilling or unable to stop the attacks or identify the attackers.”

The government’s failure to protect has led to a debate about private victims taking a proactive approach to their cyber-security efforts: “hacking back” (also referred to as “backhacking”). Hacking back doesn’t necessarily mean destructive retaliatory measures; it could also include attempts at intelligence gathering, such as the recent success of two private cyber-security entities in Luxembourg that uncovered the inner workings of a Chinese hacker group’s network. 

Baker says “[t]he same security weaknesses that bedevil our networks can be found on the systems used by our attackers. . . . [In other words:] ‘Our security sucks. But so does theirs.’ ” Since the government isn’t taking advantage of exploiting hacker networks to vindicate and protect private security and privacy interests, some private entities want to take matters into their own hands. Unfortunately, the Justice Department thinks hacking back may be just as illegal under CFAA as the attacks that prompt it.  

Backhacking could be viewed as an active defensive tactic. Like using a private investigator, backhacking could be used to determine not only the identities of hackers, but to analyze their methods and learn more about how to stop them, as the Luxembourg groups’ hackback demonstrates. But others take a different view, such as Orin Kerr, who finds an analogy in traditional property law: you don’t have a right to break into your neighbor’s house to take back something she took from you. 

Should CFAA protect the privacy of hackers from the “active defensive tactics” of private entities? If not, what limits should be set? Among the various ways to immunize hackbacks by amending the CFAA, which would work best (e.g., a specific intent requirement, affirmative defense, etc.)? Would a push for a governmental approach to cyber-security law enforcement more responsive to private victims be more appropriate? 

Given that the threat identified in the hackback example above is suspected to be a Chinese military unit, maybe the vindication of cyber security and privacy should take a back seat to foreign policy. And maybe the U.S. Government is engaging the Chinese cyber threat in ways that implicate stakes much greater than those of the blueprints for the iPhone 8 Nano or your App Store purchase history.

Read more:
Detailed report on debate at BNA—Bloomberg
Luxembourg Hackback Story—Stewart Baker at Volokh.com
Luxembourg Groups' Report (for the tech savvy)—Malware.lu
Hackback Debates—Orin Kerr, Stewart Baker, and Eugene Volokh at Steptoe Cyber Blog
Mandiant's Report on the APT1 hacker group—Mandiant

Saturday, April 13, 2013

Privacy Concerns Fuel Drone Restrictions

http://www.reuters.com/article/2013/04/12/us-usa-drones-idaho-idUSBRE93B03S20130412

Last week, Idaho legislators passed a bill restricting law enforcement's use of drones, making Idaho the second state, behind Virginia, to pass such legislation. The law requires police to obtain a warrant before using drones to collect evidence of suspected criminal activity. Further, police are prohibited from using drones to surveil individuals or their property without written consent. However, exceptions exist: if a drone is being used in connection with illegal drugs, public emergencies, or search and rescue missions, no warrant is required. Legislators cite "high-tech window peeping" as a primary concern in passing the bill. These restrictions come on top of federal rules limiting the number of drones that can fly in U.S. airspace.

The exception for illegal drugs both makes sense and creates concern. The article cites locating illegal marijuana fields as a benefit that could come from using drones. From a Fourth Amendment perspective this is not that troubling, given the precedent that Fourth Amendment protection does not extend to open fields so long as they are not "curtilage" of the property. Oliver v. United States, 466 U.S. 170 (1984). However, the exception does not seem constrained to searching only for fields with drugs. Thus, one could imagine drones keeping watch on a street corner suspected of drug dealing, or a drone following a suspected drug dealer continuously and relentlessly. Using drones for these purposes is far more intrusive and goes directly against what the legislators wanted to prevent in passing the bill. High-tech window peeping and stalking seem to be fine with the legislature, so long as it is a suspected drug dealer being spied upon.

The opportunity for abuse exists within the latter two exceptions as well. Search and rescue missions could turn into continuous surveillance of suspected kidnappers or of those believed to know valuable information. Further, the public emergency language is broad enough that it's not clear exactly when the exception would apply; when crime becomes troublesome enough to constitute a public emergency is anyone's guess. All three of these tasks can be, and are, performed by individual police officers without drones today, but the ease and convenience with which drones could complete them is especially concerning.

Further, it is unclear what type of warrant is required. While no "super warrant" requirement is mentioned, it would seem prudent to impose a stricter warrant requirement for drones, similar to what was done with ECPA. There is also no private right of action. Without a higher standard, the warrant requirement does not seem like much of a barrier for police to get around. If the legislators are truly concerned about privacy infringement from drones, modeling the bill on ECPA's requirements would make sense. Ultimately, the bill is a step in the right direction. While it is unclear exactly what effect it will have, these initial restrictions are necessary as law enforcement use of drones increasingly becomes a reality.

Thursday, April 11, 2013

CISPA Moves Forward


The latest version of the Cyber Intelligence Sharing and Protection Act was adopted by the House Intelligence committee and now moves to the House floor for a vote. Ironically, the House Intelligence committee discussed and voted on CISPA – a bill that could significantly diminish citizen privacy protections – in private.

What’s more interesting are the privacy protection amendments that were left out of the recently adopted bill. Two of the left-out amendments touch on issues we’ve seen in other privacy laws: liability for private entities that use information to discriminate and obligating particular entities to de-identify certain data.

I find it interesting that these two safeguards were overwhelmingly defeated. Given that the bill is already contentious for its potential to undermine the current privacy law framework governing personal information in cyberspace, I would think basic protections against unnecessary identification and unauthorized discrimination would be appropriate to include. Additionally, considering other laws like the Fair Credit Reporting Act and the Genetic Information Nondiscrimination Act, which address similar discrimination and identification concerns, these familiar safeguards would seem welcome. Interesting arguments have also been made that personally identifiable information would not even aid the purpose of CISPA. And it seems quite clear that any information obtained through CISPA’s information-sharing regime would be inappropriate to use for anything beyond threat identification. In that light, the absence of a robust non-discrimination provision seems like a significant omission.

The reluctance toward including these two amendments likely stems from the burden of compliance and the difficulty of enforcement. De-identifying data would no doubt be costly and burdensome. Non-discrimination violations are difficult to establish because the burden of proof inevitably lies with the aggrieved individual, who must show how the alleged wrong stemmed from unauthorized access or use. Despite these reasonable justifications for excluding such provisions, I would think CISPA needed some extra juice so as not to suffer the same defeat in the Senate that it did last year. No doubt President Obama’s executive order addressing cyber security will also influence how the legislature deals with CISPA. The next few months seem likely to provide some interesting developments regarding the government’s access to personal information that is communicated to private entities.