Thursday, February 28, 2013

Does Wi-Fi Sniffing Constitute Interception under the Wiretap Act?



Every bit of wireless information that your laptop or cell phone (or wireless mouse, or Bluetooth music player) transmits is broadcast far and wide, and can be received by properly configured devices anywhere within the range of the radio frequency signals that carry that information. As a practical matter, every device that can receive those signals does receive and process them, if only to read the addressing information that frames each data packet and determine whether that packet is addressed to that device. In most cases, the devices and their drivers are designed to immediately discard any packets that are not addressed to them, but it is relatively simple to configure such devices to operate in a “promiscuous” mode that captures and retains all packets, not just the packets addressed to them. When combined with free, “off-the-shelf” software like Wireshark, it is not difficult to monitor and record, or “sniff,” as the practice is called, wireless network traffic.
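To make the low technical barrier concrete, here is a minimal sketch (my own illustration, not drawn from any of the litigation discussed below) of passive capture using the free Python scapy library. It assumes a wireless adapter that has already been placed in monitor/promiscuous mode; the interface name “wlan0mon” is a placeholder that will vary by system.

```python
# Minimal sketch of passive Wi-Fi "sniffing" with the scapy library.
# Assumes the wireless adapter is already in monitor ("promiscuous") mode;
# the interface name "wlan0mon" is a hypothetical example.
from scapy.all import sniff
from scapy.layers.dot11 import Dot11

def show_frame(pkt):
    # In monitor mode, every 802.11 frame within radio range is handed to
    # this callback, whether or not it is addressed to this machine.
    if pkt.haslayer(Dot11):
        print(f"{pkt[Dot11].addr2} -> {pkt[Dot11].addr1} (type {pkt[Dot11].type})")

# Capture and summarize 100 frames; a tool like Wireshark does essentially
# the same thing with a graphical interface, retaining the full contents
# of each captured frame.
sniff(iface="wlan0mon", prn=show_frame, count=100)
```

A dozen or so lines of free software is roughly the level of “sophistication” at issue when courts ask whether such traffic is “readily accessible to the general public.”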

But does Wi-Fi sniffing constitute “interception” for purposes of the Wiretap Act? 18 USC § 2511(2)(g)(i) declares that it is not unlawful “to intercept or access an electronic communication made through an electronic communication system that is configured so that such electronic communication is readily accessible to the general public.” The question, then, is whether wireless communications are “readily accessible to the general public,” when they are broadcast in public spaces and, in many cases, unencrypted.

The first reported decision to address the issue was In re Google Inc. Street View Electronic Communications Litigation, a case arising from the fact that Google had at one point configured its “Street View” vehicles to sniff Wi-Fi traffic as they passed along public streets, capturing not only the network addresses and SSIDs (network identifiers) of “open” wireless access points, which Google intended to use to improve its location services, but also terabytes of unencrypted network traffic, including plaintext passwords and a broad spectrum of other sensitive or personally identifying information. Denying Google’s motion for summary judgment, the United States District Court for the Northern District of California held that this recording of wireless traffic did not fall within the Wiretap Act’s “readily accessible to the general public” exception, because the statute’s definition of that phrase applies only to radio communications and thus did not govern these electronic communications. For purposes of the motion, the court then assumed, as the plaintiffs pleaded, that Wi-Fi communications “are not designed or intended to be public” and that Wi-Fi technology is “architected in order to make intentional monitoring by third parties difficult,” and held that the communications Google collected were not “readily accessible to the general public” because the data packets were not readable without the use of “sophisticated packet sniffer technology.”

Despite the Northern District of California court’s decision, the FCC closed its investigation of the matter without taking enforcement action, stating that it agreed with Google that the unencrypted signals were “readily accessible to the general public,” but cautioning that the information collection clearly infringed on consumer privacy. For its part, the FTC let Google off the hook with a warning, in light of Google’s efforts to remedy the problem.

The issue was raised again in a more recent controversy in the Northern District of Illinois over patents relating to wireless technology, in which the party seeking to enforce the patents, “Innovatio IP Ventures,” sought a preliminary ruling on the admissibility of information it could gain about infringing products by using Wi-Fi sniffing technologies (information that probably could not be used at trial if it was illegally intercepted under the Wiretap Act). Noting the Google Street View decision but declining to follow it, the Innovatio court held that communications transmitted over unencrypted Wi-Fi connections were “readily accessible to the general public,” because they are available to “any member of the general public within range” who chooses to install a low-cost packet capture adapter and free “sniffing” software.

What do you think? Is the information you transmit over an “open” Wi-Fi network “readily accessible to the general public,” such that it should not be illegal to monitor or capture that information? Does it make any difference whether it is Google that is collecting the information or your sketchy neighbor or coffee shop companion? What if it is being captured by law enforcement? If the information contained in the packets is encrypted, but the packet information is not, is it illegal to capture the packet information?

Is Siri iRobot’s Sister?

Like most people, I was exceedingly happy about the addition of Siri to the two most recent iterations of Apple’s widely popular iPhone. Touted as a “personal assistant,” Siri can research information, manage your calendar, and even crack a joke or two here and there (see attached photo), making her a (mostly) pleasant addition to the already capable device. During my mom’s first encounter with Siri, she expressed hesitations about using such software and reminded me how artificial intelligence ran amok in the blockbuster hit “iRobot.” I laughed (my mom is pretty anti-technology) and dismissed her concerns as unfounded. Then I ran across this YouTube video and realized that my mom wasn’t the only one with these concerns (or at least ideas).


In light of the cases, documents, and statutes we’ve been reading in Privacy, I have become much more aware of how I’ve underestimated the capabilities of modern technology. Not only am I receiving information, but information is being collected from me and potentially transmitted to third parties with my “consent” but often without my knowledge. That revelation caused me to think more critically about my mother’s warning. Could Siri be iRobot’s sister?

While it is perhaps a little paranoid to think that my hand-held device will turn into an evil (or well-intentioned) dictator, it is not far-fetched to analogize the amount of information Siri is collecting about me to the amount maintained by the artificial intelligence beings in iRobot. Siri knows my schedule, my searches, and potentially things I’m using the phone for even when I’m not using her application. This substantial amount of information reveals a great deal about my life, enough for third parties to tailor marketing efforts, monitor my activities, or do a host of other things. The more widely used this technology becomes, the more potential there is to do those things on a macro level.

I don’t have any insightful answers for this juxtaposition, but I think it is worth considering whether this burgeoning use of artificial intelligence among the general public will be the impetus for new legislative privacy protection efforts. What does this technology mean for data mining/brokering? What will it mean for law enforcement and national security? I think the answers to those questions are in the much nearer future than I thought when I dismissed my mother’s caution.

Google Glass: Ushering in an Era With No Expectation of Privacy?


As technology progresses, an individual’s expectations of privacy are continuously diminished. The children of the digital age do not have as strong an expectation as that held by prior generations. A favorite punching bag of privacy enthusiasts, Google is set to unleash its new product, Google Glass, in the near future. Currently, only a few are expected to be available until a mass market model is developed for release sometime next year. In the eyes of some, this device represents a further step down the path into a world without privacy.

The impending release of these “augmented reality” glasses might threaten our current “reasonable expectations of privacy.” In the early months, or even years, following the release of these devices, their use will probably not be widespread. The technology required for them to function will create huge limitations – particularly in battery life and storage capacity. At least initially, Google Glass will be the only device of its kind on the market. While these conditions remain true, there will not be a huge change in societal expectations regarding privacy.

There are two concerns that arise with Google Glass: first, the seemingly innocuous nature of the glasses; second, the relationship between these devices and Google itself.

The first is not necessarily a new issue; there are cameras everywhere: hidden cameras, cell phone cameras, and high-definition video recording devices with zoom features that put binoculars to shame. Aren’t we already living in a world where we should expect all of our actions, at least outside the home, to be viewed?

The technology behind Google Glass will inevitably improve. The “Glass” glasses will become more unobtrusive. They will become cheaper. Their battery life will be longer and their storage space will expand. Eventually it may become difficult to tell the difference between a particularly clunky set of Ray-Bans and Google Glass. What happens to the individual’s reasonable expectations of privacy when he or she can no longer be certain whether he or she is being filmed at any given time?

The second issue arises because Google is an information broker, not merely a device manufacturer. The relationship between each Glass user and Google itself is a huge concern. Due to information asymmetry and power inequality, the user will likely not have much choice in the license agreement involved in using Glass and consequently in the upload and storage solutions provided.

As these devices become ubiquitous, will the United States follow the path set by the European Union? Should opting out of data uploads be the default? Should the protections given to third parties in EU countries be emulated here? Many individuals do not want their information or image to be tagged in countless photos taken by Glass users they never even noticed.

Will Glass “metamorphosi[ze] us into human versions of those Street View vans” as suggested by CNN’s Andrew Keen? Andrew Keen, Why Life Through Google Glass Should Be For Our Eyes Only, 2013, http://edition.cnn.com/2013/02/25/tech/innovation/google-glass-privacy-andrew-keen/index.html?hpt=hp_c1. Will individuals have the opportunity to be blurred from photos uploaded to Google+ by Glass users? If so, will they be required to opt-out to be blurred, or will we move to a system where an individual is required to opt-in, perhaps through privacy controls in a Google+ account?

More fundamentally, does Google Glass represent a significant technological departure from the already widespread use of camera-enabled cell phones? If so, do these departures require further protection? If so, what kind?

As with any emerging technology, there are often more questions than answers.



Considering a New Model for Policing Data Privacy

Earlier this week, MPR’s The Daily Circuit ran a segment on the limitations of “big data.” Their guest was Samuel Arbesman, an applied mathematician and network scientist who writes about data. The segment mentions a couple of other uses for data that we could add to our list from today’s class: safety (diagnosing problems with cars/buildings) and efficiency (whether taxis take the most direct routes or whether a factory is operating at peak efficiency). But it also got me thinking about alternative models for regulating the collection and use of data.
Data privacy was not the focus of the segment, but one caller did raise the issues of privacy and the possibility of detrimental uses of the vast amounts of data being collected. (At about 24:45 in the audio clip, for those of you playing along at home.) Arbesman expressed skepticism about the ability of lawmakers to regulate effectively in this area because the technology changes so rapidly (a challenge we’ve mentioned several times in class already). But Arbesman also recognized the need to deal with what he referred to as questions of ethics surrounding the use of big data. He suggested in passing that institutional review boards (IRBs) might help police these ethical issues, which got me thinking about whether IRBs could be a useful model for addressing data privacy concerns.
The purpose of an IRB is to review and approve all human subjects research before the research begins. Researchers are required to submit extensive information about their proposed projects to the IRB, which is composed of a panel of disinterested members with sufficient expertise to evaluate research activities. See 21 C.F.R. § 56.107. The IRB is then charged with assessing the risks and benefits of participating in the research and ensuring that the proposed consent procedures are adequate to ensure that participants are aware of the risks. See 21 C.F.R. § 56.111. (You can find more information about the University of Minnesota’s IRB, by way of example, here.)
If we used an IRB model for regulating data privacy, entities collecting and using personal data could be required to seek approval for data collection and use in advance. The main drawback I see to the IRB model is that an ethics-based approach might be insufficient to dissuade entities that stand to profit significantly from unethical uses of data. The IRB approach would, however, have the benefits of being predictable (organizations would seek approval of their data uses in advance rather than be subject to litigation after the fact) and more readily adaptable to changing technology. Perhaps this doesn’t need to be an either/or proposition; instead, maybe approval through a voluntary review system could insulate organizations’ data uses from subsequent legal challenges.
In any case, I am curious to know what others of you think of the usefulness of an IRB-like system for addressing some of the privacy concerns we've discussed that are inherent in using data that can be traced to particular individuals.

Maryland v. King - When does DNA collection cross the line?


Modern technology has created privacy concerns that previous generations could never have imagined. The Supreme Court of the United States will be grappling with one of these issues this week through its examination of a Maryland law allowing law enforcement to collect DNA from suspects who have been arrested for (but not yet convicted of) felonies.

Maryland and 25 other states (plus the federal government) allow DNA samples to be collected after a felony arrest. Although state laws vary widely, in Maryland the DNA sample may be taken at arrest, but it cannot be tested until probable cause for the arrest has been established by a judge. If no probable cause is found, the sample must be destroyed immediately. The test done on the sample identifies only about 13 individual DNA markers, and it is not shared with any other parties (public or private) for any other purpose.

In the current Supreme Court case, Alonzo King Jr. was arrested on an assault charge. At the time of his arrest, a DNA sample was taken, which was later used to connect him to an unrelated sexual assault case. King attempted to have the DNA evidence suppressed under the Fourth Amendment. King was convicted at the trial court level, but the Maryland Court of Appeals held that the DNA was illegally obtained.

                The U.S. Supreme Court agreed to rule on the constitutional issues, and in the meantime granted a stay so that the Maryland law allowing DNA collection upon arrest will remain in effect. An amicus brief has been filed by 49 states, the District of Columbia, and Puerto Rico supporting the Maryland law. The brief emphasizes the compelling government interest served by collection of DNA samples from arrestees. It also argues that a felony arrestee has a diminished expectation of privacy.

Cases such as this one raise numerous questions about personal notions of privacy, what is “private,” and what should be “private,” particularly in a law enforcement context. Many people feel that DNA collection is an unreasonable invasion of privacy – more so than the collection of other identifying information like an address or a fingerprint. For example, familial DNA testing can potentially be used to identify individuals related to the arrestee. Others, however, counterbalance this with the law enforcement purposes of collecting DNA. Obtaining DNA from a broader number of individuals coming into contact with the criminal justice system (for instance, upon arrest, or for all crimes, not just felonies) would give law enforcement a greater opportunity to identify already-apprehended suspects in other crimes for which they may be responsible. But the question remains: is that interest greater than the personal privacy interest in keeping your DNA out of the government’s hands? Where does our personal right to privacy end and the government’s right to enforce its laws begin? The Supreme Court may soon tell us what it thinks.

Wednesday, February 27, 2013

Facebook Beacon Settlement Upheld by Ninth Circuit


In 2007, Facebook launched a new feature called Beacon. The purpose of Beacon was to allow third-party websites to send data to Facebook to enable targeted advertising and to let users share activities conducted on third-party sites with their Facebook friends. There were over 40 third-party sites included in this program, ranging from Overstock.com to various news and travel websites. Facebook chose to make this an “opt-out” system, purportedly to make it “lightweight” and improve the ability of users to easily share their activities. However, as acknowledged by head honcho Mark Zuckerberg, this feature had “a lot of mistakes” and Facebook “made even more [mistakes] with how [they] handled them.” These “mistakes” included the complexity of the opt-out procedure, which required knowledge of Facebook’s privacy controls as well as those of its third-party partners.

Sean Lane was one Facebook user who was unable to navigate the opt-out procedure of Beacon. He bought a surprise gift for his wife on Overstock.com: a diamond ring. The surprise did not last for long, however, as this purchase was soon broadcast via Beacon to people in his network on Facebook, including his wife. Sean eventually became the lead plaintiff in a 2008 federal class action lawsuit filed against Facebook and some of its third-party affiliates, in which he alleged violations of ECPA, the Computer Fraud and Abuse Act, the Video Privacy Protection Act, and California state privacy law for this interception and distribution of private information to users’ Facebook friends without their consent.

Facing a class of 3.6 million wronged users, Facebook decided to settle the case in 2009, with a federal district court in California approving the settlement in February 2010. Among the settlement provisions is the creation of a $9.5m settlement fund, which would in part pay for the formation of a privacy foundation, now known as the Digital Trust Foundation (DTF). The DTF would be run by, inter alia, a former Facebook director for public policy. None of the settlement money will go to the users; $2.3m pays for attorneys’ fees and the rest will bankroll the DTF. The Beacon service would also be removed from Facebook. Some plaintiffs objected to the settlement, going so far as to call the agreement “virtually worthless” to the individuals in the class. In objecting to the settlement, these plaintiffs took issue with the fact that nearly 1/3 of the $9.5m settlement fund would be paid out to the class action attorneys with no compensation for the users. They were also unsatisfied with the DTF’s connection to Facebook and characterized the shutdown of the Beacon service as a “token gesture.”

In an order issued yesterday, the Ninth Circuit declined to rehear the settlement en banc, effectively upholding the agreement. Six of the circuit court’s 28 judges dissented, mainly complaining that the DTF created by the settlement to teach users how to be more private in their activities would do nothing to curb Facebook’s own conduct in abrogating users’ privacy.

I think that Facebook comes out on top here. While $9.5m isn’t exactly a drop in the bucket, it’s not an exorbitant amount for a company of its stature. Using the money to create a privacy foundation addressed to Facebook users is perhaps a savvy PR maneuver. On the user side, it may be better to see the settlement money paid into a foundation that takes strides toward securing privacy than for each user to receive a payment of roughly $2 from the settlement. However, given the limited role of the DTF and no other effect on Facebook besides a small loss of capital, the settlement may ultimately do little to quell users’ fears that the social network may attempt a similar privacy invasion in the future.

Google and Spain in a battle over EU Privacy Law

When a man from Spain recently "googled" himself, he came across more personal information than he would have liked. Upon entering his name into the Google search bar, the man found a newspaper article from a couple of years back which revealed that certain property he owned had been put up for auction based on his failure to make social security payments. This man then brought a complaint in Spain asking for Google to remove this information from its website; essentially asking the site to make this information unsearchable. One of the highest courts in Spain, the Audiencia Nacional, upheld the man's complaint and ruled that Google should remove this information from its search results. Following this decision and a subsequent challenge by Google, the case was referred up to the European Court of Justice.

A few separate issues must be addressed by the European Court of Justice. First, the court must decide whether Google has to delete the information. Google makes something of a public newsworthiness argument, stating that valid reasons exist for this type of information to be made public. For example, the information in this case was already made public as required by Spanish law. Further, the original announcement that made it public in the first place still remains on the newspaper's site. Finally, Google and its supporters are concerned about giving users uncontrolled power to request that information be removed from search results.

More specifically, the Court must address whether Google can be considered a "controller" of information or simply a host of information. There is also a jurisdictional issue at play here: the Court must decide whether a company based in California, such as Google, can be subject to EU privacy law.

Google in this case is not making the information public. That was done already by the newspaper. Not only did the newspaper publish it in print, but it also put it in its online version. All Google has done in this case is make it easier to access this already public information. The slippery slope argument being made by Google is especially persuasive in my opinion simply because this is what Google does as a business: it makes things easier to find on the internet. The amount of personal information on newspaper websites that appears on Google seems endless. To give users the power to request that these results be removed seems unreasonable and impractical. Further, to call Google a controller of this data is a complete stretch of the word, seeing as the newspaper is ultimately the one that could remove it from its website and thus make it nearly impossible to find. Finally, not only did the newspaper find this information newsworthy, Spain itself required the publication of this information by law. Yes, Google makes the information "more public," but that shouldn't be enough to put the burden completely on Google to control this information.

http://www.reuters.com/article/2013/02/26/us-eu-google-dataprotection-idUSBRE91P0A320130226

Tuesday, February 26, 2013

Creative Uses of Natural Resources

In January of this year, news broke about a security breach at the Minnesota Department of Natural Resources. Forty-eight-year-old DNR employee John Hunt had made over 19,000 unauthorized queries into the state's Driver and Vehicle Services database over the past five years, accessing the records of approximately 5,000 people, mostly women. Hunt not only accessed the database storing names, addresses, birth dates, height and weight, and driver's license photos, but he also stored photos of 172 women in a file on his computer. Being the data security guy that he was (Hunt's responsibilities at the DNR included training other employees on data security), Hunt had encrypted the file and stealthily named it "Mug Shot."

After the DNR got wind of Hunt's shenanigans, it took steps to remedy the issue. The DNR fired Hunt and sent letters to the people whose records had been inappropriately accessed informing them of the breach. It also asked the Minnesota Bureau of Criminal Apprehension to conduct an investigation into the matter.

On February 7th, Hunt was charged with several misdemeanors and gross misdemeanors, including misconduct of a public officer, unauthorized computer access, using encryption to conceal a crime and unlawful use of private data.

Several civil suits have also already been filed against Hunt. A lawsuit filed in January in the name of Jeffrey Ness is seeking class action status. Another class action lawsuit was filed in federal court on February 4th against Hunt and a number of other DNR and Department of Public Safety officials on behalf of four women seeking at least $10 million in damages. The suit alleges that Hunt violated the victims' privacy under the federal Driver's Privacy Protection Act (DPPA).

Section 2724 of DPPA provides for a civil action in federal court against "[a] person who knowingly obtains, discloses or uses personal information, from a motor vehicle record, for a purpose not permitted under this chapter." Assuming that the plaintiffs can show that Hunt's use was for a non-permissible purpose, they would still have to prove either actual damages or "willful or reckless disregard of the law" in order to be awarded punitive damages. Showing actual damages may be difficult if Hunt only used the information for his own pleasure, which seems more likely, given that he mostly viewed and stored records and photos of women rather than indiscriminately copying information for the purpose of selling it to a third party.

On the other hand, proving the case against the other DNR and Department of Public Safety officials may turn out to be much more challenging. Although section 2721 of DPPA creates a duty for the State department of motor vehicles to not "knowingly disclose or otherwise make available" personal information, it is not clear that the civil cause of action extends to the department, except maybe for persons knowingly disclosing personal information for non-permissible purposes. By allowing Hunt practically unlimited access, did the other officials, in effect, knowingly disclose personal information to him for non-job-related uses? In practice, limiting Hunt's access may have been impractical since he needed to access the data to perform his job. Showing that not limiting his access went so far as to constitute willful or reckless disregard of the law (to get punitive damages) will probably be difficult.

Companies Fight for User Privacy



Zach Cohen

Articles:
These articles seem to be two parts of the same issue: the gap between existing privacy law and the protection required in today’s society. In both articles, the author discusses how companies such as Facebook, Yahoo, and Google have begun to require warrants from the government if it wishes to access information regarding their users. Google in particular has stated that it requires a showing of probable cause before it releases the content of its users’ email and other information to the government.
The probable cause requirement articulated by Google is particularly interesting because, even though it seems to comport with the 4th Amendment’s prohibition on unreasonable searches and seizures (Warshak), it is in opposition to ECPA, which merely requires “reasonable grounds to believe” that an email or other documents are going to be useful to an investigation. Google’s efforts to protect its users’ data reflect the requirements of today’s society, while ECPA reflects the understanding of technology that existed in 1986. Google’s protection of its users’ data is representative of an understanding that technologies like email, cloud storage, instant messaging, and the like have become as ubiquitous today as mail and phone calls were in 1986.
In many ways, ECPA is representative of a mindset, and reflective of a stage of technological development, that is out of date and incompatible with present-day notions of privacy. In today’s day and age, the fact that nearly every facet of business, personal, and everyday life can be conducted via technologies like email and instant messaging seems to demonstrate the anachronistic nature of ECPA and the government’s mindset with respect to these types of communications. When the protections afforded to something like an ordinary letter are juxtaposed with the lowered level of protection given to now-ubiquitous communications like email or Skype, the problems with the government’s policy become even more stark.
The divide between ECPA and Google’s requirements seems unacceptable in light of current privacy law. As the Court established in Katz, the government has to obtain a warrant if it wants to search an area in which an individual has a reasonable expectation of privacy. It therefore seems odd that the government’s position, as expressed in ECPA, would treat something as fundamental as email, instant messages, and other basic forms of online communication as somehow less deserving of privacy protection than telephone calls or similar technologies. ECPA itself maintains a heightened warrant requirement for wiretaps on phone calls, and yet lessens that requirement for forms of communication that have become equally important in today’s society. As technologies like cloud storage and email become more popular, the idea that the government needs anything less than probable cause to search them has become far less tenable. After all, it would be absurd to suggest that the legacy of Katz and other privacy jurisprudence is that the government can dictate what is and is not an acceptably reasonable expectation of privacy.
Ultimately, these articles are representative of the way in which privacy law has failed to adapt to the current times. As cloud storage becomes more and more widely used, and email remains a staple in most homes, the government’s position that it needs less cause to access that sort of information than it does to tap your phone lines seems irrational. The government’s refusal to update its requirements for accessing electronic information flies in the face of what at least one great Justice described as the citizen’s “right to be let alone.”


Monday, February 25, 2013

New York State Court Holds There is No Privacy for Smart Phone Users




In a decision reported in the New York Law Journal, a state court held that a cell phone user has no reasonable expectation of privacy that the police will not use the phone’s Global Positioning Satellite (“GPS”) technology to locate the phone. “Given the sophistication of consumers,” the court reasoned, a user assumes the risk, when purchasing a cellphone equipped with GPS technology, that the phone’s location can be pinpointed. The court addressed the third-party doctrine and found that the police’s “exigent circumstances” request asking Sprint to “ping,” or locate, the user’s phone did not violate any constitutional right to privacy. The user “can deactivate or disable GPS services” and secure his privacy by turning the phone off.
The court distinguished Moorer from GPS cases in which the police surreptitiously attach a GPS device to a suspect’s vehicle to track his movements, a device the suspect cannot consciously turn on or off. However, the cellphone would not have been immediately detectable without the “ping,” and the confiscation of the bag in which the phone was found occurred with the aid of advanced technology, the sort that Justice Scalia would agree “is not in extensive public use.” The court continued its skillful evidentiary dance and considered the bag and the phone abandoned property; therefore, the court reasoned, the user had no subjective expectation of privacy and did not exhibit a personal expectation to be left alone from government intrusion.
This reasoning was based on the court’s conclusions that the user did not own or have any substantial connection to the home where the bag and phone were found, and that there was no higher state court decision addressing “pinging.” Therefore, any claimed expectation of privacy by the user is not one that society is prepared to recognize as objectively reasonable. This ruling is more in line with a private cause of action for invasion of privacy, where the user is responsible for any voluntary dissemination of information to third parties. But in a criminal case the challenged conduct must meet Katz’s two-part test. This ruling unreasonably intrudes on a user’s reasonable expectation of privacy and disregards the defendant’s contention that he planned to return for his bag. The mere absence of Moorer from the property is the sole basis for determining abandonment.
But abandoned property analysis requires proving intent to throw away the property and acts from which that intent can be inferred (like leaving garbage at the side of the road). The reasoning of the court assumes that a place-based conception of privacy is a fundamental right but a person-based conception of privacy is a fighting right. This muddies the analysis in the Fourth Amendment context, for while previous conceptions of privacy were derived mainly from the concept of “life, liberty, and property” and the trespass doctrine that “a man’s home is his castle,” the Katz decision and Brandeis’s theory of privacy shifted the definition of privacy from place-based to person-based. The bag belonged to Moorer and contained, along with his cell phone, his driver’s license; logic presumes that Moorer intended to return for the bag he asked a friend to keep. It logically follows that Moorer had a subjective expectation of privacy in the contents of his bag.