Sunday, March 31, 2013

A lady never reveals her age... and when IMDb does, she sues them.

An actress's suit against IMDb for listing her age without her permission has survived summary judgment and will proceed to trial. 

U.S. District Judge Marsha Pechman denied IMDb's motion for summary judgment on a breach of contract claim. The suit, originally filed in October 2011, will now proceed to trial and is scheduled to begin on April 8.

Here's a quick refresher to catch you up on how we got here:

According to Hoang, she contacted IMDb to remove an erroneous date of birth from her public IMDb page. IMDb refused to remove the date of birth unless Hoang could provide evidence that the date was erroneous. Here's where the story gets interesting... according to Hoang, IMDb accessed the credit card information associated with her account and used it to conduct a search on PrivateEye.com and determine her actual birthday. With the true date of birth in hand, IMDb then published Hoang's date of birth on her public page. At no point did the site inform her that it had accessed her payment information or conducted a search for her on PrivateEye.com.

Hoang claims that IMDb's publication of her birthday has exposed her to age discrimination by Hollywood casting directors and directly led to her removal from one film project. Hoang also claimed IMDb was liable for emotional distress, but Judge Pechman's ruling denied that claim.

Judge Pechman also denied Hoang's claims under the Consumer Protection Act, finding that Hoang "cannot show that the public interest is impacted by IMDb's actions."

By alleging a breach of contract, Hoang faces an easier road than if she had alleged the tort of public disclosure. That tort requires that the information disclosed be highly offensive to a reasonable person, and I don't think that anyone, outside of Hollywood at least, would find the publication of a woman's age particularly scandalous or offensive. Under the breach of contract claim, she merely needs to demonstrate that IMDb violated its Terms of Service and Privacy Policies by accessing her consumer information without her permission.

From the looks of it, the trial will likely hinge on whether IMDb's actions constituted a response to Hoang's request to remove her incorrect date of birth or, alternatively, an effort by IMDb to improve the services offered by its website. IMDb's privacy policy explicitly states that it uses personal information to, among other things, respond to user requests and improve the website.


Hoang did request that IMDb remove a false birthday on her page.  IMDb claims that, when asked for evidence that the listed date was erroneous, Hoang provided falsified information to the website.  If I'm IMDb's lawyer, I'm telling the jury that the search was merely a step taken to satisfy Hoang's request to verify her date of birth and the "rare" steps taken in this instance were necessitated by her decision to provide the site with false information in order to appear younger, a violation of the site's Terms of Service.

Alternatively, it could be argued that IMDb's actions were part of maintaining and improving the services of the website. By providing accurate information about actors, actresses, directors, etc., IMDb is able to fulfill its role as a resource for casting directors, executives, and other members of the entertainment industry. Any age discrimination in casting is the fault of casting directors, not IMDb.

In opposition, Hoang could (and, dare I say, should) contend that IMDb's actions went well beyond the simple servicing of an administrative request and that plenty of alternative (and much less intrusive) courses of action existed, including just leaving the incorrect birthday on the website. In addition, IMDb's site maintenance and improvements shouldn't come on the backs of intrusions into subscribers' credit card and other personal data.

In my not-yet-professional opinion, this could shape up to be a really important case in determining just what websites can do with our information and just how broad the terms of rarely read Privacy Policies and Terms of Service are.

Cell Phone Data Privacy in the Wake of Jones and Skinner


          As smartphones become more common, both the law enforcement uses of cell phone data and the privacy concerns related to those uses are on the rise. One aspect of smartphones with especially widespread privacy implications is the ability to use GPS tracking capability to monitor a person’s every move, simply by tracking the location of their cell phone. A study published in Scientific Reports analyzed anonymous mobile data for about 1.5 million people. The findings are concerning for privacy advocates – the researchers found that “if they got accurate hourly updates on a person's whereabouts, tracked by their mobile carrier's cell towers, four ‘data points’ were all they needed to figure out the person's identity 95% of the time.” Moreover, given access to mobile data by an individual’s cell phone service provider, law enforcement can track that person’s every move in real time.
          It is unclear whether there is a consensus in modern courts regarding what degree of privacy cell phone users enjoy with regard to their location data. In United States v. Skinner, the Sixth Circuit Court of Appeals held that there is no reasonable expectation of privacy in location data broadcast by a cell phone, and thus the Fourth Amendment does not require the police to obtain a warrant before monitoring a person’s real-time location through cell phone location data. However, privacy advocates argue that this does not mesh with United States v. Jones, decided just seven months earlier, where the Supreme Court held that warrantless long-term GPS monitoring violates the Fourth Amendment.
          It is important to note, for privacy law purposes, that cell phone users knowingly and willingly transmit GPS signals to their cell phone provider in order to use many of the location-based services provided by smartphone apps (Urbanspoon, Google Maps, Foursquare, etc.). Under the Third Party Doctrine, since users have willingly surrendered this information to their cell phone service provider and/or app providers, they have abandoned any claim to privacy that they may have had. According to the Electronic Frontier Foundation (EFF), this is the rationale that the government uses to justify use of location data – “the government claims that cell phone users give up their privacy rights because they have voluntarily disclosed their physical location to the cell phone providers every time a phone connects to the provider's cell tower.” The consequences of such access by law enforcement can be extremely far-reaching; the EFF points out that “location data is extraordinarily sensitive. It can reveal where you worship, where your family and friends live, what sort of doctors you visit, and what meetings and activities you attend.”
          Is it fair to “punish” smartphone users by invading their privacy, especially given the necessity of smartphones in today’s world? Is this even a violation of their privacy at all, given the third party doctrine? What are your expectations of privacy when it comes to cell phone GPS data? Without more clarity from the courts, it is difficult to say what role cell phone location data will play in the criminal law context, but with the growing use of smartphone data to track, apprehend, and charge individuals suspected of crimes, it will surely need to be tackled by the courts head on, sooner rather than later.

Third Party Decision Makers: Government Access to Private Data and the Need for a Warrant Requirement



Facebook’s user agreement gives Facebook the ability to release user data to law enforcement where it has a “good faith belief” that doing so is necessary to prevent harm or is required by law. Such a provision is not unique to Facebook: Yahoo, Twitter, eBay, and others all have similar provisions buried within their user agreements and terms of service. Increasingly, one of the questions that the privacy debate must answer is: to what extent should consent invalidate the protections that ECPA, the CPA, and even the FTC traditionally afford user data? While statutes like the Privacy Act regulate the uses and maintenance of government databases, the government’s increasing reliance on information provided by the private sector raises new questions in the evolving debate over user privacy.
            The current privacy protections for user data are problematic. In my last entry I discussed Google’s requirement that any government request for user data be accompanied by a warrant. While the efforts of Google and companies like it are admirable, the lack of a unified standard for government access to data collected by private companies is especially troubling for individuals concerned with data privacy.
            In Smith, the Court determined that people do not maintain a Fourth Amendment interest in information that they pass on to third parties. While ECPA, FCRA, and even the FTC regulate how the government can require third parties to disclose user information, there is little protection in place for information that users give to companies and that the companies then voluntarily disclose. Furthermore, what little regulation may be in place in the form of FTC enforcement agreements (which make it a “promises” violation to use information in ways other than those specified in the user agreement and terms of service) is undercut by vague disclosure requirements that allow companies like Facebook to release information to law enforcement whenever they feel they have a “good faith belief.”
            As these articles note, government agencies are increasingly using private sector data collectors to gather information that would otherwise be difficult to access. While government regulation like ECPA, the rules promulgated by the FTC, and even the Fourth Amendment regulate how the government can access your data, such protections rapidly become irrelevant when user agreements contain clauses that allow a company to release user data upon request. One protection these articles suggest is requiring that the government obtain a warrant before it can access user data from the private sector. While such a requirement would not rise to the level of probable cause necessary for the “super warrant” governing the electronic intercept portion of ECPA, a warrant requirement for obtaining private sector data on individual users would be a step in the right direction for user privacy.
            As it now stands, the lack of safeguards controlling government access to private sector user data is in need of reform. While it is true that people may recognize that they give up some degree of privacy when they use Facebook, Google, etc., it is doubtful that they would be comfortable with the truly vast amount of control that they give such companies over their data. Thus, a warrant requirement for government access to user data seems like a good, common-sense step toward protecting user privacy and stemming the erosion of civil liberties, in an era where simply avoiding such sites is no longer a viable option.

Friday, March 29, 2013

"Dr. Livingstone, I presume?"

Have no fear! The doctors of a Fargo, ND health clinic no longer need to worry about getting lost, for their every move is being tracked by a clinic-wide tracking system that shows the location of everyone on a map of the building. Patients are being tracked as well: they are asked to carry around a small tracking device (see the slide show) that communicates with the tracking system, mapping and storing information about their location. The system can also be used to track the location and movements of medical equipment and to follow inventory levels.

This wonderful cost-saving solution is also being installed at 152 of the nation’s VA hospitals, although initially only to track equipment, not patients or staff. Although COO Pat Gulbranson of the Fargo clinic reports that “so far . . . there have been no objections” related to privacy concerns, the VA nurses aren’t quite as excited about the idea of having their moves tracked in the workplace. In fact, the various unions that the VA hospital staff belong to quashed the idea of workplace tracking back in late 2011. Of course, the unionized VA staff have the privilege of being able to speak as a group and don't need to fear that, if they raise concerns about being tracked, they will be suspected of having something to hide. Why else would you not want administrators to know where in the workplace you are at any given moment?

At Family HealthCare in Fargo, clinic staffers are presumably watching for any privacy concerns regarding patients. However, it would seem that patients’ concerns could be better addressed by some kind of a patient advocate who doesn’t “have his own cow in the ditch.”* Also, the privacy concerns can look different from the patient’s perspective than from a staff member’s. Of course, both patients and staff may just feel plain uncomfortable about having their moves tracked. But where staff members may fear things like being tracked for how long they spend on tasks or with patients, or on break or in the restroom, patients probably don’t care if someone thinks they spent too long drinking a cup of coffee. The logical endpoint of tracking patients’ moves and transferring medical information into digital form is being able to tell, by glancing at the computer monitor, what ailment is bothering a patient who walks by. “Good afternoon, Mr. Tuberculosis!” “Have a fun weekend, Ms. Venereal Disease—but not too fun!”

* Finnish expression; being biased because of self-interest.

Wednesday, March 27, 2013

What is the meaning of privacy in an age of computer analytics?

On March 25 a study was published in Scientific Reports - part of nature.com - analyzing personal geographical data and its implications for privacy. The study is well worth a browse, and its conclusions demand serious consideration from anyone interested in the technology, law, and policy of privacy.

Summarized from its abstract, there are two aspects to the study which are particularly interesting: 1) no more than four data points are required to identify an individual from an overall sample space of 1.5 million people with 95% accuracy at the data resolution used, and 2) coarsening the data resolution by a factor of ten barely degrades these results (a single additional data point compensates). This has significant implications for efforts to limit both governmental and private tracking of private citizens' movements.

No one denies the possible benefits accruing from effective and creative use of personal data. Setting aside for the moment independent reasons one might want disclosure of personal data to be limited, correlation of disparate data sets allows advertisers to do better business, doctors to make better diagnoses, GPS devices to give better directions through traffic. Apart from commercial interests, it also helps analysts and academics to more accurately describe and predict sociocultural phenomena - which can itself have an effect on policy and law. 

However, it is just as self-evident that telling everyone everything all the time is more than most average people are willing to do. Whether for specific articulable reasons, or merely because of an inchoate creepiness factor, there is a general understanding that certain steps should be taken to protect individuals' privacy. 

There are two stock methods purported to protect privacy without overly hobbling data analysis: anonymization and coarsening. We've already discussed at length the relative ineffectiveness of anonymization. This study not only proves it, it also lets us put a number to it (at least in the context of cell-phone tracking): The study tells us that at a sample rate of once per hour and a geographical specificity no greater than cell phone towers, only four samples are needed to identify an individual to 95% certainty. But it goes further, showing us how ineffective coarsening is as well: if only one in ten cell towers was used, or the sample rate was lowered to once every ten hours, only one more data point would be required to maintain the same 95% accuracy. 
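To see why so few points suffice, a toy simulation helps. The sketch below is not the study's methodology: it uses a hypothetical population of uniformly random traces (real mobility data is far more structured), and the numbers of users, towers, and hours are invented, purely to illustrate how quickly the set of candidate users collapses as observation points accumulate.

```python
import random

def matching_users(traces, target_id, n_points, rng):
    """Count how many users' traces are consistent with n_points
    randomly chosen (hour, tower) observations of the target user."""
    target = traces[target_id]
    hours = rng.sample(range(len(target)), n_points)
    observations = [(h, target[h]) for h in hours]
    return sum(
        1 for trace in traces.values()
        if all(trace[h] == tower for h, tower in observations)
    )

# Hypothetical population: 10,000 users, two weeks of hourly
# locations among 50 cell towers, drawn uniformly at random.
rng = random.Random(42)
n_users, n_hours, n_towers = 10_000, 24 * 14, 50
traces = {
    uid: [rng.randrange(n_towers) for _ in range(n_hours)]
    for uid in range(n_users)
}

for p in (1, 2, 3, 4):
    m = matching_users(traces, target_id=0, n_points=p, rng=rng)
    print(f"{p} data point(s) -> {m} consistent user(s)")
```

With these made-up parameters, a single observation still leaves a couple hundred candidates, but by three or four observations the target is typically unique, echoing the study's finding that four points identified 95% of real users.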

In Jones, the Court declined to adopt the D.C. Circuit's "mosaic" search theory, which argued that at a certain level, data collection could be a search for Fourth Amendment purposes even though no one of the data points would itself be a search. Justice Sotomayor, in her concurrence, suggested that this idea might have more validity than the Jones opinion allowed. But even if it is adopted at some point in the future, this study suggests that, with the powerful analytical tools widely available today, data samples which would suffice to identify individuals to a high degree of specificity would not trigger the theory's protections.

Forcing citizens to choose to be Luddites rather than subject themselves to effectively perpetual surveillance is a choice, but not one most of us would recommend. If neither anonymizing nor coarsening data collection can effectively protect us against being identified against our will, what tools are left to lawyers, judges, policy makers, and individual citizens, to protect our anonymity? 

Tuesday, March 26, 2013

One Nation Under CCTV, or One Nation Under UAVs?

London, along with much of the rest of the UK, has an extraordinarily high number of CCTV cameras. It is argued that having cameras will act as a deterrent to crimes, and aid in solving those already committed. There is a long-standing debate, though, over whether cameras are the right way to deal with crime. One alternative is focusing on the causes of crime, such as underemployment or lack of access to education or job training. Graffiti artist Banksy has already weighed in on the anti-camera side, after painting a giant, three-story protest piece right under the nose of a CCTV camera. Despite erecting unauthorized three-story scaffolding, Banksy and his team got away scot-free. Citizens around the UK continue to debate over whether the cameras do any good, but the number of cameras continues to rise.

By contrast, many US cities are not nearly so conducive to cameras. London's metro area is about the same size as the Twin Cities metro, and yet has 18 million citizens, compared to our 3.6 million. Add in the fact that much of the US is open highways and sprawling suburbs, and it is little surprise that many areas are turning to UAVs to patrol the skies. The FAA estimates that 10,000 drones could be flying over the US by the end of the decade, and other sources suggest that 30,000 drones might be replacing most of the highway patrol vehicles we are used to by 2025. Proposals for unmanned ground vehicles a la "Terminator: Salvation" are gaining traction in California and Texas. When drones are on patrol, equipped with cameras that can pull the numbers off license plates from miles away, or read your book from above the clouds (and snoop on ubiquitous wireless transmissions), they will have unprecedented access to personal information. Before that happens, we need to examine the privacy policies that might govern these drones.

One difficulty that attaches to the situation is the fact that the FAA is ill-equipped to govern the privacy policies for drones. Lawmakers are concerned that the FAA is not prioritizing privacy, and the FAA itself acknowledges that it does not require drone operators to follow any privacy guidelines.

Unlike Google's email skimming, which the company asserts is completely automated, drones still require pilots, and CCTV cameras are useless without someone to review the tapes. "Unmanned" is not "automated," and perhaps that is the most concerning thing. Yet on the other hand, automated systems still report to people, and those people draw conclusions from the data their systems produce.

Maybe the solution isn't CCTV or drones. With crime rates dropping across the country, and across the world, isn't adding these monitors putting the cart before the horse? Maybe we don't need cities full of cameras or skies packed with drones. Maybe a focus on education, job creation, and equal opportunity is preventing crimes more than robotics ever could.

Or maybe it's just easier to buy a plane than ask the hard questions about social policy.

Sunday, March 24, 2013

Snapchat, Steubenville, and eDiscovery

I was recently cajoled into signing up for Snapchat. For those not up to date on teenage app fads, Snapchat allows the user to send photos to friends that automatically delete after a specified period of time (default of 3 seconds, maximum of 10 seconds). From the user’s vantage point, the pictures disappear from both the sender’s and receiver’s phones as well as the Snapchat server. Among other uses, Snapchat quickly became branded as a quick, easy, and safe(r) way for teens to sext. It would allow “teens... to communicate with their friends in a manner that won’t haunt them forever.” (Disclaimer to future employers on this public blog: I have never used--nor do I have any intention to use--Snapchat as a sexting vehicle).

In non-Ian-related social media news, a verdict was handed down this week in the Steubenville rape case. Two teenagers were convicted of the rape of a teenage girl. During the assault, the boys texted, tweeted, and posted YouTube videos about the rape. This web of social media evidence gave “investigators... something like a real-time accounting of the rape.” One thing that stuck out to me was that the judge in the case said it should be a lesson to teens in “how you record things on social media that are so prevalent today.” This led me to the question: what would have happened had the teenagers used Snapchat instead? Would the evidence still have been there?

Conventional wisdom is that the pictures, once viewed and automatically terminated, are just that: terminated and irretrievable. But there may still be a digital trail. Snapchat’s privacy policy warns that “Although we attempt to delete image data... we cannot guarantee that the message contents will be deleted in every case.” (click privacy policy at the bottom). The only example it gives, however, is that the receiver could save the photo by taking a screenshot or using another camera. It does not address the digital remnants on its servers.

What would happen if lawyers made a discovery request for all Snapchat messages related to a certain case? Ediscoveryresourcedatabase.com addresses how lawyers can go about discovery in the age of Snapchat. The first challenge for lawyers is to show that the messages existed in the first place. Luckily, Snapchat says it “[logs] information about messages, including time, date, and who sent and received the message,” but not their content.

As best I can tell, no one has successfully retrieved deleted messages for discovery purposes. But, if the messages existed, Snapchat may open its users to a spoliation claim (the destruction or significant alteration of evidence in reasonably foreseeable litigation). Because of “the nature of [Snapchat] and software settings,” if the sender knew that the pictures were “relevant to an existing or foreseeable legal claim at the time the communication was generated and sent,” a spoliation claim could succeed. However, FRCP 37(e) creates a safe harbor in the civil context for “failing to provide electronically stored information lost as a result of the routine, good-faith operation of an electronic information system.” A cursory Westlaw search reveals no answer to whether Snapchat falls within the safe harbor. On the one hand, Snapchat deletes everything. On the other, if someone uses Snapchat during an assault precisely because it deletes everything, that may not be “good-faith operation.” We don't know how courts will rule, but we can be sure that "combining cameras; young people; and secret, self-destructing messages could only mean trouble," and that this will soon be litigated.

Saturday, March 23, 2013

New Technologies Always Followed by New Concerns: The Debate Over Big Data

A March 23 article in the New York Times highlights debate over new technologies placed under the "banner" of "Big Data" and concerns over individuals' privacy. 

Big Data may have renewed the debate over privacy, but the article points out that as the government transitioned much of its data, including tax returns and credit information, into databases on mainframe computers in the 1960s, many citizens raised similar privacy concerns.

“It really freaked people out,” said Daniel J. Weitzner, a former senior Internet policy official in the Obama administration. “The people who cared about privacy [in the 1960s] were every bit as worried as we are now.”

But along with the fears came many technology developments that we rely upon and praise today. Will the same be true with Big Data? This is a question we have begun to discuss in class. Certainly we have agreed it can be beneficial to retailers and those extending lines of credit to consumers. According to the Times piece, the Big Data umbrella includes the ability to collect "data including Web pages, browsing habits, sensor signals, smartphone location trails and genomic information, combined with clever software to make sense of it all."

Proponents of this data collection say it is allowing us to see and measure things we have never been able to before, although, as others in the class have pointed out, the methodologies of studies using certain data, like Google searches to predict flu statistics, have been called into question.

But Alex Pentland, a computational social scientist and director of the Human Dynamics Lab at M.I.T., told the Times, "This data is a new asset. You want it to be liquid and used." He believes the future will be data driven and will surpass any vision George Orwell had.

Many government officials and corporate officers, however, appear more hesitant to make statements like Pentland's. The World Economic Forum, made up of a wide coalition, released a report last month recommending more restrictions on data and giving consumers more control over their information.

“There’s no bad data, only bad uses of data,” Craig Mundie, a senior adviser at Microsoft, who worked on the position paper, told the Times.

Corporate members of the group said they will have to address some of the privacy concerns in order to keep the most useful data available to companies.

As we have discussed in class, it appears this debate is in its beginning stages and time will tell how policy will be shaped in this area.


Friday, March 22, 2013

Rep. Louie Gohmert is Confused


Congress is currently considering a major overhaul of ECPA. As the cases we read earlier this year indicate, ECPA is sorely in need of an update to address the new privacy and constitutional issues that have developed alongside modern technology. (Recall, e.g., United States v. Warshak (6th Cir. 2010) (finding that a search legal under the Stored Communications Act violated the 4th Amendment), Jennings v. Jennings (S.C. 2012) (distinguishing between webmail and Outlook-like email systems in the applicability of the SCA), and Kirch v. Embarq (10th Cir. 2012) (illustrating the trouble with applying the old ECPA framework to new technology entities).) The bill, introduced by Senators Patrick Leahy (D-VT) and Mike Lee (R-UT), would amend ECPA to eliminate the 180-day clause in the Stored Communications Act. Under the new provision, police would need to obtain a search warrant to access emails of any age, rather than only those sent or received within the past 180 days. I’ll leave it to someone else to parse the bill (available here) to see whether it might realistically address some of our other concerns with ECPA. Instead, I’d like to direct your attention to this C-SPAN gem, which has been making its way around the web. In a House hearing on the ECPA amendments, Rep. Louie Gohmert (R-TX) engaged in a fairly ridiculous discussion with a Google attorney:



On the one hand, Rep. Gohmert seems genuinely concerned with citizens’ right to privacy in their email, and perhaps he should be commended for that. And his ignorance of how Google's advertising system works is understandable – it’s complicated, and Google hasn’t always done a great job of explaining it. But what's really concerning is his unwillingness to learn about the systems he will be asked to vote on. His understanding of Google’s system seems to come primarily from Microsoft’s “Scroogled” ad campaign which Holly wrote about last month.
The Google lawyer’s description of their ad system was actually reasonably accurate. He says that advertisers can “identify the key words that they would like to trigger the display of one of their ads,” “the email content is used to identify what ads are going to be most relevant to the user,” and “advertisers are able to bid for the placement of advertisement to users who our systems have detected might be interested in the advertisement.” Since 2004, Google has scanned emails for keywords and automatically placed ads related to the keywords in that email. So if your email says, “Hey buddy, let’s go snowboarding next week,” you'll see an ad for “Snowboards now on sale at REI!” But you wouldn’t see that ad on other emails.

In 2011, however, Google implemented a new system that “learns from your inbox.” For example, if you email back and forth with your friend about the snowboarding trip, and then you book plane tickets to Colorado, you might start to see ads for particular ski resorts in Colorado, even on unrelated emails. Additionally, the newer system will “learn,” for example, that you never read emails from a particular charity, and Gmail won’t show you ads for charities since you're ignoring the messages you already receive. The 2011 advertising system, which, as far as I can tell, keeps track of keywords, the frequency of their use, whether or not you read and respond to the emails in which they occur, and possible links to other keywords, implies a much greater degree of data gathering than the predecessor model. And to be sure, Google hasn’t always helped itself in describing its own system: as EPIC points out, Google used the terms “content extraction” and “information extraction” to describe its advertising setup in patents. Terms like this imply a more nefarious appropriation of user data, whereas the actual process is fairly passive.
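To make the pre-2011 model concrete, here is a minimal sketch of keyword-triggered ad selection. This is not Google's actual implementation; the ads, keywords, and bid values are invented for illustration.

```python
# A toy sketch of keyword-triggered ad placement, in the spirit of the
# pre-2011 system described above. Purely illustrative: the ads,
# keywords, and bids are made up, and the real pipeline is far more
# sophisticated.
ads = [
    {"text": "Snowboards now on sale at REI!",
     "keywords": {"snowboarding", "snowboard", "ski"}, "bid": 0.50},
    {"text": "Cheap flights to Colorado",
     "keywords": {"flight", "colorado"}, "bid": 0.35},
]

def pick_ad(email_body, ads):
    """Return the highest-bidding ad whose keywords appear in the
    email body, or None if no ad matches."""
    words = set(email_body.lower().split())
    candidates = [ad for ad in ads if ad["keywords"] & words]
    return max(candidates, key=lambda ad: ad["bid"], default=None)

email = "Hey buddy, let's go snowboarding next week"
chosen = pick_ad(email, ads)
print(chosen["text"] if chosen else "no ad matched")
```

Even this toy version shows why the process can be fully automated: matching is just a set intersection followed by an auction, with no human ever reading the email.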
But while there certainly may be privacy concerns with Gmail’s “content extraction” advertising model, they are NOT the concerns Rep. Gohmert believes exist.

We’ve talked before about how difficult it is for Congress to pass meaningful privacy legislation when technology will likely render any protections irrelevant, obsolete, or insufficient in the not-so-distant future. The apparent unwillingness of this particular Representative to gain an accurate understanding of Google’s nearly 10-year-old advertising technology adds another level of concern.

Pay As You Go Vehicle Tax A Solution?



                A recent Freakonomics podcast caught my attention when gas prices and privacy concerns were raised. Like many others, I’m conscious of gas prices, and grudgingly cough up the near fifty bucks each time I’m at the pump. Stephen Dubner, co-author of the Freakonomics series, explained that we’re at a point in our country’s history where we get more miles per gallon than ever before, and accordingly, citizens have an incentive to drive more. As fuel economy advances, the cost of each mile decreases, and the nation’s transportation budget takes a hit. In other words, better fuel mileage translates to less money for roads and bridges. The gas tax is levied at a fixed rate per gallon, as opposed to, say, a sales tax. Currently, the federal rate is 18 cents a gallon. States add an additional fixed tax (Minnesota’s state excise tax is 28.5 cents, ranking it 18th highest among the states). Because it’s fixed, it doesn’t increase when the price of gas goes up. Accordingly, gas tax revenues increasingly lose purchasing power.
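The purchasing-power problem is simple arithmetic. A quick sketch, using the fixed 18-cents-per-gallon federal rate quoted above (the mpg figures are invented for illustration):

```python
# Because the federal gas tax is a fixed amount per gallon (18 cents,
# per the figure above), the revenue it raises per mile driven shrinks
# as fuel economy improves: the revenue problem a VMT tax targets.
FEDERAL_TAX_PER_GALLON = 0.18  # dollars

def gas_tax_per_mile(mpg, tax_per_gallon=FEDERAL_TAX_PER_GALLON):
    """Federal gas-tax revenue collected per mile driven."""
    return tax_per_gallon / mpg

for mpg in (15, 25, 40):
    cents = gas_tax_per_mile(mpg) * 100
    print(f"{mpg} mpg -> {cents:.2f} cents of federal gas tax per mile")
```

At 15 mpg the federal government collects 1.2 cents per mile driven; at 40 mpg, just 0.45 cents, even though the wear on the road is the same.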

                So what’s the solution? Dubner explains that, for whatever reasons, gas taxes are a “no-go zone” for politicians. Alternatively, Virginia Governor Robert McDonnell’s proposal to eliminate the state gas tax and instead raise the sales tax has received recognition. However, there is growing support for a vehicle miles traveled (VMT) tax, which would tax drivers per mile driven. This scheme would charge drivers the same amount for the same stretch of road covered, regardless of the type of car one drives. In response to this suggestion, Freakonomics Radio host Kai Ryssdal exclaimed, “This smacks of Big Brother watching me when I drive, dude.” Dubner, on the other hand, offered this: “We’ve all gotten used to willingly carrying around a GPS device with us at all times, which is what a smartphone does, right? We’re also getting used to the ideas of electronic tolling where we don’t have to stop at the booth. So I wouldn’t be shocked if we were to see some per-mile taxing in the future.” Rob Puentes, a transportation policy expert at the Brookings Institution and a VMT proponent, offered support: “If you are driving on the Beltway during rush hour consistently adding to the traffic on those highly congested roads, you’d be paying more, and then those revenues would go back to the road you are using.”

                The podcast reports that “at least 18 states have pursued VMT pilot projects, and in the past five years, legislatures in at least 11 states have considered more than 20 proposals to establish or study state level fees of this kind.” Obviously, privacy is a concern. A study by the Metropolitan Washington Council of Governments Transportation Planning Board found that 86 percent of Washington, D.C. commuters oppose having a GPS device installed in their vehicles to track their miles traveled. Personally, I support this idea, at least in theory. I wouldn’t oppose having a GPS track my mileage as long as my personal information isn’t exploited. The use of the GPS would have to be explicit, but as long as I knew that no one was monitoring my whereabouts for anything other than mileage, I’d be comfortable with it, or at least open to it. From an economic standpoint this free-market concept is nearly perfect: you pay for what you use. However, Freakonomics commenters are less convinced. Many suggest that vehicle weight be brought into the equation, since large trucks contribute most of the wear and tear on the roads. Some claim that the proposal removes an important incentive to purchase fuel-efficient vehicles. However, as the owner of a fuel-efficient Ford Fusion, I would still benefit from filling up less often. Another commenter is concerned about how the state would define road usage.
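To make the VMT idea concrete, here is a minimal fee calculator that folds in Puentes' congestion-pricing example and the commenters' vehicle-weight objection. The base rate, the congestion surcharge, and the weight factor are all invented for illustration; the podcast does not give actual figures.

```python
# Hypothetical VMT (vehicle-miles-traveled) fee. All rates below are
# assumptions for the sake of the example, not proposed figures.

BASE_RATE = 0.015          # assumed dollars per mile
CONGESTION_MULTIPLIER = 2  # assumed surcharge on congested roads

def vmt_fee(miles, congested_miles=0.0, weight_factor=1.0):
    """Per-mile fee, surcharged for miles driven on congested roads
    (Puentes' rush-hour Beltway example) and optionally scaled by
    vehicle weight (the wear-and-tear objection from commenters)."""
    normal = (miles - congested_miles) * BASE_RATE
    congested = congested_miles * BASE_RATE * CONGESTION_MULTIPLIER
    return (normal + congested) * weight_factor
```

Under these assumed rates, 1,000 ordinary miles cost $15, while the same mileage with 200 rush-hour Beltway miles costs $18, with the extra $3 notionally flowing back to the congested road.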

                Minnesota’s outdated infrastructure is evident, and a VMT tax offers a viable potential solution. If the community vehemently contests this suggestion, then perhaps we can look to Finland for guidance. There, traffic fines are indexed to a driver’s salary (you’re fined 20 percent of one month’s take-home pay). “Just a few tickets from a few speeding billionaires could help balance any budget in a hurry!”

Sunday, March 17, 2013

Mozilla Firing the First Shot?



The Washington Post published an article on March 14th, titled, “Web browsers consider limiting how much they track users” which may be found at: http://www.washingtonpost.com/business/technology/web-browsers-consider-limiting-how-much-they-track-users/2013/03/14/94818d22-8bed-11e2-9f54-f3fdd70acad2_story.html

The article centered on the tug-of-war that is the “do not track” debate raging among web browsers, consumers, and advertisers. The problem with this debate, however, is that none of the three main camps holds a unified position, often leaving all three equally criticized for their action or inaction.

Take Mozilla, the non-profit organization behind Firefox, a browser that according to the article is used on up to 20% of desktop computers and that has catered to internet users with an eye toward smaller, less commercialized services. Under the proposal, Firefox would disallow tracking by third-party companies whose sites a user did not voluntarily visit. For instance, if a consumer searched for “recipes” in Firefox, only the sites the user clicked on would be allowed to track the consumer. A company like Weight Watchers would not be allowed to bootstrap a cookie onto the consumer’s browsing history just because the request for recipes might also mean someone is food-conscious. Weight Watchers could only place its cookie if the consumer actually clicked on a recipe from Weight Watchers. A consumer would still be tracked by the legitimate sites they visited; third parties simply could not track through inference of preference. In essence, Firefox would require consumers to make more affirmative preference choices.
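The rule the article attributes to Firefox reduces to a short decision procedure. This is a simplified sketch of the policy as described, not Mozilla's implementation; real browsers compare registrable domains (eTLD+1) rather than exact hostnames.

```python
# Sketch of the described third-party-cookie rule: a cookie is allowed
# only if it comes from the page itself, or from a site the user has
# actually visited. Naive exact-match on domains, for illustration.

def allow_cookie(cookie_domain, page_domain, visited_domains):
    """Allow first-party cookies always; allow third-party cookies
    only from domains the user has voluntarily visited."""
    if cookie_domain == page_domain:
        return True  # first-party: set by the site being viewed
    # third-party: allowed only if the user has visited that site
    return cookie_domain in visited_domains
```

So in the article's example, an embedded Weight Watchers widget on a recipe site could not set a cookie, but Weight Watchers could once the user actually clicked through to one of its recipes.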

However, other companies, like Google, Yahoo!, and Facebook, are less enthused about limiting the tracking abilities of advertisers and third parties. The article states that these companies “would avoid the kinds of restrictions Firefox is considering because of an exception that allows cookies to be placed by sites users voluntarily visit.” In other words, Firefox stands alone in wanting to limit tracking cookies. Many of these companies believe that it is through tracking cookies that advertisers can best cater to their consumers. Since advertisers are the lifeblood of the internet and provide the money that is then turned into free services for the consumer, some of these companies believe that denying advertisers this access will actually end up doing consumers a disservice. Other browser makers, like Apple with Safari, have already limited third-party tracking on mobile devices. Microsoft got in major trouble with industry giants when it released its new browser with the default set to “do not track,” a setting that was promptly ignored by advertising networks.

Consumers are just as wishy-washy. On one hand, privacy-conscious consumers subscribe to the view of the Firefox executives: they would limit their exposure to advertisers if it were easier and they were given the option. Other consumers, though, subscribe to the philosophy of the Future of Privacy Forum, which states, “It’s fine for tracking to come out into the sunlight and for companies to realize that if all you’re trying to do is sell people stuff, most people are cool with that so long as they believe people are trying to do things for them rather than to them.”

Advertisers are likely the most unified voice. They believe in the value of online tracking and that it is through this tracking that consumers get individualized and perfected attention. The Interactive Advertising Bureau tweeted that Mozilla “launched a nuclear first strike against the industry,” one that would only set the stage for an arms race in stealth tracking techniques. Advertisers contend that if browsers and consumers want to change the rules of the game, the advertisers will change their strategy and tactics. The old refrain, “This could break the Internet,” has come back to rear its ugly head. They argue that the model consumers have depended on will vanish and have to be replaced by a less-than-free service. However, perhaps this is what consumers actually want. Perhaps the market has spoken, and it simply does not fit the expected lowest common monetary denominator.

At the end of the article, I was left with four main questions. (1) Who cares about “do not track” if it is just a “do not trespass” sign that advertisers will ignore? (2) Would a “do not track” restriction on desktop browsers create demand for one on mobile browsers? (3) What is digital fingerprinting, and is there a role for the FCC and the courts in limiting these more sophisticated tracking capabilities? (4) Is privacy paying the price for the players’ divergent views and infighting, and would the debate be more beneficial if the players were separated into more defined opinions and ideologies?