Congress is currently considering a major overhaul of ECPA. As the cases we read earlier this year indicate, ECPA is sorely in need of an update to address the new privacy and constitutional issues that have developed alongside modern technology. (Recall, e.g., United States v. Warshak (6th Cir. 2010) (finding that a search legal under the Stored Communications Act violated the Fourth Amendment); Jennings v. Jennings (S.C. 2012) (distinguishing between webmail and Outlook-like email systems in the applicability of the SCA); and Kirch v. Embarq (10th Cir. 2012) (illustrating the trouble with applying the old ECPA framework to new technology entities).) The bill, introduced by Senators Patrick Leahy (D-VT) and Mike Lee (R-UT), would amend ECPA to eliminate the 180-day rule in the Stored Communications Act. Under the new provision, police would need to obtain a search warrant to access emails of any age, rather than only those sent or received within the past 180 days. I’ll leave it to someone else to parse the bill (available here) to see whether it might realistically address some of our other concerns with ECPA. Instead, I’d like to direct your attention to this C-SPAN gem, which has been making its way around the web. In a House hearing on the ECPA amendments, Rep. Louie Gohmert (R-TX) engaged in a fairly ridiculous discussion with a Google attorney:
On the one hand,
Rep. Gohmert seems genuinely concerned with citizens’ right to privacy in their
email, and perhaps he should be commended for that. And his ignorance of how
Google's advertising system works is understandable – it’s complicated, and
Google hasn’t always done a great job of explaining it. But what's really
concerning is his unwillingness to learn about the systems he will be asked to
vote on. His understanding of Google’s system seems to come primarily from
Microsoft’s “Scroogled” ad campaign, which Holly wrote about last month.
The Google lawyer’s
description of their ad system was actually reasonably accurate. He says that advertisers can “identify
the key words that they would like to trigger the display of one of their ads,”
“the email content is used to identify what ads are going to be most relevant
to the user,” and “advertisers are able to bid for the placement of advertisement
to users who our systems have detected might be interested in the
advertisement.” Since 2004, Google has scanned emails for keywords and automatically
placed ads related to the keywords in that email. So if your email says, “Hey
buddy, let’s go snowboarding next week,” you’ll see an ad for “Snowboards now
on sale at REI!” But you wouldn’t see that ad in other emails. In 2011, Google
implemented a new system that “learns from your inbox.” For example, if you
email back and forth with your friend about the snowboarding trip, and then you
book plane tickets to Colorado, you might start to see ads for particular ski resorts in
Colorado, even on unrelated emails. Additionally, the newer system will “learn,” for example, that you never read
emails from a particular charity, and Gmail won’t show you ads for charities, since you’re already ignoring the charity messages you receive.
The 2011 advertising system, which, as far as I can tell, keeps track of
keywords, the frequency of their use, whether or not you read and respond to
the emails in which they occur, and possible links to other keywords, implies
a much greater degree of data-gathering than the predecessor model. And to be sure, Google hasn’t always helped itself in describing its own
system: as EPIC points out, Google used the terms “content extraction” and “information
extraction” to describe its advertising set-up in patents. Terms like these imply a more nefarious appropriation of user data, whereas the
actual process is fairly passive. But
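To make the contrast between the two systems concrete, here is a toy sketch in Python. Everything in it is invented for illustration (the function names, the weights, the scoring are all assumptions of mine, not Google’s actual code); it models only the behavior described above: per-email keyword matching circa 2004 versus a cross-email profile that accumulates interest and discounts messages you ignore.

```python
# Hypothetical toy model of the two ad-matching approaches described above.
# Nothing here reflects Google's real implementation.

def match_ads_2004(email_text, ad_keywords):
    """2004-style: an ad shows only if its keyword appears in the
    email currently being read."""
    words = set(email_text.lower().split())
    return [ad for ad, kw in ad_keywords.items() if kw in words]

class InboxProfile:
    """2011-style: keyword interest accumulates across the whole inbox,
    and ignoring messages lowers the interest score for their keywords."""

    def __init__(self):
        self.interest = {}  # keyword -> accumulated interest score

    def observe(self, email_text, was_read):
        # Invented weights: reading an email raises interest in its
        # keywords; leaving it unread lowers it.
        weight = 1.0 if was_read else -0.5
        for word in set(email_text.lower().split()):
            self.interest[word] = self.interest.get(word, 0.0) + weight

    def match_ads(self, ad_keywords):
        """Ads can now appear even on unrelated emails, ranked by
        accumulated interest rather than the current message's content."""
        scored = [(ad, self.interest.get(kw, 0.0))
                  for ad, kw in ad_keywords.items()]
        return [ad for ad, score in sorted(scored, key=lambda p: -p[1])
                if score > 0]
```

On this sketch, the 2004 model never shows the snowboard ad on your plane-ticket email, while the 2011-style profile will, because it remembers the earlier snowboarding thread; and an advertiser whose messages you never open gradually stops winning placements.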
while there certainly may be privacy concerns with Gmail’s “content extraction”
advertising model, they are NOT the concerns Rep. Gohmert believes exist.
We’ve talked before
about how difficult it is for Congress to pass meaningful privacy legislation when
technology will likely render any protections irrelevant, obsolete, or
insufficient in the not-so-distant future. The apparent unwillingness of this
particular Representative to gain an accurate understanding of Google’s nearly
10-year-old advertising technology adds another level of concern.
I find this very amusing, considering the Supreme Court’s perennial entreaties to Congress to address certain privacy issues with legislation (e.g., Taft in Olmstead and Alito in Jones). Yet here Emily gives a very good example of why congressional action may not be a privacy panacea.
Would a better privacy regime result if the FTC had rulemaking authority? We’ve discussed in class how the FTC regulates almost exclusively through adjudication, not rulemaking. But one of the classic advantages of administrative agencies is their expertise. After all, I can’t imagine a privacy expert like Paul Ohm sharing in Rep. Gohmert’s confusion (nor can I imagine Congress inviting Prof. Ohm to come be a senior policy advisor, as the FTC has done). So I think the FTC could handle these lawmaking duties much better than the judiciary or legislature.
Of course, there are a few reasons why the FTC doesn’t engage in privacy rulemaking, probably the particulars of its delegation under the FTC Act. And Prof. Ohm mentioned that the FTC considers itself in the business of law enforcement, not regulation. I wonder if any state FTC analogues go the rulemaking route, and if so, what successes they have had. It’s a route worth considering. (And apparently, at least one Senator has considered giving the FTC privacy rulemaking authority.) Anyone have thoughts on why FTC rulemaking wouldn’t actually be any better?