Thursday, February 28, 2013

Considering a New Model for Policing Data Privacy

Earlier this week, MPR’s The Daily Circuit ran a segment on the limitations of “big data.” Their guest was Samuel Arbesman, an applied mathematician and network scientist who writes about data. The segment mentions a couple of other uses for data that we could add to our list from today’s class: safety (diagnosing problems with cars/buildings) and efficiency (whether taxis take the most direct routes or whether a factory is operating at peak efficiency). But it also got me thinking about alternative models for regulating the collection and use of data.
Data privacy was not the focus of the segment, but one caller did raise the issue of privacy and the possibility of detrimental uses of the vast amounts of data being collected. (At about 24:45 in the audio clip, for those of you playing along at home.) Arbesman expressed skepticism about the ability of lawmakers to regulate effectively in this area because the technology changes so rapidly (a challenge we've mentioned several times in class already). But Arbesman also recognized the need to deal with what he referred to as questions of ethics surrounding the use of big data. He suggested in passing that institutional review boards (IRBs) might help police these ethical issues, which got me thinking about whether IRBs could be a useful model for addressing data privacy concerns.
The purpose of an IRB is to review and approve all human subjects research before the research begins. Researchers are required to submit extensive information about their proposed projects to the IRB, which is composed of a panel of disinterested members with sufficient expertise to evaluate research activities. See 21 C.F.R. § 56.107. The IRB is then charged with assessing the risks and benefits of participating in the research and ensuring that the proposed consent procedures are adequate to ensure that participants are aware of the risks. See 21 C.F.R. § 56.111. (You can find more information about the University of Minnesota’s IRB, by way of example, here.)
If we used an IRB model for regulating data privacy, entities collecting and using personal data could be required to seek approval for data collection and use in advance. The main drawback I see to the IRB model is that an ethics-based approach might be insufficient to dissuade entities that stand to profit significantly from unethical uses of data. The IRB approach would, however, have the benefits of being predictable (organizations would seek approval of their data uses in advance rather than be subject to litigation after the fact) and more readily adaptable to changing technology. Perhaps this doesn’t need to be an either/or proposition; instead, maybe approval through a voluntary review system could insulate organizations’ data uses from subsequent legal challenges.
In any case, I am curious to know what the rest of you think about the usefulness of an IRB-like system for addressing some of the privacy concerns we've discussed that are inherent in using data that can be traced to particular individuals.

2 comments:

  1. I can certainly see the appeal of creating an oversight body, but I have two concerns.

    First, enforcement and compliance are likely to be difficult to achieve given that data-collecting or data-using entities may be able to operate below the radar. Private companies are likely to engage in privacy-invading activities for their own internal purposes, and so these activities are unlikely to be revealed through the publication of research articles, for example.

    Second, as you mention, IRB review requires a weighing of the risks and benefits of a particular course of action--including the risks and benefits to privacy. HHS, which regulates the current IRB process for research, has made it clear that it considers informational risks to privacy to be substantially less serious than other kinds of risks (see these proposed changes to the Common Rule, which would create a streamlined and less rigorous review process for research that poses only informational risks). So we end up in the familiar territory of trying to parse out what the harms are and how they ought to be balanced against the benefits. And, in an IRB-based system, we have the added bonus of being modeled on a regulatory system that doesn't take privacy concerns all that seriously.

    (Side note: maybe informational harms aren't that serious and taking a lighter touch to regulating them is appropriate...)

  2. The problems of industry capture and forum shopping are already present in IRBs. Given the relatively low profile of these boards, I am not convinced that a privacy IRB would avoid the same fate. As it stands, companies have the money and resources to advocate for their positions, and there is unlikely to be a similar countervailing interest influencing the IRB, so such boards risk becoming an effective rubber stamp or joining a race to the bottom.
