Tuesday, February 5, 2013

A Path to Privacy?

We all love the simplicity, individuality, and ease with which smartphone applications invade (or enrich) our lives. Applications on smartphones and social media websites often ask whether they may access a user's contacts. LinkedIn does it. Twitter does it. Facebook does it. The site usually couches the request in non-intrusive language about helping us better "connect" or "get the most from the site/application."

On February 1st, the Federal Trade Commission (FTC) settled with Path, a social networking application on Apple's iOS platform. Path prompted users to accept or decline the application's access to their personal information and contact details. However, due to a design flaw, the prompt did not appear until after the information had already been collected. The fence that users thought they had built around their private data had, in fact, already been breached.

Because the flaw was in the program's design, Path escaped liability for misrepresenting its privacy policy to users. In essence, the FTC contented itself with deciding that yes, Path had committed a privacy breach, but because the breach was unintentional, the company's privacy policy itself was sound. As one commentator noted, "This was not an instance where [Path] made a privacy policy promise it violated. In large part this was a [user interface] design failure. . . . [The FTC] focused on the omission of the proper timing of the communication to the user as a deception."

Path agreed to pay $800,000 in fines and to submit to privacy audits for 20 years as punishment for the improper collection of user data. Should we read this settlement as the FTC condemning only intentional privacy invasions? Should there be more liability for companies that release designs online without first checking that those designs comply with their own privacy policies? What becomes of the user's "reasonable expectation of privacy" when the privacy policy that defines the parameters of that expectation is not honored by the product? It seems there is a loophole allowing companies to escape liability by blaming design flaws rather than the internal checks they should perform before putting a product into general use. An $800,000 fine and two decades of audits may encourage Path to be more vigilant in protecting privacy, but it remains to be seen whether this settlement will change the behavior of other companies. App companies would be wise to make sure their technology matches their promised privacy policies.

2 comments:

  1. I wonder if data leaks should be a strict liability offense, or per se illegal. Either status would make leaks punishable, but of course the industry would complain that it would be overly restrictive and might put too much emphasis on data security. After all, how serious is a leak of demographic user stats, as compared to a leak of credit card information or social security numbers?

    My instinct is to move toward per se illegality of data leaks, but I feel like there are probably a lot of reasons (that don't come to mind) to avoid that sort of stance.

  2. The HIPAA privacy rule requires regulated parties to notify individuals when their protected health information (PHI) is breached. A new HHS rule defines breach as the unauthorized "acquisition, access, or use [of PHI] . . . which compromises the security or privacy of PHI."

    HHS specifically declined to adopt a bright-line rule that any unauthorized acquisition, access, or use constitutes a breach. In addition to excluding inadvertent disclosures made in good faith from the definition of breach, the rule allows regulated parties to demonstrate that an incident is not a breach if there is a "low probability that protected health information has been compromised," considering (1) the nature and extent of the information involved (including identifiability); (2) who used the information and to whom it was disclosed; (3) whether it was actually viewed; and (4) the extent to which the risk has been mitigated.

    Even though HHS's new rule tightens breach notification requirements compared to the interim final rule that came before it, the factors used to determine whether a compromise has taken place make it pretty clear that risk of harm, in addition to intent, is the primary trigger for breach notification. So it seems that the FTC is not alone in finding that a per se rule does not appropriately balance the burden to businesses against individual privacy rights.
