
CHAPTER 3

Privacy

INFORMATION IN THIS CHAPTER:

  •  Sources of privacy law

  •  Privacy concerns raised by AR

  •  How AR can enhance privacy

INTRODUCTION

Privacy is a hot topic these days, especially in connection with any sort of communications technology. In part, this is due to the lightning-fast pace at which information technology is developing. The less people understand how the technology works and how it can be used to gather information about them, the more apprehensive they are likely to feel about it. Privacy is as much about emotional reactions as it is about legal doctrine, and it is still a very amorphous concept from either perspective. There is much disagreement about just what the word means, what sort of rights it should include, and where those rights come from.

That said, however, there are various laws and court decisions that define and protect different types of privacy rights. Many of these are likely to be implicated by the development and implementation of augmented world technologies.

SOURCES OF PRIVACY LAW

BACKDROP: THE FIRST AMENDMENT

One basic reason that privacy is such a difficult concept to define and protect in the United States is that it runs counter to our fundamental commitment to free and open speech. Our country was founded on the expression of dissent, personal liberty, and the ability of each individual to participate in the political system. The American legal system still reflects those values in its hesitance to give government the power to prevent a citizen from saying whatever he or she chooses to say - or, putting it more precisely in light of modern communications technology, conveying whatever information he or she may choose to convey.

In the American legal system, virtually all laws concerning the conveyance of information are limited in their application, to some degree, by the First Amendment to the United States Constitution. This bedrock provision prohibits governments from “abridging the freedom of speech ... or of the press.”1 After more than two centuries of interpretation by the courts, this simple statement has been fleshed out into a fundamental principle of free expression that undergirds our entire framework of participatory democracy. As long as the subject of one’s speech has any arguable connection to issues that affect the well-being or interests of more than just those involved in the conversation - what the law calls “matters of public concern” - then the right to express that view will almost always be protected by the First Amendment. By contrast, “matters of private concern” are those that the law recognizes as not being the legitimate business of anyone other than those directly affected by them. These - and, for the most part, only these - issues the law will protect as “private.”

The following excerpt from a 2011 Supreme Court opinion gives a concise summary of this bedrock legal doctrine:

Speech on matters of public concern is at the heart of the First Amendment’s protection. The First Amendment reflects a profound national commitment to the principle that debate on public issues should be uninhibited, robust, and wide-open. That is because speech concerning public affairs is more than self-expression; it is the essence of self-government. Accordingly, speech on public issues occupies the highest rung of the hierarchy of First Amendment values, and is entitled to special protection.

Not all speech is of equal First Amendment importance, however, and where matters of purely private significance are at issue, First Amendment protections are often less rigorous. That is because restricting speech on purely private matters does not implicate the same constitutional concerns as limiting speech on matters of public interest. There is no threat to the free and robust debate of public issues; there is no potential interference with a meaningful dialogue of ideas; and the threat of liability does not pose the risk of a reaction of self-censorship on matters of public import.2 3

The fact that this summary of the law preceded an opinion in which the Court ultimately upheld the right of radical protesters to display hateful messages at funerals illustrates the breadth of the phrase “matters of public concern.” Any arguable connection to public affairs imbues speech with a nearly inviolable legal protection, no matter how controversial a particular speaker’s point of view may be.

One corollary of this principle is that information in the public domain is free for all to use. In this context, data is more or less presumed to be public; it is a significant burden to prove that something should be free from public scrutiny. Even if information was once legally private, that privacy is gone for good after it is lost. For example, in the 2001 decision Bartnicki v. Vopper, the United States Supreme Court refused to punish a radio commentator for broadcasting a recorded telephone conversation, even though a third party had obtained it in the first instance by illegal eavesdropping.4 And in the famous Pentagon Papers cases of 1971,5 the Supreme Court refused to prevent newspapers from publishing leaked classified military documents about the Vietnam War, even though the government warned that disclosure would lead to the death of Americans abroad. That is how sacrosanct the First Amendment principle against what the courts call “prior restraint” on publication has become.

This also explains why what some call the “right to be forgotten” is unlikely to ever take root in the United States as it is beginning to do in Europe. Various groups have advocated different types of legal proposals to give people a legal mechanism to have embarrassing information about them removed from the public record - particularly internet search engines - and to get others to stop repeating it, even if it was once newsworthy. Some American legal commentators have said that this “sweeping new privacy right ... represents the biggest threat to free speech on the Internet in the coming decade.”6 In 2013, California became the first American jurisdiction to grant a legal right to have personal information deleted from the internet, although the statute applies only to minors and is riddled with uncertainty as to how it will work.7 But even if the statute survives legal challenge, First Amendment jurisprudence will not permit American regulators to run very far with this idea. The Supreme Court has struck down on free speech grounds more than one law intended to prevent child pornography, for example, and even refused to restrain newspapers from publishing the names of rape victims, so long as the information was legally acquired.8

That is why the First Amendment remains the elephant in the room during any discussion of American privacy law, even though the provision itself restricts only the government and not private citizens. It explains, for example, why privacy laws cannot prevent individuals from collecting and repeating information that is freely available in public places - such as overheard sights and sounds - including by recording them. The freedom of speech also explains why the penalties for even a bona fide invasion of privacy sometimes seem so anemic; the offender may be punished, but the ill-gotten information typically remains in the public sphere.

This is also why it has been so difficult to find a legal path toward a third category of information between “public” and “private.” For example, philosophy professor Evan Selinger of the Rochester Institute of Technology in New York has proposed formalizing the idea of “obscurity” as a legal category for information that, while not entirely private, must still remain difficult to access.9 Despite the attractiveness of this proposal, it is difficult to envision how obscurity could be lawfully enforced in a legal framework that forbids government restrictions on speech.

All of this said, however, the law will restrict some speech on some subjects under some circumstances. Exceptions to the freedom of speech are just as important to the healthy functioning of our democratic system as is the freedom itself. Certain types of information are so unrelated to the public concern, and some methods of expressing it are so disruptive to the public order, that some regulation by the courts is permitted. Moreover, we need spaces in our lives for private discourse, where we can actively explore our opinions with others without fear of public recrimination. Brazilian President Dilma Rousseff reminded the United States government of this point in the midst of news reports that the NSA had tapped her communications. “Without the right of privacy,” she said, “there is no real freedom of speech or freedom of opinion, and so there is no actual democracy.”10

Under most circumstances, however, government protection of individual privacy over free speech remains the exception rather than the rule. As a result, instead of having a single “right of privacy” in the United States, we have one central freedom of speech, together with a mismatched patchwork of state and federal laws occupying the spaces between and surrounding the boundaries of that freedom.

THE COMMON LAW RIGHT TO BE LEFT ALONE

Federalism is another reason for the lack of a uniform “law” of privacy in the United States. Our legal system is one historically based on limiting the powers of the national government, with all other powers of government being reserved for the states. The power to regulate and protect information about individual citizens was not one of the traditional powers of the Federal government, and (with narrow exceptions discussed below) the affirmative limitations on government power in the Bill of Rights do not have much to say on preventing encroachment on personal privacy. Traditionally, therefore, most of the laws protecting personal privacy have come from state legislatures - which retain the general power to pass virtually any law they choose within the very loose boundaries established by the Constitution - and from state courts, which have the inherent authority to go beyond the written statutes and declare principles of judge-made “common law.”

The modern era of American privacy protection began in 1960 with the publication of a law review article by Dean William L. Prosser.11 He summarized what by then was a burgeoning but chaotic body of common law decisions from courts across the country and distilled them into four distinct torts that have since become the foundation of privacy law in virtually every state. Three of the four torts amount to variations on what is commonly called “the right to be left alone.” They are as follows:

  •  Intrusion into Seclusion. This common law tort occurs when someone intentionally intrudes upon the private space, solitude, or seclusion of a person, or the private affairs or concerns of a person, if the intrusion would be highly offensive to a reasonable person. The classic example is a secret video camera installed in a changing room or bedroom. The tort occurs upon recording; no publication of the recorded footage is necessary.

  •  Publication of Private Facts. This separate cause of action arises when someone publicly disseminates little-known, private facts that are not newsworthy, are not part of public records or public proceedings, are not of legitimate public interest, and would be offensive to a reasonable person if made public. Typical examples here include private health matters and intimate sexual information.

  •  False Light. This cause of action is similar to the tort of defamation (also known as libel or slander), which punishes the unprivileged publication of demonstrably false assertions of fact that injure a person’s reputation. The tort of false light is also designed to protect a person’s reputation, but it deals with the publication of information that, while potentially true in some respects, is communicated in a manner that conveys something false. It requires a publication made with actual malice that places the plaintiff in a false light and would be highly offensive to a reasonable person.12

One common thread running through each of these causes of action is a prerequisite that the aggrieved party have a “reasonable expectation of privacy” under the circumstances alleged. The word reasonable is a legal term of art loaded with meaning. For one thing, it is an objective measurement. Although courts will often require a plaintiff to have subjectively expected privacy as well, the law does not deem something private just because someone wants it to be. A reasonable expectation of privacy is also one that is constrained by the boundaries of what other laws - such as the First Amendment - make public. A court will determine what the average, reasonable person would have expected under the circumstances, and judge the case according to that standard.

Although Prosser and others like him did much to bring order to the common law of privacy, it remains an inherently decentralized, flexible concept that evolves each time a court applies time-tested principles to the facts of a new case.

EAVESDROPPING AND WIRETAPPING STATUTES

Eavesdropping laws protect the right not to be surreptitiously recorded. More specifically, eavesdropping involves making an audio or video recording of other people under circumstances in which those persons had a reasonable expectation of privacy. Eavesdropping is prohibited by statute in virtually every state, and much of the same subject matter is covered by federal wiretapping statutes as well. It can be punished as a tort, a crime, or both, depending on the jurisdiction. The most recent and highly publicized example of eavesdropping through emerging digital media was the case of now-former Rutgers student Dharun Ravi, who was sentenced to 30 days in jail for using his webcam to secretly record and broadcast his roommate’s intimate encounter - an invasion that ultimately led the roommate to take his own life.13

The boundaries of prohibited activity vary somewhat between states; for example, some punish only audio recording and not video. Some are “one party consent” jurisdictions, in which the recording is lawful as long as one participant in the conversation agreed to the recording. By contrast, “two party consent” states consider the recording to be eavesdropping unless all participants consent. And in both types of jurisdictions, defining “consent” is rarely a simple task.

ELECTRONIC PRIVACY LAWS

There is no one statute, court decision, or other authority that establishes the boundaries between public and private realms online. Instead, we have a patchwork quilt of various statutes intended to address distinct areas of concern. For example, the Electronic Communications Privacy Act14 and the Stored Communications Act15 created barriers to both the government and private citizens obtaining the emails of others. The latter statute has since been interpreted to apply to other types of electronic messages that were intended by their senders to be private, such as texts and Facebook direct messages.

Both federal and state authorities have also taken various actions to regulate the use of customers’ personal information by the owners of commercial websites and mobile applications. Various federal agencies, including the Federal Trade Commission, Federal Communications Commission, and the National Telecommunications and Information Administration, have all issued their own set of “recommended” guidelines for protecting such interests. The FTC occasionally takes legal enforcement action against companies who violate these guidelines in a manner that it considers to be an “unfair” commercial practice.

Meanwhile, the absence of binding legislation from Congress on these issues has led numerous states to pass their own laws regulating other aspects of online privacy. By far, California leads the pack in this respect. In the last few years alone, it has adopted rules on mandatory disclosures of data breaches, requirements for mobile app privacy policies, the ability of minors to get their information taken down, and the responsibility to respect user requests not to have their web usage tracked.

SUBJECT-SPECIFIC PRIVACY LAWS

The foregoing laws protect privacy in broad strokes by establishing general boundaries for behavior or regulating who has access to particular communications media and under what circumstances. There are also laws aimed at safeguarding specific categories of information. For example, the Health Insurance Portability and Accountability Act of 1996 (HIPAA)16 significantly increased protection for individuals’ personal health information. The Children’s Online Privacy Protection Act of 1998 (COPPA)17 regulates the collection and use of information from children younger than 13. The Gramm-Leach-Bliley Act of 199918 governs the disclosure of financial data. Various other laws on the federal and state level govern the collection and use of social security numbers and other discrete types of information.

LIMITATIONS ON GOVERNMENT INTRUSION INTO PRIVACY

For the most part, the authorities described above limit how private individuals can collect and use information about other individuals. Our legal system also contains fundamental restrictions on the ability of governmental authorities to collect private information. The most basic of these is the Fourth Amendment to the United States Constitution, which restricts the government from invading “the right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures.”19 From this comes the prerequisite that law enforcement officials obtain a judicial warrant based “upon probable cause” before intruding into any place in which a person has a reasonable expectation of privacy. In June 2014, the Supreme Court re-affirmed the importance of this provision in the digital age by holding that the Fourth Amendment requires a warrant before police may examine data on a detained person’s mobile device.20

Of course, subsequent developments such as the USA Patriot Act21 and NSA surveillance scandals of recent years may call into question the efficacy of these limitations on government power. And to be sure, the opportunities for data collection presented by augmented reality and its supporting technologies will sorely tempt law enforcement agencies to find new ways to monitor and collect individuals’ electronic data.

With this legal framework in mind, then, let’s consider how AR-related technologies are likely to test the boundaries of American privacy laws.

PRIVACY CONCERNS RAISED BY AR

FACIAL RECOGNITION AND OTHER BIOMETRIC DATA

The importance of facial recognition in an augmented world

There is nothing inherent to augmented reality that requires the collection of biometric data. It is ingrained in human nature, however, to seek interaction and companionship with other people, which explains how social media has so quickly become the single most popular function of the internet, and why we invent so many devices for calling, texting, tweeting, poking, tagging, friending, following, and liking each other. That is also why we have already seen several real and imagined apps that bring social networking into the augmented medium. It is safe to say, therefore, that we will use AR technologies for new forms of social media and interpersonal interaction.

In order for any AR device to interact directly with a person, the device first needs to recognize who and where the person is. At present, there is no realistic alternative for accomplishing that task in a social setting other than by facial recognition. Retina and fingerprint scans do and will most certainly have their place, but they require the subject to get a little too up close and personal with the scanner to be comfortable in most settings. By contrast, faces can be recognized passively and at a distance.
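To make the mechanics concrete, the sketch below shows the matching step that most facial recognition systems share: each face is reduced to a numeric "embedding," and a newly seen face is attributed to whichever enrolled person has the closest stored embedding, provided the distance falls under a tuned threshold. The vectors, names, and threshold here are toy values of my own, not drawn from any real system.

```python
import math

# Toy illustration of the matching step behind passive facial recognition.
# A real system would compute embeddings from camera frames; these are made up.

def euclidean(a, b):
    """Straight-line distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

ENROLLED = {"alice": [0.11, 0.52, 0.33], "bob": [0.90, 0.14, 0.61]}
THRESHOLD = 0.25  # illustrative cutoff; real systems tune this empirically

def identify(candidate_embedding):
    """Return the enrolled name whose embedding is nearest, if close enough."""
    name, dist = min(
        ((n, euclidean(candidate_embedding, e)) for n, e in ENROLLED.items()),
        key=lambda pair: pair[1],
    )
    return name if dist < THRESHOLD else None

print(identify([0.10, 0.50, 0.35]))  # -> "alice"
```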

Even social technology that we use today demonstrates the inevitability of widespread facial recognition. The capability to implement facial recognition on a broad scale has existed for years, but the major technology companies have held it back. As of this writing, for example, Google still prohibits any app for its Glass eyewear that recognizes faces.22 Such companies are leery of sparking a privacy backlash - which is exactly what has happened each time Facebook has expanded its use of the technology. For example, in 2012 Congressional hearings, Sen. Al Franken grilled Facebook officials about their intentions for the use of these “faceprints.”23 In August 2013, Facebook changed its Statement of Rights and Responsibilities to give itself the authority to add individuals’ profile photos to its facial recognition database.24 That move was met with probes from various European regulators and promises of additional scrutiny from Sen. Franken.

Yet Facebook continues to roll out facial recognition applications, bit by bit, and has refused Sen. Franken’s invitation to promise that they won’t use it even more widely in the future. It isn’t alone. “Businesses foresee a day when signs and billboards with face-recognition technology can instantly scan your face and track what other ads you’ve seen recently, adjust their message to your tastes and buying history and even track your birthday or recent home purchase.”25 This prospect became eerily real for me aboard a cruise ship in the Fall of 2012. It used to be that ship photographers had to post their photos in a massive onboard gallery that patrons spent hours browsing through, trying to pick out the pictures in which they appeared. No more. This time, my digital folder was updated in near-real time with new photos every day, using software that had tagged my face or even the faces of others in my party. Chances are that I signed something at some point allowing the ship to do this, although I’m not sure US privacy laws would hold much sway in international waters anyway. But it brought the technology’s power home in a visceral way.

It is more than commercial pressures driving the technology, however; criminal acts like the Boston Marathon bombing stoke the demand for law enforcement to have better facial recognition capability. “The FBI and other U.S. law enforcement agencies already are exploring facial-recognition tools to track suspects, quickly single out dangerous people in a crowd or match a grainy security-camera image against a vast database to look for matches.”26 Even more likely to gain public support are apps such as Baby Back Home, an AR app in China that uses facial recognition to allow average citizens to locate and identify missing and kidnapped children.27

Or it may be simpler, more personally gratifying applications that finally win the public over. Forbes contributor Tim Worstall recently echoed28 an argument that I have made for years - that the real “killer app” for AR eyewear will be one that recognizes faces and calls to the user’s field of view everything the user knows about that person - their name, the names of their spouse and children, and so on - all in order to avoid embarrassment at cocktail parties.

Whatever vector the technology takes, the more such sympathetic and socially redeeming applications of facial recognition gain acceptance, the more inured and less apprehensive the public will be toward the technology. Businesses will then encounter less resistance to using it for more commercial purposes. At that point, society will grapple in earnest with the boundaries that privacy law can and should impose on facial recognition.

Regulating facial recognition

Of course, as mentioned above, regulatory agencies are not waiting until facial recognition becomes ubiquitous before they begin to regulate the technology. On October 22, 2012, the Federal Trade Commission released a report entitled “Facing Facts: Best Practices for Common Uses of Facial Recognition Technologies.”29 The FTC has had its eye on this technology for a long time - at least since the workshop it held on the subject in December 2011,30 aware that it is being implemented by a wide variety of industries.

Among the privacy issues that concern the FTC most is “the prospect of identifying anonymous individuals in public.”31 One fundamental consequence of First Amendment jurisprudence, however, is that there are no “anonymous individuals in public [places];” being publicly visible pretty well eliminates any expectation of legally protectable privacy one might hold. Indeed, even before facial recognition technology was dreamed up, the law never recognized a general right to remain an anonymous face in a crowd. That sort of anonymity is an example of the proposed right to “obscurity” discussed above.

If anything, it has been the opposite; the law has recognized faces as an important means of identification. For decades, police have used line-ups to identify suspects’ faces, and taken mug shots as a means of recording detainees’ identities. Although the recent rise of websites that catalogue these mug shots for shaming and extortion purposes has caused some agencies to clamp down on their distribution, most courts still protect the public’s right to access these files as public records. And in 2003, a Florida judge refused to allow a Muslim woman to obtain a driver’s license unless she agreed to remove her veil and be photographed, ruling the state “has a compelling interest in protecting the public from criminal activities and security threats,” and that photo identification “is essential to promote that interest.”32 Therefore, we are unlikely to see any significant regulation on the gathering and use of facial recognition information in public places, unless public outcry results in significant new privacy legislation.

Such regulation may have a greater chance of surviving judicial scrutiny, however, to the extent that it targets purely commercial activity. As the Supreme Court has explained, commercial messages receive less vigorous protection than other speech, at least if they have the effect of misleading the public or fostering illegal activity:

The First Amendment’s concern for commercial speech is based on the informational function of advertising. Consequently, there can be no constitutional objection to the suppression of commercial messages that do not accurately inform the public about lawful activity. The government may ban forms of communication more likely to deceive the public than to inform it or commercial speech related to illegal activity.33

This is why courts are able to hear such causes of action as trademark infringement, unfair competition and false advertising - all of which involve activities that are, at their core, speech. Because unfair commercial activity is exactly the sort of activity that the FTC exists to regulate, it is a logical starting place for conversations about the use of facial recognition in commerce.

The FTC sees this as the perfect time to publish its expectations “to ensure that as this industry grows, it does so in a way that respects the privacy interests of consumers while preserving the beneficial uses the technology has to offer.”34 The FTC Facing Facts report does not have the force of law, but you can bet that it will influence the decision-making processes of FTC administrative law judges and others evaluating novel allegations of “deceptive advertising practices” involving facial recognition.

Although the report characterizes its recommendations as “best practices,” it does not do much to actually reduce its discussion to practice. Rather, the report loosely follows the theme of the following three “principles”:

  1. Privacy by Design: Companies should build in privacy at every stage of product development.

  2. Simplified Consumer Choice: For practices that are not consistent with the context of a transaction or a consumer’s relationship with a business, companies should provide consumers with choices at a relevant time and context.

  3. Transparency: Companies should make information collection and use practices transparent.

These “principles” strike me as so vague as to almost be counterproductive. They are intuitive to anyone making a modicum of effort to incorporate privacy concerns into a facial recognition application. As a result, this recitation is not likely to encourage anything more than a modicum of effort to protect privacy. The technology itself is so young that efforts to guide it remain purely speculative at this point.

I am not alone in being uncomfortable with this report. The Commission adopted it on a 4-1 vote. The dissenting commissioner, J. Thomas Rosch, wrote that “the Report goes too far, too soon.” He made three points. First, he thinks that the report fails to identify any “substantial injury” threatened by facial recognition technology. Second, he finds it premature because there is no evidence that any abuses of the technology have yet occurred. Third, he believes the recommendation to provide consumers with “choices” anytime the technology doesn’t fit the “context” is impossible to implement, given the difficulty in assessing consumer expectations. As a result, he says, this amounts to an overly broad “opt-in” requirement.

In the months since this report was released, politicians have not gotten any more specific as to how they would regulate facial recognition technology. Even Senator Franken’s November 2013 pronouncement complaining about Facebook says only that he “will be exploring legislation to protect the privacy of biometric information, particularly facial recognition technology” and supports “conven[ing] industry stakeholders and privacy advocates to establish consensus-driven best practices for the use of this technology.”35 Likewise, in December 2013, President Obama announced that his administration would be “looking into” these concerns, but offered no more specifics than Sen. Franken did.

In January 2014, the National Telecommunications and Information Administration convened the industry stakeholder meetings called for by Sen. Franken. Its goal is to articulate consensus guidelines for applying the President’s Consumer Privacy Bill of Rights to facial recognition technology. I had the opportunity to personally participate in many of these sessions on behalf of the AR industry. As of this writing, those guidelines have not been finalized, and their ultimate utility remains unclear (Fig. 3.1).

The FTC report also expressed worry about facial recognition “data [being] collected [that] may be susceptible to security breaches and hacking.”36 These same concerns have already been expressed about electronic databases of all kinds, and we have seen the consequences of banks, credit card companies, and retailers having their information hacked. As a result, there are already several laws on the books (mostly at the state level) regulating the privacy of commercial databases and spelling out proper procedures to follow when that privacy has been compromised. The FTC also treats the failure of companies to adequately secure customers’ personally identifying information as an unfair commercial practice, and occasionally brings related enforcement actions. For example, in May 2014, it settled charges it brought against Snapchat for failing to provide the advertised level of data security to users of its mobile video messaging app.37

FIGURE 3.1

One of the NTIA’s industry stakeholder meetings on the regulation of commercial facial recognition technology.

Indeed, one plausible scenario is that governmental agencies and courts will begin to treat the recognizable dimensions of one’s face as another facet of the “personally identifiable information” that is already regulated by a variety of laws. Other examples of such sensitive data include Social Security numbers, mailing addresses, ZIP codes, phone numbers, and IP addresses. Under today’s laws, businesses are not forbidden from asking for or collecting such information, but they must post privacy policies listing the information they collect and how it is used. They must also disclose when websites deposit “cookies” on users’ computers that allow the user to be tracked by advertisers as he or she moves between various websites. Some effort is underway to legally regulate the use of cookies and enforce “do not track” protocols, but they have not been very successful to date.

DATA ENHANCEMENT

Mid-air augmented displays of virtual information also create new privacy concerns. Concept art of near-future AR applications is rife with examples of augmented data being displayed as hovering over or nearby individual people. In some cases this is social networking or other self-disclosed information about the person, or even digital advertising associated with the individual’s apparel. In other cases, though, it is data about the person that is stored in a variety of disparate databases with varying degrees of public accessibility, and collected by the AR device into one unified display. These include credit scores, transactional information drawn from IOT-connected devices, political affiliations, and even whether the person appears on sexual offender registries. In these concepts, such displays are made possible by recognizing the person’s facial features and using that identification to query other databases for information about the person.

The FTC has previously raised concerns about practices like these, which it calls “data enhancement.”38 It began by noting the vast amount of facial data already collected by social media companies, and that could easily be gathered by other commercial face recognition applications. The FTC then went on to cite a study by researchers at Carnegie Mellon University, which combined readily available facial recognition software with data mining algorithms and statistical identification techniques to determine an individual’s name, location, interests, and even the first five digits of their Social Security number.39 Powered by AR, this capability could ultimately make available to everyone virtually every fact known by anyone about someone, just by looking at that person. The ability to socially reinvent one’s self at any point in life, already under threat by social media, would be essentially lost.
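As a purely hypothetical illustration of the aggregation pattern the FTC describes, the sketch below uses a single face match as the key for pulling together records from otherwise unrelated databases. Every name, database, and value here is invented for illustration only; it is not drawn from the Carnegie Mellon study or any real system.

```python
# Hypothetical "data enhancement": one face identification becomes the key
# for merging records from otherwise unrelated sources. All data is made up.

FACE_DB = {"a1b2c3": "Jane Doe"}                      # faceprint -> identity
CREDIT_DB = {"Jane Doe": 712}                         # credit scores
REGISTRY_DB = {"Jane Doe": False}                     # offender registry
PURCHASES_DB = {"Jane Doe": ["thermostat", "scale"]}  # IOT transaction data

def enhance_profile(faceprint):
    """Merge every record reachable from a single face identification."""
    name = FACE_DB.get(faceprint)
    if name is None:
        return None
    return {
        "name": name,
        "credit_score": CREDIT_DB.get(name),
        "on_offender_registry": REGISTRY_DB.get(name),
        "recent_purchases": PURCHASES_DB.get(name),
    }

print(enhance_profile("a1b2c3"))
```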

To address this concern, the FTC suggested such basic steps as reducing the amount of time that companies retain facial information and disclosing to the consumer how their data may be used. Aside from being difficult to enforce, however, these suggestions do very little to address the practice or policing of such data enhancement. If copious amounts of personal information ever become visible through the mere act of seeing someone’s face, we can be certain that the resulting public outcry will lead to practical and legislative steps to curb abuses of this practice similar to the steps described elsewhere in this chapter to address similar concerns in analogous circumstances.

For the foreseeable future, then, the most productive avenue for protecting the privacy of one’s face in public may be more practical than legal. There are already a variety of software products that purport to shield users from being tracked online. The free market will certainly meet the same demand with regard to facial recognition. Already, several innovators have proposed various types of camouflage and countermeasures to throw off facial recognition software. These include off-center masks, makeup, clothing covered in face-prints, and hats containing infrared lights that confound video cameras.

Software engineer Greg Vincent has even suggested the development of a wearable protocol similar to the robots.txt files that websites use to tell search engines not to index certain pages.40 (Fig. 3.2) Using this protocol, says Vincent, “I can request that our conversation not be shared with anyone other than you and I ... [or] that I not be recorded for later use, that you not photograph me, that you not use facial recognition technology on me, or that you not record my voice.”41 As long as society retains its anxiety about facial recognition and the law remains unable to assuage that concern, we can expect the fashion and consumer electronics industries to fill the gap.

FIGURE 3.2

Greg Vincent’s rough sketch of a robots.txt file for your face.
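To make the concept more tangible, here is one invented example of what such a machine-readable declaration might look like, together with the check a cooperating recording device could run before capturing anything. The field names are my own illustration, not part of Vincent's sketch or any actual standard.

```python
import json

# Invented robots.txt-style consent declaration a person might broadcast from
# a wearable, plus the check a cooperating device could run before recording.
MY_PREFERENCES = json.loads("""
{
  "share_conversation": false,
  "record_for_later_use": false,
  "allow_photographs": false,
  "allow_facial_recognition": false,
  "record_voice": false
}
""")

def may_capture(preferences, action):
    """Default to 'no' for any action the declaration does not expressly allow."""
    return preferences.get(action, False)

print(may_capture(MY_PREFERENCES, "allow_photographs"))  # False
```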

SURVEILLANCE AND SOUSVEILLANCE

All eyes on everything

Privacy advocates have long worried about “Big Brother” governmental agencies using advanced technology to spy on citizens. Such surveillance activity is inevitable, as the 2013 NSA spying scandal has reminded us. It is already a given that surveillance cameras are everywhere in modern-day public life, from stores to gas stations to street corners to traffic lights. Those are so small as to be barely visible anymore, and we rarely even think about them. Indeed, in November 2013, even the City of Las Vegas - that self-proclaimed haven of anonymity - announced plans to install “Intellistreets” street lights that, among other things, have the ability to record sound and shoot video.42 Knowledge is power, and it is the nature of governments to collect all the knowledge available to them.

But we are also entering an era where personal, wearable video recording devices are about to become ubiquitous. Wearable technology empowers individuals to record the words and deeds of themselves and others far more pervasively than any government could reach. Digital eyewear pioneer Steve Mann has coined the word “sousveillance” to describe such “recording of activity by a participant in the activity,” or “inverse surveillance.”43

We have already come to accept that everyone we meet is likely to be carrying a video-equipped cell phone that they can pull out at any moment. But the newest recording devices are ones that we wear on our persons. Among the earliest of these is the Looxcie, an over-the-ear camera that doubles as a Bluetooth headset. More recently, GoPro has launched a range of similar wearable cameras. Both companies’ devices come with companion mobile apps that can transfer recordings to Facebook, or broadcast what a user sees to his friends, live.

The earliest forms of digital eyewear, such as Google Glass and Recon’s ski goggles, represent a transitional species of device between simple digital cameras and true AR devices. They offer a heads-up display of information, but are not currently designed to truly augment our perception of the physical world by superimposing on our vision interactive digital images with the illusion of physicality. Photo and video capability are, however, an important part of their functionality, and they make it remarkably easy to record on the fly.

All wearable devices are designed to be comfortable, which can cause the wearer to forget they’re there. California Lieutenant Governor Gavin Newsom wore the Glass prototype during a television interview. Newsom later told Wired, “You can easily forget you have them on, and sense the capacity of use in the future,” adding the headset felt incredibly light, comfortable and inconspicuous on his head.44

Wearable devices are intended to let technology get out of your way so you can record life while still participating in it. This has fantastic upsides, and is something I have already enjoyed; I’ve made great, hands-free videos of my kids with my Looxcie and my Glass while continuing to play with them, rather than pull out my camera and separate myself from the experience. But there are also easily foreseeable downsides to forgetting you’re wearing a video camera on your head. I wore my Looxcie during a 2012 augmented reality conference, to underscore the talks I gave there about (among other things) this very subject. Even in that crowd - the movers and shakers in the industry that will produce these devices - I got a number of odd looks, turned heads, and derailed conversations.

And accidents do happen. While wearing my Looxcie, even I - someone who was keenly interested in the device’s impact on privacy - forgot I was wearing it at times, and I ended up accidentally recording (and later deleting) at least one conversation that was supposed to be private, along with a couple inherently private situations. What if I had forgotten I was wearing the camera when I walked into a public bathroom, and recorded myself or someone else in a compromising position? Or worn it (accidentally or intentionally) into any other setting in which people expected privacy, such as a family home, bedroom, or church confessional? Or read a confidential document or email? And worse, what if, instead of being set to merely record, my device was live-streaming to Facebook or some other audience?

At present, this is much more of a concern with a device like the Looxcie, which has a battery life of approximately five hours and is designed for continuous recording, than with the earliest digital eyewear. As of this writing, for example, Glass has a battery life of only 30 minutes when recording video,45 and it lights up conspicuously when running - not to mention that activating the recorder requires a hand gesture or voice command. In other words, it is not at all a device designed for surreptitious recording.

But these are the types of concerns we will encounter in droves once true AR eyewear goes mainstream. Most of the buzz surrounding these devices centers on the digital images that they overlay onto the user’s field of view. Less discussed so far, however, is the fact that, in order to truly augment the user’s vision, the eyeglasses need to also see (and recognize) what the user sees. Thus, every prototype of AR eyewear we have seen to date includes an integrated, forward-facing video camera. They have to. The earliest of these devices record only when necessary to run a particular app in order to conserve power. But as the augmented experience becomes more robust, these cameras will need to remain on constantly in order to make the discovery of digital content more organic, spontaneous, and useful.

There are also audio-only devices that pose similar concerns. In October 2013, a wristband-like audio recording device called Kapture accomplished its fundraising goal on Kickstarter. Here’s how the creators described its function:

Kapture functions as a 60-second buffered loop. The loop continuously overwrites itself until you tap the device to save a clip. The saved file is downloaded to your smartphone where the duration can be shortened and you can name, tag, filter, and even share it. Simple!46

Basically an audio-only version of Looxcie, Kapture’s founders foresee it being used to preserve unrepeatable moments with kids or friends, or to record an epiphany while the user is driving.
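For readers curious how such a 60-second buffered loop works in practice, the minimal sketch below keeps only the most recent minute of audio in memory and snapshots it when the user "taps." The sample rate and class names are assumptions of mine, not Kapture's actual firmware.

```python
from collections import deque

SAMPLE_RATE = 16_000   # samples per second (assumed for illustration)
BUFFER_SECONDS = 60    # Kapture-style 60-second loop

class BufferedLoopRecorder:
    """Keeps only the most recent minute of audio in memory."""

    def __init__(self):
        # A deque with maxlen silently discards the oldest samples,
        # which is what "continuously overwrites itself" amounts to.
        self._buffer = deque(maxlen=SAMPLE_RATE * BUFFER_SECONDS)

    def feed(self, samples):
        """Append newly captured samples; the oldest fall off the far end."""
        self._buffer.extend(samples)

    def save_clip(self):
        """Called when the user taps the device: snapshot the last minute."""
        return list(self._buffer)
```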

But once the devices are in consumers’ hands, there will be no way to limit the purposes for which they are used or the subject matter they are used to record. Even the users themselves are not likely to realize everything they’re recording, even when they’re subjectively aware that a recording is being made. The human ear has a marvelous ability to pick one voice out of a crowd and focus on it, ignoring all other conversations. Recording devices, on the other hand, pick up everything within earshot, even the confidential conversations that someone wearing the device may not even realize they’re hearing.

Sousveillance and invasion of privacy

Wearable sousveillance technologies will prove enormously useful in many circumstances. Their use is not inherently incompatible with personal privacy. Nevertheless, they will make possible eavesdropping and common-law invasions of privacy on an unprecedented scale, to the point where these technologies will eventually force a redefinition of what the common law recognizes as private.

From a privacy standpoint, the biggest concern will be the devices that are always on and always recording, such as the Looxcie and the Kapture. Because these are designed to keep recording even without conscious intervention by the user, it becomes virtually inevitable that the user will wear them into situations where he or she would not otherwise think to pull out a recording device, and where he or she would not record if they had been thinking about it. Here I am referring to private conversations and intimate surroundings. The fact that these devices record over their buffers every so often is irrelevant from a liability perspective; it is the act of recording that constitutes eavesdropping and/or intrusion into seclusion. Taking the next step and broadcasting that recording to third parties - which, again, at least some of these devices can be set to do with or without conscious intervention - risks additional liability for causes of action such as publication of private facts or, depending on the context, false light.

Although other mobile AR devices could be used to make surreptitious recordings, the prospect does not seem materially greater than with the smartphones and other mobile recording devices already on the market. As long as the onus is on the user to manually activate the recording feature, they are functionally equivalent to any other form of recording device. Indeed, head-worn recording devices actually have less capacity for surreptitious recording, since they require the user to constantly look at the subject of the video recording and to be within earshot to hear the audio being recorded.

Privacy concerns can also be at least partially mitigated to the extent that the device in question makes it reasonably clear to third parties that it is recording. Eavesdropping and privacy rules generally cover surreptitious recordings, not those made with the knowledge of the person being recorded. The Looxcie, for example, turns on a small red light when it is recording. It is unclear as of this writing whether the Kapture wristband or the various digital eyewear in production give any such warning. Of course, whether such warnings are sufficient to give fair warning of the recording, or whether users have made efforts to obscure them, will depend on the facts of each individual dispute, and may require litigation to sort out. The trouble is, going through all of the procedural steps necessary to sort out the facts of a case can be a long, complicated, and expensive process. I was once involved in an eavesdropping lawsuit that lasted for eight years, and one of the central questions throughout the case was whether the video cameras used to make the recording at issue, and the warning lights on them, were visible or not.

Over time, as wearable recording technology becomes more commonplace, the average person’s expectations - and, therefore, the law’s definition of a reasonable expectation - of privacy will change. Thirty years ago, shoppers in retail stores would not have expected to be filmed as they browsed the aisles. Now one cannot walk into the typical big-box store without being captured from every angle on hundreds of obscure security cameras. Twenty years ago, spies and oddballs were the only people we would expect to carry recording devices on their person, and to publish such footage in real time across the planet was unfathomable. Today, it’s odd to meet someone who doesn’t carry a device with all of those capabilities. We have accepted those developments, and our expectations of privacy have adjusted accordingly. Those expectations will continue to evolve along with our technology.

Surveilling the sousveillers

People in view of those wearing digital eyewear are not the only ones who can be recorded by the devices. Wearable devices are already being used to keep tabs on their users as well.

This potential will become especially apparent once eyewear becomes truly capable of augmenting our vision with data that overlays specific physical objects and places. To accomplish that feat, the devices will need to know not only where the object or place is, but also where the user’s eyes are pointed, in order to maintain the illusion that the digital data is in a fixed physical location. Eye-tracking data is already of great interest to retailers and advertisers, who crave to know what draws customers’ attention. If our digital devices can store and transmit that data, you can bet that advertisers will be clamoring to get their hands on it.

Similarly, employers will be keen to know how much attention their employees are paying to their assigned tasks at any given time. Being able to monitor employees’ eye movements would offer a tempting means of measuring productivity and efficiency. Still other examples of potential uses for this data abound, as do other means of gathering it. As facial recognition technology improves, for example, retail displays will know not only who we are, but also what we’re looking at. Thus will we fully enter into the commercial experience depicted by the groundbreaking futurist film Minority Report, in which augmented displays personalize shopping experiences based primarily on retinal data.

Following the movements of our eyes will not be the only way that a fully connected, internet of things economy will be able to track us, however.

PASSIVE DATA COLLECTION THROUGH THE INTERNET OF THINGS

The phrase “going off the grid” was coined to describe a lifestyle that intentionally avoids interacting with technology that leaves a trace of one’s activities. As depicted by characters in popular fiction, this has heretofore been accomplished mainly by paying for things with cash instead of credit, using a false name, and talking on pay-as-you-go mobile phones. But how can one stay off the grid when every single physical device in existence has the capacity to gather and transmit digital data?

The IOT’s sense of touch: beacons and taggants

As of this writing, Bluetooth Low Energy (BLE) technology is just starting to roll out to the public, most notably in the “iBeacon” feature of Apple’s iOS7. It has been seen as a rival to Near Field Communication (NFC) technology (which iOS8 also embraces), or as a convenient way to pipe coupons into your phone. But history will look back at BLE as a major step forward in manifesting the Internet of Things (IOT), and in eroding any remaining illusions of privacy we have in our physical whereabouts.

BLE is a means of transferring data. “Beacons” - devices that use BLE - are tiny, wireless sensors that transmit data within a 10-meter range. At present, they support only low data rates and can only send (and not receive) small data packets, but these are perfect for interacting with iPhones and wearable computing devices such as smart watches and fitness trackers.47 In light of the current proliferation in such devices, therefore, it’s safe to say that in the near future we may carry a half-dozen devices or more that are equipped with BLE or similar technology.

One of the most obvious applications of BLE is micro-location geofencing. GPS technology is great for determining your approximate location to within a few feet, but it relies on satellites that can’t see into buildings very well. A mobile device running BLE technology, however, can interact with nearby beacons to determine its precise location, even indoors.
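As a rough illustration of how such proximity estimates work, beacons advertise a calibrated signal strength measured at one meter, and a receiving device can plug the strength it actually observes into a standard path-loss formula. The constants below are illustrative defaults of mine, not values from Apple's iBeacon specification or any particular deployment.

```python
def estimate_distance_m(rssi_dbm, tx_power_dbm=-59.0, path_loss_exponent=2.0):
    """Rough distance estimate from a beacon's received signal strength.

    tx_power_dbm is the calibrated RSSI at 1 meter that a beacon advertises;
    path_loss_exponent is ~2 in free space and higher indoors. Both defaults
    here are illustrative, not constants from any real deployment.
    """
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

# Example: a reading of -75 dBm suggests the phone is roughly 6 m away.
print(round(estimate_distance_m(-75.0), 1))
```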

Set up around a store, beacons can detect shoppers entering and exiting, and send them coupons (customized to each shopper’s unique profile) or even in-store directions - Minority Report without the retinal scans. Soon you will even be able to pay for goods without ever pulling out your phone, just as the newest vehicles will open their doors while your key stays in your pocket. PayPal is already developing just such an app using BLE.

The real potential of BLE lies not in coupons, but in the IOT-the burgeoning trend towards making physical objects internet-connected and digitally interactive. Just like humans cannot meaningfully interact wi
