
How to Address Privacy Questions Raised by the Expansion of Augmented Reality in Public Spaces

Ellysse Dick
December 14, 2020

Introduction

Key Policy Questions About Privacy, Technology, and Public Space

Questions About Privacy and Public Space Raised by AR

Recommendations

Conclusion

Appendix: Framework for Examining New and Existing Policy Questions for Bystander Privacy Raised by AR

Endnotes

Introduction

When new technologies emerge, they often raise new privacy questions and alter privacy norms, especially when they change how personal information is collected, stored, and processed. These questions can be particularly acute when consumers, businesses, and governments bring these technologies into public spaces where competing priorities, including public safety, civil liberties, and personal privacy, may collide. Augmented reality (AR)—an immersive technology that overlays digitally rendered content on a user’s physical environment—is no exception, and the emergence of this technology has the potential to alter both social and legal understandings of privacy and public space.

AR adoption will likely grow substantially in the coming years.[1] While mobile AR (AR applications accessed through a mobile device, such as a smartphone or tablet) has driven the success of this technology to date, wearable AR technologies such as smart glasses are expected to enter the mainstream market in coming years, and other applications such as advanced heads-up displays in vehicles are also on the horizon.[2] As AR and other immersive technologies gain more widespread adoption, there will be two sets of important privacy questions: one set about the collection and use of data about AR users, and one set about the collection and use of data about AR bystanders. This report focuses on the latter set of questions.

Broadly defined, a public space is any area that is generally open to the public. Public spaces include parks, roads, sidewalks, restaurants, stores, beaches, and sports arenas. They are generally available to anyone, but there are rules and norms in place that govern them and individuals’ activities within them. For example, laws may allow people to join a protest in a public square, but not allow them to physically or verbally harass passersby. In contrast, private spaces, such as individual homes, personal vehicles, private offices, private clubs, and gated streets, are generally not open to the public.

The idea that norms and behaviors vary based on this distinction between public and private spaces underlies the concept of an “expectation of privacy” in the United States, which is tied to Fourth Amendment protections against unreasonable searches. Under this logic, an individual does not have a reasonable expectation of privacy in a public space, where their actions and activities can be observed by anyone. Gathering information in a public space, therefore, generally does not violate anyone’s privacy. Of course, in practice, there are some exceptions, such as state and federal laws preventing stalking and harassment, as well as video voyeurism laws that prevent secretly recording people in public bathrooms or store changing rooms. But this basic understanding of privacy and distinctions between public and private spaces informs not only law enforcement and public safety, but also social expectations of privacy. Accordingly, individuals behave and interact differently whenever they are in a space they perceive as public.

Importantly, the expectation of privacy test in law defines “reasonable” as having “a source outside of the Fourth Amendment, either by reference to concepts of real or personal property law or to understandings that are recognized and permitted by society.”[3] This test ties legal parameters of privacy in public space closely to social perceptions of the same, which shift and evolve to accommodate new technologies and changing social norms. In the last century, these parameters have shifted to accommodate widespread adoption of technologies, including handheld cameras, audio and video recordings, telephones, mobile devices, and Internet platforms. Capturing a photo of an individual in a public space has gone from a grave violation of privacy to an accepted practice with select limitations—and communications have evolved from person-to-person or in-person correspondence to enduring and widely accessible posts on social media.

Understanding how new technologies impact the parameters of privacy and public space can help distinguish harmful impacts from this natural evolution. As a technology becomes more ubiquitous in society, solutions to privacy concerns may emerge as a result of social norms and market responses. AR is positioned to follow this cycle. However, this will only be possible if rules and regulations allow for innovation toward these solutions.

Unlike less immersive technologies, AR combines virtual elements with physical space in real time, which effectively alters how users perceive—and interact with—their surroundings. Whether they experience it on a computer screen, mobile device, or wearable headset, AR transports users into a hybrid reality comprising both “real-world” physical elements and digitally rendered content. For example, a homeowner can view digital replicas of furniture in their living room, and a technician can follow visual instructions digitally overlaid on a machine. Simply by looking around, users can call up more information about the objects, places, and people near them. This hybrid reality is highly personalized, as the digital alterations and additions to physical space can be unique to each user and depend on the application. While in many cases this information may be completely innocuous, some uses could involve personal information.

Taken individually, the privacy concerns raised by AR are not necessarily new. However, AR amplifies some of the most pressing issues around digital privacy and combines them in new ways to present novel concerns for bystander privacy in public space. These include:

  • Continuous data collection: Some technologies, such as cell phones, smart watches, and fitness trackers, continuously collect data while in use. AR devices similarly have the potential to continuously record audio, video, and other data while in use.
  • Bystander data collection: Some technologies, such as security cameras, body cameras, and dashcams, collect data about those in the vicinity. Similarly, AR devices have the potential to collect data about users’ surroundings, including nearby bystanders.
  • Portable data collection: Some technologies, such as handheld cameras and audio recorders, are highly portable, which makes it difficult to notify people about the locations where a device might be recording. Some AR devices, such as wearables, are similarly portable.
  • Inconspicuous data collection: Some technologies, such as hidden cameras, can collect data surreptitiously without any indication that they are recording. AR devices, especially wearable devices, have the potential to collect data without alerting bystanders—and these devices may not be immediately recognizable as recording devices.
  • Rich data collection: Some technologies, such as smart watches and fitness trackers, collect a wide array of sensor data, such as information about an individual’s health or location. Similarly, some AR devices use a variety of sensors, such as LiDAR sensors to scan 3D objects, or GPS sensors to determine a device’s location.
  • Aggregate data collection: Some technologies, such as license plate readers or facial recognition cameras, collect data that may not present privacy concerns in isolation, but when combined with other data or collected on a very large scale, may reveal sensitive information. AR devices have the potential to collect streams of data that may include information about bystanders, which in the aggregate, may reveal sensitive information.
  • Public data exposure: Some technologies, such as specialized online databases and interactive maps, make it easier for users to find public information, such as property values, political contributions, sex offender registries, or gun permit records. AR devices have the potential to overlay various public datasets, including from social media, on bystanders.

Consider wearable AR devices such as glasses that provide the user with real-time information about their surroundings. In order to provide this information, the glasses have to continuously record and process data about the objects and individuals that their cameras, microphones, and other sensors capture, using a combination of on-device and remote processing.[4] For example, these devices may capture, process, amplify, and filter audio from nearby conversations to optimize what the user hears.[5] Widespread adoption of such devices may constrain social and legal parameters of “reasonable expectation of privacy.”[6]
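To make this capture-and-process loop concrete, the following minimal Python sketch illustrates the data flow; every function in it (capture_frame, detect_objects, render_overlay) is a hypothetical stand-in for illustration, not a real AR SDK call. The point is the distinction drawn throughout this report: raw sensor frames are reduced to derived overlay data and discarded, rather than recorded and retained.

```python
# Minimal, purely illustrative sketch of an AR capture-and-process loop.
# Every function here is a hypothetical stand-in, not a real AR API.
from dataclasses import dataclass

@dataclass
class Frame:
    pixels: bytes      # raw camera data, which may include bystanders
    timestamp: float

def capture_frame(t: float) -> Frame:
    """Stand-in for a camera driver; returns a synthetic frame."""
    return Frame(pixels=b"\x00" * 16, timestamp=t)

def detect_objects(frame: Frame) -> list:
    """Stand-in for on-device object recognition; returns labels only."""
    return ["table", "doorway"]

def render_overlay(labels: list) -> str:
    """Stand-in for the display pipeline: labels in, overlay text out."""
    return " | ".join(labels)

for t in range(3):                  # three simulated frames
    frame = capture_frame(float(t))
    labels = detect_objects(frame)  # derived data, not the raw frame
    print(render_overlay(labels))
    del frame                       # raw pixels are never written to disk
```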

This report explores how AR technologies and the hybrid realities they create challenge existing social norms and legal definitions of privacy and the parameters of public space. While AR is hardly the first technology to push these boundaries, this combination of physical and virtual space raises a number of important policy questions that its predecessors did not. By understanding how privacy concerns from AR align with, and diverge from, existing policy questions about technology, privacy, and public space, policymakers can begin to address these impacts before advanced AR technologies gain more widespread adoption by businesses, government, and consumers.

Key Policy Questions About Privacy, Technology, and Public Space

The rise of AR is not the first time a new technology has clashed with perceptions of reasonable expectations of privacy in public spaces. Debates about the impact of technology on privacy in public space rise and fall with the introduction and widespread adoption of many new technologies.[7] This is understandable: As new innovations change the way individuals interact with each other and the world around them, the parameters of privacy in public space will inevitably shift. Social norms and regulations must then adapt to reflect this new reality and address emerging privacy risks.

Debates around privacy and public space with new technologies tend to address five underlying policy questions:

  1. What is a public space, and what is a reasonable expectation of privacy in that space?
  2. What are reasonable standards for transparency and choice when using a technology in public spaces?
  3. What constitutes acceptable government use of a technology?
  4. How should products be developed for children in order to protect their safety and privacy?
  5. Can voluntary practices and codes of conduct address privacy concerns?

Redefining Public Space and Reasonable Expectations of Privacy

New technologies can change social as well as legal definitions of public space, and by extension, redefine what constitutes a “reasonable expectation” of privacy. This pattern predates today’s digital technologies by more than a century: The introduction of the Kodak camera in 1888 made it possible to capture virtually anything, and anyone, on film. Some people were aghast at the privacy implications. One contemporary article in The Hartford Daily Courant warned that “the sedate citizen can’t indulge in any hilariousness without incurring the risk of being caught in the act and having his photograph passed around among his Sunday school children.”[8] As the technology became cheaper and amateur photography gained popularity at the turn of the century, so did fears about privacy and consent—and with them, a new understanding of what constitutes the “public” and the “private.”[9] Within two years of its introduction, the handheld camera went from interesting new invention to perceived urgent threat to society—and a new legal framework accompanied this social transformation. Samuel Warren and Louis Brandeis’s 1890 Harvard Law Review article, simply titled “The Right to Privacy,” laid the foundation for legal approaches to privacy in an age of visual media.[10]

Despite public fears about a loss of privacy and calls for restrictions on camera use (and indeed some bans at beaches and parks), consumer adoption of this technology continued to increase.[11] By 1905, one-third of American households owned a camera.[12] Ultimately, the ubiquity of amateur photography outweighed the backlash, and within 20 years the handheld camera, and the possibility of appearing in others’ photos in public settings, became a rarely challenged reality of everyday life.

This debate resurfaced in the latter half of the 20th century as audio (and later video) recording technologies matured and miniaturized, gaining widespread adoption by amateurs, businesses, and governments.[13] Now it was possible to capture not only someone’s likeness, but also their movements and private conversations, sometimes without them even knowing. Once again, social norms and policy had to redefine the parameters of reasonable expectations of privacy in public. From recording conversations to catching viral videos on a smartphone to capturing sweeping images with camera-enabled drones or high-resolution satellite cameras, these devices made it possible to record, and retain, small moments that might otherwise have been lost or forgotten.[14]

The ability to collect massive amounts of data in public spaces, such as from car-mounted or drone-mounted cameras, has raised new questions about privacy. While these cameras capture images in public spaces, such as public streets or airspace where there is usually no reasonable expectation of privacy, they can potentially capture private activities, such as someone parking at or walking out of a doctor’s office. This has led to tensions between legal parameters of privacy, which typically allow for such image collection, and social perceptions, wherein some individuals object to this data collection. In these instances, technical solutions may bridge this disconnect. For example, after public concerns emerged about the images captured by Google’s Street View, the company introduced face and license plate blurring features, and later introduced a web page on which users could manually request to blur their home or other sensitive images.
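Blurring of this kind is straightforward to implement with off-the-shelf tools. The sketch below is not Google’s actual pipeline, merely one common approach: it uses OpenCV’s bundled Haar cascade face detector and a Gaussian blur, and the file names are hypothetical.

```python
# Illustrative face-blurring pass in the spirit of Street View's feature.
# This is NOT Google's pipeline; it is a common approach using OpenCV's
# bundled Haar cascade face detector and a Gaussian blur.
import cv2

def blur_faces(path_in: str, path_out: str) -> None:
    image = cv2.imread(path_in)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.1,
                                                  minNeighbors=5):
        roi = image[y:y + h, x:x + w]
        # Kernel size must be odd; larger kernels blur more aggressively.
        image[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
    cv2.imwrite(path_out, image)

blur_faces("street_scene.jpg", "street_scene_blurred.jpg")  # hypothetical files
```

Production systems use far more robust detectors, but the privacy mechanism, detect and then irreversibly blur before publication, is the same in spirit.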

Technologies that aggregate large datasets, such as satellite imagery in Google Earth or geolocation data from a personal device, have further complicated the question of a reasonable expectation of privacy in public. When firms aggregate massive amounts of data collected from public spaces, they can reveal new patterns or information that would otherwise be private. This blurs the line between public and private information by positioning data collected in public as a potential privacy threat. Similar concerns about collecting and aggregating information have arisen around transactional data and sensor data gathered in public spaces. For example, smart doorbell cameras can capture information about activities on public streets, and mobile network operators can gather data about users’ physical movements as they manage their network operations.
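A toy example illustrates why aggregation changes the privacy calculus: each sighting below is an individually unremarkable observation made in public, yet combining a handful of them points to a likely home address. The data and location names are fabricated.

```python
# Toy illustration of the aggregation concern: individually innocuous
# public sightings, when aggregated, reveal a likely home location.
from collections import Counter

# (hour_of_day, approximate location) pairs, e.g. from roadside cameras
sightings = [
    (8, "Main St & 3rd"), (12, "Market Sq"), (23, "Elm St block 400"),
    (2, "Elm St block 400"), (13, "Market Sq"), (0, "Elm St block 400"),
    (22, "Elm St block 400"), (9, "Main St & 3rd"),
]

# Where is the device most often seen overnight? Probably where its owner lives.
night = [loc for hour, loc in sightings if hour >= 22 or hour <= 5]
likely_home, count = Counter(night).most_common(1)[0]
print(f"Most frequent overnight location: {likely_home} ({count} sightings)")
```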

Online public forums such as social media sites have also changed perceptions of public and private interactions, as they facilitate digital interactions that mirror those of physical public space. Users share information in these spaces, whether through text, speech, video, or simply behaviors, which may be digitally or visually observed. For example, in addition to displaying posts for other users to view, social media platforms use both algorithms and human moderators to screen them for inappropriate or violent content. This raises new questions about the nature of public space, and whether virtual spaces from data flows to public online forums should be governed as public space.[15] These spaces are governed by individual privacy policies as well as the laws of the different countries in which they operate. In both private (e.g., privacy policies) and public (e.g., legal requests for information) governance, online public forums remain public spaces without a reasonable expectation of privacy for users.

Developing Practices for Transparency and Choice

Transparency and choice practices, whether informal norms or explicit legal requirements, have historically been introduced to address privacy concerns and build public trust in audio, video, and image-capturing (and more recently, data-collecting) technologies by establishing parameters for their use. These practices may include digital or physical signage, verbal notice, or other indicators; mandates for when explicit consent from subjects is necessary; and voluntary or mandatory privacy measures that can be implemented when explicit consent is not possible.

Once again, the foundations of these practices can be traced back to the beginnings of amateur photography. The ability to capture and distribute images of individuals without their permission led to a distinction between taking photographs in public spaces, which falls under First Amendment protections in the United States, and using those images for different purposes. Following public outcry about the use of an individual’s likeness for commercial purposes without their consent, the New York State Legislature introduced Section 50 and Section 51 of the Civil Rights Law in 1903 to require written consent to use a recognizable likeness of a person for commercial purposes.[16] Similar statutes or common law recognitions of this “right of publicity” have since emerged in a majority of U.S. states.

Digital photography on camera-enabled phones made photography, and fears about intrusive and voyeuristic use, even more ubiquitous. These devices also introduced new forms of notice, such as a digital flash or an artificial camera-shutter sound, to discourage surreptitious use of digital cameras. However, with no legal requirement for their use, these mechanisms are relatively easy to circumvent by simply changing user settings. Even if these features are not directly adjustable, third-party apps and other workarounds can effectively disable them, and studies have shown that these signals do not necessarily reduce rates of socially inappropriate use.[17] The proliferation of social media also adds new dimensions to transparency and choice requirements for distribution.

During the 1960s, audio surveillance “bugs” caused a public panic as people envisioned mass “snooping” on private conversations. As a result, the Wiretap Act established transparency requirements for recording audio conversations.[18] At the federal level in the United States, at least one party must agree to the recording (one-party consent), but some states require both parties to agree (two-party consent). These requirements prevent audio recording technology from fully eroding private spaces and individual conversations, but also acknowledge that the parameters of public and private space have shifted due to this technology.

Video recording and surveillance practices have adopted similar norms, although legally, video captured in public spaces without audio is treated similarly to still images. While there is no federal notice requirement for video surveillance, several states have introduced notice laws, such as requiring consent to install hidden cameras or allowing only cameras displayed in plain sight. Even when there is no explicit regulation, many businesses and individuals using video surveillance may choose to display the cameras or post a notice. This effectively establishes a norm for privacy in public and private spaces, even though it is not explicitly codified in federal law.

Online platforms and virtual spaces, from social media to listservs to multiplayer video games, are also spaces in which individuals interact with both each other and digital elements. In doing so, individuals reveal information to each other through direct or indirect communication, and to the services they are using through data analytics. Digital information capture is therefore subject to transparency requirements more suitable for digital space. Data privacy laws such as the California Consumer Privacy Act (CCPA) require websites to notify users both that they are collecting their data and of the purposes for which they use it. This adds a new layer to social perceptions of privacy that extends beyond what is immediately observable to include more complex information and data flows. As state and federal lawmakers debate privacy legislation, the question of how to translate existing transparency norms from image- or audio-based recordings to digital data flows is once again defining new parameters of the division between public and private.

Implementing Restrictions on Government Use and Access

Defining the parameters of public spaces for new technologies also requires new considerations of what government actors can and cannot do in these spaces. As consumer technologies have evolved—and with them, adjacent technologies such as video and audio surveillance, data collection, and geotracking that could be used by law enforcement—so have definitions of legally acceptable use by governments.

This question of reasonable expectation of privacy from government and law enforcement is consistently debated in relation to surveillance and investigations. As the technologies by which law enforcement can gather data have evolved, public perceptions of surveillance have also shifted. For example, wiretapping grew from primarily a tool of private or corporate espionage to a law enforcement and national security mechanism during the Prohibition era in the United States.[19] In 1928, the Supreme Court’s ruling in Olmstead v. United States established the constitutionality of wiretapping, and with it, the legal parameters of “reasonable expectation” of privacy within physical bounds.[20] But by the latter half of the century, public and legal understandings of acceptable government use had shifted: The Supreme Court’s 1967 ruling in Katz v. United States overturned Olmstead and extended Fourth Amendment protections to wiretapping, and the Watergate scandal further shook public trust in government surveillance.[21]

This evolution is now taking place in the realm of metadata. The Supreme Court’s Carpenter v. United States ruling in 2018 further expanded legal definitions of public and private space by applying Fourth Amendment protections to cell phone location data. Coming closely after other technology-enabled privacy rulings, including United States v. Jones (2012) and Riley v. California (2014), which placed GPS tracking and cell phone searches, respectively, under Fourth Amendment protections, the Carpenter decision effectively introduced new legal parameters for reasonable expectations of privacy in a digital world.[22]

While there is certainly legal precedent to define acceptable use by government actors, there are some use cases that are still being debated. For example, proposals to implement police body cameras as an accountability mechanism have gained notable public support and adoption.[23] But the privacy implications of this technology are not yet entirely clear, as discussions about when cameras should be turned on, who can access the footage, and whether they can be used to record First Amendment activities such as protests complicate the parameters of acceptable use.[24] While activities such as protests do occur in public space, the question of whether protesters should have a reasonable expectation of privacy from body-mounted cameras while exercising First Amendment rights muddles the distinction between public and private that has been used to determine acceptable use in other cases.

Addressing Child Safety Concerns

With many new technologies, child safety is a prominent concern—and often a driving force behind regulatory requirements. Unlike adults, children are unable to understand the complex risks associated with different technologies, much less give meaningful consent where it is required. This leaves child users particularly vulnerable, and their personal information particularly sensitive. Additional measures are often necessary to ensure children are protected from potential harms when using a technology, including risks of physical harm or harassment by other users.

When e-commerce and household Internet use grew exponentially in the late 1990s, Congress introduced the Children’s Online Privacy Protection Act of 1998 (COPPA) to regulate collection of personal information from children under 13. Although initially intended to mitigate potential harms by limiting the amount of data gathered about users known to be children, the law has since evolved to include photo, video, and audio files and more stringent requirements for collecting information from and tracking child users on the Internet.

Since its enactment, COPPA has been used as a benchmark for child privacy compliance by Internet technologies from websites to social media and entertainment platforms, and has fundamentally shaped the way new Internet-driven technology providers approach child safety and privacy concerns.[25] This has restricted innovation in child-focused digital services, and left uncertainties about what compliance looks like for new and emerging technologies as technological advancement continues to outpace regulatory reforms.

Adopting Voluntary Practices and Codes of Conduct

The private companies offering emerging technologies are often best positioned to understand and respond to the unique privacy risks of their products or services. From this recognition, voluntary practices or codes of conduct can emerge across the industry. These practices may be wholly implemented by industry leaders, or developed in consultation with other stakeholders in government and civil society. They allow commercial users and providers to adapt new technologies to align with social perceptions of public space rather than simply complying with existing legal parameters.

Voluntary practices often emerge among early adopters of new technologies that may present privacy risks. For example, the Digital Signage Federation recommended a set of industry-wide privacy standards in 2011 that established guidelines for privacy protections, consent, and transparency when using emerging technologies such as facial recognition.[26] These guidelines offer an important framework for navigating the privacy implications of new technologies in the absence of sufficient regulatory guidance. Government agencies have also implemented voluntary notice for facial recognition that establishes norms beyond legal requirements. When U.S. Customs and Border Protection (CBP) deployed its Biometric Entry-Exit Program, which uses facial recognition to confirm travelers’ identities at ports of entry, it included posted signs to notify travelers that this technology is in use. Building on the initial rollout of the program, a September 2020 Government Accountability Office (GAO) report offered additional recommendations to provide comprehensive notice in areas where facial recognition technology is active.[27]

Voluntary practices frequently accompany regulation of highly sensitive forms of information, such as that related to child safety. In addition to the parameters set by COPPA, voluntary codes of conduct have helped guide best practices to protect children’s safety in spaces where the distinction between public and private is blurred. Many websites’ child privacy protection measures, such as moderation practices, go well beyond the requirements of COPPA, often filling in the legal loopholes that could expose children to greater harm online. COPPA also includes a “safe harbor” provision, which certifies third-party self-regulatory guidelines as compliant under the law.[28]

Questions About Privacy and Public Space Raised by AR

Perceptions of privacy and public space have evolved over time from social acceptance of photography in public to the recognition of digital platforms as public spaces wherein certain information is no longer private. Now, this evolution is reaching a new phase: privacy in hybrid reality. Rather than operating within physical spaces or facilitating digital information flows, AR technologies alter users’ perceptions of their physical space by introducing virtual elements directly into their surroundings, while also collecting information about those environments.

The underlying policy questions that have shaped previous debates about technology and privacy in public space also inform understandings of how AR challenges existing parameters. Many of the questions other new technologies have raised also apply to AR. However, the capabilities and potential use cases of AR also introduce new, technology-specific questions to these debates. Because of this, AR innovation is likely to have a significant impact on both social and legal definitions of public space as hybrid realities become increasingly common in everyday life. By understanding where tensions between physical space and virtual elements appear, policymakers can clarify the legal parameters of public space, while product developers can adapt to shifting social perceptions and regulatory environments.

Parameters of Public Space in AR

AR technologies raise new questions about what constitutes a public space. As with previous iterations of this debate, the combination of AR technology and existing social and legal norms is far from seamless. The primary point of tension is in the collision of virtual and physical space, as questions arise about how AR will impact existing understandings of privacy and physical space. However, as AR technologies become more immersive and digital elements more interactive, the emergence of partially or fully virtual spaces also raises questions about what constitutes a public space when there are no physical parameters.

As AR technologies gain more widespread use across industries, the virtual side of AR could become a form of public space in and of itself. This transformation has already occurred in fully digital spaces on the Internet, where online platforms have become “hyperpublic” spaces in which interactions are not only widely observed, but also retained indefinitely.[29] Now, a more immersive virtual (public) space is emerging: one composed of the purely digital elements and interactions immersive technologies such as AR offer. Such a “virtual public space” requires a markedly different perception of public and private space.

Reasonable Expectations of Privacy in Augmented Reality

The fundamental purpose of AR is to alter a user’s perception of physical space in order to achieve certain objectives. This could include viewing a digital rendering of an object in a home, highlighting the best route on a road or walking path, or displaying labels and instructions on machinery. To accomplish this, AR applications and devices use sound-detection and object-recognition capabilities to read a user’s surroundings for audio or visual indicators. They then overlay digital elements onto the surrounding area. In doing so, AR devices display a different version of reality that is more information-rich than what users would perceive without AR.

In some cases, these alterations take place in physical spaces that would traditionally be considered public, such as sidewalks, streets, or parks. But in others, they can overlap with or fully enter private physical space, such as homes—or communal spaces where there is an expectation of privacy, such as public restrooms. AR not only blurs the line between public and private space, it also poses challenges to existing rules governing public spaces and reasonable expectations of privacy. While other recording technologies, from handheld cameras to video surveillance, passively capture the space around them, AR gathers and then actively processes information from the sounds and images that it captures. Rather than merely capturing and retaining a recording, AR uses this information to provide relevant outputs for a user. AR is also highly mobile and relatively inconspicuous, whether used through a wearable device or a mobile phone. These qualities make it more difficult to delineate where AR apps or devices do and do not gather data, or to easily recognize when they are in use. Combined with the data-processing capabilities of AR, this requires a new understanding of privacy and public space.

Just as some people have balked at the possibility of appearing in amateur photographs for over a century—or more recently have complained about people recording and posting information about their activities on social media—some bystanders today likely object to AR devices recording their interactions or activities, even when they occur in public spaces. Take the Google Glass rollout in 2013: Much like the Kodak “fiends” at the dawn of personal photography, “glassholes” were seen as a public nuisance with a disregard for others’ privacy. Not only was the design of Google Glass too conspicuous and the price tag too high to prompt widespread consumer adoption, but many also found the idea of always-on, Internet-connected wearable devices unsettling. Even with a relatively small number of devices in use, the backlash was swift: The Electronic Privacy Information Center argued that the devices invaded “privacy and anonymity [of people captured by the camera] without their consent.”[30] One critic even accused Glass wearers of “demanding social interaction on [their] wholly weird and unsettling terms.”[31] It would be reasonable to expect that advanced AR devices equipped with additional sensors would elicit similar critiques, and require a shift in social perceptions of acceptable behavior in public space.

Most virtual spaces are also mediated by a third party: the company that operates the online platform enabling them. Much like social media platforms, AR service providers often collect and store information about communications and other activity in these virtual spaces. For example, Niantic, the company behind the popular AR game Pokémon Go, collects visual data from players to build out virtual maps of public areas.[32] AR platforms may also choose to use recording and storage mechanisms for trust and safety management, similar to Facebook’s immersive social platform Horizon, in which the application maintains a recording of the most recent several minutes of activity, which content moderators or safety managers can review as part of an incident report.[33] As a result, these activities are not entirely private.
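A short sketch clarifies the rolling-buffer design such trust and safety mechanisms imply. Everything here (event format, five-minute window, class and method names) is an illustrative assumption, not Facebook’s implementation: old events continuously expire, and only the most recent window is available when an incident is reported.

```python
# Sketch of a rolling "last several minutes" activity buffer of the kind
# described above for trust and safety review. All details are assumptions.
import time
from collections import deque

class RollingActivityLog:
    def __init__(self, window_seconds=300.0):  # retain ~5 minutes
        self.window = window_seconds
        self.events = deque()                  # (timestamp, event) pairs

    def record(self, event, now=None):
        now = time.time() if now is None else now
        self.events.append((now, event))
        # Evict anything older than the retention window.
        while self.events and now - self.events[0][0] > self.window:
            self.events.popleft()

    def snapshot_for_report(self):
        """What a safety reviewer would see when an incident is filed."""
        return list(self.events)

log = RollingActivityLog()
log.record("user_a: hello", now=0.0)
log.record("user_b joined room", now=200.0)
log.record("user_b: message", now=400.0)  # the first event has now expired
print(log.snapshot_for_report())          # only events from the last 300s
```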

Because virtual spaces are navigated and maintained so differently from their physical counterparts, traditional policy approaches to public space privacy protections do not apply. However, they also do not require a dedicated subset of laws and regulations. Rather, other policy mechanisms that are better equipped to regulate virtual spaces must be used in their place. For example, digital privacy and data protection laws can dictate how technology providers can use data that users share in privately managed virtual spaces. Similarly, child protection laws can guard against misuse of children’s personal data or personally identifiable features that may be shared in these spaces, such as their voices.

Transparency and Choice Requirements for Hybrid Reality

The hybrid nature of AR introduces unique challenges to the concerns other technologies have raised. Perhaps the most evident is in transparency and choice practices. Unlike technologies tied to a specific location, such as security cameras, most AR devices are mobile, which makes it more difficult to notify bystanders that the technology is in use. Transparency and choice also require a level of public understanding of the technology. Particularly in the early stages of AR adoption, many individuals likely do not understand what it means to be recorded by an AR device—confusion that could hold AR innovation back.

The question of how to provide notice that an AR device is in use also depends on the purpose of the technology. For example, Google Glass included audio and video recording functions. When the device was used in public spaces, for example by journalists, wearers could provide notice verbally or through other indications that the device was recording. While Google offered these features, ultimately the wearer was responsible for complying with state and federal recording laws, namely one- or two-party consent rules.[34] But more immersive AR devices capture audiovisual information for the purpose of processing and delivering virtual outputs back to the wearer. While they may capture images or audio, they can also process those inputs to provide the user with additional information about individuals and objects around them. These considerations may require different parameters of transparency and choice than those used for more traditional audio and video recordings. First, functionality and consumer preference may discourage overly intrusive notifiers such as lights and sounds. A large flashing light on top of a pair of glasses hardly aligns with the sleek appearance and smooth functionality consumers look for in a product. Second, with the data-processing and recording capabilities of advanced AR devices, subjects may not be aware of, or consent to, these additional capabilities.

There is also the question of who is liable for providing adequate notice. With traditional recording technologies, this is typically the user: the journalist informing a subject they are being recorded, or a supermarket posting signs that video surveillance is in use. In contrast, mobile or wearable AR devices transmit information to the user as well as third-party service providers. The user can inform those around them that they are using an AR device through indicators such as flashing lights, or be held responsible for turning the device off in sensitive locations. But it would be unreasonable to expect them to notify bystanders of the data that is being collected and processed about their surroundings, and how it will be used. While a user may understand the fundamental capabilities of a device, it is extremely unlikely they will have complete knowledge about how it works, and such in-depth transparency requirements could deter consumers from using AR technologies at all. Whether and how service providers and data processors should also provide some form of notice is likely to be a point of concern as AR use becomes more widespread.

Restrictions on Government Use of AR Technology

AR technologies offer many potential use cases for government, from streamlining workforce development to offering more accessible and efficient services.[35] However, AR also presents questions about restrictions on government and law enforcement use, including how to protect sensitive information and what limits to place on law enforcement’s use of real-time aggregate information or metadata from AR devices.

AR Use in Government Services and Operations

When considering limitations on use, it is important to consider all potential applications of AR in government. Unlike other recording technologies that may primarily be used for law enforcement, AR has potential uses across government, from workforce development to public services. For example, AR can guide hands-on training with visual instructions and indicators rather than lengthy manuals. It also offers “see what you see” capabilities that allow instructors, support technicians, or other providers to assess situations and communicate with on-the-ground staff or even members of the public more effectively, without requiring a physical presence. In order to safely and effectively use AR in this way, government agencies will have to consider the privacy implications of AR technologies and their obligations under existing privacy rules—and determine whether additional privacy frameworks are necessary for AR-based government services.

AR Use in Law Enforcement

The highly mobile, largely inconspicuous, and data-intensive nature of mobile and wearable AR that complicates transparency and choice practices also complicates its use by law enforcement. Potential law enforcement uses range from mapping crime scenes to adding real-time information to surveillance footage.[36] This kind of activity requires different limitations than other government uses. With the ability to process information and identify patterns about surroundings in real time, AR may lead to new legal challenges and calls to delineate which information is publicly available or necessary for law enforcement, and where individuals retain reasonable expectations of privacy from surveillance.

In addition to direct use as an investigative tool, law enforcement could use data from individuals’ AR devices in criminal investigations, raising concerns similar to those around GPS data or cell phone records. As mobile and wearable AR gain more widespread use, we may see more legal challenges regarding, for example, government access to metadata logs from an individual’s AR device or details about what virtual information a user added to or extracted from their physical environment.

Child Protection in AR

It is important to protect children from harms caused by or experienced through AR. Children may not be able to distinguish the virtual from the physical in hybrid reality as adults do, leaving them particularly vulnerable to harmful content, bullying, and harassment; they may also expose their own sensitive or personal information to others. While mechanisms are in place to protect children in physical space and on websites and mobile apps, there has been less work on how to protect children in AR.

Fully digital products can offer child-friendly services that do not collect or transmit data from children without parental consent, but AR technologies developed for child-specific uses may require some of this information, such as location, voice recordings, or demographic information, in order to function properly.[37] There are many potential use cases that engage children with AR technology, particularly in education and child development. For example, recent child-focused innovations have shown that AR can improve attention management for children with autism and enrich early childhood education.[38] The privacy concerns raised by AR devices’ always-on recording and processing are also multiplied when children are involved. Parents may object to strangers’ AR devices recording, collecting, and processing information about their children, even if the children are merely bystanders and not direct targets.

These concerns about data in AR are similar to those raised by other digital technologies that process information about children. As the primary legal benchmark for children’s privacy online, COPPA’s requirements regarding geolocation data, audio, video, and images currently dictate the parameters of what is considered public and private on the Internet in relation to children.[39] Efforts to include other data such as biometric information in compliance requirements further restrict what is allowed in this child-focused hybrid space. This may discourage the development of child-focused AR technologies, just as previous implementations have restricted growth of child-oriented websites and applications.[40]

Opportunities for Voluntary Practices and Codes of Conduct in AR Development

As a new and relatively understudied technology, AR offers significant opportunities for industry actors to contribute solutions to the new privacy risks their products create. Concerns about AR’s potential to diminish individual privacy are already an important part of companies’ decision-making and product-development processes. Efforts to develop AR-specific frameworks and best practices are already underway, such as the XR Safety Initiative’s Privacy Framework and individual efforts by companies building AR.[41] As AR devices and use cases evolve, so do the technical capabilities and practical approaches to mitigate privacy risks. In addition to practices such as transparency reporting, industry actors may also be best positioned to develop technical mechanisms to maintain privacy in spaces where their products are in use. Measures such as blurring faces or sensitive information are already in place elsewhere, including in widespread services such as Google Street View.[42]

Voluntary standards alone are far from a perfect solution. While there is certainly an incentive for companies to preempt negative impacts, industry alone cannot tackle the complex social, legal, and policy implications of widespread AR use. To do this, companies building AR have to engage directly with policymakers, civil society actors, and other stakeholders to develop these voluntary practices and identify areas where voluntary measures alone are not enough.


