Scope AR to Support Apple’s ARKit to More Easily Bring Real-Time Assistance to Enterprise

With ARKit, the augmented reality-based remote assistance application Remote AR enables any enterprise to implement the most advanced AR functionality within its workforce today

SAN FRANCISCO, Sept. 8, 2017 /PRNewswire/ — Today Scope AR, the creator of augmented reality (AR) smart instructions and live support video calling solutions, announced support for ARKit, Apple’s much-anticipated AR development platform. The company’s live support video calling application, Remote AR, will support ARKit as soon as iOS 11 launches, delivering the most advanced AR functionality yet to devices without AR-specific hardware.

“This is a game changer for any enterprise looking to implement the latest advancements in AR now,” said Scott Montgomerie, CEO of Scope AR. “With our technology, any company can use an existing iPhone or iPad to implement AR within their workforces today, allowing workers to complete tasks faster and more accurately, while also producing significant cost and time-savings. While there are many apps coming to ARKit that will inevitably bring AR to the masses, we’re the first solution leveraging ARKit that is truly impacting the bottom line for enterprise.”

With support for ARKit, Remote AR users can now take advantage of the platform’s sophisticated real-world mapping to collaboratively add annotations and 3D content to a much larger area than has previously been possible on standard devices. This results in a simple, seamless work session for both sides of the call. The capability is available on newer iOS devices, including the iPhone 6s and above, as well as iPads equipped with A9 or A10 processors. You can see a video with more details on how Remote AR can be leveraged with Apple ARKit here.

Remote AR delivers the ability to save time and money, as well as improve knowledge transfer and retention by combining AR with live video streaming, voice, 3D animation, screen sharing, whiteboarding and world-locked annotations. Doing so simulates the effectiveness of having an expert on-site guiding a worker step by step on what to do. Whether a technician needs live support for troubleshooting a problem or conducting maintenance or assembly procedures, Remote AR empowers them to get the knowledge they need, when they need it.

Remote AR is fully platform agnostic for use on Android, iOS, Windows and Tango devices simultaneously, as well as select smartglasses, allowing organizations to easily experience the benefits of AR by using their device of choice. All current Remote AR users will have access to the new ARKit support once iOS 11 is available.

About Scope AR

Augmented reality (AR) company Scope AR is the creator of the first-ever true AR smart instructions and live support video calling solutions – WorkLink and Remote AR – as well as the new heavy-industry-focused CAT® LIVESHARE. The company provides the industry’s most comprehensive set of tools to allow users to access or become their own experts, learning to assemble, repair or troubleshoot problems wherever they are. Whether training, performing complex fieldwork or remote tasks, or any number of assisted activities across vertical industries including industrial equipment manufacturing, aerospace, construction, utilities, oil and gas, automotive, consumer applications and more, Scope AR provides robust solutions for in-field support and performance tracking. The company’s partners and users include Caterpillar, AstraZeneca, Lockheed Martin and ClickSoftware, among others. The company was founded in 2011 and is based in San Francisco with offices in Edmonton, Canada.


Apple’s ARKit could be huge for virtual car showrooms and online sales

Apple’s iOS 11 includes ARKit, developer tools that make it surprisingly easy to build high-quality augmented reality experiences. This could have an impact in the automotive world, since it makes it a lot easier to bring virtual showroom experiences to potential car buyers and automakers via ordinary consumer devices.

Here’s an example of what that could look like: Developer Augmently created this AR experience using the developer preview ARKit tools in iOS 11. It features a detailed, rendered 3D model of a Mercedes sedan, complete with an interactive interior and the ability to swap out different paint color options. The model stays firmly rooted in place, allowing a user to virtually walk around it and check out all angles.

Virtual showrooms and AR demonstrations are now standard fare for automakers at big auto shows, and even in dealership settings, sometimes with specialized hardware. But this use of ARKit brings it down to Earth – it’s instantly available to anyone with an iPhone.


Car sales models are already being shaken up by startups – Tesla sells its vehicles exclusively online, using showrooms only for in-person looks at its cars. Online car-buying startups are increasingly attempting to complement and supplant the existing brick-and-mortar sales model, too, and even Amazon has experimented with online sales, particularly in Europe.

ARKit replaces a lot of kludgy, hard-to-use and less-than-perfect solutions for doing similar things, but it’s the system-level tools and quality of experience that could help it up the game for automobile sales, demonstrations and showrooming. AR isn’t the only thing ARKit could make mainstream – online vehicle sales could benefit big time, too.



A preview of the first wave of AR apps coming to iPhones

Over the past few weeks I’ve been steeping myself in the developer and investor community that is quickly sprouting up around ARKit.

There are a variety of reasons that people are excited about the possibilities but it’s safe to say that the number one positive that’s shared by everyone is the sheer scale of possible customers that will be able to experience augmented reality on day one of iOS 11. Hundreds of millions of potential users before the year is out is a potent pitch.

I’ve seen some very cool things from one- and two-person teams, and I’ve seen big corporate developers flex their muscle and get pumped about how capable AR can be.

At a round robin demo event yesterday with a bunch of developers of AR apps and features, I got a nice cross-section of looks at possible AR applications. Though all of them were essentially consumer focused, there was an encouraging breadth to their approaches and some interesting overall learnings that will be useful for developers and entrepreneurs looking to leverage ARKit on iOS.

Let me blast through some impressions first and then I’ll note a few things.


IKEA Place

What it does: Allows you to place actual-size replicas of IKEA sofas and armchairs into your house. 2,000 items will be available at launch.

How it works: You tap on a catalog that lets you search and select items. You tap once to have it hover over your floor, rotate with a finger and tap again to place. The colors and textures are accurately represented and these are fully re-worked 3D models from IKEA’s 3D scans used for its catalogs. It looks and works great, just as you’d expect. IKEA Leader of Digital Transformation Michael Valdsgaard says that it took them about seven weeks, beginning slightly before Apple’s announcement of ARKit, to implement the mode. It will be exclusive to iOS for now because it’s the largest single target of AR capable devices. I asked Valdsgaard how long it took to get a first version up and running and he said just a couple of weeks. This has been a holy grail for furniture and home goods manufacturers and sales apps for what seems like forever, and it’s here.
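As a rough illustration, the tap-to-place flow described above boils down to a small state machine: one tap previews the item hovering over the detected floor, a drag rotates it, and a second tap anchors it. Here is a hypothetical sketch in Swift; the type names and the item name are invented for illustration, not taken from IKEA's actual app.

```swift
/// Illustrative states for a tap-to-place AR catalog item.
enum PlacementState: Equatable {
    case browsing                                   // picking from the catalog
    case hovering(item: String, rotation: Double)   // previewing over the floor
    case placed(item: String, rotation: Double)     // anchored in the scene
}

struct PlacementFlow {
    private(set) var state: PlacementState = .browsing

    /// First tap (with an item selected) starts the hover preview;
    /// a second tap anchors the hovering item in place.
    mutating func tap(selecting item: String? = nil) {
        switch state {
        case .browsing:
            if let item = item { state = .hovering(item: item, rotation: 0) }
        case .hovering(let item, let rotation):
            state = .placed(item: item, rotation: rotation)
        case .placed:
            break   // placed items stay put until removed
        }
    }

    /// One-finger drag rotates the item, but only while previewing.
    mutating func rotate(by degrees: Double) {
        if case .hovering(let item, let rotation) = state {
            state = .hovering(item: item, rotation: rotation + degrees)
        }
    }
}
```

A real app would attach the placed item to an ARKit plane anchor at this point; the sketch only captures the interaction logic.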

Food Network In The Kitchen

What it does: Lets you place and decorate virtual desserts like cupcakes. Allows you to access the recipe for the base dessert.

How it works: You drop a dessert onto a surface and are provided with a bunch of options that let you decorate a cupcake. A couple of things about this demo: First, it worked just fine and was very cute. A little animated whale and some googly eyes topping a cupcake, which you can then share, is fine. However, it also demonstrates how some apps will be treating AR as a “fun extra” (the button is literally labeled “Fun”), rather than integral to the experience. This is to be expected in any first wave of a new technology, but examples out there like KabaQ show that there are other opportunities in food.


GIPHY World

What it does: Allows you to place gifs in 3D space, share videos of them or even share the whole 3D scene in AR with friends who have the app. They can then add, remix and re-share new instances of the scene. As many people as you want can collaborate on the space.

How it works: You drop gifs into the world in the exact position you want them. A curated and trending mix of gifs that have transparency built into them is the default, but you can also flip it over to place any old gif on the platform. Every scene gets a unique URL that can be remixed and added to by people that you share it with, effectively creating a shared gif space that can be ping-ponged around. The placement of gifs felt very logical and straightforward, but the ability to “paint” with the gifs and then share the scenes whole in a collaborative fashion was a pleasant surprise. One example that was impressive was leaving a pathway to a “message” that a friend could follow when you shared the scene to them. Ralph Bishop, GIPHY’s head of design, says that the app will be free like their other apps are but will have branded partners providing some content. GIPHY has something interesting going on here with a social AR experience. It’s early days but this seems promising.


Arise

What it does: It’s a game from Climax Studios that places a (scalable) 3D world full of crumbling ruins onto your tabletop that you help your character navigate through without any traditional controls.

How it works: You look through your device like a viewport and align the perspective of the various pathways to allow your character to progress. There are no on-screen controls at all, which is a very interesting trend. Climax CEO Simon Gardner says that what made translating the game into AR attractive to the studio (which has been around for 30 years) was the potentially huge install base of ARKit. They’re able to target hundreds of millions of potential customers by implementing a new technology, rather than the typical new-technology scenario where you start at effectively zero. The experience was also highly mobile, requiring that you move around the scenes to complete them. Some AR experiences may very well be limited in their use or adoption because many people use phones in places where they are required to be stationary.

The Very Hungry Caterpillar AR

What it does: Translates the incredibly popular children’s book into AR.

How it works: The story unfolds by launching the app and simply pointing at objects in the scene. We saw just a small portion of the app that had apples being coaxed from a tree and the caterpillar scooching its way through them to grow larger. This was my favorite demo of the day, largely because it was cute, clever and just interactive enough for the age level it is targeting. It’s also another ‘zero controls’ example, which is wonderful for small children. Touch Press CEO Barry O’Neill says that they’ve seen some very interesting behavior from kids using the app including getting right down at eye level with the tiny caterpillar — which meant that they really had to up-res the textures and models to keep them looking great. Now that ARKit enables capturing any plane and remembering where objects are (even if you move 30-50 feet away and come back), storytelling in AR is finally moving beyond marker-based book enhancements. Any surface is a book and can tell a story.

The Walking Dead: Our World

What it does: It’s a location-aware shooter that has you turning in place to mow down zombies with various weaponry.

How it works: The scene I saw looked pretty solid, with high resolution zombies coming at you from all angles, forcing you to move and rotate to dodge and fend them off. You progress by “rescuing” survivors from the show, who provide you with unique additional capabilities. Environmental enhancements like virtual “sewers” that walkers can crawl up out of give each scene a unique feel. It looked fast and smooth on a demo iPad. AMC and Next Games collaborated on this title. There were some additional fun features like the ability to call up various poses on a survivor like Michonne and stand next to them to take a selfie — which felt super cool. The best kinds of IP-based games and apps will focus on unlocking these kinds of “bring that world into your world” experiences rather than cookie cutter gameplay.

Some interesting notes:


Every app had its own unique take on the ‘scanning’ process that allows ARKit to perform plane detection before it can begin placing objects into the world. Basically, it’s a few seconds where you’re encouraged to move the phone around a bit to find flat surfaces and record enough points of data to place and track objects. It’s not onerous and never took more than a few seconds, but it is something that users will have to be educated on. IKEA’s conversational interface prompted people to “scan” the room; The Walking Dead suggested that you “search for a survivor” and Food Network’s app went with a “move around!” badge. Everyone will have to think about how to prompt and encourage this behavior in a user to make ARKit work properly.
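For developers curious what sits behind that scanning step, here is a minimal sketch of the standard iOS 11 ARKit setup: enable horizontal plane detection and listen for plane anchors, dismissing the "move your phone around" prompt once one arrives. The class name and comments are illustrative, SceneKit rendering is assumed, and this fragment only runs on an ARKit-capable iOS device.

```swift
import UIKit
import ARKit

// Illustrative view controller: run world tracking with plane
// detection, then react when ARKit finds a flat surface.
class ScanViewController: UIViewController, ARSCNViewDelegate {
    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        sceneView.delegate = self
        view.addSubview(sceneView)

        let config = ARWorldTrackingConfiguration()
        config.planeDetection = .horizontal   // iOS 11 detects horizontal planes
        sceneView.session.run(config)
        // Show the "move your phone around" prompt here.
    }

    // Called each time ARKit adds an anchor; a plane anchor means
    // scanning has succeeded and object placement can begin.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode,
                  for anchor: ARAnchor) {
        guard anchor is ARPlaneAnchor else { return }
        // Hide the scanning prompt and enable placement.
    }
}
```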


Aside from the apps that are about placing objects directly into the scene, there is a focus on little-to-no on-screen controls. For Arise, your perspective is the control, allowing you to get an alignment that worked to progress the character. There are no buttons or dials on the screen by design.

The Very Hungry Caterpillar’s control methodology was based on focus. The act of pointing at an object and leaving your gaze on it caused the story to progress and actions to be taken (knocking fruit out of a tree for the caterpillar to munch on or encouraging it to go to sleep on a stump). Most of the other apps relied on something as simple as a single tap for most actions. I think this control-free or control-light paradigm will be widespread. It will require some rethinking for many apps being translated.
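A focus-driven control like the Caterpillar's boils down to a dwell timer: an action fires only after the gaze has rested on the same object for long enough. A minimal sketch of that idea, assuming a per-frame update with whatever object sits under the screen center; all names are illustrative, not from the actual app.

```swift
import Foundation

/// Hypothetical gaze-dwell trigger: keep the camera's focus on an
/// object for `dwellTime` seconds and the action fires once.
final class DwellTrigger {
    private let dwellTime: TimeInterval
    private var focusedObject: String?
    private var focusStart: TimeInterval?

    init(dwellTime: TimeInterval = 1.5) {
        self.dwellTime = dwellTime
    }

    /// Call once per frame with the object currently under the
    /// screen-center crosshair (nil if none) and the frame time.
    /// Returns the object's name the moment the dwell completes.
    func update(focusedOn object: String?, at time: TimeInterval) -> String? {
        guard let object = object else {
            focusedObject = nil     // gaze left all objects; reset
            focusStart = nil
            return nil
        }
        if object != focusedObject {
            focusedObject = object  // gaze moved to a new object
            focusStart = time
            return nil
        }
        if let start = focusStart, time - start >= dwellTime {
            focusedObject = nil     // fire once, then require re-focus
            focusStart = nil
            return object
        }
        return nil
    }
}
```

In an ARKit app, the per-frame object lookup would typically be a hit test from the center of the screen into the scene.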

Development time

Incredibly short, all things considered. Some of the apps I saw were created or translated into ARKit nearly wholesale within 7-10 weeks. For asset-heavy apps like games this will obviously be a tougher ramp, but not if you already have the assets. GIPHY World, for instance, places a bunch of curated gifs that look great floating in the world at your fingertips, but you can easily drop regular gifs in there from their millions of options.

Models that Touch Press used for its previous Caterpillar app had to be upscaled in terms of complexity and detail quite a bit because they fully expect children to experience them at distances as close as a few inches. IKEA also had to update its models and textures. But given that the onramp is measured in days or weeks instead of months, I’d expect to see a large number of apps supporting ARKit experiences at the launch of iOS 11 in September. And for a bunch to follow quickly.


Google’s Clay Bavor: Our Goal ‘Is To Make AR Mainstream’

Today, Google is announcing ARCore, a software-based solution for making more Android devices AR-capable without the need for depth sensors and extra cameras. It will even work on the Google Pixel, Galaxy S8, and several other devices very soon and supports Java, Unity, and Unreal from day one. In short, it’s kind of like Google’s answer to Apple’s ARKit.

But that isn’t how Clay Bavor, VP of Augmented and Virtual Reality at Google, would describe it. Instead, when the topic came up, he reminded me that Google Tango had its first development kit all the way back in 2014 and that they’ve slowly been building towards this vision of a future where AR is democratized and available to millions around the world. Specifically, Google wants 100 million AR-capable Android phones within just the next few months.

“I like to call it immersive computing to sidestep some of the jargon and acronym debate — VR, AR, MR — just everything. Integrating computer-generated imagery seamlessly into experiences is what it’s all about,” Bavor explains at the beginning of our interview at Google’s San Francisco office last week. “Our goal here is to make AR mainstream on Android for developers and for consumers…We thought mobile smartphone-based AR was going to be a thing that was important years ago. The first Tango development kit was 2014 and by relaxing the constraints on the hardware, getting rid of the need for a depth sensor or additional tracking cameras we’ve honed in on our aim to prove out the technology and show the world that on consumer-grade sensors you can do really powerful AR experiences.”

From the demos I got the chance to try he’s not exaggerating. On standard Google Pixel and Samsung Galaxy S8 phones I watched robots walk across a tabletop and wave at me, trees shrink and grow from a few inches to several feet, and even a giant lion flex his muscles and look down at me as if I was really there. Similar to the first time someone tries VR, a powerful AR experience can feel like magic.

“There’s a lot of things that need to happen to make it successful though,” Bavor admits. “We’ve always known that it’s got to work at scale, so we’ve been investing in software-only solutions like ARCore, building on all of the Tango technology, just without the additional sensors. We feel that the technology is ready and we’ve figured out some of the core use cases for AR that we’re really excited about, so that’s why we’re so excited to get ARCore out there and start lighting up across the ecosystem.”

One great use case that I got to see first-hand is the boost good AR can bring to shopping. Using a plugin on the Wayfair website, I watched as a room was measured in real time and a chair was placed from the website into the physical space. Imagine applying this same concept to other types of shopping and interior design as well.

Another future example Bavor gave was through the use of VPS (Visual Positioning Service). “We’ve been investing in a constellation of tools and services and applications around it to make it even more powerful for developers,” Bavor says. “One example is VPS. You’re gonna want, as a developer, to extend beyond just the tabletop or room to something that’s world-scale, or to anchor things in the world that persist so you can go back to them. ARCore and VPS we see as very natural partners and in fact we were building VPS in anticipation of scaling AR on Android with ARCore.”

Imagine being able to return to a specific building and see a sign in AR that has aged and rusted with years passing by. Or know where your friends recommend eating downtown just by looking around — Google Lens could be a big part of that too. It’s the type of stuff that’s been promised and imagined for a while, but we’re getting closer. We’re not there yet, but it’s an attainable goal in our lifetime.

“Another example, which is especially relevant for developers that build traditional smartphone apps in Java, is that we want to make it easier than ever for people to get into 3D modeling that haven’t done it before,” Bavor says. “We know there are a lot of people that want to get into 3D development and AR but aren’t experts in Maya, or Unity, or anything. So Blocks is an app we built with the intention of enabling people that have never done a 3D model in their life to feel comfortable building 3D assets. We even made it easy to export right from Blocks and pull into ARCore apps you’re developing.”

One of the demos I tried, the same one with little robots and trees on top of a table, had all of its assets created directly inside of Blocks and exported to ARCore.

“We’re also working on experimental browsers that combine all of the ARCore functionality into a web browser,” Bavor explains. “With just a little Java, some HTML, and a few assets you can create an AR experience. ARCore embeds parts of itself into the experimental browsers. Google was born in the web and we love the web and we want to enable more devs to build for AR. And notably the experimental browser will have a version for ARCore that uses Android and a version on iOS that uses ARKit. A developer can build one webpage with one Javascript library and have a cross-platform AR experience.”

More details will likely emerge about ARCore in the coming months and we can’t wait to see what intrepid AR developers are able to cook up with this new suite of accessible tools. Let us know what you think of Google’s ARCore, ARKit, WebAR functionality, and everything else in the world of AR down in the comments below!


You can finally stuff your head into a Windows VR headset

After a few months of waiting, you can snap up a Windows Mixed Reality headset for yourself… if you meet the right conditions, that is. Microsoft is now selling both the Acer and HP Developer Edition headsets at respective prices of $299 and $329, but only to developers — you can’t pick one up just because you think an Oculus Rift is too expensive, unfortunately. The HP model is also out of stock as of this writing, so you can’t be too picky.

Thankfully, you’re largely getting the same experience. Both wearables include a pair of 1,440 x 1,440 displays, a 95-degree field of view, support for 90Hz refresh rates (the usual target for VR) and a single cable that carries both HDMI video and USB data. Those aren’t mind-blowing figures, but that’s not the point. This is more about fostering VR and AR apps to make sure there are plenty of them when Windows Mixed Reality hardware is available to the general public. If all goes well, Microsoft will have laid the groundwork for taking VR and related technologies into the mainstream.


Nokia And Xiaomi Team Up To Explore VR And AR Patents

Mobile is the path of least resistance when it comes to introducing the masses to virtual and augmented reality, and many eyes are on this particular branch of technology as breakthroughs are made in pursuit of delivering a powerful experience.

It pays to have some big names innovating with VR/AR and two huge tech entities are aiming to do just that with network specialist Nokia partnering with smartphone manufacturer Xiaomi for a cross-licensing patent deal.

Under the new deal, reported by CNBC, the two companies will be seeking out mutually beneficial innovations specifically focused on VR, AR, artificial intelligence, and more. The patents will be of the “standard essential” classification, covering the technology required to bring their products to industry standards.

Xiaomi alone has applied for over 16,000 patents in seven years, with 4,000 of them granted. Nokia has established itself by setting many current mobile standards, and its position in the industry could help to expand Xiaomi’s business, which is something CNBC discussed with a Xiaomi rep:

A spokesperson for Xiaomi told CNBC on Wednesday that the Nokia deal will help with global expansion, but it doesn’t mean the company is focusing on one particular market.

“Our collaboration with Nokia will enable us to tap on its leadership in building large, high-performance networks and formidable strength in software and services, as we seek to create even more remarkable products and services that deliver the best user experience to our (product brand) Mi fans worldwide,” says Lei Jun, chairman and CEO of Xiaomi, in the press release for the new announcement.

It may be quite a while before this partnership bears significant fruit, but it is an example of major companies embracing the long-term potential of VR and AR among their various businesses.


Report: Apple Acquires VR & AR Eye-tracking Company SMI

A report from MacRumors provides compelling evidence that Apple has quietly acquired SMI, also known as SensoMotoric Instruments, a company specializing in eye-tracking technology.

German-based SMI has specialized in eye-tracking since its founding in 1991. The company has in recent years turned its expertise toward AR and VR, where eye-tracking data can be used for a wide range of useful things, from foveated rendering to avatar eye-mapping. The company has demonstrated its eye-tracking solution in the HTC Vive, Oculus Rift, Gear VR, and other head-mounted displays.

SMI’s eye-tracking tech built into a Rift DK2 | Photo by Road to VR

MacRumors reports that the company was quietly acquired by Apple sometime between May 2nd and July 26th, 2017. The publication uncovered documents signed by Gene Levoff, Apple’s vice president of corporate law—representing a company called Vineyard Capital Corporation—which signed over power of attorney to German law firm Heuking Kühn Lüer Wojtek to handle the acquisition of SMI by Vineyard Capital Corporation.

Based on Levoff’s signature, the documents being notarized in Cupertino (the location of Apple’s HQ), and corroboration from an anonymous Apple employee, MacRumors concludes that Vineyard Capital Corporation is a shell company that Apple used to hide the acquisition (not an uncommon business tactic).

Not so coincidentally it seems, SMI’s website was recently gutted, now offering no contact information for the company, a complete removal of all information pertaining to product offerings, and a scrubbing of the few pages that remain, including the removal of information about SMI’s management team.

The company’s eye-tracking products and research are not limited only to VR and AR. SMI has also offered eye-trackers for desktop applications as well as measurement-only head-worn devices for data collection and analysis. Unless Apple plans to integrate eye-tracking technology into its computers or smartphones—an eye-tracking use-case which has seen little to no consumer adoption—the operative use of the company’s tech seems to be aimed at AR or VR.

Earlier this month, Apple for the first time showed a major embrace of VR, revealing that the company had worked with Valve for nearly a year to bring SteamVR to macOS, and that a number of forthcoming computers would be the first from Apple to be ‘VR Ready’. Those announcements come alongside longstanding rumors that Apple is developing its own AR headset, an area which the company recently delved deeper into with the reveal of ‘ARKit’, a new augmented reality toolset for building AR apps on iOS devices.


Augmented Reality: It’s Fun And Games Until You Make Your Neighborhood Mad

Do you really need a permit to place virtual objects in a real park?  This and other odd dilemmas are emerging with augmented reality (AR), where virtual worlds are colliding with our actual world.

Earlier this year, an augmented-reality appmaker, Candy Lab, sued the County of Milwaukee, Wisconsin, over its permit requirement for AR apps that cause people to converge on its parks.  This new law was a reaction to the Pokémon Go craze last summer, when thousands of players swarmed the county’s public parks to “catch” Pokémon that were placed there by the game developer.


While it’s great to get people walking out and about—visiting parks and other sites they might otherwise have not visited—it’s a problem if the sheer number of visitors suddenly overwhelms the capacity of those sites.  This is something like a distributed denial of service (DDoS) cyberattack but in real life.  The county had complained about “daily traffic congestion, parking issues, littering, compacted and damaged turf, risks to sensitive flora and fauna habitats, and noncompliance with park system operation hours.”

When a political protest or music festival is organized at a public park, it’s not unreasonable to require organizers to obtain a special event permit to ensure there’s enough security personnel, traffic control, toilets on site, and so on.  First Amendment freedoms of speech and assembly may be reasonably balanced by public safety and other urgent considerations.

Is placing virtual objects in a real park anything like organizing a protest or festival?  Candy Lab alleges that the county violated its First Amendment rights in requiring a permit before publishing its AR software, but the complaint doesn’t (yet) take up this relevant question, which we can do here.

A key disanalogy immediately comes to mind that might support Candy Lab’s position: Unlike an organized protest, neither Pokémon Go nor Candy Lab’s poker app directs people to a specific location at a specific date and time, and this makes predicting crowd-management needs very difficult and possibly overkill if the game might be a dud.

In Candy Lab’s app, virtual poker cards are distributed across many physical locations; and, if they like, players can pick up those cards for their poker hand by visiting those locations.  As with Pokémon Go, players don’t need to converge on any particular place or at any particular time.  In contrast, an organized event with a particular location and time can more easily predict turnout and related effects, making a permit requirement more reasonable.

A better analogy may be that location-based AR games (so far) are more like a publisher that merely lists a park and other notable spots in a guidebook.  There’s no coordinated effort to get people to visit specific places at specific times and, anyway, we’re already free to visit them when we want.

It would seem silly to require guidebooks and listicles to get an event permit or help pay to accommodate extra people, just because that publicity could lead to more visitors to the spots they pick out.  Even if the publisher created a game around their tourist suggestions—“visit them all and win stuff!”—a permit requirement still appears inappropriate.

If park permits were required for a guidebook publisher, where would it end?  Yelp, for instance, may be responsible for more traffic and crowds around certain businesses than Pokémon Go has created; should Yelp be required to have a permit every time it lists something that could possibly affect traffic and dining patterns?  Probably not; that would be unduly burdensome for businesses and hinder innovation.

Other companies and services could be affected, too.  By simply pinpointing a park on its map, Google Maps also could be responsible for increased traffic to the park.  If we follow the broad logic of “if you’re responsible for increased traffic to x, then you must seek a permit or otherwise contribute to managing crowds at x”, then Google Maps probably wouldn’t survive, nor would paper-map companies, such as Road Atlas or Thomas Guide.

And it doesn’t end with businesses.  If I post on Facebook that a park that I am visiting is great, and that you all should go visit this park or whatever park is close to you, then it’d seem that I would need a permit, too.  The special event permit rule doesn’t single out commercial ventures; it applies equally to noncommercial gatherings, from political protests to large family cookouts.

If talking up a park or town (or just mentioning it in this article) causes more people to move to the area and drives up local real-estate prices—as it has for my town of San Luis Obispo, when it was named the “happiest town in America”—well, that’s just the price and meaning of liberty.  Individual actions can have large-scale effects, and that’s ok.

A delicate balance between liberty and public interest

Now, this isn’t to say that all AR apps should be free to do as they like, just as other businesses and private citizens have moral and legal responsibilities to uphold, too.  Some speech and messaging could be unethical as well as illegal, such as encouraging an emotionally vulnerable person to commit suicide.

Likewise, there may be odd cases where a publisher or AR developer needs to pay special attention to risks and take extra precautions that it ordinarily might not.  For instance, if an AR game directed you to a janky rope-bridge or to an ecologically fragile area, we don’t need to see a crowd appear at a particular time to worry about public safety and environmental harm.  Any significant increase in overall visitors could create special risks.

Most public parks don’t fit in these categories, but some might.  Not just parks and towns, but even entire nations could be legitimately concerned about an overload of visitors, such as Iceland.  So, the above discussion is only a general case for exempting current AR games from obtaining special event permits.  The details matter, and a different balance might need to be struck between civil liberties and public interests in different cases.

Beyond this issue about public spaces, there’s growing awareness that virtual and augmented reality could upend many other areas of law and ethics, some of that caused by Pokémon Go specifically.  And since most VR/AR apps today are games, they can benefit from many ongoing conversations on the ethics of video gaming as well as cyberspace more broadly.  After the aborted FBI attempt to make Apple create a backdoor into encrypted user communications, Candy Lab’s lawsuit is picking up the unfinished legal debate of whether and when software code can be protected as speech.

In the next article in this two-part series, I’ll look at similar conflicts between liberty and public interest that are arising in robotics.  Augmented reality is just one of many disruptive technologies to come.  The objects it creates might only be virtual, but the challenges it raises are all too real.


Acknowledgements: This work is supported by the US National Science Foundation, Stanford CARS, and California Polytechnic State University, San Luis Obispo.  Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the aforementioned organizations.
