Augmented Reality

by Kevin Ponto on April 12, 2011

A survey of augmented reality
RT Azuma

Recent advances in augmented reality
R Azuma, Y Baillot, R Behringer


Which of the applications of AR listed in the articles do you find most interesting/promising and why?

Of the errors listed for AR, which do you think would be the most problematic and why?

What do you believe the biggest advance between the first and second article is? (discuss why)

As usual, select a topic that you find interesting, dubious, confusing or curious and explain why.  Write at least a paragraph of explanation and add citations if warranted.


Nathalie April 22, 2011 at 3:12 pm

I really found the medical applications to be quite fascinating, in which augmented reality is used as a “visualization and training aid for surgery” (3). With this, medical students, surgeons in training, and even senior surgeons could significantly reduce errors during an operation by being able to “look before they leap.”

I think the most problematic AR application would be the attempts to provide visualization for 3D objects like pipes or interior design. Sure, the prototype images look good, but those are in 2D. How good are we at making 3D holograms? Would it look decent in all perspectives?

I believe the biggest advance between the first and second article is that imagination and possibility made the AR application ideas in the first article seem very compelling and useful, while actually implementing those ideas in the second article forced people to consider the user interface and interaction of these tools. It is often the case that only after an application has been built do users and developers realize how UI/UX design was neglected in the development process, when it actually needs to happen before any development occurs.

What I find interesting is the AR in sports broadcasting on page 9. In the past, I’ve seen little annotations on the screen and wondered how they were made. Tournament or Olympic swimmers’ lanes would have each competitor’s name and associated national flag appear and disappear in the lane to help viewers differentiate between all the swimmers, which I thought was really cool.

These articles reminded me of a video I saw a while back where someone 3D video-mapped onto a white living room space to produce a variety of furniture, wallpaper, and carpet options for the room overall:

Alex April 24, 2011 at 6:09 pm

I find the most interesting aspect of AR is using it as a tool to add extra visual information in engineering applications. The photo that got me excited was the picture of the colored pipes and the 2D blueprints on the floor. Coming from an engineering background, I can appreciate the advantage of having this additional information communicated to you through the altered environment. I can also appreciate how this translates into medical applications, but that is less interesting to me just because of my background.

I think the biggest problem for AR is latency. Since AR is attached to your face, it is easy to generate a lot of input change at a high rate. Many of the other problems are tied to particular applications of AR, whereas latency and stream-sync issues are much more general. The utility of AR is easily offset when latency issues are present, which can cause motion sickness or just general annoyance.

I believe the biggest advance between the two papers is in the level of detail of the 3D graphics demonstrated in AR. More advanced lighting makes a big difference in how out of place something looks. Having virtual 3D objects that are illuminated properly will make AR objects far more believable in the real environment.

One thing that I changed my mind about over the course of the readings was the idea of using fiducials. At first I thought that having to use these was a major problem, as it seemed unnecessarily obtrusive. Over time, I realized that most interesting applications would not be hindered at all by having to place them, and that the added accuracy is more than worth the additional setup time.

Leo April 24, 2011 at 10:19 pm

I think the most promising application is in the medical field. I feel like the HMD would be able to provide doctors with a lot of key information and point out subtle details. Also, in critical moments it may provide them with key information to aid their decision making. But I feel like the most important contribution is being able to “virtually” see inside a body and know what is going on there, instead of having to divert your attention to a monitor. The virtual fetus and tumor biopsy examples in the article illustrate that point very well.

I feel like the most problematic error is tracking. Without accurate tracking, the whole concept can become annoying, since in most cases the error is easy to spot. However, when it’s not easy to spot, it becomes less of an annoyance and more of a danger, since in medicine, for example, it can cost a life. The whole point of AR is to see things that will aid whatever you’re doing, but when you’re fed incorrect or unreliable information, it seems to ruin the whole purpose of the application. That’s why I think it’s a bigger problem than delay: delay is pretty obvious, but you can still be given correct information and, in most scenarios, use the technology successfully.

I think the biggest advance is in tracking. In the second article, mobile AR systems, which require much more complex forms of tracking than indoor systems, have just become feasible. I think this alone shows how much further tracking has come in comparison to other issues such as latency.

I think the topic that interests me most is the application of this technology. It could be applied to many things if it becomes successful, such as driving. Imagine having a HUD on your windshield that labels the houses you pass with street addresses and gives you information about the buildings you’re passing. There could also be a GPS built in. I could see many other applications like this, such as in sports (putting this technology into helmets).

kusko April 24, 2011 at 10:30 pm

Both articles were very interesting, and the second article provides a good view of the advancements augmented reality has made. Smaller HMDs and the use of mobile devices look very promising as displays become smaller and more practical. Augmented reality apps are available for many smartphones now, and they will only improve as the cameras and processing power of the phones increase.

The offset between the user’s eyes and the camera or scanners seems to pose some difficulty. See-through displays, which do not have the offset problem, still have problems of their own. The use of lasers to draw directly on the user’s eye sounds promising for solving the problems of occlusion and brightness, and these systems seem to be becoming very compact.

The improvement of head-mounted displays seemed to be the largest advancement. The prism-based laser sunglasses were much more reasonable than some of the other displays shown in the first article, whose large size severely limits their application. Augmented reality on mobile devices has already been integrated into social networking, directions, and guidebooks.

I found the occlusion ability very interesting. Essentially providing capabilities for invisibility or X-ray vision on objects could be very useful; the medical, architectural, and repair fields could all be improved by advances in these properties. As a safer alternative to many dangerous procedures, this would allow someone to practice or test an action before executing it.

Nick April 24, 2011 at 10:47 pm

Most of these applications seemed really cool and useful, but the most promising one seemed to be using AR for medical purposes. Allowing the doctor to see inside a person without cutting them open seems extremely useful. Surgery would not have to be as difficult or dangerous, and it would also be easier for the doctor to accomplish what he needs to.

It seemed like the biggest error for AR is clearly seeing both the virtual and the real at the same time. Something such as the see-through HMD seems like it is not quite good enough to accomplish anything worthwhile. By letting only some of the real world through and making the lens only partially reflective, the user gets two not-so-great pictures put together instead of one nice picture.

Even though visualization seems to be a big issue, the second article made it seem as if it were not as big as the first one did, and that many things had been improved or at least thought about more. On pages 35-36, they talk about some of the visualization problems, but also advances on them, such as having the see-through HMD block the real world only at specific pixels. They also talk about fixing the parallax error and the fixed-eye-display problem. It seemed to me that a lot of the visualizations they discuss were better than in the first paper, but this could just be because their language seemed more positive.

Another application for AR I found interesting was having virtual instructions appear in the real world, as they show in the first article with the printer maintenance application. I’m waiting for the day when a little program comes with everything a consumer buys that needs instructions that you can put into your home HMD or whatever and it will help you put something together or fix it. This may not be advantageous to the makers, but would really help me out. Something like this could be really helpful for any business with machines that need fixing or putting together. The maintenance workers could detect and fix problems a lot easier and quicker with something like this.

Russel April 24, 2011 at 11:18 pm

I think the medical implications of augmented reality are the most promising. The fact that they were able to construct a 3D representation of a fetus in the womb is truly amazing. We humans aren’t the most reliable or precise creatures. Being able to see a line on a patient showing where to cut, or having enough additional information that the surgeon doesn’t even need to cut, could be incredibly helpful. If the surgeon has enough information to avoid operating at all, preventing something like exploratory surgery, the chance for error can be completely removed.

I think the latency issue for AR would really be the worst. If we’re trying to display complex information and analyze the world as a user is moving around, the system needs to respond incredibly quickly. If the idea is to use this for something like surgery, as I mentioned earlier, there is really no room for error. If a surgeon is cutting, looks away and then back, but the system lags and the surgeon cuts in the wrong place, then we would’ve been better off not using the technology at all. That seems like an important point with AR, however: if the technology is not good enough, you could be a lot better off just not using it at all.

I think the best advance from the first to the second article is the use of registration to provide real-time additional data to users. They talk about this in the outdoor section. The reason I think this is the best advancement is that it’s something that has made its way down to a lot of consumer-grade technology. Smartphones can now be pointed at a street and give you all sorts of information, like ratings or menus for a restaurant. This maybe isn’t the most technologically impressive advancement, but it’s cool because it’s something people can actually use.

Something I also thought was interesting in the second article was the bit about AR as a use for advertising. Personally, I really would not want my AR device to add advertisements all over my life. Sure, commercially it’s great but I think it could be a total pain to have to listen to why AT&T’s network is the best when I’m trying to look at some sights while hiking or whatever. On the other hand, we could perhaps use AR to remove ads in real life, think AdBlock but for all the billboards you don’t want to look at. Could be awesome.

Reid April 24, 2011 at 11:19 pm

Since I’m a fan of video games, I think the entertainment applications of AR could be both interesting and beneficial. One of the chief complaints concerning computer gaming is that the user is sedentary. This has been receiving much attention in recent commercial systems such as the Wii or Xbox Kinect; AR could be the next step by removing the stationary box-plus-display setup. By dynamically integrating a game with the user’s environment, AR could encourage physical activity while providing entertainment to a wide range of audiences.
I think dynamic errors are the most likely to plague future AR applications. Static errors generally all have some way they can be mitigated, through clever device design or software correction. Dynamic errors, on the other hand, rely on the system being ridiculously fast. While these errors aren’t too problematic in static, prepared scenes, if AR is ever to venture into a live environment, dynamic errors will need to be addressed: how often do you stare at the same point without moving around or turning your head?
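The scale of this problem is easy to make concrete with a back-of-envelope calculation: during one end-to-end latency interval, the head keeps rotating, so the virtual overlay lands in the wrong place by the angle swept in that interval. A minimal sketch (the head rate, latency, and display resolution below are illustrative assumptions, not figures from the articles):

```python
def misregistration_px(head_rate_deg_s, latency_ms, px_per_deg):
    """Angular registration error accumulated over one latency interval,
    converted to screen pixels (illustrative back-of-envelope model)."""
    error_deg = head_rate_deg_s * (latency_ms / 1000.0)
    return error_deg * px_per_deg

# A moderate head turn of 50 deg/s with 100 ms of end-to-end latency, on a
# display resolving roughly 20 pixels per degree:
err = misregistration_px(head_rate_deg_s=50, latency_ms=100, px_per_deg=20)
# 5 degrees of angular error, i.e. 100 pixels of "swim"
```

Even modest latency produces errors of whole degrees during ordinary head motion, which is why dynamic error is so much harder to hide than static error.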
I think the autocalibration techniques now in development may be a crucial step in bringing AR to the commercial market. Precisely calibrated equipment is often expensive and prone to degradation over time, especially when subject to physical abuse, as you can be sure any commercial product would be. Autocalibration removes the need for super-precise construction, hopefully offsetting the cost of production. It also allows the device to react dynamically to changes in system properties, such as those a careless user might produce by smacking the device into a wall or floor.
If AR is to be used as a commercial device, it is worth considering what safety hazards it may pose. Aside from the privacy issues mentioned in the articles, AR could pose physical health risks by obscuring real threats. Imagine the outcome of trying to cross a busy street with an AR application that is attempting to remove cars from view to present an unobstructed view of the building on the other side. Less fatal but still painful might be bumping into a pipe in an industrial environment because a cutaway view doesn’t show it. Added virtual objects could also result in erratic and risky user behavior. A child playing an AR game might be inclined to enter a restricted area to pursue a game goal, or to pay more attention to the AR opponent chasing them than to the river they are about to run into.

Joe Kohlmann April 25, 2011 at 1:13 am

Which of the applications of AR listed in the articles do you find most interesting/promising and why?
The medical and schematic-related applications of augmented reality appeal to me most. It’s very easy to see into a future where a doctor’s eyewear (and/or gloves) provides all the information he or she needs to locate incision points, monitor patient condition, and manipulate other equipment. Other applications, such as real-time X-Ray imaging, have already been depicted within a science fiction environment, so I imagine that only practical limitations are holding such technologies back. The visual capture phenomenon makes these possibilities not only attractive, but extremely useful, since they remove a required level of integration between visual stimuli and additional data—the two are now literally intermingled. Just as a touch interface literally feels like a more intuitive human input system, augmented reality and visual capture promise a more intuitive human output or display system.
Of the errors listed for AR, which do you think would be the most problematic and why?
Delay and registration are surely the most difficult problems facing augmented reality. We already know that jarring, delayed movement can cause motion sickness in viewers. Latency and associated visual artifacts quickly reduce any practical benefit such a system might have to its user. Likewise, if the virtual environment cannot stay in sync with the real environment, the system loses its appeal and its utility…one would hope that an AR system would offer a *faster* approach to a problem, not a slower one. These challenges are simply magnified in comparison to pure virtual reality research, as the article notes, but I concur that the field’s high interest in real-time performance makes these problems more pressing and more difficult to solve.
What do you believe the biggest advance between the first and second article is? (discuss why)
The second article discussed mobile, collaborative, and commercial applications. These natural evolutions from the stationary, individual, experimental systems discussed in the first survey show clear routes forward for augmented reality technology. At this stage, any sports fan can attest to the enhancements that AR overlays add to sports telecasts—these visualizations are helpful, well-designed, and just plain cool. This offers the field as a whole one very large foot in the door of public awareness. The “mobile” systems depicted in the second article seem laughable by today’s standards, but from 2011 we can see through to the other side—sensor-laden smartphones have all but solved the mobility and processing power shortcomings of a backpack filled with burdensome equipment.
If I had to pick one, I’d say the collaborative systems were the biggest breakthrough between the two articles. Most of the augmented reality systems I think of, especially with smartphones in the mix, are still individual experiences. Collaborative whiteboards and interactive projections are only the start to what researchers could create with a clear focus on real-time multi-user visualizations. This concept seems most significant and least utilized among the systems discussed in the second article.
As usual, select a topic that you find interesting, dubious, confusing or curious and explain why. Write at least a paragraph of explanation and add citations if warranted.
Augmented reality researchers could learn much from visual effects designers about how the “rule of cool” can enhance the visualizations they develop. I’ll always remember the first-person visor display from Metroid Prime as my favorite example of this—the game included a holographic information frame with a map, energy, and other status graphics as part of Samus’s suit visor. The frame would lag slightly behind the user’s movement, but its lag made it all the more stylish. It animated its movements fluidly when panning the viewport, it shook frantically when under attack or damage, and faded smoothly if an interface element obscured the targeting reticle. By some standards, this isn’t a full augmented reality system, but it is an exemplary visualization that integrates lag as a stylistic and emotive element in the whole. Clever attention to detail like this can make all the difference in user acceptance, and even though no lag might be the end goal (as it turns out, you could actually turn off HUD Lag if you wanted to), style and emotion offer shortcuts and enhancements.

Rachina Ahuja April 25, 2011 at 2:16 am

An application of AR that I found interesting was the mobile application (such as TransVision) for collaborative designing. I think to have maybe annotations or design changes visible to everyone who is part of the process in this way would be really useful and would also make the visualization much easier. The portability doesn’t hurt either.

Registration problems like depth perception seem to me to be the most significant. It seems like it would be hard to make virtual objects appear at the correct depth, and focusing on both real and virtual objects would be difficult, which would ruin the whole effect. I think this might be a bit annoying in an AR display.

I think between the first and the second article the focus has shifted a bit, from head-mounted augmented reality systems and see-through displays vs. video composition to a wider array of applications and interfaces. To quote: “Different devices best suit different interaction techniques, so using more than one device lets an appropriate device be used for each interaction task. For example, a handheld tablet interacts well with a text…” Naturally, the technological advances are apparent, but many experiments have also been performed to address visualization issues: “Researchers are beginning to address fundamental problems of displaying information in AR displays.” This was an issue raised in the first article (and one of the major problems with AR) which I think has been looked into since then, and we now know more about it through these experiments.

I found the issue of photorealistic rendering of VR objects most interesting because, apart from the depth perception issues, I feel like this would be the first thing to make things look ‘unreal’ (like figure 15 in the second article, which shows the 2D shop floor plans with the 3D pipe superimposed on them). I think it would be important to get this right, especially in an application where VR objects and real objects need to blend, because otherwise the VR objects look cartoonish and it just looks weird (especially if it’s really close to real-looking but something is not quite right).

Andrego Halim April 25, 2011 at 8:23 am

From the way the author described these applications, I really find the medical application to be the most promising. It seems to have a lot of prospects for aiding difficult surgery, both in detecting and recreating the virtual internal organs of the patient being operated on, or even something as simple as providing a manual or tutorial as a reference during the surgery itself.

I’d say that all kinds of accuracy errors are problematic, depending on the case. For example, in response to what I answered above, registration error would be the most problematic when you need millimeter accuracy for a highly detailed surgery. If the doctor misses by even a few millimeters when operating on the nervous system, the person could be permanently disabled in the worst case.
On the other hand, for entertainment, registration error wouldn’t be as much of a problem. For example, in class I don’t think people really care if the gun is mistakenly rendered a few millimeters off from the tracking system. Instead, dynamic errors would be really disturbing: the user would be pulled out of the feeling of immersion if the performance is extremely laggy due to the complexity of the rendered graphics.

I really like the use of a compass tracker in sensing the outdoor environment. The first article doesn’t seem to discuss it at all, so I assume it’s an advancement from the first to the second article. I believe it is a very useful feature for a GPS used to explore natural environments.

For the interesting topic, I want to discuss the use of photorealistic rendering in creating illusion. Initially, as I read the first paper, it mentioned that the ultimate goal of AR is to create an environment that is virtually indistinguishable from the real one. This sounded really hard and unachievable to me, considering the performance and rendering issues. But seeing the examples of photorealistic rendering in the second article (i.e., ellipsoidal models to estimate illumination parameters, photometric image-based rendering, and high-dynamic-range illumination capturing), this seems feasible. I’m still not confident about the real-time performance, though.

Liana Zorn April 25, 2011 at 2:16 pm

I think the most promising application of AR is annotation on monitor displays because we already see that developing quickly in smart phones. I was amazed when I heard they were developing an app that lets you take pictures of plants and it will identify them for you. This kind of technology gives smart phone users a heightened sense of their surroundings and it’s just plain awesome. The medical uses come in a close second for me, but I don’t think they’re quite there yet as far as being truly useful.
The most problematic aspect of AR seems to be the UI. Until real hologram technology comes around, there isn’t really a perfect way to integrate real and virtual worlds. Wearing any type of HMD – optical OR video – seems to hinder perception; it makes the environment seem less tangible. Monitor displays (smartphones) are nice, but they are completely separate from the world.
I think the biggest advance between the two articles is in location tracking – I’ve been mentioning smart phones this whole time, and it’s incredible to have consumer products that accomplish what they do. The entire idea behind AR would be much more difficult to accomplish without a lot of setup if the product didn’t already know where you were.
I find the idea of visualization in AR interesting – it is the hardest aspect to create a UI for. This comes back to the hologram problem – WE NEED HOLOGRAMS! But anyhow, I am torn between thinking that optical displays are ridiculous and obnoxious and thinking that they are the best possible way to visualize anything in AR. Aside from the eye offset, a transparent visualizer comes the closest to replicating what an actual hologram would do.

Nathan Mitchell April 25, 2011 at 2:45 pm

Personally, I believe that AR has two main purposes when it comes to
applications. First, AR can be informative, in the sense that it can
provide us information about our environment that is hidden from our
regular senses. An example of this would be the ideas in the first
paper about using AR to do architectural work and looking through
walls. But AR can also be generative. Different from just revealing
hidden information, AR has the possibility to collect and synthesize
new information for the viewer. I believe that these applications are
the most interesting. These types of applications would be the terrain
navigation, where the AR system finds the safest or easiest
path. Alternatively, AR could help navigate the social networks we
find ourselves in. The textual display over a person, as shown in the
first article could be extended to access our modern social networking
sites. Imagine if people were highlighted in a colored aura based on
their personal status messages indicating mood, or correlating their
location with hobbies/schedule to indicate how much they may be
enjoying what they are doing. This is information that could be mined
from a large database and compiled into an easy form for a user. This
may be especially useful to those who cannot keep track of everyone
they know. Just looking at a person may someday reveal who they are,
their relationship to you, who your mutual acquaintances are and so forth.

I think that display technology is currently the biggest problem with
AR. Unlike VR, where the user, to some degree, expects a computer
image, an AR user expects a real image with some computer
enhancements. Not only that, but in real time and with no visible
artifacts – like reduced resolution. While sensors like the Kinect can
now provide us with reasonably accurate depth detection cheaply, which
allows correct object placement, drawing objects is a different
matter. Ghosting is still a big issue for opaque objects. I suspect
that even the LCD approach discussed in the second paper has issues
with resolution. Primarily, I foresee aliasing to be a big issue. In
normal computer graphics aliasing can be mitigated by filtering and
blurring sharp edges. However, I doubt the LCD pixels can be turned
‘half on – half off’. For such blurring to function, software would
need to blend what should fill that pixel, if clear, with the image
being generated. This, of course, makes it a video-based AR
solution instead of an optical one.
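In a video-based system, the blending described above is just ordinary alpha compositing over the captured camera frame; a minimal sketch of the per-pixel operation (the function and values are hypothetical, not from either paper):

```python
def composite(real_rgb, virtual_rgb, coverage):
    """Blend a virtual fragment over the captured camera pixel by its
    fractional edge coverage (0.0 = fully real, 1.0 = fully virtual)."""
    return tuple(coverage * v + (1.0 - coverage) * r
                 for r, v in zip(real_rgb, virtual_rgb))

# A half-covered silhouette pixel averages the two colors, softening the edge:
composite((200, 200, 200), (0, 0, 0), 0.5)   # -> (100.0, 100.0, 100.0)
```

An optical see-through display with on/off shutter pixels has no access to the real pixel’s value, so it cannot perform this blend, which is exactly why the antialiasing problem pushes toward video-based solutions.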

In terms of sheer advancement, I think realistic rendering would be in
first place. However, in terms of advancement specific to AR, I would have to
say the hardware. Our displays have shrunk and our general processing
power per square inch has dramatically increased over the decade. All
of this lends itself to lighter, more user friendly AR rigs. Whether
or not anything else functions well, I don’t believe the average
person will use AR technology if they must carry 50lb of gear just to
walk outside with it. Although the articles don’t mention it as a
requirement, I believe that being able to forget about the hardware is
a key component in AR. It is very similar to VR work where you should
be able to forget about the CAVE or other device simulating the world.

One of the biggest problems I find with AR is related to registration,
as defined in the original article: “The objects in the real and
virtual worlds must be properly aligned with respect to each other, or
the illusion that the two worlds coexist will be compromised.” I think
when many people think of AR, they imagine a system that would adapt
to wherever they were. But most, if not all, of the systems discussed
in the paper were very domain specific. They are designed to recognize
a very small, carefully picked subset of reality. I think in this
sense, the outdoor navigation system would be the most challenging to
produce. The task of making sense of an effectively random scene is
difficult for people ( given how easily some individuals get lost in
the woods or in cities), and we then expect a machine to do better?

In the first paper, the following statement is made about this
problem: “However, it is also not an “AI-complete” problem because
this is simpler than the general computer vision problem.” They then
continue to explain that this simplification comes from use of markers
in the environment. I think that this assumption of a ‘friendly’
environment is overly optimistic. People will want AR to solve
problems of recognizing the environment for them. If the requirement
is to clearly mark the layout of the land, people, or objects with
markers beforehand, then AR is not needed. People are just as good at
following road signs. AR is only really interesting if it can do
everything we expect of it just by observing the world, as we do. I
feel that good AR is the “AI-complete” computer vision problem that
the paper claims it is not.

Aaron Bartholomew April 25, 2011 at 8:17 pm

Taking a cue from Michio Kaku’s book, Physics of the Future, I think mobile systems focused on optical see-through displays are the most promising application of AR. In particular, this ideal AR system would be a virtual retina display, which uses low power lasers to form high-fidelity images directly on the retina (described in the second article as under development by MicroVision). This type of display is unhampered by “external” equipment and provides the illusion of making images appear two feet in front of the observer. Once these displays are capable of performing network processing via remote servers, I could foresee society adopting this technology as an AR-HUD for web interaction. Since this technology is essentially just an extension of our vision, it would be seamlessly integrated into everyday life. We could have continuous access to the internet through an interactive process that is essentially a natural body process (human-machine synthesis!?).

Much like with VR, I think latency issues are the most problematic for AR. Dynamic robustness is critical for effective AR technology because it cannot be considered to augment reality unless it is synthesized with that reality at all times. In my opinion, lagging virtual environments are intolerable and they immediately render the technology unusable or ineffective. This issue is immediately apparent with the networked retinal virtual display. If processing is performed on remote servers, there is a guaranteed delay which will need to be addressed if mobile AR is to become feasible.

I find the biggest advance from the first article to be the changing trend in user-interface design. The reevaluation on how the virtual information can be interacted with suggests that AR proponents are consolidating theory into practicality. Having a solid grasp of the problem to be solved is an important step towards productivity, since it enables effective planning and decision-making. This trend towards usability indicates to me that sooner than later, AR systems will be commercially available; which I believe is a huge progression for the field.

Although I value the research for developing these systems, I find the prepared scenario tracking systems to be dubious. Even though the developers are able to get super high-quality registration, what is the end goal? Will this technology amount to something usable in non-controlled scenarios? I feel like InterSense and 3rdTech are researching themselves towards a dead-end with a cool, but mostly “worthless” technology. They are stuck in the mindset of AR for the sake of AR that was apparent in the first paper; they haven’t thoroughly evaluated the need they are attempting to satisfy.

Rebecca Hudson April 26, 2011 at 3:57 am

The most promising application of AR is the annotation of real reality in real time, which can be assisted by readily available technology. Mobile AR systems, such as those used with the so-called Smartphones, are ripe for a glut of user applications, some of which will represent major uses of AR technology into the future.
The most problematic issues for the widespread implementation and use of AR technologies will be sociological in nature. Social acceptance will end up being a significant barrier to development because of the perceived harm of mapping publicly available information directly onto reality. For example, imagine if a person’s entire criminal history, or even family tree and net worth, could be accessed at any instant the person is within camera range. Legal battles and regulatory rodeos are likely to ensue as new applications are rapidly introduced.
The biggest advance between the first and second article is a reduction in the size and cost of the devices used, whereas issues such as latency, visual offset, and environmental registration still loom large on the horizon. Hardware and techniques shown to be recently available in the “Recent Advances” piece seem to accomplish many of the same AR tasks from the ’97 survey piece, except in a smaller, lighter, and faster incarnation.
Topic I find interesting: potential consequences of chronic AR use
Extended use of AR devices for recreational or occupational activities or both could result in full user adaptation to the system. I imagine a scenario in which the use of HMDs for augmented reality is so pervasive that many people fall violently ill when they take off their HMDs for any period of time.
Users could become accustomed to a standardized or offset input position, or to landscapes with effectively flattened appearances, confounding their ability to process standard depth cues with their eyes. I suspect that the first few generations of widespread consumer AR technology will suffer most acutely from these issues, as the public, likely including many children, will comprise the fourth through seventy-fifth rounds of safety and usability testers of this new technology.
