Friday, December 30, 2011

Case Study: A VR User Interface

After doing some research on Virtual Reality (VR) user interfaces (UIs) in general, I decided it was best to do more in-depth research on some specific implementations. I have come across many interesting articles, and one that I found particularly interesting was The Go-Go Interaction Technique: Non-linear Mapping for Direct Manipulation in VR. I liked this article and technique because it is specifically aimed at making object interaction and manipulation easier, which is an area that I have mentioned before as being of interest to me.

The Go-Go Interaction Technique allows users to manipulate objects that are both close to and far from themselves. Tracking a Polhemus Fastrak sensor worn on the hand allows a virtual representation of the user's hand to be co-located with the user's actual hand. The Go-Go technique, named after Inspector Gadget's "Go-Go Gadget" arms, varies the length of the user's virtual arm. If the user wants to manipulate an object that is out of reach, all they need to do is reach their arm past a threshold distance D, defined as 2/3 of their own arm's length, and the virtual arm grows in a non-linear fashion, depending on how far past D the user stretches.
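The mapping the paper describes can be sketched in a few lines. Below the threshold D the virtual hand is co-located with the real hand; past D the virtual arm grows quadratically with the extra stretch. The gain coefficient k and the concrete distances here are illustrative choices, not values from the paper:

```python
def gogo_mapping(r_real, arm_length, k=0.5):
    """Distance of the virtual hand from the body, given the real hand's
    distance r_real and the user's arm length (both in metres).

    A sketch of the Go-Go non-linear mapping; k is a tunable gain
    coefficient chosen here for illustration.
    """
    # Threshold D is 2/3 of the user's arm length, as in the paper.
    d = (2.0 / 3.0) * arm_length
    if r_real < d:
        # Within the threshold the mapping is one-to-one: the virtual
        # hand stays co-located with the physical hand.
        return r_real
    # Past the threshold, the virtual arm grows non-linearly
    # (quadratically) with the amount of stretch beyond D.
    return r_real + k * (r_real - d) ** 2
```

With a 0.6 m arm, a hand held 0.3 m out maps one-to-one, while stretching to 0.6 m pushes the virtual hand slightly farther out, and the effect compounds the farther the user reaches.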

I like this technique for a few reasons. First, there is no special training involved. To get to objects that are out of reach, we already know to stretch our arms farther; the movement is natural and intuitive. I also like the simplicity of this technique. No input device is needed, such as a mouse, keyboard, or even vocal input; there is only the body itself.

I believe that the Go-Go technique can also be extended fairly easily, due to its simplicity. For example, it could be used as a way to navigate through an environment. If the virtual environment is larger than the physical one, then walking around the room to get to a location becomes impractical. Using a subject's arm direction to indicate where they would like to move is simple and eliminates the need for devices such as joysticks. I also think that taking this approach as a jumping-off point and extending it to create an entire UI allowing 6-degree-of-freedom manipulation of objects is a very interesting path to go down.
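The arm-directed locomotion idea above could be sketched as follows. This is purely a hypothetical illustration of the extension I am proposing, not something from the paper: when the arm is extended past a threshold, the viewpoint translates along the torso-to-hand direction. All names, thresholds, and speeds are assumptions for the sake of the example:

```python
import math

def steer(torso, hand, threshold=0.4, speed=1.5, dt=0.016):
    """Viewpoint displacement for one frame of arm-directed travel.

    torso, hand: (x, y, z) tracker positions in metres.
    threshold:   arm extension (m) below which no motion occurs.
    speed:       travel speed (m/s) once the threshold is passed.
    dt:          frame time in seconds.

    Hypothetical sketch: values are illustrative, not from the paper.
    """
    # Vector from the torso to the hand, and how far the arm is extended.
    dx = [h - t for h, t in zip(hand, torso)]
    reach = math.sqrt(sum(c * c for c in dx))
    if reach < threshold:
        # Arm not extended far enough: the user stays put.
        return (0.0, 0.0, 0.0)
    # Normalize the arm vector and move along it at a constant speed.
    return tuple(speed * dt * c / reach for c in dx)
```

Calling this once per frame with the tracked torso and hand positions would move the user in whatever direction they point, with no joystick required; the threshold keeps ordinary near-body gestures from triggering travel.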

Reference

Ivan Poupyrev, Mark Billinghurst, Suzanne Weghorst, and Tadao Ichikawa. The go-go interaction technique: non-linear mapping for direct manipulation in VR. In Proceedings of the 9th Annual ACM Symposium on User Interface Software and Technology (UIST '96), pp. 79-80, Seattle, Washington, November 1996.

General Overview of Virtual Reality User Interface

As is true when starting out any new research topic, it is best to begin with a general overview of the knowledge base before narrowing in on anything too specific. For this reason, I went to Google Scholar with the search query 'virtual reality user interface' and started sifting through results. I found one article, A Survey of Design Issues in Spatial Input, particularly helpful as a jumping-off point into virtual reality interfaces and have summarized some of its main points in this blog post.

The first sentence of the article states, "We present a survey of design issues for developing effective free-space three-dimensional (3D) user interfaces."

The authors split their discussion into two sections: issues that deal with human perception and issues that deal with ergonomic concerns.

The first section, about human perception, makes the point that there is a difference between understanding and experiencing 3D space. Humans can experience 3D spaces by exploring them and learning about them, and this ability comes quite naturally to most. Understanding 3D space, as in being able to mentally manipulate 3D objects, is a different matter.
With this in mind there are a number of considerations to take into account when designing a 3D UI:

  • spatial reference 
  • relative vs. absolute gesture
  • two-handed interaction
  • multisensory feedback
  • physical constraints
  • head-tracking techniques
The paper then goes into greater detail about each of these considerations.

The ergonomic section of the paper discusses how to make interfaces comfortable for a user. This means designing interfaces that allow users to act in the same way that they do in the real world. For example, in many tasks users tend to utilize a very small volume of space around their bodies; a user interface that requires large movements over an extended period of time is unnatural and fatiguing. Likewise, designing an interface with multiple input modalities, such as hand gestures, mice, and keyboards, can be tiring both physically and mentally.

Reference


Ken Hinckley, Randy Pausch, John C. Goble, and Neal F. Kassell. A survey of design issues in spatial input. In Proceedings of the 7th Annual ACM Symposium on User Interface Software and Technology (UIST '94), pp. 213-222, Marina del Rey, California, November 1994.

Monday, December 26, 2011

Change of Plans

I have weekly meetings with my research professor, Dr. Bobby Bodenheimer, and recently we have been discussing a change in direction for my research project. While I still believe that dyadic interaction and its effects on learning in a virtual environment are extremely important to the advancement of virtual reality, I also believe that the way in which humans interact with the environment is important and, at least for me, more interesting.

The user interface for an immersive VE is very naturalistic: a user simply puts on a head-mounted display (HMD) and can begin to use the environment right away. Using the environment can mean slightly different things for various environments, but for the most part it means locomoting through the space, by walking or with the aid of a joystick, and observing the environment. This describes a very passive role within the environment, with little ability to manipulate the space or take an active role in interacting with the objects that may surround the user.

I have mentioned some of my past research in previous posts, and while much of it related to learning in the virtual world, both of my past research projects also involved object interaction within a VE. One experiment had the user throwing a ball that was co-located with its physical counterpart, and the other involved a stamp tool and a board into which the stamp tool could be fit. These simple examples of object interaction presented many problems of their own: for example, making sure that the virtual and physical representations were oriented in exactly the same way, and that the tracked physical object was never lost by our tracking system, which would cause the virtual representation to disappear in the VE. Other problems present themselves once we move away from small and simple object interactions and into spaces that are larger than the physical tracking space and contain many objects. It becomes impossible to provide physical representations for all of the objects in a virtual space. An intuitive interface that allows us to manipulate these objects must therefore be developed, and I hope to expand on this topic with the time I have left here at Vanderbilt.