Friday, December 30, 2011

Case Study: A VR User Interface

After doing some research on Virtual Reality (VR) user interfaces (UIs) in general, I decided it was best to do more in-depth research on some specific implementations. I have come across many interesting articles, and one that I found particularly interesting was The Go-Go Interaction Technique: Non-linear Mapping for Direct Manipulation in VR. I liked this article and technique because it is specifically aimed at making object interaction and manipulation easier, an area that I have mentioned before as being of interest to me.

The Go-Go Interaction Technique allows users to manipulate objects that are both close to and far from themselves. Tracking a Polhemus Fastrak sensor worn on the hand allows a virtual representation of the user's hand to be co-located with the user's actual hand. The Go-Go technique, named after the "Go-Go Gadget" arms of the cartoon character Inspector Gadget, plays with the length of the user's arm. If the user wants to manipulate an object that is out of reach, all they need to do is reach their arm past a certain threshold D, defined to be 2/3 of their arm's length, and the virtual arm grows in a non-linear fashion, dependent on how far past D the user stretches.
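To make the mapping concrete, here is a minimal Python sketch of the arm-extension function as I understand it from the paper: the virtual hand tracks the real hand one-to-one inside the threshold D, and grows quadratically beyond it. The coefficient k is a tunable constant in the paper; the value below is an arbitrary placeholder, as are the function and parameter names.

    def gogo_virtual_arm_length(r_real, arm_length, k=0.5):
        """Map the real hand's distance from the body to a virtual distance
        (Go-Go technique). r_real and arm_length are in the same units;
        k is a growth coefficient (0.5 is a placeholder, not from the paper)."""
        d = (2.0 / 3.0) * arm_length           # threshold D: 2/3 of the arm's length
        if r_real < d:
            return r_real                      # linear zone: one-to-one mapping
        return r_real + k * (r_real - d) ** 2  # non-linear zone: the arm "grows"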

I like this technique for a few reasons. First, no special teaching or learning is involved: to get to objects that are out of reach, we already know to stretch our arms farther. The movement is natural and intuitive. I also like the simplicity of the technique. There is no input device needed, such as a mouse, keyboard, or even vocal input; there is only the body itself.

I believe that the Go-Go technique can also be extended fairly easily, due to its simplicity. For example, it could be used as a way to navigate through an environment. If the virtual environment is larger than the physical one, then walking around the room to get to a location becomes impractical. Using a subject's arm direction to indicate where they would like to move is simple and takes away the need for devices such as joysticks (see the sketch below). I also think that taking this approach as a jumping-off point and extending it to create an entire UI allowing six-degree-of-freedom manipulation of objects is a very interesting path to go down.
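To show what I mean, here is a rough sketch of that locomotion idea. None of this comes from the paper: reusing the 2/3 threshold, scaling speed with stretch, and the max_speed parameter are all my own hypothetical choices.

    import numpy as np

    def locomotion_velocity(hand_pos, chest_pos, arm_length, max_speed=2.0):
        """Hypothetical Go-Go-style locomotion: once the hand is stretched past
        the 2/3 threshold, move the viewpoint along the arm's direction, faster
        the farther the stretch. All parameters here are illustrative."""
        offset = np.asarray(hand_pos, dtype=float) - np.asarray(chest_pos, dtype=float)
        r = np.linalg.norm(offset)
        d = (2.0 / 3.0) * arm_length
        if r <= d:
            return np.zeros(3)                 # inside the linear zone: stand still
        direction = offset / r                 # unit vector along the outstretched arm
        stretch = (r - d) / (arm_length - d)   # 0 at the threshold, 1 at full extension
        return direction * min(stretch, 1.0) * max_speed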

Reference

Ivan Poupyrev, Mark Billinghurst, Suzanne Weghorst, and Tadao Ichikawa. The go-go interaction technique: non-linear mapping for direct manipulation in VR. Proceedings of the 9th Annual ACM Symposium on User Interface Software and Technology (UIST '96), pp. 79-80, November 6-8, 1996, Seattle, Washington, United States.

General Overview of Virtual Reality User Interface

As is true when starting any new research topic, it is best to begin with a general overview of the knowledge base before narrowing in on anything too specific. For this reason, I went to Google Scholar with the search query 'virtual reality user interface' and started sifting through results. I found one article, A Survey of Design Issues in Spatial Input, particularly helpful as a jumping-off point into virtual reality interfaces and have summarized some of its main points in this blog post.

The first sentence of the article states, "We present a survey of design issues for developing effective free-space three-dimensional (3D) user interfaces."

The authors have split their discussion into two sections: issues that deal with human perception and issues that deal with ergonomic concerns.

The first section, on human perception, makes the point that there is a difference between understanding and experiencing 3D space. Humans can experience 3D spaces by exploring them and learning about them, and this ability comes quite naturally to most. Understanding 3D space, as in being able to mentally manipulate 3D objects, is a different matter.
With this in mind, there are a number of considerations to take into account when designing a 3D UI:

  • spatial reference 
  • relative vs. absolute gesture
  • two-handed interaction
  • multisensory feedback
  • physical constraints
  • head-tracking techniques
The paper then goes into greater detail about each of these considerations.

The ergonomic section of the paper discusses how to make interfaces comfortable for a user. This means designing interfaces that allow users to act the same way they do in the real world. For example, in many tasks users tend to utilize a very small volume of space around their bodies, so a user interface that requires large movements over an extended period of time is unnatural and fatiguing. Likewise, designing an interface with multiple ways to input information, such as hand gestures, mice, and keyboards, can be tiring both physically and mentally.

Reference


Ken Hinckley, Randy Pausch, John C. Goble, and Neal F. Kassell. A survey of design issues in spatial input. Proceedings of the 7th Annual ACM Symposium on User Interface Software and Technology (UIST '94), pp. 213-222, November 2-4, 1994, Marina del Rey, California, United States.

Monday, December 26, 2011

Change of Plans

I have weekly meetings with my research professor, Dr. Bobby Bodenheimer, and recently we have been discussing a change in direction for my research project. While I still believe that dyadic interaction and its effects on learning in a virtual environment are extremely important to the advancement of virtual reality, I also believe that the way in which humans interact with the environment is important and, at least for me, more interesting.

The user interface for an immersive VE is very naturalistic: a user simply puts on a head-mounted display (HMD) and can begin to use the environment right away. Using the environment can mean slightly different things for various environments, but for the most part it means locomoting through the space, by walking or with the aid of a joystick, and observing the environment. This is a very passive role, with little ability to manipulate the space or actively interact with the objects that may surround the user.

I have mentioned some of my past research in previous posts, and while much of it related to learning in the virtual world, both of my past projects also involved object interaction within a VE. One experiment had the user throwing a ball that was co-located with its physical counterpart; the other involved a stamp tool and a board into which the stamp could be fit. These simple examples of object interaction presented many problems of their own: for example, making sure that the virtual and physical representations were oriented in exactly the same way, and that the tracked physical object was never lost by our tracking system, which would cause the virtual representation to disappear from the VE. Other problems present themselves once we move away from small, simple object interactions into spaces that are larger than the physical tracking space and contain many objects. It becomes impossible to provide physical representations for all of the objects in a virtual space. An intuitive interface that allows us to manipulate these objects must therefore be developed, and I hope to expand on this topic with the time I have left here at Vanderbilt.

Saturday, November 5, 2011

A Paper Overview

In my quest to understand the background of my research, I have decided to use a few blog posts over the course of the school year to write reviews of relevant research papers.
This week I have chosen:


Dodds, T.J., Mohler, B.J., and Bülthoff, H.H. A Communication Task in HMD Virtual Environments: Speaker and Listener Movement Improves Communication. CASA: Proceedings of the 23rd Annual Conference on Computer Animation and Social Agents (2010).

Dodds et al. conducted an experiment in which subjects performed a communication task in a virtual environment while both participants were provided with a self-avatar. They used a 2x2x2 experimental design; the variables were the speaker's avatar (static vs. animated), the listener's avatar (static vs. animated), and the perspective (first-person vs. third-person camera view). The task required two people, one as the speaker and one as the listener. The speaker had to describe to the listener as many words, provided by the experimenter, as possible in 3 minutes, while the listener guessed the words being described. The task was also performed in the real world for comparison purposes.

The performance measures in this experiment were the average speed of the speaker's dominant hand during the task, as an indication of the amount of gesturing by the speaker, and the average number of words correctly described and guessed in the 3-minute period.
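As an aside, the hand-speed measure is straightforward to compute from tracker data. Below is a guess at how it might be done; the paper gives no code, so the array layout and the path-length-over-time definition are my own assumptions.

    import numpy as np

    def average_hand_speed(positions, timestamps):
        """Average speed of the tracked dominant hand over a trial.
        positions: (N, 3) array of hand positions (e.g., meters), one per sample.
        timestamps: (N,) array of sample times in seconds."""
        positions = np.asarray(positions, dtype=float)
        timestamps = np.asarray(timestamps, dtype=float)
        step_lengths = np.linalg.norm(np.diff(positions, axis=0), axis=1)
        duration = timestamps[-1] - timestamps[0]
        return step_lengths.sum() / duration   # total path length / elapsed time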

The results show that the average speed of the dominant hand is highest in the real world, but there is also a significant increase in hand speed in the tracked-tracked-third-person condition (both avatars animated, third-person view) compared against the other conditions. This condition also yielded the highest average number of guessed words.

This experiment emphasizes the importance of having a body in the virtual environment. The conditions in which participants were provided with animated avatars produced better performance and behavior closer to that seen in the real world, with regard to the amount of gesturing.

Dodds et al. assert that communication is "an essential subtask of any collaborative virtual environment." This work will be important when setting up and analyzing the results of my experiment. My participants will be performing a non-verbal task, i.e., playing catch, and the elements of non-verbal communication that help them better understand how, when, and where the ball will be thrown will be pivotal to their success.

Both verbal and non-verbal communication are huge players in the way that we perform tasks and learn from each other, and figuring out how to optimize the amount of communication available to the users of a virtual environment is an important step toward improving the relevance and usefulness of virtual reality.

Monday, October 31, 2011

Learning

Last post I described what a dyadic interaction is and why we think it is a useful way to learn.
This post will explore what learning is and how we learn.

Merriam-Webster defines learning as:
1. knowledge or skill acquired by instruction or study
2. modification of a behavioral tendency by experience (as exposure to conditioning)


There are many different types of learning, including but not limited to habituation, sensitization, classical conditioning, and observational learning. Each type of learning describes a different process that enables an individual to retain some sort of skill or knowledge.

Classical conditioning, for example, describes the process of taking an unconditioned stimulus and response and pairing the stimulus with a conditioned stimulus to produce a conditioned response. The archetype for this type of learning is Pavlov's dogs. The unconditioned stimulus was meat powder, and the unconditioned response to this powder was salivation by the dogs. Pavlov paired this stimulus with a conditioned stimulus, a bell: every time he introduced meat powder to the dogs he would also ring the bell, and after enough repetitions the dogs began salivating, now as a conditioned response, when no meat powder was introduced at all and just the bell was rung.

Observational learning is learning that occurs after observation of a model. This is the type of learning that occurred in my first paper, when our subjects' performance improved after watching an avatar perform a task. It is also the type of learning we hope will occur when we introduce dyadic interactions into a virtual environment: the other humans in the scene will help our subjects to mimic observed actions and learn from watching the results of actions taken within the environment.

Sunday, October 16, 2011

Dyadic Interactions

The first step in my research is to understand completely the topic that I will be studying. I will eventually come up with a series of experiments to investigate the issues that I have chosen to address, but first I have to have a clear understanding of what is already known about my topic. To do this, I have broken my project down into understanding:
1) Dyadic interactions
2) Learning

This post will explore dyadic interactions.
A dyadic interaction is, according to the Oxford University Press, an "interaction between two people (a dyad); interpersonal interaction."
Work by Neomy Storch titled "Relationships formed in dyadic interaction and opportunity for learning" investigates how the relationships between pairs can help to promote learning in the context of a university second-language (ESL) classroom setting. Storch claims that there are different types of relationships formed, including those of a collaborative nature and those that take a dominant/dominant form, and investigates how the type of relationship then translates to individual learning.
Storch's work is supported by a large body of research on the importance of dyadic interactions within a classroom, especially one with the purpose of teaching a second language. The interactions help to catalyze the learning process and allow the student to feel more comfortable with the material, which in turn allows a better understanding of it.
All of this work investigates dyadic interactions and how we can use this information to better understand social interactions.
The work that I would like to accomplish through my current project will look at dyadic interactions as a tool for studying not only social interactions and the transfer of knowledge, but also the transfer of skills.
My previous work, "The influence of avatar (self and character) animations on distance estimation, object interaction and locomotion in immersive virtual environments," addressed the question of how adding other human beings to a Virtual Environment (VE), whether in the form of a previously recorded animation or a self-representation of the user within the environment, affected performance on three simple tasks. We found that adding either form of human character to the scene did help to improve task performance, and we concluded that the presence of other characters provides users with helpful cues, such as familiar size cues and biological movement cues, that are typically absent from VEs. It will be interesting to extend this work and look at the influence of introducing a character into the VE that our users can interact with, as opposed to passively watching a recorded character perform a task.

The above is a very brief overview of the motivation for my current project. Next week I will discuss the different ways in which humans learn, so we can understand why dyadic interactions seem to be such a powerful tool for enabling the learning process.

References:

Neomy Storch. Relationships formed in dyadic interaction and opportunity for learning. International Journal of Educational Research, Volume 37, Issues 3-4, 2002, Pages 305-322. ISSN 0883-0355. doi:10.1016/S0883-0355(03)00007-7.

Friday, October 14, 2011

University of Utah 10/06-08

This past week was Fall Break at Vanderbilt University so I took the opportunity to accept an invitation from Dr. Bill Thompson and head out to the University of Utah. I am currently in the process of looking at PhD programs and this was my first trip out to see a school in person.
I was supposed to get in on Thursday morning, but unfortunately my flight was cancelled due to a mechanical issue with the door, so I didn't end up arriving until Friday at 10:30 am. Dr. Thompson picked me up from the airport, and we promptly performed an exercise in perception, riding up and down the escalators with one eye closed to demonstrate the difficulties that people with extremely low vision face.
We then headed to campus and went straight to a group lab meeting, where I was introduced to the graduate students who work with Dr. Thompson, Dr. Sarah Creem-Regehr, and Dr. Jeanine Stefanucci. Along with another visiting student named Kaushik Satyavolu, I presented our most recent work to the group and had a chance to discuss it with them. I also got a chance to talk with them about their VR setup and got some good ideas on how I will implement dyadic interactions within our own setup here at Vanderbilt.
The research group is a unique blend of psychologists and computer scientists working together on topics such as perception, action, and presence, using virtual reality as a highly effective tool for studying these areas.
Overall, I had a very nice visit and got a chance to talk to many different faculty with a variety of research interests. I'm still trying to decide what to do at the end of this year, and this trip helped me take a big step in figuring out what exactly I'm looking for!

Sunday, October 2, 2011

Introduction

Hi, I am starting a year-long project investigating perception and action in dyadic interactions in a virtual environment under the supervision of Professor Bobby Bodenheimer. We will be using the Learning in Virtual Environments (LIVE) lab here at Vanderbilt University to study the ways in which humans interact with each other in virtual reality when given a virtual self-representation of their own body (a self-avatar). We will do this by tracking various measures of learning experienced by the users of our environment as they perform tasks such as playing catch and ping-pong.
This project is made possible through the support of the Collaborative Research Experience for Undergraduates (CREU) program: http://cra-w.org/collaborative-research-experience-for-undergraduates-creu.
I will be keeping a blog of my progress throughout the process. You can find more information about me and my past research at mantelish.vuse.vanderbilt.edu/erin (a work in progress).