Replicating real-life food choices
UMBC Professor of Psychology Charissa Cheah was awarded the IRC’s 2017 Summer Faculty Research Fellowship for her proposal to use a virtual version of UMBC’s student dining hall, True Grit’s, to test food choices and potential interventions. A virtual environment gives researchers complete control over variables such as food position, lighting, and portion size, and allows the experiment to be replicated anywhere in the world. But before those variables could be manipulated, Cheah needed to confirm that subjects made their food choices in virtual reality (VR) in the same ways they did in real life (RL). Thus the initial phase of the project sought to answer three questions:
- What are the physiological correlates, particularly neurological, of food decision-making behaviors in the VR and RL buffet settings?
- How consistent are individuals’ food decision-making processes between the VR setting and the RL buffet setting?
- What are the specific differences in human behavior between the two environments, and how can they be controlled in future experiments?
The IRC’s main task was to build the virtual buffet, which would then be used in our existing virtual reality lab (officially the Observation and Measurement-Enabled Mixed Reality Lab, or OMMRL). To model the dining hall, buffet tables, food, and accessories, IRC artists and engineers used a variety of tools common in the video game and entertainment industries. Key among them was photogrammetry: a process that computes 3D models of people, places, or things from multiple photographs taken from different perspectives. The process can be extremely accurate and can save time compared to modeling objects from scratch. First, IRC staff carefully photographed the inside of the dining hall: its walls, floor and ceiling, buffet tables, food trays, and the food itself. Photogrammetry works best with sharp, clear images, so the staff used long exposures, which permit the small apertures that keep both foreground and background in focus despite the dining hall’s low light. Every surface appeared in at least two, preferably three, images so that enough overlap existed for a stable geometric mesh (3D model) to be built. The food itself was scanned (photographed) in the IRC’s photogrammetry facility.
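As a rough illustration of that pipeline (the source does not name the application the IRC used, so the open-source COLMAP toolchain and the file paths below are assumptions), a minimal sketch of turning overlapping photographs into a sparse 3D reconstruction might look like this:

```python
# Minimal sketch of an automated photogrammetry pass using the open-source
# COLMAP toolchain; the tool choice and paths are illustrative assumptions,
# not the IRC's actual pipeline.
import subprocess
from pathlib import Path

IMAGES = Path("buffet_photos")       # overlapping photos of the dining hall
WORKSPACE = Path("colmap_workspace")
WORKSPACE.mkdir(exist_ok=True)
db = WORKSPACE / "database.db"

# 1. Detect distinctive features in each photograph.
subprocess.run(["colmap", "feature_extractor",
                "--database_path", str(db),
                "--image_path", str(IMAGES)], check=True)

# 2. Match features across photographs; each surface must appear in at
#    least two (preferably three) images for matching to succeed.
subprocess.run(["colmap", "exhaustive_matcher",
                "--database_path", str(db)], check=True)

# 3. Triangulate the matches into camera poses and a sparse 3D model.
sparse = WORKSPACE / "sparse"
sparse.mkdir(exist_ok=True)
subprocess.run(["colmap", "mapper",
                "--database_path", str(db),
                "--image_path", str(IMAGES),
                "--output_path", str(sparse)], check=True)
```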
The images were then fed into a computer application to produce the virtual 3D models. The geometry and the surface colors and patterns the photographs captured (known as textures) were edited separately; both required considerable repair and optimization using modeling and animation tools. The surface textures were then “wrapped” around the geometry.
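The “wrapping” step depends on UV coordinates, which map each vertex of the 3D geometry to a point in the 2D texture image. A minimal sketch, using a hypothetical table-top mesh rather than the IRC’s actual assets:

```python
# Minimal sketch of how a 2D texture is "wrapped" onto 3D geometry via
# UV coordinates; the mesh data here is hypothetical, not the IRC's assets.

def planar_uvs(vertices):
    """Project each (x, y, z) vertex onto the XY plane and normalize the
    result to [0, 1] so it can index into a texture image."""
    xs = [v[0] for v in vertices]
    ys = [v[1] for v in vertices]
    min_x, max_x = min(xs), max(xs)
    min_y, max_y = min(ys), max(ys)
    return [((x - min_x) / (max_x - min_x),
             (y - min_y) / (max_y - min_y)) for x, y, _ in vertices]

# Four corners of a buffet-table top (x, y, z); each vertex gets a UV pair
# telling the renderer which texture pixel to paint it with.
table_top = [(0.0, 0.0, 0.9), (2.0, 0.0, 0.9), (2.0, 1.0, 0.9), (0.0, 1.0, 0.9)]
print(planar_uvs(table_top))  # [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
```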
The process of building the virtual buffet required balancing two competing needs: to appear real, the models needed to be as detailed as possible, but the more detailed they were, the more work it took for the computer to display them, and the more slowly they refreshed on screen. Virtual reality has to refresh much faster, in frames per second, than television or movies; if it runs too slowly, the lag can break a subject’s impression of reality and even cause motion sickness. Therefore, in some cases, the polygonal faces of the 3D models had to be either subdivided (increased) or decimated (reduced) for the program to run both smoothly and realistically. The interface that allows subjects to move through the space, collect food, and have it measured was programmed in the Unreal™ game engine.
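As a hedged sketch of that tradeoff, using the open-source Open3D library on a stand-in mesh rather than the IRC’s professional tools or assets:

```python
# Minimal sketch of the detail-versus-speed tradeoff using the open-source
# Open3D library; an illustrative stand-in, not the IRC's pipeline.
import open3d as o3d

mesh = o3d.geometry.TriangleMesh.create_sphere(radius=1.0, resolution=40)
print(f"original:   {len(mesh.triangles)} triangles")

# Decimate: collapse edges until the triangle budget is met, trading
# geometric detail for faster, steadier frame rates.
simple = mesh.simplify_quadric_decimation(target_number_of_triangles=500)
print(f"decimated:  {len(simple.triangles)} triangles")

# Subdivide: split each triangle into four, adding detail where a model
# would otherwise look too faceted up close.
detailed = mesh.subdivide_midpoint(number_of_iterations=1)
print(f"subdivided: {len(detailed.triangles)} triangles")
```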
The resulting VR space is an attractive and convincing replica of the original. Subjects pick up a plate and can choose from up to seven portion sizes of foods such as carrots, mashed potatoes, salmon, pizza, and desserts. The drink dispenser fills glasses with water or soda. When subjects have made their selections, they set the plate down by the cash register, where the computer reads the volume of each item and calculates the total calorie count for the plate.
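A minimal sketch of how such a tally could work, with hypothetical volumes and calorie densities standing in for the project’s real nutritional data:

```python
# Minimal sketch of the plate tally at the virtual cash register; the
# calorie densities and volumes below are hypothetical placeholders.
CALORIES_PER_ML = {          # kcal per milliliter, illustrative values
    "carrots": 0.35,
    "mashed_potatoes": 1.0,
    "salmon": 1.9,
    "soda": 0.4,
}

def plate_calories(selections):
    """Sum calories for a {food: volume_ml} mapping read off the plate."""
    return sum(CALORIES_PER_ML[food] * ml for food, ml in selections.items())

plate = {"carrots": 80.0, "mashed_potatoes": 150.0,
         "salmon": 120.0, "soda": 350.0}
print(f"total: {plate_calories(plate):.0f} kcal")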
The entire interaction between the subject and the virtual space is tracked and recorded in a variety of media: point-of-view and third-person (objective camera) video as well as motion-sensing devices. UMBC Assistant Professor of Information Systems Jiaqi Gong led a team that outfitted participants with biometric sensors to capture physiological data such as heart rate, heart rate variability, galvanic skin response, and prefrontal cortex activation using functional near-infrared spectroscopy (fNIRS). The sensor data was then synchronized with the video via time stamps. The true value of all this data comes from that synchronization, which connects it to the subjects’ movement through their environments.
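A minimal sketch of that kind of timestamp alignment, assuming roughly 30 fps video and illustrative sensor values (not the project’s actual data or tooling):

```python
# Minimal sketch of aligning time-stamped sensor samples with video frames;
# the data and the ~33 ms tolerance (~30 fps video) are assumptions.
import pandas as pd

video = pd.DataFrame({"t": pd.to_timedelta([0.000, 0.033, 0.066, 0.100], unit="s"),
                      "frame": [0, 1, 2, 3]})
sensors = pd.DataFrame({"t": pd.to_timedelta([0.010, 0.045, 0.090], unit="s"),
                        "heart_rate": [72, 73, 71]})

# For each video frame, take the nearest sensor sample within tolerance,
# so physiology can be read alongside what the subject was doing on screen.
synced = pd.merge_asof(video, sensors, on="t",
                       direction="nearest",
                       tolerance=pd.Timedelta("33ms"))
print(synced)
```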
The use of virtual reality to observe, measure, and intervene in food choice behaviors is a growing practice across numerous domains. Dr. Susan Persky heads the Immersive Virtual Environment Testing Area (IVETA), part of the Social and Behavioral Research Branch (SBRB) of the National Human Genome Research Institute (NHGRI), NIH. Dr. Persky contracted the IRC to improve the representations of food within her group’s virtual buffet; the IRC created eight new food representations and helped improve others. One IRC objective in doing the work is to build a relationship that might expand in the future to include research into the significant open questions about using simulations to study and influence behavior.
Researchers and Creators
- Nutrition Consultant: Travis D. Masterson, Dartmouth College
- Project Directors: Charissa Cheah, Professor of Psychology, Jiaqi Gong, Assistant Professor of Information Systems
- Lead Animator/Modeler: Ryan Zuber
- Animation/Modeling Interns: Ali Everitt, Shannon Irwin, Tomas Loayza, Tory Van Dine, Teng Yang
- Lead Programmer: Mark Jarzynski
- Programmer: Caitlin Koenig
Students
- Graduate Assistants from the Sensor-Accelerated Intelligent Learning Lab: Jinnuo Jia, Stephen Kaputsos, Varun Mandalapu, Munshi Mahbubur Rahman, Claire Tran
- Graduate Assistants from the Culture, Childhood, and Adolescent Development Lab: Salih Barman, Hyun Su Cho, Sarah Jung, Tali Rasooly, Kathy Vu
- Matt Wong, ’21 Computer Science, UMCP
Related Contract Work
- Principal Investigator: Susan Persky, Associate Investigator and Head of the Immersive Virtual Environment Testing Area (IVETA), SBRB, NHGRI, NIH
- Project Coordinator: Lee Boot, Director of the Imaging Research Center
- Co-Investigators: Ryan Zuber, IRC Technical Director for Modeling and Animation, Chris Fortney, Lab Director, IVETA
- Sponsorship: National Human Genome Research Institute (NHGRI), NIH