Lorna C. Quandt, Ph.D.
Assistant Professor, Ph.D. in Educational Neuroscience Program
Gallaudet University, Washington, D.C., USA
Title: Embodying sign language: Using avatars, VR, and EEG to design novel learning tools.
Abstract: This talk will introduce SAIL, an NSF-funded project housed within the Action & Brain Lab and the Motion Light Lab at Gallaudet University. The project involves the development and testing of a new immersive ASL learning environment that aims to teach non-signers basic ASL. Our team created signing avatars from motion-capture recordings of deaf signers signing ASL. The avatars are placed in a virtual reality environment accessed via a head-mounted display, and the user’s own movements are captured via a gesture-tracking system. A “teacher” avatar guides users through an interactive ASL lesson involving both the observation and production of signs. Drawing on principles of embodied learning, users learn ASL signs from both the first-person and the third-person perspective. The SAIL project integrates multiple technologies: avatars, motion capture, virtual reality, gesture tracking, and EEG, with the goal of creating a new way to learn sign language.
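To make the observation-and-production loop concrete, the sketch below shows one plausible way a lesson could score a learner's sign production against an avatar's motion-capture reference pose. This is an illustrative assumption, not the SAIL implementation: the 21-joint hand representation, the tolerance value, and the feedback rule are all hypothetical, and a real system would obtain joint positions from its gesture-tracking SDK rather than from synthetic data.

    # A minimal sketch (assumed, not SAIL's actual code) of comparing a
    # learner's tracked hand pose to an avatar's recorded reference pose.
    import numpy as np

    def pose_error(learner_joints: np.ndarray, reference_joints: np.ndarray) -> float:
        """Mean Euclidean distance (meters) between corresponding hand joints."""
        return float(np.linalg.norm(learner_joints - reference_joints, axis=1).mean())

    def feedback(error_m: float, tolerance_m: float = 0.03) -> str:
        """Map pose error to simple lesson feedback (threshold is an assumption)."""
        return "Sign matched - next sign" if error_m <= tolerance_m else "Try again"

    # Toy example: 21 hand joints as (x, y, z) positions in headset space.
    rng = np.random.default_rng(0)
    reference = rng.uniform(-0.2, 0.2, size=(21, 3))        # avatar's recorded pose
    learner = reference + rng.normal(0, 0.01, size=(21, 3))  # learner's tracked pose

    print(feedback(pose_error(learner, reference)))

Comparing per-joint positions in this way is only the simplest possible production check; a full lesson would also need to handle movement over time, handshape, and signer-to-signer variation.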
Bio: Dr. Lorna Quandt is the director of the Action & Brain Lab at Gallaudet University in Washington, D.C. She is an Assistant Professor in the PhD in Educational Neuroscience (PEN) program and the Science Director of the Motion Light Lab. Dr. Quandt founded the Action & Brain Lab in early 2016. Before that, she obtained her BA in Psychology from Haverford College and her PhD in Psychology from Temple University, specializing in Brain & Cognitive Sciences, and completed a postdoctoral fellowship at the University of Pennsylvania working with Dr. Anjan Chatterjee. For the past decade, her research has focused on how people perceive and learn actions; more recently, that work has turned to investigating how sign language knowledge shapes various cognitive and perceptual processes. Dr. Quandt also applies principles of embodied learning and embodied perception to develop immersive sign language learning tools in virtual reality.
Mr. Dmitriy Babichenko
Project Director, Learning Technologies Lab
University of Pittsburgh, PA, USA
Title: Developing an Augmented Reality Tool for Spinal Surgery Navigation.
Abstract: In recent years, the use of Augmented Reality (AR) as a training, planning, and prototyping tool has become widely accepted in many fields, including education, engineering, and medicine. AR technologies have been used in visualization for the navigation and control of vehicles from the Mars Exploration Rovers [1] to self-driving cars, as well as in indoor navigation [2], surgical training [3], and surgical navigation [4]. Unlike virtual reality (VR), which implies complete immersion in a virtual world, AR adds digital elements to a live view captured by the camera of a smartphone or a headset.
In the excitement to develop AR tools for medical applications, few have considered that simply wearing an AR headset could distract or hinder clinicians during procedures. Moreover, very few studies have assessed the effects of AR tools and AR headsets on spatial awareness and cognitive load during procedures that demand a tremendous amount of precision.
Approximately six months ago, we began developing an AR-based surgical navigation tool to assist orthopedic surgeons and neurosurgeons in pedicle screw placement procedures (e.g., spinal fusion). During screw placement, most of the spine is obscured, and surgeons must rely on computed tomography (CT) scans for guidance, constantly shifting their attention between the screen displaying the CT scans and the task at hand. Our goal is to leverage AR technologies to let surgeons view the CT scans and the surgical field simultaneously, eliminating the disruptions caused by attention shifting and the time spent re-orienting to either the CT display or the patient. This work has also presented an opportunity to address a gap in the literature on the potential negatives of AR technologies and headsets.
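As a rough illustration of the navigation idea described above, the sketch below maps a point chosen on a CT scan into the headset's world frame using a rigid registration transform, the core step needed before a planned screw trajectory can be drawn over the patient. This is a hypothetical example, not the tool's actual code: the 4x4 matrix is fabricated, and in a real system it would come from patient-to-image registration (e.g., fiducial- or surface-based matching).

    # A minimal sketch (assumed, not the actual navigation tool) of mapping a
    # CT-space point into the AR headset's world frame via a rigid transform.
    import numpy as np

    def rigid_transform(rotation_deg_z: float, translation: tuple) -> np.ndarray:
        """Build a 4x4 homogeneous transform: rotation about z, then translation."""
        t = np.radians(rotation_deg_z)
        m = np.eye(4)
        m[:2, :2] = [[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]]
        m[:3, 3] = translation
        return m

    def ct_to_world(point_ct_mm: np.ndarray, registration: np.ndarray) -> np.ndarray:
        """Map a CT-space point (mm) into headset world space."""
        p = np.append(point_ct_mm, 1.0)  # homogeneous coordinates
        return (registration @ p)[:3]

    # Toy example: a pedicle entry point chosen on the CT, expressed in world space.
    registration = rigid_transform(rotation_deg_z=15.0, translation=(100.0, -50.0, 30.0))
    entry_point_ct = np.array([12.5, 40.0, 210.0])  # mm, CT image coordinates
    print(ct_to_world(entry_point_ct, registration))

Once such a transform is known, any annotation defined in CT coordinates can be rendered in place over the surgical field; keeping the registration accurate as the patient moves is the hard part in practice.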
In this presentation, we will discuss several completed, ongoing, and planned studies regarding the impact of AR headsets and surgical AR navigation on cognitive load and spatial awareness during surgical procedures.
[1] F. R. Hartman, B. Cooper, S. Maxwell, J. Wright, and J. Yen, “Immersive visualization for navigation and control of the Mars Exploration Rovers,” in Space OPS 2004 Conference, 2004, p. 420.
[2] C. Koch, M. Neges, M. König, and M. Abramovici, “Natural markers for augmented reality-based indoor navigation and facility maintenance,” Autom. Constr., vol. 48, pp. 18–30, 2014.
[3] E. Barsom, M. Graafland, and M. Schijven, “Systematic review on the effectiveness of augmented reality applications in medical training,” Surg. Endosc., vol. 30, no. 10, pp. 4174–4183, 2016.
[4] L. Ma, Z. Zhao, F. Chen, B. Zhang, L. Fu, and H. Liao, “Augmented reality surgical navigation with ultrasound-assisted registration for pedicle screw placement: a pilot study,” Int. J. Comput. Assist. Radiol. Surg., vol. 12, no. 12, pp. 2205–2215, 2017.
[5] Communications of the ACM, “FDA Approves First HoloLens Augmented Reality System for Surgical se.” [Online]. Available: https://cacm.acm.org/news/232744-fda-approves-first-hololens-augmented-reality-system-for-surgical-se/fulltext. [Accessed: 17-Dec-2019].
Bio: Dmitriy Babichenko is a Clinical Associate Professor at the School of Computing and Information (SCI) at the University of Pittsburgh. He has extensive industry experience in educational software design and development, IT project management, and decision support systems. Since joining SCI in 2013, he has taught courses in programming, data mining, information systems analysis, database management systems, and game design. In 2015, he founded SCI’s Learning Technologies Laboratory, a lab dedicated to research on learning technologies, serious games, and immersive media.