MAVERIC

Multiple Autonomous Vehicle Extended Reality Immersive Control Systems

This research explores how extended reality (XR) head-mounted displays (HMDs) may be used to overcome limitations in the user interfaces of conventional software for single-operator control of large numbers of robotic vehicles. To investigate this, we developed an XR user interface prototype, evaluated its strengths and weaknesses, and compared it against such conventional software.


Motivation

Ground control stations (GCSs) are widely used tools that allow humans to control and monitor robotic vehicles, such as aerial or terrestrial drones. State-of-the-art GCSs, such as UAV Navigation’s Visionair and Lockheed Martin’s VCSi, even allow a single person to control multiple robots simultaneously, either on-site where the robots are operating or from a remote location. GCSs are currently used in areas such as emergency response, exploration of unfamiliar environments, large-scale photogrammetry, security, and agriculture, and they are also being considered for future applications in fields like medicine and space exploration, where, for instance, they could assist astronauts in controlling or monitoring robots on the Moon or Mars.

Their effectiveness, however, is constrained by traditional 2D screen-based user interfaces (UIs), which rely on hardware such as monitors, keyboards, and mice. For instance, regardless of the size and number of physical screens a GCS uses, its display real estate remains limited, which can restrict how many vehicles one person can effectively control at once. Adding more screens to compensate reduces the system’s mobility, limiting how easily it can be deployed and moved around in the field. Modern GCSs also typically visualize the 3D geospatial data of vehicles within their operating environment using 3D maps, but 2D displays flatten these maps into pseudo-3D perspectives, removing the full sense of depth and making the data harder to understand and interact with.

Extended reality HMDs, on the other hand, are not subject to these issues. The display real estate available in XR far exceeds what is possible with physical screens: in an immersive XR workspace, one can place any number of virtual objects around oneself, at any size and in any direction. Additionally, since XR HMDs render one’s virtual surroundings with true perceived depth, natively 3D information can be visualized and interacted with through truly 3D data visualizations. XR HMDs thus remove the physical constraints that screen-based hardware imposes on how much can be displayed at once, offering users a far more scalable, mobile, and customizable workspace.


Research Overview

With the above in mind, this research explores how these strengths of XR technologies can best be leveraged to improve the user experience of multi-robot GCSs compared to conventional approaches, and how XR GCS interfaces could enable tasks that are not easily achievable with traditional GCSs. To do so, we developed an XR GCS user interface prototype focused on the use case of controlling and monitoring a large number of aerial drones in large outdoor environments. To determine in what ways, and to what extent, an XR approach may benefit common multi-robot GCS tasks, a usability study will be conducted to evaluate this prototype and compare participants’ experience with it to that of a traditional multi-robot GCS. The data collected from this study will help answer these questions and inform discussion of where an XR approach offers improvements over traditional GCSs, where it does not, and how XR might best be applied in the future to improve multi-robot control and monitoring, as well as other similar fields and applications.


Research Team:

Bryson Lawton – Primary Researcher & Prototype Developer
Frank Maurer – Research Supervisor


Related Publications:
  1. Bryson Lawton and Frank Maurer. 2022. Exploring Extended Reality Multi-Robot Ground Control Stations. In Proceedings of the 2022 International Conference on Advanced Visual Interfaces (AVI 2022), June 6–10, 2022, Frascati, Rome, Italy. ACM, New York, NY, USA, 3 pages. https://doi.org/10.1145/3531073.3534469
  2. Bryson Lawton and Frank Maurer. 2022. A Case for Enhancing UAV Ground Control Stations with Cross Reality. In Proceedings of the Workshop on Enhancing Cross-Reality Application and User Experiences at the 2022 International Conference on Advanced Visual Interfaces (AVI 2022), June 6, 2022, Frascati, Rome, Italy. 5 pages. https://cr-workshop.github.io/papers/lawton2022-case.pdf
  3. Bryson Lawton and Frank Maurer. 2022. Immersive Maps for Drone Control: A Case for Improving Multi-UAV Ground Control Station Maps with Extended Reality. In Proceedings of the Workshop on Map-based Interfaces and Interactions at the 2022 International Conference on Advanced Visual Interfaces (AVI 2022), June 7, 2022, Frascati, Rome, Italy. 7 pages. 

SEER Lab

Contact

ICT Building, 856 Campus Pl NW, Calgary, AB T2N 4V8, Canada

frank.maurer@ucalgary.ca