Emergency Operations Center

EOC-F, the Emergency Operations Center of the Future, was a collaborative effort between the University of Calgary's Agile Surface Engineering Lab and C4i Consultants.

An Emergency Operations Center (EOC) is a strategic command-and-control facility responsible for emergency management in response to a disaster. In EOC-F, we took the standard implementation of an EOC and incorporated technologies and devices meant to streamline core operations and facilitate collaboration between emergency response teams.

Goal of the Project:

The goal of this project was to investigate how analytics-based, spatially aware multi-surface environments (MSEs) and virtual/augmented reality can support teams managing emergencies in an Emergency Operations Center. EOC-F was developed so that individuals can use it during an emergency without prior training. By integrating SoD, we allow multiple touch-surface devices and VR/AR devices to communicate and maintain spatial awareness. The most recent work in EOC-F investigated the use of hierarchical task networks and immersive analytics for responsive planning in a mixed-reality environment. For this project, we integrated Microsoft HoloLens, Kinect, and Tango to explore 3D maps and plans, as well as time, in a simulated emergency event. We used SoD as the framework to enable customized gestures when interacting with virtual holograms.

SoD

SoD is a framework that brings together multiple devices, giving them spatial awareness and enabling communication amongst them.

Spatial awareness:

SoD uses Kinect for Windows cameras (both V1 and V2) to identify and locate people in a given space. Handheld devices such as iPads push their orientation data to the server. By pairing a person with a device, we can tell where someone is and which direction they are facing. This enables several use cases, such as sending information to nearby devices or to devices in front of a person.
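Combining a person's Kinect-tracked position with paired orientation data is enough to decide whether a given device is "in front of" that person. A minimal sketch of such a check (the function name, 2D coordinate convention, and 90° field of view are illustrative assumptions, not SoD's actual API):

```python
import math

def is_in_front(person_pos, person_facing_deg, device_pos, fov_deg=90):
    """Return True if device_pos lies within the person's field of view.

    person_pos and device_pos are (x, y) coordinates in the tracked space;
    person_facing_deg is the facing angle in degrees (0 = +x axis).
    """
    dx = device_pos[0] - person_pos[0]
    dy = device_pos[1] - person_pos[1]
    angle_to_device = math.degrees(math.atan2(dy, dx))
    # Smallest signed difference between the facing direction and the
    # direction toward the device, wrapped into (-180, 180]
    diff = (angle_to_device - person_facing_deg + 180) % 360 - 180
    return abs(diff) <= fov_deg / 2

# A person at the origin facing along +x sees a device ahead of them,
# but not one directly behind.
print(is_in_front((0, 0), 0, (2, 0.5)))   # True
print(is_in_front((0, 0), 0, (-2, 0)))    # False
```

With a check like this, the server can route information only to the devices a person is currently oriented toward.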

Communications:

While SoD focuses on providing spatial awareness, it handles communication between devices as well. Any device connected to the server can send, receive, or request data. Data is emitted to one or more devices that listen, or subscribe, to an event. If a device is subscribed to an event, it receives the emitted data, which can be any payload: a string, a dictionary, JSON, a request, and so on.
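The emit/subscribe model described above can be sketched as a minimal in-process event bus (an illustration of the pattern only, not the SoD server's implementation; the class and method names are assumptions):

```python
from collections import defaultdict

class EventBus:
    """Minimal emit/subscribe hub modeling SoD-style communication:
    devices subscribe to named events and receive any payload emitted."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event, handler):
        # Register a device's callback for a named event
        self._subscribers[event].append(handler)

    def emit(self, event, payload):
        # Deliver the payload to every subscriber of this event
        for handler in self._subscribers[event]:
            handler(payload)

# A tablet subscribes to "mapUpdate"; the server emits a JSON-like dict.
bus = EventBus()
received = []
bus.subscribe("mapUpdate", received.append)
bus.emit("mapUpdate", {"lat": 51.08, "lon": -114.13})
print(received)  # [{'lat': 51.08, 'lon': -114.13}]
```

In SoD itself this exchange happens over the network rather than in-process, but the contract is the same: devices that have not subscribed to an event simply never see its payloads.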

Communication in SoD is provided by Socket.IO 0.9.x. A major improvement in Socket.IO 1.0 is the addition of binary streaming; however, v1.0 is not yet supported by the C# and iOS implementations.

The latest version of SoD, SoD 2.0, uses SignalR for communication between the SoD server and devices. Multiple Kinects and multiple HoloLenses are also integrated, allowing virtual objects to be identified and moved with custom gestures received via Kinect. A Unity client for SoD has been developed that utilizes the Kinect features through SoD.

https://youtu.be/2JbLrqxWssU