
Computer science collaboration leads to improvements in data transmission in disasters


Prasad Calyam, Brittany Morago, Rengarajan Pelapur and Kannappan Palaniappan were part of a team that demonstrated a novel cloud resource management architecture for processing the wealth of needed data in disaster situations. Photo by Hannah Sturtecky.

The ability to process a massive amount of visual data quickly and efficiently is critical in a disaster scenario, where a few minutes can be the difference between life and death. A group of researchers from the MU Computer Science Department teamed up to develop a visual cloud computing architecture to streamline the process.

“Incident-supporting visual cloud computing utilizing software-defined networking” recently appeared in the journal IEEE Transactions on Circuits and Systems for Video Technology in a special issue on cloud computing for mobile devices. Assistant Professor Prasad Calyam, Associate Professor Kannappan Palaniappan, Associate Professor Ye Duan and their respective labs collaborated on the study, which demonstrated a novel cloud resource management architecture for processing the wealth of needed data in disaster situations. Guna Seetharaman of the U.S. Naval Research Laboratory also was a co-author on the study.

The premise of the study is that in disaster scenarios, the abundance of security and civilian cameras, personal mobile devices with cameras and sensors, and aerial video from planes or unmanned aerial vehicles (UAVs) provides useful data for first responders and law enforcement. That data can be critical for knowing where to send emergency personnel and resources, how to contain hazardous materials, and how to track suspects in man-made disasters. Rescue crews may also require 3D reconstructions of the area, creating even more data.

This abundance of visual data, especially high-resolution video streams, is difficult to process even under normal circumstances. But in a disaster situation, the computing and networking resources needed to process it may not be available at the desired capacity in the general vicinity of the disaster. The question then becomes: what is the most efficient way to process the most critical data, and how can the most relevant visual situational awareness be delivered quickly to first responders and law enforcement?

The research team proposed a collection, computation and consumption architecture, linking devices at the network edge of the cloud processing system, or "fog," with scalable computation and big data in the core of the cloud. Visual information flows from the collection fog (the disaster site) to the cloud core and finally to the consumption fog (the devices used by first responders, emergency personnel and law enforcement). The system works much the way mobile cloud services are provided by Apple, Amazon, Google, Facebook and others.
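
As a rough illustration of that three-stage flow, the sketch below passes data from a collection fog through a cloud core to a consumption fog. It is a minimal Python sketch; the Frame class and the stage functions are invented for illustration and are not the architecture from the paper.

```python
# Minimal sketch of the collection -> computation -> consumption flow
# described above. All class and function names are hypothetical
# illustrations, not the system from the paper.

from dataclasses import dataclass

@dataclass
class Frame:
    """A unit of visual data captured at the disaster site."""
    camera_id: str
    data: bytes

def collection_fog(frames):
    """Edge devices near the incident pre-process raw footage,
    e.g. filtering out unusable captures before anything leaves the site."""
    return [f for f in frames if len(f.data) > 0]

def core_cloud(frames):
    """The cloud core runs the heavy computation (tracking, 3D
    reconstruction) that edge devices cannot handle on their own."""
    return {f.camera_id: f"situational summary of {len(f.data)} bytes"
            for f in frames}

def consumption_fog(summaries):
    """Responders' devices receive only the distilled
    situational-awareness products, not the raw streams."""
    for camera_id, summary in summaries.items():
        print(f"{camera_id}: {summary}")

# Data flows from the incident scene, through the cloud, to responders.
captured = [Frame("uav-1", b"\x00" * 1024), Frame("cctv-7", b"\x00" * 2048)]
consumption_fog(core_cloud(collection_fog(captured)))
```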

“It works just like we do now with Siri,” Palaniappan said. “You just say, ‘Find me a pizza place.’ What happens is the voice signal goes to Apple’s cloud, which processes the information and sends it back to you. Currently we can’t do the same with rich visual data because the communication bandwidth requirements may be too high or the network infrastructure may be down [in a disaster situation].”

The workflow of visual data processing is only one part of the equation, however. In disaster scenarios, the amount of data generated could create a bottleneck in the network.

“The problem really is networking,” Calyam said. “How do you connect back into the cloud and make decisions because the infrastructure as we know it will not be the same. No street signs, no network, and with cell phones, everybody’s calling to say they’re OK on the same channel. There are challenging network management problems to pertinently import visual data from the incident scene and deliver visual situational awareness.”

The answer to that problem is algorithms designed to determine what information needs to be processed by the cloud and what can be processed on local devices, such as laptops and mobile devices, spreading out the processing over multiple devices. The team also developed an algorithm to aggregate similar information to limit redundancy.
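
One way to picture that kind of placement decision is a back-of-the-envelope model: keep a task on a local device when shipping its input over a degraded uplink would cost more time than computing in place. The sketch below is a hypothetical illustration; the function names, the flop-counting model and the numbers are assumptions for the example, not the team's algorithms.

```python
# Hypothetical edge-vs-cloud placement decision: process a task locally
# when transferring its input to the cloud would take longer than just
# computing on the device. All names and numbers are illustrative.

def transfer_seconds(input_bytes: int, uplink_bps: float) -> float:
    """Time to move the task's input over the (possibly degraded) uplink."""
    return input_bytes * 8 / uplink_bps

def place_task(input_bytes: int, local_flops: float, cloud_flops: float,
               work_flops: float, uplink_bps: float) -> str:
    """Return 'edge' or 'cloud' depending on which finishes sooner."""
    edge_time = work_flops / local_flops
    cloud_time = (transfer_seconds(input_bytes, uplink_bps)
                  + work_flops / cloud_flops)
    return "edge" if edge_time <= cloud_time else "cloud"

# A 50 MB video chunk over a congested 2 Mb/s uplink: the transfer cost
# dominates, so the task stays on the local device.
print(place_task(input_bytes=50_000_000, local_flops=1e9,
                 cloud_flops=1e12, work_flops=5e9, uplink_bps=2e6))
```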

“Let’s say you’re taking pictures of crowds say from surveillance cameras because it’s a law-enforcement type of event,” Palaniappan said. “There could be thousands of such photos and videos being taken. Should you transmit terabytes of data?

“What you’re seeing is often from overlapping cameras. I don’t need to send two separate pictures; I send the distinctive parts. That mosaic stitching happens in the periphery or edge of the network to limit the amount of data that needs to be sent. This would be a natural way of compressing visual data without losing information. Accomplishing this requires clever algorithms to determine what types of visual processing to perform in the edge or fog network, and what data and computation should be done in the core cloud using resources from multiple service providers in a seamless way.”
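
The shape of that edge-side filtering can be sketched as a near-duplicate filter that forwards only frames whose content has not already been sent. The sketch below uses a coarse hash as a stand-in; a real pipeline would register overlapping views geometrically and stitch a mosaic, so both the method and the names here are illustrative assumptions, not the paper's algorithm.

```python
# Toy near-duplicate filter at the network edge: frames whose coarse
# "fingerprints" match an already-transmitted frame are skipped, so only
# distinctive content crosses the constrained uplink. This hash-based
# stand-in is an illustrative assumption, not the paper's mosaicking
# pipeline.

import hashlib

def fingerprint(pixels: bytes, block: int = 64) -> str:
    """Coarse signature: hash every 64th byte of the frame buffer."""
    return hashlib.sha1(pixels[::block]).hexdigest()

def deduplicate(frames):
    """Yield only frames not already seen, tracking fingerprints sent so far."""
    sent = set()
    for pixels in frames:
        fp = fingerprint(pixels)
        if fp not in sent:
            sent.add(fp)
            yield pixels

# Two cameras with overlapping fields of view produce identical content;
# only one copy is forwarded toward the cloud.
overlapping = [b"rooftop-view" * 100, b"rooftop-view" * 100,
               b"street-view" * 100]
print(len(list(deduplicate(overlapping))))  # -> 2
```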

The study benefited from the work of a variety of doctoral students with different computer science backgrounds, including networking, computer vision, target tracking, network management, image processing and machine learning. Recent Ph.D. recipients Brittany Morago and Rengarajan Pelapur and current doctoral students Dmitrii Chemodanov, Rasha Gargees and Zakariya Oraibi brought that variety of skills together on this project.

“They had all kind of worked together before this project happened. They had taken classes together, worked on projects together, and a few of them were taking Prasad’s class on cloud computing,” Palaniappan said. “So part of it evolved as a class project. Two or three of them worked on it as a class project. We encouraged them to work together over the summer, then we built on top of that.”

Funding for the project came from a combination of ongoing grants from the National Science Foundation, Air Force Research Laboratory and the U.S. National Academies Jefferson Science Fellowship.

The researchers hope future work will expand algorithm development and field testing of the visual cloud computing system for specific types of disasters, working closely with public safety groups that can benefit from this research and related technology outcomes.

“(Secretary of State) John Kerry spoke in a commencement speech at Northeastern University about how the administration is looking at more investment in extreme events,” Calyam said.

“If you could actually invest in technology and preparedness earlier [before extreme events], you could minimize the post-disaster impact.”

Video: “Explaining the Engineering: Improving disaster response,” from Mizzou Engineering on Vimeo.