
Faculty duo, grad student take first at CVPR Change Detection Challenge Workshop

Motion Capture

A novel flux tensor-based hybrid moving object detection system outperforms state-of-the-art methods on challenging videos that often confound traditional foreground-background motion analysis systems.

Security systems, traffic patterns, cellular information, biomedical data and many more – the uses for motion capture technology are vast and varied, but the results are only as good as the system tasked to obtain them. Two University of Missouri faculty members and a graduate student were part of a team that could make those results much more reliable in the future.

Computer Science graduate student Rui Wang, associate professor Kannappan Palaniappan and assistant research professor Filiz Bunyak joined Air Force Research Lab (AFRL) scientist Guna Seetharaman to produce a paper entitled “Static and Moving Object Detection Using Flux Tensor with Split Gaussian Models (FTSG).” Their work took first place at the 2014 Change Detection Challenge Workshop, held in conjunction with the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) in June.

Team

From left: Pierre-Marc Jodoin, Palaniappan, Wang, Fatih Porikli and Seetharaman. Jodoin and Porikli are organizers of the 2014 Change Detection Challenge; Bunyak was unable to attend.

Their work, detailed in the paper and accompanying charts and video, presented a hybrid moving object detection system designed to handle common scene conditions that often confound traditional foreground-background motion capture systems. The newly introduced technology is better able to adjust to complex backgrounds, shadows, changes in illumination, precipitation, immobile objects, camera jitter and removed objects, among other potential problems.
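For readers who want a concrete picture of the motion side of such a system, the sketch below shows one way a flux-tensor-style motion cue can be computed from a short stack of grayscale frames: temporal derivatives of the spatial gradients are squared, summed and averaged over a local window, then thresholded. The window size, threshold and finite-difference scheme here are illustrative assumptions, not the parameters reported in the paper.

```python
# Minimal sketch of a flux-tensor-style motion cue (assumes OpenCV + NumPy
# and a short stack of grayscale frames; parameters are illustrative).
import cv2
import numpy as np

def flux_tensor_trace(frames, win=7):
    """Approximate the flux tensor trace at the middle frame of a short
    temporal stack: the locally averaged sum of squared temporal derivatives
    of the spatial gradients and of the intensity."""
    stack = np.stack([f.astype(np.float32) for f in frames])  # (T, H, W)
    # Spatial gradients for every frame in the stack.
    Ix = np.stack([cv2.Sobel(f, cv2.CV_32F, 1, 0, ksize=3) for f in stack])
    Iy = np.stack([cv2.Sobel(f, cv2.CV_32F, 0, 1, ksize=3) for f in stack])
    # Central temporal differences taken at the middle frame.
    t = stack.shape[0] // 2
    Ixt = (Ix[t + 1] - Ix[t - 1]) / 2.0
    Iyt = (Iy[t + 1] - Iy[t - 1]) / 2.0
    Itt = stack[t + 1] - 2.0 * stack[t] + stack[t - 1]
    energy = Ixt ** 2 + Iyt ** 2 + Itt ** 2
    # Local spatial averaging plays the role of the tensor's integration window.
    return cv2.boxFilter(energy, -1, (win, win))

def motion_mask(frames, thresh=500.0):
    """Threshold the flux tensor trace to get a coarse moving-object mask."""
    return (flux_tensor_trace(frames) > thresh).astype(np.uint8) * 255
```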

“If you miss detecting objects of interest in image sequences such as moving people, cars, or microorganisms such as bacteria in the case of microscopy videos, you cannot analyze their behaviors,” Bunyak said.

Previously, those complex scene conditions could cause change detection technology to confuse what was foreground, and thus something to be tracked, with what it considered background, making results less reliable. Palaniappan and Bunyak’s system goes beyond the traditional foreground-background segmentation algorithms in wide use, relying instead on a hybrid algorithm that exploits context and multiple semantic layers.

“What we’re doing here, which helps, is we don’t just classify a pixel into background and foreground,” Bunyak said. “We identify different types of background and foreground — such as static background, revealed background, shadow, illumination change, moving object, stopped object, etc. — using various visual cues and context.”
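The snippet below is a hedged illustration of that idea: fusing a motion cue (such as the flux tensor mask sketched above) with a background-model change cue to assign coarser semantic labels than a plain foreground/background split. The label names and decision rules are simplified assumptions for illustration; the published method also uses edge and appearance cues to separate stopped objects from revealed background.

```python
# Illustrative pixel-level fusion of two binary cues into semantic labels.
# Label set and rules are simplified assumptions, not the paper's exact logic.
import numpy as np

BACKGROUND, MOVING, STATIC_OR_REVEALED, ILLUMINATION = 0, 1, 2, 3

def fuse_masks(motion_mask, change_mask):
    """Label each pixel from a motion cue and a background-model change cue
    (nonzero = detection in both input masks)."""
    motion = motion_mask > 0
    change = change_mask > 0
    labels = np.full(motion.shape, BACKGROUND, dtype=np.uint8)
    labels[motion & change] = MOVING                # genuinely moving object
    labels[~motion & change] = STATIC_OR_REVEALED   # stopped object or revealed background
    labels[motion & ~change] = ILLUMINATION         # flicker, shadow edges, noise
    return labels
```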

The project has been years in the making. Palaniappan and Bunyak teamed up to combine flux tensor motion detection with level set-based active contours for object segmentation and tracking, along with power-efficient implementations on heterogeneous computer architectures, leading to a series of publications between 2007 and 2011. About a year and a half ago, the work picked up when Rui Wang, then working in the CIVA Lab as an undergraduate student, took it on as part of his master’s thesis. The research has been supported by the Air Force Research Lab over the last few years.

“The concept of flux tensor was developed in the CIVA Lab and provided a novel and robust way to detect motion compared to other methods such as background modeling,” Palaniappan said. “Fusion of flux tensor that is robust to noise and complex background with Gaussian mixture modeling that is good at detection of stopped objects combined the strength of both techniques and provided significant performance improvements compared to other methods.”
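As a rough stand-in for the Gaussian side of that fusion, the sketch below uses OpenCV’s stock mixture-of-Gaussians background subtractor to produce per-frame change masks that could feed a fusion step like the one sketched above. The paper’s own split Gaussian background and foreground models differ in the details, and the parameters here are illustrative.

```python
# Gaussian-mixture change masks via OpenCV's MOG2 subtractor, used here only
# as a stand-in for the paper's split Gaussian models. Parameters illustrative.
import cv2

def change_masks(video_path):
    """Yield per-frame change masks from a Gaussian mixture background model."""
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2(
        history=500, varThreshold=16, detectShadows=True)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        # MOG2 marks shadow pixels as 127; keep only confident foreground (255).
        yield (mask == 255).astype('uint8') * 255
    cap.release()
```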

Pierre-Marc Jodoin, chairman of the Change Detection Workshop, wrote that the judges “greatly appreciated the originality of [the team’s] method.” Bunyak said that while the team was confident in its work, the recognition provided a valuable validation from peers and new visibility.

“You know that your initial results are good, but until it’s official … you never know [how good], particularly since there were 14 other highly competitive international teams, a number of which had competed in the previous change detection challenge in 2012,” Bunyak said. “This award brought recognition and international visibility for the research. Since CVPR 2014, there has been interest from both government and industry to use FTSG to detect people, vehicles and other objects.”