
Three engineering faculty papers accepted by international conference

Papers resulting from the work of three College of Engineering faculty and their research labs have been accepted to the International Conference on Computer Vision. Pictured at top are Xiaoyu Wang, Tony Han and Xutao Lv; Wang and Lv are graduate assistants in assistant professor Han’s lab. In the center is Zhihai “Henry” He, associate professor of electrical and computer engineering. At the bottom are Filiz Bunyak and associate professor Kannappan Palaniappan; Bunyak is a post-doc in Palaniappan’s lab.

Mizzou Engineering will be well represented at the Twelfth IEEE International Conference on Computer Vision (ICCV), to be held in Kyoto, Japan, this fall.

“It’s a highly selective conference,” said Tony Han, assistant professor of electrical and computer engineering, explaining that out of 2,000-plus submissions, only 260 papers were accepted.

Han, who also served as a reviewer, is one of three engineering faculty members who made the cut. Papers by the research teams of Zhihai “Henry” He, associate professor of electrical and computer engineering, and Kannappan Palaniappan, associate professor of computer science, were also accepted. Additionally, the paper from Professor Han’s group was selected for oral presentation.

Computer vision, a burgeoning research frontier, concerns itself with various methods by which computers interpret and “understand” images and videos. Each of the researchers representing MU at ICCV 2009 explores different aspects of the computer vision field.

An HOG-LBP Human Detector with Partial Occlusion Handling

Xiaoyu Wang, Tony X. Han and Shuicheng Yan

Han’s research group has been working on human detection with partial occlusion handling. With potential applications in surveillance and assisted living, among other areas, human detection is a problem that many in the computer vision field are working to fine-tune.

“The same person may wear different clothes and may take on different poses, which make his appearance vary drastically,” Han said as an example of the difficulty of the task he and his team have tackled, especially since he has pushed the detection of the human form to include its partial obstruction. “We have collected 3,000 images that include all kinds of poses and that have been taken in all variants of lighting conditions to do this.”

“We are working on recognition of general actions in daily life, given the fact that we can detect humans pretty well in videos and images,” said Xutao Lv, a graduate assistant in Han’s lab.

“Originally, our program running on a CPU needed several seconds to process a frame of information. With the program we have developed on a GPU, it will process several frames per second,” said Xiaoyu Wang, a research assistant in Han’s lab and a coauthor on the paper submitted to the conference.

“To the best of my knowledge, this is the first frame-by-frame human detector running on a GPU that can achieve real-time processing,” said Han, adding that a commercial venture has already expressed interest in his group’s detection technique, provided it can be made robust.
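The frame-by-frame detector Han describes follows the classic sliding-window pipeline. The sketch below shows only that scanning pattern; the one-dimensional intensity feature and threshold classifier are hypothetical stand-ins for the real detector, which computes HOG and LBP histograms per window and scores them with a trained classifier.

```python
# Minimal sketch of the sliding-window scan at the core of window-based
# human detectors. The feature and classifier below are hypothetical
# stand-ins: the actual HOG-LBP detector computes gradient and texture
# histograms per window and scores them with a trained classifier.

def extract_feature(frame, x, y, w, h):
    """Hypothetical stand-in feature: mean intensity of the window."""
    total = sum(frame[j][i] for j in range(y, y + h)
                for i in range(x, x + w))
    return total / (w * h)

def score(feature, threshold=5):
    """Hypothetical stand-in classifier: distance above a threshold."""
    return feature - threshold

def detect(frame, win_w=8, win_h=16, stride=4):
    """Slide a fixed-size window over the frame; keep positive scores."""
    detections = []
    rows, cols = len(frame), len(frame[0])
    for y in range(0, rows - win_h + 1, stride):
        for x in range(0, cols - win_w + 1, stride):
            s = score(extract_feature(frame, x, y, win_w, win_h))
            if s > 0:
                detections.append((x, y, s))
    return detections

# Toy 32x32 frame: a bright patch (intensity 9) on a dark background (1).
frame = [[9 if 8 <= j < 24 and 8 <= i < 16 else 1
          for i in range(32)] for j in range(32)]
print(detect(frame))  # strongest score where the window covers the patch
```

Because every window position is scored independently, this scan parallelizes naturally, which is what makes a GPU implementation so much faster than the same loop on a CPU.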

Building Recognition Using Sketch-Based Representations and Spectral Graph Matching

Yu-Chai Chung, Tony X. Han and Zhihai He

A number of computer vision approaches to the detection of man-made structures exist; however, He’s research group tackled the problem of matching and retrieving images of structures taken from two contrasting camera views.

“Looking at a building from two vastly different viewpoints makes identification difficult,” said He. “It is hard to match what is seen in an aerial photo to what is shown in a photo taken at ground level.”

With the new process He has developed, structural components within a frame are turned into sketch-based representations and then a recognition scheme is applied to the images.

“In the case of surveillance, you are looking at a large area from the air and a soldier is taking a photo from the ground of part of the same area,” He said. “With this new technique, you can localize what you see.”

He explains that the soldier’s photo can be used as a query in a large database of images to locate a segment of video that contains the building in question.

“Before entering the building, the soldier can look at the video clips and see all entrances and exits and make a detailed plan of how he would like to proceed,” said He.

“It is complementary to the Global Positioning System,” He said.

“This is a challenging problem, especially in city sites,” said He, adding that test results of his research are very promising and show the method outperforming existing algorithms.
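The photo-as-query workflow He describes can be sketched as a nearest-neighbor search over per-frame descriptors. The three-dimensional descriptors and frame names below are made up for illustration; the actual paper builds sketch-based representations of building structure and ranks matches with spectral graph matching rather than plain cosine similarity.

```python
import math

# Toy query-by-image retrieval: each database frame is summarized by a
# descriptor vector, and the ground-level photo becomes a query ranked
# against the database. Descriptors and frame names are hypothetical.

def cosine(a, b):
    """Cosine similarity between two descriptor vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query, database, top_k=2):
    """Rank database frames by descriptor similarity to the query."""
    ranked = sorted(database.items(),
                    key=lambda item: cosine(query, item[1]),
                    reverse=True)
    return [frame_id for frame_id, _ in ranked[:top_k]]

database = {
    "clip_a_frame_12": (0.9, 0.2, 0.1),
    "clip_b_frame_40": (0.1, 0.8, 0.3),
    "clip_c_frame_07": (0.8, 0.3, 0.2),
}
query = (0.85, 0.25, 0.15)  # descriptor of the ground-level photo
print(retrieve(query, database))
```

The top-ranked frames point back to the video clips containing the building in question, which is how a single ground photo can localize a segment within a large aerial-video database.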

Efficient Segmentation Using Feature-based Graph Partitioning Active Contours

Filiz Bunyak, Kannappan Palaniappan

Image segmentation is often a key component of computer vision tasks: various techniques are used to find distinctive objects within a frame – people, cars, trees and buildings – depending on the task at hand. Palaniappan’s research group has worked to extend graph partitioning active contours (GPAC), combining two powerful mathematical models – graph theory and partial differential equations – to produce a practical tool for image analysis.

“In image analysis, we’re always trying to segment the scene into objects of interest,” said Palaniappan.

The GPAC image segmentation algorithm was originally developed at the University of California, Santa Barbara. Palaniappan explains that the technique produces quality results, but it is not widely used because of its extreme computational requirements.

“Segmentation of a 1000 by 1000 pixel image may take up to a terabyte of memory, making it prohibitive,” said Palaniappan. “It takes every pixel in an image and creates a dense graph, connecting all pairs and assigning weights based on similarity, then applies level sets for grouping the pixels into regions.

“We have extended the algorithm and significantly reduced the computational complexity. Instead of a terabyte, our process needs only a few kilobytes of computer memory,” he said, explaining that his team’s method uses foreground and background information summarized in a low-dimensional data structure combining spatial and feature information.
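The memory figures Palaniappan quotes follow from simple arithmetic, sketched below. Modeling the low-dimensional summary as two 256-bin feature histograms is an assumption made here for illustration; the paper’s actual structure combines spatial and feature information.

```python
# Back-of-the-envelope arithmetic behind the memory figures quoted above.
# A dense affinity graph stores one weight per pixel pair, so it grows
# quadratically with image size; a per-region feature summary (modeled
# here, as an assumption, by two 256-bin histograms) has a fixed size.

def dense_graph_bytes(width, height, bytes_per_weight=1):
    """Memory for one affinity weight per ordered pixel pair."""
    n = width * height
    return n * n * bytes_per_weight

def histogram_summary_bytes(n_bins=256, n_regions=2, bytes_per_bin=8):
    """Memory for per-region (foreground/background) histograms."""
    return n_bins * n_regions * bytes_per_bin

tb = dense_graph_bytes(1000, 1000) / 1e12  # a terabyte, as quoted
kb = histogram_summary_bytes() / 1e3       # a few kilobytes, as quoted
print(f"dense graph: {tb:.0f} TB, histogram summary: {kb:.3f} KB")
```

A 1,000-by-1,000-pixel image has a million pixels, so all pairwise weights number a trillion; replacing that graph with fixed-size region summaries is what collapses the requirement from terabytes to kilobytes.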

Palaniappan and Filiz Bunyak, a post-doc in his lab, have begun to apply their extended segmentation technique to biomedical and defense imagery.

“High throughput screening now produces large volumes of raw biomedical imagery and by using image analysis techniques, we can help biomedical scientists synthesize information to make unique discoveries. It’s exciting,” Bunyak said, adding that collaborations with biomedical researchers at MU analyzing vascular and cellular images have led to interesting research directions and joint publications.

“We will continue working along these lines. This technique has many applications for additional research,” said Palaniappan, whose lab has been supported by funding from NIH, the U.S. Air Force, the Leonard Wood Institute/Army, the National Geospatial-Intelligence Agency, NSF and NASA.
