MU Engineering researchers develop improvement in topic modeling

Two University of Missouri College of Engineering researchers and a colleague from the Naval Surface Warfare Center recently had a paper accepted at the International Conference on Pattern Recognition, a highly respected conference that will be held in December in Mexico.

A computer image of yellow grass with blue cows. Two MU researchers and a colleague from the Naval Surface Warfare Center developed an algorithm that extends Latent Dirichlet Allocation so it can account for gradients and cases where one visual component blends into another. In this image, the primary focus is on the grass beneath the cows. Photo courtesy of Alina Zare, Chao Chen and J. Tory Cobb.

Chao Chen, a recent MU Engineering doctoral recipient; Alina Zare, assistant professor of electrical and computer engineering; and J. Tory Cobb of the Naval Surface Warfare Center co-authored the paper, “Partial Membership Latent Dirichlet Allocation (PM-LDA) for Image Segmentation.”

Latent Dirichlet Allocation (LDA) is a form of topic modeling. In machine learning and natural language processing, topic models are algorithms used to uncover recurring themes in collections of documents or images.
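For readers who want a concrete picture, here is a minimal sketch of standard LDA run on a toy text corpus using scikit-learn’s LatentDirichletAllocation. The documents and the choice of two topics are illustrative assumptions, not material from the paper.

```python
# A minimal LDA sketch on a toy corpus using scikit-learn.
# The documents and the choice of two topics are illustrative only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "cows graze on grass in the field",
    "horses run across the grass field",
    "cars drive down the busy road",
    "trucks and cars park along the road",
]

# Convert documents to word counts (the textual analogue of "visual words").
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(docs)

# Fit LDA with two latent topics.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)

# Each document gets a distribution over topics, and each topic is a
# distribution over words; print the top words per topic.
words = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [words[i] for i in topic.argsort()[-3:][::-1]]
    print(f"topic {k}: {top}")
```

On a corpus like this, LDA will tend to group the animal words into one topic and the vehicle words into the other, the text analogue of segmenting a set of images into cows, horses and cars.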

“The idea is that maybe you have a ton of different pictures of cows, horses or cars,” Zare explained. “You take all the pictures, and [the algorithm] is able to segment out each image into the different objects in each image and say, ‘Oh, all the objects in these images are the same type — all the cows, all the horses, all the cars.’”

Zare, whose research foci include machine learning, pattern recognition and image analysis, said the problem with LDA is that it forces crisp segmentations: every region must be assigned exactly one topic. According to the paper, “There are many images in which some regions cannot be assigned a crisp label. … In these cases, a visual word is best represented with partial memberships across multiple topics.”

In other words, sometimes objects in images are hard to segment out because there is no clear delineation between where one object begins and another ends. Take a sunset, for example.

“You can either be sun or not sun, sun or sky, but not a mix of the two,” Zare said. “But if there’s something like a sunset, there’s a big gradient where there’s a mixture of the things.

“In sonar data, even experts who look at sonar data can’t draw the line between hard-packed sand and sea grass.”

In response, the research trio developed an algorithm that extends LDA to account for gradients and cases where one visual component blends into another. The team ran experiments on two natural-image datasets and a sonar dataset and showed that their method, PM-LDA, “can produce both crisp and soft semantic image segmentations.” In other words, the new method can segment images even when the objects in them aren’t clearly and sharply delineated.
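One way to picture the difference in output: standard LDA assigns each region a single topic label, while PM-LDA lets a region carry a vector of memberships that sums to one across topics, so a sunset pixel can be partly “sun” and partly “sky.” The NumPy sketch below, with made-up membership values, illustrates that output format only, not the authors’ inference algorithm.

```python
import numpy as np

# Crisp labeling (standard LDA-style segmentation): each region
# belongs to exactly one topic, e.g. 0 = "sun", 1 = "sky".
crisp = np.array([0, 0, 1, 1])

# Partial membership (PM-LDA-style output): each region holds a vector
# of memberships across topics that sums to 1, so a sunset gradient can
# be, say, 70 percent sun and 30 percent sky. Values here are made up.
partial = np.array([
    [1.0, 0.0],  # pure sun
    [0.7, 0.3],  # mostly sun, some sky (the gradient)
    [0.3, 0.7],  # mostly sky
    [0.0, 1.0],  # pure sky
])
assert np.allclose(partial.sum(axis=1), 1.0)

# A crisp segmentation is the special case in which every membership
# vector puts all of its mass on a single topic.
recovered_crisp = partial.argmax(axis=1)
print(recovered_crisp)  # [0 0 1 1]
```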

“Within the same image, it can learn the gradients and the sharp edges, whereas LDA can only find the sharp edge, which can be flat-out wrong in a lot of situations,” Zare said.