June 01, 2025
Five students developed a new artificial intelligence (AI) model to revolutionize how people consume music.

A student’s Mizzou Engineering experience isn’t complete without a hands-on capstone project. These group projects encourage computer science students to solve problems they’re interested in and give them free rein to design an innovative solution they think the world needs.
This semester, students in Team New chose to design a new way to listen to music. Parker Dierkens, Michael Hackmann, Olo Masiza, Alex Savas and Blake Simpson explained why they wanted to pursue the project and what they learned from the experience.
Music plays a central role in people’s daily lives.
We wanted to give people the power to personalize how they experience the music they listen to.
Our approach to music personalization lets users pick different types of songs and combine them using an AI model. For example, a user could choose a rap song and a rock song and blend them into an original new tune that holds elements of both input songs.
We wanted users to draw on a machine learning model trained on a wide-ranging catalog of music, so the possibilities for newly generated music are nearly endless. As a group, we have varying levels of music-creation experience, but we were all excited for the challenge.
Our project has evolved in scope over its lifespan.
After deciding on our project idea, we researched the crux of the project: artificial intelligence models. Creating a well-defined, music-focused model came down to collecting a dataset to train it on. Once the model is trained, a user inputs a prompt for the type of generated music they want to create, and our model, Music Mixer, constructs the new song.
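The article doesn't describe the team's implementation, but the blending idea it outlines could be sketched roughly as interpolating between learned representations of two input songs. In this minimal, hypothetical illustration (the `Song` class and `blend_songs` function are invented for this sketch, not Music Mixer's actual code), each song is a small feature vector standing in for a learned audio embedding:

```python
# Hypothetical sketch of blending two songs into a new one.
# Song and blend_songs are illustrative names, not the team's actual API.
from dataclasses import dataclass

@dataclass
class Song:
    genre: str
    features: list  # stand-in for a learned audio embedding

def blend_songs(a: Song, b: Song, weight: float = 0.5) -> Song:
    """Interpolate two songs' feature vectors to form a new 'song'."""
    mixed = [weight * x + (1 - weight) * y for x, y in zip(a.features, b.features)]
    return Song(genre=f"{a.genre}+{b.genre}", features=mixed)

rap = Song("rap", [0.9, 0.1, 0.4])
rock = Song("rock", [0.2, 0.8, 0.6])
new_tune = blend_songs(rap, rock)
print(new_tune.genre)  # rap+rock
```

A real system would blend in a model's latent space and decode the result back to audio; the linear interpolation here only illustrates the "combine elements of both inputs" idea.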
We considered the ethical implications of this before beginning our research.
When developing the idea, we initially worried about whether we could ethically and legally train the model using popular music. Much of the music in the dataset is not in the public domain, and we did not have licenses for these songs.
After researching copyright law, we learned that, for research purposes, we could train the model on unlicensed music on the condition that we not monetize the project. Without monetization, our use of artists’ music would be considered fair use. Thus, we decided to keep this a research project with no intention of monetizing it.
We developed two different generation types to let users create instrumental or lyrical pieces of music.
We initially simplified our generation results to produce songs without vocals, both to keep generation time reasonable for users and to increase the quality of the generated music. After meetings with industry experts, we added a second generation process to our application, allowing for lyric generation.
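The two generation modes described above amount to routing a user's prompt down one of two pipelines. A minimal sketch of that dispatch, with entirely hypothetical names (`generate` and its modes are not the team's actual code):

```python
# Hypothetical dispatch between the two generation modes described above.
# The function and mode names are illustrative, not Music Mixer's actual code.
def generate(prompt: str, mode: str = "instrumental") -> dict:
    """Route a prompt to the instrumental or lyrical generation pipeline."""
    if mode not in ("instrumental", "lyrical"):
        raise ValueError(f"unknown mode: {mode}")
    # A real implementation would invoke the trained model here; this stub
    # just records which pipeline would run and whether vocals are produced.
    return {"prompt": prompt, "mode": mode, "vocals": mode == "lyrical"}

generate("upbeat rap-rock blend")                   # instrumental, no vocals
generate("upbeat rap-rock blend", mode="lyrical")   # adds lyric generation
```

Defaulting to the faster instrumental path mirrors the trade-off the students describe: shorter generation time unless the user explicitly asks for lyrics.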
Flexibility is essential.
Methodical planning can only safeguard future decisions to a degree. Our project has followed its pre-set plan consistently, but some junctures have called for adjusting pieces of the project, such as the programs used to construct the machine learning model, user input/output and data storage.
Mizzou structured its courses as stepping stones.
This helped us understand each piece of the development process. Whether it was the intricate inner workings of algorithms, database creation, unit testing, web development or modern software development habits, each class assisted in the creation of Music Mixer.
Artificial intelligence currently holds the attention of the world.
With only a few words, a swath of information can be automatically generated, seemingly instantly. However, this information will only ever be as good as the data these machine learning models are trained on.
Most people focus on what AI can produce, but they should instead focus on how an AI is trained. Whoever trains the best model will hold the keys to the future.
Learn more about computer science at Mizzou!