Amazing Tool No. 4: Hum a Tune to Search for Music

Forgot the song title? Forgot the artist? No problem: as long as you can hum a short fragment of the song, we can still find the music you are looking for.

Undergraduate thesis project, College of Computer Science, Sichuan University. Student: Huang ** (after his name was published he attracted so many fans that he asked to remain anonymous). Advisor: Wei Xiaoyong (魏骁勇).

Abstract:

With the rapid development of computer networks and multimedia technology, the volume of music data has grown dramatically. How to find the desired music quickly and accurately in this vast collection has become a popular research topic in modern information retrieval. Traditional text-based music retrieval can only search on textual information such as the song title or the performer's name, and can no longer satisfy users' needs. Content-based retrieval lets users search for a song by humming it: even if the user has forgotten the title or the performer, the song can still be found as long as part of it can be hummed.

With the widespread use of the Internet, a large amount of music data is created and exchanged every second around the world, enriching our life experience but also posing significant challenges to conventional information retrieval (IR) technology. To find music on demand, nearly all conventional music search engines still rely on text-matching techniques, in which music is indexed with metadata against which text queries are matched. However, besides the costly labeling process these techniques require, users often cannot remember the exact textual information for a song, which usually causes the search to fail.

To address this problem, query by humming (QBH) enables users to issue a query by singing a segment of the song they want to retrieve when they cannot provide the textual information. In this thesis, we implement a QBH system that extracts acoustic features from both the user's humming (a WAV segment) and the MIDI-formatted music data, and uses these features for matching. The experimental results validate the efficiency and effectiveness of the system.
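
The abstract does not spell out which acoustic features or which matching algorithm the thesis uses, so the following is only a minimal sketch of one common QBH pipeline under assumed choices: pitch tracking of the humming with librosa's pyin, melody extraction from MIDI with pretty_midi, key-normalized semitone contours as the feature, and plain dynamic time warping (DTW) as the matcher. All function names below are illustrative, not the author's code.

```python
# A minimal QBH sketch (illustrative only, not the thesis implementation).
# Assumed dependencies: librosa for pitch tracking, pretty_midi for MIDI parsing.
import numpy as np
import librosa
import pretty_midi


def humming_to_contour(wav_path, sr=16000):
    """Extract a key-invariant semitone contour from a hummed WAV segment."""
    y, sr = librosa.load(wav_path, sr=sr)
    f0, voiced, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                                 fmax=librosa.note_to_hz("C6"), sr=sr)
    contour = librosa.hz_to_midi(f0[voiced])   # keep voiced frames only
    return contour - np.mean(contour)          # remove the singer's key


def midi_to_contour(midi_path):
    """Flatten the melody notes of a MIDI file into a key-normalized pitch sequence."""
    pm = pretty_midi.PrettyMIDI(midi_path)
    notes = sorted((n for inst in pm.instruments if not inst.is_drum
                    for n in inst.notes), key=lambda n: n.start)
    contour = np.array([n.pitch for n in notes], dtype=float)
    return contour - np.mean(contour)


def dtw_distance(a, b):
    """Plain dynamic time warping distance between two 1-D pitch sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m] / (n + m)                # length-normalized distance


def rank_songs(hum_wav, midi_paths):
    """Return candidate MIDI files sorted by melodic similarity to the humming."""
    query = humming_to_contour(hum_wav)
    scored = [(dtw_distance(query, midi_to_contour(p)), p) for p in midi_paths]
    return sorted(scored)
```

In this kind of pipeline, subtracting the mean pitch makes the comparison robust to the key the user hums in, and the length-normalized DTW score tolerates tempo differences between the humming and the reference melody.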

Please see the video demo.