Undergraduate Thesis, College of Computer Science, Sichuan University. Student: Huang ** (after an earlier release of the name attracted too many fans, the student asked to remain anonymous). Advisor: Wei Xiaoyong
With the widespread use of the Internet, a vast amount of music data is created and exchanged every second, enriching our listening experience but also posing serious challenges to conventional information retrieval (IR) technology. To find music on demand, nearly all conventional music search engines still rely on text-matching techniques, in which music is indexed with metadata against which text queries are matched. However, besides the costly labeling process these techniques require, users often cannot remember the exact textual information for a song, which usually causes the search to fail.
To address this problem, query by humming (QBH) enables users to issue a query by singing a segment of the desired song when they are unable to provide its textual information. In this thesis, we implement a QBH system that extracts acoustic features from both the user's humming (a WAV segment) and the MIDI-formatted music data, and matches the two for retrieval. The experimental results validate the efficiency and effectiveness of the system.
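The abstract does not fix the exact features or matching algorithm; a common QBH choice, assumed here purely for illustration, is to compare a transposition-normalized pitch contour of the humming against candidate MIDI note sequences with dynamic time warping (DTW). A minimal sketch under that assumption:

```python
# Hypothetical QBH matching sketch: dynamic time warping (DTW) over
# transposition-normalized pitch contours. The feature choice and the
# matcher are illustrative assumptions, not the thesis's actual design.

def dtw_distance(query, reference):
    """Classic O(n*m) DTW distance between two 1-D pitch sequences."""
    n, m = len(query), len(reference)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(query[i - 1] - reference[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # skip a query note
                                 cost[i][j - 1],      # skip a reference note
                                 cost[i - 1][j - 1])  # align the two notes
    return cost[n][m]

def normalize(contour):
    """Subtract the mean pitch so matching ignores the key the user hums in."""
    mean = sum(contour) / len(contour)
    return [p - mean for p in contour]

# Hypothetical data: MIDI note numbers for the hummed query and two candidates.
hummed = normalize([60, 62, 64, 62, 60])
candidates = {
    "song_a": normalize([61, 63, 65, 63, 61]),  # same melody, transposed up
    "song_b": normalize([60, 60, 67, 67, 69]),  # a different melody
}
best = min(candidates, key=lambda k: dtw_distance(hummed, candidates[k]))
```

After mean-normalization the transposed copy of the melody matches the query exactly, so `song_a` is returned; a real system would extract the query contour from the WAV via pitch tracking rather than hard-code it.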