Simple music fingerprinting using Chromaprint in Python

Have you ever heard of or even used a service like Shazam? Cool, right? No, we are not going to make something as magical as that 😜 But using Chromaprint we can create audio fingerprints so that we can search for music using a sample.

Before we can use the Chromaprint Python library, pyacoustid, we need to install the Chromaprint C library on our OS. The build instructions are in its repo. Or, if you use Debian/Ubuntu, you can install it using apt (if there is any error, try to google the error; most of the time it’s due to a missing dependency):

sudo apt install ffmpeg acoustid-fingerprinter

Then you can create a virtual environment (or not) and install the pip packages:

pip install pyacoustid

Now to generate a fingerprint from a file:

import acoustid
import chromaprint

# fingerprint_file returns the duration (in seconds) and the compressed,
# base64-encoded fingerprint; decode_fingerprint turns it back into a list
# of 32-bit integers plus the fingerprint algorithm version
duration, fp_encoded = acoustid.fingerprint_file('music.mp3')
fingerprint, version = chromaprint.decode_fingerprint(fp_encoded)
print(fingerprint)

The fingerprint is an array of signed 32-bit integers (948 of them for this particular file; the length depends on the audio duration) that represents a compact characteristic, a.k.a. fingerprint, of the audio file. It can also be visualized with the help of numpy and matplotlib:

pip install numpy matplotlib
import numpy as np
import matplotlib.pyplot as plt

...

fig = plt.figure()
# Turn each 32-bit integer into its bits, then transpose so time runs along
# the x-axis and the 32 bit positions along the y-axis
bitmap = np.transpose(np.array([
    [b == '1' for b in '{:032b}'.format(i & 0xffffffff)]
    for i in fingerprint
]))
plt.imshow(bitmap)
plt.show()
Music fingerprint bitmap visualization

Now, let’s say we have another music file and want to check its similarity to the previous file. One way to do this is by calculating the similarity between the fingerprints:

pip install fuzzywuzzy[speedup]
from fuzzywuzzy import fuzz

# sample_fingerprint is the decoded fingerprint of the other file,
# obtained the same way as fingerprint above
similarity = fuzz.ratio(sample_fingerprint, fingerprint)
print(similarity)

The similarity variable will contain the percentage of similarity between sample_fingerprint and fingerprint, calculated using fuzzy string matching.
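
Since each fingerprint item is a 32-bit integer, another way to measure similarity is to count how many bits differ between the two fingerprints (the Hamming distance). This is just a minimal sketch, not part of the original code; the bit_error_rate helper and the naive pairwise comparison without any time-offset alignment are my own assumptions:

def bit_error_rate(fp_a, fp_b):
    # Fraction of differing bits between two decoded fingerprints,
    # compared item by item up to the length of the shorter one
    n = min(len(fp_a), len(fp_b))
    if n == 0:
        return 1.0
    diff_bits = sum(bin((a ^ b) & 0xffffffff).count('1')
                    for a, b in zip(fp_a, fp_b))
    return diff_bits / (32.0 * n)

similarity_bits = 100 * (1 - bit_error_rate(sample_fingerprint, fingerprint))
print(similarity_bits)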

I also made a simple program that uses all the code above. The program calculates the similarity between files in two directories, finds the best match, and also visualizes the fingerprints.
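
For reference, here is a rough sketch of what such a program could look like. The directory names, the get_fingerprint helper, and the use of fuzz.ratio for ranking are my own assumptions for illustration, not the actual program:

import os
import acoustid
import chromaprint
from fuzzywuzzy import fuzz

def get_fingerprint(path):
    # Fingerprint a file and decode it into a list of 32-bit integers
    _, fp_encoded = acoustid.fingerprint_file(path)
    fingerprint, _ = chromaprint.decode_fingerprint(fp_encoded)
    return fingerprint

def best_matches(samples_dir, library_dir):
    # Fingerprint every file in the library once, then compare each sample
    # against all of them and report the best match
    library = {name: get_fingerprint(os.path.join(library_dir, name))
               for name in os.listdir(library_dir)}
    for name in os.listdir(samples_dir):
        sample = get_fingerprint(os.path.join(samples_dir, name))
        scores = {lib_name: fuzz.ratio(sample, fp)
                  for lib_name, fp in library.items()}
        best = max(scores, key=scores.get)
        print('{} -> {} ({}%)'.format(name, best, scores[best]))

best_matches('samples', 'library')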

That’s all. Thanks for reading ☕

7 Replies to “Simple music fingerprinting using Chromaprint in Python”

  1. Hello, this post is great. I’m trying to create a music service… I want to know how I can compare two large audio files (radio fragments) and then find small similar fragments (songs); I need to split them and extract the common fragments.
    Thanks in advance!

  2. Thanks. The text behind the link is very interesting, so I need to understand it. Can you explain it to me or recommend something more about it?

  3. As far as I understand, the idea in the paper is to split the audio into smaller chunks, manually label them either as song or accompaniment (intro, filler between songs, etc.), and use them to train a machine learning model (SVM). Unfortunately I can’t explain more than that due to my lack of knowledge and experience in audio processing.

    If you prefer a more practical recommendation, you can search GitHub with the keywords “audio segmentation”. This is the closest one I could find: https://github.com/amsehili/audio-segmentation-by-classification-tutorial. It detects more classes than the paper, but you should be able to study it to get a better idea.

  4. The fingerprint here is just an array of signed ints, so we can store it as-is in MongoDB with the help of a package such as `pymongo`. But the actual problem is how to search them again based on similarity.

    I haven’t done such a thing before, but I would try to cluster all the fingerprints (e.g. using k-means) and store the fingerprints along with their cluster ID. Then, to find similar ones, I can just (1) find the cluster ID of the input, (2) query MongoDB for all fingerprints with the same cluster ID, and (3) run text similarity on the results to find the top matching fingerprints. A rough sketch is below.
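
    A very rough sketch of steps (1) and (2), assuming scikit-learn and pymongo are installed; the fixed feature length, cluster count, and collection name are made up purely for illustration:

    import numpy as np
    from sklearn.cluster import KMeans
    from pymongo import MongoClient

    FEATURE_SIZE = 200  # pad/truncate so k-means gets equal-sized vectors

    def to_features(fingerprint):
        # Convert a decoded fingerprint into a fixed-length float vector
        padded = (list(fingerprint) + [0] * FEATURE_SIZE)[:FEATURE_SIZE]
        return np.array(padded, dtype=np.float64)

    def index_fingerprints(fingerprints, collection, n_clusters=8):
        # Cluster the decoded fingerprints and store each with its cluster id
        features = np.array([to_features(fp) for fp in fingerprints])
        kmeans = KMeans(n_clusters=n_clusters, n_init=10).fit(features)
        for fp, cluster_id in zip(fingerprints, kmeans.labels_):
            collection.insert_one({'fingerprint': fp, 'cluster': int(cluster_id)})
        return kmeans

    def find_candidates(query_fingerprint, kmeans, collection):
        # (1) predict the query's cluster, (2) fetch fingerprints in that
        # cluster; (3) rank these candidates with fuzz.ratio as in the post
        cluster_id = int(kmeans.predict([to_features(query_fingerprint)])[0])
        return list(collection.find({'cluster': cluster_id}))

    # Usage (hypothetical database/collection names):
    #   collection = MongoClient().musicdb.fingerprints
    #   kmeans = index_fingerprints(all_fingerprints, collection)
    #   candidates = find_candidates(fingerprint, kmeans, collection)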
