09b_Music_genre_classification_with_the_Million_Song_Dataset
Genre classification with the Million Song Dataset¶
The Million Song Dataset is a freely available collection of audio features and metadata for a million contemporary popular music tracks, originally collected and compiled by Thierry Bertin-Mahieux, Daniel P.W. Ellis, Brian Whitman, and Paul Lamere.
The dataset is intended to encourage the development of algorithms in the field of music information retrieval. The data for each track includes textual metadata such as artist and album names, numerical descriptors such as duration, and various audio features derived using a music analysis platform provided by The Echo Nest (since acquired by Spotify). Of the various audio features and segmentations included in the full dataset, the most detailed information is provided at a ‘segment’ level: each segment corresponds to an automatically identified ‘quasi-stable music event’ – a roughly contiguous section of the audio with similar perceptual quality. The number of segments per track is variable and each segment can itself be of variable length – typically they seem to be around 0.2 – 0.4 seconds but can be as long as 10 seconds or more.
For each segment of the track various extracted audio features are available: a 12-dimensional vector of chroma features, a 12-dimensional vector of ‘MFCC-like’ timbre features, and various measures of the loudness of the segment, including loudness at the segment start and maximum loudness. In the version of the data we provide, we include a 25-dimensional vector for each included segment, consisting of the 12 timbre features, the 12 chroma features and the loudness at the start of the segment, concatenated in that order. To allow easier integration into standard feedforward models, the basic version of the data we provide includes features only for a fixed length crop of the central 120 segments of each track (tracks with fewer than 120 segments are therefore not included). This gives an overall input dimension per track of 120×25=3000. Each of the 3000 input dimensions has been preprocessed by subtracting the per-dimension mean across the training data and dividing by the per-dimension standard deviation across the training data.
We provide data providers for the fixed length crop version of the input features, with the inputs being returned in batches of 3000-dimensional vectors (these can be reshaped to (120, 25) to recover the per-segment features). To allow more complex variable-length sequence modelling with, for example, recurrent neural networks, we also provide a variable length version of the data. This is only provided as compressed NumPy (.npz) data files rather than data provider objects – you will need to write your own data provider if you wish to use this version of the data. As the inputs have a variable number of segments, they have been ‘bucketed’ into groups of similar maximum length, with the following binning scheme used:
120 – 250 segments
251 – 500 segments
501 – 650 segments
651 – 800 segments
801 – 950 segments
951 – 1200 segments
1201 – 2000 segments
2001 – 4000 segments
For each bucket the NumPy data files include inputs and targets arrays with second dimension equal to the maximum segment size in the bucket (e.g. 250 for the first bucket) and first dimension equal to the number of tracks whose segment count falls in that bucket. These are named inputs_{n} and targets_{n} in the data file, where {n} is the maximal number of segments in the bucket, e.g. inputs_250 and targets_250 for the first bucket. For tracks with fewer segments than the maximum size in the bucket, the features for the track have been padded with NaN values. For tracks with more segments than the maximum bucket size of 4000, only the first 4000 segments have been included.
To allow you to match tracks between the fixed length and variable length datasets, the data files also include an array for each bucket giving the indices of the corresponding tracks in the fixed length input arrays. For example the array indices_250 will have the same size as the first dimension of inputs_250 and targets_250, with the first element of indices_250 giving the index into the fixed length inputs and targets arrays corresponding to the first element of inputs_250 and targets_250.
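As a concrete illustration, the sketch below loads one bucket of the 10-genre variable length training data and recovers the true (unpadded) segment count of each track. The array shapes given in the comments are assumptions based on the description above (in particular, that the 25 feature values form the last axis), so check them against the actual files before relying on them.

import os
import numpy as np

# Load the variable length 10-genre training data (a sketch -- array names
# follow the inputs_{n} / targets_{n} / indices_{n} scheme described above).
data_path = os.path.join(os.environ['MLP_DATA_DIR'], 'msd-10-genre-train-var-length.npz')
data = np.load(data_path)

inputs_250 = data['inputs_250']    # assumed shape (num_tracks_in_bucket, 250, 25)
targets_250 = data['targets_250']  # targets for the same tracks
indices_250 = data['indices_250']  # indices of these tracks in the fixed length arrays

# Recover per-track segment counts: a segment is padding if all of its feature
# values are NaN, so count the segments which are not entirely NaN.
valid_segments = ~np.isnan(inputs_250).all(axis=-1)
segment_lengths = valid_segments.sum(axis=-1)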
The Million Song Dataset in its original form does not provide any genre labels; however, various external groups have proposed genre labels for portions of the data by cross-referencing the track IDs against external music tagging databases. Analogously to the provision of both simpler and more complex classification tasks for the CIFAR-10 / CIFAR-100 datasets, we provide two classification task datasets derived from the Million Song Dataset – one with 10 coarser level genre classes, and another with 25 finer-grained genre / style classes.
The 10-genre classification task uses the CD2C tagtraum genre annotations derived from multiple source databases (beaTunes genre dataset, Last.fm dataset, Top-MAGD dataset), with the CD2C variant using only non-ambiguous annotations (i.e. not including tracks with multiple genre labels). Of the 15 genre labels provided in the CD2C annotations, 5 (World, Latin, Punk, Folk and New Age) were not included due to having fewer than 5000 examples available. This left 10 remaining genre classes:
Rap
Rock
RnB
Electronic
Metal
Blues
Pop
Jazz
Country
Reggae
For each of these 10 classes, 5000 labelled examples have been collected for training / validation (i.e. 50000 examples in total) and a further 1000 examples per class for testing, with the exception of the Blues class for which only 991 testing examples are provided due to there being insufficient labelled tracks of the minimum required length (i.e. a total of 9991 test examples).
The 9991 test set examples have labels provided; however, to avoid any accidental over-fitting to the test set you should only use these for the final evaluation of your model(s). If you repeatedly evaluate models on the test set during model development it is easy to end up indirectly fitting to the test labels – for those who have not already read it, see this excellent cautionary note in the MLPR notes by Iain Murray.
The 25-genre classification task uses the MSD Allmusic Style Dataset labels derived from the AllMusic.com database by Alexander Schindler, Rudolf Mayer and Andreas Rauber of Vienna University of Technology. The 25 genre / style labels used are:
Big Band
Blues Contemporary
Country Traditional
Dance
Electronica
Experimental
Folk International
Gospel
Grunge Emo
Hip Hop Rap
Jazz Classic
Metal Alternative
Metal Death
Metal Heavy
Pop Contemporary
Pop Indie
Pop Latin
Punk
Reggae
RnB Soul
Rock Alternative
Rock College
Rock Contemporary
Rock Hard
Rock Neo Psychedelia
For each of these 25 classes, 2000 labelled examples have been collected for training / validation (i.e. 50000 examples in total). A further 400 examples per class have been collected for testing (i.e. 10000 examples in total), for which you are provided inputs but not targets. The optional Kaggle competition being run for this dataset (see email) is scored based on the 25-genre class label predictions on these unlabelled test inputs.
The tracks used for the 25-genre classification task only partially overlap with those used for the 10-genre classification task and we do not provide any mapping between the two.
For each of the two tasks, the 50000 examples collected for training have been pre-split into a 40000 example training dataset and a 10000 example validation dataset. If you wish to use a more complex cross-fold validation scheme you may want to combine these two portions of the dataset and define your own functions / classes for separating out a validation set, as sketched below.
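A minimal sketch of pooling the two portions and making a new random split is given below. It assumes that the training and validation .npz files store their arrays under inputs and targets keys – this is an assumption rather than something stated above, so check with np.load(...).keys() first – and the seed and split fraction are arbitrary choices for illustration.

import os
import numpy as np

data_dir = os.environ['MLP_DATA_DIR']
# Assumed keys: 'inputs' and 'targets' -- verify against the actual files.
train = np.load(os.path.join(data_dir, 'msd-10-genre-train.npz'))
valid = np.load(os.path.join(data_dir, 'msd-10-genre-valid.npz'))

# Pool the 40000 training and 10000 validation examples back together.
all_inputs = np.concatenate([train['inputs'], valid['inputs']], axis=0)
all_targets = np.concatenate([train['targets'], valid['targets']], axis=0)

# Example: a new random 80 / 20 split with a fixed seed for reproducibility.
rng = np.random.RandomState(123)
perm = rng.permutation(all_inputs.shape[0])
num_valid = all_inputs.shape[0] // 5
valid_inputs, valid_targets = all_inputs[perm[:num_valid]], all_targets[perm[:num_valid]]
train_inputs, train_targets = all_inputs[perm[num_valid:]], all_targets[perm[num_valid:]]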
Data provider classes for the fixed length input data for both the 10 and 25 genre classification tasks are provided in the mlp.data_providers module as MSD10GenreDataProvider and MSD25GenreDataProvider. Both have similar behaviour to the MNISTDataProvider used extensively last semester. A which_set argument can be used to specify whether to return a data provider for the training dataset (which_set='train') or validation dataset (which_set='valid'). Both data provider classes provide a label_map attribute which is a list of strings giving the class labels corresponding to the integer targets (i.e. prior to conversion to a 1-of-K encoded binary vector).
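For example, a training data provider can be created and inspected as below. This is only a sketch which assumes the providers behave like the MNIST providers from last semester, i.e. iterating over them yields (inputs, targets) batch pairs with 1-of-K encoded targets.

from mlp.data_providers import MSD10GenreDataProvider

train_data = MSD10GenreDataProvider(which_set='train', batch_size=50)
print(train_data.label_map)  # genre name strings, indexed by integer class label

for input_batch, target_batch in train_data:
    # Each input vector has 120 * 25 = 3000 dimensions; reshape to per-segment features.
    segment_features = input_batch.reshape((-1, 120, 25))
    # Map the 1-of-K encoded target of the first example back to a genre name.
    print(segment_features.shape, train_data.label_map[target_batch[0].argmax()])
    break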
The test dataset files for the 10 genre classification task are provided as two separate NumPy data files msd-10-genre-test-inputs.npz and msd-10-genre-test-targets.npz. These can be loaded using the np.load function. The inputs are stored as a $9991\times3000$ array under the key inputs in the file msd-10-genre-test-inputs.npz and the targets as a 9991 element array of integer labels under the key targets in msd-10-genre-test-targets.npz. A corresponding msd-25-genre-test-inputs.npz file is provided for the 25 genre task inputs.
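For instance, the 10-genre test data can be loaded and used for a final evaluation along the following lines – a sketch based only on the file layout just described, with predicted_classes standing in for whatever integer class predictions your own model produces.

import os
import numpy as np

data_dir = os.environ['MLP_DATA_DIR']
test_inputs = np.load(os.path.join(data_dir, 'msd-10-genre-test-inputs.npz'))['inputs']
test_targets = np.load(os.path.join(data_dir, 'msd-10-genre-test-targets.npz'))['targets']

# Once you have integer class predictions for the test inputs, the test accuracy is e.g.
#     test_accuracy = np.mean(predicted_classes == test_targets)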
Accessing the Million Song Dataset data¶
Before using the data provider objects you will need to make sure the data files are accessible to the mlp package, i.e. that they exist under the directory specified by the MLP_DATA_DIR path.
The fixed length input data and associated targets are available as compressed NumPy .npz files
msd-10-genre-train.npz 210MB
msd-10-genre-valid.npz 53MB
msd-10-genre-test-inputs.npz 53MB
msd-10-genre-test-targets.npz 5.2KB
msd-25-genre-train.npz 210MB
msd-25-genre-valid.npz 53MB
msd-25-genre-test-inputs.npz 53MB
in the AFS directory /afs/inf.ed.ac.uk/group/teaching/mlp/data.
If you are working on DICE one option is to redefine your MLP_DATA_DIR to point directly to the shared AFS data directory by editing the env_vars.sh start-up file for your environment. This will avoid using up your DICE quota by storing the data files in your homespace, but may involve slower initial loading of the data on initialising the data providers if many people are trying to access the same files at once. The environment variable can be redefined by running
gedit ~/miniconda2/envs/mlp/etc/conda/activate.d/env_vars.sh
in a terminal window (assuming you installed miniconda2 to your home directory), and changing the line
export MLP_DATA_DIR=$HOME/mlpractical/data
to
export MLP_DATA_DIR="/afs/inf.ed.ac.uk/group/teaching/mlp/data"
and then saving and closing the editor. You will need to reload the mlp environment using source activate mlp and restart the Jupyter notebook server in the reloaded environment for the new environment variable definition to be available.
For those working on DICE who have sufficient quota remaining, or those using their own machine, an alternative option is to copy the data files into your local mlp/data directory (or wherever your MLP_DATA_DIR environment variable currently points to, if different).
Assuming your local mlpractical repository is in your home directory you should be able to copy the required files on DICE by running
cp /afs/inf.ed.ac.uk/group/teaching/mlp/data/msd-*-train.npz ~/mlpractical/data
cp /afs/inf.ed.ac.uk/group/teaching/mlp/data/msd-*-valid.npz ~/mlpractical/data
cp /afs/inf.ed.ac.uk/group/teaching/mlp/data/msd-*-test-*.npz ~/mlpractical/data
On a non-DICE machine, you will need to either set up local access to AFS, use a remote file transfer client such as scp, or download the files using the iFile web interface here (requires DICE credentials).
As some of the files are quite large you may wish to copy only those you are using (e.g. only the files for one of the two tasks) to your local filespace to avoid filling up your quota. The msd-25-genre-test-inputs.npz file will only be needed by those intending to enter the associated optional Kaggle competition.
In addition to the fixed length input files there are also corresponding files with the variable length input data in the same AFS directory
msd-10-genre-train-var-length.npz 1.6GB
msd-10-genre-valid-var-length.npz 403MB
msd-10-genre-test-inputs-var-length.npz 403MB
msd-10-genre-test-targets-var-length.npz 3.1KB
msd-25-genre-train-var-length.npz 1.5GB
msd-25-genre-valid-var-length.npz 367MB
msd-25-genre-test-inputs-var-length.npz 363MB
As you can see some of these files, particularly the training sets, are very large so you will need to be careful if copying to your filespace to make sure you have sufficient quota available.
Example two-layer classifier models¶
Example code is given below for creating instances of the 10-genre and 25-genre fixed-length input data provider objects and using them to train simple two-layer feedforward network models with rectified linear activations in TensorFlow.
In [ ]:
import os
import tensorflow as tf
import numpy as np
from mlp.data_providers import MSD10GenreDataProvider, MSD25GenreDataProvider
import matplotlib.pyplot as plt
%matplotlib inline
MSD 10-genre task¶
In [ ]:
train_data = MSD10GenreDataProvider('train', batch_size=50)
valid_data = MSD10GenreDataProvider('valid', batch_size=50)
In [ ]:
def fully_connected_layer(inputs, input_dim, output_dim, nonlinearity=tf.nn.relu):
    # Affine transformation followed by an elementwise non-linearity.
    weights = tf.Variable(
        tf.truncated_normal(
            [input_dim, output_dim], stddev=2. / (input_dim + output_dim)**0.5),
        name='weights')
    biases = tf.Variable(tf.zeros([output_dim]), name='biases')
    outputs = nonlinearity(tf.matmul(inputs, weights) + biases)
    return outputs
In [ ]:
inputs = tf.placeholder(tf.float32, [None, train_data.inputs.shape[1]], 'inputs')
targets = tf.placeholder(tf.float32, [None, train_data.num_classes], 'targets')
num_hidden = 200

with tf.name_scope('fc-layer-1'):
    hidden_1 = fully_connected_layer(inputs, train_data.inputs.shape[1], num_hidden)
with tf.name_scope('output-layer'):
    outputs = fully_connected_layer(hidden_1, num_hidden, train_data.num_classes, tf.identity)

with tf.name_scope('error'):
    error = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(logits=outputs, labels=targets))
with tf.name_scope('accuracy'):
    accuracy = tf.reduce_mean(tf.cast(
        tf.equal(tf.argmax(outputs, 1), tf.argmax(targets, 1)),
        tf.float32))

with tf.name_scope('train'):
    train_step = tf.train.AdamOptimizer().minimize(error)

init = tf.global_variables_initializer()
In [ ]:
with tf.Session() as sess:
    sess.run(init)
    for e in range(10):
        running_error = 0.
        running_accuracy = 0.
        for input_batch, target_batch in train_data:
            _, batch_error, batch_acc = sess.run(
                [train_step, error, accuracy],
                feed_dict={inputs: input_batch, targets: target_batch})
            running_error += batch_error
            running_accuracy += batch_acc
        running_error /= train_data.num_batches
        running_accuracy /= train_data.num_batches
        print('End of epoch {0:02d}: err(train)={1:.2f} acc(train)={2:.2f}'
              .format(e + 1, running_error, running_accuracy))
        if (e + 1) % 5 == 0:
            valid_error = 0.
            valid_accuracy = 0.
            for input_batch, target_batch in valid_data:
                batch_error, batch_acc = sess.run(
                    [error, accuracy],
                    feed_dict={inputs: input_batch, targets: target_batch})
                valid_error += batch_error
                valid_accuracy += batch_acc
            valid_error /= valid_data.num_batches
            valid_accuracy /= valid_data.num_batches
            print('                   err(valid)={0:.2f} acc(valid)={1:.2f}'
                  .format(valid_error, valid_accuracy))
MSD 25-genre task¶
In [ ]:
train_data = MSD25GenreDataProvider('train', batch_size=50)
valid_data = MSD25GenreDataProvider('valid', batch_size=50)
In [ ]:
tf.reset_default_graph()

inputs = tf.placeholder(tf.float32, [None, train_data.inputs.shape[1]], 'inputs')
targets = tf.placeholder(tf.float32, [None, train_data.num_classes], 'targets')
num_hidden = 200

with tf.name_scope('fc-layer-1'):
    hidden_1 = fully_connected_layer(inputs, train_data.inputs.shape[1], num_hidden)
with tf.name_scope('output-layer'):
    outputs = fully_connected_layer(hidden_1, num_hidden, train_data.num_classes, tf.identity)

with tf.name_scope('error'):
    error = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(logits=outputs, labels=targets))
with tf.name_scope('accuracy'):
    accuracy = tf.reduce_mean(tf.cast(
        tf.equal(tf.argmax(outputs, 1), tf.argmax(targets, 1)),
        tf.float32))

with tf.name_scope('train'):
    train_step = tf.train.AdamOptimizer().minimize(error)

init = tf.global_variables_initializer()
In [ ]:
sess = tf.Session()
sess.run(init)
for e in range(10):
    running_error = 0.
    running_accuracy = 0.
    for input_batch, target_batch in train_data:
        _, batch_error, batch_acc = sess.run(
            [train_step, error, accuracy],
            feed_dict={inputs: input_batch, targets: target_batch})
        running_error += batch_error
        running_accuracy += batch_acc
    running_error /= train_data.num_batches
    running_accuracy /= train_data.num_batches
    print('End of epoch {0:02d}: err(train)={1:.2f} acc(train)={2:.2f}'
          .format(e + 1, running_error, running_accuracy))
    if (e + 1) % 5 == 0:
        valid_error = 0.
        valid_accuracy = 0.
        for input_batch, target_batch in valid_data:
            batch_error, batch_acc = sess.run(
                [error, accuracy],
                feed_dict={inputs: input_batch, targets: target_batch})
            valid_error += batch_error
            valid_accuracy += batch_acc
        valid_error /= valid_data.num_batches
        valid_accuracy /= valid_data.num_batches
        print('                   err(valid)={0:.2f} acc(valid)={1:.2f}'
              .format(valid_error, valid_accuracy))
Predicting test data classes and creating a Kaggle submission file¶
An optional Kaggle in Class competition (see email for invite link; you will need to sign up with an ed.ac.uk email address to be able to enter) is being run on the 25 genre classification task. The competition is scored on the proportion of classes correctly predicted on the test set inputs (for which no class labels are provided). Half of the 10000 test inputs are used to calculate a public leaderboard score which will be visible while the competition is in progress, and the other half are used to compute the private leaderboard score which will only be unveiled at the end of the competition. Each entrant can make up to two submissions of predictions each day during the competition.
The code and helper function below illustrate how to use the predicted outputs of the TensorFlow network model we just trained to create a submission file which can be uploaded to Kaggle. The required format of the submission file is a .csv (Comma Separated Values) file with two columns: the first is the integer index of the test input in the array in the provided data file (i.e. first row 0, second row 1 and so on) and the second is the corresponding predicted class label as an integer. The predictions must be preceded by a header line, as in the following example
Id,Class
0,12
1,24
2,9
…
Integer class label predictions can be computed from the class probability outputs of the model by performing an argmax operation along the last dimension.
In [ ]:
test_inputs = np.load(os.path.join(os.environ['MLP_DATA_DIR'], 'msd-25-genre-test-inputs.npz'))['inputs']
test_inputs = test_inputs.reshape((test_inputs.shape[0], -1))
test_predictions = sess.run(tf.nn.softmax(outputs), feed_dict={inputs: test_inputs})
In [ ]:
def create_kaggle_submission_file(predictions, output_file, overwrite=False):
    if predictions.shape != (10000, 25):
        raise ValueError('predictions should be an array of shape (10000, 25).')
    if not (np.all(predictions >= 0.) and
            np.all(predictions <= 1.)):
        raise ValueError('predictions should be an array of probabilities in [0, 1].')
    if not np.allclose(predictions.sum(-1), 1):
        raise ValueError('predictions rows should sum to one.')
    if os.path.exists(output_file) and not overwrite:
        raise ValueError('File already exists at {0}'.format(output_file))
    pred_classes = predictions.argmax(-1)
    ids = np.arange(pred_classes.shape[0])
    np.savetxt(output_file, np.column_stack([ids, pred_classes]), fmt='%d',
               delimiter=',', header='Id,Class', comments='')
In [ ]:
create_kaggle_submission_file(test_predictions, 'msd-25-example-network-submission.csv', True)