The Art of Creating a Mixtape — A Data Science Approach

Sigh, I wish I had a better view of my music… And so, here we are.

A Better View of My Music

[Image: A Better View: Music Map]

Tracks are complex, and single descriptors like artist, album, or genre are not sufficient to capture their underlying ‘feel’.

So I needed a way to capture more information and get an idea of my ‘inventory’ of songs before I began selecting.

Iterating on this led to a functional prototype of an interactive dashboard that lets me roam my library of music in map form (as visible above), in a more informative manner than swiping down a list.

The dashboard served as an exploratory research tool — allowing me to see the spatial mapping of other mixtapes (albums) — as well as an organization tool to select tracks for my playlist.

Here is a quick sample of me first finding a not-so-positive (low valence) but high-energy song and then connecting it with a song from a specific artist (The Bleachers):

[Video: musicmapclip.mov]


Sound Features

To understand the underlying properties of songs, I considered doing my own sound analysis, but then found that Spotify has done some great work in calculating Audio Features.

Here are the ones that, after some filtering, proved useful in creating a spatial mapping:

Acousticness: “A confidence measure from 0.0 to 1.0 of whether the track is acoustic.”

Danceability: “Danceability describes how suitable a track is for dancing based on a combination of musical elements including tempo, rhythm stability, beat strength, and overall regularity.”

Energy: “Energy is a measure from 0.0 to 1.0 and represents a perceptual measure of intensity and activity. Typically, energetic tracks feel fast, loud, and noisy.”

Loudness: “The overall loudness of a track in decibels (dB). Loudness values are averaged across the entire track and are useful for comparing relative loudness of tracks.”

Valence: “A measure from 0.0 to 1.0 describing the musical positiveness conveyed by a track. Tracks with high valence sound more positive (e.g. happy, cheerful, euphoric), while tracks with low valence sound more negative (e.g. sad, depressed, angry).”
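These features can be fetched in bulk from Spotify’s Web API. A minimal sketch using the spotipy client, where the credentials and track IDs are placeholders:

```python
import spotipy
from spotipy.oauth2 import SpotifyClientCredentials

# Placeholder credentials: substitute your own Spotify app keys.
sp = spotipy.Spotify(auth_manager=SpotifyClientCredentials(
    client_id="YOUR_CLIENT_ID", client_secret="YOUR_CLIENT_SECRET"))

track_ids = ["4uLU6hMCjMI75M1A2tKUQC"]  # hypothetical IDs from your library
features = sp.audio_features(tracks=track_ids)  # one dict per track

# Keep only the descriptors used for the map.
keys = ["acousticness", "danceability", "energy", "loudness", "valence"]
rows = [[f[k] for k in keys] for f in features if f is not None]
```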

Technical Details of Mapping in 2D

I scaled the above features (plus some others) into normal distributions, and then used t-SNE to reduce them to 2 dimensions, allowing me to create the ‘Music Map’ you see above.
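A minimal sketch of that pipeline in scikit-learn, assuming the `rows` matrix from the previous sketch; the QuantileTransformer is one way to map features onto normal distributions, though the exact scaler I used may differ:

```python
import numpy as np
from sklearn.preprocessing import QuantileTransformer
from sklearn.manifold import TSNE

X = np.asarray(rows)  # n_tracks x n_features, from the previous sketch

# Map each feature onto a normal distribution so no single scale dominates.
X_scaled = QuantileTransformer(
    output_distribution="normal",
    n_quantiles=min(1000, len(X))).fit_transform(X)

# t-SNE squeezes the feature space into 2D while preserving local
# neighbourhoods; perplexity must be smaller than the number of tracks.
xy = TSNE(n_components=2, perplexity=30, random_state=42).fit_transform(X_scaled)
```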

Now we have a bird’s-eye view of the song library, where each song is mapped relative to the others, rather than a meaningless list of endless items.

I also used K-means clustering in high dimensions to get groups of songs — each colour represents a different cluster — which serve as cognitive landmarks on the map, indicating separation and borders.
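A sketch of that clustering step, run on the scaled high-dimensional features rather than the 2D projection; the cluster count and palette here are illustrative assumptions:

```python
from sklearn.cluster import KMeans

# Cluster in the full feature space so groupings reflect the whole signal,
# not just what survives the 2D squeeze.
labels = KMeans(n_clusters=8, n_init=10, random_state=42).fit_predict(X_scaled)

# One colour per cluster for plotting on the map.
palette = ["#1f77b4", "#ff7f0e", "#2ca02c", "#d62728",
           "#9467bd", "#8c564b", "#e377c2", "#7f7f7f"]
colors = [palette[label] for label in labels]
```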

I like maps; I think spatiality helps knowledge organization and, in turn, makes it easier to build cognitive maps.

Cognitive maps make it easier to capture connections between items.

I then calculated linear gradients across the X and Y axes for each of the audio features, and annotated the map with arrows indicating the direction of flow.
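One way to compute such a gradient is a linear fit of each feature against the 2D coordinates: the fitted coefficients give an arrow’s direction, and the quality of the fit its length. A sketch under those assumptions, reusing `xy`, `X_scaled`, and `keys` from above:

```python
from sklearn.linear_model import LinearRegression

def feature_arrow(xy, values):
    """Fit value ~ a*x + b*y; the vector (a, b) points where the feature grows."""
    reg = LinearRegression().fit(xy, values)
    direction = reg.coef_             # gradient of the feature across the map
    strength = reg.score(xy, values)  # R^2: how reliable the flow actually is
    return direction * strength       # weak correlations get short arrows

arrows = {name: feature_arrow(xy, X_scaled[:, i])
          for i, name in enumerate(keys)}
```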

How to Interpret the Graph

Each dot is a song, and its placement reflects how it relates to the other songs.

As an example, songs on the left differ more significantly from the songs on the right than from the songs in the center.

The longer a flow arrow is, and the further it reaches from the center of the map, the stronger the flow in that direction.

As an example, the bottom left of the map is significantly acoustic, as the acousticness gradient arrow is long and far from the centre.

On the other hand, the loudness gradient arrow is short and close to the center of the map, indicating that although louder songs correlate somewhat with the top right quadrant, the correlation is weak.

Colours are clusters: same-coloured points are grouped together in higher dimensions, and they help reveal the boundaries where two groups overlap.

This kind of graph-based UI is something I’ve been thinking about and experimenting with on various fronts, like blog posts and tweets, because I think changing user-interface structures could also change the way we think.

Interactive Controls

[Image: Control Panel for Music Map]

Of course, staring at colourful dots is not exactly informative, so I made the map interactive with labels and added controls to filter and focus it.

I also added interactive features to select songs to add to your mixtape and track their positions on the map; a Bokeh sketch of this wiring follows the list below.

Interactive Features:

The Controls tab can filter the map based on various properties of songs.

The Artists tab allows filtering by artist.

The Explore tab shows details of tracks as you hover over them on the map.

Hovering over a song will also highlight the other songs from the same album on the map, along with their ordering.

Clicking a song will add it to your mixtape table below the map, where you can click the preview button to hear a quick 30-second snippet from the middle of the song.

Buttons on the top right let you toggle between different map views.
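Here is a rough Bokeh sketch of the hover and click wiring; the data columns (e.g. `track_names`) and the mixtape callback body are simplified stand-ins for the real dashboard:

```python
from bokeh.plotting import figure, show
from bokeh.models import ColumnDataSource, CustomJS, HoverTool

# xy and colors come from the earlier sketches; track_names is assumed.
source = ColumnDataSource(data=dict(
    x=xy[:, 0], y=xy[:, 1], name=track_names, color=colors))

p = figure(title="Music Map", tools="pan,wheel_zoom,reset,tap")
r = p.scatter("x", "y", size=8, fill_color="color", line_color=None,
              source=source)

# Explore-tab behaviour: show track details on hover.
p.add_tools(HoverTool(renderers=[r], tooltips=[("track", "@name")]))

# Clicking (tapping) a point: push the selection to the mixtape table.
source.selected.js_on_change("indices", CustomJS(args=dict(source=source),
    code="""
    const picked = cb_obj.indices.map(i => source.data["name"][i]);
    console.log("add to mixtape:", picked);  // stand-in for the table update
"""))

show(p)
```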

Mapping out a Mixtape

After exploring, researching, and making your selection, you can then see the final spatial journey of your mixtape.

[Image: Sample Mixtape Spatial Journey]

Coming back to our initial principles of mixtapes: we’ve used underlying sound features to emphasize and design for Flow — we can design movement across various parts of the map, like taking a listener from good-vibe acoustics, to depressing but energetic pop, to a mellow shoegaze dream-pop sequence.
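The journey itself is just the chosen tracks connected in play order. Continuing the Bokeh sketch above, with a hypothetical selection:

```python
# Hypothetical play order: indices of the chosen tracks on the map.
order = [12, 3, 27, 8]

# Overlay the mixtape's path on the map, from the first track to the last.
p.line(xy[order, 0], xy[order, 1], line_width=2, line_dash="dashed",
       line_color="black")
p.scatter(xy[order, 0], xy[order, 1], size=12, marker="star",
          fill_color="black", line_color="black")
```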

Part of designing for Flow is researching the movement of other albums by your favourite artists on the map.

Order is accounted for: we hand-pick our songs one by one, in sequence.

And finally, Authenticity: though one can’t guarantee sincerity, our map contains songs from our own library.

Therefore, at the very least, the structure allows for authentic, sincere selection of songs that have indeed infected us as we expect them to infect another.

Final Remarks

The prototype I linked in this post is built from a random selection of artists in different genres from my own library — which leans more towards indie/rock/acoustic.

(Strangely, it is a little exposing to share one’s music library… but fear not, I did not give all my secrets (songs) away!)

This approach obviously emphasizes the sound part of music, not the lyrics.

Further work would incorporate topic-modelling with lyrics and lexical qualities.

A project I did a while ago explored that a bit.

I made the entire thing using Bokeh’s standalone JavaScript output to avoid dealing with any server costs 🙂 and therefore did not add automatic playlist creation through the Spotify API.
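For reference, this is the Bokeh pattern that bakes everything (data and CustomJS callbacks included) into one static HTML file, so the map runs entirely in the browser with no backend:

```python
from bokeh.plotting import output_file, save

# Writes a self-contained HTML page; interactivity comes from Bokeh's
# client-side JavaScript, so no server is needed to host or run it.
output_file("music_map.html", title="Music Map")
save(p)
```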

For now, you can create a list of songs and then add them to a Spotify playlist yourself.

