More on timbre
This is the graph of timbre from the paper in Scientific Reports:
The Y axis represents beta, a coefficient which represents the amount of timbral variety. I don't buy that distribution. Knowing the history of music and technology, you'd expect an uptick in the early 80s, when synthesisers became cheap and lots of new sounds were available at the touch of a button, and you'd expect the adoption of Pro Tools in the late 90s to expand the sonic palette again. Instead, you get a really weird, scattered set of data from the mid-60s and then a gradual decline. I'm not sure the computer scientists behind this research really understood music, or what they were looking at, and I suspect they interpreted the data through their prejudices.

My guess is that the weird mid-60s data is mostly because the dataset has little 60s music outside the well-known stuff (less statistical power, more room for error). There's also the question of what the analysis actually measures. The timbres it picks up come largely from analysing spectra, à la a spectrogram, which is objective in a way a musicological timbral analysis might not be, but which misses half the picture: it can't tell you much about the presence of guitars or synths, and so on. What it may well be reflecting is the great variety in the quality of sound you could get from different studios. Think of Abbey Road in 1967 compared to the Pye studios the Kinks used, which were audibly less high fidelity.

Through the 70s, quite high-fidelity recording became cheaper and cheaper, and more bands used it. I reckon that kind of sound-quality issue (along with drum machines and sound replacement) is a large part of why they found the data they found, rather than the "modern music all sounds the same" thing.
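To make the spectral point concrete, here's a minimal sketch (not the paper's actual pipeline, just an illustration with NumPy) of how a spectrogram-style summary feature discards information. The spectral centroid, a common "brightness" measure, depends only on the magnitude spectrum, so two signals with different waveforms but identical harmonic magnitudes score exactly the same, whatever instrument produced them.

```python
import numpy as np

def spectral_centroid(signal, sr):
    """Magnitude-weighted mean frequency: a common 'brightness' summary."""
    mag = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    return float(np.sum(freqs * mag) / np.sum(mag))

sr = 44100
t = np.arange(sr) / sr  # one second of audio

# Two harmonic tones on A3 (220 Hz) with the same harmonic amplitudes
# but different phase relationships: the waveforms differ, yet their
# magnitude spectra -- all this kind of analysis sees -- are identical.
tone_a = sum(np.sin(2 * np.pi * 220 * k * t) / k for k in range(1, 8))
tone_b = sum(np.sin(2 * np.pi * 220 * k * t + 0.5 * k) / k for k in range(1, 8))

print(spectral_centroid(tone_a, sr))
print(spectral_centroid(tone_b, sr))  # same centroid as tone_a
```

The same logic applies to overall recording quality: a duller, band-limited 60s recording and a hi-fi 70s one of the same performance would get genuinely different spectral scores, which is the kind of confound suggested above.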