Hi Kai
I had stated that my calculation assumed no repetitions. You are right that it is 12^8 (roughly 430 million) with repetitions, which makes sense. But I prefer to exclude repetitions, since a melody like C C C C C C C C is obviously not interesting. And this is for only one note length; I never intended to include variations in expression, articulation, or dynamics. One could go crazy with this calculation, and I am not that interested; this was just meant to give a rough idea.
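For anyone curious about the actual numbers, here is a quick back-of-the-envelope sketch in Python. It assumes that "no repetitions" means all 8 notes are distinct (which is how I read it) and, as above, ignores rhythm, rests, dynamics, and everything else:

    # count 8-note melodies drawn from the 12 chromatic pitches
    # (equal note lengths, no rests) -- purely illustrative
    from math import perm

    with_repetition = 12 ** 8         # any pitch may repeat: 429,981,696
    without_repetition = perm(12, 8)  # all 8 pitches distinct: 19,958,400

    print(f"with repetition:    {with_repetition:,}")
    print(f"without repetition: {without_repetition:,}")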
I did not intend to suggest that music is math (oh please, let's not get sidetracked here!); I posted it with the idea that someone might be curious about the number of possible combinations.
The traditions of classical composition have taught us how to narrow this amazingly complex "landscape" of tonal possibilities down to beautiful melodies without ever knowing the math. It amazes me that a computer algorithm churning out melodies would probably take a million years to write a melody like Tchaikovsky's or Mozart's, one that can move us emotionally. How does the human brain achieve it? Fascinating.
Cheers
Anand
hi anand,
so sorry, i missed that you made this assumption. in that case my point is not about the math but about the assumption itself, because you would have a hard time naming a famous melody without a single repeated note.
my post wasn’t entirely serious and i did not want to turn this into a nerdy discussion about math 😊; i was just surprised to read such a post on this forum.
my main concern is your assumption that every 100th melody is “good” (which is surely hard to define in the first place). i would guess that out of the almost uncountably many melodies (landscape is a good picture), the percentage that a large number of people would agree is “good” is FAR smaller. otherwise we would hear many more of them … and that’s why the exquisite examples in this thread are so special.
i fully agree with your statement about classical music, but i am not sure about the million years. neural networks are making tremendous progress. recently, a network with the same architecture as the “alphago” software that had previously beaten the world champion at go taught itself chess (and other games) in less than a day and now plays better than specialized chess software (despite decades of development of the latter). the “deepdream” networks can already “imitate” famous painters, and the results are quite stunning (although a bit creepy). so i wouldn’t be surprised if a neural network learned to copy a certain style and “compose”, say, “tchaikovsky-esque” music within the next decade. coming up with really new ideas is surely much harder, but who knows, maybe a neural network can one day even learn what makes a melody great and implicitly apply that complex recipe (which likely won’t consist of a few simple rules) to come up with new ones …
cheers
kai