The Dazzled Project by David Dalmazzo turns the average generative-visualization project on its head: rather than visuals being driven by sound, audio is generated from the forms created by a particle system. The approach is not new, but it is still relatively rare.
From Dazzled’s project page:
Dazzled Project is based on the idea to compose a generative particle environment that could create at the same time structures and sounds. I would like to program patterns and physics simulations with the aim to compose music structures that has a direct representation on a formal shape.
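To make the idea concrete, here is a minimal sketch of one way a particle system could drive sound: simulate a handful of particles bouncing in a box, then map each particle's vertical position to a pitch. This is purely illustrative, assuming a toy physics model and a position-to-MIDI-note mapping; it is not Dalmazzo's actual implementation.

```python
import random

def simulate_particles(n=8, steps=100, dt=0.05, seed=42):
    """Toy particle system: points under weak gravity bouncing in a unit-height box."""
    rng = random.Random(seed)
    particles = [{"y": rng.random(), "vy": rng.uniform(-1.0, 1.0)} for _ in range(n)]
    for _ in range(steps):
        for p in particles:
            p["vy"] -= 9.8 * 0.01 * dt      # gentle downward acceleration
            p["y"] += p["vy"] * dt
            if p["y"] < 0.0 or p["y"] > 1.0:  # bounce off floor/ceiling
                p["vy"] = -p["vy"]
                p["y"] = min(max(p["y"], 0.0), 1.0)
    return particles

def particles_to_pitches(particles, low=48, high=72):
    """Map each particle's height to a MIDI note in [low, high], then to Hz."""
    notes = [round(low + p["y"] * (high - low)) for p in particles]
    return [440.0 * 2 ** ((n - 69) / 12) for n in notes]

pitches = particles_to_pitches(simulate_particles())
print(["%.1f" % hz for hz in pitches])
```

The sound side here is reduced to a list of frequencies that could be sent to any synthesizer; a real system would also derive rhythm, amplitude, or timbre from particle velocities and collisions.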
Projects like these make me wonder how far off we are from whole mainstream musical pieces being generated by an algorithm rather than by a fully human process. In such a case there is no loss of creative potential; the creativity simply shifts to the system of rules the machine uses to compose the piece. If this does catch on, I wonder how our relationship to music and musicians would change, if at all.