Generative

From Things and Stuff Wiki


Algorithmic / generative

oof mess. mixed together, made its own page to sort.


  • https://en.wikipedia.org/wiki/Algorithmic_composition - the technique of using algorithms to create music. Algorithms (or, at the very least, formal sets of rules) have been used to compose music for centuries; the procedures used to plot voice-leading in Western counterpoint, for example, can often be reduced to algorithmic determinacy. The term is usually reserved, however, for the use of formal procedures to make music without human intervention, either through the introduction of chance procedures or the use of computers. Some algorithms or data that have no immediate musical relevance are used by composers as creative inspiration for their music. Algorithms such as fractals, L-systems, statistical models, and even arbitrary data (e.g. census figures, GIS coordinates, or magnetic field measurements) have been used as source materials.
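
To make one of the source materials named above concrete, here is a minimal L-system sketch in C. The rewriting rules and the mapping from symbols to scale steps are made-up illustrative assumptions, not taken from any particular composer or system: a short symbol string is grown by a few generations of rewriting and then read back as stepwise motion over a major scale.

```c
/* Toy L-system melody sketch: hypothetical rules, prints MIDI note numbers. */
#include <stdio.h>
#include <string.h>

#define MAXLEN 4096

/* one rewriting pass: expand each symbol by its (made-up) production rule */
static void rewrite(const char *in, char *out) {
    out[0] = '\0';
    for (; *in; in++) {
        switch (*in) {
        case '+': strcat(out, "+.-"); break;   /* up, repeat, down */
        case '-': strcat(out, "-+");  break;   /* down, up         */
        default:  strcat(out, "+");   break;   /* '.' -> up        */
        }
    }
}

int main(void) {
    char a[MAXLEN] = "+", b[MAXLEN];
    char *cur = a, *nxt = b;
    for (int i = 0; i < 5; i++) {              /* 5 rewriting generations */
        rewrite(cur, nxt);
        char *t = cur; cur = nxt; nxt = t;
    }
    /* interpret the final string as a melody over a C major scale */
    const int scale[] = {60, 62, 64, 65, 67, 69, 71, 72};
    int deg = 0;
    for (char *p = cur; *p; p++) {
        if (*p == '+' && deg < 7) deg++;
        if (*p == '-' && deg > 0) deg--;
        printf("%d ", scale[deg]);
    }
    putchar('\n');
    return 0;
}
```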





  • https://en.wikipedia.org/wiki/Pop_music_automation - field of study among musicians and computer scientists with a goal of producing successful pop music algorithmically. It is often based on the premise that pop music is especially formulaic, unchanging, and easy to compose. The idea of automating pop music composition is related to many ideas in algorithmic music, Artificial Intelligence (AI) and computational creativity.


Live

BackupBand

  • BackupBand - a music auto-arranger. It has a virtual drummer, bassist, and rhythm guitarist. These 3 "musicians" follow your chord changes live (as you play some MIDI instrument, such as a keyboard) and they play along with you in perfect time. It's like having a live rhythm section backing you up. The rhythm section knows how to play in 60 different styles such as Rock, Disco, HipHop, Heavy Metal, Reggae, Swing, various Latin styles, etc. You can also create your own styles for them to play. The bassist plays a Rickenbacker, Fender Precision, synth, and double (acoustic) bass. The guitarist plays a Les Paul, steel string, and nylon string. The drummer plays 6 kits. You can also create your own multi-sampled guitars, basses, and kits for them to play.


Expresseur

  • Expresseur - play any score, improvise over a chord grid, or invent new instruments, even if you are not a musician. Load a MusicXML score and Expresseur compiles its notes; you interpret the rhythm with your own feeling, interacting with the other musicians and singers. Select a list of chords and play the chords, the bass, and the scales, always in tune. Connect any sensor and script new music logic.

Impro-Visor

  • Impro-Visor - short for “Improvisation Advisor”, is a music notation program designed to help jazz musicians compose and hear solos similar to ones that might be improvised. The objective is to improve understanding of solo construction and tune chord changes. There are other, secondary, things it can do, such as improvise on its own. It has also been used for transcription. Because rhythm-section (e.g. piano, bass, drums) accompaniment is automatically generated from chords, Impro-Visor can be used as a play-along device. Now having a wider array of accompaniment styles, its use is not limited to jazz.

PianoAI

  • PianoAI - code for providing an augmented piano playing experience. When run, this code will provide computer accompaniment that learns in real-time from the human host pianist. When the host pianist stops playing for a given amount of time, the computer AI will then improvise in the space using the style learned from the host.


Prearranged

MMA

  • MMA—Musical MIDI Accompaniment - an accompaniment generator. It creates MIDI tracks for a soloist to perform over from a user supplied file containing chords and MMA directives. MMA is very versatile and generates excellent tracks. It comes with an extensive user-extendable library with a variety of patterns for various popular rhythms, detailed user manuals, and several demo songs.


  • LinuxBand - a GUI front-end for MMA (Musical MIDI Accompaniment). Type in the chords, choose the groove and LinuxBand will play a musical accompaniment for you. It’s an open source alternative to Band-in-a-Box.


  • LeMMA - a simple GUI front-end written in Python for MMA (Musical MIDI Accompaniment, also written in Python), made so that chord progressions can be churned out easily. Just enter the chords, select the grooves and press “Play”. Should work on both Linux and Windows.

to sort

Ryth(M)aid

  • Ryth(M)aid + GUI - a little GUI jazz practice program which plays bass, drums and piano tracks based on a given set of chord changes and probabilities. Uses TSE3 for the MIDI code and GTK+ for the GUI.


SoundHelix

  • SoundHelix - a free, versatile and flexible Java framework for composing and playing algorithmic random music based on constrained random generation (CRG). SoundHelix is an algorithmic random music generator (including a built-in MIDI sequencer) which can play generated songs on MIDI devices in real-time. It can also write the generated songs as MIDI files.

Computoser

Machina

  • Machina - a MIDI sequencer based on Finite State Automata. A machine can be constructed manually from the user interface, recorded from MIDI input (free-form or step), or loaded from a MIDI file. The probability of arcs can be manipulated to build a machine that produces structured but constantly changing output. This way, Machina can be used as a generative recording tool that plays back patterns similar to, but not identical to, the original input.


Infno

  • Infno - an algorithmic generator of electronic dance music fully implemented in SuperCollider 3, the latest generative music work in a line of 'Infinite Length Pieces'. The program attempts to model the production of electropop and dance music styles with a closer union of parts than typical in many previous algorithmic composition systems. Voices (including a percussion section, bass, chord and lead lines) are not independently created: the parts can cross-influence each other based on underlying harmonic ideas, rhythmic templates and already generated lines. The eventual system posits a potential for any one part to influence or in the extreme force the recalculation of another, both from top-down and bottom-up information impacting on compositional preference. In particular, dynamic programming is used to choose melodic lines for counter melodies under cost constraints of register, harmonic template, existing voices and voice leading heuristics.

improv


bassline

  • bassline - generates randomly wandering jazz-like basslines. It mostly plays one note per beat, but sometimes throws in a pair of swung quavers. It tends to keep moving either up or down, but sometimes it turns round and starts going the other way. It prefers intervals of one and two semitones, but from time to time it throws in larger intervals.


MGen

steve

Topiary

  • Topiary - a modular suite of (VST and other) plugins for real-time, live, musical pattern generation.


to sort



  • Tapis Bourgeois - a batch music generator for counterpoint and tonal music, written in Perl. It outputs a partially random MIDI file, constrained by a measure-wise configuration file.
  • Cycle Plus One is a musical pattern generator that can be used to explore interesting sonic experiences. Using a fixed rhythmic profile, a steady eighth note pulse, you can experiment with tonality and density within a specified cycle of beats. The application can help composers create a matrix of tones that can be exported as MIDI or musical XML. From there, the material can be worked with further in a sequencing or music notation program. Cycle Plus One is meant to be a starting point for experimentation, allowing the composer to play with variation using a minimal amount of musical material.
  • GRAMophone II - an algorithmic music composition generator. The music is generated using two kinds of formal grammar: Chomsky's regular grammar (Type 3) for a top-down approach to the composition and a reduced version of Lindenmayer grammar for a bottom-up approach.
  • https://github.com/jisungk/deepjazz - uses Keras & Theano, two deep learning libraries, to generate jazz music. Specifically, it builds a two-layer LSTM, learning from the given MIDI file [3]




  • Grammidity - a preliminary release of a reworking of a reworking of a program I wrote back in the mid 90's. This is a genetic programming system based on the idea of using a kind of grammar as the underlying gene. Grammars are stored in a raw format, then parsed into productions. Those productions are expanded. The resulting strings are then evaluated (usually in a kind of basic stack machine) and the evaluations rated and evolved.




  • Random Phase Music Generator - small program that generates random phase music. Phasing is the process of looping the same pattern of music on two or more tape recorders (running at slightly different speeds), so they will slowly shift out of synchronization and produce the out of phase effect. Play with this program to experiment with this technique!
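
As a rough illustration of the phasing process described above, the sketch below (a toy under assumed pattern lengths and periods, not code from the Random Phase Music Generator itself) loops the same short pattern at two slightly different periods and prints onset times so the gradual drift out of synchronization is visible.

```c
/* Phasing toy: the same 4-note loop at periods 2.00 s and 2.05 s. */
#include <stdio.h>

int main(void) {
    const int notes = 4;              /* notes per loop of the pattern  */
    const double period_a = 2.00;     /* loop length of copy A, seconds */
    const double period_b = 2.05;     /* copy B runs slightly slower    */

    for (int loop = 0; loop < 8; loop++) {
        for (int n = 0; n < notes; n++) {
            double ta = loop * period_a + n * period_a / notes;
            double tb = loop * period_b + n * period_b / notes;
            printf("note %d: A at %6.3f s, B at %6.3f s, drift %+.3f s\n",
                   n, ta, tb, tb - ta);
        }
    }
    return 0;
}
```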



  • JChordBox - a library tool that can generate backing tracks from a chord progression and a music style (containing music templates or grooves). A music style is described using an XML file and a MIDI file. You can generate an XML style file from a MIDI file by adding markers to delimit grooves (or music templates). An XML song file describes a chord progression and sets the music style to use. JChordBox comes with several command line tools (GenerateSong, CreateStyleFromMidiFile, SongPlayer …).


  • chasp - creates simple accompanying pieces in different genres. To accomplish this, ASP (answer set programming) is used to solve the chord progression problem based on rules from the theory of harmony. This results in a harmonic sequence that provides the basis for the creation of simple musical pieces by applying genre-specific templates through an additional imperative control framework.



  • https://github.com/igorski/molecular-music-generator - MMG is a simple command-line tool that generates musical patterns based on "the Molecular Music Box" by Duncan Lockerby. The properties of the algorithm can easily be defined in a JSON file, which is then rendered into a MIDI file that can in turn be opened in DAW music software or played back by synthesizers.



  • https://github.com/MannyPeterson/Mozart - the Mozart Digital Composer project is an attempt at a heuristics-based approach, in place of traditional neural networks, to creating computer-generated music. The basic principle is to use computer software, in this case a Java SE application, to compose and play, through a MIDI device, music that imitates as closely as possible music composed by a human. The project is under active development, and many planned features, such as the ability to select the type of scale (e.g. major, minor, Mixolydian) and key, are in the works.


  • https://github.com/chronopolis5k/Beatfox - a free Csound generative beat tool - part of a larger compositional environment I'm building. It will constantly create perfectly looped rhythms that can easily be dropped into a program like Ableton Live. Leave it running all night, and you'll wake up with hundreds of loops in your folder! It sounds similar to a Nord Drum or modular analog drums. It can also produce some 808-ish sounds.


  • Jnana - a generative musical accompaniment system integrated into Ableton Live. It has the ability to analyze MIDI input and generate new material in a similar style. It can analyze input in real-time or from desired clips within Ableton and can populate Ableton clips with new material.



  • Muzoti - a new approach to music composition based on a theory of human evolution. Compose with Muzoti and give your loved ones a completely unique album of classical music. [8]


  • QuasiMusic - turns quasiperiodic tilings of the plane into something like a MIDI version of a player-piano roll. This was inspired by a suggestion made by Akira Bergman in a discussion with John Baez. [9]


  • Randomposer - an automatic music composer by Stephan Beyer


One-line algorithmic C



  • https://github.com/kragen/ar_bytebeat - a bytebeat synthesizer implemented on the Arduino. It's not the first bytebeat synthesizer on the Arduino, but I think it's the first that does real-time composite video visualizations of the signal, using the TVout library hacked to remove its audio output.
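
The technique behind this section is "bytebeat": the whole piece is a single C expression of a sample counter t, evaluated once per 8-bit sample. The sketch below is a generic desktop example using one of the widely circulated formulas, not code from ar_bytebeat; pipe its output into something like `aplay -f U8 -r 8000` to hear it at a nominal 8 kHz.

```c
/* Minimal bytebeat: one expression of t per unsigned 8-bit sample. */
#include <stdio.h>

int main(void) {
    for (unsigned long t = 0;; t++)
        putchar((int)(t * ((t >> 12 | t >> 8) & 63 & t >> 4) & 0xFF));
    return 0;
}
```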


Pure Data



Max for Live

  • Jnana - the Jnana Live plug-in takes live input from the Ableton track and integrates it into the analysis each time a phrase has “ended”. The end of a phrase is determined simply by noticing when the input stops for a given amount of time. When an input phrase has completed, the plug-in can auto-generate a “response” to that phrase based on all the phrases seen so far.

Resources

  • Algorithmic.net - a lexicon of systems and research. This site provides a comprehensive research resource for computer aided algorithmic music composition, including over one-thousand research listings, over one hundred system listings, cross referenced links to research, links to software downloads and documentation, and web-based tools for searching and filtering the complete lexicon.



  • Procedural Audio Now! - a monthly Queen Mary University of London meetup for people interested in developing procedural audio systems for video games and other interactive media.


Methods

  • https://en.wikipedia.org/wiki/Musikalisches_Würfelspiel - a system for using dice to randomly 'generate' music from precomposed options. These 'games' were quite popular throughout Western Europe in the 18th century. Several different games were devised, some of which did not require dice, but merely 'choosing a random number.'
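
A minimal sketch of the dice-game idea, under simplifying assumptions: the historical games used a different lookup table for every bar position, while this toy uses a single table of placeholder bar labels and simply rolls two dice per bar.

```c
/* Dice-game toy: each bar is picked from a precomposed table by two dice. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define BARS 8          /* length of the generated piece, in bars       */
#define CHOICES 11      /* two dice give totals 2..12 -> 11 possibilities */

int main(void) {
    /* one precomposed fragment (here just a placeholder label) per total */
    const char *table[CHOICES] = {
        "bar-A", "bar-B", "bar-C", "bar-D", "bar-E", "bar-F",
        "bar-G", "bar-H", "bar-I", "bar-J", "bar-K"
    };
    srand((unsigned)time(NULL));
    for (int i = 0; i < BARS; i++) {
        int total = (rand() % 6 + 1) + (rand() % 6 + 1);   /* two dice */
        printf("bar %d: %s\n", i + 1, table[total - 2]);
    }
    return 0;
}
```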



  • https://en.wikipedia.org/wiki/Markov_chain#Music - employed in algorithmic music composition, particularly in software such as Csound, Max and SuperCollider. In a first-order chain, the states of the system become note or pitch values, and a probability vector for each note is constructed, completing a transition probability matrix. An algorithm is constructed to produce output note values based on the transition matrix weightings, which could be MIDI note values, frequency (Hz), or any other desirable metric.
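
The following is a small first-order Markov sketch of the procedure just described: a hand-written (entirely made-up) transition probability matrix over a five-note pitch set, sampled row by row to emit MIDI note numbers.

```c
/* First-order Markov melody: sample next note from the current row. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 5  /* states: C4 D4 E4 G4 A4 */
static const int pitch[N] = {60, 62, 64, 67, 69};

/* transition probability matrix: row = current state, column = next state */
static const double T[N][N] = {
    {0.10, 0.40, 0.30, 0.10, 0.10},
    {0.30, 0.10, 0.40, 0.10, 0.10},
    {0.20, 0.30, 0.10, 0.30, 0.10},
    {0.10, 0.10, 0.40, 0.10, 0.30},
    {0.30, 0.10, 0.10, 0.40, 0.10},
};

/* sample the next state from the current row's probability vector */
static int step(int state) {
    double r = (double)rand() / RAND_MAX, acc = 0.0;
    for (int j = 0; j < N; j++) {
        acc += T[state][j];
        if (r <= acc) return j;
    }
    return N - 1;
}

int main(void) {
    srand((unsigned)time(NULL));
    int s = 0;
    for (int i = 0; i < 32; i++) {   /* emit 32 notes as MIDI numbers */
        printf("%d ", pitch[s]);
        s = step(s);
    }
    putchar('\n');
    return 0;
}
```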


  • Computer Music Algorithms (4th Ed 2018) - explains algorithmic music and contains 57 programs, 20 styles and 24 chapters that will generate music of different styles, new each time executed, as a MIDI file. You need only a C compiler and a MIDI player to play the generated MIDI files. The 'styles' produce music ranging from ancient Greek music to the algorithm of Kircher (1650) to game music and the author's own unique fractal music, among others. The algorithms are explained in technical terms of music theory, including the special data structures constructed. There is a folder of C files for each chapter, which may be compiled and, when executed, generate the MIDI files. The last program added is the ancient Greek music program, which involved two years of intense research; the fractal music programs are the result of 20 years of development and improvement. Reverse music was nearly the author's dissertation topic and is solved in one of the chapters.



  • GERP - an attempt to generate stylistically valid EDM using human-informed machine-learning. We have employed experts (mainly Chris Anderson) to hand-transcribe 100 tracks in four genres: Breaks, House, Dubstep, and Drum and Bass. Aspects of transcription include musical details (drum beats, percussion parts, bass lines, melodic parts), timbral descriptions (i.e. “low synth kick, mid acoustic snare, tight noise closed hihat”), signal processing (i.e. the use of delay, reverb, compression and its alteration over time), and descriptions of overall musical form. This information is then compiled in a database, and machine analysed to produce data for generative purposes. Two different systems have been created to interpret this data: GESMI (created by Arne Eigenfeldt/loadbang) and GEDMAS (created by Chris Anderson/Pittr Patter). GEDMAS began producing EDM tracks in June 2012, while GESMI produced her first fully autonomous generation in March 2013. It is interesting to note the similarities of the systems (due to the shared corpus) and the differences (due to the different creative choices made in the implementation). Closed source?
  • Cellular automata and music - Take computers, mathematics, and the Java Sound API, add in some Java code, and you've got a recipe for creating some uniquely fascinating music. IBM Staff Software Engineer Paul Reiners demonstrates how to implement some basic concepts of algorithmic music composition in the Java language. He presents code examples and resulting MIDI files generated by the Automatous Monk program, which uses the open source jMusic framework to compose music based on mathematical structures called cellular automata.
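
A compact sketch of the cellular-automaton approach, not the Automatous Monk/jMusic code itself: an elementary CA (rule 90, an arbitrary choice) is evolved for a few generations, and the live cells of each generation are mapped onto a pentatonic scale.

```c
/* Elementary cellular automaton mapped to MIDI pitches (toy example). */
#include <stdio.h>
#include <string.h>

#define WIDTH 16
#define STEPS 12

int main(void) {
    const int rule = 90;                       /* elementary CA rule number */
    const int scale[] = {60, 62, 64, 67, 69};  /* C major pentatonic, MIDI  */
    const int nscale = 5;

    int cells[WIDTH] = {0}, next[WIDTH];
    cells[WIDTH / 2] = 1;                      /* single live seed cell */

    for (int t = 0; t < STEPS; t++) {
        printf("step %2d:", t);
        for (int i = 0; i < WIDTH; i++)
            if (cells[i])                      /* each live cell sounds one note */
                printf(" %d", scale[i % nscale] + 12 * (i / nscale));
        putchar('\n');

        /* apply the rule to the wrapped neighbourhood of every cell */
        for (int i = 0; i < WIDTH; i++) {
            int l = cells[(i + WIDTH - 1) % WIDTH];
            int c = cells[i];
            int r = cells[(i + 1) % WIDTH];
            next[i] = (rule >> (l << 2 | c << 1 | r)) & 1;
        }
        memcpy(cells, next, sizeof cells);
    }
    return 0;
}
```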


  • MusicAlgorithms - interactive tools that provide a unique learning experience for users, regardless of their musical training. Students of music composition can explore algorithmic composition, while others can create musical representations of models for the purpose of aural interpretation and analysis. Here, the algorithmic process is used in a creative context so that users can convert sequences of numbers into sounds.


Evolutionary

  • https://en.wikipedia.org/wiki/Evolutionary_music - the audio counterpart to evolutionary art, whereby algorithmic music is created using an evolutionary algorithm. The process begins with a population of individuals which by some means or other produce audio (e.g. a piece, melody, or loop), which is either initialized randomly or based on human-generated music. Then through the repeated application of computational steps analogous to biological selection, recombination and mutation the aim is for the produced audio to become more musical. Evolutionary sound synthesis is a related technique for generating sounds or synthesizer instruments. Evolutionary music is typically generated using an interactive evolutionary algorithm where the fitness function is the user or audience, as it is difficult to capture the aesthetic qualities of music computationally. However, research into automated measures of musical quality is also active. Evolutionary computation techniques have also been applied to harmonization and accompaniment tasks. The most commonly used evolutionary computation techniques are genetic algorithms and genetic programming.
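
A toy genetic-algorithm sketch of the loop described above: a population of short melodies is evolved by selection, crossover and mutation. The fitness function here (penalising large melodic leaps) is only a stand-in for the human listener that interactive evolutionary systems rely on.

```c
/* Toy evolutionary melody generator: selection, crossover, mutation. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define POP 16
#define LEN 8
#define GENS 40

static int melody[POP][LEN];

/* stand-in fitness: penalise large intervals between consecutive notes */
static int fitness(const int *m) {
    int score = 0;
    for (int i = 1; i < LEN; i++) {
        int leap = abs(m[i] - m[i - 1]);
        if (leap > 4) score -= leap;
    }
    return score;
}

static int rand_note(void) { return 60 + rand() % 13; }  /* C4..C5 */

int main(void) {
    srand((unsigned)time(NULL));
    for (int i = 0; i < POP; i++)
        for (int j = 0; j < LEN; j++)
            melody[i][j] = rand_note();

    for (int g = 0; g < GENS; g++) {
        /* crude selection: pick three at random, overwrite the weaker of
           {b, w} with a mutated crossover of the two parents a and b */
        int a = rand() % POP, b = rand() % POP, w = rand() % POP;
        if (fitness(melody[b]) > fitness(melody[a])) { int t = a; a = b; b = t; }
        if (fitness(melody[w]) > fitness(melody[b])) w = b;

        int cut = 1 + rand() % (LEN - 1);
        for (int j = 0; j < LEN; j++)
            melody[w][j] = (j < cut) ? melody[a][j] : melody[b][j];
        melody[w][rand() % LEN] = rand_note();             /* mutation */
    }

    /* print the fittest melody found, as MIDI note numbers */
    int best = 0;
    for (int i = 1; i < POP; i++)
        if (fitness(melody[i]) > fitness(melody[best])) best = i;
    for (int j = 0; j < LEN; j++) printf("%d ", melody[best][j]);
    putchar('\n');
    return 0;
}
```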

DarwinTunes

AlgoRhythmia

  • https://github.com/Xangis/AlgoRhythmia - a programmable virtual drum machine based on algorhythmic and evolving beats. An 8-channel drum machine and beat generator supporting .wav sample playback and MIDI. It generates patterns up to 4 measures long in any of six time signatures and at virtually any BPM. It allows the user to mutate or regenerate any or all of the drum sounds in a pattern or to edit them by hand via the pattern editing window.

Neural net


folk-rnn

Tensorflow

RBM Toadofsky


Clara

  • Clara: A Neural Net Music Generator – Christine Payne - an LSTM that composes piano music and chamber music. It has some parallels to Google’s Magenta project, although it’s an entirely separate project, and uses PyTorch, MIT’s music21, and the FastAI library. I use a 62 note range (instead of the full 88-key piano), and I allow any number of notes to play at each musical time step (in contrast with many generation models which insist on having always exactly 4 notes at a time, or in having much smaller note ranges). I trained my models using midi samples from Classical Archives, although the code should work for any piano midi files, as well as for many chamber music files.

Windows VST

  • Cube Breath - a standalone application for Windows created with SynthEdit and SAVIhost, but will also run as a VST effect. It is a realtime, fully automatic pop music generator. Audio input is vocoded in tune with the music, allowing the user to "instantly transform their shopping list, answering machine messages or office memos into floor-filling number one hit singles".


  • Breath Cube - employs the 'pop music generating' engine of Cube Breath and adds a 3-band synthetic voice to "challenge your listening skills". Breath Cube was created using SynthEdit and runs as a standalone with the included SAVIHost application or as a VST plug-in.


Commercial


  • Mixtikl - a dedicated, integrated and powerful multi-platform generative music editor, mixer, arranger and cell sequencer. It includes many modifiable generative music templates that you can easily mix together. To generate its sounds it features the Partikl Sound Engine, a powerful sound source comprising a modular synth with SoundFont (SF2)/DLS support plus live FX. - $
  • Noatikl 3 - an immensely deep and powerful app for generative MIDI music composition and sound design; a generative music composer for iOS, Mac, Windows, VST/AU. - $