Creative coding
General
See also Effects, Dataflow, Audio, Graphics, Visuals, Generative, Web Audio, etc.
- https://en.wikipedia.org/wiki/Creative_coding - a type of computer programming in which the goal is to create something expressive instead of something functional. It is used to create live visuals and for VJing, as well as creating visual art and design, art installations, projections and projection mapping, sound art, advertising, product prototypes, and much more.
- https://github.com/terkelg/awesome-creative-coding - Generative Art, Data visualization, Interaction Design, Resources.
- https://github.com/jasonwebb/morphogenesis-resources - Comprehensive list of resources on the topic of digital morphogenesis (the creation of form through code). Includes links to major articles, code repos, creative projects, books, software, and more.
- https://github.com/GeWu-Lab/awesome-audiovisual-learning - A curated list of audio-visual learning methods and datasets.
- https://github.com/DolbyIO/awesome-audio - A curated list of awesome audio technology resources for developers
- PDF: The Algorithmic Beauty of Plants - a book by Przemyslaw Prusinkiewicz and Aristid Lindenmayer.
- https://en.wikipedia.org/wiki/The_Algorithmic_Beauty_of_Plants - a book by Przemyslaw Prusinkiewicz and Aristid Lindenmayer. It's notable as it is the first comprehensive volume on the computer simulation of certain patterns in nature found in plant development (L-systems). The book is no longer in print but is available free online.
- Learn FluCoMa - The Fluid Corpus Manipulation project (FluCoMa) instigates new musical ways of exploiting ever-growing banks of sound and gestures within the digital composition process, by bringing breakthroughs of signal decomposition DSP and machine learning to the toolset of techno-fluent computer composers, creative coders and digital artists.
Audio programming
Less GUI, more code.
See also Dataflow, Computing#Programming, Audio, Notation
- https://github.com/SoundDevelopment/sound-developing-links - a collection of links relating to sound and audio development.
- https://en.wikipedia.org/wiki/Comparison_of_audio_synthesis_environments - typically consist of an audio programming language (which may be graphical) and a user environment to design/run the language in. Although many of these environments are comparable in their abilities to produce high-quality audio, their differences and specialties are what draw users to a particular platform. This article compares noteworthy audio synthesis environments, and enumerates basic issues associated with their use.
- https://en.wikipedia.org/wiki/List_of_audio_programming_languages - including languages optimized for sound production, algorithmic composition, and sound synthesis.
- https://en.wikipedia.org/wiki/Digital_filter - a system that performs mathematical operations on a sampled, discrete-time signal to reduce or enhance certain aspects of that signal. This is in contrast to the other major type of electronic filter, the analog filter, which is an electronic circuit operating on continuous-time analog signals. A digital filter system usually consists of an analog-to-digital converter (ADC) to sample the input signal, followed by a microprocessor and some peripheral components such as memory to store data and filter coefficients etc. Program instructions (software) running on the microprocessor implement the digital filter by performing the necessary mathematical operations on the numbers received from the ADC. In some high-performance applications, an FPGA or ASIC is used instead of a general purpose microprocessor, or a specialized digital signal processor (DSP) with specific paralleled architecture for expediting operations such as filtering. Digital filters may be more expensive than an equivalent analog filter due to their increased complexity, but they make practical many designs that are impractical or impossible as analog filters. Digital filters can often be made very high order, and are often finite impulse response filters, which allows for linear phase response. When used in the context of real-time analog systems, digital filters sometimes have problematic latency (the difference in time between the input and the response) due to the associated analog-to-digital and digital-to-analog conversions and anti-aliasing filters, or due to other delays in their implementation. Digital filters are commonplace and an essential element of everyday electronics such as radios, cellphones, and AV receivers. A minimal one-pole filter in C is sketched after this list.
- https://github.com/jatinchowdhury18/WaveDigitalFilters - Circuit Modelling with Wave Digital Filters
- https://github.com/OceanSwift/DSP-Links-Resources - Curated list of DSP, Audio, MIDI, OSC related links and resources
- https://github.com/BillyDM/Audio-DSP-Resources - Curated list of audio DSP and plugin development resources
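As a minimal illustration of the digital-filter idea described in the filter entry above (arithmetic on each incoming sample plus a bit of state), here is a one-pole low-pass sketch in plain C. The coefficient formula is the usual RC/exponential-smoothing approximation; the struct and function names are made up for this example.

```c
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* One-pole low-pass: y[n] = y[n-1] + a * (x[n] - y[n-1]).
   The coefficient comes from the usual RC / exponential-smoothing
   approximation: a = 1 - exp(-2*pi*fc/fs). Compile with: gcc lp.c -lm */
typedef struct {
    float a; /* smoothing coefficient, 0 < a <= 1 */
    float z; /* previous output sample y[n-1]     */
} one_pole_lp;

static void one_pole_lp_init(one_pole_lp *f, float cutoff_hz, float sample_rate)
{
    f->a = 1.0f - expf(-2.0f * (float)M_PI * cutoff_hz / sample_rate);
    f->z = 0.0f;
}

static void one_pole_lp_process(one_pole_lp *f, float *buf, int n)
{
    for (int i = 0; i < n; i++) {
        f->z += f->a * (buf[i] - f->z); /* move the state towards the input */
        buf[i] = f->z;
    }
}

int main(void)
{
    float impulse[8] = { 1, 0, 0, 0, 0, 0, 0, 0 };
    one_pole_lp lp;
    one_pole_lp_init(&lp, 1000.0f, 44100.0f);
    one_pole_lp_process(&lp, impulse, 8);
    for (int i = 0; i < 8; i++)
        printf("%f\n", impulse[i]); /* decaying impulse response */
    return 0;
}
```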
How-tos
- Musicdsp.org — Musicdsp.org documentation - a collection of algorithms, thoughts and snippets, gathered for the music dsp community. Most of this data was gathered by and for the people of the splendid Music-DSP mailing list at http://sites.music.columbia.edu/cmc/music-dsp/
- Hack Audio - Here you will find informational content related to audio and computer programming, as well as a community of coders/engineers interested in music. New information will be posted on a weekly basis for a range of experience levels – from beginner to professional. If you feel like you don’t have a lot of experience with programming, that’s great! This blog is meant to help you go from zero to hacker hero.
- Dr. Lex' Site: Programming Volume Controls
- PDF: Alias-Free Digital Synthesis of Classic Analog Waveforms - Techniques are presented for alias-free digital synthesis of classical analog synthesizer waveforms such as pulse train and sawtooth waves.
- The Scientist and Engineer's Guide to Digital Signal Processing - Table of Contents - By Steven W. Smith, Ph.D.
- Introduction to Signal Processing - Sophocles J. Orfanidis
- Think DSP by Allen B. Downey
- PDF: The Ardour DAW – Latency Compensation and Anywhere-to-Anywhere Signal Routing Systems - Robin Gareus PhD thesis, 2017-12.
- Fiview - freeware application for Windows, Linux and Mac OSX that can be used to design and view digital filters. It also makes it very easy to compare different filters by allowing you to switch between them using the digit keys, and it generates efficient and readable public domain example code that can be used directly in an application. It is released under the GNU GPL. Much of the underlying filter design code was based on mkfilter from Tony Fisher -- see the source code for details. The resulting filters were improved by splitting them into separate stages, which improves the accuracy and stability of them enormously, especially for higher-order Bessel and Butterworth filters. The source also includes a library, fidlib (now with its own page), which can be used to design filters at run-time. The fiview utility generates fast generic compiler-optimisable example C code both using the frequencies provided, and also in a form that allows the frequencies to be provided at run-time via a call to fidlib. This permits applications the flexibility to do things like generating banks of similar filters at run-time according to run-time parameters.
- MicroModeler DSP - a web-based digital filter IDE. MicroModeler DSP solves the entire filter design, scaling and coding problem in one integrated process. Spend less time on the problems and more time on your project.
- Polarbear - a tool for designing filters in the complex domain. Filters can be designed by placing any number of poles and zeros on the z plane. From this the filter coefficients are calculated, and the filter can be applied in real time on an audio stream.
- FFmpegSource - (usually known as FFMS or FFMS2) is a cross-platform wrapper library around FFmpeg/libav. It gives you an easy, convenient way to say "open and decompress this media file for me, I don't care how you do it" and get frame- and sample-accurate access (usually), without having to bother with the sometimes less than straightforward and less than perfectly documented libav API.
- Making Audio Plugins - Martin Finke's Blog - An ongoing tutorial series about creating audio software. Run as a standalone app or as VST, VST3, AU, RTAS, AAX.
- Denormals - (or subnormals) are very small floating point numbers. When they drop below a threshold size, many CPUs (under many different conditions) exhibit considerable drops in performance when processing them. These drops can be as much as a factor of 100 times! What can be done? A common mitigation is sketched in the example after this list.
- Music Signal Processing - Department of Electrical Engineering - Columbia University
- Introduction to Digital Filters With Audio Applications - Julius O. Smith, CCRMA
- KVR Audio: Book: The Art of VA Filter Design 2.1.0
- Designing Sound by Andy Farnell
- Welsh's Synthesizer Cookbook: Synthesizer Programming, Sound Analysis, and Universal Patch Book
- Making Computers Sing - When most people think of computer music or sounds, “blips” and “bleeps” often are the first things to come to mind. Let’s dive in and look at some of the ways a computer musician would make this sound, comparing Max/MSP and Pure Data, Csound, SuperCollider and ChucK, and Faust.
- https://github.com/jatinchowdhury18/ComplexNonlinearities - contains recent research exploring purely digital complex nonlinear signal processing.
- https://github.com/aike/audiolang - Audio Language Examples
```bash
cat /dev/urandom | hexdump -v -e '/1 "%u\n"' | awk '{ split("0,2,4,5,7,9,11,12",a,","); for (i = 0; i < 1; i+= 0.0001) printf("%08X\n", 100*sin(1382*exp((a[$1 % 8]/12)*log(2))*i)) }' | xxd -r -p | aplay -c 2 -f S32_LE -r 16000
```
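For the denormals entry above, the usual answers to "what can be done?" are either to enable flush-to-zero/denormals-are-zero on the CPU or to keep feedback paths out of the denormal range in software. A rough C sketch of the second approach (the threshold and constants are illustrative, not canonical):

```c
#include <math.h>
#include <stdio.h>

/* Two standard mitigations for denormal slowdowns in feedback paths
   (filter states, reverb tails, envelopes):
   1. snap very small values to exactly zero in software, or
   2. enable flush-to-zero / denormals-are-zero in hardware, e.g. on x86 SSE:
        #include <xmmintrin.h>
        _mm_setcsr(_mm_getcsr() | 0x8040);   // FTZ | DAZ bits
   The threshold used below is illustrative only. */
static float undenormalize(float x)
{
    return (fabsf(x) < 1.0e-15f) ? 0.0f : x;
}

int main(void)
{
    /* An exponentially decaying feedback value would otherwise spend a long
       time creeping through the denormal range once it gets very small. */
    float fb = 1.0f;
    for (int i = 0; i < 100000; i++)
        fb = undenormalize(fb * 0.999f);
    printf("final feedback value: %g\n", fb);
    return 0;
}
```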
Historical
- CCRMA @ The Age of Noise Festival - Colombia, Juan Reyes (et al.), 2001
MUSIC-N
1957
- https://en.wikipedia.org/wiki/MUSIC-N - refers to a family of computer music programs and programming languages descended from or influenced by MUSIC, a program written by Max Mathews in 1957 at Bell Labs. MUSIC was the first computer program for generating digital audio waveforms through direct synthesis. It was one of the first programs for making music (in actuality, sound) on a digital computer, and was certainly the first program to gain wide acceptance in the music research community as viable for that task.
The world's first computer-controlled music was generated in Australia by programmer Geoff Hill on the CSIRAC computer which was designed and built by Trevor Pearcey and Maston Beard. However, CSIRAC produced sound by sending raw pulses to the speaker, it did not produce standard digital audio with PCM samples, like the MUSIC-series of programs.
Less obviously, MUSIC can be seen as the parent program for: RTSKED (a later RealTime Scheduling language by Max Mathews), Max/MSP, Pure Data, AudioMulch, SuperCollider, JSyn, Common Lisp Music, ChucK, or any other computer synthesis language that relies on a modular system (e.g. Reaktor).
- Music N – Musica Informatica - Music N is a name, coined retrospectively, which commonly indicates a set of computer music languages developed over forty years. Although made by different people and in different contexts, these languages share certain characteristics that have led them, in effect, to be spoken of as a single family.
MUSIC-V
1969
- Music V – Musica Informatica - Music V is a sound synthesis software. It is among the best known of the Music N family of computer music programming languages, and certainly the first to spread beyond the boundaries of the United States. Ported to a PDP10, Music V became the Mus10 music compiler system and played scores composed in Leland Smith's SCORE language.
- https://github.com/vlazzarini/MUSICV - Max Mathew's MUSIC V synthesis program, prepared by Bill Schottstaed for gfortran, with additions and fixes by Victor Lazzarini.
- https://github.com/asousadias/Risset_CsoundPatches - a working in progress of some Csound versions of Jean-Claude Risset's Music V orchestras and scores.
MUSIC-10
SCORE
- https://en.wikipedia.org/wiki/SCORE_(software) - 1972, a program useful for managing the scores of other Music N systems, such as Music IV and Music V.
- SCORE-11 was originally designed for Vercoe’s MUSIC11 system. MUSIC11, which was written in PDP-11 assembly language, was replaced in 1986 by CSOUND, a version of the program written in the programming language C, and which runs on many different computer systems. SCORE-11 works well with either version of Vercoe’s program, which will be referred to as CSOUND in this manual except where the distinction is important.
SCORE-11 (Brinkman, 1981 & 1990) is a note-list preprocessor for the CSOUND compiler, which was written by Barry Vercoe at MIT. SCORE-11 was written by Alexander Brinkman of the Eastman School of Music. The SCORE-11 input syntax is based, with some important extensions, on the well known “SCORE” program (used on the Stanford-Ircam MUS10 system) by Leland Smith (Smith, 1972 & 1980). A composer who is familiar with either preprocessor will have no problems changing to the other one. This will reduce dependency on a specific music system. Some features of SCORE were not implemented in SCORE-11. These are mostly the features that allow data to be copied from one instrument block to another. Several features were added to SCORE-11 to make it very powerful.
SCORE-11 is written in the PASCAL programming language, increasing its potential for exportability. You will find SCORE-11 to be of great help in the process of defining the hundreds of events that make up a composition. In the following pages you will find a comprehensive manual introducing and describing in detail the various features of SCORE-11.
Cmusic
- Cmusic - developed in 1980 by Richard Moore at the Computer Audio Research Laboratory (CARL) of the Center for Music Experiment at the University of California at San Diego (UCSD). Moore, who had worked alongside Max Mathews on the development of GROOVE, at first thought to create a new version of Music V upgraded to the C language, but subsequently preferred to write completely new software.
- https://github.com/vlazzarini/cmusic - the sources and makefiles for a build of the classic CARL cmusic and related programs. The build has been tested on OSX and Linux, and although a number of warnings are issued, it compiles and links successfully (but see 'Known issues')
SSSP
- http://www.billbuxton.com/SSSP.html - Structured Sound Synthesis Project
In designing the system, we decided early on to adopt a highly interactive approach to the design of the human interface. Batch processing as in Music V (Mathews: 1969) is an alternative, but one which widely separates the composer and the program, causing serious delays in the feedback loop. We feel a score editor must be interactive because there are facets of the task which demand control and aesthetic judgment by the composer in an interactive and exploratory manner. Several modes of interaction have previously been used in music systems, such as alphanumeric text as in MUSIC10 (Smith: 1978), voice recognition (Tucker, Bates, Frykberg, Howarth, Kennedy, Lamb, Vaughan: 1977), and piano-type keyboard (New England Digital Corp.: 1978). In our work we have adopted a bias towards graphics-based interaction (Baecker: 1979; Newman and Sproull: 1979) in the belief that this approach can make a significant contribution towards an effective human interface. First, music lends itself well to representations in the visual domain. Second, the task of editing music is complex in the sense that there are many parameters and commands to be manipulated and controlled; this complexity can be reduced by the graphic representation of information. Third, previous work (Pulfer: 1972; Tanner: 1972; Vercoe: 1975) indicates that more congenial interfaces can be constructed using dynamic graphics techniques.
- YouTube: SSSP Overview
SAMBOX
1977
The compiler was replaced in 1977 with dedicated synthesis hardware in the form of the Systems Concepts Digital Synthesizer (built by Peter Samson and known as the "Samson Box"). The Samson Box was capable of utilizing many types of synthesis techniques such as additive synthesis, frequency modulation, digital filtering and some analysis-based synthesis methods. The PLA language, written by Bill Schottstaedt, allowed composers to specify parametric data for the Samson Box as well as for other sound processing procedures on the PDP10 mainframe (and on its eventual replacement, a Foonly F4). On April 3, 1992, the Foonly and Samson Box were officially retired.
- http://120years.net/the-samson-box-or-systems-concepts-digital-synthesizer-peter-samson-usa-1977/ - Systems Concepts Digital Synthesizer
- PLA - 1983
HMSL
- https://en.wikipedia.org/wiki/Hierarchical_Music_Specification_Language - a music programming language written in the 1980s by Larry Polansky, Phil Burk, and David Rosenboom at Mills College. Written on top of Forth, it allowed for the creation of real-time interactive music performance systems, algorithmic composition software, and any other kind of program that requires a high degree of musical informatics. It was distributed by Frog Peak Music, and runs with a very light memory footprint (~1 megabyte) on Macintosh and Amiga systems.
Unlike CSound and other languages for audio synthesis, HMSL is primarily a language for making music. As such, it interfaces with sound-making devices through built-in MIDI classes. However, it has a high degree of built-in understanding of music performance practice, tuning systems, and score reading. Its main interface for the manipulation of musical parameters is through the metaphor of shapes, which can be created, altered, and combined to create a musical texture, either by themselves or in response to real-time or scheduled events in a score. HMSL has been widely used by composers working in algorithmic composition for over twenty years. In addition to the authors (who are also composers), HMSL has been used in pieces by Nick Didkovsky, The Hub, James Tenney, Tom Erbe, and Pauline Oliveros. A Java port of HMSL was developed by Nick Didkovsky under the name JMSL, and is designed to interface to the JSyn API.
Music Mouse
- https://en.wikipedia.org/wiki/Music_Mouse - an algorithmic musical composition software developed by Laurie Spiegel. Spiegel's best known and most widely used software, "Music Mouse - An Intelligent Instrument" (1986) is for Macintosh, Amiga and Atari computers. The "intelligent instrument" name refers to the program's built-in knowledge of chord and scale convention and stylistic constraints. Automating these processes allows the user to focus on other aspects of the music in real time. In addition to improvisations using this software, Spiegel composed several works for "Music Mouse", including Cavis muris in 1986, Three Sonic Spaces in 1989, and Sound Zones in 1990. She continued to update the program through Macintosh OS 9, and as of 2012, it remained available for purchase or demo download from her website.
Image/ine
- Image/ine - the first piece of software (for normal computers) that allowed users to manipulate uncompressed video in real time. Limited, at the time, to 320x240 pixels at some 10 frames per second (the Macintosh 8600 was the dream machine), it nevertheless proved a point: artistic quality and stage guts made lack of frame rate and image quality of secondary interest; real time video manipulation could be done! Soon others followed, all with their strengths and weaknesses. What set Image/ine apart, and sets ImX apart, is that this is software for video people, not for musicians and not for programmers.
Lisp
Common Music
1989
- Common Music (CM) - a music composition system that transforms high-level algorithmic representations of musical processes and structure into a variety of control protocols for sound synthesis and display. Its main user application is Grace (Graphical Realtime Algorithmic Composition Environment) a drag-and-drop, cross-platform app implemented in JUCE (C++) and S7 Scheme. In Grace musical algorithms can run in real time, or faster-than-real time when doing file-based composition. Grace provides two coding languages for designing musical algorithms: S7 Scheme, and SAL, an easy-to-learn but expressive algol-like language.
Common Music can write scores in several different syntaxes (currently CLM, CMN, Music Kit, MIDI, CSound and Paul Lansky's real-time mixing program, RT). The scores can then be rendered on workstations using any of the target synthesis programs. For example, CLM (Common Lisp Music, written by Bill Schottstaedt) is a widely used and fast software synthesis and signal processing package that can make use of multiple Motorola 56001 DSPs.
Pla is the intellectual ancestor of CM.
- https://github.com/andersvi/cm2 - Common Music v2.0
- https://github.com/ormf/cm - Common Music 2.12 by Rick Taube. Also check out the related cm-incudine, cm-fomus, cm-utils and cm-svg repos here.
Common Lisp Music
1997
- CLM (originally an acronym for Common Lisp Music) is a sound synthesis package in the Music V family. It provides much the same functionality as Stk, Csound, SuperCollider, PD, CMix, cmusic, and Arctic — a collection of functions that create and manipulate sounds, aimed primarily at composers (in CLM's case anyway). The instrument builder plugs together these functions (called generators here), along with general programming glue to make computer instruments. These are then called in a note list or through some user interface (provided by Snd, for example).
- https://github.com/defaultxr/cl-wav-synth - a wav sample editor. It comes in two parts, the main library for manipulating wav files and a (Mc)CLIM interface with a full lisp listener, a sample pane editor, a spectrum pane editor and a song pane editor.
- https://github.com/defaultxr/cl-alsaseq - Common Lisp interface to ALSA MIDI.
cl-patterns
- https://github.com/defaultxr/cl-patterns - A library for writing patterns to generate or process (a)musical sequences of mathematically (un)related (non-)compound values in Lisp. Put more simply, cl-patterns is a system for making music and noise via Lisp code.
Ronin
- https://github.com/hundredrabbits/Ronin - a procedural graphics tool designed to automate simple graphical tasks, like resizing, cropping, coloring, and generating algorithmic images. It interprets a minimal dialect of LISP; look at the examples to better understand how this all works. The library is constantly being updated, revealing new applications for Ronin; you can see the list of available functions here. Most of our iconography and designs were created with both Ronin and Dotgrid.
tosort
Incudine
- Incudine - useful for designing software synthesizers or sound plugins from scratch, exploiting the expressive power of Common Lisp, without the constraint of using pre-built unit generators. It is also a compositional tool that allows one to produce high quality sounds controllable at the sample level, defining and redefining the digital signal processors and the musical structures on-the-fly. Incudine introduces the Virtual UGen (VUG), a new mechanism for defining primitive unit generators by writing concise and reusable code. The definition of a VUG contains lisp code and/or other nested VUGs (or "real" UGens), and it is re-arranged and compiled efficiently during the creation of a DSP.
- https://github.com/titola/incudine - mirror
thundersnow
- thundersnow - a work-in-progress digital audio workstation and live coding laboratory in Lisp. It builds upon cl-patterns' (non-)deterministic (non-)musical composition and performance functionality with a set of graphical control surfaces and visualizations, as well as bdef's audio buffer and analysis abstractions to quickly generate and wrangle musical data. Under the hood it should support any of the sound servers these libraries do, which include SuperCollider (via cl-collider) and Incudine. Its interface is structured using the Common Lisp Interface Manager for that classic, truly dynamic Lisp "feel".
- https://github.com/defaultxr/thundersnow - Lisp/McCLIM-based digital audio workstation and live coding laboratory.
Csound
- Csound - a sound and music computing system which was originally developed by Barry Vercoe in 1985 at MIT Media Lab. Since the 90s, it has been developed by a group of core developers. Although Csound has a strong tradition as a tool for composing electro-acoustic pieces, it is used by composers and musicians for any kind of music that can be made with the help of the computer. Csound has traditionally been used in a non-interactive, score-driven context, but nowadays it is mostly used in a real-time context. Csound can run on a host of different platforms including all major operating systems as well as Android and iOS. Csound can also be called through other programming languages such as Python, Lua, C/C++, Java, etc.
- https://github.com/csound/plugins - Repository for Csound plugins which were originally in the main repository, and for new plugins as well.
- https://en.wikipedia.org/wiki/Csound - Csound was originally written at MIT by Barry Vercoe, based on his earlier system called Music 11, which in its turn followed the MUSIC-N model initiated by Max Mathews at the Bell Labs. Its development continued throughout the 1990s and 2000s, led by John ffitch at the University of Bath. The first documented version 5 release is version 5.01 on March 18, 2006.
- https://github.com/gogins/csound-extended-manual - An online version of the Csound Reference Manual that embeds the csound-extended-wasm build of Csound to play examples live in Web browsers.
- /chapter: A-The-Csound-Api - An application programming interface (API) is an interface provided by a computer system, library or application that allows users to access functions and routines for a particular task. It gives developers a way to harness the functionality of existing software within a host application. The Csound API can be used to control an instance of Csound through a series of different functions, thus making it possible to harness all the power of Csound in one’s own applications. In other words, almost anything that can be done within Csound can be done with the API. The API is written in C, but there are interfaces to other languages as well, such as Python, C++ and Java. Though it is written in C, the Csound API uses an object structure. This is achieved through an opaque pointer representing a Csound instance. This opaque pointer is passed as the first argument when an API function is called from the host program. A minimal C host is sketched after this list.
- cSounds.com:Mastering Csound - A collection of intermediate and advanced tutorials by Dr. Richard Boulanger
- http://www.csounds.com/mastering/em_09.html - modular synth
- https://github.com/rorywalsh/csoundPluginOpcodes - SDK was designed to help third-party plugin opcode development either in C or C++. Its only dependency is a Csound installation.
- https://github.com/gogins/cxx-opcodes - The CXX opcodes provide a means for Csound users to embed C++ source code in Csound orchestra code, and for Csound then to compile, load, link, and run C++ during the course of the Csound performance.
- https://github.com/vlazzarini/opcode_compiler - experimental opcode builds on the initial work by Michael Goggins, and is based on the llvm/clang interpreter example code. It provides a just-in-time C module compiler, which can be used to add new opcodes to Csound on-the-fly.
- https://github.com/tomara-x/witches - demon-summoning sequencers in the form of csound opcodes
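To make the opaque-pointer pattern from the Csound API entry above concrete, here is a minimal C host using the core csoundCreate/csoundCompile/csoundPerform calls; the header path and link flags (e.g. -lcsound64) vary by installation.

```c
#include <csound/csound.h>

/* Minimal Csound host: every API call takes the opaque CSOUND* instance as
   its first argument. Pass a .csd file on the command line, e.g.:
       ./host piece.csd
   Link against the Csound library (often -lcsound64). */
int main(int argc, char **argv)
{
    CSOUND *csound = csoundCreate(NULL);                            /* create an instance       */
    int result = csoundCompile(csound, argc, (const char **)argv);  /* compile from the args    */
    if (result == 0)
        result = csoundPerform(csound);                             /* run until the score ends */
    csoundDestroy(csound);                                          /* release the instance     */
    return (result >= 0) ? 0 : result;
}
```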
Live Coding with Csound
- Live Coding with Csound - Utility code and basic setup for live coding with Csound
- https://github.com/chronopolis5k/Csound-Live-Coding - Adapting Steven Yi's live coding work in Csound for granular processing, ambisonics and a few other personalizations.
PythonScore
- PythonScore - a modular Csound score environment for event generation, event processing, and the fashioning of musical structures in time.
CSoundQT
- CsoundQt - a frontend for Csound featuring a highlighting editor with autocomplete, interactive widgets and integrated help. It is cross-platform and aims to be a simple yet powerful and complete development environment for Csound. It can open files created by MacCsound. Csound is a musical programming language with a very long history, with roots in the origins of computer music. It is still being maintained by an active community and despite its age, is still one of the most powerful tools for sound processing and synthesis. CsoundQt hopes to bring the power of Csound to a larger group of people, by reducing Csound's initial learning curve, and by giving users more immediate control of their sound. It hopes to be both a simple tool for the beginner, as well as a powerful tool for experienced users.
Cabbage Studio
- Cabbage Studio - a Csound based DAW with a fully functional patching interface and development environment. Develop, prototype and test Csound based audio instruments on the fly using an integrated development solution that includes an embedded source code editor and rapid GUI designer. Cabbage Studio isn't just for users familiar with Csound, it can load a number of different plugin formats including VST, AU, and LADSPA and comes with over 100 high end audio plugins ready to use out of the box.
Other
- http://iainmccurdy.org/csound.html - a catalogue of example Csound files.
- CMask is a score file generator for Csound. Its main purpose is the generation of events to create textures of granular sounds. Versions for MacOS9, Win, Linux.
- Hadron Particle Synthesizer - The Hadron particle synthesizer is the ultimate creative tool for granular synthesis. It is available in different plugin formats (VST, AU and Max For Live), even though the graphic appearance is slightly different for the different plugins wrappers, the audio functionality of Hadron is the same in all formats.
- AVSynthesis Composition Tool - java
- Csound Journal: Composing With Csound In AVSynthesis - Dave Phillips
- Cabel - a graphical user interface for building csound instruments by patching modules similar to modular synthesizers. It is simply customizable through XML. It's intended for Csound beginners to learn some fundamentals about this wonderful language (although installing all dependencies of Cabel isn't as trivial as it should be) and for advanced Csound users, who want to test their ideas and play with input parameters in a graphical user interface. Old!
Cecilia
- Cecilia - a graphic user interface for the sound synthesis and sound processing package Csound. Cecilia enables the user to very quickly build graphic interfaces with sliders and curves to control Csound instruments. It is also an editor for Csound with syntax highlighting and a built-in reference. Cecilia is also a great tool to explore the parameters of a new opcode in an interactive and intuitive way.
- https://web.archive.org/web/20000818030311/http://www.musique.umontreal.ca:80/Org/CompoElectro/CEC/ - old
Blue
- Blue - An Integrated Music Environment, powered by Csound. An open-source, cross-platform desktop application for composing music. Use visual tools together with text and code to create the music of your dreams.
pch2csd
- https://github.com/gleb812/pch2csd - The goal of this project is to (re)implement the Clavia Nord Modular G2 sound engine in Csound, a well-known sound and music computing system.
csound-expression
- https://github.com/spell-music/csound-expression - a Haskell framework for computer music. With the help of the library we can create our instruments on the fly. A couple of lines in the interpreter is enough to get a cool sound going out of your speakers. It can be used for simple daily sound-file processing or for full-blown live performances.
p5.Csound
- p5.Csound - a lightweight wrapper to the WebAssembly build of Csound. It provides all the power of Csound inside p5.js sketches. With over a thousand processing opcodes, it is one of the most extensive audio processing libraries available in a browser. This interface provides very little in the way of wrapper functions. In fact, it only provides two. After that everything is accessible through the Csound object.
Cmix / RTcmix
- RTcmix - An Open-Source, Digital Signal Processing and Sound Synthesis Language, one of the MUSIC-N family of computer music programming languages. RTcmix is descended from the MIX program developed by Paul Lansky at Princeton University in 1978 to perform algorithmic composition using digital audio soundfiles on an IBM 3031 mainframe computer. After synthesis functions were added, the program was renamed Cmix in the 1980s. Real-time capability was added by Brad Garton and David Topper in the mid-1990s, with support for TCP socket connectivity, interactive control of the scheduler, and object-oriented embedding of the synthesis engine into fully featured applications.
RTcmix has a number of unique (or highly unusual) features when compared with other synthesis and signal processing languages. For one, it has a built-in MINC parser, which enables the user to write C-style code within the score file, extending its innate capability for algorithmic composition and making it closer in some respects to later music software such as SuperCollider and Max/MSP. It uses a single-script instruction file (the score file), and synthesis and signal processing routines (called instruments) exist as compiled shared libraries. This is different from MUSIC-N languages such as Csound where the instruments exist in a second file written in a specification language that builds the routines out of simple building blocks (organized as opcodes or unit generators). RTcmix has similar functionality to Csound and other computer music languages, however, and their shared lineage means that scripts written for one language will be extremely familiar-looking (if not immediately comprehensible) to users of the other language.
- RT-cmix -- Using CMIX in Real Time - Several computer music "languages" intended to facilitate the coding of sound synthesis and signal-processing algorithms have become popular among practitioners of computer music (CSOUND, CMIX and CMUSIC, for example). Until quite recently, however, low-cost computing power was not available to allow these languages to run in real-time; instead the computer musician generally worked by reading and writing samples to and from disk files and listening to the results when computation was complete. Within the past few years smaller, general-purpose computers have become powerful enough to run digital sound synthesis and signal-processing algorithms in real time. This demonstration will show the capabilities of RTcmix, a set of extensions and modifications to the CMIX computer music language enabling it to function as a real-time programming environment. Included in the RTcmix package is a dynamic event scheduler and the ability to process disk files and external audio signals in real time. Thus RTcmix can be used to create interactive performance works in addition to providing a fast 'turnaround' for composers building tape-music pieces. Additional aspects of the package include a socket-based protocol for control of RTcmix by external processes, and "backwards compatibility" with all existing CMIX features and instruments. The implementation of RTcmix will be discussed in detail, as well as the rationale behind the design of the language. RTcmix is a unix-based language in the public domain, currently running under both Linux and IRIX.
Pascal
- https://github.com/BeRo1985/br808 - BR808 is a software synthesizer that originated in the demoscene and became a bit too big for 64k intros.
Smalltalk
- https://github.com/stpope/Siren9 - The Siren Music/Sound framework for Smalltalk
C
- Experimental music from very short C programs
- Experimental one-line algorithmic music - the 2nd iteration
- YouTube: Music from very short programs - the 3rd iteration. A minimal bytebeat program in this style is sketched after this list.
- Microtone - How tiny you ask? 128 bytes. The sound generation itself takes up just 14 lines of assembly, generating 31 bytes of machine code. The rest is taken up by the ELF header and the functions needed to open the sound device and output samples.
- Coding an Equalizer - This article expands the section on page 20 of my book The Audio Expert that describes briefly how digital equalizers work. As explained in the book, all equalizers are based on filters of various types. Computer code that implements a filter is called Digital Signal Processing, or DSP for short. Most digital filters emulate equivalent analog filters, and the common language for all filters is mathematics. Therefore, several trigonometry formulas are shown below, and there's no escaping this! But the basic operation of the computer code that implements an equalizer is not too difficult to follow, even if you don't understand the formulas. To keep this example as brief as possible, the code implements a simple high-pass filter having one pole (6 dB per octave). Formulas to implement other filter types including those used in parametric equalizers are shown on the Cookbook Formulae web page by Robert Bristow-Johnson.
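In the spirit of the "very short C programs" videos above, here is one complete bytebeat program built around a well-known formula from that scene (the formula and the aplay invocation are just examples, not taken from any specific entry above):

```c
#include <stdio.h>

/* Bytebeat: an integer expression of the running sample counter t, written
   out as an unsigned 8-bit sample stream. Listen with something like:
       ./a.out | aplay -f U8 -r 8000 */
int main(void)
{
    for (unsigned long t = 0;; t++)
        putchar((t * ((t >> 12 | t >> 8) & 63 & t >> 4)) & 0xff);
    return 0;
}
```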
ld_preload sounds
- https://github.com/gordol/ld_preload-sounds - Generates WAV output by hooking malloc() and read(). Adding support for other calls should be pretty easy, pull-requests are much welcomed! Also, it should go without saying... but I will say it anyway... this is experimental.
basicwav
- https://github.com/voidqk/basicwav - C99 library for saving/loading WAV files
sndlib
- io.c (read and write sound file data)
- headers.c (read and write sound file headers)
- audio.c (read and write sound hardware ports)
- sound.c (provide slightly higher level access to the preceding files)
- sndlib.h (header for the preceding files)
- sndlib2xen.c and sndlib-strings.h (tie preceding into s7, Ruby, or Forth)
- clm.c and clm.h (Music V implementation)
- clm2xen.c, vct.c and vct.h (tie clm.c into s7, Ruby, or Forth)
- xen.h, xen.c (the embedded language support)
libsndfile
- http://www.mega-nerd.com/libsndfile - a C library for reading and writing files containing sampled sound (such as MS Windows WAV and the Apple/SGI AIFF format) through one standard library interface. It is released in source code format under the GNU Lesser General Public License. A short reading example is sketched after this list.
- https://github.com/libsndfile/sndfile-tools - a collection of programs (written in C) to do interesting things with sound files. See the web page for more: http://libsndfile.github.io/sndfile-tools/
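A small sketch of the usual libsndfile reading pattern (open with an SF_INFO struct, then read interleaved floats in blocks); the file name is a placeholder and error handling is kept minimal.

```c
#include <stdio.h>
#include <sndfile.h>

/* Open a sound file, print its header info, and scan for the peak sample.
   Compile with: gcc peek.c -lsndfile */
int main(void)
{
    SF_INFO info = { 0 };
    SNDFILE *file = sf_open("example.wav", SFM_READ, &info); /* placeholder file name */
    if (!file) {
        fprintf(stderr, "open failed: %s\n", sf_strerror(NULL));
        return 1;
    }
    printf("%d Hz, %d channel(s), %lld frames\n",
           info.samplerate, info.channels, (long long)info.frames);

    float buf[4096], peak = 0.0f;
    sf_count_t n;
    while ((n = sf_read_float(file, buf, 4096)) > 0)   /* interleaved float samples */
        for (sf_count_t i = 0; i < n; i++) {
            float a = buf[i] < 0 ? -buf[i] : buf[i];
            if (a > peak)
                peak = a;
        }
    printf("peak amplitude: %f\n", peak);

    sf_close(file);
    return 0;
}
```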
libaudec
- https://github.com/zrythm/libaudec - a wrapper library over ffmpeg, sndfile and libsamplerate for reading and resampling audio files, based on Robin Gareus' audio_decoder code (x42/silan:audio_decoder@master). This library is meant to be linked in statically to larger projects. Until version 1.0 is released, the API is subject to change.
dr_libs
- https://github.com/mackron/dr_libs - Single file audio decoding libraries for C/C++.
- https://github.com/SpotlightKid/pydrlibs - Python bindings for the dr_libs audio decoding libraries
libsoundio
- libsoundio - a lightweight abstraction over various sound drivers. It provides a well-documented API that operates consistently regardless of the sound driver it connects to. It performs no buffering or processing on your behalf; instead exposing the raw power of the underlying backend. libsoundio is appropriate for games, music players, digital audio workstations, and various utilities. libsoundio is serious about robustness. It even handles out of memory conditions correctly.
- https://github.com/dkorpel/libsoundio-d - Translation from C to D of libsoundio
blargg resampler
- https://github.com/kode54/blargg_resampler - Simple FIR resampler for stereo signals
r8brain-free-src
- https://github.com/avaneev/r8brain-free-src - High-quality pro audio sample rate converter / resampler C++ library
clownresampler
- Progress on my Mega Drive emulator – clownmdemu – Clownacy's Corner
- https://github.com/Clownacy/clownresampler - This is a single-file library for resampling audio. It is written in C89 and dual-licenced under the terms of The Unlicence and the Zero-Clause BSD licence. In particular, this library implements a windowed-sinc resampler, using a Lanczos window.
hiir
- https://github.com/unevens/hiir - A header only ready to include mirror of the HIIR library by Laurent De Soras, an oversampling and Hilbert transform library in C++.
oversimple
- https://github.com/unevens/oversimple - A library for audio oversampling, which tries to offer a simple api while wrapping HIIR, by Laurent De Soras, for minimum phase antialiasing, and r8brain-free-src, by Aleksey Vaneev, for linear phase antialiasing.
sndfilter
- https://github.com/velipso/sndfilter - Algorithms for sound filters, like reverb, dynamic range compression, lowpass, highpass, notch, etc
Xenakis-Sieves
- https://github.com/rodneydup/Xenakis-Sieves - This program was originally written in BASIC by Iannis Xenakis and later translated to C by Gérard Marino. Gérard Marino's code included some libraries that, at the time of writing (2019), are now obsolete. Curtis Roads adapted the code to work with std libraries and Rodney DuPlessis corrected some further errors to create this version that will compile on most modern operating systems with a C compiler. There are two programs: PointsFromSieve, which generates points from a given sieve, and its complement, SieveFromPoints deduces a sieve from given points.
libtsp
- TSP Lab - Reports / Software - a library of routines for signal processing. It also includes a number of general purpose routines useful for program development. Programs using this library for filtering, LPC analysis/synthesis and resampling are available as part of the AFsp package.
miniBAE
Audio File Library
- Audio File Library is a C-based library for reading and writing audio files in many common formats. The Audio File Library provides a uniform API which abstracts away details of file formats and data formats. The same calls for opening a file, accessing and manipulating audio metadata (e.g. sample rate, sample format, textual information, MIDI parameters), and reading and writing sample data will work with any supported audio file format. The Audio File Library lets you work with audio data in whatever format is most convenient for you.
bloopsaphone
- https://github.com/localhost/bloopsaphone - arcade sounds and chiptunes for ruby and c
lyd
- Lyd - an embeddable signal processing language and engine. Suitable for, among other things, realtime audio effect and instrument synthesis and mixing. It can form the audio core of games, virtual instruments, real-time audio mixing and editing experiments. Lyd is currently mostly an application programmer's toy; out of the box lyd contains a mediocre autogenerated approximation of an OPL2 FM synthesizer in its patch set. When you launch lyd you can use it as a synthesizer with an attached MIDI keyboard or with MIDI sequencers supporting ALSA midi. Lyd should be able to enable processing plug-ins for various audio synthesis standards, exposing the lyd language and efficiency in interaction to allow live tweaking of the lyd code when composing.
Pyo
- Pyo - a Python module written in C to help DSP script creation. Pyo contains classes for a wide variety of audio signal processing. With pyo, the user will be able to include signal processing chains directly in Python scripts or projects, and to manipulate them in real time through the interpreter. Tools in the pyo module offer primitives, like mathematical operations on audio signals, basic signal processing (filters, delays, synthesis generators, etc.), but also complex algorithms to create sound granulation and other creative audio manipulations. pyo supports the OSC protocol (Open Sound Control) to ease communications between softwares, and the MIDI protocol for generating sound events and controlling process parameters. pyo allows the creation of sophisticated signal processing chains with all the benefits of a mature and widely used general programming language.
Allegro
- Allegro - a cross-platform library mainly aimed at video game and multimedia programming. It handles common, low-level tasks such as creating windows, accepting user input, loading data, drawing images, playing sounds, etc. and generally abstracting away the underlying platform. However, Allegro is not a game engine: you are free to design and structure your program as you like. Allegro 5 has the following additional features: Supported on Windows, Linux, Mac OSX, iPhone and Android, User-friendly, intuitive C API usable from C++ and many other languages, Hardware accelerated bitmap and graphical primitive drawing support (via OpenGL or Direct3D), Audio recording support, Font loading and drawing, Video playback, Abstractions over shaders and low-level polygon drawing
Soundpipe
- Soundpipe - a lightweight music DSP library written in C. It aims to provide a set of high-quality DSP modules for composers, sound designers, and creative coders. Soundpipe supports a wide range of synthesis and audio DSP techniques which include: Classic Filters (Moog, Butterworth, etc), High-precision and linearly interpolated wavetable oscillators, Bandlimited oscillators (square, saw, triangle), FM synthesis, Karplus-Strong instruments, Variable delay lines, String resonators, Spectral Resynthesis, Partitioned Convolution, Physical modeling, Pitch tracking, Distortion, Reverberation, Samplers and sample playback, Padsynth algorithm
- Sporth - SoundPipe fORTH, is a small stack-based audio programming language. For composers, Sporth is a different approach to making sound. Stack based languages are somewhat novel in the world of musical languages, and lend themselves well to modular sound design scenarios. Sporth syntax is simple to learn, and rewarding to master. Sound designers and composers fluent in languages like Csound, ChucK, and Supercollider will find Sporth a new and refreshing take on the same basic concepts. For developers, Sporth has a simple API that allows it to be used inside of other applications. In addition to compiling Sporth code, the API has access to other features of Sporth that would otherwise be unavailable, such as audio-rate software channels, and user defined function callbacks.
- http://paulbatchelor.github.io/proj/libline - an ANSI C library for generating audio-rate line segments and curves for computer-based music. Libline can easily interface with sample-accurate audio libraries like Soundpipe, and a local copy is used inside of Sporth via Polysporth.
- http://paulbatchelor.github.io/proj/spigot - a set of simple graphical interfaces for Sporth. It is available for download on github and CCRMA-hosted gitlab.
- http://paulbatchelor.github.io/proj/moons.html - Moons is an isorhythmic circular sequencer. It is written using a combination of C and C++. All visuals are created using OpenGL; all the sounds are synthesized in realtime using Sporth, Soundpipe, and RtAudio.
Patchwerk
- Patchwerk - a portable ANSI C library for constructing audio graphs and DSP signal chains. It is designed to be portable, fast, and memory efficient. The core of Patchwerk is written using the CWEB literate programming system. In addition to the core Patchwerk API, there is also an actively maintained Runt interface with a built-in collection of DSP nodes from Soundpipe. The goal of this Runt interface is to provide a similar-feeling and faster alternative to Sporth, a stack-based audio language previously created by the author.
Sound Open Firmware
- Sound Open Firmware - an open source audio DSP firmware and SDK that provides audio firmware infrastructure and development tools for developers who are interested in audio or signal processing on modern DSPs.
Kiss FFT
- https://sourceforge.net/projects/kissfft/ - A Fast Fourier Transform based upon the principle, "Keep It Simple, Stupid." Kiss FFT is a very small, reasonably efficient, mixed radix FFT library that can use either fixed or floating point data types. A minimal usage sketch follows this list.
- https://github.com/AWSM-WASM/PulseFFT - A WebAssembly implementation of the C Fast Fourier Transform library kissFFT
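A minimal Kiss FFT usage sketch: allocate a plan, transform a test tone, and print a few bin magnitudes. The FFT size and test frequency are arbitrary choices for this example, and kiss_fft.c must be compiled alongside it.

```c
#include <math.h>
#include <stdio.h>
#include "kiss_fft.h"

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define NFFT 1024

/* Forward complex FFT of a test tone placed at bin 10.
   Compile with something like: gcc fft_demo.c kiss_fft.c -lm */
int main(void)
{
    kiss_fft_cfg cfg = kiss_fft_alloc(NFFT, 0 /* forward */, NULL, NULL);
    kiss_fft_cpx in[NFFT], out[NFFT];

    for (int n = 0; n < NFFT; n++) {
        in[n].r = (kiss_fft_scalar)cos(2.0 * M_PI * 10.0 * n / NFFT);
        in[n].i = 0;
    }

    kiss_fft(cfg, in, out);

    for (int k = 0; k < 16; k++) /* magnitudes of the first few bins */
        printf("bin %2d: %8.2f\n", k, sqrt(out[k].r * out[k].r + out[k].i * out[k].i));

    kiss_fft_free(cfg);
    return 0;
}
```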
PFFFT
- https://bitbucket.org/jpommier/pffft/src/master/ - PFFFT does 1D Fast Fourier Transforms, of single precision real and complex vectors. It tries to do it fast, it tries to be correct, and it tries to be small. Computations do take advantage of SSE1 instructions on x86 cpus, Altivec on powerpc cpus, and NEON on ARM cpus. The license is BSD-like.
- https://github.com/marton78/pffft - A fork of Julien Pommier's Pretty Fast FFT (PFFFT) library, with several additions
mini_al
- https://github.com/dr-soft/mini_al - a simple library for playing and recording audio. It's focused on simplicity and has a very small number of APIs. C/C++, single file, public domain.
miniaudio
- https://github.com/mackron/miniaudio - Single file audio playback and capture library.
- https://github.com/AndreasOM/miniaudio-rs - Rust bindings for miniaudio C library.
tinywav
- https://github.com/mhroth/tinywav - A minimal C library for reading and writing (16-bit int & 32-bit float) WAV audio files.
RAGE
- https://github.com/concert/rage - Realtime Audio Graph Engine, provides an audio processing graph implementation tailored towards distributed audio workstations. The audio graph within RAGE is composed of processing elements and connections between elements which are managed by the RAGE host. RAGE processing elements can be both traditional data transforms (effects, mixers, etc) as well as data producers (samplers, synthesizers, etc) and data consumers (recorders, analysis plugins). The RAGE host manages the scheduling of both the low-latency audio computations and the higher latency subtasks such as loading a file from disk for sample replay. Additionally, RAGE provides session time interpolated parameters, freeing individual elements from this responsibility.
- Linux Audio Conference 2018: Distributed time-centric APIs with CLAPI
Digraph
- https://github.com/grame-cncm/digraph - a very simple, C++ 11 template-based, directed graph library. It is not designed to be general, but to suit the needs of the (next) Faust compiler.
LEAF
- https://github.com/spiricom/LEAF - Previously named OOPS, LEAF is a C library for Audio synthesis and processing created by Mike Mulshine and Jeff Snyder. The library consists of a set of high-level audio synthesis components (Oscillators, Filters, Envelopes, Delays, Reverbs, and other Utilities). Our primary use case is embedded audio computing on 32-bit ARM microcontrollers that can run "bare-metal" (without an OS), such as the STM32f4, STM32f7, and STM32H7. The code, however, is general enough to be used in many other situations as well. We have included a JUCE VST/AU generating template to test the library (2), and the python script we use to generate wavetables.
Most of these algorithms are sourced from other projects, especially the STK (Synthesis ToolKit) library and various discussions on the music-DSP mailing list. We also owe a lot to open source computer programming languages, such as Csound, ChucK, Pure Data, and SuperCollider. Other interesting projects to check out that similarly target embedded applications are: TeensyAudio (C++), Hoxton Owl (C++), Axoloti (C), and Mutable Instruments (C++).
sms-tools
- Spectral Modeling Synthesis Tools - MTG - Music Technology Group (UPF) - a set of techniques and software implementations for the analysis, transformation, and synthesis of musical sounds based on various spectral modeling approaches. These techniques can be used for synthesis, processing and coding applications, while some of the intermediate results might also be applied to other music related problems, such as sound source separation, musical acoustics, music perception, or performance analysis. The basic model and implementation were developed by Xavier Serra as part of his PhD thesis published 1989. Since then many extensions have been proposed at MTG-UPF and by other researchers.
- https://github.com/MTG/sms-tools - Sound analysis/synthesis tools for music applications written in python (with a bit of C) plus complementary teaching materials.
MTS-ESP
- https://github.com/ODDSound/MTS-ESP - a simple but versatile C/C++ library for adding microtuning support to audio and MIDI plugins. It allows for a single master plugin to simultaneously control the tuning of any number of connected client plugins across a DAW session.
Ficus
- https://github.com/resinbeard/ficus - a development library that provides your application with both multichannel playback/capture of mono wav audio files and all the flexibility and routing possibilities of a JACK client. Ficus is distributed as a group of source files; settings like the available number of input/output channels and storage banks are available as compile-time options through the configuration file, config.h
Snack Sound Toolkit
- Snack Sound Toolkit - designed to be used with a scripting language such as Tcl/Tk or Python. Using Snack you can create powerful multi-platform audio applications with just a few lines of code. Snack has commands for basic sound handling, such as playback, recording, file and socket I/O. Snack also provides primitives for sound visualization, e.g. waveforms and spectrograms. It was developed mainly to handle digital recordings of speech, but is just as useful for general audio. Snack has also successfully been applied to other one-dimensional signals.
Postfish
- https://github.com/ePirat/Postfish - a digital audio post-processing, restoration, filtering and mixdown tool. It works as a linear audio filter, much like a rack of analog effects. The first stage of the filter pipeline provides a bank of configurable per-channel processing filters for up to 32 input channels. The second stage provides mixdown of the processed input audio into a group of up to eight output channels. The third stage applies processing filters to the output group post-mixdown. The Postfish is a stream filter; feed it audio from a list of files or input stream, and it renders audio to standard out, as well as optionally providing a configurable audio playback monitor via a sound device. If the input audio is being taken from files, Postfish also provides simple forward/back/cue seeking and A-B looping control. The next major update of Postfish will also include automation to allow mixdown settings to be 'recorded' and applied automatically during rendering.
Open-DSP-Toolbox
- https://github.com/james34602/Open-DSP-Toolbox - A special purpose signal processing toolbox for special IIR filter design and transfer function form conversion, written in C.
ame
- https://github.com/AkiyukiOkayasu/ame - an open source C++ header-only library focused on fast audio processing on Cortex-M, such as i.MXRT.
scalemap
- https://github.com/maxwellpollack/scalemap - a string format for musical scales in C, C++, and JavaScript
avpack
- https://github.com/stsaz/avpack - a fast C library that can pack and unpack data to/from the popular multimedia container formats.
AIF-WAV-File-Lib
- https://github.com/tremblap/AIF-WAV-File-Lib - A simple audio import function with friendlier licensing
wavpack-stream
- https://github.com/dbry/wavpack-stream - A library for streaming audio compression, based on WavPack.
GStreamer
- GStreamer - a library for constructing graphs of media-handling components. The applications it supports range from simple Ogg/Vorbis playback and audio/video streaming to complex audio (mixing) and video (non-linear editing) processing. A minimal C example is sketched after this list.
- http://www.gstreamer.com - SDK
- https://github.com/matthew1000/gstreamer-cheat-sheet - This series of docs provides a cheat sheet for GStreamer on the command-line.
- YouTube: Using GStreamer - Jan Schmidt, LinuxConfAu 2018 - Sydney, Australia
- YouTube: Fun with GStreamer Pipelines - Linux.conf.au 2013
- https://github.com/metratec/gst-editor - Port of gst-editor to Gtk+ 3 and GStreamer 1.0
- https://github.com/RidgeRun/gst-interpipe - allows communication between two independent pipelines. The plug-in consists of two elements: interpipesink and interpipesrc.
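As a C-level counterpart to the command-line cheat sheet above, the standard gst_parse_launch pattern; the pipeline description is a placeholder that any gst-launch-1.0 string could replace.

```c
#include <gst/gst.h>

/* Build a pipeline from a textual description, play it to completion, then clean up.
   Compile with: gcc gst_demo.c $(pkg-config --cflags --libs gstreamer-1.0) */
int main(int argc, char *argv[])
{
    gst_init(&argc, &argv);

    /* Placeholder pipeline: any gst-launch-1.0 description works here. */
    GstElement *pipeline = gst_parse_launch(
        "audiotestsrc num-buffers=200 freq=440 ! audioconvert ! autoaudiosink", NULL);
    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    /* Block until an error or end-of-stream message arrives on the bus. */
    GstBus *bus = gst_element_get_bus(pipeline);
    GstMessage *msg = gst_bus_timed_pop_filtered(
        bus, GST_CLOCK_TIME_NONE, GST_MESSAGE_ERROR | GST_MESSAGE_EOS);

    if (msg)
        gst_message_unref(msg);
    gst_object_unref(bus);
    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
    return 0;
}
```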
PortMedia
- https://sourceforge.net/projects/portmedia - offers free, cross-platform, open-source I/O libraries for digital media including MIDI, video, and audio (see www.portaudio.com). Most work is in C to facilitate interoperability with many languages and systems.
- PortAudio - a free, cross-platform, open-source audio I/O library. It lets you write simple audio programs in 'C' or C++ that will compile and run on many platforms including Windows, Macintosh OS X, and Unix (OSS/ALSA). It is intended to promote the exchange of audio software between developers on different platforms. Many applications use PortAudio for audio I/O. A short callback example is sketched after this list.
- Playrec - a Matlab and Octave utility (MEX file) that provides simple yet versatile access to soundcards using PortAudio, a free, open-source audio I/O library. It can be used on different platforms (Windows, Macintosh, Unix) and access the soundcard via different host API including ASIO, WMME and DirectSound under Windows.
- https://github.com/RossBencina/RealTimeFileStreaming - Example of interfacing PortAudio real time audio with file I/O
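A minimal PortAudio callback sketch, assuming the PortAudio v19 headers and library are installed (link with -lportaudio); it plays roughly two seconds of a 440 Hz sine on the default output device:

```cpp
#include <portaudio.h>
#include <cmath>

struct SineState { float phase = 0.0f; };

// The callback runs on the audio thread; it must not block or allocate.
static int sineCallback(const void *, void *output, unsigned long frames,
                        const PaStreamCallbackTimeInfo *, PaStreamCallbackFlags,
                        void *userData)
{
    auto *state = static_cast<SineState *>(userData);
    auto *out = static_cast<float *>(output);
    for (unsigned long i = 0; i < frames; ++i) {
        float s = std::sin(state->phase);
        state->phase += 2.0f * 3.14159265f * 440.0f / 48000.0f;
        *out++ = s;   // left
        *out++ = s;   // right
    }
    return paContinue;
}

int main()
{
    SineState state;
    PaStream *stream = nullptr;

    Pa_Initialize();
    Pa_OpenDefaultStream(&stream, 0 /*inputs*/, 2 /*outputs*/, paFloat32,
                         48000, 256, sineCallback, &state);
    Pa_StartStream(stream);
    Pa_Sleep(2000);          // let the tone play for ~2 seconds
    Pa_StopStream(stream);
    Pa_CloseStream(stream);
    Pa_Terminate();
    return 0;
}
```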
JamesDSPManager
- https://github.com/james34602/JamesDSPManager - Cross-platform audio effect / digital signal processing library. The GUI is based on the Omnirom DSP Manager and runs on most Android devices from 5 to 10, including Samsung, AOSP, Cyanogenmod, recent HTC and Huawei (arm64) devices. This repo is the Android variant of JamesDSP; JDSP4Linux uses the core library from this repo.
SA_Toolkit
- https://github.com/skei/SA_Toolkit - "initially the base code toolkit for all the SA (skei.audio) plugins, but now a bit more generic and hopefully useful for other projects and people.. 100% based on clap, with wrappers to and from other plugin formats (vst2/3, ladspa, lv2, standalone exe).. clap/ladspa hosting (other formats coming too, probably).. currently supports linux and windows, no mac version planned.. gpu-accelerated, freely scalable (vector) gui.. all clap goodies (poly modulation, voice stacking, thread pool, remote controls, preset discovery, parameter indication, sample accurate..), voice manager with threaded voices, overlapping notes, polyphonic modulation, etc, etc.. simplified, abstracted, streamlined for rapid installation/setup, prototyping, iterations.."
Mozzi
- Mozzi - sound synthesis library for Arduino
C++
- YouTube: Ian Hobson - The use of std variant in realtime DSP (ADC'17) - Application Developer & Software Engineer, Ableton. C++17 introduces std::variant, a type-safe union class. A variant's value represents one of a fixed set of possible types, with C++'s type system ensuring that correct code paths are executed for the active type. This talk will explore the pros and cons of working with variants, with a special focus on DSP. Variants allow for well defined interfaces and minimal memory footprints, but what are they like to use in practice, and are they performant enough for realtime use?
RtAudio
- RtAudio - a set of C++ classes that provide a common API for realtime audio input/output across Linux (native ALSA, JACK, PulseAudio and OSS), Macintosh OS X (CoreAudio and JACK), and Windows (DirectSound, ASIO and WASAPI) operating systems.
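A minimal RtAudio output-stream sketch against the 5.x API (assumption: RtAudio 6.x changed error handling from exceptions to error codes, but the stream setup is essentially the same); it renders an interleaved stereo sine in the callback:

```cpp
#include "RtAudio.h"
#include <cmath>

// Audio callback: fills the interleaved stereo buffer with a 440 Hz sine.
static int render(void *outputBuffer, void * /*inputBuffer*/, unsigned int nFrames,
                  double /*streamTime*/, RtAudioStreamStatus /*status*/, void *userData)
{
    auto *phase = static_cast<double *>(userData);
    auto *out = static_cast<float *>(outputBuffer);
    for (unsigned int i = 0; i < nFrames; ++i) {
        float s = static_cast<float>(std::sin(*phase));
        *phase += 2.0 * 3.14159265358979 * 440.0 / 44100.0; // unwrapped phase; fine for a sketch
        out[2 * i] = out[2 * i + 1] = s;
    }
    return 0;
}

int main()
{
    RtAudio dac;
    RtAudio::StreamParameters params;
    params.deviceId = dac.getDefaultOutputDevice();
    params.nChannels = 2;

    double phase = 0.0;
    unsigned int bufferFrames = 256;
    // In RtAudio 5.x this may throw an RtAudioError on failure.
    dac.openStream(&params, nullptr, RTAUDIO_FLOAT32, 44100, &bufferFrames, &render, &phase);
    dac.startStream();
    // ... run for as long as needed, then:
    dac.stopStream();
    dac.closeStream();
    return 0;
}
```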
AFsp
- AFsp Programs and Routines - provides audio file utility programs and a library of routines for reading and writing audio files.
Audio File Library
- Audio File Library - a C-based library for reading and writing audio files in many common formats. The Audio File Library provides a uniform API which abstracts away details of file formats and data formats. The same calls for opening a file, accessing and manipulating audio metadata (e.g. sample rate, sample format, textual information, MIDI parameters), and reading and writing sample data will work with any supported audio file format. The Audio File Library lets you work with audio data in whatever format is most convenient for you.
AudioFile
- https://github.com/adamstark/AudioFile - A simple C++ library for reading and writing audio files.
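A minimal sketch of the header-only AudioFile API (the file names here are placeholders): load a WAV, halve its amplitude, and save it back out:

```cpp
#include "AudioFile.h"

int main()
{
    AudioFile<float> file;
    if (!file.load("input.wav"))      // hypothetical input path
        return 1;

    // Halve the amplitude of every sample on every channel.
    for (int ch = 0; ch < file.getNumChannels(); ++ch)
        for (int i = 0; i < file.getNumSamplesPerChannel(); ++i)
            file.samples[ch][i] *= 0.5f;

    file.save("output.wav");          // hypothetical output path
    return 0;
}
```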
Libnyquist
- https://github.com/ddiakopoulos/libnyquist - a small C++11 library for reading sampled audio data from disk or memory. It is intended to be used as an audio loading frontend for games, audio sequencers, music players, and more.
r8brain
- https://github.com/avaneev/r8brain-free-src - Open source (under the MIT license) high-quality professional audio sample rate converter (SRC) (resampling) library. Features routines for SRC, both up- and downsampling, to/from any sample rate, including non-integer sample rates: it can also be used for conversion to/from the SACD sample rate and even go beyond that. SRC routines are implemented in multi-platform C++ code and have a high level of optimality.
libstdaudio
- https://github.com/stdcpp-audio/libstdaudio - an implementation of the standard audio API for C++ proposed in P1386. The latest published revision of this proposal can always be found at https://wg21.link/p1386
volume-cpp
- https://github.com/threeal/volume-cpp - a cross-platform audio volume control library in C++
softcut-lib
- https://github.com/monome/softcut-lib - a C/C++ library for manipulating audio buffers in real time. It provides a relatively low-level but robust interface for commonly needed functions in musical systems development: crossfaded looping with subsample accuracy, resampling read and write heads, crossfaded overdub behavior, and input and output multimode filters.
rt-cqt
- https://github.com/jmerkt/rt-cqt - aims to be a reasonably fast, header-only C++11 implementation of the Constant-Q transform. It is especially designed for easy usage in real-time audio applications, handling various block sizes and sample rates. The implementation is roughly based on the paper "An efficient algorithm for the calculation of a constant Q transform" by Judith C. Brown and Miller S. Puckette. pffft is used to handle the FFTs, and polyphase IIR lowpasses perform the upsampling / downsampling.
muFFT
- https://github.com/Themaister/muFFT - a library for computing the fast Fourier transform (FFT) in one or two dimensions. The FFT has many applications in digital signal processing. The main use cases are fast linear convolution and conversion from the time domain into the frequency domain and vice versa. See the bundled documentation for details on how the algorithm works and how it is implemented in muFFT.
LibAudioStream
- https://github.com/sletz/libaudiostream - a framework to manipulate audio resources through the concept of streams. Using this abstraction, different applications like audio players or offline processing tools can be developed.
APF
- https://github.com/AudioProcessingFramework/apf - The Audio Processing Framework (APF) is a collection of C++ code which was written in the context of multichannel audio applications. However, many modules have a more generic scope.
Aquila
- https://github.com/zsiciarz/aquila - an open source and cross-platform DSP (Digital Signal Processing) library for C++11.
qm-dsp
- QM-DSP
- https://github.com/c4dm/qm-dsp - A C++ library of functions for DSP and Music Informatics purposes. Used by the QM Vamp Plugins amongst other things.
DSP filters in C++
- https://github.com/dimtass/DSP-Cpp-filters - DSP C++ audio filters
CamillaDSP
- https://github.com/HEnquist/camilladsp - A flexible cross-platform IIR and FIR engine for crossovers, room correction etc. A tool to create audio processing pipelines for applications such as active crossovers or room correction. It is written in Rust to benefit from the safety and elegant handling of threading that this language provides. Supported platforms: Linux, macOS, Windows. Audio data is captured from a capture device and sent to a playback device. Alsa, PulseAudio, Jack, Wasapi and CoreAudio are currently supported for both capture and playback.
Synthesis ToolKit in C++
- Synthesis ToolKit in C++ (STK) is a set of open source audio signal processing and algorithmic synthesis classes written in the C++ programming language. STK was designed to facilitate rapid development of music synthesis and audio processing software, with an emphasis on cross-platform functionality, realtime control, ease of use, and educational example code. The Synthesis ToolKit is extremely portable (it's mostly platform-independent C and C++ code), and it's completely user-extensible (all source included, no unusual libraries, and no hidden drivers). We like to think that this increases the chances that our programs will still work in another 5-10 years. In fact, the ToolKit has been working continuously for nearly 20 years now. STK currently runs with realtime support (audio and MIDI) on Linux, Macintosh OS X, and Windows computer platforms. Generic, non-realtime support has been tested under NeXTStep, Sun, and other platforms and should work with any standard C++ compiler.
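A minimal non-realtime STK sketch, assuming the toolkit is installed; it writes one second of a 440 Hz sine to a WAV file using the unit-generator tick() idiom:

```cpp
#include "SineWave.h"
#include "FileWvOut.h"

int main()
{
    stk::Stk::setSampleRate(44100.0);   // global sample rate for all STK objects

    stk::SineWave sine;
    sine.setFrequency(440.0);

    stk::FileWvOut output("sine.wav");  // mono WAV output (hypothetical file name)
    for (unsigned long i = 0; i < 44100; ++i)
        output.tick(sine.tick());       // pull one sample from the oscillator, write it out
    return 0;
}
```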
CREATE Signal Library / CSL
- https://github.com/stpope/CSL6 - CREATE Signal Library (CSL, pronounced "sizzle," previously called the CREATE Oscillator or CO). CSL is a cross-platform C++ library for digital audio signal synthesis, analysis, spatialization and interactive sound/music application development. CSL was developed at the Center for Research in Electronic Art Technology (CREATE) and the Graduate Program in Media Arts and Technology (MAT) at the University of California, Santa Barbara (UCSB) starting in the late 1990s.
- https://github.com/stpope/CSL7 - source code and documentation for the CREATE Signal Library (CSL, pronounced "sizzle," previously called the CREATE Oscillator or CO). CSL is a cross-platform C++ library for digital audio signal synthesis, analysis, spatialization and interactive sound/music application development. CSL was developed at the Center for Research in Electronic Art Technology (CREATE) and the Graduate Program in Media Arts and Technology (MAT) at the University of California, Santa Barbara (UCSB) starting in the late 1990s.
Signalsmith Audio's DSP Library
- https://github.com/Signalsmith-Audio/dsp - A C++11 header-only library, providing classes/templates for (mostly audio) signal-processing tasks.
MoogLadders
- https://github.com/ddiakopoulos/MoogLadders - Collected C++ implementations of the classic 4-pole Moog ladder filter.
Gamma
- https://github.com/LancePutnam/Gamma - a cross-platform C++ library for doing generic synthesis and filtering of signals. It contains helpful mathematical functions and types, such as vectors and complex numbers, an assortment of sequence generators, and many other objects for signal processing tasks. It is oriented towards real-time sound and graphics synthesis, but is equally useful for non-real-time tasks.
Tonic
- https://github.com/TonicAudio/Tonic - Fast and easy audio synthesis in C++. Prefer coding to patching? Love clean syntax? Care about performance? That's how we feel too, and why we made Tonic. A short usage sketch follows the links below.
- YouTube: Tonic Basics
- https://github.com/TonicAudio/ofxTonic - Open Frameworks Addon for the Tonic audio synthesis Library. Tonic is an efficient, pure C++ patching tool with a refreshingly crisp and simple syntax.
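A hedged sketch of Tonic's generator-composition style (assumption: audio I/O is handled elsewhere, e.g. via RtAudio or openFrameworks, and pulls blocks of samples from the Synth):

```cpp
#include "Tonic.h"
using namespace Tonic;

int main()
{
    Synth synth;

    // A 440 Hz sine, amplitude-modulated by a slow 0.5 Hz sine, at half gain.
    synth.setOutputGen(SineWave().freq(440) * SineWave().freq(0.5) * 0.5f);

    // Inside an audio callback you would then fill the output, e.g. (assumed helper):
    // synth.fillBufferOfFloats(outputBuffer, numFrames, numChannels);
    return 0;
}
```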
CLAM
- CLAM - (C++ Library for Audio and Music) is a software framework for research and application development on the audio and music domain. It provides means to perform complex audio signal analysis, transformations and synthesis. It also provides a uniform interface to common tasks on audio applications such as accessing audio devices and audio files, thread safe communication with the user interface and DSP algorithms recombination and scaling.
You can use CLAM as a library to program your applications in C++ but you can also use graphical tools to build full applications without coding.
- http://clam-project.org/wiki/Network_Editor_tutorial - graphical patching
- http://clam-project.org/wiki/SMSTools_tutorial - spectral modelling tool; it analyzes, transforms and synthesizes back a given sound. For doing so, it uses the Sinusoidal plus Residual model (sometimes referred to as SMS but also known as HILN in the context of MPEG4).
DISTRHO Plugin Framework
- https://github.com/DISTRHO/DPF - designed to make development of new plugins an easy and enjoyable task. It allows developers to create plugins with custom UIs using a simple C++ API. The framework facilitates exporting various different plugin formats from the same code-base. DPF can build for LADSPA, DSSI, LV2 and VST formats. All current plugin format implementations are complete. A JACK/Standalone mode is also available, allowing you to quickly test plugins. Plugin DSP and UI communication is done via key-value string pairs.
- https://github.com/DISTRHO
- https://github.com/DISTRHO/DPF-Plugins - Collection of DPF-based plugins for packaging
- https://github.com/DISTRHO/dpf-cmake-action - A GitHub action for DPF-based plugins using CMake
- https://github.com/StevenRVE/sve-template-plugin - This repository contains a template plugin for building plugins using the DPF framework.
- https://github.com/clearly-broken-software/dpf-nanovg-widgets-examples - Example plugin using the dpf-nanovg-widgets.
- https://github.com/DISTRHO/elements-template-plugin - Template repository for doing audio plugins with DPF and Cycfi Elements
JUCE C++ Library
to rework
- JUCE is a wide-ranging C++ class library for building rich cross-platform applications and plugins for all the major operating systems. Hundreds of classes covering a vast range of tasks from high-level user-interface handling, graphics and audio, right down to low-level collections, networking, strings, etc.
- http://www.rawmaterialsoftware.com/juce.php
- https://code.google.com/p/juced/
- https://en.wikipedia.org/wiki/JUCE - Like many other frameworks (e.g., Qt, wxWidgets, GTK+, etc.), JUCE contains classes providing a range of functionality that covers user-interface elements, graphics, audio, XML and JSON parsing, networking, cryptography, multi-threading, an integrated interpreter that mimics ECMAScript's syntax, and various other commonly used features. Application developers needing several third-party libraries may thus be able to consolidate and use only the JUCE library, or at least reduce the number of third-party libraries they use. In this, the original inspiration was Java's JDK, and JUCE was intended to be "something similar for C++".
A notable feature of JUCE when compared to other similar frameworks is its large set of audio functionality; this is because JUCE was originally developed as a framework for Tracktion, an audio sequencer, before being split off into a standalone product. JUCE has support for audio devices (such as CoreAudio, ASIO, ALSA, JACK, WASAPI, DirectSound) and MIDI playback, polyphonic synthesizers, built-in readers for common audio file formats (such as WAV, AIFF, FLAC, MP3 and Vorbis), as well as wrappers for building various types of audio plugin, such as VST effects and instruments. This has led to its widespread use in the audio development community.
JUCE comes with wrapper classes for building audio and browser plugins. When building an audio plugin, a single binary is produced that supports multiple plugin formats (VST & VST3, RTAS, AAX, Audio Units). Since all the platform and format-specific code is contained in the wrapper, a user can build Mac and Windows VST/VST3/RTAS/AAX/AUs from a single codebase.
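The heart of a JUCE audio plugin is an AudioProcessor subclass whose processBlock is called for each buffer of audio. A hedged sketch (the surrounding boilerplate and the remaining pure-virtual overrides come from a Projucer/CMake template; "GainProcessor" is a hypothetical name):

```cpp
// With CMake-based JUCE projects the module header is included like this;
// Projucer-generated projects typically use <JuceHeader.h> instead.
#include <juce_audio_processors/juce_audio_processors.h>

class GainProcessor : public juce::AudioProcessor
{
public:
    void processBlock (juce::AudioBuffer<float>& buffer, juce::MidiBuffer&) override
    {
        // Apply a fixed -6 dB gain to every channel of the incoming block.
        for (int ch = 0; ch < buffer.getNumChannels(); ++ch)
            buffer.applyGain (ch, 0, buffer.getNumSamples(), 0.5f);
    }

    // ... the remaining AudioProcessor overrides (prepareToPlay, createEditor,
    // getName, acceptsMidi, etc.) are omitted here for brevity; a template
    // project supplies them.
};
```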
- https://github.com/sudara/awesome-juce - A curated list of JUCE modules, templates, plugins, oh my!
- Audio Ordeal: How to build a VST - Lesson 1: Intro to JUCE
- https://github.com/DISTRHO/juce/tree/juce6 - with LV2 support. Old! Historical!
- https://github.com/Dreamtonics/juce_emscripten - This port was originally a proof-of-concept created here by @beschulz. This fork is an unofficial continuation of the attempt towards a complete JUCE framework running inside a browser.
- https://github.com/kunitoki/popsicle - aims to give JUCE (https://juce.com/) a broader audience by allowing it to be used from python. Thanks to cppyy (http://cppyy.readthedocs.io/en/latest/) it exposes the JUCE framework API in a pythonic way; writing apps in python with it is very similar to writing them in C++, but without the overhead of managing project builds, configurations and IDE solutions.
- https://github.com/eyalamirmusic/JUCECmakeRepoPrototype - A prototype to model a way to create an entire repo using JUCE6 and CMake.
- FRUT - makes it easy to build JUCE projects using CMake instead of Projucer. It enables more flexibility in project architecture, simplified CI setup, and easier integration with other JUCE and non-JUCE projects. Converting an existing JUCE project to FRUT is easy, and you don’t need to be a CMake expert to use it!
- https://github.com/sudara/pamplejuce - Pamplejuce is a template lifestyle for creating and building JUCE plugins with 2020 best practices. Out of the box, it supports: C++20; JUCE 6.x as a submodule, tracking develop; CMake 3.19; Catch2 2.13.3; GitHub Actions for both CI and artifact building; a proper .gitignore given the above; Mac Silicon (arm64) built on Xcode 12.2.
- https://github.com/madskjeldgaard/Cookiejuce - A command line tool for generating modern JUCE projects with CMake. Easily generate beautiful JUCE projects and create audio, midi and music software with modern C++ using a simple command line tool interface. When run, you are asked a series of questions, and a new project is generated based on your answers with everything correctly set up and named for your project.
- https://github.com/FigBug/Gin - A few extras for juce
- https://github.com/SquarePine/squarepine_core - An assortment of modules that make a great addition to your JUCE app and plugin development tool belt.
- https://github.com/Supperware/ht-api-juce - Head Tracker API for C++/JUCE
- https://github.com/drowaudio/drowaudio - a 3rd party JUCE module designed for rapid audio application development. It contains classes for audio processing and gui elements. Additionally there are several wrappers around 3rd party libraries including cURL, FFTReal and SoundTouch. dRowAudio is written in the strict JUCE style, closely following the style guide set out at JUCE Coding Standards.
- https://github.com/talaviram/JUX - aimed at being a complementary library to improve JUCE UI building blocks.
- https://github.com/ChaseDurand/simple-plugin-gui - JUCE audio plugin showing knobs, toggles, switches, and VU meter.
- https://github.com/sudara/melatonin_inspector - A JUCE module that gives you the ability to inspect and visually edit (non-destructively) components in your UI. Inspired by Figma, web browser web inspectors and Jim Credland's Component Debugger juce-toys.
- https://github.com/ffAudio/PluginGuiMagic - Plugin GUI Magic is a live editor for JUCE GUIs, focussed on Audio Plugins.
- https://github.com/ImJimmi/JIVE - The ultimate JUCE extension for building GUIs
- https://github.com/FangCunWuChang/FlowUI - A VisualStudio-like UI framework by JUCE
- https://github.com/nick-thompson/react-juce - Write cross-platform native apps with React.js and JUCE
- https://github.com/JoshMarler/react-juce - Write cross-platform native apps with React.js and JUCE
- https://github.com/remberg/juceCustomSliderSample - Simple juce custom slider example using png files loading from file.
- https://github.com/bgporter/animator - Flexible animation controller for JUCE applications
- https://github.com/sudara/melatonin_audio_sparklines - Sparklines For JUCE AudioBlocks
- https://github.com/sudara/melatonin_blur - a batteries-included, cross-platform CPU blur library for the JUCE C++ framework. The goal: Get drop shadows and inner shadows fast enough that entire modern vector interfaces in JUCE can be built without resorting to deprecated solutions with lower quality of life (looking at you, OpenGL on macOS!). Melatonin Blur provides a 10-30x speedup over using Stack Blur.
- https://github.com/reFX/ColourSelector - aims to provide a compatible but more fully-featured ColourSelector that mimics Photoshop's behavior and UX.
- https://github.com/getdunne/juce-lyrics - JUCE based plug-in to display karaoke-style lyrics
- https://github.com/ElandaSunshine/Jaut - A simple and powerful DSP library for JUCE based applications.
- https://github.com/MeijisIrlnd/SDSP - DSP Helper user module for Juce
- https://github.com/Chowdhury-DSP/chowdsp_utils - JUCE module with utilities for ChowDSP
- https://github.com/adamski/pitch_detector - JUCE module for pitch estimation
- https://github.com/izzyreal/juce-raw-keyboard-input-module - Raw keyboard input module for JUCE on Linux, Windows, macOS and iOS
- https://github.com/segfault16/juce_protobuf - JUCE module that wraps https://github.com/protocolbuffers/protobuf and allows for using e.g. grpc inside a Juce application
- https://github.com/genkiinstruments/juce_bluetooth - Bluetooth LE module for JUCE
Jamoma
- Jamoma - a C++ platform for building dynamic and reflexive systems with an emphasis on audio and media. Platform is composed of a layered framework architecture that creates an object model and then specializes that object model for audio and matrix processing, and system automation and management. Jamoma makes use of polymorphic typing, dynamic binding, and introspection to create a cross-platform API pulling ideas from languages such as Smalltalk and Objective-C while remaining within the bounds of the portable and cross-platform C++ context. The implementations include modular environments for Max by Cycling '74 and Pd by Miller Puckette.
iPlug2
- https://github.com/olilarkin/wdl-ol - the original (now obsolete) enhanced version of Cockos' iPlug - a simple-to-use C++ framework for developing cross platform audio plugins and targeting multiple plugin APIs with the same code. VST / VST3 / Audiounit / RTAS / AAX (Native) formats supported. NOTE: THIS IS OBSOLETE, PLEASE SEE IPLUG2: @iplug2
- http://www.olilarkin.co.uk/html
- http://www.cockos.com/wdl
- iPlug2
- https://github.com/iPlug2/iPlug2 - a simple-to-use C++ framework for developing cross platform audio plug-ins/apps and targeting multiple plug-in APIs with the same minimalistic code. It abstracts an audio plug-in (IPlug) and its drawing engine/GUI toolkit (IGraphics). IGraphics contains a collection of common controls well suited for audio plug-in GUIs, either using bitmap or vector graphics. The original version of iPlug was developed by John Schwartz aka schwa and released in 2008 as part of Cockos' WDL library. iPlug 2 (2018) is a substantial reworking that brings multiple vector graphics backends (including GPU accelerated options and HiDPI/scaling), a better approach to concurrency, support for distributed plug-in formats and compiling to WebAssembly via emscripten, amongst many other things.
- https://github.com/iPlug2/iPlug2OutOfSourceExample - an example of how to set up an iPlug2 project to build "out of source", which might be desirable to keep all your project dependencies synchronised with version control.
- https://github.com/olilarkin/SuperSynth - an example of how to set up an iPlug2 project to build "out of source", which might be desirable to keep all your project dependencies synchronised with version control. It is also set up for "containerized development" using VSCode and github codespaces.
- https://github.com/iPlug2/iPlug2OOS - an example of how to set up an iPlug2 project to build "out of source", which might be desirable to keep all your project dependencies synchronised with version control. It is also set up for "containerized development" using VSCode and github codespaces.
APAL
- https://github.com/CapRat/APAL - a cross-platform audio abstraction library, currently under heavy development. At the moment it supports VST2, VST3, LV2 and LADSPA, with more formats planned, since the library is designed to make adding new formats straightforward. Because it is at an early stage of development it currently just supports audio and MIDI processing.
Jamba
- https://github.com/pongasoft/jamba - A lightweight VST2/3 framework
Ayane
- https://github.com/pdeljanov/Ayane - A modern C++11 audio engine offering a clean, fast, and safe API for manipulating audio data
Resonance
- https://github.com/resonance-audio/resonance-audio - the official open source project for the Resonance Audio SDK. This repository consists of the full source code of the Resonance Audio C++ library, as well as the platform integrations into Unity, FMOD, Wwise and DAW tools.
DSPFilters
- https://github.com/vinniefalco/DSPFilters - A Collection of Useful C++ Classes for Digital Signal Processing. "Techniques for digital signal processing are well guarded and held close to the chest, as they have valuable applications for multimedia content. The black art of Infinite Impulse Response ("IIR") filtering has remained veiled in secrecy with little publicly available source code...until now."
Building on the work of cherished luminaries such as Sophocles Orfanidis, Andreas Antoniou, Martin Holters, and Udo Zolzer, this library harnesses the power of C++ templates to solve a useful problem in Digital Signal Processing: the realization of multichannel IIR filters of arbitrary order and prescribed specifications with various properties such as Butterworth, Chebyshev, Elliptic, and Optimum-L (Legendre) responses. The library is provided under the MIT license and is therefore fully compatible with proprietary usage.
Classes are designed as independent re-usable building blocks. Use some or all of the provided features, or extend the functionality by writing your own objects that plug into the robust framework. Only the code that you need will get linked into your application. Here's a list of features: exclusive focus on IIR filters instead of boring FIR filters; complete implementation of all "RBJ Biquad" Cookbook filter formulas; Butterworth, Chebyshev, Elliptic, Bessel, Legendre designs; Low Pass, High Pass, Band Pass, Band Stop transformations; Low, High, and Band Shelf filter implementations for most types; smooth interpolation of filter settings, pole/zeros, and biquad coefficients to achieve seamless parameter changes; representation of digital filters using poles and zeros; realization using Direct Form I, Direct Form II, or a user-provided class; fully factored to minimize template instantiations; a "Design" layer providing runtime introspection into a filter; utility template functions for manipulating buffers of sample data; no calls to malloc or new, great for embedded systems; no external dependencies, just the standard C++ library!
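A hedged usage sketch in the SimpleFilter style shown in the project's README (buffer sizes and cutoff values here are arbitrary):

```cpp
#include "DspFilters/Dsp.h"
#include <vector>

int main()
{
    const int numSamples = 512;
    std::vector<float> left(numSamples, 0.0f), right(numSamples, 0.0f);
    float* channels[2] = { left.data(), right.data() };

    // 2-channel, 3rd-order Butterworth low-pass, processed in place.
    Dsp::SimpleFilter<Dsp::Butterworth::LowPass<3>, 2> filter;
    filter.setup(3,        // order
                 44100,    // sample rate
                 2000);    // cutoff frequency in Hz
    filter.process(numSamples, channels);
    return 0;
}
```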
DSP Library
- Signalsmith DSP library - a set of C++11 header-only classes/templates to support certain DSP tasks (mostly audio). It's still growing, but currently includes: Delay tools (circular buffers, single/multi-channel delay-lines); Interpolators (Lagrange, polyphase, Kaiser-sinc); Envelope tools (e.g. box-filter, peak-hold); FFT and spectral processing (including multi-channel STFT)
DaisySP
- https://github.com/electro-smith/DaisySP - an open source DSP library written in C++ and specifically tailored to embedded audio applications. It was originally created for use with the Daisy Embedded Audio Platform, however it can easily be used elsewhere.
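A minimal sketch of the DaisySP module pattern (Init with the sample rate, set parameters, call Process() once per sample); on hardware this would run inside the Daisy audio callback rather than a main():

```cpp
#include "daisysp.h"

int main()
{
    const float sampleRate = 48000.0f;

    daisysp::Oscillator osc;
    osc.Init(sampleRate);                               // every module is initialised with the sample rate
    osc.SetWaveform(daisysp::Oscillator::WAVE_SIN);
    osc.SetFreq(440.0f);
    osc.SetAmp(0.5f);

    // One sample per Process() call; here we just fill a small scratch buffer.
    float buffer[48];
    for (float &s : buffer)
        s = osc.Process();
    return 0;
}
```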
spuc
- https://github.com/audiofilter/spuc - DSP Using C++ - Digital filters, Equalizers, Comm building blocks
spuce
- https://github.com/audiofilter/spuce - A DSP digital filtering library - IIR/FIR filtering
iir1
- https://github.com/berndporr/iir1 - An infinite impulse response (IIR) filter library for Linux, Mac OSX and Windows which implements Butterworth, RBJ, Chebychev filters and can easily import coefficients generated by Python (scipy). The filter processes the data sample by sample for realtime processing. It uses templates to allocate the required memory so that it can run without any malloc / new commands. Memory is allocated at compile time so that there is never the risk of memory leaks.
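A minimal iir1 sketch, assuming the current setup(sampleRate, cutoff) overload for the Butterworth low-pass (older releases also offered an order-first overload):

```cpp
#include "Iir.h"

int main()
{
    const float sampleRate = 48000.0f;
    const float cutoff     = 1000.0f;

    // 4th-order Butterworth low-pass; coefficients and state are allocated at compile time.
    Iir::Butterworth::LowPass<4> lowpass;
    lowpass.setup(sampleRate, cutoff);

    // Sample-by-sample processing, suitable for realtime use.
    float x = 1.0f;                   // e.g. an impulse
    float y = lowpass.filter(x);
    (void)y;
    return 0;
}
```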
Q
- Cycfi Research: Q Audio DSP Library - a cross-platform C++ library for Audio Digital Signal Processing. Aptly named after the “Q factor”, a dimensionless parameter that describes the quality of a resonant circuit, the Q DSP Library is designed to be simple and elegant, as the simplicity of its name suggests, and efficient enough to run on small microcontrollers. Q leverages the power of modern C++ and efficient use of functional programming techniques, especially function composition using fine-grained and reusable function objects (both stateless and stateful), to simplify complex DSP programming tasks without sacrificing readability. MIT license.
- https://github.com/cycfi/Q - C++ library for Audio Digital Signal Processing
libnoise
- libnoise: Glossary - A portable, open-source, coherent noise-generating library for C++
mopo
- https://github.com/mtytel/mopo - Modular and Polyphonic audio synthesis library
ftz
- https://gitlab.com/ftz/audio - Digital audio processing and synthesis library. The following operations are implemented: Amplification, Mixing, Panning, Bit depth conversion, Sample rate conversion, Pitch Shifting. The following oscillators are implemented: Sine wave, Pulse wave, Triangle wave, Sawtooth wave, White noise. The following sample formats are implemented out of the box (you can always add yours): 16 bit signed int, 24 bit signed int, 32 bit signed int, 32 bit float, 64 bit float. The following file formats are supported: Microsoft RIFF Wave. The following audio interfaces available for playback: ALSA.
- https://gitlab.com/ftz/midi - This library provides a convenient interface for manipulation of MIDI data as well as providing higher level abstractions for I/O.
AudioTK
- https://github.com/mbrucher/AudioTK - a set of audio filters. It helps assembling workflows for specific audio processing workloads. The audio workflow is split in independent components (without feedback loops) that consist of filters. Each filter has a set of synchronized input and output ports that can be connected together. All input ports must be connected, but not all output ports need to be. Sampling rate can be independent between input and output ports, but input sampling rates are identical, and output sampling rates are also identical.
Raul
- http://drobilla.net/software/raul - Realtime Audio Utility Library, is a C++ utility library primarily aimed at audio/musical applications. It is used by Ingen, Patchage, and Machina.
kode
- https://github.com/skei/kode - c++ plugin framework/library
ASPiK
- https://github.com/willpirkleaudio/ASPiK - an audio plugin framework for developing AAX, AU and VST3 plugins.
- ASPiK SDK: ASPiK Developer's Guide - design and develop your AU, AAX and VST plugins and assemble the GUI right inside of your DAW while it is running. You can create a new blank plugin project in a few seconds using the ASPiKreator software. And if you are a RackAFX7 user, you can now export your RAFX2 plugins in ASPiK format thanks to an agreement with the author.
eDSP
- https://github.com/mohabouje/eDSP - easy Digital Signal Processing, is a digital signal processing framework written in modern C++ that implements some of the common functions and algorithms frequently used in digital signal processing, audio engineering & telecommunications systems. It aims to provide efficient and streamlined base calculations while at the same time having a straightforward and easy-to-use interface similar to another numeric language like MATLAB. The library integrates C++ and Python interfaces and supports Windows, Linux, Mac OS, iOS and Android.
MadronaLib
- https://github.com/madronalabs/madronalib - a C++ framework for DSP applications.
NFDriver
- https://github.com/spotify/NFDriver - A cross platform C++ audio driver with low latency.
MWEngine
- https://github.com/igorski/MWEngine - Audio engine and DSP for Android, written in C++, providing low latency performance in a musical context, supporting both OpenSL and AAudio.
oboe
- https://github.com/google/oboe - a C++ library which makes it easy to build high-performance audio apps on Android. It was created primarily to allow developers to target a simplified API that works across multiple API levels back to API level 16 (Jelly Bean).
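A hedged Oboe sketch (on Android this would live inside your app rather than a main(); setCallback is the older combined-callback API, newer code may prefer setDataCallback):

```cpp
#include <oboe/Oboe.h>
#include <cmath>
#include <memory>

// Hypothetical callback that renders a 440 Hz sine into the output stream.
class SineCallback : public oboe::AudioStreamCallback {
public:
    oboe::DataCallbackResult onAudioReady(oboe::AudioStream *stream,
                                          void *audioData, int32_t numFrames) override {
        auto *out = static_cast<float *>(audioData);
        const float increment = 2.0f * 3.1415926f * 440.0f / stream->getSampleRate();
        for (int32_t i = 0; i < numFrames; ++i) {
            out[i] = 0.3f * std::sin(phase_);
            phase_ += increment;
        }
        return oboe::DataCallbackResult::Continue;
    }
private:
    float phase_ = 0.0f;
};

int main() {
    SineCallback callback;
    std::shared_ptr<oboe::AudioStream> stream;

    oboe::AudioStreamBuilder builder;
    builder.setDirection(oboe::Direction::Output)
           ->setPerformanceMode(oboe::PerformanceMode::LowLatency)
           ->setFormat(oboe::AudioFormat::Float)
           ->setChannelCount(1)                 // mono output
           ->setCallback(&callback);

    if (builder.openStream(stream) != oboe::Result::OK)
        return 1;

    stream->requestStart();
    // ... keep running for the lifetime of the app, then:
    stream->requestStop();
    stream->close();
    return 0;
}
```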
KFR
- KFR - Fast, modern C++ DSP framework, FFT, Sample Rate Conversion, FIR/IIR/Biquad Filters (SSE, AVX, AVX-512, ARM NEON)
fast-filters
- https://github.com/jpcima/fast-filters - Implementation of FIR and IIR filters optimized for SIMD processing
openAudioProgrammingTutorials
- https://github.com/harryhaaren/openAudioProgrammingTutorials - A tutorial-style set of programs that will hopefully help beginner programmers get familiar with the Linux Audio scene
jack-wrapper
- https://github.com/opampband/jack-wrapper - A simple C++ wrapper for JACK audio
JackCpp
- JackCpp - c++ classes which wrap the Jack audio io api and lock-free ring buffer. Works with Linux and OSX (thanks to Will Wolcott for OSX testing and example/test file comments).
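Both of these wrap the JACK C API; for reference, a minimal pass-through client written directly against that API (not using either wrapper) looks roughly like this:

```cpp
#include <jack/jack.h>
#include <cstring>

static jack_port_t *in_port  = nullptr;
static jack_port_t *out_port = nullptr;

// Process callback: copy input to output (a simple pass-through client).
static int process(jack_nframes_t nframes, void *)
{
    auto *in  = static_cast<jack_default_audio_sample_t *>(jack_port_get_buffer(in_port, nframes));
    auto *out = static_cast<jack_default_audio_sample_t *>(jack_port_get_buffer(out_port, nframes));
    std::memcpy(out, in, sizeof(jack_default_audio_sample_t) * nframes);
    return 0;
}

int main()
{
    jack_client_t *client = jack_client_open("passthrough", JackNullOption, nullptr);
    if (client == nullptr)
        return 1;

    jack_set_process_callback(client, process, nullptr);
    in_port  = jack_port_register(client, "in",  JACK_DEFAULT_AUDIO_TYPE, JackPortIsInput,  0);
    out_port = jack_port_register(client, "out", JACK_DEFAULT_AUDIO_TYPE, JackPortIsOutput, 0);

    jack_activate(client);
    // ... keep the process alive (sleep/loop), then:
    jack_client_close(client);
    return 0;
}
```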
segmod
- https://github.com/lucdoebereiner/segmod - Non-Standard Sound Synthesis
Maximilian
- Maximilian - an open source, MIT licensed C++ audio synthesis library. It’s designed to be cross platform and simple to use. The syntax and program structure are based on the popular ‘Processing’ environment. Maximilian provides standard waveforms, envelopes, sample playback, resonant filters, and delay lines. In addition, equal power stereo, quadraphonic and 8-channel ambisonic support is included. There’s also Granular synthesisers with Timestretching, FFTs and some Music Information Retrieval stuff.
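A sketch in the style of the Maximilian examples, assuming the stock player that ships with the repo drives setup() once and play() once per output frame:

```cpp
#include "maximilian.h"

maxiOsc osc;   // one sine oscillator

void setup() {
    // nothing to initialise for this example
}

// Called once per sample frame by the bundled player; output is stereo.
void play(double *output) {
    double sample = osc.sinewave(440);  // 440 Hz sine
    output[0] = sample;                 // left
    output[1] = sample;                 // right
}
```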
lsp-lltl-lib
- https://github.com/sadko4u/lsp-lltl-lib - Low-level template library for basic data collections used in LSP Project
rt-wdf_lib
- https://github.com/RT-WDF/rt-wdf_lib - a real-time capable Wave Digital Filter library for circuit modeling, with support for arbitrary topologies and multiple/multiport non-linearities. It was introduced in a DAFx-16 paper and comes with a full API reference documentation.
Planeverb
- https://github.com/themattrosen/Planeverb - CPU based real-time wave-based acoustics engine tailored for games. It works in 2D, handling fully dynamic scenes in a single-CPU-core budget. Technical details can be found in associated publication.
mimium
- https://github.com/mimium-org/mimium - a domain specific programming language for describing/generating sound and music. With this language, you can write low-level audio processing with concise expressions and high performance, powered by LLVM.
Tracktion Engine
- Tracktion Engine - defines a high level document object model for time based, sequenced audio applications and provides an API to create, modify and play these back. Create an Engine object, our arrangement object, called an Edit, add some elements to it such as audio files, MIDI and plugins, then play it back or render it to an audio file. Tracktion Engine is provided in JUCE module format making it incredibly quick and easy to setup and start creating projects. We’ve included a bunch of examples and accompanying tutorials along with scripts to build them so you can get up and running with the minimum of fuss.
SoLoud
- SoLoud - an easy to use, free, portable c/c++ audio engine for games. The engine has been designed to make simple things easy, while not making harder things impossible.
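A minimal SoLoud sketch (the WAV file name is a placeholder): initialise the engine, load a sample, play it, and shut down:

```cpp
#include "soloud.h"
#include "soloud_wav.h"

int main()
{
    SoLoud::Soloud soloud;          // engine core
    SoLoud::Wav sample;             // one audio source

    soloud.init();                  // pick a default backend
    sample.load("explosion.wav");   // hypothetical file path
    soloud.play(sample);

    // Wait until playback finishes (a real program would sleep instead of spinning).
    while (soloud.getActiveVoiceCount() > 0) {}

    soloud.deinit();
    return 0;
}
```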
Modus
- Modus is an open source, cross-platform C++ library which allows you to handle music from code.
Wavelet
- https://github.com/Ircam-RnD/wavelet - a cross-platform c++ library for online estimation of the Continuous Wavelet Transform (CWT). The online estimation is based on a filterbank implementation of the CWT with minimal delay per scale and optimization based on multi-rate computation. The library also allows for offline estimation of the CWT using FFT.
Pitch detection
- https://github.com/sevagh/pitch-detection - pitch detection algorithms in C++
- https://github.com/vadymmarkov/Beethoven - A maestro of pitch detection.
- https://github.com/adamski/pitch_detector - JUCE module for pitch estimation. PitchYIN class based on the YIN implementation found in the aubio library; PitchMPM class adapted from the McLeod Pitch Method implementation in https://github.com/sevagh/pitch-detection. The updated version of the PitchMPM class now uses FFT for the auto-correlation function via the AudioFFT library (via the module wrapper at https://github.com/adamski/audio_fft). The previous time-based version is now in the time-based branch.
signalsmith-stretch
- https://github.com/Signalsmith-Audio/signalsmith-stretch - a C++11 library for pitch and time stretching, using the final approach from the ADC22 presentation Four Ways To Write A Pitch-Shifter. It can handle a wide range of pitch-shifts (multiple octaves) but time-stretching sounds best for more modest changes (between 0.75x and 1.5x).
MiniVideo
- https://github.com/emericg/MiniVideo - multimedia framework developed from scratch in C/C++, bundled with small testing programs and a neat media analyser.
CHON
- https://github.com/rodneydup/CHON - Coupled Harmonic Oscillator Network
NFSmartPlayer
- https://github.com/spotify/NFSmartPlayer - We dreamed of an audio engine that could multitrack, stream long audio, apply audio effects, react to user input through a scripting layer, be resource-light, and work on the majority of platforms. We wanted to bring new audio experiences to both listeners and creators, without requiring a degree in audio engineering. We created a declarative format to describe the audio and effects over time, known as NFGrapher. This C++ smart player consumes JSON produced by NFGrapher, and outputs audio in either realtime to the audio driver or faster-than-realtime (offline to a file). Via NFDecoder and NFHTTP, it can stream audio files from the file system, over HTTP(S), or through DASH (HLS) transmuxing.
vintage
- https://github.com/jcelerier/vintage - an experiment in seeing how far modern C++ features allow one to write purely declarative code and introspect this code through various reflection-like features.
Sequentity
- https://github.com/alanjfs/sequentity - A single-file, immediate-mode sequencer widget for C++17, Dear ImGui and EnTT
Oto
- https://github.com/i18fn/Oto - A programming language for sound play
Surge Filters
- https://github.com/surge-synthesizer/sst-filters - the filters from the Surge Synthesizer as a submodule.
mbdsp
- https://github.com/michaeldonovan/mbdsp - A collection of DSP algorithms/utilities for working with audio. Header only and intended to be suitable for embedded applications. This is mostly intended as an educational exercise for use in my hobby projects, you probably shouldn't use it for anything important.
aubergine
- https://github.com/jurihock/aubergine - C++ real-time audio signal processing engine on top of RtAudio and Oboe that tries to Keep it Simple, Stupid
AlloSystem
- https://github.com/AlloSphere-Research-Group/AlloSystem - a cross-platform suite of C++ components for building interactive multimedia tools and applications.
allolib
- https://github.com/AlloSphere-Research-Group/allolib - Library for interactive multimedia application development. Developed by: AlloSphere Research Group. allolib is a fork of AlloSystem, maintained in separate repository. University of California, Santa Barbara
- https://github.com/AlloSphere-Research-Group/allolib_playground - Code playground for allolib
diatonic
- https://github.com/pd3v/diatonic - You don't need to create lists of vectors/arrays to pre-define "all" scales, chords and a given tonality's chords; create them dynamically by calling functions.
C#
MusicEngine
- https://github.com/geekdrums/MusicEngine - make music synced game with Unity / ADX2 / Wwise
Objective C
Mozzi
- Mozzi - brings your Arduino to life by allowing it to produce much more complex and interesting growls, sweeps and chorusing atmospherics. These sounds can be quickly and easily constructed from familiar synthesis units like oscillators, delays, filters and envelopes. You can use Mozzi to generate algorithmic music for an installation or performance, or make interactive sonifications of sensors, on a small, modular and super cheap Arduino, without the need for additional shields, message passing or external synths.
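A classic Mozzi-style Arduino sketch (assuming the 1.x API; header names and the updateAudio() return type changed slightly in Mozzi 2.x), playing a steady 440 Hz wavetable sine:

```cpp
#define CONTROL_RATE 64          // control updates per second (define before MozziGuts.h)
#include <MozziGuts.h>
#include <Oscil.h>
#include <tables/sin2048_int8.h>

// Wavetable oscillator running at the audio rate.
Oscil<SIN2048_NUM_CELLS, AUDIO_RATE> aSin(SIN2048_DATA);

void setup() {
  startMozzi(CONTROL_RATE);      // start the audio engine
  aSin.setFreq(440);             // oscillator frequency in Hz
}

void updateControl() {
  // runs at CONTROL_RATE; nothing to modulate in this example
}

int updateAudio() {
  return aSin.next();            // one sample per audio tick
}

void loop() {
  audioHook();                   // required: keeps the output buffer filled
}
```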
resample
- https://github.com/michaelforney/resample - Public-domain audio resampler
D
Dplug
- https://github.com/AuburnSounds/Dplug - Dplug is a library for creating native audio plugins (VST / AU / AAX) as simply as possible.
Ddsp
- https://github.com/ctrecordings/Ddsp - A library for Digital Signal Processing in D with a focus on audio
libsoundio-d
- https://github.com/dkorpel/libsoundio-d - D translation of libsoundio, a library for real-time audio input and output
Java
Java Sound
JSyn
- JSyn - allows you to develop interactive computer music programs in Java. You can run them as stand-alone applications, or as Applets in a web page. JSyn can be used to generate sound effects, audio environments, or music. JSyn is based on the traditional model of unit generators which can be connected together to form complex sounds. For example, you could create a wind sound by connecting a white noise generator to a low pass filter that is modulated by a random contour generator.
Beads
- Beads is a software library written in Java for realtime audio. It was started by Ollie Bown in 2008. It is an open source project and has been developed with support from Monash University in Melbourne, via the Centre for Electronic Media Art's ARC Discovery Grant Project "Creative Ecosystems", and a Small Grant for Early Career Researchers from the Faculty of Information Technology. Beads contributors include Ollie Bown, Ben Porter and Benito.
JMSL
- http://www.algomusic.com/jmsl/ - JMSL is a Java API for music composition, interactive performance, and intelligent instrument design. With JMSL, the composer/programmer can create stand-alone musical applications or deploy applets on the web. JMSL supports JSyn, MidiShare, MidiPort, and JavaSound.
JNAJack
- https://github.com/jaudiolibs/jnajack - Java bindings to JACK Audio Connection Kit
JFugue
- JFugue - Music Programming for Java™ and JVM Languages
JASS
- JASS (Java Audio Synthesis System) is a unit generator based audio synthesis programming environment written in pure Java. Java 1.5 is required. The environment is based on a foundation structure consisting of a small number of Java interfaces and abstract classes, which implement the functionality needed to create filter-graphs, or "patches". Unit generators are created by extending the abstract classes and implementing a single method. Patches are created by linking together unit generators in arbitrary complex graph structures. Patches can be rendered in real-time with special unit generators that communicate with the audio hardware, which have been implemented using the JavaSound API and through JNI for some platforms.
jVSTwRapper
- jVSTwRapper - an easy and reliable wrapper to write audio plug-ins in Java. It enables you to develop VST (2.4), Audio Unit (AU) and LADSPA compatible audio plugins and virtual instruments plus user interfaces (Swing) and run them on Windows, Linux and Mac OSX. Five demo plugins (+src) are included.
jackAudio4Java
- https://github.com/free-creations/jackAudio4Java - A thin JNI wrapper around the "JACK Audio Connection Kit"
Pipes
- https://github.com/jaudiolibs/pipes - an audio routing and unit generator library linking together the various other JAudioLibs libraries. Pipes consists of code that was originally written for or included in PraxisLIVE, but is now provided separately for re-use by other Java audio or DSP projects.
electro
- https://github.com/Deep-Symmetry/electro - A Java library to help work with musical time. Based on the rhythm namespace in Afterglow, ported to plain Java to give these capabilities to Beat Link without requiring it to embed the Clojure ecosystem.
Python
PyJack
- PyJack - jack audio client module for python
jackclient-python
python-jackpatch
- https://github.com/jessecrossen/python-jackpatch - A python module exposing JACK patchbay, MIDI, and transport functionality.
python-wavefile
- python-wavefile - Pythonic libsndfile wrapper to read and write audio files.
audioread
- https://github.com/beetbox/audioread - Decode audio files using whichever backend is available. The library currently supports: GStreamer via PyGObject; Core Audio on Mac OS X via ctypes (PyObjC not required); MAD via the pymad bindings; FFmpeg or Libav via its command-line interface; the standard library wave, aifc, and sunau modules (for uncompressed audio formats).
python-sounddevice
- https://github.com/spatialaudio/python-sounddevice - provides bindings for the PortAudio library and a few convenience functions to play and record NumPy arrays containing audio signals. The sounddevice module is available for Linux, macOS and Windows.
PyWavelets
- PyWavelets - a free Open Source wavelet transform software for Python programming language. It is written in Python, Cython and C for a mix of easy and powerful high-level interface and the best performance.
synthplayer
- https://github.com/irmen/synthesizer - python sample mixer and sequencer, waveform synthesizer, and sound playback engine
AudioLazy
- https://github.com/danilobellini/audiolazy - Expressive Digital Signal Processing (DSP) package for Python
- https://github.com/danilobellini/lz2lv2 - LV2 plugin code generator from AudioLazy/Python code.
Pippi
- https://github.com/luvsound/pippi - Computer music with python
- https://github.com/luvsound/astrid - an interactive workstation and performance interface for pippi (a DAW & an instrument)
Undulance
- https://github.com/zwegner/undulance - a Python software synthesis library. It mostly relies on PyPy to work at a sufficient speed, since CPython is too slow.
Audiostream
- https://github.com/kivy/audiostream - a python extension that gives you a direct access to the speaker or microphone.
mutils
- https://github.com/joelrobichaud/mutils - Python music theory utility library made to easily handle MIDI or frequency representations of musical constructs (notes, chords, scales, ...).
AVIO
- https://github.com/ri0t/avio - The AVIO Suite is an experimental approach to the concept of a live stage performance. AVIO stands for "Audio Visual Input Output" and the suite is a component (event) based agglomeration of tools to work with Controller, Midi, Audio and Visual data.
audio_dspy
- https://github.com/jatinchowdhury18/audio_dspy - A Python package for audio signal processing tools
Keras Audio Preprocessors
- https://github.com/keunwoochoi/kapre - compute STFT, ISTFT, Melspectrogram, and others on GPU real-time.
DDSP
- https://github.com/magenta/ddsp - a library of differentiable versions of common DSP functions (such as synthesizers, waveshapers, and filters). This allows these interpretable elements to be used as part of a deep learning model, especially as the output layers for audio generation.
- https://github.com/acids-ircam/ddsp_pytorch - Implementation of Differentiable Digital Signal Processing (DDSP) in PyTorch
pyaec
- https://github.com/ewan-xu/pyaec - simple and efficient Python implementation of a series of adaptive filters for acoustic echo cancellation, including time-domain adaptive filters (LMS, NLMS, RLS, AP, Kalman), nonlinear adaptive filters (Volterra filter, functional link adaptive filters), and frequency-domain adaptive filters (frequency-domain adaptive filter, frequency-domain Kalman filter).
MAWB
- https://github.com/mindhog/mawb - a jumbled bundle of tools used by its author to do random things to make music, but the idea is to have a complete component management system geared towards music performance. MAWB was written for Linux systems using ALSA and JACK. You may be able to get this system working under other operating systems, and portions of the system are usable independently.
Cells
- Cells - allows you to organize code into runnable snippets and mix programming languages. Use SuperCollider, Python, TidalCycles, Node.js etc. in the same project.
Pipeline
- https://docs.klio.io/en/latest/userguide/pipeline - encapsulates the various steps of the Klio job from reading input data, transforming the data, and writing output data. Klio pipelines offer a Pythonic interface to build upon beam pipelines and allow large-scale data processing on Docker and Google Dataflow.
libfmp
- Libfmp API Documentation — libfmp 1.2.1 documentation - API documentation for the Python package libfmp. This package goes hand in hand with the FMP Notebooks, a collection of educational material for teaching and learning Fundamentals of Music Processing (FMP) with a particular focus on the audio domain.
- C0 - The FMP notebooks offer a collection of educational material closely following the textbook Fundamentals of Music Processing (FMP). This is the starting website, which is opened when calling https://www.audiolabs-erlangen.de/FMP. Besides giving an overview, this website provides information on the license and the main contributors.
pytemidi
- https://github.com/complynx/pytemidi - Python bindings for Tobias Erichsen Virtual MIDI
Pedalboard
- https://github.com/spotify/pedalboard - Python library for adding effects to audio. It supports a number of common audio effects out of the box, and also allows the use of VST3® and Audio Unit plugin formats for third-party effects. It was built by Spotify's Audio Intelligence Lab to enable using studio-quality audio effects from within Python and TensorFlow.
music21
- What is music21? — music21 Documentation - a Python-based toolkit for computer-aided musicology. People use music21 to answer questions from musicology using computers, to study large datasets of music, to generate musical examples, to teach fundamentals of music theory, to edit musical notation, study music and the brain, and to compose music (both algorithmically and directly). One of music21's mottos is "Listen Faster." With the toolkit you should be able to find interesting moments and get a sense of the overall profile of a piece or a repertory of pieces. We hope that with the computer you'll have more time for listening and playing for enjoyment and use less of your time listening for work. The system has been around since 2008 and is constantly growing and expanding. The approaches and traditions in music21 have been used in many previous software systems. See Authors, Acknowledgments, Contributing, and Licensing for information on the authors and background of the project. The 21 in music21 refers to its origins as a project nurtured at MIT. At MIT all courses have numbers, and music, along with some other humanities departments, is numbered 21. The music departments of MIT, along with Harvard, Smith, and Mount Holyoke Colleges, helped bring this toolkit from its earliest roots to a mature system.
- https://github.com/cuthbertLab/music21 - A Toolkit for Computer-Aided Musical Analysis and Computational Musicology
Sardine
- Sardine - a free/open-source software for Python 3.10+. Sardine turns Python into a musical instrument. It is a framework for live coding music and controlling musical hardware from Python. You can play alone or synchronize with other audio softwares and instances of Sardine. With Sardine you can: Play synthesizers and audio samples using SuperCollider. Control synthesizers through MIDI and OSC messages. Interconnect audio/video softwares and/or sound engines. Play collaboratively with your friends or other musicians. Extend your musical environment using any Python package. Explore multiple algorithmic pattern languages for improvisation.
- https://github.com/sardine-system/sardine-web - Official web editor plugin for Sardine
- https://github.com/Bubobubobubobubo/sardine-sounds - attempt to generate a basic set of audio samples and synthesis modules integrated with Sardine
- https://github.com/Bubobubobubobubo/sardine-vscode - VSCode extension for Sardine
libavg
- libavg - allows programmers, media artists and designers to quickly develop media applications. It uses python as scripting language, is written in high-speed C++ and uses modern OpenGL for display output. The project has been under constant development since its inception in 2003. libavg supports the full variety of display elements - images, text, videos, camera output, vector graphics - that modern graphics-intensive applications need. It is fast: the layout engine supports thousands of display elements on the screen at once as well as hardware-accelerated video output. Text using markup, videos with an alpha channel, rendering to offscreen buffers, masking, as well as GPU shader effects such as blur and chromakey are all supported. Plugins written in C++ can expand the engine and have access to all libavg internals. The Tutorial explains all these concepts using a sample application.
music_essentials
- https://github.com/charlottepierce/music_essentials - Music operations for python - create and manipulate scales, chords, and intervals.
athenaCL
- athenaCL system - an open-source, object-oriented composition tool written in Python. The system can be scripted and embedded, and includes integrated instrument libraries, post-tonal and microtonal pitch modeling tools, multiple-format graphical outputs, and musical output in Csound, SuperCollider, Pure Data, MIDI, audio file, XML, and text formats.
Musical parts are deployed as Textures, layered surface-defining objects containing numerous independent ParameterObjects to control pitch, tempo, rhythm, amplitude, panning, and instrument (Csound) parameters. The system includes an integrated library of Csound and SuperCollider instruments, and supports output for external Csound instruments, MIDI, and a variety of alternative formats. Over eighty specialized Generator, Rhythm, and Filter ParameterObjects provide tools for stochastic, chaotic, cellular automata based, Markov based, generative grammar and Lindenmayer system (L-system), wave-form, fractional noise (1/f), genetic, Xenakis sieve, linear and exponential break-point segments, masks, and various other algorithmic models. ParameterObjects can be embedded in other ParameterObjects to provide powerful dynamic and masked value generation. Textures can be combined and edited, and tuned with algorithmic Temperament objects. Texture Clones allow the filtering and processing of Texture events, performing transformations not possible with parameter generation alone.
SoundCard
- https://github.com/bastibe/SoundCard - a library for playing and recording audio without resorting to a CPython extension. Instead, it is implemented using the wonderful CFFI and the native audio libraries of Linux, Windows and macOS. SoundCard is cross-platform, and supports Linux/pulseaudio, Mac/coreaudio, and Windows/WASAPI. While the programming interface is identical across platforms, sound card naming schemes and default block sizes can vary between devices and platforms.
Ruby
Negasonic
- Negasonic Editor - Ruby DSL for music live coding in the browser, you can play with it in the online editor
.NET
- https://github.com/naudio/NAudio - Audio and MIDI library for .NET
F#
- https://github.com/brianberns/FYampaSynth - Modular Synthesizer Programming in F#
Go
- https://github.com/200sc/klangsynthese - Waveform and Audio Synthesis library in Go. Klangsynthese right now supports a number of features that will work regardless of OS, and a number of features specific to Windows where the hope is to move support to Linux and Darwin.
- https://github.com/go-audio - format libraries
Rust
- https://github.com/RustAudio/rust-jack - Decent jack bindings for rust
Symphonia
- https://github.com/pdeljanov/Symphonia - Pure Rust multimedia format demuxing, tag reading, and audio decoding library
rsynth
- https://github.com/PieterPenninckx/rsynth - An abstraction over APIs for audio plugins and applications. Use it to write real-time audio effects, software synthesizers, ... and target different platforms (vst, jack, ...). It is currently most suitable for real-time or "streaming" audio processing. E.g. you cannot use it to reverse audio in time.
dasp
- https://github.com/RustAudio/dasp - Digital Audio Signal Processing in Rust. Formerly the sample crate. A suite of crates providing the fundamentals for working with PCM (pulse-code modulation) DSP (digital signal processing). In other words, dasp provides a suite of low-level, high-performance tools including types, traits and functions for working with digital audio signals.
Nannou
- Nannou - An open-source creative-coding framework for Rust. Nannou is a library that aims to make it easy for artists to express themselves with simple, fast, reliable code. Whether working on a 12-month laser installation or a 5 minute sketch, this framework aims to give artists easy access to the tools they need.
Makepad
kira
- https://github.com/tesselode/kira - an audio library designed to help create expressive audio for games. Besides the common features you'd expect from an audio library, it provides interfaces for scripting audio events, seamlessly looping complex pieces of music, smoothly changing parameters, and more.
baseview
- https://github.com/RustAudio/baseview - low-level window system interface for audio plugin UIs. baseview abstracts the platform-specific windowing APIs (winapi, cocoa, xcb) into a platform-independent API, but otherwise gets out of your way so you can write plugin UIs.
audio-featrs
- https://github.com/Rikorose/audio-featrs - Audio feature calculation written in Rust.
rust-audio-experiments
- https://github.com/SecondFlight/rust-audio-experiments - An unorganized collection of audio experiments
Glicol
- Glicol - an acronym for "graph-oriented live coding language", is a computer music language with both its language and audio engine written in the Rust programming language, a modern alternative to C/C++. Given this low-level nature, Glicol can run on many different platforms such as browsers, VST plugins and the Bela board. Glicol's synth-like syntax and powerful audio engine also make it possible to combine high-level synth or sequencer control with low-level sample-accurate audio synthesis, all in real-time.
- https://github.com/glicol/glicol-cli - cross-platform music live coding in terminal
- https://github.com/glicol/glicol-vst - music live coding as a vst plug-in
Rust-AV
- https://github.com/rust-av/rust-av - Pure-rust implementation of multimedia primitives and eventually some examples of demuxers, muxers and codecs.
RustFFT
- https://github.com/ejmahler/RustFFT - RustFFT is a high-performance FFT library written in pure Rust.
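A minimal forward-transform sketch in the spirit of the RustFFT documentation (buffer length and contents are arbitrary here):

// Sketch only: in-place forward FFT of a 1024-point complex buffer.
use rustfft::{num_complex::Complex, FftPlanner};

fn main() {
    let mut planner = FftPlanner::<f32>::new();
    let fft = planner.plan_fft_forward(1024);
    // Real input samples go in the `re` field; RustFFT transforms the buffer in place.
    let mut buffer = vec![Complex { re: 0.0f32, im: 0.0f32 }; 1024];
    fft.process(&mut buffer);
    println!("DC bin = {}", buffer[0].re);
}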
rust-music
- https://github.com/TomboFry/rust-music - Somehow, a DAW, in Rust? (Please don't have ANY expectations) [MIRROR]
Groove
- https://github.com/sowbug/groove - A digital audio workstation (DAW) engine.
NihAV
- NihAV - a research multimedia framework built on not invented here syndrome and licensed under GNU Affero General Public License 3.0 (relicensing to other free licenses available on request).
nightfall
- https://github.com/Dusk-Labs/nightfall - a library used internally by Dim to allow on-demand transcoding and streaming of various video files.
pitch_calc
- https://github.com/RustAudio/pitch_calc - A library for musical pitch conversions! Provides functions and methods for converting between frequency (hz), midi-step, letter-octave and mels.
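The core conversion behind such a library is the standard equal-tempered mapping between MIDI step and frequency, f = 440 · 2^((n − 69) / 12); a generic sketch of that formula (illustrative only, not pitch_calc's actual API) follows:

// Generic 12-TET step/frequency conversions (not the pitch_calc API).
fn step_to_hz(step: f64) -> f64 {
    440.0 * 2f64.powf((step - 69.0) / 12.0) // MIDI step 69 = A4 = 440 Hz
}

fn hz_to_step(hz: f64) -> f64 {
    69.0 + 12.0 * (hz / 440.0).log2()
}

fn main() {
    println!("middle C (step 60) = {:.2} Hz", step_to_hz(60.0)); // ~261.63 Hz
    println!("440 Hz = step {}", hz_to_step(440.0));             // 69
}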
valib
- valib - a library focusing on abstracting DSP algorithms to make them reusable and composable. It’s focusing on musical applications such as plugins or embedded digital synths.
Zig
Zang
- https://github.com/dbandstra/zang - Audio for Zig
Kasumi
- https://github.com/extrasharp/kasumi - audio graph
Nim
Omni
- https://github.com/vitreo12/omni - a cross-platform DSL (Domain Specific Language) for low level audio programming. It aims to be a new, expressive and easy to use programming language to code audio algorithms in. Omni leverages nim and C to compile code to self-contained static or shared libraries that can then be loaded and used anywhere. So far, two wrappers have already been written to compile omni code to SuperCollider UGens (omnicollider), or Max 8 objects (omnimax).
Haskell
Punctual
- https://github.com/dktr0/Punctual - language for live coding audio and visuals. It allows you to build and change networks of signal processors (oscillators, filters, etc) on the fly. When definitions are changed, when and how they change can be explicitly indicated. Punctual runs in a web browser, and is portable to any system with a browser that supports the Web Audio API (for sound) and WebGL (for video). While it can be used in a standalone way, it is also bundled inside the Estuary platform for collaborative live coding.
Estuary
- https://github.com/dktr0/estuary - Platform for collaboration and learning through live coding
Lua
See Lua
- https://github.com/stetre/luajack - Lua bindings for the JACK Audio Connection Kit
Worp
- https://github.com/zevv/worp - an experimental sound / music / DSP engine written in LuaJIT, currently in early alpha state.
isms
- https://github.com/tehn/isms - reconstruction of norns. reconsidering design for use on a computer with large screen and keyboard.
Purescript
- klank.dev
- https://github.com/mikesol/purescript-audio-behaviors - DSP in the browser using the behavior pattern.
MATLAB
- https://github.com/ricedsp/rwt - a collection of Matlab M-files and C MEX-files for 1D and 2D wavelet and filter bank design, analysis, and processing. The toolbox provides tools for denoising and interfaces directly with our Matlab code for wavelet domain hidden Markov models and wavelet regularized deconvolution.
SuperCollider
1996 / OSS in 2002
- SuperCollider - an environment and programming language for real time audio synthesis and algorithmic composition. It provides an interpreted object-oriented language which functions as a network client to a state of the art, realtime sound synthesis server.
- https://github.com/madskjeldgaard/awesome-supercollider - A curated list of SuperCollider stuff
- YouTube: SuperCollider Tutorials - playlist
- UGen - UGens represent calculations with signals. They are the basic building blocks of synth definitions on the server, and are used to generate or process both audio and control signals. The many subclasses of UGen are the client-side representations of unit generators, and are used to specify their parameters when constructing synth definitions (see SynthDef).
- Writing Unit Generators - "SuperCollider has a vast library of unit generators that can be assembled in unlimited ways, but sometimes even those aren't sufficient. You may have a need for an unusual signal processing algorithm, or you're running into efficiency problems that can be solved by condensing parts of your SynthDef into a single UGen. UGens are defined in server plugins written in C++. Server plugins are not to be confused with quarks, which extend the SuperCollider language. UGens exist more or less independently of the interpreter and you don't need much familiarity with SC to write them. Writing UGens is not too difficult, but it's arguably far less convenient and intuitive than the high-level tools that SuperCollider provides. You'll need a build system and a good amount of boilerplate code -- even fairly basic signal processing operations can require a lot of code. You don't have an instant live coding environment, and mistakes can easily crash the server. SuperCollider's UGens are stable and well-tested, and custom UGens are best viewed as a last resort for when the limitations of SC are impassable."
- Pattern Guide 01: Introduction | SuperCollider 3.11.0 Help - Patterns are one of the most powerful elements of the SuperCollider language, but in some ways they can be difficult to approach using only the class-oriented help files. These documents seek to bridge the gap, explaining the conceptual background behind patterns, describing the usage of specific Pattern classes, and proceeding into examples of practical musical tasks written as patterns.
- https://github.com/madskjeldgaard/fzf-sc - Combine the magic of fuzzy searching with the magic of SuperCollider in NeoVim
- https://github.com/madskjeldgaard/nvim-supercollider-piano - emulates the functionality found in digital audio workstations such as Ableton, where a user may select a synth and press their computer keyboard's keys to play the synth. Except, this is a million times more cool because it is NeoVim and SuperCollider.
- https://github.com/ideoforms/python-supercollider - Python client for the SuperCollider audio synthesis server
- https://github.com/supercollider/cookiecutter-supercollider-plugin - cookiecutter project for SuperCollider server plugins
- https://github.com/asb2m10/plugincollider - Run SuperCollider as a VST3 plugin
- https://github.com/enrike/sc - Python interface to Supercollider scsynth
- sc by scgolang - provides a way to interface directly with the SuperCollider server from Go programs.
- https://github.com/scgolang/scc - SuperCollider CLI in Go
- https://github.com/miguel-negrao/FPLib - Functional Programming library for SuperCollider
- https://github.com/ModalityTeam/Modality-toolkit - a library to facilitate accessing (hardware) controllers in SuperCollider. It is designed and developed by the ModalityTeam, a group of people that see themselves as both developers and (advanced) users of SuperCollider. The central idea behind the Modality-toolkit is to simplify creation of individual (electronic) instruments with SuperCollider, using controllers of various kinds. To this end, a common code interface, MKtl, is used for connecting controllers from various sources (and protocols). These are currently HID and MIDI; OSC, serial-port and GUI-based controllers are planned to be integrated.
- https://github.com/madskjeldgaard/outputfx - Easily manage output effect chains on the main output of SuperCollider.
- https://github.com/madskjeldgaard/ReaCollider - Use Reaper from SuperCollider.
- Utopia is a SuperCollider library for the creation of networked music applications, and builds upon the work of the Republic Quark and other existing network systems in SuperCollider. It aims to be modular (features available largely 'à la carte'), secure (provides methods for authentication and encryption), and flexible (to the extent possible, it tries not to impose a particular design or architecture). It provides functionality for synchronisation, communication, code sharing, and data sharing.
- Modality Toolkit - simplifies creation of highly personalised electronic instruments in SuperCollider by introducing a common code interface. This allows for uniform access to HID, MIDI, OSC and GUI-based controllers, as well as switching of functionality, even at runtime.
- NNdef: livecoding digital musical instruments in SuperCollider using functional reactive programming
- https://github.com/williamthazard/cuterScope - a cuter scope
- https://github.com/madskjeldgaard/mkplugins - A collection of SuperCollider plugins by Mads Kjeldgaard.
- https://github.com/escuta/mosca - SuperCollider class for GUI-assisted authoring of dynamic ambisonic sound fields.
- https://github.com/schollz/supercollider-microtonic - Porting the Microtonic VST to SuperCollider
- https://github.com/williamthazard/bwsampler - a sampler built in collaboration with students at the Bridge Way School in Philadelphia, PA
- https://github.com/yaxu/SuperDirt - SuperCollider implementation of the Dirt sampler for the Tidal programming language
- https://www.marcinpietruszewski.com/the-new-pulsar-generator - The New Pulsar Generator (nuPg) is an interactive program for sound synthesis developed in SuperCollider 3 (SC3) programming language. The nuPg program produces a form of synthesis called pulsar synthesis (PS). The technique of PS operationalises the notion of rhythm with its multitemporal affordances as a system of interconnected patterns evolving on multiple timescales. The technique generates a complex hybrid of sounds across the perceptual time span between infrasonic pulsations and audio frequencies, giving rise to a broad family of musical structures: singular impulses, sequences, continuous tones, time-varying phrases, and beating textures.
- https://github.com/everythingwillbetakenaway/DX7-Supercollider - My accurate Yamaha DX-7 clone. Programmed in Supercollider.
- https://github.com/madskjeldgaard/weirdverbs - Weird, bad and/or trashy reverb experiments for SuperCollider
- https://github.com/sonoro1234/Lua2SC - Lua client for supercollider scsynth and supernova
- ScalaCollider - a real-time sound synthesis and signal processing system, based on SuperCollider and the Scala programming language. It can be used as a library in a Scala application, but there is also a small stand-alone prototyping environment called ScalaCollider-Swing. See Mellite for a more comprehensive IDE.
- https://github.com/schollz/livecodingmusic - live coding music in python + supercollider
- https://github.com/nhthn/supercollider-cd-skip - A CD skipping UGen for SuperCollider
- https://github.com/tremblap/IBufWr - a SuperCollider (https://supercollider.github.io/) port of the bespoke Max (https://cycling74.com/products/max) object ipoke~ v4.1 (http://www.no-tv.org/MaxMSP/), which allows writing to server buffers without leaving unfilled indices when writing faster than real time.
- https://github.com/Xon77/Live4Life - A spatial performance tool for SuperCollider: interactive sequencer of syntheses and samples with pre-defined spatial trajectories and algorithms, including a multichannel effect system, controlled by a GUI and a combination of several interfaces.
- https://github.com/redFrik/GFX - Green effects module system with optional GUI
- ProxySpace | SuperCollider 3.12.2 Help - an environment of references on a server
AlgaLib
- https://github.com/vitreo12/AlgaLib - a new environment for live coding that focuses on the creation and connection of sonic modules. Unlike other audio software environments, the act of connecting Alga modules together is viewed as an essential component of music composing and improvising, and not just as a means towards static audio patches. In Alga, the definition of a new connection between the output of a module and the input of another does not happen instantaneously, but it triggers a process of parameter interpolation over a specified window of time.
Clojure
- https://github.com/aamedina/lv2 - clojure lv2 experiment
Overtone
- Overtone is an open source audio environment being created to explore musical ideas from synthesis and sampling to instrument building, live-coding and collaborative jamming. We use the SuperCollider synth server as the audio engine, with Clojure being used to develop the APIs and the application. Synthesizers, effects, analyzers and musical generators can be programmed in Clojure.
lein repl
user=> (use 'overtone.live)
- PDF: Overtone Cheat Sheet
- Cadejo is a MIDI management tool for use with Overtone
- https://github.com/ctford/leipzig - A composition library for Overtone.
Pink
- https://github.com/kunstmusik/pink - provides the basis for developing music systems. It is also designed to scale to user needs, whether they are exploring and designing low-level signal processing algorithms, developing pre-written compositions, or creating interactive real-time systems. It offers a slim core engine designed to be highly customizable.
Music as Data
- Music as Data (MAD) is a live programming language/environment based on Processing.org written in Clojure. MAD lets you treat music as data and apply data transformation on the fly so you can experiment with notes and samples.
Scala
- https://github.com/Sciss/SoundProcesses - A computer music framework to describe, create and manage sound processes in the Scala programming language.
- Mellite - Mellite is an environment for creating experimental computer-based music and sound art. It is a desktop application, allowing you to work with real-time and offline sound synthesis processes, combining multiple perspectives such as live improvisation, implementing sound installations, or working in DAW-like timeline views. Mellite runs on all major operating systems and can be used both in a purely graphical fashion, or by writing and connecting snippets in the Scala programming language.
ChucK
2003
- ChucK is a programming language for real-time sound synthesis and music creation. It is open-source and freely available on MacOS X, Windows, and Linux. ChucK presents a unique time-based, concurrent programming model that's precise and expressive (we call this strongly-timed), dynamic control rates, and the ability to add and modify code on-the-fly. In addition, ChucK supports MIDI, OpenSoundControl, HID device, and multi-channel audio. It's fun and easy to learn, and offers composers, researchers, and performers a powerful programming tool for building and experimenting with complex audio synthesis/analysis programs, and real-time interactive music. [16]
- WebChucK IDE
- https://github.com/ccrma/webchuck - WebChucK brings ChucK, a strongly-timed audio programming language, to the web! ChucK's C++ source code has been compiled with Emscripten and targets WebAssembly (WASM) to run via the AudioWorkletNode interface of the Web Audio API. With near-native performance, WebChucK runs on modern desktop browsers as well as tablets and mobile devices! Bring together ChucK's real-time sound synthesis engine and web tools to create new experiences and develop creative workflows. Embed WebChucK into any website to build online audiovisual experiences, immersive multi-channel audio web apps, or shareable musical instruments! To learn more about WebChucK and what it can do, check out https://chuck.stanford.edu/webchuck/
- LiCK - Library for ChucK.
- YouTube: ChucK and Audicle Quick Demo 2006
- ChucK Racks - makes it possible to write and execute ChucK programs as VST/AudioUnit plugins.
- https://github.com/spencersalazar/chugin.rs - Rust <> ChucK (Chugin) bindings generator.
- https://github.com/DBraun/ChucKDesigner - an integration of the ChucK music/audio programming language with the TouchDesigner visual programming language.
Processing
- Processing.org - a flexible software sketchbook and a language for learning how to code within the context of the visual arts. Since 2001, Processing has promoted software literacy within the visual arts and visual literacy within technology. There are tens of thousands of students, artists, designers, researchers, and hobbyists who use Processing for learning and prototyping.
- XYscope - a library for Processing to render graphics on a vector display (oscilloscope, laser) by converting them to audio. This includes most primitive shapes (point, line, rect, ellipse, vertex, ...) by converting those points to waveforms (oscillators with custom wavetables) and generating audio in real time using the Minim library. Vector graphics shine on a vector display and now you can view your generative works like never before! Tested on MacOS 10.9+ / Windows / Linux (RPi!).
FAUST
2002
- FAUST - Functional Audio Stream is a functional programming language specifically designed for real-time signal processing and synthesis. FAUST targets high-performance signal processing applications and audio plug-ins for a variety of platforms and standards. Simply put, Faust lets one program dsp code once in a purely functional language, and compile it to various platforms including max/msp, supercollider, audio unit, vst, lv2, and more.
- Faust Libraries - This page provides information on how to use the Faust libraries.
- https://github.com/grame-cncm/faustlibraries - This repository contains the source code and the documentation of the DSP libraries of the Faust Programming Language.
_        # a channel
_,_      # two channels in parallel
_,_:_,!  # two channels sequenced into one channel
process = _ : _;                    // series combination (1 in, 1 out)
process = _ , _;                    // parallel combination (2 ins, 2 outs)
process = +;                        // summer (2 ins, 1 out)
process = _ , _ : +;                // same summer
process = _ , _ : + : _;            // same summer
process = -;                        // signal subtractor
process = *;                        // pointwise signal multiplier (nonlinear)
process = /;                        // pointwise signal divider (nonlinear)
process = mem;                      // unit-sample delay
process = _ , 1 : @;                // unit-sample delay
process = _ , 10 : @;               // ten-sample delay
process = a ~ b;                    // feedback thru b around a
process = _ ~ _;                    // feedback thru _ (generates 0)
process = mem ~ _;                  // two-sample closed loop (generates 0)
process = + ~ _;                    // digital integrator
process = _ <: _ , _;               // mono to stereo
process = _ <: _ , _ , _ , _;       // mono to quad
process = _ , _ <: _ , _ , _ , _;   // stereo to quad (see diagram)
process = _ , _ :> _;               // stereo to mono [equiv to +]
process = _ , _ , _ , _ :> _;       // quad to mono [equiv to +,+:+]
hslider("gain", 0, 0, 1, 0)
Grouping solution 1: [18]
a = vgroup("term1", nentry("a",1,0,10,1));
b = vgroup("term1", nentry("b",2,0,10,1));
x = vgroup("term2", nentry("x",3,0,10,1));
y = vgroup("term2", nentry("y",4,0,10,1));
process = a*x + b*y;
Solution 2:
a = nentry("v:term1/a",1,0,10,1);
b = nentry("v:term1/b",2,0,10,1);
x = nentry("v:term2/x",3,0,10,1);
y = nentry("v:term2/y",4,0,10,1);
process = a*x + b*y;
Environments
- https://github.com/jpcima/faustpp - A post-processor for Faust, which allows generating code with more flexibility
FaustWorks:
git clone git://faudiostream.git.sourceforge.net/gitroot/faudiostream/FaustWorks
FaustLive:
- https://git.open-music-kontrollers.ch/lv2/mephisto.lv2 - FAUST JIT LV2 plugin bundle
- https://github.com/Simon-L/pasfa - A paste service for Faust code
Videos
- YouTube: Faust Quick Start - 1h29m
- YouTube: CCRMA Faust Workshop 2015
- YouTube: Romain Michon: Faust Physical Modeling Library - 2018
- YouTube: Faust Physical Modeling Workshop
Guides
- PDF: FAUST : an Efficient Functional Approach to DSP Programming - Yann Orlarey, Dominique Fober and Stephane Letz. FAUST is a programming language that provides a purely functional approach to signal processing while offering a high level of performance. FAUST aims at being complementary to existing audio languages by offering a viable and efficient alternative to C/C++ to develop signal processing libraries, audio plug-ins or standalone applications. The language is based on a simple and well formed formal semantics. A FAUST program denotes a signal processor, a mathematical function that transforms input signals into output signals. Being able to know precisely what a program computes is important not only for programmers, but also for compilers needing to generate the best possible code. Moreover these semantic questions are crucial for the long-term preservation of music programs. The following paragraphs will give an overview of the language as well as a description of the compiler, including the generation of parallel code.
- PDF: A FAUST Tutorial
- PDF: FAUST Tutorial 2
- Romain Michon - Faust Tutorials - This page gathers a series of tutorials around the Faust programming language written as part of various projects. It was not written in a "linear way", so each section is independent and covers a different topic. As a result, it is quite possible that you will find some redundancy if you read this page from top to bottom. Should you have any question, feel free to send me an e-mail.
- CCC: Aggraef: Plugin Programming with Faust #minilac16 - video
Tools
- https://github.com/Normalised/pMix2 - a preset interpolator, plug-in chainer and Faust IDE written with JUCE
- https://github.com/hrtlacek/faustTools - Tools for analyzing Faust programs. At the moment there is one tool present, faustwatch. Faustwatch is a tool that observes a .dsp file used by the DSP language FAUST. When the file is changed (saved after editing), the block diagram can be automatically shown in the default browser, and the impulse response can be plotted in both the time and frequency domains. The time and frequency domain plots of the last saved version are always visible so the current and last saved version can be compared. The impulse response is played back via pyo and JACK Audio. Basically it is supposed to make FAUST development faster.
- https://github.com/Fr0stbyteR/faust-ui - a user interface parser, compositor and component renderer for the Faust programming language
- https://github.com/rottingsounds/bitDSP-faust - bit juggling digital signal processing for faust.
- https://github.com/droosenb/faust-wdf-library - an in-progress library for creating Wave Digital Filter based models for the Faust programming language, written entirely in Faust. Currently the library only includes files and functions which are being tested for use in the eventual completed library.
- https://github.com/jpcima/faustmd - This program builds the metadata for a Faust DSP ahead of time, rather than dynamically. The result is a block of C++ code which can be appended to the code generation.
- https://github.com/mzuther/ProtoFaust - DSP prototyping in Faust for VCV Rack
- https://github.com/DBraun/TD-Faust - an integration of FAUST (Functional AUdio STream) and TouchDesigner
Targets
- https://bitbucket.org/agraef/faust-lv2 - provides an LV2 plugin architecture for the Faust programming language. The package contains the Faust architecture and templates for the needed LV2 manifest (ttl) files, a collection of sample plugins written in Faust, and a generic GNU Makefile as well as a shell script to compile plugins using the architecture.
- https://bitbucket.org/agraef/faust-vst - provides a VST plugin architecture for the Faust programming language. The package contains the Faust architecture, faustvst.cpp, the faust2faustvst helper script which provides a quick way to compile a plugin, a collection of sample plugins written in Faust, and a generic GNU Makefile for compiling and installing the plugins.
- https://github.com/ftrvxmtrx/faust9p - Tools to compile Faust DSP code to a 9p server running on Plan 9.
Projects
- https://github.com/dariosanfilippo/modified_lorenz
- https://github.com/dariosanfilippo/modified_van_der_pol
- https://github.com/dariosanfilippo/modified_chua
- https://github.com/dariosanfilippo/modified_thomas
- https://github.com/tai-studio/faust-sc - dsp objects with a supercollider-like interface
OpenFrameworks
- OpenFrameworks - an open source C++ toolkit for creative coding.
- https://github.com/danomatika/ofxPd - (maintained) a Pure Data addon for OpenFrameworks using libpd
- https://github.com/npisanti/ofxPDSP - openFrameworks addon for audio synthesis and generative music
- https://github.com/chaosct/ofxPython - openFrameworks addon to embed Python
- https://github.com/danomatika/loaf - an interpreter for openFrameworks which allows you to write OF applications in the Lua scripting language. This means you can quickly create using openFrameworks but without having to compile C++ or use a heavy IDE like Xcode or Visual Studio. A built-in OSC (Open Sound Control) server enables loaf to communicate natively with other creative coding and music applications over a network connection. Additionally, a built-in Syphon server allows for streaming loaf's screen output to visual applications on the same macOS system.
- https://github.com/Akira-Hayasaka/ofxGLEditor - Script editor for oF Live Coding
- https://github.com/ofnode/ofLiveApp - Live coding openFrameworks applications.
- https://github.com/d3cod3/ofxVisualProgramming - A visual programming patching environment for OF
- OpenFrameworks UI – braitsch - a simple to use, fully customizable, high-resolution graphical user interface for OpenFrameworks inspired by the popular JavaScript datgui interface.
- https://github.com/zenwerk/ofxBeatTracking - simple beat tracking with openFrameworks.
- https://github.com/moebiussurfing/ofxSurfingVideoSkip - A tweakable looping video player.
Happy Brackets
- Happy Brackets - aims to make creative coding on multiple devices easier. Products like the Beagle Bone and Raspberry Pi herald an exciting new world of computing; they cost very little but are capable of running a full Linux OS, process realtime audio and video, and offer rich networking and IO capabilities. This has opened up a great world of creative possibilities, especially where you have many devices all communicating over a network, gathering sensor data and producing audio-visual outputs. It can be a pain to set-up and configure all of these devices. Development can be slow and debugging confusing. With HappyBrackets, you set them all up once and then remote control them from a single computer. You don’t need to plug your devices into a screen or keyboard, just run them on a disk image set up with the HappyBrackets toolkit. Finding your devices on the network is easy. Then you can write and deploy Java code straight from our special plugin for the popular development environment IntelliJ IDEA. HappyBrackets is an open source project.
Cinder
- Cinder - a C++ library for programming with aesthetic intent - the sort of development often called creative coding. This includes domains like graphics, audio, video, and computational geometry. Cinder is cross-platform, with official support for macOS, Windows, Linux, iOS, and Windows UWP. Cinder is production-proven, powerful enough to be the primary tool for professionals, but still suitable for learning and experimentation. Cinder is released under the 2-Clause BSD License.
- https://github.com/AVUIs/AdamBrucePiotr - Cinder project with Pure Data
- https://github.com/videodromm/videodromm-livecoding-cinder - live coding environment
Two
- https://github.com/hugoam/two#reflection - c++ toolkit for rapid development of live graphical apps and games
Audiality
- Audiality 2 - a realtime audio and music engine, primarily intended for video games. While it supports traditional sample playback as well as additive, subtractive and granular synthesis, the distinctive feature is subsample accurate realtime scripting.
acidforth
- https://github.com/boomlinde/acidforth - Software synthesizer and sequencer programming language. Hard to learn, impossible to master.
Jupyter
nnAudio
- https://github.com/KinWaiCheuk/nnAudio - Audio processing by using pytorch 1D convolution network
Nyquist
- Nyquist is a sound synthesis and composition language offering a Lisp syntax as well as an imperative language syntax (SAL) and a powerful integrated development environment. Nyquist is an elegant and powerful system based on functional programming.
- http://www.audacityteam.org/help/nyquist/
- http://wiki.audacityteam.org/wiki/Nyquist_Audio_Programming
Extempore
- Extempore - a programming language and runtime environment designed to support 'cyberphysical programming'. Cyberphysical programming supports the notion of a human programmer operating as an active agent in a real-time distributed network of environmentally aware systems. The programmer interacts with the distributed real-time system procedurally by modifying code on-the-fly.
Central to the Extempore programming environment is a new systems programming language designed to support the programming of real-time systems in real-time. xtlang is designed to mix the high-level expressiveness of Lisp with the low-level expressiveness of C. xtlang uses an s-expression syntax common to Lisp, and more particularly to Scheme. xtlang also borrows many Lisp-like semantics including first class closures, tail recursion and macros. However, xtlang also borrows heavily from systems languages like 'C' including static typing, low-level type expressivity, direct pointer manipulation and explicit memory management (i.e. no GC). xtlang then extends these 'C' semantics with type-inferencing, ad-hoc polymorphism, reified generics, and zone/region based memory management.
Audio Programming Environment
- Audio Programming Environment (APE) is an open-source audio plugin that allows you to directly script/code DSP in real time, integrated in your signal chain. Utilizing a built-in code editor, compiler, console and a basic control surface API, testing and prototyping DSP is extremely easy and convenient.
AudioKit
- AudioKit - a powerful audio toolkit for synthesizing, processing, and analyzing sounds. It contains several examples for iOS (iPhone & iPad) and Mac OSX, written in both Objective-C and in Swift. A test suite is provided for many of the operations included in AudioKit. A playground project can be used for trying out AudioKit instruments and for greatly speeding up the development of your own instruments and applications. [19]
- http://audiokit.io/playgrounds/ - live coding
- https://github.com/AudioKit/Cookbook - Canonical Examples for Using the AudioKit Framework
- https://github.com/jokela/ROMPlayer - AudioKit Sample Player (ROM Player) - EXS24, Sound Font, Wave Player
Signalflow
- SignalFlow - an audio synthesis framework whose goal is to make it quick and intuitive to explore complex sonic ideas. It has a simple and consistent Python API, allowing for rapid prototyping in Jupyter, PyCharm, or on the command line. It comes with over 100 built-in node classes for creative exploration. Its core is implemented in C++11, with cross-platform hardware acceleration. SignalFlow has robust support for macOS and Linux (including Raspberry Pi), and has work-in-progress support for Windows. The overall project is currently in alpha status, and interfaces may change without warning.
The #MusicBricks Toolkit
FluCoMa
- FluCoMa - The first release of software from the FluCoMa project is a toolkit of objects for decomposing and describing audio in musical creative coding environments.
Jacktube
- Jacktube is an open source audio/MIDI processing program. It uses LADSPA and DSSI plugins to generate and process audio, and MIDI events to control its operation. The exact behavior is defined by using a simple scripting language to define rules. Even though Jacktube is primarily meant for audio work, it can be used in any signal processing application. The language has some superficial similarities to Perl, but the programming language is designed to be as small and efficient as possible for its purpose, namely setting up plugin graphs and responding to MIDI events.
Sonic Pi
- Sonic Pi - The Live Coding Synth for Everyone. Simple enough for computing and music lessons. Powerful enough for professional musicians. Free to download with a friendly tutorial. Learn to code creatively by composing or performing music in an incredible range of styles from classical to algorave. Ruby DSL.
FunDSP
- https://github.com/SamiPerttu/fundsp - an audio DSP (digital signal processing) library with a focus on usability. It features a powerful inline graph notation that empowers users to accomplish diverse audio processing tasks with ease and elegance. The custom notation taps into composable, zero-cost abstractions that express audio processing networks as Rust types. FunDSP comes with a combinator environment containing a suite of audio components, math and utility functions and procedural generation tools.
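A small sketch of that graph notation, assuming the hacker prelude of a recent fundsp release (method names such as get_mono may differ between versions):

// Sketch only: builds a tiny FunDSP graph with the hacker prelude.
// Exact method names may differ between fundsp releases.
use fundsp::hacker::*;

fn main() {
    // A 440 Hz sine, scaled by 0.3, piped (`>>`) into a lowpass filter.
    let mut graph = sine_hz(440.0) * 0.3 >> lowpass_hz(1000.0, 0.7);
    graph.set_sample_rate(44_100.0);
    // Pull a single mono sample; real-time driver/output code is omitted here.
    let sample = graph.get_mono();
    println!("first sample: {sample}");
}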
alda
- alda - Inspired by other music/audio programming languages such as PPMCK, LilyPond and ChucK, Alda aims to be a powerful and flexible programming language for the musician who wants to easily compose and generate music on the fly, using naught but a text editor. Alda is designed in a way that equally favors aesthetics, flexibility and ease of use, with (eventual) support for the text-based creation of all manner of music: classical, popular, chiptune, electroacoustic, and more! [20] [21] [22]
Platonic Music Engine
- Platonic Music Engine takes an initial input from the user (like a name or number or random string of characters) and converts it using a non-random process into a piece of music, the Platonic Score. The software then allows you to manipulate this random-sounding music via the use of various style algorithms and quantizers into sounding like any style of music imaginable while still preserving the Platonic Score in its core.
Serpent / Aura
- Serpent is the scripting language for Aura, a platform for computer music, animation, and interactive systems. Serpent was designed and implemented as a stand-alone, general purpose interpreter. Serpent is perhaps ideal as a game scripting language due to its real-time design and support for external C++ objects and C functions. It is open source, and I would be happy to share code as well as future design and development with others.
- AuraRT is a software framework for creating interactive multimedia software, particularly advanced interactive music compositions. A subproject is AuraFX, a flexible signal processor configurable by end-users.
- https://wiki.python.org/moin/Audio/
TMC
- TMC - Tiny Music Compiler, a DSL (Domain-Specific Language) that describes a set of operations of audio files. It does not manipulate audio itself. Instead, it calls existing tools such as SoX.
Bipscript
- bipscript - a simple programming environment for creating music. Instantiate and connect LV2 plugins to create audio and MIDI networks, schedule MIDI and other control events directly on the plugins and system outputs, and schedule logic to react to external events, e.g. from a human performer.
"For this example we'll create two LV2 plugins: a software synthesizer and a reverb; we'll feed the output of the synth into the reverb and connect the reverb to the main system outputs. Also note that we set the initial value of the reverb amount to zero;"
local synth = Lv2.Plugin("http://calf.sourceforge.net/plugins/Monosynth", "Velo Bee")
local reverb = Lv2.Plugin("http://calf.sourceforge.net/plugins/Reverb")
reverb.setControl("amount", 0.0)
reverb.connect(synth)
local mainOutput = Audio.StereoOutput("main", "system:playback_1", "system:playback_2")
mainOutput.connect(reverb)
Sporth
- Sporth - short for SoundPipe fORTH, a small stack-based musical language, roughly inspired by stack languages like Forth and PostScript.
stak
- https://github.com/hatcatpat/stak - forth-y livecoding audio doodad heavily inspired by sporth, with an aim to be like ProxySpace.
wcnt
- wcnt - Wav Composer Not Toilet is a not real time modular audio synthesis/sequencer/sampler application for GNU/Linux systems. It outputs audio into 8/16/24/32bit PCM or floating point format .WAV audio files. wcnt is commandline based and reads plain text files, within which definitions of modules and data objects are placed. Modules are where the synthesis/sampling/sequencing happens and operate on a sample by sample basis. Transmission of events between modules only occurs at the time of the event. Occurrences of events are transparent, the data stream is continuous.
Tao
- Tao is a software package for sound synthesis using physical models. It provides a virtual acoustic material constructed from masses and springs which can be used as the basis for building quite complex virtual musical instruments. Tao comes with a synthesis language for creating and playing instruments and a fully documented (eventually) C++ API for those who would like to use it as an object library.
OpenAL
AeonWave
- AeonWave is a low-level, hardware accelerated 4D spatialized audio library aimed at the professional simulation market. The software currently runs on Windows and Linux for ARM and x86 and tests have shown that AeonWave renders 3D audio between 450% and 1400% faster than any competing product depending on the hardware configuration. AeonWave started out as project Anaconda, a fast-rendering new OpenAL implementation. After realizing this would not be good enough for spatialized 3D audio demands, the library has been rewritten.
Vult
- Vult - a simple and powerful language to program high-performance algorithms that may run in small microprocessors or microcontrollers. Vult is specially useful when programming Digital Signal Processing (DSP) algorithms like audio effects or synthesizers. The Vult compiler is a transcompiler, which takes Vult code and produces plain C/C++ code that can be compiled in any platform with a C/C++ compiler, for example: Arduino or Teensy boards. Vult can also generate JavaScript that can be run directly in the browser or C/C++ that can be compiled as Pure Data externals. Code written in Vult has freedom.
Norns
- https://github.com/monome/norns - norns is many sound instruments.
- https://github.com/dndrks/cheat_codes_2 - a sample playground for norns
- https://github.com/madskjeldgaard/nornsgenerate - A command line tool for generating a basic project for the monome norns sound computing platform. The tool generates a basic main lua file and optionally a custom SuperCollider engine as well.
- https://github.com/jaggednz/midigrid - A helper library for midi grids and monome norns
- https://github.com/PaulBatchelor/norns-lowlevel - Example code for doing low-level norns stuff in C
- https://github.com/ngwese/siren - lisp application environment for monome norns
- https://github.com/monome/softcut-studies - multi-voice sample playback and recording system built into the norns environment
- https://github.com/northern-information/athenaeum - a repository of study, spike, and sample scripts for norns
- https://github.com/schollz/piwip - live sample synthesizer for monome norns
- https://github.com/synthetiv/euclidigons - Sequencer for monome norns, imagined by and realized in collaboration with @setfield
- https://github.com/ryleelyman/faeng - faeng is a sequencer
- https://github.com/mattb/beets - probabilistic performance drum loop slicer for Norns
- https://github.com/catfact/thebangs - one-shot polysynth engine for supercollider
- https://github.com/schollz/supertonic - an introspective drum machine
- https://github.com/schollz/amenbreak - dedicated amen break script for norns. Requires at least 150MB of disk space. Over 200 amen break samples (from the internet archive), loaded into memory for instant playback, automatic stutter/stretch/delay effects (amen+break), overdrive/decimate/degrade control (punch), kick drum added to bolster kicks of pre-analyzed samples, edit mode allows manipulating individual slices/kick volume, all effects are determined by probabilities with sweet spots controlled by many different easing functions
- https://github.com/TheSlowGrowth/MonomeArcClone - A clone of the monome arc that works with serialosc.
- https://github.com/schollz/plonky - a string-like keyboard and sequencer for monome grid
- https://github.com/schollz/mididim - a norns script for recording/playing midi
- https://github.com/pangrus/hachi - Euclidean drum machine emulating the TR-808 sound.
- https://github.com/schollz/mx.samples - like mr. radar or mr.coffee but for samples on norns.
- https://github.com/21echoes/pedalboard - A simple collection of chainable effects for the Norns sound computer.
- https://github.com/northern-information/dronecaster - cast drones & record whatever returns
- https://github.com/cfdrake/stack - A stack of bandpass filters, for Monome Norns.
- https://github.com/jaseknighter/flora - An L-systems sequencer and bandpass-filtered sawtooth engine for monome norns
- https://github.com/northern-information/nornsilerplate - monome norns + boilerplate = nornsilerplate
- https://github.com/zjb-s/sempra - dynamic melody sequencer
- https://github.com/schollz/qwertymidi - A mod for norns to plug in a keyboard to use as a midi controller
- zxcvbn - a tracker script for norns.
- https://github.com/andr-ew/ndls - -track tape looper, delay, & sampler for monome norns + grid (beta)
SOUL
- SOUL - represents a paradigm shift in the way audio development is done. Its architecture improves latency and performance in ways that are impossible using current techniques, and it opens up the use of new computing platforms for audio acceleration. SOUL unlocks native-level speed, even when used within slower, safer languages. The SOUL language makes audio coding more accessible and less error-prone, improving productivity for beginners and expert professionals. [23]
- SOUL - creating a new language and infrastructure for writing and deploying audio code. It aims to unlock improvements in latency, performance, portability and ease-of-development that aren't possible with the current mainstream techniques that are being used.
- SOUL - [24]
- https://github.com/soul-lang/SOUL - The SOUL programming language and API
- https://github.com/AkiyukiOkayasu/SOUL-Etude - Simple SOUL lang example
- https://github.com/maxwellpollack/soulpatch-input-example - a simple JUCE console application which *should* load a SOUL Patch and send it input events
Gwion
- Gwion - strongly timed programming language for making music. It is strongly influenced by chuck, but adds a bunch of high-level features: templating, first-class functions and more. It aims to be simple, small, fast, extendable and embeddable.
IBNIZ
- IBNIZ - a virtual machine designed for extremely compact low-level audiovisual programs. The leading design goal is usefulness as a platform for demoscene productions, glitch art and similar projects. Mainstream software engineering aspects are considered totally irrelevant. IBNIZ stands for Ideally Bare Numeric Impression giZmo. The name also refers to Gottfried Leibniz, the 17th-century polymath who, among all, invented binary arithmetic, built the first four-operation calculating machine, and believed that the world was designed with the principle that a minimal set of rules should yield a maximal diversity.
Cmajor
- Cmajor - a C-family language designed specifically for writing DSP signal processing code.
Game audio
- https://www.ambiera.com/irrklang - a high level 2D and 3D cross platform (Windows, Mac OS X, Linux) sound engine and audio library which plays WAV, MP3, OGG, FLAC, MOD, XM, IT, S3M and more file formats, and is usable in C++ and all .NET languages (C#, VisualBasic.NET, etc). It has all the features known from low level audio libraries as well as lots of useful features like a sophisticated streaming engine, extendable audio reading, single and multithreading modes, 3d audio emulation for low end hardware, a plugin system, multiple rolloff models and more. All this can be accessed via an extremely simple API.
JavaScript / Web Audio
MIDI
- http://muki.io/ - music streaming from MIDI and chiptunes [27]
- http://midi.space/ MIDI.space - WebMIDI and Audio Mashup for Finger Drumming [28]
- ZOUND Live - Audio modular collaborative tracker using web technologies.
- Synthy - an online synthesiser and sequencer with live world output and colours made by Filip Hnízdo using the Web Audio API, the live server is powered by socket.io and Node.js. The database of patterns pushed to synthy is powered by the wonderful NeDB.
Audiocogs
- Audiocogs - JavaScript codecs, audio processing and other things that come with the territory.
- https://github.com/audiocogs - javascript and rust
DSP.js
- https://github.com/corbanbrook/dsp.js/ - DSP.js is a comprehensive digital signal processing library for JavaScript. It includes many functions for signal analysis and generation, including oscillators (sine, saw, square, triangle), window functions (Hann, Hamming, etc.), envelopes (ADSR), IIR filters (lowpass, highpass, bandpass, notch), FFT and DFT transforms, delays, and reverb.
Tone.js
- Tone.js is a Web Audio framework for creating interactive music in the browser. The architecture of Tone.js aims to be familiar to both musicians and audio programmers looking to create web-based audio applications. On the high-level, Tone offers common DAW (digital audio workstation) features like a global transport for scheduling and timing events and prebuilt synths and effects. For signal-processing programmers (coming from languages like Max/MSP), Tone provides a wealth of high performance, low latency building blocks and DSP modules to build your own synthesizers, effects, and complex control signals.
Sawtooth.io
- https://www.sawtooth.io/ - basic filtering and synthesis
tonal
- tonal is a modular, functional music theory library. Built from a collection of modules, it's able to create and manipulate tonal elements of music (pitches, chords, scales, keys). It deals with abstractions (not actual music) and, while it is designed for algorithmic composition and music generation, it can be used to develop any kind of MIDI or audio software.
Audiolet
- https://github.com/oampo/Audiolet - a JavaScript library for real-time audio synthesis and composition from within the browser. It uses graph-based routing and pattern-based scheduling to make complex audio simple to program, and easy to understand.
Pedalboard.js
Wad
- https://github.com/rserota/wad - Wad is a Javascript library for manipulating audio using the new HTML5 Web Audio API. It greatly simplifies the process of creating, playing, and manipulating audio, either for real-time playback, or at scheduled intervals. Wad provides a simple interface to use many features one would find in a desktop DAW (digital audio workstation), but doesn't require the user to worry about sending XHR requests or setting up complex audio graphs.
noise.sh
- noise.sh - digital signal processing spreadsheet for sound design.
Vodka
- https://github.com/eeeeaaii/vodka - Vodka is a creative coding environment for creative writers. [31]
Handel
- Handel - a small procedural programming language for writing songs in the browser. The Handel Interpreter interprets Handel programs and plays compositions in the browser, thanks to Tone.js. [32]
Krill
- https://github.com/Mdashdotdashn/krill - a livecoding environment inspired by TidalCycles. The aim of this project is to preserve Tidal's wonderful flexibility while allowing a faster way to hack at it in JavaScript. At this moment, Krill only sequences note events via MIDI; there is no equivalent to Tidal's SuperDirt (although there are some plans in the future to allow things along that line). It is presently under active development.
mono
- https://github.com/stagas/mono - a low level language for audio expressions that compiles to wasm
Other
GridSound
- GridSound - GridSound (0.36.0) wants to be an open source online digital audio workstation following the new WebAudio API
DSPGraph
- https://github.com/ReplicatAudio/DSPGraph - DSP Graph is a web based tool that allows users to write and run JavaScript code directly in their web browser. Note: DSP Graph is not intended for end-users; it is designed for audio software developers and people who want to learn about building audio software. The tool is specifically designed for digital audio signal processing, and features a simple, intuitive interface that makes it easy for users to enter and edit their code. The built-in graph generator can be used to visualize the output of the code in real time, providing a useful tool for experimenting with different algorithms and techniques for digital audio signal processing.
JSFFT
- https://github.com/dntj/jsfft - Small, efficient Javascript FFT implementation for node or the browser.
Gibberish
- https://github.com/gibber-cc/gibberish - designed to be a fast audio API for the browser. It takes the low-level building blocks provided by genish.js and uses them to create higher-level synthesizers, effects, and sequencers. Gibberish processes each synthesis block one sample at a time, enabling a variety of effects not typically possible in JavaScript synthesis systems, most importantly single-sample feedback loops and audio-rate modulation of scheduling.
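To see why per-sample processing matters, here is a language-agnostic sketch (written in Rust, unrelated to Gibberish's actual API) of a feedback path that updates on every sample rather than once per block:

// Generic per-sample rendering with a one-sample feedback path
// (illustrates the concept only; this is not Gibberish code).
fn main() {
    let sample_rate = 44_100.0_f32;
    let freq = 220.0_f32;
    let mut phase = 0.0_f32;
    let mut feedback = 0.0_f32; // previous output sample

    let out: Vec<f32> = (0..44_100)
        .map(|_| {
            // Feedback FM: the previous output nudges the oscillator phase.
            let s = (std::f32::consts::TAU * (phase + 0.1 * feedback)).sin();
            phase = (phase + freq / sample_rate).fract();
            feedback = s; // available on the very next sample, not the next block
            s * 0.5
        })
        .collect();
    println!("rendered {} samples", out.len());
}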
genish.js
- https://github.com/charlieroberts/genish.js - a js library to compile optimized audio callbacks inspired by gen~
Opusmodus / OMN
Wolfram:Alpha
GPU
Neural net
- https://github.com/pranaymanocha/PerceptualAudio - a Tensorflow implementation of our audio perceptual metric. It contains (0) minimal code to run our perceptual metric, (1) code to train the perceptual metric on our JND dataset, and (2) an example of using our perceptual metric as a loss function for speech denoising.
Live coding
- Hybrid Live Coding Interfaces - Hub for researchers interested in Hybrid Live Coding.
- TOPLAP - an organisation founded in 2004, to explore and promote live coding.
- https://github.com/toplap/awesome-livecoding - A curated list of live coding languages and tools
- CCC: Linux Live on Stage - SuperBoucle, Carla, Faust & LV2 Plugins
Gibber
- Gibber - a live coding environment for the web browser, using the Gibberish.js audio engine, the CodeMirror code editor library and wrapping Three.js for 3d graphics and shader support. Version 2 of Gibber features a much more efficient audio engine, some interesting mapping abstractions and a server/database backend for publishing and browsing files and collaboratively live coding. In order of preference, Gibber runs in Chrome, Safari, and Firefox.
- Gibber - Creative Coding for JavaScript
- http://charlie-roberts.com/gibber/extensive-use-of-gibber-in-first-us-algorave/
Fluxus
- Fluxus - a rapid prototyping, playing and learning environment for 3D graphics, sound and games. Extends the Racket language with graphical commands and can be used within its own livecoding environment or from within the DrRacket IDE. Fluxus is crossplatform (Linux, Windows, OSX, Android, PS2), and is released under the GPL licence.
Al-Jazari
- Al-Jazari - livecoded entirely by gamepad, and employs a simple graphical language to allow robots to interact with each other and move over a terrain populated by audio triggers. The running code is displayed and edited in thought bubbles over each robot. For upcoming performance dates see this page. Al-Jazari is written entirely in Scheme, and runs on Fluxus which is a small game/graphical engine which runs on Linux and OSX.
TidalCycles
- TidalCycles - or Tidal for short, is a language for the live coding of patterns. It allows you to make musical patterns with text, describing sequences and ways of transforming and combining them, exploring complex interactions between simple parts. Tidal allows you to express music with very flexible timing, providing a little language for describing patterns as step sequences (which can be polyphonic and polymetric), some generators of continuous patterns (e.g. sinewaves, sawtooths) and a wide range of pattern transformations. Tidal is highly ‘composable’ in that pattern transformations can be easily combined together, allowing you to quickly create complex patterns from simple ingredients. [34]
- Strudel REPL - a new live coding platform incorporating a faithful port of TidalCycles to JavaScript for creating dynamic music pieces in the browser! It is free and open-source and made for beginners and experts alike.
- https://github.com/musikinformatik/SuperDirt - Tidal Audio Engine
- https://github.com/thgrund/tidal-looper - Different looper variants for SuperDirt to provide live sampling in TidalCycles.
- https://github.com/exhibitionist-digital/superdirt-voltage - A small set of SuperDirt synths and Tidal helpers to control modular synths. No MIDI required!
- https://github.com/kindohm/vscode-tidalcycles - Support for TidalCycles in VSCode
Petal
- https://github.com/siaflab/petal - A small language on Sonic Pi with similar syntax to TidalCycles
Livecodelab
- Livecodelab - a special secret place where you can make fancy "on-the-fly" 3d visuals and play awesomely offbeat (literally) sounds. "On-the-fly" meaning: as you type. Type just three letters: "box", and boom! a box appears. No clicking play, no waiting, no nothing. What are you waiting for? Try the magic. Press the button below and play with the examples.
Topos
- https://github.com/Bubobubobubobubo/Topos - a web based live coding environment designed to be installation-free, independent and fun. Topos is loosely based on the Monome Teletype. The application follows the same operating principle, but adapts it to the rich multimedia context offered by web browsers. Topos is capable of many things: it is a generative/algorithmic music sequencer made for improvisation and composition alike; it is a synthesizer capable of additive, subtractive, FM and wavetable synthesis, backed up by a powerful web based audio engine; it can also generate video thanks to Hydra, oscilloscopes, frequency visualizers and image/canvas sequencing capabilities; it can be used to sequence other MIDI and OSC devices (the latter using a NodeJS script); it is made to be used without the need of installing anything, always ready at https://topos.live
flok
- https://github.com/munshkr/flok - Web-based P2P collaborative editor for live coding sounds and images
Moony.lv2
- Moony.lv2 - Realtime Lua as programmable glue in LV2
csynth
- https://github.com/jessecrossen/csynth - An LV2 soft-synth instrument where the patches are C++ code.
SAU / saugns
- https://codeberg.org/sau/extra-scripts - Extra SAU scripts collection. Makefile for bulk-rendering to MP3 files. Currently, mainly various quick test scripts kept here instead of thrown away.
wide
- https://github.com/pd3v/wide - a polyphonic instrument, multi-instrument, DSLish/API MIDI sequencer for live coding music. It sends MIDI messages to any stand-alone synthesiser or DAW. cling (an interactive C++ interpreter) is used for the live coding environment.
Wavepot
- http://studio.substack.net/wavepot - old demo
extramuros
- https://github.com/d0kt0r0/extramuros - language-neutral shared-buffer networked live coding system
NoiseFunge
- https://github.com/revnull/noisefunge
- YouTube: 808 miles high - example
Protoplug
- Protoplug - a VST/AU plugin that lets you load and edit Lua scripts as audio effects and instruments. The scripts can process audio and MIDI, display their own interface, and use external libraries. Transform any music software into a live coding environment!
Worp
- Worp - an experimental sound / music / DSP engine written in LuaJIT, currently in early alpha state.
Klangmeister
- Klangmeister - a live coding environment for the browser. It lets you design synthesisers and compose music using computer code - without having to install anything on your own computer. Klangmeister works best in Chrome, because the synthesis features that it relies on have patchy support across the other browsers. [36]
jazzari
- jazzari - A programmable band in your browser. JavaScript.
Wulfcode
- Wulfcode - Ostensibly for live-coding, but it’s dramatically simplified compared to the SuperCollider and Csound-based environments used by the live-coding community proper. It’s an object-oriented, text-based MIDI sequencer with its own syntax, and a repertoire of commands and structures that allow interesting looping motifs and polyrhythmic phrases to be easily generated and manipulated on the fly.
FoxDot
- FoxDot - a pre-processed Python-based programming language that provides a fast and user-friendly abstraction to SuperCollider. It also comes with its own IDE, which means it can be used straight out of the box with no fiddling around with config files.
Fragment
- Fragment - a web-based spectral musical instrument driven by real-time visuals generated by its users from shared GLSL scripts.
- https://github.com/grz0zrg/fas - C Pixels-based additive, spectral, granular, subtractive and PM audio synthesizer implemented as a WebSocket server
- https://github.com/grz0zrg/fas_launcher - A wxWidgets launcher for the Fragment Audio Server built for the Fragment Synthesizer, a web-based collaborative spectral synthesizer. It should compile on most platforms. The launcher is a simple native program which provides an easy-to-use interface to start the Fragment Audio Server, a convenient way to configure the audio server for individual sessions, and a direct way to launch the web application pre-linked with the native audio server by passing ?fas=1 as a URL argument.
Fragment adds sine waves together to produce sound: the software gathers frequencies from vertical slices of the pixel data of a WebGL-powered canvas. Each horizontal line of the score is associated with a pure sine-wave generator, and all the vertical slices are grouped into one before being fed to the synthesis engine. The pixel data (red and green channels) determine the amplitude of the associated sine wave for each audio channel (it is stereophonic), and the vertical position of a pixel determines which sine-wave generator is active; the synthesizer is mainly controlled by the visuals generated from the GLSL script. A rough sketch of this mapping is shown after this entry.
The synthesizer supports the Web MIDI API (currently only available in Chrome and Opera), so controllers can be assigned to widgets and used to control the GLSL script's uniform variables.
- YouTube: Fragment Synthesizer - playlist [37]
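As a rough illustration of the pixel-to-additive-synthesis mapping described above (a minimal sketch under assumptions, not Fragment's actual engine; in particular the frequency scale and range are made up here):

```python
# Minimal sketch of the mapping described above, not Fragment's engine.
# Assumes a vertical slice of RGBA pixels (height x 4, values 0..255).
# Assumed here: the red channel drives the left amplitude, green the right.
import numpy as np

SR = 44100

def slice_to_audio(pixel_slice, duration=0.05, f_min=27.5, f_max=4186.0):
    """Render one vertical slice of pixels to a short stereo buffer.

    Each row is a sine oscillator; its frequency depends on the row's
    vertical position, and the red/green channels set the per-channel
    amplitudes. A row with zero red and green is an inactive oscillator.
    """
    height = pixel_slice.shape[0]
    t = np.arange(int(SR * duration)) / SR
    out = np.zeros((len(t), 2))
    for row in range(height):
        r, g = pixel_slice[row, 0] / 255.0, pixel_slice[row, 1] / 255.0
        if r == 0.0 and g == 0.0:
            continue                              # oscillator inactive
        # Map row index to frequency (top of the canvas = highest pitch).
        frac = 1.0 - row / max(height - 1, 1)
        freq = f_min * (f_max / f_min) ** frac    # assumed exponential pitch scale
        osc = np.sin(2 * np.pi * freq * t)
        out[:, 0] += r * osc                      # left channel
        out[:, 1] += g * osc                      # right channel
    return out / max(height, 1)                   # crude normalization
```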
PraxisLIVE
- PraxisLIVE - an open-source hybrid visual environment for live creative coding. Praxis LIVE mixes intuitive real-time visual node editing, with a range of built-in components for audio, visual & data processing, together with an embedded compiler and editor for live-coding Processing, Java and GLSL. While including specific support for audio and video processing, Praxis LIVE is designed to support other forms of cyber-physical coding.
clive
- http://mathr.co.uk/clive/ - Live-coding audio in C.
dafxpipe
- https://github.com/nwoeanhinnogaehr/dafxpipe - Software for live coding audio effects and synths in Python. There are also some libraries for doing STFT and phase-vocoder work.
mercury
- mercury - a minimal and human-readable language for the live coding of algorithmic electronic audiovisual performances. Programmed in the Cycling '74 Max 8 node-based creative coding environment as an abstracted layer on the Max/MSP audio engine, it uses Node4Max for parsing, lexing and generative algorithms, and Jitter/OpenGL for the visuals and the responsive text editor.
MDAL
- MDAL - a text-based audio programming language specifically tailored for use with sound drivers operating on low-level devices such as sound chips and simple DACs. While traditional low-level music description languages such as ABC or the Music Macro Language are essentially abstractions of western sheet-music notation, MDAL instead has strong roots in tracker music. The MDAL syntax adheres to several key elements of tracker modules: song structure is broken down into patterns, which are linked via a sequence matrix; time flow is represented vertically; and the concept of measures and note values has no direct representation, as note length is instead measured in steps and ticks. MDAL is a language without a pre-defined nomenclature: aside from a rudimentary set of structural conventions, there are no fixtures and the entire instruction set is customizable. Customization is provided through standardized configuration files, which are parsed and interpreted by an MDAL compiler at runtime, enabling programmers to adapt both user input and data output to the specific needs of their sound drivers/player routines.
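To make the tracker-style data model concrete, here is a small illustrative sketch in plain Python (not MDAL syntax, and every field name is a made-up assumption): patterns hold rows of steps, a sequence matrix orders them per channel, and timing is expressed in steps and ticks rather than note values.

```python
# Illustrative data model only; this is not MDAL syntax or its compiler.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Step:
    note: str | None = None          # e.g. "c4"; None means "no new event"
    fx: dict = field(default_factory=dict)

@dataclass
class Pattern:
    rows: list[Step]                 # time flows vertically, one row per step

@dataclass
class Module:
    ticks_per_step: int              # tracker-style tempo resolution
    patterns: dict[str, Pattern]
    sequence: list[list[str]]        # sequence matrix: pattern names per channel

def flatten(module: Module, channel: int = 0) -> list[Step]:
    """Expand one channel of the sequence matrix into a flat list of steps."""
    out: list[Step] = []
    for row in module.sequence:
        out.extend(module.patterns[row[channel]].rows)
    return out
```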
GAmuza
- GAmuza - made with OF and inspired by Processing, binding (almost) the entire OF language v0.8.4 with its official addons included, the entire OpenGL 1.1 language, a bunch of other addons, and the small GAmuza framework of functions and GUI modules to make creative coding a lot easier, all of it embedded into a slightly modified Lua scripting environment.
Shaden
- https://github.com/brettbuddin/shaden - A modular audio synthesizer.
Sporth / AudioMasher
- Sporth (SoundPipe fORTH) - a small stack-based audio programming language.
- AudioMasher - Live, in-browser audio and music programming environment based on the Sporth audio programming language.
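The stack-based idiom can be illustrated with a toy reverse-Polish evaluator (a minimal Python sketch, not Sporth itself; the word names, arities and semantics here are simplified assumptions): numbers are pushed onto a stack, and each unit-generator word pops its arguments and pushes a signal back.

```python
# Toy RPN evaluator illustrating the stack-based idiom; not actual Sporth.
import math

SR, N = 44100, 44100                 # one second of samples

def sine(freq, amp):
    """Generate a sine tone as a list of samples."""
    return [amp * math.sin(2 * math.pi * freq * n / SR) for n in range(N)]

def add(a, b):
    """Mix two signals sample by sample."""
    return [x + y for x, y in zip(a, b)]

WORDS = {"sine": (2, sine), "add": (2, add)}   # word -> (arity, function)

def run(program):
    stack = []
    for token in program.split():
        if token in WORDS:
            arity, fn = WORDS[token]
            args = [stack.pop() for _ in range(arity)][::-1]
            stack.append(fn(*args))
        else:
            stack.append(float(token))           # literals are pushed
    return stack.pop()

signal = run("440 0.3 sine 660 0.3 sine add")    # two tones, mixed
```

In real Sporth the words are Soundpipe unit generators and the stack carries audio-rate signals; the sketch only mimics the evaluation order.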
Stenophone
- https://github.com/jarmitage/Stenophone - a live coding keyboard which is also a digital musical instrument, called the Stenophone. The Stenophone is an augmented stenotype, or chorded keyboard, which permits continuous gestural control of keys and features an ergonomic design. These capabilities are exploited to enable the manipulation of algorithms and their parameterisation simultaneously. The project has not been worked on in a while but is kept available.
Monolith
- Monolith - an interactive computer music ecosystem written by its author for his own use. Features: on-the-fly audio live coding; Scheme REPL support; a stack-based approach to audio; support for Monome Grid + Arc; support for the Griffin knob; state saving/loading (via SQLite); pre-built interactive pages utilizing interfaces; offline video generation (via x264); OSX/Linux support. Monolith is mainly a bunch of C components built to be controlled inside a fork of s9 Scheme; the s9 REPL is spawned inside the Emacs Scheme interpreter, and from there the rest of the workflow is controlled via Emacs.
Troop
- https://github.com/Qirky/Troop - a real-time collaborative tool that enables group live coding within the same document across multiple computers. Hypothetically, Troop can talk to any interpreter that can take input as a string from the command line, but it is already configured to work with the live coding languages FoxDot, TidalCycles, and SuperCollider.
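The "any interpreter that takes input as a string" idea boils down to forwarding the shared buffer's text to a language process's standard input; a minimal sketch of that mechanism (not Troop's actual networking, protocol, or configuration, and the interpreter command here is just an example) might look like this:

```python
# Minimal sketch of forwarding code strings to an interpreter's stdin;
# not Troop's actual implementation. The interpreter command is an example.
import subprocess

class Interpreter:
    """Wraps a language process and feeds it code received from peers."""
    def __init__(self, cmd=("python3", "-i")):
        self.proc = subprocess.Popen(cmd, stdin=subprocess.PIPE, text=True)

    def evaluate(self, code: str):
        # In a collaborative editor, `code` would arrive over the network
        # whenever any connected client evaluates a block of text.
        self.proc.stdin.write(code.rstrip("\n") + "\n")
        self.proc.stdin.flush()

if __name__ == "__main__":
    interp = Interpreter()
    interp.evaluate("print('hello from a shared buffer')")
```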
Hydra
- Hydra - Livecoding networked visuals in the browser
Cowbell.lol
LiveLily
- https://github.com/alexdrymonitis/LiveLily - a live sequencing and live scoring system through live coding with a Lilypond-like language. The system includes a text editor, a parser, a sequencer, and an interactive/animated score. The commands are inspired by the Lilypond language, with additional commands added to accommodate fast typing and fast coding sequences.
dublang
- dublang - a multi-language live coding system with support for multiple programming languages in a single, integrated live coding session. The name dublang is inspired by the musical style dub: dub consists of remixes of existing music, and just like dub, dublang consists of remixes of existing software tools.
zeroOne
- https://github.com/pd3v/zeroone - a polyphonic instrument, multi-instrument, DSLish/API MIDI sequencer for live coding music. It sends MIDI messages to any stand-alone synthesiser or DAW.
Mac
ixi
- ixi lang v3 - an extremely simple and visual live coding environment, offering easy, high-level control over synth definitions and samples in SuperCollider. The core idea is to represent events in a spatial layout, thus merging musical code and musical scores. The score is active, i.e. if a method is performed upon the score, it changes in real time. The development of ixi lang is part of research involving human-machine interaction, the philosophy of technology and the culture of software use in music.
- https://github.com/thormagnusson/ixilang - currently Cocoa only.
Impromptu
- Impromptu - an OSX programming language and environment for composers, sound artists, VJs and graphic artists with an interest in live or interactive programming. Impromptu is a Scheme language environment, a member of the Lisp family of languages, and is used by artist-programmers in livecoding performances around the globe.
Visuals
Cyril
- Cyril - a programming language designed for fast prototyping of visualisations and livecoding visuals.
P5LIVE
- P5LIVE - p5.js collaborative live-coding vj environment!