Creative coding

From Things and Stuff Wiki


General

See also Effects, Dataflow, Audio, Graphics, Visuals, Generative, Web Audio, etc.


  • https://en.wikipedia.org/wiki/Creative_coding - a type of computer programming in which the goal is to create something expressive instead of something functional. It is used to create live visuals and for VJing, as well as creating visual art and design, art installations, projections and projection mapping, sound art, advertising, product prototypes, and much more.






  • Learn FluCoMa - The Fluid Corpus Manipulation project (FluCoMa) instigates new musical ways of exploiting ever-growing banks of sound and gestures within the digital composition process, by bringing breakthroughs of signal decomposition DSP and machine learning to the toolset of techno-fluent computer composers, creative coders and digital artists.

Audio programming

Less GUI, more code.

See also Dataflow, Computing#Programming, Audio, Notation



  • https://en.wikipedia.org/wiki/Comparison_of_audio_synthesis_environments - typically consist of an audio programming language (which may be graphical) and a user environment to design/run the language in. Although many of these environments are comparable in their abilities to produce high-quality audio, their differences and specialties are what draw users to a particular platform. This article compares noteworthy audio synthesis environments, and enumerates basic issues associated with their use.




  • https://en.wikipedia.org/wiki/Digital_filter - a system that performs mathematical operations on a sampled, discrete-time signal to reduce or enhance certain aspects of that signal. This is in contrast to the other major type of electronic filter, the analog filter, which is an electronic circuit operating on continuous-time analog signals. A digital filter system usually consists of an analog-to-digital converter (ADC) to sample the input signal, followed by a microprocessor and some peripheral components such as memory to store data and filter coefficients etc. Program Instructions (software) running on the microprocessor implement the digital filter by performing the necessary mathematical operations on the numbers received from the ADC. In some high performance applications, an FPGA or ASIC is used instead of a general purpose microprocessor, or a specialized digital signal processor (DSP) with specific paralleled architecture for expediting operations such as filtering. Digital filters may be more expensive than an equivalent analog filter due to their increased complexity, but they make practical many designs that are impractical or impossible as analog filters. Digital filters can often be made very high order, and are often finite impulse response filters which allows for linear phase response. When used in the context of real-time analog systems, digital filters sometimes have problematic latency (the difference in time between the input and the response) due to the associated analog-to-digital and digital-to-analog conversions and anti-aliasing filters, or due to other delays in their implementation. Digital filters are commonplace and an essential element of everyday electronics such as radios, cellphones, and AV receivers.
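
As a concrete, minimal illustration of the "mathematical operations on a sampled signal" idea (a generic sketch, not tied to any particular filter-design tool): a finite impulse response filter is just a weighted sum of the current and previous input samples. The coefficients below form a 4-point moving average, i.e. a crude low-pass.

c

/* FIR filter sketch: y[n] = b[0]*x[n] + b[1]*x[n-1] + b[2]*x[n-2] + b[3]*x[n-3].
   The coefficients form a 4-point moving average (a crude low-pass); a real
   design tool would compute b[] for a desired frequency response. */
#include <stdio.h>
#include <stddef.h>
#define NTAPS 4
/* process a block of samples in place, keeping filter state across calls */
static void fir_process(float *buf, size_t n, float state[NTAPS - 1])
{
    static const float b[NTAPS] = { 0.25f, 0.25f, 0.25f, 0.25f };
    for (size_t i = 0; i < n; i++) {
        float x = buf[i];
        buf[i] = b[0] * x + b[1] * state[0] + b[2] * state[1] + b[3] * state[2];
        state[2] = state[1];
        state[1] = state[0];
        state[0] = x;
    }
}
int main(void)
{
    float state[NTAPS - 1] = { 0 };
    float buf[8] = { 1, 0, 0, 0, 0, 0, 0, 0 };   /* an impulse */
    fir_process(buf, 8, state);
    for (int i = 0; i < 8; i++)
        printf("%f\n", buf[i]);                  /* prints the impulse response */
    return 0;
}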




How-tos


  • Hack Audio - Here you will find informational content related to audio and computer programming, as well as a community of coders/engineers interested in music. New information will be posted on a weekly basis for a range of experience levels – from beginner to professional. If you feel like you don’t have a lot of experience with programming, that’s great! This blog is meant to help you go from zero to hacker hero.
























  • Fiview - freeware application for Windows, Linux and Mac OSX that can be used to design and view digital filters. It also makes it very easy to compare different filters by allowing you to switch between them using the digit keys, and it generates efficient and readable public domain example code that can be used directly in an application. It is released under the GNU GPL. Much of the underlying filter design code was based on mkfilter from Tony Fisher -- see the source code for details. The resulting filters were improved by splitting them into separate stages, which improves the accuracy and stability of them enormously, especially for higher-order Bessel and Butterworth filters. The source also includes a library, fidlib (now with its own page), which can be used to design filters at run-time. The fiview utility generates fast generic compiler-optimisable example C code both using the frequencies provided, and also in a form that allows the frequencies to be provided at run-time via a call to fidlib. This permits applications the flexibility to do things like generating banks of similar filters at run-time according to run-time parameters.


  • MicroModeler DSP - a web-based digital filter IDE. MicroModeler DSP solves the entire filter design, scaling and coding problem in one integrated process. Spend less time on the problems and more time on your project.


  • Polarbear - a tool for designing filters in the complex domain. Filters can be designed by placing any number of poles and zeros on the z plane. From this the filter coefficients are calculated, and the filter can be applied in real time on an audio stream.
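
To illustrate the kind of calculation such a tool performs (a generic sketch, not Polarbear's actual code): a conjugate zero pair at radius rz and angle tz, and a conjugate pole pair at rp and tp, each expand to a second-order polynomial in z^-1, and those coefficients drop straight into a biquad difference equation.

c

/* Sketch (not Polarbear's code): a conjugate zero pair (rz, tz) and pole pair
   (rp, tp) on the z plane give a biquad, since
   (1 - r e^{jt} z^-1)(1 - r e^{-jt} z^-1) = 1 - 2 r cos(t) z^-1 + r^2 z^-2,
   run as y[n] = b0 x[n] + b1 x[n-1] + b2 x[n-2] - a1 y[n-1] - a2 y[n-2]. */
#include <stdio.h>
#include <math.h>
#define PI 3.14159265358979323846
typedef struct { double b0, b1, b2, a1, a2, x1, x2, y1, y2; } biquad;
static void biquad_from_pz(biquad *f, double rz, double tz, double rp, double tp)
{
    f->b0 = 1.0;
    f->b1 = -2.0 * rz * cos(tz);   /* zeros: numerator coefficients */
    f->b2 = rz * rz;
    f->a1 = -2.0 * rp * cos(tp);   /* poles: denominator coefficients (rp < 1 for stability) */
    f->a2 = rp * rp;
    f->x1 = f->x2 = f->y1 = f->y2 = 0.0;
}
static double biquad_tick(biquad *f, double x)
{
    double y = f->b0 * x + f->b1 * f->x1 + f->b2 * f->x2 - f->a1 * f->y1 - f->a2 * f->y2;
    f->x2 = f->x1; f->x1 = x;
    f->y2 = f->y1; f->y1 = y;
    return y;
}
int main(void)
{
    biquad f;   /* poles near the unit circle at ~1 kHz (48 kHz rate) make a resonator */
    biquad_from_pz(&f, 1.0, PI, 0.99, 2.0 * PI * 1000.0 / 48000.0);
    for (int n = 0; n < 10; n++)
        printf("%f\n", biquad_tick(&f, n == 0 ? 1.0 : 0.0));
    return 0;
}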


  • FFmpegSource - (usually known as FFMS or FFMS2) is a cross-platform wrapper library around FFmpeg/libav. It gives you an easy, convenient way to say "open and decompress this media file for me, I don't care how you do it" and get frame- and sample-accurate access (usually), without having to bother with the sometimes less than straightforward and less than perfectly documented libav API.



  • Denormals - (or subnormals) are very small floating point numbers. When they drop below a threshold size, many CPUs (under many different conditions) exhibit considerable drops in performance when processing them. These drops can be as much as a factor of 100 times! What can be done?
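
On x86 the usual fix is to set the flush-to-zero and denormals-are-zero bits of the MXCSR register on each audio thread before processing; another common trick is to add a tiny offset or noise inside feedback paths so signals never decay into the denormal range. A sketch using the standard SSE intrinsics:

c

/* Sketch: disable denormal handling on x86 with SSE before running DSP code.
   FTZ flushes denormal results to zero, DAZ treats denormal inputs as zero;
   the MXCSR flags are per thread, so set them on the audio thread itself. */
#include <xmmintrin.h>   /* _MM_SET_FLUSH_ZERO_MODE */
#include <pmmintrin.h>   /* _MM_SET_DENORMALS_ZERO_MODE */
static void disable_denormals(void)
{
    _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);
    _MM_SET_DENORMALS_ZERO_MODE(_MM_DENORMALS_ZERO_ON);
}
int main(void)
{
    disable_denormals();   /* in a real program: call once per audio thread */
    /* ... run the processing loop ... */
    return 0;
}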







  • Designing Sound by Andy Farnell
  • Welsh's Synthesizer Cookbook: Synthesizer Programming, Sound Analysis, and Universal Patch Book


  • Making Computers Sing - When most people think of computer music or sounds, “blips” and “bleeps” often are the first things to come to mind. Let’s dive in and look at some of the ways a computer musician would make this sound, comparing Max MSP and PureData, Csound, Supercollider and ChucK, and Faust.





bash

# each random byte picks a degree of a major scale; awk synthesizes sine samples as hex, xxd converts them to raw bytes, and aplay plays the stream
cat /dev/urandom | hexdump -v -e '/1 "%u\n"' | awk '{ split("0,2,4,5,7,9,11,12",a,","); for (i = 0; i < 1; i+= 0.0001) printf("%08X\n", 100*sin(1382*exp((a[$1 % 8]/12)*log(2))*i)) }' | xxd -r -p | aplay -c 2 -f S32_LE -r 16000

Historical

MUSIC-N

1957

  • https://en.wikipedia.org/wiki/MUSIC-N - refers to a family of computer music programs and programming languages descended from or influenced by MUSIC, a program written by Max Mathews in 1957 at Bell Labs. MUSIC was the first computer program for generating digital audio waveforms through direct synthesis. It was one of the first programs for making music (in actuality, sound) on a digital computer, and was certainly the first program to gain wide acceptance in the music research community as viable for that task.

The world's first computer-controlled music was generated in Australia by programmer Geoff Hill on the CSIRAC computer, which was designed and built by Trevor Pearcey and Maston Beard. However, CSIRAC produced sound by sending raw pulses to the speaker; it did not produce standard digital audio with PCM samples like the MUSIC-series of programs.

Less obviously, MUSIC can be seen as the parent program for: RTSKED (a later RealTime Scheduling language by Max Mathews), Max/MSP, Pure Data, AudioMulch, SuperCollider, JSyn, Common Lisp Music, ChucK, or any other computer synthesis language that relies on a modular system (e.g. Reaktor).

  • Music N – Musica Informatica - Music N is a name, coined retrospectively, that commonly indicates a set of computer music languages developed over forty years. Although made by different people and in different contexts, these languages share certain characteristics that have led them to be regarded as a single family.

MUSIC-V

1969

  • Music V – Musica Informatica - Music V is sound synthesis software. It is among the best known of the Music N family of computer music programming languages, and certainly the first to spread beyond the boundaries of the United States. Ported to a PDP10, Music V became the Mus10 music compiler system and played scores composed in Leland Smith's SCORE language.



MUSIC-10


SCORE


  • SCORE-11 was originally designed for Vercoe’s MUSIC11 system. MUSIC11, which was written in PDP-11 assembly language, was replaced in 1986 by CSOUND, a version of the program written in the programming language C, and which runs on many different computer systems. SCORE-11 works well with either version of Vercoe’s program, which will be referred to as CSOUND in this manual except where the distinction is important.

SCORE-11 (Brinkman, 1981 & 1990) is a note-list preprocessor for the CSOUND compiler, which was written by Barry Vercoe at MIT. SCORE-11 was written by Alexander Brinkman of the Eastman School of Music. The SCORE-11 input syntax is based, with some important extensions, on the well known “SCORE” program (used on the Stanford-Ircam MUS10 system) by Leland Smith (Smith, 1972 & 1980). A composer who is familiar with either preprocessor will have no problems changing to the other one. This will reduce dependency on a specific music system. Some features of SCORE were not implemented in SCORE-11. These are mostly the features that allow data to be copied from one instrument block to another. Several features were added to SCORE-11 to make it very powerful.

SCORE-11 is written in the PASCAL programming language, increasing its potential for exportability. You will find SCORE-11 to be of great help in the process of defining the hundreds of events that make up a composition. In the following pages you will find a comprehensive manual introducing and describing in detail the various features of SCORE-11.



Cmusic

  • Cmusic - developed in 1980 by Richard Moore at the Computer Audio Research Laboratory (CARL) of the Center for Music Experiment at the University of California at San Diego (UCSD). Moore, who had worked alongside Max Mathews on the development of GROOVE, at first thought to create a new version of Music V upgraded to the C language, but subsequently preferred to write completely new software.
  • https://github.com/vlazzarini/cmusic - the sources and makefiles for a build of the classic CARL cmusic and related programs. The build has been tested on OSX and Linux, and although a number of warnings are issued, it compiles and links successfully (but see 'Known issues')

SSSP

In designing the system, we decided early on to adopt a highly interactive approach to the design of the human interface. Batch processing as in Music V (Mathews: 1969) is an alternative, but one which widely separates the composer and the program, causing serious delays in the feedback loop. We feel a score editor must be interactive because there are facets of the task which demand control and aesthetic judgment by the composer in an interactive and exploratory manner. Several modes of interaction have previously been used in music systems, such as alphanumeric text as in MUSIC10 (Smith: 1978), voice recognition (Tucker, Bates, Frykberg, Howrath Kennedy, Lamb, Vaughan: 1977), and piano-type keyboard (New England Digital Corp.: 1978). In our work we have adopted a bias towards graphics-based interaction (Baecker: 1979; Newman and Sproull: 1979) in the belief that this approach can make a significant contribution towards an effective human interface. First, music lends itself well to representations in the visual domain. Second, the task of editing music is complex in the sense that there are many parameters and commands to be manipulated and controlled; this complexity can be reduced by the graphic representation of information. Third, previous work (Pulfer: 1972; Tanner: 1972; Vercoe: 1975) indicates that more congenial interfaces can be constructed using dynamic graphics techniques.

SAMBOX

1977

The compiler was replaced in 1977 with dedicated synthesis hardware in the form of the Systems Concepts Digital Synthesizer (built by Peter Samson and known as the "Samson Box"). The Samson Box was capable of utilizing many types of synthesis techniques such as additive synthesis, frequency modulation, digital filtering and some analysis-based synthesis methods. The PLA language, written by Bill Schottstaedt, allowed composers to specify parametric data for the Samson Box as well as for other sound processing procedures on the PDP10 mainframe (and on its eventual replacement, a Foonly F4). On April 3, 1992, the Foonly and Samson Box were officially retired.

HMSL

  • https://en.wikipedia.org/wiki/Hierarchical_Music_Specification_Language - a music programming language written in the 1980s by Larry Polansky, Phil Burk, and David Rosenboom at Mills College. Written on top of Forth, it allowed for the creation of real-time interactive music performance systems, algorithmic composition software, and any other kind of program that requires a high degree of musical informatics. It was distributed by Frog Peak Music, and runs with a very light memory footprint (~1 megabyte) on Macintosh and Amiga systems.

Unlike CSound and other languages for audio synthesis, HMSL is primarily a language for making music. As such, it interfaces with sound-making devices through built-in MIDI classes. However, it has a high degree of built-in understanding of music performance practice, tuning systems, and score reading. Its main interface for the manipulation of musical parameters is through the metaphor of shapes, which can be created, altered, and combined to create a musical texture, either by themselves or in response to real-time or scheduled events in a score. HMSL has been widely used by composers working in algorithmic composition for over twenty years. In addition to the authors (who are also composers), HMSL has been used in pieces by Nick Didkovsky, The Hub, James Tenney, Tom Erbe, and Pauline Oliveros. A Java port of HMSL was developed by Nick Didkovsky under the name JMSL, and is designed to interface to the JSyn API.


Music Mouse

  • https://en.wikipedia.org/wiki/Music_Mouse - an algorithmic musical composition software developed by Laurie Spiegel. Spiegel's best known and most widely used software, "Music Mouse - An Intelligent Instrument" (1986) is for Macintosh, Amiga and Atari computers. The "intelligent instrument" name refers to the program's built-in knowledge of chord and scale convention and stylistic constraints. Automating these processes allows the user to focus on other aspects of the music in real time. In addition to improvisations using this software, Spiegel composed several works for "Music Mouse", including Cavis muris in 1986, Three Sonic Spaces in 1989, and Sound Zones in 1990. She continued to update the program through Macintosh OS 9, and as of 2012, it remained available for purchase or demo download from her website.

Image/ine

  • Image/ine - the first piece of software (for normal computers) that allowed users to manipulate uncompressed video in real time. Limited, at the time, to 320x240 pixels at some 10 frames per second (the Macintosh 8600 was the dream machine), it nevertheless proved a point: artistic quality and stage guts made lack of frame rate and image quality of secondary interest; real time video manipulation could be done! Soon others followed, all with their strengths and weaknesses. What set Image/ine apart, and sets ImX apart, is that this is software for video people, not for musicians and not for programmers.

Lisp

Common Music

1989

  • Common Music (CM) - a music composition system that transforms high-level algorithmic representations of musical processes and structure into a variety of control protocols for sound synthesis and display. Its main user application is Grace (Graphical Realtime Algorithmic Composition Environment) a drag-and-drop, cross-platform app implemented in JUCE (C++) and S7 Scheme. In Grace musical algorithms can run in real time, or faster-than-real time when doing file-based composition. Grace provides two coding languages for designing musical algorithms: S7 Scheme, and SAL, an easy-to-learn but expressive algol-like language.

Common Music can write scores in several different syntaxes (currently CLM, CMN, Music Kit, MIDI, CSound and Paul Lansky's real-time mixing program, RT). The scores can then be rendered on workstations using any of the target synthesis programs. For example, CLM (Common Lisp Music, written by Bill Schottstaedt) is a widely used and fast software synthesis and signal processing package that can make use of multiple Motorola 56001 DSPs.

Pla is the intellectual ancestor of CM.


  • https://github.com/ormf/cm - Common Music 2.12 by Rick Taube. Also check out the related cm-incudine, cm-fomus, cm-utils and cm-svg repos here.


Common Lisp Music

1997

  • CLM (originally an acronym for Common Lisp Music) is a sound synthesis package in the Music V family. It provides much the same functionality as Stk, Csound, SuperCollider, PD, CMix, cmusic, and Arctic — a collection of functions that create and manipulate sounds, aimed primarily at composers (in CLM's case anyway). The instrument builder plugs together these functions (called generators here), along with general programming glue to make computer instruments. These are then called in a note list or through some user interface (provided by Snd, for example).


  • https://github.com/defaultxr/cl-wav-synth - a wav sample editor. It comes in two parts, the main library for manipulating wav files and a (Mc)CLIM interface with a full lisp listener, a sample pane editor, a spectrum pane editor and a song pane editor.



cl-patterns

  • https://github.com/defaultxr/cl-patterns - A library for writing patterns to generate or process (a)musical sequences of mathematically (un)related (non-)compound values in Lisp. Put more simply, cl-patterns is a system for making music and noise via Lisp code.

Ronin

  • https://github.com/hundredrabbits/Ronin - a procedural graphics tool designed to automate simple graphical tasks, like resizing, cropping, coloring, and generating algorithmic images. It interprets a minimal dialect of LISP; look at the examples to better understand how this all works. The library is constantly updated, revealing new applications for Ronin; you can see the list of available functions here. Most of our iconography and designs were created with both Ronin and Dotgrid.

tosort


Incudine

  • Incudine - useful for designing software synthesizers or sound plugins from scratch, exploiting the expressive power of Common Lisp, without the constraint of using pre-built unit generators. It is also a compositional tool that makes it possible to produce high-quality sounds controllable at the sample level, defining and redefining the digital signal processors and the musical structures on the fly. Incudine introduces the Virtual UGen (VUG), a new mechanism for defining primitive unit generators by writing concise and reusable code. The definition of a VUG contains lisp code and/or other nested VUGs (or "real" UGens), and it is re-arranged and compiled efficiently during the creation of a DSP.

thundersnow

  • thundersnow - a work-in-progress digital audio workstation and live coding laboratory in Lisp. It builds upon cl-patterns' (non-)deterministic (non-)musical composition and performance functionality with a set of graphical control surfaces and visualizations, as well as bdef's audio buffer and analysis abstractions to quickly generate and wrangle musical data. Under the hood it should support any of the sound servers these libraries do, which include SuperCollider (via cl-collider) and Incudine. Its interface is structured using the Common Lisp Interface Manager for that classic, truly dynamic Lisp "feel".
  • https://github.com/defaultxr/thundersnow - Lisp/McCLIM-based digital audio workstation and live coding laboratory.

Csound

  • Csound - a sound and music computing system which was originally developed by Barry Vercoe in 1985 at MIT Media Lab. Since the 90s, it has been developed by a group of core developers. Although Csound has a strong tradition as a tool for composing electro-acoustic pieces, it is used by composers and musicians for any kind of music that can be made with the help of the computer. Csound has traditionally been used in a non-interactive, score-driven context, but nowadays it is mostly used in a real-time context. Csound can run on a host of different platforms including all major operating systems as well as Android and iOS. Csound can also be called through other programming languages such as Python, Lua, C/C++, Java, etc. [8]


  • https://en.wikipedia.org/wiki/Csound - Csound was originally written at MIT by Barry Vercoe, based on his earlier system called Music 11, which in its turn followed the MUSIC-N model initiated by Max Mathews at the Bell Labs. Its development continued throughout the 1990s and 2000s, led by John ffitch at the University of Bath. The first documented version 5 release is version 5.01 on March 18, 2006.





  • /chapter: A-The-Csound-Api - An application programming interface (API) is an interface provided by a computer system, library or application that allows users to access functions and routines for a particular task. It gives developers a way to harness the functionality of existing software within a host application. The Csound API can be used to control an instance of Csound through a series of different functions thus making it possible to harness all the power of Csound in one’s own applications. In other words, almost anything that can be done within Csound can be done with the API. The API is written in C, but there are interfaces to other languages as well, such as Python, C++ and Java. Though it is written in C, the Csound API uses an object structure. This is achieved through an opaque pointer representing a Csound instance. This opaque pointer is passed as the first argument when an API function is called from the host program.
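
A minimal host built on the C API looks roughly like the sketch below (the usual create/compile/perform/destroy pattern; header path and build flags depend on the installation):

c

/* Minimal Csound host using the C API: create an instance, compile the
   command-line arguments (e.g. a .csd file), perform, clean up.
   Header path and link flags depend on the installation. */
#include <csound/csound.h>
int main(int argc, char **argv)
{
    CSOUND *csound = csoundCreate(NULL);    /* the opaque Csound instance */
    int result = csoundCompile(csound, argc, (const char **)argv);
    if (result == 0)
        result = csoundPerform(csound);     /* runs until the score ends */
    csoundDestroy(csound);
    return (result >= 0) ? 0 : result;
}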



  • https://github.com/gogins/cxx-opcodes - The CXX opcodes provide a means for Csound users to embed C++ source code in Csound orchestra code, and for Csound then to compile, load, link, and run C++ during the course of the Csound performance.
  • https://github.com/vlazzarini/opcode_compiler - an experimental opcode that builds on the initial work by Michael Gogins, and is based on the llvm/clang interpreter example code. It provides a just-in-time C module compiler, which can be used to add new opcodes to Csound on-the-fly.



Live Coding with Csound


PythonScore

  • PythonScore - a modular Csound score environment for event generation, event processing, and the fashioning of musical structures in time.


CSoundQT

  • CsoundQt - a frontend for Csound featuring a highlighting editor with autocomplete, interactive widgets and integrated help. It is cross-platform and aims to be a simple yet powerful and complete development environment for Csound. It can open files created by MacCsound. Csound is a musical programming language with a very long history, with roots in the origins of computer music. It is still being maintained by an active community and despite its age, is still one of the most powerful tools for sound processing and synthesis. CsoundQt hopes to bring the power of Csound to a larger group of people, by reducing Csound's initial learning curve, and by giving users more immediate control of their sound. It hopes to be both a simple tool for the beginner, as well as a powerful tool for experienced users.

Cabbage Studio

  • Cabbage Studio - a Csound based DAW with a fully functional patching interface and development environment. Develop, prototype and test Csound based audio instruments on the fly using an integrated development solution that includes an embedded source code editor and rapid GUI designer. Cabbage Studio isn't just for users familiar with Csound, it can load a number of different plugin formats including VST, AU, and LADSPA and comes with over 100 high end audio plugins ready to use out of the box.

Other


  • CMask is a score file generator for Csound. Its main purpose is the generation of events to create textures of granular sounds. Versions for MacOS9, Win, Linux.


  • Hadron Particle Synthesizer - The Hadron particle synthesizer is the ultimate creative tool for granular synthesis. It is available in different plugin formats (VST, AU and Max For Live); even though the graphic appearance is slightly different for the different plugin wrappers, the audio functionality of Hadron is the same in all formats.





  • Cabel - a graphical user interface for building csound instruments by patching modules similar to modular synthesizers. It is simply customizable through XML. It's intended for Csound beginners to learn some fundamentals about this wonderful language (although installing all dependencies of Cabel isn't as trivial as it should be) and for advanced Csound users, who want to test their ideas and play with input parameters in a graphical user interface. Old!

Cecilia

  • Cecilia - a graphic user interface for the sound synthesis and sound processing package Csound. Cecilia enables the user to very quickly build graphic interfaces with sliders and curves to control Csound instruments. It is also an editor for Csound with syntax highlighting and a built-in reference. Cecilia is also a great tool to explore the parameters of a new opcode in an interactive and intuitive way.

Blue

  • Blue - An Integrated Music Environment, powered by Csound. An open-source, cross-platform desktop application for composing music. Use visual tools together with text and code to create the music of your dreams.

pch2csd

  • https://github.com/gleb812/pch2csd - The goal of this project is to (re)implement the Clavia Nord Modular G2 sound engine in Csound, a well-known sound and music computing system.

csound-expression

  • https://github.com/spell-music/csound-expression - a Haskell framework for computer music. With the help of the library we can create our instruments on the fly. A couple of lines in the interpreter is enough to get a cool sound going out of your speakers. It can be used for simple daily sound-file processing or for full-blown live performances.

p5.Csound

  • p5.Csound - a lightweight wrapper to the web assembly build of Csound. It provides all the power of Csound inside p5.js sketches. With over a thousand processing opcodes, it is one of the most extensive audio processing libraries available in a browser. This interface provides very little in the way of wrapper functions. In fact, it only provides two. After that everything is accessible through the Csound object.

Cmix / RTcmix

  • RTcmix - An Open-Source, Digital Signal Processing and Sound Synthesis Language, one of the MUSIC-N family of computer music programming languages. RTcmix is descended from the MIX program developed by Paul Lansky at Princeton University in 1978 to perform algorithmic composition using digital audio soundfiles on an IBM 3031 mainframe computer. After synthesis functions were added, the program was renamed Cmix in the 1980s. Real-time capability was added by Brad Garton and David Topper in the mid-1990s, with support for TCP socket connectivity, interactive control of the scheduler, and object-oriented embedding of the synthesis engine into fully featured applications.

RTcmix has a number of unique (or highly unusual) features when compared with other synthesis and signal processing languages. For one, it has a built-in MINC parser, which enables the user to write C-style code within the score file, extending its innate capability for algorithmic composition and making it closer in some respects to later music software such as SuperCollider and Max/MSP. It uses a single-script instruction file (the score file), and synthesis and signal processing routines (called instruments) exist as compiled shared libraries. This is different from MUSIC-N languages such as Csound where the instruments exist in a second file written in a specification language that builds the routines out of simple building blocks (organized as opcodes or unit generators). RTcmix has similar functionality to Csound and other computer music languages, however, and their shared lineage means that scripts written for one language will be extremely familiar-looking (if not immediately comprehensible) to users of the other language.


  • RT-cmix -- Using CMIX in Real Time - Several computer music "languages" intended to facilitate the coding of sound synthesis and signal-processing algorithms have become popular among practitioners of computer music (CSOUND, CMIX and CMUSIC, for example). Until quite recently, however, low-cost computing power was not available to allow these languages to run in real-time; instead the computer musician generally worked by reading and writing samples to and from disk files and listening to the results when computation was complete. Within the past few years smaller, general-purpose computers have become powerful enough to run digital sound synthesis and signal-processing algorithms in real time. This demonstration will show the capabilities of RTcmix, a set of extensions and modifications to the CMIX computer music language enabling it to function as a real-time programming environment. Included in the RTcmix package is a dynamic event scheduler and the ability to process disk files and external audio signals in real time. Thus RTcmix can be used to create interactive performance works in addition to providing a fast 'turnaround' for composers building tape-music pieces. Additional aspects of the package include a socket-based protocol for control of RTcmix by external processes, and "backwards compatibility" with all existing CMIX features and instruments. The implementation of RTcmix will be discussed in detail, as well as the rationale behind the design of the language. RTcmix is a unix-based language in the public domain, currently running under both Linux and IRIX.

Pascal


Smalltalk

C

  • Microtone - How tiny you ask? 128 bytes. The sound generation itself takes up just 14 lines of assembly, generating 31 bytes of machine code. The rest is taken up by the ELF header and the functions needed to open the sound device and output samples.


  • Coding an Equalizer - This article expands the section on page 20 of my book The Audio Expert that describes briefly how digital equalizers work. As explained in the book, all equalizers are based on filters of various types. Computer code that implements a filter is called Digital Signal Processing, or DSP for short. Most digital filters emulate equivalent analog filters, and the common language for all filters is mathematics. Therefore, several trigonometry formulas are shown below, and there's no escaping this! But the basic operation of the computer code that implements an equalizer is not too difficult to follow, even if you don't understand the formulas. To keep this example as brief as possible, the code implements a simple high-pass filter having one pole (6 dB per octave). Formulas to implement other filter types including those used in parametric equalizers are shown on the Cookbook Formulae web page by Robert Bristow-Johnson.
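
For reference, a one-pole (6 dB per octave) high-pass of the kind the article describes can be written in a handful of lines; this is the generic textbook form, not the article's exact code:

c

/* One-pole (6 dB/octave) high-pass sketch, generic textbook form:
   y[n] = a * (y[n-1] + x[n] - x[n-1]), a = RC / (RC + 1/fs), RC = 1/(2*pi*fc). */
#include <stdio.h>
#define PI 3.14159265358979323846
typedef struct { double a, x1, y1; } hp1;
static void hp1_init(hp1 *f, double cutoff_hz, double sample_rate)
{
    double rc = 1.0 / (2.0 * PI * cutoff_hz);
    f->a = rc / (rc + 1.0 / sample_rate);
    f->x1 = f->y1 = 0.0;
}
static double hp1_tick(hp1 *f, double x)
{
    double y = f->a * (f->y1 + x - f->x1);
    f->x1 = x;
    f->y1 = y;
    return y;
}
int main(void)
{
    hp1 f;
    hp1_init(&f, 100.0, 48000.0);
    for (int n = 0; n < 8; n++)              /* DC input decays toward zero, */
        printf("%f\n", hp1_tick(&f, 1.0));   /* as a high-pass should */
    return 0;
}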




ld_preload sounds

  • https://github.com/gordol/ld_preload-sounds - Generates WAV output by hooking malloc() and read(). Adding support for other calls should be pretty easy, pull-requests are much welcomed! Also, it should go without saying... but I will say it anyway... this is experimental.
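
The underlying trick is LD_PRELOAD symbol interposition: a shared object defines its own malloc(), does something audible with the call, then forwards to the real allocator located via dlsym(RTLD_NEXT, ...). A stripped-down sketch of the idea (not the project's actual code, which renders proper WAV data):

c

/* Sketch of the LD_PRELOAD interposition idea (not this project's code).
   Build: gcc -shared -fPIC hook.c -o hook.so -ldl
   Run:   LD_PRELOAD=./hook.so ls 2>raw.pcm
   The hooked malloc() writes one raw byte per allocation to stderr, then
   forwards to the real allocator. A real implementation must handle
   re-entrancy (dlsym itself may allocate) and would emit a proper WAV header. */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stddef.h>
#include <unistd.h>
static void *(*real_malloc)(size_t) = NULL;
void *malloc(size_t size)
{
    if (!real_malloc)
        real_malloc = (void *(*)(size_t))dlsym(RTLD_NEXT, "malloc");
    unsigned char sample = (unsigned char)(size & 0xff);   /* crude "waveform" */
    write(STDERR_FILENO, &sample, 1);
    return real_malloc(size);
}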

basicwav

sndlib

  • io.c (read and write sound file data)
  • headers.c (read and write sound file headers)
  • audio.c (read and write sound hardware ports)
  • sound.c (provide slightly higher level access to the preceding files)
  • sndlib.h (header for the preceding files)
  • sndlib2xen.c and sndlib-strings.h (tie preceding into s7, Ruby, or Forth)
  • clm.c and clm.h (Music V implementation)
  • clm2xen.c, vct.c and vct.h (tie clm.c into s7, Ruby, or Forth)
  • xen.h, xen.c (the embedded language support)

libsndfile

  • http://www.mega-nerd.com/libsndfile - a C library for reading and writing files containing sampled sound (such as MS Windows WAV and the Apple/SGI AIFF format) through one standard library interface. It is released in source code format under the Gnu Lesser General Public License.
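
Typical usage is to open a file, inspect the SF_INFO structure it fills in, and then read frames as floats regardless of the on-disk format. A minimal sketch ("input.wav" is a placeholder path):

c

/* Minimal libsndfile sketch: open a file, read it as floats, report the peak.
   "input.wav" is a placeholder path; link with -lsndfile. */
#include <stdio.h>
#include <math.h>
#include <sndfile.h>
int main(void)
{
    SF_INFO info = { 0 };
    SNDFILE *snd = sf_open("input.wav", SFM_READ, &info);
    if (!snd) {
        fprintf(stderr, "sf_open: %s\n", sf_strerror(NULL));
        return 1;
    }
    printf("%d Hz, %d channel(s), %lld frames\n",
           info.samplerate, info.channels, (long long)info.frames);
    float buf[4096], peak = 0.0f;
    sf_count_t n;
    /* sf_readf_float() reads whole frames, converting from the on-disk format */
    while ((n = sf_readf_float(snd, buf, 4096 / info.channels)) > 0)
        for (sf_count_t i = 0; i < n * info.channels; i++)
            if (fabsf(buf[i]) > peak)
                peak = fabsf(buf[i]);
    printf("peak: %f\n", peak);
    sf_close(snd);
    return 0;
}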




libaudec

  • https://github.com/zrythm/libaudec - a wrapper library over ffmpeg, sndfile and libsamplerate for reading and resampling audio files, based on Robin Gareus' audio_decoder code (x42/silan:audio_decoder@master). This library is meant to be linked in statically to larger projects. Until version 1.0 is released, the API is subject to change.

dr_libs

libsoundio

  • libsoundio - a lightweight abstraction over various sound drivers. It provides a well-documented API that operates consistently regardless of the sound driver it connects to. It performs no buffering or processing on your behalf; instead exposing the raw power of the underlying backend. libsoundio is appropriate for games, music players, digital audio workstations, and various utilities. libsoundio is serious about robustness. It even handles out of memory conditions correctly.

blargg resampler

r8brain-free-src

clownresampler

hiir

  • https://github.com/unevens/hiir - A header only ready to include mirror of the HIIR library by Laurent De Soras, an oversampling and Hilbert transform library in C++.

oversimple

  • https://github.com/unevens/oversimple - A library for audio oversampling, which tries to offer a simple api while wrapping HIIR, by Laurent De Soras, for minimum phase antialiasing, and r8brain-free-src, by Aleksey Vaneev, for linear phase antialiasing.

sndfilter

Xenakis-Sieves

  • https://github.com/rodneydup/Xenakis-Sieves - This program was originally written in BASIC by Iannis Xenakis and later translated to C by Gérard Marino. Gérard Marino's code included some libraries that, at the time of writing (2019), are now obsolete. Curtis Roads adapted the code to work with std libraries and Rodney DuPlessis corrected some further errors to create this version that will compile on most modern operating systems with a C compiler. There are two programs: PointsFromSieve, which generates points from a given sieve, and its complement, SieveFromPoints deduces a sieve from given points.
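
The sieve idea itself is compact: a sieve is a logical combination of residue classes written modulus@residue, and "points from a sieve" are simply the integers that satisfy the expression. A tiny hard-coded illustration (the actual programs parse arbitrary sieve expressions):

c

/* Tiny illustration of the sieve idea (not these programs' parser): a sieve is
   a logical combination of residue classes "modulus@residue"; generating
   points means listing the integers that satisfy it. The sieve here is
   (3@0 | 4@1) & !(12@9), hard-coded for brevity. */
#include <stdio.h>
static int in_class(int n, int modulus, int residue)
{
    return n % modulus == residue;
}
int main(void)
{
    for (int n = 0; n < 48; n++)
        if ((in_class(n, 3, 0) || in_class(n, 4, 1)) && !in_class(n, 12, 9))
            printf("%d ", n);
    printf("\n");
    return 0;
}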

libtsp

  • TSP Lab - Reports / Software - a library of routines for signal processing. It also includes a number of general purpose routines useful for program development. Programs using this library for filtering, LPC analysis/synthesis and resampling are available as part of the AFsp package.

miniBAE

Audio File Library

  • Audio File Library is a C-based library for reading and writing audio files in many common formats. The Audio File Library provides a uniform API which abstracts away details of file formats and data formats. The same calls for opening a file, accessing and manipulating audio metadata (e.g. sample rate, sample format, textual information, MIDI parameters), and reading and writing sample data will work with any supported audio file format. The Audio File Library lets you work with audio data in whatever format is most convenient for you.

bloopsaphone

lyd

  • Lyd - an embeddable signal processing language and engine, suitable for, among other things, realtime audio effect and instrument synthesis and mixing. It can form the audio core of games, virtual instruments, real-time audio mixing and editing experiments. Lyd is currently mostly an application programmer's toy; out of the box lyd contains a mediocre autogenerated approximation of an OPL2 FM synthesizer in its patch set. When you launch lyd you can use it as a synthesizer with an attached MIDI keyboard or with MIDI sequencers supporting ALSA midi. Lyd should be able to enable processing plug-ins for various audio synthesis standards, exposing the lyd language and efficiency in interaction to allow live tweaking of the lyd code when composing.

Pyo

  • Pyo - a Python module written in C to help DSP script creation. Pyo contains classes for a wide variety of audio signal processing. With pyo, the user will be able to include signal processing chains directly in Python scripts or projects, and to manipulate them in real time through the interpreter. Tools in the pyo module offer primitives, like mathematical operations on audio signals, basic signal processing (filters, delays, synthesis generators, etc.), but also complex algorithms to create sound granulation and other creative audio manipulations. pyo supports the OSC protocol (Open Sound Control) to ease communications between softwares, and the MIDI protocol for generating sound events and controlling process parameters. pyo allows the creation of sophisticated signal processing chains with all the benefits of a mature and widely used general programming language.

Allegro

  • Allegro - a cross-platform library mainly aimed at video game and multimedia programming. It handles common, low-level tasks such as creating windows, accepting user input, loading data, drawing images, playing sounds, etc. and generally abstracting away the underlying platform. However, Allegro is not a game engine: you are free to design and structure your program as you like. Allegro 5 has the following additional features: Supported on Windows, Linux, Mac OSX, iPhone and Android, User-friendly, intuitive C API usable from C++ and many other languages, Hardware accelerated bitmap and graphical primitive drawing support (via OpenGL or Direct3D), Audio recording support, Font loading and drawing, Video playback, Abstractions over shaders and low-level polygon drawing

Soundpipe

  • Soundpipe - a lightweight music DSP library written in C. It aims to provide a set of high-quality DSP modules for composers, sound designers, and creative coders. Soundpipe supports a wide range of synthesis and audio DSP techniques which include: Classic Filters (Moog, Butterworth, etc), High-precision and linearly interpolated wavetable oscillators, Bandlimited oscillators (square, saw, triangle), FM synthesis, Karplus-strong instruments, Variable delay lines, String resonators, Spectral Resynthesis, Partitioned Convolution, Physical modeling, Pitch tracking, Distortion, Reverberation, Samplers and sample playback, Padsynth algorithm [10]
  • Sporth - SoundPipe fORTH, is a small stack-based audio programming language. For composers, Sporth is a different approach to making sound. Stack based languages are somewhat novel in the world of musical languages, and lend themselves well to modular sound design scenarios. Sporth syntax is simple to learn, and rewarding to master. Sound designers and composers fluent in languages like Csound, ChucK, and Supercollider will find Sporth a new and refreshing take on the same basic concepts. For developers, Sporth has a simple API that allows it to be used inside of other applications. In addition to compiling Sporth code, the API has access to other features of Sporth that would otherwise be unavailable, such as audio-rate software channels, and user defined function callbacks.


  • http://paulbatchelor.github.io/proj/libline - an ANSI C library for generating audio-rate line segments and curves for computer-based music. Libline can easily interface with sample-accurate audio libraries like Soundpipe, and a local copy is used inside of Sporth via Polysporth.



  • http://paulbatchelor.github.io/proj/moons.html - Moons is an isorhythmic circular sequencer. It is written using a combination of C and C++. All visuals are created using OpenGL; all the sounds are synthesized in realtime using Sporth, Soundpipe, and RTaudio.

Patchwerk

  • Patchwerk - a portable ANSI C library for constructing audio graphs and DSP signal chains. It is designed to be portable, fast, and memory efficient. The core of Patchwerk is written using the CWEB literate programming system. In addition to the core Patchwerk API, there is also an actively maintained Runt interface with a built-in collection of DSP nodes from Soundpipe. The goal of this Runt interface is to provide a similar-feeling and faster alternative to Sporth, a stack-based audio language previously created by the author.

Sound Open Firmware

Kiss FFT

  • https://sourceforge.net/projects/kissfft/ - A Fast Fourier Transform based up on the principle, "Keep It Simple, Stupid." Kiss FFT is a very small, reasonably efficient, mixed radix FFT library that can use either fixed or floating point data types.
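
Basic usage is a plan object from kiss_fft_alloc() plus one kiss_fft() call per transform; a small sketch:

c

/* Minimal Kiss FFT sketch: kiss_fft_alloc() builds a plan for a given size,
   kiss_fft() executes it. Input is one cycle of a cosine, so the energy
   lands in bins 1 and N-1. */
#include <stdio.h>
#include <math.h>
#include "kiss_fft.h"
#define N 16
int main(void)
{
    kiss_fft_cfg cfg = kiss_fft_alloc(N, 0 /* forward */, NULL, NULL);
    kiss_fft_cpx in[N], out[N];
    for (int n = 0; n < N; n++) {
        in[n].r = (kiss_fft_scalar)cos(2.0 * 3.14159265358979323846 * n / N);
        in[n].i = 0;
    }
    kiss_fft(cfg, in, out);
    for (int k = 0; k < N; k++)
        printf("bin %2d: %8.3f %+8.3fi\n", k, (double)out[k].r, (double)out[k].i);
    kiss_fft_free(cfg);
    return 0;
}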

PFFFT

  • https://bitbucket.org/jpommier/pffft/src/master/ - PFFFT does 1D Fast Fourier Transforms, of single precision real and complex vectors. It tries to do it fast, it tries to be correct, and it tries to be small. Computations do take advantage of SSE1 instructions on x86 cpus, Altivec on powerpc cpus, and NEON on ARM cpus. The license is BSD-like.


mini_al

  • https://github.com/dr-soft/mini_al - a simple library for playing and recording audio. It's focused on simplicity and has a very small number of APIs. C/C++, single file, public domain.

miniaudio



tinywav

RAGE

  • https://github.com/concert/rage - Realtime Audio Graph Engine, provides an audio processing graph implementation tailored towards distributed audio workstations. The audio graph within RAGE is composed of processing elements and connections between elements which are managed by the RAGE host. Rage processing elements can be both traditional data transforms (effects, mixers, etc) as well as data producers (samplers, synthesizers, etc) and data consumers (recorders, analysis plugins). The RAGE host manages the scheduling of both the low-latency audio computations and the higher latency subtasks such as loading a file from disk for sample replay. Additionally, RAGE provides session time interpolated parameters, freeing individual elements from this responsibility.

Digraph

  • https://github.com/grame-cncm/digraph - a very simple, C++ 11 template-based, directed graph library. It is not designed to be general, but to suit the needs of the (next) Faust compiler.

LEAF

  • https://github.com/spiricom/LEAF - Previously named OOPS, LEAF is a C library for Audio synthesis and processing created by Mike Mulshine and Jeff Snyder. The library consists of a set of high-level audio synthesis components (Oscillators, Filters, Envelopes, Delays, Reverbs, and other Utilities). Our primary use case is embedded audio computing on 32-bit ARM microcontrollers that can run "bare-metal" (without an OS), such as the STM32f4, STM32f7, and STM32H7. The code, however, is general enough to be used in many other situations as well. We have included a JUCE VST/AU generating template to test the library (2), and the python script we use to generate wavetables.

Most of these algorithms are sourced from other projects, especially the STK (Synthesis ToolKit) library and various discussions on the music-DSP mailing list. We also owe a lot to open source computer programming languages, such as Csound, ChucK, PureData, and Supercollider. Other interesting projects to check out that similarly target embedded applications are: TeensyAudio (C++), Hoxton Owl (C++), Axoloti (C), and Mutable Instruments (C++).

sms-tools

  • Spectral Modeling Synthesis Tools - MTG - Music Technology Group (UPF) - a set of techniques and software implementations for the analysis, transformation, and synthesis of musical sounds based on various spectral modeling approaches. These techniques can be used for synthesis, processing and coding applications, while some of the intermediate results might also be applied to other music related problems, such as sound source separation, musical acoustics, music perception, or performance analysis. The basic model and implementation were developed by Xavier Serra as part of his PhD thesis published 1989. Since then many extensions have been proposed at MTG-UPF and by other researchers.
    • https://github.com/MTG/sms-tools - Sound analysis/synthesis tools for music applications written in python (with a bit of C) plus complementary teaching materials.

MTS-ESP

  • https://github.com/ODDSound/MTS-ESP - a simple but versatile C/C++ library for adding microtuning support to audio and MIDI plugins. It allows for a single master plugin to simultaneously control the tuning of any number of connected client plugins across a DAW session.

Ficus

  • https://github.com/resinbeard/ficus - a development library that provides your application with both multichannel playback/capture of mono wav audio files and all the flexibility and routing possibilities of a JACK client. Ficus is distributed as a group of source files; settings like the available number of input/output channels and storage banks are available as compile-time options through the configuration file, config.h

Snack Sound Toolkit

  • Snack Sound Toolkit - designed to be used with a scripting language such as Tcl/Tk or Python. Using Snack you can create powerful multi-platform audio applications with just a few lines of code. Snack has commands for basic sound handling, such as playback, recording, file and socket I/O. Snack also provides primitives for sound visualization, e.g. waveforms and spectrograms. It was developed mainly to handle digital recordings of speech, but is just as useful for general audio. Snack has also successfully been applied to other one-dimensional signals.

Postfish

  • https://github.com/ePirat/Postfish - a digital audio post-processing, restoration, filtering and mixdown tool. It works as a linear audio filter, much like a rack of analog effects. The first stage of the filter pipeline provides a bank of configurable per-channel processing filters for up to 32 input channels. The second stage provides mixdown of the processed input audio into a group of up to eight output channels. The third stage applies processing filters to the output group post-mixdown. The Postfish is a stream filter; feed it audio from a list of files or input stream, and it renders audio to standard out, as well as optionally providing a configurable audio playback monitor via a sound device. If the input audio is being taken from files, Postfish also provides simple forward/back/cue seeking and A-B looping control. The next major update of Postfish will also include automation to allow mixdown settings to be 'recorded' and applied automatically during rendering.

Open-DSP-Toolbox

ame

scalemap

avpack

AIF-WAV-File-Lib

wavpack-stream

GStreamer




PortMedia



  • Playrec - a Matlab and Octave utility (MEX file) that provides simple yet versatile access to soundcards using PortAudio, a free, open-source audio I/O library. It can be used on different platforms (Windows, Macintosh, Unix) and access the soundcard via different host API including ASIO, WMME and DirectSound under Windows.


JamesDSPManager

  • https://github.com/james34602/JamesDSPManager - a cross-platform audio effect / digital signal processing library. The GUI is based on Omnirom DSP Manager and is able to run on most Android devices from 5 to 10, including Samsung, AOSP, Cyanogenmod, recent HTC and Huawei (arm64). This app includes many cool features. This repo is the Android variant of JamesDSP; JDSP4Linux uses the core library from this repo.

SA_Toolkit

  • https://github.com/skei/SA_Toolkit - "initially the base code toolkit for all the SA (skei.audio) plugins, but now a bit more generic and hopefully useful for other projects and people.. 100% based on clap, with wrappers to and from other plugin formats (vst2/3, ladspa, lv2, standalone exe).. clap/ladspa hosting (other formats coming too, probably).. currently supports linux and windows, no mac version planned.. gpu-accelerated, freely scalable (vector) gui.. all clap goodies (poly modulation, voice stacking, thread pool, remote controls, preset discovery, parameter indication, sample accurate..), voice manager with threaded voices, overlapping notes, polyphonic modulation, etc, etc).. simplified, abstracted, streamlined for rapid installation/setup, prototyping, iterations.."

Mozzi

C++

  • YouTube: Ian Hobson - The use of std variant in realtime DSP (ADC'17) - Application Developer & Software Engineer, Ableton. C++17 introduces std::variant, a type-safe union class. A variant's value represents one of a fixed set of possible types, with C++'s type system ensuring that correct code paths are executed for the active type. This talk will explore the pros and cons of working with variants, with a special focus on DSP. Variants allow for well defined interfaces and minimal memory footprints, but what are they like to use in practice, and are they performant enough for realtime use?

RtAudio

  • RtAudio - a set of C++ classes that provide a common API for realtime audio input/output across Linux (native ALSA, JACK, PulseAudio and OSS), Macintosh OS X (CoreAudio and JACK), and Windows (DirectSound, ASIO and WASAPI) operating systems.

AFsp


AudioFile

Libnyquist

  • https://github.com/ddiakopoulos/libnyquist - a small C++11 library for reading sampled audio data from disk or memory. It is intended to be used as an audio loading frontend for games, audio sequencers, music players, and more.

r8brain

  • https://github.com/avaneev/r8brain-free-src - Open source (under the MIT license) high-quality professional audio sample rate converter (SRC) (resampling) library. Features routines for SRC, both up- and downsampling, to/from any sample rate, including non-integer sample rates: it can be also used for conversion to/from SACD sample rate and even go beyond that. SRC routines were implemented in multi-platform C++ code, and have a high level of optimality.

libstdaudio

volume-cpp

softcut-lib

  • https://github.com/monome/softcut-lib - a C/C++ library for manipulating audio buffers in real time. It provides a relatively low-level but robust interface for commonly needed functions in musical systems development: crossfaded looping with subsample accuracy, resampling read and write heads, crossfaded overdub behavior, and input and output multimode filters.

rt-cqt

  • https://github.com/jmerkt/rt-cqt - aims to be a reasonably fast header-only C++11 implementation of the Constant-Q transform. It is especially designed for easy usage in real-time audio applications, handling various block sizes and samplerates. The implementation is roughly based on the paper "An Efficient Algorithm for the Calculation of a Constant Q Transform" by Judith C. Brown and Miller S. Puckette. pffft is used to handle the FFTs, and polyphase IIR lowpasses perform the upsampling / downsampling.
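
The geometry behind a constant-Q transform is easy to state: bin centre frequencies are spaced geometrically, f_k = f_min * 2^(k/B) for B bins per octave, and every bin keeps the same quality factor Q = 1/(2^(1/B) - 1), so the analysis window shrinks as frequency rises. A small sketch of that layout, following Brown's formulation rather than this library's internals:

c

/* Constant-Q bin layout following Brown's 1991 formulation (not rt-cqt's
   internals): geometrically spaced centre frequencies, one shared Q, and an
   analysis window length N_k = ceil(Q * fs / f_k) that shrinks with frequency. */
#include <stdio.h>
#include <math.h>
int main(void)
{
    const double fs = 48000.0;          /* sample rate */
    const double fmin = 55.0;           /* lowest bin */
    const int bins_per_octave = 12;
    const int num_bins = 48;            /* four octaves */
    const double Q = 1.0 / (pow(2.0, 1.0 / bins_per_octave) - 1.0);
    for (int k = 0; k < num_bins; k++) {
        double fk = fmin * pow(2.0, (double)k / bins_per_octave);
        printf("bin %2d: f = %8.2f Hz, window = %5.0f samples\n",
               k, fk, ceil(Q * fs / fk));
    }
    return 0;
}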


muFFT

  • https://github.com/Themaister/muFFT - a library for doing the fast Fourier transform (FFT) in one or two dimensions. The FFT has many applications in digital signal processing. The main use cases are fast linear convolution and conversion from time domain into frequency domain and vice versa. See the library's FFT documentation for details on how the algorithm works and how it is implemented in muFFT.

LibAudioStream

  • https://github.com/sletz/libaudiostream - a framework to manipulate audio resources through the concept of streams. Using this abstraction, different applications like audio players or offline processing tools can be developed.

APF

Aquila

qm-dsp

DSP filters in C++

CamillaDSP

  • https://github.com/HEnquist/camilladsp - A flexible cross-platform IIR and FIR engine for crossovers, room correction etc. A tool to create audio processing pipelines for applications such as active crossovers or room correction. It is written in Rust to benefit from the safety and elegant handling of threading that this language provides. Supported platforms: Linux, macOS, Windows. Audio data is captured from a capture device and sent to a playback device. Alsa, PulseAudio, Jack, Wasapi and CoreAudio are currently supported for both capture and playback.

Synthesis ToolKit in C++

  • Synthesis ToolKit in C++ (STK) is a set of open source audio signal processing and algorithmic synthesis classes written in the C++ programming language. STK was designed to facilitate rapid development of music synthesis and audio processing software, with an emphasis on cross-platform functionality, realtime control, ease of use, and educational example code. The Synthesis ToolKit is extremely portable (it's mostly platform-independent C and C++ code), and it's completely user-extensible (all source included, no unusual libraries, and no hidden drivers). We like to think that this increases the chances that our programs will still work in another 5-10 years. In fact, the ToolKit has been working continuously for nearly 20 years now. STK currently runs with realtime support (audio and MIDI) on Linux, Macintosh OS X, and Windows computer platforms. Generic, non-realtime support has been tested under NeXTStep, Sun, and other platforms and should work with any standard C++ compiler.

CREATE Signal Library / CSL

  • https://github.com/stpope/CSL6 - CREATE Signal Library (CSL, pronounced "sizzle," previously called the CREATE Oscillator or CO). CSL is a cross-platform C++ library for digital audio signal synthesis, analysis, spatialization and interactive sound/music application development. CSL was developed at the Center for Research in Electronic Art Technology (CREATE) and the Graduate Program in Media Arts and Technology (MAT) at the University of California, Santa Barbara (UCSB) starting in the late 1990s.
  • https://github.com/stpope/CSL7 - source code and documentation for the CREATE Signal Library (CSL, pronounced "sizzle," previously called the CREATE Oscillator or CO). CSL is a cross-platform C++ library for digital audio signal synthesis, analysis, spatialization and interactive sound/music application development. CSL was developed at the Center for Research in Electronic Art Technology (CREATE) and the Graduate Program in Media Arts and Technology (MAT) at the University of California, Santa Barbara (UCSB) starting in the late 1990s.

Signalsmith Audio's DSP Library

MoogLadders

Gamma

  • https://github.com/LancePutnam/Gamma - a cross-platform C++ library for doing generic synthesis and filtering of signals. It contains helpful mathematical functions and types, such as vectors and complex numbers, an assortment of sequence generators, and many other objects for signal processing tasks. It is oriented towards real-time sound and graphics synthesis, but is equally useful for non-real-time tasks.

Tonic

  • https://github.com/TonicAudio/Tonic - Fast and easy audio synthesis in C++. Prefer coding to patching? Love clean syntax? Care about performance? That's how we feel too, and why we made Tonic.
  • https://github.com/TonicAudio/ofxTonic - Open Frameworks Addon for the Tonic audio synthesis Library. Tonic is an efficient, pure C++ patching tool with a refreshingly crisp and simple syntax.

CLAM

  • CLAM - (C++ Library for Audio and Music) is a software framework for research and application development on the audio and music domain. It provides means to perform complex audio signal analysis, transformations and synthesis. It also provides a uniform interface to common tasks on audio applications such as accessing audio devices and audio files, thread safe communication with the user interface and DSP algorithms recombination and scaling.

You can use CLAM as a library to program your applications in C++ but you can also use graphical tools to build full applications without coding.

  • http://clam-project.org/wiki/SMSTools_tutorial - a spectral tool that analyzes, transforms and synthesizes back a given sound. For doing so, it uses the Sinusoidal plus Residual model (sometimes referred to as SMS but also known as HILN in the context of MPEG4).

DISTRHO Plugin Framework

  • https://github.com/DISTRHO/DPF - designed to make development of new plugins an easy and enjoyable task. It allows developers to create plugins with custom UIs using a simple C++ API. The framework facilitates exporting to various plugin formats from the same code base. DPF can build LADSPA, DSSI, LV2 and VST formats. All current plugin format implementations are complete. A JACK/Standalone mode is also available, allowing you to quickly test plugins. Plugin DSP and UI communication is done via key-value string pairs.
  • https://github.com/DISTRHO






JUCE C++ Library

to rework


  • https://en.wikipedia.org/wiki/JUCE - Like many other frameworks (e.g., Qt, wxWidgets, GTK+, etc.), JUCE contains classes providing a range of functionality that covers user-interface elements, graphics, audio, XML and JSON parsing, networking, cryptography, multi-threading, an integrated interpreter that mimics ECMAScript's syntax, and various other commonly used features. Application developers needing several third-party libraries may thus be able to consolidate and use only the JUCE library, or at least reduce the number of third-party libraries they use. In this, the original inspiration was Java's JDK, and JUCE was intended to be "something similar for C++".

A notable feature of JUCE when compared to other similar frameworks is its large set of audio functionality; this is because JUCE was originally developed as a framework for Tracktion, an audio sequencer, before being split off into a standalone product. JUCE has support for audio devices (such as CoreAudio, ASIO, ALSA, JACK, WASAPI, DirectSound) and MIDI playback, polyphonic synthesizers, built-in readers for common audio file formats (such as WAV, AIFF, FLAC, MP3 and Vorbis), as well as wrappers for building various types of audio plugin, such as VST effects and instruments. This has led to its widespread use in the audio development community.

JUCE comes with wrapper classes for building audio and browser plugins. When building an audio plugin, a single binary is produced that supports multiple plugin formats (VST & VST3, RTAS, AAX, Audio Units). Since all the platform and format-specific code is contained in the wrapper, a user can build Mac and Windows VST/VST3/RTAS/AAX/AUs from a single codebase.
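
As a tiny, hedged illustration of the audio side (the buffer class rather than the plugin wrappers themselves), the following console snippet fills a juce::AudioBuffer with a sine tone and applies a gain; it assumes a project that already pulls in the juce_audio_basics module:

#include <juce_audio_basics/juce_audio_basics.h>
#include <cmath>

int main() {
    juce::AudioBuffer<float> buffer(2, 512);   // 2 channels, 512 samples
    buffer.clear();

    for (int ch = 0; ch < buffer.getNumChannels(); ++ch) {
        float* data = buffer.getWritePointer(ch);
        for (int i = 0; i < buffer.getNumSamples(); ++i)
            data[i] = std::sin(2.0f * juce::MathConstants<float>::pi * 440.0f * (float) i / 44100.0f);
    }

    buffer.applyGain(0.5f);                    // roughly -6 dB across all channels and samples
    return 0;
}

In a plugin built with the wrappers described above, the processBlock() callback receives this same AudioBuffer type once per audio block.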






  • https://github.com/Dreamtonics/juce_emscripten - This port was originally a proof-of-concept created here by @beschulz. This fork is an unofficial continuation of the attempt towards a complete JUCE framework running inside a browser.




  • FRUT - makes it easy to build JUCE projects using CMake instead of Projucer. It enables more flexibility in project architecture, simplified CI setup, and easier integration with other JUCE and non-JUCE projects. Converting an existing JUCE project to FRUT is easy, and you don’t need to be a CMake expert to use it!
  • https://github.com/sudara/pamplejuce - Pamplejuce is a template lifestyle for creating and building JUCE plugins with 2020 best practices. Out of the box, it supports: C++20; JUCE 6.x as a submodule, tracking develop; CMake 3.19; Catch2 2.13.3; GitHub Actions for both CI and artifact building; a proper .gitignore given the above; Mac Silicon (arm64) built on Xcode 12.2
  • https://github.com/madskjeldgaard/Cookiejuce - A command line tool for generating modern JUCE projects with CMake. Easily generate beautiful JUCE projects and create audio, midi and music software with modern C++ using a simple command line tool interface. When run, you are asked a series of questions, and a new project is generated based on your answers with everything correctly set up and named for your project.





  • https://github.com/drowaudio/drowaudio - a 3rd party JUCE module designed for rapid audio application development. It contains classes for audio processing and gui elements. Additionally there are several wrappers around 3rd party libraries including cURL, FFTReal and SoundTouch. dRowAudio is written in the strict JUCE style, closely following the style guide set out at JUCE Coding Standards.
  • https://github.com/sudara/melatonin_inspector - A JUCE module that gives you the ability to inspect and visually edit (non-destructively) components in your UI. Inspired by Figma, web browser web inspectors and Jim Credland's Component Debugger juce-toys.






  • https://github.com/sudara/melatonin_blur - a batteries-included, cross-platform CPU blur library for the JUCE C++ framework. The goal: Get drop shadows and inner shadows fast enough that entire modern vector interfaces in JUCE can be built without resorting to deprecated solutions with lower quality of life (looking at you, OpenGL on macOS!). Melatonin Blur provides a 10-30x speedup over using Stack Blur.







Jamoma

  • Jamoma - a C++ platform for building dynamic and reflexive systems with an emphasis on audio and media. Platform is composed of a layered framework architecture that creates an object model and then specializes that object model for audio and matrix processing, and system automation and management. Jamoma makes use of polymorphic typing, dynamic binding, and introspection to create a cross-platform API pulling ideas from languages such as Smalltalk and Objective-C while remaining within the bounds of the portable and cross-platform C++ context. The implementations include modular environments for Max by Cycling '74 and Pd by Miller Puckette.

iPlug2

  • iPlug2
    • https://github.com/iPlug2/iPlug2 - a simple-to-use C++ framework for developing cross platform audio plug-ins/apps and targeting multiple plug-in APIs with the same minimalistic code. It abstracts an audio plug-in (IPlug) and its drawing engine/GUI toolkit (IGraphics). IGraphics contains a collection of common controls well suited for audio plug-in GUIs, either using bitmap or vector graphics. The original version of iPlug was developed by John Schwartz aka schwa and released in 2008 as part of Cockos' WDL library. iPlug 2 (2018) is a substantial reworking that brings multiple vector graphics backends (including GPU accelerated options and HiDPI/scaling), a better approach to concurrency, support for distributed plug-in formats and compiling to WebAssembly via emscripten, amongst many other things.


  • https://github.com/olilarkin/SuperSynth - an example of how to set up an iPlug2 project to build "out of source", which might be desirable to keep all your project dependencies synchronised with version control. It is also set up for "containerized development" using VSCode and github codespaces.


  • https://github.com/iPlug2/iPlug2OOS - an example of how to set up an iPlug2 project to build "out of source", which might be desirable to keep all your project dependencies synchronised with version control. It is also set up for "containerized development" using VSCode and github codespaces.

APAL

  • https://github.com/CapRat/APAL - cross-platform audio abstraction library, currently under heavy development. At the moment it supports VST2, VST3, LV2 and LADSPA, with more formats to come, since the library is designed to make adding new formats easy. Being in an early phase of development, it currently just supports audio and MIDI processing (and a fair amount more, but those are details).

Jamba

Ayane

Resonance

  • https://github.com/resonance-audio/resonance-audio - the official open source project for the Resonance Audio SDK. This repository consists of the full source code of the Resonance Audio C++ library, as well as the platform integrations into Unity, FMOD, Wwise and DAW tools.

DSPFilters

  • https://github.com/vinniefalco/DSPFilters - A Collection of Useful C++ Classes for Digital Signal Processing. "Techniques for digital signal processing are well guarded and held close to the chest, as they have valuable applications for multimedia content. The black art of Infinite Impulse Response ("IIR") filtering has remained veiled in secrecy with little publicly available source code...until now."

Building on the work of cherished luminaries such as Sophocles Orfanidis, Andreas Antoniou, Martin Holters, and Udo Zolzer, this library harnesses the power of C++ templates to solve a useful problem in Digital Signal Processing: the realization of multichannel IIR filters of arbitrary order and prescribed specifications with various properties such as Butterworth, Chebyshev, Elliptic, and Optimum-L (Legendre) responses. The library is provided under the MIT license and is therefore fully compatible with proprietary usage.

Classes are designed as independent re-usable building blocks. Use some or all of the provided features, or extend the functionality by writing your own objects that plug into the robust framework. Only the code that you need will get linked into your application. Here's a list of features: exclusive focus on IIR filters instead of boring FIR filters; complete implementation of all "RBJ Biquad" Cookbook filter formulas; Butterworth, Chebyshev, Elliptic, Bessel, Legendre designs; Low Pass, High Pass, Band Pass, Band Stop transformations; Low, High, and Band Shelf filter implementations for most types; smooth interpolation of filter settings, pole/zeros, and biquad coefficients to achieve seamless parameter changes; representation of digital filters using poles and zeros; realization using Direct Form I, Direct Form II, or a user-provided class; fully factored to minimize template instantiations; a "Design" layer providing runtime introspection into a filter; utility template functions for manipulating buffers of sample data; no calls to malloc or new, great for embedded systems; no external dependencies, just the standard C++ library!
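
To make the "RBJ Biquad Cookbook" reference concrete, here is a plain-C++ sketch of the cookbook low-pass biquad in Direct Form I; this is only the underlying math, not the DSPFilters API itself:

#include <cmath>
#include <vector>

// One second-order low-pass section, coefficients from the RBJ Audio EQ Cookbook.
struct BiquadLowPass {
    double b0, b1, b2, a1, a2;                    // coefficients normalized by a0
    double x1 = 0, x2 = 0, y1 = 0, y2 = 0;        // Direct Form I state

    BiquadLowPass(double sampleRate, double cutoffHz, double q) {
        const double pi = 3.14159265358979323846;
        const double w0 = 2.0 * pi * cutoffHz / sampleRate;
        const double alpha = std::sin(w0) / (2.0 * q);
        const double cosw0 = std::cos(w0);
        const double a0 = 1.0 + alpha;
        b0 = ((1.0 - cosw0) / 2.0) / a0;
        b1 = (1.0 - cosw0) / a0;
        b2 = ((1.0 - cosw0) / 2.0) / a0;
        a1 = (-2.0 * cosw0) / a0;
        a2 = (1.0 - alpha) / a0;
    }

    double process(double x) {                    // one sample in, one sample out
        const double y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2;
        x2 = x1; x1 = x;
        y2 = y1; y1 = y;
        return y;
    }
};

int main() {
    BiquadLowPass lpf(48000.0, 1000.0, 0.7071);   // 1 kHz cutoff at 48 kHz, Butterworth-like Q
    std::vector<double> signal(256, 0.0);
    signal[0] = 1.0;                               // unit impulse
    for (auto &s : signal) s = lpf.process(s);     // signal now holds the impulse response
    return 0;
}

Higher-order designs such as the Butterworth and Chebyshev filters listed above are typically realized as cascades of such second-order sections.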

DSP Library

  • Signalsmith DSP library - a set of C++11 header-only classes/templates to support certain DSP tasks (mostly audio). It's still growing, but currently includes: Delay tools (circular buffers, single/multi-channel delay-lines); Interpolators (Lagrange, polyphase, Kaiser-sinc); Envelope tools (e.g. box-filter, peak-hold); FFT and spectral processing (including multi-channel STFT)

DaisySP

  • https://github.com/electro-smith/DaisySP - an open source DSP library written in C++ and specifically tailored to embedded audio applications. It was originally created for use with the Daisy Embedded Audio Platform, however it can easily be used elsewhere.
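
A hedged sketch of the per-sample style DaisySP uses (class and method names as in the DaisySP docs; in practice Process() is called from the Daisy audio callback rather than a plain loop):

#include "daisysp.h"

int main() {
    const float sampleRate = 48000.0f;

    daisysp::Oscillator osc;
    osc.Init(sampleRate);                          // modules are initialised with the sample rate
    osc.SetWaveform(daisysp::Oscillator::WAVE_SIN);
    osc.SetFreq(220.0f);
    osc.SetAmp(0.5f);

    float block[48];
    for (float &s : block)
        s = osc.Process();                         // one output sample per call
    return 0;
}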

spuc

spuce

iir1

  • https://github.com/berndporr/iir1 - An infinite impulse response (IIR) filter library for Linux, Mac OSX and Windows which implements Butterworth, RBJ, Chebychev filters and can easily import coefficients generated by Python (scipy). The filter processes the data sample by sample for realtime processing. It uses templates to allocate the required memory so that it can run without any malloc / new commands. Memory is allocated at compile time so that there is never the risk of memory leaks.
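
A short usage sketch along the lines of the project's README (assuming the single Iir.h header; check the repository for exact template parameters and signatures):

#include "Iir.h"

int main() {
    const float sampleRate = 48000.0f;

    Iir::Butterworth::LowPass<4> lowpass;          // 4th-order Butterworth low-pass,
    lowpass.setup(sampleRate, 1000.0f);            // memory sized at compile time via the template

    float y = 0.0f;
    for (int n = 0; n < 48000; ++n) {
        const float x = (n == 0) ? 1.0f : 0.0f;    // unit impulse as a test input
        y = lowpass.filter(x);                     // sample-by-sample, suitable for realtime use
    }
    (void) y;
    return 0;
}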

Q

  • Cycfi Research: Q Audio DSP Library - a cross-platform C++ library for Audio Digital Signal Processing. Aptly named after the “Q factor”, a dimensionless parameter that describes the quality of a resonant circuit, the Q DSP Library is designed to be simple and elegant, as the simplicity of its name suggests, and efficient enough to run on small microcontrollers. Q leverages the power of modern C++ and efficient use of functional programming techniques, especially function composition using fine-grained and reusable function objects (both stateless and stateful), to simplify complex DSP programming tasks without sacrificing readability. MIT license.

libnoise


mopo

ftz

  • https://gitlab.com/ftz/audio - Digital audio processing and synthesis library. The following operations are implemented: Amplification, Mixing, Panning, Bit depth conversion, Sample rate conversion, Pitch Shifting. The following oscillators are implemented: Sine wave, Pulse wave, Triangle wave, Sawtooth wave, White noise. The following sample formats are implemented out of the box (you can always add yours): 16 bit signed int, 24 bit signed int, 32 bit signed int, 32 bit float, 64 bit float. The following file formats are supported: Microsoft RIFF Wave. The following audio interfaces are available for playback: ALSA.


  • https://gitlab.com/ftz/midi - This library provides a convenient interface for manipulation of MIDI data as well as providing higher level abstractions for I/O.

AudioTK

  • https://github.com/mbrucher/AudioTK - a set of audio filters. It helps assemble workflows for specific audio processing workloads. The audio workflow is split in independent components (without feedback loops) that consist of filters. Each filter has a set of synchronized input and output ports that can be connected together. All input ports must be connected, but not all output ports need to be. Sampling rate can be independent between input and output ports, but input sampling rates are identical, and output sampling rates are also identical.

Raul

  • http://drobilla.net/software/raul - Realtime Audio Utility Library, is a C++ utility library primarily aimed at audio/musical applications. It is used by Ingen, Patchage, and Machina.

kode

ASPiK

  • ASPiK SDK: ASPiK Developer's Guide - design and develop your AU, AAX and VST plugins and assemble the GUI right inside of your DAW while it is running. You can create a new blank plugin project in a few seconds using the ASPiKreator software. And if you are a RackAFX7 user, you can now export your RAFX2 plugins in ASPiK format thanks to an agreement with the author.

eDSP

  • https://github.com/mohabouje/eDSP - easy Digital Signal Processing, is a digital signal processing framework written in modern C++ that implements some of the common functions and algorithms frequently used in digital signal processing, audio engineering & telecommunications systems. It aims to provide efficient and streamlined base calculations while at the same time having a straightforward and easy-to-use interface similar to another numeric language like MATLAB. The library provides C++ and Python interfaces and supports Windows, Linux, Mac OS, iOS and Android.

MadronaLib

NFDriver

MWEngine

  • https://github.com/igorski/MWEngine - Audio engine and DSP for Android, written in C++, providing low latency performance in a musical context, supporting both OpenSL and AAudio.

oboe

  • https://github.com/google/oboe - a C++ library which makes it easy to build high-performance audio apps on Android. It was created primarily to allow developers to target a simplified API that works across multiple API levels back to API level 16 (Jelly Bean).
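
A hedged sketch of the callback-based Oboe pattern (recent Oboe versions; on Android the stream would be opened from JNI/native code rather than a plain main()):

#include <oboe/Oboe.h>
#include <cmath>
#include <memory>

// Renders a quiet 440 Hz sine into each output callback.
class SineCallback : public oboe::AudioStreamDataCallback {
public:
    oboe::DataCallbackResult onAudioReady(oboe::AudioStream *stream,
                                          void *audioData, int32_t numFrames) override {
        float *out = static_cast<float *>(audioData);
        const float twoPi = 6.2831853f;
        const float phaseInc = twoPi * 440.0f / (float) stream->getSampleRate();
        for (int i = 0; i < numFrames; ++i) {       // mono stream: one sample per frame
            out[i] = 0.2f * std::sin(phase_);
            phase_ += phaseInc;
            if (phase_ > twoPi) phase_ -= twoPi;
        }
        return oboe::DataCallbackResult::Continue;
    }
private:
    float phase_ = 0.0f;
};

int main() {
    SineCallback callback;
    oboe::AudioStreamBuilder builder;
    builder.setDirection(oboe::Direction::Output)
            ->setPerformanceMode(oboe::PerformanceMode::LowLatency)
            ->setFormat(oboe::AudioFormat::Float)
            ->setChannelCount(oboe::ChannelCount::Mono)
            ->setDataCallback(&callback);

    std::shared_ptr<oboe::AudioStream> stream;
    if (builder.openStream(stream) == oboe::Result::OK) {
        stream->requestStart();
        // ... keep the app alive while audio plays, then:
        stream->requestStop();
        stream->close();
    }
    return 0;
}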

KFR

fast-filters

openAudioProgrammingTutorials

jack-wrapper

JackCpp

  • JackCpp - C++ classes which wrap the JACK audio I/O API and a lock-free ring buffer. Works with Linux and OSX (thanks to Will Wolcott for OSX testing and example/test file comments).
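
JackCpp's exact class names vary by version, so as orientation here is a sketch of the underlying raw JACK C API that it wraps, used from C++ (a client, one output port, and a realtime process callback; the sample rate is assumed to be 48 kHz for brevity rather than queried from the server):

#include <jack/jack.h>
#include <cmath>
#include <unistd.h>

static jack_port_t *out_port;
static float phase = 0.0f;

// JACK calls this once per audio period; write one buffer of a 440 Hz sine.
static int process(jack_nframes_t nframes, void *arg) {
    float *out = (float *) jack_port_get_buffer(out_port, nframes);
    for (jack_nframes_t i = 0; i < nframes; ++i) {
        out[i] = 0.2f * std::sin(phase);
        phase += 2.0f * 3.1415926f * 440.0f / 48000.0f;   // assumes a 48 kHz server
    }
    return 0;
}

int main() {
    jack_client_t *client = jack_client_open("sine", JackNullOption, nullptr);
    if (!client) return 1;
    out_port = jack_port_register(client, "out", JACK_DEFAULT_AUDIO_TYPE, JackPortIsOutput, 0);
    jack_set_process_callback(client, process, nullptr);
    jack_activate(client);
    sleep(10);                                            // let it run for ten seconds
    jack_client_close(client);
    return 0;
}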

segmod


Maximilian

  • Maximilian - an open source, MIT licensed C++ audio synthesis library. It’s designed to be cross platform and simple to use. The syntax and program structure are based on the popular ‘Processing’ environment. Maximilian provides standard waveforms, envelopes, sample playback, resonant filters, and delay lines. In addition, equal power stereo, quadraphonic and 8-channel ambisonic support is included. There are also granular synthesisers with timestretching, FFTs and some Music Information Retrieval stuff.
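
A minimal non-realtime sketch of the Maximilian style (maxiOsc and maxiSettings as in the library's examples; in a real application the per-sample code would sit inside the audio callback / play() function):

#include "maximilian.h"
#include <vector>

int main() {
    maxiSettings::setup(44100, 2, 512);     // sample rate, channels, buffer size

    maxiOsc osc;
    std::vector<double> out(44100);         // one second of samples
    for (auto &s : out)
        s = osc.sinewave(440.0);            // maxiOsc produces one sample per call
    return 0;
}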

lsp-lltl-lib

rt-wdf_lib

  • https://github.com/RT-WDF/rt-wdf_lib - a real-time capable Wave Digital Filter library for circuit modeling, with support for arbitrary topologies and multiple/multiport non-linearities. It was introduced in a DAFx-16 paper and comes with a full API reference documentation.

Planeverb

  • https://github.com/themattrosen/Planeverb - CPU based real-time wave-based acoustics engine tailored for games. It works in 2D, handling fully dynamic scenes in a single-CPU-core budget. Technical details can be found in the associated publication.

mimium

  • https://github.com/mimium-org/mimium - a domain specific programming language for describing/generating sound and music. With this language, you can write low-level audio processing with simple expressions and high performance, powered by LLVM.

Tracktion Engine

  • Tracktion Engine - defines a high level document object model for time based, sequenced audio applications and provides an API to create, modify and play these back. Create an Engine object, our arrangement object, called an Edit, add some elements to it such as audio files, MIDI and plugins, then play it back or render it to an audio file. Tracktion Engine is provided in JUCE module format making it incredibly quick and easy to setup and start creating projects. We’ve included a bunch of examples and accompanying tutorials along with scripts to build them so you can get up and running with the minimum of fuss.


SoLoud

Modus

  • Modus is an open source, cross-platform C++ library which allows you to handle music from code.

Wavelet

  • https://github.com/Ircam-RnD/wavelet - a cross-platform c++ library for online estimation of the Continuous Wavelet Transform (CWT). The online estimation is based on a filterbank implementation of the CWT with minimal delay per scale and optimization based on multi-rate computation. The library also allows for offline estimation of the CWT using FFT.

Pitch detection




signalsmith-stretch

  • https://github.com/Signalsmith-Audio/signalsmith-stretch - a C++11 library for pitch and time stretching, using the final approach from the ADC22 presentation Four Ways To Write A Pitch-Shifter. It can handle a wide range of pitch-shifts (multiple octaves) but time-stretching sounds best for more modest changes (between 0.75x and 1.5x).

MiniVideo

CHON

NFSmartPlayer

  • https://github.com/spotify/NFSmartPlayer - We dreamed of an audio engine that could multitrack, stream long audio, apply audio effects, react to user input through a scripting layer, be resource-light, and work on the majority of platforms. We wanted to bring new audio experiences to both listeners and creators, without requiring a degree in audio engineering. We created a declarative format to describe the audio and effects over time, known as NFGrapher. This C++ smart player consumes JSON produced by NFGrapher, and outputs audio in either realtime to the audio driver or faster-than-realtime (offline to a file). Via NFDecoder and NFHTTP, it can stream audio files from the file system, over HTTP(S), or through DASH (HLS) transmuxing.

vintage

  • https://github.com/jcelerier/vintage - an experiment in seeing how far modern C++ features allow one to write purely declarative code and introspect this code through various reflection-like features.

Sequentity

Oto

Surge Filters

mbdsp

  • https://github.com/michaeldonovan/mbdsp - A collection of DSP algorithms/utilities for working with audio. Header only and intended to be suitable for embedded applications. This is mostly intended as an educational exercise for use in my hobby projects, you probably shouldn't use it for anything important.

aubergine

AlloSystem


allolib

  • https://github.com/AlloSphere-Research-Group/allolib - Library for interactive multimedia application development, developed by the AlloSphere Research Group at the University of California, Santa Barbara. allolib is a fork of AlloSystem, maintained in a separate repository.

diatonic

  • https://github.com/pd3v/diatonic - Instead of creating lists of vectors/arrays to pre-define "all" scales, chords and the chords of a given tonality, create them dynamically by calling functions.

C#

MusicEngine

Objective C

Mozzi

  • Mozzi - brings your Arduino to life by allowing it to produce much more complex and interesting growls, sweeps and chorusing atmospherics. These sounds can be quickly and easily constructed from familiar synthesis units like oscillators, delays, filters and envelopes. You can use Mozzi to generate algorithmic music for an installation or performance, or make interactive sonifications of sensors, on a small, modular and super cheap Arduino, without the need for additional shields, message passing or external synths.
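
A hedged sketch based on the classic Mozzi sinewave example (header names and the updateAudio() return type have changed between Mozzi versions, so treat this as the general shape rather than the exact current API):

#define CONTROL_RATE 64                 // control updates per second
#include <MozziGuts.h>
#include <Oscil.h>
#include <tables/sin2048_int8.h>        // a wavetable shipped with Mozzi

// Audio-rate oscillator reading the sine wavetable.
Oscil<SIN2048_NUM_CELLS, AUDIO_RATE> aSin(SIN2048_DATA);

void setup() {
    startMozzi(CONTROL_RATE);
    aSin.setFreq(440);                  // concert A
}

void updateControl() {
    // runs at CONTROL_RATE; change frequencies, read sensors, etc. here
}

int updateAudio() {
    return aSin.next();                 // one audio sample per call
}

void loop() {
    audioHook();                        // keeps the audio buffer filled
}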

resample

D

Dplug

Ddsp

libsoundio-d

Java

Java Sound

JSyn

  • JSyn - allows you to develop interactive computer music programs in Java. You can run them as stand-alone applications, or as Applets in a web page. JSyn can be used to generate sound effects, audio environments, or music. JSyn is based on the traditional model of unit generators which can be connected together to form complex sounds. For example, you could create a wind sound by connecting a white noise generator to a low pass filter that is modulated by a random contour generator.

Beads

  • Beads is a software library written in Java for realtime audio. It was started by Ollie Bown in 2008. It is an open source project and has been developed with support from Monash University in Melbourne, via the Centre for Electronic Media Art‘s ARC Discovery Grant Project “Creative Ecosystems”, and a Small Grant for Early Career Researchers from the Faculty of Information Technology. Beads contributors includes Ollie Bown, Ben Porter and Benito.

JMSL

  • http://www.algomusic.com/jmsl/ - JMSL is a Java API for music composition, interactive performance, and intelligent instrument design. With JMSL, the composer/programmer can create stand-alone musical applications or deploy applets on the web. JMSL supports JSyn, MidiShare, MidiPort, and JavaSound.

JNAJack

JFugue

  • JFugue - Music Programming for Java™ and JVM Languages

JASS

  • JASS (Java Audio Synthesis System) is a unit generator based audio synthesis programming environment written in pure Java. Java 1.5 is required. The environment is based on a foundation structure consisting of a small number of Java interfaces and abstract classes, which implement the functionality needed to create filter-graphs, or "patches". Unit generators are created by extending the abstract classes and implementing a single method. Patches are created by linking together unit generators in arbitrary complex graph structures. Patches can be rendered in real-time with special unit generators that communicate with the audio hardware, which have been implemented using the JavaSound API and through JNI for some platforms.

jVSTwRapper

  • jVSTwRapper - an easy and reliable wrapper to write audio plug-ins in Java. It enables you to develop VST (2.4), Audio Unit (AU) and LADSPA compatible audio plugins and virtual instruments plus user interfaces (Swing) and run them on Windows, Linux and Mac OSX. Five demo plugins (+src) are included.

jackAudio4Java

jnajack

Pipes

  • https://github.com/jaudiolibs/pipes - an audio routing and unit generator library linking together the various other JAudioLibs libraries. Pipes consists of code that was originally written for or included in PraxisLIVE, but is now provided separately for re-use by other Java audio or DSP projects.

electro

  • https://github.com/Deep-Symmetry/electro - A Java library to help work with musical time. Based on the rhythm namespace in Afterglow, ported to plain Java to give these capabilities to Beat Link without requiring it to embed the Clojure ecosystem.

Python

PyJack

  • PyJack - jack audio client module for python

jackclient-python

python-jackpatch

python-wavefile

  • python-wavefile - Pythonic libsndfile wrapper to read and write audio files.

audioread

  • https://github.com/beetbox/audioread - Decode audio files using whichever backend is available. The library currently supports: Gstreamer via PyGObject; Core Audio on Mac OS X via ctypes (PyObjC not required); MAD via the pymad bindings; FFmpeg or Libav via its command-line interface; the standard library wave, aifc, and sunau modules (for uncompressed audio formats).

python-sounddevice

  • https://github.com/spatialaudio/python-sounddevice - provides bindings for the PortAudio library and a few convenience functions to play and record NumPy arrays containing audio signals. The sounddevice module is available for Linux, macOS and Windows.

PyWavelets

  • PyWavelets - a free Open Source wavelet transform software for Python programming language. It is written in Python, Cython and C for a mix of easy and powerful high-level interface and the best performance.

synthplayer

AudioLazy

Pippi

Undulance

Audiostream

mutils

AVIO

  • https://github.com/ri0t/avio - The AVIO Suite is an experimental approach to the concept of a live stage performance. AVIO stands for "Audio Visual Input Output" and the suite is a component (event) based agglomeration of tools to work with Controller, Midi, Audio and Visual data.

audio_dspy

Keras Audio Preprocessors

DDSP

  • https://github.com/magenta/ddsp - a library of differentiable versions of common DSP functions (such as synthesizers, waveshapers, and filters). This allows these interpretable elements to be used as part of a deep learning model, especially as the output layers for audio generation.


pyaec

  • https://github.com/ewan-xu/pyaec - a simple and efficient Python implementation of a series of adaptive filters, including time domain adaptive filters (LMS, NLMS, RLS, AP, Kalman), nonlinear adaptive filters (Volterra filter, functional link adaptive filters) and frequency domain adaptive filters (frequency domain adaptive filter, frequency domain Kalman filter) for acoustic echo cancellation.

MAWB

  • https://github.com/mindhog/mawb - a jumbled bundle of tools used by its author to do random things to make music, but the idea is to have a complete component management system geared towards music performance. MAWB was written for Linux systems using ALSA and JACK. You may be able to get this system working under other operating systems, and portions of the system are usable independently.

Cells

Pipeline

  • https://docs.klio.io/en/latest/userguide/pipeline - encapsulates the various steps of the Klio job from reading input data, transforming the data, and writing output data. Klio pipelines offer a Pythonic interface to build upon beam pipelines and allow large-scale data processing on Docker and Google Dataflow.


libfmp

  • Libfmp API Documentation — libfmp 1.2.1 documentation - API documentation for the Python package libfmp. This package goes hand in hand with the FMP Notebooks, a collection of educational material for teaching and learning Fundamentals of Music Processing (FMP) with a particular focus on the audio domain.
  • C0 - The FMP notebooks offer a collection of educational material closely following the textbook Fundamentals of Music Processing (FMP). This is the starting website, which is opened when calling https://www.audiolabs-erlangen.de/FMP. Besides giving an overview, this website provides information on the license and the main contributors.

pytemidi

Pedalboard

  • https://github.com/spotify/pedalboard - Python library for adding effects to audio. It supports a number of common audio effects out of the box, and also allows the use of VST3® and Audio Unit plugin formats for third-party effects. It was built by Spotify's Audio Intelligence Lab to enable using studio-quality audio effects from within Python and TensorFlow.


music21

  • What is music21? — music21 Documentation - a Python-based toolkit for computer-aided musicology. People use music21 to answer questions from musicology using computers, to study large datasets of music, to generate musical examples, to teach fundamentals of music theory, to edit musical notation, study music and the brain, and to compose music (both algorithmically and directly). One of music21’s mottos is “Listen Faster.” With the toolkit you should be able to find interesting moments and get a sense of the overall profile of a piece or a repertory of pieces. We hope that with the computer you’ll have more time for listening and playing for enjoyment and use less of your time listening for work. The system has been around since 2008 and is constantly growing and expanding. The approaches and traditions in music21 have been used in many previous software systems. See Authors, Acknowledgments, Contributing, and Licensing for information on the authors and background of the project. The 21 in music21 refers to its origins as a project nurtured at MIT. At MIT all courses have numbers, and music, along with some other humanities departments, is numbered 21. The music departments of MIT, along with Harvard, Smith, and Mount Holyoke Colleges, helped bring this toolkit from its earliest roots to a mature system.


Sardine

  • Sardine - a free/open-source software for Python 3.10+. Sardine turns Python into a musical instrument. It is a framework for live coding music and controlling musical hardware from Python. You can play alone or synchronize with other audio software and instances of Sardine. With Sardine you can: Play synthesizers and audio samples using SuperCollider. Control synthesizers through MIDI and OSC messages. Interconnect audio/video software and/or sound engines. Play collaboratively with your friends or other musicians. Extend your musical environment using any Python package. Explore multiple algorithmic pattern languages for improvisation.

libavg

  • libavg - allows programmers, media artists and designers to quickly develop media applications. It uses python as scripting language, is written in high-speed C++ and uses modern OpenGL for display output. The project has been under constant development since its inception in 2003. libavg supports the full variety of display elements - images, text, videos, camera output, vector graphics - that modern graphics-intensive applications need. It is fast: the layout engine supports thousands of display elements on the screen at once as well as hardware-accelerated video output. Text using markup, videos with an alpha channel, rendering to offscreen buffers, masking, as well as GPU shader effects such as blur and chromakey are all supported. Plugins written in C++ can expand the engine and have access to all libavg internals. The Tutorial explains all these concepts using a sample application.


music_essentials

athenaCL

  • athenaCL system - an open-source, object-oriented composition tool written in Python. The system can be scripted and embedded, and includes integrated instrument libraries, post-tonal and microtonal pitch modeling tools, multiple-format graphical outputs, and musical output in Csound, SuperCollider, Pure Data, MIDI, audio file, XML, and text formats.

Musical parts are deployed as Textures, layered surface-defining objects containing numerous independent ParameterObjects to control pitch, tempo, rhythm, amplitude, panning, and instrument (Csound) parameters. The system includes an integrated library of Csound and SuperCollider instruments, and supports output for external Csound instruments, MIDI, and a variety of alternative formats. Over eighty specialized Generator, Rhythm, and Filter ParameterObjects provide tools for stochastic, chaotic, cellular automata based, Markov based, generative grammar and Lindenmayer system (L-system), wave-form, fractional noise (1/f), genetic, Xenakis sieve, linear and exponential break-point segments, masks, and various other algorithmic models. ParameterObjects can be embedded in other ParameterObjects to provide powerful dynamic and masked value generation. Textures can be combined and edited, and tuned with algorithmic Temperament objects. Texture Clones allow the filtering and processing of Texture events, performing transformations not possible with parameter generation alone.

SoundCard

  • https://github.com/bastibe/SoundCard - a library for playing and recording audio without resorting to a CPython extension. Instead, it is implemented using the wonderful CFFI and the native audio libraries of Linux, Windows and macOS. SoundCard is cross-platform, and supports Linux/pulseaudio, Mac/coreaudio, and Windows/WASAPI. While the programming interface is identical across platforms, sound card naming schemes and default block sizes can vary between devices and platforms.

Ruby

Negasonic

.NET

F#

Go

  • https://github.com/200sc/klangsynthese - Waveform and Audio Synthesis library in Go. Klangsynthese right now supports a number of features that will work regardless of OS, and a number of features specific to Windows where the hope is to move support to Linux and Darwin.


Rust





Symphonia


rsynth

  • https://github.com/PieterPenninckx/rsynth - An abstraction over the APIs used by audio plugins and applications. Use it to write real-time audio effects, software synthesizers, ... and target different platforms (VST, JACK, ...). It is currently most suitable for real-time or "streaming" audio processing. E.g. you cannot use it to reverse audio in time.

dasp

  • https://github.com/RustAudio/dasp - Digital Audio Signal Processing in Rust. Formerly the sample crate. A suite of crates providing the fundamentals for working with PCM (pulse-code modulation) DSP (digital signal processing). In other words, dasp provides a suite of low-level, high-performance tools including types, traits and functions for working with digital audio signals.

Nannou

  • Nannou - An open-source creative-coding framework for Rust. Nannou is a library that aims to make it easy for artists to express themselves with simple, fast, reliable code. Whether working on a 12-month laser installation or a 5 minute sketch, this framework aims to give artists easy access to the tools they need.

Makepad


kira

  • https://github.com/tesselode/kira - an audio library designed to help create expressive audio for games. Besides the common features you'd expect from an audio library, it provides interfaces for scripting audio events, seamlessly looping complex pieces of music, smoothly changing parameters, and more.

baseview

  • https://github.com/RustAudio/baseview - low-level window system interface for audio plugin UIs. baseview abstracts the platform-specific windowing APIs (winapi, cocoa, xcb) into a platform-independent API, but otherwise gets out of your way so you can write plugin UIs.

audio-featrs

rust-audio-experiments

Glicol

  • Glicol - an acronym for "graph-oriented live coding language", is a computer music language with both its language and audio engine written in the Rust programming language, a modern alternative to C/C++. Given this low-level nature, Glicol can run on many different platforms such as browsers, VST plugins and the Bela board. Glicol's synth-like syntax and powerful audio engine also make it possible to combine high-level synth or sequencer control with low-level sample-accurate audio synthesis, all in real-time.




Rust-AV

RustFFT

rust-music

Groove

NihAV

  • NihAV - a research multimedia framework built on not invented here syndrome and licensed under GNU Affero General Public License 3.0 (relicensing to other free licenses available on request).

nightfall

pitch_calc

valib

  • valib - a library for abstracting DSP algorithms to make them reusable and composable, focusing on musical applications such as plugins or embedded digital synths.

Zig

Zang

Kasumi

Nim

Omni

  • https://github.com/vitreo12/omni - a cross-platform DSL (Domain Specific Language) for low level audio programming. It aims to be a new, expressive and easy to use programming language to code audio algorithms in. Omni leverages nim and C to compile code to self-contained static or shared libraries that can then be loaded and used anywhere. So far, two wrappers have already been written to compile omni code to SuperCollider UGens (omnicollider), or Max 8 objects (omnimax).

Haskell

Punctual

  • https://github.com/dktr0/Punctual - language for live coding audio and visuals. It allows you to build and change networks of signal processors (oscillators, filters, etc) on the fly. When definitions are changed, when and how they change can be explicitly indicated. Punctual runs in a web browser, and is portable to any system with a browser that supports the Web Audio API (for sound) and WebGL (for video). While it can be used in a standalone way, it is also bundled inside the Estuary platform for collaborative live coding.

Estuary

Lua

See Lua



Worp


isms

Purescript

MATLAB



  • https://github.com/ricedsp/rwt - a collection of Matlab M-files and C MEX-files for 1D and 2D wavelet and filter bank design, analysis, and processing. The toolbox provides tools for denoising and interfaces directly with our Matlab code for wavelet domain hidden Markov models and wavelet regularized deconvolution.

SuperCollider

1996 / OSS in 2002

  • SuperCollider - an environment and programming language for real time audio synthesis and algorithmic composition. It provides an interpreted object-oriented language which functions as a network client to a state of the art, realtime sound synthesis server.









  • UGen - UGens represent calculations with signals. They are the basic building blocks of synth definitions on the server, and are used to generate or process both audio and control signals. The many subclasses of UGen are the client-side representations of unit generators, and are used to specify their parameters when constructing synth definitions (see SynthDef).
  • Writing Unit Generators - "SuperCollider has a vast library of unit generators that can be assembled in unlimited ways, but sometimes even those aren't sufficient. You may have a need for an unusual signal processing algorithm, or you're running into efficiency problems that can be solved by condensing parts of your SynthDef into a single UGen. UGens are defined in server plugins written in C++. Server plugins are not to be confused with quarks, which extend the SuperCollider language. UGens exist more or less independently of the interpreter and you don't need much familiarity with SC to write them. Writing UGens is not too difficult, but it's arguably far less convenient and intuitive than the high-level tools that SuperCollider provides. You'll need a build system and a good amount of boilerplate code -- even fairly basic signal processing operations can require a lot of code. You don't have an instant live coding environment, and mistakes can easily crash the server. SuperCollider's UGens are stable and well-tested, and custom UGens are best viewed as a last resort for when the limitations of SC are impassable."
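
For a sense of the boilerplate involved, here is a hedged sketch of a trivial gain UGen using the classic C-style plugin macros (server plugin APIs have evolved, and a matching .sc class file plus a build against the SuperCollider headers are also required):

#include "SC_PlugIn.h"

static InterfaceTable *ft;

struct MyGain : public Unit { };

// Audio-rate calculation: output = input * gain (gain read from the second input).
static void MyGain_next(MyGain *unit, int inNumSamples) {
    float *out = OUT(0);
    const float *in = IN(0);
    const float gain = IN0(1);
    for (int i = 0; i < inNumSamples; ++i)
        out[i] = in[i] * gain;
}

// Constructor: set the calc function and compute one sample to prime the outputs.
static void MyGain_Ctor(MyGain *unit) {
    SETCALC(MyGain_next);
    MyGain_next(unit, 1);
}

// Entry point called when the server loads the plugin binary.
PluginLoad(MyGainUGens) {
    ft = inTable;
    DefineSimpleUnit(MyGain);
}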


  • Pattern Guide 01: Introduction | SuperCollider 3.11.0 Help - Patterns are one of the most powerful elements of the SuperCollider language, but in some ways they can be difficult to approach using only the class-oriented help files. These documents seek to bridge the gap, explaining the conceptual background behind patterns, describing the usage of specific Pattern classes, and proceeding into examples of practical musical tasks written as patterns.



  • https://github.com/madskjeldgaard/nvim-supercollider-piano - emulates the functionality found in digital audio workstations such as Ableton where a user may select a synth and press their computer keyboard's keys to play the synth. Except, this is a million times more cool because it is NeoVim and SuperCollider.






  • sc by scgolang - provides a way to interface directly with the SuperCollider server from Go programs.





  • https://github.com/ModalityTeam/Modality-toolkit - a library to facilitate accessing (hardware) controllers in SuperCollider. It is designed and developed by the ModalityTeam, a group of people that see themselves as both developers and (advanced) users of SuperCollider. The central idea behind the Modality-toolkit is to simplify creation of individual (electronic) instruments with SuperCollider, using controllers of various kinds. To this end, a common code interface, MKtl, is used for connecting controllers from various sources (and protocols). At the moment these are HID and MIDI; OSC, serial-port and GUI-based controllers are planned to be integrated.








  • Utopia is a SuperCollider library for the creation of networked music applications, and builds upon the work of the Republic Quark and other existing network systems in SuperCollider. It aims to be modular (features available largely 'à la carte'), secure (provides methods for authentication and encryption), and flexible (to the extent possible, it tries not to impose a particular design or architecture). It provides functionality for synchronisation, communication, code sharing, and data sharing.
  • Modality Toolkit - simplifies creation of highly personalised electronic instruments in SuperCollider by introducing a common code interface. This allows for uniform access to HID, MIDI, OSC and GUI-based controllers, as well as switching of functionality, even at runtime.







  • https://www.marcinpietruszewski.com/the-new-pulsar-generator - The New Pulsar Generator (nuPg) is an interactive program for sound synthesis developed in SuperCollider 3 (SC3) programming language. The nuPg program produces a form of synthesis called pulsar synthesis (PS). The technique of PS operationalises the notion of rhythm with its multitemporal affordances as a system of interconnected patterns evolving on multiple timescales. The technique generates a complex hybrid of sounds across the perceptual time span between infrasonic pulsations and audio frequencies, giving rise to a broad family of musical structures: singular impulses, sequences, continuous tones, time-varying phrases, and beating textures.






  • ScalaCollider - a real-time sound synthesis and signal processing system, based on SuperCollider and the Scala programming language. It can be used as a library in a Scala application, but there is also a small stand-alone prototyping environment called ScalaCollider-Swing. See Mellite for a more comprehensive IDE.







AlgaLib

  • https://github.com/vitreo12/AlgaLib - a new environment for live coding that focuses on the creation and connection of sonic modules. Unlike other audio software environments, the act of connecting Alga modules together is viewed as an essential component of music composing and improvising, and not just as a means towards static audio patches. In Alga, the definition of a new connection between the output of a module and the input of another does not happen instantaneously, but it triggers a process of parameter interpolation over a specified window of time.

Clojure

Overtone

  • Overtone is an open source audio environment being created to explore musical ideas from synthesis and sampling to instrument building, live-coding and collaborative jamming. We use the SuperCollider synth server as the audio engine, with Clojure being used to develop the APIs and the application. Synthesizers, effects, analyzers and musical generators can be programmed in Clojure.
lein repl
user=>(use 'overtone.live)




Pink

  • https://github.com/kunstmusik/pink - provides the basis for developing music systems. It is also designed to scale to user needs, whether they are exploring and designing low-level signal processing algorithms, developing pre-written compositions, or creating interactive real-time systems. It offers a slim core engine designed to be highly customizable.

Music as Data

Scala

  • Mellite - Mellite is an environment for creating experimental computer-based music and sound art. It is a desktop application, allowing you to work with real-time and offline sound synthesis processes, combining multiple perspectives such as live improvisation, implementing sound installations, or working in DAW-like timeline views. Mellite runs on all major operating systems and can be used both in a purely graphical fashion, or by writing and connecting snippets in the Scala programming language.

ChucK

2003

  • ChucK is a programming language for real-time sound synthesis and music creation. It is open-source and freely available on MacOS X, Windows, and Linux. ChucK presents a unique time-based, concurrent programming model that's precise and expressive (we call this strongly-timed), dynamic control rates, and the ability to add and modify code on-the-fly. In addition, ChucK supports MIDI, OpenSoundControl, HID device, and multi-channel audio. It's fun and easy to learn, and offers composers, researchers, and performers a powerful programming tool for building and experimenting with complex audio synthesis/analysis programs, and real-time interactive music. [16]


  • WebChucK IDE
    • https://github.com/ccrma/webchuck - WebChucK brings ChucK, a strongly-timed audio programming language, to the web! ChucK's C++ source code has been compiled with Emscripten and targets WebAssembly (WASM) to run via the AudioWorkletNode interface of the Web Audio API. With near-native performance, WebChucK runs on modern desktop browsers as well as tablets and mobile devices! Bring together ChucK's real-time sound synthesis engine and web tools to create new experiences and develop creative workflows. Embed WebChucK into any website to build online audiovisual experiences, immersive multi-channel audio web apps, or shareable musical instruments! To learn more about WebChucK and what it can do, check out https://chuck.stanford.edu/webchuck/


  • LiCK - Library for ChucK.




Processing

  • Processing.org - a flexible software sketchbook and a language for learning how to code within the context of the visual arts. Since 2001, Processing has promoted software literacy within the visual arts and visual literacy within technology. There are tens of thousands of students, artists, designers, researchers, and hobbyists who use Processing for learning and prototyping.



  • XYscope - a library for Processing to render graphics on a vector display (oscilloscope, laser) by converting them to audio. This includes most primitive shapes (point, line, rect, ellipse, vertex, ...) by converting those points to waveforms (oscillators with custom wavetables) and generating audio in real time using the Minim library. Vector graphics shine on a vector display and now you can view your generative works like never before! Tested on MacOS 10.9+ / Windows / Linux (RPi!).


FAUST

2002

  • FAUST - Functional Audio Stream is a functional programming language specifically designed for real-time signal processing and synthesis. FAUST targets high-performance signal processing applications and audio plug-ins for a variety of platforms and standards. Simply put, Faust lets one program dsp code once in a purely functional language, and compile it to various platforms including max/msp, supercollider, audio unit, vst, lv2, and more.


_                      // a channel
_,_                    // two channels in parallel
_,_ : _,!              // two channels in sequence into one: the first passes through, the second is cut (!)
process = _:_;       // series combination (1 in, 1 out)
process = _,_;       // parallel combination (2 ins, 2 outs)
process = +;           // summer (2 ins, 1 out)
process = _,_ : +;     // same summer
process = _,_ : + : _; // same summer
process = -;           // signal subtractor
process = *;           // pointwise signal multiplier (nonlinear)
process = /;           // pointwise signal divider (nonlinear)
process = mem;         // unit-sample delay
process = _, 1 : @;    // unit-sample delay
process = _,10 : @;    // ten-sample delay
process = a ~ b;       // feedback thru b around a
process = _ ~ _ ;      // feedback thru _ (generates 0)
process = mem ~ _;     // two-sample closed loop (generates 0)
process = + ~ _;       // digital integrator
process = _ <: _ , _;  // mono to stereo
process = _ <: _ , _, _, _;     // mono to quad
process = _ , _ <: _ , _, _, _; // stereo to quad (see diagram)
process = _ , _ :> _;           // stereo to mono [equiv to +]
process = _, _ , _ , _ :> _ ;   // quad to mono [equiv to +,+:+]
process = _ * hslider("gain", 0, 0, 1, 0.01);  // gain slider: label, default, min, max, step

Grouping solution 1: [18]

a = vgroup("term1", nentry("a",1,0,10,1));
b = vgroup("term1", nentry("b",2,0,10,1));
x = vgroup("term2", nentry("x",3,0,10,1));
y = vgroup("term2", nentry("y",4,0,10,1));
process = a*x + b*y;

Solution 2:

a = nentry("v:term1/a",1,0,10,1);
b = nentry("v:term1/b",2,0,10,1);
x = nentry("v:term2/x",3,0,10,1);
y = nentry("v:term2/y",4,0,10,1);
process = a*x + b*y;


Environments



FaustWorks:

git clone git://faudiostream.git.sourceforge.net/gitroot/faudiostream/FaustWorks

FaustLive:



Videos

Guides

  • PDF: FAUST : an Efficient Functional Approach to DSP Programming - Yann Orlarey, Dominique Fober and Stephane Letz. FAUST is a programming language that provides a purely functional approach to signal processing while offering a high level of performance. FAUST aims at being complementary to existing audio languages by offering a viable and efficient alternative to C/C++ to develop signal processing libraries, audio plug-ins or standalone applications. The language is based on a simple and well formed formal semantics. A FAUST program denotes a signal processor, a mathematical function that transforms input signals into output signals. Being able to know precisely what a program computes is important not only for programmers, but also for compilers needing to generate the best possible code. Moreover these semantics questions are crucial for the longterm preservation of music programs. The following paragraphs will give an overview of the language as well as a description of the compiler, including the generation of parallel code.



  • Romain Michon - Faust Tutorials - This page gathers a series of tutorials around the Faust programming language, written as part of various projects. It was not written in a "linear way", so each section is independent and covers a different topic. As a result, it is quite possible that you find some redundancy if you read this page from top to bottom. Should you have any question, feel free to send me an e-mail.






Tools


  • https://github.com/hrtlacek/faustTools - Tools for analyzing Faust programs. At the moment there is one tool present, faustwatch. Faustwatch is a tool that observes a .dsp file used by the FAUST DSP language. If the file is changed (saved after editing), the block diagram can be automatically shown in the default browser, and the impulse response can be plotted in both the time domain and the frequency domain. The time and frequency domain plots of the last saved version are always visible so the current and last saved version can be compared. The impulse response is played back via pyo and JACK Audio. Basically it is supposed to make FAUST development faster.



  • https://github.com/droosenb/faust-wdf-library - an in-progress library for creating Wave Digital Filter based models for the Faust programming language, written entirely in Faust. Currently the library only includes files and functions which are being tested for use in the eventual completed library.


  • https://github.com/jpcima/faustmd - This program builds the metadata for a Faust DSP ahead of time, rather than dynamically. The result is a block of C++ code which can be appended to the code generation.




Targets

  • https://bitbucket.org/agraef/faust-lv2 - provides an LV2 plugin architecture for the Faust programming language. The package contains the Faust architecture and templates for the needed LV2 manifest (ttl) files, a collection of sample plugins written in Faust, and a generic GNU Makefile as well as a shell script to compile plugins using the architecture.


  • https://bitbucket.org/agraef/faust-vst - provides a VST plugin architecture for the Faust programming language. The package contains the Faust architecture, faustvst.cpp, the faust2faustvst helper script which provides a quick way to compile a plugin, a collection of sample plugins written in Faust, and a generic GNU Makefile for compiling and installing the plugins.




Projects



OpenFrameworks



  • https://github.com/danomatika/loaf - an interpreter for openFrameworks which allows you to write OF applications in the Lua scripting language. This means you can quickly create using openFrameworks but without having to compile C++ or use a heavy IDE like Xcode or Visual Studio. A built-in OSC (Open Sound Control) server enables loaf to communicate natively with other creative coding and music applications over a network connection. Additionally, a built-in Syphon server allows for streaming loaf's screen output to visual applications on the same macOS system.




  • OpenFrameworks UI – braitsch - a simple to use, fully customizable, high-resolution graphical user interface for OpenFrameworks inspired by the popular JavaScript datgui interface.


Happy Brackets

  • Happy Brackets - aims to make creative coding on multiple devices easier. Products like the Beagle Bone and Raspberry Pi herald an exciting new world of computing; they cost very little but are capable of running a full Linux OS, processing realtime audio and video, and offering rich networking and IO capabilities. This has opened up a great world of creative possibilities, especially where you have many devices all communicating over a network, gathering sensor data and producing audio-visual outputs. It can be a pain to set up and configure all of these devices. Development can be slow and debugging confusing. With HappyBrackets, you set them all up once and then remote control them from a single computer. You don’t need to plug your devices into a screen or keyboard, just run them on a disk image set up with the HappyBrackets toolkit. Finding your devices on the network is easy. Then you can write and deploy Java code straight from our special plugin for the popular development environment IntelliJ IDEA. HappyBrackets is an open source project.


Cinder

  • Cinder - a C++ library for programming with aesthetic intent - the sort of development often called creative coding. This includes domains like graphics, audio, video, and computational geometry. Cinder is cross-platform, with official support for macOS, Windows, Linux, iOS, and Windows UWP. Cinder is production-proven, powerful enough to be the primary tool for professionals, but still suitable for learning and experimentation. Cinder is released under the 2-Clause BSD License.



Two

Audiality

  • Audiality 2 - a realtime audio and music engine, primarily intended for video games. While it supports traditional sample playback as well as additive, subtractive and granular synthesis, the distinctive feature is subsample accurate realtime scripting.

acidforth

Jupyter

nnAudio


Nyquist

  • Nyquist is a sound synthesis and composition language offering a Lisp syntax as well as an imperative language syntax (SAL) and a powerful integrated development environment. Nyquist is an elegant and powerful system based on functional programming.

Extempore

  • Extempore - a programming language and runtime environment designed to support 'cyberphysical programming'. Cyberphysical programming supports the notion of a human programmer operating as an active agent in a real-time distributed network of environmentally aware systems. The programmer interacts with the distributed real-time system procedurally by modifying code on-the-fly.

Central to the Extempore programming environment is a new systems programming language designed to support the programming of real-time systems in real-time. xtlang is designed to mix the high-level expressiveness of Lisp with the low-level expressiveness of C. xtlang uses an s-expression syntax common to Lisp, and more particularly to Scheme. xtlang also borrows many Lisp like semantics including first class closures, tail recursion and macros. However, xtlang also borrows heavily from systems languages like 'C' including static typing, low-level type expressivity, direct pointer manipulation and explicit memory management (i.e. no GC). xtlang then extends these 'C' semantics with type-inferencing, ad-hoc polymorphism, reified generics, and zone/region based memory management.


Audio Programming Environment

  • Audio Programming Environment (APE) is an open-source audio plugin that allows you to directly script/code DSP in real time, integrated in your signal chain. Utilizing a built-in code editor, compiler, console and a basic control surface API, testing and prototyping DSP is extremely easy and convenient.

AudioKit

  • AudioKit - a powerful audio toolkit for synthesizing, processing, and analyzing sounds. It contains several examples for iOS (iPhone & iPad) and Mac OSX, written in both Objective-C and in Swift. A test suite is provided for many of the operations included in AudioKit. A playground project can be used for trying out AudioKit instruments and for greatly speeding up the development of your own instruments and applications. [19]




Signalflow

  • SignalFlow - an audio synthesis framework whose goal is to make it quick and intuitive to explore complex sonic ideas. It has a simple and consistent Python API, allowing for rapid prototyping in Jupyter, PyCharm, or on the command-line. It comes with over 100 built-in node classes for creative exploration. Its core is implemented in C++11, with cross-platform hardware acceleration. SignalFlow has robust support for macOS and Linux (including Raspberry Pi), and has work-in-progress support for Windows. The overall project is currently in alpha status, and interfaces may change without warning.

The #MusicBricks Toolkit

FluCoMa

  • FluCoMa - The first release of software from the FluCoMa project is a toolkit of objects for decomposing and describing audio in musical creative coding environments.

Jacktube

  • Jacktube is an open source audio/MIDI processing program. It uses LADSPA and DSSI plugins to generate and process audio, and MIDI events to control its operation. The exact behavior is defined by using a simple scripting language to define rules. Even though Jacktube is primarily meant for audio work, it can be used in any signal processing application. The language has some superficial similarities to Perl, but the programming language is designed to be as small and efficient as possible for its purpose, namely setting up plugin graphs and responding to MIDI events.

Sonic Pi

  • Sonic Pi - The Live Coding Synth for Everyone. Simple enough for computing and music lessons. Powerful enough for professional musicians. Free to download with a friendly tutorial. Learn to code creatively by composing or performing music in an incredible range of styles from classical to algorave. Ruby DSL.

FunDSP

  • https://github.com/SamiPerttu/fundsp - an audio DSP (digital signal processing) library with a focus on usability. It features a powerful inline graph notation that empowers users to accomplish diverse audio processing tasks with ease and elegance. The custom notation taps into composable, zero-cost abstractions that express audio processing networks as Rust types. FunDSP comes with a combinator environment containing a suite of audio components, math and utility functions and procedural generation tools.

alda

  • alda - Inspired by other music/audio programming languages such as PPMCK, LilyPond and ChucK, Alda aims to be a powerful and flexible programming language for the musician who wants to easily compose and generate music on the fly, using naught but a text editor. Alda is designed in a way that equally favors aesthetics, flexibility and ease of use, with (eventual) support for the text-based creation of all manner of music: classical, popular, chiptune, electroacoustic, and more! [20] [21] [22]

Platonic Music Engine

  • Platonic Music Engine takes an initial input from the user (like a name, a number or a random string of characters) and converts it, using a non-random process, into a piece of music, the Platonic Score. The software then lets you manipulate this random-sounding music with various style algorithms and quantizers so that it sounds like any style of music imaginable, while still preserving the Platonic Score at its core.
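
A toy sketch of the general idea only, not the Platonic Music Engine's actual algorithm: an input string is mapped deterministically (no randomness) to a fixed "score" that later stages could then restyle or quantize.

// Hypothetical illustration: derive a repeatable pitch sequence from a string
// by mapping character codes into a C major scale. Same input, same output.
function platonicSketch(input) {
  const scale = [60, 62, 64, 65, 67, 69, 71]; // C major, MIDI note numbers
  return Array.from(input).map((ch, i) => ({
    pitch: scale[ch.charCodeAt(0) % scale.length],
    start: i * 0.5,   // seconds, on a fixed grid
    duration: 0.5,
  }));
}

console.log(platonicSketch("Ada")); // always yields the same "score" for "Ada"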

Serpent / Aura

  • Serpent is the scripting language for Aura, a platform for computer music, animation, and interactive systems. Serpent was designed and implemented as a stand-alone, general purpose interpreter. Serpent is perhaps ideal as a game scripting language due to its real-time design and support for external C++ objects and C functions. It is open source, and I would be happy to share code as well as future design and development with others.
  • AuraRT is a software framework for creating interactive multimedia software, particularly advanced interactive music compositions. A subproject is AuraFX, a flexible signal processor configurable by end-users.

https://wiki.python.org/moin/Audio/

TMC

  • TMC - Tiny Music Compiler, a DSL (Domain-Specific Language) that describes a set of operations of audio files. It does not manipulate audio itself. Instead, it calls existing tools such as SoX.

Bipscript

  • bipscript - a simple programming environment for creating music. Instantiate and connect LV2 plugins to create audio and MIDI networks, schedule MIDI and other control events directly on the plugins and system outputs, and schedule logic to react to external events, e.g. from a human performer.




"For this example we'll create two LV2 plugins: a software synthesizer and a reverb; we'll feed the output of the synth into the reverb and connect the reverb to the main system outputs. Also note that we set the initial value of the reverb amount to zero;"

local synth = Lv2.Plugin("http://calf.sourceforge.net/plugins/Monosynth", "Velo Bee")
 
local reverb = Lv2.Plugin("http://calf.sourceforge.net/plugins/Reverb")
reverb.setControl("amount", 0.0)
reverb.connect(synth)
 
local mainOutput = Audio.StereoOutput("main", "system:playback_1", "system:playback_2")
mainOutput.connect(reverb)


Sporth

stak

wcnt

  • wcnt - Wav Composer Not Toilet is a non-real-time modular audio synthesis/sequencer/sampler application for GNU/Linux systems. It outputs audio into 8/16/24/32-bit PCM or floating point format .WAV audio files. wcnt is command-line based and reads plain text files, within which definitions of modules and data objects are placed. Modules are where the synthesis/sampling/sequencing happens and operate on a sample-by-sample basis. Transmission of events between modules only occurs at the time of the event. Occurrences of events are transparent; the data stream is continuous.

Tao

  • Tao is a software package for sound synthesis using physical models. It provides a virtual acoustic material constructed from masses and springs which can be used as the basis for building quite complex virtual musical instruments. Tao comes with a synthesis language for creating and playing instruments and a fully documented (eventually) C++ API for those who would like to use it as an object library.


OpenAL

AeonWave

  • AeonWave is a low-level, hardware-accelerated 4D spatialized audio library aimed at the professional simulation market. The software currently runs on Windows and Linux for ARM and x86, and tests have shown that AeonWave renders 3D audio between 450% and 1400% faster than competing products, depending on the hardware configuration. AeonWave started out as project Anaconda, a fast-rendering new OpenAL implementation. After realizing this would not be good enough for spatialized 3D audio demands, the library was rewritten.


Vult

  • Vult - a simple and powerful language to program high-performance algorithms that may run in small microprocessors or microcontrollers. Vult is especially useful when programming Digital Signal Processing (DSP) algorithms like audio effects or synthesizers. The Vult compiler is a transcompiler, which takes Vult code and produces plain C/C++ code that can be compiled on any platform with a C/C++ compiler, for example Arduino or Teensy boards. Vult can also generate JavaScript that can be run directly in the browser, or C/C++ that can be compiled as Pure Data externals. Code written in Vult has freedom.

Norns






  • https://github.com/madskjeldgaard/nornsgenerate - A command line tool for generating a basic project for the monome norns sound computing platform. The tool generates a basic main Lua file and optionally a custom SuperCollider engine as well.







  • https://github.com/schollz/amenbreak - dedicated amen break script for norns. Requires at least 150MB of disk space. Over 200 amen break samples (from the Internet Archive), loaded into memory for instant playback; automatic stutter/stretch/delay effects (amen+break); overdrive/decimate/degrade control (punch); kick drum added to bolster kicks of pre-analyzed samples; edit mode allows manipulating individual slices/kick volume; all effects are determined by probabilities with sweet spots controlled by many different easing functions.
















SOUL

  • SOUL - represents a paradigm shift in the way audio development is done. Its architecture improves latency and performance in ways that are impossible using current techniques, and it opens up the use of new computing platforms for audio acceleration. SOUL unlocks native-level speed, even when used within slower, safer languages. The SOUL language makes audio coding more accessible and less error-prone, improving productivity for beginners and expert professionals. [23]


  • SOUL - a project creating a new language and infrastructure for writing and deploying audio code. It aims to unlock improvements in latency, performance, portability and ease of development that aren't possible with the current mainstream techniques.






Gwion

  • Gwion - a strongly-timed programming language for making music. It is strongly influenced by ChucK, but adds a bunch of high-level features: templating, first-class functions and more. It aims to be simple, small, fast, extendable and embeddable.

IBNIZ

  • IBNIZ - a virtual machine designed for extremely compact low-level audiovisual programs. The leading design goal is usefulness as a platform for demoscene productions, glitch art and similar projects. Mainstream software engineering aspects are considered totally irrelevant. IBNIZ stands for Ideally Bare Numeric Impression giZmo. The name also refers to Gottfried Leibniz, the 17th-century polymath who, among other things, invented binary arithmetic, built the first four-operation calculating machine, and believed that the world was designed on the principle that a minimal set of rules should yield a maximal diversity.

Cmajor

Game audio




  • https://www.ambiera.com/irrklang - a high level 2D and 3D cross platform (Windows, Mac OS X, Linux) sound engine and audio library which plays WAV, MP3, OGG, FLAC, MOD, XM, IT, S3M and more file formats, and is usable in C++ and all .NET languages (C#, VisualBasic.NET, etc). It has all the features known from low level audio libraries as well as lots of useful features like a sophisticated streaming engine, extendable audio reading, single and multithreading modes, 3d audio emulation for low end hardware, a plugin system, multiple rolloff models and more. All this can be accessed via an extremely simple API.

JavaScript / Web Audio




MIDI



  • Synthy - an online synthesiser and sequencer with live world output and colours made by Filip Hnízdo using the Web Audio API, the live server is powered by socket.io and Node.js. The database of patterns pushed to synthy is powered by the wonderful NeDB.


Audiocogs


DSP.js

  • https://github.com/corbanbrook/dsp.js/ - DSP.js is a comprehensive digital signal processing library for JavaScript. It includes many functions for signal analysis and generation, including oscillators (sine, saw, square, triangle), window functions (Hann, Hamming, etc.), envelopes (ADSR), IIR filters (lowpass, highpass, bandpass, notch), FFT and DFT transforms, delays and reverb.
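
A short, untested sketch assuming the Oscillator and FFT classes as documented in the project README (DSP.js is normally loaded as a plain script, so the classes are globals):

// Generate one buffer of a 440 Hz sine and take its magnitude spectrum.
var bufferSize = 2048, sampleRate = 44100;
var osc = new Oscillator(DSP.SINE, 440, 1, bufferSize, sampleRate);
osc.generate();                    // fills osc.signal with one buffer of samples

var fft = new FFT(bufferSize, sampleRate);
fft.forward(osc.signal);           // fft.spectrum now holds the magnitude spectrum
console.log(fft.spectrum.length);  // bufferSize / 2 frequency bins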


Tone.js

  • Tone.js is a Web Audio framework for creating interactive music in the browser. The architecture of Tone.js aims to be familiar to both musicians and audio programmers looking to create web-based audio applications. On the high-level, Tone offers common DAW (digital audio workstation) features like a global transport for scheduling and timing events and prebuilt synths and effects. For signal-processing programmers (coming from languages like Max/MSP), Tone provides a wealth of high performance, low latency building blocks and DSP modules to build your own synthesizers, effects, and complex control signals.
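
A minimal sketch of the transport/synth workflow, assuming a recent (v14-era) Tone.js:

import * as Tone from "tone";

const synth = new Tone.Synth().toDestination();   // simple synth routed to the speakers

// Repeat a short phrase on the global transport, in sync with its tempo.
Tone.Transport.bpm.value = 110;
Tone.Transport.scheduleRepeat((time) => {
  synth.triggerAttackRelease("C4", "8n", time);
}, "4n");

// Browsers require a user gesture before audio can start.
document.addEventListener("click", async () => {
  await Tone.start();
  Tone.Transport.start();
}, { once: true });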


Sawtooth.io


tonal

  • tonal is a modular, functional music theory library. Built from a collection of modules, it is able to create and manipulate tonal elements of music (pitches, chords, scales, keys). It deals with abstractions (not actual music) and, while it is designed for algorithmic composition and music generation, it can be used to develop any kind of MIDI or audio software.
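
A quick sketch, assuming the current module layout of the "tonal" npm package (earlier versions split these into separate packages):

import { Note, Scale, Chord } from "tonal";

console.log(Scale.get("C major").notes);   // ["C", "D", "E", "F", "G", "A", "B"]
console.log(Chord.get("Cmaj7").notes);     // ["C", "E", "G", "B"]
console.log(Note.midi("A4"));              // 69
console.log(Note.transpose("C4", "5P"));   // "G4" - up a perfect fifth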


Audiolet

  • https://github.com/oampo/Audiolet - a JavaScript library for real-time audio synthesis and composition from within the browser. It uses graph-based routing and pattern-based scheduling to make complex audio simple to program, and easy to understand.


Pedalboard.js


Wad

  • https://github.com/rserota/wad - Wad is a Javascript library for manipulating audio using the new HTML5 Web Audio API. It greatly simplifies the process of creating, playing, and manipulating audio, either for real-time playback, or at scheduled intervals. Wad provides a simple interface to use many features one would find in a desktop DAW (digital audio workstation), but doesn't require the user to worry about sending XHR requests or setting up complex audio graphs.
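
A minimal sketch, assuming the constructor and play() options shown in the Wad README (package name and option names as remembered, and may differ):

import Wad from "web-audio-daw";   // npm package name per the README (assumption)

// A sine "instrument" with a simple envelope...
var bell = new Wad({
  source: "sine",
  env: { attack: 0.01, decay: 0.1, sustain: 0.4, hold: 0.2, release: 0.3 },
});

// ...played at scheduled pitches without touching the Web Audio graph directly.
bell.play({ pitch: "A4" });
bell.play({ pitch: "C5", wait: 0.5 });   // start half a second later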


noise.sh

  • noise.sh - digital signal processing spreadsheet for sound design.


Vodka

Handel

  • Handel - a small procedural programming language for writing songs in the browser. The Handel Interpreter interprets Handel programs and plays compositions in the browser, thanks to Tone.js. [32]

Krill

  • https://github.com/Mdashdotdashn/krill - a livecoding environment inspired by TidalCycles. The aim of this project is to preserve Tidal's wonderful flexibility while allowing a faster way to hack at it in JavaScript. At this moment, Krill only sequences note events via MIDI; there is no equivalent to Tidal's SuperDirt (although there are plans to allow something along those lines in the future). It is presently under active development.

mono

Other



GridSound

DSPGraph

  • https://github.com/ReplicatAudio/DSPGraph - DSP Graph is a web-based tool that allows users to write and run JavaScript code directly in their web browser. Note: DSP Graph is not intended for end-users; it is designed for audio software developers and people who want to learn about building audio software. The graphing tool is specifically designed for digital audio signal processing, and features a simple, intuitive interface that makes it easy to enter and edit code. The built-in graph generator can be used to visualize the output of the code in real time, providing a useful tool for experimenting with different algorithms and techniques for digital audio signal processing.

JSFFT

Gibberish

  • https://github.com/gibber-cc/gibberish - designed to be a fast audio API for the browser. It takes the low-level building blocks provided by genish.js and uses them to create higher-level synthesizers, effects, and sequencers. Gibberish processes each synthesis block one sample at a time, enabling a variety of effects not typically possible in JavaScript synthesis systems, most importantly single-sample feedback loops and audio-rate modulation of scheduling.
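
The single-sample claim is the distinctive part. As a plain-JavaScript illustration of the idea (not Gibberish's own API), a per-sample loop makes single-sample feedback trivial, because the previous output is already available when the next sample is computed:

// Fill one second of audio one sample at a time, feeding each output sample
// back into the phase of the next one (feedback FM) - the kind of structure
// that block-based processing cannot express without a one-block delay.
const sampleRate = 44100;
const out = new Float32Array(sampleRate);
let phase = 0;
let previous = 0;

for (let i = 0; i < out.length; i++) {
  const sample = Math.sin(2 * Math.PI * phase + previous * 0.8);
  previous = sample;               // single-sample feedback path
  phase += 220 / sampleRate;       // 220 Hz carrier
  out[i] = sample * 0.5;
}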

genish.js

Opimodus / OMN

Wolfram:Alpha

GPU


Neural net




  • https://github.com/pranaymanocha/PerceptualAudio - a Tensorflow implementation of our audio perceptual metric. It contains (0) minimal code to run our perceptual metric, (1) code to train the perceptual metric on our JND dataset, and (2) an example of using our perceptual metric as a loss function for speech denoising.

Live coding








Gibber

  • Gibber - a live coding environment for the web browser, using the Gibberish.js audio engine, the CodeMirror code editor library and wrapping Three.js for 3d graphics and shader support. Version 2 of Gibber features a much more efficient audio engine, some interesting mapping abstractions and a server/database backend for publishing and browsing files and collaboratively live coding. In order of preference, Gibber runs in Chrome, Safari, and Firefox.


Fluxus

Al-Jazari

  • Al-Jazari - livecoded entirely by gamepad, and employs a simple graphical language to allow robots to interact with each other and move over a terrain populated by audio triggers. The running code is displayed and edited in thought bubbles over each robot. Al-Jazari is written entirely in Scheme, and runs on Fluxus, a small game/graphical engine which runs on Linux and OSX.


TidalCycles

  • TidalCycles - or Tidal for short, is a language for live coding patterns. It allows you to make musical patterns with text, describing sequences and ways of transforming and combining them, exploring complex interactions between simple parts. Tidal allows you to express music with very flexible timing, providing a little language for describing patterns as step sequences (which can be polyphonic and polymetric), some generators of continuous patterns (e.g. sinewaves, sawtooths) and a wide range of pattern transformations. Tidal is highly ‘composable’ in that pattern transformations can be easily combined together, allowing you to quickly create complex patterns from simple ingredients. [34]




  • Strudel REPL - a new live coding platform incorporating a faithful port of TidalCycles to JavaScript for creating dynamic music pieces in the browser! It is free and open-source and made for beginners and experts alike.
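
A small pattern sketch, assuming the mini-notation and chaining style used in the Strudel documentation (paste into the REPL; function names as remembered):

// Two patterns layered with stack(): a drum loop and a melodic line.
stack(
  s("bd hh sd hh").fast(2),               // sample pattern, doubled in speed
  note("c3 e3 g3 b3").sound("sawtooth")   // melodic pattern on a saw synth
)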








Petal


Livecodelab

  • Livecodelab - a special secret place where you can make fancy "on-the-fly" 3d visuals and play awesomely offbeat (literally) sounds. "On-the-fly" meaning: as you type. Type just three letters: "box", and boom! a box appears. No clicking play, no waiting, no nothing. What are you waiting for? Try the magic. Press the button below and play with the examples.


Topos

  • https://github.com/Bubobubobubobubo/Topos - a web-based live coding environment designed to be installation-free, independent and fun. Topos is loosely based on the Monome Teletype. The application follows the same operating principle, but adapts it to the rich multimedia context offered by web browsers. Topos is capable of many things: it is a generative/algorithmic music sequencer made for improvisation and composition alike; it is a synthesizer capable of additive, subtractive, FM and wavetable synthesis, backed up by a powerful web-based audio engine; it can also generate video thanks to Hydra, oscilloscopes, frequency visualizers and image/canvas sequencing capabilities; it can be used to sequence other MIDI and OSC devices (the latter using a NodeJS script); it is made to be used without the need to install anything, always ready at https://topos.live

flok


Moony.lv2

csynth

SAU / saugns

  • https://codeberg.org/sau/extra-scripts - Extra SAU scripts collection. Makefile for bulk-rendering to MP3 files. Currently, mainly various quick test scripts kept here instead of thrown away.


wide

  • https://github.com/pd3v/wide - a polyphonic instrument, multi-instrument, DSLish/API MIDI sequencer for live coding music. It sends MIDI messages to any stand-alone synthesiser or DAW. cling (an interactive C++ interpreter) provides the live coding environment.

Wavepot


extramuros


NoiseFunge


Protoplug

Worp


Klangmeister

  • Klangmeister - a live coding environment for the browser. It lets you design synthesisers and compose music using computer code - without having to install anything on your own computer. Klangmeister works best in Chrome, because the synthesis features that it relies on have patchy support across the other browsers. [36]


jazzari


Wulfcode

  • Wulfcode - Ostensibly for live-coding, but dramatically simplified compared to the SuperCollider and Csound-based environments used by the live-coding community proper. It is an object-oriented, text-based MIDI sequencer with its own syntax, and a repertoire of commands and structures that allow interesting looping motifs and polyrhythmic phrases to be easily generated and manipulated on the fly.


FoxDot

  • FoxDot - a pre-processed Python-based programming language that provides a fast and user-friendly abstraction to SuperCollider. It also comes with its own IDE, which means it can be used straight out of the box and no fiddling around with config files.


Fragment



  • https://github.com/grz0zrg/fas_launcher - A wxWidgets launcher for the Fragment Audio Server, built for the Fragment Synthesizer, a web-based collaborative spectral synthesizer. This program should compile on most platforms. It is a simple native launcher which provides an easy-to-use interface to start the Fragment Audio Server; it also provides a convenient way to configure the audio server for individual sessions and a direct way to launch the web application pre-linked with the native audio server by passing ?fas=1 as a web argument.


Fragment adds sine waves together to produce sounds. The software gathers frequencies from vertical slices containing the pixel data of a WebGL-powered graphical canvas: each horizontal line of the score is associated with a pure sine wave generator, and all the vertical slices are grouped into one before being fed to the synthesis engine. The pixel data (red and green channels) determine the amplitude of the associated sine wave for each audio channel (it is stereophonic), and the vertical position of the pixel determines which sine wave generator is active. The synthesizer is mainly controlled by the visuals generated from the GLSL script.

The synthesizer supports the Web MIDI API (which is only supported by Chrome and Opera at the moment); it is possible to assign controllers to widgets and to control the GLSL script's "uniform" variables.
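
The additive principle can be sketched with plain Web Audio (an illustration of the idea only, not Fragment's actual engine): one sine oscillator per horizontal row, with a per-row gain standing in for the pixel brightness read from the canvas (the stereo red/green split is omitted here):

// Additive synthesis sketch: each "row" gets its own sine oscillator whose
// level is set from a vertical slice of per-row amplitudes (0..1).
// In a real page the AudioContext must be resumed from a user gesture.
const ctx = new AudioContext();
const master = ctx.createGain();
master.gain.value = 0.1;
master.connect(ctx.destination);

const rows = [220, 330, 440, 550, 660]; // one sine generator per horizontal line
const gains = rows.map((freq) => {
  const osc = ctx.createOscillator();
  osc.type = "sine";
  osc.frequency.value = freq;
  const gain = ctx.createGain();
  gain.gain.value = 0;                  // silent until a "pixel" turns the partial on
  osc.connect(gain).connect(master);
  osc.start();
  return gain;
});

// Pretend a vertical slice of pixels sets each partial's amplitude.
function applySlice(slice) {
  slice.forEach((level, i) => gains[i].gain.setTargetAtTime(level, ctx.currentTime, 0.02));
}
applySlice([1, 0, 0.5, 0, 0.25]);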



PraxisLIVE

  • PraxisLIVE - an open-source hybrid visual environment for live creative coding. Praxis LIVE mixes intuitive real-time visual node editing, with a range of built-in components for audio, visual & data processing, together with an embedded compiler and editor for live-coding Processing, Java and GLSL. While including specific support for audio and video processing, Praxis LIVE is designed to support other forms of cyber-physical coding.

clive

dafxpipe

mercury

  • mercury - a minimal and human-readable language for the live coding of algorithmic electronic audiovisual performances. Programmed in the Cycling '74 Max 8 node-based creative coding environment, as an abstracted layer on the Max/MSP audio engine, with Node4Max used for parsing, lexing and generative algorithms, and Jitter/OpenGL for the visuals and the responsive text editor.

MDAL

  • MDAL - a veritable, text-based audio programming language specifically tailored for use with sound drivers operating on low-level devices such as sound chips and simple DACs. While traditional low-level music description languages such as ABC or the Music Macro Language are essentially abstractions of western sheet music notation, MDAL instead has strong roots in tracker music. The MDAL syntax adheres to several key elements of tracker modules: Song structure is broken down into patterns, which are linked via a sequence matrix. Time flow is represented vertically. The concept of measures and note values has no direct representation in MDAL, instead note length is measured in steps and ticks. MDAL is a language without a pre-defined nomenclature. Aside from a rudimentary set of structural conventions, there are no fixtures - the entire instruction set is customizable. Customization is provided through standardized configuration files, which are parsed and interpreted by an MDAL compiler at runtime. This enables programmers to adapt both user input and data output to the specific needs of their sound drivers/player routines.



GAmuza

  • GAmuza - made with OF and inspired by Processing, binding (almost) the entire OF v0.8.4 language with its official addons included, the entire OpenGL 1.1 language, a bunch of other addons, and the small GAmuza framework of functions and GUI modules to make creative coding a lot easier, all of it embedded into a slightly modified Lua scripting environment.


Shaden

Sporth / AudioMasher



Stenophone

  • https://github.com/jarmitage/Stenophone - a live coding keyboard which is also a digital musical instrument, called the Stenophone. The Stenophone is an augmented stenotype or chorded keyboard, which permits continuous gestural control of keys and features an ergonomic design. These capabilities are exploited to enable the manipulation of algorithms and their parameterisation simultaneously. As you can see this hasn't been worked on in a while. Nevertheless I still think about it and perhaps someone found it interesting to read so it's still here.


Monolith

  • Monolith - the name of an interactive computer music ecosystem that I have written for myself. Features: on-the-fly audio live coding with Scheme REPL support; a stack-based approach to audio; support for Monome Grid + Arc; support for the Griffin Knob; state saving/loading (via sqlite); pre-built interactive pages utilizing interfaces; offline video generation (via x264); OSX/Linux support. Monolith is mainly a bunch of C components built to be controlled inside of a fork of s9 Scheme. The s9 REPL is then set to spawn inside of the emacs Scheme interpreter. From there, the rest of the workflow is controlled via emacs.

Troop

  • https://github.com/Qirky/Troop - a real-time collaborative tool that enables group live coding within the same document across multiple computers. Hypothetically Troop can talk to any interpreter that can take input as a string from the command line but it is already configured to work with live coding languages FoxDot, TidalCycles, and SuperCollider.

Hydra

Cowbell.lol

LiveLily

  • https://github.com/alexdrymonitis/LiveLily - a live sequencing and live scoring system through live coding with a Lilypond-like language. The system includes a text editor, a parser, a sequencer, and an interactive/animated score. The commands are inspired by the Lilypond language, with certain added commands that accommodate fast typing and fast coding sequences.

dublang

  • dublang - a multi-language live coding system with support for multiple programming languages in a single, integrated live coding session. The name dublang is inspired by the musical style dub: dub consists of remixes of existing music, and, just like dub, dublang consists of remixes of existing software tools.

zeroOne

  • https://github.com/pd3v/zeroone - a polyphonic instrument, multi-instrument, DSLish/API MIDI sequencer for live coding music. It sends MIDI messages to any stand-alone synthesiser or DAW.

Mac

ixi

  • ixi lang v3 - an extremely simple and visual live coding environment, presenting high-level, easy-entry control over synth definitions and samples in SuperCollider. The core idea is to represent events in a spatial layout, thus merging musical code and musical scores. The score is active, i.e., if a method is performed upon the score, it changes in real time. The development of ixi lang is part of research involving human-machine interaction, the philosophy of technology and the culture of software use in music. In return for this free software we would like to ask you a few questions regarding your experience of the software.

Impromptu

  • Impromptu - an OSX programming language and environment for composers, sound artists, VJs and graphic artists with an interest in live or interactive programming. Impromptu is a Scheme language environment, a member of the Lisp family of languages. Impromptu is used by artist-programmers in livecoding performances around the globe.

Visuals

See Visuals, Video

Cyril

P5LIVE