Audio

From Things and Stuff Wiki
Revision as of 17:07, 2 November 2019 by Milk


General

Mostly Linux, mostly free software.

See also the related articles in the menu, plus Playback, Dataflow, Pure Data, Distros#Media





  • CDM - blog, etc. on Create Digital music, motion, and more.


“Sound is when you mow your lawn, noise is when your neighbor mows their lawn, and music is when your neighbor mows your lawn,” [1]

Acoustics

mess, to merge with parts of Music, Speaker


  • https://en.wikipedia.org/wiki/Acoustics - the interdisciplinary science that deals with the study of all mechanical waves in gases, liquids, and solids including topics such as vibration, sound, ultrasound and infrasound. A scientist who works in the field of acoustics is an acoustician while someone working in the field of acoustics technology may be called an acoustical engineer. The application of acoustics is present in almost all aspects of modern society with the most obvious being the audio and noise control industries.


  • https://en.wikipedia.org/wiki/Acoustic_wave - type of longitudinal waves that propagate by means of adiabatic compression and decompression. Longitudinal waves are waves that have the same direction of vibration as their direction of travel. Important quantities for describing acoustic waves are sound pressure, particle velocity, particle displacement and sound intensity. Acoustic waves travel with the speed of sound which depends on the medium they're passing through.



  • https://en.wikipedia.org/wiki/Sound - a vibration that propagates as a typically audible mechanical wave of pressure and displacement, through a medium such as air or water. In physiology and psychology, sound is the reception of such waves and their perception by the brain.




  • https://en.wikipedia.org/wiki/Sound_energy - a form of energy associated with the vibration of matter. The SI unit of sound energy is the joule (J). Sound is a mechanical wave and as such consists physically in oscillatory elastic compression and in oscillatory displacement of a fluid. Therefore, the medium acts as storage for both potential and kinetic energy as well.
  • https://en.wikipedia.org/wiki/Speed_of_sound - the distance travelled per unit time by a sound wave as it propagates through an elastic medium. In dry air at 20 °C (68 °F), the speed of sound is 343.2 metres per second (1,126 ft/s; 1,236 km/h; 768 mph; 667 kn), or a kilometre in 2.914 s or a mile in 4.689 s.
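The travel times quoted above are simple arithmetic; a minimal Python sketch (function name is illustrative):

```python
# Travel time of a sound wave in dry air at 20 °C,
# checking the "kilometre in 2.914 s / mile in 4.689 s" figures above.
SPEED_OF_SOUND = 343.2  # m/s in dry air at 20 °C

def travel_time(distance_m: float) -> float:
    """Seconds for sound to cover distance_m at 20 °C."""
    return distance_m / SPEED_OF_SOUND

km_time = travel_time(1000)       # ~2.914 s
mile_time = travel_time(1609.344)  # ~4.689 s
```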


  • https://en.wikipedia.org/wiki/Sound_power - or acoustic power is the rate at which sound energy is emitted, reflected, transmitted or received, per unit time. The SI unit of sound power is the watt (W). It is the power of the sound force on a surface of the medium of propagation of the sound wave. For a sound source, unlike sound pressure, sound power is neither room-dependent nor distance-dependent. Sound pressure is a measurement at a point in space near the source, while the sound power of a source is the total power emitted by that source in all directions. Sound power passing through an area is sometimes called sound flux or acoustic flux through that area.


  • https://en.wikipedia.org/wiki/Sound_intensity - also known as acoustic intensity is defined as the sound power per unit area. The SI unit of sound intensity is the watt per square meter (W/m2). The usual context is the noise measurement of sound intensity in the air at a listener's location as a sound energy quantity. Sound intensity is not the same physical quantity as sound pressure. Hearing is directly sensitive to sound pressure which is related to sound intensity. In consumer audio electronics, the level differences are called "intensity" differences, but sound intensity is a specifically defined quantity and cannot be sensed by a simple microphone. Sound energy passing per second through a unit area held perpendicular to the direction of propagation of sound waves is called intensity of sound.


  • https://en.wikipedia.org/wiki/Sound_pressure - or acoustic pressure is the local pressure deviation from the ambient (average, or equilibrium) atmospheric pressure, caused by a sound wave. In air, sound pressure can be measured using a microphone, and in water with a hydrophone. The SI unit of sound pressure is the pascal (Pa).


  • https://en.wikipedia.org/wiki/Decibel - a logarithmic unit used to express the ratio of two values of a physical quantity, often power or intensity. One of these values is often a standard reference value, in which case the decibel is used to express the level of the other value relative to this reference. The number of decibels is ten times the logarithm to base 10 of the ratio of two power quantities, or of the ratio of the squares of two field amplitude quantities.

The decibel is commonly used in acoustics as a unit of sound pressure level. The reference pressure in air is set at the typical threshold of perception of an average human and there are common comparisons used to illustrate different levels of sound pressure.
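The two decibel formulas above (10·log10 for power quantities, 20·log10 for field quantities) can be sketched in a few lines of Python; the function names are illustrative, and 20 µPa is the standard SPL reference pressure in air:

```python
import math

def db_power(p, p_ref):
    """Level of a power quantity: 10 * log10 of the ratio."""
    return 10 * math.log10(p / p_ref)

def db_field(a, a_ref):
    """Level of a field (amplitude) quantity: 20 * log10 of the ratio,
    i.e. 10 * log10 of the squared ratio."""
    return 20 * math.log10(a / a_ref)

# Sound pressure level relative to the 20 uPa hearing-threshold reference:
spl = db_field(2.0, 20e-6)  # a 2 Pa tone works out to 100 dB SPL
```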

  • Gain structure: input and output levels - This article describes input and output gain structure of audio devices. Audio components are typically rated by their input sensitivity and/or maximum output voltage. This article explains how to match the output voltage of an audio device to the input voltage range of the next device in the signal chain, and how to adjust input sensitivity to accommodate a variety of voltages from different source devices. dB, dBu, dBV, dBFS, and dB-SPL.


  • https://en.wikipedia.org/wiki/Audio_frequency - AF or audible frequency, is characterized as a periodic vibration whose frequency is audible to the average human. The SI unit of audio frequency is the hertz (Hz). It is the property of sound that most determines pitch. The generally accepted standard range of audible frequencies is 20 to 20,000 Hz, although the range of frequencies individuals hear is greatly influenced by environmental factors. Frequencies below 20 Hz are generally felt rather than heard, assuming the amplitude of the vibration is great enough. Frequencies above 20,000 Hz can sometimes be sensed by young people. High frequencies are the first to be affected by hearing loss due to age and/or prolonged exposure to very loud noises.


See Music






  • https://en.wikipedia.org/wiki/Onset_(audio) - the beginning of a musical note or other sound, in which the amplitude rises from zero to an initial peak. It is related to (but different from) the concept of a transient: all musical notes have an onset, but do not necessarily include an initial transient.
  • https://en.wikipedia.org/wiki/Transient_(acoustics) - a high amplitude, short-duration sound at the beginning of a waveform that occurs in phenomena such as musical sounds, noises or speech. It can sometimes contain a high degree of non-periodic components and a higher magnitude of high frequencies than the harmonic content of that sound. Transients do not necessarily directly depend on the frequency of the tone they initiate. Transients are more difficult to encode with many audio compression algorithms, causing pre-echo.



  • https://en.wikipedia.org/wiki/Acoustic_attenuation - a measure of the energy loss of sound propagation in media. Most media have viscosity, and are therefore not ideal media. When sound propagates in such media, there is always thermal consumption of energy caused by viscosity. For inhomogeneous media, besides media viscosity, acoustic scattering is another main reason for removal of acoustic energy. Acoustic attenuation in a lossy medium plays an important role in many scientific researches and engineering fields, such as medical ultrasonography, vibration and noise reduction.


  • https://en.wikipedia.org/wiki/Absorption_(acoustics) - refers to the process by which a material, structure, or object takes in sound energy when sound waves are encountered, as opposed to reflecting the energy. Part of the absorbed energy is transformed into heat and part is transmitted through the absorbing body. The energy transformed into heat is said to have been 'lost'.

When sound from a loudspeaker collides with the walls of a room part of the sound's energy is reflected, part is transmitted, and part is absorbed into the walls. As the waves travel through the wall they deform the material thereof (just like they deformed the air before). This deformation causes mechanical losses via conversion of part of the sound energy into heat, resulting in acoustic attenuation, mostly due to the wall's viscosity. Similar attenuation mechanisms apply for the air and any other medium through which sound travels.

The fraction of sound absorbed is governed by the acoustic impedances of both media and is a function of frequency and the incident angle. Size and shape can influence the sound wave's behavior if they interact with its wavelength, giving rise to wave phenomena such as standing waves and diffraction. Acoustic absorption is of particular interest in soundproofing. Soundproofing aims to absorb as much sound energy (often in particular frequencies) as possible converting it into heat or transmitting it away from a certain location. In general, soft, pliable, or porous materials (like cloths) serve as good acoustic insulators - absorbing most sound, whereas dense, hard, impenetrable materials (such as metals) reflect most.



  • https://en.wikipedia.org/wiki/Acoustic_resonance - a phenomenon where acoustic systems amplify sound waves whose frequency matches one of its own natural frequencies of vibration (its resonance frequencies). The term "acoustic resonance" is sometimes used to narrow mechanical resonance to the frequency range of human hearing, but since acoustics is defined in general terms concerning vibrational waves in matter, acoustic resonance can occur at frequencies outside the range of human hearing. An acoustically resonant object usually has more than one resonance frequency, especially at harmonics of the strongest resonance. It will easily vibrate at those frequencies, and vibrate less strongly at other frequencies. It will "pick out" its resonance frequency from a complex excitation, such as an impulse or a wideband noise excitation. In effect, it is filtering out all frequencies other than its resonance. Acoustic resonance is an important consideration for instrument builders, as most acoustic instruments use resonators, such as the strings and body of a violin, the length of tube in a flute, and the shape of a drum membrane. Acoustic resonance is also important for hearing. For example, resonance of a stiff structural element, called the basilar membrane within the cochlea of the inner ear allows hair cells on the membrane to detect sound. (For mammals the membrane has tapering resonances across its length so that high frequencies are concentrated on one end and low frequencies on the other.)


  • https://en.wikipedia.org/wiki/Helmholtz_resonance - or wind throb is the phenomenon of air resonance in a cavity, such as when one blows across the top of an empty bottle. The name comes from a device created in the 1850s by Hermann von Helmholtz, the Helmholtz resonator, which he used to identify the various frequencies or musical pitches present in music and other complex sounds.



  • https://en.wikipedia.org/wiki/Diffusion_(acoustics) - in acoustics and architectural engineering, is the efficacy by which sound energy is spread evenly in a given environment. A perfectly diffusive sound space is one that has certain key acoustic properties which are the same anywhere in the space. A non-diffuse sound space would have considerably different reverberation time as the listener moved around the room. Virtually all spaces are non-diffuse. Spaces which are highly non-diffuse are ones where the acoustic absorption is unevenly distributed around the space, or where two different acoustic volumes are coupled. The diffusiveness of a sound field can be measured by taking reverberation time measurements at a large number of points in the room, then taking the standard deviation on these decay times.[citation needed] Alternately, the spatial distribution of the sound can be examined. Small sound spaces generally have very poor diffusion characteristics at low frequencies due to room modes.


  • https://en.wikipedia.org/wiki/Anechoic_chamber - an-echoic meaning "non-reflective, non-echoing, echo-free", is a room designed to completely absorb reflections of either sound or electromagnetic waves. They are also often isolated from waves entering from their surroundings. This combination means that a person or detector exclusively hears direct sounds (no reverberant sounds), in effect simulating being inside an infinitely large room.




Performing Musician Magazine:







Loudness

  • https://en.wikipedia.org/wiki/Loudness - the subjective perception of sound pressure. More formally, it is defined as, "That attribute of auditory sensation in terms of which sounds can be ordered on a scale extending from quiet to loud." The relation of physical attributes of sound to perceived loudness consists of physical, physiological and psychological components. The study of apparent loudness is included in the topic of psychoacoustics and employs methods of psychophysics.

In different industries, loudness may have different meanings and different measurement standards. Some definitions such as LKFS refer to relative loudness of different segments of electronically reproduced sounds such as for broadcasting and cinema. Others, such as ISO 532A (Stevens loudness, measured in sones), ISO 532B (Zwicker loudness), DIN 45631 and ASA/ANSI S3.4, have a more general scope and are often used to characterize loudness of environmental noise.

Loudness is a subjective measure, often confused with physical measures of sound strength such as sound pressure, sound pressure level (in decibels), sound intensity, or sound power. Filters such as A-weighting and ITU-R BS.1770 attempt to compensate measurements so that they correspond to loudness as perceived by the typical human.
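The A-weighting curve mentioned above has a standard closed form (the analytic expression from IEC 61672); as a sketch in Python, with the constant +2.00 dB offset that pins the curve to 0 dB at 1 kHz:

```python
import math

def a_weight_db(f):
    """A-weighting attenuation in dB at frequency f (Hz), using the
    standard analytic form; the +2.00 dB offset normalises the curve
    to 0 dB at 1 kHz."""
    f2 = f * f
    ra = (12194.0**2 * f2 * f2) / (
        (f2 + 20.6**2)
        * math.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
        * (f2 + 12194.0**2)
    )
    return 20 * math.log10(ra) + 2.00

a_weight_db(1000)  # ~0 dB by construction
a_weight_db(100)   # strongly attenuated, roughly -19 dB
```

This is why low-frequency noise reads much lower on an A-weighted meter than its raw sound pressure level would suggest.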


  • https://en.wikipedia.org/wiki/Phon - a unit of loudness level for pure tones. Its purpose is to compensate for the effect of frequency on the perceived loudness of tones. By definition, the number of phon of a sound is the dB SPL of a sound at a frequency of 1 kHz that sounds just as loud. This implies that 0 phon is the limit of perception, and inaudible sounds have negative phon levels. The equal-loudness contours are a way of mapping the dB SPL of a pure tone to the perceived loudness level (LN) in phons. These are now defined in the international standard ISO 226:2003, and the research on which this document is based concluded that earlier Fletcher–Munson curves and Robinson–Dadson curves were in error. The phon unit is not an SI unit in metrology. It is used as a unit of loudness level by the American National Standards Institute.


  • https://en.wikipedia.org/wiki/Equal-loudness_contour - a measure of sound pressure (dB SPL), over the frequency spectrum, for which a listener perceives a constant loudness when presented with pure steady tones. The unit of measurement for loudness levels is the phon, and is arrived at by reference to equal-loudness contours. By definition, two sine waves of differing frequencies are said to have equal-loudness level measured in phons if they are perceived as equally loud by the average young person without significant hearing impairment. Equal-loudness contours are often referred to as "Fletcher-Munson" curves, after the earliest researchers, but those studies have been superseded and incorporated into newer standards. The definitive curves are those defined in the international standard ISO 226:2003, which are based on a review of modern determinations made in various countries.


  • https://en.wikipedia.org/wiki/Fletcher–Munson_curves - one of many sets of equal-loudness contours for the human ear, determined experimentally by Harvey Fletcher and Wilden A. Munson, and reported in a 1933 paper entitled "Loudness, its definition, measurement and calculation" in the Journal of the Acoustical Society of America.




  • https://en.wikipedia.org/wiki/LKFS - a loudness standard designed to enable normalization of audio levels for delivery of broadcast TV and other video. Loudness units relative to full scale (LUFS) is a synonym for LKFS that was introduced in EBU R128. Loudness units (LU) is an additional unit used in EBU R128. It describes Lk without direct absolute reference and therefore describes loudness level differences. LKFS is standardized in ITU-R BS.1770.
  • YouTube: EBU R128 Introduction - Florian Camerer - an introduction to the European Broadcasting Union's R128 Broadcast Standard and speaks in general about perceived loudness, peak normalization, loudness normalization, etc.
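The core of the BS.1770 measurement is a mean-square level with a fixed offset. A heavily simplified Python sketch — the standard's K-weighting pre-filter and gating are deliberately omitted here, so the absolute values differ slightly from a compliant LUFS meter:

```python
import math

def loudness_lufs_unweighted(samples):
    """BS.1770-style loudness of a mono block:
    -0.691 + 10 * log10(mean square of the samples).
    Sketch only: K-weighting and gating are omitted, so this is not
    a compliant meter, just the shape of the computation."""
    z = sum(s * s for s in samples) / len(samples)
    return -0.691 + 10 * math.log10(z)

# Full-scale 1 kHz sine at 48 kHz (one period is enough for the mean square):
sine = [math.sin(2 * math.pi * 1000 * n / 48000) for n in range(48)]
level = loudness_lufs_unweighted(sine)  # about -3.70 without K-weighting
```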




  • https://en.wikipedia.org/wiki/Sone - a unit of how loud a sound is perceived. The sone scale is linear. Doubling the perceived loudness doubles the sone value. Proposed by Stanley Smith Stevens in 1936, it is a non-SI unit. In acoustics, loudness is the subjective perception of sound pressure. The study of apparent loudness is included in the topic of psychoacoustics and employs methods of psychophysics.
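The sone scale relates to the phon scale by Stevens' rule of thumb: above about 40 phon, every +10 phon doubles the perceived loudness in sones. A one-line Python sketch:

```python
def phon_to_sone(phon):
    """Stevens' relation, valid above roughly 40 phon:
    each +10 phon doubles the sone value."""
    return 2 ** ((phon - 40) / 10)

phon_to_sone(40)  # 1 sone, the reference point
phon_to_sone(50)  # 2 sones: 10 phon louder is perceived as twice as loud
```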

Psychoacoustics

  • https://en.wikipedia.org/wiki/Psychoacoustics - the scientific study of sound perception. More specifically, it is the branch of science studying the psychological and physiological responses associated with sound (including speech and music). It can be further categorized as a branch of psychophysics.




Software

  • I-Simpa: I-Simpa - open-source software dedicated to the modelling of sound propagation in complex 3D domains. It is aimed at experts (i.e. acousticians), teachers and students, and researchers for projects in room acoustics, urban acoustics, industrial spaces, acoustics courses, etc.

Training

See Music#Training



  • RealSimple Project - musical acoustics laboratory exercises integrating both hands-on laboratory experience and computer-based simulation.

Cymatics

Non-Newtonian

Electrical

  • https://en.wikipedia.org/wiki/Audio_signal - a representation of sound, typically as an electrical voltage. Audio signals have frequencies in the audio frequency range of roughly 20 to 20,000 Hz (the limits of human hearing). Audio signals may be synthesized directly, or may originate at a transducer such as a microphone, musical instrument pickup, phonograph cartridge, or tape head. Loudspeakers or headphones convert an electrical audio signal into sound. Digital representations of audio signals exist in a variety of formats. An audio channel or audio track is an audio signal communications channel in a storage device, used in operations such as multi-track recording and sound reinforcement.


  • https://en.wikipedia.org/wiki/Audio_power - the electrical power transferred from an audio amplifier to a loudspeaker, measured in watts. The electrical power delivered to the loudspeaker, together with its efficiency, determines the sound power generated (with the rest of the electrical power being converted to heat). Amplifiers are limited in the electrical energy they can output, while loudspeakers are limited in the electrical energy they can convert to sound energy without being damaged or distorting the audio signal. These limits, or power ratings, are important to consumers finding compatible products and comparing competitors.


Digital










  • SRC Comparisons - "We have organized the testing of some of the objective parameters of SRC algorithms in the 96 kHz - 44.1 kHz conversion mode. This mode is considered "hard" because of its fractional resampling ratio. The set of test signals has been discussed among engineers from Weiss Engineering, Alexey Lukin and members of Glenn Meadows' Mastering Web-Board. The test files were available in a variety of resolutions (32-bit int, 32-bit float, 24-bit), and the best supported resolution has been used for each of the SRC algorithms tested. The resulting graphs have been drawn by a modified version of the RightMark Audio Analyzer (RMAA) and some specially developed analysis software."
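Why 96 kHz → 44.1 kHz is a "hard" fractional mode is easy to see by reducing the ratio: a polyphase resampler must upsample and decimate by large coprime factors. A quick stdlib Python check:

```python
from fractions import Fraction

# 96 kHz -> 44.1 kHz reduced to the smallest integer up/down factors
# a polyphase resampler would use.
ratio = Fraction(44100, 96000)  # reduces to 147/320
up, down = ratio.numerator, ratio.denominator
# A resampler upsamples by 147, low-pass filters, and decimates by 320;
# the large coprime factors are what makes this mode a demanding test case.
```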



  • Sound on Sound: Phase Demystified - Phase interactions are well known for their ability to destructively interfere with recorded signals, but an understanding of the process can turn it into one of the most powerful creative tools available to you.



Production

See also Music, DAW, Synthesis#Programming, etc.



Synth programming, sequencer programming, etc.










Communities

  • Dancetech - music recording technology forums


Amplitude


Frequency



Drums


Mixdown

Hardware

see sound on sound, etc.


  • Schematic Vault - This collection of pro audio schematics and reference materials has been amassed both from my private stock and from various internet resources. All materials have been formatted as multi-page pdf files for ease of use. Please feel free to email me (address on home page) with any material you'd care to add.


Studio


Wiring


  • https://en.wikipedia.org/wiki/Phone_connector_(audio) - also known as phone jack, audio jack, headphone jack or quarter inch jack plug, is a family of electrical connectors typically used for analog audio signals. The phone connector was invented for use in telephone switchboards in the 19th century and is still widely used. The phone connector is cylindrical in shape, with a grooved tip to retain it. In its original audio configuration, it typically has two, three, four and, occasionally, five contacts. Three-contact versions are known as TRS connectors, where T stands for "tip", R stands for "ring" and S stands for "sleeve". Ring contacts are typically the same diameter as the sleeve, the long shank. Similarly, two-, four- and five- contact versions are called TS, TRRS and TRRRS connectors respectively. The outside diameter of the "sleeve" conductor is 1⁄4 inch (6.35 millimetres).

The "mini" connector has a diameter of 3.5 mm (0.14 in) and the "sub-mini" connector has a diameter of 2.5 mm (0.098 in).






  • https://en.wikipedia.org/wiki/Balanced_audio - a method of interconnecting audio equipment using balanced lines. This type of connection is very important in sound recording and production because it allows the use of long cables while reducing susceptibility to external noise caused by electromagnetic interference. Balanced connections typically use shielded twisted-pair cable and three-conductor connectors. The connectors are usually 3-pin XLR or 1⁄4 inch (6.35 mm) TRS phone connectors. When used in this manner, each cable carries one channel, therefore stereo audio (for example) would require two of them.


  • https://en.wikipedia.org/wiki/XLR_connector - a style of electrical connector, primarily found on professional audio, video, and stage lighting equipment. The connectors are circular in design and have between 3 and 7 pins. They are most commonly associated with balanced audio interconnection, including AES3 digital audio, but are also used for lighting control, low-voltage power supplies, and other applications. XLR connectors are available from a number of manufacturers and are covered by an international standard for dimensions, IEC 61076-2-103.[1] They are superficially similar to the older and smaller DIN connector range, but are not physically compatible with them.



Patch bay



"Full-Normal: Each jack on the top-row is connected to the jack under it on the bottom-row. This allows the audio or video signal to “pass-through” the patchbay without using a patch cable. When we want to change the “normal” signal path we can use a patch cable to change the destination of the signal. Placing a patch cable into either row breaks the signal path. The signal follows the patch cable to where it is patched.

"Half-Normal: ...Placing a patch cable into the bottom-row breaks the signal path. Placing a patch cable into the top-row allows the signal to still go to the jack under it on the bottom-row (without breaking the normal) and also follows the patch cable."


Microphones

  • https://en.wikipedia.org/wiki/Microphone - colloquially nicknamed mic or mike (/maɪk/), is a transducer that converts sound into an electrical signal. Microphones are used in many applications such as telephones, hearing aids, public address systems for concert halls and public events, motion picture production, live and recorded audio engineering, sound recording, two-way radios, megaphones, radio and television broadcasting, and in computers for recording voice, speech recognition, VoIP, and for non-acoustic purposes such as ultrasonic sensors or knock sensors. Several different types of microphone are in use, which employ different methods to convert the air pressure variations of a sound wave to an electrical signal. The most common are the dynamic microphone, which uses a coil of wire suspended in a magnetic field; the condenser microphone, which uses the vibrating diaphragm as a capacitor plate; and the piezoelectric microphone, which uses a crystal of piezoelectric material. Microphones typically need to be connected to a preamplifier before the signal can be recorded or reproduced.





Preamplifier


Mixer



  • RD/MPCTools - Extension Toolkit for Martin M-Series Software


Amplifier

For an amplifier to sound twice as loud as a 10 watt RMS amp you need a 100 watt RMS amp, and for an amplifier to sound twice as loud as that 100 watt RMS amp you need a 1,000 watt RMS amp.
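That rule of thumb is just the decibel arithmetic: a tenfold power increase is +10 dB, which is roughly a perceived doubling of loudness. A quick Python check (function name is illustrative):

```python
import math

def power_gain_db(p_out, p_ref):
    """Relative level of two amplifier powers, in decibels."""
    return 10 * math.log10(p_out / p_ref)

power_gain_db(100, 10)    # 10 dB: roughly "twice as loud"
power_gain_db(1000, 100)  # another 10 dB for the next doubling
```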

  • Class A - 100% of the input signal is used (conduction angle Θ = 360°). The active element remains conducting all of the time.
  • Class B - 50% of the input signal is used (Θ = 180°); the active element carries current half of each cycle, and is turned off for the other half.
  • Class AB - Class AB is intermediate between class A and B, the two active elements conduct more than half of the time.
  • Class C - Less than 50% of the input signal is used (conduction angle Θ < 180°).
  • Class D - uses some form of pulse-width modulation to control the output devices; the conduction angle of each device is no longer related directly to the input signal but instead varies in pulse width. These are sometimes called "digital" amplifiers because the output device is switched fully on or off, and not carrying current proportional to the signal amplitude.
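The class D idea above — output devices switched fully on or off, with pulse width rather than output level tracking the signal — can be sketched as a naive comparator against a triangle carrier. This is a toy model, not a real modulator design (no output filter, arbitrary carrier rate):

```python
def pwm_carrier(n, steps):
    """Triangle carrier in [-1, 1] with a period of `steps` samples."""
    phase = (n % steps) / steps
    return 4 * abs(phase - 0.5) - 1

def class_d_output(signal, steps=16):
    """Naive class D modulator: the switch is fully on (+1) when the
    input exceeds the triangle carrier and fully off (-1) otherwise,
    so the pulse width - not the output level - tracks the amplitude."""
    return [1 if s > pwm_carrier(n, steps) else -1
            for n, s in enumerate(signal)]

# A constant zero input gives roughly a 50% duty cycle; raising the
# input level widens the "on" pulses.
out = class_d_output([0.0] * 16)
```

In real amplifiers a low-pass output filter then recovers the audio-band signal from the pulse train.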


  • https://en.wikipedia.org/wiki/Amplifier_figures_of_merit - numerical measures that characterize its properties and performance. Figures of merit can be given as a list of specifications that include properties such as gain, bandwidth, noise and linearity, among others listed in this article. Figures of merit are important for determining the suitability of a particular amplifier for an intended use.



  • YouTube: Is equipment burn in real? - "all capacitors have to form, you put voltage on them and little microscopic holes are filled and it changes the equivalent series resistance and a number of other characteristics"






Crossover

Speaker

See Speaker

Headphones


Calibration

Synthesizer

See Synthesis, #Audio programming

Vocoder

  • Elektor 10 Channel Vocoder - "I'm a synth & electronics passionate fan and live near Antwerp, Belgium. In the mid 1990's, really young and inexperienced, I decided to build the Elektor 10 channel vocoder as described in the Dutch Elektor magazine from the early 1980's."

Drum machine

  • LXR Drum Synthesizer - The LXR is a full fledged digital drum machine with integrated sequencer. Its sound engine provides 6 different instruments, each with over 30 parameters to tweak. It can produce a wide variety of sounds, ranging from classic analogue emulations to crunchy digital mayhem.


  • eXaDrums - Electronic drums for Linux. The goal of the project is to use a Raspberry Pi to make a drum module. As far as the software goes, it is written in C++ and uses Gtkmm to display a nice graphical user interface (GUI) on a Raspberry Pi official 7" touchscreen. The hardware consists of some accelerometers or piezos connected to an analog to digital converter (ADC).

Sampler



  • https://github.com/ryanjamesmcgill/Audio-Sampler-Selector - An embedded Linux system that plays back audio samples for musical performance. The development board used was a Texas Instruments (TI) BeagleBone Black based on an AM335x 1GHz ARM® Cortex-A8 processor. For high-quality audio, a TI PCM5102A DAC was used. The program leveraged the C++ library JUCE and the Linux sound server JACK.

Sound module

  • https://en.wikipedia.org/wiki/Sound_module - an electronic musical instrument without a human-playable interface such as a piano-style musical keyboard. Sound modules have to be operated using an externally connected device, which is often a MIDI controller, of which the most common type is the musical keyboard (although wind controllers, guitar controllers and electronic drum pads are also used). Controllers are devices that provide the human-playable interface and may or may not produce sounds of their own. Another common way of controlling a sound module is through a sequencer, which is computer hardware or software designed to record and play back control information for sound-generating hardware (e.g., a DJ may program a bassline and use the sound module to produce the sound). Connections between sound modules, controllers, and sequencers are generally made with MIDI (Musical Instrument Digital Interface), a standardized protocol designed for this purpose that includes special ports (jacks) and cables.



DAC / ADC


Sound chip/card






  • https://en.wikipedia.org/wiki/AC'97 - Audio Codec '97; also MC'97 for Modem Codec '97, is an audio codec standard developed by Intel Architecture Labs in 1997. The standard was used in motherboards, modems, and sound cards. Audio components integrated into chipsets consist of two component classes: an AC'97 digital controller (DC97), which is built into the southbridge of the chipset, and AC'97 audio and modem codecs, which are the analog components of the architecture. AC'97 defines a high-quality, 16- or 20-bit audio architecture with surround sound support for the PC. AC'97 supports a 96 kHz sampling rate at 20-bit stereo resolution and a 48 kHz sampling rate at 20-bit stereo resolution for multichannel recording and playback. AC97 defines a maximum of 6 channels of analog audio output.





ISA



  • https://github.com/schlae/snark-barker - a 100% compatible clone of the famed SB 1.0 "Killer Card" sound card from 1989. It implements all the features, including the digital sound playback and recording, Ad Lib compatible synthesis, the joystick/MIDI port, and the CMS chips (which are actually Philips SAA1099 synthesizer devices). [10]

PCI

  • envy24control - alsa-utils

Wavetable


  • https://en.wikipedia.org/wiki/Creative_Wave_Blaster - was an add-on MIDI-synthesizer for Creative Sound Blaster 16 and Sound Blaster AWE32 family of PC soundcards. It was a sample-based synthesis General MIDI compliant synthesizer. For General MIDI scores, the Wave Blaster's wavetable-engine produced more realistic instrumental music than the SB16's onboard Yamaha-OPL3.


Music workstation


Digital Audio Workstation




FireWire

  • https://en.wikipedia.org/wiki/IEEE_1394 - an interface standard for a serial bus for high-speed communications and isochronous real-time data transfer. It was developed in the late 1980s and early 1990s by Apple, which called it FireWire. The 1394 interface is also known by the brands i.LINK (Sony), and Lynx (Texas Instruments). The copper cable it uses in its most common implementation can be up to 4.5 metres (15 ft) long. Power is also carried over this cable allowing devices with moderate power requirements to operate without a separate power supply. FireWire is also available in Cat 5 and optical fiber versions. The 1394 interface is comparable to USB, though USB requires a master controller and has greater market share.


  • The FFADO project aims to provide a generic, open-source solution for the support of FireWire based audio devices on the Linux platform. It is the successor of the FreeBoB project.


AES3

  • https://en.wikipedia.org/wiki/AES3 - (also known as AES/EBU) is a standard for the exchange of digital audio signals between professional audio devices. An AES3 signal can carry two channels of PCM audio over several transmission media including balanced lines, unbalanced lines, and optical fiber. AES3 was jointly developed by the Audio Engineering Society (AES) and the European Broadcasting Union (EBU). The standard was first published in 1985 and was revised in 1992 and 2003. AES3 has been incorporated into the International Electrotechnical Commission's standard IEC 60958, and is available in a consumer-grade variant known as S/PDIF.

ADAT Lightpipe

  • https://en.wikipedia.org/wiki/ADAT_Lightpipe - officially the ADAT Optical Interface, is a standard for the transfer of digital audio between equipment. It was originally developed by Alesis but has since become widely accepted, with many third party hardware manufacturers including Lightpipe interfaces on their equipment. The protocol has become so popular that the term "ADAT" is now often used to refer to the transfer standard rather than to the Alesis Digital Audio Tape itself.

MADI

  • http://en.wikipedia.org/wiki/MADI - or Multichannel Audio Digital Interface or AES10, is an Audio Engineering Society (AES) standard electronic communications protocol that defines the data format and electrical characteristics of an interface that carries multiple channels of digital audio. The AES first documented the MADI standard in AES10-1991, and updated it in AES10-2003 and AES10-2008. The MADI standard includes a bit-level description and has features in common with the two-channel format of AES3. It supports serial digital transmission over coaxial cable or fibre-optic lines of 28, 56, or 64 channels; and sampling rates of up to 96 kHz with resolution of up to 24 bits per channel. Like AES3 or ADAT it is a Uni-directional interface (one sender and one receiver).



USB


  • FOSDEM 2019: Linux and USB Audio Class 3 - USB Audio Class 3.0 is a specification recently introduced by the USB Implementers Forum. Ruslan is an author of the UAC3 implementation in Linux; he gives an overview of improvements and changes in this USB audio spec, and shares the current Linux support status and challenges faced during ALSA driver implementation.


Pedal

Jesusonic

See also DAW#JSFX

  • Jesusonic - A dynamic text mode live FX processor
  • https://code.google.com/p/jsfxgen - a working proof-of-concept modular IDE for generating DSP code for the JesuSonic platform, written using the Flex and AIR SDKs. JesuSonic has a standalone version and is also part of Reaper as a plugin.



The OWL

  • The OWL - an open source, programmable audio platform made for musicians, hackers and programmers alike. Users can program their own effects, or download ready-made patches from our growing online patch library. It is available both as a guitar fx pedal and a Eurorack synthesizer module. OWL stands for Open Ware Laboratory which refers to the fact that the entire project is open source in both hardware and software. Being open source is an important issue for us in terms of making all of the technology completely accessible to the end user.

to sort


Controller

See also MIDI#Controllers


Game controller

Eye tracking

  • http://paulbatchelor.github.io/proj/eyejam - an open-source eye-controlled music composition environment. This was developed during my summer internship with the Enable Group at Microsoft Research. The source code can be found on github under the official project name Microsoft Hands-Free Sound Jam. EyeJam is cross-platform, with support for Windows, Mac, and Linux. Eye-control is only available on Windows. On the other platforms, eye control is simulated using the mouse cursor.

Raspberry Pi

Instruments

  • Chimaera - a poly-magneto-phonic-theremin (we had to come up with this new subcategory in the domain of electronic instruments, as the Chimaera did not fit anywhere else). Other terms that would describe it well could be: a general-purpose-continuous-music-controller, a multi-touch-less-ribbon-controller or a possible offspring of a mating experiment of a keyboard and violin. Think of it as an invisible string that is excitable by an arbitrary number of magnetic sources. Depending on where the magnetic sources are located on the string and depending on how strong (or how near) they are, the device outputs different event signals. These general-purpose event signals then can be used to e.g. drive a synthesizer, an effects processing unit or some other hardware.


  • chair.audio - making digital instruments with analog interfaces. Our mission is to make sounds tangible. That's why we are developing instruments with haptic interfaces for electronic sound - both analog and software synthesis. Our Instruments have excitable surfaces that you can scratch, hit or bow. A very limited run of our developer edition will soon be available here.



Wire


Music roll

  • https://en.wikipedia.org/wiki/Music_roll - a storage medium used to operate a mechanical musical instrument. They are used for the player piano, mechanical organ, electronic carillon and various types of orchestrion. The vast majority of music rolls are made of paper. Other materials that have been utilized include thin card (Imhof-system), thin sheet brass (Telektra-system), composite multi-layered electro-conductive aluminium and paper roll (Triste-system) and, in the modern era, thin plastic or PET film. The music data is stored by means of perforations. The mechanism of the instrument reads these as the roll unwinds, using a pneumatic, mechanical or electrical sensing device called a tracker bar, and the mechanism subsequently plays the instrument. After a roll is played, it is necessary for it to be rewound before it can be played again. This necessitates a break in a musical performance. To overcome this problem, some instruments were built with two player mechanisms allowing one roll to play while the other rewinds. A piano roll is a specific type of music roll, and is designed to operate an automatic piano like the player piano or the reproducing piano.


  • https://en.wikipedia.org/wiki/Piano_roll - a music storage medium used to operate a player piano, piano player or reproducing piano. A piano roll is a continuous roll of paper with perforations (holes) punched into it. The perforations represent note control data. The roll moves over a reading system known as a 'tracker bar' and the playing cycle for each musical note is triggered when a perforation crosses the bar and is read. A rollography is a listing of piano rolls, especially made by a single performer, analogous to a discography.

Piano rolls were in continuous mass production from around 1896 to 2008, and are still available today, with QRS Music claiming to have 45,000 titles available with "new titles being added on a regular basis". Largely replacing piano rolls, which are no longer mass-produced today, MIDI files represent a modern way in which musical performance data can be stored. MIDI files accomplish digitally and electronically what piano rolls do mechanically. Software for editing a performance stored as MIDI data often has a feature to show the music in a piano roll representation.


  • Midimusic eplayWin32 - Estey and Wurlitzer e-roll player for Hauptwerk, Miditzer, GrandOrgue & eplayOrgan. This graphical player will play Estey e-rolls on any Hauptwerk or Miditzer organ and Wurlitzer Band Organ e-rolls on eplayOrgan (Windows, iMac and Linux) It will automatically operate the manuals, pedals, stops, couplers and swell. As supplied this version plays the Hauptwerk St. Annes Moseley and Paramount 310 plus the Miditzer 160, 216 or 260 organs. It also plays Wurlitzer 125, 150 and 165 organs. Other Hauptwerk or Miditzer organs can be played by adding their data via the menus. It also plays my new eplayOrgan and most other organs which can be played from midi keyboards, including GrandOrgue, Viscount and jOrgan.

MP3

PA system



  • https://en.wikipedia.org/wiki/Backline_(stage) - used in popular music and sound reinforcement system contexts to refer to electronic audio amplification equipment and speaker enclosures that are placed behind the band or the rhythm section on stage, including amplifiers and speaker cabinets for guitars, bass guitars and keyboards. In the US and Canada, the term has expanded to include many of the musical instruments that the rhythm section musicians play, including pianos, Hammond organs, drum kits and various percussion instruments such as congas and bongos.

Sound system

  • https://en.wikipedia.org/wiki/Sound_reinforcement_system - the combination of microphones, signal processors, amplifiers, and loudspeakers in enclosures all controlled by a mixing console that makes live or pre-recorded sounds louder and may also distribute those sounds to a larger or more distant audience. In many situations, a sound reinforcement system is also used to enhance or alter the sound of the sources on the stage, typically by using electronic effects, such as reverb, as opposed to simply amplifying the sources unaltered.

A sound reinforcement system for a rock concert in a stadium may be very complex, including hundreds of microphones, complex live sound mixing and signal processing systems, tens of thousands of watts of amplifier power, and multiple loudspeaker arrays, all overseen by a team of audio engineers and technicians. On the other hand, a sound reinforcement system can be as simple as a small public address (PA) system, consisting of, for example, a single microphone connected to a 100 watt amplified loudspeaker for a singer-guitarist playing in a small coffeehouse. In both cases, these systems reinforce sound to make it louder or distribute it to a wider audience.

Some audio engineers and others in the professional audio industry disagree over whether these audio systems should be called sound reinforcement (SR) systems or PA systems. Distinguishing between the two terms by technology and capability is common, while others distinguish by intended use (e.g., SR systems are for live event support and PA systems are for reproduction of speech and recorded music in buildings and institutions). In some regions or markets, the distinction between the two terms is important, though the terms are considered interchangeable in many professional circles.





  • https://en.wikipedia.org/wiki/Sound_system_(DJ) - a group of DJs and audio engineers contributing and working together as one, playing and producing music over a large PA system or sound reinforcement system, typically for a dance event or party.

Linux



"there are many good reasons to consider Linux [for pro audio]: low- or no-cost software; a high-performance sound system; great audio routing; powerful sound synthesis; high-quality score notation; it's extremely customisable; more of your CPU power may be used for audio processing; it avoids many patent/license restriction pitfalls; it avoids costly tie-ins to specific product ranges; software may be (legally) modified to suit your individual needs; modular software allows you to configure your software studio the way that you want; and it's driven by passion before profit." [12]

News and communities


  • LinuxMusicians forum - mission: to facilitate discussion, learning, and discovery of music making on the Linux platform.


  • Libre Music Production is a community-driven online resource, focused on promoting musical creation and composition using free and open source (FLOSS) software. By providing hands-on material submitted by the community, such as guides, tutorials, articles and news updates, we want to show not only that there is great FLOSS audio software out there, but also how to practically use that software to make music.



Mailing lists


IRC

Freenode
  • #lau - Linux Audio Users, slower traffic, related to the mailing list
  • #lad - Linux Audio Developers programming chat, related to the mailing list
  • #linuxmusicians - slower traffic, related to the forum
  • #opensourcemusicians - related to the podcast, FLOSS on all platforms, chat can get quite general
  • #linuxmao - Francophone, related to the site
  • #audio4linux.de - Germanophone, related to the site
  • #archlinux-proaudio - Arch Linux proaudio project and general discussion
  • #kxstudio - Debian audio repo/distro
  • #studioware - Slackware multimedia
  • #proaudio-overlay - Gentoo audio
  • #lv2 - open audio plugin format
  • #jack - audio system
  • #alsa - audio system
  • #pulseaudio - audio system
  • #audacity - sample editor
  • #ardour - DAW
  • #ingen - audio host
  • #non - "DAW"
  • #lmms - DAW
  • #rosegarden - sequencer
  • #zrythm - DAW
  • #surgesynth - synth
  • ##zynsubaddfx - synth
  • #dataflow - Pure Data
  • #lilypond - notation
  • #laborejo - notation, sequencer, SF2
  • #openal - 3d audio
  • #vorbis - codec
  • ##dsp - digital signal processing
  • ##music-electronics
  • #musicbrainz - music tagging
  • #metabrainz - MusicBrainz dev
  • #edmproduction - related to the subreddit
  • #RedditAudio - mostly audio consumer electronics
  • ##xenharmonic - microtonal
  • ##audio
  • #audiovisual
  • ##radio
  • ##hamradio
  • ##rtlsdr - software defined radio
  • ##electronics
  • ##music - general music listening chat
  • #music - general music listening chat
  • #EDM - slow, general electronic dance music chat channel
  • #Juce - framework
OFTC
  • #debian-multimedia

Cons

Distros

See Distros#Audio/visual, Playback#Operating System

Software lists


An in-depth look at programming sound using Linux — jannewmarch


Real time

See also *nix#linux-rt



Audio systems

In a Unix-like operating system, a sound server mixes different data streams and sends a single unified audio stream to an output device. The mixing is usually done in software, or in hardware if there is a supported sound card.

The "sound stack" can be visualized as follows, with programs in the upper layers calling elements in the lower layers:

  • Applications (e.g. mp3 player, web video)
  • Sound server (e.g. aRts, ESD, JACK, PulseAudio)
  • Sound subsystem (described as kernel modules or drivers; e.g. OSS, ALSA)
  • Operating system kernel (e.g. Linux, Unix)



  • https://github.com/linuxaudio/realtime-suggestions - a bash script that suggests optimization options (while not stating the obvious) for realtime use of the Linux kernel. As these are just suggestions, they should be taken with a grain of salt: configurations on Linux distributions can differ quite a lot. That being said: this script will not think for you!


"If we were drawing the [internet] OSI model used to describe the networking framework that connects your machine to every other machine on the network, we'd find clear strata, each with its own domain of processes and functionality. There's very little overlap in layers, and you certainly don't find end-user processes in layer seven messing with the electrical impulses of the raw bitstreams in layer one.

"Yet this is exactly what can happen with the Linux audio framework. There isn't even a clearly defined bottom level, with several audio technologies messing around with the kernel and your hardware independently. Linux's audio architecture is more like the layers of the Earth's crust than the network model, with lower levels occasionally erupting on to the surface, causing confusion and distress, and upper layers moving to displace the underlying technology that was originally hidden."

"ALSA itself has a kernel level stack and a higher API for programmers to use, mixing drivers and hardware properties with the ability to play back surround sound or an MP3 codec. Most distributions stick PulseAudio and GStreamer on top[,] ... The deeper the layer, the closer to the hardware it is." [13]


OSS

Old.

The API is designed to use the traditional Unix framework of open(), read(), write(), and ioctl(), via special devices. For instance, the default device for sound input and output is /dev/dsp. Examples using the shell:

cat /dev/random > /dev/dsp
  # plays white noise through the speaker

cat /dev/dsp > a.a
  # reads data from the microphone and copies it to file a.a


  • https://github.com/libfuse/osspd - OSS Proxy uses CUSE (extension of FUSE allowing character devices to be implemented in userspace) to implement OSS interface - /dev/dsp, /dev/adsp and /dev/mixer. From the POV of the applications, these devices are proper character devices and behave exactly the same way so it can be made quite versatile.

ALSA

"ALSA is responsible for translating your audio hardware's capabilities into a software API that the rest of your system uses to manipulate sound. It was designed to tackle many of the shortcomings of OSS (and most other sound drivers at the time), the most notable of which was that only one application could access the hardware at a time. This is why a software component in ALSA needs to manage audio requests and understand your hardware's capabilities.

"ALSA was designed to replace OSS. However, OSS isn't really dead, thanks to a compatibility layer in ALSA designed to enable older, OSS-only applications to run. It's easiest to think of ALSA as the device driver layer of the Linux sound system. Your audio hardware needs a corresponding kernel module, prefixed with snd_, and this needs to be loaded and running for anything to happen. This is why you need an ALSA kernel driver for any sound to be heard on your system, and why your laptop was mute for so long before someone thought of creating a driver for it. Fortunately, most distros will configure your devices and modules automatically." [14]







Information

less /proc/asound/card0/pcm0p/sub0/hw_params
  # current hardware info

cat /proc/asound/cards
  # List audio hardware

cat /proc/asound/card0
  # List card info

cat /proc/asound/devices
  # List audio devices

aplay -L
  # List all PCMs defined

modinfo soundcore
  # Kernel sound module info

lsmod | grep snd
  # List loaded ALSA kernel modules

lspci -v | grep -i audio
  # List PCI audio devices
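
The card list in /proc/asound/cards is line-oriented and easy to parse in scripts. A minimal sketch; the sample text below stands in for the real file, whose contents depend on the installed hardware:

```shell
# Sample text standing in for /proc/asound/cards; on a live system the
# real file would be read instead of this variable.
sample=' 0 [PCH            ]: HDA-Intel - HDA Intel PCH
                      HDA Intel PCH at 0xf7f30000 irq 31
 1 [USB            ]: USB-Audio - USB Audio Device
                      C-Media USB Audio Device'

# keep only the header lines ("N [ID]: ...") and print "index id"
cards=$(printf '%s\n' "$sample" |
  awk -F'[][]' '/^ *[0-9]+ \[/ { gsub(/ +$/, "", $2); split($1, a, " "); print a[1], $2 }')
printf '%s\n' "$cards"
```

The card index printed here is what `hw:N` device strings and the asoundrc `card` keyword refer to.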


  • alsacap - ALSA device capability lister. Scans soundcards known to ALSA for devices and subdevices, and displays ranges of configuration parameters for the given ALSA device.
  • QasConfig is a graphical browser for the configuration tree and can help to analyze and debug an ALSA setup.

Configuration

ALSA settings are stored in the file asound.state; its location varies by distribution.

alsactl
  # advanced controls for ALSA soundcard driver

alsactl init
  # initialize the driver to a default state

alsactl store
  # store current configuration


  • stativ / asoundconf - asoundconf-gtk, the GTK+ applet to allow you to select your default sound card.


  • http://www.alsa-project.org/main/index.php/Asoundrc - Neither the user-side .asoundrc nor the asound.conf configuration files are required for ALSA to work properly. Most applications will work without them. These files are used to allow extra functionality, such as routing and sample-rate conversion, through the alsa-lib layer.

The keyword default is defined in the ALSA lib API and will always access hw:0,0 — the default device on the default soundcard. Specifying the !default name supersedes the one defined in the ALSA lib API.

pcm.NAME {
	type hw               # Kernel PCM
	card INT/STR          # Card name or number
	[device] INT          # Device number (default 0)
	[subdevice] INT       # Subdevice number, -1 first available (default -1)
	mmap_emulation BOOL   # enable mmap emulation for ro/wo devices
}
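
As a concrete example, a minimal ~/.asoundrc sketch that makes a second card the default PCM and control device. The card number 1 is an assumption; substitute your own index from /proc/asound/cards:

```
# ~/.asoundrc sketch: route the default PCM and mixer to card 1
pcm.!default {
    type hw
    card 1
}
ctl.!default {
    type hw
    card 1
}
```

No daemon needs restarting; ALSA re-reads the file each time an application opens a device.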


  • PCM (digital audio) plugins - these extend the functionality and features of PCM devices. The plugins take care of various sample conversions, sample copying among channels, and so on.

Plugin: hw

This plugin communicates directly with the ALSA kernel driver. It is raw communication without any conversions. The emulation of mmap access can be optionally enabled, but expect worse latency in that case.

The nonblock option specifies whether the device is opened in non-blocking mode. Note that the blocking behavior of read/write access is not changed by this option; it influences only the blocking behavior when opening the device. If you would like to keep compatibility with older ALSA code, turn this option off.

Plugin: file

This plugin stores the contents of a PCM stream to a file or pipes the stream to a command, and can optionally use an existing file as an input data source (i.e., a "virtual mic").


  • http://alsa.opensrc.org/Dmix - Mixing enables multiple applications to output sound at the same time. Most discrete sound cards support hardware mixing, which is enabled by default if available. Integrated motherboard sound cards (such as Intel HD Audio) usually do not support hardware mixing. On such cards, software mixing is done by an ALSA plugin called dmix. This feature is enabled automatically if hardware mixing is unavailable.


  • http://alsa.opensrc.org/Dsnoop - the equivalent of the dmix plugin, but for recording sound. The dsnoop plugin allows several applications to record from the same device simultaneously.
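
The two plugins are commonly combined through an asym PCM so that playback goes through dmix and capture through dsnoop. A sketch for /etc/asound.conf; card 0 is an assumption:

```
# Full-duplex software mixing: dmix for playback, dsnoop for capture
pcm.!default {
    type asym
    playback.pcm "dmix:0"
    capture.pcm "dsnoop:0"
}
```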


  • http://linux.die.net/man/1/alsaloop - allows creating a PCM loopback between a PCM capture device and a PCM playback device. It supports multiple soundcards, adaptive clock synchronization, and adaptive rate resampling using the samplerate library (if available on the system). Mixer controls can also be redirected from one card to another (for example Master and PCM).


  • http://alsa.opensrc.org/Alsa-tools
    • https://launchpad.net/ubuntu/xenial/+package/alsa-tools-gui
      • echomixer - control tool for Echoaudio soundcards
      • envy24control - control tool for Envy24 (ice1712) based soundcards
      • hdajackretask - retask jacks on HDA Intel hardware
      • hdspconf - GUI program to control the Hammerfall HDSP Alsa Settings.
      • hdspmixer - tool to control the advanced routing features of the RME Hammerfall DSP
      • rmedigicontrol - control tool for RME Digi32 and RME Digi96 soundcards


speaker-test -c 2
  # test 2-channel output with pink noise, from alsa-utils


PulseAudio

  • PulseAudio is a sound system for POSIX OSes, meaning that it is a proxy for your sound applications. It allows you to do advanced operations on your sound data as it passes between your application and your hardware. Things like transferring the audio to a different machine, changing the sample format or channel count and mixing several sounds into one are easily achieved using a sound server.


Configuration

User-specific PulseAudio configuration:

~/.pulse/default.pa
  # to load modules and define defaults
~/.pulse/client.conf
  # to configure a client for the sound server
~/.pulse/daemon.conf
  # to define sample rates and buffers

To avoid .pulse-cookie in home folder, set the following in /etc/pulse/client.conf [17]

cookie-file = /tmp/pulse-cookie

By default, pulseaudio changes master and application volume at the same time. To disable this, edit /etc/pulse/daemon.conf or ~/.config/pulse/daemon.conf with:

flat-volumes = no


man pulse-cli-syntax
  # pulseaudio commandline help


pactl
  # control a running PulseAudio sound server


pacmd
  # reconfigure a PulseAudio sound server during runtime

pacmd list-cards

pacmd dump
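
pacmd output is line-oriented and reasonably scriptable; in `pacmd list-sinks` the default sink's index line is marked with '*'. A sketch using hypothetical sample output in place of a live server:

```shell
# Hypothetical excerpt of `pacmd list-sinks` output; on a live system
# the command itself would be used instead of this variable.
sample='    index: 0
        name: <alsa_output.pci-0000_00_1b.0.analog-stereo>
  * index: 1
        name: <alsa_output.usb-Device.analog-stereo>'

# the default sink is the entry whose index line starts with '*'
default_index=$(printf '%s\n' "$sample" | awk '/^ *\* index:/ { print $3 }')
printf '%s\n' "$default_index"
```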








  • https://github.com/rhaas80/pa_volume - a simple tool to set the remembered volume level of pulseaudio clients. It requires module-stream-restore to be loaded (which is usually the case) to function. When called without arguments it shows all the known clients (running and non-running) and their remembered volume level. To set the volume level, pass it the name of the client followed by the volume in percent.


Mixer

GUI
CLI
TUI
 h/j/k/l, arrows               navigation, volume change
 H/L, Shift+Left/Shift+Right   change volume by 10
 1/2/3/4/5/6/7/8/9/0           set volume to 10%-100%
 m                             mute/unmute
 Space                         lock/unlock channels together
 Enter                         context menu
 F1/F2/F3                      change modes
 Tab                           go to next mode
 Mouse left click              select device or mode
 Mouse wheel                   volume change
 q/Esc/^C                      quit


  • https://github.com/mk-fg/pulseaudio-mixer-cli - Interactive python/ncurses UI to control volume of pulse streams, kinda like alsamixer, focused not on sink volume levels (which can actually be controlled via alsamixer, with alsa-pulse plugin), but rather on volume of individual streams, so you can tune down the music to hear the stuff from games, mumble, skype or browser.


Web

Other

pasuspender -- audacity
  # temporarily suspend pulseaudio and launch audacity, for when PA gets in the way and config yak shaving isn't an option


  • https://linux.die.net/man/1/padsp - PulseAudio OSS wrapper. Starts the specified program and redirects its access to OSS compatible audio devices (/dev/dsp and auxiliary devices) to a PulseAudio sound server. padsp uses the $LD_PRELOAD environment variable that is interpreted by ld.so(8) and thus does not work for SUID binaries and statically built executables. Using padsp is equivalent to starting an application with $LD_PRELOAD set to libpulsedsp.so
cat /dev/urandom | padsp tee /dev/audio > /dev/null







JACK

See also #JACK configuration

  • JACK - a system for handling real-time, low latency audio (and MIDI). It runs on GNU/Linux, Solaris, FreeBSD, OS X and Windows (and can be ported to other POSIX-conformant platforms). It can connect a number of different applications to an audio device, as well as allowing them to share audio between themselves. Its clients can run in their own processes (i.e. as normal applications), or they can run within the JACK server (i.e. as a "plugin"). JACK also has support for distributing audio processing across a network, both fast and reliable LANs as well as slower, less reliable WANs.






  • Sound Engineers Guide to Jackd - This page attempts to collect in one place various issues surrounding the design and abilities of Paul Davis's wonderful Jackd from the point of view of a technical user. It is not an introduction or a usage howto.


Libraries




  • https://github.com/stetre/luajack - a Lua binding library for the JACK Audio Connection Kit. It runs on GNU/Linux and requires Lua (>=5.3) and JACK (API >= v0.124.1).


Utils





GStreamer

sndio

  • sndio is a small audio and MIDI framework that is part of the OpenBSD project. It provides a lightweight audio and MIDI server and a fully documented userspace API to access either the server or the hardware directly in a uniform way. sndio is designed to work for desktop applications, but pays special attention to the synchronization mechanisms and reliability required by music applications. Reliability through simplicity is one of the project goals.

CRAS

  • CRAS: ChromeOS Audio Server - allows for sound to be routed dynamically to newly attached audio-capable monitors (DisplayPort and HDMI), USB webcam, USB speakers, bluetooth headsets, etc., and in a way that requires as little CPU as possible and that adds little or no latency.

dspd

aRts

See #aRts_2

NAS

Enlightened Sound Daemon

PipeWire

  • PipeWire - a project that aims to greatly improve handling of audio and video under Linux. It aims to support the use cases currently handled by both PulseAudio and JACK, and at the same time provide the same level of powerful handling of video input and output. It also introduces a security model that makes interacting with audio and video devices from containerized applications easy, with supporting Flatpak applications being the primary goal. Alongside Wayland and Flatpak we expect PipeWire to provide a core building block for the future of Linux application development.


Integration

  • Setting up Jack Audio for GStreamer, Flash, and VLC - First of all I need to say that I won't be mentioning Pulseaudio, so if that is what you're here for then you are at the wrong place, because I don't use Pulseaudio at all. Pulseaudio can be run on top of Jack, but doing so will increase CPU load (a very tiny amount on modern systems).


  • https://github.com/brummer10/pajackconnect - Make JACK Work With PulseAudio. This script is intended to be invoked via QjackCtl to start up and shut down JACK on a system running PulseAudio. It handles the necessary setup to make the two work together, so PulseAudio clients get transparently routed through JACK while the latter is running, or, if PulseAudio is suspended by pasuspender, do nothing [19]

Windows




  • https://wiki.jriver.com/index.php/WASAPI - is Microsoft's most modern method for talking with audio devices. It is available in Windows Vista, Windows 7, and later versions of Windows. It allows delivering an unmodified bitstream to a sound device, and provides benefits similar to those provided by ASIO drivers. One of the other main benefits of WASAPI is that it provides applications with exclusive access to audio devices, bypassing the system mixer, default settings, and typically any effects provided by the audio driver. WASAPI is the recommended Audio Output Mode for Windows unless your audio device has a well-behaved ASIO driver, and it effectively replaces all legacy output modes including Kernel Streaming and Direct Sound.


  • https://en.wikipedia.org/wiki/DirectSound - a deprecated software component of the Microsoft DirectX library for the Windows operating system. DirectSound provides a low-latency interface to sound card drivers written for Windows 95 through Windows XP and can handle the mixing and recording of multiple audio streams.
  • https://en.wikipedia.org/wiki/DirectMusic - a deprecated component of the Microsoft DirectX API that allows music and sound effects to be composed and played and provides flexible interactive control over the way they are played. Architecturally, DirectMusic is a high-level set of objects, built on top of DirectSound, that allow the programmer to play sound and music without needing to get quite as low-level as DirectSound. DirectSound allows for the capture and playback of digital sound samples, whereas DirectMusic works with message-based musical data. Music can be synthesized either in hardware, in the Microsoft GS Wavetable SW Synth, or in a custom synthesizer.
  • https://en.wikipedia.org/wiki/XAudio2 - a lower-level audio API for Microsoft Windows, Xbox 360 and Windows Phone 8, the successor to DirectSound on Windows and a supplement to the original XAudio on the Xbox 360. XAudio2 operates through the XAudio API on the Xbox 360, through DirectSound on Windows XP, and through the low-level audio mixer WASAPI on Windows Vista and higher.


  • ASIO4ALL - Universal ASIO Driver For WDM Audio

JACK configuration

pasuspender -- jackd
  # temporarily suspend pulseaudio and start jack (needed for jack1 without PA patch)


  • jackd(1) - the JACK audio server daemon, a low-latency audio server. Originally written for the GNU/Linux operating system, it also supports Mac OS X and various Unix platforms. JACK can connect a number of different client applications to an audio device and also to each other. Most clients are external, running in their own processes as normal applications. JACK also supports internal clients, which run within the jackd process using a loadable "plugin" interface.
 jackd -R -P89 -s -dalsa -dhw:0 -r48000 -p256 -njack-server
  # start jackd: realtime with priority 89, ALSA backend on card hw:0, 48 kHz sample rate, 256 frames per period, server name "jack-server" (after -dalsa, -p sets the ALSA period size, not the global port maximum)
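
The period size and sample rate determine the buffering latency of one direction: latency ≈ frames × periods ÷ rate. A quick sketch, assuming the ALSA backend's default of 2 periods:

```shell
# Buffering latency implied by the ALSA backend settings:
# 256 frames/period * 2 periods / 48000 Hz ~= 10.67 ms
frames=256 periods=2 rate=48000
latency_ms=$(awk -v f="$frames" -v n="$periods" -v r="$rate" \
    'BEGIN { printf "%.2f ms", 1000 * f * n / r }')
printf '%s\n' "$latency_ms"
```

Lowering -p reduces latency at the cost of more frequent wakeups and a higher risk of xruns.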


# jack2 package commands
jack_alias
jack_bufsize
jack_control
jack_cpu
jack_cpu_load
jack_disconnect
jack_evmon
jack_freewheel
jack_iodelay
jack_latent_client
jack_load
jack_metro
jack_midi_dump
jack_midi_latency_test
jack_midiseq
jack_midisine
jack_monitor_client
jack_multiple_metro
jack_net_master
jack_net_slave
jack_netsource
jack_rec
jack_samplerate
jack_server_control
jack_session_notify
jack_showtime
jack_simple_client
jack_simple_session_client
jack_test
jack_thru
jack_transport
jack_unload
jack_wait
jack_zombie

jack_connect fluidsynth:l_00 system:playback_3
jack_connect fluidsynth:r_00 system:playback_4

jack_lsp
  # list jack ports

jack_lsp -c
  # list jack port connections (sinks indented)


  • jack-play(1) - a light-weight JACK sound file player. It creates as many output ports as there are channels in the input file. It will connect to ports mentioned in the environment variable JACK_PLAY_CONNECT_TO, which must include a %d pattern to indicate port number; otherwise it implements no connection logic, use jack-plumbing(1) instead. Written by Rohan Drape. The resampler quality settings are:
  • q=0: SRC_LINEAR
  • q=1: SRC_ZERO_ORDER_HOLD
  • q=2: SRC_SINC_FASTEST
  • q=3: SRC_SINC_MEDIUM_QUALITY
  • q=4: SRC_SINC_BEST_QUALITY


  • jack_iodelay - creates one input and one output port, then measures the latency (signal delay) between them. For this to work, the output port must be connected to its input port. The measurement is accurate to better than one sample.


  • https://gareus.org/oss/jackfreqd/start - heavily based on powernowd. Instead of taking CPU load as the parameter for deciding on the CPU frequency, jackfreqd uses JACK DSP load; it only supports powernowd's aggressive mode. Optionally jackfreqd can also take CPU load into account, which comes in handy when the JACK daemon is temporarily unavailable or if frequency scaling should also respond to non-audio processes.

Utilities

jack_control

D-Bus control via python2-dbus

jack_control start
  # starts the jack server

jack_control stop
  # stops the jack server

jack_control status
  # check whether jack server is started, return value is 0 if running and 1 otherwise
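Because the status command reports the server state through its exit code, it scripts cleanly; a minimal sketch (assumes the D-Bus-enabled jackdbus is installed):

```shell
# Start the JACK server only if it is not already running.
# The trailing `|| true` keeps the snippet from aborting `set -e` scripts
# on machines where jackdbus is absent or D-Bus is unavailable.
if command -v jack_control >/dev/null 2>&1; then
    jack_control status >/dev/null 2>&1 || jack_control start || true
fi
```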
jack_control dg
  # current driver

jack_control dp
  # current driver parameters
jack_control dl
  # drivers list

jack_control ds alsa
  # selects alsa as the driver (backend)
 
jack_control sm
  # switch master to currently selected driver
jack_control eps realtime True
  # set engine parameters, such as realtime

jack_control dps period 256
  # set the driver parameter period to 256
etc.:
  help                       - print this help text
  dpd <param>                - get long description for driver parameter
  dps <param> <value>        - set driver parameter
  dpr <param>                - reset driver parameter to its default value
  asd <driver>               - add slave driver
  rsd <driver>               - remove slave driver
  il                         - get list of available internals
  ip <name>                  - get parameters of given internal
  ipd <name> <param>         - get long description for internal parameter
  ips <name> <param> <value> - set internal parameter
  ipr <name> <param>         - reset internal parameter to its default value
  iload <name>               - load internal
  iunload <name>             - unload internal
  ep                         - get engine parameters
  epd <param>                - get long description for engine parameter
  eps <param> <value>        - set engine parameter
  epr <param>                - reset engine parameter to its default value
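Putting the commands above together, a typical configure-and-start sequence might look like this (a sketch: the device name hw:0 and the parameter values are assumptions, and jackdbus must be installed):

```shell
if command -v jack_control >/dev/null 2>&1; then
    jack_control ds alsa              # select the ALSA backend
    jack_control dps device hw:0      # soundcard to use
    jack_control dps rate 48000       # sample rate
    jack_control dps period 256       # frames per period
    jack_control eps realtime True    # realtime scheduling for the engine
    jack_control start                # launch the server
    jack_control status || true       # 0 if running, 1 otherwise
else
    echo "jack_control not found (install jack2/jackdbus)"
fi
```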

jack-select

jackman

Connections

alsa_in / alsa_out


alsa_in -j "Description" -d prefix:Name -q 1 2>&1 1> /dev/null &
  # used to send ALSA microphone input to a JACK input device
  # -d = device name, hw:2
  # -q = quality of resampler, 1-4
  # -c = channels, automatic default
  # -r 48000 = sample rate, automatic default
alsa_in
  # can automatically detect and open an available soundcard (what type? doesn't work for usb mic)
arecord -l
 ...
  card 2: AK5370 [AK5370], device 0: USB Audio [USB Audio]
  Subdevices: 0/1
  Subdevice #0: subdevice #0

alsa_in  -dhw:2 -jusb-mic
  # or
alsa_in -dhw:AK5370 -j "USB Mic" 


alsa_out -j "Description" -d prefix:Name -q 1 2>&1 1> /dev/null &
  # used to send JACK output to an ALSA device, like a speaker or headphones


If you get "Capture open error: Device or resource busy", some other program has control of the playback interface.

To see what application has control of the interface:

fuser -u /dev/snd/pcmC0D0p
  # this is card 0, device 0, pcm playback

If it's pulseaudio, launch pavucontrol, go to the Configuration tab and select Off for the device(s).


Zita-ajbridge

  • Zita-ajbridge - provides two applications, zita-a2j and zita-j2a. They allow using an ALSA device as a Jack client, to provide additional capture (a2j) or playback (j2a) channels. Functionally these are equivalent to the alsa_in and alsa_out clients that come with Jack, but they provide much better audio quality. The resampling ratio will typically be stable within 1 PPM and change only very smoothly. Delay will be stable as well, even under worst-case conditions, e.g. the Jack client running near the end of the cycle.
cat /proc/asound/cards

zita-a2j -dhw:3,0 -jwebcam


Jack Std-I/O

  • jackstdio - jack-stdout writes JACK audio-sample data to buffered standard output. jack-stdin reads raw audio data from standard-input and writes it to a JACK audio port.

Session/config management

LASH Audio Session Handler

  • LASH - a session management system for GNU/Linux audio applications. It allows you to save and restore audio sessions consisting of multiple interconnected applications, restoring program state (i.e. loaded patches) and the connections between them.

Dead. Inflexible and underused.

  • GLASHCtl - a simple applet for controlling the LASH Audio Session Handler. When you run it it will appear as a small LASH icon in your "notification area" or "system tray".


LADISH

  • ladish - LADI Session Handler or simply ladish is a session management system for JACK applications on GNU/Linux using D-Bus. Its aim is to allow you to have many different audio programs running at once, to save their setup, close them down and then easily reload the setup at some other time. ladish doesn't deal with any kind of audio or MIDI data itself; it just runs programs, deals with saving/loading (arbitrary) data and connects JACK ports together. It can also be used to move entire sessions between computers, or post sessions on the Internet for download.
    • https://github.com/LADI/ladish - LADI Session Handler, a rewrite of LASH: a session management system for JACK applications on GNU/Linux. Its aim is to allow you to have many different audio programs running at once, to save their setup, close them down and then easily reload the setup at some other time. ladish doesn't deal with any kind of audio or MIDI data itself; it just runs programs, deals with saving/loading (arbitrary) data and connects JACK ports together. It can also be used to move entire sessions between computers, or post sessions on the Internet for download. ladish has a GUI frontend, gladish, based on lpatchage (LADI Patchage), and the ladish_control command-line app for headless operation. LADI Tools is a set of apps that interface with ladish, the JACK server and a2jmidid.
    • https://github.com/alessio/ladish/


LADI Tools
  • https://github.com/alessio/laditools - LADI Tools, forked from LADI/laditools, is a set of tools aiming to achieve the goals of the LADI project to improve desktop integration and user workflow of a Linux audio system based on JACK and LADISH. These tools take advantage of the D-Bus interfaces of JACK2 and LADISH to ease the configuration and use of your software studio.

In the near future, it should also be possible to use laditools to control JACK through an OSC interface.

You will find in this suite:

  • laditools - python module
  • ladi-system-tray - a system tray icon that allows you to start, stop and monitor JACK, as well as start some JACK related apps (log viewer, connections...)
  • wmladi - a controller as a Window Maker dockapp which uses a menu similar to ladi-system-tray's
  • ladi-system-log - a JACK, LADISH and a2jmidid log viewer
  • ladi-control-center - a GUI to setup JACK's and laditools' configuration
  • ladi-player - compact front-end that allows users to start, stop and monitor a LADI system.
  • g15ladi - a JACK monitor for g15 keyboards


JACK Session

  • originally jack 1 (not D-Bus)
  • Saving a session will save the state of all 'JACK Session'-supported apps plus their JACK connections
  • Opening a session will automatically launch those apps, restoring their state and JACK connections
  • Supported apps can be told to save/load their state to/from a specific location






QjackCtl

  • QjackCtl - JACK Audio Connection Kit - Qt GUI Interface

QjackCtl holds its settings and configuration state per user, in a file located at $HOME/.config/rncbc.org/QjackCtl.conf. Normally, there's no need to edit this file, as it is recreated and rewritten every time qjackctl is run.

  • D-Bus control and Jack 2 D-Bus control
  • Connection and JACK Session manager


Cadence

  • Cadence - a set of tools useful for audio production. Cadence itself is also an application (the main one), which this page will document. There are other applications that are part of the Cadence suite, they are usually named as the "Cadence tools". They are: Catarina (simple patching), Catia (patching), Claudia (LADISH)
    • Cadence - controls and monitors various Linux sound systems as well as audio-related system settings
cadence --minimized &


Claudia
  • Claudia - a LADISH frontend; it's just like Catia, but focused at session management through LADISH.
  • jack2 (dbus)
  • Claudia-Launcher is a multimedia application launcher with LADISH support. It searches for installed packages (not binaries), and displays the respective content as a launcher. The content comes from a hardcoded database, created and/or modified to suit the target distribution.



Non Session Manager

  • Non Session Manager is a graphical interface to the NSM Daemon (nsmd). By default, running the command non-session-manager will start both the GUI and an instance of the daemon. NSM manages clients together in a session. That's it. NSM doesn't know or care what Window Manager or audio subsystem those clients use--nor should it. Specific clients must be written to persist these environmental factors, and added to sessions when required.

For saving and restoring the JACK connection graph, a simple headless client named jackpatch has been developed and included in the NSM distribution. Simply add jackpatch to your basic template session and all the sessions you base on it will have their JACK connection graphs automatically saved and restored.

non-session-manager -- --session-root path


  • http://non.tuxfamily.org/wiki/nsm-proxy - a simple NSM client for wrapping non-NSM capable programs. It enables the use of programs supporting LADISH Level 0 and 1, and programs which accept their configuration via command-line arguments.


  • https://github.com/vktec/njsm - bridges Non Session Manager and JACK Session. This allows programs that support JACK Session (say, jalv) to run inside nsm-proxy and have their data saved using njsm.


  • https://github.com/rhetr/nsm-git - makes git a little easier to use with non session manager sessions. creates a git repository in the current session and commits all untracked and unstaged files to it whenever save is pressed. nsm-git also reads the session.nsm file and deletes any saved applications that are not listed in the session. This program is meant to be executed within NSM.



RaySession
  • https://github.com/Houston4444/RaySession - a GNU/Linux session manager for audio programs such as Ardour, Carla, QTractor, Non-Timeline, etc. It uses the same API as Non Session Manager, so programs compatible with NSM are also compatible with RaySession. As with Non Session Manager, the principle is to load audio programs together, then be able to save or close all documents together.


Stagepatch


Gonzo


MonoMultiJack

  • MonoMultiJack - a program for managing, starting and stopping Jackd and music programs. Another feature is connecting and disconnecting Jack audio and MIDI ports, as well as ALSA MIDI ports. It is programmed in Mono using GTK# as GUI toolkit.


JackLinx

Preselected applications.


chino

  • chino - a 'special-purpose session manager', requiring customisation to cover one or more similar setups. Once customised, using it is dead simple. Perhaps it is best to not overstress the term "session management", instead describing chino as a framework and toolset to build and manage a meta-application consisting of the user's favorite modular Jack audio and Midi tools, each started and interconnected in predefined ways.
chino -n newproject
  # start newproject in current directory

chino -o existingproject
  # open existingproject


Broken? Depends on listlib which depends on anch which depends on tml which depends on flex. None of these were in the AUR. Then tml didn't build.

Routing

  • QJackConnect - a QT based patchbay for the JACK Audio Connection Kit.

Catia

  • Catia is a JACK Patchbay, with some neat features like A2J bridge support and JACK Transport.

Patchage

  • Patchage is a modular patch bay for audio and MIDI systems based on Jack and Alsa.


Patchmatrix

jsweeper

  • jsweeper will be a programmable port connection manager for ALSA sequencer and JACK audio and midi ports. Ports are laid out in a matrix so that connecting or disconnecting a port or a group of ports is just one mouse click or keypress.

sigromat

  • https://github.com/rhetr/sigromat - Elm signal routing matrix, mainly for JACK audio connections but possibly generalizable to other usecases. frontend sort of works, JACK backend definitely WIP and is therefore not yet pushed

njconnect

CliConnect

esjit


Routing snapshots

aj-snapshot

  • aj-snapshot - a small program that can be used to make snapshots of the connections made between JACK and/or ALSA clients. Because JACK can provide both audio and MIDI support to programs, aj-snapshot can store both types of connections for JACK. ALSA, on the other hand, only provides routing facilities for MIDI clients. You can also run aj-snapshot in daemon mode if you want to have your connections continually restored.
aj-snapshot filename
  # make a snapshot

aj-snapshot -r filename
  # restore a snapshot

aj-snapshot -d filename &
  # run in daemon mode


  • Robust Session Management - QJackCTL and Patchage for setup, diagnostics, and testing, aj-snapshot for management of Jack and ALSA MIDI connections, The DBus version of Jack2


autocable

echo "connect system:capture_1 system:playback_1
disconnect system:capture_2 system:playback_2" | ./autocable

./autocable yourdirectory/textfile.ac


JMess

  • https://github.com/jcacerec/jmess-jack - JMess - A utility to save your audio connections (mess). JMess can save an XML file with all the current Jack Audio connections. This same file can be loaded to connect everything again. The XML file can also be edited. It also has the option to disconnect all the clients.
jmess -s filename.xml
  # save

jmess -c filename.xml
  # load

jmess -d -c filename.xml
  # disconnect all then load

jmess -d
  # disconnect all

jack_snapshot

  • jack_snapshot - a little tool for storing/restoring jack connection states. It does this by writing/reading the names of the connected ports into/from a simple text file. Herein lies one weakness: some jack clients don't use the same jack name on each run, but dynamically assign one (like meterbridge). Most of them can be told to use a specific name, though, so this isn't usually a problem. Some pattern matching might be added in the future.

jack-plumbing

  • jack-plumbing maintains a set of port connection rules and manages these as clients register ports with JACK. Port names are implicitly bounded regular expressions and support sub-expression patterns.

jack-matchmaker

  • jack-matchmaker - a small command line utility that listens to JACK port registrations by clients and connects them when they match one of the port pattern pairs given on the command line at startup. jack-matchmaker never disconnects any ports. The port name patterns are specified as pairs of positional arguments or read from a file (see below) and are interpreted as Python regular expressions

jack_autoconnect

  • https://github.com/kripton/jack_autoconnect - Tiny application that reacts on port registrations by clients and connects them. The port names are interpreted as regular expressions and more than one pair can be defined upon calling.

Jack Sanity

jacklistener etc.


patchy


ASTRUX

Timing

JACK Transport

  • JACK Transport Design - The JACK Audio Connection Kit provides simple transport interfaces for starting, stopping and repositioning a set of clients. This document describes the overall design of these interfaces, their detailed specifications are in <jack/transport.h>
jack_transport> ?
  activate      Call jack_activate().
  exit          Exit transport program.
  deactivate    Call jack_deactivate().
  help          Display help text [<command>].
  locate        Locate to frame <position>.
  master        Become timebase master [<conditionally>].
  play          Start transport rolling.
  quit          Synonym for `exit'.
  release       Release timebase.
  stop          Stop transport.
  tempo         Set beat tempo <beats_per_min>.
  timeout       Set sync timeout in <seconds>.
  ?             Synonym for `help'.
echo play |jack_transport
  # pass command to execute
  # tempo change doesn't work via this method
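Since jack_transport reads commands from standard input, it can be driven entirely from a pipe for simple transport automation; a sketch (assumes a running JACK server, otherwise jack_transport just fails to connect):

```shell
# Become timebase master, roll for two seconds, then stop and quit.
{
    echo master
    echo play
    sleep 2
    echo stop
    echo exit
} | jack_transport || true   # `|| true`: don't abort scripts when no server is up
```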


  • JackDirector is a Linux app that lets you control Jack Audio Connection Kit's transport play/pause using midi commands (noteon) and lets you assign bpm changes and other commands to midi program changes. This program plays a metronome through 2 audio outputs exposed in Jack.


  • gjacktransport - a standalone application that provides access to the JACK transport mechanism via a dynamic graphical slider. In other words: this software allows seeking in audio/video media files when they are played along jack transport. Intended for audio engineers or A/V editors that work with ardour, ecasound, hydrogen and/or xjadeo. Additionally it provides 'gjackclock', a "Big Clock" display for jack-transport.


  • cabestan is a small GTK+ program that interfaces with the jack audio connection kit to play, rewind, or fast forward the stream via the jack transport interface.


  • jack-transport is a minimalist Jack transport control interface using ncurses. It displays the transport state and current time, and provides standard operating keys.


  • QJackMMC - a Qt based program that can connect to a device or program that emits MIDI Machine Control (MMC) and allow it to drive JACK transport, which in turn can control other programs. JackCtlMMC is a slightly simpler command-line version of QJackMMC.



  • jack-osc - publishes the transport state of the local JACK server as OSC packets over a UDP connection. jack-osc allows any OSC enabled application to act as a JACK transport client, receiving sample accurate pulse stream timing data, and monitoring and initiating transport state change.


  • InConcert - a MIDI-controlled application that allows a musician to control the tempo and synchronization of a MIDI sequence. It features a tap tempo to adjust the beat (and synchronize the beat) and the ability to skip beats or insert beats. It works by controlling the Jack Audio Connection Kit's transport. InConcert depends on Jack and ALSA, and therefore only runs on Linux.

Doesn't work??


  • TapStart - measures a tempo you tap. But: It sends OSC-messages with the tempo or delay to customizable hosts and paths. It updates the Jack tempo on each click (=new averaged tempo). It can start the Jack transport after tapping a defined number of beats.


  • jack-trans2midi - a utility that converts jack transport into midi clock messages



Ableton Link




  • https://github.com/Deep-Symmetry/carabiner - A loose connector for interacting with Ableton Link. Carabiner is a program that embeds the C++ Link library and listens for local TCP connections to allow other programs, like beat-link-trigger and Afterglow, to participate in some aspects of a Link session, even though they were not written using C++ compatible languages and runtimes.



Pioneer DJ Link






Calculation

  • Qrest is a musician toolkit aimed at helping composers, performers, recordists and mixers: Find out the tempo of a musical piece, Calculate delay times, Calculate LFO frequencies (i.e., timing conversions)


  • RASP - "RASP Aids Song Production" is a set of utilities for song production, supplementing functions missing in some DAWs. Features: Tap Tempo, Delay/Hz Calculator, Song Time Calculator, Note-to-Frequency Conversion, Simple Frequency Generator (v2), Metronome (v2)


Metronomes


kmetronome

  • Drumstick Metronome (kmetronome) is a MIDI based metronome using the ALSA sequencer. Intended for musicians and music students, it is a tool to keep the rhythm while playing musical instruments.
    • No decimal BPM, not MIDI driven

ametro

klick

  • klick is an advanced command-line based metronome for JACK. It allows you to define complex tempo maps for entire songs or performances.

JACK transport connect but not driven by it? BPM argument required, doesn't change when transport master runs.

  • gtklick - a GTK frontend to klick. It's written in Python and communicates with klick via OSC.
klick -o 12345 60 &
gtklick -q osc.udp://localhost:12345

Polygnome

GTick

  • GTick - an audio metronome application written for GNU/Linux and other UN*X-like operating systems supporting different meters (Even, 2/4, 3/4, 4/4 and more) and speeds ranging from 10 to 1000 bpm. It utilizes GTK+ and OSS (ALSA compatible).

Hubcap

  • Hubcap - a fairly simple metronome *nix app with a tempo fader and both auditory and visual feedback on a beat.
    • Audio only, no MIDI

Accelerando

  • Accelerando - a musical metronome that can speed up, allowing you to practice your music at progressively faster tempos. For example, you could set it to play 60 beats per minute for 4 bars, then automatically speed up by 10 beats per minute, and so on. It runs on Unix.

jmetro

midiclick

ctronome

  • ctronome - a very simple yet powerful ;) programmable console metronome.
    • OSS Audio only, no MIDI

Click Tracker

  • Click Tracker - a program designed for composers, conductors and instrumentalists working with modern music. The main goal of the software is to prepare a click track of any score, no matter how complex it is. This software runs in Windows, OSX and Linux under the open source program Pure Data, and can be used either by conductors in concert, by musicians for practice purposes, by composers while composing.


Graphical metronomes


  • https://gitlab.com/smondet/vimebac - graphical metronome and instructions display that interfaces with JACK-midi applications. The display can be completely driven by MIDI events and it can also send MIDI events. It can also be self-driven and hence run without jackd although this is somewhat less interesting since it becomes just a visual metronome.


  • https://github.com/kunstmusik/BeatViz - BeatViz shows a 4x4 grid that represents additive groupings of beats. (Beat here meaning a single atomic tick, equal to a 16th note within the author's Csound Live Code system.) UDP controlled.


Web metronomes


  • Chrome Web Store: Dr. Beat - developed as a part of the HackTime (http://goo.gl/SscNs) project from GDG Chrome Korea. It's a metro style metronome app. It helps you to keep the beats.

Windows

  • Open Metronome - Windows only. User definable BPM; Measure can be set to any length, with emphasis on any beat(s); Each beat can be one or more of over forty voices, with the supplied Samples covering the complete General MIDI percussion set, or custom samples; Visual indicator as well as audible output;


Networked

See also Streaming#Audio



  • https://en.wikipedia.org/wiki/Audio_over_Ethernet - the use of an Ethernet-based network to distribute real-time digital audio. AoE replaces bulky snake cables or audio-specific installed low-voltage wiring with standard network structured cabling in a facility. AoE provides a reliable backbone for any audio application, such as for large-scale sound reinforcement in stadiums, airports and convention centers, multiple studios or stages.

While AoE bears a resemblance to voice over IP (VoIP) and audio over IP (AoIP), AoE is intended for high-fidelity, low-latency professional audio. Because of the fidelity and latency constraints, AoE systems generally do not utilize audio data compression. AoE systems use a much higher bit rate (typically 1 Mbit/s per channel) and much lower latency (typically less than 10 milliseconds) than VoIP. AoE requires a high-performance network. Performance requirements may be met through use of a dedicated local area network (LAN) or virtual LAN (VLAN), overprovisioning or quality of service features. Some AoE systems use proprietary protocols (at the higher OSI layers) which create Ethernet frames that are transmitted directly onto the Ethernet (layer 2) for efficiency and reduced overhead. The word clock may be provided by broadcast packets.
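The ~1 Mbit/s-per-channel figure follows directly from uncompressed PCM: bit rate is just sample rate times bits per sample. A quick check with typical professional values (48 kHz, 24-bit — example values, not from the article):

```shell
rate=48000   # samples per second
bits=24      # bits per sample (uncompressed PCM)
echo "$((rate * bits)) bits/s per channel"   # 1152000, i.e. about 1.15 Mbit/s
```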



See also Networking#ISDN


  • https://en.wikipedia.org/wiki/mLAN - short for Music Local Area Network, is a transport level protocol for synchronized transmission and management of multi-channel digital audio, video, control signals and multi-port MIDI over a network. The mLAN protocol was originally developed by Yamaha Corporation, and publicly introduced in January 2000. It was available under a royalty-free license to anyone interested in utilizing the technology. mLAN exploits several features of the IEEE 1394 (FireWire) standard such as isochronous transfer and intelligent connection management. There are two versions of the mLAN protocol. Version 1 requires S200 rate, while Version 2 requires S400 rate and supports synchronized streaming of digital audio at up to 24 bit word length and 192 kHz sample rate, MIDI and wordclock at a bitrate up to 400 Megabits per second. As of early 2008, mLAN appeared to have reached the end of its product life.



  • https://en.wikipedia.org/wiki/AES67 - a technical standard for audio over IP and audio over ethernet interoperability. The standard was developed by the Audio Engineering Society and first published in September 2013. It is a layer 3 protocol suite based on existing standards and is designed to allow interoperability between various IP-based audio networking systems such as RAVENNA, Livewire, Q-LAN and Dante. It also provides interoperability with layer 2 technologies, like Audio Video Bridging (AVB). AES67 promises interoperability between previously competing networked audio systems and long-term network interoperation between systems. Since its publication, AES67 has been implemented independently by several manufacturers and adopted by many others.



Jack

Netjack

  • Netjack - a Realtime Audio Transport over a generic IP Network, fully integrated into JACK. It syncs all clients to one soundcard, so there is no resampling and no glitches in the whole network. Packet loss is also handled gracefully: by using the CELT codec, it is even possible for single packet losses to be masked by the packet loss concealment code.


JackTrip

  • JackTrip - a Linux and Mac OS X-based system used for multi-machine network performance over the Internet. It supports any number of channels (as many as the computer/network can handle) of bidirectional, high quality, uncompressed audio signal streaming.

Zita-njbridge

  • Zita-njbridge - command line Jack clients to transmit full quality multichannel audio over a local IP network, with adaptive resampling by the receiver(s). Zita-njbridge can be used for a one-to-one connection (using UDP) or in a one-to-many system (using multicast). Sender and receiver(s) can each have their own sample rate and period size, and no word clock sync between them is assumed. Up to 64 channels can be transmitted; receivers can select any combination of these. On a lightly loaded or dedicated network zita-njbridge can provide low latency (same as for an analog connection). Additional buffering can be specified in case there is significant network delay jitter. IPv6 is fully supported.



jack_audio_send / jack_audio_receive

Send native JACK 32-bit float audio data over the network using UDP OSC messages.


MultiJACK

  • https://github.com/ponderworthy/MultiJACK - a fully operational demo of a framework to increase the audio DSP power available to JACK within a single multicore motherboard, using multiple JACK processes in concert, connected via IP transport.

Compared to jack2??


FLACJACKet

  • https://github.com/0xsx/FLACJACKet - a DLNA media server that broadcasts streams of audio routed to JACK input ports over the local network encoded in the FLAC format. It aims to provide reliable audio transmission while minimizing latency and taking advantage of FLAC features such as lossless compression and support for surround sound. It is Free and Open Source Software, released under the GNU General Public License.

TPF

Studio Link


Audio programming

Less GUI, more code.

See also Creative / live coding, Dataflow, Computing#Programming, Notation

  • https://en.wikipedia.org/wiki/Comparison_of_audio_synthesis_environments - typically consist of an audio programming language (which may be graphical) and a user environment to design/run the language in. Although many of these environments are comparable in their abilities to produce high-quality audio, their differences and specialties are what draw users to a particular platform. This article compares noteworthy audio synthesis environments, and enumerates basic issues associated with their use.





How-tos

  • Hack Audio - Here you will find informational content related to audio and computer programming, as well as a community of coders/engineers interested in music. New information will be posted on a weekly basis for a range of experience levels – from beginner to professional. If you feel like you don’t have a lot of experience with programming, that’s great! This blog is meant to help you go from zero to hacker hero.











by Allen B. Downey









  • Fiview - freeware application for Windows, Linux and Mac OSX that can be used to design and view digital filters. It also makes it very easy to compare different filters by allowing you to switch between them using the digit keys, and it generates efficient and readable public domain example code that can be used directly in an application. It is released under the GNU GPL. Much of the underlying filter design code was based on mkfilter from Tony Fisher -- see the source code for details. The resulting filters were improved by splitting them into separate stages, which improves the accuracy and stability of them enormously, especially for higher-order Bessel and Butterworth filters. The source also includes a library, fidlib (now with its own page), which can be used to design filters at run-time. The fiview utility generates fast generic compiler-optimisable example C code both using the frequencies provided, and also in a form that allows the frequencies to be provided at run-time via a call to fidlib. This permits applications the flexibility to do things like generating banks of similar filters at run-time according to run-time parameters.


  • MicroModeler DSP - a web-based digital filter IDE. MicroModeler DSP solves the entire filter design, scaling and coding problem in one integrated process. Spend less time on the problems and more time on your project.


  • Polarbear - a tool for designing filters in the complex domain. Filters can be designed by placing any number of poles and zeros on the z plane. From this the filter coefficients are calculated, and the filter can be applied in real time on an audio stream.


  • FFmpegSource - (usually known as FFMS or FFMS2) is a cross-platform wrapper library around FFmpeg/libav. It gives you an easy, convenient way to say "open and decompress this media file for me, I don't care how you do it" and get frame- and sample-accurate access (usually), without having to bother with the sometimes less than straightforward and less than perfectly documented libav API.








  • Designing Sound by Andy Farnell
  • Welsh's Synthesizer Cookbook: Synthesizer Programming, Sound Analysis, and Universal Patch Book


  • Making Computers Sing - When most people think of computer music or sounds, “blips” and “bleeps” often are the first things to come to mind. Let’s dive in and look at some of the ways a computer musician would make these sounds. Compares Max/MSP and Pure Data, Csound, SuperCollider, ChucK, and Faust.




bash

cat /dev/urandom | hexdump -v -e '/1 "%u\n"' | awk '{ split("0,2,4,5,7,9,11,12",a,","); for (i = 0; i < 1; i+= 0.0001) printf("%08X\n", 100*sin(1382*exp((a[$1 % 8]/12)*log(2))*i)) }' | xxd -r -p | aplay -c 2 -f S32_LE -r 16000
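Unrolled into readable Python (names and the base frequency are mine; the one-liner uses random bytes and a different base pitch), the pipeline above picks semitone offsets from a major-scale table and renders each as sine samples:

```python
import math, random

def random_scale_tones(n_notes=4, rate=16000, dur=0.1, seed=1):
    """Render random notes from a major scale as float samples in [-1, 1]."""
    random.seed(seed)
    scale = [0, 2, 4, 5, 7, 9, 11, 12]      # semitone offsets, as in the one-liner
    samples = []
    for _ in range(n_notes):
        semis = random.choice(scale)
        freq = 220.0 * 2 ** (semis / 12)     # equal temperament, base A3 (assumption)
        for i in range(int(rate * dur)):
            samples.append(math.sin(2 * math.pi * freq * i / rate))
    return samples
```

The shell version does the same thing, only packing each sample as hex for aplay instead of keeping floats.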

Historical

MUSIC-N

1957

  • https://en.wikipedia.org/wiki/MUSIC-N - refers to a family of computer music programs and programming languages descended from or influenced by MUSIC, a program written by Max Mathews in 1957 at Bell Labs. MUSIC was the first computer program for generating digital audio waveforms through direct synthesis. It was one of the first programs for making music (in actuality, sound) on a digital computer, and was certainly the first program to gain wide acceptance in the music research community as viable for that task.

The world's first computer-controlled music was generated in Australia by programmer Geoff Hill on the CSIRAC computer, which was designed and built by Trevor Pearcey and Maston Beard. However, CSIRAC produced sound by sending raw pulses to the speaker; it did not produce standard digital audio with PCM samples like the MUSIC-series of programs.

Less obviously, MUSIC can be seen as the parent program for: RTSKED (a later RealTime Scheduling language by Max Mathews), Max/MSP Pure Data, AudioMulch, SuperCollider, JSyn, Common Lisp Music, ChucK, or any other computer synthesis language that relies on a modular system (e.g. Reaktor).

1969

Ported to a PDP10, Music V became the Mus10 music compiler system and played scores composed in Leland Smith's SCORE language.

  • SCORE-11 was originally designed for Vercoe’s MUSIC11 system. MUSIC11, which was written in PDP-11 assembly language, was replaced in 1986 by CSOUND, a version of the program written in the programming language C, and which runs on many different computer systems. SCORE-11 works well with either version of Vercoe’s program, which will be referred to as CSOUND in this manual except where the distinction is important.

SSSP

In designing the system, we decided early on to adopt a highly interactive approach to the design of the human interface. Batch processing as in Music V (Mathews: 1969) is an alternative, but one which widely separates the composer and the program, causing serious delays in the feedback loop. We feel a score editor must be interactive because there are facets of the task which demand control and aesthetic judgment by the composer in an interactive and exploratory manner. Several modes of interaction have previously been used in music systems, such as alphanumeric text as in MUSIC10 (Smith: 1978), voice recognition (Tucker, Bates, Frykberg, Howrath, Kennedy, Lamb, Vaughan: 1977), and piano-type keyboard (New England Digital Corp.: 1978). In our work we have adopted a bias towards graphics-based interaction (Baecker: 1979; Newman and Sproull: 1979) in the belief that this approach can make a significant contribution towards an effective human interface. First, music lends itself well to representations in the visual domain. Second, the task of editing music is complex in the sense that there are many parameters and commands to be manipulated and controlled; this complexity can be reduced by the graphic representation of information. Third, previous work (Pulfer: 1972; Tanner: 1972; Vercoe: 1975) indicates that more congenial interfaces can be constructed using dynamic graphics techniques.

SAMBOX

1977

The compiler was replaced in 1977 with dedicated synthesis hardware in the form of the Systems Concepts Digital Synthesizer (built by Peter Samson and known as the "Samson Box"). The Samson Box was capable of utilizing many types of synthesis techniques such as additive synthesis, frequency modulation, digital filtering and some analysis-based synthesis methods. The PLA language, written by Bill Schottstaedt, allowed composers to specify parametric data for the Samson Box as well as for other sound processing procedures on the PDP10 mainframe (and on its eventual replacement, a Foonly F4). On April 3, 1992, the Foonly and Samson Box were officially retired.

HMSL

  • https://en.wikipedia.org/wiki/Hierarchical_Music_Specification_Language - a music programming language written in the 1980s by Larry Polansky, Phil Burk, and David Rosenboom at Mills College. Written on top of Forth, it allowed for the creation of real-time interactive music performance systems, algorithmic composition software, and any other kind of program that requires a high degree of musical informatics. It was distributed by Frog Peak Music, and runs with a very light memory footprint (~1 megabyte) on Macintosh and Amiga systems.

Unlike CSound and other languages for audio synthesis, HMSL is primarily a language for making music. As such, it interfaces with sound-making devices through built-in MIDI classes. However, it has a high degree of built-in understanding of music performance practice, tuning systems, and score reading. Its main interface for the manipulation of musical parameters is through the metaphor of shapes, which can be created, altered, and combined to create a musical texture, either by themselves or in response to real-time or scheduled events in a score.

HMSL has been widely used by composers working in algorithmic composition for over twenty years. In addition to the authors (who are also composers), HMSL has been used in pieces by Nick Didkovsky, The Hub, James Tenney, Tom Erbe, and Pauline Oliveros. A Java port of HMSL was developed by Nick Didkovsky under the name JMSL, and is designed to interface to the JSyn API.


Music Mouse

  • https://en.wikipedia.org/wiki/Music_Mouse - an algorithmic musical composition software developed by Laurie Spiegel. Spiegel's best known and most widely used software, "Music Mouse - An Intelligent Instrument" (1986) is for Macintosh, Amiga and Atari computers. The "intelligent instrument" name refers to the program's built-in knowledge of chord and scale convention and stylistic constraints. Automating these processes allows the user to focus on other aspects of the music in real time. In addition to improvisations using this software, Spiegel composed several works for "Music Mouse", including Cavis muris in 1986, Three Sonic Spaces in 1989, and Sound Zones in 1990. She continued to update the program through Macintosh OS 9, and as of 2012, it remained available for purchase or demo download from her website.

Common Music

1989

  • Common Music (CM) - a music composition system that transforms high-level algorithmic representations of musical processes and structure into a variety of control protocols for sound synthesis and display. Its main user application is Grace (Graphical Realtime Algorithmic Composition Environment) a drag-and-drop, cross-platform app implemented in JUCE (C++) and S7 Scheme. In Grace musical algorithms can run in real time, or faster-than-real time when doing file-based composition. Grace provides two coding languages for designing musical algorithms: S7 Scheme, and SAL, an easy-to-learn but expressive algol-like language.

Common Music can write scores in several different syntaxes (currently CLM, CMN, Music Kit, MIDI, CSound and Paul Lansky's real-time mixing program, RT). The scores can then be rendered on workstations using any of the target synthesis programs. For example, CLM (Common Lisp Music, written by Bill Schottstaedt) is a widely used and fast software synthesis and signal processing package that can make use of multiple Motorola 56001 DSPs.

Pla is the intellectual ancestor of CM.

Common Lisp Music

1997

  • CLM (originally an acronym for Common Lisp Music) is a sound synthesis package in the Music V family. It provides much the same functionality as Stk, Csound, SuperCollider, PD, CMix, cmusic, and Arctic — a collection of functions that create and manipulate sounds, aimed primarily at composers (in CLM's case anyway). The instrument builder plugs together these functions (called generators here), along with general programming glue to make computer instruments. These are then called in a note list or through some user interface (provided by Snd, for example).


Csound

  • Csound - a sound and music computing system which was originally developed by Barry Vercoe in 1985 at MIT Media Lab. Since the 90s, it has been developed by a group of core developers. Although Csound has a strong tradition as a tool for composing electro-acoustic pieces, it is used by composers and musicians for any kind of music that can be made with the help of the computer. Csound has traditionally been used in a non-interactive, score-driven context, but nowadays it is mostly used in a real-time context. Csound can run on a host of different platforms including all major operating systems as well as Android and iOS. Csound can also be called through other programming languages such as Python, Lua, C/C++, Java, etc.
  • https://en.wikipedia.org/wiki/Csound - Csound was originally written at MIT by Barry Vercoe, based on his earlier system called Music 11, which in its turn followed the MUSIC-N model initiated by Max Mathews at the Bell Labs. Its development continued throughout the 1990s and 2000s, led by John ffitch at the University of Bath. The first documented version 5 release is version 5.01 on March 18, 2006.





CSoundQT

  • CsoundQt - a frontend for Csound featuring a highlighting editor with autocomplete, interactive widgets and integrated help. It is cross-platform and aims to be a simple yet powerful and complete development environment for Csound. It can open files created by MacCsound. Csound is a musical programming language with a very long history, with roots in the origins of computer music. It is still being maintained by an active community and despite its age, is still one of the most powerful tools for sound processing and synthesis. CsoundQt hopes to bring the power of Csound to a larger group of people, by reducing Csound's initial learning curve, and by giving users more immediate control of their sound. It hopes to be both a simple tool for the beginner, as well as a powerful tool for experienced users.

Cabbage Studio

  • Cabbage Studio - a Csound based DAW with a fully functional patching interface and development environment. Develop, prototype and test Csound based audio instruments on the fly using an integrated development solution that includes an embedded source code editor and rapid GUI designer. Cabbage Studio isn't just for users familiar with Csound, it can load a number of different plugin formats including VST, AU, and LADSPA and comes with over 100 high end audio plugins ready to use out of the box.

PWCsound

  • PWCsound is a tool for software synthesis control implemented in Pwgl (a visual programming environment based on Lisp and Clos); it provides a graphical interface to the Csound6 programming language. PWCsound is, of course, inspired by several historical Patchwork libraries (in particular by Csound/Edit-sco for Patchwork, better known as "PW-Csound", 1993) and Open Music. The connection between the techniques of computer aided composition and sound synthesis has been explored within these programs: Csound/Edit-sco, Pwcollider, Om2Csound, OmChroma.

Mac/Windows.

Other

  • CMask is a score file generator for Csound. Its main purpose is the generation of events to create textures of granular sounds. Versions for MacOS9, Win, Linux.


  • Hadron Particle Synthesizer - The Hadron particle synthesizer is the ultimate creative tool for granular synthesis. It is available in different plugin formats (VST, AU and Max For Live); even though the graphic appearance is slightly different for the different plugin wrappers, the audio functionality of Hadron is the same in all formats.


Cecilia

  • Cecilia - a graphic user interface for the sound synthesis and sound processing package Csound. Cecilia enables the user to quickly build graphic interfaces with sliders and curves to control Csound instruments. It is also an editor for Csound with syntax highlighting and a built-in reference. Cecilia is also a great tool to explore the parameters of a new opcode in an interactive and intuitive way.

Blue

  • Blue - An Integrated Music Environment, powered by Csound. An open-source, cross-platform desktop application for composing music. Use visual tools together with text and code to create the music of your dreams.

pch2csd

  • https://github.com/gleb812/pch2csd - The goal of this project is to (re)implement the Clavia Nord Modular G2 sound engine in Csound, a well-known sound and music computing system.

Cmix / RTcmix

  • RTcmix - An Open-Source, Digital Signal Processing and Sound Synthesis Language, one of the MUSIC-N family of computer music programming languages. RTcmix is descended from the MIX program developed by Paul Lansky at Princeton University in 1978 to perform algorithmic composition using digital audio soundfiles on an IBM 3031 mainframe computer. After synthesis functions were added, the program was renamed Cmix in the 1980s. Real-time capability was added by Brad Garton and David Topper in the mid-1990s, with support for TCP socket connectivity, interactive control of the scheduler, and object-oriented embedding of the synthesis engine into fully featured applications.

RTcmix has a number of unique (or highly unusual) features when compared with other synthesis and signal processing languages. For one, it has a built-in MINC parser, which enables the user to write C-style code within the score file, extending its innate capability for algorithmic composition and making it closer in some respects to later music software such as SuperCollider and Max/MSP. It uses a single-script instruction file (the score file), and synthesis and signal processing routines (called instruments) exist as compiled shared libraries. This is different from MUSIC-N languages such as Csound where the instruments exist in a second file written in a specification language that builds the routines out of simple building blocks (organized as opcodes or unit generators). RTcmix has similar functionality to Csound and other computer music languages, however, and their shared lineage means that scripts written for one language will be extremely familiar-looking (if not immediately comprehensible) to users of the other language.

Live Coding with Csound

PortMedia



  • Playrec - a Matlab and Octave utility (MEX file) that provides simple yet versatile access to soundcards using PortAudio, a free, open-source audio I/O library. It can be used on different platforms (Windows, Macintosh, Unix) and access the soundcard via different host API including ASIO, WMME and DirectSound under Windows.

C

  • Microtone - How tiny you ask? 128 bytes. The sound generation itself takes up just 14 lines of assembly, generating 31 bytes of machine code. The rest is taken up by the ELF header and the functions needed to open the sound device and output samples.
  • Coding an Equalizer - This article expands the section on page 20 of my book The Audio Expert that describes briefly how digital equalizers work. As explained in the book, all equalizers are based on filters of various types. Computer code that implements a filter is called Digital Signal Processing, or DSP for short. Most digital filters emulate equivalent analog filters, and the common language for all filters is mathematics. Therefore, several trigonometry formulas are shown below, and there's no escaping this! But the basic operation of the computer code that implements an equalizer is not too difficult to follow, even if you don't understand the formulas. To keep this example as brief as possible, the code implements a simple high-pass filter having one pole (6 dB per octave). Formulas to implement other filter types including those used in parametric equalizers are shown on the Cookbook Formulae web page by Robert Bristow-Johnson.
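As a sketch of the kind of filter the article walks through (not the article's own listing), a one-pole high-pass with a 6 dB per octave rolloff can be written by discretizing the analog RC high-pass:

```python
import math

def highpass_6db(x, cutoff_hz, rate):
    """First-order (one-pole) high-pass, 6 dB/octave rolloff.
    Discretized analog RC high-pass:
      y[n] = a * (y[n-1] + x[n] - x[n-1]),  a = RC / (RC + dt),  RC = 1/(2*pi*fc)
    """
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / rate
    a = rc / (rc + dt)
    y = []
    y1 = x1 = 0.0
    for xn in x:
        y1 = a * (y1 + xn - x1)   # output leaks toward zero for slow inputs
        x1 = xn
        y.append(y1)
    return y
```

A quick sanity check is the defining behavior of a high-pass: a DC (constant) input decays to zero at the output, while a signal near the Nyquist frequency passes through almost untouched.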

ld_preload sounds

  • https://github.com/gordol/ld_preload-sounds - Generates WAV output by hooking malloc() and read(). Adding support for other calls should be pretty easy, pull-requests are much welcomed! Also, it should go without saying... but I will say it anyway... this is experimental.

sndlib

  • io.c (read and write sound file data)
  • headers.c (read and write sound file headers)
  • audio.c (read and write sound hardware ports)
  • sound.c (provide slightly higher level access to the preceding files)
  • sndlib.h (header for the preceding files)
  • sndlib2xen.c and sndlib-strings.h (tie preceding into s7, Ruby, or Forth)
  • clm.c and clm.h (Music V implementation)
  • clm2xen.c, vct.c and vct.h (tie clm.c into s7, Ruby, or Forth)
  • xen.h, xen.c (the embedded language support)

libsndfile

  • http://www.mega-nerd.com/libsndfile/ - a C library for reading and writing files containing sampled sound (such as MS Windows WAV and the Apple/SGI AIFF format) through one standard library interface. It is released in source code format under the Gnu Lesser General Public License.
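For illustration only — this uses Python's stdlib wave module, not libsndfile — a round trip shows the kind of header and sample-format bookkeeping that a library like libsndfile hides behind one interface:

```python
import io, math, wave

def write_wav_bytes(samples, rate=16000):
    """Pack float samples in [-1, 1] into an in-memory 16-bit mono WAV."""
    buf = io.BytesIO()
    w = wave.open(buf, "wb")
    w.setnchannels(1)
    w.setsampwidth(2)               # 16-bit PCM
    w.setframerate(rate)
    frames = b"".join(
        int(max(-1.0, min(1.0, s)) * 32767).to_bytes(2, "little", signed=True)
        for s in samples
    )
    w.writeframes(frames)
    w.close()                       # finalizes the RIFF/WAVE header sizes
    return buf.getvalue()
```

Every supported format in libsndfile involves a different version of this clamping, packing and header arithmetic; the point of the library is that the caller never sees it.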

libsoundio

  • libsoundio - a lightweight abstraction over various sound drivers. It provides a well-documented API that operates consistently regardless of the sound driver it connects to. It performs no buffering or processing on your behalf; instead exposing the raw power of the underlying backend. libsoundio is appropriate for games, music players, digital audio workstations, and various utilities. libsoundio is serious about robustness. It even handles out of memory conditions correctly.

libtsp

  • TSP Lab - Reports / Software - a library of routines for signal processing. It also includes a number of general purpose routines useful for program development. Programs using this library for filtering, LPC analysis/synthesis and resampling are available as part of the AFsp package.

miniBAE

Audio File Library

  • Audio File Library is a C-based library for reading and writing audio files in many common formats. The Audio File Library provides a uniform API which abstracts away details of file formats and data formats. The same calls for opening a file, accessing and manipulating audio metadata (e.g. sample rate, sample format, textual information, MIDI parameters), and reading and writing sample data will work with any supported audio file format. The Audio File Library lets you work with audio data in whatever format is most convenient for you.

bloopsaphone

lyd

  • Lyd - an embeddable signal processing language and engine. Suitable for, among other things, realtime audio effect and instrument synthesis and mixing. It can form the audio core of games, virtual instruments, real-time audio mixing and editing experiments. Lyd is currently mostly an application programmer's toy; out of the box lyd contains a mediocre autogenerated approximation of an OPL2 FM synthesizer in its patch set. When you launch lyd you can use it as a synthesizer with an attached MIDI keyboard or with MIDI sequencers supporting ALSA midi. Lyd should be able to host processing plug-ins for various audio synthesis standards, exposing the lyd language and its interactive efficiency to allow live tweaking of lyd code when composing.

Pyo

  • Pyo - a Python module written in C to help DSP script creation. Pyo contains classes for a wide variety of audio signal processing. With pyo, the user will be able to include signal processing chains directly in Python scripts or projects, and to manipulate them in real time through the interpreter. Tools in the pyo module offer primitives, like mathematical operations on audio signals, basic signal processing (filters, delays, synthesis generators, etc.), but also complex algorithms to create sound granulation and other creative audio manipulations. pyo supports the OSC protocol (Open Sound Control) to ease communications between softwares, and the MIDI protocol for generating sound events and controlling process parameters. pyo allows the creation of sophisticated signal processing chains with all the benefits of a mature and widely used general programming language.

Allegro

  • Allegro - a cross-platform library mainly aimed at video game and multimedia programming. It handles common, low-level tasks such as creating windows, accepting user input, loading data, drawing images, playing sounds, etc. and generally abstracting away the underlying platform. However, Allegro is not a game engine: you are free to design and structure your program as you like. Allegro 5 has the following additional features: Supported on Windows, Linux, Mac OSX, iPhone and Android, User-friendly, intuitive C API usable from C++ and many other languages, Hardware accelerated bitmap and graphical primitive drawing support (via OpenGL or Direct3D), Audio recording support, Font loading and drawing, Video playback, Abstractions over shaders and low-level polygon drawing

Soundpipe

  • Soundpipe - a lightweight music DSP library written in C. It aims to provide a set of high-quality DSP modules for composers, sound designers, and creative coders. Soundpipe supports a wide range of synthesis and audio DSP techniques which include: Classic Filters (Moog, Butterworth, etc), High-precision and linearly interpolated wavetable oscillators, Bandlimited oscillators (square, saw, triangle), FM synthesis, Karplus-Strong instruments, Variable delay lines, String resonators, Spectral Resynthesis, Partitioned Convolution, Physical modeling, Pitch tracking, Distortion, Reverberation, Samplers and sample playback, Padsynth algorithm [27]
  • Sporth - SoundPipe fORTH, is a small stack-based audio programming language. For composers, Sporth is a different approach to making sound. Stack based languages are somewhat novel in the world of musical languages, and lend themselves well to modular sound design scenarios. Sporth syntax is simple to learn, and rewarding to master. Sound designers and composers fluent in languages like Csound, ChucK, and Supercollider will find Sporth a new and refreshing take on the same basic concepts. For developers, Sporth has a simple API that allows it to be used inside of other applications. In addition to compiling Sporth code, the API has access to other features of Sporth that would otherwise be unavailable, such as audio-rate software channels, and user defined function callbacks.
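The Karplus-Strong plucked-string instruments Soundpipe lists are based on the classic algorithm, which is small enough to sketch in Python (parameter names are mine): a delay line filled with noise, with a two-point averaging lowpass in the feedback loop.

```python
import random

def pluck(freq, rate=16000, dur=1.0, seed=0):
    """Karplus-Strong plucked string: noise-filled delay line with an
    averaging (lowpass) filter in the feedback path."""
    random.seed(seed)
    n = int(rate / freq)                    # delay-line length sets the pitch
    buf = [random.uniform(-1.0, 1.0) for _ in range(n)]
    out = []
    for _ in range(int(rate * dur)):
        first = buf.pop(0)
        avg = 0.5 * (first + buf[0])        # averaging damps the string over time
        buf.append(avg)
        out.append(first)
    return out
```

Each trip around the loop filters the signal once more, so high harmonics die quickly and the tone decays like a plucked string.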


  • http://paulbatchelor.github.io/proj/libline - an ANSI C library for generating audio-rate line segments and curves for computer-based music. Libline can easily interface with sample-accurate audio libraries like Soundpipe, and a local copy is used inside of Sporth via Polysporth.
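The idea of audio-rate line segments can be sketched in Python (this is the concept, not libline's actual API): linearly interpolate between successive breakpoint values at a fixed number of samples per segment.

```python
def line_segments(points, n):
    """Piecewise-linear ramp through `points`, n samples per segment."""
    out = []
    for v0, v1 in zip(points, points[1:]):
        for i in range(n):
            out.append(v0 + (v1 - v0) * i / n)   # linear interpolation
    out.append(points[-1])                       # land exactly on the final value
    return out
```

Generating such ramps at audio rate (rather than once per control block) is what makes envelopes and glides free of zipper noise.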



  • http://paulbatchelor.github.io/proj/moons.html - Moons is an isorhythmic circular sequencer. It is written using a combination of C and C++. All visuals are created using OpenGL; all the sounds are synthesized in realtime using Sporth, Soundpipe, and RtAudio.


Sound Open Firmware

Kiss FFT

  • https://sourceforge.net/projects/kissfft/ - A Fast Fourier Transform based up on the principle, "Keep It Simple, Stupid." Kiss FFT is a very small, reasonably efficient, mixed radix FFT library that can use either fixed or floating point data types.
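What any FFT computes can be shown with a naive O(n²) DFT in Python — Kiss FFT's mixed-radix algorithm produces the same values in O(n log n):

```python
import cmath, math

def dft(x):
    """Naive discrete Fourier transform: X[k] = sum_t x[t] * e^(-2*pi*i*k*t/n)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]
```

Feeding it exactly five cycles of a sine over the window concentrates all the energy in bin 5 (with magnitude n/2 for a unit-amplitude real sine), which is a handy correctness check for any FFT implementation.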

mini_al

  • https://github.com/dr-soft/mini_al - a simple library for playing and recording audio. It's focused on simplicity and has a very small number of APIs. C/C++, single file, public domain.

miniaudio

RAGE

  • https://github.com/concert/rage - Realtime Audio Graph Engine, provides an audio processing graph implementation tailored towards distributed audio workstations. The audio graph within RAGE is composed of processing elements and connections between elements which are managed by the RAGE host. Rage processing elements can be both traditional data transforms (effects, mixers, etc) as well as data producers (samplers, synthesizers, etc) and data consumers (recorders, analysis plugins). The RAGE host manages the scheduling of both the low-latency audio computations and the higher latency subtasks such as loading a file from disk for sample replay. Additionally, RAGE provides session time interpolated parameters, freeing individual elements from this responsibility.

LEAF

  • https://github.com/spiricom/LEAF - Previously named OOPS, LEAF is a C library for Audio synthesis and processing created by Mike Mulshine and Jeff Snyder. The library consists of a set of high-level audio synthesis components (Oscillators, Filters, Envelopes, Delays, Reverbs, and other Utilities). Our primary use case is embedded audio computing on 32-bit ARM microcontrollers that can run "bare-metal" (without an OS), such as the STM32f4, STM32f7, and STM32H7. The code, however, is general enough to be used in many other situations as well. We have included a JUCE VST/AU generating template to test the library (2), and the python script we use to generate wavetables.

Most of these algorithms are sourced from other projects, especially the STK (Synthesis ToolKit) library and various discussions on the music-DSP mailing list. We also owe a lot to open source computer programming languages, such as Csound, ChucK, PureData, and SuperCollider. Other interesting projects to check out that similarly target embedded applications are: TeensyAudio (C++), Hoxton Owl (C++), Axoloti (C), and Mutable Instruments (C++).

Ficus

  • https://github.com/resinbeard/ficus - development library that provides your application with both multichannel playback/capture of mono wav audio files and all the flexibility and routing possibilities of a JACK client. Ficus is distributed as a group of source files; settings like the available number of input/output channels and storage banks are compile-time options in the configuration file, config.h

C++

  • YouTube: Ian Hobson - The use of std variant in realtime DSP (ADC'17) - Application Developer & Software Engineer, Ableton. C++17 introduces std::variant, a type-safe union class. A variant's value represents one of a fixed set of possible types, with C++'s type system ensuring that correct code paths are executed for the active type. This talk will explore the pros and cons of working with variants, with a special focus on DSP. Variants allow for well defined interfaces and minimal memory footprints, but what are they like to use in practice, and are they performant enough for realtime use?

RtAudio

  • RtAudio - a set of C++ classes that provide a common API for realtime audio input/output across Linux (native ALSA, JACK, PulseAudio and OSS), Macintosh OS X (CoreAudio and JACK), and Windows (DirectSound, ASIO and WASAPI) operating systems.
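RtAudio delivers audio through a user-supplied callback that fills one buffer per call. The essential contract — keep state such as oscillator phase across calls so consecutive buffers join without clicks — can be sketched in Python (this models the pattern, not RtAudio's actual C++ signature):

```python
import math

class SineGen:
    """Phase-accumulating sine generator in the shape of an audio callback:
    each call fills one buffer, and phase carries over between calls."""
    def __init__(self, freq, rate):
        self.phase = 0.0
        self.step = 2 * math.pi * freq / rate

    def callback(self, nframes):
        buf = []
        for _ in range(nframes):
            buf.append(math.sin(self.phase))
            self.phase += self.step          # state survives to the next callback
        return buf
```

If the phase were reset on every call, each buffer boundary would produce an audible discontinuity; carrying state is what makes callback-driven APIs like RtAudio's work.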

AFsp


r8brain

  • https://github.com/avaneev/r8brain-free-src - Open source (under the MIT license) high-quality professional audio sample rate converter (SRC) (resampling) library. Features routines for SRC, both up- and downsampling, to/from any sample rate, including non-integer sample rates: it can be also used for conversion to/from SACD sample rate and even go beyond that. SRC routines were implemented in multi-platform C++ code, and have a high level of optimality.
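r8brain implements high-quality band-limited resampling; a naive linear interpolator (nothing like r8brain internally, and audibly worse) at least shows the fractional index arithmetic every sample rate converter shares:

```python
def resample_linear(x, ratio):
    """Resample by linear interpolation; ratio = out_rate / in_rate."""
    n_out = int(len(x) * ratio)
    out = []
    for j in range(n_out):
        pos = j / ratio                # fractional read position in the input
        i = int(pos)
        frac = pos - i
        a = x[i]
        b = x[i + 1] if i + 1 < len(x) else x[i]   # clamp at the end
        out.append(a + frac * (b - a))
    return out
```

A proper converter replaces the two-point interpolation with a long windowed-sinc (polyphase) filter to suppress the imaging and aliasing this version lets through.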

LibAudioStream

  • https://github.com/sletz/libaudiostream - a framework to manipulate audio resources through the concept of streams. Using this abstraction, different applications like audio players or offline processing tools can be developed.

Synthesis ToolKit in C++

  • Synthesis ToolKit in C++ (STK) is a set of open source audio signal processing and algorithmic synthesis classes written in the C++ programming language. STK was designed to facilitate rapid development of music synthesis and audio processing software, with an emphasis on cross-platform functionality, realtime control, ease of use, and educational example code. The Synthesis ToolKit is extremely portable (it's mostly platform-independent C and C++ code), and it's completely user-extensible (all source included, no unusual libraries, and no hidden drivers). We like to think that this increases the chances that our programs will still work in another 5-10 years. In fact, the ToolKit has been working continuously for nearly 20 years now. STK currently runs with realtime support (audio and MIDI) on Linux, Macintosh OS X, and Windows computer platforms. Generic, non-realtime support has been tested under NeXTStep, Sun, and other platforms and should work with any standard C++ compiler.

Gamma

  • https://github.com/LancePutnam/Gamma - a cross-platform, C++ library for doing generic synthesis and filtering of signals. It contains helpful mathematical functions, types, such as vectors and complex numbers, an assortment of sequence generators, and many other objects for signal processing tasks. It is oriented towards real-time sound and graphics synthesis, but is equally useful for non-real-time tasks.

Tonic

  • https://github.com/TonicAudio/Tonic - Fast and easy audio synthesis in C++. Prefer coding to patching? Love clean syntax? Care about performance? That's how we feel too, and why we made Tonic.
  • https://github.com/TonicAudio/ofxTonic - Open Frameworks Addon for the Tonic audio synthesis Library. Tonic is an efficient, pure C++ patching tool with a refreshingly crisp and simple syntax.

CLAM

  • CLAM - (C++ Library for Audio and Music) is a software framework for research and application development on the audio and music domain. It provides means to perform complex audio signal analysis, transformations and synthesis. It also provides a uniform interface to common tasks on audio applications such as accessing audio devices and audio files, thread safe communication with the user interface and DSP algorithms recombination and scaling.

You can use CLAM as a library to program your applications in C++ but you can also use graphical tools to build full applications without coding.

  • http://clam-project.org/wiki/SMSTools_tutorial - a spectral-modeling tool; it analyzes, transforms and synthesizes back a given sound. For doing so, it uses the Sinusoidal plus Residual model (sometimes referred to as SMS but also known as HILN in the context of MPEG4).

DISTRHO Plugin Framework

  • https://github.com/DISTRHO/DPF - designed to make development of new plugins an easy and enjoyable task. It allows developers to create plugins with custom UIs using a simple C++ API. The framework facilitates exporting various different plugin formats from the same code-base. DPF can build for LADSPA, DSSI, LV2 and VST formats. All current plugin format implementations are complete. A JACK/Standalone mode is also available, allowing you to quickly test plugins. Plugin DSP and UI communication is done via key-value string pairs.


JUCE C++ Library

  • https://en.wikipedia.org/wiki/JUCE - Like many other frameworks (e.g., Qt, wxWidgets, GTK+, etc.), JUCE contains classes providing a range of functionality that covers user-interface elements, graphics, audio, XML and JSON parsing, networking, cryptography, multi-threading, an integrated interpreter that mimics ECMAScript's syntax, and various other commonly used features. Application developers needing several third-party libraries may thus be able to consolidate and use only the JUCE library, or at least reduce the number of third-party libraries they use. In this, the original inspiration was Java's JDK, and JUCE was intended to be "something similar for C++".

A notable feature of JUCE when compared to other similar frameworks is its large set of audio functionality; this is because JUCE was originally developed as a framework for Tracktion, an audio sequencer, before being split off into a standalone product. JUCE has support for audio devices (such as CoreAudio, ASIO, ALSA, JACK, WASAPI, DirectSound) and MIDI playback, polyphonic synthesizers, built-in readers for common audio file formats (such as WAV, AIFF, FLAC, MP3 and Vorbis), as well as wrappers for building various types of audio plugin, such as VST effects and instruments. This has led to its widespread use in the audio development community.

JUCE comes with wrapper classes for building audio and browser plugins. When building an audio plugin, a single binary is produced that supports multiple plugin formats (VST & VST3, RTAS, AAX, Audio Units). Since all the platform and format-specific code is contained in the wrapper, a user can build Mac and Windows VST/VST3/RTAS/AAX/AUs from a single codebase.







  • https://github.com/drowaudio/drowaudio - a third-party JUCE module designed for rapid audio application development. It contains classes for audio processing and GUI elements. Additionally there are several wrappers around third-party libraries including cURL, FFTReal and SoundTouch. dRowAudio is written in the strict JUCE style, closely following the style guide set out in the JUCE Coding Standards.




Jamoma

  • Jamoma - a C++ platform for building dynamic and reflexive systems with an emphasis on audio and media. The platform is composed of a layered framework architecture that creates an object model and then specializes that object model for audio and matrix processing, and for system automation and management. Jamoma makes use of polymorphic typing, dynamic binding, and introspection to create a cross-platform API, pulling ideas from languages such as Smalltalk and Objective-C while remaining within the bounds of the portable and cross-platform C++ context. The implementations include modular environments for Max by Cycling '74 and Pd by Miller Puckette.

iPlug2


  • iPlug2
    • https://github.com/iPlug2/iPlug2 - a simple-to-use C++ framework for developing cross-platform audio plug-ins/apps and targeting multiple plug-in APIs with the same minimalistic code. It abstracts an audio plug-in (IPlug) and its drawing engine/GUI toolkit (IGraphics). IGraphics contains a collection of common controls well suited for audio plug-in GUIs, using either bitmap or vector graphics. The original version of iPlug was developed by John Schwartz aka schwa and released in 2008 as part of Cockos' WDL library. iPlug 2 (2018) is a substantial reworking that brings multiple vector graphics backends (including GPU accelerated options and HiDPI/scaling), a better approach to concurrency, support for distributed plug-in formats and compiling to WebAssembly via emscripten, amongst many other things.

DSPFilters

  • https://github.com/vinniefalco/DSPFilters - A Collection of Useful C++ Classes for Digital Signal Processing. "Techniques for digital signal processing are well guarded and held close to the chest, as they have valuable applications for multimedia content. The black art of Infinite Impulse Response ("IIR") filtering has remained veiled in secrecy with little publicly available source code...until now."

Building on the work of cherished luminaries such as Sophocles Orfanidis, Andreas Antoniou, Martin Holters, and Udo Zolzer, this library harnesses the power of C++ templates to solve a useful problem in Digital Signal Processing: the realization of multichannel IIR filters of arbitrary order and prescribed specifications with various properties such as Butterworth, Chebyshev, Elliptic, and Optimum-L (Legendre) responses. The library is provided under the MIT license and is therefore fully compatible with proprietary usage.

Classes are designed as independent re-usable building blocks. Use some or all of the provided features, or extend the functionality by writing your own objects that plug into the robust framework. Only the code that you need will get linked into your application. Here's a list of features:

  • Exclusive focus on IIR filters instead of boring FIR filters
  • Complete implementation of all "RBJ Biquad" Cookbook filter formulas
  • Butterworth, Chebyshev, Elliptic, Bessel, Legendre designs
  • Low Pass, High Pass, Band Pass, Band Stop transformations
  • Low, High, and Band Shelf filter implementations for most types
  • Smooth interpolation of filter settings, pole/zeros, and biquad coefficients to achieve seamless parameter changes
  • Representation of digital filters using poles and zeros
  • Realization using Direct Form I, Direct Form II, or user provided class
  • Fully factored to minimize template instantiations
  • "Design" layer provides runtime introspection into a filter
  • Utility template functions for manipulating buffers of sample data
  • No calls to malloc or new, great for embedded systems
  • No external dependencies, just the standard C++ library!
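The "RBJ Biquad" Cookbook formulas mentioned above are compact enough to sketch directly. Below is a minimal Python illustration of the cookbook low-pass biquad realized in Direct Form I; this is independent of the DSPFilters API, and the sample rate, cutoff and Q values are arbitrary:

```python
import math

def rbj_lowpass_coeffs(fs, f0, q):
    """RBJ Audio EQ Cookbook low-pass biquad coefficients."""
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    cosw0 = math.cos(w0)
    b0 = (1 - cosw0) / 2
    b1 = 1 - cosw0
    b2 = (1 - cosw0) / 2
    a0 = 1 + alpha
    a1 = -2 * cosw0
    a2 = 1 - alpha
    # Normalize by a0 so the filter runs as y = b*x - a*y
    return [c / a0 for c in (b0, b1, b2)], [c / a0 for c in (a1, a2)]

def biquad(samples, b, a):
    """Direct Form I: y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]."""
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = b[0]*x + b[1]*x1 + b[2]*x2 - a[0]*y1 - a[1]*y2
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out

b, a = rbj_lowpass_coeffs(fs=48000, f0=1000, q=0.707)
ir = biquad([1.0] + [0.0] * 7, b, a)  # first 8 samples of the impulse response
```

A quick sanity check on such a design is the DC gain, (b0+b1+b2)/(1+a1+a2), which is exactly 1 for the cookbook low-pass.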

Q

  • Cycfi Research: Q Audio DSP Library - a cross-platform C++ library for Audio Digital Signal Processing. Aptly named after the “Q factor”, a dimensionless parameter that describes the quality of a resonant circuit, the Q DSP Library is designed to be simple and elegant, as the simplicity of its name suggests, and efficient enough to run on small microcontrollers. Q leverages the power of modern C++ and efficient use of functional programming techniques, especially function composition using fine-grained and reusable function objects (both stateless and stateful), to simplify complex DSP programming tasks without sacrificing readability. MIT license.

mopo


ftz

  • https://gitlab.com/ftz/audio - Digital audio processing and synthesis library. The following operations are implemented: amplification, mixing, panning, bit depth conversion, sample rate conversion, pitch shifting. The following oscillators are implemented: sine wave, pulse wave, triangle wave, sawtooth wave, white noise. The following sample formats are implemented out of the box (you can always add your own): 16 bit signed int, 24 bit signed int, 32 bit signed int, 32 bit float, 64 bit float. The following file formats are supported: Microsoft RIFF Wave. The following audio interfaces are available for playback: ALSA.
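Operations like amplification, mixing and panning reduce to simple per-sample arithmetic on normalized float samples. A minimal Python sketch of that idea follows; this is not the ftz API, and all names are illustrative:

```python
import math

def amplify(samples, gain):
    """Scale every sample by a linear gain factor."""
    return [s * gain for s in samples]

def mix(*tracks):
    """Sum equal-length tracks sample by sample."""
    return [sum(frame) for frame in zip(*tracks)]

def pan(samples, position):
    """Constant-power pan of a mono signal; position in [-1, 1]."""
    angle = (position + 1) * math.pi / 4  # maps [-1, 1] to [0, pi/2]
    left, right = math.cos(angle), math.sin(angle)
    return [(s * left, s * right) for s in samples]

stereo = pan(amplify([0.5, -0.5], 0.5), 0.0)  # centered, half gain
```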


  • https://gitlab.com/ftz/midi - This library provides a convenient interface for manipulation of MIDI data as well as providing higher level abstractions for I/O.

AudioTK

  • https://github.com/mbrucher/AudioTK - a set of audio filters. It helps assemble workflows for specific audio processing workloads. The audio workflow is split into independent components (without feedback loops) that consist of filters. Each filter has a set of synchronized input and output ports that can be connected together. All input ports must be connected, but not all output ports need to be. Sampling rate can be independent between input and output ports, but all input sampling rates are identical, and all output sampling rates are also identical.
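The port-connection model described above can be sketched as a toy pull-based graph, where each node asks its upstream source for a processed block. This is a hedged illustration of the concept only, not AudioTK's actual classes:

```python
class Filter:
    """Toy pull-model filter node (illustrative; not the AudioTK API)."""
    def __init__(self, func, source=None):
        self.func = func      # per-sample processing function
        self.source = source  # upstream node feeding this input port

    def connect(self, source):
        self.source = source

    def process(self, block):
        # Pull from upstream first, then apply this node's processing.
        if self.source is not None:
            block = self.source.process(block)
        return [self.func(x) for x in block]

gain = Filter(lambda x: x * 0.5)
clip = Filter(lambda x: max(-1.0, min(1.0, x)))
clip.connect(gain)  # like wiring gain's output port into clip's input port
out = clip.process([2.0, 0.5, -3.0])  # halved, then hard-clipped to [-1, 1]
```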

Raul

  • http://drobilla.net/software/raul - Realtime Audio Utility Library, is a C++ utility library primarily aimed at audio/musical applications. It is used by Ingen, Patchage, and Machina.

kode

ASPiK

eDSP

  • https://github.com/mohabouje/eDSP - easy Digital Signal Processing, is a digital signal processing framework written in modern C++ that implements some of the common functions and algorithms frequently used in digital signal processing, audio engineering & telecommunications systems. It aims to provide efficient and streamlined base calculations while at the same time having a straightforward and easy-to-use interface similar to another numeric language like MATLAB. The library integrates a C++ and Python interfaces and supports Windows, Linux, Mac OS, iOS and Android.

KFR

openAudioProgrammingTutorials

JackCpp

  • JackCpp - C++ classes that wrap the JACK audio I/O API and a lock-free ring buffer. Works on Linux and OS X (thanks to Will Wolcott for OS X testing and example/test file comments).

Maximilian

  • Maximilian - an open source, MIT licensed C++ audio synthesis library. It's designed to be cross-platform and simple to use. The syntax and program structure are based on the popular 'Processing' environment. Maximilian provides standard waveforms, envelopes, sample playback, resonant filters, and delay lines. In addition, equal power stereo, quadraphonic and 8-channel ambisonic support is included. There are also granular synthesisers with timestretching, FFTs and some music information retrieval tools.

Tracktion Engine

  • Tracktion Engine - defines a high level document object model for time based, sequenced audio applications and provides an API to create, modify and play these back. Create an Engine object, our arrangement object, called an Edit, add some elements to it such as audio files, MIDI and plugins, then play it back or render it to an audio file. Tracktion Engine is provided in JUCE module format making it incredibly quick and easy to setup and start creating projects. We’ve included a bunch of examples and accompanying tutorials along with scripts to build them so you can get up and running with the minimum of fuss.


SoLoud

  • SoLoud - an easy to use, free, portable C/C++ audio engine for games. How easy? The engine has been designed to make simple things easy, while not making harder things impossible.

Wavelet

  • https://github.com/Ircam-RnD/wavelet - a cross-platform c++ library for online estimation of the Continuous Wavelet Transform (CWT). The online estimation is based on a filterbank implementation of the CWT with minimal delay per scale and optimization based on multi-rate computation. The library also allows for offline estimation of the CWT using FFT.

Pitch detection

C#

Objective C

Mozzi

  • Mozzi - brings your Arduino to life by allowing it to produce much more complex and interesting growls, sweeps and chorusing atmospherics. These sounds can be quickly and easily constructed from familiar synthesis units like oscillators, delays, filters and envelopes. You can use Mozzi to generate algorithmic music for an installation or performance, or make interactive sonifications of sensors, on a small, modular and super cheap Arduino, without the need for additional shields, message passing or external synths.

D

Java

Java Sound

JSyn

  • JSyn - allows you to develop interactive computer music programs in Java. You can run them as stand-alone applications, or as Applets in a web page. JSyn can be used to generate sound effects, audio environments, or music. JSyn is based on the traditional model of unit generators which can be connected together to form complex sounds. For example, you could create a wind sound by connecting a white noise generator to a low pass filter that is modulated by a random contour generator.
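The wind example above is easy to mimic in plain Python to see what the unit-generator graph computes: white noise fed through a low-pass filter smooths into a rumble. This is a hedged sketch of the unit-generator concept, not JSyn code; the one-pole coefficient stands in for the modulated cutoff:

```python
import random

def white_noise(n, seed=0):
    """White noise unit generator: uniform samples in [-1, 1]."""
    rng = random.Random(seed)
    return [rng.uniform(-1.0, 1.0) for _ in range(n)]

def one_pole_lowpass(samples, coeff):
    """One-pole low-pass: y[n] = (1 - coeff)*x[n] + coeff*y[n-1], coeff in [0, 1)."""
    y, out = 0.0, []
    for x in samples:
        y = (1.0 - coeff) * x + coeff * y
        out.append(y)
    return out

# "Patch" the generators together: noise -> low-pass = wind-like rumble.
wind = one_pole_lowpass(white_noise(1024), coeff=0.95)
```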

Beads

  • Beads is a software library written in Java for realtime audio. It was started by Ollie Bown in 2008. It is an open source project and has been developed with support from Monash University in Melbourne, via the Centre for Electronic Media Art's ARC Discovery Grant Project "Creative Ecosystems", and a Small Grant for Early Career Researchers from the Faculty of Information Technology. Beads contributors include Ollie Bown, Ben Porter and Benito.

JMSL

  • http://www.algomusic.com/jmsl/ - JMSL is a Java API for music composition, interactive performance, and intelligent instrument design. With JMSL, the composer/programmer can create stand-alone musical applications or deploy applets on the web. JMSL supports JSyn, MidiShare, MidiPort, and JavaSound.

JNAJack

JFugue

  • JFugue - Music Programming for Java™ and JVM Languages

JASS

  • JASS (Java Audio Synthesis System) is a unit generator based audio synthesis programming environment written in pure Java. Java 1.5 is required. The environment is based on a foundation structure consisting of a small number of Java interfaces and abstract classes, which implement the functionality needed to create filter-graphs, or "patches". Unit generators are created by extending the abstract classes and implementing a single method. Patches are created by linking together unit generators in arbitrary complex graph structures. Patches can be rendered in real-time with special unit generators that communicate with the audio hardware, which have been implemented using the JavaSound API and through JNI for some platforms.

jVSTwRapper

  • jVSTwRapper - an easy and reliable wrapper to write audio plug-ins in Java. It enables you to develop VST (2.4), Audio Unit (AU) and LADSPA compatible audio plugins and virtual instruments plus user interfaces (Swing) and run them on Windows, Linux and Mac OSX. Five demo plugins (+src) are included.

jackAudio4Java

Python

PyJack

  • PyJack - JACK audio client module for Python

jackclient-python

python-jackpatch

python-wavefile

  • python-wavefile - Pythonic libsndfile wrapper to read and write audio files.

python-sounddevice

PyWavelets

  • PyWavelets - free open source wavelet transform software for the Python programming language. It is written in Python, Cython and C for a mix of an easy and powerful high-level interface and the best performance.
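A single level of the simplest wavelet transform (Haar) is just pairwise sums and differences scaled by √2 to preserve energy. The plain-Python sketch below shows what such a transform computes and that it inverts exactly; it is illustrative only and does not use the PyWavelets API:

```python
import math

def haar_dwt(signal):
    """One level of the Haar DWT: (approximation, detail) coefficients."""
    assert len(signal) % 2 == 0
    s = math.sqrt(2)
    pairs = list(zip(signal[::2], signal[1::2]))
    approx = [(x1 + x2) / s for x1, x2 in pairs]  # local averages (low-pass)
    detail = [(x1 - x2) / s for x1, x2 in pairs]  # local differences (high-pass)
    return approx, detail

def haar_idwt(approx, detail):
    """Invert one Haar level exactly."""
    s = math.sqrt(2)
    out = []
    for a_i, d_i in zip(approx, detail):
        out += [(a_i + d_i) / s, (a_i - d_i) / s]
    return out

a, d = haar_dwt([4, 6, 10, 12, 14, 14, 16, 18])
```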

AudioLazy

Pippi

Undulance

Audiostream

mutils

AVIO

  • https://github.com/ri0t/avio - The AVIO Suite is an experimental approach to the concept of a live stage performance. AVIO stands for "Audio Visual Input Output", and the suite is a component (event) based agglomeration of tools to work with controller, MIDI, audio and visual data.

audio_dspy

Go

  • https://github.com/200sc/klangsynthese - Waveform and Audio Synthesis library in Go. Klangsynthese right now supports a number of features that will work regardless of OS, and a number of features specific to Windows where the hope is to move support to Linux and Darwin.


Rust



Zig

Lua

See Lua

MATLAB



  • https://github.com/ricedsp/rwt - a collection of Matlab M-files and C MEX-files for 1D and 2D wavelet and filter bank design, analysis, and processing. The toolbox provides tools for denoising and interfaces directly with our Matlab code for wavelet domain hidden Markov models and wavelet regularized deconvolution.

SuperCollider

1996 / OSS in 2002

  • SuperCollider is an environment and programming language for real time audio synthesis and algorithmic composition. It provides an interpreted object-oriented language which functions as a network client to a state of the art, realtime sound synthesis server.








  • Utopia is a SuperCollider library for the creation of networked music applications, and builds upon the work of the Republic Quark and other existing network systems in SuperCollider. It aims to be modular (features available largely 'à la carte'), secure (provides methods for authentication and encryption), and flexible (to the extent possible, it tries not to impose a particular design or architecture). It provides functionality for synchronisation, communication, code sharing, and data sharing.


  • Modality Toolkit - simplifies creation of highly personalised electronic instruments in SuperCollider by introducing a common code interface. This allows for uniform access to HID, MIDI, OSC and GUI-based controllers, as well as switching of functionality, even at runtime.


Clojure

Overtone

  • Overtone is an open source audio environment being created to explore musical ideas from synthesis and sampling to instrument building, live-coding and collaborative jamming. We use the SuperCollider synth server as the audio engine, with Clojure being used to develop the APIs and the application. Synthesizers, effects, analyzers and musical generators can be programmed in Clojure.
lein repl
user=>(use 'overtone.live)




Pink

  • https://github.com/kunstmusik/pink - provides the basis for developing music systems. It is also designed to scale to user needs, whether they are exploring and designing low-level signal processing algorithms, developing pre-written compositions, or creating interactive real-time systems. It offers a slim core engine designed to be highly customizable.

Scala


  • Mellite - an environment for creating experimental computer-based music and sound art. This system has been developed since 2012 by its author, Hanns Holger Rutz, and is made available under the GNU GPL open source license.


ChucK

2003

  • ChucK is a programming language for real-time sound synthesis and music creation. It is open-source and freely available on MacOS X, Windows, and Linux. ChucK presents a unique time-based, concurrent programming model that's precise and expressive (we call this strongly-timed), dynamic control rates, and the ability to add and modify code on-the-fly. In addition, ChucK supports MIDI, OpenSoundControl, HID device, and multi-channel audio. It's fun and easy to learn, and offers composers, researchers, and performers a powerful programming tool for building and experimenting with complex audio synthesis/analysis programs, and real-time interactive music. [33]


  • LiCK - Library for ChucK.

Processing

  • Processing is a flexible software sketchbook and a language for learning how to code within the context of the visual arts. Since 2001, Processing has promoted software literacy within the visual arts and visual literacy within technology. There are tens of thousands of students, artists, designers, researchers, and hobbyists who use Processing for learning and prototyping.


FAUST

2002

  • FAUST - Functional Audio Stream is a functional programming language specifically designed for real-time signal processing and synthesis. FAUST targets high-performance signal processing applications and audio plug-ins for a variety of platforms and standards. Simply put, Faust lets one program DSP code once in a purely functional language, and compile it to various platforms including Max/MSP, SuperCollider, Audio Unit, VST, and more.






FaustWorks:

git clone git://faudiostream.git.sourceforge.net/gitroot/faudiostream/FaustWorks

FaustLive:




Grouping solution 1: [35]

a = vgroup("term1", nentry("a",1,0,10,1));
b = vgroup("term1", nentry("b",2,0,10,1));
x = vgroup("term2", nentry("x",3,0,10,1));
y = vgroup("term2", nentry("y",4,0,10,1));
process = a*x + b*y;

Solution 2:

a = nentry("v:term1/a",1,0,10,1);
b = nentry("v:term1/b",2,0,10,1);
x = nentry("v:term2/x",3,0,10,1);
y = nentry("v:term2/y",4,0,10,1);
process = a*x + b*y;


process = _ : _;       // series combination (1 in, 1 out)
process = _ , _;       // parallel combination (2 ins, 2 outs)
process = +;           // summer (2 ins, 1 out)
process = _,_ : +;     // same summer
process = _,_ : + : _; // same summer
process = -;           // signal subtractor
process = *;           // pointwise signal multiplier (nonlinear)
process = /;           // pointwise signal divider (nonlinear)
process = mem;         // unit-sample delay
process = _, 1 : @;    // unit-sample delay
process = _,10 : @;    // ten-sample delay
process = a ~ b;       // feedback thru b around a
process = _ ~ _ ;      // feedback thru _ (generates 0)
process = mem ~ _;     // two-sample closed loop (generates 0)
process = + ~ _;       // digital integrator
process = _ <: _ , _;  // mono to stereo
process = _ <: _ , _, _, _;     // mono to quad
process = _ , _ <: _ , _, _, _; // stereo to quad (see diagram)
process = _ , _ :> _;           // stereo to mono [equiv to +]
process = _, _ , _ , _ :> _ ;   // quad to mono [equiv to +,+:+]
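The `+ ~ _` digital integrator above feeds the summer's output back through an implicit one-sample delay into its second input, i.e. y[n] = x[n] + y[n-1]. A quick Python rendering of those semantics (for illustration; the Faust compiler generates this kind of loop itself):

```python
def integrator(samples):
    """Faust's `+ ~ _`: y[n] = x[n] + y[n-1] (feedback through a one-sample delay)."""
    y, out = 0, []
    for x in samples:
        y = x + y  # previous output (initially 0) summed with the current input
        out.append(y)
    return out

print(integrator([1, 1, 1, 1]))  # running sum: [1, 2, 3, 4]
```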


  • PDF: FAUST : an Efficient Functional Approach to DSP Programming - Yann Orlarey, Dominique Fober and Stephane Letz. FAUST is a programming language that provides a purely functional approach to signal processing while offering a high level of performance. FAUST aims at being complementary to existing audio languages by offering a viable and efficient alternative to C/C++ to develop signal processing libraries, audio plug-ins or standalone applications. The language is based on a simple and well-formed formal semantics. A FAUST program denotes a signal processor, a mathematical function that transforms input signals into output signals. Being able to know precisely what a program computes is important not only for programmers, but also for compilers needing to generate the best possible code. Moreover, these semantic questions are crucial for the long-term preservation of music programs. The following paragraphs give an overview of the language as well as a description of the compiler, including the generation of parallel code.






  • https://bitbucket.org/agraef/faust-lv2 - provides an LV2 plugin architecture for the Faust programming language. The package contains the Faust architecture and templates for the needed LV2 manifest (ttl) files, a collection of sample plugins written in Faust, and a generic GNU Makefile as well as a shell script to compile plugins using the architecture.


  • https://bitbucket.org/agraef/faust-vst - provides a VST plugin architecture for the Faust programming language. The package contains the Faust architecture, faustvst.cpp, the faust2faustvst helper script which provides a quick way to compile a plugin, a collection of sample plugins written in Faust, and a generic GNU Makefile for compiling and installing the plugins.





  • https://github.com/hrtlacek/faustTools - tools for analyzing Faust programs. At the moment there is one tool, faustwatch.py. Faustwatch observes a .dsp file used by the DSP language FAUST. If the file is changed (saved after editing): the block diagram can be automatically shown in the default browser; the impulse response can be plotted in the time domain; the impulse response can be plotted in the frequency domain; the time and frequency domain plots of the last saved version stay visible, so the current and previous versions can be compared; the impulse response is played back via pyo and JACK audio. Basically it is supposed to make FAUST development faster.

Snack Sound Toolkit

  • Snack Sound Toolkit - designed to be used with a scripting language such as Tcl/Tk or Python. Using Snack you can create powerful multi-platform audio applications with just a few lines of code. Snack has commands for basic sound handling, such as playback, recording, file and socket I/O. Snack also provides primitives for sound visualization, e.g. waveforms and spectrograms. It was developed mainly to handle digital recordings of speech, but is just as useful for general audio. Snack has also successfully been applied to other one-dimensional signals.

Music as Data

Extempore

  • Extempore - a programming language and runtime environment designed to support 'cyberphysical programming'. Cyberphysical programming supports the notion of a human programmer operating as an active agent in a real-time distributed network of environmentally aware systems. The programmer interacts with the distributed real-time system procedurally by modifying code on-the-fly.

Central to the Extempore programming environment is a new systems programming language designed to support the programming of real-time systems in real-time. xtlang is designed to mix the high-level expressiveness of Lisp with the low-level expressiveness of C. xtlang uses an s-expression syntax common to Lisp, and more particularly to Scheme. xtlang also borrows many Lisp like semantics including first class closures, tail recursion and macros. However, xtlang also borrows heavily from systems languages like 'C' including static typing, low-level type expressivity, direct pointer manipulation and explicit memory managment (i.e. no GC). xtlang then extends these 'C' semantics with type-inferencing, ad-hoc polymorphism, reified generics, and zone/region based memory management.

Nyquist

  • Nyquist is a sound synthesis and composition language offering a Lisp syntax as well as an imperative language syntax (SAL) and a powerful integrated development environment. Nyquist is an elegant and powerful system based on functional programming.

athenaCL

  • athenaCL system - an open-source, object-oriented composition tool written in Python. The system can be scripted and embedded, and includes integrated instrument libraries, post-tonal and microtonal pitch modeling tools, multiple-format graphical outputs, and musical output in Csound, SuperCollider, Pure Data, MIDI, audio file, XML, and text formats.

Musical parts are deployed as Textures, layered surface-defining objects containing numerous independent ParameterObjects to control pitch, tempo, rhythm, amplitude, panning, and instrument (Csound) parameters. The system includes an integrated library of Csound and SuperCollider instruments, and supports output for external Csound instruments, MIDI, and a variety of alternative formats. Over eighty specialized Generator, Rhythm, and Filter ParameterObjects provide tools for stochastic, chaotic, cellular automata based, Markov based, generative grammar and Lindenmayer system (L-system), wave-form, fractional noise (1/f), genetic, Xenakis sieve, linear and exponential break-point segments, masks, and various other algorithmic models. ParameterObjects can be embedded in other ParameterObjects to provide powerful dynamic and masked value generation. Textures can be combined and edited, and tuned with algorithmic Temperament objects. Texture Clones allow the filtering and processing of Texture events, performing transformations not possible with parameter generation alone.

Audio Programming Environment

  • Audio Programming Environment (APE) is an open-source audio plugin that allows you to script/code DSP directly in real time, integrated in your signal chain. Utilizing a built-in code editor, compiler, console and a basic control surface API, testing and prototyping DSP is extremely easy and convenient.

AudioKit

  • AudioKit is a powerful audio toolkit for synthesizing, processing, and analyzing sounds. It contains several examples for iOS (iPhone & iPad) and Mac OSX, written in both Objective-C and in Swift. A test suite is provided for many of the operations included in AudioKit. A playground project can be used for trying out AudioKit instruments and for greatly speeding up the development of your own instruments and applications. [36]


The #MusicBricks Toolkit

Jacktube

  • Jacktube is an open source audio/MIDI processing program. It uses LADSPA and DSSI plugins to generate and process audio, and MIDI events to control its operation. The exact behavior is defined by using a simple scripting language to define rules. Even though Jacktube is primarily meant for audio work, it can be used in any signal processing application. The language has some superficial similarities to Perl, but the programming language is designed to be as small and efficient as possible for its purpose, namely setting up plugin graphs and responding to MIDI events.

Sonic Pi

  • Sonic Pi - The Live Coding Synth for Everyone. Simple enough for computing and music lessons. Powerful enough for professional musicians. Free to download with a friendly tutorial. Learn to code creatively by composing or performing music in an incredible range of styles from classical to algorave. Ruby DSL.

Negasonic

alda

  • alda - Inspired by other music/audio programming languages such as PPMCK, LilyPond and ChucK, Alda aims to be a powerful and flexible programming language for the musician who wants to easily compose and generate music on the fly, using naught but a text editor. Alda is designed in a way that equally favors aesthetics, flexibility and ease of use, with (eventual) support for the text-based creation of all manner of music: classical, popular, chiptune, electroacoustic, and more! [37] [38] [39]

Platonic Music Engine

  • Platonic Music Engine takes an initial input from the user (like a name or number or random string of characters) and converts it using a non-random process into a piece of music, the Platonic Score. The software then allows you to manipulate this random-sounding music via the use of various style algorithms and quantizers into sounding like any style of music imaginable while still preserving the Platonic Score in its core.

Serpent / Aura

  • Serpent is the scripting language for Aura, a platform for computer music, animation, and interactive systems. Serpent was designed and implemented as a stand-alone, general purpose interpreter. Serpent is perhaps ideal as a game scripting language due to its real-time design and support for external C++ objects and C functions. It is open source, and I would be happy to share code as well as future design and development with others.
  • AuraRT is a software framework for creating interactive multimedia software, particularly advanced interactive music compositions. A subproject is AuraFX, a flexible signal processor configurable by end-users.

https://wiki.python.org/moin/Audio/

TMC

  • TMC - Tiny Music Compiler, a DSL (Domain-Specific Language) that describes a set of operations of audio files. It does not manipulate audio itself. Instead, it calls existing tools such as SoX.

Bipscript

  • bipscript - a simple programming environment for creating music. Instantiate and connect LV2 plugins to create audio and MIDI networks; schedule MIDI and other control events directly on the plugins and system outputs; schedule logic to react to external events, e.g. from a human performer.

"For this example we'll create two LV2 plugins: a software synthesizer and a reverb; we'll feed the output of the synth into the reverb and connect the reverb to the main system outputs. Also note that we set the initial value of the reverb amount to zero;"

local synth = Lv2.Plugin("http://calf.sourceforge.net/plugins/Monosynth", "Velo Bee")
 
local reverb = Lv2.Plugin("http://calf.sourceforge.net/plugins/Reverb")
reverb.setControl("amount", 0.0)
reverb.connect(synth)
 
local mainOutput = Audio.StereoOutput("main", "system:playback_1", "system:playback_2")
mainOutput.connect(reverb)

Sporth

wcnt

  • wcnt - Wav Composer Not Toilet is a non-realtime modular audio synthesis/sequencer/sampler application for GNU/Linux systems. It outputs audio into 8/16/24/32 bit PCM or floating point format .WAV audio files. wcnt is command-line based and reads plain text files, within which definitions of modules and data objects are placed. Modules are where the synthesis/sampling/sequencing happens, and they operate on a sample by sample basis. Transmission of events between modules only occurs at the time of the event. Occurrences of events are transparent; the data stream is continuous.

Tao

  • Tao is a software package for sound synthesis using physical models. It provides a virtual acoustic material constructed from masses and springs which can be used as the basis for building quite complex virtual musical instruments. Tao comes with a synthesis language for creating and playing instruments and a fully documented (eventually) C++ API for those who would like to use it as an object library.
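The mass-and-spring idea behind Tao's physical models can be sketched with a single damped oscillator: displace a mass, then step its position and velocity forward and record the position as the output signal. This is a toy physics illustration under assumed parameters, not Tao's API or synthesis language:

```python
import math

def mass_spring(n, freq=0.05, damping=0.999):
    """Toy damped mass-spring 'pluck' via symplectic Euler integration.

    freq is in cycles per sample; damping scales the velocity each step.
    """
    k = (2 * math.pi * freq) ** 2  # spring constant for the desired frequency
    pos, vel, out = 1.0, 0.0, []   # displace the mass, then let it ring
    for _ in range(n):
        vel = (vel - k * pos) * damping  # spring force pulls toward rest
        pos += vel
        out.append(pos)                  # the mass position is the audio signal
    return out

tone = mass_spring(256)  # a decaying sine-like oscillation
```

Tao's instruments couple many such masses and springs into 2D material sheets, but the per-sample update is the same kind of step.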


Modus

  • Modus is an open source, cross-platform C++ library which allows you to handle music from code.

OpenAL

AeonWave

  • AeonWave is a low-level, hardware accelerated 4D spatialized audio library aimed at the professional simulation market. The software currently runs on Windows and Linux for ARM and x86 and tests have shown that AeonWave renders 3D audio between 450% and 1400% faster than any competing product depending on the hardware configuration. AeonWave started out as project Anaconda; a fast rendering new OpenAL implementation. After realizing this would not be good enough for spatialized 3D audio demands the library has been rewritten;


Vult

  • Vult - a simple and powerful language to program high-performance algorithms that may run in small microprocessors or microcontrollers. Vult is especially useful when programming Digital Signal Processing (DSP) algorithms like audio effects or synthesizers. The Vult compiler is a transcompiler which takes Vult code and produces plain C/C++ code that can be compiled on any platform with a C/C++ compiler, for example Arduino or Teensy boards. Vult can also generate JavaScript that can be run directly in the browser, or C/C++ that can be compiled as Pure Data externals, so code written in Vult can be deployed freely across platforms.

SOUL

  • SOUL - represents a paradigm shift in the way audio development is done. Its architecture improves latency and performance in ways that are impossible using current techniques, and it opens up the use of new computing platforms for audio acceleration. SOUL unlocks native-level speed, even when used within slower, safer languages. The SOUL language makes audio coding more accessible and less error-prone, improving productivity for beginners and expert professionals. [40]


  • SOUL - creating a new language and infrastructure for writing and deploying audio code. It aims to unlock improvements in latency, performance, portability and ease-of-development that aren't possible with the current mainstream techniques that are being used.


Game audio




  • https://www.ambiera.com/irrklang - a high level 2D and 3D cross platform (Windows, Mac OS X, Linux) sound engine and audio library which plays WAV, MP3, OGG, FLAC, MOD, XM, IT, S3M and more file formats, and is usable in C++ and all .NET languages (C#, VisualBasic.NET, etc). It has all the features known from low level audio libraries as well as lots of useful features like a sophisticated streaming engine, extendable audio reading, single and multithreading modes, 3d audio emulation for low end hardware, a plugin system, multiple rolloff models and more. All this can be accessed via an extremely simple API.

Creative / live coding

See Creative / live coding

JavaScript / Web Audio




MIDI



  • Synthy - an online synthesiser and sequencer with live world output and colours made by Filip Hnízdo using the Web Audio API, the live server is powered by socket.io and Node.js. The database of patterns pushed to synthy is powered by the wonderful NeDB.


Audiocogs


DSP.js

  • https://github.com/corbanbrook/dsp.js/ - DSP.js is a comprehensive digital signal processing library for JavaScript. It includes many functions for signal analysis and generation, including oscillators (sine, saw, square, triangle), window functions (Hann, Hamming, etc.), envelopes (ADSR), IIR filters (lowpass, highpass, bandpass, notch), FFT and DFT transforms, delays and reverb.


Tone.js

  • Tone.js is a Web Audio framework for creating interactive music in the browser. The architecture of Tone.js aims to be familiar to both musicians and audio programmers looking to create web-based audio applications. On the high-level, Tone offers common DAW (digital audio workstation) features like a global transport for scheduling and timing events and prebuilt synths and effects. For signal-processing programmers (coming from languages like Max/MSP), Tone provides a wealth of high performance, low latency building blocks and DSP modules to build your own synthesizers, effects, and complex control signals.


Sawtooth.io


tonal

  • tonal is a modular, functional music theory library. Built from a collection of modules, it's able to create and manipulate tonal elements of music (pitches, chords, scales, keys). It deals with abstractions (not actual music) and while it is designed for algorithmic composition and music generation, it can be used to develop any kind of MIDI or audio software.


Audiolet

  • https://github.com/oampo/Audiolet - a JavaScript library for real-time audio synthesis and composition from within the browser. It uses graph-based routing and pattern-based scheduling to make complex audio simple to program, and easy to understand.


Pedalboard.js


Wad

  • https://github.com/rserota/wad - Wad is a Javascript library for manipulating audio using the new HTML5 Web Audio API. It greatly simplifies the process of creating, playing, and manipulating audio, either for real-time playback, or at scheduled intervals. Wad provides a simple interface to use many features one would find in a desktop DAW (digital audio workstation), but doesn't require the user to worry about sending XHR requests or setting up complex audio graphs.


Other



Opimodus / OMN

Wolfram:Alpha

GPU


Neural net

Plugins

Formats

VST2 / VST3

paths:

~/.vst
/usr/lib/vst
/usr/local/lib/vst
~/.wine/drive_c/Program Files (x86)/VstPlugins
~/.wine/drive_c/Program Files/VstPlugins





  • https://github.com/falkTX/JackAss - a VST plugin that provides JACK-MIDI support for VST hosts. Simply load the plugin in your favourite host to get a JACK-MIDI port. Each new plugin instance creates a new MIDI port.



Creating


  • HISE - a cross-platform open source audio application for building virtual instruments. It emphasizes sampling, but includes some basic synthesis features for making hybrid instruments as well as audio effects. You can export the instruments as VST / AU / AAX plugins or as standalone applications for Windows / macOS or iOS.


  • https://github.com/Tracktion/pluginval - a cross-platform plugin validator and tester application. It is designed to be used by both plugin and host developers to ensure stability and compatibility between plugins and hosts.


  • https://github.com/pongasoft/jamba - a set of helpers (classes, concepts, build files, etc.) built on top of the VST SDK to provide a lightweight framework for building a VST2/3 plugin. Jamba has been designed to help in building a VST2/3 plugin, not to replace the SDK: you are still writing a VST2/3 plugin, not a Jamba plugin.


LADSPA

~/.ladspa
/usr/local/lib/ladspa
/usr/lib/ladspa

LV2

  • LV2 - an open standard for audio plugins, used by hundreds of plugins and other projects. At its core, LV2 is a simple stable interface, accompanied by extensions which add functionality to support the needs of increasingly powerful audio software.



~/.lv2
/usr/local/lib/lv2
/usr/lib/lv2
  # standard lv2 paths


lv2ls
  # list all lv2 plugins available


Specifications


  • LV2 - an interface for writing audio processors, or plugins, in C/C++ which can be dynamically loaded into many applications, or hosts. This core specification is simple and minimal, but is designed so that extensions can be defined to add more advanced features, making it possible to implement nearly any feature imaginable. (API docs)



  • LV2 MIDI - defines a data type for a MIDI message, midi:MidiEvent, which is normalised for fast and convenient real-time processing. MIDI is the Musical Instrument Digital Interface, a ubiquitous binary standard for controlling digital music devices. For plugins that process MIDI (or other situations where MIDI is sent via a generic transport) the main type defined here, midi:MidiEvent, can be mapped to an integer and used as the type of an LV2 Atom or Event.
  • LV2 + midnam


  • LV2 Units - This vocabulary defines a number of units for use in audio processing.


  • LV2 UI - This extension is used to create User Interfaces (UIs) for LV2 plugins.


  • LV2 1.0 released, what's next? - "LV2 is a successor of both LADSPA (audio effects) and DSSI (instruments) with some backwards compatibility. The scope of the API more or less equals to the sum of LADSPA and DSSI, not in the last place thanks to its modular design."
Creating
  • Programming LV2 Plugins - a series of well-documented example plugins that demonstrate the various features of LV2. Starting with the most basic plugin possible, each adds new functionality and explains the features used from a high level perspective. API and vocabulary reference documentation explains details, but not the “big picture”. This book is intended to complement the reference documentation by providing good reference implementations of plugins, while also conveying a higher-level understanding of LV2.
  • Programming LV2 Plugins - new book layout


  • The LV2 Book - Rust Edition - a translation of the LV2 Book by David Robillard for the lv2rs library. As such, the examples in this book as well as the README's and comments are copied from the original, but the book itself has been altered to adapt for the differences between C and Rust.



  • LV2 programming for the complete idiot - an LV2 plugin programming guide for the complete idiot using a set of C++ classes. If you are not a complete idiot, you may want to read the LV2 spec and figure it out for yourself.


  • Lilv - a C library to make the use of LV2 plugins as simple as possible for applications. Lilv is the successor to SLV2, rewritten to be significantly faster and have minimal dependencies. It is stable, well-tested software (the included test suite covers over 90% of the code) in use by several applications.


  • Suil is a lightweight C library for loading and wrapping LV2 plugin UIs.






  • LV2 Create - a GUI utility that lets you easily enter information about a plugin, without needing to know too many details about LV2 (certainly not about those godawful, over-engineered, developer/enduser hostile, inefficient, easily-broken TTL files. Terrible design for audio work). Then you click a button, and the utility creates the TTL files, and C skeleton code for the plugin. You just need to add your DSP code, and compile to create your plugin. It even generates the GNU Makefile for you.



Testing



  • lv2lint - Check whether a given LV2 plugin is up to the specification


  • Torture tester - a program to help with testing of LADSPA and LV2 plugins.



Bridge to
  • NASPRO bridges - a collection of bridges to LV2 that, once installed, allow you to use plugins developed for other plugin standards in LV2 hosts. As of now, it contains two bridges: a LADSPA 1.1 bridge and a DSSI 1.0.0/1.1.0 bridge.



Bridge from


Offline processing
  • lv2file - a simple program which you can use to apply effects to your audio files without much hassle.
  • lv2proc - generates an output sound file by applying a LV2 effect plugin to an input sound file.
Events
  • Beatbox - Creates MIDI events based on LV2 time position events (bars and beats), e.g. to drive a drum machine. Bars and beats can be disabled/enabled separately.
  • Click - Synthesizes click tracks based on LV2 time position events (bars and beats). Bars and beats can be disabled/enabled separately.
  • Looper - Loops arbitrary LV2 atom events on a ping-pong buffer. E.g. loops MIDI, OSC or anything else that can be packed into LV2 atoms with sample accuracy. Needs to be driven by LV2 time position events.
  • Pacemaker - Creates LV2 time position events from scratch to drive other plugins.
  • Quantum - Quantizes incoming events to whole beats.
  • Subspace - Subdivide or multiply incoming time signals by whole fractions, e.g. to speed up time x2, x3, ... or slow it down to x1/2, x1/3, ...
  • Timecapsule - Record/Playback of arbitrary LV2 atoms to/from memory. Record all incoming atom messages with sample accuracy and play them back later from memory. Stored atom event data is part of the plugin state and thus preserved across instantiations.

DSSI


Hosts

Multiple

Carla
  • Carla - an audio plugin host, with support for many audio drivers and plugin formats. It has some nice features like automation of parameters via MIDI CC (and send output back as MIDI too) and full OSC control. Carla currently supports LADSPA (including LRDF), DSSI, LV2, VST2/3 and AU plugin formats, plus GIG, SF2 and SFZ file support. It uses JACK as the default and preferred audio driver but also supports native drivers like ALSA, DirectSound or CoreAudio.

There are 4 types of engine processing:

  • Single-client: (JACK driver only) - carla-jack-single
    • Same as Multi-client, except that all JACK ports belong to a single master client.
    • This is needed when a setup doesn't support multi-client JACK apps, such as LADISH.
  • Multi-client: (JACK driver only) - carla-jack-multi
    • Every single plugin is exposed as a new JACK client. Audio and MIDI ports are registered as needed.
  • Rack: - carla-rack
    • Plugins are processed in order, from top to bottom.
    • Plugins with non-stereo audio channels are not supported, but a forced-stereo option is available for Mono ones.
  • Patchbay: - carla-patchbay
    • Modular patchbay mode, just like in JACK Multi-client and many other modular applications.
    • Every plugin gets its own canvas group and ports allowing you to interconnect plugin audio and MIDI.


carla-single 
  # usage: /usr/bin/carla-single [arch (optional)] [format] [filename/uri] [label (optional)] [uniqueId (optional)]

Possible archs:

 - native (default)
 - linux32
 - linux64
 - win32
 - win64

Possible formats:

 - internal
 - ladspa
 - dssi
 - lv2
 - vst|vst2
 - gig
 - sf2
 - sfz


Command-line launch examples:

/usr/bin/carla-single internal midisplit
/usr/bin/carla-single dssi /usr/lib/dssi/whysynth.so
/usr/bin/carla-single lv2 http://calf.sourceforge.net/plugins/Compressor
/usr/bin/carla-single native vst /usr/lib/vst/TAL-NoiseMaker.so
/usr/bin/carla-single win32 vst "~/.wine/drive_c/Program Files (x86)/VstPlugins/Kontakt 5.dll"



  • https://github.com/progwolff/performer - Performer lets you manage all the songs in your setlist as individual Carla patches and loads each of them when you need it. Additionally Performer uses Okular or QWebEngine to display notes and chords of your songs.
Ingen
Jost
  • Jost (dead) was the first open source multi-technology (native VST, LADSPA, DSSI) host on Linux. It mainly hosts a chain of plugins per instance, publishing JACK, ALSA and alsa_seq ports so it can be connected into your main stream flow. It still has some very good features that make it a first-class host.

VST

MrsWatson
  • MrsWatson - a command-line audio plugin host. It takes an audio and/or MIDI file as input, and processes it through one or more audio plugins. Currently MrsWatson only supports VST 2.x plugins, but more formats are planned in the future. MrsWatson was designed primarily for three purposes: audio plugin development and testing, automated audio processing for servers or other applications, and unit testing audio plugins.
dssi-vst
  • dssi-vst - Run Windows VST plugins on Linux. Note that DSSI doesn't support passing host tempo through to plugins.
FST
  • FST - a program which uses Wine, JACK and Steinberg's VST Audio Plug-Ins SDK to enable the use of many VST audio plugins under GNU/Linux.
FeSTige
  • FeSTige - a GUI for fst and dssi-vst, allowing you to run Windows VST plugins on Linux.
fsthost
fsthost -g ~/.vst
  # build plugin db
export VST_PATH=~/VST:/usr/share/vst:/otherlocation

fsthost -g
fsthost_menu
  # Perl GTK menu to startup plugins
fsthost_ctrl
  # Perl GTK app for control via TCP socket
fsthost_list
  # simple application to show known plugins ( read about XML DB )

export FSTMENU_GTK=2 # or 3
Airwave
  • Airwave - a WINE-based VST bridge, that allows for the use of Windows 32- and 64-bit VST 2.4 audio plugins with Linux VST hosts
  • https://pastebin.com/aDNcdRjp - "Airwave is very nice, but adding more than a few plugins to it is awfully tedious. So I've taken the matter into my own hands and written a script to add a large number of plugins to Airwave (plus the ability to edit their names) as a batch process." [50]
vstserver
vst-bridge
LinVst
  • https://github.com/osxmidi/LinVst - a Linux VST plugin that runs Windows 64-bit VSTs. To use LinVst, the linvst.so file simply needs to be renamed to match the Windows VST DLL's filename.


VSTForx
  • VSTForx - a fully modular effect network creation tool which comes as a VST plugin. With VSTForx you are able to load any number of VST plugins and connect them any way you want. Additional modules allow you to manipulate such signal chains and offer a whole new way of mixing and producing. Windows/Mac, commercial.


L_Pa
  • L_Pa Project - a collection of tools aimed at better integration and *performance* of Linux + Wine + jackd + pro-audio applications. L_Pa accomplishes this by setting up a proper mix of kernel and software with low-latency/pro-audio users specifically in mind. The main components so far are: a custom version of the (rt-)Linux kernel with a delta of patches for Linux pro-audio usage, and a custom version of Wine with a delta of patches to ensure Wine is ready for pro-audio on the Linux platform, plus various other improvements and bug fixes. You'll want both WineASIO and FSThost; WineASIO provides an ASIO to JACK driver for Wine.
vsthost
NetVST

DSSI

  • ghostess - a rough start at a graphical DSSI host, based on jack-dssi-host, but capable of saving and restoring plugin configuration, as well as specifying MIDI channels and layering synths. ghostess includes three MIDI drivers: an ALSA sequencer MIDI driver, a (clumsy but functional) CoreMIDI driver (which allows ghostess to be used on Mac OS X), and a JACK MIDI driver for use with the MIDI transport in recent versions (>=0.105.0) of JACK. ghostess also comes with a universal DSSI GUI, which attempts to provide GUI services for any DSSI or LADSPA plugin, and may be used with any DSSI host.

LADSPA

  • JACK Rack is an effects "rack" for the JACK low latency audio API. The rack can be filled with LADSPA effects plugins and can be controlled using the ALSA sequencer. It's phat; it turns your computer into an effects box.
  • jackspa - A small utility which will host a LADSPA plugin, providing JACK ports for its audio inputs and outputs, and sliders in a gtkmm GUI for its control inputs. I find it useful for hosting plugins with odd port configurations (such as a vocoder or a ring modulator), and for testing plugins. This project is pretty hacky. I threw it together quickly because I needed it in a hurry, and as a result, it's fairly buggy, and the code is a mess. But, it does the job.
  • ng-jackspa is a set of simple user interfaces that host a LADSPA plugin, providing JACK ports for its audio inputs and outputs, and dynamic setting of its control inputs. Additionally, the plugin controls can be exported to or controlled by control voltages on standard JACK audio ports.
  • Soundtank hosts LADSPA plugins in "realtime objects" which embody the structure of the audio signal flow. RTObjects can be controlled in a completely customizable fashion using MIDI events sent through the ALSA sequencer interface.
  • Stomper - a virtual pedalboard for guitar, using commonly-available audio plugins in a user-defined arrangement and MIDI for switching. It is intended for on-stage use and will be optimized as such.

LV2

Jalv
  • Jalv - a simple but fully featured LV2 host for Jack. It runs LV2 plugins and exposes their ports as Jack ports, essentially making any LV2 plugin function as a Jack application.
jalv.qt5 http://drumkv1.sourceforge.net/lv2


LV2_PATH=/path/to/plugin.lv2 jalv.gtk URI

LV2_PATH=/path/to/plugin.lv2 lv2ls
  # to find the URI(s)
zynjacku
  • zynjacku - JACK based, GTK (2.x) host for LV2 synths. It has one JACK MIDI input port (routed to all hosted synths) and one (two for stereo synths) JACK audio output port per plugin. Such design provides multi-timbral sound by running several synth plugins.
Synthpod
  • Synthpod - both LV2 host and plugin. It can be run as a standalone app and be used as a tool for live performances or general audio and event filtering. Or it can be run as a plugin itself inside another host (or inside itself) to add support for non-linear patching where only strictly linear connections are supported (e.g. as in most DAWs). Patching of audio channels is clickless.
Elven
  • Elven - written for revision 2 of the LV2 specification and is NOT compatible with revisions 3 and later. It may work, it may break subtly or it may give your computer the swine flu.
mod-host


  • https://github.com/moddevices/mod-ui - the UI for the MOD software. It's a webserver that delivers an HTML5 interface and communicates with mod-host. It also communicates with the MOD hardware, but does not depend on it to run.


lv2host
jackwrap.c
lv2h

Audio formats


  • Xiph.Org's Digital Show & Tell - a video on digital media that explores multiple facets of digital audio signals and how they really behave in the real world.



PCM

  • https://en.wikipedia.org/wiki/Pulse-code_modulation - a method used to digitally represent sampled analog signals. It is the standard form of digital audio in computers, Compact Discs, digital telephony and other digital audio applications. In a PCM stream, the amplitude of the analog signal is sampled regularly at uniform intervals, and each sample is quantized to the nearest value within a range of digital steps.

Linear pulse-code modulation (LPCM) is a specific type of PCM where the quantization levels are linearly uniform. This is in contrast to PCM encodings where quantization levels vary as a function of amplitude (as with the A-law algorithm or the μ-law algorithm). Though PCM is a more general term, it is often used to describe data encoded as LPCM.
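The µ-law curve mentioned above can be written down directly. A minimal sketch of the continuous companding formula (real G.711 codecs use a segmented 8-bit approximation of this curve; the sample value is illustrative):

```python
import math

MU = 255.0  # mu parameter used by North American / Japanese telephony

def mulaw_compress(x):
    """Compand a sample in [-1, 1]: quiet signals get expanded."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mulaw_expand(y):
    """Inverse of mulaw_compress."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

x = 0.1
y = mulaw_compress(x)
assert y > x                              # small amplitudes are boosted
assert abs(mulaw_expand(y) - x) < 1e-12   # round trip is exact (no quantizer)
```

Quantizing the companded value uniformly then gives finer effective steps near zero, which is where speech spends most of its time.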

A PCM stream has two basic properties that determine the stream's fidelity to the original analog signal: the sampling rate, which is the number of times per second that samples are taken; and the bit depth, which determines the number of possible digital values that can be used to represent each sample.
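The two properties translate directly into storage cost; a quick worked example in Python (CD-audio figures, computed rather than quoted):

```python
# Bytes needed to store one second of LPCM audio for a given
# sample rate, bit depth and channel count.
def pcm_bytes_per_second(sample_rate, bit_depth, channels):
    return sample_rate * (bit_depth // 8) * channels

# CD audio: 44.1 kHz sample rate, 16-bit depth, stereo.
print(pcm_bytes_per_second(44_100, 16, 2))  # 176400 bytes per second

# Bit depth sets the number of quantization levels per sample:
print(2 ** 16)  # 65536 possible values for a 16-bit sample
```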


  • https://en.wikipedia.org/wiki/Differential_pulse-code_modulation - a signal encoder that uses the baseline of pulse-code modulation (PCM) but adds some functionalities based on the prediction of the samples of the signal. The input can be an analog signal or a digital signal. If the input is a continuous-time analog signal, it needs to be sampled first so that a discrete-time signal is the input to the DPCM encoder. DPCM was invented by C. Chapin Cutler at Bell Labs in 1950; his patent includes both methods.

Option 1: take the values of two consecutive samples; if they are analog samples, quantize them; calculate the difference between the first one and the next; the output is the difference, which can be further entropy coded.

Option 2: instead of taking a difference relative to a previous input sample, take the difference relative to the output of a local model of the decoder process; here the difference can be quantized, which allows a good way to incorporate a controlled loss in the encoding.

Applying either process eliminates the short-term redundancy (positive correlation of nearby values) of the signal; compression ratios on the order of 2 to 4 can be achieved if the differences are subsequently entropy coded, because the entropy of the difference signal is much smaller than that of the original discrete signal treated as independent samples.
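Option 1 can be sketched in a few lines of Python. This toy version skips the quantization and entropy-coding steps, so the round trip is lossless and only the redundancy removal is shown:

```python
# Toy DPCM: transmit only differences between consecutive samples,
# reconstruct by accumulating them.
def dpcm_encode(samples):
    prev, diffs = 0, []
    for s in samples:
        diffs.append(s - prev)   # the difference is all that is sent
        prev = s
    return diffs

def dpcm_decode(diffs):
    acc, samples = 0, []
    for d in diffs:
        acc += d
        samples.append(acc)
    return samples

signal = [10, 12, 13, 13, 11, 8]
diffs = dpcm_encode(signal)
print(diffs)                     # [10, 2, 1, 0, -2, -3] - smaller values
assert dpcm_decode(diffs) == signal
```

The difference stream has a much narrower value range than the original, which is what makes the subsequent entropy coding effective.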


  • https://en.wikipedia.org/wiki/Adaptive_differential_pulse-code_modulation - ADPCM is a variant of differential pulse-code modulation (DPCM) that varies the size of the quantization step, to allow further reduction of the required data bandwidth for a given signal-to-noise ratio. Typically, the adaptation to signal statistics in ADPCM consists simply of an adaptive scale factor before quantizing the difference in the DPCM encoder. ADPCM was developed in the early 1970s at Bell Labs for voice coding, by P. Cummiskey, N. S. Jayant and James L. Flanagan.

PDM

  • https://en.wikipedia.org/wiki/Pulse-density_modulation - a form of modulation used to represent an analog signal with a binary signal. In a PDM signal, specific amplitude values are not encoded into codewords of pulses of different weight as they would be in pulse-code modulation (PCM). Instead, it is the relative density of the pulses that corresponds to the analog signal's amplitude. The output of a 1-bit DAC is the same as the PDM encoding of the signal. Pulse-width modulation (PWM) is a special case of PDM where the switching frequency is fixed and all the pulses corresponding to one sample are contiguous in the digital signal. For a 50% voltage with a resolution of 8-bits, a PWM waveform will turn on for 128 clock cycles and then off for the remaining 128 cycles. With PDM and the same clock rate the signal would alternate between on and off every other cycle. The average is 50% for both waveforms, but the PDM signal switches more often. For 100% or 0% level, they are the same.
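The 50% example can be made concrete at a reduced resolution (8 clock cycles instead of 256). The modulator below is an illustrative first-order sigma-delta-style accumulator, one common way to derive a PDM bitstream from a target level:

```python
# PWM groups all the "on" pulses together; PDM spreads them out,
# but both average to the same level.
n = 8
pwm = [1] * (n // 2) + [0] * (n // 2)   # 1 1 1 1 0 0 0 0
pdm = [1, 0] * (n // 2)                 # 1 0 1 0 1 0 1 0
assert sum(pwm) / n == sum(pdm) / n == 0.5

# First-order error accumulator: emit a 1 whenever the running
# sum of the target level crosses one full pulse.
def pdm_modulate(level, n):
    acc, bits = 0.0, []
    for _ in range(n):
        acc += level
        if acc >= 1.0:
            bits.append(1)
            acc -= 1.0
        else:
            bits.append(0)
    return bits

print(pdm_modulate(0.5, 8))   # [0, 1, 0, 1, 0, 1, 0, 1]
```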

WAVE

  • https://en.wikipedia.org/wiki/WAV - a Microsoft and IBM audio file format standard for storing an audio bitstream on PCs. It is an application of the Resource Interchange File Format (RIFF) bitstream format method for storing data in "chunks", and thus is also close to the 8SVX and the AIFF format used on Amiga and Macintosh computers, respectively. It is the main format used on Windows systems for raw and typically uncompressed audio. The usual bitstream encoding is the linear pulse-code modulation (LPCM) format.
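The RIFF chunk layout is easy to inspect with Python's standard-library wave module; a sketch that writes one second of 16-bit mono silence to an in-memory buffer and checks the header:

```python
import io
import wave

buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(1)                    # mono
    w.setsampwidth(2)                    # 2 bytes = 16-bit LPCM
    w.setframerate(44100)
    w.writeframes(b"\x00\x00" * 44100)   # one second of silence

data = buf.getvalue()
assert data[:4] == b"RIFF"               # RIFF container magic
assert data[8:12] == b"WAVE"             # WAVE form type

with wave.open(io.BytesIO(data), "rb") as r:
    nframes = r.getnframes()
assert nframes == 44100
```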



  • https://en.wikipedia.org/wiki/Broadcast_Wave_Format - an extension of the popular Microsoft WAV audio format and is the recording format of most file-based non-linear digital recorders used for motion picture, radio and television production. It was first specified by the European Broadcasting Union in 1997, and updated in 2001 and 2003. The purpose of this file format is the addition of metadata to facilitate the seamless exchange of sound data between different computer platforms and applications. It specifies the format of metadata, allowing audio processing elements to identify themselves, document their activities, and supports timecode to enable synchronization with other recordings. This metadata is stored as extension chunks in a standard digital audio WAV file.

AU

  • https://en.wikipedia.org/wiki/Au_file_format - a simple audio file format introduced by Sun Microsystems. The format was common on NeXT systems and on early Web pages. Originally it was headerless, being simply 8-bit µ-law-encoded data at an 8000 Hz sample rate. Hardware from other vendors often used sample rates as high as 8192 Hz, often integer multiples of video clock signal frequencies. Newer files have a header that consists of six unsigned 32-bit words, an optional information chunk and then the data (in big endian format). Although the format now supports many audio encoding formats, it remains associated with the µ-law logarithmic encoding. This encoding was native to the SPARCstation 1 hardware, where SunOS exposed the encoding to application programs through the /dev/audio interface. This encoding and interface became a de facto standard for Unix sound.
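The six big-endian 32-bit words of the newer header are easy to build with Python's struct module; a sketch using the classic 8 kHz µ-law parameters (field values are illustrative):

```python
import struct

# Pack a Sun .au header: magic ".snd", data offset, data size,
# encoding (1 = 8-bit mu-law), sample rate and channel count,
# all as big-endian 32-bit words.
def au_header(data_size, encoding=1, sample_rate=8000, channels=1):
    offset = 24                      # header is six 32-bit words
    return struct.pack(">4sIIIII", b".snd", offset, data_size,
                       encoding, sample_rate, channels)

hdr = au_header(data_size=8000)      # one second of 8 kHz mu-law audio
assert len(hdr) == 24
magic, offset, size, enc, rate, ch = struct.unpack(">4sIIIII", hdr)
assert magic == b".snd" and enc == 1 and rate == 8000
```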


MP3

  • MP3 (MPEG-1 or MPEG-2 Audio Layer III) is a patented encoding format for digital audio which uses a form of lossy data compression. It is a common audio format for consumer audio streaming or storage, as well as a de facto standard of digital audio compression for the transfer and playback of music on most digital audio players.


  • https://github.com/lieff/minimp3 - Minimalistic, single-header library for decoding MP3. minimp3 is designed to be small, fast (with SSE and NEON support), and accurate (ISO conformant). The project's README includes a rough benchmark, measured using perf on an i7-6700K with IO included.


Encoding

LAME


for f in *.wav ; do lame "$f" ; done
shine


mp3fs
  • mp3fs - a read-only FUSE filesystem which transcodes between audio formats (currently FLAC to MP3) on the fly when files are opened and read. It can let you use a FLAC collection with software and/or hardware which only understands the MP3 format, or transcode files through simple drag-and-drop in a file browser. [53]

Splitting

mp3splt
  • mp3splt - a utility to split mp3, ogg vorbis and native FLAC files by selecting a begin and an end time position, without decoding. It's very useful for splitting large mp3/ogg vorbis/FLAC files into smaller ones, or for splitting entire albums to obtain the original tracks. If you want to split an album, you can select split points and filenames manually or get them automatically from CDDB (internet or a local file) or from .cue files. It also supports automatic silence split, which can be used to adjust cddb/cue splitpoints. Trimming using silence detection is also available. You can extract tracks from Mp3Wrap or AlbumWrap files in a few seconds. For mp3 files, both ID3v1 & ID3v2 tags are supported. Mp3splt-project is split into 3 parts: libmp3splt, mp3splt and mp3splt-gtk.
Pcutmp3
  • Pcutmp3 - a Java based program that lets you cut and trim MP3 files losslessly (i.e. without any quality loss as there is no re-encoding). Ideal for removing adverts or unwanted intros/outros from your radio mixes. Originally created by Sebastian Gesemann it is now maintained by Christopher Banes.
lossless-cut
quelcom
  • https://github.com/posixru/quelcom - provides assorted tools to perform simple editing operations on MP3 and WAV audio files. These include fading, check-and-clean, informational extraction and lossless cutting and joining without reencoding.


MP3-Splitter


Flacon
  • Flacon - extracts individual tracks from one big audio file containing the entire album of music and saves them as separate audio files. To do this, it uses information from the appropriate CUE file. Besides, Flacon makes it possible to conveniently revise or specify tags, both for all tracks at once or for each track separately.
album-splitter
  • https://github.com/crisbal/album-splitter - Do you have a music album as a single file (locally or on YouTube), with all its tracks joined together? Do you want to split that album in its single tracks? Do you want to tag these tracks so your music player can get all the required info from them?


WavePad
  • Audio Editing Software. Sound, Music, Voice & Mp3 Editor - This audio editing software is a full-featured professional audio and music editor for Windows and Mac. It lets you record and edit music, voice and other audio recordings. When editing audio files, you can cut, copy and paste parts of recordings, and then add effects like echo, amplification and noise reduction. WavePad works as a wav or mp3 editor, but it also supports a number of other file formats including vox, gsm, wma, real audio, au, aif, flac, ogg, and more.

Non-commercial usage only.

mp3DirectCut
  • mp3DirectCut - a fast and extensive audio editor and recorder for encoded MP3. Without re-encoding you can directly cut, crop or split your MP3 and AAC tracks, change the volume on MP3 and much more. Direct editing saves encoding time and preserves the original audio quality of your tracks. The built in recorder creates MP3 on the fly. By using Cue sheets, Pause detection or Auto cue you can easily divide long files.

Metadata


  • id3reader - a Python module that reads ID3 metadata tags in MP3 files. It can read ID3v1, ID3v2.2, ID3v2.3, or ID3v2.4 tags. It does not write tags at all.
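ID3v1 specifically is simple enough to parse by hand: the last 128 bytes of the file are "TAG" followed by fixed-width, NUL/space-padded fields. An illustrative pure-Python sketch (the tag values are made up):

```python
# ID3v1 layout: "TAG" (3) + title (30) + artist (30) + album (30)
# + year (4) + comment (30) + genre (1) = 128 bytes.
def parse_id3v1(tail):
    """Parse a 128-byte ID3v1 block into a dict, or None if absent."""
    if len(tail) != 128 or tail[:3] != b"TAG":
        return None
    field = lambda a, b: tail[a:b].rstrip(b"\x00 ").decode("latin-1")
    return {
        "title":   field(3, 33),
        "artist":  field(33, 63),
        "album":   field(63, 93),
        "year":    field(93, 97),
        "comment": field(97, 127),
        "genre":   tail[127],        # index into the genre table
    }

# Build a synthetic tag to demonstrate:
tag = (b"TAG" + b"Some Title".ljust(30, b"\x00")
             + b"Some Artist".ljust(30, b"\x00")
             + b"Some Album".ljust(30, b"\x00")
             + b"1999" + b"\x00" * 30 + bytes([17]))
assert parse_id3v1(tag)["title"] == "Some Title"
assert parse_id3v1(tag)["year"] == "1999"
```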



  • https://github.com/quodlibet/mutagen - a Python module to handle audio metadata. It supports ASF, FLAC, MP4, Monkey's Audio, MP3, Musepack, Ogg Opus, Ogg FLAC, Ogg Speex, Ogg Theora, Ogg Vorbis, True Audio, WavPack, OptimFROG, and AIFF audio files. All versions of ID3v2 are supported, and all standard ID3v2.4 frames are parsed. It can read Xing headers to accurately calculate the bitrate and length of MP3s. ID3 and APEv2 tags can be edited regardless of audio format. It can also manipulate Ogg streams on an individual packet/page level.
  • eyeD3 - a Python tool for working with audio files, specifically mp3 files containing ID3 metadata (i.e. song info). It provides a command-line tool (eyeD3) and a Python library (import eyed3) that can be used to write your own applications or plugins that are callable from the command-line tool.


AAC

  • http://en.wikipedia.org/wiki/Advanced_Audio_Coding - AAC is a standardized, lossy compression and encoding scheme for digital audio. Designed to be the successor of the MP3 format, AAC generally achieves better sound quality than MP3 at similar bit rates.


Ogg

Container format.

  • https://en.wikipedia.org/wiki/Vorbis_comment - a metadata container used in the Vorbis, FLAC, Theora, Speex and Opus file formats. It allows information such as the title, artist, album, track number or other information about the file to be added to the file itself. However, as the official Ogg Vorbis documentation notes, “[the comment header] is meant for short, text comments, not arbitrary metadata; arbitrary metadata belongs in a separate logical bitstream (usually an XML stream type) that provides greater structure and machine parseability.”


Opus

  • Opus is a totally open, royalty-free, highly versatile audio codec. Opus is unmatched for interactive speech and music transmission over the Internet, but is also intended for storage and streaming applications. It is standardized by the Internet Engineering Task Force (IETF) as RFC 6716 which incorporated technology from Skype's SILK codec and Xiph.Org's CELT codec.

Generally considered the best lossy audio format; it supersedes Vorbis and Speex.



ffmpeg -i input -acodec libopus -b:a bitrate -vbr on -compression_level 10 output
  # encode to Opus with variable bitrate (e.g. -b:a 96k) at maximum encoder effort

Vorbis

Speex

  • https://www.speex.org
    • https://en.wikipedia.org/wiki/Speex - a lossy audio compression format specifically tuned for the reproduction of human speech and also a free software speech codec that may be used on VoIP applications and podcasts. It is based on the CELP speech coding algorithm. Speex claims to be free of any patent restrictions and is licensed under the revised (3-clause) BSD license. It may be used with the Ogg container format or directly transmitted over UDP/RTP. It may also be used with the FLV container format. The Speex designers see their project as complementary to the Vorbis general-purpose audio compression project.



FLAC

  • FLAC - stands for Free Lossless Audio Codec, an audio format similar to MP3, but lossless, meaning that audio is compressed in FLAC without any loss in quality. This is similar to how Zip works, except with FLAC you will get much better compression because it is designed specifically for audio, and you can play back compressed FLAC files in your favorite player (or your car or home stereo, see supported devices) just like you would an MP3 file. FLAC stands out as the fastest and most widely supported lossless audio codec, and the only one that at once is non-proprietary, is unencumbered by patents, has an open-source reference implementation, has a well documented format and API, and has several other independent implementations.


flac --best --keep-foreign-metadata input.wav
  # encode at maximum compression, preserving non-audio chunks (e.g. BWF metadata)


metaflac --list file.flac
  # list all metadata of a FLAC file


aptX

  • https://en.wikipedia.org/wiki/AptX - a family of proprietary audio codec compression algorithms currently owned by Qualcomm. The original aptX algorithm was developed in the 1980s by Dr. Stephen Smyth as part of his Ph.D. research at Queen's University Belfast School of Electronics, Electrical Engineering and Computer Science; its design is based on time domain ADPCM principles without psychoacoustic auditory masking techniques.

Codec 2

  • Codec 2 - Codec 2 is an open source speech codec designed for communications quality speech between 700 and 3200 bit/s. The main application is low bandwidth HF/VHF digital radio. It fills a gap in open source voice codecs beneath 5000 bit/s and is released under the GNU Lesser General Public License (LGPL). The Codec 2 project also contains several modems (FDMDV, COHPSK and mFSK) carefully designed for digital voice over HF radio; GNU Octave simulation code to support the codec and modem development; and FreeDV – an open source digital voice protocol that integrates the modems, codecs, and FEC. FreeDV is available as a GUI application, an open source library (FreeDV API), and in hardware (the SM1000 FreeDV adaptor).

STEM

  • NI: Stems - a completely new way to DJ. Stems is a new format for music that redefines creative live performance. Create spontaneous edits, a cappellas, instrumentals, and more with your tracks’ musical elements available independently.


WavPack

  • WavPack - a completely open audio compression format providing lossless, high-quality lossy, and a unique hybrid compression mode. For version 5.0.0, several new file formats and lossless DSD audio compression were added, making WavPack a universal audio archiving solution.

DSD / SACD

  • https://en.wikipedia.org/wiki/Direct_Stream_Digital - DSD is the name of a trademark used by Sony and Philips for their system of digitally recreating audible signals for the Super Audio CD (SACD). DSD uses pulse-density modulation encoding—a technology to store audio signals on digital storage media that are used for the SACD. The signal is stored as delta-sigma modulated digital audio, a sequence of single-bit values at a sampling rate of 2.8224 MHz (64 times the CD audio sampling rate of 44.1 kHz, but only at 1/32768 of its 16-bit resolution). Noise shaping occurs by use of the 64-times oversampled signal to reduce noise and distortion caused by the inaccuracy of quantization of the audio signal to a single bit. Therefore, it is a topic of discussion whether it is possible to eliminate distortion in one-bit delta-sigma conversion.


  • YouTube: DSD Explained part 1 - At the 1996 AES Convention in Copenhagen the former CBS Research Lab introduced Direct Stream Digital, DSD for short, as an archive format that offered compact files while containing sufficient information for conversion to sample rates as high as 352.8 kHz - unheard of in 1996. From there it went to SACD and now to DSD downloads. What is DSD all about? Hans Beekhuyzen explains.


  • Super Audio CD decoder - a command-line application which takes a Super Audio CD source and extracts a 24-bit high resolution wave file. It handles both DST and DSD streams. The application reads the following input: SACD image files (*.iso), Sony DSF files (*.dsf), Philips DSDIFF files (*.dff). Supported output sample rates: 88.2 kHz, 96 kHz, 176.4 kHz, 192 kHz.


  • What is DoP (DSD over PCM)? - It involves taking groups of 16 adjacent 1-bit samples from a DSD stream and packing them into the lower 16 bits of a 24/176.4 data stream. Data from the other channel of the stereo pair is packed the same way. A specific marker code in the top 8 bits identifies the data stream as DoP, rather than PCM. The resulting DoP stream can be transmitted through existing 24/192-capable USB, AES, Dual AES or SPDIF interfaces to a DoP-compatible DAC, which reassembles the original stereo DSD data stream COMPLETELY UNCHANGED. If something goes wrong and the data stream is decoded as PCM, the output will be low-level noise with faint music in the background, so it fails safely. This can happen if the computer erases the marker code by applying a volume adjustment.
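The packing described above can be sketched in a few lines. This is an illustrative mono-channel sketch, not a reference implementation: it assumes the DSD stream arrives as bytes with the oldest bit as MSB, and uses the alternating DoP marker bytes 0x05/0xFA.

```python
DOP_MARKERS = (0x05, 0xFA)  # marker bytes alternate frame by frame

def dop_pack(dsd_bytes: bytes) -> list:
    """Pack a mono DSD byte stream into 24-bit DoP words (as Python ints).

    Each output word carries a marker in the top 8 bits and 16 DSD bits
    (two input bytes, oldest bit first) in the lower 16 bits.
    """
    words = []
    for i in range(0, len(dsd_bytes) - 1, 2):
        payload = (dsd_bytes[i] << 8) | dsd_bytes[i + 1]
        marker = DOP_MARKERS[(i // 2) % 2]
        words.append((marker << 16) | payload)
    return words
```

A DoP-aware DAC recognises the alternating marker bytes, strips them, and reassembles the original DSD stream unchanged.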


  • https://en.wikipedia.org/wiki/Digital_eXtreme_Definition - or DXD, a digital audio format that originally was developed for editing high-resolution recordings recorded in DSD, the audio standard used on Super Audio CD (SACD). As the 1-bit DSD format used on SACD is not suitable for editing, alternative formats such as DXD or DSD-Wide must be used during the mastering stage. In contrast with DSD-Wide or DSD Pure which offers level, EQ, and crossfade edits at the DSD sample rate (64fs, 2.822 MHz), DXD is a PCM signal with 24-bit resolution (8 bits more than the 16 bits used for Red Book CD) sampled at 352.8 kHz – eight times 44.1 kHz, the sampling frequency of Red Book CD. The data rate is 8.4672 Mbit/s per channel – three times that of DSD64.




  • https://github.com/DocMarty84/sacd - Converts SACD image files, Philips DSDIFF and Sony DSF files to 24-bit high resolution wave files. Handles both DST and DSD streams. THIS IS ONLY A CLONE OF THE OFFICIAL REPO!!!

MQA

AMR

Dolby Digital / AC-3

  • https://en.wikipedia.org/wiki/Dolby_Digital - the name for audio compression technologies developed by Dolby Laboratories, originally named Dolby Stereo Digital until 1994. Except for Dolby TrueHD, the audio compression is lossy. The first use of Dolby Digital was to provide digital sound in cinemas from 35mm film prints; today it is also used for other applications such as TV broadcast, radio broadcast via satellite, DVDs, Blu-ray discs and game consoles. This format has different names: Dolby Digital, DD (an abbreviation for Dolby Digital, often combined with channel count; for instance, DD 2.0, DD 5.1), AC-3 (Audio Codec 3, Advanced Codec 3, Acoustic Coder 3 - these are backronyms; Adaptive Transform Acoustic Coding 3 is a separate format developed by Sony). ATSC A/52 is the name of the standard.


  • https://en.wikipedia.org/wiki/Dolby_Digital_Plus - also known as Enhanced AC-3 (and commonly abbreviated as DD+ or E-AC-3, or EC-3) is a digital audio compression scheme developed by Dolby Labs for transport and storage of multi-channel digital audio. It is a successor to Dolby Digital (AC-3), also developed by Dolby, and has a number of improvements including support for a wider range of data rates (32 Kbit/s to 6144 Kbit/s), increased channel count and multi-program support (via substreams), and additional tools (algorithms) for representing compressed data and counteracting artifacts. While Dolby Digital (AC-3) supports up to 5 full-bandwidth audio channels at a maximum bitrate of 640 Kbit/s, E-AC-3 supports up to 15 full-bandwidth audio channels at a maximum bitrate of 6.144 Mbit/s. The full set of technical specifications for E-AC-3 (and AC-3) are standardized and published in Annex E of ATSC A/52:2012, as well as Annex E of ETSI TS 102 366 V1.2.1 (2008–08), published by the Advanced Television Systems Committee.


Dolby AC-4

  • https://en.wikipedia.org/wiki/Dolby_AC-4 - Dolby AC-4 is an audio compression standard supporting multiple audio channels and/or audio objects. Support for 5.1-channel audio is mandatory and additional channels up to 7.1.4 are optional. AC-4 provides a 50% reduction in bit rate over AC-3/Dolby Digital Plus.

Dolby TrueHD

  • https://en.wikipedia.org/wiki/Dolby_TrueHD - a lossless multi-channel audio codec developed by Dolby Laboratories which is used in home-entertainment equipment such as Blu-ray Disc players and A/V receivers. It is one of the successors to the Dolby Digital (AC-3) surround sound codec, which is used as the audio standard for the DVD-Video format. In this application, Dolby TrueHD competes with DTS-HD Master Audio, a lossless codec from DTS.

Dolby TrueHD uses Meridian Lossless Packing (MLP) as its mathematical basis for compressing audio samples. MLP is also used in the DVD-Audio format, but details of Dolby TrueHD and the MLP Lossless format as used on DVD-Audio differ substantially. A Dolby TrueHD bitstream can carry up to 16 discrete audio channels. Sample depths up to 24 bits/sample and audio sample rates up to 192 kHz are supported. Like the more common legacy codec Dolby Digital, Dolby TrueHD bitstreams carry program metadata. Metadata is separate from the coding format and compressed audio samples, but stores relevant information about the audio waveform and provides control over the decoding process. For example, dialog normalization and dynamic range compression are controlled by metadata embedded in the Dolby TrueHD bitstream. Similarly, a Dolby Atmos encoded Dolby TrueHD stream contains metadata to extract and place the objects in relevant positions. Dolby TrueHD is a variable bit-rate codec.

ATRAC

  • https://en.wikipedia.org/wiki/Adaptive_Transform_Acoustic_Coding - a family of proprietary audio compression algorithms developed by Sony. MiniDisc was the first commercial product to incorporate ATRAC in 1992. ATRAC allowed a relatively small disc like MiniDisc to have the same running time as CD while storing audio information with minimal loss in perceptible quality. Improvements to the codec in the form of ATRAC3, ATRAC3plus, and ATRAC Advanced Lossless followed in 1999, 2002, and 2006 respectively. Other MiniDisc manufacturers such as Sharp and Panasonic also implemented their own versions of the ATRAC codec. Sony has all but dropped the ATRAC related codecs in the USA and Europe and in their SonicStage powered 'Connect' Music Service (Sony's equivalent of iTunes) on 31 March 2008. However, it is being continued in Japan and various other countries.

NICAM

  • https://en.wikipedia.org/wiki/NICAM - an early form of lossy compression for digital audio. It was originally developed in the early 1970s for point-to-point links within broadcasting networks. In the 1980s, broadcasters began to use NICAM compression for transmissions of stereo TV sound to the public.

Vinyl

  • https://en.wikipedia.org/wiki/Phonograph#Stylus - a device, invented in 1877, for the mechanical recording and reproduction of sound. In its later forms, it is also called a gramophone (as a trademark since 1887, as a generic name in the UK since 1910), or, since the 1940s, a record player. The sound vibration waveforms are recorded as corresponding physical deviations of a spiral groove engraved, etched, incised, or impressed into the surface of a rotating cylinder or disc, called a "record". To recreate the sound, the surface is similarly rotated while a playback stylus traces the groove and is therefore vibrated by it, very faintly reproducing the recorded sound. In early acoustic phonographs, the stylus vibrated a diaphragm which produced sound waves which were coupled to the open air through a flaring horn, or directly to the listener's ears through stethoscope-type earphones.


  • https://en.wikipedia.org/wiki/Phonograph_cylinder - the earliest commercial medium for recording and reproducing sound. Commonly known simply as "records" in their era of greatest popularity (c. 1896–1915), these hollow cylindrical objects have an audio recording engraved on the outside surface, which can be reproduced when they are played on a mechanical cylinder phonograph. In the 1910s, the competing disc record system triumphed in the marketplace to become the dominant commercial audio medium.
  • https://en.wikipedia.org/wiki/Phonograph_record - also known as a gramophone record (especially in British English) or simply a record - an analog sound storage medium in the form of a flat disc with an inscribed, modulated spiral groove. The groove usually starts near the periphery and ends near the center of the disc. At first, the discs were commonly made from shellac; starting in the 1950s polyvinyl chloride became common. In recent decades, records have sometimes been called vinyl records, or simply vinyl, although this would exclude most records made until after World War II.


  • https://en.wikipedia.org/wiki/Magnetic_cartridge - more commonly called a phonograph cartridge or phono cartridge or (colloquially) a pickup, is an electromechanical transducer used in the playback of analog sound recordings called records on a record player, now commonly called a turntable because of its most prominent component but formally known as a phonograph in the US and a gramophone in the UK. The cartridge contains a removable or permanently mounted stylus, the tip - usually a gemstone like diamond or sapphire - of which makes physical contact with the record's groove. In popular usage and in disc jockey jargon, the stylus, and sometimes the entire cartridge, is often called the needle. As the stylus tracks the serrated groove, it vibrates a cantilever on which is mounted a permanent magnet which moves between the magnetic fields of sets of electromagnetic coils in the cartridge (or vice versa: the coils are mounted on the cantilever, and the magnets are in the cartridge). The shifting magnetic fields generate an electrical current in the coils. The electrical signal generated by the cartridge can be amplified and then converted into sound by a loudspeaker.


Playlist formats

playlist='play.m3u'; [ -f "$playlist" ] && rm "$playlist"; for f in *.mp3; do echo "$(pwd)/$f" >> "$playlist"; done
  # create an m3u playlist with absolute file paths


Playback

See also Playback, Sampling, Distros#Media

jplay2

  • https://gareus.org/oss/jplay2/start - jplay2 is a command-line audio player, gluing JACK, libsamplerate and liblo (OSC control), it plays a single file (no playlist), but with ffmpeg & libsndfile it plays every file one throws at it (even DVD-vobs or timemachine-w64 ;-) ). Once started, it's only possible to interact with jplay2 via OSC or jack-transport.

Random Parallel Player

loopnaut

mfl-gramophone

SQA: the Stuttery QLab Alternative

  • https://github.com/eeeeeta/sqa - This project aims to create an audio player & cue system for live shows and staged productions, à la Figure53's QLab. All its code is written in the Rust programming language, a new language that prevents memory unsafety and improves programming ergonomics. This one large repo contains many different crates that all help accomplish that aim.

PEAR

  • https://github.com/esologic/pear - a tool for sound installations. Take a directory with .wav files named in numeric order and play them over usb sound devices attached to the host computer over and over forever, looping all files once the longest one finishes.

Recording

fmedia

  • fmedia - fast media player/recorder/converter - a fast asynchronous media player/recorder/converter for Windows, Linux and FreeBSD. It provides smooth playback and recording even if devices are very slow. It's highly customizable and can be easily extended with additional plugins. Its low CPU & memory consumption saves energy when running on a notebook's battery. Play or convert audio files, record new audio tracks from microphone, save songs from Internet radio, and much more! fmedia is free and open-source project, and you can use it as a standalone application or as a library for your own software. fmedia can decode: .mp3, .ogg (Vorbis, Opus), .opus, .m4a/.mp4 (AAC, ALAC, MPEG), .mka/.mkv (AAC, ALAC, MPEG, Vorbis), .avi (AAC, MPEG), .aac, .mpc, .flac, .ape, .wv, .wav. fmedia can encode into: .mp3, .ogg, .opus, .m4a (AAC), .flac, .wav.


arecord

arecord -D hw:0 -f cd test.wav
  # record from the first ALSA device in CD quality (16-bit little-endian, 44.1 kHz, stereo)

audio-recorder

Ecasound

  • Ecasound is a software package designed for multitrack audio processing. It can be used for simple tasks like audio playback, recording and format conversions, as well as for multitrack effect processing, mixing, recording and signal recycling. Ecasound supports a wide range of audio inputs, outputs and effect algorithms. Effects and audio objects can be combined in various ways, and their parameters can be controlled by operator objects like oscillators and MIDI-CCs. A versatile console mode user-interface is included in the package.


ecasound -a:1,2 -i jack -o jack -a:1 -elv2:http://calf.sourceforge.net/plugins/Compressor,0,1,0,0,0,0,0.015625,20,0.01,2000,1,1,0,0,1,1 -a:2 -elv2:http://calf.sourceforge.net/plugins/Limiter,0,1,0.587231,0,0,0,0,0,0,0,0,0.0625,0.1,1000,1,1,0,0.5,4
  # route JACK audio through the Calf Compressor and Limiter LV2 plugins on two chains


  • Nama - manages multitrack recording, mixing and mastering using the Ecasound audio processing engine developed by Kai Vehmanen.
  • Ecasound Mastering Interface - a Python front end to ecasound. It looks a lot like Rackmount effect and can be used to create an Ecasound Chain Setup while playing with parameters in real time. It supports mixing, recording, filtering, and processing and can export to ECS files. It supports all ecasound options, chain operators, and controllers.
  • Visecas - a graphical user interface for Ecasound (http://eca.cx/ecasound), a software package written by Kai Vehmanen (k@eca.cx) which is designed for multitrack audio processing. It starts Ecasound as a child process and communicates via a pipe using Ecasound's InterActive Mode (IAM) commands.


Meterec

  • meterec works as a basic multitrack tape recorder. The aim of this software is to minimise the user's interaction with the computer and allow them to focus on their instrumental performance. For this reason meterec's features are minimal. One of the main "limitations" is that meterec can only restart from time 0:00:00.00: if you screw up one take, start it over again! Rather than learning how to use a specific piece of software to correct what you screwed up, meterec forces you to learn and master your instrument. The good news is that previous takes are kept in the take history, and if in the end the first one was the best you could play, you can choose it in your final mix.

jack_capture

  • jack_capture is a program for recording soundfiles with jack. The default operation will record what you hear in your loudspeakers into a stereo wav file.



jrec2

  • jrec2 - a simple patched jack_capture (a patch to the jack_capture example client that implements silence detection and splitting of output files); it can call hooks (invoke 3rd-party software) upon detecting silence or audio. It includes an optional random-playback control script that was used in an installation to record voice and, on detecting silence, play back random snippets of previously recorded material.

jack-record

  • jack-record is a light-weight JACK capture client to write an arbitrary number of channels to disk.

jack_playrec

screcord.lv2

jamRecord.lv2

Freeze

  • https://github.com/nickolas360/freeze - an LV2 plugin for freezing tracks in a digital audio workstation—that is, temporarily rendering a track as audio to reduce CPU/DSP load, as tracks with large chains of CPU-heavy effects can make buffer underruns (xruns) quite common. Some DAWs like Ardour support track freezing to a certain extent, but Ardour, for example, cannot freeze MIDI tracks.

QJackRcd

  • QJackRcd is a simple stereo recorder for JACK with a few features such as silence processing for automatic pause, file splitting, and background file post-processing.

audio coffin

timemachine

  • JACK Timemachine - I used to always keep a minidisc recorder in my studio running in a mode where, when you pressed record, it wrote the last 10 seconds of audio to the disk and then caught up to realtime and kept recording. The recorder died and I haven't been able to replace it, so this is a simple JACK app to do the same job. It has the advantage that it never clips and can be wired to any part of the JACK graph.

Rotter

  • Rotter is a Recording of Transmission / Audio Logger for JACK. It was designed for use by radio stations, who are legally required to keep a recording of all their output. Rotter runs continuously, writing to a new file every hour. Rotter can output files in several different structures: either all files in a single directory, or a directory hierarchy. The advantage of using a folder hierarchy is that you can store related files in the hour's directory.
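The hourly-rotation idea boils down to deriving the output path from the wall clock. A minimal sketch; the directory layout and .flac suffix here are hypothetical, not rotter's actual naming options:

```python
import time

def hourly_log_path(base: str, t=None) -> str:
    """Return an hourly archive path like base/2019/11/02/17.flac.

    Illustrative layout only; rotter's own naming schemes differ.
    """
    t = time.gmtime() if t is None else t
    return time.strftime(f"{base}/%Y/%m/%d/%H.flac", t)
```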


Transcoding

  • https://en.wikipedia.org/wiki/Transcoding - the direct digital-to-digital conversion of one encoding to another, such as for movie data files (e.g., PAL, SECAM, NTSC), audio files (e.g., MP3, WAV), or character encoding (e.g., UTF-8, ISO/IEC 8859). This is usually done in cases where a target device (or workflow) does not support the format or has limited storage capacity that mandates a reduced file size, or to convert incompatible or obsolete data to a better-supported or modern format.


SoundConverter

  • SoundConverter - the leading audio file converter for the GNOME Desktop. It reads anything GStreamer can read (Ogg Vorbis, AAC, MP3, FLAC, WAV, AVI, MPEG, MOV, M4A, AC3, DTS, ALAC, MPC, Shorten, APE, SID, MOD, XM, S3M, etc...), and writes to Opus, Ogg Vorbis, FLAC, WAV, AAC, and MP3 files, or use any GNOME Audio Profile.


Perl Audio Converter

  • Perl Audio Converter - A tool for converting multiple audio types from one format to another. It supports the following audio formats: 3G2, 3GP, 8SVX, AAC, AC3, ADTS, AIFF, AL, AMB, AMR, APE, AU, AVR, BONK, CAF, CDR, CVU, DAT, DTS, DVMS, F32, F64, FAP, FLA, FLAC, FSSD, GSRT, HCOM, IMA, IRCAM, LA, MAT, MAUD, MAT4, MAT5, M4A, MP2, MP3, MP4, MPC, MPP, NIST, OFF, OFR, OFS, OPUS, OGA, OGG, PAF, PRC, PVF, RA, RAW, RF64, SD2, SF, SHN, SMP, SND, SOU, SPX, SRN, TAK, TTA, TXW, VOC, VMS, VQF, W64, WAV, WMA, and WV.


Secret Rabbit Code

  • Secret Rabbit Code - aka libsamplerate, is a Sample Rate Converter for audio. One example of where such a thing would be useful is converting audio from the CD sample rate of 44.1 kHz to the 48 kHz sample rate used by DAT players. SRC is capable of arbitrary and time-varying conversions: from downsampling by a factor of 256 to upsampling by the same factor. Arbitrary in this case means that the ratio of input and output sample rates can be an irrational number. The conversion ratio can also vary with time for speeding up and slowing down effects.

SRC provides a small set of converters to allow quality to be traded off against computation cost. The current best converter provides a signal-to-noise ratio of 145dB with -3dB passband extending from DC to 96% of the theoretical best bandwidth for a given pair of input and output sample rates. Since the library has few dependencies beyond that provided by the standard C library, it should compile and work on just about any operating system. It is known to work on Linux, MacOSX, Win32 and Solaris. With some relatively minor hacking it should also be relatively easy to port it to embedded systems and digital signal processors.
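To make ratio-based conversion concrete, here is a naive linear-interpolation resampler. This is only a conceptual sketch of the idea, not libsamplerate's method: its sinc converters exist precisely because linear interpolation has poor stop-band rejection.

```python
def resample_linear(samples, ratio):
    """Resample a mono signal by ratio = out_rate / in_rate via linear interpolation.

    E.g. ratio = 48000 / 44100 converts CD audio to the DAT rate.
    """
    if not samples:
        return []
    n_out = round(len(samples) * ratio)
    last = len(samples) - 1
    out = []
    for i in range(n_out):
        pos = i / ratio                  # fractional position in the input
        j = min(int(pos), last)
        frac = pos - j
        a = samples[j]
        b = samples[min(j + 1, last)]
        out.append(a + (b - a) * frac)   # interpolate between neighbours
    return out
```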


audiomap

  • audiomap - a program which converts from any audio format to any other in a uniform & sane fashion so you don't have to learn all the options for conversion program x. It will preserve all tags in the process. Its goal is bullcrap-free conversion. I wrote it because, while there are plenty of shell scripts out there to convert things to/from a few formats, they suck at handling weird characters and often even spaces! On top of this, they usually do not properly preserve metadata. Audiomap works with funky chars and spaces, and preserves metadata - all whilst providing encoding/decoding through many formats.


resample


transfercoder


AudioMove

  • AudioMove - a simple, easy to use GUI-based batch audio file copy-and-conversion program.


flacsync


lackey


caudec

  • caudec - a command-line utility for GNU/Linux and OS X that transcodes (converts) audio files from one format (codec) to another. It leverages multi-core CPUs with lots of RAM by using a ramdisk and running multiple processes concurrently (one per file and per codec). It is Free Software, licensed under the GNU General Public License.

ffcvt

Analysis

Sonic Visualiser

  • Sonic Visualiser - an application for viewing and analysing the contents of music audio files. The aim of Sonic Visualiser is to be the first program you reach for when you want to study a musical recording rather than simply listen to it. We hope Sonic Visualiser will be of particular interest to musicologists, archivists, signal-processing researchers and anyone else looking for a friendly way to take a look at what lies inside the audio file. Sonic Visualiser is Free Software, distributed under the GNU General Public License (v2 or later) and available for Linux, OS/X, and Windows. It was developed at the Centre for Digital Music at Queen Mary, University of London.

Don't forget to install at least the QM vamp plugins.

Baudline

  • Baudline is a time-frequency browser designed for scientific visualization of the spectral domain. Signal analysis is performed by Fourier, correlation, and raster transforms that create colorful spectrograms with vibrant detail. Conduct test and measurement experiments with the built in function generator, or play back audio files with a multitude of effects and filters. The baudline signal analyzer combines fast digital signal processing, versatile high speed displays, and continuous capture tools for hunting down and studying elusive signal characteristics.


Friture

SoundRuler

  • SoundRuler is a tool for measuring and graphing sound and for teaching acoustics. Its visual interactive approach to analysis brings you the best of two worlds: the control of manual analysis and the objectivity and speed of automated analysis.

Binary download needs 32-bit libxp to be installed.

BRP-PACU

  • BRP-PACU - A cross platform dual channel FFT based Acoustic Analysis Tool to help engineers analyze live professional sound systems using the transfer function. One feature is the ability to capture four sample plots, average them, and invert to aid in final EQ.

DSP

BRP-PACU

japa


Spek

  • Spek - helps to analyse your audio files by showing their spectrogram. Spek is free software available for Unix, Windows and Mac OS X.

Visual only.

zrtstr

  • zrtstr is a small command-line application for detecting faux-stereo WAV files, that is, files with two identical channels that should have been saved as mono. Such files are sometimes generated by audio-editing software and DAWs (I'm looking at you, old Cubase 5). Having gotten tired of receiving such files from clients for mixing, since they use twice the necessary space and require twice the processing power, I decided to deal with this nuisance once and for all. zrtstr is a cross-platform application which runs very fast, thanks to being written in Rust.

DFasma

  • DFasma is a free open-source software tool used to compare audio files in time and frequency. The comparison is first visual, using waveforms and spectra. It is also possible to listen to time-frequency segments in order to allow perceptual comparison. It is basically dedicated to analysis. Even though there are basic functionalities to align the signals in time and amplitude, this software does not aim to be an audio editor.

ASAnnotation

  • AS Annotation is an application for the analysis and automated or manual annotation of sound files. It features state of the art sound analysis algorithms, specialized sound inspection tools and can import Standard MIDI files. ASAnnotation is based on AudioSculpt, a sound analysis and transformation software developed at IRCAM since 1996. In addition to the analysis and annotation features present in AS Annotation, AudioSculpt comes with state of the art sound processing, mostly based on an enhanced version of the phase vocoder. To store and exchange analysis and annotation data, ASAnnotation can use two formats: MIDI for notes and text, and SDIF for all analyses. The MIDI support facilitates the verification, alignment and correction of Standard MIDI Files against soundfiles. SDIF is a specialized format for sound description data, which combines very high precision with efficiency and interchangeability. Numerous other programs support SDIF, such as Max/MSP, OpenMusic, CLAM and SPEAR. A collection of utility programs can be used to convert SDIF files to text.

harmony-analyser

Toscanalyzer

  • Toscanalyzer is a powerful audio analysis tool for mixing and mastering. Toscanalyzer helps you to mix and master better. It is not only an analysis tool but a complete guide to understanding why your song sounds the way it sounds. Toscanalyzer lets you compare your project audibly and visually to any reference songs in a very convenient way. Toscanalyzer offers a clear project view including many options to analyze. The analysis gives you a detailed report about possible problems and, in addition, clear guidance on how to fix them.

Java. Doesn't work for me.

SPAN

Raven Lite

pyAudioAnalysis

QLoud

Room EQ Wizard

  • REW - free room acoustics analysis software for measuring and analysing room and loudspeaker responses. The audio analysis features of REW help you optimise the acoustics of your listening room, studio or home theater and find the best locations for your speakers, subwoofers and listening position. It includes tools for generating audio test signals; measuring SPL and impedance; measuring frequency and impulse responses; measuring distortion; generating phase, group delay and spectral decay plots, waterfalls, spectrograms and energy-time curves; generating real time analyser (RTA) plots; calculating reverberation times; calculating Thiele-Small parameters; determining the frequencies and decay times of modal resonances; displaying equaliser responses and automatically adjusting the settings of parametric equalisers to counter the effects of room modes and adjust responses to match a target curve.
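The automatic parametric-EQ adjustment REW performs rests on standard peaking-filter math. A minimal sketch of the widely used Audio EQ Cookbook (RBJ) peaking-filter coefficients — function name is hypothetical, not REW's API:

```python
import math

def peaking_eq(f0, gain_db, q, fs):
    """Biquad coefficients for a peaking EQ (Audio EQ Cookbook formulas).

    Returns (b, a), normalized so a[0] == 1.
    """
    A = 10 ** (gain_db / 40.0)          # sqrt of linear gain
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * A, -2 * math.cos(w0), 1 - alpha * A]
    a = [1 + alpha / A, -2 * math.cos(w0), 1 - alpha / A]
    return [bi / a[0] for bi in b], [ai / a[0] for ai in a]
```

A room-EQ tool would fit a handful of such filters against the measured response; note that at 0 dB gain the filter collapses to unity.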

Gist

Mixers

Command-line mixers

amixer

  • amixer - a command-line program for controlling the mixer in the ALSA soundcard driver. amixer supports multiple soundcards.
amixer -c 0 | pcregrep "control"
  # list the mixer controls on card 0

JackMiniMix

  • JackMiniMix - a simple mixer for the Jack Audio Connection Kit with an OSC based control interface. It supports a user configurable number of stereo inputs, which can then be queried and controlled by sending it OSC messages. It is released under the GPL license.

kbd2jackmix

  • https://github.com/dsheeler/kbd2jackmix - Listen on a keyboard event device and respond to key combinations with jack midi volume messages for use with a midi-aware jack mixer, like jackmix or jack_mixer.

jack_switch

ncurses mixers

alsamixer

aumix

Graphical mixers

Kmix

  • Kmix - an application to allow you to change the volume of your sound card. Though small, it is full-featured, and it supports several platforms and sound drivers. Features: Support for ALSA and OSS sound systems, Plasma Desktop integrated on-screen-display for volume changes

Recommended.

QasMixer

  • QasMixer - a desktop mixer application for ALSA's "Simple Mixer Interface".

QasHctl

  • QasHctl - a mixer for ALSA's more complex "High level Control Interface".

alsamixergui

  • alsamixergui - a FLTK based frontend for alsamixer. It is written directly on top of the alsamixer source, leaving the original source intact, only adding a couple of ifdefs and some calls to the GUI part, so it provides exactly the same functionality, but with a graphical user interface.

gnome-alsamixer

volti

Buggy, forgets card sometimes, menu/preferences don't open.

Xmixer

XMMIX

  • XMMIX - Motif Audio Mixer

NewMixer

Jack mixers

Non Mixer

/strip/[STRIP_NAME]/[MODULE_NAME]/[PARAMETER_NAME]

See also non-midi-mapper [58]

mx2482

  • https://github.com/cybercatalyst/mx2482 - uses QJackAudio, works with JACK. 24 channels routed to 8 subgroups, each with direct out. Three-band parametric EQ for each channel. Aux send/return for each channel, so you can hook in other effects processors. Save and restore complete EQ states. Clean source code; free software licensed under the GPL. Uses the latest Qt5, which means it runs on all major platforms.

JackMix

jack_mixer

  • https://github.com/relascope/jack_mixer - a GTK+ JACK audio mixer app with a look similar to its hardware counterpart. Inputs to outputs, plus send outputs; MIDI control of level and stereo balance; manual send groups, solo and mute.

jackmixdesk

  • jackmixdesk - an audio mixer for JACK with an OSC control interface and LASH support. It has a configurable number of inputs and pre/post sends/outs which can be controlled by sending it OSC messages. There is a XML config file and a GTK interface.

JackMaster

  • JackMaster - "Master Console" for the jack-audio-connection-kit. Number of inputs/subs and screen layout can only be changed by recompiling.

qtjmix

jackmaster

injector

MU1

  • MU1 - a simple Jack app used to organise stereo monitoring. It was written originally for use with Ardour2, but still useful with Ardour3 as it provides some extra functions.

LV2 mixers

Some of these are also standalone.

xfade.lv2

balance.lv2

  • https://github.com/x42/balance.lv2 - for stereo balance control with optional per-channel delay. balance.lv2 facilitates adjusting stereo-microphone recordings (X-Y, A-B, ORTF), but it is also generally useful as an "input channel conditioner". It allows for attenuating the signal on one of the channels as well as delaying the signals (moving away from the microphone). To round off the feature set, channels can be swapped or the signal can be downmixed to mono after the delay.

sonejostudios






  • https://github.com/sonejostudios/Mixer4x - A simple 4-channel stereo mixer. The main goal is to use it as a submixer on a 4 channel track, but you can use it everywhere you need a small 4 channel stereo mixer.




BalanceGain / BalanceWidth

vopa


x42-mixtrix

matrixmixer.lv2

BadAmp

BAmp

Simple Amplifier

Plujain

  • https://github.com/Houston4444/plujain-plugins - utility LV2 plugins. The fadeswitch is a mono audio switch that progressively fades between the two output channels. The triswitch follows the same principle with three outputs. For the quadriswitch, guess!

Kn0ck0ut

  • Kn0ck0ut - takes two mono 44.1 kHz inputs and spectrally subtracts one from the other. It can be used to help create 'acapellas' - to extract vocals from a track - if an instrumental version (or section) of the track is available.
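The core of spectral subtraction is simple per-bin arithmetic: subtract one signal's magnitude spectrum from the other's while keeping the first signal's phase. A minimal sketch operating on precomputed FFT bins (Kn0ck0ut's actual processing is more elaborate):

```python
import cmath

def spectral_subtract(spec_a, spec_b):
    """Per-bin magnitude subtraction: keep A's phase, subtract B's
    magnitude, clamping at zero.

    spec_a, spec_b: equal-length lists of complex FFT bins.
    """
    out = []
    for a, b in zip(spec_a, spec_b):
        mag = max(abs(a) - abs(b), 0.0)
        out.append(cmath.rect(mag, cmath.phase(a)))
    return out
```

In a real extractor this runs per short-time FFT frame, with windowing and overlap-add on resynthesis.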

intersect-lv2

  • https://github.com/sboukortt/intersect-lv2 - an LV2 plugin which, given a stereo audio stream, “expands” it to three channels. Everything that is present in both input channels will be in the center channel of the output, and what is specific to each channel will be in the corresponding output channel. This can be useful, for example, to rediscover some of your favorite music by hearing things that you had never noticed before. (With that said, note that it does not necessarily work equally well on all songs, depending on how they were mixed.)

sm.lv2

  • https://github.com/nettings/sm.lv2 - A simple speaker management LV2 plugin with global master volume and per-channel trim, delay, and low-shelf. This plugin lets you optimize a stereo or multichannel speaker system to your listening environment.

tinyamp.lv2

Other mixers

ladspa-xfade

BackgroundMusic

faderratic

  • faderratic - brings you cross-fading of 2 stereo inputs, but with a mind of its own and a ton of options to change the fade shape, length, limits, frequency and probability. faderratic works by generating a pulse on a tempo-sync frequency, and depending on the probability it may trigger a cross-fade event. You can optionally make the fader auto-return to either side and if you feel like it, trigger a fade manually or control the fader movement totally manually. Windows VST.
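Cross-faders like these usually use an equal-power fade law so that loudness stays roughly constant as one source replaces the other. A minimal sketch (not faderratic's actual curve, which is user-configurable):

```python
import math

def crossfade_gains(x):
    """Equal-power crossfade position x in [0, 1], from source A to B.

    Returns (gain_a, gain_b); gain_a**2 + gain_b**2 == 1 throughout,
    so summed power (and perceived loudness) stays constant.
    """
    return math.cos(x * math.pi / 2), math.sin(x * math.pi / 2)
```

A linear crossfade (1-x, x) instead dips about 3 dB in the middle for uncorrelated sources, which is why the sine/cosine law is the common default.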

Spatial audio

  • https://en.wikipedia.org/wiki/Sound_localization - a listener's ability to identify the location or origin of a detected sound in direction and distance. It may also refer to the methods in acoustical engineering to simulate the placement of an auditory cue in a virtual 3D space (see binaural recording, wave field synthesis). The sound localization mechanisms of the mammalian auditory system have been extensively studied. The auditory system uses several cues for sound source localization, including time- and level-differences (or intensity-difference) between both ears, spectral information, timing analysis, correlation analysis, and pattern matching.


  • https://en.wikipedia.org/wiki/Spatial_music - composed music that intentionally exploits sound localization. Though present in Western music from biblical times in the form of the antiphon, as a component specific to new musical techniques the concept of spatial music (Raummusik, usually translated as "space music") was introduced as early as 1928 in Germany. The term spatialisation is connected especially with electroacoustic music to denote the projection and localization of sound sources in physical or virtual space or sound's spatial movement in space.



  • https://en.wikipedia.org/wiki/Soundfield_microphone - an audio microphone composed of four closely spaced subcardioid or cardioid (unidirectional) microphone capsules arranged in a tetrahedron. It was invented by Michael Gerzon and Peter Craven, and is a part of, but not exclusive to, Ambisonics, a surround sound technology. It can function as a mono, stereo or surround sound microphone, optionally including height information.


  • SpatDIF - Spatial Sound Description Interchange Format. SpatDIF is a format that describes spatial sound information in a structured way, in order to support real-time and non-real-time applications. The format serves to describe, store and share spatial audio scenes across audio applications and concert venues.
  • https://github.com/GRIS-UdeM/SpatGRIS - a plugin (Mac AU/VST and Windows VST format) designed for composing multichannel space. It allows the user to spatialize sound in 2D (up to 16 speakers) or in 3D (up to 128 speakers) under a dome of speakers (with the ServerGRIS, under development). SpatGRIS is a fusion of two former plugins by the GRIS, OctoGRIS and ZirkOSC, with a lot of new features.



  • https://en.wikipedia.org/wiki/QSound - the original name for a positional three-dimensional (3D) sound processing algorithm from QSound Labs that creates 3D audio effects from multiple monophonic sources and sums the outputs to two channels for presentation over regular stereo speakers. QSound was eventually re-dubbed "Q1" after the introduction of "Q2", a positional 3D algorithm for headphones. Later multi-speaker surround system support was added to the positional 3D process, the QSound positional 3D audio process became known simply as "Q3D". QSound was founded by Larry Ryckman (CEO), Danny Lowe and John Lees. Jimmy Iovine served as SVP of Music and Shelly Yakus as VP of Audio Engineering in its formative years.

Mono

  • https://en.wikipedia.org/wiki/Monaural - monophonic sound reproduction (often shortened to mono) is sound intended to be heard as if it were emanating from one position. This contrasts with stereophonic sound or stereo, which uses two separate audio channels to reproduce sound from two microphones on the right and left side, which is reproduced with two separate loudspeakers to give a sense of the direction of sound sources. In mono, only one loudspeaker is necessary, but, when played through multiple loudspeakers or headphones, identical signals are fed to each speaker, resulting in the perception of one-channel sound "imaging" in one sonic space between the speakers (provided that the speakers are set up in a proper symmetrical critical-listening placement). Monaural recordings, like stereo ones, typically use multiple microphones fed into multiple channels on a recording console, but each channel is "panned" to the center. In the final stage, the various center-panned signal paths are usually mixed down to two identical tracks, which, because they are identical, are perceived upon playback as representing a single unified signal at a single place in the soundstage. In some cases, multitrack sources are mixed to a one-track tape, thus becoming one signal. In the mastering stage, particularly in the days of mono records, the one- or two-track mono master tape was then transferred to a one-track lathe intended to be used in the pressing of a monophonic record. Today, however, monaural recordings are usually mastered to be played on stereo and multi-track formats, yet retain their center-panned mono soundstage characteristics.

Monaural sound has largely been replaced by stereo sound in most entertainment applications, but remains the standard for radiotelephone communications, telephone networks, and audio induction loops for use with hearing aids. FM radio stations broadcast in stereo, while most AM radio stations broadcast in mono. (Although an AM stereo broadcast standard exists, few AM stations are equipped to use it.) A few FM stations—notably talk-radio stations—choose to broadcast in monaural because of the slight advantage in signal strength and bandwidth the standard affords over a stereophonic signal of the same power.


Stereo

  • https://en.wikipedia.org/wiki/Stereophonic_sound - or, more commonly, stereo, is a method of sound reproduction that creates an illusion of multi-directional audible perspective. This is usually achieved by using two or more independent audio channels through a configuration of two or more loudspeakers (or stereo headphones) in such a way as to create the impression of sound heard from various directions, as in natural hearing. Thus the term "stereophonic" applies to so-called "quadraphonic" and "surround-sound" systems as well as the more common two-channel, two-speaker systems. It is often contrasted with monophonic, or "mono" sound, where audio is heard as coming from one position, often centered in the sound field (analogous to a visual field). In the 2000s, stereo sound is common in entertainment systems such as broadcast radio and TV, recorded music and the cinema.
  • https://en.wikipedia.org/wiki/Stereo_imaging - the aspect of sound recording and reproduction concerning the perceived spatial locations of the sound source(s), both laterally and in depth. An image is considered to be good if the location of the performers can be clearly located; the image is considered to be poor if the location of the performers is difficult to locate. A well-made stereo recording, properly reproduced, can provide good imaging within the front quadrant; a well-made Ambisonic recording, properly reproduced, can offer good imaging all around the listener and even including height information.



  • https://en.wikipedia.org/wiki/ORTF_stereo_technique - a microphone technique used to record stereo sound. It was devised around 1960 at the Office de Radiodiffusion Télévision Française (ORTF) at Radio France. ORTF combines both the volume difference provided as sound arrives on- and off-axis at two cardioid microphones spread to a 110° angle, and the timing difference as sound arrives at the two microphones spaced 17 cm apart.
  • https://en.wikipedia.org/wiki/NOS_stereo_technique - a method of capturing stereo sound. The Nederlandse Omroep Stichting (NOS, English: Dutch Broadcast Foundation) arrived at this stereo main-microphone system through a number of practical experiments in the 1960s. The system gives a quite even distribution of the phantom sources (hearing-event direction) across the stereo loudspeaker base, using two small cardioid microphones and a recording angle of ±40.5° = 81°. Empirically, the system settled on an axial angle of α = ±45° = 90° and a microphone distance (microphone basis) of a = 30 cm.
  • https://en.wikipedia.org/wiki/Blumlein_pair - the name for a stereo recording technique invented by Alan Blumlein for the creation of recordings that, upon replaying through headphones or loudspeakers, recreate the spatial characteristics of the recorded signal. The pair consists of an array of two matched microphones that have a bi-directional (figure 8) pickup pattern. They are positioned 90° from each other. Ideally, the transducers should occupy the same physical space; since this cannot be achieved, the microphone capsules are placed as close to each other as physically possible, generally with one centered directly above the other. The array is oriented so that the line bisecting the angle between the two microphones points towards the sound source to be recorded (see diagram). The pickup patterns of the pair, combined with their positioning, deliver a high degree of stereo separation in the source signal as well as the room ambiance.
  • https://en.wikipedia.org/wiki/Jecklin_disk - a sound-absorbing disk placed between two microphones to create an acoustic "shadow" from one microphone to the other. The resulting two signals can possibly produce a pleasing stereo effect. A matching pair of small-diaphragm omnidirectional microphones is always used with a Jecklin disk.

Crossfeed

  • https://en.wikipedia.org/wiki/Crossfeed - the process of blending the left and right channels of a stereo audio recording. It is generally used to reduce the extreme channel separation often featured in early stereo recordings (e.g., where instruments are panned entirely on one side or the other), or to make audio played through headphones sound more natural, as when listening to a pair of external speakers.


  • https://github.com/LnxPrgr3/crossfeed [60] Headphones have extreme stereo separation--the right ear doesn't get to hear much of what's going on on the left. This leads to the impression the music's coming from inside your head, and sounds especially weird when instruments are panned hard to one side or the other. Crossfeed filters aim to fix this by letting the channels mix a little, but in a controlled way. The goal is to mimic what happens naturally when listening to music on speakers.
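The basic idea can be sketched in a few lines: feed an attenuated, slightly delayed copy of each channel into the opposite one, mimicking the path around the head. This is a naive illustration with made-up parameter values, not the filter design any of the projects above actually use (real crossfeed filters also low-pass the crossfed signal):

```python
def crossfeed(left, right, gain=0.3, delay=13):
    """Naive crossfeed sketch: mix a delayed, attenuated copy of each
    channel into the other. delay is in samples; ~13 samples at 44.1 kHz
    is roughly the ~0.3 ms interaural delay. Values are illustrative.
    """
    out_l, out_r = [], []
    for i in range(len(left)):
        dl = right[i - delay] if i >= delay else 0.0  # delayed opposite channel
        dr = left[i - delay] if i >= delay else 0.0
        out_l.append(left[i] + gain * dl)
        out_r.append(right[i] + gain * dr)
    return out_l, out_r
```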


  • CIP Audio Player - Java. In a normal stereo setup the spot where the stereo image really creates the best sound stage is very small. By calculating a third signal for a center speaker it is possible to enhance the stereo soundstage for more than one listener.


Leslie speaker

  • https://en.wikipedia.org/wiki/Leslie_speaker - a combined amplifier and two-way loudspeaker that projects the signal from an electric or electronic instrument, while modifying the sound by rotating the loudspeakers. It is most commonly associated with the Hammond organ, though it was later used for the guitar and other instruments. A typical Leslie speaker contains an amplifier, and a treble and bass speaker—though specific components depend upon the model. A musician controls the Leslie speaker by either an external switch or pedal that alternates between a slow and fast speed setting, known as "chorale" and "tremolo".
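The rotation produces both amplitude modulation (the horn pointing toward and away from the listener) and Doppler-based pitch modulation. A very rough sketch of the amplitude part only, with illustrative rotor rates:

```python
import math

def leslie_am(samples, fs, rate_hz=6.6, depth=0.5):
    """Crude Leslie approximation: sinusoidal amplitude modulation at the
    rotor rate ("tremolo" is roughly 6-7 Hz, "chorale" under 1 Hz).
    A real Leslie also Doppler-shifts the signal; that part is omitted.
    """
    out = []
    for n, x in enumerate(samples):
        # modulator swings between (1 - depth) and 1.0
        mod = 1.0 - depth * 0.5 * (1.0 + math.sin(2 * math.pi * rate_hz * n / fs))
        out.append(x * mod)
    return out
```

Convincing Leslie emulations add a modulated delay line (Doppler), separate horn/drum rotors, and rotor inertia when switching speeds.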

to sort

  • BLS1 is a digital realisation of the 'Blumlein Shuffler', invented by Alan Blumlein in the early 1930s and analysed in detail by Michael Gerzon in a paper presented at the 1993 AES Convention in San Francisco.


  • MONSTR - a multiband stereo imaging plugin, available for Windows, Mac, and Linux. It allows the user to control the stereo width of a sound in 3 different frequency bands, and so can be used to perform common tasks such as narrowing the bass frequencies while adding width to the highs, allowing fine control over the stereo image of your mix.


  • Holophon - a set of tools for the programming and real-time manipulation of sound trajectories across different speakers. Its main development is Holo-Edit, a graphical and algorithmic editor of sound trajectories. Holo-Edit makes it possible to draw and graphically edit trajectories across a complex sound system, and to program those trajectories with different automatic functions. It allows for the precise positioning of multiple sounds in time and space (defined by a set of speakers) by associating sounds with trajectories - sets of points defined by their position in space (x, y, z) and their date. Holo-Edit also supports the SDIF format, so it is possible to generate/transform sound trajectories from SDIF data. macOS, plus Holoboule for pd-extended.



  • https://github.com/sboukortt/intersect-lv2 - given a stereo audio stream, “expands” it to three channels. Everything that is present in both input channels will be in the center channel of the output, and what is specific to each channel will be in the corresponding output channel.



Windows / Mac

  • MStereoExpander - offers expansion based on either actual samples or on delay, and provides stereo field correction to increase or reduce the clarity of the spatial differences between channels. It is fully mono-compatible.
  • MStereoProcessor - an advanced mastering multiband stereo analyzer and enhancer plugin, which lets you easily control the stereo image and the necessary perception of depth and space.
  • A1StereoControl - expand or limit the STEREO WIDTH of your tracks using only one single knob. This powerful technique can be used on single tracks or groups tracks while mixing or even on a master bus in final mastering situations. Windows/Mac
  • Proximity - an easy to use distance “pan-pot” based on several psycho-acoustic models. The idea is to give the mixing engineer a reliable tool to manipulate the “depth” of several sound sources in a straightforward and convincing manner.
  • Voxengo MSED - a professional audio encoder-decoder plugin for mid-side processing which is able to encode (split) the incoming stereo signal into two components: mid-side pair, and vice versa: decode mid-side signal pair into stereo signal. MSED is also able to work in the “inline” mode with the ability to adjust mid and side channels’ gain and panning without the need of using two plugin instances in sequence. MSED can be used to flip the phase of the mid and side channels by 180 degrees, and swap the stereo channels, and to extract the mid or side channel. MSED features the “plasma” vector scope, stereo correlation and balance meters which make it easier to monitor the stereo information present in the audio signal.
  • Voxengo Stereo Touch - This professional audio plugin implements a classic technique of transforming a monophonic track into spacious stereophonic track by means of mid/side coding technique. Stereo Touch is most effective on monophonic sounds without overly sharp transients: it works great for both acoustic and electric/overdriven guitars, synthetic pad sounds and even vocals.  By means of this plugin you can easily get spacious and even “surround” sounding tracks, without utilizing a double-tracked recording technique.
  • Upstereo - A FREE stereo enhancer. Stereo width slider going from mono to wide, bringing the stereo image out and towards the listener. Loudness control boost. Loudness overdrive option. Subtle Air & Bass boosters to lift the audio and help it 'breathe'. Movable 3D interface, with changeable colours and light positions. Very low CPU usage.
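Several of the plugins above (MSED, Stereo Touch, the various width controls) are built on mid/side coding, and the math is tiny. A minimal per-sample sketch:

```python
def ms_encode(left, right):
    """Mid/side encode: mid = (L+R)/2, side = (L-R)/2."""
    return (left + right) / 2.0, (left - right) / 2.0

def ms_decode(mid, side, width=1.0):
    """Decode back to L/R; scaling the side signal sets stereo width
    (0 = mono, 1 = unchanged, >1 = wider)."""
    s = side * width
    return mid + s, mid - s
```

Because the transform is its own inverse (up to scaling), a plugin can encode, process mid and side independently (EQ, gain, pan), and decode without artifacts.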

Quadraphonic


Surround sound



  • http://gareus.org/oss/panjack/start - PanJack implements a real-time surround-sound panorama mixer. One or more audio inputs, two or more audio outputs / speakers, control via OSC or MIDI, (optional) bcf2000 fader/pan control. Note: Jack-Transport needs to be rolling in order for panjack to process audio. It creates jack-audio input and output ports and routes audio with a latency of 1 jack-cycle, applying amplification depending on the fader and panorama-gain settings. The panorama-gain settings can be adjusted manually for each output channel or modified indirectly using built-in maths for 2D (angle, separation) or X/Y-distance panning. Furthermore there is built-in functionality to automate whirl/leslie-rotate effects. panjack itself does not provide sequencer capabilities, yet this can be achieved easily by controlling panjack via OSC and any OSC-sequencer.


DTS

  • https://en.wikipedia.org/wiki/DTS_(sound_system) - a series of multichannel audio technologies owned by DTS, Inc. (formerly known as Digital Theater Systems, Inc.), an American company specializing in digital surround sound formats used for both commercial/theatrical and consumer grade applications. It was known as The Digital Experience until 1995. DTS licenses its technologies to consumer electronics manufacturers.

Dolby Atmos

  • https://en.wikipedia.org/wiki/Dolby_Atmos - allows up to 128 audio tracks plus associated spatial audio description metadata (most notably, location or pan automation data) to be distributed to theaters for optimal, dynamic rendering to loudspeakers based on the theater capabilities. Each audio track can be assigned to an audio channel, the traditional format for distribution, or to an audio "object". Dolby Atmos, by default, has a 10-channel 7.1.2 bed for ambience stems or center dialogue, leaving 118 tracks for objects. Dolby Atmos home theaters can be built upon traditional 5.1 and 7.1 layouts. For Dolby Atmos, the nomenclature differs slightly: a 7.1.4 Dolby Atmos system is a traditional 7.1 layout with four overhead or Dolby Atmos enabled speakers.

Ambiophonics


  • https://en.wikipedia.org/wiki/Stereo_dipole - a sound source in an Ambiophonic system, made by two closely spaced loudspeakers that ideally span 10-30 degrees. Thanks to the cross-talk cancellation method, a stereo dipole can render an acoustic stereo image nearly 180° wide (single stereo dipole) or 360° (dual or double stereo dipole).


Ambisonics

  • https://en.wikipedia.org/wiki/Ambisonics - a full-sphere surround sound technique: in addition to the horizontal plane, it covers sound sources above and below the listener. Unlike other multichannel surround formats, its transmission channels do not carry speaker signals. Instead, they contain a speaker-independent representation of a sound field called B-format, which is then decoded to the listener's speaker setup. This extra step allows the producer to think in terms of source directions rather than loudspeaker positions, and offers the listener a considerable degree of flexibility as to the layout and number of speakers used for playback.


  • Ambisonics | SpringerLink - A Practical 3D Audio Theory for Recording, Studio Production, Sound Reinforcement, and Virtual Reality, open access book



  • https://en.wikipedia.org/wiki/Ambisonic_data_exchange_formats - Researchers working on very high-order systems found no straightforward way to extend the traditional formats to suit their needs. Furthermore, there was no widely accepted formulation of spherical harmonics for acoustics, so one was borrowed from chemistry, quantum mechanics, computer graphics, or other fields, each of which had subtly different conventions. This led to an unfortunate proliferation of mutually incompatible ad-hoc formats and much head-scratching.
  • Ambisonics Component Ordering - The two primary component ordering formats for ambisonics are Furse-Malham, commonly called FuMa, and Ambisonics Channel Number, commonly called ACN. As seen in the following image, the former uses a lettered notation that - following alphabetical order per grouping - starts with the W (omni) channel, moves to its lower right, then its lower left, and then its lower center; then it moves to the next order and starts at the R, moves to its right, then to its left, then the further right, then the further left; then it moves to the next order and follows a similar pattern. On the other hand, the latter is numbered in a much easier to follow left-to-right order.
  • http://blends.debian.org/multimedia/tasks/ambisonics
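The ACN ordering described above has a closed form: for spherical-harmonic order n and degree m (with m in [-n, n]), the channel number is n·(n+1) + m. A minimal sketch:

```python
def acn(n, m):
    """Ambisonic Channel Number for order n, degree m (m in [-n, n]):
    ACN = n*(n+1) + m. Channel 0 is W (omni); orders are enumerated
    left-to-right, lowest degree first.
    """
    assert -n <= m <= n
    return n * (n + 1) + m
```

So first-order B-format in ACN order is W, Y, Z, X (channels 0-3), which is one of the concrete differences from FuMa's W, X, Y, Z.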




  • Spatialisation - Stereo and Ambisonic - Richard W.E. Furse, "This chapter discusses an approach to generating stereo or Ambisonic sound images using Csound. The focus here is not on the acoustics of the human head but on modelling sound in an acoustic space. We will use Csound to produce a ‘virtual’ acoustic space in which to move sounds and make recordings."



  • ARCADE - a patented spatial audio codec that allows encoding scene-based 3D audio over stereo with no additional metadata required. It allows encoding sources with height in a fully spherical manner. Its decoder is able to decode to virtually any 3D or 2D audio format, for example first-order or higher-order spherical harmonics (FOA, HOA), VBAP, Surround, Binaural with or without head-tracking etc. The decoder also works as an upmixer for any stereo content, to any of the formats it can decode to.


Software

to sort




  • mcfx – multichannel audio plug-in suite, VST plug-ins for MacOS, Windows and Linux (mcfx_convolver, mcfx_delay, mcfx_filter, mcfx_gain_delay, mcfx_meter). These plug-ins are very handy if you want to process multiple channels in one go, for example: multiple loudspeaker setups, Ambisonics (see ambiX), microphone-array post productions (e.g. Eigenmike®)




  • http://spatium.ruipenha.pt/ - a set of free, open source and modular software tools for sound spatialization. It comprises 4 different types of software: spatialization renderers: standalone applications that render spatialized audio using ambisonics or amplitude panning; spatialization interfaces: standalone interfaces that generate spatial information to control the spatialization renderers via OSC; plugins: audio unit plugin and max for live devices to control the spatialization renderers via OSC; max objects: a library of objects for spatialization using ambisonics or amplitude panning in Cycling’74 Max.





"In short, it takes so-called 'pair-wise' panning - i.e. the panning of localised sounds between two loudspeakers - and does a little more math to extend it into triplet-wise panning. The three loudspeakers are arranged in a triangle layout. Localised sounds no longer just pan horizontally, between two positions, but now pan vertically too. This change means that we have extended from 1-dimensional movement into 2-dimensional movement.

"As Ville's diagram shows, as you add more triangles you can extend into the 3rd dimension too, by creating a 'mesh' similar to the polygons that describe 3D space in computer games. The amplitude of any sound 'moving through' the space is calculated for each of the nearest three speakers. The equation takes into account distance from the loudspeaker, and so VBAP differentiates from Ambisonics and irregular loudspeaker layouts can be supported. However there still needs to be a 'mesh' based on triangles, as any individual sound can only exist between the nearest three points. The emphasis here is still on satisfying a 'sweet spot', a localised and immobile audience. In this respect, VBAP is similar to Ambisonics."


  • https://github.com/pierreguillot/vbap - documentation and implementations of Vector Base Amplitude Panning (VBAP). VBAP is a spatialization technique created by Ville Pulkki in the late 90's. For further information see the references. VBAP is available as a C library with an implementation as externals for Pure Data (and also as abstractions).


  • https://github.com/leomccormack/HO-SIRR - A Matlab implementation of the Higher-order Spatial Impulse Response Rendering (HO-SIRR) algorithm. An alternative approach for reproducing Ambisonic IRs over loudspeakers.


Wave field synthesis


Software

SoundScape Renderer

WONDER

  • WONDER - a software suite for using Wave Field Synthesis and Binaural Synthesis. Its primary platform is Linux, but it can be used under macOS too.

Binaural

  • https://en.wikipedia.org/wiki/Binaural_recording - a method of recording sound that uses two microphones, arranged with the intent to create a 3-D stereo sound sensation for the listener of actually being in the room with the performers or instruments. This effect is often created using a technique known as "dummy head recording", wherein a mannequin head is outfitted with a microphone in each ear. Binaural recording is intended for replay using headphones and will not translate properly over stereo speakers. This idea of a three dimensional or "internal" form of sound has also translated into useful advancement of technology in many things such as stethoscopes creating "in-head" acoustics and IMAX movies being able to create a three dimensional acoustic experience.


  • https://en.wikipedia.org/wiki/Binaural_fusion - a cognitive process that involves the "fusion" of different auditory information presented binaurally, or to each ear. In humans, this process is essential in understanding speech, as one ear may pick up more information about the speech stimuli than the other. The process of binaural fusion is important for computing the location of sound sources in the horizontal plane (sound localization), and it is important for sound segregation. Sound segregation refers to the ability to identify acoustic components from one or more sound sources. The binaural auditory system is highly dynamic and capable of rapidly adjusting tuning properties depending on the context in which sounds are heard. Each eardrum moves one-dimensionally; the auditory brain analyzes and compares movements of both eardrums to extract physical cues and synthesize auditory objects.
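One of the main localization cues binaural systems model is the interaural time difference (ITD). A common textbook approximation is the Woodworth spherical-head model; a minimal sketch with typical (assumed, not measured) head radius and speed of sound:

```python
import math

def itd_woodworth(azimuth_deg, head_radius=0.0875, c=343.0):
    """Interaural time difference (seconds) via the Woodworth
    spherical-head model: ITD = (r/c) * (theta + sin(theta)),
    for a source at the given azimuth from straight ahead.
    r = head radius in metres, c = speed of sound in m/s.
    """
    theta = math.radians(azimuth_deg)
    return head_radius / c * (theta + math.sin(theta))
```

For a source directly to the side (90°) this gives roughly 0.65 ms, the commonly cited maximum human ITD.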





  • Panagement - spatialization toolbox by Auburn Sounds

Other

VR/AR

to sort

Metering


JackMeter

  • Jack Meter - a basic console-based DPM (Digital Peak Meter) for JACK, written for quickly checking remote signal levels without having to run X11 to use a pretty graphical meter such as Meterbridge.
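What such a digital peak meter reports can be sketched in a few lines of numpy. This is a generic illustration of peak metering in dBFS, not Jack Meter's actual code:

```python
import numpy as np

def peak_dbfs(block):
    # Peak of the absolute sample value in a block, reported in dBFS
    # (0 dBFS corresponds to full scale, i.e. sample value 1.0).
    peak = np.max(np.abs(block))
    if peak == 0:
        return float("-inf")   # digital silence
    return 20 * np.log10(peak)

block = np.array([0.0, 0.5, -0.25, 0.1])
level = peak_dbfs(block)       # peak of 0.5 is roughly -6 dBFS
```

A real meter would also apply ballistics (hold and decay times) rather than reporting raw per-block peaks.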

JACK Meterbridge

  • JACK Meterbridge - software meterbridge for the UNIX based JACK audio system. It supports a number of different types of meter, rendered using the SDL library and user-editable pixmaps.

Ebumeter

  • Ebumeter provides level metering according to the EBU R-128 recommendation. The current release implements all features required by the EBU document except the oversampled peak level monitoring.

mod-peakmeter

meters.lv2

  • meters.lv2 is a collection of audio-level meters with GUI in LV2 plugin format.



  • K-Meter - Implementation of a K-System meter according to Bob Katz’ specifications.

JACK bitmeter

  • JACK bitmeter - a diagnosis tool for JACK audio software on Linux (and perhaps other systems which have JACK and GTK+ 2.x). As its name might suggest, the bitmeter operates at the bare metal of JACK's I/O layer, looking at the 32 binary digits in each individual sample.

Visualisation

See Lighting#Visualisation

Oscilloscope



  • xoscope - a digital oscilloscope for Linux



  • jack_oscrolloscope - a simple waveform viewer for JACK. The waveform is displayed in realtime, so you can always see the signal the instant it comes through JACK's input port.


  • jack-scope - an oscilloscope for JACK under X11. jack-scope draws either a time domain signal trace or a self correlation trace. Multiple input channels are superimposed, each channel is drawn in a different color. jack-scope accepts OSC packets for interactive control of drawing parameters.


  • QOscC - a highly flexible and configurable software oscilloscope with a large number of features. This includes support for any number of audio devices (ALSA or OSS), each with any number of channels. Each scope display can be configured individually to different display types and variants, e.g. you can choose from standard y-t mode (as on a usual oscilloscope), xy mode (e.g. for measuring the phase shift between two signals) or FFT mode (to view a spectrum plot of the signal). This software is intended for electronics hobbyists who cannot afford a hardware oscilloscope or need a simple spectrum analyzer, as well as for musicians doing basic signal analysis.



Spectrum graph



Spectrogram


  • spectrojack - A little spectrogram/audiogram/sonogram/whatever for jack. gtk 2 and fftw 3.


  • Spectrum 3D - a 3D audio spectrogram, in real time or not, from the microphone or an audio file (including files recorded from the microphone); it is compatible with JACK (jack-audio-connection-kit). Optionally, it supports multitouch gestures from touchscreen and touchpad. It is built with the GStreamer, SDL (or GtkGLExt), OpenGL, GTK+-2.0 and uTouch-Geis free libraries and is under the GPL license.


  • xspect3d - uses a bespoke drawing algorithm and buffer-scheduling scheme to render a 3D sonic landscape in real time, at up to several hundred frames a second.



  • Jack Live Spectrum - a small (only one file) C program for Linux that displays the frequency spectrum of live streaming sound as a floating-down animation.



  • Photosounder Spiral - a music analysis plugin. It's a fresh take on spectral analysis focused on allowing you to see and understand music and the notes that make it up instantly. This is achieved mainly by coiling the spectrum into a spiral framed by a chromatic circle, thus allowing you to instantly see what's happening musically and spectrally. - $
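The spectrogram idea these tools share can be sketched directly: successive windowed FFT frames, with the magnitudes over time forming the image. A minimal numpy illustration (generic, not the code of any tool listed here):

```python
import numpy as np

def spectrogram(signal, n_fft=256, hop=128):
    # Short-time Fourier transform: windowed FFTs of overlapping frames.
    window = np.hanning(n_fft)
    frames = []
    for start in range(0, len(signal) - n_fft + 1, hop):
        frame = signal[start:start + n_fft] * window
        frames.append(np.abs(np.fft.rfft(frame)))
    return np.array(frames)        # shape: (n_frames, n_fft // 2 + 1)

fs = 8000
sig = np.sin(2 * np.pi * 1000 * np.arange(fs) / fs)   # 1 kHz test tone
spec = spectrogram(sig)
# Bin spacing is fs / n_fft = 31.25 Hz, so the tone peaks at bin 32.
```

Display tools then map these magnitudes to colour (usually on a dB scale) and scroll them against time.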

Waveform


Various

See Lighting#Visualisation


  • Signalizer - an all-in-one signal-visualizing package with a number of unique focus points: real-time audio visualization with optimized 3D GPU graphics, everything scalable and gridlessly zoomable, and arbitrarily precise in both settings and display. Combined with a rich feature set, Signalizer suits both electrical/audio engineers fullscreen-inspecting signals and general small windows giving an overview of your audio as you create it.



  • sndpeek - real-time 3D animated display/playback, can use mic-input or wav/aiff/snd/raw/mat file (with playback), time-domain waveform, FFT magnitude spectrum, 3D waterfall plot



  • VSXu - VSX Ultra, an OpenGL-based (hardware-accelerated), modular visual programming environment whose main purpose is to visualize music and create graphic effects in real time. Its intention is to bridge the gap between programmer and artist, enabling a creative and inspiring environment for all parties involved. VSXu is built on a modular plugin-based architecture so anyone can extend it and/or make visualization presets ("visuals" or "states"). The program is free software, which means it's free from restrictions: free to share and copy, free to adapt/modify, and free to use any way you like.




Phase

Windows VST

  • Voxengo Correlometer - a free analog-style stereo multi-band correlation meter AudioUnit, AAX and VST plugin for professional music production applications, based on the correlation meter found in the PHA-979 phase-alignment plugin. Multi-band correlation metering is an advanced way to check for out-of-phase elements in the mix: broadband correlation metering reports overall phase issues and may misrepresent problems present in select spectral bands, while a multi-band correlation meter easily highlights problems in mid to high frequencies that are not easily heard by ear but may still reduce the clarity of the mix. Another application of multi-band correlation metering is phase- and time-aligning of channels and tracks, especially bass and bass-drum pairs, guitar mic and D.I. source pairs, two-microphone stereo recordings, etc. Correlometer can display 4 to 64 individual spectral bands, with an adjustable band quality factor that controls the degree of band selectivity. The averaging time of the correlation estimator can be adjusted. Correlometer supports side-chain inputs for easy correlation estimation between separate audio tracks.
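The broadband correlation figure such a meter displays reduces to a normalized cross-channel product: +1 for in-phase (mono-compatible) channels, 0 for uncorrelated, -1 for out of phase. A minimal numpy sketch of the single-band case; Correlometer's actual band splitting and averaging are not reproduced here:

```python
import numpy as np

def stereo_correlation(left, right, eps=1e-12):
    # Normalized correlation of the two channels over one block.
    num = np.sum(left * right)
    den = np.sqrt(np.sum(left ** 2) * np.sum(right ** 2)) + eps
    return num / den

t = np.arange(1024)
sig = np.sin(2 * np.pi * t / 64)
c_in = stereo_correlation(sig, sig)     # identical channels: about +1
c_out = stereo_correlation(sig, -sig)   # polarity-flipped: about -1
```

A multi-band meter would first split each channel with a filter bank and compute this figure per band.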

Transcription

Transcribe!

  • Transcribe! - an assistant for people who want to work out a piece of music from a recording, in order to write it out, play it themselves, or both. It doesn't do the transcribing for you; it is essentially a specialised player program optimised for transcription, with many transcription-specific features not found in conventional music players. It is also used by many people for play-along practice: it can change pitch and speed instantly, and you can store and recall any number of named loops, so you can practice in all keys and speed up as well as slow down. It is also used for speech transcription; with its support for foot pedals and its superior slowed-down sound quality, it is an excellent choice for this purpose. There is advice about both play-along practice and speech transcription in Transcribe!'s help, under the heading "Various Topics".

Feature extraction

aubio

  • aubio is a tool designed for the extraction of annotations from audio signals. Its features include segmenting a sound file before each of its attacks, performing pitch detection, tapping the beat and producing midi streams from live audio. Because these tasks are difficult, we thought it was important to gather them in a dedicated library. To increase the fun, we have made these algorithms work in a causal way, so as to be used in real time applications with as low delay as possible. Functions can be used offline in sound editors and software samplers, or online in audio effects and virtual instruments.
  • Aubio-LV2-Plugins is an unofficial set of LV2 plugins which wrap the functionality of the audio analysis library Aubio. Currently it consists of a transient/steady state separator, and an onset detector.

Vamp

  • Vamp - an audio processing plugin system for plugins that extract descriptive information from audio data — typically referred to as audio analysis plugins or audio feature extraction plugins.
  • Find and Download Plugins - listing


  • QM Vamp Plugins - a set of plugins for feature extraction from audio data, using the Vamp plugin format suitable for use in programs such as Sonic Visualiser and Sonic Annotator. This plugin set includes note onset detector, beat and barline tracker, tempo estimator, key estimator, tonal change detector, structural segmenter, timbral and rhythmic similarity, wavelet scaleogram, adaptive spectrogram, note transcription, chromagram, constant-Q spectrogram, and MFCC plugins.


  • Segmentino - a Vamp plugin for automatic music structural segmentation, based on an algorithm first used in Mauch et al.'s paper on Using Musical Structure to Enhance Automatic Chord Transcription.


  • Silvet Note Transcription - or Shift-Invariant Latent Variable Transcription, a Vamp plugin for polyphonic music transcription (from audio to note times and pitches). In conjunction with a Vamp plugin host like Sonic Visualiser, you can use Silvet to help work out what notes are being played in a piece of music, for example if you want to learn to play it yourself. You might also use it to study performances for musicological features such as timing and dynamics. Silvet also serves as a useful stable baseline for comparative purposes, for researchers working on other methods related to note transcription. Silvet uses a high-quality and quite flexible method, but it has various limitations described in the README file. Although you can easily get interesting and useful results for many kinds of music, don't expect it to take you straight from the audio to a complete and readable score!


  • Melodia - automatically estimates the pitch of a song's main melody. More specifically, it implements an algorithm that estimates the fundamental frequency corresponding to the pitch of the predominant melodic line of a piece of polyphonic (or homophonic or monophonic) music. Given a song, the algorithm estimates when the melody is present and when it is not (a.k.a. voicing detection), and the pitch of the melody when it is present.


  • HPCP - a vamp plug-in for audio feature extraction that computes the instantaneous evolution of HPCP (Harmonic Pitch Class Profile) of a signal. The HPCP is an approach for chroma feature estimation which represents the pitch content of polyphonic music signals, mapped to a single octave. HPCP have been extensively used for several final applications such as key and chord estimation, similarity computation (cover version identification) and music classification.
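The octave-folding at the heart of chroma features like HPCP can be shown in a couple of lines. A generic sketch (A4 = 440 Hz reference and C-based pitch-class numbering are assumptions), not HPCP's actual harmonic weighting scheme:

```python
import numpy as np

def pitch_class(freq, ref=440.0):
    # Semitones above A4, rounded to the nearest semitone, then folded
    # into one octave. In C-based numbering, A is pitch class 9.
    semis = int(round(12 * np.log2(freq / ref)))
    return (9 + semis) % 12

# A4 and A5 fold to the same class; middle C (about 261.63 Hz) maps to C = 0.
classes = [pitch_class(f) for f in (440.0, 880.0, 261.63)]
```

A full chroma estimator accumulates spectral energy into these 12 bins (often with contributions from harmonics) rather than classifying single frequencies.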

auditok

  • https://github.com/amsehili/auditok - an Audio Activity Detection tool that can process online data (read from an audio device or from standard input) as well as audio files. It can be used as a command line program and offers an easy to use API.

Beats [per minute]

  • bonk - the Pure Data unit [bonk~] is a very useful musical tool for performance and composition. It analyses the stream of audio on its input and produces messages when it thinks the signal matches certain patterns; it has no audio output, just messages.
  • MiniBPM is a simple, reliable tempo estimator for use in music audio applications. It quickly gets you a fixed beats-per-minute estimate from a sample of audio, provided the tempo doesn't change too much in it.
  • libbeat - a lightweight beat detection library for Qt. It currently supports ALSA and PulseAudio. It uses fftw to process the samples.
  • bpm-tools software is the result of some experiments I did into automatically calculating and tagging the tempo (in beats-per-minute) of music files. Right now the code serves as the best explanation of the algorithm — a relatively simple application of autocorrelation by statistical sampling. As yet, there is no scientific comparison of the algorithm with other software.
  • BeatDetektor - uses a very simple statistical model designed from scratch by myself to detect the BPM of music and provides real-time feedback useful for visualization and synchronization.
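The autocorrelation approach mentioned for bpm-tools can be sketched on a synthetic onset envelope. This is a generic illustration, not bpm-tools' statistical-sampling implementation; the envelope frame rate and the BPM search range are assumptions:

```python
import numpy as np

fs_env = 100                                   # onset-envelope frames per second (assumed)
bpm_true = 120
period = int(round(fs_env * 60 / bpm_true))    # 50 frames between beats

# Synthetic onset-strength envelope: an impulse on every beat.
env = np.zeros(1000)
env[::period] = 1.0

# Autocorrelation for non-negative lags; the strongest lag in a plausible
# tempo range (here 60-200 BPM) gives the beat period.
ac = np.correlate(env, env, mode="full")[len(env) - 1:]
lo, hi = int(fs_env * 60 / 200), int(fs_env * 60 / 60)
lag = lo + int(np.argmax(ac[lo:hi + 1]))
bpm = 60 * fs_env / lag
```

Real estimators first compute the onset envelope from audio (e.g. spectral flux) and must handle tempo octaves (60 vs 120 BPM ambiguity), which this sketch sidesteps.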


  • https://github.com/adamstark/BTrack - a causal beat tracking algorithm intended for real-time use. It is implemented in C++ with wrappers for Python and the Vamp plug-in framework.


  • MIDI Trigger - LV2 plugin which detects peaks by audio signal and sends MIDI notes.


  • BeatCounter - a simple plugin designed to facilitate beatmatching software and turntables. It displays the current tempo in beats per minute (BPM), and an accumulated average over the last few seconds. BeatCounter is the perfect tool for DJ’s that want to integrate computer effects with turntables or a live band.


  • Tapita - ("snack" in Spanish) a BPM detector through keyboard, MIDI and JACK, written in C with GTK2.

Frequency

Silence

Essentia

  • Essentia - an open-source C++ library for audio analysis and audio-based music information retrieval released under the Affero GPLv3 license (also available under a proprietary license upon request). It contains an extensive collection of reusable algorithms which implement audio input/output functionality, standard digital signal processing blocks, statistical characterization of data, and a large set of spectral, temporal, tonal and high-level music descriptors.
  • https://github.com/MTG/gaia - C++ library to apply similarity measures and classifications on the results of audio analysis, including Python bindings. Together with Essentia it can be used to compute high-level descriptions of music.

openSMILE

  • openSMILE - a feature extraction tool that enables you to extract large audio feature spaces in real time. It combines features from Music Information Retrieval and Speech Processing. SMILE is an acronym for Speech & Music Interpretation by Large-space Extraction. It is written in C++ and is available as both a standalone command-line executable and a dynamic library. The main features of openSMILE are its capability for on-line incremental processing and its modularity. Feature extractor components can be freely interconnected to create new and custom features, all via a simple configuration file. New components can be added to openSMILE via an easy binary plugin interface and a comprehensive API.

Madmom

  • https://github.com/CPJKU/madmom - an audio signal processing library written in Python with a strong focus on music information retrieval (MIR) tasks. The library is internally used by the Department of Computational Perception, Johannes Kepler University, Linz, Austria (http://www.cp.jku.at) and the Austrian Research Institute for Artificial Intelligence (OFAI), Vienna, Austria (http://www.ofai.at). Possible acronyms are: Madmom Analyzes Digitized Music Of Musicians, Mostly Audio / Dominantly Music Oriented Modules

jMIR

  • jMIR - an open-source software suite implemented in Java for use in music information retrieval (MIR) research. It can be used to study music in the form of audio recordings, symbolic encodings and lyrical transcriptions, and can also mine cultural information from the Internet. It also includes tools for managing and profiling large music collections and for checking audio for production errors. jMIR includes software for extracting features, applying machine learning algorithms, applying heuristic error checkers, mining metadata and analyzing metadata.

KeyFinder

polyscribe

Editing

Audacity

  • Audacity is a free, easy-to-use, multi-track audio editor and recorder for Windows, Mac OS X, GNU/Linux and other operating systems. The interface is translated into many languages. You can use Audacity to record live audio, record computer playback on any Windows Vista or later machine, convert tapes and records into digital recordings or CDs, edit WAV, AIFF, FLAC, MP2, MP3 or Ogg Vorbis sound files, cut, copy, splice or mix sounds together, change the speed or pitch of a recording, etc.


mhWaveEdit

  • mhWaveEdit is a graphical program for editing, playing and recording sound files. It is lightweight, portable, user-friendly and handles large files very well. The program itself has only simple editing features such as cut'n'paste and volume adjustment but it can also use Ladspa effect plugins and the effects provided by the SoX application. It can also support additional file formats besides wav through libsndfile and mp3/ogg import and export through lame and oggenc/oggdec.

Sweep

  • Sweep is an audio editor and live playback tool for GNU/Linux, BSD and compatible systems. It supports many music and voice formats including WAV, AIFF, Ogg Vorbis, Speex and MP3, with multichannel editing and LADSPA effects plugins.

ReZound

  • ReZound aims to be a stable, open source, and graphical audio file editor primarily for but not limited to the Linux operating system.

ocenaudio

  • ocenaudio is a cross-platform, easy to use, fast and functional audio editor. It is the ideal software for people who need to edit and analyze audio files without complications. ocenaudio also has powerful features that will please more advanced users.

WaveSurfer

  • WaveSurfer is an open source tool for sound visualization and manipulation. Typical applications are speech/sound analysis and sound annotation/transcription. WaveSurfer may be extended by plug-ins as well as embedded in other applications.

Jokosher

soniK

  • soniK is an open source digital audio editor for Linux, using the KDE platform. soniK allows you to record, edit and process sounds on your computer.

EKO

wavbreaker

  • wavbreaker is a GTK wave file splitter for Linux and Unix-like operating systems licensed under the terms of the GNU General Public License. This application's purpose in life is to take a wave file and break it up into multiple wave files. It makes a clean break at the correct position to burn the files to an audio cd without any dead air between the tracks. It will only read wave files, so use an appropriate tool to convert ogg, mp3, etc. files and then break them up.

ISSE

LAoE

  • LAoE means Layer-based Audio Editor; it is a richly featured graphical audio-sample editor based on multiple layers, floating-point samples, volume masks, variable selection intensity, and many plugins suitable for manipulating sound, such as filtering, retouching, resampling, graphical spectrogram editing by brushes and rectangles, sample-curve editing by freehand pen, spline and other interpolation curves, and effects like reverb, echo, compress, expand, pitch-shift, time-stretch, and much more. And it is free of charge, under the GPL license!

Snd

  • Snd is a sound editor modelled loosely after Emacs. It can be customized and extended using either s7 (included in the Snd sources), Ruby, or Forth.
  • San Dysth is a standalone realtime soft-synth written in Snd. It has controls to generate various kinds of sounds in between white noise and pure tones, and provides controllers to disturb the generated sound using a "period counter" to extend the variety of the generated output. Common uses are organ-like, organic-like, alien-like and water-like sounds, and various kinds of noise (noise artists could find this softsynth most useful).

GNUsound

  • GNUsound is a multitrack sound editor for GNOME 1 and 2. The current version is 0.7.5, which was released 6 July 2008.

Marlin

  • Marlin - A GNOME Sample Editor. last updated 03-08-2004

GNoise

  • GNoise - gtk+ or gnome (you can ./configure it either way) wave file editor for Linux. Prime considerations were for it to be speedy and be able to handle big files. So far it can: load and display files, generate a display cache, play the file, cut, copy, paste, (unlimited) undo, mute, fade in/out, reverse, normalize, and more. 2003

Processing tools


python-mix

SoX

  • SoX - Sound eXchange, the Swiss Army knife of sound processing programs. SoX is a cross-platform (Windows, Linux, MacOS X, etc.) command line utility that can convert various formats of computer audio files in to other formats. It can also apply various effects to these sound files, and, as an added bonus, SoX can play and record audio files on most platforms.
play --show-progress -c 2 --null synth brownnoise reverb bass 6 treble -3 echos 0.8 0.9 1000 0.3 1800 0.25 [66]

play -n -c1 synth whitenoise band -n 100 20 band -n 50 20 gain +25  fade h 1 864000 1

play -c2 -n synth pinknoise band -n 280 80 band -n 60 25 gain +20 treble +40 500 bass -3 20 flanger 4 2 95 50 .3 sine 50 lin [67]


  • sonfilade - allows the user to rapidly strip junk audio from the beginning and end of audio files. It can be used, for example, to clean up files recorded with Streamripper (e.g., streamripper --xs_padding=5000:5000). Sonfilade is designed to be as effortless and fun as possible to use. An entire edit session can be carried out using only three keys and sound feedback as the entire user interface. (There is also text output, but it is non-essential.) Uses sox.


pyDub

Uos

  • http://wiki.lazarus.freepascal.org/uos - a multi-platform package of audio handling routines that unifies the best open-source audio libraries. play .mp3, .ogg, .wav, .flac, .m4a, .opus and cdrom audio files. 16, 32 or float 32 bit resolution. record all types of input into file, in 16 or 32 bit resolution, mono or stereo. add DSP effects and filters, however many you want and record it. play multiple inputs and outputs simultaneously. internet audio streaming of mp3 and opus files. produce sound by the build-in synthesizer. Uos can use the SoundTouch, PortAudio, SndFile, Mpg123, Faad, OpusFile and Mp4ff audio libraries. Included in the package: Examples and binary libraries for Linux 32/64, arm-Raspberry Pi, Windows 32/64, Mac OSX 32 and FreeBSD 32/64.


Composers Desktop Project

  • CDP Home Page - The CDP software, first released in 1987, now contains hundreds of ways to transform digitally sampled sound. Its software belongs to the musique concrète category, as realised on computer. Processing is off-line. Although the processing is often faster than real-time, several processes could not run in real-time for technical reasons. The CDP software can be run via one of the two available GUIs or via command line / batch file. Based in the UK, CDP is an international network of composers and programmers guided by a vision of amazing sonic possibilities and how they can be woven into the fabric of music. We have been working together since 1986.


  • CDP-Soundfiles - The CDP system is one of the most comprehensive and innovative sound design suites available. Written largely by English electro-acoustic composer Trevor Wishart and reflecting his musical aesthetics in many ways, its processes cover almost every aspect of sound manipulation you've ever heard of, plus many that will be unfamiliar, and usually from an original or compositional viewpoint. CDP has over 430 processes covering EDIT-MIX functions, SOUNDFILE processes (time domain), SPECTRAL and PITCH processes, a small but significant SYNTH group, DATA GENERATING functions and a large INFO section. In addition there are over 100 DATA massaging functions and an extensive HELP.


  • Soundshaper - a free control interface for the CDP sound transformation software, with an emphasis on speed and ease of use. CDP is a suite of over 430 command-line processes for creating, altering and manipulating sounds to make music. Soundshaper (PC only) fully supports the latest CDP Release 7. Soundshaper provides quick and easy access to CDP processes and parameters and assembles scripts which run CDP in the background. Soundshaper saves CDP output to temporary files, which you can save at any stage. Parameter values can be adjusted at any point, even after further processes have been run. Soundshaper's auto-conversion makes it possible to move seamlessly from one process to another while the program handles the different CDP file types. When run, processes are displayed in a table called the Patch Grid. Soundshaper patches are an easy way to store and recall whole sequences of CDP processes in a fully editable form. All values are retained and the patch can be re-run with any source. Soundshaper patches support up to 16 separate process chains, which can come from different sources. Soundshaper also supports bulk processing, presets and multiple parameter sets.


  • The Sound Loom - an integrated graphic interface to the CDP sound-processing software, a comprehensive collection of over 500 instruments for sound transformation developed as practical working tools by composers over many years, available from the Composers' Desktop Project. The Sound Loom + CDP software is a powerful toolbox for composers, not a performance instrument. Using it, you can specify the parameters of any process to any degree of time-varying detail, detail you may have composed or have extracted from some other complex sound-event. You cannot, however, alter these parameters while the process is running. In compensation, the system offers almost any conceivable process for transforming sounds and sound-data (the data might be loudness envelopes, pitch-tracking information, spectral analysis data, filter specifications etc.) all running in a unified, intelligent environment.




Mammut

  • Mammut does an FFT of the whole sound (no windows). Various operations can subsequently be done in the frequency domain, such as nonlinear stretching of the spectrum, spectrum shifting, etc. How is the program useful? Doing a giant FFT of the entire sound, as opposed to splitting the sound up into short windows, is unusual. Such a method implies that time-related parameters are included in the spectral coefficients in a non-intuitive manner, and changes in the frequency domain may radically change developments in the time domain. Mammut is a fairly unpredictable program, and the user will need to get used to letting go of control of the time axis. The sounding results are often surprising and exciting.
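Mammut's whole-sound FFT idea can be illustrated with a linear spectrum shift. A simplified numpy sketch; the bin-shift operation below is a generic example of frequency-domain editing, not Mammut's actual code:

```python
import numpy as np

def shift_spectrum(signal, shift_bins):
    # One FFT over the entire sound (no windowing), shift all bins up,
    # then invert back to the time domain.
    spec = np.fft.rfft(signal)
    shifted = np.zeros_like(spec)
    shifted[shift_bins:] = spec[:len(spec) - shift_bins]
    return np.fft.irfft(shifted, n=len(signal))

fs = 8000
t = np.arange(fs) / fs                  # one second: bin spacing = 1 Hz
tone = np.sin(2 * np.pi * 440 * t)      # 440 Hz test tone
out = shift_spectrum(tone, 100)         # energy moves to 540 Hz
```

Because the whole signal shares one spectrum, any such edit affects the entire duration at once, which is exactly the unpredictability the description above refers to.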

Fscape

FreqTweak

  • FreqTweak is a tool for FFT-based realtime audio spectral manipulation and display. It provides several algorithms for processing audio data in the frequency domain and a highly interactive GUI to manipulate the associated filters for each. It also provides high-resolution spectral displays in the form of scrolling-raster spectrograms and energy vs frequency plots displaying both pre- and post-processed spectra.


TAPESTREA

  • TAPESTREA (Techniques And Paradigms for Expressive Synthesis, Transformation, and Rendering of Environmental Audio), or taps, is a unified framework for interactively analyzing, transforming and synthesizing complex sounds. Given one or more recordings, it provides well-defined means to: identify points of interest in the sound and extract them into reusable templates; transform sound components independently of the background and/or other events; continually resynthesize the background texture in a perceptually convincing manner; controllably place event templates over backgrounds, using a novel graphical user interface and/or scripts written in the ChucK audio programming language

Build fails on Linux. Fixing two or three indirect includes gets further, but it then fails on building its included [old] ChucK.

SPEAR

  • SPEAR (Sinusoidal Partial Editing Analysis and Resynthesis) is an application for audio analysis, editing and synthesis. The analysis procedure (which is based on the traditional McAulay-Quatieri technique) attempts to represent a sound with many individual sinusoidal tracks (partials), each corresponding to a single sinusoidal wave with time varying frequency and amplitude. Something which closely resembles the original input sound (a resynthesis) can be generated by computing and adding all of the individual time varying sinusoidal waves together. In almost all cases the resynthesis will not be exactly identical to the original sound (although it is possible to get very close).

Aside from offering a very detailed analysis of the time varying frequency content of a sound, a sinusoidal model offers a great deal of flexibility for editing and manipulation. SPEAR supports flexible selection and immediate manipulation of analysis data, cut and paste, and unlimited undo/redo. Hundreds of simultaneous partials can be synthesized in real-time and documents may contain thousands of individual partials dispersed in time. SPEAR also supports a variety of standard file formats for the import and export of analysis data.

Windows/Mac only :(
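The sinusoidal-partial resynthesis SPEAR performs can be sketched generically: each partial is rendered from time-varying frequency and amplitude tracks, and the partials are summed. The track data below is invented for illustration; real analysis data comes from the McAulay-Quatieri-style peak tracking described above:

```python
import numpy as np

fs = 8000
n = fs // 2                                    # half a second of audio

def render_partial(freqs, amps):
    # freqs/amps are per-sample tracks; the phase of a sinusoid with
    # time-varying frequency is the running integral of that frequency.
    phase = 2 * np.pi * np.cumsum(freqs) / fs
    return amps * np.sin(phase)

# Two made-up partials: a steady 220 Hz fundamental at amplitude 0.5,
# and a second partial gliding 440 -> 460 Hz while fading out.
p1 = render_partial(np.full(n, 220.0), np.full(n, 0.5))
p2 = render_partial(np.linspace(440.0, 460.0, n), np.linspace(0.25, 0.0, n))
out = p1 + p2                                  # additive resynthesis
```

Editing in this model means editing the tracks (shifting frequencies, rescaling amplitudes, deleting partials) before summing, which is what makes sinusoidal models so flexible.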

Ceres3

  • Ceres3 is a cut-and-paste spectral editor with musically enhanced graphic control over spectral activity of a sound file. It is a free educational program with no other aims, and it owes most of its framework to Oyvind Hammer's Ceres and Jonathan Lee's Ceres2. It has an X-window Motif/OpenMotif based GUI, organized around four principal menus with simple keyboard shortcuts.

ATS

  • ATS is a spectral modeling system based on a sinusoidal plus critical-band noise decomposition. The system can be used to analyze recorded sounds, transform their spectrum using a wide variety of algorithms and resynthesize them both out of time and in real time.
  • ATS is a software library of functions for spectral Analysis, Transformation, and Synthesis of sound based on a sinusoidal plus critical-band noise model. A sound in ATS is a symbolic object representing a spectral model that can be sculpted using a variety of transformation functions. Spectral data can be accessed through an API, and saved to/loaded from disk. ATS is written in LISP; its analysis and synthesis algorithms are implemented using the CLM (Common Lisp Music) synthesis and sound processing language.


Only takes mono .wav files

Cecilia

  • Cecilia is an audio signal processing environment aimed at sound designers. Cecilia mangles sound in ways unheard of. Cecilia lets you create your own GUI using a simple syntax. Cecilia comes with many original built-in modules and presets for sound effects and synthesis.

Loris

  • Loris is an Open Source sound modeling and processing software package based on the Reassigned Bandwidth-Enhanced Additive Sound Model. Loris supports modified resynthesis and manipulations of the model data, such as time- and frequency-scale modification and sound morphing. The Loris programmers' interface supports the C, C++, and Python programming languages, and SWIG interface files are provided so that the API can be easily extended to a variety of other languages. The package includes a handful of utility programs for basic sound modeling and resynthesis, and standard UNIX/Linux tools that build and install the libraries, headers, and utilities.

SMS Tools

  • Spectral Modeling Synthesis Tools - SMS Tools is a set of techniques and software implementations for the analysis, transformation, and synthesis of musical sounds based on various spectral modeling approaches. These techniques can be used for synthesis, processing and coding applications, while some of the intermediate results might also be applied to other music related problems, such as sound source separation, musical acoustics, music perception, or performance analysis. The basic model and implementation were developed by Xavier Serra as part of his PhD thesis published 1989. Since then many extensions have been proposed at MTG-UPF and by other researchers.

FxEngine

  • FxEngine - an open C++ framework under the LGPL license. The FxEngine Framework simplifies the plugin architecture for data-flow processing, giving applications that require custom solutions full control over the plugin architecture.
  • FxJackPack - contains two plugins for the FxEngine framework which enable recording and playing back sound through JACK (Jack Audio Connection Kit).

NASPRO

Aglaophone

SpectMorph

  • SpectMorph is a free software project which allows you to analyze samples of musical instruments and combine them (morphing). It can be used to construct hybrid sounds, for instance a sound between a trumpet and a flute; or smooth transitions, for instance a sound that starts as a trumpet and then gradually changes to a flute.

Spectral Toolbox

  • The Spectral Toolbox - a suite of analysis-resynthesis programs that locate relevant partials of a sound and allow them to be resynthesized at any specified frequencies. This enables a variety of techniques including spectral mappings (sending all partials of a sound to fixed destinations), spectral morphing (continuously interpolating between the partials of a source sound and a destination) and dynamic tonality (a way of organizing the relationship between a family of tunings and a set of related timbres). A complete application called the TransFormSynth concretely demonstrates the methods using either a one-dimensional controller such as a MIDI keyboard or a two-dimensional control surface (such as a MIDI guitar, a computer keyboard, or the forthcoming Thummer controller). Requires installing either Max Runtime (free from cycling74) or Max/MSP (not free) and some Java routines.
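The core operation behind these tools — resynthesizing located partials at arbitrary target frequencies — can be sketched as a toy additive resynthesizer. This is an illustrative stdlib-Python sketch of the general technique, not the Spectral Toolbox's actual API; all names are made up for the example.

```python
import math

def resynthesize(partials, duration, sample_rate=8000):
    """Sum sinusoidal partials, given as (freq_hz, amplitude) pairs."""
    n = int(duration * sample_rate)
    out = [0.0] * n
    for freq, amp in partials:
        for i in range(n):
            out[i] += amp * math.sin(2 * math.pi * freq * i / sample_rate)
    return out

# Spectral mapping: send the partials of a harmonic tone to new destinations.
source = [(220.0, 1.0), (440.0, 0.5), (660.0, 0.25)]
mapped = [(f * 1.5, a) for f, a in source]   # every partial retuned upward
signal = resynthesize(mapped, 0.1)
```

Spectral morphing would interpolate between two such partial lists instead of applying a fixed mapping.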

Wav2Spectrum

  • https://github.com/paulnasca/wav2spectrum - a simple application which takes a small chunk (window) from an input wav and outputs the frequencies one by one (a sweep) into another wav file. It is very useful for hearing the harmonics of a sound one by one. It can also be used as a spectrum tool for blind people interested in sound analysis.

swingify

Melodyne

$

Time/pitch

Paul's Extreme Sound Stretch



  • https://github.com/paulnasca/2xphases - This repository contains two programs: "2xautoconvolution" which uses autoconvolution to process audio and an older program "2xphases" which uses long-term FFT audio processing.
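The extreme-stretch effect in Paul's tools rests on spectral processing of windowed chunks; the published Paulstretch algorithm keeps each window's FFT magnitudes and randomizes the phases before resynthesis, which smears transients into a continuous texture. A toy stdlib sketch of that phase-randomization step (naive O(n²) DFT, illustrative names, not Paul's actual code):

```python
import cmath, math, random

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

def randomize_phases(x):
    """Keep each bin's magnitude, replace its phase with a random one."""
    X = dft(x)
    n = len(X)
    Y = [0j] * n
    Y[0] = X[0]
    for k in range(1, n // 2):
        phase = random.uniform(0, 2 * math.pi)
        Y[k] = abs(X[k]) * cmath.exp(1j * phase)
        Y[n - k] = Y[k].conjugate()        # conjugate symmetry keeps output real
    if n % 2 == 0:
        Y[n // 2] = abs(X[n // 2])
    return idft(Y)

window = [math.sin(2 * math.pi * 3 * t / 32) for t in range(32)]
smeared = randomize_phases(window)
```

In the real algorithm this runs per overlapping window, with the input read position advancing more slowly than the output write position to produce the stretch.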

tcStretch

  • tcStretch - a Windows VST 2.4 plug-in for time stretching, pitch shifting, and blurring. Time stretch can be up to 1 million times slower. Pitch shift is plus or minus one octave. Blurring blends nearby spectral material to make the output less static. Playback is sensitive to transients in the source material. Playback rate and blur amount are automatically adjusted according to the transient contour of the material being stretched. Playing transients at a faster rate than non-transients tends to make the output sound less obviously stretched. Playing transients more slowly than non-transients emphasizes the stretchiness [good when playing in reverse mode with highly transient material]. Adding blur brings in some subtle (or not so subtle) randomness which helps to keep the output less static.

Rubber Band

  • Rubber Band Library is a high quality software library for audio time-stretching and pitch-shifting. It permits you to change the tempo and pitch of an audio stream or recording dynamically and independently of one another. Rubber Band Library is intended for use by developers creating their own application programs rather than directly by end users, although it does also include a simple (free) command-line utility program that you can use for fixed adjustments to the speed and pitch of existing audio files.

Play it Slowly

  • Play it Slowly - software to play back audio files at a different speed or pitch. It also allows you to loop over a certain part of a file. It's intended to help you learn or transcribe songs. It can also play videos thanks to GStreamer. Play it Slowly is intended to be used on a GNU/Linux system like Ubuntu.

StretchPlayer

  • StretchPlayer is an audio file player that allows you to change the speed of the song without changing the pitch. It will also allow you to transpose the song to another key (while also changing the speed). This is a very powerful tool for musicians who are learning to play a pre-recorded song.

PitchTempoPlayer

  • PitchTempoPlayer (PTPlayer) is an audio player for Linux that allows you to change pitch and speed (tempo) of the sound independently of each other. Fine tuning (less than a half tone) is also possible, as well as recording, exporting the modified audio file, and managing a playlist.

SBSMS

SoundTouch

  • SoundTouch is an open-source audio processing library for changing the Tempo, Pitch and Playback Rates of audio streams or audio files. The library additionally supports estimating stable beats-per-minute rates for audio tracks. The SoundTouch library is intended for application developers writing sound processing tools that require tempo/pitch control functionality, or just for playing around with the sound effects.

The SoundTouch library source kit includes also an example utility SoundStretch for processing .wav audio files from command-line interface.
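The basic idea behind changing tempo without changing pitch — as these libraries do, far more carefully — is to read short overlapping windows from the input at one hop size and overlap-add them into the output at another. The sketch below is a naive stdlib-Python illustration of that principle only; SoundTouch's actual WSOLA algorithm additionally searches for the best-matching overlap position to avoid phase artifacts.

```python
import math

def ola_stretch(x, stretch, frame=256, hop=64):
    """Naive overlap-add time stretch: input frames are read hop/stretch
    apart but written hop apart, so output is ~stretch times longer."""
    win = [0.5 - 0.5 * math.cos(2 * math.pi * i / frame) for i in range(frame)]
    out_len = int(len(x) * stretch)
    out = [0.0] * (out_len + frame)
    norm = [0.0] * (out_len + frame)
    t_out = 0
    while t_out < out_len:
        t_in = int(t_out / stretch)        # read head moves slower/faster
        if t_in + frame > len(x):
            break
        for i in range(frame):
            out[t_out + i] += x[t_in + i] * win[i]
            norm[t_out + i] += win[i]
        t_out += hop
    # normalize by the summed window so overlaps don't change the level
    return [o / n if n > 1e-9 else 0.0 for o, n in zip(out, norm)][:out_len]

tone = [math.sin(2 * math.pi * 440 * t / 8000) for t in range(4000)]
slower = ola_stretch(tone, 2.0)   # roughly twice as long, same pitch
```

Pitch shifting then falls out for free: time-stretch by a factor, then resample the result back to the original length.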

fibonaccistretch

Groovinator

  • https://github.com/usdivad/Groovinator - an audio plugin, available in VST and AU format, that performs real-time playhead-aware rhythm modification. Currently supported modification modes are: Classic Stretch, Sample 'n' Shift, and Step Repeater. Developed using the JUCE framework in addition to Olli Parviainen's SoundTouch library and Sergio Castro's implementation of Bjorklund's algorithm.

Melodrumatic

  • https://github.com/usdivad/Melodrumatic - an audio plugin that lets you "pitch-shift" via delay (i.e. the Doppler effect) to turn unpitched audio into melodies. Controllable via MIDI or mouse :)
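The Doppler trick Melodrumatic uses can be sketched simply: a delay line whose length ramps over time makes the read head move slower (or faster) than the write head, which shifts pitch while the ramp lasts. A hedged stdlib-Python illustration of the principle (not Melodrumatic's code; names are invented for the example):

```python
import math

def ramp_delay_pitch(x, rate=0.5):
    """Read the input through a delay that grows by `rate` samples per
    output sample: the read head moves at (1 - rate) speed, lowering pitch."""
    out = []
    pos = 0.0
    while pos < len(x) - 1:
        i = int(pos)
        frac = pos - i
        out.append(x[i] * (1 - frac) + x[i + 1] * frac)  # linear interpolation
        pos += 1.0 - rate      # growing delay -> lower perceived pitch
    return out

tone = [math.sin(2 * math.pi * 440 * t / 8000) for t in range(800)]
lower = ramp_delay_pitch(tone, rate=0.5)  # half-speed read: one octave down
```

A real plugin crossfades between two such delay taps so the ramp can reset without a click, letting the shift continue indefinitely.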

Denoise

Gnome Wave Cleaner

Postfish

audio-declipper

Neural network



Audio and MIDI looping

See also Sampling#Audio looping and MIDI#MIDI looping

Giada

  • Giada - a free, minimal, hardcore audio tool for DJs, live performers and electronic musicians. How does it work? Just pick up your channel, fill it with samples or MIDI events and start the show by using this tiny piece of software as a loop machine, drum machine, sequencer, live sampler or as a plugin/effect host. Giada aims to be a compact and portable virtual device for Linux, Mac OS X and Windows for production use and live sets.


What you can control with MIDI:

  • Global elements — sequencer, metronome, main volumes and so on, stored inside the configuration file and you set them once;
  • Per-channel elements — channel on/off, mute, volume, solo and so on, stored inside the patch and you set them whenever you create a new song.

There is no MIDI mapping system; each binding is 'channel'-specific ('channel' being the Giada term for a sample or sequence), which doesn't seem like it would scale well.

Soundscape

Boodler

  • Boodler is an open-source soundscape tool -- continuous, infinitely varying streams of sound. Boodler is designed to run in the background on a computer, maintaining whatever sound environment you desire. Boodler is extensible, customizable, and modular. Each soundscape is a small piece of Python code -- typically less than a page. A soundscape can incorporate other soundscapes; it can combine other soundscapes, switch between them, fade them in and out. This package comes with many example soundscapes. You can use these, modify them, combine them to arbitrary levels of complexity, or write your own.

Klangwunder3000

Random Parallel Player

  • https://github.com/hilbrichtsoftware/random-parallel-player - Takes a bunch of audio files as tracks and plays them back randomly creating new music each playthrough. The core rule of RPP: No human interaction once the playback has started. RPP is based on an idea of Louigi Verona. The included audio samples in example.rpp were created by him. You can read about the original project here

Atmosfear

  • https://github.com/teragonaudio/Atmosfear - a VSTi plugin which generates random atmospheric soundscapes with samples scraped from FreeSound. We had originally imagined that the plugin could generate soundscapes resembling parks, public places, nature, etc. However, the resulting sounds that it makes are generally quite surreal and creepy, hence the name. :)


Foco

  • https://github.com/akashnimare/foco - a cross-platform desktop app which runs in the menubar. Foco boosts your productivity by creating a perfect productive environment. It has the best sounds for getting work done.

jungle

  • jungle - an audio system. It allows you to create an ambiance using random audio samples. It works with systems that have one or more audio devices (with one or more channels) and can also use multiple systems (e.g. a couple of Raspberry Pis) in a cluster. This software requires a Linux system and uses the ALSA subsystem. I run it with a server PC and 3 Raspberry Pis.


Web


  • myNoise - background noises and relaxing soundscape generator, web/app

Sonification

See also Synthesis#Graphics synthesis

  • https://en.wikipedia.org/wiki/Sonification - the use of non-speech audio to convey information or perceptualize data. Auditory perception has advantages in temporal, spatial, amplitude, and frequency resolution that open possibilities as an alternative or complement to visualization techniques. For example, the rate of clicking of a Geiger counter conveys the level of radiation in the immediate vicinity of the device.
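A minimal form of sonification is parameter mapping: each data point controls an audio parameter such as pitch, the way a Geiger counter maps radiation level to click rate. A small stdlib-Python sketch (illustrative names and mapping choices, not any particular tool's method) that renders a data series as a sequence of tones in a WAV file:

```python
import math, struct, wave

def sonify(values, path="sonified.wav", sr=8000, note_dur=0.1,
           fmin=220.0, fmax=880.0):
    """Map each data point linearly to a pitch and write one short
    tone per point -- the audio analogue of a line chart."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0
    frames = bytearray()
    for v in values:
        freq = fmin + (v - lo) / span * (fmax - fmin)
        for t in range(int(sr * note_dur)):
            s = int(20000 * math.sin(2 * math.pi * freq * t / sr))
            frames += struct.pack("<h", s)   # 16-bit little-endian PCM
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(sr)
        w.writeframes(bytes(frames))
    return path

sonify([3, 1, 4, 1, 5, 9, 2, 6])   # rising data -> rising pitch
```

Real sonification systems map to many more dimensions (timbre, spatial position, rhythm), but the principle is the same.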




  • CodeSounding - an open source sonification framework which makes it possible to hear how any existing Java program "sounds like", by assigning instruments and pitches to code statements (if, for, etc.) and playing them as they are executed at runtime. In this way the flow of execution is played as a flow of music, and its rhythm changes depending on user interaction.



  • HyperMammut - transform sounds to images and vice-versa using single BIG Fourier Transforms (or DCT/DST,etc.).


Retargeting

  • Scalable Music: Automatic Music Retargeting and Synthesis - S. Wenner, J.C. Bazin, A. Sorkine-Hornung, C. Kim, M. Gross. In this paper we propose a method for dynamic rescaling of music, inspired by recent works on image retargeting, video reshuffling and character animation in the computer graphics community. Given the desired target length of a piece of music and optional additional constraints such as position and importance of certain parts, we build on concepts from seam carving, video textures and motion graphs and extend them to allow for a global optimization of jumps in an audio signal. Based on an automatic feature extraction and spectral clustering for segmentation, we employ length-constrained least-costly path search via dynamic programming to synthesize a novel piece of music that best fulfills all desired constraints, with imperceptible transitions between reshuffled parts. We show various applications of music retargeting such as part removal, decreasing or increasing music duration, and in particular consistent joint video and audio editing.
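The "length-constrained least-costly path search via dynamic programming" at the heart of such retargeting can be illustrated on a toy scale: given a matrix of transition costs between audio segments, find the cheapest sequence of exactly L segments from a start segment to an end segment. This is a generic DP sketch of that idea, not the paper's implementation; segment costs here are made-up numbers standing in for the spectral-similarity features the paper extracts.

```python
def retarget(cost, length, start, end):
    """Cheapest path of exactly `length` segments from `start` to `end`,
    where cost[i][j] is the transition cost from segment i to segment j."""
    n = len(cost)
    INF = float("inf")
    best = [[INF] * n for _ in range(length)]
    prev = [[None] * n for _ in range(length)]
    best[0][start] = 0.0
    for step in range(1, length):
        for j in range(n):
            for i in range(n):
                c = best[step - 1][i] + cost[i][j]
                if c < best[step][j]:
                    best[step][j] = c
                    prev[step][j] = i
    # walk the predecessor table back from the end segment
    path, j = [end], end
    for step in range(length - 1, 0, -1):
        j = prev[step][j]
        path.append(j)
    return list(reversed(path)), best[length - 1][end]

# Toy example: 3 segments where following the natural order 0->1->2 is cheap.
cost = [[9, 1, 5],
        [5, 9, 1],
        [2, 5, 9]]
path, total = retarget(cost, length=3, start=0, end=2)
```

Varying `length` is exactly what lets the same machinery shorten or lengthen a piece while keeping transitions between reshuffled parts cheap (i.e. imperceptible).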


  • https://github.com/ucbvislab/radiotool - a python library that aims to make it easy to create audio by piecing together bits of other audio files. This library was originally written to enable my research in audio editing user interfaces, but perhaps someone else might find it useful.

Web

See also WebDev#Web Audio API, Drumming#Web

Background

Tuning

Dart-Mic

  • Dart-Mic - a JavaScript library which listens to microphone input and performs pitch/note detection, volume detection, recording, and general purpose data processing. It makes use of the Web Audio API (which is only supported by Chrome currently) and DSP.js.

AudioNotch

  • AudioNotch - Tinnitus Treatment Sound Therapy - Tuner and Tone Generator


x42-tuner

tunescope

  • https://github.com/dack/tunescope - an oscilloscope-style guitar tuner. It uses JACK for audio input and OpenGL for rendering. The signal is displayed in both normal and XY mode, using an automatically selected note as the reference.

FMIT

  • FMIT - Free Music Instrument Tuner is a graphical utility for tuning your musical instruments, with error and volume history and advanced features.

LINGOT

  • LINGOT - a musical instrument tuner. It's accurate, easy to use, and highly configurable. Originally conceived to tune electric guitars, it can now be used to tune other instruments. It looks like an analogue tuner, with a gauge indicating the relative shift to a certain note, found automatically as the closest note to the estimated frequency.

jackstrobe

Guitar Tuning Database


alt-tuner

  • alt-tuner - a DAW microtonal tuning plug-in that retunes almost every MIDI keyboard or softsynth. It runs on PCs, Macs and Linux/Wine machines.


MTuner

  • https://www.meldaproduction.com/MTuner - a simple Windows VST audio frequency analyzer designed mostly for tuning guitars and other instruments. It detects frequency, note and deviation from correct pitch in cents, resolving frequencies in the range 50Hz to 2kHz, which is enough for most instruments and vocals.

tuner

  • https://github.com/logsol/tuner - A simple standalone app for macOS based on JUCE that detects the frequency of an instrument and shows its note based on autocorrelation.
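The autocorrelation method this tuner uses is simple to sketch: slide the signal against a delayed copy of itself and pick the lag where the two agree best; that lag is the period, and sample rate divided by lag is the frequency. A minimal stdlib-Python illustration (a deliberately naive detector, not the plugin's code):

```python
import math

def detect_pitch(x, sr, fmin=50.0, fmax=1000.0):
    """Return the lag whose autocorrelation is highest, as a frequency."""
    best_lag, best_r = 0, float("-inf")
    for lag in range(int(sr / fmax), int(sr / fmin) + 1):
        r = sum(x[i] * x[i + lag] for i in range(len(x) - lag))
        if r > best_r:
            best_r, best_lag = r, lag
    return sr / best_lag

sr = 8000
a4 = [math.sin(2 * math.pi * 440 * t / sr) for t in range(1024)]
freq = detect_pitch(a4, sr)   # close to 440 Hz (limited by integer lags)
```

Real tuners refine the integer-lag estimate with interpolation, which is why they resolve fractions of a cent rather than the few-Hz granularity this sketch gives.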

Speech

Recognition






  • Audiogrep - transcribes audio files and then creates "audio supercuts" based on search phrases. It uses CMU Pocketsphinx for speech-to-text and pydub to stitch things together. [73]


  • Simon - an open source speech recognition program that can replace your mouse and keyboard. The system is designed to be as flexible as possible and will work with any language or dialect.



  • Speech Signal Processing Toolkit (SPTK) - a suite of speech signal processing tools for UNIX environments, e.g., LPC analysis, PARCOR analysis, LSP analysis, PARCOR synthesis filter, LSP synthesis filter, vector quantization techniques, and other extended versions of them. This software is released under the Modified BSD license. SPTK was developed and has been used in the research group of Prof. Satoshi Imai (retired) and Prof. Takao Kobayashi (currently with the Interdisciplinary Graduate School of Science and Engineering, Tokyo Institute of Technology) at the P&I laboratory, Tokyo Institute of Technology. A subset of tools was chosen and arranged for distribution by Prof. Keiichi Tokuda (currently with the Department of Computer Science and Engineering, Nagoya Institute of Technology) as coordinator, in cooperation with other collaborators (see "Acknowledgments" and "Who we are" in the README). The original source code was written by many people who took part in the activities of the research group; most of it was written by Takao Kobayashi (graph, data processing, FFT, sampling rate conversion, etc.), Keiichi Tokuda (speech analysis, speech synthesis, etc.), and Kazuhito Koishida (LSP, vector quantization, etc.).
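LPC analysis, the first technique SPTK lists, fits an all-pole predictor to a speech frame: each sample is modeled as a weighted sum of the previous `order` samples. A compact stdlib-Python sketch of the standard autocorrelation method with the Levinson-Durbin recursion (illustrative, not SPTK's implementation):

```python
import random

def lpc(x, order):
    """Autocorrelation-method LPC via the Levinson-Durbin recursion.
    Returns [1, a1, ..., a_order], the prediction polynomial A(z)."""
    n = len(x)
    r = [sum(x[i] * x[i + k] for i in range(n - k)) for k in range(order + 1)]
    a = [1.0] + [0.0] * order
    err = r[0]
    for m in range(1, order + 1):
        acc = r[m] + sum(a[j] * r[m - j] for j in range(1, m))
        refl = -acc / err                  # reflection (PARCOR) coefficient
        new_a = a[:]
        for j in range(1, m):
            new_a[j] = a[j] + refl * a[m - j]
        new_a[m] = refl
        a = new_a
        err *= (1 - refl * refl)           # remaining prediction error
    return a

# An AR(1) signal x[t] = 0.9 x[t-1] + e[t] should yield a1 close to -0.9.
random.seed(0)
x = [0.0]
for _ in range(4000):
    x.append(0.9 * x[-1] + random.gauss(0, 1))
coeffs = lpc(x, 1)
```

The intermediate reflection coefficients are exactly the PARCOR coefficients that SPTK's PARCOR analysis and synthesis filters work with.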


  • speechrate - software for the analysis of speech. Below you will find a script that automatically detects syllable nuclei in order to measure speech rate without the need of a transcription. Peaks in intensity (dB) that are preceded and followed by dips in intensity are considered as potential syllable nuclei. The script subsequently discards peaks that are not voiced. On this page you find an example of how the script works.





  • Wit.ai - makes it easy for developers to build applications and devices that you can talk or text to. Our vision is to empower developers with an open and extensible natural language platform. Wit.ai learns human language from every interaction, and leverages the community: what’s learned is shared across developers.






Synthesis

to sort/categorise



SAM


rsynth


Festival

  • Festival - or The Festival Speech Synthesis System, offers a general framework for building speech synthesis systems as well as including examples of various modules. As a whole it offers full text to speech through a number of APIs: from shell level, through a Scheme command interpreter, as a C++ library, from Java, and via an Emacs interface. Festival is multi-lingual (currently English (British and American), and Spanish) though English is the most advanced. Other groups release new languages for the system.
  • Festvox - aims to make the building of new synthetic voices more systemic and better documented, making it possible for anyone to build a new voice.


Rocaloid

  • Rocaloid - a free, open-source singing voice synthesis system. Its ultimate goal is to quickly synthesize natural, flexible and multi-lingual vocal parts. Like other vocal synthesis software, after installing the vocal database and inputting lyrics and pitch, you can synthesize attractive vocal parts. What's more, Rocaloid focuses on providing more controllable parameters, enabling fine control over subtle dimensions of the synthesized voice and export at better quality. Using a fully constructed Rocaloid database, you can synthesize singing in any phonetic-based language.

Festvox

  • Festvox - aims to make the building of new synthetic voices more systemic and better documented, making it possible for anyone to build a new voice. Specifically we offer: Documentation, including scripts explaining the background and specifics for building new voices for speech synthesis in new and supported languages. Example speech databases to help building new voices. Links, demos and a repository for new voices. This work is firmly grounded within Edinburgh University's Festival Speech Synthesis System and Carnegie Mellon University's small footprint Flite synthesis engine.


MaryTTS

  • MaryTTS is an open-source, multilingual Text-to-Speech Synthesis platform written in Java. It was originally developed as a collaborative project of DFKI’s Language Technology Lab and the Institute of Phonetics at Saarland University. It is now maintained by the Multimodal Speech Processing Group in the Cluster of Excellence MMCI and DFKI.


eSpeak

  • eSpeak - a compact open source software speech synthesizer for English and other languages, for Linux and Windows. eSpeak uses a "formant synthesis" method. This allows many languages to be provided in a small size. The speech is clear, and can be used at high speeds, but is not as natural or smooth as larger synthesizers which are based on human speech recordings.


  • https://github.com/divVerent/ecantorix - a singing synthesis frontend for espeak. It works by using espeak to generate raw speech samples, then adjusting their pitch and length and finally creating a LMMS project file referencing the samples in sync to the input file.

OpenSource SpeechSynth


MBROLA


Assistive Context-Aware Toolkit


Praat

  • Praat - doing phonetics by computer


Gnuspeech

  • gnuspeech - makes it easy to produce high quality computer speech output, design new language databases, and create controlled speech stimuli for psychophysical experiments. gnuspeechsa is a cross-platform module of gnuspeech that allows command line, or application-based speech output. The software has been released as two tarballs that are available in the project Downloads area of http://savannah.gnu.org/projects/gnuspeech. [79]


Project Merlin


UTAU

  • https://en.wikipedia.org/wiki/Utau - a Japanese singing synthesizer application created by Ameya/Ayame. This program is similar to the Vocaloid software, with the difference that it is shareware instead of being released under third-party licensing.




Sinsy


Mozilla TTS

CMU Flite

  • CMU Flite - a small, fast run-time open source text to speech synthesis engine developed at CMU and primarily designed for small embedded machines and/or large servers. Flite is designed as an alternative text to speech synthesis engine to Festival for voices built using the FestVox suite of voice building tools.

mesing


Adobe VoCo


VST Speek


char2wav


loop


IPOX

  • IPOX - an experimental, all-prosodic speech synthesizer, developed many years ago by Arthur Dirksen and John Coleman. It is still available for downloading, and was designed to run on a 486 PC running Windows 3.1 or higher, with a 16-bit Windows-compatible sound card, such as the Soundblaster 16. It still seems to run on e.g. XP, but I haven't tried it on Vista.


NPSS


Pink Trombone

Klatter

  • https://github.com/fundamental/klatter - a bare-bones formant synthesizer based upon the description given in the 1979 paper "Software For a Cascade/Parallel Formant Synthesizer" by Dennis Klatt. This program was not designed for interactive use, though there is code for some minimal MIDI control. In its current state, it is enough of a curiosity that it will be preserved, though it may not see much if any use.
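The building block of Klatt's cascade/parallel design is the two-pole digital resonator, defined by the difference equation y[n] = A·x[n] + B·y[n-1] + C·y[n-2], with C = -r², B = 2r·cos(2πF/sr), A = 1 - B - C, and r = exp(-πBW/sr) for center frequency F and bandwidth BW. A stdlib-Python sketch of one such resonator (an illustration of the standard formulation, not klatter's code):

```python
import math

def resonator(x, freq, bw, sr=10000):
    """Klatt-style digital resonator: y[n] = A x[n] + B y[n-1] + C y[n-2]."""
    r = math.exp(-math.pi * bw / sr)
    c = -r * r
    b = 2 * r * math.cos(2 * math.pi * freq / sr)
    a = 1 - b - c                    # unity gain at DC... er, at resonance scale
    y1 = y2 = 0.0
    out = []
    for s in x:
        y = a * s + b * y1 + c * y2
        out.append(y)
        y1, y2 = y, y1
    return out

# An impulse through a 500 Hz resonator rings near the formant frequency
# and decays at a rate set by the bandwidth.
impulse = [1.0] + [0.0] * 999
ring = resonator(impulse, freq=500.0, bw=60.0)
```

A cascade synthesizer chains several of these (one per formant) and drives them with a glottal pulse train; the parallel branch feeds them independently and sums the outputs.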


Tacotron 2


Real-Time-Voice-Cloning

leesampler

Neural Parametric Singing Synthesizer

VoiceOfFaust

Other


Analysis

ESPS

  • ESPS - Entropic Signal Processing System, is a package of UNIX-like commands and programming libraries for speech signal processing. As a commercial product of Entropic Research Laboratory, Inc, it became extremely widely used in phonetics and speech technology research laboratories in the 1990s, in view of the wide range of functions it offered, such as get_f0 (for fundamental frequency estimation), formant (for formant frequency measurement), the xwaves graphical user interface, and many other commands and utilities. Following the acquisition of Entropic by Microsoft in 1999, Microsoft and AT&T licensed ESPS to the Centre for Speech Technology at KTH, Sweden, so that a final legacy version of the ESPS source code could continue to be made available to speech researchers. At KTH, code from the ESPS library (such as get_f0) was incorporated by Kåre Sjölander and Jonas Beskow into the Wavesurfer speech analysis tool. This is a very good alternative way to use many ESPS functions if you want a graphical user interface rather than scripting.


NICO toolkit

  • NICO toolkit - mainly intended for, and originally developed for speech recognition applications, a general purpose toolkit for constructing artificial neural networks and training with the back-propagation learning algorithm. The network topology is very flexible. Units are organized in groups and the group is a hierarchical structure, so groups can have sub-groups or other objects as members. This makes it easy to specify multi-layer networks with arbitrary connection structure and to build modular networks.

Speech Research Tools

  • https://sourceforge.net/projects/speechresearch - software for speech research. It includes programs and libraries for signal processing, along with general purpose scientific libraries. Most of the code is in Python, with C/C++ supporting code. It also contains code releases corresponding to published work.


HAT

  • Higgins Annotation Tool - can be used to transcribe and annotate speech with one or more audio tracks (such as dialogue). Windows.

Machine learning


Video

See also Video

  • Xjadeo is a software video player that displays a video clip in sync with an external time source (MTC, LTC, JACK transport). Xjadeo is useful in soundtrack composition, video monitoring, or any task that requires synchronizing movie frames with external events.



Gaze

Performance

  • The Box of No Return - a Linux-based musical synthesizer platform, suitable for live musicianship, designed to handle multiple patches with enormous demands, and switch between them with zero delay and zero cutout.  If you sit in your home studio and use single SoundFonts with a laptop and simple GUI, you don't need this.  If you play live, and pile on the tone generators and filters in patch development in order to feel and deliver the unyielding power of the musical harmonic roar, a full implementation of the BNR may suit you well.  There are obviously middle grounds too ☺, and there are articles here to help in general.


Games

  • FRACT - a musical exploration game. You arrive in a forgotten place and explore the unfamiliar landscape to discover the secrets of an abandoned world that was once built on sound. As you start to make sense of this strange new environment, you work to rebuild its machinery by solving puzzles and bring the world back to life by shaping sound and creating music in the game.


Apps

  • Augment - an amazing way to listen to the world. It harmonizes your listening experience, which helps you to be less distracted and stressed. The Augment app filters your acoustic environment, takes out harsh sounds and turns stressful noise into harmonic sound environments. Try it now, it's a free download on the App Store!

DOS

macOS


Other



  • https://en.wikipedia.org/wiki/ReWire_(software_protocol) - a software protocol, jointly developed by Propellerhead and Steinberg, allowing remote control and data transfer among digital audio editing and related software. Originally appearing in the ReBirth software synthesizer in 1998, the protocol has since evolved into an industry standard. Currently used in macOS and Microsoft Windows 32-bit or 64-bit audio applications, ReWire enables the simultaneous transfer of up to 256 audio tracks of arbitrary resolution and 4080 channels of MIDI data. This allows, for example, the output from synthesizer software to be fed directly into a linear editor without the use of intermediate files or analog transfers. There are also provisions to remotely trigger actions, such as starting and stopping recording. The protocol is licensed free of charge to companies only, but comes with a "non-disclosure of source code" license that is incompatible with most free-software licenses. The ReWire system consists of "Hosts", "Panels", and "Devices". Hosts are the host applications which typically do the sequencing at one end and the final mixdown at the other end. A Device is a dynamic link library that only generates sound; it has no user interface. A Panel is a graphical interface for setting the parameters of one Device. A typical setup would be to use Ableton Live in "Host" mode, and use Propellerhead Reason as a synthesizer. In this case Reason would provide Device/Panel pairs to Ableton, which could then send midi commands, sync timing and mix Reason's output into its own effects chains. Many applications support either mode. In fact, an application could (at the discretion of a developer) act as both a Host and a Panel at the same time.