Audio
General
Mostly Linux, mostly free software.
See also Creative coding, Playback, Dataflow, Pure Data, Distros#Media
- http://stuff.mihozu.net/stuff/bookmarks - old but interesting
- https://librazik.tuxfamily.org/doc2/logiciels - francophone
- CDM - blog, etc. on Create Digital Music, motion, and more.
“Sound is when you mow your lawn, noise is when your neighbor mows their lawn, and music is when your neighbor mows your lawn,” [1]
Acoustics
mess, to merge with parts of Music, Speaker
- https://en.wikipedia.org/wiki/Acoustics - the interdisciplinary science that deals with the study of all mechanical waves in gases, liquids, and solids including topics such as vibration, sound, ultrasound and infrasound. A scientist who works in the field of acoustics is an acoustician while someone working in the field of acoustics technology may be called an acoustical engineer. The application of acoustics is present in almost all aspects of modern society with the most obvious being the audio and noise control industries.
- https://en.wikipedia.org/wiki/Acoustical_engineering - also known as acoustic engineering, is the branch of engineering dealing with sound and vibration. It includes the application of acoustics, the science of sound and vibration, in technology. Acoustical engineers are typically concerned with the design, analysis and control of sound. One goal of acoustical engineering can be the reduction of unwanted noise, which is referred to as noise control. Unwanted noise can have significant impacts on animal and human health and well-being, reduce attainment by students in schools, and cause hearing loss. Noise control principles are implemented into technology and design in a variety of ways, including control by redesigning sound sources, the design of noise barriers, sound absorbers, suppressors, and buffer zones, and the use of hearing protection (earmuffs or earplugs). It also covers positive uses of sound, from the use of ultrasound in medicine to the programming of digital sound synthesizers, and from designing a concert hall to enhance the sound of an orchestra to specifying a railway station's sound system so announcements are intelligible.
- https://en.wikipedia.org/wiki/Acoustic_wave - type of longitudinal waves that propagate by means of adiabatic compression and decompression. Longitudinal waves are waves that have the same direction of vibration as their direction of travel. Important quantities for describing acoustic waves are sound pressure, particle velocity, particle displacement and sound intensity. Acoustic waves travel with the speed of sound which depends on the medium they're passing through.
- https://en.wikipedia.org/wiki/Surface_acoustic_wave - an acoustic wave traveling along the surface of a material exhibiting elasticity, with an amplitude that typically decays exponentially with depth into the material.
- https://en.wikipedia.org/wiki/Sound - a vibration that propagates as a typically audible mechanical wave of pressure and displacement, through a medium such as air or water. In physiology and psychology, sound is the reception of such waves and their perception by the brain.
- https://en.wikipedia.org/wiki/Sound_energy - a form of energy associated with the vibration of matter. The SI unit of sound energy is the joule (J). Sound is a mechanical wave and as such consists physically in oscillatory elastic compression and in oscillatory displacement of a fluid. Therefore, the medium acts as storage for both potential and kinetic energy as well.
- https://en.wikipedia.org/wiki/Speed_of_sound - the distance travelled per unit time by a sound wave as it propagates through an elastic medium. In dry air at 20 °C (68 °F), the speed of sound is 343.2 metres per second (1,126 ft/s; 1,236 km/h; 768 mph; 667 kn), or a kilometre in 2.914 s or a mile in 4.689 s.
- https://en.wikipedia.org/wiki/Sound_power - or acoustic power is the rate at which sound energy is emitted, reflected, transmitted or received, per unit time. The SI unit of sound power is the watt (W). It is the power of the sound force on a surface of the medium of propagation of the sound wave. For a sound source, unlike sound pressure, sound power is neither room-dependent nor distance-dependent. Sound pressure is a measurement at a point in space near the source, while the sound power of a source is the total power emitted by that source in all directions. Sound power passing through an area is sometimes called sound flux or acoustic flux through that area.
- https://en.wikipedia.org/wiki/Sound_intensity - also known as acoustic intensity is defined as the sound power per unit area. The SI unit of sound intensity is the watt per square meter (W/m2). The usual context is the noise measurement of sound intensity in the air at a listener's location as a sound energy quantity. Sound intensity is not the same physical quantity as sound pressure. Hearing is directly sensitive to sound pressure which is related to sound intensity. In consumer audio electronics, the level differences are called "intensity" differences, but sound intensity is a specifically defined quantity and cannot be sensed by a simple microphone. Sound energy passing per second through a unit area held perpendicular to the direction of propagation of sound waves is called intensity of sound.
- https://en.wikipedia.org/wiki/Sound_pressure - or acoustic pressure is the local pressure deviation from the ambient (average, or equilibrium) atmospheric pressure, caused by a sound wave. In air, sound pressure can be measured using a microphone, and in water with a hydrophone. The SI unit of sound pressure is the pascal (Pa).
- https://en.wikipedia.org/wiki/Decibel - a logarithmic unit used to express the ratio of two values of a physical quantity, often power or intensity. One of these values is often a standard reference value, in which case the decibel is used to express the level of the other value relative to this reference. The number of decibels is ten times the logarithm to base 10 of the ratio of two power quantities, or of the ratio of the squares of two field amplitude quantities.
The decibel is commonly used in acoustics as a unit of sound pressure level. The reference pressure in air is set at the typical threshold of perception of an average human and there are common comparisons used to illustrate different levels of sound pressure.
- Gain structure: input and output levels - This article describes input and output gain structure of audio devices. Audio components are typically rated by their input sensitivity and/or maximum output voltage. This article explains how to match the output voltage of an audio device to the input voltage range of the next device in the signal chain, and how to adjust input sensitivity to accommodate a variety of voltages from different source devices. dB, dBu, dBV, dBFS, and dB-SPL.
- https://en.wikipedia.org/wiki/Audio_frequency - AF or audible frequency, is characterized as a periodic vibration whose frequency is audible to the average human. The SI unit of audio frequency is the hertz (Hz). It is the property of sound that most determines pitch. The generally accepted standard range of audible frequencies is 20 to 20,000 Hz, although the range of frequencies individuals hear is greatly influenced by environmental factors. Frequencies below 20 Hz are generally felt rather than heard, assuming the amplitude of the vibration is great enough. Frequencies above 20,000 Hz can sometimes be sensed by young people. High frequencies are the first to be affected by hearing loss due to age and/or prolonged exposure to very loud noises.
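A few of the quantities above tie together numerically; a quick Python sketch using the reference values given in the entries (illustrative, not a measurement tool):

```python
import math

# Speed of sound in dry air rises with temperature: c = 331.3 * sqrt(1 + T/273.15)
def speed_of_sound(temp_c):
    return 331.3 * math.sqrt(1 + temp_c / 273.15)

# A point source radiating uniformly spreads its power over a sphere,
# so intensity falls with the square of distance: I = W / (4*pi*r^2)
def intensity(power_w, r_m):
    return power_w / (4 * math.pi * r_m ** 2)

# Decibels: 10*log10 for power quantities, 20*log10 for field quantities
# such as sound pressure (referenced to 20 uPa in air).
def spl(pressure_pa, p_ref=20e-6):
    return 20 * math.log10(pressure_pa / p_ref)

print(round(speed_of_sound(20), 1))       # ~343.2 m/s, matching the figure above
print(intensity(1, 1) / intensity(1, 2))  # doubling distance quarters intensity: 4.0
print(round(spl(1.0)))                    # 1 Pa is ~94 dB SPL
```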
See Music
- S.S. Stevens, "The relation of pitch to intensity", Journal of the Acoustical Society of America, Vol. 6, 1935, pp. 150-154. http://asa.scitation.org/doi/10.1121/1.1915715
- W.B. Snow, "Changes of pitch with loudness at low frequencies", Journal of the Acoustical Society of America, Vol. 8, 1936, pp. 14-19. http://asa.scitation.org/doi/10.1121/1.1915846
- Engineering Acoustics - free Wikibook
- https://en.wikipedia.org/wiki/Onset_(audio) - the beginning of a musical note or other sound, in which the amplitude rises from zero to an initial peak. It is related to (but different from) the concept of a transient: all musical notes have an onset, but do not necessarily include an initial transient.
- https://en.wikipedia.org/wiki/Transient_(acoustics) - a high amplitude, short-duration sound at the beginning of a waveform that occurs in phenomena such as musical sounds, noises or speech. It can sometimes contain a high degree of non-periodic components and a higher magnitude of high frequencies than the harmonic content of that sound. Transients do not necessarily directly depend on the frequency of the tone they initiate. Transients are more difficult to encode with many audio compression algorithms, causing pre-echo.
- https://en.wikipedia.org/wiki/Acoustic_space - an acoustic environment in which sound can be heard by an observer. The term "acoustic space" was first mentioned by Marshall McLuhan, a professor and a philosopher.
- https://en.wikipedia.org/wiki/Acoustic_attenuation - a measure of the energy loss of sound propagation in media. Most media have viscosity, and are therefore not ideal media. When sound propagates in such media, there is always thermal consumption of energy caused by viscosity. For inhomogeneous media, besides media viscosity, acoustic scattering is another main reason for removal of acoustic energy. Acoustic attenuation in a lossy medium plays an important role in many scientific researches and engineering fields, such as medical ultrasonography, vibration and noise reduction.
- https://en.wikipedia.org/wiki/Absorption_(acoustics) - refers to the process by which a material, structure, or object takes in sound energy when sound waves are encountered, as opposed to reflecting the energy. Part of the absorbed energy is transformed into heat and part is transmitted through the absorbing body. The energy transformed into heat is said to have been 'lost'.
When sound from a loudspeaker collides with the walls of a room part of the sound's energy is reflected, part is transmitted, and part is absorbed into the walls. As the waves travel through the wall they deform the material thereof (just like they deformed the air before). This deformation causes mechanical losses via conversion of part of the sound energy into heat, resulting in acoustic attenuation, mostly due to the wall's viscosity. Similar attenuation mechanisms apply for the air and any other medium through which sound travels.
The fraction of sound absorbed is governed by the acoustic impedances of both media and is a function of frequency and the incident angle. Size and shape can influence the sound wave's behavior if they interact with its wavelength, giving rise to wave phenomena such as standing waves and diffraction. Acoustic absorption is of particular interest in soundproofing. Soundproofing aims to absorb as much sound energy (often in particular frequencies) as possible converting it into heat or transmitting it away from a certain location. In general, soft, pliable, or porous materials (like cloths) serve as good acoustic insulators - absorbing most sound, whereas dense, hard, impenetrable materials (such as metals) reflect most.
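The absorption discussed above feeds directly into Sabine's classic reverberation formula, RT60 = 0.161·V/A, where A is the total absorption in metric sabins. A small sketch (room dimensions and absorption coefficients are made-up illustrative values):

```python
def rt60_sabine(volume_m3, surfaces):
    """Sabine's reverberation time: RT60 = 0.161 * V / A, where A is the
    total absorption in metric sabins (sum of area * absorption coefficient)."""
    absorption = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / absorption

# Hypothetical 5 x 4 x 3 m room: hard walls/ceiling (alpha ~0.05)
# plus a carpeted floor (alpha ~0.3)
room = [(2 * (5 * 3) + 2 * (4 * 3) + 5 * 4, 0.05), (5 * 4, 0.3)]
print(round(rt60_sabine(60, room), 2))  # on the order of one second
```

Adding softer (higher-alpha) surfaces increases A and shortens the decay, which is exactly what soundproofing treatments exploit.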
- https://en.wikipedia.org/wiki/Acoustic_resonance - a phenomenon where acoustic systems amplify sound waves whose frequency matches one of its own natural frequencies of vibration (its resonance frequencies). The term "acoustic resonance" is sometimes used to narrow mechanical resonance to the frequency range of human hearing, but since acoustics is defined in general terms concerning vibrational waves in matter, acoustic resonance can occur at frequencies outside the range of human hearing. An acoustically resonant object usually has more than one resonance frequency, especially at harmonics of the strongest resonance. It will easily vibrate at those frequencies, and vibrate less strongly at other frequencies. It will "pick out" its resonance frequency from a complex excitation, such as an impulse or a wideband noise excitation. In effect, it is filtering out all frequencies other than its resonance. Acoustic resonance is an important consideration for instrument builders, as most acoustic instruments use resonators, such as the strings and body of a violin, the length of tube in a flute, and the shape of a drum membrane. Acoustic resonance is also important for hearing. For example, resonance of a stiff structural element, called the basilar membrane within the cochlea of the inner ear allows hair cells on the membrane to detect sound. (For mammals the membrane has tapering resonances across its length so that high frequencies are concentrated on one end and low frequencies on the other.)
- https://en.wikipedia.org/wiki/Helmholtz_resonance - or wind throb is the phenomenon of air resonance in a cavity, such as when one blows across the top of an empty bottle. The name comes from a device created in the 1850s by Hermann von Helmholtz, the Helmholtz resonator, which he used to identify the various frequencies or musical pitches present in music and other complex sounds.
- https://en.wikipedia.org/wiki/Diffusion_(acoustics) - in acoustics and architectural engineering, is the efficacy by which sound energy is spread evenly in a given environment. A perfectly diffusive sound space is one that has certain key acoustic properties which are the same anywhere in the space. A non-diffuse sound space would have considerably different reverberation time as the listener moved around the room. Virtually all spaces are non-diffuse. Spaces which are highly non-diffuse are ones where the acoustic absorption is unevenly distributed around the space, or where two different acoustic volumes are coupled. The diffusiveness of a sound field can be measured by taking reverberation time measurements at a large number of points in the room, then taking the standard deviation on these decay times.[citation needed] Alternately, the spatial distribution of the sound can be examined. Small sound spaces generally have very poor diffusion characteristics at low frequencies due to room modes.
- https://en.wikipedia.org/wiki/Anechoic_chamber - an-echoic meaning "non-reflective, non-echoing, echo-free", is a room designed to completely absorb reflections of either sound or electromagnetic waves. They are also often isolated from waves entering from their surroundings. This combination means that a person or detector exclusively hears direct sounds (no reverberant sounds), in effect simulating being inside an infinitely large room.
- https://en.wikipedia.org/wiki/Cymatics - the study of visible sound and vibration
- https://en.wikipedia.org/wiki/Beat_(acoustics) - an interference pattern between two sounds of slightly different frequencies, perceived as a periodic variation in volume whose rate is the difference of the two frequencies.
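The beat rate in the last entry follows from the sum-to-product identity sin(a) + sin(b) = 2·cos((a−b)/2)·sin((a+b)/2); a small NumPy check (tone frequencies chosen arbitrarily):

```python
import numpy as np

f1, f2 = 440.0, 444.0
fs = 8000
t = np.arange(fs) / fs  # one second of samples

# Two tones 4 Hz apart sound like a 442 Hz tone whose volume
# swells |f1 - f2| = 4 times per second.
x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
envelope = 2 * np.abs(np.cos(2 * np.pi * (f1 - f2) / 2 * t))

# The identity predicts |x| exactly:
assert np.allclose(np.abs(x),
                   envelope * np.abs(np.sin(2 * np.pi * (f1 + f2) / 2 * t)),
                   atol=1e-9)
print(abs(f1 - f2))  # beat rate in Hz: 4.0
```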
Performing Musician Magazine:
- Acoustics and sound propagation: Part 1 - Sound fields, Critical distance
- Acoustics and sound propagation: Part 2 - Diffraction
- Acoustics and sound propagation: Part 3 - Sabines, Diffusion, Refraction, Typical absorption in Sabines per square foot
- Acoustics and sound propagation: Part 4 - Refraction, Standing waves, Critical Distance
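Returning to the Helmholtz resonance entry above: the resonance frequency is f = (c/2π)·√(A/(V·L)) for neck cross-section A, neck length L and cavity volume V. A sketch with illustrative bottle-like dimensions (the neck end correction is omitted for simplicity):

```python
import math

def helmholtz_freq(neck_area_m2, neck_len_m, cavity_vol_m3, c=343.0):
    """Helmholtz resonance: f = (c / 2*pi) * sqrt(A / (V * L)).
    Real necks need an end correction added to L; omitted here."""
    return (c / (2 * math.pi)) * math.sqrt(neck_area_m2 / (cavity_vol_m3 * neck_len_m))

# Rough bottle-like numbers (illustrative, not measured):
a = math.pi * 0.01 ** 2  # 2 cm diameter neck
print(round(helmholtz_freq(a, 0.05, 0.00075), 1))  # low hundreds of Hz
```

Blowing across the bottle excites this one frequency, which is why partially filling it (shrinking V) raises the pitch.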
Loudness
- https://en.wikipedia.org/wiki/Loudness - the subjective perception of sound pressure. More formally, it is defined as, "That attribute of auditory sensation in terms of which sounds can be ordered on a scale extending from quiet to loud." The relation of physical attributes of sound to perceived loudness consists of physical, physiological and psychological components. The study of apparent loudness is included in the topic of psychoacoustics and employs methods of psychophysics.
In different industries, loudness may have different meanings and different measurement standards. Some definitions such as LKFS refer to relative loudness of different segments of electronically reproduced sounds such as for broadcasting and cinema. Others, such as ISO 532A (Stevens loudness, measured in sones), ISO 532B (Zwicker loudness), DIN 45631 and ASA/ANSI S3.4, have a more general scope and are often used to characterize loudness of environmental noise.
Loudness is a subjective measure, often confused with physical measures of sound strength such as sound pressure, sound pressure level (in decibels), sound intensity or sound power. Filters such as A-weighting and ITU-R BS.1770 attempt to compensate measurements to correspond to loudness as perceived by the typical human.
- https://en.wikipedia.org/wiki/Phon - a unit of loudness level for pure tones. Its purpose is to compensate for the effect of frequency on the perceived loudness of tones. By definition, the number of phon of a sound is the dB SPL of a sound at a frequency of 1 kHz that sounds just as loud. This implies that 0 phon is the limit of perception, and inaudible sounds have negative phon levels. The equal-loudness contours are a way of mapping the dB SPL of a pure tone to the perceived loudness level (LN) in phons. These are now defined in the international standard ISO 226:2003, and the research on which this document is based concluded that earlier Fletcher–Munson curves and Robinson–Dadson curves were in error. The phon unit is not an SI unit in metrology. It is used as a unit of loudness level by the American National Standards Institute.
- https://en.wikipedia.org/wiki/Equal-loudness_contour - a measure of sound pressure (dB SPL), over the frequency spectrum, for which a listener perceives a constant loudness when presented with pure steady tones. The unit of measurement for loudness levels is the phon, and is arrived at by reference to equal-loudness contours. By definition, two sine waves of differing frequencies are said to have equal-loudness level measured in phons if they are perceived as equally loud by the average young person without significant hearing impairment. Equal-loudness contours are often referred to as "Fletcher-Munson" curves, after the earliest researchers, but those studies have been superseded and incorporated into newer standards. The definitive curves are those defined in the international standard ISO 226:2003, which are based on a review of modern determinations made in various countries.
- https://en.wikipedia.org/wiki/Fletcher–Munson_curves - one of many sets of equal-loudness contours for the human ear, determined experimentally by Harvey Fletcher and Wilden A. Munson, and reported in a 1933 paper entitled "Loudness, its definition, measurement and calculation" in the Journal of the Acoustical Society of America.
- https://en.wikipedia.org/wiki/LKFS - a loudness standard designed to enable normalization of audio levels for delivery of broadcast TV and other video. Loudness units relative to full scale (LUFS) is a synonym for LKFS that was introduced in EBU R128. Loudness units (LU) is an additional unit used in EBU R128. It describes Lk without direct absolute reference and therefore describes loudness level differences. LKFS is standardized in ITU-R BS.1770.
- YouTube: EBU R128 Introduction - Florian Camerer - an introduction to the European Broadcasting Union's R128 Broadcast Standard and speaks in general about perceived loudness, peak normalization, loudness normalization, etc.
- https://github.com/dpapavas/alsaloudness - a loudness-compensated volume control for ALSA. It is based on the ISO 226 equal loudness contours, it is fast (thanks to FFTW for the most part) and should be able to run in real time, presenting a very small load to the CPU, even for the embedded systems often used in media player setups.
- https://github.com/jiixyj/libebur128 - A library implementing the EBU R128 loudness standard.
- https://en.wikipedia.org/wiki/Sone - a unit of how loud a sound is perceived. The sone scale is linear. Doubling the perceived loudness doubles the sone value. Proposed by Stanley Smith Stevens in 1936, it is a non-SI unit. In acoustics, loudness is the subjective perception of sound pressure. The study of apparent loudness is included in the topic of psychoacoustics and employs methods of psychophysics.
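The sone and phon scales relate by Stevens' rule that every +10 phon doubles perceived loudness (the mapping is standard, but only holds above roughly 40 phon):

```python
def phon_to_sone(phon):
    """Stevens' mapping: 40 phon = 1 sone, and each +10 phon doubles loudness."""
    return 2 ** ((phon - 40) / 10)

print(phon_to_sone(40))  # 1.0 sone by definition
print(phon_to_sone(50))  # 2.0 sones: 10 phon louder sounds twice as loud
print(phon_to_sone(60))  # 4.0 sones
```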
Psychoacoustics
- https://en.wikipedia.org/wiki/Psychoacoustics - the scientific study of sound perception. More specifically, it is the branch of science studying the psychological and physiological responses associated with sound (including speech and music). It can be further categorized as a branch of psychophysics.
- https://en.wikipedia.org/wiki/Bark_scale - a psychoacoustical scale proposed by Eberhard Zwicker in 1961. It is named after Heinrich Barkhausen who proposed the first subjective measurements of loudness. One definition of the term is "...a frequency scale on which equal distances correspond with perceptually equal distances. Above about 500 Hz this scale is more or less equal to a logarithmic frequency axis. Below 500 Hz the Bark scale becomes more and more linear." The scale ranges from 1 to 24 and corresponds to the first 24 critical bands of hearing. It is related to, but somewhat less popular than, the mel scale, a perceptual scale of pitches judged by listeners to be equal in distance from one another.
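One common closed-form approximation of the Bark scale is Zwicker's formula; a quick sketch showing the near-linear-below-500-Hz, roughly-logarithmic-above behaviour described in the entry:

```python
import math

def hz_to_bark(f):
    """Zwicker's approximation of the Bark critical-band scale."""
    return 13 * math.atan(0.00076 * f) + 3.5 * math.atan((f / 7500) ** 2)

# The scale runs from about 0 to 24 over the audible range:
for f in (100, 500, 1000, 5000, 15000):
    print(f, round(hz_to_bark(f), 2))
```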
Software
- I-Simpa - open-source software dedicated to the modelling of sound propagation in 3D complex domains. A tool for experts (i.e. acousticians), for teachers and students, as well as for researchers, in their projects (room acoustics, urban acoustics, industrial spaces, acoustics courses...).
Training
See Music#Training
- RealSimple Project - musical acoustics laboratory exercises integrating both hands-on laboratory experience and computer-based simulation.
Cymatics
- http://www.cymaticsource.com/
- http://cymatica.com/
- http://www.cymascope.com/
- http://cymaticamplifier.com/
- http://www.cymatrix.org/cymatrix1/home.html
- http://softwaretonoscope.com/
- http://www.phy.davidson.edu/StuHome/jimn/Java/modes.html
Non-Newtonian
- YouTube: Collin's Lab: DIY Cymatics
- YouTube: Slow-Mo Non-Newtonian Fluid on a Speaker - Veritasium
Electrical
- https://en.wikipedia.org/wiki/Audio_signal - a representation of sound, typically as an electrical voltage. Audio signals have frequencies in the audio frequency range of roughly 20 to 20,000 Hz (the limits of human hearing). Audio signals may be synthesized directly, or may originate at a transducer such as a microphone, musical instrument pickup, phonograph cartridge, or tape head. Loudspeakers or headphones convert an electrical audio signal into sound. Digital representations of audio signals exist in a variety of formats. An audio channel or audio track is an audio signal communications channel in a storage device, used in operations such as multi-track recording and sound reinforcement.
- https://en.wikipedia.org/wiki/Audio_power - the electrical power transferred from an audio amplifier to a loudspeaker, measured in watts. The electrical power delivered to the loudspeaker, together with its efficiency, determines the sound power generated (with the rest of the electrical power being converted to heat). Amplifiers are limited in the electrical energy they can output, while loudspeakers are limited in the electrical energy they can convert to sound energy without being damaged or distorting the audio signal. These limits, or power ratings, are important to consumers finding compatible products and comparing competitors.
- ITT Cannon Audio Connectors - ITT Cannon’s line of connectors is environmentally-sealed and designed to meet the requirements of today's electronics, telecommunications and industrial applications, including instrumentation.
Digital
- Advanced Aspects of Digital Audio - Collected for the inquisitive audio enthusiast
- https://github.com/MTG/conferences - This repository hosts a list of upcoming and past conference calls and journal calls for the wider music technology community.
- https://www.mp3-tech.org/programmer/docs/resampler.pdf - "The Quest For The Perfect Resampler", Laurent de Soras, 2003-06-23. Author's site: http://ldesoras.free.fr
- SRC Comparisons - "We have organized the testing of some of the objective parameters of SRC algorithms in the 96 kHz - 44.1 kHz conversion mode. This mode is considered "hard" because of its fractional resampling ratio. The set of test signals has been discussed among engineers from Weiss Engineering, Alexey Lukin and members of Glenn Meadows' Mastering Web-Board. The test files were available in a variety of resolutions (32-bit int, 32-bit float, 24-bit), and the best supported resolution has been used for each of the SRC algorithms tested. The resulting graphs have been drawn by a modified version of the RightMark Audio Analyzer (RMAA) and some specially developed analysis software."
- Sound on Sound: Phase Demystified - Phase interactions are well known for their ability to destructively interfere with recorded signals, but an understanding of the process can turn it into one of the most powerful creative tools available to you.
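The 96 kHz → 44.1 kHz conversion discussed above reduces to the ratio 147/320, which is why it is considered a "hard" fractional mode. A deliberately naive NumPy sketch (linear interpolation only; a real SRC band-limits the signal first to avoid aliasing):

```python
import numpy as np
from math import gcd

def resample(x, fs_in, fs_out):
    """Naive linear-interpolation resampler. No anti-alias filtering --
    this only illustrates the fractional ratio, not production-quality SRC."""
    n_out = int(len(x) * fs_out / fs_in)
    t_out = np.arange(n_out) * (fs_in / fs_out)  # output times in input-sample units
    return np.interp(t_out, np.arange(len(x)), x)

g = gcd(44100, 96000)
print(44100 // g, 96000 // g)  # 147 320 -- the awkward fractional ratio

x = np.sin(2 * np.pi * 1000 * np.arange(96000) / 96000)  # 1 s of 1 kHz at 96 kHz
y = resample(x, 96000, 44100)
print(len(y))  # 44100 samples
```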
Production
See also Music, DAW, Synthesis#Programming, etc.
- https://en.wikipedia.org/wiki/Programming_(music) - a form of music production and performance using electronic devices, such as sequencers, to generate sounds of musical instruments.
Synth programming, sequencer programming, etc.
- http://www.trevorwishart.co.uk/transformation.html - sound art, spectral, electro-acoustic
- YouTube: Extreme jazz fusion reharmonization - Adam Neely
- http://www.livepa.org/ - performance
- YouTube: TOBI NEUMANN (EB.TV Tech Talk)
- DnBProduction.com: FAQ - archived 2005
- YouTube: How I mix midi drums
Communities
- Dancetech - music recording technology forums
- AudioSEX - Professional Audio Forum
- PDF: Linux Midi Orchestration - Peter Schaffter
Amplitude
Frequency
Drums
- YouTube: How To Program Drum Grooves - Rick Beato
- YouTube: Let's Make Synth Drums in Linux!
- http://www.mediafire.com/file/gsybc0lle771ggi/Rene+Pierre+Bardet+-+260+Drum+Machine+Patterns.pdf.zip
- Pocket Operations - A portable collection of drum machine patterns.
Mixdown
Hardware
- http://blip.tv/ruinwesen/ruin-wesen-minicommand-pattern-euclid-3186495
- http://www.rebeltech.org/modules/stoicheia/
- http://cycling74.com/
See Sound On Sound, etc.
- Schematic Vault - This collection of pro audio schematics and reference materials has been amassed both from my private stock and from various internet resources. All materials have been formatted as multi-page pdf files for ease of use. Please feel free to email me (address on home page) with any material you'd care to add.
Studio
Wiring
- https://en.wikipedia.org/wiki/Phone_connector_(audio) - also known as phone jack, audio jack, headphone jack or quarter inch jack plug, is a family of electrical connectors typically used for analog audio signals. The phone connector was invented for use in telephone switchboards in the 19th century and is still widely used. The phone connector is cylindrical in shape, with a grooved tip to retain it. In its original audio configuration, it typically has two, three, four and, occasionally, five contacts. Three-contact versions are known as TRS connectors, where T stands for "tip", R stands for "ring" and S stands for "sleeve". Ring contacts are typically the same diameter as the sleeve, the long shank. Similarly, two-, four- and five- contact versions are called TS, TRRS and TRRRS connectors respectively. The outside diameter of the "sleeve" conductor is 1⁄4 inch (6.35 millimetres).
The "mini" connector has a diameter of 3.5 mm (0.14 in) and the "sub-mini" connector has a diameter of 2.5 mm (0.098 in).
- YouTube: Connecting your neutrik speakON Connector on wire - MrFlexy SMPS
- https://en.wikipedia.org/wiki/Balanced_audio - a method of interconnecting audio equipment using balanced lines. This type of connection is very important in sound recording and production because it allows the use of long cables while reducing susceptibility to external noise caused by electromagnetic interference. Balanced connections typically use shielded twisted-pair cable and three-conductor connectors. The connectors are usually 3-pin XLR or 1⁄4 inch (6.35 mm) TRS phone connectors. When used in this manner, each cable carries one channel, therefore stereo audio (for example) would require two of them.
- https://en.wikipedia.org/wiki/XLR_connector - a style of electrical connector, primarily found on professional audio, video, and stage lighting equipment. The connectors are circular in design and have between 3 and 7 pins. They are most commonly associated with balanced audio interconnection, including AES3 digital audio, but are also used for lighting control, low-voltage power supplies, and other applications. XLR connectors are available from a number of manufacturers and are covered by an international standard for dimensions, IEC 61076-2-103.[1] They are superficially similar to the older and smaller DIN connector range, but are not physically compatible with them.
- Furutech PCOCC process - PCOCC Pure Copper by Ohno Continuous Casting
Patch bay
"Full-Normal: Each jack on the top row is connected to the jack under it on the bottom row. This allows the audio or video signal to "pass through" the patchbay without using a patch cable. When we want to change the "normal" signal path we can use a patch cable to change the destination of the signal. Placing a patch cable into either row breaks the signal path. The signal follows the patch cable to where it is patched."
"Half-Normal: ...Placing a patch cable into the bottom row breaks the signal path. Placing a patch cable into the top row allows the signal to still go to the jack under it on the bottom row (without breaking the normal) and also follows the patch cable."
Microphones
- https://en.wikipedia.org/wiki/Microphone - colloquially nicknamed mic or mike (/maɪk/), is a transducer that converts sound into an electrical signal. Microphones are used in many applications such as telephones, hearing aids, public address systems for concert halls and public events, motion picture production, live and recorded audio engineering, sound recording, two-way radios, megaphones, radio and television broadcasting, and in computers for recording voice, speech recognition, VoIP, and for non-acoustic purposes such as ultrasonic sensors or knock sensors. Several different types of microphone are in use, which employ different methods to convert the air pressure variations of a sound wave to an electrical signal. The most common are the dynamic microphone, which uses a coil of wire suspended in a magnetic field; the condenser microphone, which uses the vibrating diaphragm as a capacitor plate; and the piezoelectric microphone, which uses a crystal of piezoelectric material. Microphones typically need to be connected to a preamplifier before the signal can be recorded or reproduced.
Preamplifier
Direct injection
- https://en.wikipedia.org/wiki/DI_unit - an electronic device typically used in recording studios and in sound reinforcement systems to connect a high-output impedance, line level, unbalanced output signal to a low-impedance, microphone level, balanced input, usually via an XLR connector and XLR cable. DIs are frequently used to connect an electric guitar or electric bass to a mixing console's microphone input jack. The DI performs level matching, balancing, and either active buffering or passive impedance matching/impedance bridging to minimize unwanted noise, distortion, and ground loops. DI units are typically metal boxes with input and output jacks and, for more expensive units, “ground lift” and attenuator switches. DI units are also referred to as a DI box, direct box, or simply DI, with each letter pronounced, as in "Dee Eye." The term is variously claimed to stand for direct input, direct injection, direct induction or direct interface.
Mixer
- RD/MPCTools - Extension Toolkit for Martin M-Series Software
- https://github.com/SpotlightKid/xair-remote - Tools for querying and controlling Behringer X-AIR and MIDAS M-AIR audio mixers
Amplifier
For an amp to sound twice as loud as a 10 watt RMS amp you need a 100 watt RMS amp, and for an amp twice as loud as that you need a 1,000 watt RMS amp: each perceived doubling of loudness takes roughly ten times the power.
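This rule of thumb is the decibel scale in disguise: a tenfold increase in power is +10 dB, and +10 dB is commonly perceived as roughly a doubling of loudness (a psychoacoustic approximation, not an exact law). A quick sanity check in Python:

```python
import math

def power_gain_db(p_watts, ref_watts):
    """Power ratio expressed in decibels: 10 * log10(P / Pref)."""
    return 10 * math.log10(p_watts / ref_watts)

# 10 W -> 100 W -> 1000 W: each step is +10 dB,
# i.e. one perceived doubling of loudness per step.
print(power_gain_db(100, 10))    # 10.0
print(power_gain_db(1000, 10))   # 20.0
```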
- Class A - 100% of the input signal is used (conduction angle Θ = 360°). The active element remains conducting all of the time.
- Class B - 50% of the input signal is used (Θ = 180°); the active element carries current half of each cycle, and is turned off for the other half.
- Class AB - intermediate between class A and class B; each of the two active elements conducts more than half of the time.
- Class C - Less than 50% of the input signal is used (conduction angle Θ < 180°).
- Class D - uses some form of pulse-width modulation to control the output devices; the conduction angle of each device is no longer related directly to the input signal but instead varies in pulse width. These are sometimes called "digital" amplifiers because the output device is switched fully on or off, and not carrying current proportional to the signal amplitude.
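As a rough illustration of the class D idea (illustrative only, not how a real modulator is built), the sketch below compares an input signal against a triangle carrier so the output is always fully on or fully off, with the duty cycle tracking the signal amplitude:

```python
import math

def pwm_encode(samples, carrier_steps=32):
    """Naive class-D style PWM: compare the input signal (-1..1) against a
    triangle carrier; the output is only ever fully on (1) or off (-1)."""
    out = []
    for i, s in enumerate(samples):
        phase = (i % carrier_steps) / carrier_steps   # 0..1 within a carrier period
        carrier = 4 * abs(phase - 0.5) - 1            # triangle wave, -1..1
        out.append(1 if s > carrier else -1)
    return out

# A slow sine: the fraction of +1 outputs per carrier period
# tracks the instantaneous signal amplitude.
sine = [math.sin(2 * math.pi * i / 1024) for i in range(1024)]
pwm = pwm_encode(sine)
print(sorted(set(pwm)))  # [-1, 1]: the output device is always fully on or off
```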
- https://en.wikipedia.org/wiki/Amplifier_figures_of_merit - numerical measures that characterize its properties and performance. Figures of merit can be given as a list of specifications that include properties such as gain, bandwidth, noise and linearity, among others listed in this article. Figures of merit are important for determining the suitability of a particular amplifier for an intended use.
- YouTube: Is equipment burn in real? - "all capacitors have to form, you put voltage on them and little microscopic holes are filled and it changes the equivalent series resistance and a number of other characteristics"
- https://joebennett.net/2014/10/13/guitarists-stop-hurting-the-audience-at-small-gigs/ - inc. cab placement
- IEEE Spectrum: NwAvGuy: The Audio Genius Who Vanished - [7] [8]
Crossover
Speaker
See Speaker
Headphones
Calibration
Synthesizer
See Synthesis, #Audio programming
Vocoder
- Elektor 10 Channel Vocoder - "I'm a synth & electronics passionate fan and live near Antwerp, Belgium. In the mid 1990's, really young and inexperienced, I decided to build the Elektor 10 channel vocoder as described in the Dutch Elektor magazine from the early 1980's."
Drum machine
- LXR Drum Synthesizer - The LXR is a full fledged digital drum machine with integrated sequencer. Its sound engine provides 6 different instruments, each with over 30 parameters to tweak. It can produce a wide variety of sounds, ranging from classic analogue emulations to crunchy digital mayhem.
- eXaDrums - Electronic drums for Linux. The goal of the project is to use a Raspberry Pi to make a drum module. As far as the software goes, it is written in C++ and uses Gtkmm to display a nice graphical user interface (GUI) on a Raspberry Pi official 7" touchscreen. The hardware consists of some accelerometers or piezos connected to an analog to digital converter (ADC).
Sampler
- https://github.com/ryanjamesmcgill/Audio-Sampler-Selector - An embedded Linux system that plays back audio samples for musical performance. The development board used was a Texas Instruments (TI) BeagleBone Black based on an AM335x 1GHz ARM® Cortex-A8 processor. For high-quality audio, a TI PCM5102A DAC was used. The program leveraged the C++ library JUCE and the Linux sound server JACK.
- YouTube: SelectorDemo
Sound module
- https://en.wikipedia.org/wiki/Sound_module - an electronic musical instrument without a human-playable interface such as a piano-style musical keyboard. Sound modules have to be operated using an externally connected device, which is often a MIDI controller, of which the most common type is the musical keyboard (although wind controllers, guitar controllers and electronic drum pads are also used). Controllers are devices that provide the human-playable interface and which may or may not produce sounds of their own. Another common way of controlling a sound module is through a sequencer, which is computer hardware or software designed to record and play back control information for sound-generating hardware (e.g., a DJ may program a bassline and use the sound module to produce the sound). Connections between sound modules, controllers, and sequencers are generally made with MIDI (Musical Instrument Digital Interface), which is a standardized protocol designed for this purpose, which includes special ports (jacks) and cables.
DAC / ADC
- https://github.com/hideakitai/MCP4728 - Arduino library for MCP4728 quad channel, 12-bit voltage output Digital-to-Analog Convertor with non-volatile memory and I2C compatible Serial Interface
- https://github.com/onkelDead/tascam.lv2 - LV2 plugin to control the Tascam US-16x08 interface via a custom ALSA driver
- https://github.com/onkelDead/tascam-gtk - GTK+ based application to control Tascam US-16x08 DSP mixer
Sound chip/card
- https://github.com/Skidlz/YM3427 - Info/Code for Yamaha's YM3427 IC
- https://github.com/rhargreaves/mega-drive-midi-interface - Control the Yamaha YM2612 and SN76489 chips of the SEGA Mega Drive via MIDI
- https://en.wikipedia.org/wiki/AC'97 - Audio Codec '97; also MC'97 for Modem Codec '97, is an audio codec standard developed by Intel Architecture Labs in 1997. The standard was used in motherboards, modems, and sound cards. Audio components integrated into chipsets consist of two component classes: an AC'97 digital controller (DC97), which is built into the southbridge of the chipset, and AC'97 audio and modem codecs, which are the analog components of the architecture. AC'97 defines a high-quality, 16- or 20-bit audio architecture with surround sound support for the PC. AC'97 supports a 96 kHz sampling rate at 20-bit stereo resolution and a 48 kHz sampling rate at 20-bit stereo resolution for multichannel recording and playback. AC97 defines a maximum of 6 channels of analog audio output.
- https://en.wikipedia.org/wiki/Intel_High_Definition_Audio - or HD Audio or HDA, is a specification for the audio sub-system of personal computers. It was released by Intel in 2004 as successor to their AC'97 PC audio standard. During development it had the codename "Azalia".
ISA
- OS/2 Museum: A Sound Card Before Its Time - [9]
- https://github.com/schlae/snark-barker - a 100% compatible clone of the famed SB 1.0 "Killer Card" sound card from 1989. It implements all the features, including the digital sound playback and recording, Ad Lib compatible synthesis, the joystick/MIDI port, and the CMS chips (which are actually Philips SAA1099 synthesizer devices). [10]
PCI
- envy24control - alsa-utils
- https://sourceforge.net/projects/kenvy24 - VIA Envy24 based sound cards control utility, for the KDE environment
Wavetable
- https://en.wikipedia.org/wiki/Wavetable_synthesis#Confusion_with_sample-based_synthesis_(S&S)_and_Digital_Wave_Synthesis - In 1992, with the introduction of the Creative Labs Sound Blaster 16 the term "wavetable" started to be (incorrectly) applied as a marketing term to their sound card. However, these sound cards did not employ any form of wavetable synthesis, but rather PCM samples and FM synthesis.
- https://en.wikipedia.org/wiki/Creative_Wave_Blaster - was an add-on MIDI-synthesizer for Creative Sound Blaster 16 and Sound Blaster AWE32 family of PC soundcards. It was a sample-based synthesis General MIDI compliant synthesizer. For General MIDI scores, the Wave Blaster's wavetable-engine produced more realistic instrumental music than the SB16's onboard Yamaha-OPL3.
- YouTube: MIDI and Wavetable - PhilsComputerLab playlist
Music workstation
Digital Audio Workstation
FireWire
- https://en.wikipedia.org/wiki/IEEE_1394 - an interface standard for a serial bus for high-speed communications and isochronous real-time data transfer. It was developed in the late 1980s and early 1990s by Apple, which called it FireWire. The 1394 interface is also known by the brands i.LINK (Sony), and Lynx (Texas Instruments). The copper cable it uses in its most common implementation can be up to 4.5 metres (15 ft) long. Power is also carried over this cable allowing devices with moderate power requirements to operate without a separate power supply. FireWire is also available in Cat 5 and optical fiber versions. The 1394 interface is comparable to USB, though USB requires a master controller and has greater market share.
- FFADO project aims to provide a generic, open-source solution for the support of FireWire based audio devices for the Linux platform. It is the successor of the FreeBoB project.
AES3
- https://en.wikipedia.org/wiki/AES3 - (also known as AES/EBU) is a standard for the exchange of digital audio signals between professional audio devices. An AES3 signal can carry two channels of PCM audio over several transmission media including balanced lines, unbalanced lines, and optical fiber. AES3 was jointly developed by the Audio Engineering Society (AES) and the European Broadcasting Union (EBU). The standard was first published in 1985 and was revised in 1992 and 2003. AES3 has been incorporated into the International Electrotechnical Commission's standard IEC 60958, and is available in a consumer-grade variant known as S/PDIF.
ADAT Lightpipe
- https://en.wikipedia.org/wiki/ADAT_Lightpipe - officially the ADAT Optical Interface, is a standard for the transfer of digital audio between equipment. It was originally developed by Alesis but has since become widely accepted,[1] with many third party hardware manufacturers including Lightpipe interfaces on their equipment. The protocol has become so popular that the term "ADAT" is now often used to refer to the transfer standard rather than to the Alesis Digital Audio Tape itself.
MADI
- http://en.wikipedia.org/wiki/MADI - or Multichannel Audio Digital Interface or AES10, is an Audio Engineering Society (AES) standard electronic communications protocol that defines the data format and electrical characteristics of an interface that carries multiple channels of digital audio. The AES first documented the MADI standard in AES10-1991, and updated it in AES10-2003 and AES10-2008. The MADI standard includes a bit-level description and has features in common with the two-channel format of AES3. It supports serial digital transmission over coaxial cable or fibre-optic lines of 28, 56, or 64 channels; and sampling rates of up to 96 kHz with resolution of up to 24 bits per channel. Like AES3 or ADAT it is a Uni-directional interface (one sender and one receiver).
USB
- FOSDEM 2019: Linux and USB Audio Class 3 - The USB Audio Class 3.0 is a specification recently introduced by USB Implementers Forum. Ruslan is an author of UAC3 implementation in Linux, he will give an overview of improvements and changes in this USB Audio spec, and will share current Linux support status and challenges faced during ALSA drivers implementation.
- https://aur.archlinux.org/packages/snd-usb-audio-lowlatency-dkms/
- low latency tweaks to snd-usb-audio - LinuxMusicians
Pedal
Jesusonic
See also DAW#JSFX
- Jesusonic - A dynamic text mode live FX processor
- https://code.google.com/p/jsfxgen - a working proof-of-concept modular IDE for generating DSP code for the JesuSonic platform, written using the Flex and AIR SDKs. JesuSonic has a standalone version and is also part of Reaper as a plugin.
- https://github.com/asb2m10/jsusfx - Opensource Jesusonic FX implementation
- https://github.com/JoepVanlier/JSFX - A bundle of JSFX and scripts for reaper.
The OWL
- The OWL - an open source, programmable audio platform made for musicians, hackers and programmers alike. Users can program their own effects, or download ready-made patches from our growing online patch library. It is available both as a guitar fx pedal and a Eurorack synthesizer module. OWL stands for Open Ware Laboratory which refers to the fact that the entire project is open source in both hardware and software. Being open source is an important issue for us in terms of making all of the technology completely accessible to the end user.
- https://github.com/pingdynasty/OwlSim - Simulator for Open Ware Laboratory, a programmable audio effects pedal
to sort
Controller
See also MIDI#Controllers
- https://github.com/brendan-byrne/Modi - Modular Controller System
Game controller
- https://github.com/grejppi/wmcv - wmcv is a Python module that lets you use the Wiimote controller as a CV controller with JACK.
Eye tracking
- http://paulbatchelor.github.io/proj/eyejam - an open-source eye-controlled music composition environment. This was developed during my summer internship with the Enable Group at Microsoft Research. The source code can be found on GitHub under the official project name Microsoft Hands-Free Sound Jam. EyeJam is cross-platform, with support for Windows, Mac, and Linux. Eye-control is only available on Windows. On the other platforms, eye control is simulated using the mouse cursor.
Raspberry PI
- https://github.com/szymonkaliski/LoopPI2 - 6-track audio looper working on Raspberry PI 3, made with ChucK.
Instruments
- Chimaera - a poly-magneto-phonic-theremin (we had to come up with this new subcategory in the domain of electronic instruments, as the Chimaera did not fit anywhere else). Other terms that would describe it well could be: a general-purpose-continuous-music-controller, a multi-touch-less-ribbon-controller or a possible offspring of a mating experiment of a keyboard and violin. Think of it as an invisible string that is excitable by an arbitrary number of magnetic sources. Depending on where the magnetic sources are located on the string and depending on how strong (or how near) they are, the device outputs different event signals. These general-purpose event signals then can be used to e.g. drive a synthesizer, an effects processing unit or some other hardware.
- chair.audio - making digital instruments with analog interfaces. Our mission is to make sounds tangible. That's why we are developing instruments with haptic interfaces for electronic sound - both analog and software synthesis. Our Instruments have excitable surfaces that you can scratch, hit or bow. A very limited run of our developer edition will soon be available here.
- https://github.com/SammyIAm/Moppy2 - Musical flOPPY controller
- https://news.ycombinator.com/item?id=19595623 - roli seaboard
Wire
- YouTube: Wire Recording
Music roll
- https://en.wikipedia.org/wiki/Music_roll - a storage medium used to operate a mechanical musical instrument. They are used for the player piano, mechanical organ, electronic carillon and various types of orchestrion. The vast majority of music rolls are made of paper. Other materials that have been utilized include thin card (Imhof-system), thin sheet brass (Telektra-system), composite multi-layered electro-conductive aluminium and paper roll (Triste-system) and, in the modern era, thin plastic or PET film. The music data is stored by means of perforations. The mechanism of the instrument reads these as the roll unwinds, using a pneumatic, mechanical or electrical sensing device called a tracker bar, and the mechanism subsequently plays the instrument. After a roll is played, it is necessary for it to be rewound before it can be played again. This necessitates a break in a musical performance. To overcome this problem, some instruments were built with two player mechanisms allowing one roll to play while the other rewinds. A piano roll is a specific type of music roll, and is designed to operate an automatic piano like the player piano or the reproducing piano.
- https://en.wikipedia.org/wiki/Piano_roll - a music storage medium used to operate a player piano, piano player or reproducing piano. A piano roll is a continuous roll of paper with perforations (holes) punched into it. The perforations represent note control data. The roll moves over a reading system known as a 'tracker bar' and the playing cycle for each musical note is triggered when a perforation crosses the bar and is read. A rollography is a listing of piano rolls, especially made by a single performer, analogous to a discography.
Piano rolls were in continuous mass production from around 1896 to 2008, and are still available today, with QRS Music claiming to have 45,000 titles available with "new titles being added on a regular basis". Largely replacing piano rolls, which are no longer mass-produced today, MIDI files represent a modern way in which musical performance data can be stored. MIDI files accomplish digitally and electronically what piano rolls do mechanically. Software for editing a performance stored as MIDI data often has a feature to show the music in a piano roll representation.
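The correspondence between roll perforations and MIDI note data can be sketched in a few lines. The note events below are made up for illustration; each tuple plays the role of one perforation (pitch, start, length):

```python
# Hypothetical note events: (MIDI note number, start step, length in steps),
# standing in for a roll's perforations.
notes = [(60, 0, 4), (64, 2, 4), (67, 4, 4)]  # C4, E4, G4

def piano_roll(events, steps=8):
    """Render note events as a text grid, one row per pitch, read left to
    right like a roll unwinding past the tracker bar."""
    pitches = sorted({n for n, _, _ in events}, reverse=True)
    rows = []
    for p in pitches:
        row = ['.'] * steps
        for n, start, length in events:
            if n == p:
                for t in range(start, min(start + length, steps)):
                    row[t] = '#'
        rows.append(f"{p:3d} {''.join(row)}")
    return '\n'.join(rows)

print(piano_roll(notes))
```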
- Midimusic eplayWin32 - Estey and Wurlitzer e-roll player for Hauptwerk, Miditzer, GrandOrgue & eplayOrgan. This graphical player will play Estey e-rolls on any Hauptwerk or Miditzer organ and Wurlitzer Band Organ e-rolls on eplayOrgan (Windows, iMac and Linux) It will automatically operate the manuals, pedals, stops, couplers and swell. As supplied this version plays the Hauptwerk St. Annes Moseley and Paramount 310 plus the Miditzer 160, 216 or 260 organs. It also plays Wurlitzer 125, 150 and 165 organs. Other Hauptwerk or Miditzer organs can be played by adding their data via the menus. It also plays my new eplayOrgan and most other organs which can be played from midi keyboards, including GrandOrgue, Viscount and jOrgan.
Multitrack recording
- https://en.wikipedia.org/wiki/Multitrack_recording - also known as multitracking, double tracking, or tracking—is a method of sound recording developed in 1955 that allows for the separate recording of multiple sound sources or of sound sources recorded at different times to create a cohesive whole. Multitracking became possible in the mid-1950s when the idea of simultaneously recording different audio channels to separate discrete "tracks" on the same reel-to-reel tape was developed. A "track" was simply a different channel recorded to its own discrete area on the tape whereby their relative sequence of recorded events would be preserved, and playback would be simultaneous or synchronized.
Prior to the development of multitracking, the sound recording process required all of the singers, band instrumentalists, and/or orchestra accompanists to perform at the same time in the same space. Multitrack recording was a significant technical improvement as it allowed studio engineers to record all of the instruments and vocals for a piece of music separately. Multitracking allowed the engineer to adjust the levels and tone of each individual track, and if necessary, redo certain tracks or overdub parts of the track to correct errors or get a better "take." As well, different electronic effects such as reverb could be applied to specific tracks, such as the lead vocals, while not being applied to other tracks where this effect would not be desirable (e.g., on the electric bass). Multitrack recording was much more than a technical innovation; it also enabled record producers and artists to create new sounds that would be impossible to create outside of the studio, such as a lead singer adding many harmony vocals with their own voice to their own lead vocal part, an electric guitar player playing many harmony parts along with their own guitar solo, or even recording the drums and replaying the track backwards for an unusual effect.
In the 1980s and 1990s, computers provided means by which both sound recording and reproduction could be digitized, revolutionizing audio recording and distribution. In the 2000s, multitracking hardware and software for computers was of sufficient quality to be widely used for high-end audio recordings by both professional sound engineers and by bands recording without studios using widely available programs, which can be used on a high-end laptop computer. Though magnetic tape has not been replaced as a recording medium, the advantages of non-linear editing (NLE) and recording have resulted in digital systems largely superseding tape. Even in the 2010s, with digital multitracking being the dominant technology, the original word "track" is still used by audio engineers.
Wifi recording
- Sparrow - a basic but versatile product, allowing the recording, live broadcasting and other operations common to stage performances and TV/movie recordings.
MP3
- https://en.wikipedia.org/wiki/MP3_player - an electronic device that can play MP3 digital audio files. It is a type of digital audio player, or portable media player. Most players play more than the MP3 file format, such as Windows Media Audio (WMA), Advanced Audio Coding (AAC), Vorbis, FLAC, Speex and Ogg.
- https://en.wikipedia.org/wiki/Portable_media_player - or digital audio player (DAP) is a portable consumer electronics device capable of storing and playing digital media such as audio, images, and video files. The data is typically stored on a CD, DVD, BD, flash memory, microdrive, or hard drive. Most portable media players are equipped with a 3.5 mm headphone jack, which users can plug headphones into, or connect to a boombox or hifi system. In contrast, analogue portable audio players play music from non-digital media that use analogue signal storage, such as cassette tapes or vinyl records. Often mobile digital audio players are marketed and sold as "portable MP3 players", even if they also support other file formats and media types. Increasing sales of smartphones and tablet computers have led to a decline in sales of portable media players, leading to most devices being phased out, though flagship devices like the Apple iPod and Sony Walkman are still in production. Portable DVD/BD players are still manufactured by brands across the world.
Rockbox
- Rockbox is a free replacement firmware for digital music players. It runs on a wide range of players:
gtkpod
- gtkpod - a graphical user interface for the Apple iPod for Unix-like systems, written using the GTK+ toolkit.
PA system
- https://en.wikipedia.org/wiki/Backline_(stage) - used in popular music and sound reinforcement system contexts to refer to electronic audio amplification equipment and speaker enclosures that are placed behind the band or the rhythm section on stage, including amplifiers and speaker cabinets for guitars, bass guitars and keyboards. In the US and Canada, the term has expanded to include many of the musical instruments that the rhythm section musicians play, including pianos, Hammond organs, drum kits and various percussion instruments such as congas and bongos.
Sound system
- https://en.wikipedia.org/wiki/Sound_reinforcement_system - the combination of microphones, signal processors, amplifiers, and loudspeakers in enclosures all controlled by a mixing console that makes live or pre-recorded sounds louder and may also distribute those sounds to a larger or more distant audience. In many situations, a sound reinforcement system is also used to enhance or alter the sound of the sources on the stage, typically by using electronic effects, such as reverb, as opposed to simply amplifying the sources unaltered.
A sound reinforcement system for a rock concert in a stadium may be very complex, including hundreds of microphones, complex live sound mixing and signal processing systems, tens of thousands of watts of amplifier power, and multiple loudspeaker arrays, all overseen by a team of audio engineers and technicians. On the other hand, a sound reinforcement system can be as simple as a small public address (PA) system, consisting of, for example, a single microphone connected to a 100 watt amplified loudspeaker for a singer-guitarist playing in a small coffeehouse. In both cases, these systems reinforce sound to make it louder or distribute it to a wider audience.
Some audio engineers and others in the professional audio industry disagree over whether these audio systems should be called sound reinforcement (SR) systems or PA systems. Distinguishing between the two terms by technology and capability is common, while others distinguish by intended use (e.g., SR systems are for live event support and PA systems are for reproduction of speech and recorded music in buildings and institutions). In some regions or markets, the distinction between the two terms is important, though the terms are considered interchangeable in many professional circles.
- soundsystem.world - worldwide map of original soundsystems
- AV Tools - Android app
- https://en.wikipedia.org/wiki/Sound_system_(Jamaican) - group of disc jockeys, engineers and MCs playing ska, rocksteady or reggae music. The sound system is an important part of Jamaican culture and history.
- https://en.wikipedia.org/wiki/Sound_system_(DJ) - a group of DJs and audio engineers contributing and working together as one, playing and producing music over a large PA system or sound reinforcement system, typically for a dance event or party.
Linux
"there are many good reasons to consider Linux (for pro audio): low- or no-cost software; a high-performance sound system; great audio routing; powerful sound synthesis; high-quality score notation; it's extremely customisable; more of your CPU power may be used for audio processing; it avoids many patent/license restriction pitfalls; it avoids costly tie-ins to specific product ranges; software may be (legally) modified to suit your individual needs; modular software allows you to configure your software studio the way that you want; and it's driven by passion before profit." [12]
News and communities
- Linuxaudio.org is a not-for-profit consortium of libre software projects and artists, companies, institutions, organizations, and hardware vendors using Linux kernel-based systems and allied libre software for audio-related work, with an emphasis on professional tools for the music, production, recording, and broadcast industries.
- LinuxMusicians forum - mission: to facilitate discussion, learning, and discovery of music making on the Linux platform.
- Libre Music Production is a community-driven online resource, focused on promoting musical creation and composition using free and open source (FLOSS) software. By providing hands-on material submitted by the community, such as guides, tutorials, articles and news updates, we want to show not only that there is great FLOSS audio software out there, but also how to practically use that software to make music.
- http://opensourcemusician.com/
- https://archive.org/details/osmpodcast
- #opensourcemusician
- https://www.reddit.com/r/linuxaudio - A subreddit dedicated towards music and audio related topics on the Linux platform.
Mailing lists
- https://lists.linuxaudio.org/listinfo - public mailing lists on lists.linuxaudio.org
- Jack-devel
- Nabble: Jack-devel
IRC
Freenode
- #lau - Linux Audio Users, slower traffic, related to the mailing list
- #lad - Linux Audio Developers programming chat, related to the mailing list
- #linuxmusicians - slower traffic, related to the forum
- #opensourcemusicians - related to the podcast, FLOSS on all platforms, chat can get quite general
- #linuxmao - Francophone, related to the site
- #audio4linux.de - Germanophone, related to the site
- #archlinux-proaudio - Arch Linux proaudio project and general discussion
- #kxstudio - Debian audio repo/distro
- #studioware - Slackware multimedia
- #proaudio-overlay - Gentoo audio
- #lv2 - open audio plugin format
- #jack - audio system
- #alsa - audio system
- #pulseaudio - audio system
- #audacity - sample editor
- #ardour - DAW
- #ingen - audio host
- #non - "DAW"
- #lmms - DAW
- #rosegarden - sequencer
- #zrythm - DAW
- #surgesynth - synth
- ##zynsubaddfx - synth
- #dataflow - Pure Data
- #lilypond - notation
- #laborejo - notation, sequencer, SF2
- #openal - 3d audio
- #vorbis - codec
- ##dsp - digital signal processing
- ##music-electronics
- #musicbrainz - music tagging
- #metabrainz - MusicBrainz dev
- #edmproduction - related to the subreddit
- #RedditAudio - mostly audio consumer electronics
- ##xenharmonic - microtonal
- ##audio
- #audiovisual
- ##radio
- ##hamradio
- ##rtlsdr - software defined radio
- ##electronics
- ##music - general music listening chat
- #music - general music listening chat
- #EDM - slow, general electronic dance music chat channel
- #Juce - framework
OFTC
- #debian-multimedia
Cons
- Sonoj Convention Archive - media and results from past events
Distros
See Distros#Audio/visual, Playback#Operating System
Software lists
- Planet Linux Audio - aggregated software update news
- An in-depth look at programming sound using Linux — jannewmarch
- http://www.hitsquad.com/smm/linux/ - Linux software list
- KVR: Linux free software listing - by date updated
Setup
to better
- https://github.com/redtide/archlinux-realtime-generic-setup - Common / generic configuration for an Archlinux RT enabled kernel
- https://github.com/usrmusicman/ArchStudioUtils - Useful Scripts For Archlinux Audio
Real time
See also *nix#Real-Time
Audio systems
- http://en.wikipedia.org/wiki/Sound_server - A sound server is software that manages the use of and access to audio devices (usually a sound card). It commonly runs as a background process.
In a Unix-like operating system, a sound server mixes different data streams and sends out a single unified audio to an output device. The mixing is usually done by software, or by hardware if there is a supported sound card.
The "sound stack" can be visualized as follows, with programs in the upper layers calling elements in the lower layers:
- Applications (e.g. mp3 player, web video)
- Sound server (e.g. aRts, ESD, JACK, PulseAudio)
- Sound subsystem (described as kernel modules or drivers; e.g. OSS, ALSA)
- Operating system kernel (e.g. Linux, Unix)
- Linux Audio Survival kit - A brief guide to successfully setting up audio on Linux.
- https://github.com/raboof/realtimeconfigquickscan - scripts to inspect a linux installation and make suggestions for improving realtime/audio performance.
- https://github.com/linuxaudio/realtime-suggestions - A bash script, that suggests optimization options (while not stating the obvious) for Linux kernel realtime use. As these are just suggestions, they should be considered with a grain of salt: Configurations on Linux distributions can differ quite a lot. That being said: This script will not think for you!
- https://github.com/dynobot/Linux-Audio-Adjustments - Audio Tweaks for Debian Based RPi
- A Guide Through The Linux Sound API Jungle - a guide to the Linux audio jungle.
"If we were drawing the [internet] OSI model used to describe the networking framework that connects your machine to every other machine on the network, we'd find clear strata, each with its own domain of processes and functionality. There's very little overlap in layers, and you certainly don't find end-user processes in layer seven messing with the electrical impulses of the raw bitstreams in layer one.
"Yet this is exactly what can happen with the Linux audio framework. There isn't even a clearly defined bottom level, with several audio technologies messing around with the kernel and your hardware independently. Linux's audio architecture is more like the layers of the Earth's crust than the network model, with lower levels occasionally erupting on to the surface, causing confusion and distress, and upper layers moving to displace the underlying technology that was originally hidden."
"ALSA itself has a kernel level stack and a higher API for programmers to use, mixing drivers and hardware properties with the ability to play back surround sound or an MP3 codec. Most distributions stick PulseAudio and GStreamer on top[,] ... The deeper the layer, the closer to the hardware it is." [13]
- Arch Forum: OSS4 vs ALSA vs PulseAudio vs Jack
- https://github.com/hodefoting/atty - audio interface and driver for terminals
OSS
- Open Sound System is an audio subsystem that provides a cross platform API and device drivers for most consumer and professional audio devices for UNIX® and POSIX based operating systems, including Linux. Owing to its open architecture, applications developed on one supporting operating system platform can be easily recompiled on any other platform.
Old.
The API is designed to use the traditional Unix framework of open(), read(), write(), and ioctl(), via special devices. For instance, the default device for sound input and output is /dev/dsp. Examples using the shell:
cat /dev/random > /dev/dsp # plays white noise through the speaker
cat /dev/dsp > a.a # reads data from the microphone and copies it to file a.a
- https://github.com/libfuse/osspd - OSS Proxy uses CUSE (extension of FUSE allowing character devices to be implemented in userspace) to implement OSS interface - /dev/dsp, /dev/adsp and /dev/mixer. From the POV of the applications, these devices are proper character devices and behave exactly the same way so it can be made quite versatile.
ALSA
- Advanced Linux Sound Architecture (ALSA) provides audio and MIDI functionality to the Linux operating system.
"ALSA is responsible for translating your audio hardware's capabilities into a software API that the rest of your system uses to manipulate sound. It was designed to tackle many of the shortcomings of OSS (and most other sound drivers at the time), the most notable of which was that only one application could access the hardware at a time. This is why a software component in ALSA needs to manage audio requests and understand your hardware's capabilities.
"ALSA was designed to replace OSS. However, OSS isn't really dead, thanks to a compatibility layer in ALSA designed to enable older, OSS-only applications to run. It's easiest to think of ALSA as the device driver layer of the Linux sound system. Your audio hardware needs a corresponding kernel module, prefixed with snd_, and this needs to be loaded and running for anything to happen. This is why you need an ALSA kernel driver for any sound to be heard on your system, and why your laptop was mute for so long before someone thought of creating a driver for it. Fortunately, most distros will configure your devices and modules automatically." [14]
- http://lxr.free-electrons.com/source/Documentation/sound/alsa/ALSA-Configuration.txt
- http://en.wikipedia.org/wiki/Advanced_Linux_Sound_Architecture
- The Linux Kernel documentation: Linux Sound Subsystem Documentation
- The Linux Kernel documentation: Advanced Linux Sound Architecture - Driver Configuration guide
- The ALSA Driver API
re official wiki
- http://i.imgur.com/f66sf.png - ALSA job
Information
less /proc/asound/card0/pcm0p/sub0/hw_params # current hardware parameters
cat /proc/asound/cards # list audio hardware
cat /proc/asound/card0 # list card info
cat /proc/asound/devices # list audio devices
aplay -L # list all PCMs defined
modinfo soundcore # kernel sound module info
lsmod | grep snd # loaded sound modules
lspci -v | grep -i audio # show some kernel info
- alsacap - ALSA device capability lister. Scans soundcards known to ALSA for devices and subdevices, and displays the ranges of configuration parameters for the given ALSA device.
- QasConfig is a graphical browser for the configuration tree and can help to analyze and debug an ALSA setup.
- [alsa-query.c ](https://gist.github.com/SpotlightKid/50986b65db32f6e93e86530bd2b90559) - Print hardware capabilities of ALSA device
Configuration
ALSA settings are stored in the file asound.state; its location varies by distribution.
- http://www.alsa-project.org/main/index.php/Asoundrc - Neither the user-side .asoundrc nor the asound.conf configuration files are required for ALSA to work properly. Most applications will work without them. These files are used to allow extra functionality, such as routing and sample-rate conversion, through the alsa-lib layer.
The keyword default is defined in the ALSA lib API and will always access hw:0,0 — the default device on the default soundcard. Specifying the !default name supersedes the one defined in the ALSA lib API.
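To illustrate, a minimal ~/.asoundrc overriding !default might look like this; the card number is only an example (check /proc/asound/cards for yours):

```
# hypothetical ~/.asoundrc: make card 1 the default playback and control device
pcm.!default {
    type hw
    card 1
}
ctl.!default {
    type hw
    card 1
}
```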
pcm.NAME {
    type hw             # Kernel PCM
    card INT/STR        # Card name or number
    [device] INT        # Device number (default 0)
    [subdevice] INT     # Subdevice number, -1 first available (default -1)
    mmap_emulation BOOL # enable mmap emulation for ro/wo devices
}
- ALSA project - the C library reference: PCM (digital audio) interface - ALSA uses a ring buffer to store outgoing (playback) and incoming (capture, record) samples. Two pointers are maintained for precise communication between application and device: one points to the sample currently being processed by the hardware, the other to the last sample processed by the application. Modern audio chips allow programming of the transfer time periods: the stream of samples is divided into small chunks, and the device notifies the application when the transfer of a chunk is complete.
- PCM (digital audio) plugins - these extend the functionality and features of PCM devices. The plugins take care of various sample conversions, sample copying among channels, and so on.
Plugin: hw
This plugin communicates directly with the ALSA kernel driver. It is raw communication without any conversions. Emulation of mmap access can optionally be enabled, but expect worse latency in that case.
The nonblock option specifies whether the device is opened in a non-blocking manner. Note that the blocking behavior of read/write access is not changed by this option; it only influences blocking behavior when opening the device. If you would like to keep compatibility with older ALSA applications, turn this option off.
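Following the pcm.NAME syntax shown above, a hw-plugin definition can be sketched like this (card and device numbers are illustrative):

```
# hypothetical raw-access PCM using the hw plugin
pcm.raw0 {
    type hw
    card 0    # first soundcard
    device 0  # first PCM device on that card
}
```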
Plugin: file
This plugin stores the contents of a PCM stream to a file or pipes the stream to a command, and can optionally use an existing file as an input data source (i.e., a "virtual mic").
- http://alsa.opensrc.org/Dmix - Mixing enables multiple applications to output sound at the same time. Most discrete sound cards support hardware mixing, which is enabled by default if available. Integrated motherboard sound cards (such as Intel HD Audio), usually do not support hardware mixing. On such cards, software mixing is done by an ALSA plugin called dmix. This feature is enabled automatically if hardware mixing is unavailable.
- http://alsa.opensrc.org/Dsnoop - the equivalent of the dmix plugin, but for recording sound. The dsnoop plugin allows several applications to record from the same device simultaneously.
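dmix and dsnoop are often combined into one full-duplex device via the asym plugin; a hypothetical ~/.asoundrc entry:

```
# hypothetical full-duplex PCM: dmix for playback, dsnoop for capture
pcm.duplex {
    type asym
    playback.pcm "dmix"
    capture.pcm "dsnoop"
}
```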
- Alsa Opensrc Org - These instructions apply to digital connections using either electrical coax or optical connections.
- Linux ALSA sound notes
- https://dl.dropboxusercontent.com/u/18371907/asoundrc - mega config suggestions/fixes/information
- JACK and Headphones - a virtual device in .asoundrc file that maps two channels of audio to all four channels on the soundcard. With that in place, we need to tell JACK to connect to our new virtual device "hpfix".
loopback
- http://linux.die.net/man/1/alsaloop - allows creating a PCM loopback between a PCM capture device and a PCM playback device; supports multiple soundcards, adaptive clock synchronization, and adaptive rate resampling using the samplerate library (if available on the system). Mixer controls can also be redirected from one card to another (for example Master and PCM).
Tools
alsactl # advanced controls for ALSA soundcard driver
alsactl init # initialize basic configuration
alsactl store # store configuration
- stativ / asoundconf - asoundconf-gtk, the GTK+ applet to allow you to select your default sound card.
- http://alsa.opensrc.org/Alsa-tools
- https://launchpad.net/ubuntu/xenial/+package/alsa-tools-gui
- echomixer - control tool for Echoaudio soundcards
- envy24control - control tool for Envy24 (ice1712) based soundcards
- hdajackretask - retask jacks on HDA Intel hardware
- hdspconf - GUI program to control the Hammerfall HDSP Alsa Settings.
- hdspmixer - tool to control the advanced routing features of the RME Hammerfall DSP.
- rmedigicontrol - control tool for RME Digi32 and RME Digi96 soundcards
speaker-test -c 2 # test two channels using 16 octaves of pink noise; part of alsa-utils
- https://github.com/nedko/u7 - a program for controlling ALSA volume through Linux input device.
PulseAudio
- PulseAudio is a sound system for POSIX OSes, meaning that it is a proxy for your sound applications. It allows you to do advanced operations on your sound data as it passes between your application and your hardware. Things like transferring the audio to a different machine, changing the sample format or channel count and mixing several sounds into one are easily achieved using a sound server.
- How PulseAudio works - graphic
Configuration
user specific pulseaudio config;
~/.pulse/default.pa # to load modules and define defaults
~/.pulse/client.conf # to configure a client for the sound server
~/.pulse/daemon.conf # to define sample rates and buffers
To avoid .pulse-cookie in home folder, set the following in /etc/pulse/client.conf [17]
cookie-file = /tmp/pulse-cookie
By default, PulseAudio changes the master and application volumes at the same time. To disable this, edit /etc/pulse/daemon.conf or ~/.config/pulse/daemon.conf with:
flat-volumes = no
man pulse-cli-syntax # pulseaudio commandline help
pactl # control a running PulseAudio sound server
pacmd # reconfigure a PulseAudio sound server during runtime
pacmd list-cards
pacmd dump
- http://0pointer.de/lennart/projects/paprefs - PulseAudio Preferences (paprefs) is a simple GTK based configuration dialog for the PulseAudio sound server.
- https://pypi.org/project/pulsectl - high-level interface and ctypes-based bindings for PulseAudio (libpulse), mostly focused on mixer-like controls and introspection-related operations (as opposed to e.g. submitting sound samples to play, player-like client).
- https://github.com/flexibeast/pulseaudio-control - pulseaudio-control controls PulseAudio volumes from Emacs, via pactl.
- https://github.com/umlaeute/pa-systray - tiny systray icon to turn on/off pulseaudio
- https://github.com/christophgysin/pasystray - allows setting the default PulseAudio source/sink and moving streams on the fly between sources/sinks without restarting the client applications.
- https://github.com/Junker/mictray - a lightweight application which lets you control the microphone state and volume from the system tray
- https://github.com/miek/midi2pamixer - Control PulseAudio mixer with MIDI device
- https://github.com/rhaas80/pa_volume - a simple tool to set the remembered volume level of pulseaudio clients. It requires module-stream-restore to be loaded (which is usually the case) to function. When called without arguments it shows all the known clients (running and non-running) and their remembered volume level. To set the volume level, pass it the name of the client followed by the volume in percent.
Mixer
GUI
- https://github.com/lxde/pavucontrol-qt - A Pulseaudio mixer in Qt (port of pavucontrol)
- https://github.com/rafalcieslak/pavucontrol - fork with a compact UI
CLI
- https://github.com/graysky2/pulseaudio-ctl - Control pulseaudio volume from the shell or mapped to keyboard shortcuts. No need for alsa-utils. [18]
- https://github.com/cdemoulins/pamixer - like amixer but for pulseaudio. It can control the volume levels of the sinks.
- https://github.com/falconindy/ponymix - CLI volume control for PulseAudio
TUI
- https://github.com/GeorgeFilipkin/pulsemixer - cli and curses mixer for pulseaudio. horizontal level bars, mousewheel selects channel
h/j/k/l, arrows - navigation
H/L, Shift+Left/Shift+Right - change volume by 10
1/2/3/4/5/6/7/8/9/0 - set volume to 10%-100%
m - mute/unmute
Space - lock/unlock channels together
Enter - context menu
F1/F2/F3 - change modes
Tab - go to next mode
Mouse left click - select device or mode
Mouse wheel - volume change
q/Esc/^C - quit
- https://github.com/fulhax/ncpamixer - horizontal ncurses PulseAudio Mixer inspired by pavucontrol.
- https://github.com/patroclos/PAmix - horizontal ncurses/curses pulseaudio mixer in C++, similar to pavucontrol
- https://github.com/mk-fg/pulseaudio-mixer-cli - Interactive python/ncurses UI to control volume of pulse streams, kinda like alsamixer, focused not on sink volume levels (which can actually be controlled via alsamixer, with alsa-pulse plugin), but rather on volume of individual streams, so you can tune down the music to hear the stuff from games, mumble, skype or browser.
- https://github.com/KenjiTakahashi/pacmixer - an alsamixer-alike for PulseAudio. Breaks PA connections for PNmixer and gives an error. mousewheel = WTF!
- https://github.com/TheDarrenJoseph/purses - PulseAudio ncurses Audio Visualiser written in C
Web
- https://github.com/Siot/PaWebControl - PulseAudio Web Volume Control. Requirements: PHP web server, PulseAudio pactl command
Processing
- https://github.com/wwmm/pulseeffects - Limiter, compressor, reverberation, equalizer and auto volume effects for Pulseaudio applications
- http://www.freedesktop.org/wiki/Software/PulseAudio/Documentation/User/Equalizer/
- https://sites.google.com/site/nevion/projects/pulseaudio-equalizer
- https://github.com/kernelOfTruth/pulseaudio-equalizer - A LADSPA based multiband equalizer approach for getting better sound out of pulseaudio. This equalizer clearly is more potent than the (deprecated ?), optional one from Pulseaudio.
- https://gitorious.org/pulseaudio-equalizer/pulseaudio-equalizer
- https://sourceforge.net/projects/qpaeq - qpaeq is an equalizer interface for pulseaudio
- https://github.com/gotbletu/shownotes/blob/master/pulseaudio-dynamic-range-compression.md - LADSPA based FX
Other
pasuspender -- audacity # temporarily suspend pulseaudio and launch audacity, for when PA gets in the way and config yak-shaving isn't an option
- https://linux.die.net/man/1/padsp - PulseAudio OSS wrapper. Starts the specified program and redirects its access to OSS-compatible audio devices (/dev/dsp and auxiliary devices) to a PulseAudio sound server. padsp uses the $LD_PRELOAD environment variable, which is interpreted by ld.so(8), and thus does not work for SUID binaries or statically built executables. Starting an application with $LD_PRELOAD set to libpulsedsp.so is equivalent to using padsp.
cat /dev/urandom | padsp tee /dev/audio > /dev/null
- https://github.com/i-rinat/apulse - pulseaudio emulation for ALSA
- https://github.com/masmu/pulseaudio-dlna - A lightweight streaming server which brings DLNA / UPNP and Chromecast support to PulseAudio and Linux
- http://hfujita.github.io/pulseaudio-raop2 - Experimental RAOP2 (Apple AirPlay2) support for PulseAudio
- https://github.com/naftulikay/autopulse - Script for dynamically changing your default PulseAudio sink on hotplug events for USB peripherals, etc.
JACK
See also #JACK configuration
- JACK - a system for handling real-time, low-latency audio (and MIDI). It runs on GNU/Linux, Solaris, FreeBSD, OS X and Windows (and can be ported to other POSIX-conformant platforms). It can connect a number of different applications to an audio device, as well as allowing them to share audio between themselves. Its clients can run in their own processes (i.e. as normal applications), or they can run within the JACK server (i.e. as a "plugin"). JACK also has support for distributing audio processing across a network, both fast & reliable LANs as well as slower, less reliable WANs.
- PulseAudio and Jack - differences
- Sound Engineers Guide to Jackd - This page attempts to collect in one place various issues surrounding the design and abilities of Paul Davis's wonderful Jackd from the point of view of a technical user. It is not an introduction or a usage howto.
- PDF: Timing Measurements In JACK 2 - S. Letz, D. Fober, Y. Orlarey, Grame - Centre national de création musicale
- PDF: Surviving on Planet CCRMA, two Years Later and Still alive. - Fernando Lopez-Lezcano, 2005
- https://github.com/Barrett17/libjackcompat - an experimental media_kit compatibility layer and a rewrite of the jack API. This is done by emulating the jack client using a Haiku media_node as backend.
Clients
- https://github.com/jackaudio/example-clients
- https://github.com/jackaudio/example-clients/blob/master/inprocess.c - internal client, runs as part of jackd
- https://github.com/resinbeard/jacksandbox - a simple JACK client for learning and testing audio code.
Libraries / headers
- https://github.com/jackaudio/headers - JACK API headers
- https://github.com/ventosus/jack_osc - a workaround for Jack to support routing sample-accurate OSC packets via Jack MIDI ports as discussed at LAC2014.
- https://github.com/x42/weakjack - small library abstracts the JACK Application Binary Interface for weak/runtime libjack linking.
- https://github.com/stetre/luajack - a Lua binding library for the JACK Audio Connection Kit. It runs on GNU/Linux and requires Lua (>=5.3) and JACK (API >= v0.124.1).
- https://gitlab.com/gabrbedd/jacksquat - A JACK mock library for use in unit testing
Utils
- https://github.com/7890/jack_tools - alternative jack_* helpers
- https://github.com/Gimmeapill/xruncounter - Small linux tool written in C by Hermann Meyer (aka @brummer10) to measure jack xruns and evaluate the overall performance of a system for realtime audio.
- https://github.com/SpotlightKid/jack-audio-tools - A collection of utilities and tools for the JACK audio ecosystem
- https://github.com/be1/jackie - a small graphical jackd launcher
- https://github.com/falkTX/wineasio - provides an ASIO to JACK driver for WINE. ASIO is the most common Windows low-latency driver, so is commonly used in audio workstation programs.
Metadata
- https://github.com/drobilla/jackey - Jack Metadata Property Definitions
- jack-property-listener.py - Listen to and print JACK client/port meta-data changes.
GStreamer
- GStreamer is a library for constructing graphs of media-handling components. The applications it supports range from simple Ogg/Vorbis playback, audio/video streaming to complex audio (mixing) and video (non-linear editing) processing.
sndio
- sndio is a small audio and MIDI framework, part of the OpenBSD project. It provides a lightweight audio & MIDI server and a fully documented user-space API to access either the server or the hardware directly in a uniform way. sndio is designed to work for desktop applications, but pays special attention to the synchronization mechanisms and reliability required by music applications. Reliability through simplicity is part of the project goals.
CRAS
- CRAS: ChromeOS Audio Server - allows for sound to be routed dynamically to newly attached audio-capable monitors (DisplayPort and HDMI), USB webcam, USB speakers, bluetooth headsets, etc., and in a way that requires as little CPU as possible and that adds little or no latency.
dspd
- https://github.com/dspdaemon/dspd - A Linux sound daemon with minimal dependencies that implements several existing APIs and protocols
aRts
See #aRts_2
NAS
- The Network Audio System (NAS) - a network transparent, client/server audio transport system. It can be described as the audio equivalent of an X server. Enjoy!
Enlightened Sound Daemon
Pipewire
- PipeWire - a project that aims to greatly improve handling of audio and video under Linux. It aims to support the use cases currently handled by both PulseAudio and Jack, and at the same time provide the same level of powerful handling of video input and output. It also introduces a security model that makes interacting with audio and video devices from containerized applications easy, with supporting Flatpak applications being the primary goal. Alongside Wayland and Flatpak, PipeWire is expected to provide a core building block for the future of Linux application development.
- https://github.com/PipeWire/pipewire-alsa - ALSA plugin for pipewire
- https://github.com/PipeWire/pipewire-jack - JACK client library
- https://github.com/PipeWire/pipewire-pulseaudio - PulseAudio replacement library on top of PipeWire
Integration
- JACK FAQ: How do I use PulseAudio and JACK?
- https://github.com/jackaudio/jackaudio.github.com/wiki/WalkThrough_User_PulseOnJack
- JACK FAQ: How do I route audio from Flash to JACK?
- JACK FAQ: Routing GStreamer audio via JACK
- Setting up Jack Audio for GStreamer, Flash, and VLC - First of all, I need to say that I won't be mentioning Pulseaudio, so if that is what you're here for then you are in the wrong place, because I don't use Pulseaudio at all. Pulseaudio can be run on top of Jack, but doing so will increase CPU load (a very tiny amount on modern systems).
- https://github.com/brummer10/pajackconnect - Make JACK work with PulseAudio. This script is intended to be invoked via QjackCtl to start up and shut down JACK on a system running PulseAudio. It handles the necessary setup to make the two work together, so PulseAudio clients get transparently routed through JACK while the latter is running, or, if PulseAudio is suspended by pasuspender, it does nothing. [19]
Windows
- https://wiki.jriver.com/index.php/WASAPI - is Microsoft's most modern method for talking with audio devices. It is available in Windows Vista, Windows 7, and later versions of Windows. It allows delivering an unmodified bitstream to a sound device, and provides benefits similar to those provided by ASIO drivers. One of the other main benefits of WASAPI is that it provides applications with exclusive access to audio devices, bypassing the system mixer, default settings, and typically any effects provided by the audio driver. WASAPI is the recommended Audio Output Mode for Windows unless your audio device has a well-behaved ASIO driver, and it effectively replaces all legacy output modes including Kernel Streaming and Direct Sound.
- https://en.wikipedia.org/wiki/DirectSound - a deprecated software component of the Microsoft DirectX library for the Windows operating system. DirectSound provides a low-latency interface to sound card drivers written for Windows 95 through Windows XP and can handle the mixing and recording of multiple audio streams.
- https://en.wikipedia.org/wiki/DirectMusic - a deprecated component of the Microsoft DirectX API that allows music and sound effects to be composed and played and provides flexible interactive control over the way they are played. Architecturally, DirectMusic is a high-level set of objects, built on top of DirectSound, that allow the programmer to play sound and music without needing to get quite as low-level as DirectSound. DirectSound allows for the capture and playback of digital sound samples, whereas DirectMusic works with message-based musical data. Music can be synthesized either in hardware, in the Microsoft GS Wavetable SW Synth, or in a custom synthesizer.
- https://en.wikipedia.org/wiki/XAudio2 - a lower-level audio API for Microsoft Windows, Xbox 360 and Windows Phone 8, the successor to DirectSound on Windows and a supplement to the original XAudio on the Xbox 360. XAudio2 operates through the XAudio API on the Xbox 360, through DirectSound on Windows XP, and through the low-level audio mixer WASAPI on Windows Vista and higher.
- ASIO4ALL - Universal ASIO Driver For WDM Audio
JACK configuration
pasuspender -- jackd # temporarily suspend pulseaudio and start jack (needed for jack1 without PA patch)
- jackd(1) - the JACK audio server daemon, a low-latency audio server. Originally written for the GNU/Linux operating system, it also supports Mac OS X and various Unix platforms. JACK can connect a number of different client applications to an audio device and also to each other. Most clients are external, running in their own processes as normal applications. JACK also supports internal clients, which run within the jackd process using a loadable "plugin" interface.
jackd -R -P89 -s -dalsa -dhw:0 -r48000 -p256 -njack-server # start jackd: realtime with priority 89, ALSA backend on card hw:0, 48 kHz sample rate, 256 frames per period, server name jack-server
- List of JACK Frame & Period settings ideal for USB interface - LinuxMusicians - (Frames/Sample Rate) * Period = Theoretical (or Math-derived) Latency
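The formula can be sanity-checked with a quick calculation; the numbers below (256 frames, 48000 Hz, 2 periods) are only an example:

```shell
# theoretical latency = (frames / sample rate) * periods
awk 'BEGIN { printf "%.1f ms\n", 256 / 48000 * 2 * 1000 }'
# prints "10.7 ms"
```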
# jack2 package commands
jack_alias jack_bufsize jack_control jack_cpu jack_cpu_load jack_disconnect
jack_evmon jack_freewheel jack_iodelay jack_latent_client jack_load jack_metro
jack_midi_dump jack_midi_latency_test jack_midiseq jack_midisine
jack_monitor_client jack_multiple_metro jack_net_master jack_net_slave
jack_netsource jack_rec jack_samplerate jack_server_control
jack_session_notify jack_showtime jack_simple_client
jack_simple_session_client jack_test jack_thru jack_transport jack_unload
jack_wait jack_zombie

jack_connect fluidsynth:l_00 system:playback_3
jack_connect fluidsynth:r_00 system:playback_4

jack_lsp # list jack ports
jack_lsp -c # list jack port connections (sinks indented)
- jack-play(1) — jack-tools — Debian stretch — Debian Manpages - a light-weight JACK sound file player. It creates as many output ports as there are channels in the input file. It will connect to ports mentioned in the environment variable JACK_PLAY_CONNECT_TO which must include a %d pattern to indicate port number, otherwise it implements no connection logic, use jack-plumbing(1) instead. Written by Rohan Drape.
- q=0: SRC_LINEAR
- q=1: SRC_ZERO_ORDER_HOLD
- q=2: SRC_SINC_FASTEST
- q=3: SRC_SINC_MEDIUM_QUALITY
- q=4: SRC_SINC_BEST_QUALITY
- jack_iodelay - will create one input and one output port, and then measure the latency (signal delay) between them. For this to work, the output port must be connected to its input port. The measurement is accurate to better than 1 sample.
- https://gareus.org/oss/jackfreqd/start - heavily based on powernowd, but instead of taking CPU load as the parameter for deciding on the CPU frequency, jackfreqd uses JACK DSP load; it only supports powernowd's aggressive mode. Optionally, jackfreqd can also take CPU load into account, which comes in handy when the JACK daemon is temporarily unavailable or if frequency scaling should also cover non-audio processes.
Utilities
jack_control
D-Bus control via python2-dbus
jack_control start # starts the jack server
jack_control stop # stops the jack server
jack_control status # check whether the jack server is started; return value is 0 if running and 1 otherwise
jack_control dg # current driver
jack_control dp # current driver parameters

jack_control dl # drivers list
jack_control ds alsa # selects alsa as the driver (backend)
jack_control sm # switch master to currently selected driver

jack_control eps realtime True # set engine parameters, such as realtime
jack_control dps period 256 # set the driver parameter period to 256
Other jack_control commands:
help - print this help text
dpd <param> - get long description for driver parameter
dps <param> <value> - set driver parameter
dpr <param> - reset driver parameter to its default value
asd <driver> - add slave driver
rsd <driver> - remove slave driver
il - get list of available internals
ip <name> - get parameters of given internal
ipd <name> <param> - get long description for internal parameter
ips <name> <param> <value> - set internal parameter
ipr <name> <param> - reset internal parameter to its default value
iload <name> - load internal
iunload <name> - unload internal
ep - get engine parameters
epd <param> - get long description for engine parameter
eps <param> <value> - set engine parameter
epr <param> - reset engine parameter to its default value
jack-select
- https://github.com/SpotlightKid/jack-select - A systray application to quickly change the JACK-DBus configuration from QjackCtl presets.
jackman
- https://github.com/progwolff/jackman - Collection of scripts that help managing multiple audio interfaces with Jack
- https://github.com/progwolff/jackman_kcm - GUI for KDE Config Manager
Connections
alsa_in / alsa_out
- man: alsa_in, alsa_out - Jack clients that perform I/O with an alternate audio interface
alsa_in -j "Description" -d prefix:Name -q 1 2>&1 1> /dev/null & # send ALSA microphone input to a JACK input device
# -d = device name, e.g. hw:2
# -q = resampler quality, 1-4
# -c = channels (automatic by default)
# -r 48000 = sample rate (automatic by default)
alsa_in # can automatically detect and open an available soundcard (what type? doesn't work for usb mic)
arecord -l
...
card 2: AK5370 [AK5370], device 0: USB Audio [USB Audio]
  Subdevices: 0/1
  Subdevice #0: subdevice #0

alsa_in -dhw:2 -jusb-mic
# or
alsa_in -dhw:AK5370 -j "USB Mic"
alsa_out -j "Description" -d prefix:Name -q 1 2>&1 1> /dev/null & # send JACK output to an ALSA device, such as speakers or headphones
If you get "Capture open error: Device or resource busy", some other program has control of the playback interface.
To see what application has control of the interface:
fuser -u /dev/snd/pcmC0D0p # this is card 0, device 0, pcm playback
If it's pulseaudio, launch pavucontrol, go to the Configuration tab and select Off for the device(s).
- https://github.com/IARI/alsa_jack_gui - qt-based gui to manage alsa_jack bridges
Zita-ajbridge
- Zita-ajbridge - provides two applications, zita-a2j and zita-j2a. They allow an ALSA device to be used as a Jack client, to provide additional capture (a2j) or playback (j2a) channels. Functionally these are equivalent to the alsa_in and alsa_out clients that come with Jack, but they provide much better audio quality. The resampling ratio will typically be stable within 1 PPM and change only very smoothly, and delay will be stable even under worst-case conditions, e.g. the Jack client running near the end of the cycle.
cat /proc/asound/cards
zita-a2j -dhw:3,0 -jwebcam
Jack Std-I/O
- jackstdio - jack-stdout writes JACK audio-sample data to buffered standard output. jack-stdin reads raw audio data from standard-input and writes it to a JACK audio port.
- https://github.com/x42/jack-stdio - unix pipe audio-data from and to JACK
Session/config management
- A brief survey of Linux audio session managers - January 2013
- LinuxMusicians: Re: Non-stuff in KXStudio - April 2013, a comparison of session managers
- LinuxMusicians: Re: jack_session - August 2010, LASH vs LADISH
- Linux Synth Notes: robust_session_management
- Linux Synth Notes: concurrent_patch_management
LASH Audio Session Handler
- LASH - a session management system for GNU/Linux audio applications. It allows you to save and restore audio sessions consisting of multiple interconnected applications, restoring program state (i.e. loaded patches) and the connections between them.
Dead. Inflexible and underused.
- GLASHCtl - a simple applet for controlling the LASH Audio Session Handler. When you run it it will appear as a small LASH icon in your "notification area" or "system tray".
LADISH
- ladish - LADI Session Handler or simply ladish is a session management system for JACK applications on GNU/Linux using Dbus. Its aim is to allow you to have many different audio programs running at once, to save their setup, close them down and then easily reload the setup at some other time. ladish doesn't deal with any kind of audio or MIDI data itself; it just runs programs, deals with saving/loading (arbitrary) data and connects JACK ports together. It can also be used to move entire sessions between computers, or post sessions on the Internet for download.
- https://github.com/LADI/ladish - LADI Session Handler, a rewrite of LASH. ladish has a GUI frontend, gladish, based on lpatchage (LADI Patchage), and the ladish_control command-line app for headless operation. LADI Tools is a set of apps that interface with ladish, the JACK server and a2jmidid.
- https://github.com/alessio/ladish/
LADI Tools
- https://github.com/alessio/laditools - LADI Tools, forked from LADI/laditools, is a set of tools aiming to achieve the goals of the LADI project: improving the desktop integration and user workflow of a Linux audio system based on JACK and LADISH. These tools take advantage of the D-Bus interfaces of JACK2 and LADISH to ease the configuration and use of your software studio.
In the near future, it should also become possible to use laditools to control JACK through an OSC interface.
You will find in this suite:
- laditools - python module
- ladi-system-tray - a system tray icon that allows you to start, stop and monitor JACK, as well as start some JACK related apps (log viewer, connections...)
- wmladi - a controller as a Window Maker dockapp which uses a menu similar to ladi-system-tray's
- ladi-system-log - a JACK, LADISH and a2jmidid log viewer
- ladi-control-center - a GUI to setup JACK's and laditools' configuration
- ladi-player - compact front-end that allows users to start, stop and monitor a LADI system.
- g15ladi - a JACK monitor for g15 keyboards
JACK Session
- originally jack 1 (not D-Bus)
- Saving a session will save the state of all 'JACK Session'-supported apps plus their JACK connections
- Opening a session will automatically launch those apps, restoring their state and JACK connections
- Supported apps can be told to save/load their state to/from a specific location
- https://github.com/torbenh3/pyjacksm - a simple session manager for the jack-session protocol. jacksmtray starts the tray app, which controls the session manager daemon.
- https://github.com/fps/js_wrap - A simple wrapper for non jack_session enabled apps whose state can be fully qualified by the cmdline to run them
QjackCtl
- QjackCtl - JACK Audio Connection Kit - Qt GUI Interface
QjackCtl holds its settings and configuration state per user, in a file located at $HOME/.config/rncbc.org/QjackCtl.conf. Normally there's no need to edit this file, as it is recreated and rewritten every time qjackctl is run.
- D-Bus control and Jack 2 D-Bus control
- Connection and JACK Session manager
- https://github.com/kmatheussen/qjackctl_macos - Scripts to build qjackctl for macos
- https://github.com/cybercatalyst/jackcontrol - older qt4 fork
Cadence
- Cadence - a set of tools useful for audio production. Cadence itself is also an application (the main one), which this page documents. The other applications that are part of the Cadence suite are usually referred to as the "Cadence tools": Catarina (simple patching), Catia (patching), Claudia (LADISH)
- Cadence - controls and monitors various Linux sound systems as well as audio-related system settings
cadence --minimized &
Claudia
- Claudia - a LADISH frontend; it's just like Catia, but focused at session management through LADISH.
- jack2 (dbus)
- Claudia-Launcher is a multimedia application launcher with LADISH support. It searches for installed packages (not binaries), and displays the respective content as a launcher. The content comes from a hardcoded database, created and/or modified to suit the target distribution.
- https://repo.or.cz/klaudia.git - formerly
Non Session Manager
- Non Session Manager is a graphical interface to the NSM Daemon (nsmd). By default, running the command non-session-manager will start both the GUI and an instance of the daemon. NSM manages clients together in a session. That's it. NSM doesn't know or care what Window Manager or audio subsystem those clients use--nor should it. Specific clients must be written to persist these environmental factors, and added to sessions when required.
For saving and restoring the JACK connection graph, a simple headless client named jackpatch has been developed and included in the NSM distribution. Simply add jackpatch to your basic template session and all the sessions you base on it will have their JACK connection graphs automatically saved and restored.
non-session-manager -- --session-root path
- http://non.tuxfamily.org/nsm/API.html - going beyond LASH (L0/L1 (a unix signal handler for SIGUSR1 and trigger a save)) and JACK-Session (L2)
- http://non.tuxfamily.org/wiki/nsm-proxy - a simple NSM client for wrapping non-NSM capable programs. It enables the use of programs supporting LADISH Level 0 and 1, and programs which accept their configuration via command-line arguments.
- https://github.com/vktec/njsm - bridges Non Session Manager and JACK Session. This allows programs that support JACK Session (say, jalv) to run inside nsm-proxy and have their data saved using njsm.
- https://github.com/rhetr/nsm-scripts - various scripts to supplement non session manager (NSM) usage.
- https://github.com/rhetr/nsm-git - makes git a little easier to use with non session manager sessions. creates a git repository in the current session and commits all untracked and unstaged files to it whenever save is pressed. nsm-git also reads the session.nsm file and deletes any saved applications that are not listed in the session. This program is meant to be executed within NSM.
- https://github.com/diovudau/pynsm2 - Non Session Manager client library in Python - Version2: No dependencies except Python3.
RaySession
- https://github.com/Houston4444/RaySession - a GNU/Linux session manager for audio programs such as Ardour, Carla, QTractor, Non-Timeline, etc. It uses the same API as Non Session Manager, so programs compatible with NSM are also compatible with RaySession. As with Non Session Manager, the principle is to load audio programs together, then be able to save or close all documents together.
Stagepatch
- https://github.com/ViktorNova/stagepatch - nsm-git fork. Persistent audio and MIDI patchbay daemon for NSM, which auto-connects remembered devices when they reappear. Based on aj-snapshot
Gonzo
- https://github.com/scgolang/gonzo - Command line nsm server. I started this project out of frustration with trying to write nsm clients.
- https://github.com/scgolang/nsm - Non session manager OSC protocol implemented in Go
MonoMultiJack
- MonoMultiJack - a program for managing, starting and stopping Jackd and music programs. Another feature is connecting and disconnecting Jack audio and MIDI ports, as well as ALSA MIDI ports. It is programmed in Mono using GTK# as GUI toolkit.
JackLinx
- JackLinx - Simple Session Manager for the Music Classroom.
Preselected applications.
chino
- chino - a 'special-purpose session manager', requiring customisation to cover one or more similar setups. Once customised, using it is dead simple. Perhaps it is best to not overstress the term "session management", instead describing chino as a framework and toolset to build and manage a meta-application consisting of the user's favorite modular Jack audio and Midi tools, each started and interconnected in predefined ways.
chino -n newproject       # start newproject in current directory
chino -o existingproject  # open existingproject
Broken? Depends on listlib, which depends on anch, which depends on tml, which depends on flex. None of these were in the AUR. Then tml didn't build.
Routing
Catia
- Catia is a JACK Patchbay, with some neat features like A2J bridge support and JACK Transport.
Patchage
- Patchage is a modular patch bay for audio and MIDI systems based on Jack and Alsa.
Patchmatrix
- https://github.com/OpenMusicKontrollers/patchmatrix - PatchMatrix gives the best user experience with JACK1, as it makes intensive use of JACK's metadata API, which JACK2 still lacks an implementation of.
QJackConnect
- QJackConnect - a QT based patchbay for the JACK Audio Connection Kit.
jsweeper
- jsweeper will be a programmable port connection manager for ALSA sequencer and JACK audio and midi ports. Ports are laid out in a matrix so that connecting or disconnecting a port or a group of ports is just one mouse click or keypress.
njconnect
- njconnect - Curses Jack connection manager
CliConnect
- CliConnect is a minimal terminal based JACK connection manager. Why is that useful? For using over SSH mostly.
esjit
- esjit - a text-mode JACK audio connection manager. not in AUR anymore.
Routing snapshots
aj-snapshot
- aj-snapshot - a small program that can be used to make snapshots of the connections made between JACK and/or ALSA clients. Because JACK can provide both audio and MIDI support to programs, aj-snapshot can store both types of connections for JACK. ALSA, on the other hand, only provides routing facilities for MIDI clients. You can also run aj-snapshot in daemon mode if you want to have your connections continually restored.
aj-snapshot filename       # make a snapshot
aj-snapshot -r filename    # restore a snapshot
aj-snapshot -d filename &  # run in daemon mode
- Robust Session Management - QJackCTL and Patchage for setup, diagnostics, and testing, aj-snapshot for management of Jack and ALSA MIDI connections, The DBus version of Jack2
autocable
- https://github.com/resinbeard/autocable - A tiny C application that loads a text file and routes Jack audio connections for you.
echo "connect system:capture_1 system:playback_1
> disconnect system:capture_2 system:playback_2" | ./autocable
./autocable yourdirectory/textfile.ac
- Beginning a GNU/Linux/JACK headless performance system - with autocable and qjackctl for visual demonstration
JMess
- https://github.com/jcacerec/jmess-jack - JMess - A utility to save your audio connections (mess). JMess can save an XML file with all the current Jack Audio connections. This same file can be loaded to connect everything again. The XML file can also be edited. It also has the option to disconnect all the clients.
jmess -s filename.xml     # save
jmess -c filename.xml     # load
jmess -d -c filename.xml  # disconnect all then load
jmess -d                  # disconnect all
jack_snapshot
- jack_snapshot - a little tool for storing/restoring jack connection states. It does this by writing/reading the names of the connected ports into/from a simple text file. Here lies its one weakness: some jack clients don't use the same jack name on each run but assign one dynamically (like meterbridge); most of them can be told to use a specific name, though, so this isn't really a problem. Some pattern matching might be added in the future.
jack-plumbing
- jack-plumbing maintains a set of port connection rules and manages these as clients register ports with JACK. Port names are implicitly bounded regular expressions and support sub-expression patterns.
jack-matchmaker
- jack-matchmaker - a small command line utility that listens to JACK port registrations by clients and connects them when they match one of the port pattern pairs given on the command line at startup. jack-matchmaker never disconnects any ports. The port name patterns are specified as pairs of positional arguments or read from a file (see below) and are interpreted as Python regular expressions
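A minimal Python sketch of this pattern-pair idea, with backreference expansion from the matched output port into the input-port name. Port names, patterns, and the helper function are made up for illustration; the real tool registers a callback with JACK rather than working on a static list:

```python
import re

# Sketch: each pair maps an output-port regex to an input-port template.
# When a port matches the left pattern, the right template (with \1, \2
# backreferences) names the port to connect it to.
def plan_connections(pattern_pairs, ports):
    plans = []
    for out_pat, in_repl in pattern_pairs:
        for port in ports:
            m = re.fullmatch(out_pat, port)
            if m:
                plans.append((port, m.expand(in_repl)))
    return plans

pairs = [(r"system:capture_(\d+)", r"recorder:in_\1")]
ports = ["system:capture_1", "system:capture_2", "system:playback_1"]
# plan_connections(pairs, ports) pairs each capture port with recorder:in_N
```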
jack_autoconnect
- https://github.com/kripton/jack_autoconnect - Tiny application that reacts on port registrations by clients and connects them. The port names are interpreted as regular expressions and more than one pair can be defined upon calling.
Jack Sanity
- Jack Sanity - A scriptable environment using JavaScript for controlling jackdbus clients.
jacklistener etc.
patchy
- https://github.com/dedelala/patchy - store and recall jack audio port connections. Written in Go.
ASTRUX
- ASTRUX - A setup creation tool for live-oriented musicians (under active development by Raphaël Mouneyres)
Timing
JACK Transport / Timebase
- JACK Transport Design - The JACK Audio Connection Kit provides simple transport interfaces for starting, stopping and repositioning a set of clients. This document describes the overall design of these interfaces, their detailed specifications are in <jack/transport.h>
Timebase master relates to the musical/metronomic information being delivered to other JACK clients (tempo, bar, beat, ticks, time-sig. etc.).
Transport modes relate to linear playback location sync and state (stopped vs. rolling). [20]
jack_transport> ?
  activate    Call jack_activate().
  exit        Exit transport program.
  deactivate  Call jack_deactivate().
  help        Display help text [<command>].
  locate      Locate to frame <position>.
  master      Become timebase master [<conditionally>].
  play        Start transport rolling.
  quit        Synonym for `exit'.
  release     Release timebase.
  stop        Stop transport.
  tempo       Set beat tempo <beats_per_min>.
  timeout     Set sync timeout in <seconds>.
  ?           Synonym for `help'.
echo play | jack_transport  # pass a command to execute (tempo changes don't work via this method)
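The bar/beat/tick information a timebase master delivers boils down to simple arithmetic over the frame position. A hedged sketch, assuming a constant tempo and fixed meter; the function name, tick resolution, and parameters are illustrative, not the actual JACK API:

```python
TICKS_PER_BEAT = 1920  # a common tick resolution, chosen for the example

def frame_to_bbt(frame, sample_rate, bpm, beats_per_bar=4):
    """Convert an absolute frame position to 1-based (bar, beat, tick)."""
    beats = frame / sample_rate * bpm / 60.0       # elapsed beats since frame 0
    bar = int(beats // beats_per_bar) + 1          # 1-based bar number
    beat = int(beats % beats_per_bar) + 1          # 1-based beat within the bar
    tick = int((beats % 1.0) * TICKS_PER_BEAT)     # tick within the beat
    return bar, beat, tick
```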
- timebase.py - Query and manipulate JACK transport state and provide timebase information using jackclient-python
- JackDirector - a Linux app that lets you control Jack Audio Connection Kit's transport play/pause using midi commands (noteon) and let you assign bpm changes and other commands to midi program changes. This program plays a metronome thru 2 audio outputs exposed in Jack.
- gjacktransport - a standalone application that provides access to the jack audio connection kit's JACK transport mechanism via a dynamic graphical slider. In other words: this software allows seeking in audio/video media files when they are played along jack transport. Intended for audio engineers or A/V editors who work with ardour, ecasound, hydrogen and/or xjadeo. Additionally it provides 'gjackclock', a "Big Clock" display for jack-transport.
- cabestan is a small GTK+ program that interfaces with the jack audio connection kit to play, rewind, or fast forward the stream via the jack transport interface.
- jack-transport is a minimalist Jack transport control interface using ncurses. It displays the transport state and current time, and provides standard operating keys.
- QJackMMC - a Qt based program that can connect to a device or program that emits MIDI Machine Control (MMC) and allow it to drive JACK transport, which in turn can control other programs. JackCtlMMC is a slightly simpler command-line version of QJackMMC.
- https://github.com/ycollet/qtmiditrans - A Jack midi filter which translates midi events into jack transport (stop / play)
- jack-osc - publishes the transport state of the local JACK server as OSC packets over a UDP connection. jack-osc allows any OSC enabled application to act as a JACK transport client, receiving sample accurate pulse stream timing data, and monitoring and initiating transport state change.
- InConcert - a MIDI-controlled application that allows a musician to control the tempo and synchronization of a MIDI sequence. It features a tap tempo to adjust the beat (and synchronize the beat) and the ability to skip beats or insert beats. It works by controlling the Jack Audio Connection Kit's transport. InConcert depends on Jack and ALSA, and therefore only runs on Linux.
Doesn't work??
- TapStart - measures a tempo you tap. But: It sends OSC-messages with the tempo or delay to customizable hosts and paths. It updates the Jack tempo on each click (=new averaged tempo). It can start the Jack transport after tapping a defined number of beats.
- jack-trans2midi - a utility that converts jack transport into midi clock messages
- https://github.com/harryhaaren/AutoMate - An automation editor, which uses (or will use) JACK MIDI output and JACK Transport to sync to the beat.
- jack-file - Jack transport-centric utilities for audio playback
Ableton Link
- Ableton Link: Connect music making apps with Ableton Live | Ableton - a technology that keeps devices in time over a local network, so you can forget the hassle of setting up and focus on playing music. Link is now part of Live, and also comes as a built-in feature of other software and hardware for music making.
- https://github.com/rncbc/jack_link - a JACK transport timebase prototype bridge to Ableton Link.
- https://github.com/falkTX/Hylia - Host transport library for Ableton Link
- https://github.com/Deep-Symmetry/carabiner - A loose connector for interacting with Ableton Link. Carabiner is a program that embeds the C++ Link library and listens for local TCP connections to allow other programs, like beat-link-trigger and Afterglow, to participate in some aspects of a Link session, even though they were not written using C++ compatible languages and runtimes.
- https://github.com/ianacaburian/AbletonLink_JuceSampler - Simple JUCE sampler sequencer synced using Ableton Link.
Pioneer DJ Link
- https://github.com/Deep-Symmetry/beat-link - A Java library for synchronizing with beats from Pioneer DJ Link equipment, and finding out details about the tracks that are playing.
- https://github.com/Deep-Symmetry/beat-link-trigger - Trigger events and automate shows in response to events on Pioneer CDJs
- https://github.com/Deep-Symmetry/open-beat-control - Provides a subset of beat-link features over Open Sound Control.
- https://github.com/Deep-Symmetry/beat-carabiner - A minimal tempo bridge between Pioneer Pro DJ Link and Ableton Link.
- https://github.com/Deep-Symmetry/dysentery - Exploring ways to participate in a Pioneer Pro DJ Link network.
Calculation
- Qrest is a musician toolkit aimed at helping composers, performers, recordists and mixers: find out the tempo of a musical piece, calculate delay times, calculate LFO frequencies (i.e., timing conversions)
- RASP - "RASP Aids Song Production" is a set of utilities for song production, supplementing functions missing in some DAWs. Features: Tap Tempo, Delay/Hz Calculator, Song Time Calculator, Note-to-Frequency Conversion, Simple Frequency Generator (v2), Metronome (v2)
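The timing conversions these calculators perform reduce to a couple of formulas; a small sketch (the helper names and defaults are mine, not from either tool):

```python
def delay_ms(bpm, note_fraction=0.25):
    """Delay time in ms for a note value at a tempo (0.25 = quarter note).

    A quarter note at bpm beats/min lasts 60000/bpm milliseconds.
    """
    return 60000.0 / bpm * (note_fraction / 0.25)

def lfo_hz(bpm, beats_per_cycle=1.0):
    """LFO rate in Hz when one LFO cycle spans the given number of beats."""
    return bpm / 60.0 / beats_per_cycle
```

For example, a quarter-note delay at 120 bpm is 500 ms, and an LFO cycling once per bar of 4/4 at 120 bpm runs at 0.5 Hz.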
Metronomes
kmetronome
- Drumstick Metronome (kmetronome) is a MIDI based metronome using the ALSA sequencer. Intended for musicians and music students, it is a tool to keep the rhythm while playing musical instruments.
- No decimal BPM, not MIDI driven
ametro
- ametro - a little, simple MIDI Metronome using the ALSA sequencer.
- https://github.com/rabramley/linux_midi_commands
- No decimal BPM
klick
- klick is an advanced command-line based metronome for JACK. It allows you to define complex tempo maps for entire songs or performances.
Connects to JACK transport but isn't driven by it? The BPM argument is required and doesn't change when a transport master runs.
- gtklick - a GTK frontend to klick. It's written in Python and communicates with klick via OSC.
klick -o 12345 60 &
gtklick -q osc.udp://localhost:12345
- https://github.com/jean-emmanuel/kleek - Simple klick cli wrapper to setup training patterns faster
- https://github.com/sonejostudios/klick2wav - a GUI for the export function of Klick.
Polygnome
- Polygnome - A polyrhythmic metronome in GTK+. Supports ALSA and JACK audio backends.
- https://gitlab.com/bwrsandman/polygnome
- Audio only, no MIDI
GTick
- GTick - an audio metronome application written for GNU/Linux and other UN*X-like operating systems, supporting different meters (even, 2/4, 3/4, 4/4 and more) and speeds ranging from 10 to 1000 bpm. It utilizes GTK+ and OSS (ALSA compatible).
- https://github.com/yoyonel/gtick
- Audio only, no MIDI
Hubcap
- Hubcap - a fairly simple metronome *nix app with a tempo fader and both auditory and visual feedback on a beat.
- Audio only, no MIDI
Accelerando
- Accelerando - a musical metronome that can speed up, allowing you to practice your music at progressively faster tempos. For example, you could set it to play 60 beats per minute for 4 bars, then automatically speed up by 10 beats per minute, and so on. It runs on Unix.
- https://github.com/bcrowell/accelerando
- Audio only, no MIDI
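The accelerating schedule Accelerando describes can be sketched as a simple loop (an illustrative helper under assumed parameter names, not Accelerando's code):

```python
def tempo_schedule(start_bpm, step_bpm, bars_per_step, max_bpm):
    """List (bpm, bars) steps: hold each tempo for some bars, then step up."""
    bpm, schedule = start_bpm, []
    while bpm <= max_bpm:
        schedule.append((bpm, bars_per_step))
        bpm += step_bpm
    return schedule

# The example from the description: 60 bpm for 4 bars, then +10 bpm each step.
practice = tempo_schedule(60, 10, 4, 90)
```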
jmetro
- https://github.com/jmage619/jmetro - A dumb linux based Jack metronome with Qt based UI
midiclick
- midiclick - generates a metronome click-track on MIDI channel 9
- http://www.pjb.com.au/midi/free/midiclick
- ALSA MIDI, no audio
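For illustration, a click on MIDI channel 9 (zero-based) is a note-on with status byte 0x99; the note number below is chosen arbitrarily, not taken from midiclick:

```python
def note_on(channel, note, velocity):
    """Raw bytes of a MIDI note-on: status 0x9n (n = zero-based channel)."""
    return bytes([0x90 | channel, note, velocity])

# A percussion click: channel 9, note 37 (General MIDI Side Stick), velocity 100.
CLICK = note_on(9, 37, 100)
```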
ctronome
- ctronome - a very simple yet powerful ;) programmable console metronome software.
- OSS Audio only, no MIDI
Click Tracker
- Click Tracker - a program designed for composers, conductors and instrumentalists working with modern music. The main goal of the software is to prepare a click track of any score, no matter how complex it is. This software runs in Windows, OSX and Linux under the open source program Pure Data, and can be used either by conductors in concert, by musicians for practice purposes, by composers while composing.
Graphical metronomes
- JVMetro - provides a colorful, realtime visual indication of the passage of bars and beats on the Jack transport--without generating any sound of its own.
- https://gitlab.com/smondet/vimebac - graphical metronome and instructions display that interfaces with JACK-midi applications. The display can be completely driven by MIDI events and it can also send MIDI events. It can also be self-driven and hence run without jackd although this is somewhat less interesting since it becomes just a visual metronome.
- https://github.com/kunstmusik/BeatViz - BeatViz shows a 4x4 grid that represents additive groupings of beats (beat here meaning a single atomic tick, equal to a 16th note within the author's Csound Live Code system). UDP controlled.
- Nevena's Metronome - a metronome program with Qt GUI. Besides just being beautiful, and working under X11, Windows, and Mac, it has some advanced features, for example it can count to you or act as a stroboscope.
Web metronomes
- BestMetronome.com - web and mobile app metronomes
- Chrome Web Store: Dr. Beat - developed as a part of the HackTime (http://goo.gl/SscNs) project from GDG Chrome Korea. It's a metro style metronome app. It helps you to keep the beats.
Windows
- Open Metronome - Windows only. User definable BPM; Measure can be set to any length, with emphasis on any beat(s); Each beat can be one or more of over forty voices, with the supplied Samples covering the complete General MIDI percussion set, or custom samples; Visual indicator as well as audible output;
- based on http://www.weirdmetronome.com/
- http://bouncemetronome.com - Windows/Wine- $
Networked
See also Streaming#Audio
- https://en.wikipedia.org/wiki/Audio_over_Ethernet - the use of an Ethernet-based network to distribute real-time digital audio. AoE replaces bulky snake cables or audio-specific installed low-voltage wiring with standard network structured cabling in a facility. AoE provides a reliable backbone for any audio application, such as for large-scale sound reinforcement in stadiums, airports and convention centers, multiple studios or stages.
While AoE bears a resemblance to voice over IP (VoIP) and audio over IP (AoIP), AoE is intended for high-fidelity, low-latency professional audio. Because of the fidelity and latency constraints, AoE systems generally do not utilize audio data compression. AoE systems use a much higher bit rate (typically 1 Mbit/s per channel) and much lower latency (typically less than 10 milliseconds) than VoIP. AoE requires a high-performance network. Performance requirements may be met through use of a dedicated local area network (LAN) or virtual LAN (VLAN), overprovisioning or quality of service features. Some AoE systems use proprietary protocols (at the higher OSI layers) which create Ethernet frames that are transmitted directly onto the Ethernet (layer 2) for efficiency and reduced overhead. The word clock may be provided by broadcast packets.
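A quick sanity check of the quoted ~1 Mbit/s-per-channel figure: one channel of uncompressed PCM at pro-audio rates lands right around it (illustrative calculation, ignoring packet overhead):

```python
def channel_bitrate_mbps(sample_rate, bit_depth):
    """Raw PCM bit rate for one channel, in Mbit/s (no packet overhead)."""
    return sample_rate * bit_depth / 1_000_000

# 48 kHz / 24-bit PCM is 1.152 Mbit/s per channel before network overhead.
rate = channel_bitrate_mbps(48_000, 24)
```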
See also Networking#ISDN
mLAN
- https://en.wikipedia.org/wiki/mLAN - short for Music Local Area Network, is a transport level protocol for synchronized transmission and management of multi-channel digital audio, video, control signals and multi-port MIDI over a network. The mLAN protocol was originally developed by Yamaha Corporation, and publicly introduced in January 2000. It was available under a royalty-free license to anyone interested in utilizing the technology. mLAN exploits several features of the IEEE 1394 (FireWire) standard such as isochronous transfer and intelligent connection management. There are two versions of the mLAN protocol. Version 1 requires S200 rate, while Version 2 requires S400 rate and supports synchronized streaming of digital audio at up to 24 bit word length and 192 kHz sample rate, MIDI and wordclock at a bitrate up to 400 Megabits per second. As of early 2008, mLAN appeared to have reached the end of its product life.
- mLAN Central -"mLAN FireWire Music Networking is the enabling technology for creating an intelligent, managed local area music network using FireWire. mLAN not only carries multi-channel digital audio and MIDI over 1394 FireWire, it includes the connection management so you can easily manage your entire network."
AES67
- https://en.wikipedia.org/wiki/AES67 - a technical standard for audio over IP and audio over ethernet interoperability. The standard was developed by the Audio Engineering Society and first published in September 2013. It is a layer 3 protocol suite based on existing standards and is designed to allow interoperability between various IP-based audio networking systems such as RAVENNA, Livewire, Q-LAN and Dante. It also provides interoperability with layer 2 technologies, like Audio Video Bridging (AVB). AES67 promises interoperability between previously competing networked audio systems and long-term network interoperation between systems. Since its publication, AES67 has been implemented independently by several manufacturers and adopted by many others.
- Merging Technologies - Alsa Ravenna Aes67 Driver - an ALSA Linux driver designed to provide high-performance RAVENNA/AES67 support for the Linux ecosystem. Merging is strongly committed to fostering the adoption of AES67 networking capability by making available a simple-to-integrate AES67 ALSA Linux driver with all required functionality for Linux-based OEMs to take advantage of this rapidly evolving market.
- https://github.com/voc/aes67-recorder - A Linux/GStreamer-Based AES67 Multitrack Audio Backup Solution
- VB-Audio Network - VBAN
- https://github.com/quiniouben/vban - VBAN protocol open-source implementation
Jack
Netjack
- Netjack - a Realtime Audio Transport over a generic IP Network. It is fully integrated into JACK. Syncs all Clients to one Soundcard so no resampling or glitches in the whole network. Packet loss is now also handled gracefully. By using the celt codec, its even possible, that single packet losses get masked by the Packet Loss Concealment Code.
- https://github.com/elcorto/jackpod - Control a realtime netjack2 connection between two machines
- LinuxMusicians: Setup Netjack2 with a crossover cable
JackTrip
- JackTrip - a Linux and Mac OS X-based system used for multi-machine network performance over the Internet. It supports any number of channels (as many as the computer/network can handle) of bidirectional, high quality, uncompressed audio signal streaming.
Zita-njbridge
- Zita-njbridge - command line Jack clients to transmit full quality multichannel audio over a local IP network, with adaptive resampling by the receiver(s). Zita-njbridge can be used for a one-to-one connection (using UDP) or in a one-to-many system (using multicast). Sender and receiver(s) can each have their own sample rate and period size, and no word clock sync between them is assumed. Up to 64 channels can be transmitted; receivers can select any combination of these. On a lightly loaded or dedicated network zita-njbridge can provide low latency (same as for an analog connection). Additional buffering can be specified in case there is significant network delay jitter. IPv6 is fully supported.
- https://github.com/rhetr/ipaudio - use jackd, zita-njbridge and systemd for network ip audio
jack_audio_send / jack_audio_receive
- https://github.com/7890/jack_tools/tree/master/audio_rxtx - jack_audio_send & jack_audio_receive - JACK clients allowing transmission of uncompressed native JACK 32-bit float audio data over the network using UDP OSC messages.
MultiJACK
- https://github.com/ponderworthy/MultiJACK - a fully operational demo of a framework to increase the audio DSP power available to JACK within a single multicore motherboard, using multiple JACK processes in concert, connected via IP transport.
Compared to jack2??
FLACJACKet
- https://github.com/0xsx/FLACJACKet - a DLNA media server that broadcasts streams of audio routed to JACK input ports over the local network encoded in the FLAC format. It aims to provide reliable audio transmission while minimizing latency and taking advantage of FLAC features such as lossless compression and support for surround sound. It is Free and Open Source Software, released under the GNU General Public License.
TPF
- https://gitlab.zhdk.ch/TPF/tpf-server - Telematic performance format server software
trx
- https://github.com/nettings/trx-jack - fork of http://www.pogo.org.uk/~mark/trx.git (dead) by Mark Hills
Studio Link
- Studio Link - professional Audio-Over-IP
- https://github.com/Studio-Link/app - This repository contains the studio link - baresip modules and build environment
- https://github.com/Studio-Link/overlay-lv2 - Linux LV2 VoIP/AoIP Plugin
AudioGridder
- https://github.com/apohl79/audiogridder - DSP servers using general purpose networks and computers - allows you to offload DSP processing from your local machine to remote computers. This can come in handy when mixing complex projects for instance. AudioGridder comes with a plugin and a server that enable VST3 and AudioUnit plugins to be hosted across the network. Simply run the server component on a remote machine and connect your DAW using the AudioGridder AU/VST3 plugin. You can add remote insert chains into your DAW's signal paths that way. The DSP code of the inserted plugins will be executed on the remote machine and the plugin UIs will be streamed over the wire. This allows for an experience very close to hosting the plugins directly in your DAW but without using your local CPU.
RoC
- Roc - real-time audio streaming over the network
HBRMT
- https://en.wikipedia.org/wiki/High_bit_rate_media_transport - (HBRMT) formerly known as High bit rate audio video over IP (HBRAV-IP), is a proposed standard for data encapsulation and forward error correction (FEC) of high bit rate contribution oriented video/audio feed services, up to 3 Gbit/s over Ethernet networks. HBRMT is being developed by the SMPTE 32NF networking technology committee. HBRMT is designed to incorporate both SDI uncompressed and JPEG 2000 compressed video and audio formats.
Plugins
Formats
VST2 / VST3
paths:
~/.vst
/usr/lib/vst
/usr/local/lib/vst
~/.wine/drive_c/Program Files (x86)/VstPlugins
~/.wine/drive_c/Program Files/VstPlugins
- PDF: An Investigation into Music-Oriented Software-Based Audio Signal Processing, Including Development of a Real-time Audio Application Using C++ - Toby Newman
- VST Preset Generator - writes preset files (fxp for program patch or fxb for bank patch) with randomized values. This is a tool for lazy or curious sound designers, who want to experiment with random theory with their VST plugins.
- https://github.com/x42/lv2vst - LV2 - VST wrapper. Expose LV2 plugins as VST2 plugins to a VST plugin-host on Windows, OSX and Linux.
- https://github.com/falkTX/JackAss - a VST plugin that provides JACK-MIDI support for VST hosts. Simply load the plugin in your favourite host to get a JACK-MIDI port. Each new plugin instance creates a new MIDI port.
- https://github.com/webprofusion/OpenAudio - A list of open source VST/audio plugin projects. Please contribute more links or open source your own plugins.
- https://github.com/DropSnorz/OwlPlug - Audio plugin manager. Small tool to manage VST plugin folders on Windows and MacOS
Creating
- HISE - a cross-platform open source audio application for building virtual instruments. It emphasizes sampling, but includes some basic synthesis features for making hybrid instruments as well as audio effects. You can export the instruments as VST / AU / AAX plugins or as a standalone application for Windows / macOS or iOS.
- https://github.com/davidhealey/librewave_woodwinds - contains the HISE project, scripts, and image files for the Libre Wave Sofia Woodwinds virtual instrument.
- https://github.com/Tracktion/pluginval - a cross-platform plugin validator and tester application. It is designed to be used by both plugin and host developers to ensure stability and compatibility between plugins and hosts.
- Jamba - a set of helpers (classes, concepts, build files, etc.) built on top of the VST SDK to provide a lightweight framework for building a VST2/3 plugin. Jamba has been designed to help in building a VST2/3 plugin, not to replace the SDK: you are still writing a VST2/3 plugin, not a Jamba plugin.
- https://github.com/RustAudio/vst-rs - VST 2.4 API implementation in rust. Create plugins or hosts. Previously rust-vst on the RustDSP group.
VST3
- https://github.com/steinbergmedia/vst3_public_sdk - VST 3 Implementation Helper Classes And Examples
Two favourite VST3 threads;
- Add support for sending Midi CCs out of VST3 plugins - Feature Requests - JUCE
- Linux support -help wanted- · Issue #105 · iPlug2/iPlug2
LADSPA
~/.ladspa /usr/local/lib/ladspa /usr/lib/ladspa # standard LADSPA paths
- https://github.com/swh/LRDF - a library to make it easy to manipulate RDF files describing LADSPA plugins. It can also be used for general RDF manipulation. It can read RDF/XML and N3 files and export N3 files, and it also has a light taxonomic inference capability. N.B. this is the descendant project of sourceforge.net/projects/lrdf
LV2
- LV2 - an open standard for audio plugins, used by hundreds of plugins and other projects. At its core, LV2 is a simple stable interface, accompanied by extensions which add functionality to support the needs of increasingly powerful audio software.
~/.lv2 /usr/local/lib/lv2 /usr/lib/lv2 # standard lv2 paths
lv2ls # list all lv2 plugins available
Specifications
- LV2 Specifications - All official LV2 specifications.
- LV2 - an interface for writing audio processors, or plugins, in C/C++ which can be dynamically loaded into many applications, or hosts. This core specification is simple and minimal, but is designed so that extensions can be defined to add more advanced features, making it possible to implement nearly any feature imaginable. API docs
- http://lv2plug.in/ns/ext/port-groups - Multi-channel groups of LV2 ports.
- LV2 MIDI - defines a data type for a MIDI message, midi:MidiEvent, which is normalised for fast and convenient real-time processing. MIDI is the Musical Instrument Digital Interface, a ubiquitous binary standard for controlling digital music devices. For plugins that process MIDI (or other situations where MIDI is sent via a generic transport) the main type defined here, midi:MidiEvent, can be mapped to an integer and used as the type of an LV2 Atom or Event.
- LV2 + midnam
- LV2 Units - This vocabulary defines a number of units for use in audio processing.
- LV2 UI - This extension is used to create User Interfaces (UIs) for LV2 plugins.
- LV2 1.0 released, what's next? - "LV2 is a successor of both LADSPA (audio effects) and DSSI (instruments) with some backwards compatibility. The scope of the API more or less equals to the sum of LADSPA and DSSI, not in the last place thanks to its modular design."
Creating
- Programming LV2 Plugins - a series of well-documented example plugins that demonstrate the various features of LV2. Starting with the most basic plugin possible, each adds new functionality and explains the features used from a high level perspective. API and vocabulary reference documentation explains details, but not the “big picture”. This book is intended to complement the reference documentation by providing good reference implementations of plugins, while also conveying a higher-level understanding of LV2.
- Programming LV2 Plugins - new book layout
- The LV2 Book - Rust Edition - a translation of the LV2 Book by David Robillard for the lv2rs library. As such, the examples in this book as well as the README's and comments are copied from the original, but the book itself has been altered to adapt for the differences between C and Rust.
- https://github.com/diovudau/lv2-workshop - Documentation and code for a workshop on creating LV2 plug-ins by OSAMC
- LV2 programming for the complete idiot - an LV2 plugin programming guide for the complete idiot using a set of C++ classes. If you are not a complete idiot, you may want to read the LV2 spec and figure it out for yourself.
- Lilv - a C library to make the use of LV2 plugins as simple as possible for applications. Lilv is the successor to SLV2, rewritten to be significantly faster and have minimal dependencies. It is stable, well-tested software (the included test suite covers over 90% of the code) in use by several applications.
- https://github.com/agraef/pure-lang/tree/master/pure-lilv - provides a Pure module for David Robillard's Lilv, a library for LV2 plugin host writers.
- https://github.com/atsushieno/lilv-sharp - an experimental Mono binding for Lilv.
- Suil is a lightweight C library for loading and wrapping LV2 plugin UIs.
- https://github.com/OpenMusicKontrollers/props.lv2 - Utility header for property based LV2 plugins
- https://github.com/OpenMusicKontrollers/timely.lv2 - Utility header for time-based LV2 plugins
- LVTK - C++ wrappers for LV2 Plugins
- https://github.com/Janonard/lv2rs - Idiomatic Rust library to create LV2-compatible plugins.
- https://github.com/x42/lv2toweb - create xhtml documentation for LV2 plugins
- LV2 Create - a GUI utility that lets you easily enter information about a plugin, without needing to know too many details about LV2 (certainly not about those godawful, over-engineered, developer/enduser hostile, inefficient, easily-broken TTL files. Terrible design for audio work). Then you click a button, and the utility creates the TTL files, and C skeleton code for the plugin. You just need to add your DSP code, and compile to create your plugin. It even generates the GNU Makefile for you.
- dkbuilder - from circuit to LV2 plugin
- dkbuilder: simulate a Poweramp - follow-up
- https://github.com/lvtk/jlv2 - LV2 Related JUCE Modules
Testing
- lv2lint - Check whether a given LV2 plugin is up to the specification
- Torture tester - a program to help with testing of LADSPA and LV2 plugins.
- https://github.com/moddevices/lv2bm - a benchmark tool for LV2 plugins, inspired by the lv2bench of lilv utils and the Torture tester
- https://github.com/ventosus/alluis.lv2 - LV2 plugin to test various LV2 UI toolkits
- https://github.com/ventosus/customui.lv2 - Plugin to test custom LV2 UI
Bridge to
- NASPRO bridges - a collection of bridges to LV2 that, once installed, allow you to use plugins developed for other plugin standards in LV2 hosts. As of now, it contains two bridges: a LADSPA 1.1 bridge and a DSSI 1.0.0/1.1.0 bridge.
- https://github.com/x37v/pdlv2 - turns pure data patches into LV2 plugins
Bridge from
- https://github.com/falkTX/Shella - LV2 to VST2 wrapper
Offline processing
- lv2file - a simple program which you can use to apply effects to your audio files without much hassle.
- lv2proc - generates an output sound file by applying a LV2 effect plugin to an input sound file.
Events
- https://github.com/OpenMusicKontrollers/orbit.lv2 - An LV2 time event manipulation plugin bundle
- Beatbox - Creates MIDI events based on LV2 time position events (bars and beats), e.g. to drive a drum machine. Bars and beats can be disabled/enabled separately.
- Click - Synthesizes click tracks based on LV2 time position events (bars and beats). Bars and beats can be disabled/enabled separately.
- Looper - Loops arbitrary LV2 atom events on a ping-pong buffer. E.g. loops MIDI, OSC or anything else that can be packed into LV2 atoms with sample accuracy. Needs to be driven by LV2 time position events.
- Pacemaker - Creates LV2 time position events from scratch to drive other plugins.
- Quantum - Quantizes incoming events to whole beats.
- Subspace - Subdivide or multiply incoming time signals by whole fractions, e.g. to speed up time x2, x3, ... or slow it down to x1/2, x1/3, ...
- Timecapsule - Record/Playback of arbitrary LV2 atoms to/from memory. Record all incoming atom messages with sample accuracy and play them back later from memory. Stored atom event data is part of the plugin state and thus preserved across instantiations.
Presets
- https://gitlab.com/Jofemodo/preset2lv2 - A converter that takes a set of native presets and generates an LV2 bundle containing one or more banks. It supports several native formats and is easily extensible.
Hardware
- https://wiki.moddevices.com/wiki/Control_Chain - an open standard developed by MOD Devices that defines a communication protocol, electrical specification, cables and connectors. It's used to connect external controllers (a.k.a. peripheral devices), such as expression pedals and foot-switches, to MOD devices, for example the MOD Duo.
DSSI
Hosts
Multiple
Carla
- Carla - an audio plugin host, with support for many audio drivers and plugin formats. It has some nice features like automation of parameters via MIDI CC (and send output back as MIDI too) and full OSC control. Carla currently supports LADSPA (including LRDF), DSSI, LV2, VST2/3 and AU plugin formats, plus GIG, SF2 and SFZ file support. It uses JACK as the default and preferred audio driver but also supports native drivers like ALSA, DirectSound or CoreAudio.
There are 4 types of engine processing:
- Single-client: (JACK driver only) - carla-jack-single
- Same as Multi-client, except that all JACK ports belong to a single master client.
- This is needed when a setup doesn't support multi-client JACK apps, such as LADISH.
- Multi-client: (JACK driver only) - carla-jack-multi
- Every single plugin is exposed as a new JACK client. Audio and MIDI ports are registered as needed.
- Rack: - carla-rack
- Plugins are processed in order, from top to bottom.
- Plugins with non-stereo audio channels are not supported, but a forced-stereo option is available for Mono ones.
- Patchbay: - carla-patchbay
- Modular patchbay mode, just like in JACK Multi-client and many other modular applications.
- Every plugin gets its own canvas group and ports allowing you to interconnect plugin audio and MIDI.
carla-single # usage: /usr/bin/carla-single [arch (optional)] [format] [filename/uri] [label (optional)] [uniqueId (optional)]
Possible archs:
- native (default) - linux32 - linux64 - win32 - win64
Possible formats:
- internal - ladspa - dssi - lv2 - vst|vst2 - gig - sf2 - sfz
Command-line launch examples:
/usr/bin/carla-single internal midisplit
/usr/bin/carla-single dssi /usr/lib/dssi/whysynth.so
/usr/bin/carla-single lv2 http://calf.sourceforge.net/plugins/Compressor
/usr/bin/carla-single native vst /usr/lib/vst/TAL-NoiseMaker.so
/usr/bin/carla-single win32 vst "~/.wine/drive_c/Program Files (x86)/VstPlugins/Kontakt 5.dll"
- Carla Backend: Modules - API
- https://github.com/progwolff/performer - Performer lets you manage all the songs in your setlist as individual Carla patches and loads each of them when you need it. Additionally Performer uses Okular or QWebEngine to display notes and chords of your songs.
Ingen
- Ingen (formerly Om) is a modular audio processing system for GNU/Linux audio systems using the Jack audio server and LV2 or LADSPA plugins.
Jost
- Jost (dead) was the first open source multi-technology (native VST, LADSPA, DSSI) host on Linux. It mainly hosts a chain of plugins per instance, publishing jack, alsa and alsa_seq ports in order to be connected in your main stream flow. It still has some very good features that make it a first-class host.
VST
MrsWatson
- MrsWatson - a command-line audio plugin host. It takes an audio and/or MIDI file as input, and processes it through one or more audio plugins. Currently MrsWatson only supports VST 2.x plugins, but more formats are planned in the future. MrsWatson was designed for primarily three purposes: Audio plugin development and testing, Automated audio processing for servers or other applications, Unit testing audio plugins
dssi-vst
- dssi-vst - Run Windows VST plugins on Linux. Note that DSSI does not support passing host tempo through to plugins.
FST
- FST - a program which uses Wine, Jack and Steinberg's VST Audio Plug-Ins SDK to enable the use of many VST audio plugins under GNU/Linux.
FeSTige
- FeSTige - a GUI for fst and dssi-vst, allowing you to run Windows VST plugins on Linux.
fsthost
- fsthost - FreeST standalone fork. Linux VST host, hybrid using winelib. Runs as a JACK client for audio/MIDI, with a GTK GUI. Supports 32 and 64 bit plugins. (Doesn't see the JACK server..?)
fsthost -g ~/.vst # build plugin db
export VST_PATH=~/VST:/usr/share/vst:/otherlocation
fsthost -g
fsthost_menu # Perl GTK menu to start up plugins
fsthost_ctrl # Perl GTK app for control via TCP socket
fsthost_list # simple application to show known plugins (read about XML DB)
export FSTMENU_GTK=2 # or 3
Airwave
- Airwave - a WINE-based VST bridge, that allows for the use of Windows 32- and 64-bit VST 2.4 audio plugins with Linux VST hosts
- https://pastebin.com/aDNcdRjp - "Airwave is very nice, but adding more than a few plugins to it is awfully tedious. So I've taken the matter into my own hands and written a script to add a large number of plugins to Airwave (plus the ability to edit their names) as a batch process." [22]
vstserver
- https://github.com/kmatheussen/vstserver - an old VST server program with its accompanying vstlib library
vst-bridge
- https://github.com/abique/vst-bridge - a bridge to run Windows VST plugins (both 32 and 64 bits) with Linux VST hosts.
LinVst
- https://github.com/osxmidi/LinVst - a Linux vst plugin that runs Windows 64 bit vst's. To use LinVst, the linvst.so file simply needs to be renamed to match the windows vst dll's filename.
- https://github.com/osxmidi/LinVst3 - Linux Windows vst3 wrapper/bridge
- https://github.com/osxmidi/LinVst3-X - Windows vst3 Linux Wrapper - Extra
yabridge
- https://github.com/robbert-vdh/yabridge - Yet Another VST bridge, run Windows VST2 plugins under Linux
VSTForx
- VSTForx - a full-modular effect network creation tool which comes as a VST plugin. With VSTForx you are able to load any number of VST plugins and connect them any way you want. Additional modules allow you to manipulate such signal chains and offer a whole new way of mixing and producing. Windows/Mac. $
L_Pa
- L_Pa Project - collection of tools aimed at better integration and *performance* of Linux + Wine + jackd + pro-audio applications. L_Pa accomplishes this by setting up a proper mix of kernel and software with low-latency/pro-audio users specifically in mind. So far the main components are a custom version of the (rt-)Linux kernel with a delta of patches for Linux pro-audio usage, and a custom version of Wine with a delta of patches to ensure Wine is ready for pro audio on the Linux platform, plus various other improvements and bug fixes. You'll want both WineASIO and FSThost; WineASIO provides an ASIO to JACK driver for Wine.
vsthost
- https://github.com/wtrsltnk/vsthost - Small code base containing a minimal vsthost
NetVST
- NetVST - Windows only
PluginRunner
- https://github.com/jatinchowdhury18/PluginRunner - A minimal command-line application for running audio through an audio plugin. Made with JUCE. Mostly only tested on Windows.
DSSI
- ghostess - a rough start at a graphical DSSI host, based on jack-dssi-host, but capable of saving and restoring plugin configuration, as well as specifying MIDI channels and layering synths. ghostess includes three MIDI drivers: an ALSA sequencer MIDI driver, a (clumsy but functional) CoreMIDI driver (which allows ghostess to be used on Mac OS X), and a JACK MIDI driver for use with the MIDI transport in recent versions (>=0.105.0) of JACK. ghostess also comes with a universal DSSI GUI, which attempts to provide GUI services for any DSSI or LADSPA plugin, and may be used with any DSSI host.
LADSPA
- JACK Rack is an effects "rack" for the JACK low latency audio API. The rack can be filled with LADSPA effects plugins and can be controlled using the ALSA sequencer. It's phat; it turns your computer into an effects box.
- jackspa - A small utility which will host a LADSPA plugin, providing JACK ports for its audio inputs and outputs, and sliders in a gtkmm GUI for its control inputs. I find it useful for hosting plugins with odd port configurations (such as a vocoder or a ring modulator), and for testing plugins. This project is pretty hacky. I threw it together quickly because I needed it in a hurry, and as a result, it's fairly buggy, and the code is a mess. But, it does the job.
- ng-jackspa is a set of simple user interfaces that host a LADSPA plugin, providing JACK ports for its audio inputs and outputs, and dynamic setting of its control inputs. Additionally, the plugin controls can be exported to or controlled by control voltages on standard JACK audio ports.
- Soundtank hosts LADSPA plugins in "realtime objects" which embody the structure of the audio signal flow. RTObjects can be controlled in a completely customizable fashion using MIDI events sent through the ALSA sequencer interface.
- Stomper - a virtual pedalboard for guitar, using commonly-available audio plugins in a user-defined arrangement and MIDI for switching. It is intended for on-stage use and will be optimized as such.
LV2
Jalv
- Jalv - a simple but fully featured LV2 host for Jack. It runs LV2 plugins and exposes their ports as Jack ports, essentially making any LV2 plugin function as a Jack application.
- https://github.com/brummer10/jalv_select - little app to select lv2 plugs for run with jalv
jalv.qt5 http://drumkv1.sourceforge.net/lv2
LV2_PATH=/path/to/plugin.lv2 jalv.gtk URI
LV2_PATH=/path/to/plugin.lv2 lv2ls # to find the URI(s)
zynjacku
- zynjacku - JACK based, GTK (2.x) host for LV2 synths. It has one JACK MIDI input port (routed to all hosted synths) and one (two for stereo synths) JACK audio output port per plugin. Such design provides multi-timbral sound by running several synth plugins.
Synthpod
- Synthpod - both LV2 host and plugin. It can be run as a standalone app and be used as a tool for live performances or general audio and event filtering. Or it can be run as a plugin itself inside another host (or inside itself) to add support for non-linear patching where only strictly linear connections are supported (e.g. as in most DAWs). Patching of audio channels is clickless.
Elven
- Elven - written for revision 2 of the LV2 specification and is NOT compatible with revisions 3 and later. It may work, it may break subtly or it may give your computer the swine flu.
mod-host
- https://github.com/moddevices/mod-host - an LV2 host for JACK, controllable via socket or command line
- https://github.com/moddevices/mod-ui - the UI for the MOD software. It's a webserver that delivers an HTML5 interface and communicates with mod-host. It also communicates with the MOD hardware, but does not depend on it to run.
- PedalPi - PluginsManager - Pythonic management of LV2 audio plugins with mod-host.
MODEP
- MODEP - an open-source, community-based MOD DUO emulator that lets you play around with hundreds of LV2 audio plugins ranging from a simple reverb to a complex FM synth using your Raspberry Pi and Pisound or any other Raspberry Pi supported sound card!
- https://github.com/BlokasLabs/modep - fork of pi-gen
lv2host
- https://github.com/giuliomoro/lv2host - A lv2 host with Bela example.
jackwrap.c
- https://github.com/x42/robtk/blob/master/jackwrap.c - x42 jack wrapper / minimal LV2 host
lv2h
Windows / Mac
- https://github.com/hotwatermorning/Terra - Yet another audio plugin hosting application.
Audio formats
- Xiph.Org's Digital Show & Tell - a video on digital media that explores multiple facets of digital audio signals and how they really behave in the real world.
- Video Game Music Preservation Foundation - the Wikipedia of video game music!
PCM
- https://en.wikipedia.org/wiki/Pulse-code_modulation - a method used to digitally represent sampled analog signals. It is the standard form of digital audio in computers, Compact Discs, digital telephony and other digital audio applications. In a PCM stream, the amplitude of the analog signal is sampled regularly at uniform intervals, and each sample is quantized to the nearest value within a range of digital steps.
Linear pulse-code modulation (LPCM) is a specific type of PCM where the quantization levels are linearly uniform. This is in contrast to PCM encodings where quantization levels vary as a function of amplitude (as with the A-law algorithm or the μ-law algorithm). Though PCM is a more general term, it is often used to describe data encoded as LPCM.
A PCM stream has two basic properties that determine the stream's fidelity to the original analog signal: the sampling rate, which is the number of times per second that samples are taken; and the bit depth, which determines the number of possible digital values that can be used to represent each sample.
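Those two fidelity parameters map directly onto what you specify when writing an LPCM stream. A minimal sketch using Python's stdlib wave module (the filename and helper name are arbitrary):

```python
import math
import os
import struct
import tempfile
import wave

def write_sine(path, freq=440.0, rate=44100, seconds=0.1):
    """Write a mono LPCM sine wave: `rate` samples per second,
    16-bit depth, i.e. 65536 possible values per sample."""
    frames = bytearray()
    for i in range(int(rate * seconds)):
        sample = int(32767 * math.sin(2 * math.pi * freq * i / rate))
        frames += struct.pack("<h", sample)  # 16-bit little-endian PCM
    with wave.open(path, "wb") as w:
        w.setnchannels(1)      # mono
        w.setsampwidth(2)      # bit depth: 2 bytes = 16 bits
        w.setframerate(rate)   # sampling rate in Hz
        w.writeframes(bytes(frames))

PATH = os.path.join(tempfile.gettempdir(), "sine.wav")
write_sine(PATH)
```

Doubling the sample rate or sample width doubles the data rate; that trade-off is the whole story of PCM fidelity.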
- https://en.wikipedia.org/wiki/Differential_pulse-code_modulation - DPCM, a signal encoder that uses the baseline of pulse-code modulation (PCM) but adds some functionalities based on the prediction of the samples of the signal. The input can be an analog signal or a digital signal. If the input is a continuous-time analog signal, it needs to be sampled first so that a discrete-time signal is the input to the DPCM encoder. DPCM was invented by C. Chapin Cutler at Bell Labs in 1950; his patent includes both methods.
Option 1: take the values of two consecutive samples; if they are analog samples, quantize them; calculate the difference between the first one and the next; the output is the difference, and it can be further entropy coded.
Option 2: instead of taking a difference relative to a previous input sample, take the difference relative to the output of a local model of the decoder process; in this option, the difference can be quantized, which allows a good way to incorporate a controlled loss in the encoding.
Applying one of these two processes, short-term redundancy (positive correlation of nearby values) of the signal is eliminated; compression ratios on the order of 2 to 4 can be achieved if differences are subsequently entropy coded, because the entropy of the difference signal is much smaller than that of the original discrete signal treated as independent samples.
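Option 1 above (plain difference coding against the previous input sample) is only a few lines of code. A sketch, with hypothetical helper names:

```python
def dpcm_encode(samples):
    """Emit each sample as a difference from the previous one (Option 1)."""
    prev, out = 0, []
    for s in samples:
        out.append(s - prev)
        prev = s
    return out

def dpcm_decode(diffs):
    """A running sum exactly reverses the difference coding (lossless here)."""
    prev, out = 0, []
    for d in diffs:
        prev += d
        out.append(prev)
    return out

# A slowly varying signal: the differences are much smaller than the
# samples themselves, which is what makes them cheaper to entropy-code.
signal = [100, 102, 105, 107, 106, 104]
diffs = dpcm_encode(signal)   # [100, 2, 3, 2, -1, -2]
assert dpcm_decode(diffs) == signal
```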
- https://en.wikipedia.org/wiki/Adaptive_differential_pulse-code_modulation - ADPCM, is a variant of differential pulse-code modulation (DPCM) that varies the size of the quantization step, to allow further reduction of the required data bandwidth for a given signal-to-noise ratio. Typically, the adaptation to signal statistics in ADPCM consists simply of an adaptive scale factor before quantizing the difference in the DPCM encoder. ADPCM was developed in the early 1970s at Bell Labs for voice coding, by P. Cummiskey, N. S. Jayant and James L. Flanagan
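A toy illustration of the adaptive step-size idea (deliberately simplified, not the actual Bell Labs or IMA algorithm): quantize each difference to a 2-bit code, widening the quantizer step when the code saturates and narrowing it otherwise:

```python
def adpcm_toy(samples):
    """Toy ADPCM: DPCM with an adaptive quantizer step. Lossy, unlike plain
    difference coding; returns the decoder's reconstruction of the input."""
    step, predicted, decoded = 4, 0, []
    for s in samples:
        diff = s - predicted
        code = max(-2, min(1, round(diff / step)))  # 2-bit code in -2..1
        predicted += code * step                    # decoder does the same
        decoded.append(predicted)
        # adapt: widen the step on saturated codes, narrow it otherwise
        step = step * 2 if code in (-2, 1) else max(1, step // 2)
    return decoded

# The reconstruction tracks a step input to within a few quantizer steps.
print(adpcm_toy([0, 10, 20, 20, 20, 20, 20, 20]))
```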
PDM
- https://en.wikipedia.org/wiki/Pulse-density_modulation - a form of modulation used to represent an analog signal with a binary signal. In a PDM signal, specific amplitude values are not encoded into codewords of pulses of different weight as they would be in pulse-code modulation (PCM). Instead, it is the relative density of the pulses that corresponds to the analog signal's amplitude. The output of a 1-bit DAC is the same as the PDM encoding of the signal. Pulse-width modulation (PWM) is a special case of PDM where the switching frequency is fixed and all the pulses corresponding to one sample are contiguous in the digital signal. For a 50% voltage with a resolution of 8-bits, a PWM waveform will turn on for 128 clock cycles and then off for the remaining 128 cycles. With PDM and the same clock rate the signal would alternate between on and off every other cycle. The average is 50% for both waveforms, but the PDM signal switches more often. For 100% or 0% level, they are the same.
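The 50%-level comparison above is easy to reproduce with a first-order delta-sigma modulator, the usual way PDM is generated. A minimal sketch (inputs in [0, 1]; helper name ours):

```python
def pdm(samples):
    """First-order delta-sigma modulation: accumulate the input and emit a
    pulse whenever the accumulator crosses the threshold, so the *density*
    of 1s tracks the input amplitude."""
    acc, bits = 0.0, []
    for x in samples:
        acc += x
        if acc >= 0.5:
            bits.append(1)
            acc -= 1.0
        else:
            bits.append(0)
    return bits

print(pdm([0.5] * 8))  # a constant 50% input alternates on/off every cycle
```

As the text notes, 100% and 0% inputs produce the same all-on / all-off streams PWM would.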
WAV / WAVE
- https://en.wikipedia.org/wiki/WAV - a Microsoft and IBM audio file format standard for storing an audio bitstream on PCs. It is an application of the Resource Interchange File Format (RIFF) bitstream format method for storing data in "chunks", and thus is also close to the 8SVX and the AIFF format used on Amiga and Macintosh computers, respectively. It is the main format used on Windows systems for raw and typically uncompressed audio. The usual bitstream encoding is the linear pulse-code modulation (LPCM) format.
- Intro to Audio Programming, Part 2: Demystifying the WAV Format – Game Theory - A blog by Microsoft Academic Developer Evangelist, Dan Waters
- https://github.com/Borewit/music-metadata/wiki/RIFF-WAVE - on metadata
- https://github.com/K0F/genwav - generate audio files from txt floats
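The RIFF chunk layout mentioned above is simple to walk by hand. A sketch that locates the 'fmt ' chunk and decodes its fields (helper name ours; field order follows the WAVE format spec):

```python
import struct

def read_fmt(data):
    """Walk the RIFF chunks of a WAV byte string and decode 'fmt ':
    format tag (1 = LPCM), channel count, sample rate, bit depth."""
    assert data[:4] == b"RIFF" and data[8:12] == b"WAVE"
    pos = 12
    while pos + 8 <= len(data):
        cid, size = struct.unpack_from("<4sI", data, pos)
        if cid == b"fmt ":
            tag, ch, rate, _bps, _align, bits = struct.unpack_from(
                "<HHIIHH", data, pos + 8)
            return {"format": tag, "channels": ch, "rate": rate, "bits": bits}
        pos += 8 + size + (size & 1)  # chunk bodies are word-aligned
    raise ValueError("no fmt chunk")
```

For a file produced by any ordinary LPCM writer, the format tag should come back as 1.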
Broadcast WAV
- https://en.wikipedia.org/wiki/Broadcast_Wave_Format - an extension of the popular Microsoft WAV audio format and is the recording format of most file-based non-linear digital recorders used for motion picture, radio and television production. It was first specified by the European Broadcasting Union in 1997, and updated in 2001 and 2003. The purpose of this file format is the addition of metadata to facilitate the seamless exchange of sound data between different computer platforms and applications. It specifies the format of metadata, allowing audio processing elements to identify themselves, document their activities, and supports timecode to enable synchronization with other recordings. This metadata is stored as extension chunks in a standard digital audio WAV file.
- BWF MetaEdit - developed by the Federal Agencies Digitization Guidelines Initiative (FADGI), supported by AudioVisual Preservation Solutions. This tool permits embedding, editing, and exporting of metadata in Broadcast WAVE Format (BWF) files. It can also enforce metadata guidelines developed by the Federal Agencies Audio-Visual Working Group, as well as recommendations and specifications from the European Broadcasting Union (EBU), Microsoft, and IBM.
RF64
- PDF: Long-form file format for the international exchange of audio programme materials with metadata
- https://en.wikipedia.org/wiki/RF64 - a BWF-compatible multichannel audio file format enabling file sizes to exceed 4 GB. It has been specified by the European Broadcasting Union. It has been accepted as the ITU recommendation ITU-R BS.2088. The file format is designed to meet the requirements for multichannel sound in broadcasting and audio archiving. It is based on the Microsoft RIFF/WAVE format and Wave Format Extensible for multichannel parameters. Additions are made to the basic specification to allow for more than 4 GB file sizes when needed (the new maximum filesize is now approximately 16 exabytes). The format is transparent to the BWF and all its supplements and chunks.
- https://github.com/IRT-Open-Source/libbw64 - Broadcast Wave 64 (ITU-R BS.2088) library
AU
- https://en.wikipedia.org/wiki/Au_file_format - a simple audio file format introduced by Sun Microsystems. The format was common on NeXT systems and on early Web pages. Originally it was headerless, being simply 8-bit µ-law-encoded data at an 8000 Hz sample rate. Hardware from other vendors often used sample rates as high as 8192 Hz, often integer multiples of video clock signal frequencies. Newer files have a header that consists of six unsigned 32-bit words, an optional information chunk and then the data (in big endian format). Although the format now supports many audio encoding formats, it remains associated with the µ-law logarithmic encoding. This encoding was native to the SPARCstation 1 hardware, where SunOS exposed the encoding to application programs through the /dev/audio interface. This encoding and interface became a de facto standard for Unix sound.
- https://notabug.org/kd/au-utils - simple, easily sandboxed, pipeline components for audio processing, using the au(7) file-format as an intermediary.
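The six-word big-endian header described above can be built with a single struct.pack call. A sketch (field meanings per the format description; the helper name is ours):

```python
import struct

def au_header(data_size, encoding=1, rate=8000, channels=1):
    """Six big-endian 32-bit words: magic '.snd', data offset, data size,
    encoding (1 = 8-bit mu-law), sample rate, channel count."""
    return struct.pack(">4sIIIII",
                       b".snd",    # magic, 0x2e736e64
                       24,         # data offset: minimal header, no info chunk
                       data_size, encoding, rate, channels)
```

Prepend this to a stream of 8-bit mu-law bytes and you have a playable .au file in the original Sun style.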
MP3
- MP3 (MPEG-1 or MPEG-2 Audio Layer III) is a patented encoding format for digital audio which uses a form of lossy data compression. It is a common audio format for consumer audio streaming or storage, as well as a de facto standard of digital audio compression for the transfer and playback of music on most digital audio players.
- https://github.com/lieff/minimp3 - Minimalistic, single-header library for decoding MP3. minimp3 is designed to be small, fast (with SSE and NEON support), and accurate (ISO conformant). A rough benchmark, measured using perf on an i7-6700K with IO included, is available on the project page.
- https://github.com/anars/blank-audio - Set of blank MP3 audio files
Encoding
LAME
- LAME is a high quality MPEG Audio Layer III (MP3) encoder licensed under the LGPL.
- http://savvyadmin.com/batch-mp3-encoding-with-linux-and-lame/
for f in *.wav ; do lame "$f" ; done
shine
- https://github.com/toots/shine - Super fast fixed-point MP3 encoder.
mp3fs
- mp3fs - a read-only FUSE filesystem which transcodes between audio formats (currently FLAC to MP3) on the fly when files are opened and read. It can let you use a FLAC collection with software and/or hardware which only understands the MP3 format, or transcode files through simple drag-and-drop in a file browser. [25]
Splitting
mp3splt
- mp3splt - a utility to split mp3, ogg vorbis and native FLAC files by selecting a begin and an end time position, without decoding. It's very useful for splitting large mp3/ogg vorbis/FLAC files into smaller ones, or for splitting entire albums to obtain the original tracks. If you want to split an album, you can select split points and filenames manually, or get them automatically from CDDB (internet or a local file) or from .cue files. It also supports automatic silence split, which can be used to adjust CDDB/cue split points, and trimming using silence detection. You can extract tracks from Mp3Wrap or AlbumWrap files in a few seconds. For mp3 files, both ID3v1 & ID3v2 tags are supported. Mp3splt-project is split into 3 parts: libmp3splt, mp3splt and mp3splt-gtk.
unflac
- https://github.com/ftrvxmtrx/unflac - Frame accurate audio image + cue sheet splitting. No ReplayGain.
split2flac
- https://github.com/ftrvxmtrx/split2flac - Split flac/ape/wv/wav + cue sheet into separate tracks
Pcutmp3
- Pcutmp3 - a Java based program that lets you cut and trim MP3 files losslessly (i.e. without any quality loss, as there is no re-encoding). Ideal for removing adverts or unwanted intros/outros from your radio mixes. Originally created by Sebastian Gesemann, it is now maintained by Christopher Banes.
- https://bitbucket.org/gbouthenot/pcutmp3/src/default/ - gapless mp3 cutter tool. This is a fork of http://pcutmp3.googlecode.com/svn/trunk/. Author: Christopher Banes (New BSD Licence).
- https://github.com/hdijkema/pcutmp3-gui - Proper Cut MP3 with bug fixes and enhanced with a GUI
lossless-cut
- https://github.com/mifi/lossless-cut - Save space by quickly and losslessly trimming video and audio files
quelcom
- https://github.com/posixru/quelcom - provides assorted tools to perform simple editing operations on MP3 and WAV audio files. These include fading, check-and-clean, informational extraction and lossless cutting and joining without reencoding.
MP3-Splitter
- https://github.com/gitpan/MP3-Splitter - MP3::Splitter - Perl extension for splitting MP3 files
Flacon
- Flacon - extracts individual tracks from one big audio file containing the entire album of music and saves them as separate audio files. To do this, it uses information from the appropriate CUE file. Besides, Flacon makes it possible to conveniently revise or specify tags both for all tracks at once or for each tag separately.
album-splitter
- https://github.com/crisbal/album-splitter - Do you have a music album as a single file (locally or on YouTube), with all its tracks joined together? Do you want to split that album in its single tracks? Do you want to tag these tracks so your music player can get all the required info from them?
WavePad
- Audio Editing Software. Sound, Music, Voice & Mp3 Editor - This audio editing software is a full-featured professional audio and music editor for Windows and Mac. It lets you record and edit music, voice and other audio recordings. When editing audio files, you can cut, copy and paste parts of recordings, and then add effects like echo, amplification and noise reduction. WavePad works as a wav or mp3 editor, but it also supports a number of other file formats including vox, gsm, wma, real audio, au, aif, flac, ogg, and more.
Non-commercial usage only.
mp3DirectCut
- mp3DirectCut - a fast and extensive audio editor and recorder for encoded MP3. Without re-encoding you can directly cut, crop or split your MP3 and AAC tracks, change the volume on MP3 and much more. Direct editing saves encoding time and preserves the original audio quality of your tracks. The built in recorder creates MP3 on the fly. By using Cue sheets, Pause detection or Auto cue you can easily divide long files.
Metadata
- id3reader - a Python module that reads ID3 metadata tags in MP3 files. It can read ID3v1, ID3v2.2, ID3v2.3, or ID3v2.4 tags. It does not write tags at all.
- http://search.cpan.org/dist/MP3-Info/
- http://search.cpan.org/dist/MP3-Tag/
- http://search.cpan.org/dist/MP3-Find/
- https://github.com/quodlibet/mutagen - a Python module to handle audio metadata. It supports ASF, FLAC, MP4, Monkey's Audio, MP3, Musepack, Ogg Opus, Ogg FLAC, Ogg Speex, Ogg Theora, Ogg Vorbis, True Audio, WavPack, OptimFROG, and AIFF audio files. All versions of ID3v2 are supported, and all standard ID3v2.4 frames are parsed. It can read Xing headers to accurately calculate the bitrate and length of MP3s. ID3 and APEv2 tags can be edited regardless of audio format. It can also manipulate Ogg streams on an individual packet/page level.
- eyeD3 - a Python tool for working with audio files, specifically mp3 files containing ID3 metadata (i.e. song info). It provides a command-line tool (eyeD3) and a Python library (import eyed3) that can be used to write your own applications or plugins that are callable from the command-line tool.
- http://sourceforge.net/projects/bulkid3 - an en-masse ID3 tag editor. It is designed to allow bulk modification of mp3 files, as well as file renaming.
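The ID3v1 format these tools read is simple enough to sketch by hand: a fixed 128-byte block at the very end of the file with NUL/space-padded fixed-width fields. A minimal illustrative parser (for real work prefer mutagen or eyeD3, which also handle the far more involved ID3v2 frames):

```python
def parse_id3v1(tail):
    """Parse an ID3v1 tag from the last 128 bytes of an MP3 file."""
    if len(tail) != 128 or not tail.startswith(b"TAG"):
        return None
    def text(field):
        # fields are fixed-width, NUL- or space-padded
        return field.split(b"\x00")[0].decode("latin-1").strip()
    return {
        "title":   text(tail[3:33]),
        "artist":  text(tail[33:63]),
        "album":   text(tail[63:93]),
        "year":    text(tail[93:97]),
        "comment": text(tail[97:127]),
        "genre":   tail[127],           # index into the ID3v1 genre table
    }

# build a synthetic 128-byte tag to demonstrate
tag = (b"TAG"
       + b"My Title".ljust(30, b"\x00")
       + b"An Artist".ljust(30, b"\x00")
       + b"The Album".ljust(30, b"\x00")
       + b"1999"
       + b"".ljust(30, b"\x00")
       + bytes([17]))                   # genre 17 = Rock
print(parse_id3v1(tag)["title"])        # My Title
```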
AAC
- http://en.wikipedia.org/wiki/Advanced_Audio_Coding - AAC is a standardized, lossy compression and encoding scheme for digital audio. Designed to be the successor of the MP3 format, AAC generally achieves better sound quality than MP3 at similar bit rates.
- https://github.com/linnaea/faac - based on the ISO MPEG-4 reference code.
Ogg
- Ogg is a multimedia container format, and the native file and stream format for the Xiph.org multimedia codecs.
Container format.
- https://en.wikipedia.org/wiki/Vorbis_comment - a metadata container used in the Vorbis, FLAC, Theora, Speex and Opus file formats. It allows information such as the title, artist, album, track number or other information about the file to be added to the file itself. However, as the official Ogg Vorbis documentation notes, “[the comment header] is meant for short, text comments, not arbitrary metadata; arbitrary metadata belongs in a separate logical bitstream (usually an XML stream type) that provides greater structure and machine parseability.”
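The Vorbis comment block itself is a trivial binary layout: a little-endian length-prefixed vendor string, a comment count, then length-prefixed UTF-8 "KEY=value" entries. A sketch of a round trip (the FLAC flavour; the Ogg Vorbis variant appends a framing bit after the block):

```python
import struct

def encode_vorbis_comments(vendor, comments):
    """Serialize a Vorbis comment block: vendor string, count, KEY=value entries,
    all length-prefixed with little-endian u32."""
    v = vendor.encode("utf-8")
    out = bytearray(struct.pack("<I", len(v)) + v)
    out += struct.pack("<I", len(comments))
    for key, value in comments:
        entry = f"{key}={value}".encode("utf-8")
        out += struct.pack("<I", len(entry)) + entry
    return bytes(out)

def decode_vorbis_comments(data):
    """Inverse of the above: returns (vendor, [(key, value), ...])."""
    (vlen,) = struct.unpack_from("<I", data, 0)
    vendor = data[4:4 + vlen].decode("utf-8")
    pos = 4 + vlen
    (count,) = struct.unpack_from("<I", data, pos)
    pos += 4
    comments = []
    for _ in range(count):
        (clen,) = struct.unpack_from("<I", data, pos)
        pos += 4
        key, _, value = data[pos:pos + clen].decode("utf-8").partition("=")
        comments.append((key, value))
        pos += clen
    return vendor, comments

blob = encode_vorbis_comments("demo vendor", [("TITLE", "Example"), ("ARTIST", "Someone")])
print(decode_vorbis_comments(blob))
```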
Opus
- Opus is a totally open, royalty-free, highly versatile audio codec. Opus is unmatched for interactive speech and music transmission over the Internet, but is also intended for storage and streaming applications. It is standardized by the Internet Engineering Task Force (IETF) as RFC 6716 which incorporated technology from Skype's SILK codec and Xiph.Org's CELT codec.
The best general-purpose lossy audio format; it supersedes both Vorbis and Speex.
ffmpeg -i input -acodec libopus -b:a bitrate -vbr on -compression_level 10 output # e.g. -b:a 96k; give the output a .opus extension
Vorbis
Speex
- https://www.speex.org
- https://en.wikipedia.org/wiki/Speex - a lossy audio compression format specifically tuned for the reproduction of human speech and also a free software speech codec that may be used on VoIP applications and podcasts. It is based on the CELP speech coding algorithm. Speex claims to be free of any patent restrictions and is licensed under the revised (3-clause) BSD license. It may be used with the Ogg container format or directly transmitted over UDP/RTP. It may also be used with the FLV container format. The Speex designers see their project as complementary to the Vorbis general-purpose audio compression project.
FLAC
- FLAC - stands for Free Lossless Audio Codec, an audio format similar to MP3, but lossless, meaning that audio is compressed in FLAC without any loss in quality. This is similar to how Zip works, except with FLAC you will get much better compression because it is designed specifically for audio, and you can play back compressed FLAC files in your favorite player (or your car or home stereo, see supported devices) just like you would an MP3 file. FLAC stands out as the fastest and most widely supported lossless audio codec, and the only one that at once is non-proprietary, is unencumbered by patents, has an open-source reference implementation, has a well documented format and API, and has several other independent implementations.
flac --best --keep-foreign-metadata input.wav # maximum compression, preserving non-audio RIFF chunks for exact restore
metaflac --list file.flac # list all metadata of a FLAC file
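Most of FLAC's gain comes from linear prediction: each sample is predicted from its predecessors and only the small residual is stored (then Rice-coded). A toy illustration of why that helps, using one of FLAC's fixed predictors on a synthetic sine (illustrative only, not FLAC's actual file format):

```python
import math

# 1000 samples of a 440 Hz sine at 44.1 kHz, 16-bit-ish amplitude
samples = [round(20000 * math.sin(2 * math.pi * 440 * n / 44100)) for n in range(1000)]

# FLAC's order-2 fixed predictor guesses s[n] = 2*s[n-1] - s[n-2];
# only the residual (prediction error) needs to be stored.
residuals = [samples[n] - (2 * samples[n - 1] - samples[n - 2]) for n in range(2, len(samples))]

raw_peak = max(abs(s) for s in samples)
res_peak = max(abs(r) for r in residuals)
print(raw_peak, res_peak)  # the residual peak is a few hundred times smaller
```

Smaller residuals need fewer bits per sample, which is where the lossless size reduction comes from.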
aptX
- https://en.wikipedia.org/wiki/AptX - a family of proprietary audio codec compression algorithms currently owned by Qualcomm. The original aptX algorithm was developed in the 1980s by Dr. Stephen Smyth as part of his Ph.D. research at Queen's University Belfast School of Electronics, Electrical Engineering and Computer Science; its design is based on time domain ADPCM principles without psychoacoustic auditory masking techniques.
Codec 2
- Codec 2 - Codec 2 is an open source speech codec designed for communications quality speech between 700 and 3200 bit/s. The main application is low bandwidth HF/VHF digital radio. It fills a gap in open source voice codecs beneath 5000 bit/s and is released under the GNU Lesser General Public License (LGPL). The Codec 2 project also contains several modems (FDMDV, COHPSK and mFSK) carefully designed for digital voice over HF radio; GNU Octave simulation code to support the codec and modem development; and FreeDV – an open source digital voice protocol that integrates the modems, codecs, and FEC. FreeDV is available as a GUI application, an open source library (FreeDV API), and in hardware (the SM1000 FreeDV adaptor).
STEM
- NI: Stems - a completely new way to DJ. Stems is a new format for music that redefines creative live performance. Create spontaneous edits, a cappellas, instrumentals, and more with your tracks’ musical elements available independently.
- https://github.com/faroit/stempeg - Python tool to read and write STEM files
WavPack
- WavPack - a completely open audio compression format providing lossless, high-quality lossy, and a unique hybrid compression mode. For version 5.0.0, several new file formats and lossless DSD audio compression were added, making WavPack a universal audio archiving solution.
Older
- https://github.com/temisu/ancient_format_decompressor - Decompression routines for ancient formats
DSD / SACD
- https://en.wikipedia.org/wiki/Direct_Stream_Digital - DSD is the name of a trademark used by Sony and Philips for their system of digitally recreating audible signals for the Super Audio CD (SACD). DSD uses pulse-density modulation encoding—a technology to store audio signals on digital storage media that are used for the SACD. The signal is stored as delta-sigma modulated digital audio, a sequence of single-bit values at a sampling rate of 2.8224 MHz (64 times the CD audio sampling rate of 44.1 kHz, but only at 1/32768 of its 16-bit resolution). Noise shaping occurs by use of the 64-times oversampled signal to reduce noise and distortion caused by the inaccuracy of quantization of the audio signal to a single bit. Therefore, it is a topic of discussion whether it is possible to eliminate distortion in one-bit delta-sigma conversion.
- YouTube: DSD Explained part 1 - At the 1996 AES Convention in Copenhagen the former CBS Research Lab introduced Direct Stream Digital, DSD for short, as an archive format that offered compact files while containing sufficient information for conversion to sample rates as high as 352.8 kHz - unheard of in 1996. From there it went to SACD and now to DSD downloads. What is DSD all about? Hans Beekhuyzen explains.
- https://en.wikipedia.org/wiki/Super_Audio_CD - DSD on disc
- Super Audio CD decoder - a command-line application which takes a Super Audio CD source and extracts a 24-bit high resolution wave file. It handles both DST and DSD streams. The application reads the following input: SACD image files (*.iso), Sony DSF files (*.dsf), Philips DSDIFF files (*.dff). Supported output sample rates: 88.2KHz, 96KHz, 176.4KHz, 192KHz
- What is DoP (DSD over PCM)? - It involves taking groups of 16 adjacent 1-bit samples from a DSD stream and packing them into the lower 16 bits of a 24/176.4 data stream. Data from the other channel of the stereo pair is packed the same way. A specific marker code in the top 8 bits identifies the data stream as DoP, rather than PCM. The resulting DoP stream can be transmitted through existing 24/192-capable USB, AES, Dual AES or SPDIF interfaces to a DoP-compatible DAC, which reassembles the original stereo DSD data stream COMPLETELY UNCHANGED. If something goes wrong and the data stream is decoded as PCM, the output will be low-level noise with faint music in the background, so it fails safely. This can happen if the computer erases the marker code by applying a volume adjustment.
- DoP isn't PCM - Paul McGowan, PS Audio
- New HDMI audio output format: native DSD (one bit audio) passthrough for Android set-top boxes (feature request) - Google Issue Tracker
- https://en.wikipedia.org/wiki/Digital_eXtreme_Definition - or DXD, a digital audio format that originally was developed for editing high-resolution recordings recorded in DSD, the audio standard used on Super Audio CD (SACD). As the 1-bit DSD format used on SACD is not suitable for editing, alternative formats such as DXD or DSD-Wide must be used during the mastering stage. In contrast with DSD-Wide or DSD Pure which offers level, EQ, and crossfade edits at the DSD sample rate (64fs, 2.822 MHz), DXD is a PCM signal with 24-bit resolution (8 bits more than the 16 bits used for Red Book CD) sampled at 352.8 kHz – eight times 44.1 kHz, the sampling frequency of Red Book CD. The data rate is 8.4672 Mbit/s per channel – three times that of DSD64.
- https://github.com/SqueezeOnArch/dsdplay - DSD to Flac / PCM/DoP conversion and resampling
- https://github.com/DocMarty84/sacd - Converts SACD image files, Philips DSDIFF and Sony DSF files to 24-bit high resolution wave files. Handles both DST and DSD streams. THIS IS ONLY A CLONE OF THE OFFICIAL REPO!!!
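The one-bit pulse-density idea behind DSD can be sketched with a first-order delta-sigma modulator. This is a deliberately simplified model: real DSD uses a high-order modulator at 2.8224 MHz so that noise shaping pushes quantization noise well out of the audio band.

```python
def delta_sigma_1bit(samples):
    """First-order delta-sigma modulator: encode a signal in [-1, 1] as a
    stream of +1/-1 values whose local pulse density tracks the input level."""
    acc, bits = 0.0, []
    for x in samples:
        acc += x                     # integrate the input
        y = 1 if acc > 0 else -1     # 1-bit quantizer
        acc -= y                     # feed the quantized output back
        bits.append(y)
    return bits

bits = delta_sigma_1bit([0.25] * 10000)
print(sum(bits) / len(bits))         # ~0.25: the bit density encodes the level
```

Averaging (low-pass filtering) the bit stream recovers the signal, which is why a DSD DAC can be little more than a switch and an analog filter.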
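The DoP packing scheme is mechanical enough to sketch: 16 DSD bits go into the low 16 bits of each 24-bit word, with the alternating 0x05/0xFA marker in the top byte. Mono is shown here, and the oldest-bits-first byte ordering is the usual convention but should be checked against the DoP spec:

```python
DOP_MARKERS = (0x05, 0xFA)  # alternating marker bytes a DoP-aware DAC locks onto

def pack_dop(dsd_bytes):
    """Pack a mono DSD byte stream into 24-bit DoP words: 16 DSD bits in the
    low 16 bits, the alternating marker in the top 8 bits."""
    words = []
    for i in range(0, len(dsd_bytes) - 1, 2):
        marker = DOP_MARKERS[(i // 2) % 2]
        words.append((marker << 16) | (dsd_bytes[i] << 8) | dsd_bytes[i + 1])
    return words

def unpack_dop(words):
    """Recover the DSD bytes completely unchanged, checking the marker."""
    out = bytearray()
    for n, w in enumerate(words):
        if (w >> 16) != DOP_MARKERS[n % 2]:
            raise ValueError("marker missing: not a DoP stream (or it was altered)")
        out += bytes([(w >> 8) & 0xFF, w & 0xFF])
    return bytes(out)

words = pack_dop(bytes(range(8)))
print([f"{w:06x}" for w in words])  # ['050001', 'fa0203', '050405', 'fa0607']
```

The marker check is also why a volume adjustment on the PCM path breaks DoP: scaling the samples destroys the 0x05/0xFA pattern.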
MQA
AMR
Dolby Digital / AC-3
- https://en.wikipedia.org/wiki/Dolby_Digital - the name for audio compression technologies developed by Dolby Laboratories, originally named Dolby Stereo Digital until 1994. Except for Dolby TrueHD, the audio compression is lossy. The first use of Dolby Digital was to provide digital sound in cinemas from 35mm film prints; today, it is also used for other applications such as TV broadcast, radio broadcast via satellite, DVDs, Blu-ray discs and game consoles. This format has different names: Dolby Digital; DD (often combined with channel count, for instance DD 2.0, DD 5.1); and AC-3 (Audio Codec 3, Advanced Codec 3, or Acoustic Coder 3 - these are backronyms; Adaptive Transform Acoustic Coding 3 is a separate format developed by Sony). ATSC A/52 is the name of the standard.
- https://en.wikipedia.org/wiki/Dolby_Digital_Plus - also known as Enhanced AC-3 (and commonly abbreviated as DD+ or E-AC-3, or EC-3) is a digital audio compression scheme developed by Dolby Labs for transport and storage of multi-channel digital audio. It is a successor to Dolby Digital (AC-3), also developed by Dolby, and has a number of improvements including support for a wider range of data rates (32 Kbit/s to 6144 Kbit/s), increased channel count and multi-program support (via substreams), and additional tools (algorithms) for representing compressed data and counteracting artifacts. While Dolby Digital (AC-3) supports up to 5 full-bandwidth audio channels at a maximum bitrate of 640 Kbit/s, E-AC-3 supports up to 15 full-bandwidth audio channels at a maximum bitrate of 6.144 Mbit/s. The full set of technical specifications for E-AC-3 (and AC-3) are standardized and published in Annex E of ATSC A/52:2012, as well as Annex E of ETSI TS 102 366 V1.2.1 (2008–08), published by the Advanced Television Systems Committee.
Dolby AC-4
- https://en.wikipedia.org/wiki/Dolby_AC-4 - Dolby AC-4 is an audio compression standard supporting multiple audio channels and/or audio objects. Support for 5.1 channel audio is mandatory and additional channels up to 7.1.4 are optional. AC-4 provides a 50% reduction in bit rate over AC-3/Dolby Digital Plus.
Dolby TrueHD
- https://en.wikipedia.org/wiki/Dolby_TrueHD - a lossless multi-channel audio codec developed by Dolby Laboratories which is used in home-entertainment equipment such as Blu-ray Disc players and A/V receivers. It is one of the successors to the Dolby Digital (AC-3) surround sound codec, which is used as the audio standard for the DVD-Video format. In this application, Dolby TrueHD competes with DTS-HD Master Audio, a lossless codec from DTS.
Dolby TrueHD uses Meridian Lossless Packing (MLP) as its mathematical basis for compressing audio samples. MLP is also used in the DVD-Audio format, but details of Dolby TrueHD and the MLP Lossless format as used on DVD-Audio differ substantially. A Dolby TrueHD bitstream can carry up to 16 discrete audio channels. Sample depths up to 24 bits/sample and audio sample rates up to 192 kHz are supported. Like the more common legacy codec Dolby Digital, Dolby TrueHD bitstreams carry program metadata. Metadata is separate from the coding format and compressed audio samples, but stores relevant information about the audio waveform and provides control over the decoding process. For example, dialog normalization and dynamic range compression are controlled by metadata embedded in the Dolby TrueHD bitstream. Similarly, a Dolby Atmos encoded Dolby TrueHD stream contains metadata to extract and place the objects in relevant positions. Dolby TrueHD is a variable bit-rate codec.
Dirac
- https://github.com/kode54/dh - Dirac to Headphones, with convolution code
ATRAC / Minidisc
- https://en.wikipedia.org/wiki/Adaptive_Transform_Acoustic_Coding - a family of proprietary audio compression algorithms developed by Sony. MiniDisc was the first commercial product to incorporate ATRAC in 1992. ATRAC allowed a relatively small disc like MiniDisc to have the same running time as CD while storing audio information with minimal loss in perceptible quality. Improvements to the codec in the form of ATRAC3, ATRAC3plus, and ATRAC Advanced Lossless followed in 1999, 2002, and 2006 respectively. Other MiniDisc manufacturers such as Sharp and Panasonic also implemented their own versions of the ATRAC codec. Sony has all but dropped the ATRAC related codecs in the USA and Europe and in their SonicStage powered 'Connect' Music Service (Sony's equivalent of iTunes) on 31 March 2008. However, it is being continued in Japan and various other countries.
NICAM
- https://en.wikipedia.org/wiki/NICAM - an early form of lossy compression for digital audio. It was originally developed in the early 1970s for point-to-point links within broadcasting networks. In the 1980s, broadcasters began to use NICAM compression for transmissions of stereo TV sound to the public.
Vinyl
- https://en.wikipedia.org/wiki/Phonograph - a device, invented in 1877, for the mechanical recording and reproduction of sound. In its later forms, it is also called a gramophone (as a trademark since 1887, as a generic name in the UK since 1910), or, since the 1940s, a record player. The sound vibration waveforms are recorded as corresponding physical deviations of a spiral groove engraved, etched, incised, or impressed into the surface of a rotating cylinder or disc, called a "record". To recreate the sound, the surface is similarly rotated while a playback stylus traces the groove and is therefore vibrated by it, very faintly reproducing the recorded sound. In early acoustic phonographs, the stylus vibrated a diaphragm which produced sound waves which were coupled to the open air through a flaring horn, or directly to the listener's ears through stethoscope-type earphones.
- https://en.wikipedia.org/wiki/Phonograph_cylinder - the earliest commercial medium for recording and reproducing sound. Commonly known simply as "records" in their era of greatest popularity (c. 1896–1915), these hollow cylindrical objects have an audio recording engraved on the outside surface, which can be reproduced when they are played on a mechanical cylinder phonograph. In the 1910s, the competing disc record system triumphed in the marketplace to become the dominant commercial audio medium.
- https://en.wikipedia.org/wiki/Phonograph_record - also known as a gramophone record (especially in British English) or simply a record, is an analog sound storage medium in the form of a flat disc with an inscribed, modulated spiral groove. The groove usually starts near the periphery and ends near the center of the disc. At first, the discs were commonly made from shellac; starting in the 1950s polyvinyl chloride became common. In recent decades, records have sometimes been called vinyl records, or simply vinyl, although this would exclude most records made until after World War II.
- Sound Experiments at the Volta Laboratory - Hear My Voice | Albert H. Small Documents Gallery | Smithsonian's National Museum of American History - [28]
- https://en.wikipedia.org/wiki/Magnetic_cartridge - more commonly called a phonograph cartridge or phono cartridge or (colloquially) a pickup, is an electromechanical transducer used in the playback of analog sound recordings called records on a record player, now commonly called a turntable because of its most prominent component but formally known as a phonograph in the US and a gramophone in the UK. The cartridge contains a removable or permanently mounted stylus, the tip - usually a gemstone like diamond or sapphire - of which makes physical contact with the record's groove. In popular usage and in disc jockey jargon, the stylus, and sometimes the entire cartridge, is often called the needle. As the stylus tracks the serrated groove, it vibrates a cantilever on which is mounted a permanent magnet which moves between the magnetic fields of sets of electromagnetic coils in the cartridge (or vice versa: the coils are mounted on the cantilever, and the magnets are in the cartridge). The shifting magnetic fields generate an electrical current in the coils. The electrical signal generated by the cartridge can be amplified and then converted into sound by a loudspeaker.
- YouTube: The world's cheapest phono cartridge - ubiquitous black red angular clone of Chuo Denshi 33 1/3 & 45 Cartridge+Stylus cartridge
- YouTube: What Phono Cartridge Should I Get?
- YouTube: DJ tips: needles
- https://github.com/mitsuhito/CuttingRecordGenerator - To make "Record" with laser cutter. Uses Processing.
- https://github.com/kallaballa/sndcut - a program that generates LP records from audio files - it generates an SVG file that you can laser cut.
Playlist formats
playlist='play.m3u' ; [ -f "$playlist" ] && rm "$playlist" ; for f in *.mp3; do echo "$(pwd)/$f" >> "$playlist"; done # create an m3u playlist with absolute file paths
Playback
See also Playback, Sampling, Distros#Media
jplay2
- https://gareus.org/oss/jplay2/start - jplay2 is a command-line audio player gluing together JACK, libsamplerate and liblo (OSC control). It plays a single file (no playlist), but with ffmpeg & libsndfile it plays every file one throws at it (even DVD-vobs or timemachine-w64 ;-) ). Once started, it's only possible to interact with jplay2 via OSC or jack-transport.
Random Parallel Player
- https://github.com/diovudau/random-parallel-player - Takes a bunch of audio files as tracks and plays them back randomly creating new music each playthrough. The core rule of RPP: No human interaction once the playback has started. RPP is based on an idea of Louigi Verona. The included audio samples in example.rpp were created by him. You can read about the original project here: https://louigiverona.com/?page=projects&s=writings&t=linux&a=linux_randomhouse
loopnaut
- https://github.com/soenkehahn/loopnaut - tool to play audio files in a loop (via JACK)
mfl-gramophone
- https://github.com/eeeeeta/mfl-gramophone - A simple Rust application to play audio, using JACK, when it receives commands via OSC.
SQA: the Stuttery QLab Alternative
- https://github.com/eeeeeta/sqa - This project aims to create an audio player & cue system for live shows and staged productions, à la Figure53's QLab. All its code is written in the Rust programming language, a new language that prevents memory unsafety and improves programming ergonomics. This one large repo contains many different crates that all help accomplish that aim.
PEAR
- https://github.com/esologic/pear - a tool for sound installations. Take a directory with .wav files named in numeric order and play them over usb sound devices attached to the host computer over and over forever, looping all files once the longest one finishes.
Recording
fmedia
- fmedia - a fast asynchronous media player/recorder/converter for Windows, Linux and FreeBSD. It provides smooth playback and recording even if devices are very slow. It's highly customizable and can be easily extended with additional plugins. Its low CPU & memory consumption saves energy when running on a notebook's battery. Play or convert audio files, record new audio tracks from a microphone, save songs from Internet radio, and much more! fmedia is a free and open-source project, and you can use it as a standalone application or as a library for your own software. fmedia can decode: .mp3, .ogg (Vorbis, Opus), .opus, .m4a/.mp4 (AAC, ALAC, MPEG), .mka/.mkv (AAC, ALAC, MPEG, Vorbis), .avi (AAC, MPEG), .aac, .mpc, .flac, .ape, .wv, .wav. fmedia can encode into: .mp3, .ogg, .opus, .m4a (AAC), .flac, .wav.
arecord
arecord -D hw:0 -f cd test.wav # record from ALSA card 0 in CD quality (16-bit, 44.1 kHz, stereo)
audio-recorder
Ecasound
- Ecasound is a software package designed for multitrack audio processing. It can be used for simple tasks like audio playback, recording and format conversions, as well as for multitrack effect processing, mixing, recording and signal recycling. Ecasound supports a wide range of audio inputs, outputs and effect algorithms. Effects and audio objects can be combined in various ways, and their parameters can be controlled by operator objects like oscillators and MIDI-CCs. A versatile console mode user-interface is included in the package.
ecasound -a:1,2 -i jack -o jack -a:1 -elv2:http://calf.sourceforge.net/plugins/Compressor,0,1,0,0,0,0,0.015625,20,0.01,2000,1,1,0,0,1,1 -a:2 -elv2:http://calf.sourceforge.net/plugins/Limiter,0,1,0.587231,0,0,0,0,0,0,0,0,0.0625,0.1,1000,1,1,0,0.5,4 # JACK in/out through two chains: Calf Compressor LV2 on chain 1, Calf Limiter LV2 on chain 2
- Nama - manages multitrack recording, mixing and mastering using the Ecasound audio processing engine developed by Kai Vehmanen.
- Ecasound Mastering Interface - a Python front end to ecasound. It looks a lot like Rackmount effect and can be used to create an Ecasound Chain Setup while playing with parameters in real time. It supports mixing, recording, filtering, and processing and can export to ECS files. It supports all ecasound options, chain operators, and controllers.
- Visecas - a graphical user interface for Ecasound (http://eca.cx/ecasound), a software package written by Kai Vehmanen (k@eca.cx) which is designed for multitrack audio processing. It starts Ecasound as a child process and communicates via a pipe using Ecasound's InterActive Mode (IAM) commands.
- ecaplugin.py - a tool to generate the unwieldy ecasound command lines for LADSPA and LV2 plugins from Ardour sessions or JACK Rack configurations.
Meterec
- meterec works as a basic multitrack tape recorder. The aim of this software is to minimise the interaction of the user with the computer and allow them to focus on their instrumental performance. For this reason meterec's features are minimal. One of the main "limitations" is that meterec can only restart from time 0:00:00.00: if you screw up one take, start it over again! Rather than having you learn how to use a specific piece of software to correct what you screwed up, meterec forces you to learn and master your instrument. The good news is that previous takes are kept in the take history, and if in the end the first one was the best you could play, you can choose it in your final mix.
jack_capture
- jack_capture is a program for recording soundfiles with jack. The default operation will record what you hear in your loudspeakers into a stereo wav file.
- https://github.com/danielappelt/caPiture - uses jack_capture to headlessly multitrack record the input of the Behringer XR18 Mixer. For now, it is hardcoded to work with the XR18.
jrec2
- jrec2 - a simple patch to the jack_capture example client that implements silence detection and splitting of output files, and can call hooks (invoke third-party software) upon detecting silence or audio. It includes an optional random-playback control script that was used in an installation to record voice and, on detecting silence, play back random snippets of previously recorded material.
jack-record
- jack-record is a light-weight JACK capture client to write an arbitrary number of channels to disk.
jack_playrec
- https://github.com/HoerTech-gGmbH/jack_playrec - provides an interface for synchronous recording/playback via the JACK Audio Connection Kit.
screcord.lv2
jamRecord.lv2
- https://github.com/tdufret/jamRecord.lv2 - jam session recorder to only keep on file the last x minutes of a jam
Freeze
- https://github.com/nickolas360/freeze - an LV2 plugin for freezing tracks in a digital audio workstation—that is, temporarily rendering a track as audio to reduce CPU/DSP load, as tracks with large chains of CPU-heavy effects can make buffer underruns (xruns) quite common. Some DAWs like Ardour support track freezing to a certain extent, but Ardour, for example, cannot freeze MIDI tracks.
QJackRcd
- QJackRcd is a simple stereo recorder for JACK with a few features such as silence processing for automatic pause, file splitting, and background file post-processing.
audio coffin
- https://github.com/UoC-Radio/audio-coffin - A simple audio recorder/logger on top of Jack, libsndfile and libsoxr
timemachine
- JACK Timemachine - I used to always keep a minidisc recorder in my studio running in a mode where, when you pressed record, it wrote the last 10 seconds of audio to the disk and then caught up to realtime and kept recording. The recorder died and I haven't been able to replace it, so this is a simple JACK app to do the same job. It has the advantage that it never clips and can be wired to any part of the JACK graph.
- http://www.64studio.com/manual/audio/timemachine - A JACK application that can retrospectively record audio.
- https://github.com/swh/timemachine
Rotter
- Rotter is a Recording of Transmission / Audio Logger for JACK. It was designed for use by radio stations, who are legally required to keep a recording of all their output. Rotter runs continuously, writing to a new file every hour. Rotter can output files in several different structures: either all files in a single directory, or in a directory hierarchy. The advantage of using a folder hierarchy is that you can store related files in the hour's directory.
Transcoding
- https://en.wikipedia.org/wiki/Transcoding - the direct digital-to-digital conversion of one encoding to another,[1] such as for movie data files (e.g., PAL, SECAM, NTSC), audio files (e.g., MP3, WAV), or character encoding (e.g., UTF-8, ISO/IEC 8859). This is usually done in cases where a target device (or workflow) does not support the format or has limited storage capacity that mandates a reduced file size,[2] or to convert incompatible or obsolete data to a better-supported or modern format.
SoundConverter
- SoundConverter - the leading audio file converter for the GNOME Desktop. It reads anything GStreamer can read (Ogg Vorbis, AAC, MP3, FLAC, WAV, AVI, MPEG, MOV, M4A, AC3, DTS, ALAC, MPC, Shorten, APE, SID, MOD, XM, S3M, etc...), and writes to Opus, Ogg Vorbis, FLAC, WAV, AAC, and MP3 files, or use any GNOME Audio Profile.
Perl Audio Converter
- Perl Audio Converter - A tool for converting multiple audio types from one format to another. It supports the following audio formats: 3G2, 3GP, 8SVX, AAC, AC3, ADTS, AIFF, AL, AMB, AMR, APE, AU, AVR, BONK, CAF, CDR, CVU, DAT, DTS, DVMS, F32, F64, FAP, FLA, FLAC, FSSD, GSRT, HCOM, IMA, IRCAM, LA, MAT, MAUD, MAT4, MAT5, M4A, MP2, MP3, MP4, MPC, MPP, NIST, OFF, OFR, OFS, OPUS, OGA, OGG, PAF, PRC, PVF, RA, RAW, RF64, SD2, SF, SHN, SMP, SND, SOU, SPX, SRN, TAK, TTA, TXW, VOC, VMS, VQF, W64, WAV, WMA, and WV.
Secret Rabbit Code
- Secret Rabbit Code - aka libsamplerate, is a Sample Rate Converter for audio. One example of where such a thing would be useful is converting audio from the CD sample rate of 44.1kHz to the 48kHz sample rate used by DAT players. SRC is capable of arbitrary and time-varying conversions: from downsampling by a factor of 256 to upsampling by the same factor. Arbitrary in this case means that the ratio of input and output sample rates can be an irrational number. The conversion ratio can also vary with time for speeding up and slowing down effects.
SRC provides a small set of converters to allow quality to be traded off against computation cost. The current best converter provides a signal-to-noise ratio of 145dB with -3dB passband extending from DC to 96% of the theoretical best bandwidth for a given pair of input and output sample rates. Since the library has few dependencies beyond that provided by the standard C library, it should compile and work on just about any operating system. It is known to work on Linux, MacOSX, Win32 and Solaris. With some relatively minor hacking it should also be relatively easy to port it to embedded systems and digital signal processors.
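What an SRC does can be illustrated with a naive linear-interpolation resampler. This is only a sketch of the interface idea: libsamplerate's quality comes from band-limited sinc interpolation, not from anything this simple, and linear interpolation audibly aliases on real audio.

```python
def resample_linear(samples, ratio):
    """Naive linear-interpolation sample rate converter: the output has
    len(samples) * ratio samples, each interpolated between the two
    nearest input samples."""
    out = []
    for i in range(int(len(samples) * ratio)):
        pos = i / ratio                      # position in input-sample units
        j, frac = int(pos), pos - int(pos)
        a = samples[j]
        b = samples[j + 1] if j + 1 < len(samples) else samples[j]
        out.append(a + (b - a) * frac)
    return out

# doubling the rate, e.g. the ratio for 22.05 kHz -> 44.1 kHz
print(resample_linear([0.0, 1.0, 2.0, 3.0], 2.0))
# [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.0]
```

Because `ratio` is an arbitrary float, this also hints at how an SRC can handle irrational and time-varying conversion ratios.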
audiomap
- audiomap - a program which converts from any audio format to any other in a uniform and sane fashion, so you don't have to learn all the options for conversion program x. It will preserve all tags in the process. Its goal is bullcrap-free conversion. I wrote it because, while there are plenty of shell scripts out there to convert things to/from a few formats, they suck at handling weird characters and often even spaces! On top of this, they usually do not properly preserve metadata. Audiomap works with funky chars and spaces, and preserves metadata - all whilst providing encoding/decoding through many formats.
resample
transfercoder
- https://github.com/DarwinAwardWinner/transfercoder - Transfer and transcode your music at the same time
AudioMove
- AudioMove - a simple, easy to use GUI-based batch audio file copy-and-conversion program.
flacsync
- https://github.com/cmcginty/flacsync - Recursively mirror a directory tree of FLAC audio files to AAC or OGG.
lackey
- https://github.com/cassava/lackey - Automatically create and manage a lower-quality mirror of your music library
caudec
- caudec - a command-line utility for GNU/Linux and OS X that transcodes (converts) audio files from one format (codec) to another. It leverages multi-core CPUs with lots of RAM by using a ramdisk, and running multiple processes concurrently (one per file and per codec). It is Free Software, licensed under the GNU General Public License.
ffcvt
- ffcvt - ffmpeg convert wrapper tool
Analysis
Sonic Visualiser
- Sonic Visualiser - an application for viewing and analysing the contents of music audio files. The aim of Sonic Visualiser is to be the first program you reach for when you want to study a musical recording rather than simply listen to it. We hope Sonic Visualiser will be of particular interest to musicologists, archivists, signal-processing researchers and anyone else looking for a friendly way to take a look at what lies inside the audio file. Sonic Visualiser is Free Software, distributed under the GNU General Public License (v2 or later) and available for Linux, OS X, and Windows. It was developed at the Centre for Digital Music at Queen Mary, University of London.
Don't forget to install at least the QM vamp plugins.
- YouTube: 2 7 Sonic Visualiser Tutorial 2011
Baudline
- Baudline is a time-frequency browser designed for scientific visualization of the spectral domain. Signal analysis is performed by Fourier, correlation, and raster transforms that create colorful spectrograms with vibrant detail. Conduct test and measurement experiments with the built in function generator, or play back audio files with a multitude of effects and filters. The baudline signal analyzer combines fast digital signal processing, versatile high speed displays, and continuous capture tools for hunting down and studying elusive signal characteristics.
Friture
- Friture - a real-time audio analyzer. It works on Windows, Mac OS X and Linux. It is free and open source.
SoundRuler
- SoundRuler is a tool for measuring and graphing sound and for teaching acoustics. Its visual interactive approach to analysis brings you the best of two worlds: the control of manual analysis and the objectivity and speed of automated analysis.
Binary download needs 32-bit libxp to be installed.
BRP-PACU
- BRP-PACU - A cross platform dual channel FFT based Acoustic Analysis Tool to help engineers analyze live professional sound systems using the transfer function. One feature is the ability to capture four sample plots, average them, and invert to aid in final EQ.
Open Sound Meter
- Open Sound Meter - a real-time cross-platform dual-FFT measurement and analysis tool for sound system tuning.
japa
- Japa (JACK and ALSA Perceptual Analyser) - a 'perceptual' or 'psychoacoustic' audio spectrum analyser.
Spek
- Spek - helps to analyse your audio files by showing their spectrogram. Spek is free software available for Unix, Windows and Mac OS X.
Visual only.
zrtstr
- zrtstr is a small command-line application for detecting faux-stereo WAV files, that is, files with two identical channels that should have been saved as mono. Such files are sometimes generated by audio-editing software and DAWs (I'm looking at you, old Cubase 5). Having gotten tired of receiving such files from clients for mixing, as they use twice the necessary space and require twice the processing power, I decided to deal with this nuisance once and for all. zrtstr is a cross-platform application which runs very fast, thanks to being written in Rust.
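The check zrtstr performs can be sketched with Python's standard wave module. This is only an illustration of the idea (read 16-bit PCM, compare the interleaved channels), not zrtstr's actual Rust implementation:

```python
import math
import os
import struct
import tempfile
import wave

def is_faux_stereo(path):
    """True if a 16-bit stereo WAV file has bit-identical left/right channels."""
    with wave.open(path, "rb") as w:
        if w.getnchannels() != 2 or w.getsampwidth() != 2:
            return False
        frames = w.readframes(w.getnframes())
    samples = struct.unpack("<%dh" % (len(frames) // 2), frames)
    # samples are interleaved: L0, R0, L1, R1, ...
    return all(samples[i] == samples[i + 1] for i in range(0, len(samples), 2))

# Demo: write a "faux stereo" file with the same sine wave on both channels.
path = os.path.join(tempfile.mkdtemp(), "faux.wav")
with wave.open(path, "wb") as w:
    w.setnchannels(2)
    w.setsampwidth(2)
    w.setframerate(44100)
    for n in range(1000):
        s = int(10000 * math.sin(2 * math.pi * 440 * n / 44100))
        w.writeframes(struct.pack("<hh", s, s))
print(is_faux_stereo(path))  # True
```

A real tool would also want a small tolerance rather than exact equality, since some DAWs dither the two channels slightly differently.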
DFasma
- DFasma is a free open-source software tool for comparing audio files in time and frequency. The comparison is first visual, using waveforms and spectra. It is also possible to listen to time-frequency segments in order to allow perceptual comparison. It is basically dedicated to analysis: even though there is basic functionality to align the signals in time and amplitude, this software does not aim to be an audio editor.
ASAnnotation
- AS Annotation is an application for the analysis and automated or manual annotation of sound files. It features state-of-the-art sound analysis algorithms and specialized sound inspection tools, and can import Standard MIDI files. ASAnnotation is based on AudioSculpt, a sound analysis and transformation software developed at IRCAM since 1996. In addition to the analysis and annotation features present in AS Annotation, AudioSculpt comes with state-of-the-art sound processing, mostly based on an enhanced version of the phase vocoder. To store and exchange analysis and annotation data, ASAnnotation can use two formats: MIDI for notes and text, and SDIF for all analyses. The MIDI support facilitates the verification, alignment and correction of Standard MIDI Files against sound files. SDIF is a specialized format for sound description data, which combines very high precision with efficiency and interchangeability. Numerous other programs support SDIF, such as Max/MSP, OpenMusic, CLAM and SPEAR. A collection of utility programs can be used to convert SDIF files to text.
harmony-analyser
- harmony-analyser is a set of visual tools for music harmony analysis of WAV/MIDI input, powered by JHarmonyAnalyser library
Toscanalyzer
- Toscanalyzer is a powerful audio analysis tool for mixing and mastering. It is not only an analysis tool but a complete guide to understanding why your song sounds the way it does. Toscanalyzer lets you compare your project audibly and visually to any reference songs in a very convenient way, and offers a clear project view with many analysis options. The analysis gives you a detailed report about possible problems, plus clear guidance on how to fix them.
Java. Doesn't work for me.
SPAN
- http://www.voxengo.com/product/span/ - windows/mac vst
Raven Lite
- http://www.birds.cornell.edu/brp/RavenLite/RavenLiteReadMe.htm - an interactive sound visualization tool for novice through advanced users who want to visualize sound in exciting ways. Windows and Mac; the Lite version is freeware.
pyAudioAnalysis
- https://github.com/tyiannak/pyAudioAnalysis - a Python library covering a wide range of audio analysis tasks, including feature extraction, classification, segmentation and visualization.
QLoud
- QLoud - tool to measure a loudspeaker frequency response and distortions
- https://github.com/molke-productions/qloud - an attempt of porting QLoud to QT5
freqresp
- https://github.com/flok99/freqresp - Calculates a frequency response diagram
Room EQ Wizard
- REW - free room acoustics analysis software for measuring and analysing room and loudspeaker responses. The audio analysis features of REW help you optimise the acoustics of your listening room, studio or home theater and find the best locations for your speakers, subwoofers and listening position. It includes tools for generating audio test signals; measuring SPL and impedance; measuring frequency and impulse responses; measuring distortion; generating phase, group delay and spectral decay plots, waterfalls, spectrograms and energy-time curves; generating real time analyser (RTA) plots; calculating reverberation times; calculating Thiele-Small parameters; determining the frequencies and decay times of modal resonances; displaying equaliser responses and automatically adjusting the settings of parametric equalisers to counter the effects of room modes and adjust responses to match a target curve.
RIR-Generator
- https://github.com/ehabets/RIR-Generator - The image method, proposed by Allen and Berkley in 1979 [1], is probably one of the most frequently used methods in the acoustic signal processing community to create synthetic room impulse responses. A mex-function, which can be used in MATLAB, was developed to generate multi-channel room impulse responses using the image method. This function enables the user to control the reflection order, room dimension, and microphone directivity. This repository includes a tutorial, MATLAB examples, and the source code of the mex-function.
PostQC
- PostQC - a tool for measuring, logging and reporting audio levels for professional audio production and post production studios.
Gist
- https://github.com/adamstark/Gist - a C++ based audio analysis library
Mixers
Command-line mixers
amixer
- amixer - a command-line program for controlling the mixer in the ALSA soundcard driver. amixer supports multiple soundcards
amixer -c 0 | pcregrep "control" # shows all audio channels
JackMiniMix
- JackMiniMix - a simple mixer for the Jack Audio Connection Kit with an OSC based control interface. It supports a user configurable number of stereo inputs, which can then be queried and controlled by sending it OSC messages. It is released under the GPL license.
kbd2jackmix
- https://github.com/dsheeler/kbd2jackmix - Listen on a keyboard event device and respond to key combinations with jack midi volume messages for use with a midi-aware jack mixer, like jackmix or jack_mixer.
jack_switch
- https://github.com/MaurizioB/jack_switch - Simple jack client to switch outputs from any input client
ncurses mixers
alsamixer
aumix
Graphical mixers
Kmix
- Kmix - an application to allow you to change the volume of your sound card. Though small, it is full-featured, and it supports several platforms and sound drivers. Features: Support for ALSA and OSS sound systems, Plasma Desktop integrated on-screen-display for volume changes
Recommended.
QasMixer
- QasMixer - a desktop mixer application for ALSA's "Simple Mixer Interface".
QasHctl
- QasHctl - a mixer for ALSA's more complex "High level Control Interface".
alsamixergui
- alsamixergui - a FLTK based frontend for alsamixer. It is written directly on top of the alsamixer source, leaving the original source intact, only adding a couple of ifdefs and some calls to the GUI part, so it provides exactly the same functionality, but with a graphical user interface.
gnome-alsamixer
volti
- https://github.com/gen2brain/volti - a GTK+ application for controlling ALSA volume from the system tray/notification area.
Buggy, forgets card sometimes, menu/preferences don't open.
Xmixer
- https://www.freshports.org/audio/xmixer - audio mixer (GTK and Xlib) for X11R6
XMMIX
- XMMIX - Motif Audio Mixer
NewMixer
- https://github.com/jatinchowdhury18/NewMixer - An audio mixing tool that allows the user to visualize audio sources by their location in space rather than as channels on a mixing board.
Jack mixers
Non Mixer
- Non Mixer
- non-mixer manual
- http://non.tuxfamily.org/wiki/UsingMidiWithNon - doesn't handle MIDI directly, requires CV or OSC conversion
/strip/[STRIP_NAME]/[MODULE_NAME]/[PARAMETER_NAME]
See also non-midi-mapper [30]
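Since Non Mixer is controlled over OSC, a parameter can be set by sending a single float to a path of the form above. A minimal sketch using only the standard library (the strip/module/parameter names here are hypothetical, and the UDP port is whatever your Non Mixer instance reports at startup):

```python
import socket
import struct

def osc_message(address, value):
    """Encode a minimal OSC message with one float32 argument."""
    def pad(b):
        return b + b"\0" * (4 - len(b) % 4)  # null-terminate, pad to 4 bytes
    return pad(address.encode()) + pad(b",f") + struct.pack(">f", value)

# Hypothetical names: a strip called "Vocals" with a "Gain" module.
packet = osc_message("/strip/Vocals/Gain/Gain", -6.0)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(packet, ("127.0.0.1", 7777))  # port is session-dependent
```

For anything more involved than one-off messages, a proper OSC library (e.g. python-osc or liblo bindings) handles bundles and type tags for you.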
mx2482
- https://github.com/cybercatalyst/mx2482 - uses QJackAudio, works with JACK. 24 channels routed to 8 subgroups, each with direct out. Three-band parametric EQ for each channel. Aux send/return for each channel, so you can hook in other effects processors. Save and restore complete EQ states. Clean source code, free software licensed under the GPL. Uses the latest Qt5, so it runs on all major platforms.
JackMix
- JackMix - Matrix Mixer with dials for Jack. Not quite MIDI controllable?
jack_mixer
- https://github.com/relascope/jack_mixer - a GTK+ JACK audio mixer app with a look similar to its hardware counterpart. Routes inputs to outputs plus send outputs; MIDI control covers only one level and the stereo balance. Manual send groups, solo and mute.
jackmixdesk
- jackmixdesk - an audio mixer for JACK with an OSC control interface and LASH support. It has a configurable number of inputs and pre/post sends/outs which can be controlled by sending it OSC messages. There is a XML config file and a GTK interface.
JackMaster
- JackMaster - "Master Console" for the jack-audio-connection-kit. Number of inputs/subs and screen layout can only be changed by recompiling.
qtjmix
- https://github.com/ycollet/qtjmix - A Jack mixer GUI
jackmaster
- https://github.com/linuxmao-org/jackmaster - a "Master Console" for the jack-audio-connection-kit
injector
- https://github.com/dastax/injector - A simple mixer application for jack audio written in Qt4
MU1
- MU1 - a simple Jack app used to organise stereo monitoring. It was written originally for use with Ardour2, but still useful with Ardour3 as it provides some extra functions.
LV2 mixers
Some of these are also standalone.
xfade.lv2
- https://github.com/x42/xfade.lv2 - an audio-plugin for stereo cross-fading 2 x 2 input channels to 2 output channels.
balance.lv2
- https://github.com/x42/balance.lv2 - for stereo balance control with optional per-channel delay. balance.lv2 facilitates adjusting stereo-microphone recordings (X-Y, A-B, ORTF), but it is also generally useful as an "input channel conditioner". It allows attenuating the signal on one of the channels as well as delaying the signals (moving away from the microphone). To round off the feature set, channels can be swapped, or the signal can be downmixed to mono after the delay.
sonejostudios
- https://github.com/sonejostudios/LiveFader - LiveFader is a very simple stereo passive volume fader
- https://github.com/sonejostudios/StereoKnot - Simple Stereo Through with Volume Slider
- https://github.com/sonejostudios/AudioThrough16 - very simple 16-channel audio through
- https://github.com/sonejostudios/StereoSwitch - A simple Stereo Switch (send stereo signal to output A or to output B)
- https://github.com/sonejostudios/ABswitchStereo - Stereo source comparison tool.
- https://github.com/sonejostudios/Mixer4x - A simple 4-channel stereo mixer. The main goal is to use it as a submixer on a 4 channel track, but you can use it everywhere you need a small 4 channel stereo mixer.
- https://github.com/sonejostudios/LiveMixer - Stereo Mixer Strip with 2 Aux Sends (post Fader)
- https://github.com/sonejostudios/XYMatrix - XY Surround Matrix for one Source (Mono Input) with 4 Outputs (Left, Right, Surround Left, Surround Right) and Position Lock.
BalanceGain / BalanceWidth
- https://github.com/johnflynnjohnflynn/BalanceGain - Stepped gain audio plugin for Balance Mastering
- https://github.com/johnflynnjohnflynn/BalanceWidth - Stepped stereo width plugin for Balance Mastering
vopa
- https://github.com/ycollet/vopa - Volume Panning Midi CC controlable LV2 plugin
x42-mixtrix
- mixtri(x) - a matrix mixer and trigger processor intended to be used with the oscilloscope, but also useful for other applications.
matrixmixer.lv2
- https://github.com/x42/matrixmixer.lv2 - a matrix mixer, available as an LV2 plugin and as a standalone JACK application.
BadAmp
- https://github.com/badosu/BadAmp - A simple amplifier LV2 plugin
BAmp
- https://github.com/sjaehn/BAmp - Simple amplifier LV2 plugin using BWidgets GUI
Simple Amplifier
- https://github.com/ul/simple-amplifier - A very simple example of LV2 plugin built in Zig
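All of these small amplifier plugins boil down to the same operation: convert a gain in dB to a linear factor and multiply each sample by it. A minimal sketch of that operation (not any particular plugin's code):

```python
def db_to_linear(db):
    """-20 dB scales amplitude by a factor of 0.1; +6 dB roughly doubles it."""
    return 10.0 ** (db / 20.0)

def amplify(samples, gain_db):
    g = db_to_linear(gain_db)
    return [s * g for s in samples]

print(amplify([0.5, -0.25], -20.0))  # [0.05, -0.025]
```

Real plugins additionally smooth gain changes over a few milliseconds to avoid zipper noise when the control moves.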
Plujain
- https://github.com/Houston4444/plujain-plugins - utility LV2 plugins. The fadeswitch is a mono audio switch that progressively fades between its two output channels. The triswitch follows the same principle with three outputs. For the Quadriswitch, guess!
Kn0ck0ut
- Kn0ck0ut - takes two mono 44.1KHz inputs and spectrally subtracts one from the other. It can be used to help create 'acapellas' - to extract vocals from a track - if an instrumental version (or section) of the track is available.
- YouTube: KnOckOut by St3pan0va
- https://github.com/jeremysalwen/kn0ck0ut-LV2 - Port of kn0ck0ut to LV2 plugin
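The spectral subtraction idea behind Kn0ck0ut can be illustrated on a single frame with a plain DFT. Kn0ck0ut itself works on overlapping windowed FFT frames; this toy sketch only shows the per-bin operation, with made-up test tones:

```python
import cmath
import math

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

def spectral_subtract(mix, backing):
    """Per bin: subtract the backing track's magnitude from the mix's
    magnitude (floored at zero), keeping the mix's phase."""
    out = []
    for m, b in zip(dft(mix), dft(backing)):
        mag = max(abs(m) - abs(b), 0.0)
        out.append(cmath.rect(mag, cmath.phase(m)))
    return idft(out)

# Toy example: "instrumental" tone at bin 3, "vocal" tone at bin 7.
N = 64
inst = [math.cos(2 * math.pi * 3 * n / N) for n in range(N)]
vocal = [0.5 * math.cos(2 * math.pi * 7 * n / N) for n in range(N)]
mix = [a + b for a, b in zip(inst, vocal)]
recovered = spectral_subtract(mix, inst)
print(max(abs(r - v) for r, v in zip(recovered, vocal)) < 1e-6)  # True
```

On real material the instrumental is never phase-aligned this perfectly, which is why the results are usable "acapellas" rather than clean stems.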
intersect-lv2
- https://github.com/sboukortt/intersect-lv2 - an LV2 plugin which, given a stereo audio stream, “expands” it to three channels. Everything that is present in both input channels will be in the center channel of the output, and what is specific to each channel will be in the corresponding output channel. This can be useful, for example, to rediscover some of your favorite music by hearing things that you had never noticed before. (With that said, note that it does not necessarily work equally well on all songs, depending on how they were mixed.)
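A very crude version of this kind of decomposition is a mid/side split, where the "center" is what the two channels share and the side channels are the remainder. The plugin's analysis is more sophisticated, but the principle can be sketched as:

```python
def three_way_split(left, right):
    """Naive split: center = what the channels share, sides = the remainder."""
    center = [(l + r) / 2.0 for l, r in zip(left, right)]
    side_l = [l - c for l, c in zip(left, center)]
    side_r = [r - c for r, c in zip(right, center)]
    return side_l, center, side_r

# Identical channels end up entirely in the center channel.
sl, c, sr = three_way_split([0.5, -0.5], [0.5, -0.5])
print(sl, c, sr)  # [0.0, 0.0] [0.5, -0.5] [0.0, 0.0]
```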
sm.lv2
- https://github.com/nettings/sm.lv2 - a simple speaker management LV2 plugin with global master volume and per-channel trim, delay, and low-shelf. This plugin lets you optimize a stereo or multichannel speaker system for your listening environment.
tinyamp.lv2
- https://github.com/x42/tinyamp.lv2 - minimalistic gain control with small MOD GUI
mod-volume-lv2
- https://github.com/moddevices/mod-volume-lv2 - LV2 volume plugin
Other mixers
ladspa-xfade
- https://github.com/diizy/ladspa-xfade - LADSPA 2-channel crossfader
BackgroundMusic
faderratic
- faderratic - brings you cross-fading of 2 stereo inputs, but with a mind of its own and a ton of options to change the fade shape, length, limits, frequency and probability. faderratic works by generating a pulse on a tempo-sync frequency, and depending on the probability it may trigger a cross-fade event. You can optionally make the fader auto-return to either side and if you feel like it, trigger a fade manually or control the fader movement totally manually. Windows VST.
Metering
- https://en.m.wikipedia.org/wiki/K-system - an audio level measuring technique proposed by mastering engineer Bob Katz in the paper "An integrated approach to Metering, Monitoring and Levelling". It proposes a studio monitor calibration system and a set of meter ballistics to help engineers produce consistent sounding music while preserving appropriate dynamic range.
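The numeric part of a K-meter reading is just an RMS level in dBFS plus a fixed offset (on a K-20 meter, -20 dBFS RMS reads as 0). An illustrative sketch, ignoring the meter-ballistics side of the spec:

```python
import math

def rms_dbfs(samples):
    """RMS level of float samples (full scale = +/-1.0), in dBFS."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return float("-inf") if rms == 0 else 20.0 * math.log10(rms)

def k20_reading(samples):
    """K-20 scale: 0 dB on the meter corresponds to -20 dBFS RMS."""
    return rms_dbfs(samples) + 20.0

# A full-scale sine has an RMS of 1/sqrt(2), i.e. about -3 dBFS,
# which reads as roughly +17 on a K-20 meter.
sine = [math.sin(2 * math.pi * n / 100) for n in range(100)]
print(round(rms_dbfs(sine), 1))  # -3.0
```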
JackMeter
- Jack Meter is a basic console based DPM (Digital Peak Meter) for JACK. I wrote it for quickly checking remote signal levels, without having to run X11 to use a pretty graphical meter such as meterbridge.
JACK Meterbridge
- JACK Meterbridge - software meterbridge for the UNIX based JACK audio system. It supports a number of different types of meter, rendered using the SDL library and user-editable pixmaps.
Ebumeter
- Ebumeter provides level metering according to the EBU R-128 recommendation. The current release implements all features required by the EBU document except the oversampled peak level monitoring.
mod-peakmeter
lv2
- meters.lv2 is a collection of audio-level meters with GUI in LV2 plugin format.
- traKmeter - Loudness meter for correctly setting up tracking and mixing levels.
- K-Meter - Implementation of a K-System meter according to Bob Katz’ specifications.
JACK bitmeter
- JACK bitmeter - a diagnosis tool for JACK audio software on Linux (and perhaps other systems which have JACK and GTK+ 2.x). As its name might suggest, the bitmeter operates at the bare metal of JACK's I/O layer, looking at the 32 binary digits in each individual sample.
jack-peak-meter
- https://github.com/gethiox/jack-peak-meter - terminal-based peak meter for the JACK audio system, written in Go
Visualisation
Oscilloscope
- YouTube: Jerobeam Fenderson - channel
XXY Oscilloscope
- XXY Oscilloscope - WebGL, version 1.0, April 2017, by Neil Thape [32]
woscope
xoscope
- xoscope - a digital oscilloscope for Linux
x42-scope
- x42-scope - aka sisco.lv2, audio oscilloscope with variable time scale in LV2 plugin format.
jack_oscrolloscope
- jack_oscrolloscope - a simple waveform viewer for JACK. The waveform is displayed in realtime, so you can always see the signal the instant it comes through JACK's input port.
jack-scope
- jack-scope - an oscilloscope for JACK under X11. jack-scope draws either a time domain signal trace or a self correlation trace. Multiple input channels are superimposed, each channel is drawn in a different color. jack-scope accepts OSC packets for interactive control of drawing parameters.
QOscC
- QOscC - a highly flexible and configurable software oscilloscope with a large number of features. This includes support for any number of audio devices (ALSA or OSS), each with any number of channels. Each scope display can be configured individually to different display types and variants: e.g. you can choose from the standard y-t mode (as on a usual oscilloscope), x-y mode (e.g. for measuring the phase shift between two signals) or the FFT mode (to view a spectrum plot of the signal). This software is intended for electronics hobbyists who cannot afford a hardware oscilloscope or need a simple spectrum analyzer, as well as for musicians doing basic signal analysis.
DSSI Oscilloscope
- DSSI Oscilloscope - old
Oscilloscope
- https://github.com/kritzikratzi/Oscilloscope - Oscilloscope for Mac/Windows written in OF
PrettyScope
- PrettyScope - by Soundemote - $
Spectrum graph
- cava - Console-based Audio Visualizer for Alsa
- https://github.com/wayou/HTML5_Audio_Visualizer - An audio spectrum visualizer built with HTML5 Audio API
Spectrogram
- https://github.com/johnhldavis/xjackfreak - audio analysis/EQ tool for GNU/Linux/X11/Jack Audio Connection Kit. It can display the FFT of any input, modify it and output the result.
- spectrojack - A little spectrogram/audiogram/sonogram/whatever for jack. gtk 2 and fftw 3.
- Spectrum 3D - displays a 3D audio spectrogram, in real time or not, from the microphone or an audio file (including files recorded from the microphone); it is compatible with JACK (jack-audio-connection-kit). Optionally, it supports multitouch gestures from touchscreen and touchpad. It is built with the GStreamer, SDL (or GtkGLExt), OpenGL, GTK+-2.0 and uTouch-Geis free libraries and is under the GPL license.
- xspect3d - uses a bespoke drawing algorithm and buffer scheduling approach to render a 3D sonic landscape in real time, at up to several hundred frames a second.
- https://github.com/pdesaulniers/wolf-spectrum - a spectrogram plugin. It can be built as an LV2 or VST plugin and as a standalone Jack application.
- Jack Live Spectrum - a small (only one file) C program that runs on Linux that will display a frequency spectrum of the live streaming sound as a floating down animation. Below are some still screenshots.
- Photosounder Spiral - a music analysis plugin. It's a fresh take on spectral analysis focused on allowing you to see and understand music and the notes that make it up instantly. This is achieved mainly by coiling the spectrum into a spiral framed by a chromatic circle, thus allowing you to instantly see what's happening musically and spectrally. - $
Waveform
- Peaks.js - a JavaScript component from BBC Research and Development that allows users to view and interact with audio waveforms in the browser. Peaks.js uses the HTML <canvas> element to display the waveform at different zoom levels, and synchronises the display to playback of an associated <audio> or <video> element. The component also allows point and segment markers to be added to the waveform, e.g., for distinguishing music from speech, or identifying different music tracks.
- https://github.com/andrewrk/waveform - simultaneously transcode and generate visuals for an audio file
Various
- xoscope for Linux - a digital oscilloscope for Linux! ALSA, ESD, and COMEDI data sources; Sweep rates from 2 ns to 2 seconds per division; Eight simultaneous display channels; Scrollable memory buffers; Triggers; Cursors; Both analog and digital inputs; Sweep, accumulate, and strip chart display modes
- Signalizer - an all-in-one signal visualizing package with a bunch of unique focus points: real-time audio visualization with optimized 3D GPU graphics, everything being scalable and zoomable gridlessly, as well as being arbitrarily precise in both settings and display. Combined with a rich feature set, Signalizer is suited both for electrical/audio engineers inspecting signals fullscreen and for small windows giving an overview of your audio as you create it.
- RepoVizz - a data repository and visualization tool for structured storage and user-friendly browsing of music performance multi-modal recordings. The primary purpose of RepoVizz is to offer means for the scientific community to gain on-line access to a music performance multi-modal database shared among researchers.
- sndpeek - real-time 3D animated display/playback, can use mic-input or wav/aiff/snd/raw/mat file (with playback), time-domain waveform, FFT magnitude spectrum, 3D waterfall plot
- https://bitbucket.org/asiniscalchi/visualjackm - connect projectM visualisation to Jack
- VSXu - VSX Ultra, is an OpenGL-based (hardware-accelerated), modular visual programming environment whose main purpose is to visualize music and create graphic effects in real time. Its intention is to bridge the gap between programmer and artist, enabling a creative and inspiring environment to work in for all parties involved. VSXu is built on a modular plug-in-based architecture so anyone can extend it and/or make visualization presets ("visuals" or "states"). The program is free software, which means it's free from restrictions, free to share and copy, and free to adapt/modify and use any way you like.
- Le Biniou - As an artist/creator/DJ/VJ, to create live visuals based on your audio performances. As a user/listener, to watch an everlasting and totally unseen creation reacting to the music.
- https://github.com/party/tv - TV display for Party.
Phase
Windows VST
- Voxengo Correlometer - a free analog-style stereo multi-band correlation meter AudioUnit, AAX and VST plugin for professional music production applications. It is based on the correlation meter found in the PHA-979 phase-alignment plugin. Multi-band correlation metering is an advanced way to check for the presence of out-of-phase elements in the mix. Broadband correlation metering reports overall phase issues and may misrepresent problems present in select spectral bands, while a multi-band correlation meter easily highlights problems present in mid to high frequencies that are not easily heard by ear, but may still reduce the clarity of the mix. Another application of multi-band correlation metering is phase- and time-aligning of channels and tracks, especially bass and bass-drum pairs, guitar mic and D.I. source pairs, two-microphone stereo recordings, etc. Correlometer can display 4 to 64 individual spectral bands, with an adjustable band quality factor that controls the degree of each band's selectivity. The averaging time of the correlation estimator can be adjusted. Correlometer supports side-chain inputs for easy correlation estimation between separate audio tracks.
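The quantity such a meter displays is the normalised correlation between the two channels over a short window: +1 means in phase (mono-compatible), -1 means out of phase. A minimal broadband sketch (Correlometer additionally splits the signal into bands first):

```python
import math

def correlation(left, right):
    """Normalised stereo correlation: +1 in phase, -1 out of phase."""
    num = sum(l * r for l, r in zip(left, right))
    den = math.sqrt(sum(l * l for l in left) * sum(r * r for r in right))
    return num / den if den else 0.0

sig = [math.sin(2 * math.pi * n / 50) for n in range(200)]
print(round(correlation(sig, sig), 6))                # 1.0
print(round(correlation(sig, [-s for s in sig]), 6))  # -1.0
```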
Transcription
Transcribe!
- Transcribe! - an assistant for people who want to work out a piece of music from a recording, in order to write it out, or play it themselves, or both. It doesn't do the transcribing for you, but it is essentially a specialised player program which is optimised for the purpose of transcription. It has many transcription-specific features not found on conventional music players. It is also used by many people for play-along practice. It can change pitch and speed instantly, and you can store and recall any number of named loops. So you can practice in all keys, and you can speed up as well as slow down. There is some advice about play-along practice in Transcribe!'s help, under the heading "Various Topics". And it is also used for speech transcription. With its support for foot pedals and its superior slowed-down sound quality, it is an excellent choice for this purpose. There is some advice about speech transcription in Transcribe!'s help, under the heading "Various Topics".
Feature detection/extraction
aubio
- aubio is a tool designed for the extraction of annotations from audio signals. Its features include segmenting a sound file before each of its attacks, performing pitch detection, tapping the beat and producing midi streams from live audio. Because these tasks are difficult, we thought it was important to gather them in a dedicated library. To increase the fun, we have made these algorithms work in a causal way, so as to be used in real time applications with as low delay as possible. Functions can be used offline in sound editors and software samplers, or online in audio effects and virtual instruments.
- Aubio-LV2-Plugins is an unofficial set of LV2 plugins which wrap the functionality of the audio analysis library Aubio. Currently it consists of a transient/steady state separator, and an onset detector.
Vamp
- Vamp - an audio processing plugin system for plugins that extract descriptive information from audio data — typically referred to as audio analysis plugins or audio feature extraction plugins.
- Find and Download Plugins - listing
- QM Vamp Plugins - a set of plugins for feature extraction from audio data, using the Vamp plugin format suitable for use in programs such as Sonic Visualiser and Sonic Annotator. This plugin set includes note onset detector, beat and barline tracker, tempo estimator, key estimator, tonal change detector, structural segmenter, timbral and rhythmic similarity, wavelet scaleogram, adaptive spectrogram, note transcription, chromagram, constant-Q spectrogram, and MFCC plugins.
- Segmentino - a Vamp plugin for automatic music structural segmentation, based on an algorithm first used in Mauch et al.'s paper on Using Musical Structure to Enhance Automatic Chord Transcription.
- Silvet Note Transcription - or Shift-Invariant Latent Variable Transcription, is a Vamp plugin for polyphonic music transcription (from audio to note times and pitches). In conjunction with a Vamp plugin host like Sonic Visualiser, you can use Silvet to help you work out what notes are being played in a piece of music, for example if you want to learn to play it yourself. You might also use it to study performances for musicological features such as timing and dynamics. Silvet also serves as a useful stable baseline for comparative purposes, for researchers working on other methods related to note transcription. Silvet uses a high-quality and quite flexible method, but it has various limitations, which are described in the README file. Although you can easily get interesting and useful results for many kinds of music, don't expect it to take you straight from the audio to a complete and readable score!
- Melodia - automatically estimates the pitch of a song's main melody. More specifically, it implements an algorithm that automatically estimates the fundamental frequency corresponding to the pitch of the predominant melodic line of a piece of polyphonic (or homophonic or monophonic) music. Given a song, the algorithm estimates when the melody is present and when it is not (a.k.a. voicing detection), and the pitch of the melody when it is present.
- HPCP - a vamp plug-in for audio feature extraction that computes the instantaneous evolution of HPCP (Harmonic Pitch Class Profile) of a signal. The HPCP is an approach for chroma feature estimation which represents the pitch content of polyphonic music signals, mapped to a single octave. HPCP have been extensively used for several final applications such as key and chord estimation, similarity computation (cover version identification) and music classification.
auditok
- https://github.com/amsehili/auditok - an Audio Activity Detection tool that can process online data (read from an audio device or from standard input) as well as audio files. It can be used as a command line program and offers an easy to use API.
Beats [per minute]
- bonk - the Pure Data unit [bonk~] is a very useful musical tool for performance and composition. It analyses a stream of audio on its input and produces messages when it thinks the signal matches certain patterns; it has no audio output, just messages.
- MiniBPM is a simple, reliable tempo estimator for use in music audio applications. It quickly gets you a fixed beats-per-minute estimate from a sample of audio, provided the tempo doesn't change too much in it.
- libbeat - a lightweight beat detection library for Qt. It currently supports ALSA and PulseAudio. It uses fftw to process the samples.
- bpm-tools is the result of some experiments in automatically calculating and tagging the tempo (in beats-per-minute) of music files. Right now the code serves as the best explanation of the algorithm: a relatively simple application of autocorrelation by statistical sampling. As yet, there is no scientific comparison of the algorithm with other software.
- BeatDetektor - uses a very simple statistical model designed from scratch by myself to detect the BPM of music and provides real-time feedback useful for visualization and synchronization.
- https://github.com/adamstark/BTrack - a causal beat tracking algorithm intended for real-time use. It is implemented in C++ with wrappers for Python and the Vamp plug-in framework.
- MIDI Trigger - LV2 plugin which detects peaks by audio signal and sends MIDI notes.
- BeatCounter - a simple plugin designed to facilitate beatmatching software and turntables. It displays the current tempo in beats per minute (BPM), and an accumulated average over the last few seconds. BeatCounter is the perfect tool for DJs who want to integrate computer effects with turntables or a live band.
- Tapita - (snack in Spanish) is a BPM detector via keyboard, MIDI and JACK, written in C with GTK2.
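Several of the tools above (bpm-tools, MiniBPM) rest on the same idea: an onset-strength envelope correlates strongly with a shifted copy of itself when the shift is a whole beat period. A minimal sketch of that autocorrelation search (real implementations normalise for lag length, smooth the envelope, and weight plausible tempi):

```python
def estimate_bpm(onset_env, fps, lo=60, hi=180):
    """Find the lag (in envelope frames) with the strongest autocorrelation
    within the plausible tempo range, then convert the lag to BPM."""
    best_lag, best_score = None, float("-inf")
    for lag in range(int(fps * 60 / hi), int(fps * 60 / lo) + 1):
        score = sum(a * b for a, b in zip(onset_env, onset_env[lag:]))
        if score > best_score:
            best_lag, best_score = lag, score
    return 60.0 * fps / best_lag

# Synthetic onset envelope: a pulse every 0.5 s at 100 frames/s -> 120 BPM.
fps = 100
env = [1.0 if n % 50 == 0 else 0.0 for n in range(2000)]
print(estimate_bpm(env, fps))  # 120.0
```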
Frequency
- https://github.com/ibsh/libKeyFinder - Musical key detection for digital audio, GPL v3
Silence
silan
- https://github.com/x42/silan - audio file -silence- analyzer
SilentJack
- SilentJack is a silence/dead air detector for the Jack Audio Connection Kit.
SilenceRemover
- https://github.com/sagamusix/SilenceRemover - removes silence from the beginning of a WAV or FLAC file. Getting the job done was more important than beautiful code. For example, using FILE* instead of std::ifstream was simply done so that the file feeds directly into libflac.
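The core of a leading-silence trimmer like SilenceRemover is small. A stdlib-only sketch for 16-bit mono WAV (the threshold and function name are arbitrary demo choices; the real tool also handles FLAC):

```python
# Drop leading samples whose amplitude stays under a threshold.
import io
import struct
import wave

def trim_leading_silence(wav_bytes, threshold=500):
    with wave.open(io.BytesIO(wav_bytes)) as r:
        params = r.getparams()
        samples = struct.unpack("<%dh" % r.getnframes(),
                                r.readframes(r.getnframes()))
    # index of the first sample loud enough to count as signal
    start = next((i for i, s in enumerate(samples) if abs(s) >= threshold),
                 len(samples))
    out = io.BytesIO()
    with wave.open(out, "wb") as w:
        w.setparams(params)  # frame count is patched on close
        w.writeframes(struct.pack("<%dh" % (len(samples) - start),
                                  *samples[start:]))
    return out.getvalue()

# usage: 100 silent samples followed by 100 loud ones
buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(1); w.setsampwidth(2); w.setframerate(8000)
    w.writeframes(struct.pack("<200h", *([0] * 100 + [10000] * 100)))
trimmed = trim_leading_silence(buf.getvalue())
```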
Essentia
- Essentia - an open-source C++ library for audio analysis and audio-based music information retrieval released under the Affero GPLv3 license (also available under proprietary license upon request). It contains an extensive collection of reusable algorithms which implement audio input/output functionality, standard digital signal processing blocks, statistical characterization of data, and a large set of spectral, temporal, tonal and high-level music descriptors.
- https://github.com/MTG/gaia - C++ library to apply similarity measures and classifications on the results of audio analysis, including Python bindings. Together with Essentia it can be used to compute high-level descriptions of music.
openSMILE
- openSMILE - a feature extraction tool that enables you to extract large audio feature spaces in real time. It combines features from Music Information Retrieval and Speech Processing. SMILE is an acronym for Speech & Music Interpretation by Large-space Extraction. It is written in C++ and is available both as a standalone command-line executable and as a dynamic library. The main features of openSMILE are its capability of on-line incremental processing and its modularity. Feature extractor components can be freely interconnected to create new and custom features, all via a simple configuration file. New components can be added to openSMILE via an easy binary plugin interface and a comprehensive API.
Madmom
- https://github.com/CPJKU/madmom - an audio signal processing library written in Python with a strong focus on music information retrieval (MIR) tasks. The library is internally used by the Department of Computational Perception, Johannes Kepler University, Linz, Austria (http://www.cp.jku.at) and the Austrian Research Institute for Artificial Intelligence (OFAI), Vienna, Austria (http://www.ofai.at). Possible acronyms are: Madmom Analyzes Digitized Music Of Musicians, Mostly Audio / Dominantly Music Oriented Modules
jMIR
- jMIR - an open-source software suite implemented in Java for use in music information retrieval (MIR) research. It can be used to study music in the form of audio recordings, symbolic encodings and lyrical transcriptions, and can also mine cultural information from the Internet. It also includes tools for managing and profiling large music collections and for checking audio for production errors. jMIR includes software for extracting features, applying machine learning algorithms, applying heuristic error checkers, mining metadata and analyzing metadata.
KeyFinder
- KeyFinder - an open source key detection tool, for DJs interested in harmonic and tonal mixing. It's intended to be very focused: no library management, no track suggestions, no media player. Just a fast, efficient workflow tool. It supports a huge range of codecs thanks to LibAV, and writes to metadata tags using TagLib.
polyscribe
- https://github.com/joelrobichaud/polyscribe - Convert polyphonic multi-track audio to sheet music.
kbd-audio
- https://github.com/ggerganov/kbd-audio - collection of command-line and GUI tools for capturing and analyzing audio data.
Surfboard
- Surfboard: Audio Feature Extraction for Modern Machine Learning (arXiv:2005.08848) - We introduce Surfboard, an open-source Python library for extracting audio features with application to the medical domain. Surfboard is written with the aim of addressing pain points of existing libraries and facilitating joint use with modern machine learning frameworks. The package can be accessed both programmatically in Python and via its command line interface, allowing it to be easily integrated within machine learning workflows. It builds on state-of-the-art audio analysis packages and offers multiprocessing support for processing large workloads. We review similar frameworks and describe Surfboard's architecture, including the clinical motivation for its features. Using the mPower dataset, we illustrate Surfboard's application to a Parkinson's disease classification task, highlighting common pitfalls in existing research. The source code is opened up to the research community to facilitate future audio research in the clinical domain. [34]
Editing
Audacity
- Audacity is a free, easy-to-use, multi-track audio editor and recorder for Windows, Mac OS X, GNU/Linux and other operating systems. The interface is translated into many languages. You can use Audacity to record live audio, record computer playback on any Windows Vista or later machine, convert tapes and records into digital recordings or CDs, edit WAV, AIFF, FLAC, MP2, MP3 or Ogg Vorbis sound files, cut, copy, splice or mix sounds together, change the speed or pitch of a recording, etc.
mhWaveEdit
- mhWaveEdit is a graphical program for editing, playing and recording sound files. It is lightweight, portable, user-friendly and handles large files very well. The program itself has only simple editing features such as cut'n'paste and volume adjustment but it can also use Ladspa effect plugins and the effects provided by the SoX application. It can also support additional file formats besides wav through libsndfile and mp3/ogg import and export through lame and oggenc/oggdec.
Sweep
- Sweep is an audio editor and live playback tool for GNU/Linux, BSD and compatible systems. It supports many music and voice formats including WAV, AIFF, Ogg Vorbis, Speex and MP3, with multichannel editing and LADSPA effects plugins.
ReZound
- ReZound aims to be a stable, open source, and graphical audio file editor primarily for but not limited to the Linux operating system.
ocenaudio
- ocenaudio is a cross-platform, easy to use, fast and functional audio editor. It is the ideal software for people who need to edit and analyze audio files without complications. ocenaudio also has powerful features that will please more advanced users.
WaveSurfer
- WaveSurfer is an open source tool for sound visualization and manipulation. Typical applications are speech/sound analysis and sound annotation/transcription. WaveSurfer may be extended by plug-ins as well as embedded in other applications.
Jokosher
soniK
- soniK is an open source digital audio editor for Linux, using the KDE platform. soniK allows you to record, edit and process sounds on your computer.
EKO
- EKO - a simple sound editor.
wavbreaker
- wavbreaker is a GTK wave file splitter for Linux and Unix-like operating systems licensed under the terms of the GNU General Public License. This application's purpose in life is to take a wave file and break it up into multiple wave files. It makes a clean break at the correct position to burn the files to an audio cd without any dead air between the tracks. It will only read wave files, so use an appropriate tool to convert ogg, mp3, etc. files and then break them up.
AudioAlign
- https://github.com/protyposis/AudioAlign - a tool written for research purposes to automatically synchronize audio and video recordings that have either been recorded in parallel at the same event or contain the same aural information. AudioAlign is basically a GUI for the Aurio library with a little bit of glue code in between.
LAoE
- LAoE means Layer-based Audio Editor, and it is a feature-rich graphical audio-sample editor, based on multi-layers, floating-point samples, volume-masks, variable selection-intensity, and many plugins suitable to manipulate sound, such as filtering, retouching, resampling, graphical spectrogram editing by brushes and rectangles, sample-curve editing by freehand-pen and spline and other interpolation curves, effects like reverb, echo, compress, expand, pitch-shift, time-stretch, and much more... And it is free of charge, under the GPL license!
Snd
- Snd is a sound editor modelled loosely after Emacs. It can be customized and extended using either s7 (included in the Snd sources), Ruby, or Forth.
- San Dysth is a standalone realtime soft-synth written in Snd. This softsynth has controls to generate various kinds of sounds in between white noise and pure tones. It also provides controllers to disturb the generated sound by using a "period counter" to extend the variety of the generated output. Common usage for the softsynth is organ-like sounds, organic-like sounds, alien-like sounds, water-like sounds, and various kinds of noise (noise artists could find this softsynth most useful).
GNUsound
- GNUsound is a multitrack sound editor for GNOME 1 and 2. The current version is 0.7.5, which was released 6 July 2008.
Marlin
- Marlin - A GNOME Sample Editor. last updated 03-08-2004
Gnoise
- GNoise - a gtk+ or gnome (you can ./configure it either way) wave file editor for Linux. Prime considerations were for it to be speedy and able to handle big files. So far it can: load and display files, generate a display cache, play the file, cut, copy, paste, (unlimited) undo, mute, fade in/out, reverse, normalize, and more. 2003
Source separation
ISSE
- ISSE - An Interactive Source Separation Editor
Spleeter
- https://github.com/deezer/spleeter - Deezer source separation library including pretrained models. [35]
Deep-Audio-Prior
- https://github.com/adobe/Deep-Audio-Prior - Audio Source Separation Without Any Training Data.
Processing tools
python-mix
- https://github.com/j3ffhubb/python-mix - CLI audio file mixer using python-wavefile and numpy. Warning: Alpha-grade, under-tested code, use at your own risk
SoX
- SoX - Sound eXchange, the Swiss Army knife of sound processing programs. SoX is a cross-platform (Windows, Linux, MacOS X, etc.) command line utility that can convert various formats of computer audio files in to other formats. It can also apply various effects to these sound files, and, as an added bonus, SoX can play and record audio files on most platforms.
play --show-progress -c 2 --null synth brownnoise reverb bass 6 treble -3 echos 0.8 0.9 1000 0.3 1800 0.25 [36]
play -n -c1 synth whitenoise band -n 100 20 band -n 50 20 gain +25 fade h 1 864000 1
play -c2 -n synth pinknoise band -n 280 80 band -n 60 25 gain +20 treble +40 500 bass -3 20 flanger 4 2 95 50 .3 sine 50 lin [37]
- sonfilade - allows the user to rapidly strip junk audio from the beginning and end of audio files. It can be used, for example, to clean up files recorded with Streamripper (e.g., streamripper --xs_padding=5000:5000). Sonfilade is designed to be as effortless and fun as possible to use. An entire edit session can be carried out using only three keys and sound feedback as the entire user interface. (There is also text output, but it is non-essential.) Uses sox.
- https://github.com/tartina/sox-plugins - Some additional plugins for SoX
pyDub
Uos
- http://wiki.lazarus.freepascal.org/uos - a multi-platform package of audio handling routines that unifies the best open-source audio libraries. Plays .mp3, .ogg, .wav, .flac, .m4a, .opus and cdrom audio files at 16, 32 or float 32 bit resolution; records all types of input into file, in 16 or 32 bit resolution, mono or stereo; adds DSP effects and filters, however many you want, and records the result; plays multiple inputs and outputs simultaneously; streams mp3 and opus audio over the internet; produces sound with the built-in synthesizer. Uos can use the SoundTouch, PortAudio, SndFile, Mpg123, Faad, OpusFile and Mp4ff audio libraries. Included in the package: examples and binary libraries for Linux 32/64, arm-Raspberry Pi, Windows 32/64, Mac OSX 32 and FreeBSD 32/64.
Composers Desktop Project
- CDP Home Page - The CDP software, first released in 1987, now contains hundreds of ways to transform digitally sampled sound. Its software belongs to the musique concrète category, as realised on computer. Processing is off-line. Although the processing is often faster than real-time, several processes could not run in real-time for technical reasons. The CDP software can be run via one of the two available GUIs or via command line / batch file. Based in the UK, CDP is an international network of composers and programmers guided by a vision of amazing sonic possibilities and how they can be woven into the fabric of music. We have been working together since 1986.
- CDP-Soundfiles - The CDP system is one of the most comprehensive and innovative sound design suites available. Written largely by English electro-acoustic composer Trevor Wishart and reflecting his musical aesthetics in many ways, its processes cover almost every aspect of sound manipulation you've ever heard of, plus many that will be unfamiliar, and usually from an original or compositional viewpoint. CDP has over 430 processes covering EDIT-MIX functions, SOUNDFILE processes (time domain), SPECTRAL and PITCH processes, a small but significant SYNTH group, DATA GENERATING functions and a large INFO section. In addition there are over 100 DATA massaging functions and an extensive HELP.
- Soundshaper - a free control interface for the CDP sound transformation software, with an emphasis on speed and ease of use. CDP is a suite of over 430 command-line processes for creating, altering and manipulating sounds to make music. Soundshaper (PC only) fully supports the latest CDP Release 7. Soundshaper provides quick and easy access to CDP processes and parameters and assembles scripts which run CDP in the background. Soundshaper saves CDP output to temporary files, which you can save at any stage. Parameter values can be adjusted at any point, even after further processes have been run. Soundshaper's auto-conversion makes it possible to move seamlessly from one process to another while the program handles the different CDP file types. When run, processes are displayed in a table called the Patch Grid. Soundshaper patches are an easy way to store and recall whole sequences of CDP processes in a fully editable form. All values are retained and the patch can be re-run with any source. Soundshaper patches support up to 16 separate process chains, which can come from different sources. Soundshaper also supports bulk processing, presets and multiple parameter sets.
- The Sound Loom - an integrated graphic interface to the CDP sound-processing software, a comprehensive collection of over 500 instruments for sound transformation developed as practical working tools by composers over many years, available from the Composers' Desktop Project. The Sound Loom + CDP software is a powerful toolbox for composers, not a performance instrument. Using it, you can specify the parameters of any process to any degree of time-varying detail, detail you may have composed or have extracted from some other complex sound-event. You cannot, however, alter these parameters while the process is running. In compensation, the system offers almost any conceivable process for transforming sounds and sound-data (the data might be loudness envelopes, pitch-tracking information, spectral analysis data, filter specifications etc.) all running in a unified, intelligent environment.
- YouTube: Trevor Wishart - Imago
- YouTube: Trevor Wishart - Tongues of Fire
- Xenakios's Blog: CDP frontend Reaper extension plugin
Mammut
- Mammut does an FFT of the whole sound (no windows). Various operations can subsequently be done in the frequency domain, such as nonlinear stretching of the spectrum, spectrum shifting, etc. How is the program useful? Doing a giant FFT of the entire sound, as opposed to splitting the sound up into short windows, is unusual. Such a method implies that time-related parameters are included in the spectral coefficients in a non-intuitive manner, and changes in the frequency domain may radically change developments in the time domain. Mammut is a fairly unpredictable program, and the user will need to get used to letting go of controlling the time axis. The sounding results are often surprising and exciting.
Fscape
FreqTweak
- FreqTweak is a tool for FFT-based realtime audio spectral manipulation and display. It provides several algorithms for processing audio data in the frequency domain and a highly interactive GUI to manipulate the associated filters for each. It also provides high-resolution spectral displays in the form of scrolling-raster spectrograms and energy vs frequency plots displaying both pre- and post-processed spectra.
- https://github.com/ycollet/freqtweak - Mirror of the original freqtweak repository
- https://github.com/nettings/freqtweak - additional compile fixes on top of ycollet's fork of Jesse Chappell's freqtweak
TAPESTREA
- TAPESTREA (Techniques And Paradigms for Expressive Synthesis, Transformation, and Rendering of Environmental Audio), or taps, is a unified framework for interactively analyzing, transforming and synthesizing complex sounds. Given one or more recordings, it provides well-defined means to: identify points of interest in the sound and extract them into reusable templates; transform sound components independently of the background and/or other events; continually resynthesize the background texture in a perceptually convincing manner; controllably place event templates over backgrounds, using a novel graphical user interface and/or scripts written in the ChucK audio programming language
- YouTube: Tapestrea demo
Build fails on Linux. Fixing two or three indirect includes gets further, but it then fails building its included [old] ChucK.
SPEAR
- SPEAR (Sinusoidal Partial Editing Analysis and Resynthesis) is an application for audio analysis, editing and synthesis. The analysis procedure (which is based on the traditional McAulay-Quatieri technique) attempts to represent a sound with many individual sinusoidal tracks (partials), each corresponding to a single sinusoidal wave with time varying frequency and amplitude. Something which closely resembles the original input sound (a resynthesis) can be generated by computing and adding all of the individual time varying sinusoidal waves together. In almost all cases the resynthesis will not be exactly identical to the original sound (although it is possible to get very close).
Aside from offering a very detailed analysis of the time varying frequency content of a sound, a sinusoidal model offers a great deal of flexibility for editing and manipulation. SPEAR supports flexible selection and immediate manipulation of analysis data, cut and paste, and unlimited undo/redo. Hundreds of simultaneous partials can be synthesized in real-time and documents may contain thousands of individual partials dispersed in time. SPEAR also supports a variety of standard file formats for the import and export of analysis data.
Windows/Mac only :(
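The sinusoidal model SPEAR is built on can be shown in miniature: each partial is a sinusoid with its own frequency/amplitude trajectory, and resynthesis just sums them. This sketch (not SPEAR's code; each partial is reduced to one linear frequency glide at constant amplitude) is the additive-resynthesis step only, with no analysis:

```python
# Additive resynthesis of a few breakpoint-style partials.
import math

def resynth(partials, dur, sr=8000):
    """partials: list of (freq_start_hz, freq_end_hz, amplitude)."""
    n = int(dur * sr)
    out = [0.0] * n
    for f0, f1, amp in partials:
        phase = 0.0
        for i in range(n):
            f = f0 + (f1 - f0) * i / n     # interpolated instantaneous frequency
            phase += 2 * math.pi * f / sr  # integrate so phase stays continuous
            out[i] += amp * math.sin(phase)
    return out

# a steady 220 Hz partial plus a 440 -> 460 Hz glide
sig = resynth([(220, 220, 0.5), (440, 460, 0.25)], 0.5)
```

The analysis side (McAulay-Quatieri peak tracking) is the hard part; once you have the tracks, editing means moving these breakpoints around before summing.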
Ceres3
- Ceres3 is a cut-and-paste spectral editor with musically enhanced graphic control over spectral activity of a sound file. It is a free educational program with no other aims, and it owes most of its framework to Oyvind Hammer's Ceres and Jonathan Lee's Ceres2. It has an X-window Motif/OpenMotif based GUI, organized around four principal menus with simple keyboard shortcuts.
- https://github.com/jeremysalwen/Ceres4 - build fails with linked use of deprecated OSS code
ATS
- ATS is a LISP environment for spectral modeling based on a sinusoidal plus critical-band noise decomposition. The system can be used to analyze recorded sounds, transform their spectrum using a wide variety of algorithms and resynthesize them both out of time and in real time.
- ATS is a software library of functions for spectral Analysis, Transformation, and Synthesis of sound based on a sinusoidal plus critical-band noise model. A sound in ATS is a symbolic object representing a spectral model that can be sculpted using a variety of transformation functions. Spectral data can be accessed through an API, and saved to/loaded from disk. ATS is written in LISP, its analysis and synthesis algorithms are implemented using the CLM (Common Lisp Music) synthesis and sound processing language.
Only takes mono .wav files
Cecilia
- Cecilia is an audio signal processing environment aimed at sound designers. Cecilia mangles sound in ways unheard of. Cecilia lets you create your own GUI using a simple syntax. Cecilia comes with many original built-in modules and presets for sound effects and synthesis.
Loris
- Loris is an Open Source sound modeling and processing software package based on the Reassigned Bandwidth-Enhanced Additive Sound Model. Loris supports modified resynthesis and manipulations of the model data, such as time- and frequency-scale modification and sound morphing. The Loris programmers' interface supports the C, C++, and Python programming languages, and SWIG interface files are provided so that the API can be easily extended to a variety of other languages. The package includes a handful of utility programs for basic sound modeling and resynthesis, and standard UNIX/Linux tools that build and install the libraries, headers, and utilities.
SMS Tools
- Spectral Modeling Synthesis Tools - SMS Tools is a set of techniques and software implementations for the analysis, transformation, and synthesis of musical sounds based on various spectral modeling approaches. These techniques can be used for synthesis, processing and coding applications, while some of the intermediate results might also be applied to other music related problems, such as sound source separation, musical acoustics, music perception, or performance analysis. The basic model and implementation were developed by Xavier Serra as part of his PhD thesis published 1989. Since then many extensions have been proposed at MTG-UPF and by other researchers.
FxEngine
- FxEngine - an open C++ framework under the LGPL license. The FxEngine Framework simplifies the plugin architecture for data flow processing. It provides full control over the plugin architecture for applications that require custom solutions.
- FxJackPack - contains two plugins for the FxEngine framework which enable the recording and playback of sound through JACK (Jack Audio Connection Kit).
NASPRO
- NASPRO - acronym for "NASPRO Architecture for Sound PROcessing" is a collection of free and open source sound processing software built around the LV2 plugin standard.
Aglaophone
SpectMorph
- SpectMorph is a free software project which analyzes samples of musical instruments and combines them (morphing). It can be used to construct hybrid sounds, for instance a sound between a trumpet and a flute; or smooth transitions, for instance a sound that starts as a trumpet and then gradually changes to a flute.
- YouTube: SpectMorph Overview
Spectral Toolbox
- The Spectral Toolbox - a suite of analysis-resynthesis programs that locate relevant partials of a sound and allow them to be resynthesized at any specified frequencies. This enables a variety of techniques including spectral mappings (sending all partials of a sound to fixed destinations), spectral morphing (continuously interpolating between the partials of a source sound and a destination) and dynamic tonality (a way of organizing the relationship between a family of tunings and a set of related timbres). A complete application called the TransFormSynth concretely demonstrates the methods using either a one-dimensional controller such as a MIDI keyboard or a two-dimensional control surface (such as a MIDI guitar, a computer keyboard, or the forthcoming Thummer controller). Requires installing either Max Runtime (free from Cycling '74) or Max/MSP (not free) and some Java routines.
Wav2Spectrum
- https://github.com/paulnasca/wav2spectrum - a simple application which takes a small chunk (window) from an input wav and outputs the frequencies one by one (a sweep) into another wav file. It is very useful for hearing the harmonics of a sound one by one. It can be used as a spectrum tool for blind people who are interested in sound analysis.
wave-tools
- https://github.com/ZoeB/wave-tools - Command line tools for wave files
swingify
- Swingify - Upload any audio file and make it swing.
Melodyne
$
- YouTube: Melodyne - Direct Note Access
Time/pitch
Paul's Extreme Sound Stretch
- https://xenakios.wordpress.com/paulxstretch-plugin/
- https://bitbucket.org/xenakios/paulstretchplugin/overview - no Linux build yet
- https://github.com/paulnasca/2xphases - This repository contains two programs: "2xautoconvolution" which uses autoconvolution to process audio and an older program "2xphases" which uses long-term FFT audio processing.
tcStretch
- tcStretch - a Windows VST 2.4 plug-in for time stretching, pitch shifting, and blurring. Time stretch can be up to 1 million times slower. Pitch shift is plus or minus one octave. Blurring blends nearby spectral material to make the output less static. Playback is sensitive to transients in the source material. Playback rate and blur amount are automatically adjusted according to the transient contour of the material being stretched. Playing transients at a faster rate than non-transients tends to make the output sound less obviously stretched. Playing transients more slowly than non-transients emphasizes the stretchiness [good when playing in reverse mode with highly transient material]. Adding blur brings in some subtle (or not so subtle) randomness which helps to keep the output less static.
Rubber Band
- Rubber Band Library is a high quality software library for audio time-stretching and pitch-shifting. It permits you to change the tempo and pitch of an audio stream or recording dynamically and independently of one another. Rubber Band Library is intended for use by developers creating their own application programs rather than directly by end users, although it does also include a simple (free) command-line utility program that you can use for fixed adjustments to the speed and pitch of existing audio files.
Play it Slowly
- Play it Slowly - software to play back audio files at a different speed or pitch. It also allows you to loop over a certain part of a file. It's intended to help you learn or transcribe songs. It can also play videos thanks to gstreamer. Play it Slowly is intended to be used on a GNU/Linux system like Ubuntu.
StretchPlayer
- StretchPlayer is an audio file player that allows you to change the speed of the song without changing the pitch. It will also allow you to transpose the song to another key (while also changing the speed). This is a very powerful tool for musicians who are learning to play a pre-recorded song.
PitchTempoPlayer
- PitchTempoPlayer (PTPlayer) is an audio player for Linux that allows you to change pitch and speed (tempo) of the sound independently of each other. Fine tuning (less than a half tone) is also possible, as well as recording, exporting the modified audio file and managing a playlist.
SBSMS
- SBSMS - Subband Sinusoidal Modeling Synthesis
SoundTouch
- SoundTouch is an open-source audio processing library for changing the Tempo, Pitch and Playback Rates of audio streams or audio files. The library additionally supports estimating stable beats-per-minute rates for audio tracks. The SoundTouch library is intended for application developers writing sound processing tools that require tempo/pitch control functionality, or just for playing around with the sound effects.
The SoundTouch library source kit also includes an example utility, SoundStretch, for processing .wav audio files from the command-line interface.
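The basic trick behind tempo/pitch libraries like SoundTouch and Rubber Band can be shown with a naive overlap-add stretcher: copy short windowed grains from the input, but lay them down at a different rate, so duration changes without resampling (and therefore without a pitch change). All names and sizes below are invented for the demo; real implementations use WSOLA or a phase vocoder to align grains and avoid the artifacts this toy produces.

```python
def time_stretch(samples, factor, grain=400, hop=100):
    """Naive overlap-add: output grains advance by `hop` while the
    read position advances by hop/factor, scaling duration by `factor`."""
    out_len = int(len(samples) * factor)
    out = [0.0] * (out_len + grain)
    norm = [0.0] * (out_len + grain)
    pos = 0
    while pos < out_len:
        src = int(pos / factor)            # corresponding input position
        if src + grain > len(samples):
            break
        for n in range(grain):
            w = 1.0 - abs(2.0 * n / grain - 1.0)   # triangular window
            out[pos + n] += w * samples[src + n]
            norm[pos + n] += w
        pos += hop
    # normalise by the summed window weight where grains overlapped
    return [o / s if s else 0.0 for o, s in zip(out, norm)][:out_len]

stretched = time_stretch([1.0] * 2000, 1.5)   # 2000 samples -> 3000
```

Running the same grain-copy loop but resampling each grain gives pitch shift instead; doing both at once is how these libraries change tempo and pitch independently.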
fibonaccistretch
- Fibonacci Stretch
- https://github.com/usdivad/fibonaccistretch - Stretch the rhythm of an audio track along the Fibonacci sequence.
sonic
- https://github.com/waywardgeek/sonic - a simple algorithm for speeding up or slowing down speech. However, it's optimized for speed-ups of over 2X, unlike previous algorithms for changing speech rate. The Sonic library is a very simple ANSI C library that is designed to be easily integrated into streaming voice applications, like TTS back ends.
Groovinator
- https://github.com/usdivad/Groovinator - an audio plugin, available in VST and AU format, that performs real-time playhead-aware rhythm modification. Currently supported modification modes are: Classic Stretch, Sample 'n' Shift, and Step Repeater. Developed using the JUCE framework in addition to Olli Parviainen's SoundTouch library and Sergio Castro's implementation of Bjorklund's algorithm.
Melodrumatic
- https://github.com/usdivad/Melodrumatic - an audio plugin that lets you "pitch-shift" via delay (i.e. the Doppler effect) to turn unpitched audio into melodies. Controllable via MIDI or mouse :)
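The delay-line pitch trick Melodrumatic describes comes down to a read head moving at a different rate than the write head: a steadily shrinking delay raises pitch, a growing one lowers it. A stripped-down offline version follows (an assumed constant read rate, so it also shortens the audio; the plugin modulates the delay cyclically to preserve duration):

```python
# Constant-rate Doppler shift: read the buffer at `ratio` x speed
# with linear interpolation between samples.
import math

def doppler_pitch(samples, ratio):
    out, pos = [], 0.0
    while pos < len(samples) - 1:
        i = int(pos)
        frac = pos - i
        out.append(samples[i] * (1.0 - frac) + samples[i + 1] * frac)
        pos += ratio
    return out

# 1 s of 440 Hz at 8 kHz, shifted up a fifth to ~660 Hz
tone = [math.sin(2 * math.pi * 440 * i / 8000) for i in range(8000)]
shifted = doppler_pitch(tone, 1.5)
```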
Denoise
Gnome Wave Cleaner
- Gnome Wave Cleaner - mainly to remove clicks from recorded vinyl
Postfish
audio-declipper
- https://github.com/kripton/audio-declipper - Proof-of-concept (or more) to declip sample-based audio files
Neural network
Audio and MIDI looping
See also Sampling#Audio looping and MIDI#MIDI looping
Giada
- Giada - a free, minimal, hardcore audio tool for DJs, live performers and electronic musicians. How does it work? Just pick up your channel, fill it with samples or MIDI events and start the show by using this tiny piece of software as a loop machine, drum machine, sequencer, live sampler or yet as a plugin/effect host. Giada aims to be a compact and portable virtual device for Linux, Mac OS X and Windows for production use and live sets.
- YouTube: Giada LoopMachine
What can you control with MIDI:
- Global elements — sequencer, metronome, main volumes and so on, stored inside the configuration file and you set them once;
- Per-channel elements — channel on/off, mute, volume, solo and so on, stored inside the patch and you set them whenever you create a new song.
No general MIDI mapping system; each binding is 'channel'-specific ('channel' being the Giada term for a sample or sequence), which doesn't seem like it would scale well.
Soundscape
Boodler
- Boodler is an open-source soundscape tool -- continuous, infinitely varying streams of sound. Boodler is designed to run in the background on a computer, maintaining whatever sound environment you desire. Boodler is extensible, customizable, and modular. Each soundscape is a small piece of Python code -- typically less than a page. A soundscape can incorporate other soundscapes; it can combine other soundscapes, switch between them, fade them in and out. This package comes with many example soundscapes. You can use these, modify them, combine them to arbitrary levels of complexity, or write your own.
Klangwunder3000
- https://github.com/bk138/klangwunder3000/ - a cross-platform soundscape generator. It loads a set of sound files and associated control data and generates a constantly changing aural ambient.
Random Parallel Player
- https://github.com/hilbrichtsoftware/random-parallel-player - Takes a bunch of audio files as tracks and plays them back randomly creating new music each playthrough. The core rule of RPP: No human interaction once the playback has started. RPP is based on an idea of Louigi Verona. The included audio samples in example.rpp were created by him. You can read about the original project here
Atmosfear
- https://github.com/teragonaudio/Atmosfear - a VSTi plugin which generates random atmospheric soundscapes with samples scraped from FreeSound. We had originally imagined that the plugin could generate soundscapes resembling parks, public places, nature, etc. However, the resulting sounds that it makes are generally quite surreal and creepy, hence the name. :)
Foco
- https://github.com/akashnimare/foco - a cross-platform desktop app which runs in the menubar. Foco boosts your productivity by creating a perfect productive environment. It has the best sounds for getting work done.
jungle
- jungle - an audio system. It allows you to create an ambiance using random audio samples. It works with systems that have one or more audio devices (with one or more channels) and can also use multiple systems (e.g. a couple of Raspberry Pis) in a cluster. This software requires a Linux system. It uses the ALSA subsystem. I run it with a server PC and 3 Raspberry Pis.
Web
- myNoise - background noises and relaxing soundscape generator, web/app
Sonification
See also Synthesis#Graphics synthesis
- https://en.wikipedia.org/wiki/Sonification - the use of non-speech audio to convey information or perceptualize data. Auditory perception has advantages in temporal, spatial, amplitude, and frequency resolution that open possibilities as an alternative or complement to visualization techniques. For example, the rate of clicking of a Geiger counter conveys the level of radiation in the immediate vicinity of the device.
- PDF: The Sounding Object - Edited by Davide Rocchesso and Federico Fontana
- The Sonification Handbook - Edited by Thomas Hermann, Andy Hunt, John G. Neuhoff
- CodeSounding - an open-source sonification framework which makes it possible to hear how any existing Java program "sounds", by assigning instruments and pitches to code statements (if, for, etc.) and playing them as they are executed at runtime. In this way the flow of execution is played as a flow of music, and its rhythm changes depending on user interaction.
- https://github.com/markandrus/Sonify - JACK plugin that encodes images into audio and vice-versa, in realtime.
- HyperMammut - transform sounds to images and vice-versa using single BIG Fourier Transforms (or DCT/DST,etc.).
- https://github.com/chronopoulos/heliosonic - Tools for the sonification of helioseismic data.
- [Listen to your web pages](https://gist.github.com/tomhicks/6cb5e827723c4eaef638bf9f7686d2d8) - tomhicks/plink-plonk.js
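The Geiger-counter example above is a parameter mapping: a data value drives an audible parameter. A minimal sketch using only the Python standard library, mapping each data point to the pitch of a short sine tone (the function name and parameter ranges are invented for illustration):

```python
import math, struct, wave

def sonify(values, path, sr=8000, note_len=0.25, fmin=220.0, fmax=880.0):
    """Map each data point to a short sine tone: higher values -> higher
    pitch. A minimal parameter-mapping sonification, analogous to a
    Geiger counter's click rate tracking the radiation level."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0
    frames = bytearray()
    for v in values:
        f = fmin + (v - lo) / span * (fmax - fmin)
        for n in range(int(sr * note_len)):
            s = math.sin(2 * math.pi * f * n / sr)
            env = 1.0 - n / (sr * note_len)       # simple decay envelope
            frames += struct.pack('<h', int(30000 * s * env))
    with wave.open(path, 'wb') as w:
        w.setnchannels(1)
        w.setsampwidth(2)          # 16-bit samples
        w.setframerate(sr)
        w.writeframes(bytes(frames))

sonify([1, 4, 2, 8, 5, 7], 'data.wav')
```

Real sonification systems refine the mapping (scales, timbre, multiple data dimensions), but the principle is the same: data in, perceivable audio parameter out.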
Retargeting
- Scalable Music: Automatic Music Retargeting and Synthesis - S. Wenner, J.C. Bazin, A. Sorkine-Hornung, C. Kim, M. Gross. In this paper we propose a method for dynamic rescaling of music, inspired by recent works on image retargeting, video reshuffling and character animation in the computer graphics community. Given the desired target length of a piece of music and optional additional constraints such as position and importance of certain parts, we build on concepts from seam carving, video textures and motion graphs and extend them to allow for a global optimization of jumps in an audio signal. Based on an automatic feature extraction and spectral clustering for segmentation, we employ length-constrained least-costly path search via dynamic programming to synthesize a novel piece of music that best fulfills all desired constraints, with imperceptible transitions between reshuffled parts. We show various applications of music retargeting such as part removal, decreasing or increasing music duration, and in particular consistent joint video and audio editing.
- https://github.com/ucbvislab/radiotool - a python library that aims to make it easy to create audio by piecing together bits of other audio files. This library was originally written to enable my research in audio editing user interfaces, but perhaps someone else might find it useful.
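The length-constrained least-costly path search that the retargeting paper describes can be illustrated with a toy dynamic program: given a matrix of jump costs between audio segments, find the cheapest sequence of a fixed length between two segments. This is a drastic simplification — the real method also performs feature extraction and spectral clustering to build the cost matrix:

```python
def retarget(cost, start, end, length):
    """Find a minimum-cost sequence of `length` segment indices from
    `start` to `end`, where cost[i][j] is the cost of jumping from
    segment i to segment j. Toy length-constrained least-cost path
    search via dynamic programming."""
    n = len(cost)
    INF = float('inf')
    # best[k][j]: cheapest cost of a path of k+1 segments ending at j
    best = [[INF] * n for _ in range(length)]
    back = [[None] * n for _ in range(length)]
    best[0][start] = 0.0
    for k in range(1, length):
        for j in range(n):
            for i in range(n):
                c = best[k - 1][i] + cost[i][j]
                if c < best[k][j]:
                    best[k][j], back[k][j] = c, i
    # walk the backpointers to reconstruct the segment sequence
    path, j = [end], end
    for k in range(length - 1, 0, -1):
        j = back[k][j]
        path.append(j)
    return path[::-1], best[length - 1][end]

# 4 segments; cheap transitions along the diagonal, expensive jumps
C = [[0, 1, 9, 9],
     [9, 0, 1, 9],
     [1, 9, 0, 1],
     [9, 9, 1, 0]]
path, total = retarget(C, start=0, end=3, length=5)
```

Stretching or shrinking a piece then amounts to asking for a longer or shorter `length` while keeping the transition costs (and thus the audible smoothness of the jumps) low.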
Web
See also Web Audio, Drumming#Web
Background
- http://www.rainymood.com/
- http://mynoise.net/
- http://www.coffitivity.com/
- http://rainycafe.com/
- http://simplynoise.com/
- http://www.noisli.com/
- http://naturesoundsfor.me/
- http://timeforzen.com/
- http://raining.fm/
- http://asoftmurmur.com/ [40]
Tuning
Dart-Mic
- Dart-Mic - a JavaScript library which listens to microphone input and performs pitch/note detection, volume detection, recording, and general-purpose data processing. It makes use of the Web Audio API (at the time of writing only supported by Chrome) and DSP.js.
AudioNotch
- AudioNotch - Tinnitus Treatment Sound Therapy - Tuner and Tone Generator
x42-tuner
- x42-tuner - aka Tuna.LV2, a musical instrument tuner with strobe characteristics, in LV2 plugin format.
tunescope
- https://github.com/dack/tunescope - an oscilloscope-style guitar tuner. It uses JACK for audio input and OpenGL for rendering. The signal is displayed in both normal and XY mode, using an automatically selected note as the reference.
FMIT
- FMIT - Free Music Instrument Tuner, is a graphical utility for tuning your musical instruments, with error and volume history and advanced features.
LINGOT
- LINGOT - a musical instrument tuner. It's accurate, easy to use, and highly configurable. Originally conceived to tune electric guitars, it can now be used to tune other instruments. It looks like an analogue tuner, with a gauge indicating the relative shift to a certain note, found automatically as the closest note to the estimated frequency.
jackstrobe
- https://github.com/jessecrossen/jackstrobe - A simple strobe tuner using JACK and Qt 5.
Guitar Tuning Database
alt-tuner
- alt-tuner - a commercial DAW microtonal tuning plug-in that retunes almost every MIDI keyboard or softsynth. It runs on PCs, Macs and Linux/Wine machines.
MTuner
- https://www.meldaproduction.com/MTuner - a simple Windows VST audio frequency analyzer designed mostly for tuning guitars and other instruments. It detects frequency, note and deviation from correct pitch in cents, resolving frequencies in the range 50Hz to 2kHz, which is enough for most instruments and vocals.
tuner
- https://github.com/logsol/tuner - a simple standalone app for macOS based on JUCE that detects the frequency of an instrument via autocorrelation and shows the corresponding note.
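Most of the tuners above boil down to the same two steps: estimate the fundamental frequency (the logsol/tuner project uses autocorrelation for this), then report the deviation from the nearest equal-tempered note in cents. A rough pure-Python sketch, with windowing, peak interpolation, and noise handling omitted:

```python
import math

def detect_pitch(samples, sr):
    """Estimate the fundamental frequency by autocorrelation: find the
    lag with the strongest self-similarity and convert it to Hz.
    A toy version of the approach used by the tuners above."""
    n = len(samples)
    best_lag, best_corr = 1, 0.0
    for lag in range(sr // 1000, sr // 50):        # roughly 50 Hz .. 1 kHz
        corr = sum(samples[i] * samples[i + lag] for i in range(n - lag))
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return sr / best_lag

def cents_off(freq, ref=440.0):
    """Deviation from the nearest equal-tempered note, in cents
    (100 cents = one semitone)."""
    semitones = 12 * math.log2(freq / ref)
    return 100 * (semitones - round(semitones))

# A slightly sharp A4 (442 Hz) should read a few cents above A440
sr = 8000
tone = [math.sin(2 * math.pi * 442.0 * i / sr) for i in range(2048)]
freq = detect_pitch(tone, sr)
```

Note the resolution limit: with integer lags at an 8 kHz sample rate, neighboring lags around A4 differ by several Hz, which is why real tuners interpolate between lags or use much longer windows.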
Machine learning
- https://github.com/bmcfee/pumpp - practically universal music pre-processor ("pumpp up the jams"). The goal of this package is to make it easy to convert pairs of (audio, jams) into data that can be easily consumed by statistical algorithms.
Video
See also Video
- Xjadeo is a software video player that displays a video clip in sync with an external time source (MTC, LTC, JACK transport). Xjadeo is useful in soundtrack composition, video monitoring, or any task that requires synchronizing movie frames with external events.
Gaze
Performance
- The Box of No Return - a Linux-based musical synthesizer platform, suitable for live musicianship, designed to handle multiple patches with enormous demands, and switch between them with zero delay and zero cutout. If you sit in your home studio and use single SoundFonts with a laptop and simple GUI, you don't need this. If you play live, and pile on the tone generators and filters in patch development in order to feel and deliver the unyielding power of the musical harmonic roar, a full implementation of the BNR may suit you well. There are obviously middle grounds too ☺, and there are articles here to help in general.
- https://github.com/EliasKesh/LiveMusicApp - a GTK application to configure and control music software for live performances.
Games
- FRACT - a musical exploration game. You arrive in a forgotten place and explore the unfamiliar landscape to discover the secrets of an abandoned world that was once built on sound. As you start to make sense of this strange new environment, you work to rebuild its machinery by solving puzzles and bring the world back to life by shaping sound and creating music in the game.
- https://sunebear.github.io/Piano-Flow - a music game that follows the flow of piano pieces.
Apps
- Augment - "an amazing way to listen to the world". It harmonizes your listening experience, helping you be less distracted and stressed. The Augment app filters your acoustic environment, takes out harsh sounds, and turns stressful noise into harmonic sound environments. Free on the App Store.
DOS
macOS
Other
- https://github.com/ggerganov/kbd-audio - Tools for capturing and analysing keyboard input paired with microphone capture [41]
- https://en.wikipedia.org/wiki/ReWire_(software_protocol) - a software protocol, jointly developed by Propellerhead and Steinberg, allowing remote control and data transfer among digital audio editing and related software. Originally appearing in the ReBirth software synthesizer in 1998, the protocol has since evolved into an industry standard. Currently used in macOS and Microsoft Windows 32-bit or 64-bit audio applications, ReWire enables the simultaneous transfer of up to 256 audio tracks of arbitrary resolution and 4080 channels of MIDI data. This allows, for example, the output from synthesizer software to be fed directly into a linear editor without the use of intermediate files or analog transfers. There are also provisions to remotely trigger actions, such as starting and stopping recording. The protocol is licensed free of charge to companies only, but comes with a "non-disclosure of source code" license that is incompatible with most free-software licenses. The ReWire system consists of "Hosts", "Panels", and "Devices". Hosts are the host applications which typically do the sequencing at one end and the final mixdown at the other end. A Device is a dynamic link library that only generates sound; it has no user interface. A Panel is a graphical interface for setting the parameters of one Device. A typical setup would be to use Ableton Live in "Host" mode, and use Propellerhead Reason as a synthesizer. In this case Reason would provide Device/Panel pairs to Ableton, which could then send midi commands, sync timing and mix Reason's output into its own effects chains. Many applications support either mode. In fact, an application could (at the discretion of a developer) act as both a Host and a Panel at the same time.