From Things and Stuff Wiki


See also Electrical, Creative coding, Synthesis, Audio, Video, Computing, Media, etc.

to fix re other pages

  • Frequency response - the quantitative measure of the output spectrum of a system or device in response to a stimulus, used to characterize the dynamics of the system. It is a measure of the magnitude and phase of the output as a function of frequency, in comparison to the input. In simplest terms, if a sine wave is injected into a system at a given frequency, a linear system will respond at that same frequency with a certain magnitude and a certain phase angle relative to the input. Also for a linear system, doubling the amplitude of the input will double the amplitude of the output. In addition, if the system is time-invariant (so LTI), then the frequency response also will not vary with time. Thus for LTI systems, the frequency response can be seen as applying the system's transfer function to a purely imaginary number argument representing the frequency of the sinusoidal excitation.

Two applications of frequency response analysis are related but have different objectives. For an audio system, the objective may be to reproduce the input signal with no distortion. That would require a uniform (flat) magnitude of response up to the bandwidth limitation of the system, with the signal delayed by precisely the same amount of time at all frequencies. That amount of time could be seconds, or weeks or months in the case of recorded media.

In contrast, for a feedback apparatus used to control a dynamic system, the objective is to give the closed-loop system improved response as compared to the uncompensated system. The feedback generally needs to respond to system dynamics within a very small number of cycles of oscillation (usually less than one full cycle), and with a definite phase angle relative to the commanded control input. For feedback of sufficient amplification, getting the phase angle wrong can lead to instability for an open-loop stable system, or failure to stabilize a system that is open-loop unstable.

Digital filters may be used for both audio systems and feedback control systems, but since the objectives are different, the phase characteristics of the filters will generally be significantly different for the two applications.
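The sine-in, sine-out description above can be checked empirically. Here is a minimal Python sketch (the function names and the 2-tap averaging filter are illustrative, not from any particular library) that measures a filter's gain and phase at one frequency by correlating its output against quadrature references:

```python
import math

def measure_response(filt, freq, fs, n=4096):
    """Empirically measure a filter's gain and phase at one frequency
    by driving it with a sine and projecting the output onto sin/cos
    references at the same frequency (valid for LTI systems)."""
    w = 2 * math.pi * freq / fs
    x = [math.sin(w * k) for k in range(n)]
    y = filt(x)
    # Correlate against quadrature references to extract amplitude/phase.
    s = 2 / n * sum(yi * math.sin(w * i) for i, yi in enumerate(y))
    c = 2 / n * sum(yi * math.cos(w * i) for i, yi in enumerate(y))
    gain = math.hypot(s, c)
    phase = math.atan2(c, s)       # radians relative to the input sine
    return gain, phase

def two_tap_average(x):
    """y[n] = (x[n] + x[n-1]) / 2 -- one of the simplest lowpass FIRs."""
    return [(xi + (x[i - 1] if i else 0.0)) / 2 for i, xi in enumerate(x)]

gain, phase = measure_response(two_tap_average, 12000, 48000)
# Analytic response of the 2-tap average: |H| = cos(pi*f/fs), phase = -pi*f/fs
```

At f = fs/4 the measured gain should sit near cos(pi/4) and the phase near -pi/4, matching the analytic transfer function evaluated on the imaginary axis.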

  • Nyquist–Shannon sampling theorem - a fundamental bridge between continuous-time signals (often called "analog signals") and discrete-time signals (often called "digital signals"). It establishes a sufficient condition for a sample rate that permits a discrete sequence of samples to capture all the information from a continuous-time signal of finite bandwidth.
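The sufficient condition is constructive: a signal sampled above twice its bandwidth can be rebuilt at any instant by Whittaker–Shannon (sinc) interpolation. A rough Python sketch, using a truncated sinc sum (exact reconstruction would need the infinite sum, so a small truncation error remains):

```python
import math

def sinc(x):
    """Normalized sinc, the interpolation kernel of the sampling theorem."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

fs = 100.0                     # sample rate, well above 2*f
f = 13.0                       # bandlimited "signal": a single sine
N = 2000
samples = [math.sin(2 * math.pi * f * n / fs) for n in range(N)]

def reconstruct(t):
    """Whittaker-Shannon interpolation from the stored samples."""
    return sum(s * sinc(fs * t - n) for n, s in enumerate(samples))

# Evaluate between sample instants, far from the truncation edges.
t = 10.005                     # an off-grid instant in seconds
approx = reconstruct(t)
exact = math.sin(2 * math.pi * f * t)
```

The reconstruction lands within the truncation error of the true continuous value, even though t falls between sample instants.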

  • Dirac delta function - the unit impulse symbol, may be regarded through its Laplace transform as coming from the boundary values of a complex analytic function of a complex variable. The formal rules obeyed by this function are part of the operational calculus, a standard tool kit of physics and engineering. In many applications, the Dirac delta is regarded as a kind of limit (a weak limit) of a sequence of functions having a tall spike at the origin (in the theory of distributions, this is a true limit). The approximating functions of the sequence are thus "approximate" or "nascent" delta functions.
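The "nascent delta" idea can be checked numerically: integrate a test function against a narrowing Gaussian spike of unit area and watch the result approach the function's value at zero (the sifting property). A hedged Python sketch with illustrative function names:

```python
import math

def nascent_delta(x, eps):
    """Gaussian approximation to the Dirac delta: a unit-area spike of
    width eps that tends (weakly) to delta(x) as eps -> 0."""
    return math.exp(-(x / eps) ** 2) / (eps * math.sqrt(math.pi))

def integrate_against(f, eps, a=-1.0, b=1.0, steps=8000):
    """Trapezoidal estimate of the integral of f(x) * delta_eps(x) dx."""
    h = (b - a) / steps
    total = 0.0
    for i in range(steps + 1):
        x = a + i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * f(x) * nascent_delta(x, eps) * h
    return total

# As eps shrinks, the integral approaches f(0) = cos(0) = 1.
vals = [integrate_against(math.cos, eps) for eps in (0.5, 0.1, 0.02)]
```

Each narrower spike pulls the integral closer to f(0), which is exactly the limiting behavior the distribution-theory definition formalizes.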

  • Window function - also known as an apodization function or tapering function, a mathematical function that is zero-valued outside of some chosen interval, normally symmetric around the middle of the interval, usually near a maximum in the middle, and usually tapering away from the middle. Mathematically, when another function or waveform/data-sequence is "multiplied" by a window function, the product is also zero-valued outside the interval: all that is left is the part where they overlap, the "view through the window". Equivalently, and in actual practice, the segment of data within the window is first isolated, and then only that data is multiplied by the window function values. Thus, tapering, not segmentation, is the main purpose of window functions.
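As a small sketch of that "isolate, then multiply" practice, here is a Hann window applied to a data segment in Python (the Hann window is one common choice among many):

```python
import math

def hann(n_len):
    """Hann window: zero at both ends, a peak of 1 in the middle."""
    return [0.5 * (1 - math.cos(2 * math.pi * k / (n_len - 1)))
            for k in range(n_len)]

# Windowing a segment: multiply sample-by-sample. Everything outside
# the interval is conceptually zero -- the "view through the window".
segment = [math.sin(0.3 * k) for k in range(65)]
w = hann(65)
tapered = [s * wk for s, wk in zip(segment, w)]
```

The tapered segment starts and ends at zero, which is what suppresses the discontinuity artifacts (spectral leakage) that an abruptly cut segment would produce.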


  • Aliasing - an effect that causes different signals to become indistinguishable (or aliases of one another) when sampled. It also often refers to the distortion or artifact that results when a signal reconstructed from samples is different from the original continuous signal. Aliasing can occur in signals sampled in time, for instance digital audio, and is referred to as temporal aliasing. Aliasing is generally avoided by applying low pass filters or anti-aliasing filters (AAF) to the input signal before sampling and when converting a signal from a higher to a lower sampling rate. Suitable reconstruction filtering should then be used when restoring the sampled signal to the continuous domain or converting a signal from a lower to a higher sampling rate. For spatial anti-aliasing, the types of anti-aliasing include full-scene anti-aliasing (FSAA), multisample anti-aliasing, and supersampling.
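Temporal aliasing is easy to demonstrate: two tones separated by exactly the sample rate produce identical samples, since their phases differ by a whole number of cycles at every sample instant. A short Python sketch:

```python
import math

fs = 8000.0                      # sample rate
n = range(64)

# Two different continuous tones...
tone_1k = [math.sin(2 * math.pi * 1000 * k / fs) for k in n]
tone_9k = [math.sin(2 * math.pi * 9000 * k / fs) for k in n]

# ...are aliases of one another, because 9000 = 1000 + fs:
# sin(2*pi*9000*k/8000) = sin(2*pi*1000*k/8000 + 2*pi*k).
max_diff = max(abs(a - b) for a, b in zip(tone_1k, tone_9k))
```

No amount of post-processing can tell the two sample sequences apart, which is why the anti-aliasing filter must act before the sampler.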

  • Anti-aliasing filter - a filter used before a signal sampler to restrict the bandwidth of a signal to approximately or completely satisfy the Nyquist–Shannon sampling theorem over the band of interest. Since the theorem states that unambiguous reconstruction of the signal from its samples is possible when the power of frequencies above the Nyquist frequency is zero, a real anti-aliasing filter trades off between bandwidth and aliasing. A realizable anti-aliasing filter will typically either permit some aliasing to occur or else attenuate some in-band frequencies close to the Nyquist limit. For this reason, many practical systems sample at a higher rate than a perfect AAF would theoretically require, in order to ensure that all frequencies of interest can be reconstructed, a practice called oversampling.

  • IMPROVED POLYNOMIAL TRANSITION REGIONS ALGORITHM FOR ALIAS-SUPPRESSED SIGNAL SYNTHESIS - One of the building blocks of virtual analog synthesizers is the oscillator algorithm producing simple geometric waveforms, such as saw or triangle. An important requirement for such a digital oscillator is that its spectrum is similar to that of the analog waveform, that is, the heavy aliasing that would result from a trivial modulo-counter based implementation is reduced. Until now, the computationally most efficient oscillator algorithm with reduced aliasing was the Polynomial Transition Regions (PTR) method. This paper shows that the efficiency can be increased even further by eliminating the phase offset of the PTR method. The new Efficient PTR (EPTR) algorithm produces the same output as the PTR method, while requiring roughly 30% fewer operations, making it the most efficient alias-reduced oscillator algorithm to date. In addition to presenting an EPTR sawtooth algorithm, the paper extends the differentiated parabolic wave (DPW) triangle algorithm to the case of asymmetric triangle waves, followed by an EPTR implementation. The new algorithm provides a continuous transition between triangle and sawtooth signals, while still requiring low computational power.
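The paper itself should be consulted for EPTR; as a hedged sketch of the background it builds on, here is the trivial modulo-counter sawtooth it mentions, plus the first-order DPW variant (square the trivial saw, difference it, rescale), in Python:

```python
import math

def trivial_saw(f0, fs, n):
    """Modulo-counter sawtooth in [-1, 1): simple but heavily aliased,
    because of the hard discontinuity at each wrap."""
    out, phase, inc = [], 0.0, f0 / fs
    for _ in range(n):
        out.append(2.0 * phase - 1.0)
        phase += inc
        if phase >= 1.0:
            phase -= 1.0
    return out

def dpw_saw(f0, fs, n):
    """First-order Differentiated Parabolic Wave sawtooth: square the
    trivial saw, take the first difference, and rescale by fs/(4*f0).
    Squaring smooths the discontinuity, which suppresses aliasing."""
    x = trivial_saw(f0, fs, n)
    c = fs / (4.0 * f0)
    w = [xi * xi for xi in x]
    return [c * (w[i] - w[i - 1]) if i else 0.0 for i in range(n)]

y = dpw_saw(100.0, 48000.0, 4800)     # ten periods at 100 Hz
rms = math.sqrt(sum(v * v for v in y) / len(y))
```

Away from the wrap, the differenced parabola reproduces the saw's ramp (an ideal saw has RMS 1/sqrt(3)); PTR and EPTR achieve a similar effect by correcting only the few samples inside the transition region, at lower cost.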

  • Angle modulation - a class of carrier modulation that is used in telecommunications transmission systems. The class comprises frequency modulation (FM) and phase modulation (PM), and is based on altering the frequency or the phase, respectively, of a carrier signal to encode the message signal. This contrasts with varying the amplitude of the carrier, practiced in amplitude modulation (AM) transmission, the earliest of the major modulation methods used widely in early radio broadcasting.
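A minimal FM example in Python (parameter names are illustrative): the message enters the carrier's phase argument rather than its amplitude, so the envelope stays constant.

```python
import math

def fm_signal(fc, fm, beta, fs, n):
    """Frequency modulation of a sinusoidal carrier: the instantaneous
    phase is 2*pi*fc*t plus beta*sin(2*pi*fm*t), where beta is the
    modulation index; the amplitude never varies."""
    return [math.cos(2 * math.pi * fc * k / fs
                     + beta * math.sin(2 * math.pi * fm * k / fs))
            for k in range(n)]

# 1 kHz carrier, 50 Hz message, modulation index 2.
y = fm_signal(fc=1000.0, fm=50.0, beta=2.0, fs=48000.0, n=4800)
```

Unlike AM, where the message rides on the envelope, every sample here stays within [-1, 1]; that constant-envelope property is one reason FM is robust to amplitude noise.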

  • In-phase and quadrature components - a sinusoid with angle modulation can be decomposed into, or synthesized from, two amplitude-modulated sinusoids that are offset in phase by one-quarter cycle (π/2 radians). All three functions have the same center frequency. The amplitude-modulated sinusoids are known as the in-phase and quadrature components. In some contexts it is more convenient to refer to only the amplitude modulation (baseband) itself by those terms.
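The decomposition is just the angle-addition identity cos(wc·n + phi) = cos(phi)·cos(wc·n) − sin(phi)·sin(wc·n), so it can be verified exactly. A short Python sketch:

```python
import math

fs, fc = 48000.0, 1000.0
wc = 2 * math.pi * fc / fs
n_samples = 1000

# A phase-modulated carrier and its I/Q decomposition:
#   cos(wc*n + phi[n]) = I[n]*cos(wc*n) - Q[n]*sin(wc*n)
# with I = cos(phi) (in-phase) and Q = sin(phi) (quadrature).
phi = [0.5 * math.sin(2 * math.pi * 50.0 * n / fs)
       for n in range(n_samples)]
modulated = [math.cos(wc * n + p) for n, p in enumerate(phi)]

i_part = [math.cos(p) for p in phi]     # baseband in-phase component
q_part = [math.sin(p) for p in phi]     # baseband quadrature component
resynth = [i_part[n] * math.cos(wc * n) - q_part[n] * math.sin(wc * n)
           for n in range(n_samples)]

err = max(abs(a - b) for a, b in zip(modulated, resynth))
```

The two AM'd sinusoids cos(wc·n) and −sin(wc·n) are a quarter cycle apart, and the slowly varying i_part/q_part lists are the baseband signals that, in some contexts, are themselves called the I and Q components.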

  • Forward error correction - also known as channel coding, a technique used for controlling errors in data transmission over unreliable or noisy communication channels. The central idea is that the sender encodes the message in a redundant way, most often by using an error-correcting code (ECC).
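The simplest possible ECC illustrates the redundancy idea (real systems use far stronger codes such as Reed-Solomon or LDPC): repeat each bit and take a majority vote on the receiving end.

```python
def encode(bits, r=3):
    """Repetition code: transmit each bit r times (the redundancy)."""
    return [b for b in bits for _ in range(r)]

def decode(received, r=3):
    """Majority vote per group of r: corrects up to (r-1)//2 flipped
    bits in each group without any retransmission."""
    return [1 if sum(received[i:i + r]) > r // 2 else 0
            for i in range(0, len(received), r)]

message = [1, 0, 1, 1, 0]
codeword = encode(message)
codeword[1] ^= 1          # the channel flips one bit...
codeword[9] ^= 1          # ...and another, in a different group
recovered = decode(codeword)
```

The receiver corrects both errors on its own, which is the "forward" in forward error correction: no feedback channel or retransmission is needed.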

  • Fountain codes - also known as rateless erasure codes, a class of erasure codes with the property that a potentially limitless sequence of encoding symbols can be generated from a given set of source symbols, such that the original source symbols can ideally be recovered from any subset of the encoding symbols of size equal to or only slightly larger than the number of source symbols. The term fountain or rateless refers to the fact that these codes do not exhibit a fixed code rate.
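A toy LT-style sketch in Python: each encoding symbol is the XOR of a subset of source symbols, and a peeling decoder recovers the sources. (Deterministic neighbor sets are used here for clarity; a real fountain code draws them at random from a degree distribution, endlessly.)

```python
def lt_encode(source, neighbor_sets):
    """Each encoding symbol = XOR of a chosen subset of source symbols."""
    out = []
    for nbrs in neighbor_sets:
        v = 0
        for i in nbrs:
            v ^= source[i]
        out.append((set(nbrs), v))
    return out

def lt_decode(symbols, k):
    """Peeling decoder: repeatedly find a degree-1 symbol, recover its
    source symbol, and XOR that value out of every other equation."""
    recovered = [None] * k
    work = [[set(s), v] for s, v in symbols]
    progress = True
    while progress and any(r is None for r in recovered):
        progress = False
        for entry in work:
            nbrs, v = entry
            if len(nbrs) == 1:
                i = next(iter(nbrs))
                if recovered[i] is None:
                    recovered[i] = v
                    for other in work:
                        if other is not entry and i in other[0]:
                            other[0].discard(i)
                            other[1] ^= v
                    progress = True
    return recovered

source = [0x12, 0x34, 0x56, 0x78]
# Five encoding symbols for four sources: "slightly larger" overhead.
symbols = lt_encode(source, [[0], [0, 1], [1, 2], [2, 3], [0, 3]])
decoded = lt_decode(symbols, 4)
```

Recovery succeeds from barely more encoding symbols than source symbols; since the encoder could keep emitting fresh XOR combinations forever, no code rate is ever fixed in advance.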



  • - a variety of demonstration filters. These filters will be discussed and used as examples on the ZipCPU blog. If you watch carefully, you may find filters here before they are posted, as I'm going to be doing my development here. Still, there have been many posts already that you may find valuable. These include:
      - A description (and implementation of) the two simplest filters I know of
      - A generic FIR implementation
      - A simpler generic FIR implementation
      - A moving average/boxcar filter
      - A linear feedback shift register (LFSR)
      - Building a generic filtering test harness
      - Measuring a filter's frequency response
      - Delaying elements in a DSP system
      - Generating a pseudorandom noise stream via an LFSR
      - An example LFSR output
      - How to generate multiple bits per clock using an LFSR
      - Testing a generic filter using the test harness
      - Building a slower filter, one that time-multiplexes a single hardware multiply across many coefficients
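The repository's filters are hardware (HDL) implementations; as a hedged software sketch of one of the techniques listed above, here is the moving-average/boxcar filter in Python, using the running-sum trick that keeps the cost at one add and one subtract per sample:

```python
def boxcar(x, width):
    """Moving-average (boxcar) filter kept O(1) per sample with a
    running sum: add the newest sample and subtract the one that just
    left the window, instead of re-summing `width` taps every step --
    the same trick a hardware implementation uses every clock."""
    out, acc = [], 0.0
    for i, xi in enumerate(x):
        acc += xi
        if i >= width:
            acc -= x[i - width]
        out.append(acc / width)
    return out

# A step input: the output ramps through the transition over `width`
# samples, then settles exactly on the new level.
data = [1.0] * 8 + [5.0] * 8
smoothed = boxcar(data, 4)
```

All coefficients of a boxcar are equal, so no multiplies are needed at all beyond the final scaling, which is what makes it such a cheap first filter in hardware.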