Video

From Things and Stuff Wiki


General

See also TV, Radio, Streaming, Playback, Visuals, Lighting, VoIP


  • https://en.wikipedia.org/wiki/Zoopraxiscope - an early device for displaying moving images, considered an important predecessor of the movie projector. It was conceived by photographic pioneer Eadweard Muybridge in 1879 (and built for him by January 1880 to project his famous chronophotographic pictures in motion and thus prove that these were authentic). Muybridge used the projector in his public lectures from 1880 to 1895. The projector used 16" glass disks onto which Muybridge had an unidentified artist paint the sequences as silhouettes. This technique eliminated the backgrounds and enabled the creation of fanciful combinations and additional imaginary elements. Only one disk used photographic images, of a horse skeleton posed in different positions. A later series of 12" discs, made in 1892–1894, used outlines drawn by Erwin F. Faber that were printed onto the discs photographically, then colored by hand. These colored discs were probably never used in Muybridge's lectures. All images of the known 71 disks, including those of the photographic disk, were rendered in elongated form to compensate for the distortion of the projection. The projector was related to other projecting phenakistiscopes and used slotted metal shutter discs that were interchangeable for different picture disks or different effects on the screen. The machine was hand-cranked. The device appears to have been one of the primary inspirations for Thomas Edison and William Kennedy Dickson's Kinetoscope, the first commercial film exhibition system. Images from all of the known seventy-one surviving zoopraxiscope discs have been reproduced in the book Eadweard Muybridge: The Kingston Museum Bequest (The Projection Box, 2004).


Film

  • https://en.wikipedia.org/wiki/Kinetoscope - an early motion picture exhibition device. The Kinetoscope was designed for films to be viewed by one individual at a time through a peephole viewer window at the top of the device. The Kinetoscope was not a movie projector, but introduced the basic approach that would become the standard for all cinematic projection before the advent of video, by creating the illusion of movement by conveying a strip of perforated film bearing sequential images over a light source with a high-speed shutter. A process using roll film was first described in a patent application submitted in France and the U.S. by French inventor Louis Le Prince. The concept was also used by U.S. inventor Thomas Edison in 1889, and subsequently developed by his employee William Kennedy Laurie Dickson between 1889 and 1892. Dickson and his team at the Edison lab also devised the Kinetograph, an innovative motion picture camera with rapid intermittent, or stop-and-go, film movement, to photograph movies for in-house experiments and, eventually, commercial Kinetoscope presentations. A prototype for the Kinetoscope was shown to a convention of the National Federation of Women's Clubs on May 20, 1891. The first public demonstration of the Kinetoscope was held at the Brooklyn Institute of Arts and Sciences on May 9, 1893. Instrumental to the birth of American movie culture, the Kinetoscope also had a major impact in Europe; its influence abroad was magnified by Edison's decision not to seek international patents on the device, facilitating numerous imitations of and improvements on the technology. In 1895, Edison introduced the Kinetophone, which joined the Kinetoscope with a cylinder phonograph. Film projection, which Edison initially disdained as financially nonviable, soon superseded the Kinetoscope's individual exhibition model. Many of the projection systems developed by Edison's firm in later years would use the Kinetoscope name.


Tape



  • https://en.wikipedia.org/wiki/Quadruplex_videotape - also known as 2″ quad, or simply quad, the first practical and commercially successful analog videotape recording format. It was developed and released for the broadcast television industry in 1956 by Ampex, an American company based in Redwood City, California. The first videotape recorder using this format was built the same year. This format revolutionized broadcast television operations and television production, since the only recording medium available to the TV industry before then was film used for kinescopes, which was much more costly to utilize and took time to develop at a film laboratory. In addition, kinescope images were usually of obviously inferior quality to the live television broadcast images they recorded, whereas quadruplex videotape preserved almost all the image detail of a live broadcast.

Since most United States West Coast network broadcast delays done by the television networks at the time were done with film kinescopes that needed time for developing, the networks wanted a more practical, cost-effective, and quicker way to time-shift television programming for later airing on the West Coast. Broadcasters also sought a recording medium that was not as costly or time-consuming to edit and develop as film. Faced with these challenges, broadcasters sought to adapt magnetic tape recording technology (already in use for recording audio) for use with television as well.


DVD

  • https://en.wikipedia.org/wiki/VOB - the container format in DVD-Video media. VOB can contain digital video, digital audio, subtitles, DVD menus and navigation contents multiplexed together into a stream form. Files in VOB format may be encrypted.
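
For illustration (filenames hypothetical): unencrypted VOB files are MPEG program streams, so a title's segments can be concatenated byte-for-byte and then re-encoded with FFmpeg:

cat VTS_01_1.VOB VTS_01_2.VOB > title.vob
ffmpeg -i title.vob -c:v libx264 -crf 20 -c:a aac output.mp4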

HD DVD

  • https://en.wikipedia.org/wiki/HD_DVD - (short for High Definition Digital Versatile Disc) is an obsolete high-density optical disc format for storing data and playback of high-definition video. Supported principally by Toshiba, HD DVD was envisioned to be the successor to the standard DVD format. On February 19, 2008, after a protracted format war with rival Blu-ray, Toshiba abandoned the format, announcing it would no longer manufacture HD DVD players and drives. The HD DVD Promotion Group was dissolved on March 28, 2008.

DV / HDV


UHD / 4K

Signals

Analogue

RF

  • https://github.com/isotope-engineering/PiMod-Zero - brings old tech back to life by allowing a Raspberry Pi Zero to display color or B&W video—and play audio—on vintage televisions. It provides a super-compact way to watch old movies, play retro games, present digital art, or navigate your operating system using any television that receives standard NTSC broadcasts on VHF channels 2 and 3 (55.25 MHz and 61.25 MHz).


Composite

  • https://en.wikipedia.org/wiki/Composite_video - a single-channel analog video transmission format (without audio) that carries standard-definition video, typically at 480i or 576i resolution. Video information is encoded on one channel, unlike the higher-quality S-video (two channels) and the even higher-quality component video (three or more channels). Composite video is usually in standard formats such as NTSC, PAL, and SECAM and is often designated by the initials CVBS, for color, video, blanking and sync, or simply as video.


S-Video

  • https://en.wikipedia.org/wiki/S-Video - also known as separate video and Y/C, a signaling standard for standard definition video, typically 480i or 576i. By separating the black-and-white and coloring signals, it achieves better image quality than composite video, but has lower color resolution than component video.

Component

  • https://en.wikipedia.org/wiki/Component_video - a video signal that has been split into two or more component channels. In popular use, it refers to a type of component analog video (CAV) information that is transmitted or stored as three separate signals. When used without any other qualifications the term component video usually refers to analog YPBPR component video with sync on luma.

SCART

  • https://en.wikipedia.org/wiki/SCART - (from Syndicat des Constructeurs d'Appareils Radiorécepteurs et Téléviseurs – Radio and Television Receiver Manufacturers' Association) is a French-originated standard and associated 21-pin connector for connecting audio-visual (AV) equipment. In Europe, SCART used to be the most common method of connecting AV equipment, and was a standard connector for such devices; it was far less common elsewhere. As it was designed to carry analog standard-definition content, SCART has become obsolete with the introduction of new digital standards such as HDMI and DisplayPort. HDMI-CEC is derived from SCART's AV.link. However, a SCART connection can also carry HD signals such as 480p, 720p, 1080i and 1080p if the device's SCART connection is designed to support YPbPr, though this configuration is rare. The same goes for multichannel audio, which is likewise rare, as it is not standardized.

D-Terminal

  • https://en.wikipedia.org/wiki/D-Terminal - a type of analog video connector found on Japanese consumer electronics, typically HDTV, DVD, Blu-ray, D-VHS and HD DVD devices. It was developed by the EIAJ (Electronic Industry Association of Japan) in its standard RC-5237, for use in digital satellite broadcast tuners. In appearance it is a small flat trapezoidal connector, the same connector that Apple Computer used for some time as the AAUI connector for Ethernet. Some items sold outside Japan use the connector as well. Notable examples are Canon's XH-A1 DVC high-definition camcorder and Panasonic's AG-HVX200 DVCPro HD camcorder.

TRRS

  • https://en.wikipedia.org/wiki/Phone_connector_(audio)#Video - Equipment requiring video with stereo audio input/output sometimes uses 3.5 mm TRRS connectors. Two incompatible variants exist, of 15 millimetres (0.59 in) and 17 mm (0.67 in) length, and using the wrong variant may either simply not work, or could cause physical damage.

VGA


  • https://github.com/olegkapitonov/BlueVGA - VGA library for the STM32F103C (BluePill) that can drive a screen of 28x30 tiles of 8x8 pixels each, for a total resolution of 224x240 pixels with 8 colors, using a very low footprint

Digital


  • https://en.wikipedia.org/wiki/Uncompressed_video - digital video that either has never been compressed or was generated by decompressing previously compressed digital video. It is commonly used by video cameras, video monitors, video recording devices, and in video processors that perform functions such as image resizing, image rotation, deinterlacing, and text and graphics overlay. It is conveyed over various types of baseband digital video interfaces, such as HDMI, DVI, DisplayPort and SDI. Standards also exist for the carriage of uncompressed video over computer networks.

Some HD video cameras output uncompressed video, whereas others compress the video using a lossy compression method such as MPEG or H.264. In any lossy compression process, some of the video information is removed, which creates compression artifacts and reduces the quality of the resulting decompressed video. When editing video, it is preferred to work with video that has never been compressed as this maintains the best possible quality, with compression performed after completion of editing. Uncompressed video should not be confused with raw video. Raw video represents largely unprocessed data captured by an imaging device.
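
As a rough illustration of the size difference, FFmpeg can decode any input to uncompressed frames (filenames hypothetical):

# rawvideo stores every frame uncompressed; expect roughly 1.5 bytes per pixel per frame at yuv420p
ffmpeg -i input.mp4 -c:v rawvideo -pix_fmt yuv420p uncompressed.avi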


  • OSSC - a low-latency video digitizer and scan conversion board designed mainly for connecting retro video game consoles and home computers to modern displays. See the project wiki for a more detailed description and the latest features.


SDI

  • https://en.wikipedia.org/wiki/Serial_digital_interface - SDI, a family of digital video interfaces first standardized by SMPTE (The Society of Motion Picture and Television Engineers) in 1989. For example, ITU-R BT.656 and SMPTE 259M define digital video interfaces used for broadcast-grade video. A related standard, known as high-definition serial digital interface (HD-SDI), is standardized in SMPTE 292M; this provides a nominal data rate of 1.485 Gbit/s.

DVI

  • https://en.wikipedia.org/wiki/Digital_Visual_Interface - DVI - a video display interface developed by the Digital Display Working Group (DDWG). The digital interface is used to connect a video source, such as a video display controller, to a display device, such as a computer monitor. It was developed with the intention of creating an industry standard for the transfer of digital video content. The interface is designed to transmit uncompressed digital video and can be configured to support multiple modes such as DVI-A (analog only), DVI-D (digital only) or DVI-I (digital and analog). Featuring support for analog connections, the DVI specification is compatible with the VGA interface. This compatibility, along with other advantages, led to its widespread acceptance over competing digital display standards Plug and Display (P&D) and Digital Flat Panel (DFP). Although DVI is predominantly associated with computers, it is sometimes used in other consumer electronics such as television sets and DVD players.

HDMI

  • https://en.wikipedia.org/wiki/HDMI - High-Definition Multimedia Interface is a proprietary audio/video interface for transmitting uncompressed video data and compressed or uncompressed digital audio data from an HDMI-compliant source device, such as a display controller, to a compatible computer monitor, video projector, digital television, or digital audio device. HDMI is a digital replacement for analog video standards.

HDMI implements the EIA/CEA-861 standards, which define video formats and waveforms, transport of compressed, uncompressed, and LPCM audio, auxiliary data, and implementations of the VESA EDID. CEA-861 signals carried by HDMI are electrically compatible with the CEA-861 signals used by the digital visual interface (DVI). No signal conversion is necessary, nor is there a loss of video quality when a DVI-to-HDMI adapter is used. The CEC (Consumer Electronics Control) capability allows HDMI devices to control each other when necessary and allows the user to operate multiple devices with one handheld remote control device.

Several versions of HDMI have been developed and deployed since initial release of the technology but all use the same cable and connector. Other than improved audio and video capacity, performance, resolution and color spaces, newer versions have optional advanced features such as 3D, Ethernet data connection, and CEC (Consumer Electronics Control) extensions. Production of consumer HDMI products started in late 2003. In Europe either DVI-HDCP or HDMI is included in the HD ready in-store labeling specification for TV sets for HDTV, formulated by EICTA with SES Astra in 2005. HDMI began to appear on consumer HDTVs in 2004 and camcorders and digital still cameras in 2006. As of January 6, 2015 (twelve years after the release of the first HDMI specification), over 4 billion HDMI devices have been sold.


  • https://github.com/hdl-util/hdmi - SystemVerilog code for HDMI 1.4a video/audio output on an FPGA. Most free and open source HDMI source (computer/gaming console) implementations actually output a DVI signal, which HDMI sinks (TVs/monitors) are backwards compatible with. To support audio and other HDMI-only functionality, a true HDMI signal must be sent. The code in this repository lets you do that without having to license an HDMI IP block from anyone. [https://news.ycombinator.com/item?id=22279308]


  • https://en.wikipedia.org/wiki/Consumer_Electronics_Control - a feature of HDMI designed to let up to 15 HDMI-connected devices be controlled with only one remote control; individual CEC-enabled devices can also command and control each other without user intervention. For example, a television set remote control can also control a set-top box and a DVD player. It is a one-wire bidirectional serial bus that is based on the CENELEC standard AV.link protocol to perform remote control functions. CEC wiring is mandatory, although implementation of CEC in a product is optional. It was defined in HDMI Specification 1.0 and updated in HDMI 1.2, HDMI 1.2a and HDMI 1.3a (which added timer and audio commands to the bus). USB to CEC adapters exist that allow a computer to control CEC-enabled devices.
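
A minimal sketch using cec-client from libCEC, assuming a CEC-capable adapter such as a Pulse-Eight USB device or a Raspberry Pi:

echo "on 0" | cec-client -s -d 1        # power on the device at logical address 0 (the TV)
echo "standby 0" | cec-client -s -d 1   # put the TV into standby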


  • https://en.wikipedia.org/wiki/Mobile_High-Definition_Link - an industry standard for a mobile audio/video interface that allows the connection of smartphones, tablets, and other portable consumer electronics devices to high-definition televisions (HDTVs), audio receivers, and projectors. The standard was designed to share existing mobile device connectors, such as Micro-USB, and avoid the need to add additional video connectors on devices with limited space for them. MHL connects to display devices either directly through special HDMI inputs that are MHL-enabled, or indirectly through standard HDMI inputs using MHL-to-HDMI adapters. MHL was developed by a consortium of five companies: Nokia, Samsung, Silicon Image, Sony and Toshiba.

DisplayPort

  • https://en.wikipedia.org/wiki/DisplayPort - a digital display interface developed by a consortium of PC and chip manufacturers and standardized by the Video Electronics Standards Association (VESA). The interface is primarily used to connect a video source to a display device such as a computer monitor, and it can also carry audio, USB, and other forms of data. DisplayPort was designed to replace VGA, DVI, and FPD-Link. The interface is backward compatible with legacy interfaces, such as HDMI and DVI, through the use of either active or passive adapters.

USB-C / Thunderbolt


Other

  • https://en.wikipedia.org/wiki/SCTE-35 - a joint ANSI/Society of Cable and Telecommunications Engineers standard that describes the inline insertion of cue tones in MPEG-TS streams. The full standard name is "Digital Program Insertion Cueing Message for Cable." SCTE-35 is used in the US to signal a local ad insertion opportunity in the transport stream, and in Europe to insert local TV programs (e.g. local news transmissions). Currently, SCTE-35 is erratically implemented. Different providers have different "interpretations" of the specification.





  • https://en.wikipedia.org/wiki/Extended_Display_Identification_Data - EDID and Enhanced EDID (E-EDID) are metadata formats for display devices to describe their capabilities to a video source (e.g. graphics card or set-top box). The data format is defined by a standard published by the Video Electronics Standards Association (VESA). The EDID data structure includes manufacturer name and serial number, product type, phosphor or filter type (as chromaticity data), timings supported by the display, display size, luminance data and (for digital displays only) pixel mapping data. DisplayID is a VESA standard targeted to replace EDID and E-EDID extensions with a uniform format suited for both PC monitor and consumer electronics devices.


  • https://github.com/bulletmark/edid-rw - will read and/or write a display's EDID data structure. Use it with the edid-decode utility to view and check an EDID. You can also write new EDID data to attempt to fix a corrupt EDID.
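
Typical usage per the project README (device number 0 is an assumption; check which i2c device maps to your display):

sudo ./edid-rw 0 | edid-decode   # read display 0's EDID and pretty-print it
sudo ./edid-rw 0 > edid.bin      # keep a binary backup before attempting any write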


  • https://en.wikipedia.org/wiki/DisplayID - a VESA standard for metadata describing display device capabilities to the video source. It is designed to replace the E-EDID standard and EDID structure v1.4. The DisplayID standard was initially released in December 2007. Version 1.1 was released in March 2009 and was followed by version 1.2, released in August 2011. Version 1.3 was released in June 2013, and the current version, 2.0, was released in September 2017.

DisplayID uses variable-length structures of up to 256 bytes each, which encompass all existing EDID extensions as well as new extensions for 3D displays, embedded displays, Wide Color Gamut and HDR EOTF. DisplayID format includes several blocks which describe logical parts of the display such as video interfaces, display device technology, timing details and manufacturer information. Data blocks are identified with a unique tag. The length of each block can be variable or fixed to a specific number of bytes. Only the base data block is mandatory, while all extension blocks are optional. This variable structure is based on CEA EDID Extension Block Version 3 first defined in CEA-861-B. The DisplayID standard is freely available and is royalty-free to implement.

Form


  • https://kodi.wiki/view/Samples - a collection of A/V clips for testing and pushing media player capabilities. Not all types, but the most widely used or critical ones for home theater enthusiasts.


Resolution and aspect ratio

Most common display resolutions in the first half of 2012:

Acronym        Aspect ratio   Width (px)   Height (px)  % of Steam users   % of web users
VGA            4:3            640          480          0.02               n/a
SVGA           4:3            800          600          0.17               1.03
WSVGA          17:10          1024         600          0.31               2.25
XGA            4:3            1024         768          5.53               18.69
XGA+           4:3            1152         864          0.87               1.55
WXGA           16:9           1280         720          1.51               1.54
WXGA           5:3            1280         768          n/a                1.54
WXGA           16:10          1280         800          4.25               12.97
SXGA– (UVGA)   4:3            1280         960          0.72               0.72
SXGA           5:4            1280         1024         10.66              7.49
HD             ~16:9          1360         768          2.36               2.28
HD             ~16:9          1366         768          17.19              19.14
SXGA+          4:3            1400         1050         0.18               n/a
WXGA+          16:10          1440         900          7.60               6.61
HD+            16:9           1600         900          6.82               3.82
UXGA           4:3            1600         1200         0.53               n/a
WSXGA+         16:10          1680         1050         10.26              3.66
FHD            16:9           1920         1080         25.04              5.09
WUXGA          16:10          1920         1200         3.65               1.11
QWXGA          16:9           2048         1152         0.13               n/a
WQHD           16:9           2560         1440         0.72               0.36
WQXGA          16:10          2560         1600         0.19               n/a
               3:4            768          1024         n/a                1.93
               16:9           1093         614          n/a                0.63
               ~16:9          1311         737          n/a                0.35
Other                                                   1.29               7.25

Standard-definition television (SDTV):

  • 480i (NTSC standard uses an analog system of 486i split into two interlaced fields of 243 lines)
  • 576i (PAL, 720 × 576 split into two interlaced fields of 288 lines)

Enhanced-definition television (EDTV):

  • 480p (720 × 480 progressive scan)
  • 576p (720 × 576 progressive scan)

High-definition television (HDTV):

  • 720p (1280 × 720 progressive scan) - 0.9 megapixels
  • 1080i (1920 × 1080 split into two interlaced fields of 540 lines) - https://en.wikipedia.org/wiki/1080i
  • 1080p (1920 × 1080 progressive scan) - Full HD / FHD

Ultra-high-definition television (UHDTV)

  • 2160p/4K (3840 × 2160 progressive scan)
  • 4320p/8K (7680 × 4320 progressive scan)

Frame rate: see http://www.kk.org/thetechnium/archives/2013/01/pain_of_the_new

Interlaced

  • https://en.wikipedia.org/wiki/Interlaced_video - a technique for doubling the perceived frame rate of a video display without consuming extra bandwidth. The interlaced signal contains two fields of a video frame captured at two different times. This enhances motion perception to the viewer, and reduces flicker by taking advantage of the phi phenomenon. This effectively doubles the time resolution (also called temporal resolution) as compared to non-interlaced footage (for frame rates equal to field rates). Interlaced signals require a display that is natively capable of showing the individual fields in a sequential order. CRT displays and ALiS plasma displays are made for displaying interlaced signals. Interlaced scan refers to one of two common methods for "painting" a video image on an electronic display screen (the other being progressive scan) by scanning or displaying each line or row of pixels. This technique uses two fields to create a frame. One field contains all odd-numbered lines in the image; the other contains all even-numbered lines.
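
When interlaced material has to be shown on a progressive display, it is usually deinterlaced first. A minimal FFmpeg sketch (filenames hypothetical): yadif=1 outputs one frame per field, preserving the doubled temporal resolution described above:

ffmpeg -i interlaced.ts -vf yadif=1 -c:v libx264 -crf 20 -c:a copy progressive.mp4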



3D

  • https://en.wikipedia.org/wiki/3D_film - also known as three-dimensional film or stereoscopic 3D (S3D) film, a motion picture that enhances the illusion of depth perception, hence adding a third dimension. The most common approach to the production of 3D films is derived from stereoscopic photography. In this approach, a regular motion picture camera system is used to record the images as seen from two perspectives (or computer-generated imagery generates the two perspectives in post-production), and special projection hardware or eyewear is used to limit the visibility of each image to the viewer's left or right eye only. 3D films are not limited to theatrical releases; television broadcasts and direct-to-video films have also incorporated similar methods, especially since the advent of 3D television and Blu-ray 3D. 3D films have existed in some form since 1915, but had been largely relegated to a niche in the motion picture industry because of the costly hardware and processes required to produce and display a 3D film, and the lack of a standardized format for all segments of the entertainment business. Nonetheless, 3D films were prominently featured in the 1950s in American cinema, and later experienced a worldwide resurgence in the 1980s and 1990s driven by IMAX high-end theaters and Disney-themed venues. 3D films became increasingly successful throughout the 2000s, peaking with the success of 3D presentations of Avatar in December 2009, after which 3D films again decreased in popularity. Certain directors have also taken more experimental approaches to 3D filmmaking, most notably celebrated auteur Jean-Luc Godard in his films 3X3D and Goodbye to Language.


  • https://en.wikipedia.org/wiki/3D_television - television that conveys depth perception to the viewer by employing techniques such as stereoscopic display, multi-view display, 2D-plus-depth, or any other form of 3D display. Most modern 3D television sets use an active shutter 3D system or a polarized 3D system, and some are autostereoscopic without the need of glasses. As of 2019, most 3D TV sets and services are no longer available.


  • https://en.wikipedia.org/wiki/DVB_3D-TV - a standard, partially published at the end of 2010, that includes techniques and procedures for sending a three-dimensional video signal over existing DVB transmission standards (cable, terrestrial or satellite). Currently there is a commercial requirements text for 3D TV broadcasters and set-top box manufacturers, but no technical information in it. 3D television technology is still in the early stages of standardization: the major 3D market today is in theaters and in Blu-ray Disc players with stereoscopic systems, but it is expected to extend to broadcast, and later free-viewpoint television will come into homes, requiring new coding and transmission standards.


  • https://en.wikipedia.org/wiki/Multiview_Video_Coding - MVC, also known as MVC 3D, is a stereoscopic video coding standard for video compression that allows for the efficient encoding of video sequences captured simultaneously from multiple camera angles in a single video stream. It uses the 2D plus Delta method and is an amendment to the H.264 (MPEG-4 AVC) video compression standard, developed jointly by MPEG and VCEG, with contributions from a number of companies, primarily Panasonic and LG Electronics. MVC formatting is intended for encoding stereoscopic (two-view) 3D video, as well as free viewpoint television and multi-view 3D television. The Stereo High profile was standardized in June 2009; the profile is based on the MVC toolset and is used in stereoscopic Blu-ray 3D releases. MVC streams are backward compatible with H.264/AVC, which allows older video player devices and software decoders to decode MVC video streams while ignoring the additional information for the second view, meaning that the stereoscopic image will not be available.



StereoPi

  • StereoPi - DIY stereoscopic camera based on Raspberry Pi. StereoPi can capture, save, livestream, and process real-time stereoscopic video and images. It opens up countless possibilities in robotics, AR/VR, computer vision, drone instrumentation, panoramic video, and more.

360

  • https://github.com/facebook/transform360 - a video/image filter that transforms a 360 video from one projection to another. Usually, the input projection is equirectangular and the output projection is cubemap. We also keep the previous version of the transform, Transform_V1, in the file vf_transform_v1.c
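
Rather than guessing transform360's own option names, stock FFmpeg's built-in v360 filter illustrates the same equirectangular-to-cubemap remap (filenames hypothetical):

ffmpeg -i equirect.mp4 -vf "v360=input=equirect:output=c3x2" cubemap.mp4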

Container format

.avi

.mov

MPEG transport stream / ISO/IEC13818-1

  • https://en.wikipedia.org/wiki/MPEG_transport_stream - or simply transport stream (TS) is a standard digital container format for transmission and storage of audio, video, and Program and System Information Protocol (PSIP) data. It is used in broadcast systems such as DVB, ATSC and IPTV. Transport stream specifies a container format encapsulating packetized elementary streams, with error correction and synchronization pattern features for maintaining transmission integrity when the communication channel carrying the stream is degraded. Transport streams differ from the similarly-named MPEG program stream in several important ways: program streams are designed for reasonably reliable media, such as discs (like DVDs), while transport streams are designed for less reliable transmission, namely terrestrial or satellite broadcast. Further, a transport stream may carry multiple programs. Transport stream is specified in MPEG-2 Part 1, Systems, formally known as ISO/IEC standard 13818-1 or ITU-T Rec. H.222.0.
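
For illustration (filenames hypothetical), FFmpeg can remux an MP4 into a transport stream without re-encoding:

ffmpeg -i input.mp4 -c copy -f mpegts output.ts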

ISO/IEC14496-12

  • https://en.wikipedia.org/wiki/ISO_base_media_file_format - ISO/IEC 14496-12 (MPEG-4 Part 12) defines a general structure for time-based multimedia files such as video and audio. It is designed as a flexible, extensible format that facilitates interchange, management, editing and presentation of the media. The presentation may be local, or via a network or other stream delivery mechanism. The file format is designed to be independent of any particular network protocol while enabling support for them in general. It is used as the basis for other media file formats (e.g. container formats MP4 and 3GP).


.flv

  • https://en.wikipedia.org/wiki/Flash_Video - a container file format used to deliver digital video content (e.g., TV shows, movies, etc.) over the Internet using Adobe Flash Player version 6 and newer. Flash Video content may also be embedded within SWF files. There are two different Flash Video file formats: FLV and F4V. The audio and video data within FLV files are encoded in the same way as SWF files. The F4V file format is based on the ISO base media file format, starting with Flash Player 9 update 3. Both formats are supported in Adobe Flash Player and developed by Adobe Systems. FLV was originally developed by Macromedia. In the early 2000s, Flash Video was the de facto standard for web-based streaming video (over RTMP). Users include Hulu, VEVO, Yahoo! Video, metacafe, Reuters.com, and many other news providers.

Flash Video FLV files usually contain material encoded with codecs following the Sorenson Spark or VP6 video compression formats. As of 2010 public releases of Flash Player (collaboration between Adobe Systems and MainConcept) also support H.264 video and HE-AAC audio.


.mkv


Codecs

to rework






  • https://en.wikipedia.org/wiki/ITU-T - The International Telecommunication Union Telecommunication Standardization Sector (ITU-T) is one of the three sectors (branches) of the International Telecommunication Union (ITU). It is responsible for coordinating standards for telecommunications and information and communication technology, such as X.509 for cybersecurity, Y.3172 and Y.3173 for machine learning, and H.264/MPEG-4 AVC for video compression, between its Member States, Private Sector Members, and Academia Members.

The World Telecommunication Standardization Assembly (WTSA), the sector's governing conference, convenes every four years.



  • https://gitlab.com/standards/HDRTools - software under development to support video standardization work; it may eventually be formally published as reference software, e.g. by ITU-T and ISO/IEC.

Please do not use this package for any other applications without the knowledge or permission of the software owners/creators. Disclaimer: please note that this software was developed mainly for research purposes, with the desire for portability, extensibility, and testing in the context of video coding standards development. It was highly desirable to make this software independent from other "non-standard" libraries, such as libraries for I/O and processing, to ease compilation and testing across multiple platforms. Instead, such functionality may be replicated in this software, while at the same time, it may be highly unoptimized or limited in scope. Optimized code, however, was never the intent of this tool. Optimization, as well as extensions in scope for some of this functionality, may be added in the future as deemed necessary.


M-JPEG

  • https://en.wikipedia.org/wiki/Motion_JPEG - a video compression format in which each video frame or interlaced field of a digital video sequence is compressed separately as a JPEG image. Originally developed for multimedia PC applications, Motion JPEG enjoys broad client support: most major web browsers and players provide native support, and plug-ins are available for the rest. Software and devices using the M-JPEG standard include web browsers, media players, game consoles, digital cameras, IP cameras, webcams, streaming servers, video cameras, and non-linear video editors.

Unlike the video formats specified in international standards such as MPEG-2 and the format specified in the JPEG still-picture coding standard, there is no document that defines a single exact format that is universally recognized as a complete specification of “Motion JPEG” for use in all contexts. This raises compatibility concerns about file outputs from different manufacturers. However, each particular file format usually has some standard on how M-JPEG is encoded. For example, Microsoft documents their standard format to store M-JPEG in AVI files, Apple documents how M-JPEG is stored in QuickTime files, RFC 2435 describes how M-JPEG is implemented in an RTP stream, and an M-JPEG CodecID is planned for the Matroska file format.
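
A minimal FFmpeg sketch (filenames hypothetical); each output frame is an independent JPEG, with -q:v setting JPEG quality (2-31, lower is better):

ffmpeg -i input.mp4 -c:v mjpeg -q:v 3 -an output.avi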


  • RFC 2435 - RTP Payload Format for JPEG-compressed Video - This memo describes the RTP payload format for JPEG video streams. The packet format is optimized for real-time video streams where codec parameters change rarely from frame to frame. This document is a product of the Audio-Video Transport working group within the Internet Engineering Task Force. Comments are solicited and should be addressed to the working group's mailing list at rem-conf@es.net and/or the author(s).


  • MJPEG Tools - The mjpeg programs are a set of tools that can do recording of videos and playback, simple cut-and-paste editing and the MPEG compression of audio and video under Linux.


MPEG-1

  • https://en.wikipedia.org/wiki/MPEG-1 - a standard for lossy compression of video and audio. It is designed to compress VHS-quality raw digital video and CD audio down to about 1.5 Mbit/s (26:1 and 6:1 compression ratios respectively) without excessive quality loss, making video CDs, digital cable/satellite TV and digital audio broadcasting (DAB) practical. Today, MPEG-1 has become the most widely compatible lossy audio/video format in the world, and is used in a large number of products and technologies. Perhaps the best-known part of the MPEG-1 standard is the first version of the MP3 audio format it introduced. The MPEG-1 standard is published as ISO/IEC 11172 – Information technology—Coding of moving pictures and associated audio for digital storage media at up to about 1.5 Mbit/s.
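
A hedged FFmpeg sketch at roughly the bit rates mentioned above (filenames hypothetical; MPEG-1 only permits certain frame rates, so an input may also need e.g. -r 25):

ffmpeg -i input.mp4 -c:v mpeg1video -b:v 1150k -c:a mp2 -b:a 224k -ar 44100 output.mpg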


MPEG-2

  • https://en.wikipedia.org/wiki/MPEG-2 - a standard for "the generic coding of moving pictures and associated audio information" (ISO/IEC 13818). It describes a combination of lossy video compression and lossy audio data compression methods, which permit storage and transmission of movies using currently available storage media and transmission bandwidth. While MPEG-2 is not as efficient as newer standards such as H.264 and H.265/HEVC, backwards compatibility with existing hardware and software means it is still widely used, for example in over-the-air digital television broadcasting and in the DVD-Video standard.

H.264 / MPEG-4 AVC

  • H.264/MPEG-4 AVC is a block-oriented motion-compensation-based codec standard developed by the ITU-T Video Coding Experts Group (VCEG) together with the ISO/IEC JTC1 Moving Picture Experts Group (MPEG). The project partnership effort is known as the Joint Video Team (JVT). The ITU-T H.264 standard and the ISO/IEC MPEG-4 AVC standard (formally, ISO/IEC 14496-10 – MPEG-4 Part 10, Advanced Video Coding) are jointly maintained so that they have identical technical content. H.264 is perhaps best known as being one of the codec standards for Blu-ray Discs; all Blu-ray Disc players must be able to decode H.264. It is also widely used by streaming internet sources, such as videos from Vimeo, YouTube, and the iTunes Store, web software such as the Adobe Flash Player and Microsoft Silverlight, and also various HDTV broadcasts over terrestrial (ATSC, ISDB-T, DVB-T or DVB-T2), cable (DVB-C) and satellite (DVB-S and DVB-S2).
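
A typical FFmpeg/x264 invocation (filenames hypothetical); CRF mode targets constant quality in a single pass, lower values meaning higher quality:

ffmpeg -i input.mov -c:v libx264 -preset slow -crf 22 -c:a aac output.mp4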





  • OpenH264 - Cisco has taken their H.264 implementation, and open sourced it under BSD license terms. Development and maintenance will be overseen by a board from industry and the open source community. Furthermore, we have provided a binary form suitable for inclusion in applications across a number of different operating systems, and make this binary module available for download from the Internet. We will not pass on our MPEG-LA licensing costs for this module, and based on the current licensing environment, this will effectively make H.264 free for use on supported platforms.

H.265 / HEVC


  • x265 - The goal of x265 is to become the best open source H.265/HEVC encoder, with the highest compression efficiency at the highest performance, while also allowing commercial use via a dual GPL and commercial license. In a nutshell, we would like to reproduce x264's successful development model for HEVC.
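
For illustration (filenames hypothetical), an FFmpeg build with libx265 exposes the encoder much like x264:

ffmpeg -i input.mp4 -c:v libx265 -preset medium -crf 26 -c:a copy output.mp4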


H.266 / VVC


FFV1

  • https://github.com/FFmpeg/FFV1/tree/master - manages the development of specification documents for FFV1, a lossless intra-frame video codec. The goals of this specification effort are defined within the charter of the cellar working group of the Internet Engineering Task Force. Information within this repository should be considered in draft form. The most recent official version of the specification for FFV1 versions 0, 1 and 3 may be found at https://datatracker.ietf.org/doc/draft-ietf-cellar-ffv1/ and for FFV1 version 4 at https://datatracker.ietf.org/doc/draft-ietf-cellar-ffv1-v4/. The FFV1 specification was initially written in lyx. In July 2015 the formatting of the specification was transitioned to Markdown to be used with xml2rfc version 2. In August 2019 the formatting was transitioned to target xml2rfc version 3.
  • https://en.wikipedia.org/wiki/FFV1 - short for FF Video 1, a lossless intra-frame video coding format. It can use either variable-length coding or arithmetic coding for entropy coding. The encoder and decoder have been part of the free, open-source library libavcodec in the project FFmpeg since June 2003. FFV1 is also included in ffdshow and LAV Filters, which makes the video codec available to Microsoft Windows applications that support system-wide codecs over Video for Windows or DirectShow. FFV1 is particularly popular for its performance regarding speed and size, compared to other lossless preservation codecs, such as Motion JPEG 2000. The European Broadcasting Union lists FFV1 under the codec-family index "31" in their combined list of video codec references. It has been standardized at the IETF as RFC 9043.
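
A sketch of a common archival-style invocation (filenames hypothetical): FFV1 version 3 with per-slice CRCs, wrapped in Matroska:

ffmpeg -i input.mov -c:v ffv1 -level 3 -slicecrc 1 -c:a copy output.mkv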

WMV

  • https://en.wikipedia.org/wiki/Windows_Media_Video - a series of video codecs and their corresponding video coding formats developed by Microsoft. It is part of the Windows Media framework. WMV consists of three distinct codecs: the original video compression technology, known as WMV, was originally designed for Internet streaming applications as a competitor to RealVideo; the other compression technologies, WMV Screen and WMV Image, cater for specialized content. After standardization by the Society of Motion Picture and Television Engineers (SMPTE), WMV version 9 was adapted for physical-delivery formats such as HD DVD and Blu-ray Disc and became known as VC-1. Microsoft also developed a digital container format called Advanced Systems Format to store video encoded by Windows Media Video.

VP8

  • https://en.wikipedia.org/wiki/VP8 - an open and royalty free video compression format owned by Google and created by On2 Technologies as a successor to VP7. In May 2010, after the purchase of On2 Technologies, Google provided an irrevocable patent promise on its patents for implementing the VP8 format, and released a specification of the format under the Creative Commons Attribution 3.0 license. That same year, Google also released libvpx, the reference implementation of VP8, under the revised BSD license.

VP9

  • https://en.wikipedia.org/wiki/VP9 - an open and royalty-free video coding format developed by Google. VP9 is a successor to VP8 and competes mainly with MPEG's High Efficiency Video Coding (HEVC/H.265). At first, VP9 was mainly used on Google's video platform YouTube. The emergence of the Alliance for Open Media, and its support for the ongoing development of the successor AV1, led to growing interest in the format.
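
A minimal FFmpeg/libvpx-vp9 sketch (filenames hypothetical); setting -b:v 0 alongside -crf selects pure constant-quality mode:

ffmpeg -i input.mp4 -c:v libvpx-vp9 -crf 32 -b:v 0 -c:a libopus output.webm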


AV1

  • Alliance for Open Media - founded by leading Internet companies focused on developing next-generation media formats, codecs and technologies.  Alliance members bring their collective technology and expertise to meet growing Internet demand for top-quality video, audio, imagery and streaming across devices of all kinds and for users worldwide. The initial project – AOMedia Video – pursues a new, royalty-free video codec specification and open source implementation based on the contributions of Alliance members and the broader developer community, along with binding specifications for media format, content encryption and adaptive streaming. Founding members are Amazon, Apple, ARM, Cisco, Facebook, Google, IBM, Intel, Microsoft, Mozilla, Netflix and NVIDIA.
  • https://en.wikipedia.org/wiki/AV1 - an open, royalty-free video coding format designed for video transmissions over the Internet. It is being developed by the Alliance for Open Media (AOMedia), a consortium of firms from the semiconductor industry, video on demand providers, and web browser developers, founded in 2015. AV1 is the primary contender for standardization by the video standard working group NetVC of the Internet Engineering Task Force (IETF). The group has put together a list of criteria to be met by the new video standard. It is meant to succeed its predecessor VP9 and compete with HEVC/H.265 from the Moving Picture Experts Group. AV1 can be used together with the audio format Opus in a future version of the WebM container format for HTML5 web video and WebRTC.


  • https://github.com/OpenVisualCloud/SVT-AV1 - an AV1-compliant encoder library core. The SVT-AV1 development is a work-in-progress targeting performance levels applicable to both VOD and Live encoding / transcoding video applications.
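
Assuming an FFmpeg build with libsvtav1 enabled, a hedged example (filenames hypothetical; -preset trades speed for efficiency, 0 slowest to 13 fastest):

ffmpeg -i input.mp4 -c:v libsvtav1 -crf 35 -preset 8 -c:a libopus output.webm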



Dirac

Daala

  • Daala is the code-name for a new video compression technology. The effort is a collaboration between Mozilla Foundation, Xiph.Org Foundation and other contributors. The goal of the project is to provide a free to implement, use and distribute digital media format and reference implementation with technical performance superior to H.265.

VVC

  • https://en.wikipedia.org/wiki/Versatile_Video_Coding - also MPEG-I Part 3, a future video compression standard being developed for finalization around mid-2020 by the Joint Video Experts Team (JVET), a joint team of video experts from the MPEG working group of ISO/IEC JTC 1 and the VCEG working group of ITU-T. At times, it has also been referred to as Future Video Coding (FVC) and as ITU-T H.266. It will be the successor to High Efficiency Video Coding (HEVC, also known as ITU-T H.265 and MPEG-H Part 2).


ProRes

  • About Apple ProRes – Apple Support (UK) - Apple ProRes codecs provide an unparalleled combination of multistream, real-time editing performance, impressive image quality, and reduced storage rates. They take full advantage of multicore processing and feature fast, reduced-resolution decoding modes. All Apple ProRes codecs support all frame sizes (including SD, HD, 2K, 4K, and 5K) at full resolution. The data rates vary based on codec type, image content, frame size, and frame rate.



  • https://en.wikipedia.org/wiki/Apple_ProRes - a high quality, "visually lossless" lossy video compression format developed by Apple Inc. for use in post-production that supports video resolution up to 8K. It is the successor of the Apple Intermediate Codec and was introduced in 2007 with Final Cut Studio 2. Much like the H.26x and MPEG standards, the ProRes family of codecs use compression algorithms based on the discrete cosine transform (DCT). ProRes is widely used as a final format delivery method for HD broadcast files in commercials, features, Blu-ray and streaming.

ProRes is a line of intermediate codecs, which means they are intended for use during video editing, and not for practical end-user viewing. This is achieved by only using intra-frame compression, where each frame is stored independently and can be decoded with no dependencies on other frames. The benefit of an intermediate codec is that it offers excellent random access performance in post-production applications, and retains higher quality than end-user codecs while still requiring much less expensive disk systems compared to uncompressed video. It is comparable to Avid's DNxHD codec or CineForm, which offer similar bitrates and are also intended to be used as intermediate codecs. ProRes is a DCT scalar based intra-frame-only codec and is therefore simpler to decode than distribution-oriented formats like H.264. In 2018 Apple added a new "ProRes RAW" format (compressed Bayer-filter data) to Final Cut Pro X, after Blackmagic Design implemented compressed Bayer as "CinemaDNG 3:1" and "CinemaDNG 4:1" in their cameras and DaVinci Resolve.


  • FFmpeg Codecs Documentation - FFmpeg contains 2 ProRes encoders, the prores-aw and prores-ks encoder. The used encoder can be chosen with the -vcodec option.
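
For example (filenames hypothetical), prores_ks with -profile:v 3 produces ProRes 422 HQ, with PCM audio as is conventional in post-production:

ffmpeg -i input.mp4 -c:v prores_ks -profile:v 3 -c:a pcm_s16le output.mov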



  • FFmpeg - Convert to Apple ProRes 422 / 4444 Simplified - OTTVerse - In this article, we will learn how to use FFmpeg to convert videos to Apple ProRes 422 and 444 formats. To try this tutorial on a Windows machine, you can download an FFmpeg static build from OTTVerse’s FFmpeg build page. Instructions for FFmpeg installation for macOS and Linux can be found in our FFmpeg installation guide.

DXV

Hap

  • http://vdmx.vidvox.net/blog/hap - a new video codec designed for digital video artists and VJs with the goal of achieving higher-performance video playback in OpenGL-based applications such as VDMX.
    • https://github.com/Vidvox/hap - a family of video codecs which perform decompression using a computer's graphics hardware, substantially reducing the CPU usage necessary to play video — this is useful in situations where CPU power is a limiting factor, such as when working with multiple high resolution videos in real-time.

Huffyuv

Lagarith

Decoding

VDPAU

  • VDPAU - the Video Decode and Presentation API for UNIX. It provides an interface to video decode acceleration and presentation hardware present in modern GPUs.
  • Hardware decoding of MPEG-1, MPEG-2, MPEG-4 part 2, H.264, VC-1, and DivX 4 and 5 bitstreams on supported hardware, with a bitstream (VLD) level API.
  • Video post-processing including advanced deinterlacing algorithms, inverse telecine, noise reduction, configurable color space conversion, and procamp adjustments.
  • Sub-picture, on-screen display, and UI element compositing.
  • Direct rendering timestamp-based presentation of final video frames, with detailed frame delivery reports.


  • https://en.wikipedia.org/wiki/VDPAU - a royalty-free application programming interface as well as its implementation as free and open-source library (libvdpau) distributed under the MIT License. VDPAU is also supported by Nvidia. The VDPAU interface is to be implemented by device drivers, such as Nvidia GeForce driver, nouveau, amdgpu, to offer end-user software, such as VLC media player or GStreamer, a standardized access to available video decompression acceleration hardware in the form of application-specific integrated circuit (ASIC) blocks on graphics processing units (GPU), such as Nvidia's PureVideo or AMD's Unified Video Decoder and make use of it. VDPAU is targeted at Unix-like operating systems (including Linux, FreeBSD, Solaris).


Conversion / transforming


FFmpeg

See also Visuals#ffmpeg, Streaming#ffmpeg


  • FFmpeg - a complete, cross-platform solution to record, convert and stream audio and video.
# Basic conversion; FFmpeg infers the output container and codecs from the file extension:
ffmpeg -i input.mp4 output.avi




  • FFmpeg.guide - A simple GUI tool to create complex FFmpeg filtergraphs quickly and correctly, without having to mess with the cumbersome filter syntax


  • ffmprovisr - helpful information about how to perform a wide variety of tasks using FFmpeg.


  • https://github.com/dnaka91/ffprog - runs FFmpeg with nice progress visualization. This project is a small tool, mostly for myself, to better visualize the progress of FFmpeg while it's running. It is specifically built for the goal of re-encoding audio/video with a focus on reducing the output size.


  • https://github.com/Raveler/ffmpeg-cpp - A clean C++ wrapper around the ffmpeg libraries which can be used in any C++ project or C# project (with DllImport or CLR). Makes the most commonly used functionality of FFmpeg easily available for any C++ projects with an easy-to-use interface. The full power of FFmpeg compacted in 10 lines of C++ code: if this sounds useful to you, read on!






# Losslessly cut a 3-minute clip starting 30 seconds in, keeping all streams (-map 0):
ffmpeg -i in.opus -ss 00:00:30.0 -t 00:03:00 -c copy -map 0 out.opus
# Pair a still image with an audio track to make a video, ending with the shorter input:
ffmpeg -loop 1 -r 2 -i image.jpg -i audio.mp3 -c:v libx264 -preset medium -tune stillimage -crf 18 -c:a copy -shortest -pix_fmt yuv420p output.mkv







  • QWinFF is a GUI for FFmpeg, a powerful command-line media converter. FFmpeg can read audio and video files in various formats and convert them into other formats. QWinFF provides an intuitive graphical interface and a rich set of presets to help you use ffmpeg easily without having to type a single command. Advanced users can also adjust conversion parameters like bitrate and sample rate in detail.


  • qctools - a free and open source software tool that helps users analyze and understand their digitized video files through use of audiovisual analytics and filtering. QCTools is funded by the National Endowment for the Humanities and the Knight Foundation, and is developed by the Bay Area Video Coalition.

mpgtx

  • mpgtx - a command line MPEG audio/video/system file toolbox, that slices and joins audio and video files, including MPEG1, MPEG2 and MP3. mpgtx was designed with the good old Unix philosophy in mind: do few things but do them well, and provide the end user an austere yet powerful command line interface. At least this was the goal. Reality is different, and it's available for you to try. mpgtx is released under the GPL license.

avconv

  • avconv - a very fast video and audio converter that can also grab from a live audio/video source. It can also convert between arbitrary sample rates and resize video on the fly with a high quality polyphase filter. avconv reads from an arbitrary number of input "files" (which can be regular files, pipes, network streams, grabbing devices, etc.), specified by the -i option, and writes to an arbitrary number of output "files", which are specified by a plain output filename. Anything found on the command line which cannot be interpreted as an option is considered to be an output filename.

HandBrake

  • HandBrake - a tool for converting video from nearly any format to a selection of modern, widely supported codecs. Reasons you’ll love HandBrake: Convert video from nearly any format, Free and Open Source, Multi-Platform (Windows, Mac and Linux)

Mobile Media Converter

  • Mobile Media Converter - a free audio and video converter for converting between popular desktop audio and video formats like MP3, Windows Media Audio (wma), Ogg Vorbis Audio (ogg), Wave Audio (wav), MPEG video, AVI, Windows Media Video (wmv), Flash Video (flv), QuickTime Video (mov) and commonly used mobile device/phone formats like AMR and 3GA audio (amr) and 3GP video. iPod/iPhone- and PSP-compatible MP4 video is supported. Moreover, you can remove and add new formats or devices through the internet.

Transmageddon

  • Transmageddon - a video transcoder for Linux and Unix systems built using GStreamer. It supports almost any format as its input and can generate a very large host of output files. The goal of the application is to help people create the files they need to play on their mobile devices, and to let people not hugely experienced with multimedia generate a multimedia file without having to resort to command line tools with ungainly syntaxes. For information about the latest releases check the NEWS file. To see what translations are available for Transmageddon, check out the Transmageddon translation report from the GNOME translation project.

transcode

  • transcode - a text-console utility for video stream processing, running on a platform that supports shared libraries and threads. Decoding and encoding is done by loading modules that are responsible for feeding transcode with raw video/audio streams (import modules) and encoding the frames (export modules). It supports elementary video and audio frame transformations, including de-interlacing or fast resizing of video frames and loading of external filters.

videotrans

  • videotrans - a set of scripts that allow its user to reformat existing movies into the VOB format that is used on DVDs. Furthermore, videotrans supplies programs which allow its user to create a complete DVD, including selection menus which allow the viewer to choose which movie he/she wants to see, if more than one movie fits onto a DVD.

MEncoder

Transcoder

Constatus

  • https://github.com/folkertvanheusden/constatus - monitors, converts, transforms, filters and multiplexes video-feeds. Feeds like IP-cameras, "video4linux"-devices, pixelflut, VNC-servers, Raspberry Pi-cameras, etc. It is an NVR (network video recorder) with special features.

Vitomu

  • https://github.com/digimezzo/vitomu - stands for VIdeo TO MUsic converter. It allows easy conversion of online and offline videos to audio files. Just drop a URL or a local video file on the target to start the conversion. The current version supports dropping URLs of YouTube videos, and dropping local MKV and MP4 files.

Distributed

  • Encoding, Fast and Slow: Low-Latency Video Processing Using Thousands of Tiny Threads - First, we designed a framework to run general-purpose parallel computations on a commercial “cloud function” service. The system starts up thousands of threads in seconds and manages inter-thread communication. Second, we implemented a video encoder intended for fine-grained parallelism, using a functional-programming style that allows computation to be split into thousands of tiny tasks without harming compression efficiency. Our design reflects a key insight: the work of video encoding can be divided into fast and slow parts, with the “slow” work done in parallel, and only “fast” work done serially.

Info

MediaInfo

MediaConch

QCTools

  • QCTools - a software tool that helps users analyze and understand their digitized video files through use of audiovisual analytics and filtering. Funded by the National Endowment for the Humanities and the Knight Foundation; designed and led by Dave Rice and the Bay Area Video Coalition; developed by MediaArea, Fabio Utzig, Alexander Ivash. MediaArea was involved in the initial development and provides binaries for all platforms.

Capture and recording

  • https://en.wikipedia.org/wiki/Video4Linux - a collection of device drivers and an API for supporting realtime video capture on Linux systems. It supports many USB webcams, TV tuners, and related devices, standardizing their output, so programmers can easily add video support to their applications. MythTV, tvtime and Tvheadend are typical applications that use the V4L framework.
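
For illustration, assuming a camera at /dev/video0: v4l2-ctl (from v4l-utils) enumerates devices, and FFmpeg can capture from them directly:

v4l2-ctl --list-devices
ffmpeg -f v4l2 -framerate 30 -video_size 1280x720 -i /dev/video0 -c:v libx264 webcam.mp4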



mjpeg

  • The mjpeg programs are a set of tools that can do recording of videos and playback, simple cut-and-paste editing and the MPEG compression of audio and video under Linux.

Vrecord

See also Screencast

Webcam

madcam

  • https://github.com/Sonicrobots/madcam - an openFrameworks (http://www.openframeworks.cc) project for using multiple USB webcams (e.g. PS3 Eye) simultaneously to display live video material. It's controllable by MIDI and OSC! We wrote it as a live VJ extension for the robotics media art installation project "Glitchrobot" by SonicRobots. The goal is to set up a VJ rig based on webcam material. So far we have used 9-10 webcams, but the number can probably be extended. The webcam content is displayed on one screen, in different tilings and with FX. Tiling and FX can be controlled by MIDI and OSC.

terminal-webcam

DroidCam

Spydroid

ipwebcam-gst

PulseVideo

  • PulseVideo - Allows multiplexing access to webcams such that more than one application can read video from a single piece of hardware at a time.

webcam-filters


bokeh-camera


scanseqjs

BORIS

  • BORIS - easy-to-use event-logging software for video/audio coding and live observations. BORIS is free and open-source software available for GNU/Linux, Windows and macOS.

Motion detection

  • Constatus - monitors cameras. If it sees anything moving, it can start recording, and it stops recording when the motion stops. It can monitor "video4linux" devices (e.g. webcams and similar devices connected directly to a Linux system), MJPEG, RTSP and JPEG cameras. While Constatus monitors the camera, you can use the built-in webserver (and VNC server) to view the video stream concurrently. It can also record all video and/or timelapse videos.


Glitch




  • https://github.com/Akascape/FF-Dissolve-Glitch - a GUI-based glitch tool that uses FFmpeg to create awesome and weird motion-interpolated glitches in videos. I call it FF dissolve glitch because it creates kinda dissolving pixel-type looks.


Scripting

AviSynth

  • AviSynth - a powerful tool for video post-production. It provides ways of editing and processing videos. AviSynth works as a frameserver, providing instant editing without the need for temporary files. AviSynth itself does not provide a graphical user interface (GUI), but instead relies on a script system that allows advanced non-linear editing. While this may at first seem tedious and unintuitive, it is remarkably powerful and is a very good way to manage projects in a precise, consistent, and reproducible manner. Because text-based scripts are human readable, projects are inherently self-documenting. The scripting language is simple yet powerful, and complex filters can be created from basic operations to develop a sophisticated palette of useful and unique effects.

VapourSynth

  • VapourSynth - an application for video manipulation. Or a plugin. Or a library. It's hard to tell because it has a core library written in C++ and a Python module to allow video scripts to be created. It came to be when I started thinking about alternative designs for Avisynth, and most of it was written over a 3-month period. The software has been heavily inspired by Avisynth and aims to be a 21st-century rewrite, taking advantage of the advancements computers have made since the late 90s. The main features compared to Avisynth are: multithreaded - frame-level multithreading; generalized colorspaces - new colorspaces can be specified at runtime; per-frame properties - additional metadata can be attached to frames; Python-based - the scripting part is implemented as a Python module, so you don't have to learn a special language; support for video with format changes - some video just can't stick to one format or frame size, and VapourSynth can handle any kind of change.
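
A VapourSynth script is ordinary Python. The following minimal sketch assumes the ffms2 source plugin is installed and that an input file named input.mkv exists; it decodes, crops and resizes a clip, then hands it to the host:

 import vapoursynth as vs

 core = vs.core
 clip = core.ffms2.Source(source='input.mkv')      # decode through FFMS2
 clip = core.std.CropRel(clip, top=70, bottom=70)  # strip letterbox bars
 clip = core.resize.Bicubic(clip, width=1280, height=540)
 clip.set_output()                                 # expose the result to the host

Such a script can then be rendered headlessly with vspipe and piped into an encoder.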


Video Language

  • Video Language - a language for making movies. It combines the power of a traditional video editor with the capabilities of a full programming language. Video integrates with the Racket ecosystem and extensions for DrRacket to transform it into a non-linear video editor.
  • The Video Language Guide - a DSL for editing...videos. It aims to merge the capabilities of a traditional graphical non-linear video editor (NLVE) with the power of a programming language. The current interface is similar to that of HTML, LaTeX, or Scribble. VidLang comes with a prototype graphical NLVE for DrRacket. This extension is still experimental and highly unstable. The next version of Video will focus on this extension and make the tool significantly cleaner and more stable.

MoviePy

  • MoviePy - a Python module for video editing, which can be used for basic operations (like cuts, concatenations, title insertions), video compositing (a.k.a. non-linear editing), video processing, or to create advanced effects. It can read and write the most common video formats, including GIF.
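
For instance, a basic cut-and-join takes only a few lines. This is a minimal sketch assuming the classic moviepy.editor interface of MoviePy 1.x; the file names are hypothetical:

 from moviepy.editor import VideoFileClip, concatenate_videoclips

 clip1 = VideoFileClip("a.mp4").subclip(0, 5)     # keep the first five seconds
 clip2 = VideoFileClip("b.mp4").subclip(10, 15)   # keep seconds 10-15
 final = concatenate_videoclips([clip1, clip2])   # join the two cuts
 final.write_videofile("out.mp4")                 # encode the result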

foleys_video_engine

Production

  • Video Production Stack Exchange - a question and answer site for engineers, producers, editors, and enthusiasts spanning the fields of video and media creation.



Shooting



Dubbing


Editing / compositing

QtlMovie

  • QtlMovie - a simple, specialized graphical front-end to FFmpeg and other free media tools. Why another front-end to FFmpeg? Because most of these general-purpose front-ends are complex. QtlMovie has been designed with simplicity in mind for just a few specific common operations. QtlMovie performs a few repetitive, specialized tasks which proved to be difficult or boring with other tools. In short, I developed QtlMovie primarily for my own usage, to automate tasks which took me too long, and I now share it.


Avidemux

  • Avidemux is a free video editor designed for simple cutting, filtering and encoding tasks. It supports many file types, including AVI, DVD compatible MPEG files, MP4 and ASF, using a variety of codecs. Tasks can be automated using projects, job queue and powerful scripting capabilities.


OpenShot


Pitivi

  • Pitivi Video Editor - Built upon the GStreamer multimedia framework, Pitivi aims to be an intuitive and flexible video editor that integrates well with GNOME and can appeal to newbies and professionals alike.


Shotcut

  • Shotcut - a free, open source, cross-platform video editor.

VidCutter

Kdenlive

  • Kdenlive - an intuitive and powerful multi-track video editor, incorporating the most recent video technologies, released as free software (GPL). Using Kdenlive is investing in a community-driven project which aims to establish relationships between people in order to build the best video tools.

Flowblade

  • Flowblade - a multitrack non-linear video editor for Linux released under the GPL 3 license. Flowblade provides powerful tools to mix and filter video and audio for a fast, precise and robust editing experience. Flowblade employs a film-style insert editing model as its workflow. In insert editing, clips are generally placed tightly after other clips when they are inserted on the timeline. Edits are fine-tuned by trimming the in and out points of clips or by cutting and deleting parts of clips.

VideoLAN Movie Creator

  • VideoLAN Movie Creator - a non-linear editing software for video creation based on libVLC and running on Windows, Linux and Mac OS X! It is free software distributed under the terms of the GPLv2. Alpha.


Cinelerra

  • Cinelerra - a free Linux-based audio, images, and special-effects system created by Adam Williams, originally uploaded in the fall of 2002 and improved numerous times since. If you are new to Cinelerra and would like to see examples, you can view them on YouTube and Vimeo. Cinelerra has had many downloads and there are thousands of users worldwide. It has been exhibited at NAB, IBC, Inter BEE, SIGGRAPH and LinuxWorld, and has been demonstrated and used at most major motion picture studios. Cinelerra has many components; if you are new, take your time, as the whole system is complex and powerful, and it takes time to fully comprehend how powerful the program can be when configured correctly. It is our goal to make it the most powerful and accessible media production system in the world. Cinelerra's managers and directors have worked professionally in artistic expression, filmmaking, technology development, software and hardware design consulting, and production services and systems development for well over a decade. Adam Williams and Mike Collins have been working together developing and promoting Linux editing systems since 2000, first with Broadcast 2000 and then with Cinelerra starting in 2001. The entire team has been devoted to open systems and advanced Linux production technologies. Collectively, our current and past clients and former employers include most of the Hollywood motion picture and post-production studios, as well as government services contractors, computer software and hardware manufacturers, and systems integration companies.


Lightworks

LiVES

  • LiVES - a Video Editing System. It is designed to be simple to use, yet powerful. It is small in size, yet it has many advanced features. LiVES mixes realtime video performance and non-linear editing in one professional-quality application. It will let you start editing and making video right away, without having to worry about formats, frame sizes, or framerates. It is a very flexible tool which is used by both professional VJs and video editors - mix and switch clips from the keyboard, use dozens of realtime effects, trim and edit your clips in the clip editor, and bring them together using the multitrack timeline. You can even record your performance in real time, and then edit it further or render it straight away.

For the more technically minded, the application is frame and sample accurate, and it can be controlled remotely or scripted for use as a video server. And it supports all of the latest free standards. LiVES is good enough to be used as a VJ tool for professional performances, and as a video editor is capable of creating dazzling clips in a huge variety of formats. For a full list of the features of LiVES, look at the features page.



Blender





Natron

MediaEditor

TuttleOFX

ButtleOFX

  • ButtleOFX - a simple, user-friendly and open source compositing software. It is based on TuttleOFX, an open source, modular and sustainable image processing architecture.

ShuttleOFX

  • https://github.com/shuttleofx/ShuttleOFX - designed to share OpenFX plugins between plugin creators (industry, developers, researchers, students, ...) and graphic artists. Following the TuttleOFX initiative, the ambition is to promote a standard way to create image-processing algorithms usable across software.

KALISCOPE

  • KALISCOPE - a set of tools and plugins to create a professional pipeline for cinema work. It can also be used to build a telecine that converts analog (silver halide) film into digital movies. The goal is to produce reliable, sophisticated tools to help artists in their analog/digital film projects while respecting the quality of their precious images.

ffDiaporama

  • http://ffdiaporama.tuxfamily.org/ - ffDiaporama is an application for creating video sequences consisting of titles (fixed or animated), images or photos (fixed or animated), movie clips, and music. These sequences are assembled into a slide show by means of transitions to produce complete videos.

Advene

  • Advene - (Annotate Digital Video, Exchange on the NEt) is an ongoing project in the LIRIS laboratory (UMR 5205 CNRS) at University Claude Bernard Lyon 1. It aims at providing a model and a format to share annotations about digital video documents (movies, courses, conferences...), as well as tools to edit and visualize the hypervideos generated from both the annotations and the audiovisual documents. Teachers, moviegoers, etc. can use them to exchange multimedia comments and analyses about video documents. The project also aims at studying the way that communities of users (teachers, moviegoers, students...) will use these self-publishing tools to share their audiovisual "readings", and to envision new editing and viewing interfaces for interactive comment and analysis of audiovisual content. For more details, see Advene: active reading through hypervideo or other articles in our publication list.

Komposition

blind

  • blind - a suckless collection of command-line video editing utilities. blind uses a raw video format with a simple container. A file begins with a plain-text line containing the number of frames, the width, the height, and the pixel format, all separated by a single regular blank space, without any leading or trailing white space. After this line, which ends with an LF, there is a NUL byte followed by the 4 characters "uivf" (unportable, interim video format). This header is followed by the video, frame by frame, with row-major frames. Pixels are independently encoded, and are encoded as unscaled CIE XYZ with non-premultiplied alpha and without any transfer function, with values stored in native doubles or optionally in native floats. These two configurations are the only options, but the container format is designed so this can be changed arbitrarily in the future.
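
From that description, the header is easy to emit. The following is an unofficial Python sketch; the pixel-format token and file name are illustrative, not taken from blind's documentation:

 def write_uivf_header(f, frames, width, height, pixfmt):
     # plain-text line: frames, width, height, pixel format,
     # single-space separated, no leading/trailing blanks, LF-terminated
     f.write(f"{frames} {width} {height} {pixfmt}\n".encode("ascii"))
     # NUL byte, then the magic "uivf"
     f.write(b"\x00uivf")
     # raw row-major frames (native doubles or floats) would follow

 with open("clip.uivf", "wb") as f:
     write_uivf_header(f, frames=25, width=640, height=360, pixfmt="xyza")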

dvbcut

  • dvbcut is a Qt application that allows you to select certain parts of an MPEG transport stream (as received via Digital Video Broadcasting, DVB) and save these parts into a single MPEG output file. It follows a "keyhole surgery" approach where the input video and audio data is mostly kept unchanged, and only very few frames at the beginning and/or end of the selected range are re-encoded in order to obtain a valid MPEG file.

MLT


  • http://sourceforge.net/projects/smoothlife/ - Conway's Game of Life is generalized to a continuous domain. OpenGL and GLSL shaders are used for real-time 2D and 3D graphics. Time stepping is done via a real to complex FFT based convolution operation. There are also two deliberately simple test implementations in FreeBasic and Matlab. A 2D version on a sphere shows that it is really not dependent on the underlying grid.


Xjadeo

  • Xjadeo - a software video player that displays a video clip in sync with an external time source (MTC, LTC, JACK transport). Xjadeo is useful in soundtrack composition, video monitoring, or any task that requires synchronizing movie frames with external events.

Olive

Chestnut

jumpcutter

DJV

  • DJV - provides professional review software for VFX, animation, and film production. Play back high-resolution, high-bit-depth image sequences and videos, with frame-accurate control and color management. Available for Linux, Apple macOS, and Microsoft Windows. Source code is provided under a BSD-style open source license.

GNEVE

  • GNEVE - GNU Emacs Video Editing, an extension for GNU Emacs to provide flexible EDL (Edit Decision List) video editing facilities as part of a free, multimedia production toolchain. GNEVE is under active development, currently with three developers working towards a first beta release. GNEVE now depends on the freely available MPlayer software for previewing, and the latest, free MLT framework/video editing server for EDL processing. The software is keystroke driven, with flexible preview and render functionalities. GNEVE is written in Emacs Lisp, and runs on GNU/Linux.

Brave

  • https://github.com/bbc/brave - Basic Real-time AV Editor - allowing you to preview, mix, and route live audio and video streams on the cloud

video-splitter

LosslessCut

VidCutter

tsv_edl.vim

Colour

  • OpenColorIO - a complete color management solution geared towards motion picture production with an emphasis on visual effects and computer animation. OCIO provides a straightforward and consistent user experience across all supporting applications while allowing for sophisticated back-end configuration options suitable for high-end production usage. OCIO is compatible with the Academy Color Encoding Specification (ACES) and is LUT-format agnostic, supporting many popular formats. OpenColorIO is released as version 1.0 and has been in development since 2003. OCIO represents the culmination of years of production experience earned on such films as Spider-Man 2 (2004), Surf's Up (2007), Cloudy with a Chance of Meatballs (2009), Alice in Wonderland (2010), and many more. OpenColorIO is natively supported in commercial applications like Katana, Mari, Nuke, Silhouette FX, and others.

Timeline

OpenTimelineIO

  • OpenTimelineIO - an API and interchange format for editorial cut information. You can think of it as a modern Edit Decision List (EDL) that also includes an API for reading, writing, and manipulating editorial data. It also includes a plugin system for translating to/from existing editorial formats as well as a plugin system for linking to proprietary media storage schemas. OTIO supports clips, timing, tracks, transitions, markers, metadata, etc., but not embedded video or audio. Video and audio media are referenced externally. We encourage 3rd-party vendors, animation studios and visual effects studios to work together as a community to provide adaptors for each video editing tool and pipeline.
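
As a sketch of the Python API (assuming the opentimelineio package; clip names and file paths are illustrative), building and serializing a one-clip timeline looks roughly like this:

 import opentimelineio as otio

 timeline = otio.schema.Timeline(name="demo")
 track = otio.schema.Track(name="V1")
 timeline.tracks.append(track)

 track.append(otio.schema.Clip(
     name="shot_010",
     media_reference=otio.schema.ExternalReference(target_url="shot_010.mov"),
     source_range=otio.opentime.TimeRange(
         start_time=otio.opentime.RationalTime(0, 24),
         duration=otio.opentime.RationalTime(48, 24),  # 48 frames at 24 fps
     ),
 ))

 otio.adapters.write_to_file(timeline, "demo.otio")  # serialize to .otio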

Effects

  • Open Effects Association (OFX), a non-profit organization, develops and promotes open standards across the visual effects community. The Association's initial focus is to improve the OpenFX image processing plug-in standard. The goal of this standard is to reduce the development effort and support needed for plug-ins across different compositing and editing host platforms. The founding members come from Assimilate, Autodesk, Digieffects, FilmLight, The Foundry, GenArts and RE:Vision FX. These are companies which have helped artists create ground-breaking VFX shots on nearly every blockbuster movie.


  • OpenFX - an open standard for visual effects plug-ins. It allows plug-ins written to the standard to work on any application that supports the standard. This avoids the current per-application fragmentation of plug-in development and support, which causes much heartache to everyone: plug-in developers, application developers and end users alike.


FreeFrame

  • FreeFrame - Open Realtime Video Effects, provides a framework for developing video effects plugins and hosts on Windows, Linux and Mac OS X. Sample code is provided which compiles in Delphi, MSVC, GCC, Project Builder and Kylix. The plugins are initially intended for use in VJ ('Visual Jockey') applications. The project was initiated by developers from VJamm, Resolume and EffecTV. FreeFrame support is now available for a range of video applications. Open - i.e. not governed by a commercial organisation. Cross-platform and cross-compiler. Standardised. Technically independent.



  • FFRend - a renderer for FreeFrame plugins. It allows you to chain any number of plugins together, automate their parameters using oscillators, and record the output to an AVI file. The input is a video (AVI/MPG), still image (BMP/JPG/GIF), or source plugin. Most VJ software supports FreeFrame and can record its output, so what makes FFRend different? FFRend is optimized for content generation, whereas VJ software is typically optimized for live performance. The key difference is that FFRend never drops frames, even when your project is too CPU-intensive to be rendered in real time. It's also possible to perform with FFRend. The user interface is modular, dual-monitor use is fully supported, and all parameters and oscillator settings can be controlled via MIDI. Version 2 also supports parallel processing on multi-core CPUs, for greatly improved frame rates. FFRend supports plugin authoring, which means you can export an FFRend project as a FreeFrame plugin. The exported plugin is called a metaplugin, because it uses other plugins as components. A metaplugin can be used in any FreeFrame-compatible host application, and behaves as if you were running the equivalent project in FFRend. FFRend is free, open-source software for Windows XP/Vista/Win7. It includes comprehensive help, an installer, and a simple demo project. If you want to render complex effects at high resolution, using chains of automated FreeFrame plugins, FFRend is your friend.


  • https://github.com/leadedge/ShaderMaker - a source code template to build FreeframeGL 2.1 plugins for Resolume from GLSL Sandbox or ShaderToy shaders. During compilation, shader and image files are loaded into resources and saved with the plugin. The plugin then accesses these resources when it loads. ShaderToy multiple buffers are not supported. This is a Windows project for Visual Studio 2017.



Frei0r

  • Frei0r - a minimalistic plugin API for video effects. The main emphasis is on simplicity for an API that will round up the most common video effects into simple filters, sources and mixers that can be controlled by parameters.

HasciiCam

  • https://github.com/jaromil/HasciiCam - HasciiCam makes it possible to have live ASCII video on the web. It captures video from a TV card and renders it into ASCII, formatting the output into an HTML page with a refresh tag, into a live ASCII window, or into a simple text file, giving anybody with a bttv card, a Linux box and a cheap modem line the possibility to show a live ASCII video feed that is browsable without any need for plugins, Java, etc.

AviGlitch

  • AviGlitch - A Ruby library to destroy your AVI files.

Speed

slowmoVideo

  • https://github.com/slowmoVideo/slowmoVideo - an open-source program that creates slow-motion videos from your footage. But it does not simply make your videos play at 0.01× speed. You can smoothly slow down and speed up your footage, optionally with motion blur. How does slow motion work? slowmoVideo tries to find out where pixels move in the video (this information is called optical flow), and then uses this information to calculate the additional frames between the ones recorded by your camera.

butterflow

Futatabi

  • Futatabi - multi-camera instant replay system with slow motion. It supports efficient real-time interpolation using optical flow, making for full-framerate output without having to use special high-framerate cameras. (Of course, interpolation can only take you so far, and the results will depend on the type of content.) Futatabi is currently in alpha. It is distributed and built together with Nageru.

MVTools

  • https://avisynth.nl/index.php/MVTools - the MVTools plugin for AviSynth 2.6 is a collection of functions for estimation and compensation of object motion in video clips. Motion compensation may be used for strong temporal denoising, advanced framerate conversions, image restoration and other tasks.

SVP

  • SVP - 60 fps / 120 fps HFR motion interpolation for Windows, macOS in mpv, VLC, Plex - $

motioninterpolation.vpy

  • motioninterpolation.vpy - realtime motion-interpolating playback in mpv. SVP now has a free (but proprietary) version of their software, which includes a GPU port of the MVTools functions, making this much less resource-intensive.

VFX / 3D










  • Gaffer is a general purpose node-based application designed for use in the visual effects industry. It provides basic tools for procedural scene generation, shader authoring, rendering, and image compositing.


  • appleseed is a modern, open source, physically-based rendering engine for VFX and animation.


  • PixelStruct is an open-source tool for visualizing 3D scenes reconstructed from photographs. It uses the open-source structure-from-motion system Bundler, which is based on the same research as Microsoft Live Labs Photosynth.



  • Electric Sheep - a collaborative abstract artwork founded by Scott Draves. It's run by thousands of people all over the world, and can be installed on almost anything. When these computers "sleep", the Electric Sheep comes on and the computers communicate with each other by the internet to share the work of creating morphing abstract animations known as "sheep". Anyone watching one of these computers may vote for their favorite animations using the keyboard. The more popular sheep live longer and reproduce according to a genetic algorithm with mutation and cross-over. Hence the flock evolves to please its global audience. The Electric Sheep are a free and open source service. The Gold Sheep are an HD premium version.


  • vid.stab - Transcode video stabilization plugin. Imagine you captured a nice video with your camcorder, compact camera or even cell phone while skiing, cycling or doing whatever sports, and the video is basically just jiggled. Modern cameras come with hardware stabilisation; however, this does not work if you have really strong vibrations - on the contrary, sometimes these mechanisms start to oscillate. Vid.Stab is your friend in this matter. It is integrated in transcode, ffmpeg (check also static builds) and kdenlive. It is designed to stabilize even strongly jiggled clips.


  • https://github.com/GafferHQ/gaffer - a VFX application that enables look developers, lighters, and compositors to easily build, tweak, iterate, and render scenes. Gaffer supports in-application scripting in Python and OSL, so VFX artists and technical directors can design shaders, automate processes, and build production workflows. An open-source project, Gaffer also provides an application framework for studios to design and create their own VFX production pipeline tools. Built using the Cortex libraries, Gaffer ships with a multi-threaded, deferred evaluation engine and a flexible user interface framework.

Upscaling



Machine learning

  • https://en.wikipedia.org/wiki/Deepfake - a portmanteau of "deep learning" and "fake", is an artificial intelligence-based human image synthesis technique. It is used to combine and superimpose existing images and videos onto source images or videos.









  • https://github.com/iperov/DeepFaceLab - a tool that utilizes machine learning to replace faces in videos. Includes a prebuilt, ready-to-work standalone binary for Windows 7/8/10 (see readme.md).





Cinema

Restoration

Streaming

See Streaming, Sharing#Streaming

TV

See TV

Hardware





Camera

Subtitles


Creating

  • https://en.wikipedia.org/wiki/Subtitle_editor - a type of software used to create and edit subtitles to be superimposed over, and synchronized with, video. Such editors usually provide video preview, easy entering/editing of text, start, and end times, and control over text formatting and positioning. Subtitle editors are available as standalone applications, as components of many video editing software suites, and as web applications.



Subtitle Edit

  • Subtitle Edit - a free (open source) editor for video subtitles - a subtitle editor :) With SE you can easily adjust a subtitle in several different ways if it is out of sync with the video. You can also use SE for making new subtitles from scratch (do use the time-line/waveform/spectrogram) or for translating subtitles.


Subtitle Composer


autosub

Automatic YouTube subtitle generation

Alignment




to sort












  • https://github.com/iCell/srt-linter - a simple command-line app written in Go; the goal is to make the work more efficient for people who frequently edit subtitles. You can use it to find format errors within .srt files, such as: linting the subtitle numbers to ensure they are incremented, linting the subtitle timeline to ensure it is incremented, and linting the file to ensure there are no extra blank lines.
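
The checks are simple to picture. Here is a rough Python sketch of the same idea - not srt-linter's actual Go implementation, and it ignores edge cases such as cue text that happens to be a bare number:

 import re, sys

 TIMELINE = re.compile(r"(\d+):(\d+):(\d+),(\d+) --> (\d+):(\d+):(\d+),(\d+)")

 def ms(h, m, s, frac):
     # convert an SRT timestamp to milliseconds
     return ((int(h) * 60 + int(m)) * 60 + int(s)) * 1000 + int(frac)

 expected, last_start = 1, -1
 with open(sys.argv[1], encoding="utf-8") as srt:
     for lineno, line in enumerate(srt, 1):
         line = line.strip()
         match = TIMELINE.match(line)
         if match:
             start = ms(*match.groups()[:4])
             if start < last_start:
                 print(f"line {lineno}: timecode not incremented")
             last_start = start
         elif line.isdigit():
             if int(line) != expected:
                 print(f"line {lineno}: cue number {line}, expected {expected}")
             expected += 1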



Finding


  • https://github.com/alexanderwink/subdl - subdl is a command-line tool to download subtitles from opensubtitles.org. Official support for subdl seems to have been dropped; this site is intended for the future use of subdl with community-driven support.



Web

  • WebVTT: The Web Video Text Tracks Format - This specification defines WebVTT, the Web Video Text Tracks format. Its main use is for marking up external text track resources in connection with the HTML <track> element. WebVTT files provide captions or subtitles for video content, and also text video descriptions [MAUR], chapters for content navigation, and more generally any form of metadata that is time-aligned with audio or video content.
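
For illustration, a minimal WebVTT file (cue text and timings made up) is just a WEBVTT header followed by blank-line-separated, timestamped cues:

 WEBVTT

 00:00:00.000 --> 00:00:04.000
 Hello, and welcome to the video.

 00:00:04.500 --> 00:00:08.000
 Cues are time-aligned with the audio/video content.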

Screenshots

  • movie thumbnailer (mtn) -- saves thumbnails (screenshots) of movie or video files to jpeg files. It uses FFmpeg's libavcodec as its engine, so it supports all popular codecs, e.g. divx h264 mpeg1 mpeg2 mp4 vc1 wmv xvid, and formats, e.g. .3gp .avi .dat .mkv .wmv. mtn is open source software. It should run on all operating systems which have gcc, FFmpeg, and GD, for example, Linux and Windows.

Video booth/wall



Amiga

  • http://www.8bitplus.co.uk/projects/amiga-1200-desktop/

Flash

Research


GPAC

  • GPAC - an open source multimedia framework developed for research and academic purposes, and used in many media production chains. The project covers various aspects of multimedia, from multimedia packaging and distribution using formats such as MP4 or MPEG-2 TS to interactive presentation technologies (graphics, animation and interactivity).


Advene

Security

Free

  • Videezy - Free Stock Footage Videos, 4k After Effects Templates and More!


Other


  • UbuWeb is a completely independent resource dedicated to all strains of the avant-garde, ethnopoetics, and outsider arts.


  • AVLD is a V4L kernel driver created to simulate a physical video recording device like a webcam or a video capture card. A non-exhaustive list of uses for AVLD: send a video via video-conferencing software like Skype, Gmail or Adobe Flash; capture your screen in realtime with specific software and share it via video-conferencing software; use it with image-processing (or other) software which has been designed to use a video device as input. In order to use it, you just have to send the video stream to it by using, for instance, mplayer or ffmpeg. That's all you have to do!


  • Fugio - an open visual programming system designed for rapid development of digital art and other projects. It is designed to enable anyone who wants to work creatively with technology to learn and experiment without needing to get bogged down in how ideas need to be implemented in programming. Fugio has been designed and coded by artist Alex May who uses it for his installations.
  • http://freeframe.sourceforge.net/





  • AVTK - a user-interface toolkit. The letters are short for the OpenAV ToolKit. It is focused on custom widgets, and on making it really easy to code slick user interfaces. Totally anti-aliased, and it redraws fast.