See also Automation etc.
- https://github.com/codeforequity-at/botium-speech-processing - a unified, developer-friendly API to the best available Speech-To-Text and Text-To-Speech services. 
- https://github.com/gooofy/py-nltools - A collection of basic python modules for spoken natural language processing
- ESPS - Entropic Signal Processing System, is a package of UNIX-like commands and programming libraries for speech signal processing. As a commercial product of Entropic Research Laboratory, Inc, it became extremely widely used in phonetics and speech technology research laboratories in the 1990s, in view of the wide range of functions it offered, such as get_f0 (for fundamental frequency estimation), formant (for formant frequency measurement), the xwaves graphical user interface, and many other commands and utilities. Following the acquisition of Entropic by Microsoft in 1999, Microsoft and AT&T licensed ESPS to the Centre for Speech Technology at KTH, Sweden, so that a final legacy version of the ESPS source code could continue to be made available to speech researchers. At KTH, code from the ESPS library (such as get_f0) was incorporated by Kåre Sjölander and Jonas Beskow into the Wavesurfer speech analysis tool. This is a very good alternative way to use many ESPS functions if you want a graphical user interface rather than scripting.
- https://github.com/jeremysalwen/ESPS - This archive contains source files from the ESPS toolkit.
Speech Research Tools
- https://sourceforge.net/projects/speechresearch - Software for speech research. It includes programs and libraries for signal processing, along with general purpose scientific libraries. Most of the code is in Python, with C/C++ supporting code. Also contains code releases corresponding to published papers.
HTK Speech Recognition Toolkit
- HTK Speech Recognition Toolkit - a portable toolkit for building and manipulating hidden Markov models. HTK is primarily used for speech recognition research, although it has been used for numerous other applications including research into speech synthesis, character recognition and DNA sequencing. HTK is in use at hundreds of sites worldwide. HTK consists of a set of library modules and tools available in C source form. The tools provide sophisticated facilities for speech analysis, HMM training, testing and results analysis. The software supports HMMs using both continuous density mixture Gaussians and discrete distributions and can be used to build complex HMM systems. The HTK release contains extensive documentation and examples.
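As a reminder of what HMM toolkits like HTK compute under the hood, here is a minimal forward-algorithm likelihood for a discrete HMM. This is a toy sketch in plain Python, not HTK code; HTK itself works with Gaussian-mixture emissions and C data structures.

```python
def forward_likelihood(obs, pi, A, B):
    """P(obs | model) for a discrete HMM via the forward algorithm.
    pi[i]: initial probability of state i, A[i][j]: transition
    probability i->j, B[i][o]: probability that state i emits symbol o."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]  # initialization
    for o in obs[1:]:  # induction over the remaining observations
        alpha = [B[j][o] * sum(alpha[i] * A[i][j] for i in range(n))
                 for j in range(n)]
    return sum(alpha)  # termination: sum over final states

# toy 2-state left-to-right model, observation alphabet {0, 1}
pi = [1.0, 0.0]
A = [[0.7, 0.3], [0.0, 1.0]]
B = [[0.9, 0.1], [0.2, 0.8]]
print(forward_likelihood([0, 1], pi, A, B))  # -> 0.279 (up to float rounding)
```

Training (Baum-Welch re-estimation, as HTK's HERest performs) builds on the same alpha recursion plus a symmetric backward pass.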
- speechrate - software for the analysis of speech: a script that automatically detects syllable nuclei in order to measure speech rate without the need for a transcription. Peaks in intensity (dB) that are preceded and followed by dips in intensity are considered potential syllable nuclei; peaks that are not voiced are subsequently discarded. The project page includes a worked example of how the script operates.
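The peak-and-dip heuristic described above can be sketched in a few lines of Python. This is a toy illustration, not the actual script; the intensity contour and voicing flags below are invented inputs.

```python
def syllable_nuclei(intensity_db, voiced, dip_threshold=2.0):
    """Return indices of intensity peaks that are preceded and followed
    by dips of at least dip_threshold dB and that fall in voiced frames."""
    nuclei = []
    for i in range(1, len(intensity_db) - 1):
        if intensity_db[i] <= intensity_db[i - 1] or intensity_db[i] <= intensity_db[i + 1]:
            continue  # not a local maximum
        # require a dip of at least dip_threshold dB on each side
        left_dip = intensity_db[i] - min(intensity_db[:i]) >= dip_threshold
        right_dip = intensity_db[i] - min(intensity_db[i + 1:]) >= dip_threshold
        if left_dip and right_dip and voiced[i]:
            nuclei.append(i)
    return nuclei

# two clear intensity peaks; the second falls in unvoiced frames
contour = [50, 55, 60, 54, 50, 58, 63, 57, 51]
voicing = [1,  1,  1,  1,  1,  0,  0,  0,  0]
print(syllable_nuclei(contour, voicing))  # -> [2]
```

Only the first peak (index 2) survives: the second peak at index 6 is discarded because it is unvoiced.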
- Higgins Annotation Tool - can be used to transcribe and annotate speech with one or more audio tracks (such as dialogue). Windows.
- https://github.com/Sleepwalking/SHIRO - Phoneme-to-speech alignment toolkit based on liblrhsmm
- https://github.com/wjassim/WARP-Q - code is to run the WARP-Q speech quality metric. WARP-Q (Quality Prediction For Generative Neural Speech Codecs) is an objective, full-reference metric for perceived speech quality. It uses a subsequence dynamic time warping (SDTW) algorithm as a similarity between a reference (original) and a test (degraded) speech signal to produce a raw quality score. It is designed to predict quality scores for speech signals processed by low bit rate speech coders.
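The dynamic time warping at the heart of WARP-Q's SDTW similarity can be illustrated with a minimal 1-D DTW distance. This is a toy version; the real metric warps subsequences of cepstral features, not raw samples.

```python
def dtw_distance(a, b):
    """Classic dynamic-programming DTW with absolute-difference cost."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # best of: insertion, deletion, match
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

# a time-stretched copy still matches perfectly
print(dtw_distance([1, 2, 3], [1, 2, 2, 3]))  # -> 0.0
```

Because the warping path may repeat frames, a degraded signal that is merely stretched or delayed incurs no penalty, which is exactly why DTW-style alignment suits codec-quality comparison.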
- https://github.com/rhasspy/phonemes2ids - Flexible tool for assigning integer ids to phonemes
- https://en.wikipedia.org/wiki/Speech_recognition - an interdisciplinary subfield of computational linguistics that develops methodologies and technologies that enable the recognition and translation of spoken language into text by computers. It is also known as automatic speech recognition (ASR), computer speech recognition or speech-to-text (STT). It incorporates knowledge and research from the fields of linguistics, computer science, and electrical engineering.
- https://github.com/Picovoice/speech-to-text-benchmark - a minimalist and extensible framework for benchmarking different speech-to-text engines. It has been developed and tested on Ubuntu using Python3. 
- CMUSphinx Open Source Speech Recognition
- https://github.com/cmusphinx/pocketsphinx - a lightweight speech recognition engine, specifically tuned for handheld and mobile devices, though it works equally well on the desktop
- https://github.com/benoitfragit/pocketVox - both an application and a library written in C by Benoit Franquet. It uses Pocketsphinx for voice recognition, done offline without an Internet connection. Its aim is to provide an efficient way to integrate voice recognition into the Linux desktop, and in particular to give visually impaired users (as the author is) a powerful tool to control their desktop.
- Audiogrep - transcribes audio files and then creates "audio supercuts" based on search phrases. It uses CMU Pocketsphinx for speech-to-text and pydub to stitch things together. 
Speech Signal Processing Toolkit
- Speech Signal Processing Toolkit (SPTK) - a suite of speech signal processing tools for UNIX environments, e.g., LPC analysis, PARCOR analysis, LSP analysis, PARCOR synthesis filter, LSP synthesis filter, vector quantization techniques, and extended versions of them, released under the Modified BSD license. SPTK was developed and used in the research group of Prof. Satoshi Imai (now retired) and Prof. Takao Kobayashi (currently with the Interdisciplinary Graduate School of Science and Engineering, Tokyo Institute of Technology) at the P&I laboratory, Tokyo Institute of Technology. A subset of tools was chosen and arranged for distribution by Prof. Keiichi Tokuda (currently with the Department of Computer Science and Engineering, Nagoya Institute of Technology) as coordinator, in cooperation with other collaborators (see "Acknowledgments" and "Who we are" in the README). The original source code was written by many people who took part in the activities of the research group, most of it by Takao Kobayashi (graphing, data processing, FFT, sampling rate conversion, etc.), Keiichi Tokuda (speech analysis, speech synthesis, etc.), and Kazuhito Koishida (LSP, vector quantization, etc.).
- https://github.com/julius-speech/julius - Open-Source Large Vocabulary Continuous Speech Recognition Engine
- YouTube: Emily Shea - "Perl Out Loud"
- Kaldi - a toolkit for speech recognition, intended for use by speech recognition researchers and professionals. 
- https://github.com/alphacep/vosk-api - Kaldi API for Android, Python and Node
- https://github.com/daanzu/kaldi-active-grammar - Python Kaldi speech recognition with grammars that can be set active/inactive dynamically at decode-time
- RWTH ASR - The RWTH Aachen University Speech Recognition System, a software package containing a speech recognition decoder together with tools for the development of acoustic models, for use in speech recognition systems. It has been developed by the Human Language Technology and Pattern Recognition Group at the RWTH Aachen University since 2001. Speech recognition systems developed using this framework have been applied successfully in several international research projects and corresponding evaluations.
- https://github.com/espnet/espnet - an end-to-end speech processing toolkit, mainly focused on end-to-end speech recognition and end-to-end text-to-speech. ESPnet uses Chainer and PyTorch as its main deep learning engines, and also follows Kaldi-style data processing, feature extraction/format, and recipes to provide a complete setup for speech recognition and other speech processing experiments.
- RETURNN (extensible training framework for universal recurrent neural networks) - a Theano/TensorFlow-based implementation of modern recurrent neural network architectures, optimized for fast and reliable training of recurrent neural networks in a multi-GPU environment.
- https://github.com/gooofy/zamia-speech/ - Open tools and data for cloudless automatic speech recognition
- Snowboy Hotword Detection - DNN based hotword and wake word detection toolkit
- voice2json - a collection of command-line tools for offline speech/intent recognition on Linux. It is free, open source, and supports 16 languages.
- https://github.com/fredvs/sak - Speecher Assistive Kit. With sak, your application becomes assistive directly, without changing anything in your code. sak uses the PortAudio and eSpeak Open Source Libraries.
- https://github.com/respeaker - To build voice interface objects
- https://github.com/respeaker/get_started_with_respeaker - This is the wiki of ReSpeaker Core V2, ReSpeaker Core and ReSpeaker Mic Array.
- https://github.com/respeaker/respeakerd - server application for the microphone array solutions of SEEED, based on librespeaker which combines the audio front-end processing algorithms.
- https://github.com/respeaker/respeaker_python_library - To build voice enabled objects/applications with Python and ReSpeaker
- https://github.com/respeaker/seeed-voicecard - 2 Mic Hat, 4 Mic Array, 6-Mic Circular Array Kit, and 4-Mic Linear Array Kit for Raspberry Pi
- https://github.com/HinTak/seeed-voicecard - an enhancement fork with the explicit aim of supporting current shipping Raspbian/Ubuntu kernels without requiring downgrading.
- https://github.com/Uberi/speech_recognition - Library for performing speech recognition, with support for several engines and APIs, online and offline.
- Sequence Modeling with CTC - A visual guide to Connectionist Temporal Classification, an algorithm used to train deep neural networks in speech recognition, handwriting recognition and other sequence problems. 
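The CTC decoding rule that guide illustrates (merge consecutive repeats, then drop the blank symbol) fits in a couple of lines; the blank token "-" below is an arbitrary choice of symbol.

```python
from itertools import groupby

def ctc_collapse(path, blank="-"):
    """Collapse a CTC alignment path into an output string:
    merge consecutive duplicate symbols, then remove blanks."""
    return "".join(symbol for symbol, _ in groupby(path) if symbol != blank)

print(ctc_collapse("aa-b-bb"))        # -> "abb"
print(ctc_collapse("hh-e-ll-ll-oo"))  # -> "hello"
```

Note that the blank is what allows genuine doubled letters: "ll-ll" collapses to "ll", whereas "llll" without an intervening blank would collapse to a single "l".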
- NICO toolkit - a general purpose toolkit for constructing artificial neural networks and training them with the back-propagation learning algorithm, mainly intended for (and originally developed for) speech recognition applications. The network topology is very flexible: units are organized in groups, and groups form a hierarchical structure, so groups can have sub-groups or other objects as members. This makes it easy to specify multi-layer networks with arbitrary connection structure and to build modular networks.
- https://github.com/mozilla/DeepSpeech - an open source Speech-To-Text engine, using a model trained by machine learning techniques based on Baidu's Deep Speech research paper. Project DeepSpeech uses Google's TensorFlow to make the implementation easier.
- Online speech recognition with wav2letter@anywhere - a fast, open source speech processing toolkit from the Speech team at Facebook AI Research built to facilitate research in end-to-end models for speech recognition. It is written entirely in C++ and uses the ArrayFire tensor library and the flashlight machine learning library for maximum efficiency.
- https://github.com/talonvoice/wav2train - automatically align transcribed audio and generate a wav2letter training corpus
- https://github.com/bytedance/neurst - aims at easily building and training end-to-end speech translation models, with a careful design for extensibility and scalability intended to make it easier for NLP researchers to get started. NeurST also allows researchers to train custom models for translation, summarization, and so on.
- https://github.com/key2miao/TSTNN - transformer based neural network for speech enhancement in time domain
- https://github.com/speechbrain/speechbrain - A PyTorch-based Speech Toolkit
- https://github.com/auspicious3000/SpeechSplit - provides a PyTorch implementation of SpeechSplit, which enables more detailed speaking style conversion by disentangling speech into content, timbre, rhythm and pitch.
- https://github.com/wenet-e2e/wenet - Production First and Production Ready End-to-End Speech Recognition Toolkit
- https://github.com/rhasspy/phonemes2ids - Useful for text to speech or speech to text applications where text is phonemized, converted to an integer vector, and then used to train a network.
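The kind of mapping phonemes2ids automates can be sketched as follows. This is a simplified illustration, not the library's actual API; the special tokens and function name are assumptions for the example.

```python
def phonemes_to_ids(utterances, special=("<pad>", "<bos>", "<eos>")):
    """Assign stable integer ids to phonemes in order of first appearance,
    reserving the lowest ids for special tokens (a common convention
    when the resulting vectors feed a neural network)."""
    table = {token: i for i, token in enumerate(special)}
    for utterance in utterances:
        for phoneme in utterance:
            table.setdefault(phoneme, len(table))
    return table

table = phonemes_to_ids([["h", "ə", "l", "oʊ"], ["l", "oʊ", "h"]])
print([table[p] for p in ["h", "ə", "l", "oʊ"]])  # -> [3, 4, 5, 6]
```

Keeping the table stable across runs matters: a model trained against one phoneme-to-id assignment cannot be used with a different one.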
- Simon - an open source speech recognition program that can replace your mouse and keyboard. The system is designed to be as flexible as possible and will work with any language or dialect.
- Blather - a speech recognizer that runs commands when a user speaks preset sentences.
- Xvoice - enables continuous speech dictation and speech control of most X applications. To convert users' speech into text it uses the IBM ViaVoice speech recognition engine, which is distributed separately. When in dictation mode, Xvoice passes this text directly to the currently focused X application. When in command mode, Xvoice matches the speech against predefined, user-modifiable key sequences or commands. For instance, "list" would match "ls -l" when commanding the console, so that when the user says "list", "ls -l" is sent to the console as if the user had typed it.
- CVoiceControl - a speech recognition system that enables a user to connect spoken commands to unix commands. It automagically detects speech input from a microphone, performs recognition on this input and - in case of successful recognition - executes the associated unix command.
- Perlbox Voice - a voice-enabled application to bring your desktop under your command. With a single word you can start your web browser, your favorite editor, or whatever you want. Perlbox Voice now has desktop plugins that allow you to control desktop functions: you can switch virtual desktops, invoke the desktop menu, switch wallpaper or lock the screen. This is command, and it's right on the tip of your tongue! Start with the tutorial, which has a lot of screenshots.
- https://github.com/arrdem/Voicely - A voice command interface for Linux, and a fork of the Voximp project
See also Effects#Noise reduction
- https://github.com/nay0648/unified2021 - a unified speech enhancement front-end for online dereverberation, acoustic echo cancellation, and source separation. Dereverberation (DR), acoustic echo cancellation (AEC), and blind source separation (BSS) are the three most important submodules in a speech enhancement front-end. In traditional systems the three submodules work independently and sequentially, each with its own signal model, objective function, and optimization policy. Although this architecture is flexible, it restricts speech enhancement performance, since each submodule's optimum cannot guarantee the entire system's global optimum. The accompanying paper derives a unified signal model combining DR, AEC, and BSS, solved with the online auxiliary-function based independent component/vector analysis (Aux-ICA/IVA) technique, giving a unified objective function and optimization policy; the performance improvement is verified in simulated experiments.
- https://github.com/jackhwalters/DCS-Net - Implementation of paper "DCS-Net: Deep Complex Subtractive Neural Network for Monaural Speech Enhancement"
DNN-based source separation
- https://github.com/tky823/DNN-based_source_separation - A PyTorch implementation of DNN-based source separation.
- https://github.com/rrbluke/BSSD - Python/TensorFlow code to reproduce the experiments presented in the paper "Blind Speech Separation and Dereverberation using Neural Beamforming".
- https://github.com/rrbluke/NRES - Neural Residual Echo Suppressor