Automation

From Things and Stuff Wiki


General

See also Speech, Synth vox, AI, Audio, Playback, Lighting



  • https://en.wikipedia.org/wiki/Islands_of_automation - was a popular term used largely during the 1980s to describe how rapidly developing automation systems were at first unable to communicate easily with each other. Industrial communication protocols, network technologies, and system integration helped to improve this situation. Just a few of the many examples of helping technologies are Modbus, Fieldbus, Ethernet, etc. It is more recently used by automation specialists to describe a discrete and fully enclosed automated system applied in a largely manual environment. In today's interconnected world it is uncommon for automated systems to be fully stand alone. Therefore, the old usage is defunct and the new usage is more appropriate for companies that wish to automate in a limited fashion.



  • https://en.wikipedia.org/wiki/X10_(industry_standard) - a protocol for communication among electronic devices used for home automation (domotics). It primarily uses power line wiring for signaling and control, where the signals involve brief radio frequency bursts representing digital information. A wireless radio based protocol transport is also defined. X10 was developed in 1975 by Pico Electronics of Glenrothes, Scotland, in order to allow remote control of home devices and appliances. It was the first general purpose domotic network technology and remains the most widely available. Although a number of higher bandwidth alternatives exist, X10 remains popular in the home environment with millions of units in use worldwide, and inexpensive availability of new components.


  • https://en.wikipedia.org/wiki/xPL_Protocol - an open protocol intended to permit the control and monitoring of home automation devices. The primary design goal of xPL is to provide a rich set of features and functionality, whilst maintaining an elegant, uncomplicated message structure. The protocol includes complete discovery and auto-configuration capabilities which support a fully "plug-n-play" architecture - essential to ensure a good end-user experience. xPL benefits from a strongly specified message structure, required to ensure that xPL-enabled devices from different vendors are able to communicate without the risk of incompatibilities. Communications between xPL applications on a Local Area Network (LAN) use UDP on port 3865. xPL development has primarily occurred in the DIY community, where users have written connecting software to existing protocols and devices. Some examples include bridges to other home automation protocols like Z-Wave and UPB. Commercially, the Logitech SqueezeCenter software for the Squeezebox supports xPL.
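
For orientation, a minimal sketch of what broadcasting an xPL command over UDP can look like. The xpl-cmnd header block follows the published message layout; the osd.basic schema, the source name and the body keys here are purely illustrative assumptions.

```python
# Sketch: broadcast a raw xPL command on the LAN (port 3865, UDP broadcast).
import socket

XPL_PORT = 3865  # UDP port used by xPL on the local network

def xpl_message(source, target="*", schema="osd.basic", body=None):
    """Build a raw xPL command message as plain text."""
    body = body or {}
    body_lines = "\n".join(f"{k}={v}" for k, v in body.items())
    return (
        "xpl-cmnd\n"
        "{\n"
        f"hop=1\nsource={source}\ntarget={target}\n"
        "}\n"
        f"{schema}\n"
        "{\n"
        f"{body_lines}\n"
        "}\n"
    )

msg = xpl_message("acme-demo.kitchen", body={"command": "write", "text": "hello"})

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
sock.sendto(msg.encode("ascii"), ("255.255.255.255", XPL_PORT))
sock.close()
```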




  • PiHAT - Raspberry Pi Home Automation Transmitter
  • WallPanel - an Android application for Web Based Dashboards and Home Automation Platforms.

Control

See also Dataflow

openHAB

MisterHouse

Homebridge

  • https://github.com/nfarina/homebridge - a lightweight NodeJS server you can run on your home network that emulates the iOS HomeKit API. It supports Plugins, which are community-contributed modules that provide a basic bridge from HomeKit to various 3rd-party APIs provided by manufacturers of "smart home" devices.
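
As a rough idea of what a Homebridge setup needs, the sketch below writes a minimal config.json; the values mirror the sample defaults shipped with the project (adjust the username and PIN for a real install), and real accessory/platform entries come from whichever plugins you add.

```python
# Sketch: generate a minimal Homebridge config.json with the stock example values.
import json
import pathlib

config = {
    "bridge": {
        "name": "Homebridge",
        "username": "CC:22:3D:E3:CE:30",  # HomeKit-style MAC identifier
        "port": 51826,
        "pin": "031-45-154",              # pairing PIN entered in the iOS Home app
    },
    "accessories": [],  # accessory plugins are configured here
    "platforms": [],    # platform plugins (bridges to vendor APIs) go here
}

pathlib.Path("config.json").write_text(json.dumps(config, indent=2))
```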



Home Assistant

  • Home Assistant - Open source home automation that puts local control and privacy first. Powered by a worldwide community of tinkerers and DIY enthusiasts. Perfect to run on a Raspberry Pi or a local server.
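
A quick sketch of driving Home Assistant from outside via its REST API, assuming a long-lived access token created in your user profile and an entity named light.living_room (both placeholders).

```python
# Sketch: call the Home Assistant REST API to switch on a light.
import requests

HASS_URL = "http://homeassistant.local:8123"
TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"

resp = requests.post(
    f"{HASS_URL}/api/services/light/turn_on",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"entity_id": "light.living_room"},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())  # list of entities whose state changed
```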








Hass.io

  • Hass.io - turns your Raspberry Pi (or another device) into the ultimate home automation hub powered by Home Assistant. With Hass.io you can focus on integrating your devices and writing automations.

Domoticz

  • Domoticz - a Home Automation System that lets you monitor and configure various devices like: Lights, Switches, various sensors/meters like Temperature, Rain, Wind, UV, Electra, Gas, Water and much more. Notifications/Alerts can be sent to any mobile device. This system is designed to operate in various operating systems. The user-interface is a scalable HTML5 web frontend, and is automatically adapted for Desktop and Mobile Devices. Compatible with all recent browsers.
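
Devices can also be controlled programmatically through Domoticz's JSON/HTTP API; a minimal sketch, assuming the default port 8080 and a switch with idx 5 (both placeholders).

```python
# Sketch: toggle a Domoticz switch through its JSON/HTTP API.
import requests

DOMOTICZ = "http://127.0.0.1:8080"

resp = requests.get(
    f"{DOMOTICZ}/json.htm",
    params={"type": "command", "param": "switchlight", "idx": 5, "switchcmd": "On"},
    timeout=10,
)
print(resp.json().get("status"))  # "OK" on success
```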

HomeApp

  • https://github.com/Domi04151309/HomeApp - a small and easy to use smart home app with a simple framework. The goal of this application is to make remote execution of predefined features as easy and user-friendly as possible to help you get started with smart home technology.

PiHomeScreen


MyController


Other

NLP

See also Speech


  • https://en.wikipedia.org/wiki/Natural_language_processing - a subfield of linguistics, computer science, information engineering, and artificial intelligence concerned with the interactions between computers and human (natural) languages, in particular how to program computers to process and analyze large amounts of natural language data. Challenges in natural language processing frequently involve speech recognition, natural language understanding, and natural language generation.



  • https://en.wikipedia.org/wiki/Natural-language_understanding - or natural-language interpretation (NLI) is a subtopic of natural-language processing in artificial intelligence that deals with machine reading comprehension. Natural-language understanding is considered an AI-hard problem. There is considerable commercial interest in the field because of its application to automated reasoning, machine translation, question answering, news-gathering, text categorization, voice-activation, archiving, and large-scale content analysis.


  • https://en.wikipedia.org/wiki/Natural-language_generation - a software process that transforms structured data into natural language. It can be used to produce long form content for organizations to automate custom reports, as well as produce custom content for a web or mobile application. It can also be used to generate short blurbs of text in interactive conversations (a chatbot) which might even be read out by a text-to-speech system. Automated NLG can be compared to the process humans use when they turn ideas into writing or speech. Psycholinguists prefer the term language production for this process, which can also be described in mathematical terms, or modeled in a computer for psychological research. NLG systems can also be compared to translators of artificial computer languages, such as decompilers or transpilers, which also produce human-readable code generated from an intermediate representation. Human languages tend to be considerably more complex and allow for much more ambiguity and variety of expression than programming languages, which makes NLG more challenging.




  • TextBlob - a Python (2 and 3) library for processing textual data. It provides a simple API for diving into common natural language processing (NLP) tasks such as part-of-speech tagging, noun phrase extraction, sentiment analysis, classification, translation, and more.
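
A small sketch of the TextBlob API for the tasks mentioned above; the example sentence is arbitrary, and the corpora download step may be needed on first use.

```python
# Minimal TextBlob usage: POS tagging, noun phrases and sentiment.
# First-time setup (downloads NLTK corpora): python -m textblob.download_corpora
from textblob import TextBlob  # pip install textblob

blob = TextBlob("TextBlob makes common NLP tasks pleasantly simple.")

print(blob.tags)          # part-of-speech tags, e.g. [('TextBlob', 'NNP'), ...]
print(blob.noun_phrases)  # extracted noun phrases
print(blob.sentiment)     # Sentiment(polarity=..., subjectivity=...)
```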



  • Tutorial: Computational modelling of metaphor | EACL 2014 - Metaphor processing is a rapidly growing area in NLP. The ubiquity of metaphor in language has been established in a number of corpus studies and the role it plays in human reasoning has been confirmed in psychological experiments. This makes metaphor an important research area for computational and cognitive linguistics, and its automatic identification and interpretation indispensable for any semantics-oriented NLP application. The tutorial is targeted both at NLP researchers who are new to metaphor and need a comprehensive overview of metaphor processing techniques and applications, as well as at scientists working in the field who want to stay up to date on the recent advances in metaphor research.


  • languagengine - Blog - What Happened to Old School NLP? - For the last 25 or so years, NLP has consisted almost entirely of fairly simple statistical models of language. Rather than using carefully built grammars and fancy parsers, rather than trying to build systems that rely on linguistics, modern NLP tries to learn as much as possible from scratch. NLP research is now, essentially, just a branch of machine learning rather than a branch of linguistics. What happened? [3]


  • [1803.11175] Universal Sentence Encoder - We present models for encoding sentences into embedding vectors that specifically target transfer learning to other NLP tasks. The models are efficient and result in accurate performance on diverse transfer tasks. Two variants of the encoding models allow for trade-offs between accuracy and compute resources. For both variants, we investigate and report the relationship between model complexity, resource consumption, the availability of transfer task training data, and task performance. Comparisons are made with baselines that use word level transfer learning via pretrained word embeddings as well as baselines that do not use any transfer learning. We find that transfer learning using sentence embeddings tends to outperform word level transfer. With transfer learning via sentence embeddings, we observe surprisingly good performance with minimal amounts of supervised training data for a transfer task. We obtain encouraging results on Word Embedding Association Tests (WEAT) targeted at detecting model bias. Our pre-trained sentence encoding models are made freely available for download and on TF Hub. [4]
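
A minimal sketch of using the released model from TF Hub; the module URL below is the commonly published v4 address, and the sentences are arbitrary examples.

```python
# Embed a couple of sentences with the Universal Sentence Encoder from TF Hub.
import tensorflow_hub as hub  # pip install tensorflow tensorflow_hub

embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")
embeddings = embed([
    "How do I automate my lights?",
    "Turn on the kitchen lamp.",
])
print(embeddings.shape)  # (2, 512): one 512-dimensional vector per sentence
```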



  • Metacat - a computer model of analogy-making and perception that builds on the foundations of an earlier model called Copycat. Copycat was originally developed by Douglas Hofstadter and Melanie Mitchell as part of a research program aimed at computationally modeling the fundamental mechanisms underlying human thought processes. Central to the philosophy of this research is the belief that the mind's ability to perceive connections between apparently dissimilar things, and to make analogies based on these connections, lies at the heart of intelligence. According to this view, to understand the analogical mechanisms of thinking and perception is to understand the source of the remarkable fluidity of the human mind, including its hidden wellsprings of creativity.

Like Copycat, Metacat operates in an idealized world of analogy problems involving short strings of letters. Although the program understands only a limited set of concepts about its letter-string world, its emergent processing mechanisms give it considerable flexibility in recognizing and applying these concepts in a wide variety of situations. The program's high-level behavior emerges in a bottom-up manner from the collective actions of many small nondeterministic processing agents (called codelets) working in parallel, in much the same way that an ant colony's high-level behavior emerges from the individual behaviors of the underlying ants, without any central executive directing the course of events.

Metacat focuses on the issue of self-watching: the ability of a system to perceive and respond to patterns that arise not only in its immediate perceptions of the world, but also in its own processing of those perceptions. Copycat lacked such an "introspective" capacity, and consequently lacked insight into how it arrived at its answers. It was unable to notice similarities between analogies, or to explain the differences between them or why one might be considered to be better or worse than another. In contrast, Metacat's self-watching mechanisms enable it to create much richer representations of analogies, allowing it to compare and contrast answers in an insightful way. Furthermore, it is able to recognize, remember, and recall patterns that occur in its own "train of thought" as it makes analogies. For instance, by monitoring its own processing, Metacat can often recognize when it has fallen into a repetitive cycle of behavior, enabling it to break out of its "rut" and try something else. [6]

Virtual assistant

  • https://en.wikipedia.org/wiki/Virtual_assistant - a software agent that can perform a range of tasks or services for a user based on user input such as commands or questions, including verbal ones. Such technologies often incorporate chatbot capabilities to simulate human conversation, such as via online chat, to facilitate interaction with their users. The interaction may be via text, graphical interface, or voice - as some virtual assistants are able to interpret human speech and respond via synthesized voices. In many cases users can ask their virtual assistants questions, control home automation devices and media playback, and manage other basic tasks such as email, to-do lists, and calendars - all with verbal commands.

In recent years, prominent virtual assistants for direct consumer use have included Amazon's Alexa, Apple's Siri, Microsoft's Cortana, and Google Assistant. Companies in various industries also often incorporate some kind of virtual assistant technology into their customer service or support. More recently, the emergence of AI-based chatbots such as ChatGPT has brought increased capability and interest to the field of virtual assistant products and services.


  • Voice in Home Assistant | Home Assistant Developer Docs - Building a voice assistant is a complex task. It requires a lot of different technologies to work together. This page will give you an overview of the different parts inside Home Assistant and how they will work together.


Bots

See also IRC#Bots

Microsoft Bot Framework

Rasa Core

BotLibre.org

  • BotLibre.org - an open source platform based on an advanced artificial intelligence engine developed in Java. The Bot Libre AI engine can be used in any Java platform, such as a Java webserver, Java client, or on Android. The Bot Libre SDK supports access to Bot Libre's web API from JavaScript, Android, Java, iOS, and Objective-C. [8]

Wit.ai

  • Wit.ai - makes it easy for developers to build applications and devices that you can talk or text to. Our vision is to empower developers with an open and extensible natural language platform. Wit.ai learns human language from every interaction, and leverages the community: what's learned is shared across developers.
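
A hedged sketch of the HTTP side of Wit.ai: send an utterance to the /message endpoint with a server access token and read back what was recognised. The exact response fields depend on the API version, and the token below is a placeholder.

```python
# Query the Wit.ai /message endpoint with a sentence.
import requests

WIT_TOKEN = "YOUR_SERVER_ACCESS_TOKEN"  # from the app settings on wit.ai

resp = requests.get(
    "https://api.wit.ai/message",
    headers={"Authorization": f"Bearer {WIT_TOKEN}"},
    params={"q": "turn off the living room lights"},
    timeout=10,
)
data = resp.json()
print(data.get("intents"), data.get("entities"))
```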

SUSI.AI

Olivia

  • Olivia - An open-source contextual chatbot built with an artificial neural network.

Keyword detection

Snowboy

Porcupine

  • Porcupine - On-device wake word (a.k.a. keyword spotting, trigger word detection, hotword detection, and voice command) detection engine powered by deep learning trained in real-world situations. It enables developers to build always-listening voice-enabled applications. Compact and computationally efficient, making it suitable for IoT; it can run in as little as 20 KB of RAM on an MCU. It is implemented in fixed-point ANSI C. Currently Raspberry Pi (all variants), BeagleBone, Android, iOS, watchOS, Linux (x86_64), Mac, Windows, and web browsers (WebAssembly) are supported. Furthermore, support for various ARM Cortex-A and ARM Cortex-M processors and DSP cores is available for commercial customers. It can detect multiple (possibly many) voice commands concurrently with no added CPU/memory footprint. Developers can choose from a set of predefined wake phrases on different platforms and use them for free. In addition, developers can generate custom wake phrases using Picovoice Console (subject to certain limitations and only on Linux, Mac, or Windows) for non-commercial, personal, and evaluation-only purposes.
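
A minimal sketch of the Python binding; newer releases require an AccessKey from Picovoice Console, and audio capture is stubbed out here, so treat the details as assumptions rather than a drop-in recipe.

```python
# Sketch of wake word detection with the pvporcupine Python package.
import pvporcupine  # pip install pvporcupine

porcupine = pvporcupine.create(
    access_key="YOUR_PICOVOICE_ACCESS_KEY",
    keywords=["porcupine", "bumblebee"],  # built-in wake words
)

def audio_frames():
    """Placeholder: yield frames of porcupine.frame_length 16-bit mono samples at 16 kHz."""
    yield [0] * porcupine.frame_length

for pcm in audio_frames():
    keyword_index = porcupine.process(pcm)  # -1 = nothing, otherwise keyword index
    if keyword_index >= 0:
        print("wake word detected:", keyword_index)

porcupine.delete()
```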

Virtual assistant


Mycroft


Almond


Jasper



Naomi


Kalliope


Dragonfire


Leon

  • Leon - an open-source personal assistant who can live on your server. He does stuff when you ask for it. You can talk to him and he can talk to you. You can also text him and he can text you. If you want, Leon can run offline to protect your privacy.


Stephanie


Rhasspy


Pi-Vopice


Alexa

  • https://github.com/alexa-pi/AlexaPi - a client for Amazon's Alexa service. It is intended and tested to run on a wide range of platforms, such as Raspberry Pi, Orange Pi, CHIP and ordinary Linux or Windows desktops. No longer actively developed.




Google Assistant

Hardware

  • 2018-04-20: Making a Window Manager (part 1) - You know what happens in the office, usually late in the afternoon? That’s right! Exactly when you’re at the peak of productivity, Mr. Sun basically slams into your window and mocks you. In the face. You could raise from the chair and turn the shades, I guess. You could. But I can not. I need an automatic sun-b-gone mechanism! Something smart, IOT and with blockchain technology. A true Window Manager™. Can we make it using only assorted junk found on the desk? Looking through the stuff I think I see a small stepper and some micros… I believe We Can



Sensors


  • https://github.com/gmag11/EnigmaIOT - an open source solution for wireless multi-sensor systems. It has two main components: multiple nodes and one gateway. A number of nodes, each with one or more sensors, communicate securely with a central gateway in a star network using the EnigmaIoT protocol.


Firmware

See also Lighting#Bulbs

Espurna

  • https://github.com/xoseperez/espurna - a custom firmware for ESP8285/ESP8266 based smart switches, lights and sensors. It uses the Arduino Core for ESP8266 framework and a number of 3rd party libraries.

ESPEasy

Tasmota

  • https://github.com/arendst/Tasmota - alternative firmware for ESP8266-based devices like the ITEAD Sonoff, with web UI, timers, OTA updates, MQTT, KNX and sensor support, for use in smart home systems. Written for the Arduino IDE and PlatformIO.
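
A small sketch of remote control once a device runs Tasmota, using its web ("cm") command endpoint; the same command could equally be published over MQTT to cmnd/<topic>/POWER. The IP address is a placeholder.

```python
# Toggle the relay on a Tasmota device via its HTTP command endpoint.
import requests

DEVICE = "http://192.168.1.50"  # IP of the ESP8266/Sonoff running Tasmota

resp = requests.get(f"{DEVICE}/cm", params={"cmnd": "Power TOGGLE"}, timeout=5)
print(resp.json())  # e.g. {"POWER": "ON"}
```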

espota-server

  • https://gitlab.com/stavros/espota-server - an HTTP server that can hold firmware for flashing with the ESP8266 Arduino firmware. It's useful when you have many ESP8266-based devices in the field and want them to download their own firmware to update themselves, rather than you having to push firmware to each one separately. The main difference between the espota-server and any other HTTP server is that it only serves firmware to the device if there's a newer version of the firmware available. If there's no newer version, it just responds with a "no new version" and the device goes on its merry way.
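
To make the "only serve newer firmware" idea concrete, here is a rough Flask sketch of that behaviour; the x-ESP8266-version header is the one the ESP8266 Arduino HTTP updater conventionally sends, but the route, file name and version handling here are assumptions, not espota-server's actual implementation.

```python
# Sketch: hand out firmware only when the device reports an older version.
from flask import Flask, request, send_file  # pip install flask

app = Flask(__name__)
LATEST_VERSION = "1.2.0"              # hypothetical current release
FIRMWARE_PATH = "firmware-1.2.0.bin"  # hypothetical firmware image

@app.route("/firmware")
def firmware():
    device_version = request.headers.get("x-ESP8266-version", "0.0.0")
    if device_version == LATEST_VERSION:
        return "", 304  # "no new version": device keeps its current firmware
    return send_file(FIRMWARE_PATH, mimetype="application/octet-stream")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```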

RaspberryPi-Gateway

ESPHome

Libraries

Espressif Audio Development Framework

  • https://github.com/espressif/esp-adf - supports development of audio applications for the Espressif Systems ESP32 chip in the most comprehensive way. With ESP-ADF you can easily add features and develop audio applications from simple to complex: a music player or recorder supporting audio formats such as MP3, AAC, FLAC, WAV, OGG, OPUS, AMR, TS and G.711, with processing elements such as EQ, downmixer, Sonic and ALC; playback from sources such as HTTP, HLS (HTTP Live Streaming), SPIFFS, SD card, A2DP source, A2DP sink and HFP; integration of media services such as DLNA and VoIP; internet radio; and voice recognition with integration to online services such as Alexa and DuerOS.

Streaming


  • https://github.com/vedranMv/Ka-Radio32 - fork of Ka-Radio (karawin/Ka-Radio) which implements both an internet radio and a Bluetooth speaker, allowing you to seamlessly switch between the two on the fly.


Doorbell

Web

Piping

See also WebDev





Growing








  • Open Ag Toolkit - Precision Farm Management. Android software that aims to make it easier to manage farms by improving the way farmers are able to collect and use information: simple, free, open-source mobile apps that sync across everyone on a farm through existing cloud services like Trello and Dropbox.







  • BuckyBox - powering your local food enterprise. All of your daily operations are built into a simple-to-use web app that customers love and which saves you time. Bucky Box is open-source software; only hosting is charged for, and technical support is billed separately.