Open social

From Things and Stuff Wiki


Movements and initiatives

See also Feeds, WebDev#API, HTTP, Semantic web, Chat, Net media, Open data, Mesh

massively jumbled at the moment; order etc. still to be ironed out.

  • Social Web Protocols - The Social Web Protocols are a collection of standards which enable various aspects of decentralised social interaction on the Web. This document describes the purposes of each, and how they fit together. Pre-Solid.



  • OASIS - a global nonprofit consortium that works on the development, convergence, and adoption of standards for security, the Internet of Things, energy, content technologies, emergency management, and other areas. OASIS was founded under the name "SGML Open" in 1993. It began as a trade association of SGML tool vendors cooperatively promoting the adoption of the Standard Generalized Markup Language (SGML), mainly through educational activities, though some technical activity was also pursued, including an update of the CALS Table Model specification.



Semantic Social Web

  • Social Semantic Web - developments in which social interactions on the Web lead to the creation of explicit and semantically rich knowledge representations. The Social Semantic Web can be seen as a Web of collective knowledge systems, which are able to provide useful information based on human contributions and which get better as more people participate. The Social Semantic Web combines technologies, strategies and methodologies from the Semantic Web, social software and Web 2.0.



  • OpenSocial is the industry's leading and most mature standards-based component model for cloud-based social apps.
    • Wikipedia:OpenSocial is a public specification that defines a component hosting environment (container) and a set of common application programming interfaces (APIs) for web-based applications. Initially it was designed for social network applications and was developed by Google along with MySpace and a number of other social networks. In more recent times it has become adopted as a general-use runtime environment for allowing untrusted and partially trusted components from third parties to run in an existing web application. The OpenSocial Foundation has also moved to integrate or support numerous other open web technologies, including OAuth and OAuth 2.0, Activity Streams, and Portable Contacts, among others.


December 2007

  • DiSo Project (dee • soh) is an initiative to facilitate the creation of open, non-proprietary and interoperable building blocks for the decentralized social web. Silo free living. Social networks are becoming more open, more interconnected, and more distributed. Many of us in the web creation world are embracing and promoting web standards — both client-side and server-side. Microformats, standard APIs, and open-source software are key building blocks of these technologies. This model can be described as having three sides: Information, Identity, and Interaction. Our first target is WordPress, bootstrapping on existing work and building out from there.


November 2007

"There are numerous open standards that are considered to advance the vision, such as RDF, RDFa, microformats, APML, FOAF, OAuth, OpenID, OPML, RSS, SIOC, the XHTML Friends Network (XFN), XRI, and XDI."

Drupal related

misc. old



  • HowTo: Rebooting the RSS cloud - "There are three sides to the cloud:

    • 1. The authoring tool. I edit and update a feed. It contains a <cloud> element that says how a subscriber should request notification of updates.
    • 2. The cloud. It is notified of an update, and then in turn notifies all subscribers.
    • 3. The subscriber. A feed reader, aggregator, whatever -- that subscribes to feeds that may or may not be part of a cloud."
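The <cloud> element that the authoring tool publishes can be read with a few lines of stdlib Python. This is an illustrative sketch: the feed content, domain, port and path below are made-up example values, not part of any real feed.

```python
import xml.etree.ElementTree as ET

# Hypothetical RSS 2.0 feed snippet: the <cloud> element tells a subscriber
# where to register for update notifications (as used by rssCloud).
FEED = """<rss version="2.0">
  <channel>
    <title>Example feed</title>
    <cloud domain="cloud.example.com" port="5337" path="/notify"
           registerProcedure="" protocol="http-post"/>
  </channel>
</rss>"""

def cloud_endpoint(feed_xml):
    """Return (domain, port, path) a subscriber should register with, or None."""
    cloud = ET.fromstring(feed_xml).find("channel/cloud")
    if cloud is None:
        return None
    return (cloud.get("domain"), int(cloud.get("port")), cloud.get("path"))

print(cloud_endpoint(FEED))  # -> ('cloud.example.com', 5337, '/notify')
```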

Federated Social Web


  • W3C Social Web Incubator Group - This XG started on 6 April 2009 and closed on 6 December 2010. See the final report.The mission of the Social Web Incubator Group, part of the Incubator Activity, is to understand the systems and technologies that permit the description and identification of people, groups, organizations, and user-generated content in extensible and privacy-respecting ways.
    • A Standards-based, Open and Privacy-aware Social Web - The Social Web is a set of relationships that link together people over the Web. The Web is a universal and open space of information where every item of interest can be identified with a URI. While the best known current social networking sites on the Web limit themselves to relationships between people with accounts on a single site, the Social Web should extend across the entire Web. Just as people can call each other no matter which telephone provider they belong to, just as email allows people to send messages to each other irrespective of their e-mail provider, and just as the Web allows links to any website, so the Social Web should allow people to create networks of relationships across the entire Web, while giving people the ability to control their own privacy and data. The standards that enable this should be open and royalty-free. We present a framework for understanding the Social Web and the relevant standards (from both within and outside the W3C) in this report, and conclude by proposing a strategy for making the Social Web a "first-class citizen" of the Web.
    • FinalReport - Social Web XG Wiki - This document is the final report of the W3C Social Web Incubator Group. This report presents systems and technologies that are working towards enabling a Social Web, and is followed by a strategy for standardizing this work in order to ensure the Social Web is open, decentralized, and royalty-free. This report focuses on work that permits the description and identification of people, groups, organizations, as well as user-generated content in extensible and privacy-respecting ways. This report describes a common framework for the concepts behind the Social Web and the state of the art in 2010, including current technologies and standards. We conclude with an analysis of where future research and standardization will benefit users and the entire Social Web ecosystem's growth. We also suggest a strategy for the role of the W3C in the Social Web.

  • W3C Federated Social Web Incubator Group - This XG started on 15 December 2010 and transitioned on 12 January 2012 to Federated Social Web Community Group. The mission of the Federated Social Web Incubator Group, part of the Incubator Activity, is to provide a set of community-driven specifications and a test-case suite for a federated social web.

  • Socially aware cloud storage - Design Issues - There is an architecture in which a few existing or new Web protocols are gathered together with some glue to make a world-wide system in which applications (desktop or Web applications) can work on top of a layer of commodity read-write storage. Crucial design issues are that principals (users) and groups are identified by URIs, and so are global in scope, and that elements of storage are access-controlled using those global identifiers. The result is that storage becomes a commodity, independent of the application running on it.

  • PDF: An Architecture of a Distributed Semantic Social Network - Online social networking has become one of the most popular services on the Web. However, current social networks are like walled gardens in which users do not have full control over their data, are bound to specific usage terms of the social network operator and suffer from a lock-in effect due to the lack of interoperability and standards compliance between social networks. In this paper we propose an architecture for an open, distributed social network, which is built solely on Semantic Web standards and emerging best practices. Our architecture combines vocabularies and protocols such as WebID, FOAF, Semantic Pingback and PubSubHubbub into a coherent distributed semantic social network, which is capable of providing all crucial functionalities known from centralized social networks. We present our reference implementation, which utilizes the OntoWiki application framework, and take this framework as the basis for an extensive evaluation. Our results show that a distributed social network is feasible, while it also avoids the limitations of centralized solutions.

  • Read Write Web Community Group - The activity of this group is to apply Web standards to trusted read and write operations.
    • - an implementation in Play of a number of tools to build a Read-Write-Web server using Play 2.x and Akka. It is at a very early stage at present and implements sketches of the following: a CORS proxy; an initial implementation of the Linked Data Basic Profile.

OStatus / StatusNet

via, 2008

august 2010

aug 2013; @tantek | previous efforts at directly designing decentralized protocols (without selfdogfood) always result in overly complex protocols that not enough people can implement. e.g. Salmon

now part of GNU Social

2012, succeeds OStatus / StatusNet

  • pump.io - successor to StatusNet, a general-purpose Activity Streams engine. It diverges from OStatus in a few other respects, of course, such as sending activity messages as JSON rather than as Atom, and by defining a simple REST inbox API instead of using PubSubHubbub and Salmon to push messages to other servers. pump.io also uses a new database abstraction layer called Databank, which has drivers for a variety of NoSQL databases but supports real relational databases too. StatusNet, in contrast, was bound closely to MySQL. But, in the end, the important thing is the feature set; a pump.io instance can generate a microblogging feed, an image stream, or essentially any other type of feed. Activity Streams defines actions (called "verbs") that handle common social networking interaction; pump.io merely sends and receives them.

  • - This site runs pump.io, the high-performance open-source social engine. It pumps your life in and out of your friends, family and colleagues.


indieauth, RSSB, etc.

  • ActivityPush - A lightweight method for URI-addressable resources to be automatically notified about remote (off-site) activities on them. A crypto-free alternative to the Salmon Protocol for public activities.

  • Bridgy Fed - Got an IndieWeb site? Want to interact with Mastodon, Hubzilla, and the rest of the fediverse? Bridgy Fed is for you.


  • Known - gives you full control. Host your site on our service, or install it on your own server and extend it to meet your needs. The core Known platform is open source, and it is designed to be easy to customize, redesign, and build on top of.

Dark Matter



  • Koype: Humane Social Networking - provides people with software that allows them to own their social data on the Web and determine who gets to do what with it.


  • ActivityPub - a decentralized social networking protocol based upon the ActivityStreams 2.0 data format. It provides a client-to-server API for creating, updating and deleting content, as well as a federated server-to-server API for delivering notifications and content. [4]
  • - an open, decentralized social networking protocol based on pump.io's ActivityPump protocol. It provides a client/server API for creating, updating and deleting content, as well as a federated server-to-server API for delivering notifications and content.
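The client-to-server side described above boils down to POSTing ActivityStreams 2.0 JSON to an actor's outbox. A minimal sketch of such an activity, with made-up actor and instance names (the `@context`, `type` and addressing fields are the real AS2 vocabulary):

```python
import json

# A minimal ActivityStreams 2.0 "Create" activity of the kind a client POSTs
# to its actor's outbox. The actor URL is a hypothetical example.
activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "actor": "https://social.example/users/alice",
    "object": {
        "type": "Note",
        "content": "Hello, fediverse!",
        # Addressing the special Public collection makes the post public.
        "to": ["https://www.w3.org/ns/activitystreams#Public"],
    },
}

body = json.dumps(activity)
# An ActivityPub client would POST this body with
# Content-Type: application/ld+json; profile="https://www.w3.org/ns/activitystreams"
```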


  • Sepia Search - A search engine of PeerTube videos and channels. Developed by Framasoft



  • - meant to run as a standalone service to deliver posts from your own website to ActivityPub followers. You can run your own website at your own domain, and this service can handle the ActivityPub-specific pieces needed to let people follow your own website from Mastodon or other compatible services.





  • Tent - the protocol for decentralized communication. Tent uses HTTPS and JSON to transport posts between servers and apps.
    • Tent - use data and posts across your apps and send and receive posts from friends. Right now, most people use Tent to share short 256-character status posts with friends. Many independent developers are building other apps that use the Tent protocol.



GNU Social, Hubzilla, PostActiv, Friendica, Mastodon or Pleroma, etc.




  • Pixelfed - a free, federated and ethical photo sharing platform.


  • GangGo - a decentralized social network written in Go. It uses the same federation library as Diaspora and Mastodon, and can therefore participate in conversations like any other pod!


  • Prismo - a new link-sharing platform intended to work similarly to Reddit, Lobste.rs, or Hacker News. Michał Bajur, creator of Mastodon Tags Explorer, is building the platform for the fediverse, meaning that it speaks the ActivityPub protocol and is intended to work with other federated parts of the ecosystem.


  • Hyperspace - the fluffiest client for Mastodon and other fediverse networks written in TypeScript and React. Hyperspace offers a fun, clean, fast, and responsive design that scales beautifully across devices and enhances the fediverse experience.



  • Spritely - a project to level up the federated social web. It builds on our experience from co-authoring ActivityPub, the largest decentralized social network on the web to date, while applying lesser-known but powerful ideas from the object capability security community. Spritely consists of a number of modular components bringing new and rich features, from distributed programming, to decentralized storage, to virtual worlds. Better worlds await, because better worlds are possible. We all deserve freedom of communication. Why not make the journey fun in the process?



  • - Solid (derived from "social linked data") is a proposed set of conventions and tools for building decentralized Web applications based on Linked Data principles. Solid is modular and extensible. It relies as much as possible on existing W3C standards and protocols. [14]

  • Solid Hacks - This book covers but a fraction of what is possible in Solid. It hopefully illustrates some interesting tips and tricks.

  • Co-operating Systems - a social enterprise that researches, develops, maintains and provides services around an open source/open standards software platform to enable co-operation among autonomous actors. Our stack takes a completely new approach: the web is the database and data is a web of relations. Apps using our libraries can follow relations through the web of data and write to any Social Linked Data server (SoLiD) if allowed. As a result designers will be able to write applications providing a consistent human interface for people needing to co-operate within and across organisations, each application tailored for the person viewing the data.


  • - This repository contains the source code of the Solid specification, which aims to be a clear, unambiguous, and implementable rewrite of the Unofficial Draft.
  • - This document contains an unofficial draft description of implementation guidelines for Solid servers and clients. A normative specification is in the making at solid/specification. For the time being, the present document contains the best approximation of expected server and client behavior.



  • - User interface widgets and utilities for Solid. These are HTML5 widgets which connect to a Solid store -- building blocks for Solid-based apps.


  • Databox - BBC R&D - privacy-aware personal data container/manager and dashboard server
    • Databox
    • - an open-source personal networked device, augmented by cloud-hosted services, that collates, curates, and mediates access to an individual’s personal data by verified and audited third-party applications and services. The Databox will form the heart of an individual’s personal data processing ecosystem, providing a platform for managing secure access to data and enabling authorised third parties to provide the owner with authenticated services, including services that may be accessed while roaming outside the home environment. Databox project is led by Dr Hamed Haddadi (Imperial College) in collaboration with Dr Richard Mortier (University of Cambridge) and Professors Derek McAuley, Tom Rodden, Chris Greenhalgh, and Andy Crabtree (University of Nottingham) and funded by EPSRC.

Autonomous Data

  • Introduction | Autonomous Data - an application architecture that respects users' privacy and data ownership. Traditional architectures involve storing user data on service providers' servers. Even if they give ample control to their users, they are still acting as middlemen. In this situation, true data ownership cannot be more than a mirage.


  • radicle - Secure peer-to-peer code collaboration without intermediaries. 🌱

Data Transfer Project

  • Data Transfer Project - launched in 2018 to create an open-source, service-to-service data portability platform so that all individuals across the web could easily move their data between online service providers whenever they want. The contributors to the Data Transfer Project believe portability and interoperability are central to innovation. Making it easier for individuals to choose among services facilitates competition, empowers individuals to try new services and enables them to choose the offering that best suits their needs. Current contributors include:


Identity / authentication

See also Net media#Identity


  • Ident - specified in RFC 1413, an Internet protocol that helps identify the user of a particular TCP connection. One popular daemon program for providing the ident service is identd.

  • - an RFC 1413 compliant ident daemon which runs on Linux, FreeBSD, OpenBSD, NetBSD, DragonFly BSD, and some versions of Darwin and Solaris. It can handle IP masqueraded or NAT connections, and has a flexible mechanism for specifying ident responses. Users can be granted permission to specify their own ident responses, hide responses for connections owned by them, or return random ident responses. Responses can be specified according to host and port pairs. One of the most notable capabilities is spoofed ident responses or ident spoofing.



  • OpenID is an open standard that allows users to be authenticated by certain co-operating sites (known as Relying Parties or RP) using a third party service, eliminating the need for webmasters to provide their own ad hoc systems and allowing users to consolidate their digital identities.


  • OAuth - 2006-2010
  • OAuth 2 - 2012
  • OAuth 2.1 - 2020

OAuth began in November 2006 when Blaine Cook was developing the Twitter OpenID implementation. Meanwhile, Ma.gnolia needed a solution to allow its members with OpenIDs to authorize Dashboard Widgets to access their service. Cook, Chris Messina and Larry Halff from Ma.gnolia met with David Recordon to discuss using OpenID with the Twitter and Ma.gnolia APIs to delegate authentication. They concluded that there were no open standards for API access delegation.

The OAuth discussion group was created in April 2007, for the small group of implementers to write the draft proposal for an open protocol. DeWitt Clinton from Google learned of the OAuth project, and expressed his interest in supporting the effort. In July 2007 the team drafted an initial specification. Eran Hammer joined and coordinated the many OAuth contributions, creating a more formal specification. On October 3, 2007, the OAuth Core 1.0 final draft was released.

Because OAuth 2.0 is more a framework than a defined protocol, one OAuth 2.0 implementation is unlikely to be naturally interoperable with another. Further deployment profiling and specification is required for any interoperability.
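The part the framework does pin down is the shape of the authorization request: step one of the authorization-code grant is redirecting the user to the provider's authorization endpoint with a handful of standard query parameters. A sketch, where the endpoint, client ID, redirect URI and scope are all placeholder values:

```python
from urllib.parse import urlencode, urlparse, parse_qs

def authorization_url(endpoint, client_id, redirect_uri, scope, state):
    """Build the user-facing redirect URL for the authorization-code grant."""
    params = {
        "response_type": "code",    # asks the provider for an authorization code
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": scope,
        "state": state,             # opaque CSRF token, echoed back by the provider
    }
    return endpoint + "?" + urlencode(params)

url = authorization_url("https://provider.example/authorize",
                        "my-client", "https://app.example/callback",
                        "profile", "xyz123")
```

The provider then redirects back to `redirect_uri` with `code` and `state`, and the client exchanges the code for an access token at a separate token endpoint; it is that second leg where deployments diverge and profiling is needed.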

OpenID Connect / OIDC




BrowserID / Persona


  • EyeDee.Me - an example Identity Provider for the BrowserID protocol. This protocol is used by Mozilla Persona to authenticate users across the web. EyeDee.Me styles itself like an email provider, but does not actually handle any email. Rather, it exists solely as an example of how services, such as email providers, can provide first-class support for BrowserID.
  • BigTent - A ProxyIdP service for bridging major IdPs who lack support for the BrowserID protocol.
  • - A Mozilla Persona Identity Provider (IdP) Linter
  • 123done - your tasks - simplified. test login.



  • IndieAuth is a way to use your own domain name to sign in to websites. It's like OpenID, but simpler! It works by linking your website to one or more authentication providers such as Twitter or Google, then entering your domain name in the login form on websites that support IndieAuth.

url, not uri. costs a domain, digital divide..
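In its original RelMeAuth-style form, the flow above starts by discovering rel="me" links on your homepage that point at the authentication providers you have linked. A minimal extractor sketch using only the standard library; the page content is a made-up example:

```python
from html.parser import HTMLParser

class RelMeParser(HTMLParser):
    """Collect href values of <a>/<link> elements carrying rel="me"."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        # rel is a space-separated token list, so split before checking.
        if tag in ("a", "link") and "me" in a.get("rel", "").split():
            self.links.append(a.get("href"))

PAGE = '<html><body><a rel="me" href="https://twitter.com/example">me</a></body></html>'
p = RelMeParser()
p.feed(PAGE)
print(p.links)  # -> ['https://twitter.com/example']
```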


  • GRC: SQRL - Secure (QR) Login. Proposing a comprehensive, easy-to-use, high security replacement for usernames, passwords, reminders, one-time-code authenticators and everything else.


  • WebAuth - A Plugin Replacement for HTTPS CCA


BitShares Login


  • Universal 2nd Factor - open ecosystem documents, an initiative started by Google. The intent is to enable Internet users to carry a non-phishable strong two-factor device which the user can register at any supporting site to get strong authentication security. The goal is to get many Internet services accepting these devices as an option for a second factor, to get the key client platforms (browsers, OSes) to have built-in support for these open-protocol devices, and to get a large number of vendors making protocol-compliant devices.


  • gpgAuth is an authentication mechanism which uses public/private (cryptographic) keys (such as GnuPG, PGP) to authenticate users to a web page or service. The process works by the two-way exchange of encrypted/signed tokens between a user and the service. gpgAuth is a generic authentication protocol that is not specific to any technology, platform or provider type. Being a versatile authentication mechanism, there are many hardware/software options available - in the projects section you will find information regarding various gpgAuth client and server implementations or tools.

Login with SSH

  • Login with SSH - a simple experiment to authenticate web sessions with SSH. Doing so gives you fully decentralized, passwordless authentication for free. A custom SSH server listens for connections. Instead of providing a shell or any other service, it only validates the public key you offer. Then, a callback is made to a web application with a payload containing the validated public key along with the login token used. In practice, you would first have hit a "login with SSH" button on the web app. It would have generated a session that resolves into a valid one with a callback handing over the proper key/token combination.[28]

JSON Web Tokens

  • JSON Web Token (JWT) is a JSON-based open standard (RFC 7519) for creating access tokens that assert some number of claims. For example, a server could generate a token that has the claim "logged in as admin" and provide that to a client. The client could then use that token to prove that it is logged in as admin. The tokens are signed by one party's private key (usually the server's), so that both parties (the other already being, by some suitable and trustworthy means, in possession of the corresponding public key) are able to verify that the token is legitimate. The tokens are designed to be compact, URL-safe and usable especially in a web browser single sign-on (SSO) context. JWT claims can typically be used to pass the identity of authenticated users between an identity provider and a service provider, or to carry any other type of claims as required by business processes. JWT relies on other JSON-based standards: JWS (JSON Web Signature, RFC 7515) and JWE (JSON Web Encryption, RFC 7516).
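The header.payload.signature structure can be shown with a minimal HMAC-SHA256 (HS256) sketch using only the standard library. This is illustrative, not production code: a real deployment should use a maintained RFC 7519 library that validates the `alg` header, expiry claims, and so on.

```python
import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    """Base64url without padding, as JWT requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def encode(claims: dict, key: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = b64url(hmac.new(key, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify(token: str, key: bytes) -> dict:
    header, payload, sig = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected = b64url(hmac.new(key, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    padded = payload + "=" * (-len(payload) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(padded))

token = encode({"sub": "admin", "admin": True}, b"secret")
claims = verify(token, b"secret")
```

Note that the signature only proves who minted the token; the claims themselves travel base64url-encoded but unencrypted, which is why JWE exists for confidentiality.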


  • Security Assertion Markup Language (SAML, pronounced SAM-el) is an open standard for exchanging authentication and authorization data between parties, in particular between an identity provider and a service provider. SAML is an XML-based markup language for security assertions (statements that service providers use to make access-control decisions). SAML is also: a set of XML-based protocol messages; a set of protocol message bindings; a set of profiles (utilizing all of the above). The single most important use case that SAML addresses is web browser single sign-on (SSO). Single sign-on is relatively easy to accomplish within a security domain (using cookies, for example), but extending SSO across security domains is more difficult and resulted in the proliferation of non-interoperable proprietary technologies. The SAML Web Browser SSO profile was specified and standardized to promote interoperability. (For comparison, the more recent OpenID Connect protocol is an alternative approach to web browser SSO.)


  • arXiv:1805.06253 - reclaimID: Secure, Self-Sovereign Identities using Name Systems and Attribute-Based Encryption - "In this paper we present reclaimID: An architecture that allows users to reclaim their digital identities by securely sharing identity attributes without the need for a centralised service provider. We propose a design where user attributes are stored in and shared over a name system under user-owned namespaces. Attributes are encrypted using attribute-based encryption (ABE), allowing the user to selectively authorize and revoke access of requesting parties to subsets of his attributes. We present an implementation based on the decentralised GNU Name System (GNS) in combination with ciphertext-policy ABE using type-1 pairings. To show the practicality of our implementation, we carried out experimental evaluations of selected implementation aspects including attribute resolution performance. Finally, we show that our design can be used as a standard OpenID Connect Identity Provider allowing our implementation to be integrated into standard-compliant services."



XRD: Extensible Resource Descriptor 
XRDS: Extensible Resource Descriptor Sequence

The XML format used by XRDS was originally developed in 2004 by the OASIS XRI (extensible resource identifier) Technical Committee as the resolution format for XRIs. The acronym XRDS was coined during subsequent discussions between XRI TC members and OpenID developers at the first Internet Identity Workshop, held in Berkeley, CA in October 2005. The protocol for discovering an XRDS document from a URL was formalized as the Yadis specification, published in March 2006. Yadis became the service discovery format for OpenID 1.1.

A common discovery service for both URLs and XRIs proved so useful that in November 2007 the XRI Resolution 2.0 specification formally added the URL-based method of XRDS discovery (Section 6). This format and discovery protocol subsequently became part of OpenID Authentication 2.0. In early 2008, work on OAuth discovery by Eran Hammer-Lahav led to the development of XRDS Simple, a profile of XRDS that restricts it to the most basic elements and introduces some extensions to support OAuth discovery and other protocols that use specific HTTP methods. In late 2008, XRDS Simple was cancelled and merged back into the main XRDS specification, resulting in the upcoming XRD 1.0 format.

  • Extensible Resource Descriptor (XRD) Version 1.0 - OASIS Standard, 1 November 2010. "This document defines XRD (Extensible Resource Descriptor), a simple generic format for describing resources. Resource descriptor documents provide machine-readable information about resources (resource metadata) for the purpose of promoting interoperability. They also assist in interacting with unknown resources that support known interfaces. For example, a web page about an upcoming meeting can provide in its descriptor document the location of the meeting organizer's free/busy information to potentially negotiate a different time. The descriptor for a social network profile page can identify the location of the user's address book as well as accounts on other sites. A web service implementing an API protocol can advertise which of the protocol's optional components are supported."

  • XRD Alignment with Link Syntax | hueniverse - "a simple generic format for describing resources. Unlike past attempts, this time we got it right, and truly deliver on the promise of simple. In fact, the XRI TC spent the past year throwing features out if they were not supported by well-established use cases. Last month the specification reached the important milestone of a Committee Draft and was opened for public comments. While public review is open until January 6th (and we encourage feedback), we decide to publish a new working draft to address comments we already reach consensus on to help early adopters."

  • XRI (Extensible Resource Identifier) - a scheme and resolution protocol for abstract identifiers compatible with Uniform Resource Identifiers and Internationalized Resource Identifiers, developed by the XRI Technical Committee at OASIS (closed in 2015). The goal of XRI was a standard syntax and discovery format for abstract, structured identifiers that are domain-, location-, application-, and transport-independent, so they can be shared across any number of domains, directories, and interaction protocols. The XRI 2.0 specifications were rejected by OASIS, a failure attributed to the intervention of the W3C Technical Architecture Group, which recommended against using XRIs or taking the XRI specifications forward. The core of the dispute is whether widely interoperable HTTP URIs are capable of fulfilling the role of abstract, structured identifiers, as the TAG believes, despite the limitations the XRI Technical Committee was formed specifically to address. The designers of XRI believed that, due to the growth of XML, web services, and other ways of adapting the Web to automated, machine-to-machine communications, it was increasingly important to be able to identify a resource independent of any specific physical network path, location, or protocol, in order to:
    • Create structured identifiers with self-describing "tags" that can be understood across domains.
    • Maintain a persistent link to the resource regardless of whether its network location changes.
    • Delegate identifier management not just in the authority segment (the first segment following the "xxx://" scheme name) but anywhere in the identifier path.
    • Map identifiers used to identify a resource in one domain to other synonyms used to identify the same resource in the same domain, or in other domains.
    This work led, by early 2003, to the publication of a protocol based on HTTP(S) and simple XML documents called XRDS (Extensible Resource Descriptor Sequence).




  • Simple Web Discovery (SWD) defines an HTTPS GET based mechanism to discover the location of a given type of service for a given principal starting only with a domain name.

Web Linking


  • WebFist uses DKIM-signed email to prove that you, the user, want to participate in WebFinger, regardless of what your provider says. By sending a single email you can delegate your WebFinger profile to your own website host or anything that can serve the service document over HTTP (e.g., Google Drive). This is ridiculously easy for users. You can even set up WebFist via a mailto link on a webpage. To accomplish decentralization, WebFist servers take delegation emails, encrypt them into blobs, and distribute the blobs safely across a pool of peer servers. These servers synchronize with a "fist bump", transferring just encrypted blobs without secret keys. This makes it near impossible to enumerate every email address in WebFist.

JSON Resource Descriptor

  • JSON Resource Descriptor (JRD) is a simple JSON object that describes a "resource" on the Internet, where a "resource" is any entity on the Internet that is identified via a URI or IRI. For example, a person's account URI is a resource, as are all web URIs. The JSON Resource Descriptor, originally introduced in RFC 6415 and based on the Extensible Resource Descriptor (XRD) format, was adopted for use in the WebFinger protocol, though its use is not restricted to WebFinger or RFC 6415.
  • JRD, the Other Resource Descriptor - May 24th 2010
  • RFC6415: Web Host Metadata

Also used in OpenID Connect.
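A minimal JRD, as a WebFinger server might return it, can be sketched with the standard library; the subject and link values below are placeholders, not a real account:

```python
import json

# A minimal JSON Resource Descriptor (JRD), as used by WebFinger (RFC 7033).
# The subject and link values are illustrative placeholders.
jrd = {
    "subject": "acct:alice@example.org",
    "aliases": ["https://example.org/~alice"],
    "links": [
        {
            "rel": "http://webfinger.net/rel/profile-page",
            "type": "text/html",
            "href": "https://example.org/~alice",
        }
    ],
}

# Serialize and re-parse, as a client receiving the document would.
document = json.dumps(jrd)
parsed = json.loads(document)
profile_links = [l["href"] for l in parsed["links"]
                 if l["rel"] == "http://webfinger.net/rel/profile-page"]
print(profile_links[0])  # → https://example.org/~alice
```

A client typically fetches such a document from /.well-known/webfinger?resource=… and then follows the link whose rel value it understands.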

Web Intents

  • Web Intents is a framework for client-side service discovery and inter-application communication. Services register their intention to be able to handle an action on the user's behalf. Applications request to start an Action of a certain verb (share, edit, view, pick etc.) and the system will find the appropriate Services for the user to use based on the user's preference. Web Intents puts the user in control of service integrations and makes the developer's life simple.


  • UserAddress is a search engine that can discover users as long as they are discoverable through one of the following formats: XRD (e.g. WebFinger, including StatusNet, Google+, Friendica, Diaspora), RDF (e.g. FOAF), HTML (e.g. Tantek, Melvin), Turtle (e.g. Facebook), Twitter-flavoured JSON (Twitter). Planned: XMPP vCard (e.g. BuddyCloud).


  • - NodeInfo2 is an effort to create a standardized way of exposing metadata about a server. This might be necessary to expose ownership and organization details, usage statistics and protocol capabilities.




  • hCard is a microformat that allows a vCard to be embedded inside an HTML page. It makes use of CSS class names to identify each vCard property. Normal HTML markup and CSS styling can be used alongside the hCard class names without affecting the webpage's ability to be parsed by a hCard parser.
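As an illustration, a minimal hCard might look like the following; the name, organisation, and URL are placeholders. "vcard" marks the root element, and "fn", "url", and "org" are hCard property class names:

```html
<!-- Illustrative hCard: CSS class names carry the vCard properties. -->
<div class="vcard">
  <a class="fn url" href="https://example.org/~alice">Alice Example</a>
  <span class="org">Example Org</span>
</div>
```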




Blogger API



  • RFC: MetaWeblog API - a programming interface that allows external programs to get and set the text and attributes of weblog posts. It builds on the popular XML-RPC communication protocol, with implementations available in many popular programming environments. The MetaWeblog API is designed to enhance the Blogger API, which was limited in that it could only get and set the text of weblog posts. By the time MWA was introduced, in spring 2002, many weblog tools had more data stored with each post, and without an API that understood the extra data, content creation and editing tools could not access the data. At the time of this writing, summer 2003, most popular weblog tools and editors support both the Blogger API and the MetaWeblog API.
  • - an application programming interface created by software developer Dave Winer that enables weblog entries to be written, edited, and deleted using web services. The API is implemented as an XML-RPC web service with three methods whose names describe their function: metaweblog.newPost(), metaweblog.getPost() and metaweblog.editPost(). These methods take arguments that specify the blog author's username and password along with information related to an individual weblog entry.

The impetus for the creation of the API in 2002 was perceived limitations of the Blogger API, which serves the same purpose. Another weblog publishing API, the Atom Publishing Protocol became an IETF Internet standard (RFC 5023) in October 2007. Subsequently, another weblog publishing API, Micropub, which was developed with modern technologies like OAuth, became a W3C Recommendation in May 2017. Many blog software applications and content management systems support the MetaWeblog API, as do numerous desktop clients.
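The shape of a MetaWeblog call can be sketched with Python's standard xmlrpc module; the blog id, credentials, and post content below are placeholders, and a real client would POST this body to the blog's XML-RPC endpoint:

```python
import xmlrpc.client

# Sketch of a metaWeblog.newPost request body. The blog id, username,
# and password are placeholders, not a real service.
blog_id, user, password = "1", "alice", "secret"
post = {"title": "Hello", "description": "<p>First post.</p>"}
publish = True

body = xmlrpc.client.dumps(
    (blog_id, user, password, post, publish),
    methodname="metaWeblog.newPost",
)
print("metaWeblog.newPost" in body)  # → True
```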


  • Atom Publishing Protocol - based on HTTP and used for publishing and posting to Web resources. The Atom Publishing Protocol (APP) together with the Atom Syndication Format (ASF) provides interaction with content, especially on blogs and feeds. Atom has become a popular element of Web 2.0-style solutions. APP/ASF define a data model that is simpler than the WebDAV model: it consists of entries and collections of entries, but does not define a hierarchy of collections.

  • AtomSub - Transporting Atom Notifications over the Jabber/XMPP Publish-Subscribe Extension to the Extensible Messaging and Presence Protocol (XMPP)


  • - an open API standard (W3C Recommendation) that is used to create, update, and delete posts on one's own domain using third-party clients, and supersedes both MetaWeblog and AtomPub. Web apps and native apps (e.g. iPhone, Android) can use Micropub to post and edit articles, short notes, comments, likes, photos, events, or other kinds of posts to your own site.

  • Micropub - The Micropub protocol is used to create, update and delete posts on one's own domain using third-party clients. Web apps and native apps (e.g., iPhone, Android) can use Micropub to post and edit articles, short notes, comments, likes, photos, events or other kinds of posts on your own website.
  • - Micropub

  • Micropub Rocks! - a validator to help you test your Micropub implementation. Several kinds of tests are available on the site.
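A minimal form-encoded Micropub create request can be sketched as follows; the content and category are placeholders, and the endpoint URL and bearer token (obtained via IndieAuth discovery) are omitted:

```python
from urllib.parse import urlencode

# Form-encoded Micropub create request (h=entry), per the W3C Micropub spec.
# A real client POSTs this body to the discovered micropub endpoint with an
# Authorization: Bearer header.
payload = {
    "h": "entry",
    "content": "Hello from a Micropub client!",
    "category[]": "indieweb",
}
body = urlencode(payload)
print(body)
```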


  • Publish–subscribe pattern - a messaging pattern where senders of messages, called publishers, do not program the messages to be sent directly to specific receivers, called subscribers, but instead categorize published messages into classes without knowledge of which subscribers, if any, there may be. Similarly, subscribers express interest in one or more classes and only receive messages that are of interest, without knowledge of which publishers, if any, there are. Publish–subscribe is a sibling of the message queue paradigm, and is typically one part of a larger message-oriented middleware system. Most messaging systems support both the pub/sub and message queue models in their API, e.g. Java Message Service (JMS). This pattern provides greater network scalability and a more dynamic network topology, at the cost of decreased flexibility to modify the publisher and the structure of the published data.
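The pattern can be sketched as a minimal in-process broker (illustrative only, not any particular middleware); publishers address topics, not subscribers, and neither side knows about the other:

```python
from collections import defaultdict

# Minimal in-process publish-subscribe broker.
class Broker:
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def publish(self, topic, message):
        # Deliver to every subscriber of this topic; the publisher
        # never learns who, if anyone, received the message.
        for callback in self._subs[topic]:
            callback(message)

broker = Broker()
received = []
broker.subscribe("news", received.append)
broker.publish("news", "hello")
broker.publish("sports", "ignored")  # no subscribers for this topic
print(received)  # → ['hello']
```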

Jabber/XMPP PubSub




  • - previously known as PubSubHubbub or PuSH, and briefly PubSub, is a notification-based protocol for web publishing and subscribing to streams and legacy feed files in real time. Currently there are no known indieweb sites that subscribe to anything via WebSub, but there are a few separate-UI indie-readers that use WebSub to subscribe to h-feed streams. WebSub is developed in the W3C Social Web Working Group.


  • PubSubHubbub (PuSH) - A simple, open, server-to-server webhook-based pubsub (publish/subscribe) protocol for any web accessible resources.
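The subscription handshake can be sketched with placeholder URLs: the subscriber POSTs hub.mode, hub.topic, and hub.callback to the hub, which then verifies intent with a GET to the callback carrying a hub.challenge that must be echoed back:

```python
from urllib.parse import urlencode, parse_qs

# WebSub/PubSubHubbub subscription request body; hub, topic, and
# callback URLs are placeholders.
body = urlencode({
    "hub.mode": "subscribe",
    "hub.topic": "https://example.org/feed.xml",
    "hub.callback": "https://subscriber.example/websub",
})

# Intent verification: the hub GETs the callback with a hub.challenge,
# and the subscriber must echo the challenge in its response body.
def verify(query_string):
    params = parse_qs(query_string)
    if params.get("hub.mode") == ["subscribe"]:
        return params["hub.challenge"][0]
    return None

print(verify("hub.mode=subscribe&hub.challenge=abc123&hub.topic=x"))  # → abc123
```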




2013. Via WebHooks.

Feeds / Activity






  • Six Apart - Labs: Trackback - uses a REST model, where requests are made through standard HTTP calls. To send a TrackBack ping, the client makes a standard HTTP request to the server, and receives a response in a simple XML format (see below for more details). In the TrackBack system, the URL that receives TrackBack pings is the TrackBack Ping URL; it typically ends in a numeric TrackBack ID (such as 5). Server implementations can use whatever format makes sense for the TrackBack Ping URL; client implementations should not depend on a particular format. To send a ping, the client sends an HTTP POST request to the TrackBack Ping URL. The client MUST send a Content-Type HTTP header, with the content type set to application/x-www-form-urlencoded. The client SHOULD include the character encoding of the content being sent (title, excerpt, and weblog name) in the charset attribute of the Content-Type header.
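A sketch of the ping body described above, using placeholder values; a real client would POST this to the TrackBack Ping URL with the charset declared in the Content-Type header:

```python
from urllib.parse import urlencode

# TrackBack ping body, sent as an HTTP POST with
# Content-Type: application/x-www-form-urlencoded; charset=utf-8.
# All values are placeholders.
ping = urlencode({
    "title": "My response post",
    "excerpt": "A short summary of the responding entry",
    "url": "https://example.org/my-response",
    "blog_name": "Example Blog",
})
print("url=" in ping)  # → True
```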

Semantic Trackback


Trackback (33) is a system whereby one blog entry can reference another, and automatically have this reference placed on both blog entries, thereby creating a binary link. This is useful to bloggers, so they can find out easily who is blogging about their content. The web does not itself have a mechanism to permit the creation of binary links, and regular hyperlinks on the web are unary in nature.

The idea of a Semantic Trackback system is that instead of simply creating a binary link between two pages, as trackback currently enables, that link could carry knowledge and meaning with it. The meaning of a link at present, at least to a PageRank (7) system, is a vote for the other page, on the assumption that posters would not publish a link to somewhere they did not endorse. This behaviour has changed slightly in recent times, with the use of rel=nofollow (30) to signify a link that should not be considered a positive vote for that page, mainly to curb the effect of "comment spam" on blogs. A Semantic Trackback system could attach more knowledge than a yea or nay about the linked artefact, via an RDF graph on the link. For example, if somebody blogged about going to a zoo, another blogger could create a Semantic Trackback link supported by an RDF graph describing the fact that they went along, using the same interaction bloggers currently use to create ordinary trackbacks. This kind of action-confirming information is useful to trust systems, as well as friend systems, since when the other user confirms the link, it asserts into the system that both parties agree they went to the zoo on that day. Such confirmation brings the mundane into the Semantic Web trust layer.

Semantic Trackback might also be used in a way similar to how the "seeAlso" predicate is used in the RDF Syntax ontology (5), to create a link to more data about a particular resource.



  • - an XML-RPC request (not to be confused with an ICMP ping) sent from Site A to Site B, when an author of the blog at Site A writes a post that links to Site B. The request includes the URI of the linking page. When Site B receives the notification signal, it automatically goes back to Site A checking for the existence of a live incoming link. If that link exists, the pingback is recorded successfully. This makes pingbacks less prone to spam than trackbacks. Pingback-enabled resources must either use an X-Pingback header or contain a <link> element to the XML-RPC script.
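The endpoint-discovery step described above can be sketched as follows; the helper and URLs are illustrative, checking the X-Pingback header first and then falling back to a <link rel="pingback"> element:

```python
import re

# Pingback endpoint discovery sketch: prefer the X-Pingback HTTP header,
# fall back to a <link rel="pingback"> element in the HTML.
def discover_pingback(headers, html):
    if "X-Pingback" in headers:
        return headers["X-Pingback"]
    match = re.search(r'<link[^>]*rel="pingback"[^>]*href="([^"]+)"', html)
    return match.group(1) if match else None

html = '<head><link rel="pingback" href="https://example.org/xmlrpc" /></head>'
print(discover_pingback({}, html))  # → https://example.org/xmlrpc
```

The discovered endpoint is an XML-RPC server; the actual notification is a pingback.ping(sourceURI, targetURI) call against it.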

Semantic Pingback

  • Semantic Pingback - The Semantic Pingback mechanism is an extension of the well-known Pingback method, a technological cornerstone of the blogosphere, thus supporting the interlinking within the Data Web.
  • - This small vocabulary defines resources which are used in the context of Semantic Pingback. The Semantic Pingback mechanism is an extension of the well-known Pingback method, a technological cornerstone of the blogosphere, thus supporting the interlinking within the Data Web.



  • - simply the usage of the HTTP referrer header to discover incoming links. Whenever a browser traverses an incoming link from Site A (originator) to Site B (receptor) the browser will send a referrer value indicating the URL from where the user came. Site B might publish a link to Site A after visiting Site A and extracting relevant information from Site A such as the title, meta information, the link text, and so on. Refback only requires Site B to be Refback enabled in order to establish this communication. Refback requires Site A to physically link to Site B. Refback also requires browsers to traverse the links.



  • is an open-source project and hosted service for receiving webmentions and pingbacks on behalf of your indieweb site.




  • Salmon Protocol is a message exchange protocol running over HTTP designed to decentralize commentary and annotations made against newsfeed articles such as blog posts. It allows a single discussion thread to be established between the article's origin and any feed reader or "aggregator" subscribing to the content. Put simply, if an article appears on three sites, A (the source) and B and C (the aggregators), members of all three sites can see and contribute to a single thread of conversation regardless of the site they are viewing from.


  • - are displayed in the context of an original post, and may be a mix of syndicated reply posts from other sites received via Webmention, as well as locally created comments.

  • - a web standard for mentions and conversations across the web, a powerful building block that is used for a growing federated network of comments, likes, reposts, and other rich interactions across the decentralized social web.

  • - a protocol extension to Webmention to propagate comments and other interactions upstream by sending a webmention from a response to the original post when the response itself receives a response (comment, like, etc.). The original post then checks the response to the original, parses the response to the response (e.g. comment on a comment) and then displays it as an additional response on the original post.
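Per the W3C Webmention spec, sending a mention is a form-encoded POST of source and target to the target's advertised endpoint (discovered via a Link header or <link rel="webmention"> element); a sketch with placeholder URLs:

```python
from urllib.parse import urlencode

# Webmention notification body; both URLs are placeholders. A real sender
# POSTs this to the endpoint discovered on the target page.
source = "https://alice.example/reply-to-bob"
target = "https://bob.example/original-post"
body = urlencode({"source": source, "target": target})
print(body)  # → source=https%3A%2F%2Falice.example%2Freply-to-bob&target=https%3A%2F%2Fbob.example%2Foriginal-post
```

The receiver then fetches the source URL and checks that it really links to the target before accepting the mention.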





Jabber/XMPP Data Forms


  • XEP-0004: Data Forms - an XMPP protocol extension for data forms that can be used in workflows such as service configuration as well as for application-specific data description and reporting. The protocol includes lightweight semantics for forms processing (such as request, response, submit, and cancel), defines several common field types (boolean, list options with single or multiple choice, text with single line or multiple lines, single or multiple JabberIDs, hidden fields, etc.), provides extensibility for future data types, and can be embedded in a wide range of applications. The protocol is not intended to provide complete forms-processing functionality as is provided in the W3C XForms technology, but instead provides a basic subset of such functionality for use by XMPP entities.

  • - This specification defines a backwards-compatible extension to the XMPP Data Forms protocol that enables an application to specify form layouts, including the layout of form fields, sections within pages, and subsections within sections.
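A minimal XEP-0004 data form of type "submit" can be sketched with Python's standard XML tooling; the field name below is illustrative, borrowed from MUC room configuration:

```python
import xml.etree.ElementTree as ET

# Minimal XEP-0004 data form of type "submit"; the field var is illustrative.
NS = "jabber:x:data"
form = ET.Element("{%s}x" % NS, attrib={"type": "submit"})
field = ET.SubElement(form, "{%s}field" % NS,
                      attrib={"var": "muc#roomconfig_roomname",
                              "type": "text-single"})
value = ET.SubElement(field, "{%s}value" % NS)
value.text = "My Room"

ET.register_namespace("", NS)  # serialize with a default namespace
xml = ET.tostring(form, encoding="unicode")
print(xml)
```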



  • - an XML format used for collecting inputs from web forms. XForms was designed to be the next generation of HTML / XHTML forms, but is generic enough that it can also be used in a standalone manner or with presentation languages other than XHTML to describe a user interface and a set of common data manipulation tasks. XForms 1.0 (Third Edition) was published on 29 October 2007. The original XForms specification became an official W3C Recommendation on 14 October 2003, while XForms 1.1, which introduced a number of improvements, reached the same status on 20 October 2009.



Atom Publishing Protocol (APP)







FOAF (from "friend of a friend") is an RDF-based schema for describing persons and their social network in a semantic way. FOAF can be used within wikis for annotating user pages, or for describing articles about people.
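A FOAF profile is typically published as RDF; a minimal sketch in Turtle, with placeholder names and URIs:

```turtle
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

# Illustrative only: the URIs and names are placeholders.
<https://example.org/~alice#me> a foaf:Person ;
    foaf:name "Alice Example" ;
    foaf:homepage <https://example.org/~alice> ;
    foaf:knows <https://example.org/~bob#me> .
```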

See Open web#WebID



Portable Contacts


The protocol is a combination of OAuth, XRDS-Simple and a wire-format based on vCard harmonized with schema from OpenSocial.




  • remoteStorage - the first (and currently only) open standard to enable truly unhosted web apps. That means users are in full control of their precious data and where it is stored, while app developers are freed of the burden of hosting, maintaining and protecting a central database.


See also Comms, Network#Projects

theory and practice



  • GNUnet is a framework for secure peer-to-peer networking that does not use any centralized or otherwise trusted services. A first service implemented on top of the networking layer allows anonymous censorship-resistant file-sharing. Anonymity is provided by making messages originating from a peer indistinguishable from messages that the peer is routing. All peers act as routers and use link-encrypted connections with stable bandwidth utilization to communicate with each other. GNUnet uses a simple, excess-based economic model to allocate resources. Peers in GNUnet monitor each other's behavior with respect to resource usage; peers that contribute to the network are rewarded with better service. GNUnet is part of the GNU project. GNUnet can be downloaded from GNU and the GNU mirrors.





  • xOperator - A semantic agent for xmpp / jabber network which finds and shares content about resources (using RDF/SPARQL) for you and your jabber friends.



  • Sneer is a free and open source sovereign computing platform. It runs on your Windows, Mac or Linux machine (like Skype or Firefox) using the Java VM. It enables you to create your personal cluster by sharing hardware resources (CPU, disk space, network bandwidth) with your friends, host your own social network, information and media, create sovereign applications and share them with others, download and run sovereign applications created by others.


Semantic Microblogging


OpenLink Data Spaces




GNU Social

2010. uses older OStatus



  • Foafpress - An open-source PHP web application and presentation engine that publishes profiles and web pages based on RDF data stored in files. It allows you to aggregate and publish data from multiple web sources via Linked Data.





Mobile Social Semantic Web








  • Nightweb is an app for Android devices and PCs that connects you to an anonymous, peer-to-peer social network. It is written in Clojure and uses I2P and BitTorrent on the backend.



  • Vole is a web-based social network that you use in your browser, without a central server. It's built on the power of Bittorrent, Go and Ember.js.



  • Sockethub is a polyglot (speaking many different protocols and APIs) messaging service for social and other interactive messaging applications. It assists web app developers by providing server-independent, server-side functionality - which gives the application greater autonomy. It can be used as a tool for many different types of applications, large and small.

IRC Paradigm




  • Aether - a peer-to-peer app with no servers. As a result, the source IP of any specific public post cannot (easily) be determined. Most people use pseudonyms, though you can use your real name, or company. Aether is private by default, so you can choose to be fully private, or fully public. Aether keeps six months of content by default; after that, it's gone. If something is worth keeping, someone will save it within six months. If you screw up, argue for the wrong opinion, and then think otherwise, that's okay. No one is going to come after you; it gives you the freedom to be wrong, and move on. Actions of moderators are visible to users. No content can just 'disappear': if something gets deleted, you'll know who did it, why they did it, and, if you want, how to get it back. Moderation is important for healthy communities, and Aether adds some checks and balances onto it. Everyone watches the watchmen. Communities can elect and impeach their own mods by voting. If a mod behaves inappropriately, users can disable that mod locally as well. [39]






  • Charme - A decentralized social network with end-to-end encryption for messaging, private posts and private profile data. Posts can contain semantic information, so it is possible to search for all friends driving from A to B for example. This is a preview version. It is not secure yet!!! So do not wonder if you find some crypto mistakes!


Secure Scuttlebutt

  • Planetary - mainstream client for a truly distributed social network.



  • Mastofeed - embeddable Mastodon feeds for blogs etc.

  • TheDesk - a Mastodon client for PC (Windows/Linux/macOS). Boost your Mastodon life, and your Misskey experience too.

  • - a curses-based mastodon client modeled off usenet news readers & pine, with an emphasis on getting to 'timeline zero'


Pleroma / Write Freely

  • - Minimalist, privacy-focused writing and publishing platform.
  • Guides - a lightweight publishing platform made for sharing your thoughts quickly. Publish text in a variety of shareable formats, from individual articles to blogs.


to sort





  • Fraidycat - a desktop app or browser extension for Firefox or Chrome. I use it to follow people (hundreds) on whatever platform they choose - Twitter, a blog, YouTube, even on a public TiddlyWiki.


older ideas/ramblings

Social news idea

While I'm here (sending feedback for the new digg), my wishlist would be for;

  • Reddit style voting meets Slashdot comment categories, with social bookmark tagging for both users and groups/communities. *

Between upvoting and commenting, there are other types of actions that can be performed on items, like flagging as a favourite.

To add a better social bookmarking management system (better than Reddit search!), if a user could either;

- Click to Digg - (Click to thumbs-down/whatever is optional for communities, or like Hacker News) - Click to Favourite to own bookmark list -- And Tag, like, - Also, click to "Notice", as in not that the user "Diggs" or "Likes" the content of the article, and not that they want to bother saving it to Favourites, but a 'touch' in the sense of the *nix command, or a 'poke' at the issue embodied in the linked-to page.

So, on the page, in ASCII;

This is the title of the link! 324 diggs/94 undiggs (small url) 863 noticed [++] [*] [~] [Tag:] [img thumb/whatever]

This is the title of another link! 243 diggs/213 undiggs (small url) 546 noticed [++] [*] [~] [Tag:] [img thumb/whatever]

(the buttons representing 'digg', 'favourite/save', 'notice' and 'tag:'

the tag textarea expands on clicking, like the Stack Exchange search box, and does auto-complete for a user's tags, with suggestions from the global tags; the personal taxonomy can be cached locally for users.)

ALSO - reposts in different communities can tie back to a global site dashboard listing related taxonomies, from groups and users who favourite the link publicly


  • Bottom up social tagging link topic clustering
  • Some communities have taxonomies moderated
  • Moderated taxonomies could be linked with Linked Data, DPpedia, etc.
  • So some serious bits of the site, some open and silly bits
  • Bits and bobs displayed in a timeline format (D3.js?), arrows between concepts in a postcyberpunk style semantic news and search display
  • Paid access for high-volume API calls

tag/category/channel ?

[social] bookmarks as service

Group types

from old wiki

Groups for collaboration on and sharing of conversation, news, code, media, services, etc.

  • Active = Groups as in membership.
    • Who can 'join'?
      • Open = Cost of entry is participation.
      • Closed = Some form of new-member rules.
    • What output can people see?
      • Public = Open process, easily forkable.
      • Private = Group or subgroup curates output, hidden process. Trust?
  • Passive = Groups as in topics of interest.
    • Web of semantically related topics and ideas.
    • Mining and 'routing' of relevant related content

Individual hubs could federate the service they wished (widgets, social aggregation, files, etc) in a manner that could be open or hidden. Tunnelled inter-darknet connections between anonymised users and services.

Process consensus-holders


  • interoperable oss platform(s)
  • activity stream, accessible for aggregation
    • easy filterable by others
  • from easy oembed style referencing using entry points with either html5 microformats or semantic apis
  • following/friend activity aggregation.
  • federated commenting
  • provides with one public (https) and one private (hidden service)
  • categories and tags the same
    • category tags out-of-bound but in-band referable
      • linked to evolving social fuzzy web ontology? npl for suggestions
        • other services use same ontology for wikis and social bookmarking
  • easy link friends with post categories and