- 1 General
- 2 Systems
- 3 Services
- 4 robots.txt
- 5 Alerts
- 6 SEM/SEO
- 7 Distributed
- 8 Other
- Solr is the popular, blazing fast open source enterprise search platform from the Apache Lucene project. Its major features include powerful full-text search, hit highlighting, faceted search, near real-time indexing, dynamic clustering, database integration, rich document (e.g., Word, PDF) handling, and geospatial search. Solr is highly reliable, scalable and fault tolerant, providing distributed indexing, replication and load-balanced querying, automated failover and recovery, centralized configuration and more. Solr powers the search and navigation features of many of the world's largest internet sites. Solr is written in Java and runs as a standalone full-text search server within a servlet container such as Tomcat. Solr uses the Lucene Java search library at its core for full-text indexing and search, and has REST-like HTTP/XML and JSON APIs that make it easy to use from virtually any programming language. Solr's powerful external configuration allows it to be tailored to almost any type of application without Java coding, and it has an extensive plugin architecture when more advanced customization is required.
Update: running Solr under the bundled Jetty is much easier.
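A minimal sketch of hitting Solr's REST-like JSON API from Python, using only the standard library; the host/port (Solr's default is 8983) and the core name `mycore` are assumptions:

```python
# Query a Solr core over its HTTP API; "mycore" is a placeholder core name.
import json
import urllib.parse
import urllib.request

params = urllib.parse.urlencode({
    "q": "title:search",  # Lucene query syntax
    "wt": "json",         # ask for a JSON response
    "rows": 10,           # page size
})
url = "http://localhost:8983/solr/mycore/select?" + params

with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

print(data["response"]["numFound"], "documents matched")
for doc in data["response"]["docs"]:
    print(doc)
```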
- Elasticsearch is a flexible and powerful open source, distributed, real-time search and analytics engine. Architected from the ground up for use in distributed environments where reliability and scalability are must-haves, Elasticsearch gives you the ability to move easily beyond simple full-text search. Through its robust set of APIs and query DSLs, plus clients for the most popular programming languages, Elasticsearch delivers on the near-limitless promise of search technology.
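A minimal sketch of the HTTP APIs mentioned above, again with only the standard library; it assumes a recent Elasticsearch node on its default port 9200 and an index named `articles` (both placeholders):

```python
# Index one document, then search for it via Elasticsearch's REST API.
import json
import urllib.request

BASE = "http://localhost:9200"

def request(method, path, body):
    req = urllib.request.Request(BASE + path, data=json.dumps(body).encode(),
                                 headers={"Content-Type": "application/json"},
                                 method=method)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# refresh=true makes the document searchable immediately (demo only).
request("PUT", "/articles/_doc/1?refresh=true",
        {"title": "Hello", "body": "distributed full-text search"})

# A simple match query in the query DSL.
result = request("POST", "/articles/_search",
                 {"query": {"match": {"body": "search"}}})
for hit in result["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["title"])
```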
- Sphinx is an open source full text search server, designed from the ground up with performance, relevance (aka search quality), and integration simplicity in mind. It's written in C++ and works on Linux (RedHat, Ubuntu, etc), Windows, MacOS, Solaris, FreeBSD, and a few other systems. Sphinx lets you either batch index and search data stored in an SQL database, NoSQL storage, or just files quickly and easily — or index and search data on the fly, working with Sphinx pretty much as with a database server.
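Part of "working with Sphinx pretty much as with a database server" is SphinxQL: searchd can speak the MySQL wire protocol, so ordinary MySQL clients can query it. A minimal sketch using the third-party pymysql driver; the port 9306 (enabled via a `listen = 9306:mysql41` line in sphinx.conf) and the index name `myindex` are assumptions:

```python
# Full-text query over SphinxQL, Sphinx's MySQL-protocol interface.
import pymysql

conn = pymysql.connect(host="127.0.0.1", port=9306, user="", password="")
try:
    with conn.cursor() as cur:
        # MATCH() runs the full-text search; WEIGHT() is the relevance score.
        cur.execute("SELECT id, WEIGHT() FROM myindex WHERE MATCH(%s) LIMIT 10",
                    ("full text search",))
        for row in cur.fetchall():
            print(row)
finally:
    conn.close()
```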
- Techu exposes a RESTful API for real-time indexing and searching with the Sphinx full-text search engine. It leverages Redis, Nginx and the Python Django framework to make search easy to handle and flexible.
- Majestic-12 is working towards the creation of a World Wide Web search engine based on distributing the workload in a fashion similar to that of successful projects such as SETI@home and distributed.net.
MJ12bot, the principal distributed component of the Majestic-12 search engine project, is the subject of continuing investment by Majestic-12 Ltd. The results of its crawl are fed into a specialised search engine with daily updates.
- http://code.google.com/p/googlecl/ - GoogleCL, a command-line client for Google services
- https://en.wikipedia.org/wiki/Robots_exclusion_standard - also known as the robots exclusion protocol or simply robots.txt, is a standard used by websites to communicate with web crawlers and other web robots. The standard specifies how to inform the web robot about which areas of the website should not be processed or scanned. Robots are often used by search engines to categorize web sites. Not all robots cooperate with the standard; email harvesters, spambots and malware robots that scan for security vulnerabilities may even start with the portions of the website where they have been told to stay out. The standard is different from, but can be used in conjunction with Sitemaps, a robot inclusion standard for websites.
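As an illustration, a small made-up robots.txt and the Python standard library's parser for it; all rules and names below are placeholders:

```python
# Parse robots.txt rules and check whether a crawler may fetch a URL.
import urllib.robotparser

rules = """\
User-agent: *
Disallow: /private/
Crawl-delay: 10

User-agent: BadBot
Disallow: /

Sitemap: https://example.com/sitemap.xml
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(rules.splitlines())

print(rp.can_fetch("*", "https://example.com/index.html"))       # True
print(rp.can_fetch("*", "https://example.com/private/x.html"))   # False
print(rp.can_fetch("BadBot", "https://example.com/index.html"))  # False
```

Against a live site, `rp.set_url("https://example.com/robots.txt")` followed by `rp.read()` fetches the real file instead of parsing a string.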
Bad robots (blocked via CSF firewall):
- bingbot - 188.8.131.52/24, 184.108.40.206/24
- majestic-12 - 220.127.116.11
- Google Tag Manager lets you add and update your website tags, easily and for free, whenever you want, without bugging the IT folks. It gives marketers greater flexibility, and lets webmasters relax and focus on other important tasks.
- Google Keyword Planner is like a workshop for building new Search Network campaigns or expanding existing ones. You can search for keyword and ad group ideas, get historical statistics, see how a list of keywords might perform, and even create a new keyword list by multiplying several lists of keywords together. A free AdWords tool, Keyword Planner can also help you choose competitive bids and budgets to use with your campaigns.
- What Every Programmer Should Know About SEO
- Are there any clear indicators that my sitemap file is beneficial?
- Sitemaps are an easy way for webmasters to inform search engines about pages on their sites that are available for crawling. In its simplest form, a Sitemap is an XML file that lists URLs for a site along with additional metadata about each URL (when it was last updated, how often it usually changes, and how important it is, relative to other URLs in the site) so that search engines can more intelligently crawl the site.
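A minimal sketch of that XML format, emitted with the Python standard library; every URL and metadata value below is a placeholder:

```python
# Build a one-entry sitemap.xml with the optional per-URL metadata fields.
import xml.etree.ElementTree as ET

urlset = ET.Element("urlset",
                    xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
url = ET.SubElement(urlset, "url")
ET.SubElement(url, "loc").text = "https://example.com/"  # the page URL
ET.SubElement(url, "lastmod").text = "2013-01-01"        # last modified
ET.SubElement(url, "changefreq").text = "weekly"         # expected change rate
ET.SubElement(url, "priority").text = "0.8"              # relative importance

ET.ElementTree(urlset).write("sitemap.xml", encoding="utf-8",
                             xml_declaration=True)
```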
- The Importance of Sitemaps - Oct 12, 2008
- YaCy - free, decentralised, peer-to-peer web search engine
- https://nerdydata.com/search - search engine for website source code