Computing
still much to sort further and understand.
General
See also Computer#Emulation, Virtualisation, Data, *nix, Programming
- A program is many things. It is a piece of text typed by a programmer, it is the directing force that makes the computer do what it does, it is data in the computer's memory, yet it controls the actions performed on this same memory. Analogies that try to compare programs to objects we are familiar with tend to fall short, but a superficially fitting one is that of a machine. The gears of a mechanical watch fit together ingeniously, and if the watchmaker was any good, it will accurately show the time for many years. The elements of a program fit together in a similar way, and if the programmer knows what he is doing, the program will run without crashing.
- A computer is a machine built to act as a host for these immaterial machines. Computers themselves can only do stupidly straightforward things. The reason they are so useful is that they do these things at an incredibly high speed. A program can, by ingeniously combining many of these simple actions, do very complicated things. ... When a program works, it is beautiful. The art of programming is the skill of controlling complexity. The great program is subdued, made simple in its complexity.
-- http://eloquentjavascript.net/chapter1.html
Basics
- Khan Academy: Computer science: information theory
Eric Steven Raymond;
- Pipe Logic - "In this model, a UNIX pipe acts like a wire, that is, a conductor with parasitic capacitance."
- What I tell all new programmers
- Programmer Competency Matrix
- Six languages to master.
- Learning C, reducing fear.
- Programming is Hard, Let's Go Scripting...
- A Scripter at Heart
- http://vislab-ccom.unh.edu/~schwehr/Classes/2011/esci895-researchtools/
- Simple Made Easy - Rich Hickey emphasizes simplicity’s virtues over easiness’, showing that while many choose easiness they may end up with complexity, and the better way is to choose easiness along the simplicity path.
- Self-Taught Developers: Are You Missing Your Foundation?
- Torvalds' quote about good programmer - "Bad programmers worry about the code. Good programmers worry about data structures and their relationships."
- Stack Exchange: I've inherited 200K lines of spaghetti code — what now?
- Become a Programmer, Motherfucker
- Teach Yourself Programming in Ten Years
- Learnable Programming - Bret Victor - Designing a programming system for understanding programs
- The 5 Hardest Parts of Programming - Optimization, Networking, Security, Reliability, Scalability
Reference
- WikiChip - a semiconductor and computer engineering technology website detailing historical and modern electronic systems and technologies, as well as related topics.
- http://stackoverflow.com/
- StackMonthly - A monthly digest of the best questions on StackOverflow
- Hyperpolyglot - a programming language side-by-side reference sheet
- http://rosettacode.org/wiki/Rosetta_Code
- A Very Quick Comparison of Popular Languages for Teaching Computer Programming
- syntax across languages
- http://helloworldquiz.com/
- How do we tell truths that might hurt? - E.W. Dijkstra
- YourLanguageSucks
- Project Euler is a series of challenging mathematical/computer programming problems that will require more than just mathematical insights to solve. Although mathematics will help you arrive at elegant and efficient methods, the use of a computer and programming skills will be required to solve most problems.
- Websites like projecteuler.net
- JavaScript solutions to the Project Euler problems.
- http://lee-phillips.org/lispmath/
- http://www.evanmiller.org/mathematical-hacker.html
- http://www.the-adam.com/adam/rantrave/st02.pdf
Books
- https://github.com/vhf/free-programming-books/blob/master/free-programming-books.md
- http://hackershelf.com/
- http://en.wikibooks.org/wiki/Computer_Programming
- WikiBooks: A-level Computing/AQA/Problem Solving, Programming, Operating Systems, Databases and Networking/Programming Concepts
- http://eric.themoritzfamily.com/books-every-self-taught-computer-scientist-should-read.html
- http://nlpwp.org/book/
- http://stackoverflow.com/questions/194812/list-of-freely-available-programming-books/392926#392926
- http://citizen428.net/blog/2010/08/12/30-free-programming-ebooks/
News and Blogs
- http://www.h-online.com - closed
History
- YouTube: Ron Pressler - Finite of Sense and Infinite of Thought: A History of Computation, Logic and Algebra
- https://en.wikipedia.org/wiki/Calculus_ratiocinator
- Leibniz: Logic | Internet Encyclopedia of Philosophy
- https://en.wikipedia.org/wiki/Stepped_reckoner - a digital mechanical calculator invented by the German mathematician Gottfried Wilhelm Leibniz around 1672 and completed in 1694. The name comes from the translation of the German term for its operating mechanism, Staffelwalze, meaning 'stepped drum'. It was the first calculator that could perform all four arithmetic operations. Its intricate precision gearwork, however, was somewhat beyond the fabrication technology of the time; mechanical problems, in addition to a design flaw in the carry mechanism, prevented the machines from working reliably.
- https://en.wikipedia.org/wiki/Setun - Russian, 1958
- http://www.curtamania.com/curta/database/brand/olivetti/Olivetti%20Programma%20101/index.html - 1965 [8]
- https://en.wikipedia.org/wiki/Altair_8800 - a microcomputer designed in 1975 based on the Intel 8080 CPU. Interest grew quickly after it was featured on the cover of the January 1975 issue (published in late November 1974) of Popular Electronics, and was sold by mail order through advertisements there, in Radio-Electronics, and in other hobbyist magazines. The designers hoped to sell a few hundred build-it-yourself kits to hobbyists, and were surprised when they sold thousands in the first month.
- https://en.wikipedia.org/wiki/Xerox_Star - 1981
- https://en.wikipedia.org/wiki/Pilot_(operating_system)
- YouTube: The Final Demonstration of the Xerox 'Star' computer [9]
- "with the longest perspective perhaps of most folks on object orientated guis, has your faith in object orientated guis been shaken at all?" "not badly enough to call them guis." - 2:01:50
- PARC Movies
- http://www.mission-base.com/tamiko/theory/cm_txts/di-frames.html - 1983
- http://en.wikipedia.org/wiki/Connection_Machine
- YouTube: Alan Kay - Normal Considered Harmful [14]
- The Jargon File - a comprehensive compendium of hacker slang illuminating many aspects of hackish tradition, folklore, and humor.
Computation
See also Maths#Logic, Hardware
- https://en.wikipedia.org/wiki/Theory_of_computation - branch that deals with how efficiently problems can be solved on a model of computation, using an algorithm. The field is divided into three major branches: automata theory and formal languages, computability theory, and computational complexity theory.
- https://en.wikipedia.org/wiki/Model_of_computation - the definition of the set of allowable operations used in computation and their respective costs. It is used for measuring the complexity of an algorithm in execution time and/or memory space: by assuming a certain model of computation, it is possible to analyze the computational resources required or to discuss the limitations of algorithms or computers.
- https://en.wikipedia.org/wiki/Abstract_machine - abstract machines are often used in thought experiments regarding computability or to analyze the complexity of algorithms (see computational complexity theory). A typical abstract machine consists of a definition in terms of input, output, and the set of allowable operations used to turn the former into the latter. The best-known example is the Turing machine.
- https://en.wikipedia.org/wiki/File:Theoretical_computer_science.svg - Relationship between complexity theory, formal languages and computability theory.
- Theory of Computing - An Open Access Electronic Journal in Theoretical Computer Science
People
Automata theory
- https://en.wikipedia.org/wiki/Automata_theory - the study of abstract machines and automata, as well as the computational problems that can be solved using them. It is a theory in theoretical computer science, under discrete mathematics (a subject of study in both mathematics and computer science). The word automata (the plural of automaton) comes from the Greek word αὐτόματα, which means "self-acting".
Automata theory is closely related to formal language theory. An automaton is a finite representation of a formal language that may be an infinite set. Automata are often classified by the class of formal languages they can recognize. Automata play a major role in theory of computation, compiler construction, artificial intelligence, parsing and formal verification.
An automaton can be said to recognize a string if we view the content of its tape as input. In other words, the automaton computes a function that maps strings into the set {0,1}. Alternatively, we can say that an automaton generates strings, which means viewing its tape as an output tape. On this view, the automaton generates a formal language, which is a set of strings. The two views of automata are equivalent: the function that the automaton computes is precisely the indicator function of the set of strings it generates. The class of languages generated by finite automata is known as the class of regular languages.
- YouTube: Computers Without Memory - Computerphile - They're called 'Finite State Automata' and occupy the centre of Chomsky's Hierarchy - Professor Brailsford explains the ultimate single purpose computer.
- https://en.wikipedia.org/wiki/Nondeterministic_finite_automaton
- https://en.wikipedia.org/wiki/Nondeterministic_finite_automaton_with_%CE%B5-moves
- https://en.wikipedia.org/wiki/Pushdown_automaton - type of automaton that employs a stack.
- https://en.wikipedia.org/wiki/%CE%A9-automaton
- https://en.wikipedia.org/wiki/Probabilistic_automaton
- https://en.wikipedia.org/wiki/Nested_stack_automaton - a finite automaton that can make use of a stack containing data which can be additional stacks. Like a stack automaton, a nested stack automaton may step up or down in the stack, and read the current symbol; in addition, it may at any place create a new stack, operate on that one, eventually destroy it, and continue operating on the old stack. This way, stacks can be nested recursively to an arbitrary depth; however, the automaton always operates on the innermost stack only.
- https://en.wikipedia.org/wiki/Register_machine - a generic class of abstract machines used in a manner similar to a Turing machine. All the models are Turing equivalent.
- https://en.wikipedia.org/wiki/Mealy_machine
- https://en.wikipedia.org/wiki/Moore_machine - a finite-state machine whose output values are determined solely by its current state
- https://en.wikipedia.org/wiki/Finite_state_transducer - a finite state machine with two tapes: an input tape and an output tape. This contrasts with an ordinary finite state automaton (or finite state acceptor), which has a single tape.
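The recognizer view described in the automata theory entry above can be made concrete with a few lines of code. The sketch below is a minimal deterministic finite automaton in Python; the language (binary strings containing an even number of 1s), the state names and the transition table are invented for illustration.

```python
# Minimal sketch of the "automaton as recognizer" view: a deterministic finite
# automaton computing the indicator function of a regular language
# (here: binary strings with an even number of 1s).

DFA = {
    "start": "even",
    "accepting": {"even"},
    "delta": {
        ("even", "0"): "even", ("even", "1"): "odd",
        ("odd", "0"): "odd",   ("odd", "1"): "even",
    },
}

def recognizes(dfa, word):
    """Return 1 if the DFA accepts `word`, else 0 (the indicator function)."""
    state = dfa["start"]
    for symbol in word:
        state = dfa["delta"][(state, symbol)]
    return 1 if state in dfa["accepting"] else 0

assert recognizes(DFA, "1101") == 0   # three 1s -> rejected
assert recognizes(DFA, "1001") == 1   # two 1s  -> accepted
```

The same dictionary-of-transitions shape extends naturally to a Moore machine or a finite state transducer by attaching outputs to the states or to the transitions.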
Computability theory
to sort
See also Maths#Type theory
- https://en.wikipedia.org/wiki/Computability_theory - also called recursion theory, is a branch of mathematical logic, of computer science, and of the theory of computation that originated in the 1930s with the study of computable functions and Turing degrees. The basic questions addressed by recursion theory are "What does it mean for a function on the natural numbers to be computable?" and "How can noncomputable functions be classified into a hierarchy based on their level of noncomputability?". The answers to these questions have led to a rich theory that is still being actively researched. The field has since grown to include the study of generalized computability and definability.
- https://en.wikipedia.org/wiki/Computable_function - the basic objects of study in computability theory. Computable functions are the formalized analogue of the intuitive notion of algorithm. They are used to discuss computability without referring to any concrete model of computation such as Turing machines or register machines. Any definition, however, must make reference to some specific model of computation but all valid definitions yield the same class of functions. Particular models of computability that give rise to the set of computable functions are the Turing-computable functions and the μ-recursive functions.
- https://en.wikipedia.org/wiki/Church–Turing_thesis - states that the informal notion of an effectively calculable function is captured exactly by Turing machines, and equivalently by general recursion and the lambda calculus, which have been proven to compute the same class of functions
- https://en.wikipedia.org/wiki/Turing_degree - or degree of unsolvability of a set of natural numbers measures the level of algorithmic unsolvability of the set. The concept of Turing degree is fundamental in computability theory, where sets of natural numbers are often regarded as decision problems; the Turing degree of a set tells how difficult it is to solve the decision problem associated with the set. That is, to determine whether an arbitrary number is in the given set.
- https://en.wikipedia.org/wiki/Curry–Howard_correspondence - the direct relationship between computer programs and mathematical proofs. It is a generalization of a syntactic analogy between systems of formal logic and computational calculi that was first discovered by the American mathematician Haskell Curry and logician William Alvin Howard.
- YouTube: Chomsky Hierarchy - Computerphile - Uncomputable through to finite state - Professor Brailsford explains Chomsky's hierarchy.
- YouTube: Same Story, Different Notation - Computerphile - Finite State Automata meets Recursion. Professor Brailsford continues the story of computers without memory.
- https://en.wikipedia.org/wiki/Grzegorczyk_hierarchy - a hierarchy of functions used in computability theory (Wagner and Wechsung 1986:43). Every function in the Grzegorczyk hierarchy is a primitive recursive function, and every primitive recursive function appears in the hierarchy at some level. The hierarchy deals with the rate at which the values of the functions grow; intuitively, functions in lower levels of the hierarchy grow more slowly than functions in the higher levels.
- https://en.wikipedia.org/wiki/Primitive_recursive_function - class of functions defined using primitive recursion and composition as central operations, strict subset of the total µ-recursive functions
- https://en.wikipedia.org/wiki/μ-recursive_function - general recursive functions, a class of partial functions from natural numbers to natural numbers which are "computable" in an intuitive sense
- https://en.wikipedia.org/wiki/Recursion_(computer_science) - in computer science is a method where the solution to a problem depends on solutions to smaller instances of the same problem (as opposed to iteration). The approach can be applied to many types of problems, and recursion is one of the central ideas of computer science.
- https://en.wikipedia.org/wiki/Recursive_set - a set of natural numbers is called recursive, computable or decidable if there is an algorithm which terminates after a finite amount of time and correctly decides whether or not a given number belongs to the set. A more general class of sets consists of the recursively enumerable sets, also called semidecidable sets. For these sets, it is only required that there is an algorithm that correctly decides when a number is in the set; the algorithm may give no answer (but not the wrong answer) for numbers not in the set. A set which is not computable is called noncomputable or undecidable.
- https://en.wikipedia.org/wiki/Recursively_enumerable_language - also recognizable, partially decidable, semidecidable, Turing-acceptable or Turing-recognizable, if it is a recursively enumerable subset of the set of all possible words over the alphabet of the language, i.e., if there exists a Turing machine which will enumerate all valid strings of the language. Recursively enumerable languages are known as type-0 languages in the Chomsky hierarchy of formal languages. All regular, context-free, context-sensitive and recursive languages are recursively enumerable. The class of all recursively enumerable languages is called RE.
- https://en.wikipedia.org/wiki/Recursively_enumerable_set - a set S of natural numbers is called recursively enumerable, computably enumerable, semidecidable, provable or Turing-recognizable if: There is an algorithm such that the set of input numbers for which the algorithm halts is exactly S, or equivalently; There is an algorithm that enumerates the members of S. That means that its output is simply a list of the members of S: s1, s2, s3, ... . If necessary, this algorithm may run forever. The first condition suggests why the term semidecidable is sometimes used; the second suggests why computably enumerable is used. The abbreviations r.e. and c.e. are often used, even in print, instead of the full phrase. In computational complexity theory, the complexity class containing all recursively enumerable sets is RE.
- The Holy Trinity - "The doctrine of computational trinitarianism holds that computation manifests itself in three forms: proofs of propositions, programs of a type, and mappings between structures. These three aspects give rise to three sects of worship: Logic, which gives primacy to proofs and propositions; Languages, which gives primacy to programs and types; Categories, which gives primacy to mappings and structures. The central dogma of computational trinitarianism holds that Logic, Languages, and Categories are but three manifestations of one divine notion of computation. There is no preferred route to enlightenment: each aspect provides insights that comprise the experience of computation in our lives." [23]
- Physics, Topology, Logic and Computation: A Rosetta Stone [24] - "In physics, Feynman diagrams are used to reason about quantum processes. In the 1980s, it became clear that underlying these diagrams is a powerful analogy between quantum physics and topology: namely, a linear operator behaves very much like a "cobordism". Similar diagrams can be used to reason about logic, where they represent proofs, and computation, where they represent programs. With the rise of interest in quantum cryptography and quantum computation, it became clear that there is extensive network of analogies between physics, topology, logic and computation. In this expository paper, we make some of these analogies precise using the concept of "closed symmetric monoidal category". We assume no prior knowledge of category theory, proof theory or computer science."
- PDF: The Galois Connection between Syntax and Semantics - Peter Smith, University of Cambridge]
- Where LISP Fits - [26] - "These simplifications made LISP into a way of describing computable functions much neater than the Turing machines or the general recursive definitions used in recursive function theory. The fact that Turing machines constitute an awkward programming language doesn’t much bother recursive function theorists, because they almost never have any reason to write particular recursive definitions, since the theory concerns recursive functions in general. They often have reason to prove that recursive functions with specific properties exist, but this can be done by an informal argument without having to write them down explicitly. In the early days of computing, some people developed programming languages based on Turing machines; perhaps it seemed more scientific. Anyway, I decided to write a paper describing LISP both as a programming language and as a formalism for doing recursive function theory."
- http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-042j-mathematics-for-computer-science-fall-2010/video-lectures/ [27]
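To make the primitive recursive and μ-recursive entries above a little more tangible, here is a minimal sketch in Python: functions built only from zero, successor, composition and the primitive recursion scheme, plus an unbounded μ (minimization) search. The helper names are invented for illustration.

```python
# Primitive recursive building blocks: zero, successor, composition and the
# primitive recursion scheme. Function names are illustrative.

def zero(_n):        return 0
def succ(n):         return n + 1

def prim_rec(base, step):
    """Primitive recursion scheme: f(0, x) = base(x); f(n+1, x) = step(n, f(n, x), x)."""
    def f(n, x):
        acc = base(x)
        for i in range(n):
            acc = step(i, acc, x)
        return acc
    return f

# add(n, x) = n + x, built only from succ and the recursion scheme
add = prim_rec(lambda x: x, lambda _i, acc, _x: succ(acc))
# mul(n, x) = n * x, built from add
mul = prim_rec(zero, lambda _i, acc, x: add(acc, x))

assert add(3, 4) == 7 and mul(3, 4) == 12

# The μ (minimization) operator goes beyond primitive recursion: it searches
# for the least n with p(n) == 0 and may fail to terminate, which is exactly
# what makes μ-recursive functions partial in general.
def mu(p):
    n = 0
    while p(n) != 0:
        n += 1
    return n

assert mu(lambda n: 25 - n * n) == 5   # least n with n*n == 25
```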
Turing machine
- http://en.wikipedia.org/wiki/Turing_machine - FSM with two stacks (left and right on tape)
- http://en.wikipedia.org/wiki/Turing_completeness
- http://www.felienne.com/?p=2974
- http://beza1e1.tuxen.de/articles/accidentally_turing_complete.html
- https://news.ycombinator.com/item?id=6693653
- https://en.wikipedia.org/wiki/Langton%27s_ant - a two-dimensional Turing machine with a very simple set of rules but complicated emergent behavior. It was invented by Chris Langton in 1986 and runs on a square lattice of black and white cells. The universality of Langton's ant was proven in 2000. The idea has been generalized in several different ways, such as turmites which add more colors and more states. [29]
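A toy simulator makes the "finite state machine plus tape" description above concrete. The machine below, its state names and its transition table are invented for illustration, and the sketch assumes a tape that is unbounded to the right only.

```python
# Minimal one-tape Turing machine simulator. The example machine simply
# inverts every bit and halts at the first blank.

def run_tm(tape, transitions, state="q0", blank="_"):
    """transitions: (state, symbol) -> (new_state, write_symbol, move in {-1, 0, +1})."""
    tape = list(tape)
    head = 0
    while state != "halt":
        symbol = tape[head] if 0 <= head < len(tape) else blank
        state, write, move = transitions[(state, symbol)]
        if head == len(tape):          # grow the tape to the right as needed
            tape.append(blank)
        tape[head] = write
        head += move
    return "".join(tape).rstrip(blank)

invert = {
    ("q0", "0"): ("q0", "1", +1),
    ("q0", "1"): ("q0", "0", +1),
    ("q0", "_"): ("halt", "_", 0),
}

assert run_tm("10110", invert) == "01001"
```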
Lambda calculus
- https://en.wikipedia.org/wiki/Lambda_calculus - a formal system in mathematical logic for expressing computation based on function abstraction and application using variable binding and substitution. It is a universal model of computation that can be used to simulate any single-taped Turing machine and was first introduced by mathematician Alonzo Church in the 1930s as part of an investigation into the foundations of mathematics. Lambda calculus consists of constructing lambda terms and performing reduction operations on them.
Lambda calculus has applications in many different areas in mathematics, philosophy, linguistics, and computer science. Lambda calculus has played an important role in the development of the theory of programming languages. Functional programming languages implement the lambda calculus. Lambda calculus also is a current research topic in Category theory.
- https://en.wikipedia.org/wiki/Church_encoding - a means of representing data and operators in the lambda calculus. The data and operators form a mathematical structure which is embedded in the lambda calculus. The Church numerals are a representation of the natural numbers using lambda notation. The method is named for Alonzo Church, who first encoded data in the lambda calculus this way.
Terms that are usually considered primitive in other notations (such as integers, booleans, pairs, lists, and tagged unions) are mapped to higher-order functions under Church encoding. The Church-Turing thesis asserts that any computable operator (and its operands) can be represented under Church encoding. In the untyped lambda calculus the only primitive data type is the function.
- https://en.wikipedia.org/wiki/μ-recursive_function - a class of partial functions from natural numbers to natural numbers that are "computable" in an intuitive sense. In fact, in computability theory it is shown that the μ-recursive functions are precisely the functions that can be computed by Turing machines. The μ-recursive functions are closely related to primitive recursive functions, and their inductive definition (below) builds upon that of the primitive recursive functions. However, not every μ-recursive function is a primitive recursive function—the most famous example is the Ackermann function. Other equivalent classes of functions are the λ-recursive functions and the functions that can be computed by Markov algorithms. The set of all recursive functions is known as R in computational complexity theory.
- YouTube: Lambda Calculus - Computerphile
- PDF: A short introduction to the Lambda Calculus - Achim Jung, March 18, 2004. The lambda calculus can appear arcane on first encounter. Viewed purely as a “naming device”, however, it is a straightforward extension of ordinary mathematical notation.
- Compiling Lambda Calculus - This book introduces the theory and interpretation of lambda calculus. [33]
- http://stackoverflow.com/questions/93526/what-is-a-y-combinator
- http://www.confreaks.com/videos/1287-rubyconf2012-y-not-adventures-in-functional-programming
- https://en.wikipedia.org/wiki/SKI_combinator_calculus - a combinatory logic, a computational system that may be perceived as a reduced version of untyped lambda calculus. It can be thought of as a computer programming language, though it is not convenient for writing software. Instead, it is important in the mathematical theory of algorithms because it is an extremely simple Turing complete language. It was introduced by Moses Schönfinkel[1] and Haskell Curry. All operations in lambda calculus can be encoded via abstraction elimination into the SKI calculus as binary trees whose leaves are one of the three symbols S, K, and I (called combinators).
- https://en.wikipedia.org/wiki/Binary_lambda_calculus - a formulation of combinatory logic using only the symbols 0 and 1.
- John's Lambda Calculus and Combinatory Logic Playground
- http://www.ioccc.org/2012/tromp/hint.html - tromp - celebrating the close connection between obfuscation and conciseness, an implementation of the most concise language known, Binary Lambda Calculus (BLC).
- The pattern calculus - "There is a significant class of operations such as mapping that are common to all data structures. The goal of generic programming is to support these operations on arbitrary data types without having to recode for each new type. The pattern calculus and combinatory type system reach this goal by representing each data structure as a combination of names and a finite set of constructors. These can be used to define generic functions by pattern-matching programs in which each pattern has a different type. Evaluation is type-free. Polymorphism is captured by quantifying over type variables that represent unknown structures. A practical type inference algorithm is provided."
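The Church encoding entry above becomes easier to see with the lambdas written out. Below is a minimal sketch using Python lambdas as a stand-in for untyped lambda terms; the decoding helper to_int is an illustration aid, not part of the calculus.

```python
# Church encoding represents data purely as functions: a Church numeral n is
# the function that applies f to x exactly n times.

ZERO  = lambda f: lambda x: x
SUCC  = lambda n: lambda f: lambda x: f(n(f)(x))
ADD   = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
MUL   = lambda m: lambda n: lambda f: m(n(f))

# Booleans are functions too: selectors of their first or second argument.
TRUE  = lambda a: lambda b: a
FALSE = lambda a: lambda b: b

def to_int(n):
    """Decode a Church numeral by counting applications of f."""
    return n(lambda k: k + 1)(0)

TWO   = SUCC(SUCC(ZERO))
THREE = SUCC(TWO)
assert to_int(ADD(TWO)(THREE)) == 5
assert to_int(MUL(TWO)(THREE)) == 6
assert TRUE("yes")("no") == "yes"
```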
Typed
- https://en.wikipedia.org/wiki/Lambda-mu_calculus - an extension of the lambda calculus introduced by M. Parigot. It introduces two new operators: the μ operator (which is completely different both from the μ operator found in computability theory and from the μ operator of modal μ-calculus) and the bracket operator. Semantically these operators correspond to continuations, found in some functional programming languages. Proof-theoretically, it provides a well-behaved formulation of classical natural deduction.
One of the main goals of this extended calculus is to be able to describe expressions corresponding to theorems in classical logic. According to the Curry–Howard isomorphism, lambda calculus on its own can express theorems in intuitionistic logic only, and several classical logical theorems can't be written at all. However with these new operators one is able to write terms that have the type of, for example, Peirce's law.
- https://en.wikipedia.org/wiki/System_F - also known as the (Girard–Reynolds) polymorphic lambda calculus or the second-order lambda calculus, is a typed lambda calculus that differs from the simply typed lambda calculus by the introduction of a mechanism of universal quantification over types. System F thus formalizes the notion of parametric polymorphism in programming languages, and forms a theoretical basis for languages such as Haskell and ML. System F was discovered independently by logician Jean-Yves Girard (1972) and computer scientist John C. Reynolds (1974).
Whereas simply typed lambda calculus has variables ranging over functions, and binders for them, System F additionally has variables ranging over types, and binders for them. The upper-case Lambda character is traditionally used to denote type-level functions, as opposed to the lower-case lambda which is used for value-level functions.
- https://en.wikipedia.org/wiki/System_U - System U and System U− are pure type systems, i.e. special forms of a typed lambda calculus with an arbitrary number of sorts, axioms and rules (or dependencies between the sorts). They were both proved inconsistent by Jean-Yves Girard in 1972. This result led to the realization that Martin-Löf's original 1971 type theory was inconsistent as it allowed the same "Type in Type" behaviour that Girard's paradox exploits.
to sort
- An Introduction to the Lambda Calculus - using Clojure [36]
- https://en.wikipedia.org/wiki/Process_calculus - or process algebras - are a diverse family of related approaches for formally modelling concurrent systems. Process calculi provide a tool for the high-level description of interactions, communications, and synchronizations between a collection of independent agents or processes. They also provide algebraic laws that allow process descriptions to be manipulated and analyzed, and permit formal reasoning about equivalences between processes (e.g., using bisimulation). Leading examples of process calculi include CSP, CCS, ACP, and LOTOS. More recent additions to the family include the π-calculus, the ambient calculus, PEPA, the fusion calculus and the join-calculus
Computational complexity
- https://en.wikipedia.org/wiki/Time_complexity - the time complexity of an algorithm quantifies the amount of time taken by an algorithm to run as a function of the length of the string representing the input. The time complexity of an algorithm is commonly expressed using big O notation, which excludes coefficients and lower order terms. When expressed this way, the time complexity is said to be described asymptotically, i.e., as the input size goes to infinity. For example, if the time required by an algorithm on all inputs of size n is at most 5n³ + 3n for any n (bigger than some n₀), the asymptotic time complexity is O(n³).
- https://en.wikipedia.org/wiki/Big_O_notation - a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity. It is a member of a family of notations invented by Edmund Landau and Paul Bachmann, collectively called Bachmann-Landau notation or asymptotic notation.
- https://en.wikipedia.org/wiki/Decision_problem - a question in some formal system with a yes-or-no answer
- https://en.wikipedia.org/wiki/Halting_Problem
- Scooping The Loop Snooper - A proof that the Halting Problem is undecidable
- https://en.wikipedia.org/wiki/Parameterized_complexity - a branch of computational complexity theory that focuses on classifying computational problems according to their inherent difficulty with respect to multiple parameters of the input or output. The complexity of a problem is then measured as a function in those parameters. This allows the classification of NP-hard problems on a finer scale than in the classical setting, where the complexity of a problem is only measured by the number of bits in the input.
- https://en.wikipedia.org/wiki/Kolmogorov_complexity - the inherent complexity of a given piece of data, measured as the length of the shortest program that produces it
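As a small illustration of the big O and time complexity entries above, the two functions below answer the same membership question with different asymptotic cost; the data set and function names are invented for illustration.

```python
# Two ways to answer the same question with different asymptotic cost.
# Big O ignores constant factors; only the growth rate as n increases matters.

def contains_linear(sorted_items, target):
    """O(n): scan every element."""
    for item in sorted_items:
        if item == target:
            return True
    return False

def contains_binary(sorted_items, target):
    """O(log n): halve the search interval each step (input must be sorted)."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return True
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return False

data = list(range(0, 1_000_000, 2))   # the even numbers below one million
assert contains_linear(data, 777_778) == contains_binary(data, 777_778) == True
assert contains_linear(data, 777_779) == contains_binary(data, 777_779) == False
```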
to sort
- https://github.com/pervognsen/bitwise - Bitwise is an educational project where we create the software/hardware stack for a computer from scratch.
- https://en.wikipedia.org/wiki/Code_as_data - In computer science, the expressions code as data and data as code refer to the duality between code and data, that allows computers to treat instructions in a programming language as data handled by a running program.
- https://en.wikipedia.org/wiki/Semantics_(computer_science) - rigorous mathematical study of the meaning of programming languages by evaluating the meaning of syntactically legal strings
- https://en.wikipedia.org/wiki/Denotational_semantics - approach of formalizing the meanings of programming languages by constructing mathematical objects (called denotations) that describe the meanings of expressions from the languages
- https://en.wikipedia.org/wiki/Operational_semantics - certain desired properties of a program, such as correctness, safety or security, are verified by constructing proofs from logical statements about its execution and procedures
- https://en.wikipedia.org/wiki/Axiomatic_semantics - define the meaning of a command in a program by describing its effect on assertions about the program state
- https://en.wikipedia.org/wiki/Algebraic_semantics_(computer_science) - a form of axiomatic semantics based on algebraic laws for describing and reasoning about program semantics in a formal manner.
- https://en.wikipedia.org/wiki/Symbolic_execution - also symbolic evaluation, is a means of analyzing a program to determine what inputs cause each part of a program to execute. An interpreter follows the program, assuming symbolic values for inputs rather than obtaining actual inputs as normal execution of the program would, a case of abstract interpretation. It thus arrives at expressions in terms of those symbols for expressions and variables in the program, and constraints in terms of those symbols for the possible outcomes of each conditional branch.
- http://en.wikipedia.org/wiki/Programming_language_theory
- http://en.wikipedia.org/wiki/Automatic_programming
- https://www.youtube.com/watch?v=AD4b-52jtos&list=PL2FF649D0C4407B30
- https://www.youtube.com/watch?v=2Op3QLzMgSY&list=PL8FE88AA54363BC46
- https://www.youtube.com/watch?v=EKWGGDXe5MA&feature=youtu.be
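The denotational semantics entry above is the easiest of these to sketch in code: each piece of syntax is mapped compositionally to a mathematical meaning, here a function from environments to numbers. The mini-language and helper names below are invented for illustration.

```python
# Toy denotational semantics: every syntactic construct denotes a function
# from environments (variable name -> number) to numbers, and the meaning of
# a compound expression is built from the meanings of its parts.

def num(n):            return lambda env: n
def var(name):         return lambda env: env[name]
def add(d1, d2):       return lambda env: d1(env) + d2(env)
def mul(d1, d2):       return lambda env: d1(env) * d2(env)

# Denotation of the expression  x * (y + 2)
meaning = mul(var("x"), add(var("y"), num(2)))

assert meaning({"x": 3, "y": 4}) == 18
assert meaning({"x": 0, "y": 9}) == 0
```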
Architecture
See also Electronics
- https://www.youtube.com/watch?v=Tg5gJxXBh8s - George Dyson on the Origins of the Digital Universe - The talk focuses on the work done at The Institute for Advanced Study in Princeton, New Jersey by such renowned scientists as John von Neumann and Kurt Gödel.
- https://en.wikipedia.org/wiki/Harvard_architecture - a computer architecture with physically separate storage and signal pathways for instructions and data. The term originated from the Harvard Mark I relay-based computer, which stored instructions on punched tape (24 bits wide) and data in electro-mechanical counters. These early machines had data storage entirely contained within the central processing unit, and provided no access to the instruction storage as data. Programs needed to be loaded by an operator; the processor could not initialize itself.
- https://en.wikipedia.org/wiki/Von_Neumann_architecture - meaning has evolved to be any stored-program computer in which an instruction fetch and a data operation cannot occur at the same time because they share a common bus. This is referred to as the von Neumann bottleneck and often limits the performance of the system. The design of a von Neumann architecture is simpler than the more modern Harvard architecture which is also a stored-program system but has one dedicated set of address and data buses for reading data from and writing data to memory, and another set of address and data buses for fetching instructions.
- https://en.wikipedia.org/wiki/Modified_Harvard_architecture - a variation of the Harvard computer architecture that allows the contents of the instruction memory to be accessed as if it were data. Most modern computers that are documented as Harvard architecture are, in fact, Modified Harvard architecture.
- https://en.wikipedia.org/wiki/Flynn's_taxonomy - a classification of computer architectures, proposed by Michael Flynn in 1966. The classification system has stuck, and has been used as a tool in design of modern processors and their functionalities.
- OpenCores - since 1999, the most prominent online community for the development of gateware IP (Intellectual Property) Cores. It is the place where such cores are shared and promoted in the spirit of Free and Open Source collaboration.
- http://opencores.org/projects - the main objective is to design and publish core designs under a license for hardware modeled on the Lesser General Public License (LGPL) for software, committed to the ideal of freely available, freely usable and re-usable open source hardware.
Instructions
- https://en.wikipedia.org/wiki/Microarchitecture - sometimes abbreviated to µarch or uarch, also called computer organization, is the way a given instruction set architecture (ISA) is implemented on a processor. A given ISA may be implemented with different microarchitectures; implementations may vary due to different goals of a given design or due to shifts in technology. Computer architecture is the combination of microarchitecture and instruction set design.
- https://en.wikipedia.org/wiki/Instruction_set - or instruction set architecture (ISA), is the part of the computer architecture related to programming, including the native data types, instructions, registers, addressing modes, memory architecture, interrupt and exception handling, and external I/O. An ISA includes a specification of the set of opcodes (machine language), and the native commands implemented by a particular processor.
- https://en.wikipedia.org/wiki/Microcode - a layer of hardware-level instructions or data structures involved in the implementation of higher level machine code instructions in central processing units, and in the implementation of the internal logic of many channel controllers, disk controllers, network interface controllers, network processors, graphics processing units, and other hardware. It resides in special high-speed memory and translates machine instructions into sequences of detailed circuit-level operations. It helps separate the machine instructions from the underlying electronics so that instructions can be designed and altered more freely. It also makes it feasible to build complex multi-step instructions while still reducing the complexity of the electronic circuitry compared to other methods. Writing microcode is often called microprogramming and the microcode in a particular processor implementation is sometimes called a microprogram.
- https://en.wikipedia.org/wiki/Machine_code - a set of instructions executed directly by a computer's central processing unit (CPU). Each instruction performs a very specific task, such as a load, a jump, or an ALU operation on a unit of data in a CPU register or memory. Every program directly executed by a CPU is made up of a series of such instructions. Numerical machine code (i.e., not assembly code) may be regarded as the lowest-level representation of a compiled or assembled computer program or as a primitive and hardware-dependent programming language. While it is possible to write programs directly in numerical machine code, it is tedious and error prone to manage individual bits and calculate numerical addresses and constants manually. It is thus rarely done today, except for situations that require extreme optimization or debugging.
- https://en.wikipedia.org/wiki/Word_(computer_architecture) - term for the natural unit of data used by a particular processor design. A word is basically a fixed-sized group of digits (binary or decimal) that are handled as a unit by the instruction set or the hardware of the processor. The number of digits in a word (the word size, word width, or word length) is an important characteristic of any specific processor design or computer architecture.
- https://en.wikipedia.org/wiki/Opcode - the portion of a machine language instruction that specifies the operation to be performed.
- https://en.wikipedia.org/wiki/Operand#Computer_science - the part of a computer instruction which specifies what data is to be manipulated or operated on, while at the same time representing the data itself. A computer instruction describes an operation such as add or multiply X, while the operand (or operands, as there can be more than one) specify on which X to operate as well as the value of X. Additionally, in assembly language, an operand is a value (an argument) on which the instruction, named by mnemonic, operates. The operand may be a processor register, a memory address, a literal constant, or a label.
- https://en.wikipedia.org/wiki/Bytecode - also known as p-code (portable code), is a form of instruction set designed for efficient execution by a software interpreter. Unlike human-readable source code, bytecodes are compact numeric codes, constants, and references (normally numeric addresses) which encode the result of parsing and semantic analysis of things like type, scope, and nesting depths of program objects. They therefore allow much better performance than direct interpretation of source code.
"fundamentally the same as machine code, in that it describes low level operations such as reading and writing memory, and basic calculations. Bytecode is typically conceived to be produced when COMPILING a higher level language, for example PHP or Java, and unlike machine code for many hardware based processors, may have operations to support specific features of the higher level language. A key difference is that the processor of bytecode is usually a program, though processors have been created for interpreting some bytecode specifications, e.g. a processor called SOAR (Smalltalk On A RISC) for Smalltalk bytecode. While you wouldn't typically call native machine code bytecode, for some types of processors such as CISC and EISC (e.g. Linn Rekursiv, from the people who made record players), the processor itself contains a program that is interpreting the machine instructions, so there are parallels." [47]
Assembly
- https://en.wikipedia.org/wiki/Assembly_language - human-readable instructions to the assembler, plus data bytes and operators
- Rappel - assembly REPL. It works by creating a shell ELF, starting it under ptrace, then continuously rewriting/running the .text section, while showing the register states. It's maybe half done right now, and supports Linux x86, amd64, and armv7 (no thumb) at the moment. [50]
- https://github.com/osandov/asmase - a REPL for assembly language using an LLVM backend. It provides an accurate interactive environment for several architectures.
- https://en.wikipedia.org/wiki/Disassembler - a computer program that translates machine language into assembly language—the inverse operation to that of an assembler. A disassembler differs from a decompiler, which targets a high-level language rather than an assembly language. Disassembly, the output of a disassembler, is often formatted for human-readability rather than suitability for input to an assembler, making it principally a reverse-engineering tool.
- yasp - a fully functional web-based assembler development environment, including a real assembler, emulator and debugger. The assembler dialect is custom and very simple so as to keep the learning curve as shallow as possible. It also features some hardware-elements (LED, Potentiometer, Button, etc.). The goal of this project is to create an environment in which students can learn the assembly language so that they understand computers better. Furthermore it allows them to experiment without the fear of breaking something. The original project team of yasp consists of Robert Fischer and Michael "luto" Lutonsky. For more information take a look at the about-section in the IDEs menu. [56]
- MOnSter 6502 - A new dis-integrated circuit project to make a complete, working transistor-scale replica of the classic MOS 6502 microprocessor. [57]
- YouTube: MOnSter 6502 Update #1
- Easy 6502 - how to get started writing 6502 assembly language. The 6502 processor was massive in the seventies and eighties, powering famous computers like the BBC Micro, Atari 2600, Commodore 64, Apple II, and the Nintendo Entertainment System. Bender in Futurama has a 6502 processor for a brain. Even the Terminator was programmed in 6502.
- https://floooh.github.io/visual6502remix
- https://github.com/floooh/v6502r - Visual6502 Remixed. TL;DR: Take the visual6502 project and turn it into a native+wasm app via perfect6502, Dear ImGui and Sokol.
- https://github.com/gregjames/Sim2600 - Visual6502.org transistor-level simulation of the Atari 2600 game console
Processing
- https://en.wikipedia.org/wiki/Voltage_regulator_module - (VRM), sometimes called processor power module (PPM), is a buck converter that provides a microprocessor the appropriate supply voltage, converting +5 V or +12 V to a much lower voltage required by the CPU, allowing processors with different supply voltages to be mounted on the same motherboard.
CPU
- https://en.wikipedia.org/wiki/Control_unit - a component of a computer's central processing unit (CPU) that directs operation of the processor. It controls communication and co-ordination between input/output devices. It reads and interprets instructions and determines the sequence for processing the data.
- https://en.wikipedia.org/wiki/Control_store - the part of a CPU's control unit that stores the CPU's microprogram. It is usually accessed by a microsequencer.
- https://en.wikipedia.org/wiki/Memory_address_register
- https://en.wikipedia.org/wiki/Memory_data_register
- https://en.wikipedia.org/wiki/Register_renaming
- http://www.mikeash.com/pyblog/friday-qa-2013-10-11-why-registers-are-fast-and-ram-is-slow.html
- https://en.wikipedia.org/wiki/Cache_coherence - the uniformity of shared resource data that ends up stored in multiple local caches. When clients in a system maintain caches of a common memory resource, problems may arise with incoherent data, which is particularly the case with CPUs in a multiprocessing system. In the illustration on the right, consider both the clients have a cached copy of a particular memory block from a previous read. Suppose the client on the bottom updates/changes that memory block, the client on the top could be left with an invalid cache of memory without any notification of the change. Cache coherence is intended to manage such conflicts by maintaining a coherent view of the data values in multiple caches.
- https://en.wikipedia.org/wiki/Instruction_pipeline
- http://lolengine.net/blog/2011/9/17/playing-with-the-cpu-pipeline [63]
- https://en.wikipedia.org/wiki/NX_bit - stands for No-eXecute, a technology used in CPUs to segregate areas of memory for use by either storage of processor instructions (code) or for storage of data, a feature normally only found in Harvard architecture processors. However, the NX bit is being increasingly used in conventional von Neumann architecture processors, for security reasons.
- https://en.wikipedia.org/wiki/Branch_predication - a strategy in computer architecture design for mitigating the costs usually associated with conditional branches, particularly branches to short sections of code. It does this by allowing each instruction to conditionally either perform an operation or do nothing.
- https://en.wikipedia.org/wiki/Vector_processor - a central processing unit (CPU) that implements an instruction set containing instructions that operate on one-dimensional arrays of data called vectors
Microprocessor
- https://en.wikipedia.org/wiki/Microprocessor - a computer processor that incorporates the functions of a computer's central processing unit (CPU) on a single integrated circuit
Coprocessor
- https://en.wikipedia.org/wiki/Coprocessor - a computer processor used to supplement the functions of the primary processor (the CPU). Operations performed by the coprocessor may be floating point arithmetic, graphics, signal processing, string processing, encryption or I/O Interfacing with peripheral devices. By offloading processor-intensive tasks from the main processor, coprocessors can accelerate system performance. Coprocessors allow a line of computers to be customized, so that customers who do not need the extra performance need not pay for it.
- https://en.wikipedia.org/wiki/Floating-point_unit - (FPU, colloquially a math coprocessor) is a part of a computer system specially designed to carry out operations on floating point numbers. Typical operations are addition, subtraction, multiplication, division, square root, and bitshifting. Some systems (particularly older, microcode-based architectures) can also perform various transcendental functions such as exponential or trigonometric calculations, though in most modern processors these are done with software library routines. In general purpose computer architectures, one or more FPUs may be integrated with the central processing unit; however many embedded processors do not have hardware support for floating-point operations.
- https://en.wikipedia.org/wiki/Controller_(computing) - a chip, an expansion card, or a stand-alone device that interfaces with a peripheral device. This may be a link between two parts of a computer (for example a memory controller that manages access to memory for the computer) or a controller on an external device that manages the operation of (and connection with) that device.
The term is sometimes used in the opposite sense to refer to a device by which the user controls the operation of the computer, as in game controller. In desktop computers the controller may be a plug in board, a single integrated circuit on the motherboard, or an external device. In mainframes the controller is usually either a separate device attached to a channel or integrated into the peripheral.
- https://en.wikipedia.org/wiki/Compute_kernel - a routine compiled for high throughput accelerators (such as GPUs), DSPs or FPGAs, separate from (but used by) a main program. They are sometimes called compute shaders, sharing execution units with vertex shaders and pixel shaders on GPUs, but are not limited to execution on one class of device, or graphics APIs.
DSP
FPGA
- http://forums.xilinx.com/t5/Embedded-Development-Tools/GPUs-vs-FPGAs/td-p/60112
- https://news.ycombinator.com/item?id=6305113
CISC
x86
- https://news.ycombinator.com/item?id=13045558
- https://news.ycombinator.com/item?id=13051137
- https://news.ycombinator.com/item?id=7422703
- http://mainisusuallyafunction.blogspot.co.uk/2014/02/x86-is-turing-complete-with-no-registers.html [72]
- https://code.google.com/p/corkami/wiki/x86oddities
- http://www.cl.cam.ac.uk/~sd601/papers/mov.pdf [73]
- https://news.ycombinator.com/item?id=7680706
- https://en.wikipedia.org/wiki/Advanced_Vector_Extensions
RISC
MIPS
- EE Times: MIPS Goes Open Source - [78]
SPARC
Power
- https://en.wikipedia.org/wiki/IBM_POWER_Instruction_Set_Architecture - older POWER ISA
Alpha
ARM
- YouTube: A tour of the ARM architecture and its Linux support - Thomas Petazzoni
RISC-V
- http://gigaom.com/2014/08/19/risc-creator-is-pushing-open-source-chips-for-cloud-computing-and-the-internet-of-things/ [80]
- lowRISC - producing fully open hardware systems. From the processor core to the development board, our goal is to create a completely open computing eco-system. Our open-source SoC (System-on-a-Chip) designs will be based on the 64-bit RISC-V instruction set architecture. Volume silicon manufacture is planned as is a low-cost development board. There are more details on our plans in these slides from a recent talk [81] [82]
- http://www.lowrisc.org/blog/2017/11/seventh-risc-v-workshop-day-one/ [83]
- http://www.lowrisc.org/blog/2017/11/seventh-risc-v-workshop-day-two/
- PULP Platform - What you will get is a competitive, state-of-the-art 32-bit processor based on the RISC-V architecture, with a rich set of peripherals, and full debug support. At ETH Zurich and Università di Bologna we have put many of the ideas that we have developed through our research on ultra-low-power parallel processing (PULP project) into PULPino. It is the little hip brother to its more serious bigger brothers.
- RISC-V from scratch 1: Introduction, toolchain setup, and hello world!
- RISC-V from scratch 2: Hardware layouts, linker scripts, and C runtimes - [86]
Other
- https://en.wikipedia.org/wiki/SuperH
- http://www.shared-ptr.com/sh_insns.html
- http://j-core.org/ [88]
- https://news.ycombinator.com/item?id=12645661 - 64 bit, 1024 core
GPU
TPU
- https://en.wikipedia.org/wiki/Tensor_processing_unit - application-specific integrated circuits (ASICs) developed specifically for machine learning. Compared to graphics processing units (which as of 2016 are frequently used for the same tasks), they are designed explicitly for a higher volume of reduced precision computation with higher IOPS per watt (e.g. as little as 8-bit precision), and lack hardware for rasterisation/texture mapping. The chip has been specifically designed for Google's TensorFlow framework; however, Google still uses CPUs and GPUs for other machine learning. AI accelerator designs from other vendors are also appearing, aimed at embedded and robotics markets.
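The reduced-precision point in the entry above can be illustrated with plain linear quantization of floats to 8-bit integers and back; this is a generic sketch, not any TPU or TensorFlow API, and the numbers are invented.

```python
# Linear (affine) quantization of floats to unsigned 8-bit integers and back,
# the kind of precision reduction that 8-bit accelerators trade for throughput.

def quantize(values, lo, hi):
    """Map floats in [lo, hi] to integers in [0, 255]."""
    scale = (hi - lo) / 255.0
    return [round((v - lo) / scale) for v in values], scale

def dequantize(qvalues, lo, scale):
    return [lo + q * scale for q in qvalues]

weights = [-0.8, -0.1, 0.0, 0.25, 0.9]
q, scale = quantize(weights, -1.0, 1.0)
approx = dequantize(q, -1.0, scale)

# Each value is recovered to within half a quantization step (~0.004 here).
assert all(abs(w - a) <= scale / 2 + 1e-9 for w, a in zip(weights, approx))
```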
The Mill
- The Belt | Mill computing [91]
- The Mill - summary of first three talks
- http://ootbcomp.com/topic/introduction-to-the-mill-cpu-programming-model-2/ [92]
Parallel computing
- https://en.wikipedia.org/wiki/SIMD - Single instruction, multiple data (SIMD) is a class of parallel computers in Flynn's taxonomy. It describes computers with multiple processing elements that perform the same operation on multiple data points simultaneously. Such machines exploit data level parallelism, but not concurrency: there are simultaneous (parallel) computations, but only a single process (instruction) at a given moment. SIMD is particularly applicable to common tasks such as adjusting the contrast in a digital image or adjusting the volume of digital audio. Most modern CPU designs include SIMD instructions to improve the performance of multimedia use. Not to be confused with SIMT which utilizes threads.
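A loose illustration of "one operation over many data elements": the NumPy sketch below (assuming NumPy is installed) expresses a multiply over a whole array at once rather than element by element. NumPy's vectorized kernels are typically backed by the CPU's SIMD instructions, though that is an implementation detail rather than explicit SIMD programming.

```python
# Same computation two ways: an explicit per-element loop versus one
# data-parallel expression over the whole array.

import numpy as np

samples = np.array([0.1, -0.4, 0.9, -0.2, 0.5], dtype=np.float32)

# Scalar-style loop: one element at a time.
halved_loop = np.empty_like(samples)
for i in range(len(samples)):
    halved_loop[i] = samples[i] * 0.5

# Data-parallel style: the same multiply applied to every element at once.
halved_vec = samples * 0.5

assert np.allclose(halved_loop, halved_vec)
```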
GPGPU
- https://github.com/Meinersbur/ppcg - Using PPCG to generate CUDA or OpenCL code
CUDA
- https://en.wikipedia.org/wiki/CUDA - (Compute Unified Device Architecture) is a parallel computing platform and application programming interface (API) model created by Nvidia. It allows software developers and software engineers to use a CUDA-enabled graphics processing unit (GPU) for general purpose processing – an approach termed GPGPU (General-Purpose computing on Graphics Processing Units). The CUDA platform is a software layer that gives direct access to the GPU's virtual instruction set and parallel computational elements, for the execution of compute kernels. The CUDA platform is designed to work with programming languages such as C, C++, and Fortran. This accessibility makes it easier for specialists in parallel programming to use GPU resources, in contrast to prior APIs like Direct3D and OpenGL, which required advanced skills in graphics programming. CUDA-powered GPUs also support programming frameworks such as OpenACC and OpenCL; and HIP by compiling such code to CUDA. When CUDA was first introduced by Nvidia, the name was an acronym for Compute Unified Device Architecture, but Nvidia subsequently dropped the common use of the acronym.
OpenCL
- https://en.wikipedia.org/wiki/OpenCL - a framework for writing programs that execute across heterogeneous platforms consisting of central processing units (CPUs), graphics processing units (GPUs), digital signal processors (DSPs), field-programmable gate arrays (FPGAs) and other processors or hardware accelerators. OpenCL specifies programming languages (based on C99 and C++11) for programming these devices and application programming interfaces (APIs) to control the platform and execute programs on the compute devices. OpenCL provides a standard interface for parallel computing using task- and data-based parallelism.
- https://github.com/Oblomov/clinfo - Print all known information about all available OpenCL platforms and devices in the system
- https://en.wikipedia.org/wiki/SYCL - a higher-level programming model for OpenCL as a single-source domain specific embedded language (DSEL) based on pure C++11 for SYCL 1.2.1 to improve programming productivity. This is a standard developed by Khronos Group, announced in March 2014.
ROCm
- ROCm - the first open-source HPC/Hyperscale-class platform for GPU computing that's also programming-language independent, bringing the UNIX philosophy of choice, minimalism and modular software development to GPU computing. The ROCm foundation lets you choose or even develop tools and a language runtime for your application. ROCm is built for scale; it supports multi-GPU computing in and out of server-node communication through RDMA. It also simplifies the stack when the driver directly incorporates RDMA peer-sync support. ROCm has a rich system runtime with the critical features that large-scale application, compiler and language-runtime development require. It includes an HSA-compliant runtime and driver for AMD Radeon GPUs.
Microcontroller
- https://en.wikipedia.org/wiki/Microcontroller - a small computer (SoC) on a single integrated circuit containing a processor core, memory, and programmable input/output peripherals. Program memory in the form of Ferroelectric RAM, NOR flash or OTP ROM is also often included on chip, as well as a typically small amount of RAM. Microcontrollers are designed for embedded applications, in contrast to the microprocessors used in personal computers or other general purpose applications consisting of various discrete chips.
Memory
There are four major storage levels:
- Internal – Processor registers and cache.
- Main – the system RAM and controller cards.
- On-line mass storage – Secondary storage.
- Off-line bulk storage – Tertiary and Off-line storage.
- https://en.wikipedia.org/wiki/NVDIMM - a type of random-access memory for computers. Non-volatile memory is memory that retains its contents even when electrical power is removed, for example from an unexpected power loss, system crash, or normal shutdown. "Dual in-line" identifies the memory as using the DIMM package. NVDIMMs improve application performance and system crash recovery time. They enhance solid-state drive (SSD) endurance and reliability. "Non-volatile" products may use volatile memory during normal operation and dump the contents into non-volatile memory if the power fails, using an on-board backup power source. Volatile memory is faster than non-volatile; it is byte-addressable; and it can be written to arbitrarily, without concerns about wear and device lifespan. However, including a second memory to achieve non-volatility (and the on-board backup power source) increases the product cost compared to volatile memory.
- https://en.wikipedia.org/wiki/Memory_rank - a set of DRAM chips connected to the same chip select, which are therefore accessed simultaneously. In practice all DRAM chips share all of the other command and control signals, and only the chip select pins for each rank are separate (the data pins are shared across ranks).
- https://en.wikipedia.org/wiki/Address_space - defines a range of discrete addresses, each of which may correspond to a network host, peripheral device, disk sector, a memory cell or other logical or physical entity.
- https://en.wikipedia.org/wiki/Bank_switching - a technique used in computer design to increase the amount of usable memory beyond the amount directly addressable by the processor. It can be used to configure a system differently at different times; for example, a ROM required to start a system from diskette could be switched out when no longer needed.
- https://en.wikipedia.org/wiki/Slab_allocation - a memory management mechanism intended for the efficient memory allocation of kernel objects. It eliminates fragmentation caused by allocations and deallocations. The technique is used to retain allocated memory that contains a data object of a certain type for reuse upon subsequent allocations of objects of the same type. It is analogous to an object pool, but only applies to memory, not other resources. Slab allocation was first introduced in the Solaris 5.4 kernel by Jeff Bonwick. It is now widely used by many Unix and Unix-like operating systems including FreeBSD and Linux.
- https://en.wikipedia.org/wiki/Executable_space_protection - the marking of memory regions as non-executable, such that an attempt to execute machine code in these regions will cause an exception. It makes use of hardware features such as the NX bit, or in some cases software emulation of those features.
- https://en.wikipedia.org/wiki/Virtual_memory - a memory management technique that is implemented using both hardware and software. It maps memory addresses used by a program, called virtual addresses, into physical addresses in computer memory.
- https://en.wikipedia.org/wiki/Paging - a memory management scheme by which a computer stores and retrieves data from secondary storage for use in main memory.
- https://en.wikipedia.org/wiki/Page_(computer_memory) - a memory page (or virtual page) is a fixed-length contiguous block of virtual memory, described by a single entry in the page table. It is the smallest unit of data for memory management in a virtual memory operating system.
- https://en.wikipedia.org/wiki/Page_table - the data structure used by a virtual memory system in a computer operating system to store the mapping between virtual addresses and physical addresses. Virtual addresses are used by the accessing process, while physical addresses are used by the hardware, or more specifically, by the RAM subsystem.
- https://en.wikipedia.org/wiki/Page_fault - a type of interrupt, called a trap, raised by computer hardware when a running program accesses a memory page that is mapped into the virtual address space but not actually loaded into main memory. When handling a page fault, the operating system generally tries to make the required page accessible at the location in physical memory, or terminates the program in case of an illegal memory access. Contrary to what "fault" might suggest, valid page faults are not errors; they are common and necessary to increase the amount of memory available to programs in any operating system that uses virtual memory. (A small demand-paging demonstration follows at the end of this list.)
- https://en.wikipedia.org/wiki/Thrashing_(computer_science) - occurs when a computer's virtual memory subsystem is in a constant state of paging, rapidly exchanging data in memory for data on disk, to the exclusion of most application-level processing.
- http://en.wikipedia.org/wiki/Call_stack - a stack data structure that stores information about the active subroutines of a computer program. This kind of stack is also known as an execution stack, control stack, run-time stack, or machine stack, and is often shortened to just "the stack". Although maintenance of the call stack is important for the proper functioning of most software, the details are normally hidden and automatic in high-level programming languages. Many computer instruction sets provide special instructions for manipulating stacks.
A call stack is used for several related purposes, but the main reason for having one is to keep track of the point to which each active subroutine should return control when it finishes executing.
- https://en.wikipedia.org/wiki/Stack-based_memory_allocation - because data is added to and removed from the stack in a last-in-first-out manner, stack-based memory allocation is very simple and typically faster than heap-based memory allocation (also known as dynamic memory allocation).
- https://en.wikipedia.org/wiki/Non-uniform_memory_access - a computer memory design used in multiprocessing, where the memory access time depends on the memory location relative to the processor. Under NUMA, a processor can access its own local memory faster than non-local memory (memory local to another processor or memory shared between processors). The benefits of NUMA are limited to particular workloads, notably on servers where the data are often associated strongly with certain tasks or users.
- http://en.wikipedia.org/wiki/Pointer_(computer_programming) - a programming language object, whose value refers to (or "points to") another value stored elsewhere in the computer memory using its address. A pointer references a location in memory, and obtaining the value stored at that location is known as dereferencing the pointer.
- r/learnprogramming: What's the point of pointers?
- https://en.wikipedia.org/wiki/Pointer_aliasing - refers to the situation where the same memory location can be accessed using different names.
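Tying together the virtual memory, paging, and page fault entries above, a small Linux-specific demonstration (an assumption: it uses getrusage()'s minor-fault counter as the observable): mmap() only reserves virtual address space, and physical pages are wired in on first touch, one minor page fault at a time.

```c
/* Demand paging made visible (Linux): mmap() hands back virtual address space,
   and physical pages arrive on first touch, one minor page fault each. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/resource.h>

static long minor_faults(void) {
    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);
    return ru.ru_minflt;
}

int main(void) {
    size_t len = 64u * 1024 * 1024;                      /* 64 MiB of virtual memory */
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) return 1;

    long before = minor_faults();
    memset(p, 1, len);                                   /* first touch: kernel maps the pages in */
    long after = minor_faults();

    printf("minor page faults while touching 64 MiB: %ld\n", after - before);
    munmap(p, len);
    return 0;
}
```

On a typical 4 KiB-page system, touching 64 MiB should report on the order of 16K minor faults.

And for the pointer and pointer-aliasing entries, a short C illustration (a toy sketch, not taken from the linked articles): two pointers aliasing the same object, plus restrict as the programmer's promise that no aliasing occurs, which lets the compiler avoid redundant reloads.

```c
/* Pointer basics and aliasing: two names for one memory location, and
   'restrict' as the promise that no such aliasing happens. */
#include <stdio.h>

/* Without restrict the compiler must assume dst and src might overlap and
   re-read src after every store; with restrict it can optimize freely. */
static void add_into(int *restrict dst, const int *restrict src, int n) {
    for (int i = 0; i < n; i++)
        dst[i] += src[i];
}

int main(void) {
    int x = 42;
    int *a = &x, *b = &x;            /* a and b alias: same location, two names */
    *a = 7;
    printf("*b = %d\n", *b);         /* prints 7 - the write through a is visible via b */

    int dst[4] = {1, 2, 3, 4}, src[4] = {10, 20, 30, 40};
    add_into(dst, src, 4);
    printf("dst[0] = %d\n", dst[0]); /* 11 */
    return 0;
}
```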
Storage
Other
Security
ASIC
Interrupts
- https://en.wikipedia.org/wiki/Interrupt - a signal to the processor emitted by hardware or software indicating an event that needs immediate attention. An interrupt alerts the processor to a high-priority condition requiring the interruption of the current code the processor is executing. The processor responds by suspending its current activities, saving its state, and executing a function called an interrupt handler (or an interrupt service routine, ISR) to deal with the event. This interruption is temporary, and, after the interrupt handler finishes, the processor resumes normal activities.
- https://en.wikipedia.org/wiki/Interrupt_request_(PC_architecture) - an interrupt request (or IRQ) is a hardware signal sent to the processor that temporarily stops a running program and allows a special program, an interrupt handler, to run instead. Hardware interrupts are used to handle events such as receiving data from a modem or network card, key presses, or mouse movements.
Interrupt lines are often identified by an index with the format of IRQ followed by a number. For example, on the Intel 8259 family of PICs there are eight interrupt inputs commonly referred to as IRQ0 through IRQ7. In x86 based computer systems that use two of these PICs, the combined set of lines are referred to as IRQ0 through IRQ15. Technically these lines are named IR0 through IR7, and the lines on the ISA bus to which they were historically attached are named IRQ0 through IRQ15. Newer x86 systems integrate an Advanced Programmable Interrupt Controller (APIC) that conforms to the Intel APIC Architecture. These APICs support a programming interface for up to 255 physical hardware IRQ lines per APIC, with a typical system implementing support for only around 24 total hardware lines.
- https://en.wikipedia.org/wiki/Interrupt_handler - also known as an interrupt service routine (ISR), a callback function in microcontroller firmware, an operating system, or a device driver whose execution is triggered by the reception of an interrupt. In general, interrupts and their handlers are used to handle high-priority conditions that require the interruption of the current code the processor is executing.
Interrupt handlers have a multitude of functions, which vary based on the reason the interrupt was generated and the speed at which the interrupt handler completes its task. For example, pressing a key on a computer keyboard, or moving the mouse, triggers interrupts that call interrupt handlers which read the key, or the mouse's position, and copy the associated information into the computer's memory. An interrupt handler is a low-level counterpart of event handlers. These handlers are initiated by either hardware interrupts or interrupt instructions in software, and are used for servicing hardware devices and transitions between protected modes of operation such as system calls.
- https://en.wikipedia.org/wiki/Exception_handling - the process of responding to the occurrence, during computation, of exceptions – anomalous or exceptional conditions requiring special processing – often changing the normal flow of program execution. It is provided by specialized programming language constructs or computer hardware mechanisms.
- https://en.wikipedia.org/wiki/Message_Signaled_Interrupts - an alternative in-band method of signaling an interrupt, using special in-band messages to replace traditional out-of-band assertion of dedicated interrupt lines. While more complex to implement in a device, message signaled interrupts have some significant advantages over pin-based out-of-band interrupt signaling. Message signaled interrupts are supported in PCI bus since its version 2.2, and in later available PCI Express bus. Some non-PCI architectures also use message signaled interrupts.
- https://github.com/Irqbalance/irqbalance - a daemon to help balance the CPU load generated by interrupts across all of a system's CPUs. Irqbalance identifies the highest-volume interrupt sources and isolates each to a single unique CPU, so that load is spread as much as possible over the entire processor set while minimizing cache miss rates for IRQ handlers.
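A user-space analogue of the interrupt and ISR entries above, sketched with a POSIX signal handler: SIGINT plays the role of the interrupt line, the handler does the minimal "ISR" work of recording the event, and the interrupted main loop resumes and finishes the processing. Not a hardware interrupt handler, just the closest portable approximation.

```c
/* A user-space stand-in for an interrupt handler: a POSIX signal handler.
   SIGINT (Ctrl-C) plays the interrupt line; the handler stays short and only
   records the event, and the interrupted main loop resumes afterwards. */
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t got_signal = 0;

static void handler(int signo) {
    (void)signo;
    got_signal = 1;                  /* the "ISR": minimal work, no printf here */
}

int main(void) {
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = handler;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGINT, &sa, NULL);

    while (!got_signal) {
        puts("working... press Ctrl-C to raise the 'interrupt'");
        sleep(1);                    /* sleep is cut short when the signal arrives */
    }
    puts("interrupt noted by the handler; the main loop does the real processing");
    return 0;
}
```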
Runtime
- https://en.wikipedia.org/wiki/Green_threads - threads that are scheduled by a runtime library or virtual machine (VM) instead of natively by the underlying operating system. Green threads emulate multithreaded environments without relying on any native OS capabilities, and they are managed in user space instead of kernel space, enabling them to work in environments that do not have native thread support.
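A minimal illustration of the green-threads idea using POSIX ucontext (obsolescent, but still available in glibc): two "threads" scheduled entirely in user space by explicitly swapping contexts, with no kernel threads involved. A toy cooperative sketch, not how any particular runtime implements it.

```c
/* Cooperative "green threads" in user space with POSIX ucontext:
   the only scheduler here is our explicit swapcontext() calls. */
#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, t1_ctx, t2_ctx;

static void task1(void) {
    puts("task1: step 1");
    swapcontext(&t1_ctx, &t2_ctx);   /* yield to task2 */
    puts("task1: step 2");
}                                    /* on return, control goes to uc_link (t2_ctx) */

static void task2(void) {
    puts("task2: step 1");
    swapcontext(&t2_ctx, &t1_ctx);   /* yield back to task1 */
    puts("task2: step 2");
}                                    /* on return, control goes to uc_link (main_ctx) */

int main(void) {
    static char stack1[64 * 1024], stack2[64 * 1024];

    getcontext(&t1_ctx);
    t1_ctx.uc_stack.ss_sp = stack1;
    t1_ctx.uc_stack.ss_size = sizeof stack1;
    t1_ctx.uc_link = &t2_ctx;        /* when task1 finishes, resume task2 */
    makecontext(&t1_ctx, task1, 0);

    getcontext(&t2_ctx);
    t2_ctx.uc_stack.ss_sp = stack2;
    t2_ctx.uc_stack.ss_size = sizeof stack2;
    t2_ctx.uc_link = &main_ctx;      /* when task2 finishes, resume main */
    makecontext(&t2_ctx, task2, 0);

    swapcontext(&main_ctx, &t1_ctx); /* start "scheduling" */
    puts("main: both green threads finished");
    return 0;
}
```

The output interleaves task1 and task2 before returning to main, which is the whole point: the scheduler is nothing but swapcontext calls in user space.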
Hacking
Areas
See also Games#Software, AI, Audio, etc.
Command-line
Compression
- Data Compression Explained - Matt Mahoney
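As a toy companion to the link above (not taken from Mahoney's text): the simplest possible compressor, a run-length encoder, which exploits only the model assumption that repeated bytes are likely.

```c
/* Toy run-length encoder: "aaabcc" -> "3a1b2c". */
#include <stdio.h>

static void rle_encode(const char *in, char *out, size_t outsize) {
    size_t o = 0;
    for (size_t i = 0; in[i] != '\0'; ) {
        size_t run = 1;
        while (in[i + run] == in[i]) run++;      /* count the run of identical bytes */
        if (o + 1 >= outsize) break;             /* out of output space */
        o += (size_t)snprintf(out + o, outsize - o, "%zu%c", run, in[i]);
        i += run;
    }
}

int main(void) {
    char out[128];
    rle_encode("aaabcc", out, sizeof out);
    printf("%s\n", out);                         /* prints 3a1b2c */
    return 0;
}
```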
Network
- Beej's Guide to Network Programming - Using Internet Sockets
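A minimal TCP client in C in the spirit of Beej's guide: resolve a host, connect, send an HTTP/1.0 request, and print whatever comes back. example.com and port 80 are arbitrary choices, and error handling is kept to the bare minimum.

```c
/* Minimal TCP client: resolve, connect, send an HTTP/1.0 request, print the reply. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>
#include <sys/types.h>

int main(void) {
    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_UNSPEC;                 /* IPv4 or IPv6 */
    hints.ai_socktype = SOCK_STREAM;             /* TCP */
    if (getaddrinfo("example.com", "80", &hints, &res) != 0) return 1;

    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) < 0) return 1;

    const char *req = "GET / HTTP/1.0\r\nHost: example.com\r\n\r\n";
    send(fd, req, strlen(req), 0);

    char buf[1024];
    ssize_t n;
    while ((n = recv(fd, buf, sizeof buf, 0)) > 0)
        fwrite(buf, 1, (size_t)n, stdout);       /* dump the response to stdout */

    close(fd);
    freeaddrinfo(res);
    return 0;
}
```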
Graphics
- YouTube: How "oldschool" graphics worked. [100]
3D
Audio
See also Audio#Programming
Virtual / augmented reality
- https://en.wikipedia.org/wiki/X_Reality_(XR) - defined as a form of "mixed reality environment that comes from the fusion (union) of ... ubiquitous sensor/actuator networks and shared online virtual worlds". It encompasses a wide spectrum of hardware and software, including sensory interfaces, applications, and infrastructures, that enable content creation for virtual reality (VR), mixed reality (MR), augmented reality (AR), and cinematic reality (CR). With these tools, users generate new forms of reality by bringing digital objects into the physical world and bringing physical-world objects into the digital world. Its meaning has more recently been broadened to include technologies of extended ("cyborg" or wearable) intelligence, as with the IEEE Council on Extended Intelligence (CXI).
Distributed
- http://the-paper-trail.org/blog/distributed-systems-theory-for-the-distributed-systems-engineer/ [110]
Virus
Social
to sort
For kids
Scratch
- http://scratch.mit.edu/
- http://ase.tufts.edu/DevTech/ScratchJr/ScratchJrHome.asp
- http://programmingisterrible.com/post/76953953784/programming-with-building-blocks
Snap
to sort
- http://www.robomind.net/en/index.html
- http://dancali.io/blog/a-74-step-account-of-my-7-year-old-daughters-first-programming-experience [114]
- http://squeak.org/ - smalltalk
- https://www.makewonder.com/ - robot
Future
- Learnable Programming - Designing a programming system for understanding programs - Bret Victor
- "The Future of Programming" - Bret Victor
- http://www.omarrizwan.com/cruncher/ [118] - Bret Victor-inspired
- http://blog.csta.acm.org/2015/10/01/accesscs10k-quorum-programming-language-and-evidence-oriented-programming/ [122]
- Studying the Language and Structure in Non-Programmers’ Solutions to Programming Problems [123] / https://news.ycombinator.com/item?id=11513818
- https://github.com/pel-daniel/mind-bicycles - List of future of programming projects
Performance
- https://en.wikipedia.org/wiki/Cyclomatic_complexity - a software metric (measurement), used to indicate the complexity of a program. It is a quantitative measure of the number of linearly independent paths through a program's source code. It was developed by Thomas J. McCabe, Sr. in 1976.
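For a structured function, cyclomatic complexity can be computed by hand as the number of decision points plus one. A small illustrative C function (my own example, not McCabe's):

```c
/* Cyclomatic complexity by hand: number of decision points + 1.
   The function below has three decision points (the loop condition and two
   ifs), so its cyclomatic complexity is 4. */
#include <stdio.h>

static int classify(int n) {
    int score = 0;
    for (int i = 0; i < n; i++) {    /* decision 1: loop condition */
        if (i % 2 == 0)              /* decision 2 */
            score += 2;
        if (i % 3 == 0)              /* decision 3 */
            score -= 1;
    }
    return score;                    /* 3 decisions + 1 = complexity 4 */
}

int main(void) {
    printf("classify(10) = %d\n", classify(10));
    return 0;
}
```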
Quantum computing
- https://github.com/QCGPU/qcgpu-rust - Open Source, High Performance & Hardware Accelerated, Quantum Computer Simulation in Rust [126]
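What a simulator like the one above computes, boiled down to a single qubit in plain C (a sketch of the usual state-vector model, nothing specific to qcgpu-rust): apply a Hadamard gate to |0> and read off the measurement probabilities. Build with `cc qubit.c -lm`.

```c
/* One qubit, simulated the way a state-vector simulator does it: keep the
   complex amplitudes and multiply by the gate matrix. Here: H applied to |0>. */
#include <complex.h>
#include <math.h>
#include <stdio.h>

int main(void) {
    double complex state[2] = {1.0, 0.0};       /* start in |0>: amplitudes (1, 0) */

    /* Hadamard gate H = 1/sqrt(2) * [[1, 1], [1, -1]] */
    double h = 1.0 / sqrt(2.0);
    double complex s0 = h * (state[0] + state[1]);
    double complex s1 = h * (state[0] - state[1]);
    state[0] = s0;
    state[1] = s1;

    /* measurement probabilities are the squared magnitudes of the amplitudes */
    printf("P(0) = %.3f, P(1) = %.3f\n",
           cabs(state[0]) * cabs(state[0]),
           cabs(state[1]) * cabs(state[1]));    /* 0.500 / 0.500 */
    return 0;
}
```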
Cool
- YouTube: RiftSketch: Live coding in VR with the Oculus Rift, Firefox WebVR, JavaScript and Three.js https://news.ycombinator.com/item?id=8411638
- Quine Relay is a Ruby program that generates a Scala program that generates a Scheme program that generates ...(through 50 languages)... a REXX program that generates the original Ruby code again.
- http://mamememo.blogspot.ca/2010/09/qlobe.html
- http://www.digitalcraft.org/?artikel_id=292 - elegant fork bomb
- http://c2.com/cgi/wiki?PerlPoetry
- https://en.wikipedia.org/wiki/Black_Perl [128]
- http://www.perlmonks.org/?node=Perl%20Poetry
- http://www.perlmonks.org/?node_id=45213
- https://en.wikipedia.org/wiki/Just_another_Perl_hacker
- https://en.wikipedia.org/wiki/Obfuscated_code#Recreational_obfuscation
- https://en.wikipedia.org/wiki/International_Obfuscated_C_Code_Contest
- https://en.wikipedia.org/wiki/Obfuscated_Perl_Contest
- http://ruben.verborgh.org/blog/2013/02/21/programming-is-an-art/
- https://blooki.st/BlookElement/ShowTextPhoto?blookElementId=1962
Humour
- A Brief, Incomplete, and Mostly Wrong History of Programming Languages
- http://tjathurman.tumblr.com/post/64695616290/molesworth-1
- The Evolution of a Programmer
- http://www.reddit.com/r/programmerhumor
- http://www.reddit.com/r/programminghorror
- http://www.reddit.com/r/ProgrammerCringe
- http://www.reddit.com/r/shittyprogramming/
- What is your best programmer joke?
- http://stackoverflow.com/questions/184618/what-is-the-best-comment-in-source-code-you-have-ever-encountered
to sort
- The language of languages - explains grammars and common notations for grammars, such as Backus-Naur Form (BNF), Extended Backus-Naur Form (EBNF) and regular extensions to BNF.
- https://en.wikipedia.org/wiki/Big_O_notation
- Lecture 2: Asymptotic Notation, Recurrences, Substitution, Master Method
- Stackoverflow: Absolute Beginner's Guide to Bit Shifting? (a short C example follows at the end of this list)
- RRDtool is the open-source, industry-standard, high-performance data logging and graphing system for time-series data. RRDtool can easily be integrated into shell scripts and Perl, Python, Ruby, Lua, or Tcl applications.
- Ask HN: What's the best technical talk you've heard?
- Bret Victor - Inventing on Principle
- YouTube: Extracting Energy from the Turing Tarpit - Alan C. Kay during the ACM A.M. Turing Centenary Celebration, June, 2012.
- https://code.google.com/p/semicomplete/wiki/Grok
- YouTube: Stanford Seminar - Google's Steve Yegge on GROK
- http://techland.time.com/2013/04/02/an-interview-with-computing-pioneer-alan-kay/
- Ask HN: Alan Kay says programing is pop culture. Where can I find the classics?
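The C example promised next to the bit-shifting link above: left shift multiplies an unsigned value by a power of two, right shift divides, and shifts plus masks pack and unpack bit fields.

```c
/* Bit-shifting basics: x << k multiplies an unsigned value by 2^k, x >> k
   divides by 2^k, and shifts combined with masks pack/unpack bit fields. */
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t x = 5;
    printf("5 << 3  = %u\n", (unsigned)(x << 3));      /* 5 * 8  = 40 */
    printf("40 >> 1 = %u\n", (unsigned)(40u >> 1));    /* 40 / 2 = 20 */

    /* pack two bytes into a 16-bit value, then unpack them again */
    uint16_t packed = (uint16_t)((0xAB << 8) | 0xCD);
    printf("packed = 0x%04X, high = 0x%02X, low = 0x%02X\n",
           (unsigned)packed,
           (unsigned)((packed >> 8) & 0xFF),
           (unsigned)(packed & 0xFF));
    return 0;
}
```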