Maths
General
to sort. huge mess. also working back from Computing#Computation.
"Most mathematical arguments are actually not formal, in the way a program is a formal language.
In many instances, there are not only ambiguities but errors in the mathematical expression. They only become formalised to an equivalent degree in a system like Isabelle or Coq.
Writing proofs in Isabelle and Coq is essentially the same process as writing highly functional code, and is susceptible to the same arduous process. It's hard because it's a formal language running on a rigid machine.
Programming would be easy if you specified everything in a semi-formal pseudocode, designed to be interpreted by a human mind that can fill in the blanks. This is what most mathematics is."
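As a minimal illustration of what that formalisation gap looks like, here is a sketch in Lean 4 (the prover mentioned under Formal Abstracts below); Nat.add_comm is a lemma from Lean's core library:
  -- The informal claim "addition of natural numbers is commutative",
  -- stated and proved formally; the proof term must type-check exactly.
  example (a b : Nat) : a + b = b + a := Nat.add_comm a b
  -- Even a trivial numeric fact has to be phrased in the formal language:
  example : 2 + 2 = 4 := rfl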
- YouTube: The Map of Mathematics
- https://en.wikipedia.org/wiki/Areas_of_mathematics
- https://en.wikipedia.org/wiki/Mathematics_Subject_Classification
- http://mathwithbaddrawings.com/
- http://www.mathsisfun.com/index.htm
- http://betterexplained.com/articles/category/math/
to sort
Learning
- Better Explained - Math lessons that actually explain concepts. [3]
- Better Maths Intuition Cheatsheet - Math is no more about equations than poetry is about spelling. Find your Aha! Moment.
- PDF: A Mathematician’s Lament by Paul Lockhart
- 5-Year-Olds Can Learn Calculus [4]
- European Digital Mathematics Library (EuDML) - mathematics literature available online in the form of an enduring digital collection, developed and maintained by a network of institutions.
- MatematicasVisuales - visual expositions of mathematical concepts. [9]
- YouTube: Introduction to Higher Mathematics by Bill Shillito
Tools
- Mathway - provides students with the tools they need to understand and solve their math problems. With hundreds of millions of problems already solved, Mathway is the #1 problem solving resource available for students, parents, and teachers.
- Webmath is a math-help web site that generates answers to specific math questions and problems, as entered by a user, at any particular moment.
- Desmos Graphing Calculator – HTML5 with LaTeX editor [12]
- YouTube: I animated this in Desmos Graphing Calculator
- YouTube: How I animated this in Desmos Graphing Calculator
- https://github.com/skauertz/QCounter - a simple counter app based on Qt and QML.
Calculator
History
- https://thonyc.wordpress.com/2017/02/10/the-widespread-and-persistent-myth-that-it-is-easier-to-multiply-and-divide-with-hindu-arabic-numerals-than-with-roman-ones/ [15]
- A Prayer for Archimedes - A long-lost text by the ancient Greek mathematician shows that he had begun to discover the principles of calculus.
- YouTube: More Hiking in Modern Math World
Social
- Function Space is a social learning network for science with a complete ecosystem (articles, video lectures, problem solving, discussions, and networking) for participants from academia to corporate research. It intends to bridge the gap between the academic curriculum and the skill-based requirements of the workplace.
- Polymath1Wiki - massively collaborative online mathematical projects
Books
Other
Research
- zbMATH (Zentralblatt MATH) is the world’s most comprehensive and longest-running abstracting and reviewing service in pure and applied mathematics. It is produced by the Berlin office of FIZ Karlsruhe – Leibniz Institute for Information Infrastructure GmbH (FIZ Karlsruhe). Editors are the European Mathematical Society (EMS), FIZ Karlsruhe, and the Heidelberg Academy of Sciences and Humanities. zbMATH is distributed by Springer. The zbMATH database contains more than 3.5 million bibliographic entries with reviews or abstracts currently drawn from more than 3,000 journals and serials, and 170,000 books. The coverage starts in the 18th century and is complete from 1868 to the present by the integration of the “Jahrbuch über die Fortschritte der Mathematik” database. About 7,000 active expert reviewers from all over the world contribute reviews to zbMATH.
zbMATH provides easy access to bibliographic data, reviews and abstracts from all areas of pure mathematics as well as applications, in particular to the natural sciences, computer science, economics and engineering. It also covers history and philosophy of mathematics and university education. All entries are classified according to the Mathematics Subject Classification Scheme (MSC 2010) and are equipped with keywords in order to characterize their particular content. zbMATH covers all available published and peer-reviewed articles, books, conference proceedings as well as other publication formats pertaining to the scope given above. For the list of journals and book series covered see the Journals search.
- Formal Abstracts - a formal abstract service that will express the results of mathematical publications in a computer-readable form that captures the semantic content of publications. Specifically, the service will give a statement of the main theorem of each published mathematical paper in a language that is both human and machine readable, link each term in theorem statements to a precise definition of that term (again in human/machine readable form), and ground every statement and definition in the system in some foundational system for doing mathematics (the Lean theorem prover). [19]
People
- https://en.wikipedia.org/wiki/Per_Martin-L%C3%B6f - a Swedish logician, philosopher, and mathematical statistician. He is internationally renowned for his work on the foundations of probability, statistics, mathematical logic, and computer science. Since the late 1970s, Martin-Löf's publications have been mainly in logic. In philosophical logic, Martin-Löf has wrestled with the philosophy of logical consequence and judgment, partly inspired by the work of Brentano, Frege, and Husserl. In mathematical logic, Martin-Löf has been active in developing intuitionistic type theory as a constructive foundation of mathematics; Martin-Löf's work on type theory has influenced computer science.
Systems
- https://en.wikipedia.org/wiki/Mathematical_logic - a subfield of mathematics exploring the applications of formal logic to mathematics. It bears close connections to metamathematics, the foundations of mathematics, and theoretical computer science. The unifying themes in mathematical logic include the study of the expressive power of formal systems and the deductive power of formal proof systems.
- https://en.wikipedia.org/wiki/Formal_system - or logical calculus is any well-defined system of abstract thought based on the model of mathematics. A formal system need not be mathematical as such; for example, Spinoza's Ethics imitates the form of Euclid's Elements. Spinoza employed Euclidean elements such as "axioms" or "primitive truths", rules of inference, etc., so that a calculus can be built using these. For the nature of such primitive truths, one can consult Tarski's "Concept of truth for a formalized language". Some theorists use the term formalism as a rough synonym for formal system, but the term is also used to refer to a particular style of notation, for example, Paul Dirac's bra–ket notation.
- https://en.wikipedia.org/wiki/Formal_methods - a particular kind of mathematically rigorous techniques for the specification, development and verification of software and hardware systems. The use of formal methods for software and hardware design is motivated by the expectation that, as in other engineering disciplines, performing appropriate mathematical analysis can contribute to the reliability and robustness of a design. Formal methods are best described as the application of a fairly broad variety of theoretical computer science fundamentals, in particular logic calculi, formal languages, automata theory, discrete event dynamic system and program semantics, but also type systems and algebraic data types to problems in software and hardware specification and verification.
- https://en.wikipedia.org/wiki/Formal_language - well-defined vocabulary and grammar
- https://en.wikipedia.org/wiki/Formal_grammar - a set of production rules for strings in a formal language. The rules describe how to form strings from the language's alphabet that are valid according to the language's syntax. A grammar does not describe the meaning of the strings or what can be done with them in whatever context—only their form. Formal language theory, the discipline that studies formal grammars and languages, is a branch of applied mathematics. Its applications are found in theoretical computer science, theoretical linguistics, formal semantics, mathematical logic, and other areas.
- https://en.wikipedia.org/wiki/Expression_(mathematics) - a finite combination of symbols that is well-formed according to rules that depend on the context. Symbols can designate numbers (constants), variables, operations, functions, and other mathematical symbols, as well as punctuation, symbols of grouping, and other syntactic symbols.
- https://en.wikipedia.org/wiki/Equation - a formula of the form A = B, where A and B are expressions that may contain one or several variables called unknowns, and "=" denotes the equality binary relation. Although written in the form of a proposition, an equation is not a statement that is either true or false, but a problem consisting of finding the values, called solutions, that, when substituted for the unknowns, yield equal values of the expressions A and B.
- https://en.wikipedia.org/wiki/Satisfiability - satisfiability and validity are elementary concepts of semantics. A formula is satisfiable if it is possible to find an interpretation (model) that makes the formula true. A formula is valid if all interpretations make the formula true. The opposites of these concepts are unsatisfiability and invalidity; that is, a formula is unsatisfiable if none of the interpretations make the formula true, and invalid if some such interpretation makes the formula false. These four concepts are related to each other in a manner exactly analogous to Aristotle's square of opposition.
- https://en.wikipedia.org/wiki/Satisfiability_modulo_theories - a decision problem for logical formulas with respect to combinations of background theories expressed in classical first-order logic with equality. Examples of theories typically used in computer science are the theory of real numbers, the theory of integers, and the theories of various data structures such as lists, arrays, bit vectors and so on. SMT can be thought of as a form of the constraint satisfaction problem and thus a certain formalized approach to constraint programming.
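A small sketch of the satisfiable/valid distinction for propositional formulas in plain Haskell (no SMT solver involved; brute force over all interpretations, so only suitable for tiny formulas):
  import Data.List (nub)
  import Data.Maybe (fromMaybe)
  -- Propositional formulas over named variables.
  data Formula = Var String | Not Formula | And Formula Formula | Or Formula Formula
  -- Variables occurring in a formula, without duplicates.
  vars :: Formula -> [String]
  vars (Var x)   = [x]
  vars (Not p)   = vars p
  vars (And p q) = nub (vars p ++ vars q)
  vars (Or p q)  = nub (vars p ++ vars q)
  -- Evaluate a formula under an interpretation (assignment of truth values).
  eval :: [(String, Bool)] -> Formula -> Bool
  eval env (Var x)   = fromMaybe False (lookup x env)
  eval env (Not p)   = not (eval env p)
  eval env (And p q) = eval env p && eval env q
  eval env (Or p q)  = eval env p || eval env q
  -- Every interpretation over the formula's variables.
  interpretations :: Formula -> [[(String, Bool)]]
  interpretations f = mapM (\v -> [(v, False), (v, True)]) (vars f)
  -- Satisfiable: some interpretation makes it true; valid: all of them do.
  satisfiable, valid :: Formula -> Bool
  satisfiable f = any (\env -> eval env f) (interpretations f)
  valid f       = all (\env -> eval env f) (interpretations f)
  -- e.g. valid (Or (Var "p") (Not (Var "p"))) is True,
  --      satisfiable (And (Var "p") (Not (Var "p"))) is False.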
- https://en.wikipedia.org/wiki/Metalanguage - a language used to describe another language, often called the object language. Expressions in a metalanguage are often distinguished from those in the object language by the use of italics, quotation marks, or writing on a separate line. The structure of sentences and phrases in a metalanguage can be described by a metasyntax.
- https://en.wikipedia.org/wiki/Metasyntax - describes the allowable structure and composition of phrases and sentences of a metalanguage, which is used to describe either a natural language or a computer programming language. Some of the widely used formal metalanguages for computer languages are Backus–Naur form (BNF), extended Backus–Naur form (EBNF), Wirth syntax notation (WSN), and augmented Backus–Naur form (ABNF).
These metalanguages each have their own metasyntax, composed of terminal symbols, nonterminal symbols, and metasymbols. A terminal symbol, such as a word or a token, is a stand-alone structure in a language being defined. A nonterminal symbol represents a syntactic category, which defines one or more valid phrasal or sentence structures consisting of an n-element subset. Metasymbols provide syntactic information for denotational purposes in a given metasyntax. Terminals, nonterminals, and metasymbols do not apply across all metalanguages.
Typically, the metalanguage for token-level languages (formally called "regular languages") does not have nonterminals, because nesting is not an issue in these regular languages. English, as a metalanguage for describing certain languages, does not contain metasymbols, since all explanation can be done using English expressions. Only certain formal metalanguages used for describing recursive languages (formally called context-free languages) have terminals, nonterminals, and metasymbols in their metasyntax.
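A tiny worked example of the terminology above, as a made-up arithmetic grammar in ISO-style EBNF: the defined names are nonterminals, the quoted strings are terminals, and = , | { } ( ) ; are the metasymbols.
  (* expression grammar; { ... } means zero or more repetitions *)
  expr   = term , { ( "+" | "-" ) , term } ;
  term   = factor , { ( "*" | "/" ) , factor } ;
  factor = digit , { digit } | "(" , expr , ")" ;
  digit  = "0" | "1" | "2" | "3" | "4" | "5" | "6" | "7" | "8" | "9" ;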
- https://en.wikipedia.org/wiki/Logical_constant - a symbol of a language L that has the same semantic value under every interpretation of L. Two important types of logical constants are logical connectives and quantifiers. The equality predicate (usually written '=') is also treated as a logical constant in many systems of logic. One of the fundamental questions in the philosophy of logic is "What is a logical constant?"; that is, what special feature of certain constants makes them logical in nature?
- https://en.wikipedia.org/wiki/Non-logical_symbol - of a language of first-order logic consist of predicates and individual constants. These include symbols that, in an interpretation, may stand for individual constants, variables, functions, or predicates. A language of first-order logic is a formal language over the alphabet consisting of its non-logical symbols and its logical symbols. The latter include logical connectives, quantifiers, and variables that stand for statements. A non-logical symbol only has meaning or semantic content when one is assigned to it by means of an interpretation. Consequently, a sentence containing a non-logical symbol lacks meaning except under an interpretation, so a sentence is said to be true or false under an interpretation.
- https://en.wikipedia.org/wiki/Metalogic - the study of the metatheory of logic. Whereas logic studies how logical systems can be used to construct valid and sound arguments, metalogic studies the properties of logical systems. Logic concerns the truths that may be derived using a logical system; metalogic concerns the truths that may be derived about the languages and systems that are used to express truths. The basic objects of metalogical study are formal languages, formal systems, and their interpretations. The study of interpretation of formal systems is the branch of mathematical logic that is known as model theory, and the study of deductive systems is the branch that is known as proof theory.
- https://ncatlab.org/nlab/show/signature+(in+logic) - a collection of data which prescribes the ‘alphabet’ of non-logical symbols of a logical theory, saying which operations and predicates are taken to be primitive.
Objects
- http://en.wikipedia.org/wiki/Mathematical_objects - an abstract object arising in philosophy of mathematics and mathematics. Commonly encountered mathematical objects include numbers, permutations, partitions, matrices, sets, functions, and relations. Geometry as a branch of mathematics has such objects as hexagons, points, lines, triangles, circles, spheres, polyhedra, topological spaces and manifolds. Algebra, another branch, has groups, rings, fields, group-theoretic lattices, and order-theoretic lattices. Categories are simultaneously homes to mathematical objects and mathematical objects in their own right.
- https://en.wikipedia.org/wiki/Mathematical_structure - a structure on a set is an additional mathematical object that, in some manner, attaches (or relates) to that set to endow it with some additional meaning or significance. A partial list of possible structures are measures, algebraic structures (groups, fields, etc.), topologies, metric structures (geometries), orders, events, equivalence relations, differential structures, and categories.
Sometimes, a set is endowed with more than one structure simultaneously; this enables mathematicians to study it more richly. For example, an ordering imposes a rigid form, shape, or topology on the set. As another example, if a set has both a topology and is a group, and these two structures are related in a certain way, the set becomes a topological group.
Mappings between sets which preserve structures (so that structures in the source or domain are mapped to equivalent structures in the destination or codomain) are of special interest in many fields of mathematics. Examples are homomorphisms, which preserve algebraic structures; homeomorphisms, which preserve topological structures; and diffeomorphisms, which preserve differential structures.
- https://en.wikipedia.org/wiki/Structure_(mathematical_logic) - consists of a set along with a collection of finitary operations and relations that are defined on it. Universal algebra studies structures that generalize the algebraic structures such as groups, rings, fields and vector spaces. The term universal algebra is used for structures with no relation symbols.
From the model-theoretic point of view, structures are the objects used to define the semantics of first-order logic. For a given theory in model theory, a structure is called a model, if it satisfies the defining axioms of that theory, although it is sometimes disambiguated as a semantic model when one discusses the notion in the more general setting of mathematical models. Logicians sometimes refer to structures as interpretations. In database theory, structures with no functions are studied as models for relational databases, in the form of relational models.
Categories
- https://en.wikipedia.org/wiki/Category_(mathematics) - sometimes called an abstract category (to distinguish it from a concrete category); a collection of "objects" that are linked by "arrows". A category has two basic properties: the ability to compose the arrows associatively and the existence of an identity arrow for each object. A simple example is the category of sets, whose objects are sets and whose arrows are functions.
- https://en.wikipedia.org/wiki/Category_theory - a branch of mathematics that seeks to generalize all of mathematics in terms of categories, independent of what their objects and arrows represent. Virtually every branch of modern mathematics can be described in terms of categories, and doing so often reveals deep insights and similarities between seemingly different areas of mathematics. As such, category theory provides an alternative foundation for mathematics to set theory and other proposed axiomatic foundations. In general, the objects and arrows may be abstract entities of any kind, and the notion of category provides a fundamental and abstract way to describe mathematical entities and their relationships. In addition to formalizing mathematics, category theory is also used to formalize many other systems in computer science, such as the semantics of programming languages.
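The two axioms just mentioned (associative composition, an identity arrow per object) are exactly what Haskell's Control.Category class captures; reproduced here as a sketch, with ordinary functions as the motivating instance:
  import Prelude hiding (id, (.))
  -- The Category class as in base's Control.Category (shown for illustration).
  class Category cat where
    id  :: cat a a                        -- identity arrow on each object
    (.) :: cat b c -> cat a b -> cat a c  -- composition, required to be associative
  -- Objects are Haskell types, arrows are functions.
  instance Category (->) where
    id    = \x -> x
    g . f = \x -> g (f x)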
- nLab - a wiki-lab for collaborative work on Mathematics, Physics and Philosophy — especially from the n-point of view: insofar as these subjects are usefully treated with tools and notions of category theory or higher category theory.
- Introduction to Category Theory - Graham Hutton, School of Computer Science, University of Nottingham.
"Category theory is a mathematical approach to the study of algebraic structure that has become an important tool in theoretical computing science, particularly for semantics-based research. The aim of this course is to teach the basics of category theory, in a way that is accessible and relevant to computer scientists. The emphasis is on gaining a good understanding the basic definitions, examples, and techniques, so that students are equipped for further study on their own of more advanced topics if required."
- PDF: Elements of Basic Category Theory - Alfio Martini [21]
- PDF: Conceptual Mathematics: A first introduction to categories - by Lawvere and Schanuel
- YouTube: Category Theory by Tom LaGatta
- Category theory definition dependencies - diagram showing how category theory definitions build on each other.
- PDF: Category Theory for Computing Science - Michael Barr, Charles Wells
- Category Theory for Programmers: The Preface [25]
- YouTube: Category Theory - Category theory for programmers by Bartosz Milewski. Seattle, Summer 2016.
- https://github.com/hmemcpy/milewski-ctfp-pdf
- From design patterns to category theory - [26] - "Category theory generalises some intuitive relations, such as how numbers combine (e.g. via addition or multiplication). Instead of discussing numbers, however, category theory considers abstract 'objects'. This field of mathematics explores how objects relate and compose. Some category theory concepts can be translated to code. These universal abstractions can form the basis of a powerful and concise software design vocabulary."
"The design patterns movement was an early attempt to create such a vocabulary. I think using category theory offers the chance of a better vocabulary, but fortunately, all the work that went into design patterns isn't wasted. It seems to me that some design patterns are essentially ad-hoc, informally specified, specialised instances of basic category theory concepts. There's quite a bit of overlap. This should further strengthen the argument that category theory is valuable in programming, because some of the concepts are equivalent to design patterns that have already proven useful."
- PDF: The Mathematics of Sentence Structure - Joachim Lambek, 1958
- https://en.wikipedia.org/wiki/Initial_and_terminal_objects - an initial object of a category C is an object I in C such that for every object X in C, there exists precisely one morphism I → X. The dual notion is that of a terminal object (also called terminal element): T is terminal if for every object X in C there exists exactly one morphism X → T. Initial objects are also called coterminal or universal, and terminal objects are also called final. If an object is both initial and terminal, it is called a zero object or null object. A pointed category is one with a zero object.
- https://ncatlab.org/nlab/show/terminal+object - a terminal object in a category C is an object 1 of C satisfying the following universal property: for every object x of C, there exists a unique morphism !: x → 1. The terminal object of any category, if it exists, is unique up to unique isomorphism. If the terminal object is also initial, it is called a zero object.
- https://ncatlab.org/nlab/show/terminal+category - the terminal category or trivial category or final category is the terminal object in Cat. It is the unique (up to isomorphism) category with a single object and a single morphism, necessarily the identity morphism on that object. It is often denoted 1 or ∗.
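A programming-flavoured example (ignoring partiality, as usual when comparing Haskell types to sets): the unit type () is a terminal object, since every type has exactly one total function into it.
  -- The unique arrow from any object a to the terminal object ():
  -- a total function a -> () can only ignore its argument.
  bang :: a -> ()
  bang _ = ()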
- https://github.com/MostAwesomeDude/brismu - These are notes for a basic interpretation of Lojban using category theory.
- https://news.ycombinator.com/item?id=26334535
Sets
- https://ncatlab.org/nlab/show/Set - Set is the (or a) category with sets as objects and functions between sets as morphisms.
- An Elementary Theory of the Category of Sets - alternative to ZFC
- An Elementary Theory of the Category of Sets | The n-Category Café - William Lawvere’s Elementary Theory of the Category of Sets (ETCS) was one of the first attempts at using category theory as a foundation of mathematics and formulating set theory in category theoretic language. In this post I will outline the content of Lawvere’s 1965 paper and put it in a historical context; suggestions for further reading are at the end. Before I begin, I’d like to thank Emily Riehl, Steve Awodey and the other Kan Extension Seminar participants for their helpful feedback, suggestions, and reading responses to the paper; and, of course, William Lawvere for writing it!
- https://en.wikipedia.org/wiki/Element_(mathematics) - or member, of a set is any one of the distinct objects that make up that set.
- https://en.wikipedia.org/wiki/Cardinality - a measure of the "number of elements of the set"
- https://en.wikipedia.org/wiki/Inhabited_set - inhabited if there exists an element a ∈ A. In classical mathematics, this is the same as the set being nonempty; however, this equivalence is not valid in intuitionistic logic (or constructive logic).
- https://en.wikipedia.org/wiki/Singleton_(mathematics) - also known as a unit set, is a set with exactly one element. For example, the set {0} is a singleton. The term is also used for a 1-tuple (a sequence with one member).
- https://en.wikipedia.org/wiki/Infimum_and_supremum - the infimum (abbreviated inf; plural infima) of a subset S of a partially ordered set T is the greatest element in T that is less than or equal to all elements of S, if such an element exists. Consequently, the term greatest lower bound (abbreviated as GLB) is also commonly used. The supremum (abbreviated sup; plural suprema) of a subset S of a partially ordered set T is the least element in T that is greater than or equal to all elements of S, if such an element exists. Consequently, the supremum is also referred to as the least upper bound (or LUB).
- https://en.wikipedia.org/wiki/Class_(set_theory) - collection of sets (or other mathematical objects) with a property that all its members share
- https://en.wikipedia.org/wiki/Continuum_(set_theory) - means the real numbers, or the corresponding (infinite) cardinal number, c. Georg Cantor proved that the cardinality c is larger than the smallest infinity, namely, aleph 0. He also proved that c equals 2^aleph 0, the cardinality of the power set of the natural numbers.
- https://en.wikipedia.org/wiki/Cardinality_of_the_continuum - the size of the set of real numbers
- https://en.wikipedia.org/wiki/Continuum_hypothesis - sometimes stated by saying that no cardinality lies between that of the continuum and that of the natural numbers, aleph 0.
- https://en.wikipedia.org/wiki/Aleph_number - aleph numbers are a sequence of numbers used to represent the cardinality (or size) of infinite sets
- https://en.wikipedia.org/wiki/Countable_set - set with the same cardinality as some subset of the set of natural numbers
- https://en.wikipedia.org/wiki/Uncountable_set - cardinal number is larger than that of the set of all natural numbers
- https://en.wikipedia.org/wiki/Axiom_of_extensionality - A set is determined uniquely by its members.
- https://en.wikipedia.org/wiki/Tarski%E2%80%93Grothendieck_set_theory - an axiomatic set theory named after mathematicians Alfred Tarski and Alexander Grothendieck. It is a non-conservative extension of Zermelo–Fraenkel set theory (ZFC) and is distinguished from other axiomatic set theories by the inclusion of Tarski's axiom, which states that for each set there is a Grothendieck universe it belongs to. Tarski's axiom implies the existence of inaccessible cardinals, providing a richer ontology than that of conventional set theories such as ZFC. For example, adding this axiom supports category theory. The Mizar system and Metamath use Tarski–Grothendieck set theory for formal verification of proofs.
- https://en.wikipedia.org/wiki/Family_of_sets - a collection F of subsets of a given set S is called a family of subsets of S, or a family of sets over S. More generally, a collection of any sets whatsoever is called a family of sets or a set-family or a set-system. The term "collection" is used here because, in some contexts, a family of sets may be allowed to contain repeated copies of any given member, and in other contexts it may form a proper class rather than a set. A finite family of subsets of a finite set S is also called a hypergraph.
- https://en.wikipedia.org/wiki/Power_set - the set of all subsets of S, including the empty set and S itself. The power set of a set S is variously denoted as P(S), ℘(S) (using the "Weierstrass p"), ℙ(S), or 2^S, identifying the power set of S with the set of all functions from S to a given set of two elements. In axiomatic set theory (as developed, for example, in the ZFC axioms), the existence of the power set of any set is postulated by the axiom of power set. Any subset of P(S) is called a family of sets over S.
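The 2^S reading is easy to see computationally: each element is independently kept or dropped. A small Haskell sketch with lists standing in for finite sets:
  -- Power set of a finite set (a duplicate-free list): each element is
  -- either left out or put in, giving 2^n subsets for n elements.
  powerset :: [a] -> [[a]]
  powerset []       = [[]]
  powerset (x : xs) = let rest = powerset xs in rest ++ map (x :) rest
  -- powerset [1,2,3] has 2^3 = 8 elements, from [] up to [1,2,3].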
- https://en.wikipedia.org/wiki/Constructible_universe - or Gödel's constructible universe, denoted L, is a particular class of sets that can be described entirely in terms of simpler sets. It was introduced by Kurt Gödel in his 1938 paper "The Consistency of the Axiom of Choice and of the Generalized Continuum-Hypothesis". In this, he proved that the constructible universe is an inner model of ZF set theory, and also that the axiom of choice and the generalized continuum hypothesis are true in the constructible universe. This shows that both propositions are consistent with the basic axioms of set theory, if ZF itself is consistent. Since many other theorems only hold in systems in which one or both of the propositions is true, their consistency is an important result.
Functions and operations
- https://en.wikipedia.org/wiki/Function_(mathematics) - a relation between a set of inputs and a set of permissible outputs with the property that each input is related to exactly one output. An example is the function that relates each real number x to its square x². The output of a function f corresponding to an input x is denoted by f(x) (read "f of x"). In this example, if the input is −3, then the output is 9, and we may write f(−3) = 9. Likewise, if the input is 3, then the output is also 9, and we may write f(3) = 9. (The same output may be produced by more than one input, but each input gives only one output.) The input variable(s) are sometimes referred to as the argument(s) of the function.
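The squaring example in Haskell; each input yields exactly one output, though two inputs may share an output:
  f :: Integer -> Integer
  f x = x * x
  -- f (-3) == 9 and f 3 == 9: one output per input, but not one input per output.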
- Fungrim: The Mathematical Functions Grimoire - an open source library of formulas for mathematical functions. Fungrim currently consists of 263 symbols (named mathematical objects), 1198 entries (definitions, formulas, tables, plots), and 42 topics (listings of entries). All data in Fungrim is represented in symbolic, semantic form designed to be usable by computer algebra software. Fungrim is also fully viewable online, with a permanent ID and URL for each entry, symbol or topic.
- https://en.wikipedia.org/wiki/Elementary_function - a function of a single variable composed of particular simple functions. Elementary functions are typically defined as a sum, product, and/or composition of finitely many polynomials, rational functions, trigonometric and exponential functions, and their inverse functions (including arcsin, log, x^(1/n)). Elementary functions were introduced by Joseph Liouville in a series of papers from 1833 to 1841. An algebraic treatment of elementary functions was started by Joseph Fels Ritt in the 1930s.
- https://en.wikipedia.org/wiki/Functional_predicate - or function symbol, is a logical symbol that may be applied to an object term to produce another object term. Functional predicates are also sometimes called mappings, but that term has other meanings as well. Specifically, the symbol F in a formal language is a functional symbol if, given any symbol X representing an object in the language, F(X) is again a symbol representing an object in that language. In typed logic, F is a functional symbol with domain type T and codomain type U if, given any symbol X representing an object of type T, F(X) is a symbol representing an object of type U. One can similarly define function symbols of more than one variable, analogous to functions of more than one variable; a function symbol in zero variables is simply a constant symbol.
- https://en.wikipedia.org/wiki/Function_space - a set of functions of a given kind from a set X to a set Y. It is called a space because in many applications it is a topological space (including metric spaces), a vector space, or both. Namely, if Y is a field, functions have an inherent vector structure with the two operations of pointwise addition and multiplication by a scalar. Topological and metrical structures of function spaces are more diverse.
- https://en.wikipedia.org/wiki/Exponential_object - or map object is the categorical generalization of a function space in set theory. Categories with all finite products and exponential objects are called cartesian closed categories. Categories (such as subcategories of Top) without adjoined products may still have an exponential law.
- https://en.wikipedia.org/wiki/Domain_of_a_function - the set of "input" or argument values for which the function is defined
- https://en.wikipedia.org/wiki/Finitary - an operation that takes a finite number of input values to produce an output, like those of arithmetic. Operations on infinite numbers of input values are called infinitary.
- https://en.wikipedia.org/wiki/Operation_(mathematics) - a calculation from zero or more input values (called operands) to an output value. The number of operands is the arity of the operation. The most commonly studied operations are binary operations of arity 2, such as addition and multiplication, and unary operations of arity 1, such as additive inverse and multiplicative inverse. An operation of arity zero, or 0-ary operation, is a constant. The mixed product is an example of an operation of arity 3, or ternary operation. Generally, the arity is supposed to be finite, but infinitary operations are sometimes considered. In this context, the usual operations of finite arity are also called finitary operations.
- https://simple.wikipedia.org/wiki/Closure_(mathematics) - describes the case when the results of a mathematical operation are always defined. For example, in ordinary arithmetic, addition has closure. Whenever one adds two numbers, the answer is a number. The same is true of multiplication. Division does not have closure, because division by 0 is not defined. In the natural numbers, subtraction does not have closure, but in the integers subtraction does have closure. Subtraction of two numbers can produce a negative number, which is not a natural number, but it is an integer.
- https://en.wikipedia.org/wiki/Closure_(mathematics) - A set is closed under an operation if that operation returns a member of the set when evaluated on members of the set. Sometimes the requirement that the operation be valued in a set is explicitly stated, in which case it is known as the axiom of closure. For example, one may define a group as a set with a binary product operator obeying several axioms, including an axiom that the product of any two elements of the group is again an element. However the modern definition of an operation makes this axiom superfluous; an n-ary operation on S is just a subset of S^(n+1). By its very definition, an operator on a set cannot have values outside the set.
- https://en.wikipedia.org/wiki/Codomain - or target set of a function is the set Y into which all of the output of the function is constrained to fall.
- https://en.wikipedia.org/wiki/Image_(mathematics) - the subset of a function's codomain which is the output of the function on a subset of its domain
- https://en.wikipedia.org/wiki/Indexed_family - a family, or indexed family, is informally a collection of objects, each associated with an index from some index set. For example, a family of real numbers, indexed by the set of integers is a collection of real numbers, where a given function selects for each integer one real number (possibly the same).
- https://en.wikipedia.org/wiki/Diagram_(category_theory) - A diagram is a category theoretic collection of objects and morphisms, indexed by a fixed category; equivalently, a functor from a fixed index category to some category.
- https://en.wikipedia.org/wiki/Function_composition - the pointwise application of one function to the result of another to produce a third function. For instance, the functions f : X → Y and g : Y → Z can be composed to yield a function which maps x in X to g(f(x)) in Z. The composition of functions is a special case of the composition of relations, so all properties of the latter are true of composition of functions. The composition of functions has some additional properties.
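The g ∘ f description in Haskell; the standard (.) operator does exactly this:
  -- compose g f maps x to g (f x); this is Haskell's (.) written out.
  compose :: (b -> c) -> (a -> b) -> (a -> c)
  compose g f = \x -> g (f x)
  -- e.g. compose show (+ 1) applied to 41 gives "42": add one, then render as a string.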
- https://en.wikipedia.org/wiki/Idempotence - the property of certain operations in mathematics and computer science, that can be applied multiple times without changing the result beyond the initial application. The concept of idempotence arises in a number of places in abstract algebra (in particular, in the theory of projectors and closure operators) and functional programming (in which it is connected to the property of referential transparency).
- https://en.wikipedia.org/wiki/Map_(mathematics) - refers to either a function, often with some sort of special structure, or a morphism in category theory, which generalizes the idea of a function. There are also a few, less common uses in logic and graph theory. The term mapping is sometimes used when the sets involved are not sets of numbers.
- https://en.wikipedia.org/wiki/Morphism - refers to a structure-preserving map from one mathematical structure to another. The notion of morphism recurs in much of contemporary mathematics. In set theory, morphisms are functions; in linear algebra, linear transformations; in group theory, group homomorphisms; in topology, continuous functions, and so on. In category theory, morphism is a broadly similar idea, but somewhat more abstract: the mathematical objects involved need not be sets, and the relationship between them may be something more general than a map. The study of morphisms and of the structures (called "objects") over which they are defined is central to category theory. Much of the terminology of morphisms, as well as the intuition underlying them, comes from concrete categories, where the objects are simply sets with some additional structure, and morphisms are structure-preserving functions. In category theory, morphisms are sometimes also called arrows.
- https://en.wikipedia.org/wiki/Arity - of a function or operation is the number of arguments or operands that the function takes. The arity of a relation (or predicate) is the dimension of the domain in the corresponding Cartesian product. (A function of arity n thus has arity n+1 considered as a relation.) The term springs from words like unary, binary, ternary, etc. Unary functions or predicates may be also called "monadic"; similarly, binary functions may be called "dyadic".
In mathematics arity may also be named rank, but this word can have many other meanings in mathematics. In logic and philosophy, arity is also called adicity and degree. In linguistics, arity is usually named valency. In computer programming, there is often a syntactical distinction between operators and functions; syntactical operators usually have arity 0, 1, or 2 (the ternary operator ?: is also common). Functions vary widely in the number of arguments, though large numbers can become unwieldy. Some programming languages also offer support for variadic functions, i.e., functions syntactically accepting a variable number of arguments.
- https://en.wikipedia.org/wiki/Multivalued_function - left-total relation; that is, every input is associated with at least one output
- https://en.wikipedia.org/wiki/Homomorphism - structure-preserving map between two algebraic structures
- https://en.wikipedia.org/wiki/Group_homomorphism - create functions that preserve the algebraic structure. An equivalent definition of group homomorphism is: The function h : G → H is a group homomorphism if whenever a ∗ b = c we have h(a) ⋅ h(b) = h(c). In other words, the group H in some sense has a similar algebraic structure as G and the homomorphism h preserves that.
- https://en.wikipedia.org/wiki/Endomorphism - a morphism (or homomorphism) from a mathematical object to itself. For example, an endomorphism of a vector space V is a linear map, f: V → V, and an endomorphism of a group, G, is a group homomorphism f: G → G. In general, we can talk about endomorphisms in any category. In the category of sets, endomorphisms are functions from a set S to itself. In any category, the composition of any two endomorphisms of X is again an endomorphism of X. It follows that the set of all endomorphisms of X forms a monoid.
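The closing remark (endomorphisms of an object form a monoid under composition) is available directly in Haskell as Data.Monoid's Endo wrapper; a small sketch:
  import Data.Monoid (Endo (..))
  -- Endomorphisms of the type Int: functions Int -> Int. They form a monoid
  -- whose unit is id and whose operation (<>) is composition.
  double, increment :: Endo Int
  double    = Endo (* 2)
  increment = Endo (+ 1)
  -- appEndo (double <> increment) 10 == 22: increment first, then double.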
- https://en.wikipedia.org/wiki/Functor - a morphism between categories
- https://en.wikipedia.org/wiki/Cone_(category_theory) - a cone to a functor is an abstract notion used to define the limit of that functor. Cones make other appearances in category theory as well.
- https://ncatlab.org/nlab/show/contravariant+functor - like a functor, but it reverses the directions of the morphisms. (Between groupoids, contravariant functors are essentially the same as functors.)
- https://ncatlab.org/nlab/show/endofunctor - a functor from a category to itself is called an endofunctor.
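In Haskell terms: every instance of the Functor class is an endofunctor on the category of Haskell types (fmap is its action on arrows), and Data.Functor.Contravariant gives the arrow-reversing variant; a small sketch:
  import Data.Functor.Contravariant (Contravariant (..), Predicate (..))
  -- Covariant: fmap turns an arrow a -> b into an arrow f a -> f b.
  lengths :: [String] -> [Int]
  lengths = fmap length
  -- Contravariant: contramap turns a -> b into f b -> f a (direction reversed).
  longerThanFive :: Predicate String
  longerThanFive = contramap length (Predicate (> 5))
  -- getPredicate longerThanFive "categories" == True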
- https://en.wikipedia.org/wiki/Isomorphism - a homomorphism or morphism (i.e. a mathematical mapping) that admits an inverse. Two mathematical objects are isomorphic if an isomorphism exists between them. The interest of isomorphisms lies in the fact that two isomorphic objects cannot be distinguished by using only the properties used to define morphisms; thus isomorphic objects may be considered the same as long as one considers only these properties and their consequences. For most algebraic structures, including groups and rings, a homomorphism is an isomorphism if and only if it is bijective.
In topology, where the morphisms are continuous functions, isomorphisms are also called homeomorphisms or bicontinuous functions. In mathematical analysis, where the morphisms are differentiable functions, isomorphisms are also called diffeomorphisms.
A canonical isomorphism is a canonical map that is an isomorphism. Two objects are said to be canonically isomorphic if there is a canonical isomorphism between them. For example, the canonical map from a finite-dimensional vector space V to its second dual space is a canonical isomorphism; on the other hand, V is isomorphic to its dual space but not canonically in general.
Isomorphisms are formalized using category theory. A morphism f : X → Y in a category is an isomorphism if it admits a two-sided inverse, meaning that there is another morphism g : Y → X in that category such that g ∘ f = 1X and f ∘ g = 1Y, where 1X and 1Y are the identity morphisms of X and Y, respectively.
- https://en.wikipedia.org/wiki/Automorphism - an isomorphism from a mathematical object to itself, whose source and target coincide. It is, in some sense, a symmetry of the object, and a way of mapping the object to itself while preserving all of its structure. The set of all automorphisms of an object forms a group, called the automorphism group. It is, loosely speaking, the symmetry group of the object.
- https://en.wikipedia.org/wiki/Involution_(mathematics) - or an involutory function, is a function f that is its own inverse, f(f(x)) = x for all x in the domain of f. Equivalently, applying f twice produces the original value. The term anti-involution refers to involutions based on antihomomorphisms.
- https://en.wikipedia.org/wiki/Localization_of_a_category - consists of adding to a category inverse morphisms for some collection of morphisms, constraining them to become isomorphisms. This is formally similar to the process of localization of a ring; it in general makes objects isomorphic that were not so before. In homotopy theory, for example, there are many examples of mappings that are invertible up to homotopy, and so large classes of homotopy-equivalent spaces. Calculus of fractions is another name for working in a localized category.
- https://en.wikipedia.org/wiki/Isomorphism_of_categories - two categories C and D are isomorphic if there exist functors F : C → D and G : D → C which are mutually inverse to each other, i.e. FG = 1D (the identity functor on D) and GF = 1C. This means that both the objects and the morphisms of C and D stand in a one-to-one correspondence to each other. Two isomorphic categories share all properties that are defined solely in terms of category theory; for all practical purposes, they are identical and differ only in the notation of their objects and morphisms. Isomorphism of categories is a very strong condition and rarely satisfied in practice. Much more important is the notion of equivalence of categories.
  f :: a -> b
  g :: b -> a
  g . f = id  -- the identity on a
  f . g = id  -- the identity on b
- https://en.wikipedia.org/wiki/Equivalence_of_categories - a relation between two categories that establishes that these categories are "essentially the same". There are numerous examples of categorical equivalences from many areas of mathematics. Establishing an equivalence involves demonstrating strong similarities between the mathematical structures concerned. In some cases, these structures may appear to be unrelated at a superficial or intuitive level, making the notion fairly powerful: it creates the opportunity to "translate" theorems between different kinds of mathematical structures, knowing that the essential meaning of those theorems is preserved under the translation.
An equivalence of categories consists of a functor between the involved categories, which is required to have an "inverse" functor. However, in contrast to the situation common for isomorphisms in an algebraic setting, the composition of the functor and its "inverse" is not necessarily the identity mapping. Instead it is sufficient that each object be naturally isomorphic to its image under this composition. Thus one may describe the functors as being "inverse up to isomorphism". There is indeed a concept of isomorphism of categories where a strict form of inverse functor is required, but this is of much less practical use than the equivalence concept.
- https://en.wikipedia.org/wiki/Projection_(mathematics) - in category theory, a product comes with a canonical projection morphism to each factor. This projection takes many forms in different categories: the projection from the Cartesian product of sets, the product topology of topological spaces (which is always surjective and open), the direct product of groups, etc. Although these morphisms are often epimorphisms and even surjective, they do not have to be.
- https://ncatlab.org/nlab/show/identity+morphism - the identity morphism, or simply identity, of an object x in some category C is the morphism 1_x : x → x (also written id_x : x → x), which acts as a two-sided identity for composition.
- https://en.wikipedia.org/wiki/Identity_function - also called an identity relation or identity map or identity transformation, is a function that always returns the same value that was used as its argument. The identity morphism (identity mapping) is called the trivial automorphism in some contexts. Respectively, other (non-identity) automorphisms are called nontrivial automorphisms.
- https://en.wikipedia.org/wiki/Identity_element - or neutral element is a special type of element of a set with respect to a binary operation on that set, which leaves other elements unchanged when combined with them. This concept is used in algebraic structures such as groups. The term identity element is often shortened to identity (as will be done in this article) when there is no possibility of confusion.
- https://en.wikipedia.org/wiki/Equality_(mathematics) - relationship between two quantities or, more generally two mathematical expressions, asserting that the quantities have the same value or that the expressions represent the same mathematical object
- https://ncatlab.org/nlab/show/adjunction - two morphisms in a 2-category form an adjunction if they are dual to each other.
The notion of adjunction may usefully be thought of as a weakened version of the notion of equivalence in a 2-category: a morphism in an adjunction need not be invertible, but it has in some sense a left inverse from below and a right inverse from above. If the morphism in an adjunction does happen to be a genuine equivalence, then we speak of the adjunction being an adjoint equivalence.
Essentially everything that makes category theory nontrivial and interesting beyond groupoid theory can be derived from the concept of adjoint functors. In particular, universal constructions such as limits and colimits are examples of certain adjunctions. Adjunctions are already interesting (but simpler) in 2-posets, such as the 2-poset Pos of posets.
- https://en.wikipedia.org/wiki/Adjoint_functors - a relationship that two functors may have. Two functors that stand in this relationship are known as adjoint functors, one being the left adjoint and the other the right adjoint. Pairs of adjoint functors are ubiquitous in mathematics and often arise from constructions of "optimal solutions" to certain problems (i.e., constructions of objects having a certain universal property), such as the construction of a free group on a set in algebra, or the construction of the Stone–Čech compactification of a topological space in topology.
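A concrete instance familiar from programming (a sketch in Haskell): pairing with a fixed type b is left adjoint to the function-space functor b -> -, and the natural bijection between hom-sets is curry/uncurry.
  -- Hom((a, b), c) is in natural bijection with Hom(a, b -> c):
  curry'   :: ((a, b) -> c) -> (a -> (b -> c))
  curry' f = \a b -> f (a, b)
  uncurry' :: (a -> (b -> c)) -> ((a, b) -> c)
  uncurry' g = \(a, b) -> g a b
  -- curry' and uncurry' are mutually inverse, which is what makes
  -- ( - , b) ⊣ (b -> -) an adjunction (and the category cartesian closed).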
- https://en.wikipedia.org/wiki/Injective_function - or injection or one-to-one function is a function that preserves distinctness: it never maps distinct elements of its domain to the same element of its codomain. In other words, every element of the function's codomain is the image of at most one element of its domain. The term one-to-one function must not be confused with one-to-one correspondence (a.k.a. bijective function), which uniquely maps all elements in both domain and codomain to each other.
- https://en.wikipedia.org/wiki/Monomorphism - an injective homomorphism. In the more general setting of category theory, a monomorphism (also called a monic morphism or a mono) is a left-cancellative morphism
- https://en.wikipedia.org/wiki/Surjective_function - or onto, or a surjection, if for every element y in the codomain Y of f there is at least one element x in the domain X of f such that f(x) = y. It is not required that x is unique; the function f may map one or more elements of X to the same element of Y. The image covers the whole codomain.
- https://en.wikipedia.org/wiki/Epimorphism - categorical analogues of surjective functions (and in the category of sets the concept corresponds to the surjective functions), but it may not exactly coincide in all contexts. The dual of an epimorphism is a monomorphism (i.e. an epimorphism in a category C is a monomorphism in the dual category Cop).
- https://en.wikipedia.org/wiki/Bijection - total function between the elements of two sets, where every element of one set is paired with exactly one element of the other set, and every element of the other set is paired with exactly one element of the first set; equivalently, a function that is both injective and surjective.
- https://en.wikipedia.org/wiki/Partial_function - from X to Y (written as f: X ↛ Y) is a function f: X ′ → Y, for some subset X ′ of X. It generalizes the concept of a function f: X → Y by not forcing f to map every element of X to an element of Y (only some subset X ′ of X). If X ′ = X, then f is called a total function and is equivalent to a function. Partial functions are often used when the exact domain, X, is not known (e.g. many functions in computability theory).
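In programming, a partial function is often made total by enlarging the codomain; Haskell's Maybe is the usual device, a minimal sketch:
  -- Integer division is partial (undefined for divisor 0); wrapping the
  -- result in Maybe gives a total function with an explicit "no value" case.
  safeDiv :: Integer -> Integer -> Maybe Integer
  safeDiv _ 0 = Nothing
  safeDiv x y = Just (x `div` y)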
- https://ncatlab.org/nlab/show/logical+functor - A logical morphism or logical functor is a homomorphism between elementary toposes that preserves the structure of a topos as a context for logic: a functor which preserves all the elementary topos structure, including in particular power objects, but not necessarily any infinitary structure (such as present additionally in a sheaf topos). If instead a topos is regarded as a context for geometry or specifically geometric logic, then the notion of homomorphism preserving this is that of a geometric morphism.
- https://en.wikipedia.org/wiki/Antihomomorphism - a type of function defined on sets with multiplication that reverses the order of multiplication. An antiautomorphism is a bijective antihomomorphism, i.e. an antiisomorphism, from a set to itself. From bijectivity it follows that antiautomorphisms have inverses, and that the inverse of an antiautomorphism is also an antiautomorphism.
Unary operation
Binary operation
- https://en.wikipedia.org/wiki/Binary_operation - or dyadic operation is a calculation that combines two elements (called operands) to produce another element. More formally, a binary operation is an operation of arity two.
More specifically, a binary operation on a set is a binary operation whose two domains and the codomain are the same set. Examples include the familiar arithmetic operations of addition, subtraction, multiplication. Other examples are readily found in different areas of mathematics, such as vector addition, matrix multiplication and conjugation in groups. However, a binary operation may also involve several sets. For example, scalar multiplication of vector spaces takes a scalar and a vector to produce a vector, and scalar product takes two vectors to produce a scalar. Binary operations are the keystone of most algebraic structures that are studied in algebra, in particular in semigroups, monoids, groups, rings, fields, and vector spaces.
- https://en.wikipedia.org/wiki/Binary_function - also called bivariate function, or function of two variables, is a function that takes two inputs.
- https://en.wikipedia.org/wiki/Product_(category_theory) - the product of two (or more) objects in a category is a notion designed to capture the essence behind constructions in other areas of mathematics such as the cartesian product of sets, the direct product of groups, the direct product of rings and the product of topological spaces. Essentially, the product of a family of objects is the "most general" object which admits a morphism to each of the given objects.
- https://en.wikipedia.org/wiki/Cartesian_product - a mathematical operation that returns a set (or product set or simply product) from multiple sets. That is, for sets A and B, the Cartesian product A × B is the set of all ordered pairs (a, b) where a ∈ A and b ∈ B.
Although the Cartesian product is traditionally applied to sets, category theory provides a more general interpretation of the product of mathematical structures. This is distinct from, although related to, the notion of a Cartesian square in category theory, which is a generalization of the fiber product.
- https://en.wikipedia.org/wiki/Cartesian_closed_category - a category is Cartesian closed if, roughly speaking, any morphism defined on a product of two objects can be naturally identified with a morphism defined on one of the factors. These categories are particularly important in mathematical logic and the theory of programming, in that their internal language is the simply typed lambda calculus. They are generalized by closed monoidal categories, whose internal language, linear type systems, are suitable for both quantum and classical computation.
- https://en.wikipedia.org/wiki/Natural_transformation - provides a way of transforming one functor into another while respecting the internal structure (i.e. the composition of morphisms) of the categories involved. a "morphism of functors".
- https://ncatlab.org/nlab/show/natural+transformation - Just as a functor is a morphism between categories, a natural transformation is a 2-morphism between two functors. Natural transformations are the 2-morphisms in the 2-category Cat.
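The standard programming example (a sketch): a polymorphic function between two Haskell Functors is a natural transformation, and naturality says it commutes with fmap.
  -- A natural transformation from the list functor to the Maybe functor.
  safeHead :: [a] -> Maybe a
  safeHead []      = Nothing
  safeHead (x : _) = Just x
  -- Naturality square: for every f, fmap f . safeHead == safeHead . fmap f.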
Properties
Commutative
- https://simple.wikipedia.org/wiki/Commutative_property - the commutative property of addition and multiplication tells us that it does not matter which number we add first, or multiply first: we still get the same answer if we add or multiply them the other way around. For example, 8 + 2 = 10 and 2 + 8 = 10.
- https://en.wikipedia.org/wiki/Commutative_property - changing the order of the operands does not change the result.
It is a fundamental property of many binary operations, and many mathematical proofs depend on it. Most familiar as the name of the property that says "3 + 4 = 4 + 3" or "2 × 5 = 5 × 2", the property can also be used in more advanced settings. The name is needed because there are operations, such as division and subtraction, that do not have it (for example, "3 − 5 ≠ 5 − 3"); such operations are not commutative, or noncommutative operations. The idea that simple operations, such as multiplication and addition of numbers, are commutative was for many years implicitly assumed and the property was not named until the 19th century when mathematics started to become formalized. A corresponding property exists for binary relations; a binary relation is said to be symmetric if the relation applies regardless of the order of its operands; for example, equality is symmetric as two equal mathematical objects are equal regardless of the order of the two.
- https://en.wikipedia.org/wiki/Symmetric_function - a function whose value is the same no matter the order of its arguments.
- https://en.wikipedia.org/wiki/Commutative_diagram - a diagram such that all directed paths in the diagram with the same start and endpoints lead to the same result. It is said that commutative diagrams play the role in category theory that equations play in algebra
Associative
- https://simple.wikipedia.org/wiki/Associativity - a property of mathematical operations (like addition and multiplication). It means that if you have more than one of the same associative operator (like +) in a row, the order of operations does not matter.
- https://en.wikipedia.org/wiki/Associative_property - a property of some binary operations. In propositional logic, associativity is a valid rule of replacement for expressions in logical proofs. Within an expression containing two or more occurrences in a row of the same associative operator, the order in which the operations are performed does not matter as long as the sequence of the operands is not changed. That is, (after rewriting the expression with parentheses and in infix notation if necessary) rearranging the parentheses in such an expression will not change its value.
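The same kind of finite spot-check for associativity (again a sketch over a small sample, not a proof; the helper name is illustrative):

```python
from itertools import product

def is_associative(op, elements):
    """Check (x op y) op z == x op (y op z) for all triples from `elements`."""
    return all(op(op(x, y), z) == op(x, op(y, z))
               for x, y, z in product(elements, repeat=3))

sample = range(-3, 4)
assert is_associative(lambda x, y: x + y, sample)
assert is_associative(max, sample)
assert not is_associative(lambda x, y: x - y, sample)  # (1 - 1) - 1 != 1 - (1 - 1)
```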
Idempotence
- https://en.wikipedia.org/wiki/Idempotence - the property of certain operations in mathematics and computer science whereby they can be applied multiple times without changing the result beyond the initial application.
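A small illustration of idempotence for unary functions in Python, comparing one application with two; the helper name is illustrative.

```python
def is_idempotent(f, values):
    """f is idempotent on `values` if applying it twice equals applying it once."""
    return all(f(f(v)) == f(v) for v in values)

assert is_idempotent(abs, range(-5, 6))
assert is_idempotent(str.strip, ["  hi  ", "hi"])
assert not is_idempotent(lambda x: x + 1, range(5))
```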
Cancellation
- https://en.wikipedia.org/wiki/Cancellation_property - a generalization of the notion of invertible.
- https://en.wikipedia.org/wiki/Inverse_element - generalises the concepts of negation (sign reversal) (in relation to addition) and reciprocation (in relation to multiplication). The intuition is of an element that can 'undo' the effect of combination with another given element. While the precise definition of an inverse element varies depending on the algebraic structure involved, these definitions coincide in a group.
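As a concrete illustration of inverse elements, a sketch in Python for arithmetic modulo n: negation undoes addition and, when gcd(a, n) = 1, the modular reciprocal undoes multiplication. Note that pow(a, -1, n) requires Python 3.8+; the helper names are illustrative.

```python
from math import gcd

def additive_inverse(a, n):
    """Inverse of a under addition mod n: a + (-a) is congruent to 0 (mod n)."""
    return (-a) % n

def multiplicative_inverse(a, n):
    """Inverse of a under multiplication mod n, defined only when gcd(a, n) == 1."""
    if gcd(a, n) != 1:
        raise ValueError("no inverse: a and n are not coprime")
    return pow(a, -1, n)

assert (7 + additive_inverse(7, 12)) % 12 == 0
assert (7 * multiplicative_inverse(7, 12)) % 12 == 1
```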
Order theory
- https://en.wikipedia.org/wiki/Order_theory - a branch of mathematics which investigates the intuitive notion of order using binary relations. It provides a formal framework for describing statements such as "this is less than that" or "this precedes that".
- https://en.wikipedia.org/wiki/List_of_order_theory_topics - a branch of mathematics that studies various kinds of objects (often binary relations) that capture the intuitive notion of ordering, providing a framework for saying when one thing is "less than" or "precedes" another.
- https://en.wikipedia.org/wiki/Hasse_diagram - a type of mathematical diagram used to represent a finite partially ordered set, in the form of a drawing of its transitive reduction.
- https://en.wikipedia.org/wiki/Monotonic_function - or monotone function, is a function between ordered sets that preserves or reverses the given order. This concept first arose in calculus, and was later generalized to the more abstract setting of order theory.
- https://en.wikipedia.org/wiki/Order_isomorphism - a special kind of monotone function that constitutes a suitable notion of isomorphism for partially ordered sets (posets). Whenever two posets are order isomorphic, they can be considered to be "essentially the same" in the sense that one of the orders can be obtained from the other just by renaming of elements.
- https://en.wikipedia.org/wiki/Semilattice - a join-semilattice (or upper semilattice) is a partially ordered set that has a join (a least upper bound) for any nonempty finite subset. Dually, a meet-semilattice (or lower semilattice) is a partially ordered set which has a meet (or greatest lower bound) for any nonempty finite subset. Every join-semilattice is a meet-semilattice in the inverse order and vice versa.
Semilattices can also be defined algebraically: join and meet are associative, commutative, idempotent binary operations, and any such operation induces a partial order (and the respective inverse order) such that the result of the operation for any two elements is the least upper bound (or greatest lower bound) of the elements with respect to this partial order. A lattice is a partially ordered set that is both a meet- and join-semilattice with respect to the same partial order. Algebraically, a lattice is a set with two associative, commutative idempotent binary operations linked by corresponding absorption laws.
- https://en.wikipedia.org/wiki/Lattice_(order) - an abstract structure studied in the mathematical subdisciplines of order theory and abstract algebra. It consists of a partially ordered set in which every two elements have a unique supremum (also called a least upper bound or join) and a unique infimum (also called a greatest lower bound or meet). An example is given by the natural numbers, partially ordered by divisibility, for which the unique supremum is the least common multiple and the unique infimum is the greatest common divisor.
Lattices can also be characterized as algebraic structures satisfying certain axiomatic identities. Since the two definitions are equivalent, lattice theory draws on both order theory and universal algebra. Semilattices include lattices, which in turn include Heyting and Boolean algebras. These "lattice-like" structures all admit order-theoretic as well as algebraic descriptions.
- https://en.wikipedia.org/wiki/Absorption_law - an identity linking a pair of binary operations. Two binary operations, ¤ and ⁂, are said to be connected by the absorption law if: a ¤ (a ⁂ b) = a ⁂ (a ¤ b) = a. A set equipped with two commutative, associative and idempotent binary operations ∨ ("join") and ∧ ("meet") that are connected by the absorption law is called a lattice. Examples of lattices include Boolean algebras, the set of sets with union and intersection operators, Heyting algebras, and ordered sets with min and max operations. In classical logic, and in particular Boolean algebra, the operations OR and AND, which are also denoted by ∨ and ∧, satisfy the lattice axioms, including the absorption law. The same is true for intuitionistic logic. The absorption law does not hold in many other algebraic structures, such as commutative rings, e.g. the field of real numbers, relevance logics, linear logics, and substructural logics. In the last case, there is no one-to-one correspondence between the free variables of the defining pair of identities.
- https://en.wikipedia.org/wiki/Map_of_lattices - The concept of a lattice arises in order theory, a branch of mathematics. The Hasse diagram below depicts the inclusion relationships among some important subclasses of lattices.
- https://en.wikipedia.org/wiki/Distributive_lattice - a lattice in which the operations of join and meet distribute over each other. The prototypical examples of such structures are collections of sets for which the lattice operations can be given by set union and intersection. Indeed, these lattices of sets describe the scenery completely: every distributive lattice is—up to isomorphism—given as such a lattice of sets.
- https://en.wikipedia.org/wiki/Complete_lattice - a partially ordered set in which all subsets have both a supremum (join) and an infimum (meet). Complete lattices appear in many applications in mathematics and computer science. Being a special instance of lattices, they are studied both in order theory and universal algebra. Complete lattices must not be confused with complete partial orders (cpos), which constitute a strictly more general class of partially ordered sets. More specific complete lattices are complete Boolean algebras and complete Heyting algebras (locales).
- https://en.wikipedia.org/wiki/Domain_theory - a branch of mathematics that studies special kinds of partially ordered sets (posets) commonly called domains. Consequently, domain theory can be considered as a branch of order theory. The field has major applications in computer science, where it is used to specify denotational semantics, especially for functional programming languages. Domain theory formalizes the intuitive ideas of approximation and convergence in a very general way and has close relations to topology. An alternative important approach to denotational semantics in computer science is that of metric spaces.
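Tying together the lattice and absorption-law entries above, a small Python check on the divisors of 60 ordered by divisibility, where meet = gcd and join = lcm; brute force over a finite carrier, not a proof, and lcm is defined by hand to avoid depending on Python 3.9+.

```python
from math import gcd
from itertools import product

def lcm(a, b):
    return a * b // gcd(a, b)

# Natural numbers ordered by divisibility form a lattice:
# meet = gcd (greatest lower bound), join = lcm (least upper bound).
divisors_of_60 = [d for d in range(1, 61) if 60 % d == 0]

for a, b in product(divisors_of_60, repeat=2):
    # Absorption laws: a meet (a join b) == a, and a join (a meet b) == a.
    assert gcd(a, lcm(a, b)) == a
    assert lcm(a, gcd(a, b)) == a
```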
Graph theory
- https://en.wikipedia.org/wiki/Graph_theory - the study of graphs, which are mathematical structures used to model pairwise relations between objects. A graph in this context is made up of vertices (also called nodes or points) which are connected by edges (also called links or lines). A distinction is made between undirected graphs, where edges link two vertices symmetrically, and directed graphs, where edges link two vertices asymmetrically; see Graph (discrete mathematics) for more detailed definitions and for other variations in the types of graph that are commonly considered. Graphs are one of the prime objects of study in discrete mathematics.
- Chromatic Number - the smallest number of colors needed to color the vertices of a graph G so that no two adjacent vertices share the same color.
- graph-tool: Efficient network analysis with python - an efficient Python module for manipulation and statistical analysis of graphs (a.k.a. networks). Contrary to most other Python modules with similar functionality, the core data structures and algorithms are implemented in C++, making extensive use of template metaprogramming, based heavily on the Boost Graph Library. This confers it a level of performance that is comparable (both in memory usage and computation time) to that of a pure C/C++ library.
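A greedy vertex-colouring sketch in Python related to the chromatic number entry above; greedy colouring only gives an upper bound on the chromatic number, not the minimum. The adjacency-dict encoding and helper name are illustrative.

```python
def greedy_coloring(adjacency):
    """Assign each vertex the smallest color not used by its already-colored
    neighbors. Gives an upper bound on the chromatic number, not the minimum."""
    colors = {}
    for v in adjacency:
        used = {colors[u] for u in adjacency[v] if u in colors}
        colors[v] = next(c for c in range(len(adjacency)) if c not in used)
    return colors

# A 5-cycle needs 3 colors; greedy colouring in this vertex order finds 3.
cycle5 = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
coloring = greedy_coloring(cycle5)
assert all(coloring[u] != coloring[v] for u in cycle5 for v in cycle5[u])
```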
Trees
- https://en.wikipedia.org/wiki/Tree_(graph_theory) - an undirected graph in which any two vertices are connected by exactly one path, or equivalently a connected acyclic undirected graph. A forest is an undirected graph in which any two vertices are connected by at most one path, or equivalently an acyclic undirected graph, or equivalently a disjoint union of trees. A polytree (or directed tree or oriented tree or singly connected network) is a directed acyclic graph (DAG) whose underlying undirected graph is a tree. A polyforest (or directed forest or oriented forest) is a directed acyclic graph whose underlying undirected graph is a forest.
The various kinds of data structures referred to as trees in computer science have underlying graphs that are trees in graph theory, although such data structures are generally rooted trees. A rooted tree may be directed, called a directed rooted tree, either making all its edges point away from the root—in which case it is called an arborescence or out-tree—or making all its edges point towards the root—in which case it is called an anti-arborescence or in-tree. A rooted tree itself has been defined by some authors as a directed graph. A rooted forest is a disjoint union of rooted trees. A rooted forest may be directed, called a directed rooted forest, either making all its edges point away from the root in each rooted tree—in which case it is called a branching or out-forest—or making all its edges point towards the root in each rooted tree—in which case it is called an anti-branching or in-forest. The term "tree" was coined in 1857 by the British mathematician Arthur Cayley.
- The Graceful Tree Conjecture - This post is a collection of notes about an interesting unproven problem in combinatorics. The Kotzig-Ringel conjecture, better known as the Graceful Tree Conjecture (GTC), claims that: All trees are graceful. This is a simple statement but mathematicians don’t know if it’s true or not! At the time of this writing, nobody has come up with a counterexample and there is no proof that one exists. [31]
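A tiny checker for graceful labelings of trees, in the spirit of the conjecture above; the encoding (an edge list plus a label dictionary) is just an illustrative choice.

```python
def is_graceful(edges, labels):
    """A labeling of a tree with n vertices is graceful when the vertex labels
    are distinct values in {0..n-1} and the edge labels |f(u) - f(v)| are
    exactly {1..n-1}."""
    n = len(edges) + 1  # a tree on n vertices has n - 1 edges
    vertex_labels = set(labels.values())
    edge_labels = {abs(labels[u] - labels[v]) for u, v in edges}
    return vertex_labels == set(range(n)) and edge_labels == set(range(1, n))

# The path a-b-c-d is graceful with the labels 0, 3, 1, 2.
path_edges = [("a", "b"), ("b", "c"), ("c", "d")]
assert is_graceful(path_edges, {"a": 0, "b": 3, "c": 1, "d": 2})
```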
Algebra
- https://en.wikipedia.org/wiki/Algebra - one of the broad parts of mathematics, together with number theory, geometry and analysis. In its most general form, algebra is the study of mathematical symbols and the rules for manipulating these symbols; it is a unifying thread of almost all of mathematics. As such, it includes everything from elementary equation solving to the study of abstractions such as groups, rings, and fields. The more basic parts of algebra are called elementary algebra; the more abstract parts are called abstract algebra or modern algebra. Elementary algebra is generally considered to be essential for any study of mathematics, science, or engineering, as well as such applications as medicine and economics. Abstract algebra is a major area in advanced mathematics, studied primarily by professional mathematicians.
Elementary algebra differs from arithmetic in the use of abstractions, such as using letters to stand for numbers that are either unknown or allowed to take on many values. For example, in x + 2 = 5 the letter x is unknown, but the law of inverses can be used to discover its value: x = 3. In E = mc², the letters E and m are variables, and the letter c is a constant, the speed of light in a vacuum. Algebra gives methods for solving equations and expressing formulas that are much easier (for those who know how to use them) than the older method of writing everything out in words.
The word algebra is also used in certain specialized ways. A special kind of mathematical object in abstract algebra is called an "algebra", and the word is used, for example, in the phrases linear algebra and algebraic topology.
- https://github.com/mavam/abstract-algebra-cheatsheet - a visual summary of key structures in abstract algebra.
- https://en.wikipedia.org/wiki/Algebraic_logic - the reasoning obtained by manipulating equations with free variables. What is now usually called classical algebraic logic focuses on the identification and algebraic description of models appropriate for the study of various logics (in the form of classes of algebras that constitute the algebraic semantics for these deductive systems) and connected problems like representation and duality. Well known results like the representation theorem for Boolean algebras and Stone duality fall under the umbrella of classical algebraic logic (Czelakowski 2003). Works in the more recent abstract algebraic logic (AAL) focus on the process of algebraization itself, like classifying various forms of algebraizability using the Leibniz operator (Czelakowski 2003).
- https://en.wikipedia.org/wiki/Algebraic_semantics_(mathematical_logic) - a formal semantics based on algebras studied as part of algebraic logic. For example, the modal logic S4 is characterized by the class of topological boolean algebras—that is, boolean algebras with an interior operator. Other modal logics are characterized by various other algebras with operators. The class of boolean algebras characterizes classical propositional logic, and the class of Heyting algebras propositional intuitionistic logic. MV-algebras are the algebraic semantics of Łukasiewicz logic.
- My 100th post. So why not bash algebra? - thoughts on education by Grant Wiggins
"Here is a thought experiment: can you identify 4 big ideas in algebra, ideas that not only provide a powerful set of intellectual priorities for the course but that have rich connections to other fields? Doubt it. Because algebra courses, as designed, have no big ideas, as taught, just a list of topics. Look at any textbook: each chapter is just a new tool. There is no throughline to the course nor are their priority ideas that recur and go deeper, by design. In fact, no problems ever require work from many chapters simultaneously, just learning and being quizzed on each topic – a telling sign.
"Here’s a simple test, if you doubt this point, to give to algebra students in order to see if they have any understanding of what they have learned:
- What can algebra do that arithmetic cannot do or does very inefficiently?
- Is the order of operations a matter of core truth or convention? How does that compare with the Associative Property? What is and isn’t arbitrary in algebra?
- Why, mathematically speaking, are imaginary numbers and the inability to divide by zero wise premises?
- What is an equation that states the proper relationship between feet and yards? (60-80% of students will wrongly write: 3F = Y, showing that they have failed to understand the difference between English and algebra)"
- YouTube: The Quadratic Formula - playlist
- https://en.wikipedia.org/wiki/Fundamental_theorem_of_algebra
- YouTube: Fundamental Theorem of Algebra - Numberphile
- https://en.wikipedia.org/wiki/Polynomial - an expression consisting of variables (or indeterminates) and coefficients, that involves only the operations of addition, subtraction, multiplication, and non-negative integer exponents.
- https://en.wikipedia.org/wiki/Coefficient - a multiplicative factor in some term of a polynomial, a series or any expression; it is usually a number, but may be any expression. In the latter case, the variables appearing in the coefficients are often called parameters, and must be clearly distinguished from the other variables.
For example, in 7x² − 3xy + 1.5 + y, the first two terms respectively have the coefficients 7 and −3. The third term 1.5 is a constant. The final term does not have any explicitly written coefficient, but is considered to have coefficient 1, since multiplying by that factor would not change the term. (A short sketch of evaluating a univariate polynomial from its coefficients, and finding a root numerically, appears at the end of this section.)
- https://en.wikipedia.org/wiki/Parameter#Mathematical_functions - The variables appearing in the coefficients are often called parameters. Mathematical functions have one or more arguments that are designated in the definition by variables. A function definition can also contain parameters, but unlike variables, parameters are not listed among the arguments that the function takes. When parameters are present, the definition actually defines a whole family of functions, one for every valid set of values of the parameters.
- https://en.wikipedia.org/wiki/Algebraic_equation - or polynomial equation is an equation of the form P = Q where P and Q are polynomials with coefficients in some field, often the field of the rational numbers. For most authors, an algebraic equation is univariate, which means that it involves only one variable. On the other hand, a polynomial equation may involve several variables, in which case it is called multivariate and the term polynomial equation is usually preferred to algebraic equation.
- https://en.wikipedia.org/wiki/Nth_root - the nth root of a number x is a number r which, when raised to the power of n, equals x
- https://en.wikipedia.org/wiki/Square_root
- https://en.wikipedia.org/wiki/Cube_root
- https://en.wikipedia.org/wiki/Root-finding_algorithm
- https://en.wikipedia.org/wiki/Methods_of_computing_square_roots
- https://ncatlab.org/nlab/show/elementary+function+arithmetic - or EFA, is a first-order theory of natural numbers, one of the weakest fragments of arithmetic strong enough to do nontrivial mathematics. It is strictly weaker than Peano arithmetic.
- Wyzant Resources: Derivative Proofs
- https://github.com/ikorotkin/dae-cpp - A cross-platform, parallel C++ library for solving user-defined, stiff systems of DAEs (an initial value problem). The system may contain both differential and algebraic equations and can be written in matrix-vector form.
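The sketch promised above, relating to the polynomial, coefficient and root-finding entries in this section: evaluating a univariate polynomial from its coefficients with Horner's rule, and approximating an nth root with Newton's method (one simple root-finding algorithm among many). The function names and iteration count are illustrative choices.

```python
def horner(coeffs, x):
    """Evaluate a polynomial given coefficients [a_n, ..., a_1, a_0] at x."""
    value = 0
    for a in coeffs:
        value = value * x + a
    return value

def newton_nth_root(x, n, iterations=60):
    """Approximate the nth root of x > 0 by Newton's method on r**n - x = 0."""
    r = x if x >= 1 else 1.0
    for _ in range(iterations):
        r = r - (r**n - x) / (n * r**(n - 1))
    return r

# 7x^2 - 3x + 1.5 evaluated at x = 2, and the cube root of 27.
assert horner([7, -3, 1.5], 2) == 7 * 4 - 3 * 2 + 1.5
assert abs(newton_nth_root(27, 3) - 3) < 1e-9
```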
Abstract algebra
- YouTube: Basic abstract algebra, pt.1 - slides w/ music
- https://www.youtube.com/watch?v=IP7nW_hKB7I&list=PLi01XoE8jYoi3SgnnGorR_XOW3IcK-TP6
- https://www.youtube.com/watch?v=QudbrUcVPxk&list=PLi01XoE8jYoi3SgnnGorR_XOW3IcK-TP6
- https://en.wikipedia.org/wiki/Algebraic_structure - a set (called carrier set or underlying set) with one or more finitary operations defined on it that satisfies a list of axioms.
Examples of algebraic structures include groups, rings, fields, and lattices. More complex structures can be defined by introducing multiple operations, different underlying sets, or by altering the defining axioms. Examples of more complex algebraic structures include vector spaces, modules, and algebras.
The properties of specific algebraic structures are studied in abstract algebra. The general theory of algebraic structures has been formalized in universal algebra. The language of category theory is used to express and study relationships between different classes of algebraic and non-algebraic objects. This is because it is sometimes possible to find strong connections between some classes of objects, sometimes of different kinds. For example, Galois theory establishes a connection between certain fields and groups: two algebraic structures of different kinds.
- Natural deduction proof editor and checker - This is a demo of a proof checker for Fitch-style natural deduction systems found in many popular introductory logic textbooks. The specific system used here is the one found in forall x: Calgary Remix. (Although based on forall x: an Introduction to Formal Logic, the proof system in that original version differs from the one used here and in the Calgary Remix. However, the system also supports the rules used in the forall x: Cambridge remix.)
- https://github.com/stefanhaustein/emojic - The application shows 5 emoji characters and a logical expression at the top of the screen. Swipe matching emoji to the right and mismatches to the left. An example expression may be "food and not green". The red apple emoji matches this and is supposed to be swiped to the right. A green apple or red heart do not match and are supposed to be swiped to the left. The purpose of the app is to have some fun while learning simple logical formulas. There is a time limit, and expressions get more complex over time.
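A brute-force check of the group axioms on a finite carrier set, illustrating the algebraic-structure entry above; this only works for small finite examples, and the helper name is illustrative.

```python
from itertools import product

def is_group(elements, op, identity):
    """Check closure, associativity, identity and inverses by brute force."""
    closed = all(op(a, b) in elements for a, b in product(elements, repeat=2))
    associative = all(op(op(a, b), c) == op(a, op(b, c))
                      for a, b, c in product(elements, repeat=3))
    has_identity = all(op(identity, a) == a == op(a, identity) for a in elements)
    has_inverses = all(any(op(a, b) == identity == op(b, a) for b in elements)
                       for a in elements)
    return closed and associative and has_identity and has_inverses

Z5 = set(range(5))
assert is_group(Z5, lambda a, b: (a + b) % 5, 0)      # (Z5, +) is a group
assert not is_group(Z5, lambda a, b: (a * b) % 5, 1)  # 0 has no multiplicative inverse
```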
Universal algebra
- https://en.wikipedia.org/wiki/Universal_algebra - sometimes called general algebra is the field of mathematics that studies algebraic structures themselves, not examples ("models") of algebraic structures. For instance, rather than take particular groups as the object of study, in universal algebra one takes "the theory of groups" as an object of study.
- https://en.wikipedia.org/wiki/Tarski's_high_school_algebra_problem - was a question posed by Alfred Tarski. It asks whether there are identities involving addition, multiplication, and exponentiation over the positive integers that cannot be proved using eleven axioms about these operations that are taught in high-school-level mathematics. The question was solved in 1980 by Alex Wilkie, who showed that such unprovable identities do exist.
- https://en.wikipedia.org/wiki/Free_object - one of the basic concepts of abstract algebra. It is a part of universal algebra, in the sense that it relates to all types of algebraic structure (with finitary operations). It also has a formulation in terms of category theory, although this is in yet more abstract terms. Examples include free groups, tensor algebras, or free lattices. Informally, a free object over a set A can be thought of as being a "generic" algebraic structure over A: the only equations that hold between elements of the free object are those that follow from the defining axioms of the algebraic structure.
Special
- https://en.wikipedia.org/wiki/Generating_function - a way of encoding an infinite sequence of numbers (an) by treating them as the coefficients of a power series. This formal power series is the generating function. Unlike an ordinary series, this formal series is allowed to diverge, meaning that the generating function is not always a true function and the "variable" is actually an indeterminate. Generating functions were first introduced by Abraham de Moivre in 1730, in order to solve the general linear recurrence problem. One can generalize to formal series in more than one indeterminate, to encode information about arrays of numbers indexed by several natural numbers.
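A small sketch of extracting coefficients from a rational generating function by the convolution recurrence, using the classic example x / (1 − x − x²) for the Fibonacci numbers; the helper name and the use of float arithmetic are illustrative simplifications.

```python
def series_coefficients(numer, denom, n):
    """First n coefficients of the formal power series numer(x) / denom(x),
    assuming denom has a nonzero constant term. Lists hold coefficients of x**k."""
    coeffs = []
    for k in range(n):
        acc = numer[k] if k < len(numer) else 0
        acc -= sum(denom[j] * coeffs[k - j]
                   for j in range(1, min(k, len(denom) - 1) + 1))
        coeffs.append(acc / denom[0])
    return coeffs

# x / (1 - x - x^2) is the generating function of the Fibonacci numbers.
fib = series_coefficients([0, 1], [1, -1, -1], 10)
assert [int(c) for c in fib] == [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```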
Logic
meeeess
- https://en.wikipedia.org/wiki/Primitive_notion - an undefined concept. In particular, a primitive notion is not defined in terms of previously defined concepts, but is only motivated informally, usually by an appeal to intuition and everyday experience. In an axiomatic theory or other formal system, the role of a primitive notion is analogous to that of axiom. In axiomatic theories, the primitive notions are sometimes said to be "defined" by one or more axioms, but this can be misleading. Formal theories cannot dispense with primitive notions, under pain of infinite regress.
- https://en.wikipedia.org/wiki/Inference - steps in reasoning, moving from premises to conclusions. Charles Sanders Peirce divided inference into three kinds: deduction, induction, and abduction. Deduction is inference deriving logical conclusions from premises known or assumed to be true, with the laws of valid inference being studied in logic. Induction is inference from particular premises to a universal conclusion. Abduction is inference to the best explanation.
- https://en.wikipedia.org/wiki/Clause - a part of the sentence that contains a verb. A typical clause consists of a subject and a predicate, the latter typically a verb phrase, a verb with any objects and other modifiers. However, the subject is sometimes not said or explicit, often the case in null-subject languages if the subject is retrievable from context, but it sometimes also occurs in other languages such as English (as in imperative sentences and non-finite clauses).
A simple sentence usually consists of a single finite clause with a finite verb that is independent. More complex sentences may contain multiple clauses. Main clauses (matrix clauses, independent clauses) are those that can stand alone as a sentence. Subordinate clauses (embedded clauses, dependent clauses) are those that would be awkward or incomplete if they were alone.
Quantity (how much: universal or particular?) and quality (affirmative or negative) are the two traditional dimensions used to classify categorical propositions.
- https://en.wikipedia.org/wiki/Deductive_reasoning - also deductive logic or logical deduction or, informally, "top-down" logic is the process of reasoning from one or more statements (premises) to reach a logically certain conclusion. Deductive reasoning links premises with conclusions. If all premises are true, the terms are clear, and the rules of deductive logic are followed, then the conclusion reached is necessarily true.
Deductive reasoning (top-down logic) contrasts with inductive reasoning (bottom-up logic) in the following way: In deductive reasoning, a conclusion is reached reductively by applying general rules that hold over the entirety of a closed domain of discourse, narrowing the range under consideration until only the conclusion is left. In inductive reasoning, the conclusion is reached by generalizing or extrapolating from initial information. As a result, induction can be used even in an open domain, one where there is epistemic uncertainty. Note, however, that the inductive reasoning mentioned here is not the same as induction used in mathematical proofs – mathematical induction is actually a form of deductive reasoning.
- https://en.wikipedia.org/wiki/Inductive_reasoning - a form of argument that—in contrast to deductive reasoning—allows for the possibility that a conclusion can be false, even if all of the premises are true. Instead of being valid or invalid, inductive arguments are either strong or weak, according to how probable it is that the conclusion is true. We may call an inductive argument plausible, probable, reasonable, justified or strong, but never certain or necessary. Logic affords no bridge from the probable to the certain.
- https://en.wikipedia.org/wiki/Abductive_reasoning - also called abduction, abductive inference, or retroduction, is a form of logical inference which starts with an observation or set of observations and then seeks to find the simplest and most likely explanation for the observations. This process, unlike deductive reasoning, yields a plausible conclusion but does not positively verify it. Abductive conclusions are thus qualified as having a remnant of uncertainty or doubt, which is expressed in retreat terms such as "best available" or "most likely." One can understand abductive reasoning as inference to the best explanation, although not all usages of the terms abduction and inference to the best explanation are exactly equivalent. In the 1990s, as computing power grew, the fields of law, computer science, and artificial intelligence research spurred renewed interest in the subject of abduction. Diagnostic expert systems frequently employ abduction.
- https://en.wikipedia.org/wiki/Intuitionistic_logic - sometimes more generally called constructive logic, refers to systems of symbolic logic that differ from the systems used for classical logic by more closely mirroring the notion of constructive proof. In particular, systems of intuitionistic logic do not include the law of the excluded middle and double negation elimination, which are fundamental inference rules in classical logic.
Formalized intuitionistic logic was originally developed by Arend Heyting to provide a formal basis for Brouwer's programme of intuitionism. From a proof-theoretic perspective, Heyting’s calculus is a restriction of classical logic in which the law of excluded middle and double negation elimination have been removed. Excluded middle and double negation elimination can still be proved for some propositions on a case-by-case basis, but they do not hold universally as they do in classical logic.
Several systems of semantics for intuitionistic logic have been studied. One of these semantics mirrors classical Boolean-valued semantics but uses Heyting algebras in place of Boolean algebras. Another semantics uses Kripke models. These, however, are technical means for studying Heyting’s deductive system rather than formalizations of Brouwer’s original informal semantic intuitions. Semantical systems claiming to capture such intuitions, due to offering meaningful concepts of “constructive truth” (rather than merely validity or provability), are Gödel’s dialectica interpretation, Kleene’s realizability, Medvedev’s logic of finite problems, or Japaridze’s computability logic. Yet such semantics persistently induce logics properly stronger than Heyting’s logic. Some authors have argued that this might be an indication of inadequacy of Heyting’s calculus itself, deeming the latter incomplete as a constructive logic.
Term logic
- https://en.wikipedia.org/wiki/Term_logic - also known as traditional logic, syllogistic logic or Aristotelian logic, is a loose name for an approach to logic that began with Aristotle and that was dominant until the advent of modern predicate logic in the late nineteenth century. This entry is an introduction to the term logic needed to understand philosophy texts written before predicate logic came to be seen as the only formal logic of interest. Readers lacking a grasp of the basic terminology and ideas of term logic can have difficulty understanding such texts, because their authors typically assumed an acquaintance with term logic.
- https://en.wikipedia.org/wiki/Organon - Aristotle's six works on logic
- https://en.wikipedia.org/wiki/Statement_(logic) - refers to the wording
- https://en.wikipedia.org/wiki/Proposition - refers to the meaning, basic semantics
- https://en.wikipedia.org/wiki/Antecedent_(logic) - first half of a proposition
- https://en.wikipedia.org/wiki/Consequent - second half of a proposition
- https://en.wikipedia.org/wiki/Syllogism - 2 premises and a conclusion
All S are P. (A form) No S are P. (E form) Some S are P. (I form) Some S are not P. (O form)
- https://en.wikipedia.org/wiki/Port-Royal_Logic - or Logique de Port-Royal, is the common name of La logique, ou l'art de penser, an important textbook on logic first published anonymously in 1662 by Antoine Arnauld and Pierre Nicole, two prominent members of the Jansenist movement, centered on Port-Royal. Blaise Pascal likely contributed considerable portions of the text. Its linguistic companion piece is the Port Royal Grammar (1660) by Arnauld and Lancelot. Written in French, it became quite popular and was in use up to the twentieth century, introducing the reader to logic, and exhibiting strong Cartesian elements in its metaphysics and epistemology (Arnauld having been one of the main philosophers whose objections were published, with replies, in Descartes' Meditations on First Philosophy). The Port-Royal Logic is sometimes cited as a paradigmatic example of traditional term logic.
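A toy model of the syllogism entry above in Python, reading categorical terms as finite sets so that "All X are Y" becomes set inclusion; the example sets are illustrative.

```python
# Model categorical terms as finite sets and check the syllogism "Barbara":
# All M are P; All S are M; therefore All S are P.
def all_are(xs, ys):
    """ "All X are Y" holds when X is a subset of Y."""
    return xs <= ys

S = {"socrates"}                 # the subject term
M = {"socrates", "plato"}        # the middle term (men)
P = {"socrates", "plato", "cat"} # the predicate term (mortals)

if all_are(M, P) and all_are(S, M):
    assert all_are(S, P)
```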
Classical logic
- https://en.wikipedia.org/wiki/Classical_logic - an intensively studied and widely used class of formal logics. Each logical system in this class shares characteristic properties:
- Law of excluded middle and double negation elimination
- Law of noncontradiction, and the principle of explosion
- Monotonicity of entailment and idempotency of entailment
- Commutativity of conjunction
- De Morgan duality: every logical operator is dual to another
While not entailed by the preceding conditions, contemporary discussions of classical logic normally only include propositional and first-order logics. In other words, the overwhelming majority of time spent studying classical logic has been spent studying specifically propositional and first-order logic, as opposed to the other more obscure variations of classical logic. Classical logic is a bivalent logic, meaning it accepts only two possible truth values: true and false.
- https://en.wikipedia.org/wiki/Law_of_thought - fundamental axiomatic rules upon which rational discourse itself is often considered to be based. The formulation and clarification of such rules have a long tradition in the history of philosophy and logic. Generally they are taken as laws that guide and underlie everyone's thinking, thoughts, expressions, discussions, etc. However, such classical ideas are often questioned or rejected in more recent developments, such as intuitionistic logic, dialetheism and fuzzy logic. According to the 1999 Cambridge Dictionary of Philosophy, laws of thought are laws by which or in accordance with which valid thought proceeds, or that justify valid inference, or to which all valid deduction is reducible. Laws of thought are rules that apply without exception to any subject matter of thought, etc.; sometimes they are said to be the object of logic. The term, rarely used in exactly the same sense by different authors, has long been associated with three equally ambiguous expressions: the law of identity (ID), the law of contradiction (or non-contradiction; NC), and the law of excluded middle (EM). Sometimes, these three expressions are taken as propositions of formal ontology having the widest possible subject matter, propositions that apply to entities as such: (ID), everything is (i.e., is identical to) itself; (NC) no thing having a given quality also has the negative of that quality (e.g., no even number is non-even); (EM) every thing either has a given quality or has the negative of that quality (e.g., every number is either even or non-even). Equally common in older works is the use of these expressions for principles of metalogic about propositions: (ID) every proposition implies itself; (NC) no proposition is both true and false; (EM) every proposition is either true or false.
- https://en.wikipedia.org/wiki/Law_of_identity - states that each thing is identical with itself. It is the first of the three laws of thought, along with the law of noncontradiction, and the law of excluded middle. However, no system of logic is built on just these laws, and none of these laws provide inference rules, such as modus ponens or De Morgan's laws. In its formal representation, the law of identity is written "a = a" or "For all x: x = x", where a or x refer to a term rather than a proposition, and thus the law of identity is not used in propositional logic. It is that which is expressed by the equals sign "=", the notion of identity or equality. It can also be written less formally as A is A. One statement of such a principle is "Rose is a rose is a rose is a rose."
- https://en.wikipedia.org/wiki/Law_of_noncontradiction - LNC, also known as the law of contradiction, principle of non-contradiction (PNC), or the principle of contradiction, states that contradictory propositions cannot both be true in the same sense at the same time, e.g. the two propositions "A is B" and "A is not B" are mutually exclusive. Formally this is expressed as the tautology ¬(p ∧ ¬p). One reason to have this law is the principle of explosion, which states that anything follows from a contradiction. The law is employed in a reductio ad absurdum proof. To express the fact that the law is tenseless and to avoid equivocation, sometimes the law is amended to say "contradictory propositions cannot both be true 'at the same time and in the same sense'". The law of non-contradiction and the law of excluded middle create a dichotomy in "logical space", wherein the two parts are "mutually exclusive" and "jointly exhaustive". The law of non-contradiction is merely an expression of the mutually exclusive aspect of that dichotomy, and the law of excluded middle, an expression of its jointly exhaustive aspect.
- https://en.wikipedia.org/wiki/Law_of_excluded_middle - states that for any proposition, either that proposition is true or its negation is true. It is one of the so-called three laws of thought, along with the law of noncontradiction, and the law of identity. The law of excluded middle is logically equivalent to the law of noncontradiction by De Morgan's laws; however, no system of logic is built on just these laws, and none of these laws provide inference rules, such as modus ponens or De Morgan's laws.
- https://en.wikipedia.org/wiki/Semantics_of_logic - the study of the semantics, or interpretations, of formal and (idealizations of) natural languages usually trying to capture the pre-theoretic notion of entailment.
- https://en.wikipedia.org/wiki/Rule_of_inference - inference rule or transformation rule is a logical form consisting of a function which takes premises, analyzes their syntax, and returns a conclusion (or conclusions).
- https://en.wikipedia.org/wiki/Term_(logic) - denotes a mathematical object and a formula denotes a mathematical fact. In particular, terms appear as components of a formula.
A first-order term is recursively constructed from constant symbols, variables and function symbols. An expression formed by applying a predicate symbol to an appropriate number of terms is called an atomic formula, which evaluates to true or false in bivalent logics, given an interpretation. For example, (x+1)*(x+1) is a term built from the constant 1, the variable x, and the binary function symbols + and *; it is part of the atomic formula (x+1)*(x+1) ≥ 0, which evaluates to true for each real-numbered value of x.
Besides their use in logic, terms play important roles in universal algebra and rewriting systems.
- https://en.wikipedia.org/wiki/Ground_expression - in a formal system, a ground term is a term that does not contain any free variables. Similarly, a ground formula is a formula that does not contain any free variables. In first-order logic with identity, the sentence ∀x(x = x) is a ground formula. A ground expression is a ground term or ground formula.
- https://en.wikipedia.org/wiki/Square_of_opposition - a diagram representing the relations between the four basic categorical propositions. The origin of the square can be traced back to Aristotle making the distinction between two oppositions: contradiction and contrariety. But Aristotle did not draw any diagram. This was done several centuries later by Apuleius and Boethius.
- https://en.wikipedia.org/wiki/Validity - valid form
- https://en.wikipedia.org/wiki/Soundness - valid form and true proposition
- https://ncatlab.org/nlab/show/equivalence - two objects are considered equivalent if they may be replaced by one another in all contexts under consideration, in particular whenever one is not doing anything evil. At the propositional level, the general theory of equivalence is discussed at equivalence relation, but that leaves us with the question of the correct definition of equivalence in various situations. Furthermore, we can work on a higher level and ask what the equivalences are, not just whether things are equivalent.
- https://ncatlab.org/nlab/show/equivalence+relation - an equivalence relation on a set S is a binary relation ≡ on S that is reflexive, symmetric and transitive.
- https://en.wikipedia.org/wiki/Logical_equivalence - statements p and q are said to be logically equivalent, if they are provable from each other under a set of axioms, or have the same truth value in every model.
- YouTube: Odd Equations - Numberphile
- https://en.wikipedia.org/wiki/Formal_proof - or derivation, is a finite sequence of sentences, each of which is an axiom or follows from the preceding sentences in the sequence by a rule of inference. The last sentence in the sequence is a theorem of a formal system. The notion of theorem is not in general effective, therefore there may be no method by which we can always find a proof of a given sentence or determine that none exists. The concept of natural deduction is a generalization of the concept of proof.
- https://en.wikipedia.org/wiki/Well-formed_formula - a finite sequence of symbols from a given alphabet that is part of a formal language; the syntactically correct formulas of a formal system, from which proofs are built.
- https://en.wikipedia.org/wiki/Atomic_formula - also known simply as an atom is a formula with no deeper propositional structure, that is, a formula that contains no logical connectives or equivalently a formula that has no strict subformulas. Atoms are thus the simplest well-formed formulas of the logic. Compound formulas are formed by combining the atomic formulas using the logical connectives. The precise form of atomic formulas depends on the logic under consideration; for propositional logic, for example, the atomic formulas are the propositional variables. For predicate logic, the atoms are predicate symbols together with their arguments, each argument being a term. In model theory, atomic formula are merely strings of symbols with a given signature, which may or may not be satisfiable with respect to a given model.
- https://en.wikipedia.org/wiki/Interpretation_(logic) - an assignment of meaning to the symbols of a formal language. Many formal languages used in mathematics, logic, and theoretical computer science are defined in solely syntactic terms, and as such do not have any meaning until they are given some interpretation. The general study of interpretations of formal languages is called formal semantics. An interpretation often (but not always) provides a way to determine the truth values of sentences in a language. If a given interpretation assigns the value True to a sentence or theory, the interpretation is called a model of that sentence or theory.
- https://en.wikipedia.org/wiki/Logical_consequence - entailment
- https://en.wikipedia.org/wiki/Validity - if and only if its conclusion is logically entailed by its premises
- https://en.wikipedia.org/wiki/Modus_ponens - "P implies Q; P is asserted to be true, so therefore Q must be true."
"P implies Q; P is asserted to be true, so therefore Q must be true."
- https://en.wikipedia.org/wiki/Modus_tollens - modus tollendo tollens and also denying the consequent (Latin for "the way that denies by denying") is a valid argument form and a rule of inference. It is an application of the general truth that if a statement is true, then so is its contrapositive.
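A brute-force validity checker over the two classical truth values, used here to confirm excluded middle, non-contradiction, modus ponens and modus tollens from the entries above; the helper names are illustrative.

```python
from itertools import product

def valid(formula, n):
    """A propositional schema is classically valid when it is true under
    every assignment of truth values to its n variables."""
    return all(formula(*vals) for vals in product([True, False], repeat=n))

implies = lambda a, b: (not a) or b

# Law of excluded middle and law of non-contradiction.
assert valid(lambda p: p or not p, 1)
assert valid(lambda p: not (p and not p), 1)

# Modus ponens and modus tollens as valid argument forms (premises imply conclusion).
assert valid(lambda p, q: implies(implies(p, q) and p, q), 2)          # modus ponens
assert valid(lambda p, q: implies(implies(p, q) and not q, not p), 2)  # modus tollens
```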
Connectives
- https://en.wikipedia.org/wiki/Logical_connective - or logical operator, a symbol or word used to connect two or more sentences (of either a formal or a natural language) in a grammatically valid way, such that the value of the compound sentence produced depends only on that of the original sentences and on the meaning of the connective.
The most common logical connectives are binary connectives (also called dyadic connectives) which join two sentences which can be thought of as the function's operands. Also commonly, negation is considered to be a unary connective.
Logical connectives along with quantifiers are the two main types of logical constants used in formal systems such as propositional logic and predicate logic. Semantics of a logical connective is often, but not always, presented as a truth function. A logical connective is similar to but not equivalent to a conditional operator.
- https://en.wikipedia.org/wiki/Negation - not, also called logical complement, is an operation that takes a proposition p to another proposition "not p", written ¬p, which is interpreted intuitively as being true when p is false, and false when p is true. Negation is thus a unary (single-argument) logical connective. It may be applied as an operation on propositions, truth values, or semantic values more generally. In classical logic, negation is normally identified with the truth function that takes truth to falsity and vice versa. In intuitionistic logic, according to the Brouwer–Heyting–Kolmogorov interpretation, the negation of a proposition p is the proposition whose proofs are the refutations of p.
- https://en.wikipedia.org/wiki/Material_conditional - also known as material implication, material consequence, or simply implication, implies, or conditional, is a logical connective (or a binary operator) that is often symbolized by a forward arrow "→". The material conditional is used to form statements of the form p→q (termed a conditional statement) which is read as "if p then q". Unlike the English construction "if...then...", the material conditional statement p→q does not specify a causal relationship between p and q. It is merely to be understood to mean "if p is true, then q is also true" such that the statement p→q is false only when p is true and q is false.
The material conditional only states that q is true when (but not necessarily only when) p is true, and makes no claim that p causes q. The material conditional is also symbolized using:
- 𝑝 ⊃ 𝑞 (Although this symbol may be used for the superset symbol in set theory.);
- 𝑝 ⇒ 𝑞 (Although this symbol is often used for logical consequence (i.e., logical implication) rather than for material conditional.)
- C𝑝𝑞 (using Łukasiewicz notation)
- https://en.wikipedia.org/wiki/Converse_implication - the converse of implication. That is to say; that for any two propositions P and Q, if Q implies P, then P is the converse implication of Q.
- https://en.wikipedia.org/wiki/Logical_conjunction - and, the and of a set of operands is true if and only if all of its operands are true. The logical connective that represents this operator is typically written as ∧ or ⋅.
- https://en.wikipedia.org/wiki/Corresponding_conditional - the corresponding conditional of an argument is the material conditional whose antecedent is the conjunction of the argument's premises and whose consequent is the argument's conclusion; an argument is valid if and only if its corresponding conditional is a logical truth.
- https://en.wikipedia.org/wiki/Logical_disjunction - or is the truth-functional operator of (inclusive) disjunction, also known as alternation; the or of a set of operands is true if and only if one or more of its operands is true. The logical connective that represents this operator is typically written as ∨ or +. "A or B" is true if A is true, or if B is true, or if both A and B are true. An operand of a disjunction is called a disjunct.
- https://en.wikipedia.org/wiki/Exclusive_or - or exclusive disjunction is a logical operation that outputs true only when inputs differ (one is true, the other is false). It is symbolized by the prefix operator J and by the infix operators XOR (/ˌɛks ˈɔːr/), EOR, EXOR, ⊻, ⊕, ↮, and ≢. The negation of XOR is logical biconditional, which outputs true only when both inputs are the same. It gains the name "exclusive or" because the meaning of "or" is ambiguous when both operands are true; the exclusive or operator excludes that case. This is sometimes thought of as "one or the other but not both". This could be written as "A or B, but not, A and B".
- https://en.wikipedia.org/wiki/Logical_biconditional - sometimes known as the material biconditional, is the logical connective of two statements asserting "p if and only if q", where p is an antecedent and q is a consequent. This is often abbreviated "p iff q". The operator is denoted using a double-headed arrow (↔), a prefixed E (Epq), an equality sign (=), an equivalence sign (≡), or EQV. It is logically equivalent to (p → q) ∧ (q → p). It is also logically equivalent to "(p and q) or (not p and not q)" (or the XNOR (exclusive nor) boolean operator), meaning "both or neither". The only difference from the material conditional is the case when the hypothesis is false but the conclusion is true. In that case, in the conditional, the result is true, yet in the biconditional the result is false.
- https://en.wikipedia.org/wiki/If_and_only_if - equivalent (or materially equivalent) implication - ↔ ⇔ ≡
- https://en.wikipedia.org/wiki/Categorical_proposition - or categorical statement, is a proposition that asserts or denies that all or some of the members of one category (the subject term) are included in another (the predicate term). The study of arguments using categorical statements (i.e., syllogisms) forms an important branch of deductive reasoning that began with the Ancient Greeks.
- https://en.wikipedia.org/wiki/Tautology_(logic) - always true
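A compact truth table for the binary connectives listed above (conjunction, disjunction, material conditional, biconditional, exclusive or), generated in Python as a quick reference; purely illustrative.

```python
from itertools import product

IMPLIES = lambda p, q: (not p) or q

# One row per assignment of truth values to p and q.
rows = [("p", "q", "p∧q", "p∨q", "p→q", "p↔q", "p⊕q")]
for p, q in product([True, False], repeat=2):
    rows.append((p, q, p and q, p or q, IMPLIES(p, q), p == q, p != q))

for row in rows:
    print(*(str(v).ljust(6) for v in row))
```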
Hilbert system
- https://en.wikipedia.org/wiki/Hilbert_system - sometimes called Hilbert calculus, Hilbert-style deductive system or Hilbert–Ackermann system, is a type of system of formal deduction attributed to Gottlob Frege and David Hilbert. These deductive systems are most often studied for first-order logic, but are of interest for other logics as well. Hilbert systems can be characterised by the choice of a large number of schemes of logical axioms and a small set of rules of inference. The most commonly studied Hilbert systems have either just one rule of inference — modus ponens, for propositional logics — or two — with generalisation, to handle predicate logics, as well — and several infinite axiom schemes. Hilbert systems for propositional modal logics, sometimes called Hilbert-Lewis systems, are generally axiomatised with two additional rules, the necessitation rule and the uniform substitution rule.
A characteristic feature of the many variants of Hilbert systems is that the context is not changed in any of their rules of inference, while both natural deduction and sequent calculus contain some context-changing rules. Thus, if we are interested only in the derivability of tautologies, no hypothetical judgments, then we can formalize the Hilbert system in such a way that its rules of inference contain only judgments of a rather simple form. The same cannot be done with the other two deduction systems: as context is changed in some of their rules of inference, they cannot be formalized so that hypothetical judgments could be avoided — not even if we want to use them just for proving derivability of tautologies.
- https://en.wikipedia.org/wiki/Axiom - or postulate is a statement that is taken to be true, to serve as a premise or starting point for further reasoning and arguments. The word comes from the Greek axíōma (ἀξίωμα) 'that which is thought worthy or fit' or 'that which commends itself as evident.' The term has subtle differences in definition when used in the context of different fields of study. As defined in classic philosophy, an axiom is a statement that is so evident or well-established, that it is accepted without controversy or question. As used in modern logic, an axiom is simply a premise or starting point for reasoning.
As used in mathematics, the term axiom is used in two related but distinguishable senses: "logical axioms" and "non-logical axioms". Logical axioms are usually statements that are taken to be true within the system of logic they define (e.g., (A and B) implies A), often shown in symbolic form, while non-logical axioms (e.g., a + b = b + a) are actually substantive assertions about the elements of the domain of a specific mathematical theory (such as arithmetic). When used in the latter sense, "axiom", "postulate", and "assumption" may be used interchangeably. In general, a non-logical axiom is not a self-evident truth, but rather a formal logical expression used in deduction to build a mathematical theory.
- https://en.wikipedia.org/wiki/Axiomatic_system - any set of axioms from which some or all axioms can be used in conjunction to logically derive theorems. A theory consists of an axiomatic system and all its derived theorems. An axiomatic system that is completely described is a special kind of formal system. A formal theory typically means an axiomatic system, for example formulated within model theory. A formal proof is a complete rendition of a mathematical proof within a formal system.
- https://en.wikipedia.org/wiki/Axiom_schema - a formula in the metalanguage of an axiomatic system, in which one or more schematic variables appear. These variables, which are metalinguistic constructs, stand for any term or subformula of the system, which may or may not be required to satisfy certain conditions. Often, such conditions require that certain variables be free, or that certain variables not appear in the subformula or term.
- https://en.wikipedia.org/wiki/Axiom_schema_of_specification - In many popular versions of axiomatic set theory the axiom schema of specification, also known as the axiom schema of separation, subset axiom scheme or axiom schema of restricted comprehension is an axiom schema. Essentially, it says that any definable subclass of a set is a set. Some mathematicians call it the axiom schema of comprehension, although others use that term for unrestricted comprehension, discussed below. Because restricting comprehension solved Russell's paradox, several mathematicians including Zermelo, Fraenkel, and Gödel considered it the most important axiom of set theory.
- https://en.wikipedia.org/wiki/Axiom_schema_of_replacement - a schema of axioms in Zermelo–Fraenkel set theory (ZF) that asserts that the image of any set under any definable mapping is also a set. It is necessary for the construction of certain infinite sets in ZF. The axiom schema is motivated by the idea that whether a class is a set depends only on the cardinality of the class, not on the rank of its elements. Thus, if one class is "small enough" to be a set, and there is a surjection from that class to a second class, the axiom states that the second class is also a set. However, because ZFC only speaks of sets, not proper classes, the schema is stated only for definable surjections, which are identified with their defining formulas.
Natural deduction
- https://en.wikipedia.org/wiki/Natural_deduction - a kind of proof calculus in which logical reasoning is expressed by inference rules closely related to the "natural" way of reasoning. This contrasts with the axiomatic systems which instead use axioms as much as possible to express the logical laws of deductive reasoning.
- https://en.wikipedia.org/wiki/Rule_of_inference - syntactical transform rules which one can use to infer a conclusion from a premise to create an argument. A set of rules can be used to infer any valid conclusion if it is complete, while never inferring an invalid conclusion, if it is sound. A sound and complete set of rules need not include every rule in the following list, as many of the rules are redundant, and can be proven with the other rules.
Propositional / sentential calculus
- https://en.wikipedia.org/wiki/Boolean_function - a function in mathematics and logic whose arguments, as well as the function itself, assume values from a two-element set (usually {0,1}). As a result, it is sometimes referred to as a "switching function".
- https://en.wikipedia.org/wiki/Boolean-valued_model - a generalization of the ordinary Tarskian notion of structure from model theory. In a Boolean-valued model, the truth values of propositions are not limited to "true" and "false", but instead take values in some fixed complete Boolean algebra.
- https://en.wikipedia.org/wiki/Propositional_calculus - logic of sentences. propositional calculus or logic (also called sentential calculus or sentential logic) is a formal system in which formulas of a formal language may be interpreted to represent propositions. A system of inference rules and axioms allows certain formulas to be derived. These derived formulas are called theorems and may be interpreted to be true propositions.
Propositional calculus is about the simplest kind of logical calculus in current use. It can be extended in several ways. (Aristotelian "syllogistic" calculus, which is largely supplanted in modern logic, is in some ways simpler – but in other ways more complex – than propositional calculus.) The most immediate way to develop a more complex logical calculus is to introduce rules that are sensitive to more fine-grained details of the sentences being used.
- https://en.wikipedia.org/wiki/Propositional_variable - in propositional logic, atomic formulas are called propositional variables, which can be either true or false. Propositional variables are the basic building-blocks of propositional formulas, used in propositional logic and higher logics. In a sense, they are nullary (i.e. 0-arity) predicates.
- https://en.wikipedia.org/wiki/Principle_of_bivalence - states that every declarative sentence expressing a proposition (of a theory under inspection) has exactly one truth value, either true or false. A logic satisfying this principle is called a two-valued logic or bivalent logic. In formal logic, the principle of bivalence becomes a property that a semantics may or may not possess. It is not the same as the law of excluded middle, however, and a semantics may satisfy that law without being bivalent.
The principle of bivalence is studied in philosophical logic to address the question of which natural-language statements have a well-defined truth value. Sentences which predict events in the future, and sentences which seem open to interpretation, are particularly difficult for philosophers who hold that the principle of bivalence applies to all declarative natural-language statements. Many-valued logics formalize ideas that a realistic characterization of the notion of consequence requires the admissibility of premises which, owing to vagueness, temporal or quantum indeterminacy, or reference-failure, cannot be considered classically bivalent. Reference failures can also be addressed by free logics.
- https://en.wikipedia.org/wiki/Truth_value - sometimes called a logical value, is a value indicating the relation of a proposition to truth.
- https://en.wikipedia.org/wiki/Logical_truth - one of the most fundamental concepts in logic, and there are different theories on its nature. A logical truth is a statement which is true, and remains true under all reinterpretations of its components other than its logical constants. It is a type of analytic statement. All of philosophical logic can be thought of as providing accounts of the nature of logical truth, as well as logical consequence.
Logical truths (including tautologies) are truths which are considered to be necessarily true. This is to say that they are considered to be such that they could not be untrue and no situation could arise which would cause us to reject a logical truth. It must be true in every sense of intuition, practices, and bodies of beliefs. However, it is not universally agreed that there are any statements which are necessarily true.
- https://en.wikipedia.org/wiki/False_(logic) - or untrue is the state of possessing negative truth value or a nullary logical connective. In a truth-functional system of propositional logic it is one of two postulated truth values, along with its negation, truth. Usual notations of the false are 0 (especially in Boolean logic and computer science), O (in prefix notation, Opq), and the up tack symbol ⊥. Another approach is used for several formal theories (for example, intuitionistic propositional calculus) where the false is a propositional constant (i.e. a nullary connective) ⊥, the truth value of this constant being always false in the sense above.
- https://en.wikipedia.org/wiki/Propositional_function - a statement expressed in a way that would assume the value of true or false, except that within the statement is a variable (x) that is not defined or specified, which leaves the statement undetermined
- https://en.wikipedia.org/wiki/Propositional_formula - formal expression that denotes a proposition
- https://en.wikipedia.org/wiki/Zeroth-order_logic - first-order logic without quantifiers. finitely axiomatizable zeroth-order logic is isomorphic to a propositional logic.
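As a concrete companion to the entries above, here is a minimal sketch in Haskell of propositional (zeroth-order) formulas and their truth-functional evaluation. The names (`Formula`, `Interpretation`, `eval`) are made up for illustration, not taken from any of the linked sources.

```haskell
-- A propositional (zeroth-order) formula: variables combined by connectives.
data Formula
  = Var String             -- propositional variable (a nullary predicate)
  | Not Formula
  | And Formula Formula
  | Or  Formula Formula
  | Imp Formula Formula     -- material implication
  deriving Show

-- An interpretation assigns a truth value to every propositional variable.
type Interpretation = String -> Bool

-- Truth-functional evaluation: the truth value of a formula under an interpretation.
eval :: Interpretation -> Formula -> Bool
eval i (Var x)   = i x
eval i (Not p)   = not (eval i p)
eval i (And p q) = eval i p && eval i q
eval i (Or  p q) = eval i p || eval i q
eval i (Imp p q) = not (eval i p) || eval i q

-- "p or not p" is a tautology: it evaluates to True under every interpretation.
-- ghci> eval (const False) (Or (Var "p") (Not (Var "p")))
-- True
```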
Boolean algebra
- https://en.wikipedia.org/wiki/Boolean_algebra - the first algebra of logic: the branch of algebra in which the values of the variables are the truth values true and false, and the main operations are conjunction (∧), disjunction (∨) and negation (¬). It describes logical operations in the way elementary algebra describes numerical operations, and equally describes the basic operations on sets.
- https://en.wikipedia.org/wiki/Lindenbaum–Tarski_algebra - the Lindenbaum–Tarski algebra of a logical theory T consists of the equivalence classes of sentences of the theory (i.e., the quotient, under the equivalence relation ~ defined such that p ~ q exactly when p and q are provably equivalent in T). That is, two sentences are equivalent if the theory T proves that each implies the other. The Lindenbaum–Tarski algebra is thus the quotient algebra obtained by factoring the algebra of formulas by this congruence relation. The algebra is named for logicians Adolf Lindenbaum and Alfred Tarski. It was first introduced by Tarski in 1935 as a device to establish correspondence between classical propositional calculus and Boolean algebras. The Lindenbaum–Tarski algebra is considered the origin of the modern algebraic logic.
- https://en.wikipedia.org/wiki/Boolean_algebra_(structure) - or Boolean lattice is a complemented distributive lattice. This type of algebraic structure captures essential properties of both set operations and logic operations. A Boolean algebra can be seen as a generalization of a power set algebra or a field of sets, or its elements can be viewed as generalized truth values. It is also a special case of a De Morgan algebra and a Kleene algebra (with involution).
Every Boolean algebra gives rise to a Boolean ring, and vice versa, with ring multiplication corresponding to conjunction or meet ∧, and ring addition to exclusive disjunction or symmetric difference (not disjunction ∨). However, the theory of Boolean rings has an inherent asymmetry between the two operators, while the axioms and theorems of Boolean algebra express the symmetry of the theory described by the duality principle.
- https://en.wikipedia.org/wiki/Boolean_satisfiability_problem - (sometimes called the propositional satisfiability problem and abbreviated SATISFIABILITY, SAT or B-SAT) the problem of determining whether there exists an interpretation that satisfies a given Boolean formula. In other words, it asks whether the variables of a given Boolean formula can be consistently replaced by the values TRUE or FALSE in such a way that the formula evaluates to TRUE. If this is the case, the formula is called satisfiable. On the other hand, if no such assignment exists, the function expressed by the formula is FALSE for all possible variable assignments and the formula is unsatisfiable.
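To make the satisfiability question concrete, here is a brute-force sketch in Haskell (exponential in the number of variables, so only a toy; all names below are made up for illustration): a formula over n variables is modelled directly as a Boolean function of its assignment, and we search all 2ⁿ assignments for a satisfying one.

```haskell
import Data.List (find)

-- A "formula" in n variables is modelled here simply as a Boolean function
-- of its assignment; satisfiability is then a search over all 2^n assignments.
type Assignment = [Bool]

-- Every assignment of truth values to n variables.
assignments :: Int -> [Assignment]
assignments 0 = [[]]
assignments n = [ b : bs | b <- [False, True], bs <- assignments (n - 1) ]

-- Return a satisfying assignment if one exists, Nothing if the formula is unsatisfiable.
satisfiable :: Int -> (Assignment -> Bool) -> Maybe Assignment
satisfiable n formula = find formula (assignments n)

-- ghci> satisfiable 2 (\[p, q] -> p && not q)
-- Just [True,False]
-- ghci> satisfiable 1 (\[p] -> p && not p)
-- Nothing
```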
Heyting algebra
- https://en.wikipedia.org/wiki/Heyting_algebra - also known as Pseudo-Boolean algebra, is a bounded lattice (with join and meet operations written ∨ and ∧ and with least element 0 and greatest element 1) equipped with a binary operation a → b of implication such that (c ∧ a) ≤ b is equivalent to c ≤ (a → b). From a logical standpoint, A → B is by this definition the weakest proposition for which modus ponens, the inference rule A → B, A ⊢ B, is sound. Like Boolean algebras, Heyting algebras form a variety axiomatizable with finitely many equations. Heyting algebras were introduced by Arend Heyting (1930) to formalize intuitionistic logic.
- https://en.wikipedia.org/wiki/Complete_Heyting_algebra - a Heyting algebra that is complete as a lattice. Complete Heyting algebras are the objects of three different categories; the category CHey, the category Loc of locales, and its opposite, the category Frm of frames. Although these three categories contain the same objects, they differ in their morphisms, and thus get distinct names. Only the morphisms of CHey are homomorphisms of complete Heyting algebras. Locales and frames form the foundation of pointless topology, which, instead of building on point-set topology, recasts the ideas of general topology in categorical terms, as statements on frames and locales.
- https://en.wikipedia.org/wiki/Dialectica_interpretation - a proof interpretation of intuitionistic arithmetic (Heyting arithmetic) into a finite type extension of primitive recursive arithmetic, the so-called System T. It was developed by Kurt Gödel to provide a consistency proof of arithmetic. The name of the interpretation comes from the journal Dialectica, where Gödel's paper was published in a 1958 special issue dedicated to Paul Bernays on his 70th birthday.
Predicate / first-order logic
- https://en.wikipedia.org/wiki/Predicate_logic - logic of objects. A generic term for symbolic formal systems like first-order logic, second-order logic, many-sorted logic, or infinitary logic. Such a formal system is distinguished from other systems in that its formulae contain variables which can be quantified.
With the tools of first-order logic it is possible to formulate a number of theories, either with explicit axioms or by rules of inference, that can themselves be treated as logical calculi. Arithmetic is the best known of these; others include set theory and mereology.
- https://en.wikipedia.org/wiki/Sentence_(mathematical_logic) - of a predicate logic is a boolean-valued well-formed formula with no free variables. A sentence can be viewed as expressing a proposition, something that must be true or false. The restriction of having no free variables is needed to make sure that sentences can have concrete, fixed truth values: As the free variables of a (general) formula can range over several values, the truth value of such a formula may vary.
- https://en.wikipedia.org/wiki/Atomic_sentence - sentences without any logical connectives or quantifiers; a type of declarative sentence which is either true or false (may also be referred to as a proposition, statement or truthbearer) and which cannot be broken down into other simpler sentences. For example, "The dog ran" is an atomic sentence in natural language, whereas "The dog ran and the cat hid." is a molecular sentence in natural language.
- https://en.wikipedia.org/wiki/Predicate_(grammar) - one of the two main parts of a sentence (the other is the subject, which is what the predicate modifies). For example, "John [is yellow]". John is the subject, and is yellow is the predicate, which modifies the description of the subject. It is usually headed with a verb. In linguistic semantics, a predicate is an expression that can be true of something. Thus, the expressions "is yellow" or "is like broccoli" are correct descriptions of the subject, if the subject is yellow, or is like broccoli, respectively.
- https://en.wikipedia.org/wiki/Predicate_(mathematical_logic) - commonly understood to be a Boolean-valued function P: X→ {true, false}, called the predicate on X; a statement that may be true or false depending on the values of its variables. However, predicates have many different uses and interpretations in mathematics and logic, and their precise definition, meaning and use will vary from theory to theory. So, for example, when a theory defines the concept of a relation, then a predicate is simply the characteristic function or the indicator function of a relation. However, not all theories have relations, or are founded on set theory, and so one must be careful with the proper definition and semantic interpretation of a predicate.
- https://en.wikipedia.org/wiki/First-order_logic - a formal system used in mathematics, philosophy, linguistics, and computer science. It is also known as first-order predicate calculus, the lower predicate calculus, quantification theory, and predicate logic. First-order logic uses quantified variables over (non-logical) objects. This distinguishes it from propositional logic, which does not use quantifiers (except possibly over truth values or propositions).
- https://en.wikipedia.org/wiki/Hypostatic_abstraction - also known as hypostasis or subjectal abstraction, is a formal operation that transforms a predicate into a relation; for example "Honey is sweet" is transformed into "Honey has sweetness". The relation is created between the original subject and a new term that represents the property expressed by the original predicate. Hypostasis changes a propositional formula of the form X is Y to another one of the form X has the property of being Y or X has Y-ness. The logical functioning of the second object Y-ness consists solely in the truth-values of those propositions that have the corresponding abstract property Y as the predicate. The object of thought introduced in this way may be called a hypostatic object and in some senses an abstract object and a formal object.
- https://en.wikipedia.org/wiki/Continuous_predicate - a term coined by Charles Sanders Peirce (1839–1914) to describe a special type of relational predicate that results as the limit of a recursive process of hypostatic abstraction.
- https://en.wikipedia.org/wiki/Universal_quantification - a type of quantifier, a logical constant which is interpreted as "given any" or "for all". It expresses that a propositional function can be satisfied by every member of a domain of discourse. In other words, it is the predication of a property or relation to every member of the domain. It asserts that a predicate within the scope of a universal quantifier is true of every value of a predicate variable.
It is usually denoted by the turned A (∀) logical operator symbol, which, when used together with a predicate variable, is called a universal quantifier ("∀x", "∀(x)", or sometimes by "(x)" alone). Universal quantification is distinct from existential quantification ("there exists"), which asserts that the property or relation holds only for at least one member of the domain.
- https://en.wikipedia.org/wiki/Existential_quantification - a type of quantifier, a logical constant which is interpreted as "there exists", "there is at least one", or "for some". Some sources use the term existentialization to refer to existential quantification. It is usually denoted by the turned E (∃) logical operator symbol, which, when used together with a predicate variable, is called an existential quantifier ("∃x" or "∃(x)"). Existential quantification is distinct from universal quantification ("for all"), which asserts that the property or relation holds for all members of the domain. (A sketch of both quantifiers over a finite domain appears at the end of this section.)
- https://en.wikipedia.org/wiki/Variable_(mathematics) - a symbol used to represent an arbitrary element of a set. In addition to numbers, variables are commonly used to represent vectors, matrices and functions.
Making algebraic computations with variables as if they were explicit numbers allows one to solve a range of problems in a single computation. A typical example is the quadratic formula, which allows one to solve every quadratic equation by simply substituting the numeric values of the coefficients of the given equation for the variables that represent them.
In mathematical logic, a variable is either a symbol representing an unspecified term of the theory, or a basic object of the theory, which is manipulated without referring to its possible intuitive interpretation.
- https://en.wikipedia.org/wiki/Free_variables_and_bound_variables - a notation that specifies places in an expression where substitution may take place. Some older books use the terms real variable and apparent variable for free variable and bound variable. The idea is related to a placeholder (a symbol that will later be replaced by some literal string), or a wildcard character that stands for an unspecified symbol.
In computer programming, the term free variable refers to variables used in a function that are not local variables nor parameters of that function. The term non-local variable is often a synonym in this context. A bound variable is a variable that was previously free, but has been bound to a specific value or set of values.
- https://en.wikipedia.org/wiki/Gödel%27s_completeness_theorem - a fundamental theorem in mathematical logic that establishes a correspondence between semantic truth and syntactic provability in first-order logic. It makes a close link between model theory that deals with what is true in different models, and proof theory that studies what can be formally proven in particular formal systems.
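Over a finite domain of discourse the two quantifiers become directly executable: ∀ is a conjunction over the domain and ∃ a disjunction, which Haskell's `all` and `any` implement. The domain and predicates below are made up for illustration; this is a sketch, not a treatment of quantification in general.

```haskell
-- A finite domain of discourse (made up for illustration).
domain :: [Int]
domain = [1 .. 10]

-- ∀x. P(x): the predicate holds of every element of the domain.
forAll :: (a -> Bool) -> [a] -> Bool
forAll = all

-- ∃x. P(x): the predicate holds of at least one element of the domain.
exists :: (a -> Bool) -> [a] -> Bool
exists = any

-- ghci> forAll (\x -> x > 0) domain
-- True
-- ghci> exists (\x -> x * x == 49) domain
-- True
-- The nested sentence ∀x ∃y. y > x is false over this fixed domain (it fails at x = 10):
-- ghci> forAll (\x -> exists (\y -> y > x) domain) domain
-- False
```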
Predicate functor logic
Cylindric algebra
- https://en.wikipedia.org/wiki/Cylindric_algebra - the algebraic counterpart of first-order logic with equality, introduced by Alfred Tarski; cylindric algebras play the role for predicate logic that Boolean algebras play for propositional logic.
to sort
- Open Logic Project - a collection of teaching materials on mathematical logic aimed at a non-mathematical audience, intended for use in advanced logic courses as taught in many philosophy departments. It is open-source: you can download the LaTeX code. It is open: you’re free to change it whichever way you like, and share your changes. It is collaborative: a team of people is working on it, using the GitHub platform, and we welcome contributions and feedback. And it is written with configurability in mind. [38]
- Logical Symbols - Although traditional categorical logic can be used to represent and assess many of our most common patterns of reasoning, modern logicians have developed much more comprehensive and powerful systems for expressing rational thought. These newer logical languages are often called "symbolic logic," since they employ special symbols to represent clearly even highly complex logical relationships. We'll begin our study of symbolic logic with the propositional calculus, a formal system that effectively captures the ways in which individual statements can be combined with each other in interesting ways. The first step, of course, is to define precisely all of the special, new symbols we will use. [39]
- https://en.wikipedia.org/wiki/Method_of_analytic_tableaux - (also called the truth tree method) a decision procedure for sentential and related logics, and a proof procedure for formulae of first-order logic. An analytic tableau is a tree structure computed for a logical formula, having at each node a subformula of the original formula to be proved or refuted. Computation constructs this tree and uses it to prove or refute the whole formula. The tableau method can also determine the satisfiability of finite sets of formulas of various logics. It is the most popular proof procedure for modal logics.
- https://en.wikipedia.org/wiki/Domain_of_discourse - also called the universe of discourse (or simply universe), is the set of entities over which certain variables of interest in some formal treatment may range.
- https://en.wikipedia.org/wiki/Mathematical_proof - an inferential argument for a mathematical statement. In the argument, other previously established statements, such as theorems, can be used. In principle, a proof can be traced back to self-evident or assumed statements, known as axioms, along with accepted rules of inference. Axioms may be treated as conditions that must be met before the statement applies. Proofs are examples of exhaustive deductive reasoning or inductive reasoning and are distinguished from empirical arguments or non-exhaustive inductive reasoning (or "reasonable expectation"). A proof must demonstrate that a statement is always true (occasionally by listing all possible cases and showing that it holds in each), rather than enumerate many confirmatory cases. An unproved proposition that is believed to be true is known as a conjecture.
Proofs employ logic but usually include some amount of natural language which usually admits some ambiguity. In fact, the vast majority of proofs in written mathematics can be considered as applications of rigorous informal logic. Purely formal proofs, written in symbolic language instead of natural language, are considered in proof theory. The distinction between formal and informal proofs has led to much examination of current and historical mathematical practice, quasi-empiricism in mathematics, and so-called folk mathematics (in both senses of that term). The philosophy of mathematics is concerned with the role of language and logic in proofs, and mathematics as a language.
- https://en.wikipedia.org/wiki/Direct_proof - a way of showing the truth or falsehood of a given statement by a straightforward combination of established facts, usually axioms, existing lemmas and theorems, without making any further assumptions. In order to directly prove a conditional statement of the form "If p, then q", it suffices to consider the situations in which the statement p is true. Logical deduction is employed to reason from assumptions to conclusion. The type of logic employed is almost invariably first-order logic, employing the quantifiers for all and there exists. Common proof rules used are modus ponens and universal instantiation.
- https://en.wikipedia.org/wiki/Proof_by_contradiction - a form of proof, and more specifically a form of indirect proof, that establishes the truth or validity of a proposition. It starts by assuming that the opposite proposition is true, and then shows that such an assumption leads to a contradiction. Proof by contradiction is also known as indirect proof, apagogical argument, proof by assuming the opposite, and reductio ad impossibilem. It is a particular kind of the more general form of argument known as reductio ad absurdum. G. H. Hardy described proof by contradiction as "one of a mathematician's finest weapons", saying "It is a far finer gambit than any chess gambit: a chess player may offer the sacrifice of a pawn or even a piece, but a mathematician offers the game."
- https://en.wikipedia.org/wiki/Conjecture - a conclusion or proposition based on incomplete information, for which no proof has been found. Conjectures such as the Riemann hypothesis (still a conjecture) or Fermat's Last Theorem (which was a conjecture until proven in 1995) have shaped much of mathematical history as new areas of mathematics are developed in order to prove them.
- https://en.wikipedia.org/wiki/Theorem - a statement that has been proved on the basis of previously established statements, such as other theorems, and generally accepted statements, such as axioms. A theorem is a logical consequence of the axioms. The proof of a mathematical theorem is a logical argument for the theorem statement given in accord with the rules of a deductive system. The proof of a theorem is often interpreted as justification of the truth of the theorem statement. In light of the requirement that theorems be proved, the concept of a theorem is fundamentally deductive, in contrast to the notion of a scientific law, which is experimental.
- https://en.wikipedia.org/wiki/Lemma_(mathematics) - (plural lemmata or lemmas) from the Greek λῆμμα (lemma, “anything which is received, such as a gift, profit, or a bribe”) or helping theorem is a proven proposition which is used as a stepping stone to a larger result rather than as a statement of interest by itself. There is no formal distinction between a lemma and a theorem, only one of intention – see Theorem terminology. However, a lemma can be considered a minor result whose sole purpose is to help prove a theorem - a step in the direction of proof, so to speak.
- https://en.wikipedia.org/wiki/List_of_lemmas
- https://en.wikipedia.org/wiki/Fundamental_theorem - the theorem considered central to a field of mathematics. The naming of such a theorem is not necessarily based on how often it is used or the difficulty of its proofs. For example, the fundamental theorem of calculus gives the relationship between differential calculus and integral calculus, which are two distinct branches that were not obviously related. The names are mostly traditional, so that for example the fundamental theorem of arithmetic is basic to what would now be called number theory.
Bacon;
Leibniz;
- https://en.wikipedia.org/wiki/Gottlob_Frege - sense and reference, connotation and denotation
- Gottlob Frege by Graham Priest
- https://en.wikipedia.org/wiki/Principia_Mathematica - Russell and Whitehead
- https://en.wikipedia.org/wiki/Gödel%27s_incompleteness_theorems - two theorems of mathematical logic that establish inherent limitations of all but the most trivial axiomatic systems capable of doing arithmetic.
- YouTube: Introduction to Logic series
- Chico Jones - Logic
- YouTube: One Hundred Days of Logic - playlist
"Inference rules are rules that describe when one can validly infer a conclusion from a set of premises. Replacement rules are rules of what one can replace and still have a wff with the same truth-value; in other words, they are a list of logical equivalencies. The main difference between the two set of rules is that the first kind allows valid inference from the first set of propositions (premises) to the second set (the conclusion), but with replacement rules we can validly infer both ways."
- Category Theory for the Working Hacker - Philip Wadler on why category theory is relevant for developers, discussing the principle of Propositions as Types connecting propositions and proofs in logic, and types and programs in computing. [43]
Other logics
- https://en.wikipedia.org/wiki/Second-order_logic - see Frege's original predicate logic; extends first-order logic by also quantifying over relations and sets of individuals, not just the individuals themselves.
- https://en.wikipedia.org/wiki/Higher-order_logic - from first-order logic by additional quantifiers and a stronger semantics
- https://en.wikipedia.org/wiki/Many-sorted_logic
- Second-order and Higher-order Logic - Second-order logic is an extension of first-order logic where, in addition to quantifiers such as “for every object (in the universe of discourse),” one has quantifiers such as “for every property of objects (in the universe of discourse).” This augmentation of the language increases its expressive strength, without adding new non-logical symbols, such as new predicate symbols. For classical extensional logic (as in this entry), properties can be identified with sets, so that second-order logic provides us with the quantifier “for every set of objects.”
Categorical logic
- https://en.wikipedia.org/wiki/Categorical_logic - represents both syntax and semantics by a category, and an interpretation by a functor
- YouTube: Logic in category theory
- internal logic in nLab - One of the most important observations of category theory is that large parts of mathematics can be internalized in any category with sufficient structure. The most basic examples of this involve algebraic structures; for instance, a group can be defined in any category with finite products, and an internal category can be defined in any ambient category with pullbacks. For such algebraic (or even essentially algebraic) structures, which are defined by operations with equational axioms imposed, it suffices for the ambient category to have (usually finite) limits. However, if we assume that the ambient category has additional structure, then much more of mathematics can be internalized, potentially including fields, local rings, finite sets, topological spaces, even the field of real numbers. The idea is to exploit the fact that all mathematics can be written in the language of logic, and seek a way to internalize logic in a category with sufficient structure.
Proof calculus
- https://en.wikipedia.org/wiki/Proof_calculus - or proof system is built to prove statements, a family of formal systems that use a common style of formal inference for its inference rules. The specific inference rules of a member of such a family characterize the theory of a logic. Usually a given proof calculus encompasses more than a single particular formal system, since many proof calculi are under-determining and can be used for radically different logics. Loosely speaking, a proof calculus is a template or design pattern, characterized by a certain style of formal inference, that may be specialized to produce specific formal systems, namely by specifying the actual inference rules for such a system. There is no consensus among logicians on how best to define the term.
- Book of Proof - This book is an introduction to the standard methods of proving mathematical theorems. It has been approved by the American Institute of Mathematics' Open Textbook Initiative. Also see the Mathematical Association of America Math DL review (of the 1st edition) and the Amazon reviews. An adoptions list is here. [44]
Sequent calculus
- https://en.wikipedia.org/wiki/Sequent_calculus - formal logical argumentation where every line of a proof is a conditional tautology (called a sequent by Gerhard Gentzen)
- Interactive Tutorial of the Sequent Calculus - This interactive tutorial will teach you how to use the sequent calculus, a simple set of rules that you can use to show the truth of statements in first-order logic. It is geared towards anyone with some background in writing software for computers, with knowledge of basic boolean logic.
Linear logic
- PDF: A taste of linear logic - Philip Wadler, Department of Computing Science, University of Glasgow. Abstract: This tutorial paper provides an introduction to intuitionistic logic and linear logic, and shows how they correspond to type systems for functional languages via the notion of ‘Propositions as Types’. The presentation of linear logic is simplified by basing it on the Logic of Unity. An application to the array update problem is briefly discussed. [45]
- boxbase.org: Modeling tower of hanoi in linear logic - [boxbase.org/entries/2019/jun/17/hanoi/]
Sequential logic
- https://en.wikipedia.org/wiki/Sequential_logic - a type of logic circuit whose output depends not only on the present value of its input signals but on the sequence of past inputs, the input history. This is in contrast to combinational logic, whose output is a function of only the present state of input. That is, sequential logic has state (memory) while combinational logic does not. Or, in other words, sequential logic is combinational logic with memory.
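A small Haskell sketch of the distinction (function names are made up): a combinational parity gate looks only at the inputs it is given right now, while a sequential running-parity circuit threads a one-bit state through the whole input history.

```haskell
-- Combinational: the output is a function of the inputs presented right now (no memory).
parityOf :: [Bool] -> Bool
parityOf = foldr (/=) False                 -- XOR of the current input lines

-- Sequential: the output at each step also depends on the input history,
-- carried along as a one-bit state (the running parity so far).
runningParity :: [Bool] -> [Bool]
runningParity = tail . scanl (/=) False     -- scanl threads the state through time

-- ghci> parityOf [True, False, True, True]
-- True
-- ghci> runningParity [True, False, True, True]
-- [True,True,False,True]
```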
Combinatorial logic
- https://en.wikipedia.org/wiki/Combinatorial_logic - combinational logic: digital logic whose output is a pure function of the present inputs only, with no memory; contrast with the sequential logic above, whose output also depends on the input history. (For combinatory logic, the variable-free notation for mathematical logic, see the Combinatory logic section below.)
- https://github.com/theoden8/matcalc - Performance-oriented implementations of mathematical sequences, callable from header-only/static/dynamic library or as executables.
Model theory
- http://en.wikipedia.org/wiki/Model_theory - the study of classes of mathematical structures (e.g. groups, fields, graphs, universes of set theory) from the perspective of mathematical logic. The objects of study are models of theories in a formal language. We call a theory a set of sentences in a formal language, and model of a theory a structure (e.g. an interpretation) that satisfies the sentences of that theory.
Model theory recognises and is intimately concerned with a duality: It examines semantical elements (meaning and truth) by means of syntactical elements (formulas and proofs) of a corresponding language. To quote the first page of Chang and Keisler (1990):
- universal algebra + logic = model theory.
Model theory developed rapidly during the 1990s, and a more modern definition is provided by Wilfrid Hodges (1997):
- model theory = algebraic geometry − fields
although model theorists are also interested in the study of fields. Other nearby areas of mathematics include combinatorics, number theory, arithmetic dynamics, analytic functions, and non-standard analysis. In a similar way to proof theory, model theory is situated in an area of interdisciplinarity among mathematics, philosophy, and computer science. The most prominent professional organization in the field of model theory is the Association for Symbolic Logic.
Proof theory
- http://en.wikipedia.org/wiki/Proof_theory - a major branch of mathematical logic that represents proofs as formal mathematical objects, facilitating their analysis by mathematical techniques. Proofs are typically presented as inductively-defined data structures such as plain lists, boxed lists, or trees, which are constructed according to the axioms and rules of inference of the logical system. As such, proof theory is syntactic in nature, in contrast to model theory, which is semantic in nature.
Some of the major areas of proof theory include structural proof theory, ordinal analysis, provability logic, reverse mathematics, proof mining, automated theorem proving, and proof complexity. Much research also focuses on applications in computer science, linguistics, and philosophy.
Modal logic
- https://en.wikipedia.org/wiki/Modal_logic - extends classical logic with non-truth-functional ("modal") operators such as "necessarily" and "possibly"
- https://en.wikipedia.org/wiki/Deontic_logic - relating to duty, introduces operators "ought" and "can"
- https://en.wikipedia.org/wiki/Hybrid_logic - a modal logic
- https://en.wikipedia.org/wiki/Interpretability_logic - a family of modal logics that extend provability logic to describe interpretability or various related metamathematical properties and relations such as weak interpretability, Π1-conservativity, cointerpretability, tolerance, cotolerance, and arithmetic complexities.
Monads
An aspect of functional programming. See Haskell, etc. for related.
- https://en.wikipedia.org/wiki/Monad_(category_theory) - (also called a triple, triad, standard construction or fundamental construction) an endofunctor (a functor mapping a category to itself), together with two natural transformations required to fulfill certain coherence conditions. Monads are used in the theory of pairs of adjoint functors, and they generalize closure operators on partially ordered sets to arbitrary categories.
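To ground the definition above, here is the endofunctor-plus-two-natural-transformations presentation written out for the familiar `Maybe` monad in Haskell. The function names are chosen so as not to clash with the Prelude, and the laws are only stated, not proved; this is a sketch, not a category-theoretic treatment.

```haskell
-- The Maybe endofunctor: it sends a type a to Maybe a and a function to a function.
fmapMaybe :: (a -> b) -> Maybe a -> Maybe b
fmapMaybe f (Just x) = Just (f x)
fmapMaybe _ Nothing  = Nothing

-- First natural transformation, unit (a.k.a. return / eta): embed a value.
unitMaybe :: a -> Maybe a
unitMaybe = Just

-- Second natural transformation, multiplication (a.k.a. join / mu): flatten one layer.
joinMaybe :: Maybe (Maybe a) -> Maybe a
joinMaybe (Just m) = m
joinMaybe Nothing  = Nothing

-- The coherence conditions are the monad laws (stated here, not proved):
--   joinMaybe . unitMaybe           == id
--   joinMaybe . fmapMaybe unitMaybe == id
--   joinMaybe . joinMaybe           == joinMaybe . fmapMaybe joinMaybe
-- Haskell's Monad class presents the same data via return and (>>=), where
--   m >>= f  =  joinMaybe (fmapMaybe f m)
```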
Computation
See also Maths#Structures, Hardware
- https://en.wikipedia.org/wiki/Computation - any type of calculation that includes both arithmetical and non-arithmetical steps and follows a well-defined model, for example an algorithm. The study of computation is paramount (hypernymous) to the discipline of computer science.
- https://en.wikipedia.org/wiki/Model_of_computation - the definition of the set of allowable operations used in computation and their respective costs. It is used for measuring the complexity of an algorithm in execution time and or memory space: by assuming a certain model of computation, it is possible to analyze the computational resources required or to discuss the limitations of algorithms or computers.
- https://en.wikipedia.org/wiki/Theory_of_computation - branch that deals with how efficiently problems can be solved on a model of computation, using an algorithm. The field is divided into three major branches: automata theory and language, computability theory, and computational complexity theory.
- https://en.wikipedia.org/wiki/Evaluation_strategy - used by programming languages to determine two things: when to evaluate the arguments of a function call and what kind of value to pass to the function. To illustrate, a function application may evaluate the argument before evaluating the function's body and pass the ability to look up the argument's current value and modify it via assignment. The notion of reduction strategy in lambda calculus is similar but distinct. (A small call-by-need example appears at the end of this section.)
- https://en.wikipedia.org/wiki/Computational_problem - collection of questions that computers might be able to solve. For example, the problem of factoring "Given a positive integer n, find a nontrivial prime factor of n." is a computational problem. A computational problem can be viewed as an infinite collection of instances together with a, possibly empty, set of solutions for every instance. For example, in the factoring problem, the instances are the integers n, and solutions are prime numbers p that describe nontrivial prime factors of n. Computational problems are one of the main objects of study in theoretical computer science. The field of computational complexity theory attempts to explain how many resources (computational complexity) solving the problems will require, and when some problems are intractable or undecidable. Computational problems belong to complexity classes that define broadly how many resources it takes to compute (solve) them with various abstract machines (e.g. classical or quantum).
- https://en.wikipedia.org/wiki/File:Theoretical_computer_science.svg - Relationship between complexity theory, formal languages and computability theory.
- Theory of Computing - An Open Access Electronic Journal in Theoretical Computer Science
- YouTube: See How the CPU Works In One Lesson
- But How Do It Know - the book. The basic principles of computers - distills Computer Science down to the simple basic principles that all computers have in common, and presents the big picture step by step in plain English so that anyone can understand this ingenious, yet basically simple invention called a computer.
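Referring back to the evaluation-strategy entry above: Haskell's call-by-need strategy evaluates an argument only if the function body actually demands it, which the following sketch makes visible (the function names are made up for illustration).

```haskell
-- Under call-by-need, an argument is evaluated only if the function body demands it.
constOne :: a -> Int
constOne _ = 1

-- ghci> constOne (error "never evaluated")
-- 1        -- a call-by-value language would evaluate the argument first and crash

-- Laziness also allows a conceptually infinite argument, of which only a prefix is demanded.
firstSquares :: Int -> [Int]
firstSquares n = take n (map (^ 2) [1 ..])

-- ghci> firstSquares 5
-- [1,4,9,16,25]
```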
Computability
See Computing
to sort
- https://en.wikipedia.org/wiki/Computability_theory - also called recursion theory, is a branch of mathematical logic, of computer science, and of the theory of computation that originated in the 1930s with the study of computable functions and Turing degrees. The basic questions addressed by recursion theory are "What does it mean for a function on the natural numbers to be computable?" and "How can noncomputable functions be classified into a hierarchy based on their level of noncomputability?". The answers to these questions have led to a rich theory that is still being actively researched. The field has since grown to include the study of generalized computability and definability.
- https://en.wikipedia.org/wiki/Computable_function - the basic objects of study in computability theory. Computable functions are the formalized analogue of the intuitive notion of algorithm. They are used to discuss computability without referring to any concrete model of computation such as Turing machines or register machines. Any definition, however, must make reference to some specific model of computation but all valid definitions yield the same class of functions. Particular models of computability that give rise to the set of computable functions are the Turing-computable functions and the μ-recursive functions.
- https://en.wikipedia.org/wiki/Church–Turing_thesis - the thesis that every effectively calculable function is computable by a Turing machine, or equivalently definable in the lambda calculus or by general recursion; the equivalence of these three models of computation is a theorem, while the thesis itself links them to the informal notion of effective calculability.
- https://en.wikipedia.org/wiki/Turing_degree - or degree of unsolvability of a set of natural numbers measures the level of algorithmic unsolvability of the set. The concept of Turing degree is fundamental in computability theory, where sets of natural numbers are often regarded as decision problems; the Turing degree of a set tells how difficult it is to solve the decision problem associated with the set. That is, to determine whether an arbitrary number is in the given set.
- https://en.wikipedia.org/wiki/Curry–Howard_correspondence - the direct relationship between computer programs and mathematical proofs. It is a generalization of a syntactic analogy between systems of formal logic and computational calculi that was first discovered by the American mathematician Haskell Curry and logician William Alvin Howard.
- YouTube: Chomsky Hierarchy - Computerphile - Uncomputable through to finite state - Professor Brailsford explains Chomsky's hierarchy.
- YouTube: Same Story, Different Notation - Computerphile - Finite State Automata meets Recursion. Professor Brailsford continues the story of computers without memory.
- https://en.wikipedia.org/wiki/Grzegorczyk_hierarchy - a hierarchy of functions used in computability theory (Wagner and Wechsung 1986:43). Every function in the Grzegorczyk hierarchy is a primitive recursive function, and every primitive recursive function appears in the hierarchy at some level. The hierarchy deals with the rate at which the values of the functions grow; intuitively, functions in lower level of the hierarchy grow slower than functions in the higher levels.
- The Holy Trinity - "The doctrine of computational trinitarianism holds that computation manifests itself in three forms: proofs of propositions, programs of a type, and mappings between structures. These three aspects give rise to three sects of worship: Logic, which gives primacy to proofs and propositions; Languages, which gives primacy to programs and types; Categories, which gives primacy to mappings and structures. The central dogma of computational trinitarianism holds that Logic, Languages, and Categories are but three manifestations of one divine notion of computation. There is no preferred route to enlightenment: each aspect provides insights that comprise the experience of computation in our lives." [57]
- Physics, Topology, Logic and Computation: A Rosetta Stone [58] - "In physics, Feynman diagrams are used to reason about quantum processes. In the 1980s, it became clear that underlying these diagrams is a powerful analogy between quantum physics and topology: namely, a linear operator behaves very much like a "cobordism". Similar diagrams can be used to reason about logic, where they represent proofs, and computation, where they represent programs. With the rise of interest in quantum cryptography and quantum computation, it became clear that there is extensive network of analogies between physics, topology, logic and computation. In this expository paper, we make some of these analogies precise using the concept of "closed symmetric monoidal category". We assume no prior knowledge of category theory, proof theory or computer science."
- PDF: The Galois Connection between Syntax and Semantics - Peter Smith, University of Cambridge
- Where LISP Fits - [60] - "These simplifications made LISP into a way of describing computable functions much neater than the Turing machines or the general recursive definitions used in recursive function theory. The fact that Turing machines constitute an awkward programming language doesn’t much bother recursive function theorists, because they almost never have any reason to write particular recursive definitions, since the theory concerns recursive functions in general. They often have reason to prove that recursive functions with specific properties exist, but this can be done by an informal argument without having to write them down explicitly. In the early days of computing, some people developed programming languages based on Turing machines; perhaps it seemed more scientific. Anyway, I decided to write a paper describing LISP both as a programming language and as a formalism for doing recursive function theory."
- https://en.wikipedia.org/wiki/Thue-Morse_sequence - from a fever dream I had as a child
Computational complexity
- https://en.wikipedia.org/wiki/Time_complexity - the time complexity of an algorithm quantifies the amount of time taken by an algorithm to run as a function of the length of the string representing the input. The time complexity of an algorithm is commonly expressed using big O notation, which excludes coefficients and lower order terms. When expressed this way, the time complexity is said to be described asymptotically, i.e., as the input size goes to infinity. For example, if the time required by an algorithm on all inputs of size n is at most 5n³ + 3n for any n (bigger than some n₀), the asymptotic time complexity is O(n³). (A worked version of this bound appears at the end of this section.)
- https://en.wikipedia.org/wiki/Big_O_notation - a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity. It is a member of a family of notations invented by Edmund Landau and Paul Bachmann, collectively called Bachmann-Landau notation or asymptotic notation.
- https://complexityzoo.net - Complexity Zoo ... There are now 546 classes and counting! The Zoo first opened in 2002. It was made into a wiki in 2005, and hosted at the University of Waterloo from 2012 to 2020.
- https://en.wikipedia.org/wiki/Decision_problem - a question in some formal system with a yes-or-no answer
- https://en.wikipedia.org/wiki/Halting_Problem
- Scooping The Loop Snooper - A proof that the Halting Problem is undecidable
- https://en.wikipedia.org/wiki/Parameterized_complexity - a branch of computational complexity theory that focuses on classifying computational problems according to their inherent difficulty with respect to multiple parameters of the input or output. The complexity of a problem is then measured as a function in those parameters. This allows the classification of NP-hard problems on a finer scale than in the classical setting, where the complexity of a problem is only measured by the number of bits in the input.
- https://en.wikipedia.org/wiki/Blum_axioms - or Blum complexity axioms are axioms that specify desirable properties of complexity measures on the set of computable functions. The axioms were first defined by Manuel Blum in 1967. Importantly, Blum's speedup theorem and the Gap theorem hold for any complexity measure satisfying these axioms. The most well-known measures satisfying these axioms are those of time (i.e., running time) and space (i.e., memory usage).
- https://en.wikipedia.org/wiki/Kolmogorov_complexity - the inherent complexity of a given piece of data, measured as the length of the shortest program that produces it
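A worked instance of the bound in the time-complexity entry above: for every n ≥ 1 we have 5n³ + 3n ≤ 5n³ + 3n³ = 8n³, so 5n³ + 3n = O(n³); the constants c = 8 and n₀ = 1 are the witnesses required by the definition, namely f(n) ≤ c·g(n) for all n ≥ n₀.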
Automata
- https://en.wikipedia.org/wiki/Automata_theory - the study of abstract machines and automata, as well as the computational problems that can be solved using them. It is a theory in theoretical computer science, under discrete mathematics (a subject of study in both mathematics and computer science). The word automata (the plural of automaton) comes from the Greek word αὐτόματα, which means "self-acting".
Automata theory is closely related to formal language theory. An automaton is a finite representation of a formal language that may be an infinite set. Automata are often classified by the class of formal languages they can recognize. Automata play a major role in theory of computation, compiler construction, artificial intelligence, parsing and formal verification.
- https://en.wikipedia.org/wiki/Abstract_machine - abstract machines are often used in thought experiments regarding computability or to analyze the complexity of algorithms (see computational complexity theory). A typical abstract machine consists of a definition in terms of input, output, and the set of allowable operations used to turn the former into the latter. The best-known example is the Turing machine.
- https://en.wikipedia.org/wiki/Operational_semantics - certain desired properties of a program, such as correctness, safety or security, are verified by constructing proofs from logical statements about its execution and procedures
- The language of languages - explains grammars and common notations for grammars, such as Backus-Naur Form (BNF), Extended Backus-Naur Form (EBNF) and regular extensions to BNF.
- YouTube: Computers Without Memory - Computerphile - They're called 'Finite State Automata" and occupy the centre of Chomsky's Hierarchy - Professor Brailsford explains the ultimate single purpose computer.
- https://en.wikipedia.org/wiki/Nested_stack_automaton - a finite automaton that can make use of a stack containing data which can be additional stacks. Like a stack automaton, a nested stack automaton may step up or down in the stack, and read the current symbol; in addition, it may at any place create a new stack, operate on that one, eventually destroy it, and continue operating on the old stack. This way, stacks can be nested recursively to an arbitrary depth; however, the automaton always operates on the innermost stack only.
- https://en.wikipedia.org/wiki/Moore_machine - a finite-state machine whose output values are determined solely by its current state
- https://en.wikipedia.org/wiki/Finite_state_transducer - a finite state machine with two tapes: an input tape and an output tape. This contrasts with an ordinary finite state automaton (or finite state acceptor), which has a single tape.
Pushdown Automaton
- https://en.wikipedia.org/wiki/Pushdown_automaton - type of automaton that employs a stack
Combinatory logic
- https://en.wikipedia.org/wiki/Combinatory_logic - a notation to eliminate the need for quantified variables in mathematical logic. It was introduced by Moses Schönfinkel and Haskell Curry, and has more recently been used in computer science as a theoretical model of computation and also as a basis for the design of functional programming languages. It is based on combinators which were introduced by Schönfinkel in 1920 with the idea of providing an analogous way to build up functions - and to remove any mention of variables - particularly in predicate logic. A combinator is a higher-order function that uses only function application and earlier defined combinators to define a result from its arguments. (The basic S, K and I combinators are written out as Haskell functions at the end of this section.)
- http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-042j-mathematics-for-computer-science-fall-2010/video-lectures [67]
- https://en.wikipedia.org/wiki/Combinatory_categorial_grammar - an efficiently parsable, yet linguistically expressive grammar formalism. It has a transparent interface between surface syntax and underlying semantic representation, including predicate-argument structure, quantification and information structure. The formalism generates constituency-based structures (as opposed to dependency-based ones) and is therefore a type of phrase structure grammar (as opposed to a dependency grammar). CCG relies on combinatory logic, which has the same expressive power as the lambda calculus, but builds its expressions differently.
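For concreteness, the S, K and I combinators from the combinatory-logic entry above are one-liners in Haskell; the lower-case names are used only because Haskell reserves capitalized identifiers for constructors, and this is a sketch rather than a full combinator calculus.

```haskell
-- The three basic combinators of the SKI calculus as Haskell functions.
s :: (a -> b -> c) -> (a -> b) -> a -> c
s x y z = x z (y z)

k :: a -> b -> a
k x _ = x

-- I need not be primitive: it is definable as S K K.
i :: a -> a
i = s k k

-- ghci> i 42
-- 42
-- ghci> s k k "any value at all"
-- "any value at all"
```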
Finite State Machines
- https://en.wikipedia.org/wiki/Finite-state_machine - or finite-state automaton (FSA, plural: automata), finite automaton, or simply a state machine, is a mathematical model of computation. It is an abstract machine that can be in exactly one of a finite number of states at any given time. The FSM can change from one state to another in response to some inputs; the change from one state to another is called a transition. An FSM is defined by a list of its states, its initial state, and the inputs that trigger each transition. Finite-state machines are of two types—deterministic finite-state machines and non-deterministic finite-state machines. A deterministic finite-state machine can be constructed equivalent to any non-deterministic one.
The behavior of state machines can be observed in many devices in modern society that perform a predetermined sequence of actions depending on a sequence of events with which they are presented. Simple examples are vending machines, which dispense products when the proper combination of coins is deposited, elevators, whose sequence of stops is determined by the floors requested by riders, traffic lights, which change sequence when cars are waiting, and combination locks, which require the input of a sequence of numbers in the proper order.
The finite-state machine has less computational power than some other models of computation such as the Turing machine. The computational power distinction means there are computational tasks that a Turing machine can do but an FSM cannot. This is because an FSM's memory is limited by the number of states it has. FSMs are studied in the more general field of automata theory.
An automaton can be said to recognize a string if we view the content of its tape as input. In other words, the automaton computes a function that maps strings into the set {0,1}. Alternatively, we can say that an automaton generates strings, which means viewing its tape as an output tape. On this view, the automaton generates a formal language, which is a set of strings. The two views of automata are equivalent: the function that the automaton computes is precisely the indicator function of the set of strings it generates. The class of languages generated by finite automata is known as the class of regular languages.
- https://en.wikipedia.org/wiki/State-transition_table - a table showing what state (or states in the case of a nondeterministic finite automaton) a finite-state machine will move to, based on the current state and other inputs. It is essentially a truth table in which the inputs include the current state along with other inputs, and the outputs include the next state along with other outputs. A state-transition table is one of many ways to specify a finite-state machine. Other ways include a state diagram.
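A finite-state machine is little more than its transition table plus a start state and a set of accepting states. The sketch below (made-up names) encodes a two-state DFA over the alphabet {'0','1'} that accepts exactly the strings containing an even number of 1s.

```haskell
-- A two-state DFA over {'0','1'} accepting strings with an even number of 1s.
data ParityState = Even | Odd deriving (Eq, Show)

-- The state-transition table as a function of (current state, input symbol).
step :: ParityState -> Char -> ParityState
step Even '1' = Odd
step Odd  '1' = Even
step st   _   = st              -- '0' (or any other symbol) leaves the state unchanged

-- Run the machine from the start state over the whole input and test acceptance.
accepts :: String -> Bool
accepts input = foldl step Even input == Even   -- Even is both the start and accepting state

-- ghci> accepts "10101"   -- three 1s
-- False
-- ghci> accepts "1001"    -- two 1s
-- True
```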
RE
- https://en.wikipedia.org/wiki/Recursively_enumerable_language - also recognizable, partially decidable, semidecidable, Turing-acceptable or Turing-recognizable, if it is a recursively enumerable subset in the set of all possible words over the alphabet of the language, i.e., if there exists a Turing machine which will enumerate all valid strings of the language. Recursively enumerable languages are known as type-0 languages in the Chomsky hierarchy of formal languages. All regular, context-free, context-sensitive and recursive languages are recursively enumerable. The class of all recursively enumerable languages is called RE.
- https://en.wikipedia.org/wiki/Recursively_enumerable_set - a set S of natural numbers is called recursively enumerable, computably enumerable, semidecidable, provable or Turing-recognizable if: There is an algorithm such that the set of input numbers for which the algorithm halts is exactly S, or equivalently; There is an algorithm that enumerates the members of S. That means that its output is simply a list of the members of S: s1, s2, s3, ... . If necessary, this algorithm may run forever. The first condition suggests why the term semidecidable is sometimes used; the second suggests why computably enumerable is used. The abbreviations r.e. and c.e. are often used, even in print, instead of the full phrase. In computational complexity theory, the complexity class containing all recursively enumerable sets is RE.
Turing / register machine
- http://en.wikipedia.org/wiki/Turing_machine - FSM with two stacks (left and right on tape); a sketch of this two-stack view appears at the end of this section
- https://en.wikipedia.org/wiki/Register_machine - a generic class of abstract machines used in a manner similar to a Turing machine. All the models are Turing equivalent.
- https://en.wikipedia.org/wiki/Langton%27s_ant - a two-dimensional Turing machine with a very simple set of rules but complicated emergent behavior. It was invented by Chris Langton in 1986 and runs on a square lattice of black and white cells. The universality of Langton's ant was proven in 2000. The idea has been generalized in several different ways, such as turmites which add more colors and more states. [70]
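As the Turing-machine entry above puts it, a Turing machine can be viewed as a finite-state controller plus two stacks: the tape to the left of the head and the tape from the head rightwards. A minimal Haskell sketch of a single machine step under that view follows; all names are hypothetical, and the finite control is left abstract as a function argument.

```haskell
-- The tape, split at the head into two stacks: `leftOfHead` holds the symbols to the
-- left of the head (nearest on top); `fromHead` starts with the symbol under the head.
data Move = MoveLeft | MoveRight

data Tape = Tape { leftOfHead :: [Char], fromHead :: [Char] } deriving Show

blank :: Char
blank = '_'

-- Read the symbol under the head (a blank if we have run off the explicit tape).
scan :: Tape -> Char
scan (Tape _ (c : _)) = c
scan (Tape _ [])      = blank

-- Write a symbol under the head, then shift one symbol between the stacks to move the head.
writeAndMove :: Char -> Move -> Tape -> Tape
writeAndMove c mv (Tape ls rs) =
  let rs' = c : drop 1 rs
  in  case mv of
        MoveRight -> Tape (top rs' : ls) (drop 1 rs')
        MoveLeft  -> Tape (drop 1 ls) (top ls : rs')
  where
    top xs = if null xs then blank else head xs

-- One step of the whole machine: the finite control maps (state, scanned symbol)
-- to (next state, symbol to write, head movement).
stepTM :: (q -> Char -> (q, Char, Move)) -> (q, Tape) -> (q, Tape)
stepTM control (q, tape) =
  let (q', c, mv) = control q (scan tape)
  in  (q', writeAndMove c mv tape)
```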
Boolean circuit
- https://en.wikipedia.org/wiki/Boolean_circuit - a mathematical model for combinational digital logic circuits. A formal language can be decided by a family of Boolean circuits, one circuit for each possible input length. Boolean circuits are also used as a formal model for combinational logic in digital electronics. Boolean circuits are defined in terms of the logic gates they contain. For example, a circuit might contain binary AND and OR gates and unary NOT gates, or be entirely described by binary NAND gates. Each gate corresponds to some Boolean function that takes a fixed number of bits as input and outputs a single bit. Boolean circuits provide a model for many digital components used in computer engineering, including multiplexers, adders, and arithmetic logic units, but they exclude sequential logic. They are an abstraction that omits many aspects relevant to designing real digital logic circuits, such as metastability, fanout, glitches, power consumption, and propagation delay variability. (A one-bit full adder built from such gates is sketched at the end of this section.)
- https://en.wikipedia.org/wiki/Boolean_network - a discrete set of Boolean variables, each of which has a Boolean function (possibly different for each variable) assigned to it that takes inputs from a subset of those variables and whose output determines the state of the variable it is assigned to. This set of functions in effect determines a topology (connectivity) on the set of variables, which then become nodes in a network. Usually, the dynamics of the system is taken as a discrete time series where the state of the entire network at time t+1 is determined by evaluating each variable's function on the state of the network at time t. This may be done synchronously or asynchronously.
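As a concrete instance of the Boolean-circuit entry above, here is a one-bit full adder assembled from basic gates; the gate and function names are made up for illustration, and the circuit is purely combinational.

```haskell
-- Basic gates: each is a Boolean function of a fixed number of input bits.
andG, orG, xorG :: Bool -> Bool -> Bool
andG = (&&)
orG  = (||)
xorG = (/=)

-- A one-bit full adder: three input bits in, a sum bit and a carry bit out.
-- Purely combinational: no memory, no feedback, just gates feeding gates.
fullAdder :: Bool -> Bool -> Bool -> (Bool, Bool)
fullAdder a b carryIn = (sumBit, carryOut)
  where
    sumBit   = xorG (xorG a b) carryIn
    carryOut = orG (andG a b) (andG carryIn (xorG a b))

-- ghci> fullAdder True True False
-- (False,True)      -- 1 + 1 + 0 = 10 in binary: sum 0, carry 1
```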
Lambda calculus
- https://en.wikipedia.org/wiki/Lambda_calculus - a formal system in mathematical logic for expressing computation based on function abstraction and application using variable binding and substitution. It is a universal model of computation that can be used to simulate any single-taped Turing machine and was first introduced by mathematician Alonzo Church in the 1930s as part of an investigation into the foundations of mathematics. Lambda calculus consists of constructing lambda terms and performing reduction operations on them.
Lambda calculus has applications in many different areas in mathematics, philosophy, linguistics, and computer science. Lambda calculus has played an important role in the development of the theory of programming languages. Functional programming languages implement the lambda calculus. Lambda calculus also is a current research topic in Category theory.
- https://en.wikipedia.org/wiki/Church_encoding - a means of representing data and operators in the lambda calculus. The data and operators form a mathematical structure which is embedded in the lambda calculus. The Church numerals are a representation of the natural numbers using lambda notation. The method is named for Alonzo Church, who first encoded data in the lambda calculus this way.
Terms that are usually considered primitive in other notations (such as integers, booleans, pairs, lists, and tagged unions) are mapped to higher-order functions under Church encoding. The Church-Turing thesis asserts that any computable operator (and its operands) can be represented under Church encoding. In the untyped lambda calculus the only primitive data type is the function. (A Church-numeral sketch in Haskell appears at the end of this section.)
- Lambda Diagrams - a graphical notation for closed lambda terms, in which abstractions (lambdas) are represented by horizontal lines, variables by vertical lines emanating down from their binding lambda, and applications by horizontal links connecting the leftmost variables. In the alternative style, applications link the nearest deepest variables, for a more stylistic, if less uniform, look.
- Lunarflow - Lambda calculus, but actually intuitive
- YouTube: Lambda Calculus - Computerphile
- PDF: A short introduction to the Lambda Calculus - Achim Jung, March 18, 2004. The lambda calculus can appear arcane on first encounter. Viewed purely as a “naming device”, however, it is a straightforward extension of ordinary mathematical notation.
- Compiling Lambda Calculus - This book introduces the theory and interpretation of lambda calculus. [74]
- ALGEBRA AND THE LAMBDA CALCULUS - by Aubrey Jaffer [75]
ABSTRACT: An algebraic system which includes Church's lambda calculus and currying is presented. Closures, applications, and currying are implemented using variable elimination. The usual connection made between the lambda calculus and algebra is to construct the integers using the lambda calculus and then construct algebraic (and other) formulas by Gödelizing them. The system described here reverses this connection by implementing the lambda calculus in an algebraic system. It is used in the JACAL symbolic mathematics system and gives JACAL the ability to represent functions as members of the algebraic system. All of the system's operations (including simplification) can then be applied to functions as easily as to expressions.
- https://en.wikipedia.org/wiki/SKI_combinator_calculus - a combinatory logic, a computational system that may be perceived as a reduced version of untyped lambda calculus. It can be thought of as a computer programming language, though it is not convenient for writing software. Instead, it is important in the mathematical theory of algorithms because it is an extremely simple Turing complete language. It was introduced by Moses Schönfinkel and Haskell Curry. All operations in lambda calculus can be encoded via abstraction elimination into the SKI calculus as binary trees whose leaves are one of the three symbols S, K, and I (called combinators).
- https://en.wikipedia.org/wiki/Binary_lambda_calculus - a formulation of combinatory logic using only the symbols 0 and 1.
- John's Lambda Calculus and Combinatory Logic Playground
- http://www.ioccc.org/2012/tromp/hint.html - celebrating the close connection between obfuscation and conciseness, an implementation of the most concise language known, Binary Lambda Calculus (BLC).
- https://en.wikipedia.org/wiki/Calculus_of_constructions - a type theory created by Thierry Coquand. It can serve as both a typed programming language and as constructive foundation for mathematics. For this second reason, the CoC and its variants have been the basis for Coq and other proof assistants.
- http://www.scribd.com/doc/2384893/barendregt92lambda - Lambda Calculi with Types (Barendregt, 1992)
- An Introduction to the Lambda Calculus - using Clojure [77]
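As a concrete companion to the Church encoding entry above, here is a minimal sketch of Church numerals using plain Python lambdas; the helper names (zero, succ, to_int and so on) are just illustrative.

```python
# Church numerals: the number n is the function that applies f to x n times.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
mul  = lambda m: lambda n: lambda f: m(n(f))

def to_int(n):
    """Convert a Church numeral back to an ordinary Python int."""
    return n(lambda k: k + 1)(0)

one = succ(zero)
two = succ(one)
three = add(one)(two)

assert to_int(three) == 3
assert to_int(mul(two)(three)) == 6
```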
Pattern calculus
- https://en.wikipedia.org/wiki/Pattern_calculus - bases all computation on pattern matching of a very general kind. Like lambda calculus, it supports a uniform treatment of function evaluation. Also, it allows functions to be passed as arguments and returned as results. In addition, pattern calculus supports uniform access to the internal structure of arguments, be they pairs or lists or trees. Also, it allows patterns to be passed as arguments and returned as results. Uniform access is illustrated by a pattern-matching function size that computes the size of an arbitrary data structure.
- The pattern calculus - "There is a significant class of operations such as mapping that are common to all data structures. The goal of generic programming is to support these operations on arbitrary data types without having to recode for each new type. The pattern calculus and combinatory type system reach this goal by representing each data structure as a combination of names and a finite set of constructors. These can be used to define generic functions by pattern-matching programs in which each pattern has a different type. Evaluation is type-free. Polymorphism is captured by quantifying over type variables that represent unknown structures. A practical type inference algorithm is provided."
General recursive functions
- https://en.wikipedia.org/wiki/General_recursive_function - a class of partial functions from natural numbers to natural numbers that are "computable" in an intuitive sense. In fact, in computability theory it is shown that the μ-recursive functions are precisely the functions that can be computed by Turing machines. The μ-recursive functions are closely related to primitive recursive functions, and their inductive definition builds upon that of the primitive recursive functions. However, not every μ-recursive function is a primitive recursive function; the most famous counterexample is the Ackermann function. Other equivalent classes of functions are the λ-recursive functions and the functions that can be computed by Markov algorithms. The set of all recursive functions is known as R in computational complexity theory.
- https://en.wikipedia.org/wiki/Primitive_recursive_function - class of functions defined using primitive recursion and composition as central operations, strict subset of the total µ-recursive functions
- https://en.wikipedia.org/wiki/Recursion_(computer_science) - in computer science is a method where the solution to a problem depends on solutions to smaller instances of the same problem (as opposed to iteration). The approach can be applied to many types of problems, and recursion is one of the central ideas of computer science.
- It Is What It Is (And Nothing Else) | Existential Type - What, then, is recursion? It is nothing more than self-reference, the ability to name a computation for use within the computation itself. Recursion is what it is, and nothing more. No stacks, no tail calls, no proper or improper forms, no optimizations, just self-reference pure and simple. Recursion is not tied to “procedures” or “functions” or “methods”; one can have self-referential values of all types. [80]
- https://en.wikipedia.org/wiki/Recursive_set - a set of natural numbers is called recursive, computable or decidable if there is an algorithm which terminates after a finite amount of time and correctly decides whether or not a given number belongs to the set. A more general class of sets consists of the recursively enumerable sets, also called semidecidable sets. For these sets, it is only required that there is an algorithm that correctly decides when a number is in the set; the algorithm may give no answer (but not the wrong answer) for numbers not in the set. A set which is not computable is called noncomputable or undecidable.
P system
- https://en.wikipedia.org/wiki/P_system - a computational model in the field of computer science that performs calculations using a biologically-inspired process. They are based upon the structure of biological cells, abstracting from the way in which chemicals interact and cross cell membranes. The concept was first introduced in a 1998 report by the computer scientist Gheorghe Păun, whose last name is the origin of the letter P in 'P Systems'. Variations on the P system model led to the formation of a branch of research known as 'membrane computing.' Although inspired by biology, the primary research interest in P systems is concerned with their use as a computational model, rather than for biological modeling, although this is also being investigated.
Types
- https://en.wikipedia.org/wiki/Type_theory - any of a class of formal systems, some of which can serve as alternatives to set theory as a foundation for all mathematics. In type theory, every "term" has a "type" and operations are restricted to terms of a certain type.
- https://en.wikipedia.org/wiki/Type_system - a collection of rules that assign a property called a type to the various constructs a computer program is composed of, such as variables, expressions, functions or modules. A type system reduces bugs by defining interfaces between different parts of a computer program, and then checking that the parts have been connected in a consistent way. This checking can happen statically (at compile time), dynamically (at run time), or as a combination of the two.
Type systems have other purposes as well, such as enabling certain compiler optimizations, allowing for multiple dispatch, providing a form of documentation, etc. A type system associates a type with each computed value and, by examining the flow of these values, attempts to ensure or prove that no type errors can occur. The particular type system in question determines exactly what constitutes a type error, but in general the aim is to prevent operations expecting a certain kind of value from being used with values for which that operation does not make sense (logic errors); memory errors will also be prevented. Type systems are often specified as part of programming languages, and built into the interpreters and compilers for them; although the type system of a language can be extended by optional tools that perform additional kinds of checks using the language's original type syntax and grammar.
- http://c2.com/cgi/wiki?TypeSystem - Weak/Strong, Static/Dynamic
- https://en.wikipedia.org/wiki/Type_safety - the extent to which a programming language discourages or prevents type errors.
- https://en.wikipedia.org/wiki/Type_signature - type annotation defines the inputs and outputs for a function, subroutine or method.
- https://en.wikipedia.org/wiki/Type_conversion - typecasting, and coercion are different ways of, implicitly or explicitly, changing an entity of one data type into another. This is done to take advantage of certain features of type hierarchies or type representations. One example would be small integers, which can be stored in a compact format and converted to a larger representation when used in arithmetic computations. In object-oriented programming, type conversion allows programs to treat objects of one type as one of their ancestor types to simplify interacting with them.
Type aliasing provides a way to redefine existing types as new type names. For example, type aliases may be used to define names for object types, effectively modeling interface types.
- https://en.wikipedia.org/wiki/Type_inference - automatic deduction of the type of an expression in a programming language. If some, but not all, type annotations are already present it is referred to as type reconstruction. The opposite operation of type inference is called type erasure.
- http://c2.com/cgi/wiki?TypingQuadrant - weak/strong vs safe/unsafe typing
- https://en.wikipedia.org/wiki/Strong_and_weak_typing - in general, a strongly typed language is more likely to generate an error or refuse to compile if the argument passed to a function does not closely match the expected type. On the other hand, a very weakly typed language may produce unpredictable results or may perform implicit type conversion. A different but related concept is latent typing.
- https://en.wikipedia.org/wiki/Latent_typing - type system where types are associated with values and not variables, requires run-time type checking
- https://en.wikipedia.org/wiki/Structural_type_system - a major class of type system, in which type compatibility and equivalence are determined by the type's actual structure or definition, and not by other characteristics such as its name or place of declaration. Structural systems are used to determine if types are equivalent and whether a type is a subtype of another. It contrasts with nominative systems, where comparisons are based on the names of the types or explicit declarations, and duck typing, in which only the part of the structure accessed at runtime is checked for compatibility.
- https://en.wikipedia.org/wiki/Nominative_type_system - also called a nominal or name-based type system, a major class of type system in which compatibility and equivalence of data types is determined by explicit declarations and/or the name of the types. Nominal systems are used to determine if types are equivalent, as well as if a type is a subtype of another. It contrasts with structural systems, where comparisons are based on the structure of the types in question and do not require explicit declarations.
- https://en.wikipedia.org/wiki/Automatic_variable - a local variable which is allocated and deallocated automatically when program flow enters and leaves the variable's scope. The scope is the lexical context, particularly the function or block in which a variable is defined. Local data is typically (in most languages) invisible outside the function or lexical context where it is defined. Local data is also invisible and inaccessible to a called function, but is not deallocated, coming back in scope as the execution thread returns to the caller.
- https://en.wikipedia.org/wiki/Principal_type - In type theory, a type system is said to have the principal type property if, given a term and an environment, there exists a principal type for this term in this environment, i.e. a type such that all other types for this term in this environment are an instance of the principal type.
- https://en.wikipedia.org/wiki/Duck_typing - style of typing in which an object's methods and properties determine the valid semantics, rather than its inheritance from a particular class or implementation of a specific interface
- https://en.wikipedia.org/wiki/Substructural_type_system - family of type systems analogous to substructural logics where one or more of the structural rules are absent or allowed under controlled circumstances. Such systems are useful for constraining access to system resources such as files, locks and memory by keeping track of changes of state that occur and preventing invalid states
- https://en.wikipedia.org/wiki/Type_constructor - a feature of a typed formal language that builds new types from old
- https://en.wikipedia.org/wiki/Kind_%28type_theory%29 - the type of a type constructor or, less commonly, the type of a higher-order type operator. A kind system is essentially a simply typed lambda calculus "one level up", endowed with a primitive type, denoted * and called "type", which is the kind of any data type which does not need any type parameters.
- https://en.wikipedia.org/wiki/Hindley–Milner_type_system - a classical type system for the lambda calculus with parametric polymorphism, first described by J. Roger Hindley and later rediscovered by Robin Milner. Luis Damas contributed a close formal analysis and proof of the method in his PhD thesis. Among HM's more notable properties are completeness and the ability to deduce the most general type of a given program without any type annotations or other hints supplied by the programmer. Algorithm W is a fast algorithm, performing type inference in almost linear time with respect to the size of the source, making it practical for typing large programs. HM is typically used for functional languages. It was first implemented as part of the type system of the programming language ML. Since then, HM has been extended in various ways, most notably with constrained types as used in Haskell.
- Ideology - Some people claim that unit tests make type systems unnecessary: "types are just simple unit tests written for you, and simple unit tests aren't the important ones". Other people claim that type systems make unit tests unnecessary: "dynamic languages only need unit tests because they don't have type systems." What's going on here? These can't both be right. We'll use this example and a couple others to explore the unknown beliefs that structure our understanding of the world.
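To illustrate the structural / nominative / duck-typing distinction drawn in the entries above, here is a small Python sketch using typing.Protocol for the structural check; the class and protocol names are invented for the example.

```python
# Structural vs nominal vs duck typing, sketched in Python.
from typing import Protocol, runtime_checkable

@runtime_checkable
class Quacks(Protocol):          # structural: only the shape matters
    def quack(self) -> str: ...

class Duck:                      # no declared relationship to Quacks
    def quack(self) -> str:
        return "quack"

class Dog:
    def bark(self) -> str:
        return "woof"

d = Duck()
print(isinstance(d, Quacks))     # True: structurally compatible
print(isinstance(Dog(), Quacks)) # False: the required method is missing
print(isinstance(d, Duck))       # nominal check: membership of a named class

def make_noise(animal):          # duck typing: just call it and see
    return animal.quack()

print(make_noise(d))             # "quack"
```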
Typed lambda calculus
- https://en.wikipedia.org/wiki/Lambda_cube - a framework introduced by Henk Barendregt to investigate the different dimensions in which the calculus of constructions is a generalization of the simply typed λ-calculus. Each dimension of the cube corresponds to a new kind of dependency between terms and types. Here, "dependency" refers to the capacity of a term or type to bind a term or type.
The respective dimensions of the λ-cube correspond to:
- y-axis ↑ : terms that can bind types, corresponding to polymorphism.
- x-axis → : types that can bind terms, corresponding to dependent types.
- z-axis ↗ : types that can bind types, corresponding to (binding) type operators.
- https://en.wikipedia.org/wiki/Lambda-mu_calculus - an extension of the lambda calculus introduced by M. Parigot. It introduces two new operators: the μ operator (which is completely different both from the μ operator found in computability theory and from the μ operator of modal μ-calculus) and the bracket operator. Semantically these operators correspond to continuations, found in some functional programming languages. Proof-theoretically, it provides a well-behaved formulation of classical natural deduction.
One of the main goals of this extended calculus is to be able to describe expressions corresponding to theorems in classical logic. According to the Curry–Howard isomorphism, lambda calculus on its own can express theorems in intuitionistic logic only, and several classical logical theorems can't be written at all. However with these new operators one is able to write terms that have the type of, for example, Peirce's law.
System F / second-order lambda calculus
- https://en.wikipedia.org/wiki/System_F - also known as the (Girard–Reynolds) polymorphic lambda calculus or the second-order lambda calculus, is a typed lambda calculus that differs from the simply typed lambda calculus by the introduction of a mechanism of universal quantification over types. System F thus formalizes the notion of parametric polymorphism in programming languages, and forms a theoretical basis for languages such as Haskell and ML. System F was discovered independently by logician Jean-Yves Girard (1972) and computer scientist John C. Reynolds (1974). Whereas simply typed lambda calculus has variables ranging over terms, and binders for them, System F additionally has variables ranging over types, and binders for them.
- https://en.wikipedia.org/wiki/System_U - System U and System U− are pure type systems, i.e. special forms of a typed lambda calculus with an arbitrary number of sorts, axioms and rules (or dependencies between the sorts). They were both proved inconsistent by Jean-Yves Girard in 1972. This result led to the realization that Martin-Löf's original 1971 type theory was inconsistent as it allowed the same "Type in Type" behaviour that Girard's paradox exploits.
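System F has no mainstream surface syntax of its own, but its central idea, quantification over types, shows up as parametric polymorphism in everyday languages. A loose Python sketch with typing generics (the names are illustrative and this is only an analogy, not System F itself):

```python
# Parametric polymorphism: these functions work uniformly for every type T,
# roughly the idea System F captures with universal quantification over types.
from typing import TypeVar, Callable

T = TypeVar("T")

def identity(x: T) -> T:
    return x

def twice(f: Callable[[T], T], x: T) -> T:
    return f(f(x))

print(identity(42), identity("hello"))
print(twice(lambda n: n + 1, 0))      # 2
```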
Values
- https://en.wikipedia.org/wiki/Value_(computer_science) - an expression which cannot be evaluated any further (a normal form). The members of a type are the values of that type. For example, the expression 1 + 2 is not a value as it can be reduced to the expression 3. This expression cannot be reduced any further (and is a member of the type Nat) and therefore is a value. The "value of a variable" is given by the corresponding mapping in the environment. In languages with assignable variables it becomes necessary to distinguish between the r-value (or contents) and the l-value (or location) of a variable. In declarative (high-level) languages, values have to be referentially transparent. This means that the resulting value is independent of the location in which a (sub-)expression needed to compute the value is stored. Only the contents of the location (the bits, whether they are 1 or 0) and their interpretation are significant.
- YouTube: The Value of Values with Rich Hickey
- https://en.wikipedia.org/wiki/Data_type - or simply type, a classification identifying one of various types of data, such as real, integer or Boolean, that determines the possible values for that type; the operations that can be done on values of that type; the meaning of the data; and the way values of that type can be stored.
Boolean
- https://en.wikipedia.org/wiki/Boolean_data_type - a data type, having two values (usually denoted true and false), intended to represent the truth values of logic and Boolean algebra. It is named after George Boole, who first defined an algebraic system of logic in the mid 19th century. The Boolean data type is primarily associated with conditional statements, which allow different actions and change control flow depending on whether a programmer-specified Boolean condition evaluates to true or false. It is a special case of a more general logical data type; logic does not always have to be Boolean.
1,0
Numbers
- https://en.wikipedia.org/wiki/Signedness - a property of data types representing numbers in computer programs. A numeric variable is signed if it can represent both positive and negative numbers, and unsigned if it can only represent non-negative numbers (zero or positive numbers).
Because signed numbers can represent negative values, they give up part of the positive range available to unsigned numbers of the same size (in bits): half the possible bit patterns encode non-positive values. For example, if an 8-bit integer is signed, the unsigned values 128 to 255 are unavailable, while −128 to 127 are representable. Unsigned variables can dedicate all possible values to the non-negative range. A two's complement signed 16-bit integer can hold the values −32768 to 32767 inclusive, while an unsigned 16-bit integer can hold the values 0 to 65535. In this representation, the leftmost (most significant) bit denotes whether the value is negative (0 for non-negative, 1 for negative).
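The ranges quoted above can be checked directly; a short Python sketch using the struct module to reinterpret the same 16 bits as signed and unsigned (purely illustrative):

```python
# Two's complement: the same 16-bit pattern read as unsigned vs signed.
import struct

bits = 0xFFFF                                  # all ones
unsigned = struct.unpack("<H", struct.pack("<H", bits))[0]
signed = struct.unpack("<h", struct.pack("<H", bits))[0]
print(unsigned, signed)                        # 65535 -1

# Range limits for 16-bit integers.
print(-(2 ** 15), 2 ** 15 - 1)                 # -32768 32767 (signed)
print(0, 2 ** 16 - 1)                          # 0 65535      (unsigned)
```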
Floating point
- https://en.wikipedia.org/wiki/IEEE_754 - a technical standard for floating-point computation established in 1985 by the Institute of Electrical and Electronics Engineers (IEEE). The standard addressed many problems found in the diverse floating point implementations that made them difficult to use reliably and portably. Many hardware floating point units now use the IEEE 754 standard.
- http://0.30000000000000004.com/ [85]
- Floating Point - Computerphile
- http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.dui0491c/Babbfeda.html
- http://randomascii.wordpress.com/2014/01/27/theres-only-four-billion-floatsso-test-them-all/ [86]
- Floating point denormals | EarLevel Engineering
- https://news.ycombinator.com/item?id=15359574
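The 0.30000000000000004.com link above is about the classic rounding surprise; a tiny Python check (nothing here is specific to any of the linked pages):

```python
# IEEE 754 doubles cannot represent 0.1 or 0.2 exactly, so the sum is off.
print(0.1 + 0.2)                        # 0.30000000000000004
print(0.1 + 0.2 == 0.3)                 # False

import math
print(math.isclose(0.1 + 0.2, 0.3))     # True: compare with a tolerance

from decimal import Decimal
print(Decimal("0.1") + Decimal("0.2"))  # 0.3 exactly, at the cost of speed
```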
Unum / posit
- https://en.wikipedia.org/wiki/Unum_(number_format) - universal numbers are an arithmetic and a binary representation format for real numbers analogous to floating point, proposed by John L. Gustafson as an alternative to the now ubiquitous IEEE 754 arithmetic. The first version of unums, now officially known as Type I unum, was introduced in his book The End of Error. Gustafson has since created two newer revisions of the unum format, Type II and Type III, in late 2016. Type III unum is also known as posits and valids; posits present arithmetic for single real values and valids present the interval arithmetic version. This data type can serve as a replacement for IEEE 754 floats for programs which do not depend on specific features of IEEE 754. Details of valids have yet to be officially articulated by Gustafson.
Strings
- https://en.wikipedia.org/wiki/String_(computer_science) - traditionally a sequence of characters, either as a literal constant or as some kind of variable. The latter may allow its elements to be mutated and the length changed, or it may be fixed (after creation). A string is generally considered a data type and is often implemented as an array data structure of bytes (or words) that stores a sequence of elements, typically characters, using some character encoding. String may also denote more general arrays or other sequence (or list) data types and structures.
Depending on programming language and precise data type used, a variable declared to be a string may either cause storage in memory to be statically allocated for a predetermined maximum length or employ dynamic allocation to allow it to hold a variable number of elements. When a string appears literally in source code, it is known as a string literal or an anonymous string.
In formal languages, which are used in mathematical logic and theoretical computer science, a string is a finite sequence of symbols that are chosen from a set called an alphabet.
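A small Python illustration of the "sequence of elements under some character encoding" point above; the example text is arbitrary.

```python
# A string is an abstract sequence of characters; bytes appear only after
# choosing an encoding.
s = "café"
utf8 = s.encode("utf-8")
print(len(s), len(utf8))          # 4 characters, 5 bytes (é takes two bytes)
print(utf8)                       # b'caf\xc3\xa9'
print(utf8.decode("utf-8") == s)  # True: decoding round-trips
```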
to sort
- https://en.wikipedia.org/wiki/Primitive_data_type - a basic type, often built-in, used as a building block for composite types
- https://en.wikipedia.org/wiki/Value_type - commonly used to refer to one of two kinds of data types: Types of values or Types of objects with deep copy semantics.
- https://en.wikipedia.org/wiki/Reference_type - a data type that refers to an object in memory. A pointer type on the other hand refers to a memory address. Reference types can be thought of as pointers that are implicitly dereferenced. The objects being referred to are dynamically allocated on the heap whereas value types are allocated automatically on the stack. In languages supporting garbage collection the objects being referred to are destroyed automatically after they become unreachable.
- https://en.wikipedia.org/wiki/Passive_data_structure - also termed a plain old data structure, or plain old data (POD), is a term for a record, to contrast with objects. It is a data structure that is represented only as passive collections of field values (instance variables), without using object-oriented features. Passive data structures are appropriate when there is a part of a system where it should be clearly indicated that the detailed logic for data manipulation and integrity are elsewhere. PDSs are often found at the boundaries of a system, where information is being moved to and from other systems or persistent storage and the problem domain logic that is found in other parts of the system is irrelevant. For example, PDS would be convenient for representing the field values of objects that are being constructed from external data, in a part of the system where the semantic checks and interpretations needed for valid objects are not applied yet.
- https://en.wikipedia.org/wiki/Reference_(computer_science) - a value that enables a program to indirectly access a particular datum, such as a variable or a record, in the computer's memory or in some other storage device. The reference is said to refer to the datum, and accessing the datum is called dereferencing the reference. A reference is distinct from the data itself. Typically, for references to data stored in memory on a given system, a reference is implemented as the physical address of where the data is stored in memory or in the storage device. For this reason, a reference is often erroneously confused with a pointer or address, and is said to "point to" the data. However a reference may also be implemented in other ways, such as the offset (difference) between the datum's address and some fixed "base" address, as an index into an array, or more abstractly as a handle. More broadly, in networking, references may be network addresses, such as URLs.
The concept of reference must not be confused with other values (keys or identifiers) that uniquely identify the data item, but give access to it only through a non-trivial lookup operation in some table data structure. References are widely used in programming, especially to efficiently pass large or mutable data as arguments to procedures, or to share such data among various uses. In particular, a reference may point to a variable or record that contains references to other data. This idea is the basis of indirect addressing and of many linked data structures, such as linked lists. References can cause significant complexity in a program, partially due to the possibility of dangling and wild references and partially because the topology of data with references is a directed graph, whose analysis can be quite complicated.
- https://en.wikipedia.org/wiki/Symbol_(programming) - a primitive datatype whose instances have a unique human-readable form. Symbols can be used as identifiers. In some programming languages, they are called atoms. In the most trivial implementation, they are essentially named integers (e.g. the enumerated type in C).
- https://en.wikipedia.org/wiki/Composite_data_type - any data type constructed using primitive data types and other composite types
- https://en.wikipedia.org/wiki/Literal_(computer_programming) - a notation for representing a fixed value in source code. Almost all programming languages have notations for atomic values such as integers, floating-point numbers, and strings, and usually for booleans and characters; some also have notations for elements of enumerated types and compound values such as arrays, records, and objects. An anonymous function is a literal for the function type.
In contrast to literals, variables or constants are symbols that can take on one of a class of fixed values, the constant being constrained not to change. Literals are often used to initialize variables, for example, in the following, 1 is an integer literal and the three letter string in "cat" is a string literal:
int a = 1; String s = "cat";
In lexical analysis, literals of a given type are generally a token type, with a grammar rule, like "a string of digits" for an integer literal. Some literals are specific keywords, like true for the boolean literal "true". In some object-oriented languages (like ECMAScript), objects can also be represented by literals. Methods of this object can be specified in the object literal using function literals.
- https://en.wikipedia.org/wiki/Record_(computer_science) - also called struct or compound data, is a basic data structure. A record is a collection of fields, possibly of different data types, typically in fixed number and sequence. The fields of records may also be called elements, though these risk confusion with elements of a collection, or members, particularly in object-oriented programming. A tuple may or may not be considered a record, and vice versa, depending on conventions and the specific programming language.
- https://en.wikipedia.org/wiki/Dependent_type - a type that depends on a value
- https://en.wikipedia.org/wiki/Product_type - a product of types is another, compounded, type in a structure. associated with logical conjunction (AND) in logic.
- https://en.wikipedia.org/wiki/Tagged_union - sum type. data structure used to hold a value that could take on several different, but fixed types. Only one of the types can be in use at any one time, and a tag field explicitly indicates which one is in use.
- https://en.wikipedia.org/wiki/Recursive_data_type - inductive data type, a data type for values that may contain other values of the same type. Data of recursive types are usually viewed as directed graphs.
- https://en.wikipedia.org/wiki/Object_composition - a way to combine simple objects or data types into more complex ones
- https://en.wikipedia.org/wiki/Type_constructor - feature of a typed formal language that builds new types from old, such as product types, function types, power types and list types
- https://en.wikipedia.org/wiki/Abstract_data_type - a mathematical model for data types where a data type is defined by its behavior (semantics) from the point of view of a user of the data, specifically in terms of possible values, possible operations on data of this type, and the behavior of these operations. This contrasts with data structures, which are concrete representations of data, and are the point of view of an implementer, not a user.
- https://en.wikipedia.org/wiki/Semaphore_(programming) - a variable or abstract data type that is used for controlling access, by multiple processes, to a common resource in a concurrent system such as a multiprogramming operating system. A trivial semaphore is a plain variable that is changed (for example, incremented or decremented, or toggled) depending on programmer-defined conditions. The variable is then used as a condition to control access to some system resource.
- The Little Book of Semaphores - a free (in both senses of the word) textbook that introduces the principles of synchronization for concurrent programming. [92] [93]
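As a concrete footnote to the semaphore entries above, here is a minimal Python sketch using threading.Semaphore to cap concurrent access to a shared resource; the worker function and counts are invented for the example.

```python
# A counting semaphore limiting how many threads use a "resource" at once.
import threading
import time

slots = threading.Semaphore(2)     # at most 2 workers inside at a time
active = 0
lock = threading.Lock()

def worker(name):
    global active
    with slots:                    # acquire on entry, release on exit
        with lock:
            active += 1
            assert active <= 2     # the semaphore enforces this bound
        time.sleep(0.01)           # pretend to use the resource
        with lock:
            active -= 1

threads = [threading.Thread(target=worker, args=(i,)) for i in range(6)]
for t in threads: t.start()
for t in threads: t.join()
print("done")
```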
Process calculus
- https://en.wikipedia.org/wiki/Process_calculus - or process algebras, are a diverse family of related approaches for formally modelling concurrent systems. Process calculi provide a tool for the high-level description of interactions, communications, and synchronizations between a collection of independent agents or processes. They also provide algebraic laws that allow process descriptions to be manipulated and analyzed, and permit formal reasoning about equivalences between processes (e.g., using bisimulation). Leading examples of process calculi include CSP, CCS, ACP, and LOTOS. More recent additions to the family include the π-calculus, the ambient calculus, PEPA, the fusion calculus and the join-calculus.
- https://en.wikipedia.org/wiki/π-calculus - or pi-calculus, is a process calculus. The π-calculus allows channel names to be communicated along the channels themselves, and in this way it is able to describe concurrent computations whose network configuration may change during the computation.
Actor model
- https://en.wikipedia.org/wiki/Actor_model - a mathematical model of concurrent computation that treats the actor as the universal primitive of concurrent computation. In response to a message it receives, an actor can: make local decisions, create more actors, send more messages, and determine how to respond to the next message received. Actors may modify their own private state, but can only affect each other indirectly through messaging (obviating lock-based synchronization).
The actor model originated in 1973. It has been used both as a framework for a theoretical understanding of computation and as the theoretical basis for several practical implementations of concurrent systems. The relationship of the model to other work is discussed in actor model and process calculi.
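A minimal sketch of the actor idea in Python, each actor being a thread with a private mailbox; this is only an illustration of the model, not any particular actor library, and the message names are made up.

```python
# A toy actor: private state, a mailbox, and behaviour driven by messages.
import threading
import queue

class CounterActor:
    def __init__(self):
        self.mailbox = queue.Queue()
        self.count = 0                       # private state
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, message):                 # the only way to interact
        self.mailbox.put(message)

    def _run(self):
        while True:
            msg = self.mailbox.get()
            if msg == "stop":
                break
            if msg == "inc":
                self.count += 1              # local decision, no shared locks
            elif isinstance(msg, tuple) and msg[0] == "get":
                msg[1].put(self.count)       # reply on a channel the sender provided

actor = CounterActor()
for _ in range(5):
    actor.send("inc")
reply = queue.Queue()
actor.send(("get", reply))
print(reply.get())                           # 5: messages are handled in order
actor.send("stop")
```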
to sort
- https://github.com/pervognsen/bitwise - Bitwise is an educational project where we create the software/hardware stack for a computer from scratch.
- https://en.wikipedia.org/wiki/Code_as_data - In computer science, the expressions code as data and data as code refer to the duality between code and data, that allows computers to treat instructions in a programming language as data handled by a running program.
- https://en.wikipedia.org/wiki/Semantics_(computer_science) - rigorous mathematical study of the meaning of programming languages by evaluating the meaning of syntactically legal strings
- https://en.wikipedia.org/wiki/Denotational_semantics - approach of formalizing the meanings of programming languages by constructing mathematical objects (called denotations) that describe the meanings of expressions from the languages
- https://en.wikipedia.org/wiki/Axiomatic_semantics - define the meaning of a command in a program by describing its effect on assertions about the program state
- https://en.wikipedia.org/wiki/Algebraic_semantics_(computer_science) - a form of axiomatic semantics based on algebraic laws for describing and reasoning about program semantics in a formal manner.
- https://en.wikipedia.org/wiki/Symbolic_execution - also symbolic evaluation, is a means of analyzing a program to determine what inputs cause each part of a program to execute. An interpreter follows the program, assuming symbolic values for inputs rather than obtaining actual inputs as normal execution of the program would, a case of abstract interpretation. It thus arrives at expressions in terms of those symbols for expressions and variables in the program, and constraints in terms of those symbols for the possible outcomes of each conditional branch.
- https://www.youtube.com/watch?v=AD4b-52jtos&list=PL2FF649D0C4407B30
- https://www.youtube.com/watch?v=2Op3QLzMgSY&list=PL8FE88AA54363BC46
- https://www.youtube.com/watch?v=EKWGGDXe5MA&feature=youtu.be
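The code-as-data entry above is easy to demonstrate in Python, where a program can be parsed into an AST, rewritten, and executed; the tiny source string is of course made up.

```python
# Code as data: parse source into a tree, transform it, run the result.
import ast

source = "result = 2 + 3"
tree = ast.parse(source)                     # the program, as a data structure
for node in ast.walk(tree):
    if isinstance(node, ast.BinOp) and isinstance(node.op, ast.Add):
        node.op = ast.Mult()                 # rewrite + into *

namespace = {}
exec(compile(tree, "<ast>", "exec"), namespace)
print(namespace["result"])                   # 6
```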
Non-classical logic
all nc?
- https://en.wikipedia.org/wiki/Kripke_semantics - for non-classical logic systems, first modal logics, later adapted to intuitionistic logic and others
Fuzzy logic
- https://en.wikipedia.org/wiki/Fuzzy_logic - rejects the law of the excluded middle and allows as a truth value any real number between 0 and 1
- https://en.wikipedia.org/wiki/T-norm_fuzzy_logics - a family of non-classical logics, informally delimited by having a semantics that takes the real unit interval [0, 1] for the system of truth values and functions called t-norms for permissible interpretations of conjunction. They are mainly used in applied fuzzy logic and fuzzy set theory as a theoretical basis for approximate reasoning. T-norm fuzzy logics belong in broader classes of fuzzy logics and many-valued logics. In order to generate a well-behaved implication, the t-norms are usually required to be left-continuous; logics of left-continuous t-norms further belong in the class of substructural logics, among which they are marked with the validity of the law of prelinearity, (A → B) ∨ (B → A). Both propositional and first-order (or higher-order) t-norm fuzzy logics, as well as their expansions by modal and other operators, are studied. Logics that restrict the t-norm semantics to a subset of the real unit interval (for example, finitely valued Łukasiewicz logics) are usually included in the class as well.
- https://en.wikipedia.org/wiki/%C5%81ukasiewicz_logic - a non-classical, many-valued logic. It was originally defined in the early 20th century by Jan Łukasiewicz as a three-valued modal logic; it was later generalized to n-valued (for all finite n) as well as infinitely-many-valued (ℵ0-valued) variants, both propositional and first order. The ℵ0-valued version was published in 1930 by Łukasiewicz and Alfred Tarski; consequently it is sometimes called the Łukasiewicz–Tarski logic. It belongs to the classes of t-norm fuzzy logics and substructural logics. Łukasiewicz logic was motivated by Aristotle's suggestion that bivalent logic was not applicable to future contingents, e.g. the statement "There will be a sea battle tomorrow". In other words, statements about the future were neither true nor false, but an intermediate value could be assigned to them, to represent their possibility of becoming true in the future.
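The t-norm entry above becomes clearer with the three standard examples written out; a short Python sketch (the function names are just descriptive):

```python
# Three common t-norms on the unit interval [0, 1].
def t_min(a, b):          # Gödel / minimum t-norm
    return min(a, b)

def t_product(a, b):      # product t-norm
    return a * b

def t_lukasiewicz(a, b):  # Łukasiewicz t-norm
    return max(0.0, a + b - 1.0)

a, b = 0.7, 0.6
for t in (t_min, t_product, t_lukasiewicz):
    print(t.__name__, t(a, b))
# min: 0.6, product: 0.42, Łukasiewicz: 0.3 (approximately, in floating point)
```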
to sort
- https://en.wikipedia.org/wiki/Categorial_grammar - Lambek calculus
- https://en.wikipedia.org/wiki/Cut-elimination_theorem
- https://en.wikipedia.org/wiki/Gentzen%27s_consistency_proof
- https://en.wikipedia.org/wiki/Hoare_logic - a formal system with a set of logical rules for reasoning rigorously about the correctness of computer programs. It was proposed in 1969 by the British computer scientist and logician C. A. R. Hoare, and subsequently refined by Hoare and other researchers. The original ideas were seeded by the work of Robert Floyd, who had published a similar system for flowcharts.
- http://plus.maths.org/content/goumldel-and-limits-logic
- What Logic Is Not
- http://merrigrove.blogspot.co.uk/2013/12/what-heck-is-relation-from-tables-to.html
- http://www.linusakesson.net/programming/pipelogic/index.php
- http://plato.stanford.edu/entries/logic-combining/
Relations
- https://en.wikipedia.org/wiki/Category_of_relations - the category Rel has the class of sets as objects and binary relations as morphisms.
- https://en.wikipedia.org/wiki/Composition_of_relations - a concept of forming a new relation R ; S from two given relations R and S. The composition of relations is called relative multiplication in the calculus of relations. The composition is then the relative product of the factor relations. Composition of functions is a special case of composition of relations.
- https://en.wikipedia.org/wiki/Finitary_relation - a finite number of "places". In set theory and logic, a relation is a property that assigns truth values to k-tuples of individuals. Typically, the property describes a possible connection between the components of a k-tuple. For a given set of k-tuples, a truth value is assigned to each k-tuple according to whether the property does or does not hold.
- https://en.wikipedia.org/wiki/Preorder - or quasiorder is a binary relation that is reflexive and transitive. Preorders are more general than equivalence relations and (non-strict) partial orders, both of which are special cases of a preorder. An antisymmetric preorder is a partial order, and a symmetric preorder is an equivalence relation.
- https://en.wikipedia.org/wiki/Binary_relation - used in many branches of mathematics to model concepts like "is greater than", "is equal to", and "divides" in arithmetic, "is congruent to" in geometry, "is adjacent to" in graph theory, "is orthogonal to" in linear algebra and many more. The concept of function is defined as a special kind of binary relation. A binary relation on a set A is a collection of ordered pairs of elements of A. In other words, it is a subset of the Cartesian product A2 = A × A. More generally, a binary relation between two sets A and B is a subset of A × B. The terms correspondence, dyadic relation and 2-place relation are synonyms for binary relation.
- https://en.wikipedia.org/wiki/Allegory_(category_theory) - a category that has some of the structure of the category of sets and binary relations between them. Allegories can be used as an abstraction of categories of relations, and in this sense the theory of allegories is a generalization of relation algebra to relations between different sorts. Allegories are also useful in defining and investigating certain constructions in category theory, such as exact completions.
- https://en.wikipedia.org/wiki/Converse_relation - or transpose, of a binary relation is the relation that occurs when the order of the elements is switched in the relation.
- https://en.wikipedia.org/wiki/Ternary_relation - or triadic relation is a finitary relation in which the number of places in the relation is three. Ternary relations may also be referred to as 3-adic, 3-ary, 3-dimensional, or 3-place.
Just as a binary relation is formally defined as a set of pairs, i.e. a subset of the Cartesian product A × B of some sets A and B, so a ternary relation is a set of triples, forming a subset of the Cartesian product A × B × C of three sets A, B and C.
- https://en.wikipedia.org/wiki/Reflexive_relation - a binary relation R over a set X is reflexive if every element of X is related to itself. Formally, this may be written ∀x ∈ X : x R x. An example of a reflexive relation is the relation "is equal to" on the set of real numbers, since every real number is equal to itself. A reflexive relation is said to have the reflexive property or is said to possess reflexivity. Along with symmetry and transitivity, reflexivity is one of three properties defining equivalence relations.
- https://en.wikipedia.org/wiki/Equivalence_relation - a binary relation that is at the same time a reflexive relation, a symmetric relation and a transitive relation. As a consequence of these properties an equivalence relation provides a partition of a set into equivalence classes.
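Finite binary relations as described above can be represented directly as sets of pairs; here is a small Python sketch of composition and of checking the equivalence-relation properties (the example relation is invented).

```python
# A binary relation on a finite set, as a set of ordered pairs.
X = {1, 2, 3}
R = {(1, 1), (2, 2), (3, 3), (1, 2), (2, 1)}   # "equal, or both in {1, 2}"

def compose(R, S):
    """Relative product R ; S = {(a, c) : there is b with (a,b) in R and (b,c) in S}."""
    return {(a, c) for (a, b) in R for (b2, c) in S if b == b2}

reflexive  = all((x, x) in R for x in X)
symmetric  = all((b, a) in R for (a, b) in R)
transitive = compose(R, R) <= R                # R ; R contained in R

print(reflexive, symmetric, transitive)        # True True True -> equivalence relation
```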
Magma
- https://en.wikipedia.org/wiki/Magma_(algebra) - or groupoid (not to be confused with groupoids in category theory), a basic kind of algebraic structure. Specifically, a magma consists of a set, M, equipped with a single binary operation, M × M → M. The binary operation must be closed by definition but no other properties are imposed.
"Kind of in the same way that any square, any rectangle and any parallelogram fulfills the criteria of a trapezoid, and thus are trapezoids, we say that any group, monoid or semigroup is also a magma. All we demand from the structure in order to call it a magma is that it is closed under the binary operation. And just as any trapezoid in which all angles happens to be right still is a trapezoid even though most people would call it a rectangle, so too will any closed / total algebraic structure with associativity, identity and divisibility be a magma, even though most people would call it a group." [abstract algebra - Can you give me some concrete examples of magmas? - Mathematics Stack Exchange]
- https://en.wikipedia.org/wiki/Category_of_magmas - denoted Mag, is the category whose objects are magmas (that is, sets equipped with a binary operation), and whose morphisms are magma homomorphisms.
Semigroup
Monoid
- https://en.wikipedia.org/wiki/Monoid - an algebraic structure with a single associative binary operation and an identity element.
Monoids are studied in semigroup theory, because they are semigroups with identity. Monoids occur in several branches of mathematics; for instance, they can be regarded as categories with a single object. Thus, they capture the idea of function composition within a set. In fact, all functions from a set into itself form naturally a monoid with respect to function composition. Monoids are also commonly used in computer science, both in its foundational aspects and in practical programming.
The set of strings built from a given set of characters is a free monoid. The transition monoid and syntactic monoid are used in describing finite state machines, whereas trace monoids and history monoids provide a foundation for process calculi and concurrent computing. Some of the more important results in the study of monoids are the Krohn–Rhodes theorem and the star height problem. The history of monoids, as well as a discussion of additional general properties, are found in the article on semigroups.
- Julia Evans: Why are monoidal categories interesting? [96]
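A hedged Python sketch of the monoid idea from the entries above: lists under concatenation with the empty list as identity, plus a quick check of the laws on sample values (the samples are arbitrary).

```python
# A monoid: an associative binary operation with an identity element.
# Example: lists under concatenation, identity = [].
from functools import reduce

def op(a, b):
    return a + b

identity = []

samples = [[1], [2, 3], [], [4, 5, 6]]
for a in samples:
    assert op(identity, a) == a and op(a, identity) == a     # identity laws
for a in samples:
    for b in samples:
        for c in samples:
            assert op(op(a, b), c) == op(a, op(b, c))         # associativity

# Folding over a monoid always has a sensible "empty" result.
print(reduce(op, samples, identity))    # [1, 2, 3, 4, 5, 6]
```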
Group
- https://en.wikipedia.org/wiki/Group_(mathematics) - a set of elements together with an operation that combines any two of its elements to form a third element satisfying four conditions called the group axioms, namely closure, associativity, identity and invertibility. One of the most familiar examples of a group is the set of integers together with the addition operation; the addition of any two integers forms another integer.
The abstract formalization of the group axioms, detached as it is from the concrete nature of any particular group and its operation, allows entities with highly diverse mathematical origins in abstract algebra and beyond to be handled in a flexible way, while retaining their essential structural aspects. The ubiquity of groups in numerous areas within and outside mathematics makes them a central organizing principle of contemporary mathematics.
Groups share a fundamental kinship with the notion of symmetry. For example, a symmetry group encodes symmetry features of a geometrical object: the group consists of the set of transformations that leave the object unchanged and the operation of combining two such transformations by performing one after the other. Lie groups are the symmetry groups used in the Standard Model of particle physics; Point groups are used to help understand symmetry phenomena in molecular chemistry; and Poincaré groups can express the physical symmetry underlying special relativity.
The concept of a group arose from the study of polynomial equations, starting with Évariste Galois in the 1830s. After contributions from other fields such as number theory and geometry, the group notion was generalized and firmly established around 1870. Modern group theory—an active mathematical discipline—studies groups in their own right. To explore groups, mathematicians have devised various notions to break groups into smaller, better-understandable pieces, such as subgroups, quotient groups and simple groups.
In addition to their abstract properties, group theorists also study the different ways in which a group can be expressed concretely (its group representations), both from a theoretical and a computational point of view. A theory has been developed for finite groups, which culminated with the classification of finite simple groups announced in 1983. Since the mid-1980s, geometric group theory, which studies finitely generated groups as geometric objects, has become a particularly active area in group theory.
- https://en.wikipedia.org/wiki/List_of_group_theory_topics - the algebraic structures known as groups. The concept of a group is central to abstract algebra: other well-known algebraic structures, such as rings, fields, and vector spaces, can all be seen as groups endowed with additional operations and axioms. Groups recur throughout mathematics, and the methods of group theory have influenced many parts of algebra. Linear algebraic groups and Lie groups are two branches of group theory that have experienced advances and have become subject areas in their own right. Various physical systems, such as crystals and the hydrogen atom, may be modelled by symmetry groups. Thus group theory and the closely related representation theory have many important applications in physics, chemistry, and materials science. Group theory is also central to public key cryptography.
- YouTube: Visualizing Group Theory 1
- YouTube: MathHistory20: Group theory
- https://en.wikipedia.org/wiki/Group_action - a way of interpreting the elements of the group as "acting" on some space in a way that preserves the structure of that space. Common examples of spaces that groups act on are sets, vector spaces, and topological spaces. Actions of groups on vector spaces are called representations of the group.
- https://en.wikipedia.org/wiki/Group_representation - describe abstract groups in terms of linear transformations of vector spaces; in particular, they can be used to represent group elements as matrices so that the group operation can be represented by matrix multiplication. Representations of groups are important because they allow many group-theoretic problems to be reduced to problems in linear algebra, which is well understood. They are also important in physics because, for example, they describe how the symmetry group of a physical system affects the solutions of equations describing that system.
The term representation of a group is also used in a more general sense to mean any "description" of a group as a group of transformations of some mathematical object. More formally, a "representation" means a homomorphism from the group to the automorphism group of an object. If the object is a vector space we have a linear representation. Some people use realization for the general notion and reserve the term representation for the special case of linear representations.
- https://en.wikipedia.org/wiki/Additive_group - a group of which the group operation is to be thought of as addition in some sense. It is usually abelian, and typically written using the symbol + for its binary operation.
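The group axioms listed above can be verified mechanically for a small example, the integers modulo 5 under addition; a brute-force Python check (the modulus is arbitrary):

```python
# Z_5 under addition mod 5: check closure, associativity, identity, inverses.
n = 5
G = range(n)
op = lambda a, b: (a + b) % n

assert all(op(a, b) in G for a in G for b in G)                       # closure
assert all(op(op(a, b), c) == op(a, op(b, c))
           for a in G for b in G for c in G)                          # associativity
e = 0
assert all(op(e, a) == a and op(a, e) == a for a in G)                # identity
assert all(any(op(a, b) == e for b in G) for a in G)                  # inverses
assert all(op(a, b) == op(b, a) for a in G for b in G)                # abelian, too
print("(Z_5, +) is an abelian group")
```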
Abelian group
- https://en.wikipedia.org/wiki/Abelian_group - also called a commutative group, is a group in which the result of applying the group operation to two group elements does not depend on their order (the axiom of commutativity). Abelian groups generalize the arithmetic of addition of integers.
- https://en.wikipedia.org/wiki/Non-abelian_group - also sometimes called a noncommutative group, is a group (G, * ) in which there are at least two elements a and b of G such that a * b ≠ b * a. The term nonabelian is used to distinguish from the idea of an abelian group, where all of the elements of the group commute.
Nonabelian groups are pervasive in mathematics and physics. One of the simplest examples of a nonabelian group is the dihedral group of order 6. It is the smallest finite nonabelian group. A common example from physics is the rotation group SO(3) in three dimensions (rotating something 90 degrees away from you and then 90 degrees to the left is not the same as doing them the other way round). Both discrete groups and continuous groups may be nonabelian. Most of the interesting Lie groups are nonabelian, and these play an important role in gauge theory.
- https://en.wikipedia.org/wiki/Subgroup - In mathematics, given a group G under a binary operation ∗, a subset H of G is called a subgroup of G if H also forms a group under the operation ∗. More precisely, H is a subgroup of G if the restriction of ∗ to H × H is a group operation on H. This is usually represented notationally by H ≤ G, read as "H is a subgroup of G".
A proper subgroup of a group G is a subgroup H which is a proper subset of G (i.e. H ≠ G). The trivial subgroup of any group is the subgroup {e} consisting of just the identity element. If H is a subgroup of G, then G is sometimes called an overgroup of H.
The same definitions apply more generally when G is an arbitrary semigroup, but this article will only deal with subgroups of groups. The group G is sometimes denoted by the ordered pair (G, ∗), usually to emphasize the operation ∗ when G carries multiple algebraic or other structures.
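To see non-commutativity concretely (the dihedral group of order 6 mentioned above is isomorphic to the permutations of three objects), here is a tiny Python check that composing two permutations in the two possible orders gives different results; the particular permutations are arbitrary.

```python
# S_3, the permutations of {0, 1, 2}: the smallest non-abelian group.
def compose(p, q):
    """(p after q): apply q first, then p. Permutations are tuples of images."""
    return tuple(p[q[i]] for i in range(len(q)))

swap01 = (1, 0, 2)      # transposition swapping 0 and 1
cycle  = (1, 2, 0)      # 3-cycle

print(compose(swap01, cycle))   # (0, 2, 1)
print(compose(cycle, swap01))   # (2, 1, 0)
print(compose(swap01, cycle) == compose(cycle, swap01))   # False: order matters
```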
Simple groups
Group of Lie type
- https://en.wikipedia.org/wiki/Group_of_Lie_type - a group closely related to the group G(k) of rational points of a reductive linear algebraic group G with values in the field k. Finite groups of Lie type give the bulk of nonabelian finite simple groups. Special cases include the classical groups, the Chevalley groups, the Steinberg groups, and the Suzuki–Ree groups.
- https://en.wikipedia.org/wiki/Classical_group - defined as the special linear groups over the reals R, the complex numbers C and the quaternions H together with special automorphism groups of symmetric or skew-symmetric bilinear forms and Hermitian or skew-Hermitian sesquilinear forms defined on real, complex and quaternionic finite-dimensional vector spaces.
Of these, the complex classical Lie groups are four infinite families of Lie groups that together with the exceptional groups exhaust the classification of simple Lie groups. The compact classical groups are compact real forms of the complex classical groups. The finite analogues of the classical groups are the classical groups of Lie type. The term "classical group" was coined by Hermann Weyl, it being the title of his 1939 monograph The Classical Groups.
The classical groups form the deepest and most useful part of the subject of linear Lie groups. Most types of classical groups find application in classical and modern physics. A few examples are the following. The rotation group SO(3) is a symmetry of Euclidean space and all fundamental laws of physics, the Lorentz group O(3,1) is a symmetry group of spacetime of special relativity. The special unitary group SU(3) is the symmetry group of quantum chromodynamics and the symplectic group Sp(m) finds application in hamiltonian mechanics and quantum mechanical versions of it.
- https://en.wikipedia.org/wiki/Building_(mathematics) - also called a Tits building (named after Jacques Tits), a combinatorial and geometric structure which simultaneously generalizes certain aspects of flag manifolds, finite projective planes, and Riemannian symmetric spaces. They were initially introduced by Jacques Tits as a means to understand the structure of exceptional groups of Lie type. The more specialized theory of Bruhat–Tits buildings (named in addition after François Bruhat) plays a role in the study of p-adic Lie groups analogous to that of the theory of symmetric spaces in the theory of Lie groups.
Sporadic group
to sort
- https://en.wikipedia.org/wiki/Orthogonal_group - dimension n, denoted O(n), is the group of distance-preserving transformations of a Euclidean space of dimension n that preserve a fixed point, where the group operation is given by composing transformations. Equivalently, it is the group of n×n orthogonal matrices, where the group operation is given by matrix multiplication, and an orthogonal matrix is a real matrix whose inverse equals its transpose.
- https://en.wikipedia.org/wiki/Euclidean_group - the group of (Euclidean) isometries of a Euclidean space 𝔼n; that is, the transformations of that space that preserve the Euclidean distance between any two points (also called Euclidean transformations). The group depends only on the dimension n of the space, and is commonly denoted E(n) or ISO(n). The Euclidean group E(n) comprises all translations, rotations, and reflections of 𝔼n, and arbitrary finite combinations of them. The Euclidean group can be seen as the symmetry group of the space itself, and contains the group of symmetries of any figure (subset) of that space. A Euclidean isometry can be direct or indirect, depending on whether it preserves the handedness of figures. The direct Euclidean isometries form a subgroup, the special Euclidean group, whose elements are called rigid motions or Euclidean motions. They comprise arbitrary combinations of translations and rotations, but not reflections. These groups are among the oldest and most studied, at least in the cases of dimension 2 and 3 – implicitly, long before the concept of group was invented.
- https://en.wikipedia.org/wiki/P-group - a group in which the order of every element is a power of p. That is, for each element g of a p-group G, there exists a nonnegative integer n such that the product of pn copies of g, and not fewer, is equal to the identity element. The orders of different elements may be different powers of p.
- https://en.wikipedia.org/wiki/Monster_group - also known as the Fischer–Griess monster, or the friendly giant, is the largest sporadic simple group, having order approximately 8×10⁵³.
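A minimal illustration of the p-group definition above (my own example, not from the linked article): in the additive group ℤ/8ℤ every element has order a power of 2, so it is a 2-group.

```python
# Element orders in the additive group Z/8Z: every order is a power of 2,
# so Z/8Z is a p-group for p = 2.  (Illustrative sketch only.)
def order_in_Zn(g, n):
    """Smallest k >= 1 with k*g congruent to 0 (mod n)."""
    k, acc = 1, g % n
    while acc != 0:
        acc = (acc + g) % n
        k += 1
    return k

print({g: order_in_Zn(g, 8) for g in range(8)})
# {0: 1, 1: 8, 2: 4, 3: 8, 4: 2, 5: 8, 6: 4, 7: 8} -- all powers of 2
```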
Rings
- https://en.wikipedia.org/wiki/Ring_(mathematics) - one of the fundamental algebraic structures used in abstract algebra. It consists of a set equipped with two binary operations that generalize the arithmetic operations of addition and multiplication. Through this generalization, theorems from arithmetic are extended to non-numerical objects such as polynomials, series, matrices and functions.
- https://en.wikipedia.org/wiki/Ring_homomorphism - a structure-preserving function between two rings
- https://en.wikipedia.org/wiki/Commutative_ring - a ring in which the multiplication operation is commutative. The study of commutative rings is called commutative algebra.
Some specific kinds of commutative rings are given with the following chain of class inclusions: Commutative rings ⊃ integral domains ⊃ integrally closed domains ⊃ unique factorization domains ⊃ principal ideal domains ⊃ Euclidean domains ⊃ fields
- https://en.wikipedia.org/wiki/Ideal_(ring_theory) - a special subset of a ring. Ideals generalize certain subsets of the integers, such as the even numbers or the multiples of 3. Addition and subtraction of even numbers preserves evenness, and multiplying an even number by any other integer results in another even number; these closure and absorption properties are the defining properties of an ideal. An ideal can be used to construct a quotient ring similarly to the way that, in group theory, a normal subgroup can be used to construct a quotient group. (A small sketch of these closure and absorption properties appears at the end of this list.)
- https://en.wikipedia.org/wiki/Principal_ideal - an ideal I in a ring R that is generated by a single element a of R through multiplication by every element of R. The term also has another, similar meaning in order theory, where it refers to an (order) ideal in a poset P generated by a single element x of P, which is to say the set of all elements less than or equal to x in P.
- https://en.wikipedia.org/wiki/Quotient_ring - also known as factor ring, difference ring or residue class ring, is a construction quite similar to the quotient groups of group theory and the quotient spaces of linear algebra. One starts with a ring R and a two-sided ideal I in R, and constructs a new ring, the quotient ring R/I, whose elements are the cosets of I in R subject to special + and ⋅ operations. Quotient rings are distinct from the so-called 'quotient field', or field of fractions, of an integral domain as well as from the more general 'rings of quotients' obtained by localization.
- https://en.wikipedia.org/wiki/Unique_factorization_domain - unique factorization domain (UFD) is a commutative ring in which every non-zero non-unit element can be written as a product of prime elements (or irreducible elements), uniquely up to order and units, analogous to the fundamental theorem of arithmetic for the integers. UFDs are sometimes called factorial rings, following the terminology of Bourbaki.
Unique factorization domains appear in the following chain of class inclusions:
Commutative rings ⊃ integral domains ⊃ integrally closed domains ⊃ unique factorization domains ⊃ principal ideal domains ⊃ Euclidean domains ⊃ fields
- rings with multiplicative identity: unital ring, unitary ring, ring with unity, ring with identity, or ring with 1
- rings not requiring multiplicative identity: rng or pseudo-ring.
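A minimal sketch of the ideal properties mentioned above (my own illustrative example, checked only over a finite sample range): the multiples of 3 in ℤ are closed under addition and absorb multiplication by arbitrary integers, and the corresponding quotient ring ℤ/(3) has three cosets.

```python
# The multiples of 3 form an ideal (3) in the ring of integers Z:
# closed under addition, and absorbing under multiplication by any integer.
I = {3 * k for k in range(-20, 21)}     # finite sample of the ideal (3)
R = range(-10, 11)                      # finite sample of ring elements

print(all((a + b) % 3 == 0 for a in I for b in I))   # True: closure under addition
print(all((r * a) % 3 == 0 for r in R for a in I))   # True: absorption

# The quotient ring Z/(3) has exactly three cosets, represented by 0, 1, 2.
print({n % 3 for n in range(-9, 10)})                # {0, 1, 2}
```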
Group ring
- https://en.wikipedia.org/wiki/Group_ring - a free module and at the same time a ring, constructed in a natural way from any given ring and any given group. As a free module, its ring of scalars is the given ring, and its basis is one-to-one with the given group. As a ring, its addition law is that of the free module and its multiplication extends "by linearity" the given group law on the basis. Less formally, a group ring is a generalization of a given group, by attaching to each element of the group a "weighting factor" from a given ring. A group ring is also referred to as a group algebra, for it is indeed an algebra over the given ring. A group algebra over a field has a further structure of a Hopf algebra; in this case, it is thus called a group Hopf algebra. The apparatus of group rings is especially useful in the theory of group representations.
Monoid ring
- https://en.wikipedia.org/wiki/Monoid_ring - a ring constructed from a ring and a monoid, just as a group ring is constructed from a ring and a group.
Rig
- https://ncatlab.org/nlab/show/rig - a ring ‘without negatives’ (hence the missing ‘n’ in the name, get it?). Rigs are commonly also called semirings, but by analogy with semigroup it would be more appropriate to use that word for a ring having neither negatives nor even zero, so that is what we will do here.
Rng
- https://en.wikipedia.org/wiki/Rng_(algebra) - also called a pseudo-ring or non-unital ring, is an algebraic structure satisfying the same properties as a ring, without assuming the existence of a multiplicative identity. The term "rng" (pronounced rung) is meant to suggest that it is a "ring" without "i", i.e. without the requirement for an "identity element". There is no consensus in the community as to whether the existence of a multiplicative identity must be one of the ring axioms (see the history section of the article on rings). The term "rng" was coined to alleviate this ambiguity when people want to refer explicitly to a ring without the axiom of multiplicative identity.
Integers
- https://en.wikipedia.org/wiki/Integer - commonly known as a "whole number", is a number that can be written without a fractional component. For example, 21, 4, and −2048 are integers, while 9.75, 5½, and √2 are not. The set of integers is denoted ℤ; under addition it forms the infinite cyclic group, usually written Z in group theory.
- http://paulhankin.github.io/Fibonacci/ An integer formula for Fibonacci numbers [102]
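The linked post gives a closed-form Fibonacci formula using only integer arithmetic, by reading base-2ⁿ digits of the generating function x/(1 − x − x²) evaluated at x = 2⁻ⁿ. A minimal sketch of the same idea (the choice of base, n = k + 2, is my own safety margin and not necessarily the constant used in the post):

```python
# Fibonacci from a single big-integer division: consecutive Fibonacci numbers
# appear as base-2^n "digits" of 2^(n(k+1)) // (2^(2n) - 2^n - 1).
def fib(k):
    n = k + 2                              # base 2^n large enough to hold F(k)
    return (2 ** (n * (k + 1)) // (2 ** (2 * n) - 2 ** n - 1)) % (2 ** n)

print([fib(k) for k in range(10)])         # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```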
Real numbers
- https://en.wikipedia.org/wiki/Real_number - includes all the rational numbers, such as the integer −5 and the fraction 4/3, and all the irrational numbers such as √2 (1.41421356… the square root of two, an irrational algebraic number) and π (3.14159265…, a transcendental number).
Rational numbers
Boolean
Symmetric functions
- https://en.wikipedia.org/wiki/Ring_of_symmetric_functions - a specific limit of the rings of symmetric polynomials in n indeterminates, as n goes to infinity. This ring serves as universal structure in which relations between symmetric polynomials can be expressed in a way independent of the number n of indeterminates (but its elements are neither polynomials nor functions). Among other things, this ring plays an important role in the representation theory of the symmetric group. The ring of symmetric functions can be given a coproduct and a bilinear form making it into a positive selfadjoint graded Hopf algebra that is both commutative and cocommutative.
Fields
- https://en.wikipedia.org/wiki/Field_(mathematics) - a set on which are defined addition, subtraction, multiplication, and division, which behave as they do when applied to rational and real numbers. A field is thus a fundamental algebraic structure, which is widely used in algebra, number theory and many other areas of mathematics.
The best known fields are the field of rational numbers and the field of real numbers. The field of complex numbers is also widely used, not only in mathematics, but also in many areas of science and engineering. Many other fields, such as fields of rational functions, algebraic function fields, algebraic number fields, and p-adic fields are commonly used and studied in mathematics, particularly in number theory and algebraic geometry. Finite fields are used in most cryptographic protocols used for computer security.
Any field may be used as the scalars for a vector space, which is the standard general context for linear algebra. The theory of field extensions (including Galois theory) involves the roots of polynomials with coefficients in a field; among other results, this theory leads to impossibility proofs for the classical problems of angle trisection and squaring the circle with a compass and straightedge, as well as a proof of the Abel–Ruffini theorem on the algebraic insolubility of quintic equations.
- https://en.wikipedia.org/wiki/Algebra_over_a_field - often simply called an algebra, is a vector space equipped with a bilinear product. Thus, an algebra is an algebraic structure, which consists of a set, together with operations of multiplication, addition, and scalar multiplication by elements of the underlying field, and satisfies the axioms implied by "vector space" and "bilinear".
- https://en.wikipedia.org/wiki/Perfect_field - a field k is said to be perfect if any one of the following equivalent conditions holds:
- Every irreducible polynomial over k has distinct roots.
- Every irreducible polynomial over k is separable.
- Every finite extension of k is separable.
- Every algebraic extension of k is separable.
- Either k has characteristic 0, or, when k has characteristic p > 0, every element of k is a pth power.
- Either k has characteristic 0, or, when k has characteristic p > 0, the Frobenius endomorphism x→xp is an automorphism of k
- The separable closure of k is algebraically closed.
- Every reduced commutative k-algebra A is a separable algebra; i.e., A ⊗ₖ F is reduced for every field extension F/k.
Otherwise, k is called imperfect.
In particular, all fields of characteristic zero and all finite fields are perfect.
- https://en.wikipedia.org/wiki/Finite_field - or Galois field (so-named in honor of Évariste Galois) is a field that contains a finite number of elements. As with any field, a finite field is a set on which the operations of multiplication, addition, subtraction and division are defined and satisfy certain basic rules. The most common examples of finite fields are given by the integers mod p when p is a prime number.
- https://en.wikipedia.org/wiki/GF(2) - also F2, Z/2Z or Z2, is the Galois field of two elements. It is the smallest finite field. The two elements are nearly always called 0 and 1, being the additive and multiplicative identities, respectively. (Its arithmetic reduces to XOR and AND; see the sketch after this list.)
- https://en.wikipedia.org/wiki/Galois_theory - named after Évariste Galois, provides a connection between field theory and group theory. Using Galois theory, certain problems in field theory can be reduced to group theory, which is, in some sense, simpler and better understood. Originally, Galois used permutation groups to describe how the various roots of a given polynomial equation are related to each other. The modern approach to Galois theory, developed by Richard Dedekind, Leopold Kronecker and Emil Artin, among others, involves studying automorphisms of field extensions. Further abstraction of Galois theory is achieved by the theory of Galois connections.
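A minimal sketch of arithmetic in GF(2), mentioned above (my own illustration): with elements {0, 1}, addition is bitwise XOR and multiplication is bitwise AND, and the only nonzero element, 1, is its own multiplicative inverse.

```python
# GF(2): addition is XOR, multiplication is AND on the two elements {0, 1}.
add = lambda a, b: a ^ b
mul = lambda a, b: a & b

elems = (0, 1)
print([[add(a, b) for b in elems] for a in elems])   # [[0, 1], [1, 0]]  (so 1 + 1 = 0)
print([[mul(a, b) for b in elems] for a in elems])   # [[0, 0], [0, 1]]

assert add(1, 1) == 0    # every element is its own additive inverse
assert mul(1, 1) == 1    # 1 is its own multiplicative inverse
```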
Natural numbers - ℕ
- https://en.wikipedia.org/wiki/Natural_number - the counting numbers: those used for counting (as in "there are six coins on the table") and ordering (as in "this is the third largest city in the country"). Depending on the convention, they either start at 1 or also include 0. In common language, words used for counting are "cardinal numbers" and words used for ordering are "ordinal numbers".
Integers - ℤ
- https://en.wikipedia.org/wiki/Integer - a number that can be written without a fractional component. For example, 21, 4, 0, and −2048 are integers, while 9.75, 5 1⁄2, and √2 are not. The set of integers consists of zero (0), the positive natural numbers (1, 2, 3, …), also called whole numbers or counting numbers, and their additive inverses (the negative integers, i.e., −1, −2, −3, …).
Fractions
Rational numbers - ℚ
- https://core.ac.uk/download/pdf/82481132.pdf - surreal like
Irrational numbers
Pi
- https://en.wikipedia.org/wiki/Pi - a mathematical constant, the ratio of a circle's circumference to its diameter, commonly approximated as 3.14159. It has been represented by the Greek letter "π" since the mid-18th century, though it is also sometimes spelled out as "pi" (/paɪ/).
Being an irrational number, π cannot be expressed exactly as a fraction (equivalently, its decimal representation never ends and never settles into a permanent repeating pattern). Still, fractions such as 22/7 and other rational numbers are commonly used to approximate π. The digits appear to be randomly distributed. In particular, the digit sequence of π is conjectured to satisfy a specific kind of statistical randomness, but to date no proof of this has been discovered. Also, π is a transcendental number, i.e., a number that is not the root of any non-zero polynomial having rational coefficients. This transcendence of π implies that it is impossible to solve the ancient challenge of squaring the circle with a compass and straightedge.
Ancient civilizations required fairly accurate computed values for π for practical reasons. It was calculated to seven digits, using geometrical techniques, in Chinese mathematics, and to about five digits in Indian mathematics in the 5th century AD. The historically first exact formula for π, based on infinite series, was not available until a millennium later, when in the 14th century the Madhava–Leibniz series was discovered in Indian mathematics. In the 20th and 21st centuries, mathematicians and computer scientists discovered new approaches that, when combined with increasing computational power, extended the decimal representation of π to many trillions of digits after the decimal point. Practically all scientific applications require no more than a few hundred digits of π, and many substantially fewer, so the primary motivation for these computations is the quest to find more efficient algorithms for calculating lengthy numeric series, as well as the human desire to break records. The extensive calculations involved have also been used to test supercomputers and high-precision multiplication algorithms.
Because its definition relates to the circle, π is found in many formulae in trigonometry and geometry, especially those concerning circles, ellipses, and spheres. Because of its special role as an eigenvalue, π appears in areas of mathematics and the sciences having little to do with the geometry of circles, such as number theory and statistics. It is also found in cosmology, thermodynamics, mechanics, and electromagnetism. The ubiquity of π makes it one of the most widely known mathematical constants both inside and outside the scientific community; several books devoted to it have been published, the number is celebrated on Pi Day, and record-setting calculations of the digits of π often result in news headlines. Attempts to memorize the value of π with increasing precision have led to records of over 70,000 digits.
Tau
- https://en.wikipedia.org/wiki/Turn_(angle)#Tau_proposals - In 2010, Michael Hartl proposed to use the Greek letter tau to represent the circle constant: τ = 2π. He offered two reasons. First, τ is the number of radians in one turn, which allows fractions of a turn to be expressed more directly: for instance, a 3/4 turn would be represented as 3τ/4 rad instead of 3π/2 rad. Second, τ visually resembles π, whose association with the circle constant is unavoidable. Hartl's Tau Manifesto gives many examples of formulas that are asserted to be clearer where τ is used instead of π, such as a tighter association with the geometry of Euler's identity using e^(iτ) = 1 instead of e^(iπ) = −1. Initially, neither of these proposals received widespread acceptance by the mathematical and scientific communities. However, the use of τ has become more widespread.
- The Tau Manifesto - by Michael Hartl, dedicated to one of the most important numbers in mathematics, perhaps the most important: the circle constant relating the circumference of a circle to its linear dimension. For millennia, the circle has been considered the most perfect of shapes, and the circle constant captures the geometry of the circle in a single number. Of course, the traditional choice for the circle constant is π (pi); but, as mathematician Bob Palais notes in his delightful article “π Is Wrong!”, π is wrong. It’s time to set things right.
- Tau Day - a celebration of the circle constant. Founded in 2010 with the publication of The Tau Manifesto by Michael Hartl, Tau Day takes place annually on June 28 (6/28 in the American calendar system).
Real numbers - ℝ
The real numbers are often described as "the complete ordered field", a phrase that can be interpreted in several ways.
- arXiv math/0411418: How real are real numbers? - We discuss mathematical and physical arguments against continuity and in favor of discreteness, with particular emphasis on the ideas of Emile Borel (1871-1956).
Complex numbers
- YouTube: Complex Numbers in Quantum Mechanics
- https://news.ycombinator.com/item?id=13966148
- https://en.wikipedia.org/wiki/Imaginary_number - i, a complex number that can be written as a real number multiplied by the imaginary unit i, which is defined by its property i² = −1. The square of an imaginary number bi is −b². For example, 5i is an imaginary number, and its square is −25. Zero is considered to be both real and imaginary.
Algebraic numbers
- https://en.wikipedia.org/wiki/Algebraic_number
- https://en.wikipedia.org/wiki/Algebraic_number_theory
- https://en.wikipedia.org/wiki/Algebraic_integer
Transcendental numbers
Constructable
Hyperreal
Superreal
Surreal
- YouTube: John Conway: Surreal Numbers - How playing games led to more numbers than anybody ever thought of
Arithmetic
- https://en.wikipedia.org/wiki/Radix
- http://threesixty360.wordpress.com/2009/06/09/ethiopian-multiplication/ [108]
- https://en.wikipedia.org/wiki/Modular_arithmetic - a system of arithmetic for integers, where numbers "wrap around" upon reaching a certain value—the modulus (plural moduli). The modern approach to modular arithmetic was developed by Carl Friedrich Gauss in his book Disquisitiones Arithmeticae, published in 1801. Modular arithmetic is referenced in number theory, group theory, ring theory, knot theory, abstract algebra, computer algebra, cryptography, computer science, chemistry and the visual and musical arts. It is one of the foundations of number theory, touching on almost every aspect of its study, and provides key examples for group theory, ring theory and abstract algebra.
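A minimal sketch of the wrap-around behaviour described above (my own example, using the clock as the classic modulus-12 case):

```python
# Modular arithmetic: numbers wrap around at the modulus m.
m = 12                          # clock arithmetic
print((9 + 5) % m)              # 2: five hours after 9 o'clock is 2 o'clock

# Congruence mod m is compatible with addition and multiplication.
a, b = 38, 57
assert (a + b) % m == ((a % m) + (b % m)) % m
assert (a * b) % m == ((a % m) * (b % m)) % m
```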
Recursive
Peano arithmetic
Number theory
- YouTube: Elliptic curves - This lecture will explore the history of counting the number of points on elliptic curves, from ancient Greece to the present day. It will also discuss the law of quadratic reciprocity, a highlight of eighteenth century mathematics; recent controversies about cryptographic algorithms; and a unifying principle in mathematics, the Langlands program. Inaugural lecture of Professor Toby Gee, Imperial College London.
- The five fundamental operations of mathematics: addition, subtraction, multiplication, division, and modular forms - trinity.pdf
- YouTube: Modular Forms Visualised
- https://en.wikipedia.org/wiki/Quaternion - the quaternions are a number system that extends the complex numbers. They were first described by Irish mathematician William Rowan Hamilton in 1843 and applied to mechanics in three-dimensional space. A feature of quaternions is that multiplication of two quaternions is noncommutative. Hamilton defined a quaternion as the quotient of two directed lines in a three-dimensional space or equivalently as the quotient of two vectors. Quaternions find uses in both theoretical and applied mathematics, in particular for calculations involving three-dimensional rotations such as in three-dimensional computer graphics and computer vision. In practical applications, they can be used alongside other methods, such as Euler angles and rotation matrices, or as an alternative to them depending on the application. (A small multiplication and rotation sketch appears at the end of this list.)
- Let’s remove Quaternions from every 3D Engine (An Interactive Introduction to Rotors from Geometric Algebra) - [110] [111] [112]
- https://en.wikipedia.org/wiki/Octonion - a normed division algebra over the real numbers, usually represented by the capital letter O, using boldface O or blackboard bold O. There are only four such algebras, the other three being the real numbers R, the complex numbers C, and the quaternions H. The octonions are the largest such algebra, with eight dimensions; twice the number of dimensions of the quaternions, of which they are an extension. They are noncommutative and nonassociative, but satisfy a weaker form of associativity, namely they are alternative. Octonions are not as well known as the quaternions and complex numbers, which are much more widely studied and used. Despite this, they have some interesting properties and are related to a number of exceptional structures in mathematics, among them the exceptional Lie groups. Additionally, octonions have applications in fields such as string theory, special relativity, and quantum logic.
- Quanta Magazine: The Octonion Math That Could Underpin Physics [113]
- https://github.com/voidqk/nvqm - Numeric, Vector, Quaternion, and Matrix functions, consistent across multiple programming languages.
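A minimal sketch of the quaternion entry above (generic illustration code, not taken from the nvqm library): the noncommutative Hamilton product, and rotation of a 3D vector v by a unit quaternion q via q·(0, v)·q̄.

```python
import math

# Hamilton product of quaternions (w, x, y, z); note that it is noncommutative.
def qmul(a, b):
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def qconj(q):
    w, x, y, z = q
    return (w, -x, -y, -z)

def rotate(v, q):
    """Rotate the 3D vector v by the unit quaternion q."""
    return qmul(qmul(q, (0.0, *v)), qconj(q))[1:]

# A 90-degree rotation about the z axis sends (1, 0, 0) to (0, 1, 0).
theta = math.pi / 2
q = (math.cos(theta / 2), 0.0, 0.0, math.sin(theta / 2))
print([round(c, 6) for c in rotate((1.0, 0.0, 0.0), q)])   # [0.0, 1.0, 0.0]
print(qmul(q, (0.0, 1.0, 0.0, 0.0)) == qmul((0.0, 1.0, 0.0, 0.0), q))  # False: noncommutative
```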
Prime
Geometry
Geometric algebra
- Geometric Algebra - Alan Macdonald. This series of 6 videos is an introduction to geometric algebra for those who know some linear algebra. It is based on my textbook "Linear and Geometric Algebra". Webpage: http://faculty.luther.edu/~macdonal/laga. The book also provides an introduction to linear algebra.
- Geometric Calculus - Alan Macdonald. This series of 5 videos will provide an introduction to geometric calculus for those who know some vector calculus. It is based on my textbook "Vector and Geometric Calculus". The book also provides an introduction to vector calculus. Its webpage is at http://faculty.luther.edu/~macdonal/vagc
- YouTube: Siggraph2019 Geometric Algebra
- https://en.wikipedia.org/wiki/Bivector - or 2-vector is a quantity in exterior algebra or geometric algebra that extends the idea of scalars and vectors. If a scalar is considered an order zero quantity, and a vector is an order one quantity, then a bivector can be thought of as being of order two. Bivectors have applications in many areas of mathematics and physics. They are related to complex numbers in two dimensions and to both pseudovectors and quaternions in three dimensions. They can be used to generate rotations in any number of dimensions, and are a useful tool for classifying such rotations. They are also used in physics, tying together a number of otherwise unrelated quantities.
- https://en.wikipedia.org/wiki/Four-vector - an object with four components, which transform in a specific way under Lorentz transformation. Specifically, a four-vector is an element of a four-dimensional vector space considered as a representation space of the standard representation of the Lorentz group, the (½,½) representation. It differs from a Euclidean vector in how its magnitude is determined. The transformations that preserve this magnitude are the Lorentz transformations, which include spatial rotations and boosts (a change by a constant velocity to another inertial reference frame).
Exterior algebra
- https://en.wikipedia.org/wiki/Exterior_algebra - In mathematics, the exterior product or wedge product of vectors is an algebraic construction used in geometry to study areas, volumes, and their higher-dimensional analogues. The exterior product of two vectors u and v, denoted by u ∧ v, is called a bivector and lives in a space called the exterior square, a vector space that is distinct from the original space of vectors. The magnitude of u ∧ v can be interpreted as the area of the parallelogram with sides u and v, which in three dimensions can also be computed using the cross product of the two vectors. Like the cross product, the exterior product is anticommutative, meaning that u ∧ v = −(v ∧ u) for all vectors u and v, but, unlike the cross product, the exterior product is associative. One way to visualize a bivector is as a family of parallelograms all lying in the same plane, having the same area, and with the same orientation—a choice of clockwise or counterclockwise.
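A minimal numeric illustration of the 3D case described above (my own example): the three independent components of u ∧ v are uᵢvⱼ − uⱼvᵢ, the result is antisymmetric, and the magnitude of the bivector equals the area of the parallelogram spanned by u and v (the same number the cross product gives in three dimensions).

```python
import math

def wedge3(u, v):
    """Components of the bivector u ∧ v in 3D, on the basis e1∧e2, e1∧e3, e2∧e3."""
    return (u[0]*v[1] - u[1]*v[0],
            u[0]*v[2] - u[2]*v[0],
            u[1]*v[2] - u[2]*v[1])

u, v = (1.0, 2.0, 0.0), (0.0, 1.0, 3.0)
b = wedge3(u, v)
print(b)                                    # (1.0, 3.0, 6.0)
print(wedge3(v, u))                         # (-1.0, -3.0, -6.0): u ∧ v = -(v ∧ u)
print(math.sqrt(sum(c * c for c in b)))     # parallelogram area, equal to |u × v|
```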
Linear algebra
- https://en.wikipedia.org/wiki/Linear_equation - an algebraic equation in which each term is either a constant or the product of a constant and (the first power of) a single variable (however, different variables may occur in different terms). A simple example of a linear equation with only one variable, x, may be written in the form: ax + b = 0, where a and b are constants and a ≠ 0. The constants may be numbers, parameters, or even non-linear functions of parameters, and the distinction between variables and parameters may depend on the problem (for an example, see linear regression). (A small worked example of solving a linear system appears at the end of this list.)
- YouTube: The Vector Algebra War - There are a wide variety of different vector formalisms currently utilized in engineering and physics. For example, Gibbs’ three-vectors, Minkowski four-vectors, complex spinors in quantum mechanics, quaternions used to describe rigid body rotations and vectors defined in Clifford geometric algebra. With such a range of vector formalisms in use, it thus appears that there is as yet no general agreement on a vector formalism suitable for science as a whole. This is surprising, in that, one of the primary goals of nineteenth century science was to suitably describe vectors in three-dimensional space. This situation has also had the unfortunate consequence of fragmenting knowledge across many disciplines, and requiring a significant amount of time and effort in learning the various formalisms. We thus historically review the development of our various vector systems and conclude that Clifford’s multivectors best fulfills the goal of describing vectorial quantities in three dimensions and providing a unified vector system for science.
- YouTube: Essence of linear algebra - 3Blue1Brown [117]
- Graphical Linear Algebra - blog series
- An Intuitive Guide to Linear Algebra [118]
- A Geometric Review of Linear Algebra [119]
- No bullshit guide to linear algebra [120]
- Immersive Math - The world's first linear algebra book with fully interactive figures.[121]
- https://en.wikipedia.org/wiki/Penrose_graphical_notation - or tensor diagram notation, is a (usually handwritten) visual depiction of multilinear functions or tensors proposed by Roger Penrose in 1971. A diagram in the notation consists of several shapes linked together by lines. The notation has been studied extensively by Predrag Cvitanović, who used it to classify the classical Lie groups. It has also been generalized using representation theory to spin networks in physics, and with the presence of matrix groups to trace diagrams in linear algebra.
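A small worked example for the linear-equation entry above, assuming NumPy is available (generic illustration code, not tied to any of the linked resources): one linear equation in one unknown, then a 2×2 linear system.

```python
import numpy as np

# A single linear equation a*x + b = 0 with a != 0 has the unique solution x = -b/a.
a, b = 3.0, -6.0
print(-b / a)                    # 2.0

# A system of linear equations written in matrix form as A x = c:
#   2x + 1y = 5
#   1x + 3y = 10
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
c = np.array([5.0, 10.0])
x = np.linalg.solve(A, c)
print(x)                         # [1. 3.]
print(np.allclose(A @ x, c))     # True: the solution satisfies both equations
```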
Matrices
- https://en.wikipedia.org/wiki/Logical_matrix - binary matrix, relation matrix, Boolean matrix, or (0,1) matrix is a matrix with entries from the Boolean domain B = {0, 1}. Such a matrix can be used to represent a binary relation between a pair of finite sets.
- https://en.wikipedia.org/wiki/Diagonal_matrix - a matrix in which the entries outside the main diagonal are all zero. The term usually refers to square matrices.
- https://en.wikipedia.org/wiki/Identity_matrix - or sometimes ambiguously called a unit matrix, of size n is the n × n square matrix with ones on the main diagonal and zeros elsewhere. It is denoted by In, or simply by I if the size is immaterial or can be trivially determined by the context. (In some fields, such as quantum mechanics, the identity matrix is denoted by a boldface one, 1; otherwise it is identical to I.) Less frequently, some mathematics books use U or E to represent the identity matrix, meaning "unit matrix" and the German word "Einheitsmatrix", respectively.
- https://en.wikipedia.org/wiki/Spectral_theorem - a result about when a linear operator or matrix can be diagonalized (that is, represented as a diagonal matrix in some basis). This is extremely useful because computations involving a diagonalizable matrix can often be reduced to much simpler computations involving the corresponding diagonal matrix. The concept of diagonalization is relatively straightforward for operators on finite-dimensional vector spaces but requires some modification for operators on infinite-dimensional spaces. In general, the spectral theorem identifies a class of linear operators that can be modeled by multiplication operators, which are as simple as one can hope to find. In more abstract language, the spectral theorem is a statement about commutative C*-algebras. See also spectral theory for a historical perspective.
- https://en.wikipedia.org/wiki/Hermitian_matrix - or self-adjoint matrix, is a complex square matrix that is equal to its own conjugate transpose—that is, the element in the i-th row and j-th column is equal to the complex conjugate of the element in the j-th row and i-th column. (A small spectral-theorem sketch appears at the end of this list.)
- https://en.wikipedia.org/wiki/Eigenvalues_and_eigenvectors
- https://en.wikipedia.org/wiki/Eigendecomposition_of_a_matrix
- arXiv 2307.01415: Matrix Multiplication Using Only Addition - Matrix multiplication consumes a large fraction of the time taken in many machine-learning algorithms. Thus, accelerator chips that perform matrix multiplication faster than conventional processors or even GPUs are of increasing interest. In this paper, we demonstrate a method of performing matrix multiplication without a scalar multiplier circuit. In many cases of practical interest, only a single addition and a single on-chip copy operation are needed to replace a multiplication. It thus becomes possible to design a matrix-multiplier chip that, because it does not need time-, space- and energy-consuming multiplier circuits, can hold many more processors, and thus provide a net speedup. [126]
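A minimal sketch tying together the Hermitian-matrix, eigendecomposition, and spectral-theorem entries above, assuming NumPy is available: a Hermitian matrix has real eigenvalues and is diagonalized by a unitary matrix of eigenvectors.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
H = A + A.conj().T                                     # Hermitian by construction

w, V = np.linalg.eigh(H)                               # real eigenvalues, orthonormal eigenvectors
print(np.allclose(V.conj().T @ V, np.eye(3)))          # True: V is unitary
print(np.allclose(V @ np.diag(w) @ V.conj().T, H))     # True: H = V diag(w) V*
print(w)                                               # real numbers, in ascending order
```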
Quadratic equation
Modules
- https://en.wikipedia.org/wiki/Module_(mathematics) - one of the fundamental algebraic structures used in abstract algebra. A module over a ring is a generalization of the notion of vector space over a field, wherein the corresponding scalars are the elements of an arbitrary given ring (with identity) and a multiplication (on the left and/or on the right) is defined between elements of the ring and elements of the module. A module taking its scalars from a ring R is called an R-module.
Thus, a module, like a vector space, is an additive abelian group; a product is defined between elements of the ring and elements of the module that is distributive over the addition operation of each parameter and is compatible with the ring multiplication. Modules are very closely related to the representation theory of groups. They are also one of the central notions of commutative algebra and homological algebra, and are used widely in algebraic geometry and algebraic topology.
- YouTube: What is a Module? (Abstract Algebra)
- https://en.wikipedia.org/wiki/Free_module - a module that has a basis – that is, a generating set consisting of linearly independent elements. Every vector space is a free module, but, if the ring of the coefficients is not a division ring (not a field in the commutative case), then there exist non-free modules.
Algebraic geometry
See also Physics
- https://en.wikipedia.org/wiki/Grassmannian - a space which parametrizes all r-dimensional linear subspaces of the n-dimensional vector space V. For example, the Grassmannian Gr(1, V) is the space of lines through the origin in V, so it is the same as the projective space of one dimension lower than V.
- https://en.wikipedia.org/wiki/Algebraic_variety
- https://en.wikipedia.org/wiki/Dimension_of_an_algebraic_variety
- https://en.wikipedia.org/wiki/Elliptic_curve
- http://jeremykun.com/2014/02/10/elliptic-curves-as-elementary-equations/
- https://en.wikipedia.org/wiki/Greedoid - a type of set system. It arises from the notion of the matroid, which was originally introduced by Whitney in 1935 to study planar graphs and was later used by Edmonds to characterize a class of optimization problems that can be solved by greedy algorithms. Around 1980, Korte and Lovász introduced the greedoid to further generalize this characterization of greedy algorithms; hence the name greedoid. Besides mathematical optimization, greedoids have also been connected to graph theory, language theory, poset theory, and other areas of mathematics.
- https://en.wikipedia.org/wiki/Matroid - a structure that abstracts and generalizes the notion of linear independence in vector spaces. There are many equivalent ways to define a matroid axiomatically, the most significant being in terms of: independent sets; bases or circuits; rank functions; closure operators; and closed sets or flats. In the language of partially ordered sets, a finite matroid is equivalent to a geometric lattice. Matroid theory borrows extensively from the terminology of linear algebra and graph theory, largely because it is the abstraction of various notions of central importance in these fields. Matroids have found applications in geometry, topology, combinatorial optimization, network theory and coding theory.
Schemes
- https://en.wikipedia.org/wiki/Scheme_(mathematics) - a mathematical structure that enlarges the notion of algebraic variety to include, among other things, multiplicities (the equations x = 0 and x² = 0 define the same algebraic variety and different schemes) and "varieties" defined over rings (for example Fermat curves are defined over the integers).
Schemes were introduced by Alexander Grothendieck in 1960 in his treatise Éléments de géométrie algébrique; one of its aims was developing the formalism needed to solve deep problems of algebraic geometry, such as the Weil conjectures (the last of which was proved by Pierre Deligne). Strongly based on commutative algebra, scheme theory allows a systematic use of methods of topology and homological algebra. By including rationality questions inside the formalism, scheme theory introduces a strong connection between algebraic geometry and number theory, which eventually allowed Wiles' proof of Fermat's Last Theorem.
To be technically precise, a scheme is a topological space together with commutative rings for all of its open sets, which arises from gluing together spectra (spaces of prime ideals) of commutative rings along their open subsets. In other words, it is a locally ringed space which is locally a spectrum of a commutative ring.
- https://ncatlab.org/nlab/show/scheme - A scheme is a space that locally looks like a particularly simple ringed space: an affine scheme. This can be formalised either within the category of locally ringed spaces or within the category of presheaves of sets on the category of affine schemes Aff.
- https://en.wikipedia.org/wiki/Group_scheme - a type of algebro-geometric object equipped with a composition law. Group schemes arise naturally as symmetries of schemes, and they generalize algebraic groups, in the sense that all algebraic groups have group scheme structure, but group schemes are not necessarily connected, smooth, or defined over a field. This extra generality allows one to study richer infinitesimal structures, and this can help one to understand and answer questions of arithmetic significance.
The category of group schemes is somewhat better behaved than that of group varieties, since all homomorphisms have kernels, and there is a well-behaved deformation theory. Group schemes that are not algebraic groups play a significant role in arithmetic geometry and algebraic topology, since they come up in contexts of Galois representations and moduli problems.
Combinatorics
Noise
- https://en.wikipedia.org/wiki/Value_noise - a type of noise commonly used as a procedural texture primitive in computer graphics. It is conceptually different from, and often confused with, gradient noise, examples of which are Perlin noise and Simplex noise. This method consists of the creation of a lattice of points which are assigned random values. The noise function then returns the interpolated number based on the values of the surrounding lattice points. For many applications, multiple octaves of this noise can be generated and then summed together, just as can be done with Perlin noise and Simplex noise, in order to create a form of fractal noise. (A one-dimensional sketch appears after this list.)
- https://en.wikipedia.org/wiki/Gradient_noise - a type of noise commonly used as a procedural texture primitive in computer graphics. It is conceptually different from, and often confused with, value noise. This method consists of the creation of a lattice of random (or typically pseudorandom) gradients, dot products of which are then interpolated to obtain values in between the lattices. An artifact of some implementations of this noise is that the returned value at the lattice points is 0. Unlike value noise, gradient noise has more energy in the high frequencies. The first known implementation of a gradient noise function was Perlin noise, credited to Ken Perlin, who published the description of it in 1985. Later developments were Simplex noise and OpenSimplex noise.
- https://en.wikipedia.org/wiki/Perlin_noise - a type of gradient noise developed by Ken Perlin in 1983 as a result of his frustration with the "machine-like" look of computer-generated imagery (CGI) at the time. He formally described his findings in a SIGGRAPH paper in 1985 called An Image Synthesizer. In 1997, Perlin was awarded an Academy Award for Technical Achievement for creating the algorithm.
- https://en.wikipedia.org/wiki/Simplex_noise - a method for constructing an n-dimensional noise function comparable to Perlin noise ("classic" noise) but with fewer directional artifacts and, in higher dimensions, a lower computational overhead. Ken Perlin designed the algorithm in 2001 to address the limitations of his classic noise function, especially in higher dimensions.
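A minimal one-dimensional value-noise sketch for the entry above (my own illustration): random values at integer lattice points, smoothly interpolated in between, with a few octaves summed to give a fractal look.

```python
import random

random.seed(1)
lattice = [random.random() for _ in range(256)]        # a random value at each lattice point

def smoothstep(t):
    return t * t * (3 - 2 * t)                         # ease curve between lattice points

def value_noise(x):
    i, t = int(x) % 256, x - int(x)
    a, b = lattice[i], lattice[(i + 1) % 256]
    return a + (b - a) * smoothstep(t)                 # interpolate neighbouring lattice values

def fractal(x, octaves=3):
    # Sum octaves: each octave doubles the frequency and halves the amplitude.
    return sum(value_noise(x * 2 ** o) / 2 ** o for o in range(octaves))

print([round(fractal(x / 10), 3) for x in range(5)])
```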
to sort
- Numberphile: ASTOUNDING: 1 + 2 + 3 + 4 + 5 + ... = -1/12
Analysis
- https://en.wikipedia.org/wiki/Mathematical_analysis - a branch of mathematics that includes the theories of differentiation, integration, measure, limits, infinite series, and analytic functions.
These theories are usually studied in the context of real and complex numbers and functions. Analysis evolved from calculus, which involves the elementary concepts and techniques of analysis. Analysis may be distinguished from geometry; however, it can be applied to any space of mathematical objects that has a definition of nearness (a topological space) or specific distances between objects (a metric space).
Real
- https://en.wikipedia.org/wiki/Real_analysis - traditionally, the theory of functions of a real variable, is a branch of mathematical analysis dealing with the real numbers and real-valued functions of a real variable. In particular, it deals with the analytic properties of real functions and sequences, including convergence and limits of sequences of real numbers, the calculus of the real numbers, and continuity, smoothness and related properties of real-valued functions.
Complex
- https://en.wikipedia.org/wiki/Complex_analysis - traditionally known as the theory of functions of a complex variable, is the branch of mathematical analysis that investigates functions of complex numbers. It is useful in many branches of mathematics, including algebraic geometry, number theory and applied mathematics; as well as in physics, including hydrodynamics and thermodynamics; and also in engineering fields such as nuclear, aerospace, mechanical and electrical engineering.
Murray R. Spiegel described complex analysis as "one of the most beautiful as well as useful branches of Mathematics". Complex analysis is particularly concerned with analytic functions of complex variables (or, more generally, meromorphic functions). Because the separate real and imaginary parts of any analytic function must satisfy Laplace's equation, complex analysis is widely applicable to two-dimensional problems in physics.
Functional
- https://en.wikipedia.org/wiki/Functional_analysis - branch of mathematical analysis, the core of which is formed by the study of vector spaces endowed with some kind of limit-related structure (e.g. inner product, norm, topology, etc.) and the linear operators acting upon these spaces and respecting these structures in a suitable sense. The historical roots of functional analysis lie in the study of spaces of functions and the formulation of properties of transformations of functions such as the Fourier transform as transformations defining continuous, unitary etc. operators between function spaces. This point of view turned out to be particularly useful for the study of differential and integral equations.
- Functions are Vectors - Conceptualizing functions as infinite-dimensional vectors lets us apply the tools of linear algebra to a vast landscape of new problems, from image and geometry processing to curve fitting, light transport, and machine learning. [129]
Numerical
- https://en.wikipedia.org/wiki/Limit_of_a_sequence - the value that the terms of a sequence "tend to". If such a limit exists, the sequence is called convergent. A sequence which does not converge is said to be divergent. The limit of a sequence is said to be the fundamental notion on which the whole of analysis ultimately rests. Limits can be defined in any metric or topological space, but are usually first encountered in the real numbers.
- https://en.wikipedia.org/wiki/Pointwise_convergence - one of various senses in which a sequence of functions can converge to a particular function. It is considered weaker than uniform convergence, to which it is often compared
- https://en.wikipedia.org/wiki/Interpolation - a type of estimation, a method of constructing new data points within the range of a discrete set of known data points. In engineering and science, one often has a number of data points, obtained by sampling or experimentation, which represent the values of a function for a limited number of values of the independent variable. It is often required to interpolate, i.e., estimate the value of that function for an intermediate value of the independent variable. A closely related problem is the approximation of a complicated function by a simple function. Suppose the formula for some given function is known, but too complicated to evaluate efficiently. A few data points from the original function can be interpolated to produce a simpler function which is still fairly close to the original. The resulting gain in simplicity may outweigh the loss from interpolation error.
- https://en.wikipedia.org/wiki/Nearest-neighbor_interpolation - also known as proximal interpolation or, in some contexts, point sampling, is a simple method of multivariate interpolation in one or more dimensions. Interpolation is the problem of approximating the value of a function for a non-given point in some space when given the value of that function in points around (neighboring) that point. The nearest neighbor algorithm selects the value of the nearest point and does not consider the values of neighboring points at all, yielding a piecewise-constant interpolant. The algorithm is very simple to implement and is commonly used (usually along with mipmapping) in real-time 3D rendering to select color values for a textured surface.
- https://en.wikipedia.org/wiki/Linear_interpolation - a method of curve fitting using linear polynomials to construct new data points within the range of a discrete set of known data points.
- https://en.wikipedia.org/wiki/Bilinear_interpolation - an extension of linear interpolation for interpolating functions of two variables (e.g., x and y) on a rectilinear 2D grid. Bilinear interpolation is performed using linear interpolation first in one direction, and then again in the other direction. Although each step is linear in the sampled values and in the position, the interpolation as a whole is not linear but rather quadratic in the sample location. Bilinear interpolation is one of the basic resampling techniques in computer vision and image processing, where it is also called bilinear filtering or bilinear texture mapping. (A small linear and bilinear sketch appears at the end of this list.)
- https://en.wikipedia.org/wiki/Polynomial_interpolation - the interpolation of a given data set by the polynomial of lowest possible degree that passes through the points of the dataset.
- https://en.wikipedia.org/wiki/Lagrange_polynomial - used for polynomial interpolation.
- https://en.wikipedia.org/wiki/Hermite_interpolation - a method of interpolating data points as a polynomial function. The generated Hermite interpolating polynomial is closely related to the Newton polynomial, in that both are derived from the calculation of divided differences. However, the Hermite interpolating polynomial may also be computed without using divided differences, see Chinese remainder theorem § Hermite interpolation.
- https://en.wikipedia.org/wiki/Trigonometric_interpolation - interpolation with trigonometric polynomials. Interpolation is the process of finding a function which goes through some given data points. For trigonometric interpolation, this function has to be a trigonometric polynomial, that is, a sum of sines and cosines of given periods. This form is especially suited for interpolation of periodic functions. An important special case is when the given data points are equally spaced, in which case the solution is given by the discrete Fourier transform.
- https://en.wikipedia.org/wiki/Multivariate_interpolation - or spatial interpolation is interpolation on functions of more than one variable. The function to be interpolated is known at given points and the interpolation problem consists of yielding values at arbitrary points. Multivariate interpolation is particularly important in geostatistics, where it is used to create a digital elevation model from a set of points on the Earth's surface (for example, spot heights in a topographic survey or depths in a hydrographic survey).
- https://en.wikipedia.org/wiki/Fractal_compression#Fractal_interpolation - The resolution independence of a fractal-encoded image can be used to increase the display resolution of an image. This process is also known as "fractal interpolation". In fractal interpolation, an image is encoded into fractal codes via fractal compression, and subsequently decompressed at a higher resolution. The result is an up-sampled image in which iterated function systems have been used as the interpolant. Fractal interpolation maintains geometric detail very well compared to traditional interpolation methods like bilinear interpolation and bicubic interpolation. Since the interpolation cannot reverse Shannon entropy however, it ends up sharpening the image by adding random instead of meaningful detail. One cannot, for example, enlarge an image of a crowd where each person's face is one or two pixels and hope to identify them.
- https://en.wikipedia.org/wiki/Numerical_methods_for_ordinary_differential_equations#Consistency_and_order - methods used to find numerical approximations to the solutions of ordinary differential equations (ODEs). Their use is also known as "numerical integration", although this term can also refer to the computation of integrals. Many differential equations cannot be solved exactly. For practical purposes, however – such as in engineering – a numeric approximation to the solution is often sufficient. The algorithms studied here can be used to compute such an approximation. An alternative method is to use techniques from calculus to obtain a series expansion of the solution.
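A minimal sketch of the nearest-neighbour, linear, and bilinear schemes described above (generic illustration code, not taken from the linked articles):

```python
def nearest(x0, y0, x1, y1, x):
    """Nearest-neighbour interpolation between the samples (x0, y0) and (x1, y1)."""
    return y0 if abs(x - x0) <= abs(x - x1) else y1

def lerp(y0, y1, t):
    """Linear interpolation for t in [0, 1]."""
    return y0 + (y1 - y0) * t

def bilerp(f00, f10, f01, f11, tx, ty):
    """Bilinear interpolation on a unit cell: lerp in x, then lerp those results in y."""
    return lerp(lerp(f00, f10, tx), lerp(f01, f11, tx), ty)

print(nearest(0.0, 1.0, 1.0, 3.0, 0.3))        # 1.0  (closer to the left sample)
print(lerp(1.0, 3.0, 0.25))                    # 1.5
print(bilerp(0.0, 1.0, 2.0, 3.0, 0.5, 0.5))    # 1.5  (value at the centre of the cell)
```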
Dimensional
- https://en.wikipedia.org/wiki/Dimensional_analysis - the analysis of the relationships between different physical quantities by identifying their base quantities (such as length, mass, time, and electric current) and units of measurement (such as metres and grams) and tracking these dimensions as calculations or comparisons are performed. The term dimensional analysis is also used to refer to conversion of units from one dimensional unit to another, which can be used to evaluate scientific formulae. Commensurable physical quantities are of the same kind and have the same dimension, and can be directly compared to each other, even if they are expressed in differing units of measurement; e.g., metres and feet, grams and pounds, seconds and years, et cetera. Incommensurable physical quantities are of different kinds and have different dimensions, and can not be directly compared to each other, no matter what units they are expressed in, e.g. metres and grams, seconds and grams, metres and seconds. For example, asking whether a gram is larger than an hour is meaningless. (A small homogeneity-check sketch appears at the end of this section.)
Any physically meaningful equation, or inequality, must have the same dimensions on its left and right sides, a property known as dimensional homogeneity. Checking for dimensional homogeneity is a common application of dimensional analysis, serving as a plausibility check on derived equations and computations. It also serves as a guide and constraint in deriving equations that may describe a physical system in the absence of a more rigorous derivation.
The concept of physical dimension, and of dimensional analysis, was introduced by Joseph Fourier in 1822.
- YouTube: Cursed Units
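A minimal sketch of the homogeneity check described above (my own illustration): represent the dimension of a quantity as exponents over the base dimensions and verify that both sides of an equation carry the same dimensions, here F = m·a.

```python
# Dimensions as exponent tuples over the base dimensions (length, mass, time).
L = (1, 0, 0)
M = (0, 1, 0)
T = (0, 0, 1)

def dmul(a, b):                  # multiplying quantities adds their dimension exponents
    return tuple(x + y for x, y in zip(a, b))

def dpow(a, n):                  # raising a quantity to a power scales its exponents
    return tuple(n * x for x in a)

acceleration = dmul(L, dpow(T, -2))        # L T^-2
force = dmul(M, acceleration)              # M L T^-2

# Dimensional homogeneity: F and m*a carry identical dimensions...
print(force == dmul(M, acceleration))      # True
# ...whereas comparing a force to a length is dimensionally meaningless.
print(force == L)                          # False
```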
Calculus
- https://en.wikipedia.org/wiki/Calculus - the mathematical study of change, or any method or system of calculation guided by the symbolic manipulation of expressions
- YouTube: Big Picture of Calculus [132]
- https://en.wikipedia.org/wiki/Operational_calculus - also known as operational analysis, is a technique by which problems in analysis, in particular differential equations, are transformed into algebraic problems, usually the problem of solving a polynomial equation.
Series
- https://en.wikipedia.org/wiki/Series_(mathematics) - roughly speaking, a description of the operation of adding infinitely many quantities, one after the other, to a given starting quantity. The study of series is a major part of calculus and its generalization, mathematical analysis. Series are used in most areas of mathematics, even for studying finite structures (such as in combinatorics), through generating functions. In addition to their ubiquity in mathematics, infinite series are also widely used in other quantitative disciplines such as physics, computer science, statistics and finance. For a long time, the idea that such a potentially infinite summation could produce a finite result was considered paradoxical by mathematicians and philosophers. This paradox was resolved using the concept of a limit during the 19th century. Zeno's paradox of Achilles and the tortoise illustrates this counterintuitive property of infinite sums: Achilles runs after a tortoise, but when he reaches the position of the tortoise at the beginning of the race, the tortoise has reached a second position; when he reaches this second position, the tortoise is at a third position, and so on. Zeno concluded that Achilles could never reach the tortoise, and thus that movement does not exist. Zeno divided the race into infinitely many sub-races, each requiring a finite amount of time, so that the total time for Achilles to catch the tortoise is given by a series. The resolution of the paradox is that, although the series has an infinite number of terms, it has a finite sum, which gives the time necessary for Achilles to catch the tortoise.
In modern terminology, any (ordered) infinite sequence (a₁, a₂, a₃, …) of terms (that is, numbers, functions, or anything that can be added) defines a series, which is the operation of adding the aᵢ one after the other. To emphasize that there are an infinite number of terms, a series may be called an infinite series.
- https://en.wikipedia.org/wiki/Geometric_series - a series with a constant ratio between successive terms. For example, the series 1/2 + 1/4 + 1/8 + 1/16 + ⋯ is geometric, because each successive term can be obtained by multiplying the previous term by 1/2. Geometric series are among the simplest examples of infinite series with finite sums, although not all of them have this property. Historically, geometric series played an important role in the early development of calculus, and they continue to be central in the study of convergence of series. Geometric series are used throughout mathematics, and they have important applications in physics, engineering, biology, economics, computer science, queueing theory, and finance.
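A quick numeric check of the example above: the partial sums of 1/2 + 1/4 + 1/8 + ⋯ approach the closed-form sum a/(1 − r) = 1.

```python
# Partial sums of the geometric series with first term a = 1/2 and ratio r = 1/2.
a, r = 0.5, 0.5
closed_form = a / (1 - r)                  # 1.0

partial, partials, term = 0.0, [], a
for _ in range(10):
    partial += term
    partials.append(round(partial, 6))
    term *= r

print(partials)                            # 0.5, 0.75, 0.875, ... approaching 1
print(closed_form)                         # 1.0
```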
Type theory
See also Computing#Computability theory
Homotopy
- https://en.wikipedia.org/wiki/Homotopy - two continuous functions from one topological space to another are called homotopic (from Greek ὁμός homós "same, similar" and τόπος tópos "place") if one can be "continuously deformed" into the other, such a deformation being called a homotopy between the two functions. A notable use of homotopy is the definition of homotopy groups and cohomotopy groups, important invariants in algebraic topology.
In practice, there are technical difficulties in using homotopies with certain spaces. Algebraic topologists work with compactly generated spaces, CW complexes, or spectra.
- Homotopy Type Theory - This site serves to collect and disseminate research, resources, and tools for the investigation of homotopy type theory, and hosts a blog for those involved in its study.
Homotopy Type Theory refers to a new field of study relating Martin-Löf’s system of intensional, constructive type theory with abstract homotopy theory. Propositional equality is interpreted as homotopy and type isomorphism as homotopy equivalence. Logical constructions in type theory then correspond to homotopy-invariant constructions on spaces, while theorems and even proofs in the logical system inherit a homotopical meaning. As the natural logic of homotopy, constructive type theory is also related to higher category theory as it is used e.g. in the notion of a higher topos.
Univalent Foundations of Mathematics is Vladimir Voevodsky’s new program for a comprehensive, computational foundation for mathematics based on the homotopical interpretation of type theory. The type theoretic univalence axiom relates propositional equality on the universe with homotopy equivalence of small types. The program is currently being implemented with the help of the automated proof assistant Coq. The Univalent Foundations program is closely tied to homotopy type theory and is being pursued in parallel by many of the same researchers.
Informatics
See Computing
Geometry
- https://en.wikipedia.org/wiki/Geometry - a branch of mathematics concerned with questions of shape, size, relative position of figures, and the properties of space. A mathematician who works in the field of geometry is called a geometer. Geometry arose independently in a number of early cultures as a body of practical knowledge concerning lengths, areas, and volumes, with elements of formal mathematical science emerging in the West as early as Thales (6th century BC).
- The Proof Companion - Create and practice Geometry proofs. [143]
Absolute geometry
- https://en.wikipedia.org/wiki/Absolute_geometry - a geometry based on an axiom system for Euclidean geometry with the parallel postulate removed and none of its alternatives used in place of it. The term was introduced by János Bolyai in 1832. It is sometimes referred to as neutral geometry, as it is neutral with respect to the parallel postulate.
Euclidean geometry
- Eukleides - a computer language devoted to elementary plane geometry. It aims to be a fairly comprehensive system to create geometric figures, either static or dynamic. Eukleides allows one to handle basic types of data: numbers and strings, as well as geometric types of data: points, vectors, sets (of points), lines, circles and conics. A Eukleides script usually consists of a declarative part where objects are defined, and a descriptive part where objects are drawn. Nevertheless, Eukleides is also a full featured programming language, providing conditional and iterative structures, user defined functions, modules, etc. Hence, it can easily be extended.
- null pointer - Euclidean Spaces [145]
Objects
- https://en.wikipedia.org/wiki/Line_segment - a part of a line that is bounded by two distinct endpoints
- https://en.wikipedia.org/wiki/Vertex_(geometry) - a special kind of point that describes the corners or intersections of geometric shapes.
- https://en.wikipedia.org/wiki/Polytope - geometric solid object with flat sides and straight edges
- https://polytope.miraheze.org - a wiki dedicated to the classification, description, and discovery of polytopes. Polytopes are a wide class of geometric shapes that generalize the intuitive notions of "flat" shapes like polygons and polyhedra into higher (and lower) dimensions. Some of their main categories are: Regular polytopes, Uniform polytopes, Semi-uniform polytopes, Scaliform polytopes, Convex regular-faced polytopes. Although they are not polytopes, this wiki also contains information on a few "curved" shapes.
- https://en.wikipedia.org/wiki/Polygon - polytope in two dimensions
- https://en.wikipedia.org/wiki/Quadrilateral - polygon with four sides (or edges) and four vertices or corners
- https://en.wikipedia.org/wiki/Polyhedron - polytope in three dimensions
- https://en.wikipedia.org/wiki/Uniform_polyhedron - has regular polygons as faces and is vertex-transitive (i.e., there is an isometry mapping any vertex onto any other). It follows that all vertices are congruent. Uniform polyhedra may be regular (if also face- and edge-transitive), quasi-regular (if also edge-transitive but not face-transitive), or semi-regular (if neither edge- nor face-transitive). The faces and vertices need not be convex, so many of the uniform polyhedra are also star polyhedra.
- https://en.wikipedia.org/wiki/Dual_polyhedron - any polyhedron is associated with a second dual figure, where the vertices of one correspond to the faces of the other and the edges between pairs of vertices of one correspond to the edges between pairs of faces of the other. Such dual figures remain combinatorial or abstract polyhedra, but not all are also geometric polyhedra. Starting with any given polyhedron, the dual of its dual is the original polyhedron. Duality preserves the symmetries of a polyhedron. Therefore, for many classes of polyhedra defined by their symmetries, the duals also belong to a symmetric class. Thus, the regular polyhedra – the (convex) Platonic solids and (star) Kepler–Poinsot polyhedra – form dual pairs, where the regular tetrahedron is self-dual. The dual of an isogonal polyhedron, having equivalent vertices, is one which is isohedral, having equivalent faces. The dual of an isotoxal polyhedron (having equivalent edges) is also isotoxal. Duality is closely related to reciprocity or polarity, a geometric transformation that, when applied to a convex polyhedron, realizes the dual polyhedron as another convex polyhedron.
- https://en.wikipedia.org/wiki/Compound_of_cube_and_octahedron - can be seen as either a polyhedral stellation or a compound.
- https://en.wikipedia.org/wiki/Octahedron - a polyhedron with eight faces, twelve edges, and six vertices. The term is most commonly used to refer to the regular octahedron, a Platonic solid composed of eight equilateral triangles, four of which meet at each vertex. A regular octahedron is the dual polyhedron of a cube. It is a rectified tetrahedron. It is a square bipyramid in any of three orthogonal orientations. It is also a triangular antiprism in any of four orientations. An octahedron is the three-dimensional case of the more general concept of a cross polytope. A regular octahedron is a 3-ball in the Manhattan (ℓ1) metric.
- https://polytope.miraheze.org/wiki/Great_swirlprism - or gisp, also known as the pentagrammic-retroprismatic hecatonicosachoron, is a nonconvex uniform polychoron. It consists of 120 pentagrammic retroprisms. It is a member of an infinite family of isogonal great dodecahedral swirlchora and also a member of an infinite family of isochoric great dodecahedral swirlchora, using a decagram for each of its 12 rings.
- https://en.wikipedia.org/wiki/Conway_polyhedron_notation - invented by John Horton Conway and promoted by George W. Hart, is used to describe polyhedra based on a seed polyhedron modified by various prefix operations. Conway and Hart extended the idea of using operators, like truncation defined by Kepler, to build related polyhedra of the same symmetry.
- Polyhedra Viewer - visualizes the relationships between the convex, regular-faced polyhedra. The 120 solids presented here can be transformed into each other by a network of operations. Select a solid below to manipulate it and to explore its relationships with other polyhedra. [146]
- Virtual Reality Polyhedra - The Encyclopedia of Polyhedra
- polyHédronisme
- https://github.com/levskaya/polyhedronisme - An html5+coffeescript tool to construct and explore polyhedra.
- https://en.wikipedia.org/wiki/Polychoron - polytope in four dimensions
- Polytwisters - curvaceous four-dimensional polytope-like objects with an interesting twist: instead of having vertices as their simplest elements, they have equatorial circles (also called rings). These rings swirl around each other in a twisted arrangement. In place of edges they have "strips", which look rather like a full-twist Möbius strip. Finally, for their facets they have cylindrically curved "twisters". A twister resembles a polygonal rod (somewhat bowed out, or in extreme cases heavily bloated) which is twisted 360 degrees and then curved back in on itself into a cylindrical ring. Polytwisters are, in effect, the "lovechild" of polyhedra and the Hopf fibration. There is a one-to-one correspondence between the points of a sphere and a set of equatorial rings on a glome (4-D sphere); these rings fill the entire surface of the glome in a swirling manner and never intersect - this is the Hopf fibration. In a sense, a fibrated glome is a sphere of swirling equators and could also be called a "spheretwister". So what would happen if we started with a dodecahedron instead of a sphere and took the "Hopf function" of it? You would get something we could call a "Hopf dodecahedron". This object looks a lot like the dodecatwister, except that its facets bow inwards and it is concave, while the dodecatwister remains convex and has cylindrical curves. I suspect that Hopf polyhedra have been known since Hopf discovered his fibration; their relatives, the polytwisters, however, may be something new. Polytwisters have symmetries like polyhedra, but also have the capacity to roll smoothly like cylinders.
- https://github.com/polytwisters/polytwisters - a class of strange curved four-dimensional shapes related to uniform polyhedra and Hopf fibration. They were discovered by Jonathan Bowers circa 2007, who found 222 of these shapes plus three infinite families, and produced POV-Ray renders of their 3D cross sections. This repository is a project to clean up and port the original POV-Ray code (unpublished) to a modern toolchain using Blender's Python scripting capabilities. Computing the polytwisters as meshes enables professional-quality animations, as well as digital fabrication for realizing physical cross sections.
- https://en.wikipedia.org/wiki/Flag_(geometry) - sequence of faces of a polytope, each contained in the next, with just one face from each dimension.
- https://en.wikipedia.org/wiki/B%C3%A9zier_curve - a parametric curve used in computer graphics and related fields. The curve, which is related to the Bernstein polynomial, is named after Pierre Bézier, who used it in the 1960s for designing curves for the bodywork of Renault cars. Other uses include the design of computer fonts and animation. Bézier curves can be combined to form a Bézier spline, or generalized to higher dimensions to form Bézier surfaces. The Bézier triangle is a special case of the latter. In vector graphics, Bézier curves are used to model smooth curves that can be scaled indefinitely. "Paths", as they are commonly referred to in image manipulation programs, are combinations of linked Bézier curves. Paths are not bound by the limits of rasterized images and are intuitive to modify. Bézier curves are also used in the time domain, particularly in animation, user interface design and smoothing cursor trajectory in eye gaze controlled interfaces. For example, a Bézier curve can be used to specify the velocity over time of an object such as an icon moving from A to B, rather than simply moving at a fixed number of pixels per step. When animators or interface designers talk about the "physics" or "feel" of an operation, they may be referring to the particular Bézier curve used to control the velocity over time of the move in question. This also applies to robotics where the motion of a welding arm, for example, should be smooth to avoid unnecessary wear. (A short evaluation sketch in Python follows this list.)
- https://github.com/oysteinmyrmo/bezier - a single header only C++11 library for Bezier curve calculations and manipulations. Currently only supports 2D bezier curves.
- https://en.wikipedia.org/wiki/Spline_(mathematics) - a special function defined piecewise by polynomials. In interpolating problems, spline interpolation is often preferred to polynomial interpolation because it yields similar results, even when using low degree polynomials, while avoiding Runge's phenomenon for higher degrees. In the computer science subfields of computer-aided design and computer graphics, the term spline more frequently refers to a piecewise polynomial (parametric) curve. Splines are popular curves in these subfields because of the simplicity of their construction, their ease and accuracy of evaluation, and their capacity to approximate complex shapes through curve fitting and interactive curve design. The term spline comes from the flexible spline devices used by shipbuilders and draftsmen to draw smooth shapes.
- https://en.wikipedia.org/wiki/Spline_interpolation - a form of interpolation where the interpolant is a special type of piecewise polynomial called a spline. Spline interpolation is often preferred over polynomial interpolation because the interpolation error can be made small even when using low degree polynomials for the spline. Spline interpolation avoids the problem of Runge's phenomenon, in which oscillation can occur between points when interpolating using high degree polynomials.
- Beyond Bézier curves - Hobby splines[149]
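As a concrete illustration of the Bézier entries above: a Bézier curve can be evaluated by repeated linear interpolation of its control points (De Casteljau's algorithm). This is a minimal sketch, not code from any of the libraries linked here; the function name and the sample control points are made up.
```python
# Minimal sketch of De Casteljau's algorithm for evaluating a 2D Bezier curve
# at parameter t in [0, 1]. Control points are (x, y) tuples.

def de_casteljau(points, t):
    """Evaluate the Bezier curve defined by `points` at parameter t."""
    pts = [tuple(p) for p in points]
    while len(pts) > 1:
        # Replace the current control polygon by the (1 - t)/t interpolations
        # of each pair of consecutive points.
        pts = [((1 - t) * p[0] + t * q[0], (1 - t) * p[1] + t * q[1])
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# Example: a cubic Bezier from (0, 0) to (3, 0) with two interior control points.
curve = [(0, 0), (1, 2), (2, 2), (3, 0)]
print(de_casteljau(curve, 0.5))  # (1.5, 1.5), the midpoint of this symmetric curve
```
De Casteljau evaluation is numerically more stable than evaluating the Bernstein-polynomial form directly, which is one reason it is the standard scheme in graphics code.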
Tiles
Synthetic geometry
- https://en.wikipedia.org/wiki/Synthetic_geometry - sometimes referred to as axiomatic geometry or even pure geometry, is the study of geometry without the use of coordinates. According to Felix Klein, "Synthetic geometry is that which studies figures as such, without recourse to formulas, whereas analytic geometry consistently makes use of such formulas as can be written down after the adoption of an appropriate system of coordinates."
A defining characteristic of synthetic geometry is the use of the axiomatic method to draw conclusions and solve problems, as opposed to analytic and algebraic geometries, where one would use analysis and algebraic techniques to obtain these geometric results.
Euclidean geometry, as presented by Euclid, is the quintessential example of the use of the synthetic method. However, only after the introduction of coordinate methods was there a reason to introduce the term "synthetic geometry" to distinguish this approach to the subject. As a field of study, synthetic geometry was most prominent during the nineteenth century when some geometers rejected coordinate methods in establishing the foundations of projective geometry and non-Euclidean geometries.
- https://en.wikipedia.org/wiki/Metric_space - a set for which distances between all members of the set are defined. Those distances, taken together, are called a metric on the set. The most familiar metric space is 3-dimensional Euclidean space. In fact, a "metric" is the generalization of the Euclidean metric arising from the four long-known properties of the Euclidean distance. The Euclidean metric defines the distance between two points as the length of the straight line segment connecting them.
Analytic geometry
- https://en.wikipedia.org/wiki/Analytic_geometry - also known as coordinate geometry, or Cartesian geometry, is the study of geometry using a coordinate system. This contrasts with synthetic geometry. Analytic geometry is widely used in physics and engineering, and is the foundation of most modern fields of geometry, including algebraic, differential, discrete and computational geometry.
- https://en.wikipedia.org/wiki/Coordinate_system - a system which uses one or more numbers, or coordinates, to uniquely determine the position of a point or other geometric element on a manifold such as Euclidean space. The order of the coordinates is significant and they are sometimes identified by their position in an ordered tuple and sometimes by a letter, as in "the x-coordinate". The coordinates are taken to be real numbers in elementary mathematics, but may be complex numbers or elements of a more abstract system such as a commutative ring. The use of a coordinate system allows problems in geometry to be translated into problems about numbers and vice versa; this is the basis of analytic geometry.
- YouTube: Euler (gimbal lock) Explained
Differential geometry
- https://en.wikipedia.org/wiki/Differential_geometry - uses the techniques of differential calculus, integral calculus, linear algebra and multilinear algebra to study problems in geometry. The theory of plane and space curves and surfaces in the three-dimensional Euclidean space formed the basis for development of differential geometry during the 18th century and the 19th century.
Since the late 19th century, differential geometry has grown into a field concerned more generally with the geometric structures on differentiable manifolds. Differential geometry is closely related to differential topology and the geometric aspects of the theory of differential equations. The differential geometry of surfaces captures many of the key ideas and techniques characteristic of this field.
- https://en.wikipedia.org/wiki/Differential_geometry_of_curves - deals with smooth curves in the plane and in the Euclidean space by methods of differential and integral calculus
- https://en.wikipedia.org/wiki/Minimal_surface - a surface that locally minimizes its area. This is equivalent to (see definitions below) having a mean curvature of zero. The term "minimal surface" is used because these surfaces originally arose as surfaces that minimized total surface area subject to some constraint. Physical models of area-minimizing minimal surfaces can be made by dipping a wire frame into a soap solution, forming a soap film, which is a minimal surface whose boundary is the wire frame. However the term is used for more general surfaces that may self-intersect or do not have constraints. For a given constraint there may also exist several minimal surfaces with different areas (for example, see minimal surface of revolution): the standard definitions only relate to a local optimum, not a global optimum.
- YouTube: Energy surface prototype
Discrete geometry
- https://en.wikipedia.org/wiki/Discrete_geometry - branches of geometry that study combinatorial properties and constructive methods of discrete geometric objects. Most questions in discrete geometry involve finite or discrete sets of basic geometric objects, such as points, lines, planes, circles, spheres, polygons, and so forth. The subject focuses on the combinatorial properties of these objects, such as how they intersect one another, or how they may be arranged to cover a larger object.
Discrete geometry has large overlap with convex geometry and computational geometry, and is closely related to subjects such as finite geometry, combinatorial optimization, digital geometry, discrete differential geometry, geometric graph theory, toric geometry, and combinatorial topology.
Computational geometry
- https://en.wikipedia.org/wiki/Computational_geometry - a branch of computer science devoted to the study of algorithms which can be stated in terms of geometry. Some purely geometrical problems arise out of the study of computational geometric algorithms, and such problems are also considered to be part of computational geometry. While modern computational geometry is a recent development, it is one of the oldest fields of computing with history stretching back to antiquity. An ancient precursor is the Sanskrit treatise Shulba Sutras, or "Rules of the Chord", a book of algorithms written in 800 BCE. The book prescribes step-by-step procedures for constructing geometric objects like altars using a peg and chord.
Computational complexity is central to computational geometry, with great practical significance if algorithms are used on very large datasets containing tens or hundreds of millions of points. For such sets, the difference between O(n²) and O(n log n) may be the difference between days and seconds of computation (a concrete O(n log n) example is sketched at the end of this section).
The main impetus for the development of computational geometry as a discipline was progress in computer graphics and computer-aided design and manufacturing (CAD/CAM), but many problems in computational geometry are classical in nature, and may come from mathematical visualization.
Other important applications of computational geometry include robotics (motion planning and visibility problems), geographic information systems (GIS) (geometrical location and search, route planning), integrated circuit design (IC geometry design and verification), computer-aided engineering (CAE) (mesh generation), computer vision (3D reconstruction). The main branches of computational geometry are:
- Combinatorial computational geometry, also called algorithmic geometry, which deals with geometric objects as discrete entities. A groundlaying book in the subject by Preparata and Shamos dates the first use of the term "computational geometry" in this sense by 1975.
- Numerical computational geometry, also called machine geometry, computer-aided geometric design (CAGD), or geometric modeling, which deals primarily with representing real-world objects in forms suitable for computer computations in CAD/CAM systems. This branch may be seen as a further development of descriptive geometry and is often considered a branch of computer graphics or CAD. The term "computational geometry" in this meaning has been in use since 1971.
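To make the complexity remark above concrete, here is a minimal sketch of Andrew's monotone chain convex hull, a classic combinatorial computational-geometry algorithm that runs in O(n log n) time (dominated by the sort), whereas brute-force hull construction is quadratic or worse. The names and sample points are illustrative; this is not code from JTS or any other library mentioned on this page.
```python
# Andrew's monotone chain convex hull in O(n log n). Points are (x, y) tuples.

def cross(o, a, b):
    """2D cross product of vectors OA and OB; > 0 means a left (counter-clockwise) turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:                      # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]     # concatenate, dropping duplicated endpoints

print(convex_hull([(0, 0), (1, 1), (2, 2), (2, 0), (0, 2), (1, 0)]))
# -> [(0, 0), (2, 0), (2, 2), (0, 2)]
```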
Projective geometry
- https://en.wikipedia.org/wiki/Projective_geometry - a topic in mathematics. It is the study of geometric properties that are invariant with respect to projective transformations. This means that, compared to elementary geometry, projective geometry has a different setting, projective space, and a selective set of basic geometric concepts. The basic intuitions are that projective space has more points than Euclidean space, for a given dimension, and that geometric transformations are permitted that transform the extra points (called "points at infinity") to Euclidean points, and vice versa.
Properties meaningful for projective geometry are respected by this new idea of transformation, which is more radical in its effects than can be expressed by a transformation matrix and translations (the affine transformations). The first issue for geometers is what kind of geometry is adequate for a novel situation. It is not possible to refer to angles in projective geometry as it is in Euclidean geometry; another difference is the way in which parallel lines can be said to meet in a point at infinity, once the concept is translated into projective geometry's terms. Again this notion has an intuitive basis, such as railway tracks meeting at the horizon in a perspective drawing. See projective plane for the basics of projective geometry in two dimensions.
While the ideas were available earlier, projective geometry was mainly a development of the 19th century. This included the theory of complex projective space, the coordinates used (homogeneous coordinates) being complex numbers. Several major types of more abstract mathematics (including invariant theory, the Italian school of algebraic geometry, and Felix Klein's Erlangen programme resulting in the study of the classical groups) were based on projective geometry. It was also a subject with a large number of practitioners for its own sake, as synthetic geometry. Another topic that developed from axiomatic studies of projective geometry is finite geometry.
- YouTube: Projective geometry old and new. From 1956 to 2016 - Valentin Ovsienko, University of Reims - 2016
Mapping
- https://en.wikipedia.org/wiki/Isometry - distance preserving map
Geometric transformation
Other geometry
- https://en.wikipedia.org/wiki/Affine_geometry - the study of properties preserved by affine transformations, such as parallelism and ratios of lengths along parallel lines; informally, the geometry of parallel lines
- https://en.wikipedia.org/wiki/Elliptic_geometry - also sometimes called Riemannian geometry, is a non-Euclidean geometry, in which, given a line L and a point p outside L, there exists no line parallel to L passing through p, as all lines in elliptic geometry intersect. Elliptic geometry has a variety of properties that differ from those of classical Euclidean plane geometry. For example, the sum of the interior angles of any triangle is always greater than 180°.
- https://en.wikipedia.org/wiki/Versor - an algebraic parametrisation of rotations. In classical quaternion theory a versor is a quaternion of norm one (a unit quaternion).
- https://theconversation.com/when-will-rooftop-solar-be-cheaper-than-the-grid-heres-a-map-54789 [154]
- https://github.com/brainsmoke/geometry - geometry experiments
Convex geometry
- https://en.wikipedia.org/wiki/Convex_set - In Euclidean space, an object is convex if for every pair of points within the object, every point on the straight line segment that joins the pair of points is also within the object. For example, a solid cube is convex, but anything that is hollow or has a dent in it, for example, a crescent shape, is not convex. A convex curve forms the boundary of a convex set. The notion of a convex set can be generalized to other spaces.
- Feeling Your Way Around in High Dimensions - The simplest objects of interest in any dimension, which are also the basis for approximating arbitrary objects, are the convex polytopes and in this column I'll explain how to begin to probe them...
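Relating the convex-set definition above to something computable: for a simple (non-self-intersecting) polygon given as an ordered list of vertices, convexity is equivalent to all consecutive turns having the same orientation, which a cross-product test detects. A small illustrative sketch (the function name and sample polygons are made up):
```python
# Convexity check for a simple polygon given by ordered (x, y) vertices:
# every consecutive turn must have the same orientation (sign of the cross product).

def is_convex_polygon(vertices):
    n = len(vertices)
    sign = 0
    for i in range(n):
        ax, ay = vertices[i]
        bx, by = vertices[(i + 1) % n]
        cx, cy = vertices[(i + 2) % n]
        cross = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
        if cross != 0:
            if sign == 0:
                sign = 1 if cross > 0 else -1
            elif (cross > 0) != (sign > 0):
                return False                     # a turn in the opposite direction
    return True

print(is_convex_polygon([(0, 0), (2, 0), (2, 2), (0, 2)]))          # True (square)
print(is_convex_polygon([(0, 0), (2, 0), (1, 1), (2, 2), (0, 2)]))  # False (dent at (1, 1))
```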
Trigonometry
- https://en.wikipedia.org/wiki/Trigonometry - studies relationships involving lengths and angles of triangles. The field emerged during the 3rd century BC from applications of geometry to astronomical studies. The 3rd-century astronomers first noted that the lengths of the sides of a right-angle triangle and the angles between those sides have fixed relationships: that is, if at least the length of one side and the value of one angle is known, then all other angles and lengths can be determined algorithmically. These calculations soon came to be defined as the trigonometric functions and today are pervasive in both pure and applied mathematics: fundamental methods of analysis such as the Fourier transform, for example, or the wave equation, use trigonometric functions to understand cyclical phenomena across many applications in fields as diverse as physics, mechanical and electrical engineering, music and acoustics, astronomy, ecology, and biology. Trigonometry is also the foundation of surveying.
- https://en.wikipedia.org/wiki/Sine - a trigonometric function of an angle. The sine of an angle is defined in the context of a right triangle: for the specified angle, it is the ratio of the length of the side that is opposite that angle (that is not the hypotenuse) to the length of the longest side of the triangle (i.e., the hypotenuse).
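A tiny worked example of the ratio definition just quoted, with arbitrary numbers: for a 30° angle and a hypotenuse of length 10, the opposite side has length 10 · sin 30° = 5.
```python
import math

# Right-triangle reading of sine: sin(angle) = opposite / hypotenuse.
angle = math.radians(30)      # 30 degrees expressed in radians
hypotenuse = 10.0
opposite = hypotenuse * math.sin(angle)
print(round(opposite, 3))     # 5.0, since sin(30 deg) = 1/2
```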
Topology / Topos
- https://en.wikipedia.org/wiki/Topology - the mathematical study of shapes and topological spaces. It is an area of mathematics concerned with the properties of space that are preserved under continuous deformations including stretching and bending, but not tearing or gluing. This includes such properties as connectedness, continuity and boundary. Topology developed as a field of study out of geometry and set theory, through analysis of such concepts as space, dimension, and transformation. Such ideas go back to Leibniz, who in the 17th century envisioned the geometria situs (Greek-Latin for "geometry of place") and analysis situs (Greek-Latin for "picking apart of place"). The term topology was introduced by Johann Benedict Listing in the 19th century, although it was not until the first decades of the 20th century that the idea of a topological space was developed. By the middle of the 20th century, topology had become a major branch of mathematics.
- https://en.wikipedia.org/wiki/General_topology - the branch of topology that deals with the basic set-theoretic definitions and constructions used in topology. It is the foundation of most other branches of topology, including differential topology, geometric topology, and algebraic topology. Another name for general topology is point-set topology. The fundamental concepts in point-set topology are continuity, compactness, and connectedness: Continuous functions, intuitively, take nearby points to nearby points; Compact sets are those that can be covered by finitely many sets of arbitrarily small size; Connected sets are sets that cannot be divided into two pieces that are far apart.
- https://en.wikipedia.org/wiki/Topological_space - a set of points, along with a set of neighbourhoods for each point, that satisfy a set of axioms relating points and neighbourhoods. The definition of a topological space relies only upon set theory and is the most general notion of a mathematical space that allows for the definition of concepts such as continuity, connectedness, and convergence. Other spaces, such as manifolds and metric spaces, are specializations of topological spaces with extra structures or constraints. Being so general, topological spaces are a central unifying notion and appear in virtually every branch of modern mathematics. The branch of mathematics that studies topological spaces in their own right is called point-set topology or general topology.
- https://en.wikipedia.org/wiki/Topological_group - a group G together with a topology on G such that the group's binary operation and the group's inverse function are continuous functions with respect to the topology. A topological group is a mathematical object with both an algebraic structure and a topological structure. Thus, one may perform algebraic operations, because of the group structure, and one may talk about continuous functions, because of the topology. Topological groups, along with continuous group actions, are used to study continuous symmetries, which have many applications, for example in physics.
- https://en.wikipedia.org/wiki/Neighbourhood_(mathematics) - a neighbourhood of a point is a set containing the point where you can move that point some amount without leaving the set.
- https://en.wikipedia.org/wiki/Metric_space - a set for which distances between all members of the set are defined. Those distances, taken together, are called a metric on the set.
The most familiar metric space is 3-dimensional Euclidean space. In fact, a "metric" is the generalization of the Euclidean metric arising from the four long-known properties of the Euclidean distance. The Euclidean metric defines the distance between two points as the length of the straight line segment connecting them. Other metric spaces occur for example in elliptic geometry and hyperbolic geometry, where distance on a sphere measured by angle is a metric, and the hyperboloid model of hyperbolic geometry is used by special relativity as a metric space of velocities. A metric on a space induces topological properties like open and closed sets, which lead to the study of more abstract topological spaces. In the most general definition of a metric space, the distance between set elements can be negative. Spaces like these are important in the theory of relativity.
- https://en.wikipedia.org/wiki/Manifold - a topological space that resembles Euclidean space near each point
Algebraic
- https://en.wikipedia.org/wiki/Algebraic_topology - branch of mathematics that uses tools from abstract algebra to study topological spaces. The basic goal is to find algebraic invariants that classify topological spaces up to homeomorphism, though usually most classify up to homotopy equivalence. Although algebraic topology primarily uses algebra to study topological problems, using topology to solve algebraic problems is sometimes also possible. Algebraic topology, for example, allows for a convenient proof that any subgroup of a free group is again a free group.
- https://en.wikipedia.org/wiki/Simplicial_complex - a set composed of points, line segments, triangles, and their n-dimensional counterparts (see illustration). Simplicial complexes should not be confused with the more abstract notion of a simplicial set appearing in modern simplicial homotopy theory. The purely combinatorial counterpart to a simplicial complex is an abstract simplicial complex.
- https://en.wikipedia.org/wiki/Abstract_simplicial_complex - a family of sets that is closed under taking subsets, i.e., every subset of a set in the family is also in the family. It is a purely combinatorial description of the geometric notion of a simplicial complex. For example, in a 2-dimensional simplicial complex, the sets in the family are the triangles (sets of size 3), their edges (sets of size 2), and their vertices (sets of size 1). In the context of matroids and greedoids, abstract simplicial complexes are also called independence systems. An abstract simplex can be studied algebraically by forming its Stanley–Reisner ring; this sets up a powerful relation between combinatorics and commutative algebra.
- https://en.wikipedia.org/wiki/Combinatorial_topology - an older name for algebraic topology, dating from the time when topological invariants of spaces (for example the Betti numbers) were regarded as derived from combinatorial decompositions of spaces, such as decomposition into simplicial complexes. After the proof of the simplicial approximation theorem this approach provided rigour. The change of name reflected the move to organise topological classes such as cycles-modulo-boundaries explicitly into abelian groups. This point of view is often attributed to Emmy Noether, and so the change of title may reflect her influence. The transition is also attributed to the work of Heinz Hopf, who was influenced by Noether, and to Leopold Vietoris and Walther Mayer, who independently defined homology.
- https://en.wikipedia.org/wiki/Digital_topology - deals with properties and features of two-dimensional (2D) or three-dimensional (3D) digital images that correspond to topological properties (e.g., connectedness) or topological features (e.g., boundaries) of objects. Concepts and results of digital topology are used to specify and justify important (low-level) image analysis algorithms, including algorithms for thinning, border or surface tracing, counting of components or tunnels, or region-filling.
- https://en.wikipedia.org/wiki/Fibration - generalizes the notion of a fiber bundle and plays an important role in algebraic topology, a branch of mathematics.
Differential
- https://en.wikipedia.org/wiki/Differential_topology - the field dealing with differentiable functions on differentiable manifolds. It is closely related to differential geometry and together they make up the geometric theory of differentiable manifolds.
Geometric
- https://en.wikipedia.org/wiki/Geometric_topology - the study of manifolds and maps between them, particularly embeddings of one manifold into another.
to sort
- https://en.wikipedia.org/wiki/Clique_complex - clique complexes, flag complexes, and conformal hypergraphs are closely related mathematical objects in graph theory and geometric topology that each describe the cliques (complete subgraphs) of an undirected graph.
- https://en.wikipedia.org/wiki/Specialization_(pre)order - a natural preorder on the set of the points of a topological space.
- https://en.wikipedia.org/wiki/Polish_space - a separable completely metrizable topological space; that is, a space homeomorphic to a complete metric space that has a countable dense subset. Polish spaces are so named because they were first extensively studied by Polish topologists and logicians—Sierpiński, Kuratowski, Tarski and others. However, Polish spaces are mostly studied today because they are the primary setting for descriptive set theory, including the study of Borel equivalence relations. Polish spaces are also a convenient setting for more advanced measure theory, in particular in probability theory.
- https://en.wikipedia.org/wiki/Poincaré_conjecture - a theorem about the characterization of the 3-sphere, which is the hypersphere that bounds the unit ball in four-dimensional space. The conjecture states: "Every simply connected, closed 3-manifold is homeomorphic to the 3-sphere."
An equivalent form of the conjecture involves a coarser form of equivalence than homeomorphism called homotopy equivalence: if a 3-manifold is homotopy equivalent to the 3-sphere, then it is necessarily homeomorphic to it. Originally conjectured by Henri Poincaré, the theorem concerns a space that locally looks like ordinary three-dimensional space but is connected, finite in size, and lacks any boundary (a closed 3-manifold). The Poincaré conjecture claims that if such a space has the additional property that each loop in the space can be continuously tightened to a point, then it is necessarily a three-dimensional sphere. The analogous conjectures for all higher dimensions had already been proved.
- https://en.wikipedia.org/wiki/Solenoid_(mathematics) - a compact connected topological space (i.e. a continuum) that may be obtained as the inverse limit of an inverse system of topological groups and continuous homomorphisms (S_i, f_i), with f_i: S_{i+1} → S_i for i ≥ 0, where each S_i is a circle and f_i is the map that uniformly wraps the circle S_{i+1} n_i times (n_i ≥ 2) around the circle S_i. This construction can be carried out geometrically in the three-dimensional Euclidean space R³. A solenoid is a one-dimensional homogeneous indecomposable continuum that has the structure of a compact topological group.
- https://en.wikipedia.org/wiki/Fiber_bundle - a space that is locally a product space, but globally may have a different topological structure. Fiber bundles, such as the tangent bundle of a manifold and other more general vector bundles, play an important role in differential geometry and differential topology, as do principal bundles.
- https://ncatlab.org/nlab/show/Grothendieck+fibration
- JTS Topology Suite - an API of spatial predicates and functions for processing geometry. It has the following design goals: JTS conforms to the Simple Features Specification for SQL published by the Open Geospatial Consortium; JTS provides a complete, consistent, robust implementation of fundamental algorithms for processing linear geometry on the 2-dimensional Cartesian plane; JTS is fast enough for production use; JTS is written in 100% pure Java.
- https://en.wikipedia.org/wiki/Chu_space - generalize the notion of topological space by dropping the requirements that the set of open sets be closed under union and finite intersection, that the open sets be extensional, and that the membership predicate (of points in open sets) be two-valued. The definition of continuous function remains unchanged other than having to be worded carefully to continue to make sense after these generalizations.
Topos
- https://en.wikipedia.org/wiki/Topos - a type of category that behaves like the category of sheaves of sets on a topological space (or more generally: on a site). Topoi behave much like the category of sets and possess a notion of localization; they are in a sense a generalization of point-set topology. The Grothendieck topoi find applications in algebraic geometry; the more general elementary topoi are used in logic.
- https://en.wikipedia.org/wiki/History_of_topos_theory - an aspect of category theory, and has a reputation for being abstruse. The level of abstraction involved cannot be reduced beyond a certain point; but on the other hand context can be given. This is partly in terms of historical development, but also to some extent an explanation of differing attitudes to category theory.
- https://ncatlab.org/nlab/show/Mitchell-Bénabou+language - a particularly simple form of the internal language of an elementary topos E. It makes use of the fact that in the presence of a subobject classifier Ω, there is no need to treat formulas separately from terms, since a formula or proposition can be identified with a term of type Ω.
- https://ncatlab.org/nlab/show/Top - the category whose objects are topological spaces and whose morphisms are continuous functions between them. Its isomorphisms are the homeomorphisms.
- https://en.wikipedia.org/wiki/Effective_topos - introduced by Martin Hyland (1982), it captures the mathematical idea of effectivity within the category-theoretical framework.
Reverse mathematics
Dynamical systems
- https://en.wikipedia.org/wiki/Category:Dynamical_systems - deals with the study of the solutions to the equations of motion of systems that are primarily mechanical in nature, although this includes planetary orbits as well as the behaviour of electronic circuits and the solutions to partial differential equations that arise in biology. Much of modern research is focused on the study of chaotic systems.
- YouTube: The Duffing Oscillator
- https://en.wikipedia.org/wiki/Poincar%C3%A9_map - a first recurrence map or Poincaré map, named after Henri Poincaré, is the intersection of a periodic orbit in the state space of a continuous dynamical system with a certain lower-dimensional subspace, called the Poincaré section, transversal to the flow of the system. More precisely, one considers a periodic orbit with initial conditions within a section of the space, which leaves that section afterwards, and observes the point at which this orbit first returns to the section. One then creates a map to send the first point to the second, hence the name first recurrence map. The transversality of the Poincaré section means that periodic orbits starting on the subspace flow through it and not parallel to it. A Poincaré map can be interpreted as a discrete dynamical system with a state space that is one dimension smaller than the original continuous dynamical system. Because it preserves many properties of periodic and quasiperiodic orbits of the original system and has a lower-dimensional state space, it is often used for analyzing the original system in a simpler way. In practice this is not always possible as there is no general method to construct a Poincaré map. A Poincaré map differs from a recurrence plot in that space, not time, determines when to plot a point. For instance, the locus of the Moon when the Earth is at perihelion is a recurrence plot; the locus of the Moon when it passes through the plane perpendicular to the Earth's orbit and passing through the Sun and the Earth at perihelion is a Poincaré map. It was used by Michel Hénon to study the motion of stars in a galaxy, because the path of a star projected onto a plane looks like a tangled mess, while the Poincaré map shows the structure more clearly.
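As a rough numerical sketch of the stroboscopic special case (the equation, parameter values, and step sizes below are illustrative choices, not taken from the linked sources): for a periodically driven system such as the Duffing oscillator above, sampling the state once per drive period T = 2π/ω yields a Poincaré section of the flow.
```python
# Stroboscopic Poincare section of the driven Duffing oscillator
#   x'' + delta*x' + alpha*x + beta*x^3 = gamma*cos(omega*t),
# sampled once per drive period T = 2*pi/omega. Fixed-step RK4 integration;
# all parameter values are arbitrary illustrative choices.
import math

delta, alpha, beta, gamma, omega = 0.3, -1.0, 1.0, 0.5, 1.2

def deriv(t, x, v):
    """Right-hand side of the first-order system (x' = v, v' = ...)."""
    return v, gamma * math.cos(omega * t) - delta * v - alpha * x - beta * x ** 3

def rk4_step(t, x, v, h):
    k1x, k1v = deriv(t, x, v)
    k2x, k2v = deriv(t + h / 2, x + h / 2 * k1x, v + h / 2 * k1v)
    k3x, k3v = deriv(t + h / 2, x + h / 2 * k2x, v + h / 2 * k2v)
    k4x, k4v = deriv(t + h, x + h * k3x, v + h * k3v)
    return (x + h / 6 * (k1x + 2 * k2x + 2 * k3x + k4x),
            v + h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v))

T = 2 * math.pi / omega
steps = 1000                      # integration steps per drive period
h = T / steps
t, x, v = 0.0, 1.0, 0.0
section = []
for _ in range(200):              # record one section point per drive period
    for _ in range(steps):
        x, v = rk4_step(t, x, v, h)
        t += h
    section.append((x, v))

print(section[-5:])               # a few late points of the Poincare section
```
Plotting the collected (x, v) pairs instead of printing them shows the structure the text describes; for suitable parameter choices the section traces out the Duffing system's strange attractor.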
Statistics
See Statistics
Information theory
- Information Theory, Inference, and Learning Algorithms - David MacKay
Foundations
- https://en.wikipedia.org/wiki/Classical_mathematics - refers generally to the mainstream approach to mathematics, which is based on classical logic and ZFC set theory. It stands in contrast to other types of mathematics such as constructive mathematics or predicative mathematics. In practice, the most common non-classical systems are used in constructive mathematics. Classical mathematics is sometimes attacked on philosophical grounds, due to constructivist and other objections to the logic, set theory, etc., chosen as its foundations, such as have been expressed by L. E. J. Brouwer. Almost all mathematics, however, is done in the classical tradition, or in ways compatible with it.
Formalism
- https://en.wikipedia.org/wiki/Formalism_(philosophy_of_mathematics) - the view that holds that statements of mathematics and logic can be considered to be statements about the consequences of the manipulation of strings (alphanumeric sequences of symbols, usually as equations) using established manipulation rules. A central idea of formalism "is that mathematics is not a body of propositions representing an abstract sector of reality, but is much more akin to a game, bringing with it no more commitment to an ontology of objects or properties than ludo or chess." According to formalism, the truths expressed in logic and mathematics are not about numbers, sets, or triangles or any other coextensive subject matter — in fact, they aren't "about" anything at all. Rather, mathematical statements are syntactic forms whose shapes and locations have no meaning unless they are given an interpretation (or semantics). In contrast to logicism or intuitionism, formalism's contours are less defined due to broad approaches that can be categorized as formalist. Along with logicism and intuitionism, formalism is one of the main theories in the philosophy of mathematics that developed in the late nineteenth and early twentieth century. Among formalists, David Hilbert was the most prominent advocate of formalism.
Constructivism
- https://en.wikipedia.org/wiki/Constructivism_(mathematics) - asserts that it is necessary to find (or "construct") a mathematical object to prove that it exists. In standard mathematics, one can prove the existence of a mathematical object without "finding" that object explicitly, by assuming its non-existence and then deriving a contradiction from that assumption. This proof by contradiction is not constructively valid. The constructive viewpoint involves a verificational interpretation of the existential quantifier, which is at odds with its classical interpretation.
There are many forms of constructivism. These include the program of intuitionism founded by Brouwer, the finitism of Hilbert and Bernays, the constructive recursive mathematics of Shanin and Markov, and Bishop's program of constructive analysis. Constructivism also includes the study of constructive set theories such as IZF and the study of topos theory.
Constructivism is often identified with intuitionism, although intuitionism is only one constructivist program. Intuitionism maintains that the foundations of mathematics lie in the individual mathematician's intuition, thereby making mathematics into an intrinsically subjective activity. Other forms of constructivism are not based on this viewpoint of intuition, and are compatible with an objective viewpoint on mathematics.
- https://ncatlab.org/nlab/show/constructive+mathematics - is mathematics done without the principle of excluded middle, or other principles, such as the full axiom of choice, that imply it, hence without “non-constructive” methods of formal proof, such as proof by contradiction. This is in contrast to classical mathematics, where such principles are taken to hold.
- https://en.wikipedia.org/wiki/Constructive_set_theory - an approach to mathematical constructivism following the program of axiomatic set theory. That is, it uses the usual first-order language of classical set theory, and although of course the logic is constructive, there is no explicit use of constructive types. Rather, there are just sets, thus it can look very much like classical mathematics done on the most common foundations, namely the Zermelo–Fraenkel axioms (ZFC).
In 1973, John Myhill proposed a system of set theory based on intuitionistic logic taking the most common foundation, ZFC, and throwing away the axiom of choice (AC) and the law of the excluded middle (LEM), leaving everything else as is. However, different forms of some of the ZFC axioms which are equivalent in the classical setting are inequivalent in the constructive setting, and some forms imply LEM. The system, which has come to be known as IZF, or Intuitionistic Zermelo–Fraenkel (ZF refers to ZFC without the axiom of choice), has the usual axioms of extensionality, pairing, union, infinity, separation and power set. The axiom of regularity is stated in the form of an axiom schema of set induction. Also, while Myhill used the axiom schema of replacement in his system, IZF usually stands for the version with collection.
Intuitionism
- Intuition and Logic in Mathematics - by Henri Poincaré
Univalent
- https://ncatlab.org/nlab/show/univalence+axiom - formalises the informal practice of identifying isomorphic structures: the canonical map from identities between types to equivalences of types is itself an equivalence. In the resulting hierarchy of homotopy n-types:
- (−2)-types are the contractible ones,
- (−1)-types are the truth values,
- 0-types are the sets,
- 1-types are the groupoids,
- etc.
- YouTube: Univalent Foundations Seminar - Steve Awodey, 2012
"So UA can be stated: 'Identity is isomorphic to isomorphism'"
"In the presence of a universe, UA is equivelent to invarience."
Software
See also JS scripts#Maths
- https://en.wikipedia.org/wiki/Mathematical_software - is software used to model, analyze or calculate numeric, symbolic or geometric data.
- https://en.wikipedia.org/wiki/Software_calculator - a calculator that has been implemented as a computer program, rather than as a physical hardware device. They are among the simpler interactive software tools, and, as such, they provide operations for the user to select one at a time.
Computer algebra
- https://en.wikipedia.org/wiki/Computer_algebra - also called symbolic computation or algebraic computation, is a scientific area that refers to the study and development of algorithms and software for manipulating mathematical expressions and other mathematical objects. Although computer algebra could be considered a subfield of scientific computing, they are generally considered as distinct fields because scientific computing is usually based on numerical computation with approximate floating point numbers, while symbolic computation emphasizes exact computation with expressions containing variables that have no given value and are manipulated as symbols. Software applications that perform symbolic calculations are called computer algebra systems, with the term system alluding to the complexity of the main applications that include, at least, a method to represent mathematical data in a computer, a user programming language (usually different from the language used for the implementation), a dedicated memory manager, a user interface for the input/output of mathematical expressions, a large set of routines to perform usual operations, like simplification of expressions, differentiation using chain rule, polynomial factorization, indefinite integration, etc.
- https://en.wikipedia.org/wiki/Computer_algebra_system - CAS, or symbolic algebra system (SAS), is any mathematical software with the ability to manipulate mathematical expressions in a way similar to the traditional manual computations of mathematicians and scientists. The development of the computer algebra systems in the second half of the 20th century is part of the discipline of "computer algebra" or "symbolic computation", which has spurred work in algorithms over mathematical objects such as polynomials.
Computer algebra systems may be divided into two classes: specialized and general-purpose. The specialized ones are devoted to a specific part of mathematics, such as number theory, group theory, or teaching of elementary mathematics.
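As a short illustration of the operations described above (simplification, differentiation via the chain rule, polynomial factorization, indefinite integration), here is a sketch using SymPy, the Python symbolic library listed further down; the expressions are arbitrary examples.
```python
# Exact symbolic computation with SymPy: no floating-point approximation involved.
import sympy as sp

x = sp.symbols('x')

print(sp.simplify(sp.sin(x) ** 2 + sp.cos(x) ** 2))   # 1
print(sp.diff(sp.sin(x ** 2), x))                     # 2*x*cos(x**2)  (chain rule)
print(sp.factor(x ** 2 - 2 * x - 15))                 # (x - 5)*(x + 3)
print(sp.integrate(sp.exp(x) * sp.sin(x), x))         # exp(x)*sin(x)/2 - exp(x)*cos(x)/2
```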
Numerical analysis
- LWN.net: Symbolic mathematics on Linux - an introduction to the world of free and open-source applications for symbolic mathematics. These are programs that assist the researcher or student through their ability to manipulate mathematical expressions, rather than just make numerical calculations. I'll give an overview of two large computer algebra packages available for Linux, and a briefer sampling of some of the more specialized tools aimed at particular branches of mathematics. [160]
to sort
Symbolab
- Symbolab - an advanced math education tool. It allows users to learn, practice and discover math topics using mathematical symbols and scientific notations as well as text. Symbolab provides automated step by step solutions to algebraic, trigonometric and calculus topics covering from middle school through college. Symbolab offers a wealth of smart calculators including: equations, simultaneous equations, inequalities, integrals, derivatives, limits, tangent line, trigonometric equations, functions and more. The stated goal of the site is to make scientific content universally accessible by expanding the searchable data space onto scientific notations, expressions, equations and formulas. This is done by applying proprietary machine learning algorithms in order to understand the meaning and context of the queries. Symbolab, making math simpler.
swMATH
- swMATH - a freely accessible, innovative information service for mathematical software. swMATH not only provides access to an extensive database of information on mathematical software, but also includes a systematic linking of software packages with relevant mathematical publications.
SymPy
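- SymPy - a Python library for symbolic mathematics that aims to provide a full-featured computer algebra system while keeping the code simple and extensible; it is the engine behind Mathics below and is used in the short example under Computer algebra above.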
Mathics
- Mathics - a free, general-purpose online computer algebra system featuring Mathematica-compatible syntax and functions. It is backed by highly extensible Python code, relying on SymPy for most mathematical tasks and, optionally, Sage for more advanced stuff. [161] [162]
Scilab
- Scilab - free and open source software for numerical computation providing a powerful computing environment for engineering and scientific applications. Scilab includes hundreds of mathematical functions. It has a high level programming language allowing access to advanced data structures, 2-D and 3-D graphical functions.
- Scipad - a powerful editor and graphical debugger for programs written in Scilab language. It is a mature and highly configurable programmer's editor, including features like syntax colorization, regexp search/replace, parentheses matching, logical/physical line numbering, peer windows, line and block text editing, and much more. Scipad can be used along with Scicoslab or Scilab, but even as a standalone text editor.
GNU Octave
- GNU Octave - Scientific Programming Language. Powerful mathematics-oriented syntax with built-in plotting and visualization tools, free software, runs on GNU/Linux, macOS, BSD, and Windows, drop-in compatible with many Matlab scripts [163]
- Octave Forge - Extra packages for GNU Octave
Maxima
- Maxima - a system for the manipulation of symbolic and numerical expressions, including differentiation, integration, Taylor series, Laplace transforms, ordinary differential equations, systems of linear equations, polynomials, sets, lists, vectors, matrices and tensors. Maxima yields high precision numerical results by using exact fractions, arbitrary-precision integers and variable-precision floating-point numbers. Maxima can plot functions and data in two and three dimensions. The Maxima source code can be compiled on many systems, including Windows, Linux, and MacOS X. The source code for all systems and precompiled binaries for Windows and Linux are available at the SourceForge file manager. Maxima is a descendant of Macsyma, the legendary computer algebra system developed in the late 1960s at the Massachusetts Institute of Technology.
- https://en.wikipedia.org/wiki/Maxima_(software) - a computer algebra system (CAS) based on a 1982 version of Macsyma. It is written in Common Lisp and runs on all POSIX platforms such as macOS, Unix, BSD, and Linux, as well as under Microsoft Windows and Android. It is free software released under the terms of the GNU General Public License (GPL).
Penrose
- Penrose - a platform that enables people to create beautiful diagrams just by typing mathematical notation in plain text. The goal is to make it easy for non-experts to create and explore high-quality diagrams, providing deeper insight into challenging technical concepts. We aim to democratize the process of creating visual intuition.
- https://github.com/penrose/penrose - Create beautiful diagrams just by typing mathematical notation in plain text.
Workbooks
LastMile AI
- LastMileAI - the generative AI workshop: text, image & audio models, all in one place. Build AI-powered apps with text, image, and audio models like OpenAI’s ChatGPT, Google’s PaLM, Stable Diffusion and many more.
- Welcome to LastMile AI! | LastMile AI - an interactive document that lets you experiment with and combine various LLMs, image, and audio models to build powerful applications and workflows in a single environment. Getting Started: Workbooks are made up of different types of cells. This cell is a text cell to help you annotate and organize information in your workbook. We also support Model Cells that enable you to prompt the following models: ChatGPT, PaLM, Stable Diffusion, Bark, Whisper, and DALL-E2. You can also create your own custom models by going to the Models page in the left nav bar.
to sort
- http://blog.stephenwolfram.com/2014/08/computational-knowledge-and-the-future-of-pure-mathematics/ [164]
- http://nathanfriend.com/inspirograph/ [166]
- https://github.com/nfriend/inspirograph - A digital Spirograph, written in TypeScript.
- GeoGebra - Get our free online math tools for graphing, geometry, 3D, and more!
- CoCalc - Collaborative Calculation
proof to sort
- EuroProofNet - a COST action on the usability of proof systems, started in November 2021, gathering more than 320 researchers from 42 different countries, chaired by Frédéric Blanqui and a core group. EuroProofNet organizes meetings and schools, and provides grants to its members for short-term scientific missions in other countries.
- https://chorasimilarity.wordpress.com/graphic-lambda-calculus/
- http://chorasimilarity.github.io/chemlambda-gui
- MiniZinc - a free and open-source constraint modeling language. You can use MiniZinc to model constraint satisfaction and optimization problems in a high-level, solver-independent way, taking advantage of a large library of pre-defined constraints. Your model is then compiled into FlatZinc, a solver input language that is understood by a wide range of solvers. MiniZinc is developed at Monash University in collaboration with Data61 Decision Sciences and the University of Melbourne.
Agda
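- Agda - a dependently typed functional programming language and proof assistant developed at Chalmers; as in Coq and Idris, propositions are expressed as types and proofs are written as programs.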
Isabelle
- Isabelle - a generic proof assistant. It allows mathematical formulas to be expressed in a formal language and provides tools for proving those formulas in a logical calculus. Isabelle was originally developed at the University of Cambridge and Technische Universität München, but now includes numerous contributions from institutions and individuals worldwide.
- Archive of Formal Proofs - a collection of proof libraries, examples, and larger scientific developments, mechanically checked in the theorem prover Isabelle. It is organized in the way of a scientific journal, is indexed by dblp and has an ISSN: 2150-914x. Submissions are refereed.
Idris
- Idris - a general purpose pure functional programming language with dependent types. Dependent types allow types to be predicated on values, meaning that some aspects of a program’s behaviour can be specified precisely in the type. It is compiled, with eager evaluation. Its features are influenced by Haskell and ML.
Albatross
- http://albatross-lang.sourceforge.net/ - a programming language which can be verified statically. You write programs in Albatross and prove them to be correct in the same language. What is a correct program? A program is correct if it is consistent with its specification. Specifications in Albatross are assertions which express correctness conditions. Assertions are boolean expressions in predicate logic. A verified Albatross program has a proof for each assertion. The proof is generated by the compiler. But since assertions are expressed in predicate logic, and predicate logic is not decidable for arbitrary expressions, the theorem prover in the Albatross compiler cannot prove all valid assertions. Therefore the programmer has to provide the proof steps which cannot be done by the compiler automatically.
Coq
- Geocoq - A formalization of foundations of geometry in Coq
Logitext
- Logitext - an educational proof assistant for first-order classical logic using the sequent calculus, in the same tradition as Jape, Pandora, Panda and Yoda. It is intended to assist students who are learning Gentzen trees as a way of structuring derivations of logical statements. Underneath the hood, Logitext interfaces with Coq in order to check the validity of your proof steps. The frontend is written in Haskell and Ur/Web, and there is an interesting story behind it which you can read about. Alternatively, get the source at GitHub. [172]
Formality
- https://github.com/maiavictor/formality - An efficient programming language and proof assistant. [173]
Lean
- Lean - an open source theorem prover and programming language being developed at Microsoft Research. Lean aims to bridge the gap between interactive and automated theorem proving, by situating automated tools and methods in a framework that supports user interaction and the construction of fully specified axiomatic proofs. The Lean project was launched by Leonardo de Moura at Microsoft Research in 2013. It is an open source project, hosted on GitHub. Lean 3 is the latest official release. Development has shifted to Lean 4, which is not ready for general use yet. Meanwhile, members of the Lean community centered around the mathematical components library mathlib have continued maintenance and development of Lean 3 as a separate community version. Further information about mathlib and the Lean 3 community releases can be found at the Lean Community website, including an online version of Lean.
- https://github.com/leanprover/lean.vim - Auto-completion and error highlighting
- YouTube: The Future of Mathematics?
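To give a rough sense of what the language looks like, here is a minimal sketch in Lean 4 syntax (illustrative only; `Nat.add_comm` is part of the core library, the rest is a toy example):
```lean
-- State a lemma and prove it by appealing to an existing core lemma.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- The same kind of reasoning written as a tactic proof:
example (p q : Prop) (hp : p) (hq : q) : p ∧ q := by
  constructor   -- split the conjunction into two goals
  · exact hp
  · exact hq
```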
Machine learning
- https://news.ycombinator.com/item?id=23310412
Visualisation
to find those prime vis things again
Gephi
- Gephi is an interactive visualization and exploration platform for all kinds of networks and complex systems, dynamic and hierarchical graphs.
- https://wiki.gephi.org
- http://en.wikipedia.org/wiki/Gephi
Fractals
- YouTube: Fractals are not self-similar
- http://en.wikipedia.org/wiki/Julia_set
- http://acko.net/blog/how-to-fold-a-julia-fractal/
- http://sprott.physics.wisc.edu/carlson/dragons.html
- https://en.wikipedia.org/wiki/Mandelbrot_set
- YouTube: The Mandelbrot Set - Numberphile
- YouTube: What's so special about the Mandelbrot Set? - Numberphile
- https://github.com/ttsiodras/MandelbrotSSE - Real-time Mandelbrot zoom via SSE, AVX, OpenMP, CUDA, XaoS...
- https://github.com/kallaballa/FractalDive - an attempt at implementing a highly optimized realtime mandelbrot fractal zoom in a highly platform independent fashion using modern C++.
- https://github.com/ath92/fractal-garden - an interactive 3d fractal explorer. The fractals are rendered using a technique called raymarching. The images are rendered at mostly acceptable framerates using WebGL (through a library called regl). To ensure that framerates stay interactive, images can be set to render at a lower resolution first, after which multiple lower-resolution images are sequentially combined into a full-resolution end result. If the renderer runs out of time while generating a frame, it can render one of the intermediate steps. This means it prioritizes interactivity over image quality.
- https://github.com/HackerPoet/FractalSoundExplorer - Explore fractals in an audio-visual sandbox
- YouTube: Sounds of the Mandelbrot Set
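A minimal escape-time sketch of the iteration these renderers are built around (plain Python, unoptimized; projects like the SSE/CUDA ones above vectorize and parallelize this inner loop):
```python
# Escape-time test: c is (approximately) in the Mandelbrot set if z -> z^2 + c
# stays bounded; we cap the iteration count and use |z| > 2 as the escape test.

def escape_time(c, max_iter=50):
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return n            # escaped after n iterations: outside the set
    return max_iter             # never escaped within the cap: treat as inside

# Coarse ASCII rendering of the region [-2, 1] x [-1.2, 1.2].
for row in range(24):
    im = 1.2 - row * (2.4 / 23)
    line = ""
    for col in range(78):
        re = -2.0 + col * (3.0 / 77)
        line += "#" if escape_time(complex(re, im)) == 50 else " "
    print(line)
```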
Software
- mandelstir - Animating fractional iterations in the Mandelbrot Set and Julia Sets.
- Mandelbulber is an experimental application that helps to make rendering 3D Mandelbrot fractals much more accessible. A few of the supported 3D fractals: Mandelbulb, Mandelbox, BulbBox, JuliaBulb, Menger Sponge, Quaternion, Trigonometric, Hypercomplex, and Iterated Function Systems (IFS). All of these can be combined into infinite variations with the ability to hybridize different formulas together.
- FractalNow - A fast, advanced, multi-platform fractal generator.
- Fraqtive is an open source, multi-platform generator of the Mandelbrot family fractals. It uses very fast algorithms supporting SSE2 and multi-core processors. It generates high quality anti-aliased images and renders 3D scenes using OpenGL. It allows real-time navigation and dynamic generation of the Julia fractal preview.
- Fragmentarium is an open source, cross-platform IDE for exploring pixel based graphics on the GPU. It is inspired by Adobe's Pixel Bender, but uses GLSL, and is created specifically with fractals and generative systems in mind.
Other
Informatics
Nature
to sort
- http://mathoverflow.net/questions/156263/perfectly-centered-break-of-a-perfectly-aligned-pool-ball-rack/156407#156407 [185]
- http://www.wisdomandwonder.com/link/6582/some-thoughts-on-mathematics
- http://news.ycombinator.com/item?id=4839881
- http://www.evanmiller.org/mathematical-hacker.html
- https://en.wikipedia.org/wiki/Transformation_(function) - a function f that maps a set X to itself, i.e. f : X → X. In other areas of mathematics, a transformation may simply be any function, regardless of domain and codomain.
Examples include linear transformations and affine transformations, rotations, reflections and translations. These can be carried out in Euclidean space, particularly in R2 (two dimensions) and R3 (three dimensions). They are also operations that can be performed using linear algebra, and described explicitly using matrices.
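A small numpy sketch of this matrix view (illustrative): a rotation of the plane is a linear transformation given by a 2×2 matrix, and an affine transformation adds a translation.
```python
import numpy as np

theta = np.pi / 2                              # rotate by 90 degrees
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

p = np.array([1.0, 0.0])                       # a point on the x-axis
print(R @ p)                                   # ~ [0, 1]: rotated onto the y-axis

t = np.array([2.0, 3.0])                       # affine map: x -> R x + t
print(R @ p + t)                               # ~ [2, 4]
```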
Integral transforms
Wavelets
- https://en.wikipedia.org/wiki/Wavelet - a wave-like oscillation with an amplitude that begins at zero, increases, and then decreases back to zero. It can typically be visualized as a "brief oscillation" like one recorded by a seismograph or heart monitor. Generally, wavelets are intentionally crafted to have specific properties that make them useful for signal processing.
- https://en.wikipedia.org/wiki/Wavelet_transform - a wavelet series is a representation of a square-integrable (real- or complex-valued) function by a certain orthonormal series generated by a wavelet. The formal definitions cover both orthonormal wavelets and the integral wavelet transform.
- https://en.wikipedia.org/wiki/Discrete_wavelet_transform - any wavelet transform for which the wavelets are discretely sampled. As with other wavelet transforms, a key advantage it has over Fourier transforms is temporal resolution: it captures both frequency and location information (location in time).
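A minimal numpy sketch of one level of the Haar DWT (assumes an even-length signal; a library such as PyWavelets handles general wavelets, multiple levels, and boundary effects):
```python
import numpy as np

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])

# Pairwise averages = approximation (low-pass), pairwise differences = detail (high-pass).
cA = (x[0::2] + x[1::2]) / np.sqrt(2)
cD = (x[0::2] - x[1::2]) / np.sqrt(2)
print(cA, cD)        # cD is largest exactly where the signal jumps: time-localized

# Perfect reconstruction from the two bands.
x_rec = np.empty_like(x)
x_rec[0::2] = (cA + cD) / np.sqrt(2)
x_rec[1::2] = (cA - cD) / np.sqrt(2)
print(np.allclose(x, x_rec))   # True
```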
Fourier Transform
- YouTube: But what is the Fourier Transform? A visual introduction. - An animated introduction to the Fourier Transform, winding graphs around circles by 3Blue1Brown.
- https://jackschaedler.github.io/circles-sines-signals/
- http://blog.revolutionanalytics.com/2014/01/the-fourier-transform-explained-in-one-sentence.html [187]
- http://www.altdevblogaday.com/2011/05/17/understanding-the-fourier-transform/
- http://betterexplained.com/articles/an-interactive-guide-to-the-fourier-transform/
- http://toxicdump.org/stuff/FourierToy.swf [188]
- http://bl.ocks.org/jinroh/7524988
- https://news.ycombinator.com/item?id=7789767
- http://nautil.us/blog/the-math-trick-behind-mp3s-jpegs-and-homer-simpsons-face
- Fourier Analysis (and guitar jammin') - Sixty Symbols
- Every picture is made of waves - Sixty Symbols
- http://david.li/filtering/ [189]
- http://www.leancrew.com/all-this/2015/01/the-michelson-fourier-analyzer/ [190]
- http://bgrawi.com/Fourier-Visualizations/ [191]
- https://github.com/anthonix/ffts - The Fastest Fourier Transform in the South
- https://github.com/gpu-fftw/gpu_fftw - Run FFTW3 programs with Raspberry Pi GPU - fast ffts!
- myFourierEpicycles - draw your own Fourier epicycles. - This website allows you to draw your own Fourier epicycle drawings, either by uploading an SVG or by mouse. I had quite a bit of fun creating this, so at the end there is a brief explanation trying to give the reader some mathematical intuition as to how revolving circles and the Fourier transform are connected. [192]
- https://github.com/HiFi-LoFi/FFTConvolver - Audio convolution algorithm in C++ for real time audio processing
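A minimal numpy sketch of the idea running through the links above: sample a signal made of two sine waves, take its discrete Fourier transform, and read the dominant frequencies back off the spectrum.
```python
import numpy as np

fs = 1000                                   # sampling rate (Hz)
t = np.arange(0, 1.0, 1.0 / fs)             # one second of samples
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

spectrum = np.fft.rfft(signal)              # FFT of a real-valued signal
freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)

# The two largest magnitude bins sit at the two input frequencies.
peaks = freqs[np.argsort(np.abs(spectrum))[-2:]]
print(sorted(peaks))                        # -> [50.0, 120.0]
```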
Laplace Transform
- https://en.wikipedia.org/wiki/Laplace_transform - an integral transform named after its inventor Pierre-Simon Laplace. It transforms a function of a real variable t (often time) to a function of a complex variable s (complex frequency). The transform has many applications in science and engineering. The Laplace transform is similar to the Fourier transform. While the Fourier transform of a function is a complex function of a real variable (frequency), the Laplace transform of a function is a complex function of a complex variable. Laplace transforms are usually restricted to functions of t with t ≥ 0. A consequence of this restriction is that the Laplace transform of a function is a holomorphic function of the variable s. Unlike the Fourier transform, the Laplace transform of a distribution is generally a well-behaved function. Techniques of complex variables can also be used to directly study Laplace transforms. As a holomorphic function, the Laplace transform has a power series representation. This power series expresses a function as a linear superposition of moments of the function. This perspective has applications in probability theory. The Laplace transform is invertible on a large class of functions. The inverse Laplace transform takes a function of a complex variable s (often frequency) and yields a function of a real variable t (often time). Given a simple mathematical or functional description of an input or output to a system, the Laplace transform provides an alternative functional description that often simplifies the process of analyzing the behavior of the system, or in synthesizing a new system based on a set of specifications. So, for example, Laplace transformation from the time domain to the frequency domain transforms differential equations into algebraic equations and convolution into multiplication.
- YouTube: What does the Laplace Transform really tell us? A visual explanation (plus applications) [193]
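A small worked example of the definition above, written out in LaTeX: the transform of an exponential, and the differentiation rule that turns differential equations into algebra.
```latex
% Definition: \mathcal{L}\{f\}(s) = \int_0^\infty f(t)\,e^{-st}\,dt  (for t \ge 0).
\[
  \mathcal{L}\{e^{at}\}(s)
    = \int_0^\infty e^{at} e^{-st}\,dt
    = \int_0^\infty e^{-(s-a)t}\,dt
    = \frac{1}{s-a},
  \qquad \operatorname{Re}(s) > a .
\]
% Differentiation in time becomes multiplication by s (minus the initial value):
\[
  \mathcal{L}\{f'\}(s) = s\,\mathcal{L}\{f\}(s) - f(0).
\]
```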
to sort