Physics

From Things and Stuff Wiki
Revision as of 00:57, 27 July 2024 by Milk (talk | contribs) (→‎Spinors)


to rework. slowly merging sections..

General


  • The Physics Travel Guide - a tool that makes learning physics easier. Each page contains three layers of explanation with increasing sophistication, called layman, student and researcher, so that readers can always find an explanation they understand.


  • https://en.wikipedia.org/wiki/Experimental_physics - disciplines and sub-disciplines in the field of physics that are concerned with the observation of physical phenomena and experiments. Methods vary from discipline to discipline, from simple experiments and observations, such as the Cavendish experiment, to more complicated ones, such as the Large Hadron Collider.



Fields and spacetime

Spacetime

  • https://en.wikipedia.org/wiki/Spacetime - any mathematical model that combines space and time into a single interwoven continuum. Since 300 BCE, the spacetime of our universe has historically been interpreted from a Euclidean space perspective, which regards space as consisting of three dimensions, and time as consisting of one dimension, the "fourth dimension". By combining space and time into a single manifold, called Minkowski space (introduced by Hermann Minkowski in 1908, following Einstein's 1905 special relativity), physicists have significantly simplified a large number of physical theories, as well as described in a more uniform way the workings of the universe at both the supergalactic and subatomic levels.


  • https://en.wikipedia.org/wiki/Space_(mathematics) - a set (sometimes called a universe) with some added structure. Mathematical spaces often form a hierarchy, i.e., one space may inherit all the characteristics of a parent space. For instance, all inner product spaces are also normed vector spaces, because the inner product induces a norm on the inner product space. Modern mathematics treats "space" quite differently compared to classical mathematics.


  • https://en.wikipedia.org/wiki/Principle_of_relativity - the requirement that the equations describing the laws of physics have the same form in all admissible frames of reference. For example, in the framework of special relativity the Maxwell equations have the same form in all inertial frames of reference. In the framework of general relativity the Maxwell equations or the Einstein field equations have the same form in arbitrary frames of reference. Several principles of relativity have been successfully applied throughout science, whether implicitly (as in Newtonian mechanics) or explicitly (as in Albert Einstein's special relativity and general relativity).


  • https://en.wikipedia.org/wiki/Spacetime_symmetries - features of spacetime that can be described as exhibiting some form of symmetry. The role of symmetry in physics is important in simplifying solutions to many problems. Spacetime symmetries are used in the study of exact solutions of Einstein's field equations of general relativity.


  • https://en.wikipedia.org/wiki/Space#Physics - Many of the laws of physics, such as the various inverse square laws, depend on dimension three. In physics, our three-dimensional space is viewed as embedded in four-dimensional spacetime, called Minkowski space (see special relativity). The idea behind space-time is that time is hyperbolic-orthogonal to each of the three spatial dimensions.


  • https://en.wikipedia.org/wiki/Hyperbolic_orthogonality - In plane geometry, two lines are hyperbolic orthogonal when they are reflections of each other over the asymptote of a given hyperbola. Since Hermann Minkowski's foundation for spacetime study in 1908, the concept of points in a spacetime plane being hyperbolic-orthogonal to a timeline (tangent to a world line) has been used to define simultaneity of events relative to the timeline. Given a hyperbola with asymptote A, its reflection in A produces the conjugate hyperbola. Any diameter of the original hyperbola is reflected to a conjugate diameter. The directions indicated by conjugate diameters are taken for space and time axes in relativity. As E. T. Whittaker wrote in 1910, "[the] hyperbola is unaltered when any pair of conjugate diameters are taken as new axes, and a new unit of length is taken proportional to the length of either of these diameters." On this principle of relativity, he then wrote the Lorentz transformation in the modern form using rapidity.


  • https://en.wikipedia.org/wiki/Proper_time - along a timelike world line is defined as the time as measured by a clock following that line. It is thus independent of coordinates, and a Lorentz scalar.[1] The proper time interval between two events on a world line is the change in proper time. This interval is the quantity of interest, since proper time itself is fixed only up to an arbitrary additive constant, namely the setting of the clock at some event along the world line. The proper time between two events depends not only on the events but also the world line connecting them, and hence on the motion of the clock between the events. It is expressed as an integral over the world line. An accelerated clock will measure a smaller elapsed time between two events than that measured by a non-accelerated (inertial) clock between the same two events. The twin paradox is an example of this effect.

In terms of four-dimensional spacetime, proper time is analogous to arc length in three-dimensional (Euclidean) space. By convention, proper time is usually represented by the Greek letter τ (tau) to distinguish it from coordinate time represented by t. By contrast, coordinate time is the time between two events as measured by an observer using that observer's own method of assigning a time to an event. In the special case of an inertial observer in special relativity, the time is measured using the observer's clock and the observer's definition of simultaneity. The concept of proper time was introduced by Hermann Minkowski in 1908,[2] and is a feature of Minkowski diagrams.
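The clock effect described above can be sketched numerically by integrating dτ = √(1 − v(t)²/c²) dt along two world lines between the same pair of coordinate times. A minimal Python sketch (the constant 0.8c velocity profile is an illustrative stand-in for the travelling twin):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def proper_time(v_of_t, t_total, steps=100_000):
    """Midpoint-rule integral of dτ = sqrt(1 - v(t)^2 / c^2) dt along a world line."""
    dt = t_total / steps
    return sum(
        math.sqrt(1.0 - (v_of_t((i + 0.5) * dt) / C) ** 2) * dt
        for i in range(steps)
    )

t_total = 10.0                                       # coordinate time, s
inertial  = proper_time(lambda t: 0.0, t_total)      # clock at rest
traveller = proper_time(lambda t: 0.8 * C, t_total)  # clock moving at 0.8c
```

At a constant 0.8c the moving clock accumulates only √(1 − 0.64) = 0.6 of the coordinate time, i.e. 6 s against the inertial clock's 10 s.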


  • https://en.wikipedia.org/wiki/Rapidity - commonly used as a measure for relativistic velocity. Mathematically, rapidity can be defined as the hyperbolic angle that differentiates two frames of reference in relative motion, each frame being associated with distance and time coordinates. For one-dimensional motion, rapidities are additive whereas velocities must be combined by Einstein's velocity-addition formula. For low speeds, rapidity and velocity are proportional, but for higher velocities, rapidity takes a larger value, the rapidity of light being infinite.

Using the inverse hyperbolic function artanh, the rapidity w corresponding to velocity v is w = artanh(v / c) where c is the velocity of light. For low speeds, w is approximately v / c. Since in relativity any velocity v is constrained to the interval −c < v < c the ratio v / c satisfies −1 < v / c < 1. The inverse hyperbolic tangent has the unit interval (−1, 1) for its domain and the whole real line for its range, and so the interval −c < v < c maps onto −∞ < w < ∞.
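The additivity of rapidity, in contrast to Einstein's velocity-addition formula, is easy to check numerically. A small Python sketch (the sample speeds are arbitrary):

```python
import math

def rapidity(beta):                  # beta = v/c
    return math.atanh(beta)

def einstein_add(b1, b2):            # relativistic velocity addition, in units of c
    return (b1 + b2) / (1.0 + b1 * b2)

b1, b2 = 0.6, 0.7                    # arbitrary sample speeds
# Rapidities simply add, while velocities must use the addition formula:
w_total = rapidity(b1) + rapidity(b2)
assert math.isclose(math.tanh(w_total), einstein_add(b1, b2))

# For low speeds, w is approximately v/c:
assert math.isclose(rapidity(1e-4), 1e-4, rel_tol=1e-6)
```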


  • https://en.wikipedia.org/wiki/Mach%27s_principle - In theoretical physics, particularly in discussions of gravitation theories, Mach's principle (or Mach's conjecture) is the name given by Einstein to an imprecise hypothesis often credited to the physicist and philosopher Ernst Mach. The idea is that local inertial frames are determined by the large scale distribution of matter, as exemplified by this anecdote:

You are standing in a field looking at the stars. Your arms are resting freely at your side, and you see that the distant stars are not moving. Now start spinning. The stars are whirling around you and your arms are pulled away from your body. Why should your arms be pulled away when the stars are whirling? Why should they be dangling freely when the stars don't move?

Mach's principle says that this is not a coincidence—that there is a physical law that relates the motion of the distant stars to the local inertial frame. If you see all the stars whirling around you, Mach suggests that there is some physical law which would make it so you would feel a centrifugal force. There are a number of rival formulations of the principle. It is often stated in vague ways, like "mass out there influences inertia here". A very general statement of Mach's principle is "Local physical laws are determined by the large-scale structure of the universe."

This concept was a guiding factor in Einstein's development of the general theory of relativity. Einstein realized that the overall distribution of matter would determine the metric tensor, which tells you which frame is rotationally stationary. Frame-dragging and conservation of gravitational angular momentum makes this into a true statement in the general theory in certain solutions. But because the principle is so vague, many distinct statements can be (and have been) made which would qualify as a Mach principle, and some of these are false. The Gödel rotating universe is a solution of the field equations which is designed to disobey Mach's principle in the worst possible way. In this example, the distant stars seem to be revolving faster and faster as one moves further away. This example doesn't completely settle the question, because it has closed timelike curves.



  • https://en.wikipedia.org/wiki/Theory_of_relativity - or relativity in physics, usually encompasses two theories by Albert Einstein: special relativity and general relativity. (The word relativity can also be used in the context of an older theory, that of Galilean invariance.)
  • https://en.wikipedia.org/wiki/Special_relativity - the accepted physical theory regarding the relationship between space and time. It is based on two postulates: (1) that the laws of physics are invariant (i.e., identical) in all inertial systems (non-accelerating frames of reference); and (2) that the speed of light in a vacuum is the same for all observers, regardless of the motion of the light source. It was originally proposed in 1905 by Albert Einstein in the paper "On the Electrodynamics of Moving Bodies". The inconsistency of classical mechanics with Maxwell’s equations of electromagnetism led to the development of special relativity, which corrects classical mechanics to handle situations involving motions nearing the speed of light. As of today, special relativity is the most accurate model of motion at any speed. Even so, classical mechanics is still useful (due to its simplicity and high accuracy) as an approximation at small velocities relative to the speed of light.

Special relativity implies a wide range of consequences, which have been experimentally verified, including length contraction, time dilation, relativistic mass, mass–energy equivalence, a universal speed limit, and relativity of simultaneity. It has replaced the conventional notion of an absolute universal time with the notion of a time that is dependent on reference frame and spatial position. Rather than an invariant time interval between two events, there is an invariant spacetime interval. Combined with other laws of physics, the two postulates of special relativity predict the equivalence of mass and energy, as expressed in the mass–energy equivalence formula E = mc², where c is the speed of light in vacuum.
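The invariant spacetime interval mentioned above can be checked directly: a Lorentz boost changes the t and x coordinates of an event, but leaves t² − x² (in units where c = 1) untouched. A minimal Python sketch with arbitrary sample values:

```python
import math

def boost(t, x, beta):
    """Lorentz boost along x with velocity v = beta*c, in units where c = 1."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return gamma * (t - beta * x), gamma * (x - beta * t)

t, x = 3.0, 1.0                      # an arbitrary event
for beta in (0.1, 0.5, 0.9):
    t2, x2 = boost(t, x, beta)
    # Coordinates change, but the interval s^2 = t^2 - x^2 does not.
    assert math.isclose(t2**2 - x2**2, t**2 - x**2)
```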

A defining feature of special relativity is the replacement of the Galilean transformations of classical mechanics with the Lorentz transformations. Time and space cannot be defined separately from one another. Rather space and time are interwoven into a single continuum known as spacetime. Events that occur at the same time for one observer could occur at different times for another. The theory is called "special" because it applied the principle of relativity only to the special case of inertial reference frames. Einstein later published a paper on general relativity in 1915 to apply the principle in the general case, that is, to any frame so as to handle general coordinate transformations, and gravitational effects.

As Galilean relativity is now considered an approximation of special relativity valid for low speeds, special relativity is considered an approximation of the theory of general relativity valid for weak gravitational fields. The presence of gravity becomes undetectable at sufficiently small-scale, free-falling conditions. General relativity incorporates non-Euclidean geometry, so that the gravitational effects are represented by the geometric curvature of spacetime. By contrast, special relativity is restricted to flat spacetime. The geometry of spacetime in special relativity is called Minkowski space. A locally Lorentz invariant frame that abides by special relativity can be defined at sufficiently small scales, even in curved spacetime.

Galileo Galilei had already postulated that there is no absolute and well-defined state of rest (no privileged reference frames), a principle now called Galileo's principle of relativity. Einstein extended this principle so that it accounted for the constant speed of light, a phenomenon that had been recently observed in the Michelson–Morley experiment. He also postulated that it holds for all the laws of physics, including both the laws of mechanics and of electrodynamics.







  • https://en.wikipedia.org/wiki/Four-momentum - the generalization of classical three-dimensional momentum to four-dimensional spacetime. The four-momentum is useful in relativistic calculations because it is a Lorentz vector, which makes it easy to keep track of how it transforms under Lorentz transformations.
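As a quick illustration (units where c = 1, arbitrary sample values): the energy and momentum components change under a boost, but the combination E² − p² always returns the squared rest mass.

```python
import math

def boost(E, p, beta):
    """Lorentz boost of the (E, p_x) components, in units where c = 1."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return gamma * (E - beta * p), gamma * (p - beta * E)

m, p = 1.0, 2.5                      # rest mass and momentum, arbitrary values
E = math.sqrt(m**2 + p**2)           # on-shell energy

E2, p2 = boost(E, p, 0.8)
# E and p are frame-dependent, but E^2 - p^2 = m^2 in every frame.
assert math.isclose(E2**2 - p2**2, m**2)
```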




  • https://en.wikipedia.org/wiki/General_relativity - a theory of gravitation that was developed by Albert Einstein between 1907 and 1915, with contributions by many others after 1915. According to general relativity, the observed gravitational attraction between masses results from the warping of space and time by those masses.




  • https://en.wikipedia.org/wiki/Mathematics_of_general_relativity - refers to various mathematical structures and techniques that are used in studying and formulating Albert Einstein's theory of general relativity. The main tools used in this geometrical theory of gravitation are tensor fields defined on a Lorentzian manifold representing spacetime. This article is a general description of the mathematics of general relativity.





  • https://en.wikipedia.org/wiki/Einstein_field_equations - a set of 10 equations in Albert Einstein's general theory of relativity which describe the fundamental interaction of gravitation as a result of spacetime being curved by matter and energy. First published by Einstein in 1915 as a tensor equation, the EFE equate local spacetime curvature (expressed by the Einstein tensor) with the local energy and momentum within that spacetime (expressed by the stress–energy tensor).

The relationship between the metric tensor and the Einstein tensor allows the EFE to be written as a set of non-linear partial differential equations when used in this way. The solutions of the EFE are the components of the metric tensor. The inertial trajectories of particles and radiation (geodesics) in the resulting geometry are then calculated using the geodesic equation.





  • https://en.wikipedia.org/wiki/Adiabatic_process - a type of thermodynamic process that occurs without transferring heat or mass between the thermodynamic system and its environment. Unlike an isothermal process, an adiabatic process transfers energy to the surroundings only as work. As a key concept in thermodynamics, the adiabatic process supports the theory that explains the first law of thermodynamics.


  • https://en.wikipedia.org/wiki/T-symmetry - the theoretical symmetry of physical laws under a time reversal transformation: Although in restricted contexts one may find this symmetry, the observable universe itself does not show symmetry under time reversal, primarily due to the second law of thermodynamics. Hence time is said to be non-symmetric, or asymmetric, except for equilibrium states when the second law of thermodynamics predicts the time symmetry to hold. However, quantum noninvasive measurements are predicted to violate time symmetry even in equilibrium, contrary to their classical counterparts, although it has not yet been experimentally confirmed.

Time asymmetries are generally distinguished as those intrinsic to the dynamic physical laws, those due to the initial conditions of our universe, and those due to measurements. The T-asymmetry of the weak force is of the first kind, the T-asymmetry of the second law of thermodynamics is of the second kind, and the T-asymmetry of noninvasive measurements is of the third kind.


  • https://en.wikipedia.org/wiki/Poincaré_group - named after Henri Poincaré (1906), was first defined by Minkowski (1908) as the group of Minkowski spacetime isometries. It is a ten-generator non-abelian Lie group of fundamental importance in physics.



  • https://en.wikipedia.org/wiki/Eigenvalues_and_eigenvectors - In linear algebra, an eigenvector or characteristic vector of a linear transformation is a non-zero vector that does not change its direction when that linear transformation is applied to it. More formally, if T is a linear transformation from a vector space V over a field F into itself and v is a vector in V that is not the zero vector, then v is an eigenvector of T if T(v) is a scalar multiple of v.
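The defining property is straightforward to check numerically; a short sketch using NumPy, with an arbitrary real symmetric matrix standing in for the linear transformation:

```python
import numpy as np

# An arbitrary linear transformation (real symmetric, so its eigenvalues are real).
T = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, eigvecs = np.linalg.eig(T)
for lam, v in zip(eigvals, eigvecs.T):
    # T(v) is a scalar multiple of v: the eigenvector's direction is unchanged.
    assert np.allclose(T @ v, lam * v)
```

For this matrix the eigenvalues are 3 and 1, with eigenvectors along (1, 1) and (1, −1).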




  • A rubber sheet can be mapped to a scalar theory of gravity
  • Relativity is a tensor field theory










  • https://en.wikipedia.org/wiki/Stress–energy_tensor - a tensor quantity in physics that describes the density and flux of energy and momentum in spacetime, generalizing the stress tensor of Newtonian physics. It is an attribute of matter, radiation, and non-gravitational force fields. The stress–energy tensor is the source of the gravitational field in the Einstein field equations of general relativity, just as mass density is the source of such a field in Newtonian gravity.



  • https://en.wikipedia.org/wiki/Scalar_curvature - In general relativity, the scalar curvature is the Lagrangian density for the Einstein–Hilbert action. The Euler–Lagrange equations for this Lagrangian under variations in the metric constitute the vacuum Einstein field equations, and the stationary metrics are known as Einstein metrics. The scalar curvature is defined as the trace of the Ricci tensor, and it can be characterized as a multiple of the average of the sectional curvatures at a point. Unlike the Ricci tensor and sectional curvature, however, global results involving only the scalar curvature are extremely subtle and difficult. One of the few is the positive mass theorem of Richard Schoen, Shing-Tung Yau and Edward Witten. Another is the Yamabe problem, which seeks extremal metrics in a given conformal class for which the scalar curvature is constant.


  • https://en.wikipedia.org/wiki/Scalar_field_solution - an exact solution of the Einstein field equation in which the gravitational field is due entirely to the field energy and momentum of a scalar field. Such a field may or may not be massless, and it may be taken to have minimal curvature coupling, or some other choice, such as conformal coupling.



"Everything, by nature of simply existing, is "moving" at the speed of light (which really has nothing to do with light: more on that later). Yes, that does include you.

"Our understanding of the universe is that the way that we perceive space and time as separate things is, to be frank, wrong. They aren't separate: the universe is made of "spacetime," all one word. A year and a lightyear describe different things in our day to day lives, but from a physicist's point of view, they're actually the exact same thing (depending on what kind of physics you're doing).

"You're (presumably) sitting in your chair right now, which means you're not traveling through space at all. Since you have to travel through spacetime at c (speed of light), though, that means all of your motion is through time."


photons don't know time; time moves at the speed of light.??? v=r or not? ish?


  • https://en.wikipedia.org/wiki/Geodetic_effect - also known as geodetic precession, de Sitter precession or de Sitter effect, represents the effect of the curvature of spacetime, predicted by general relativity, on a vector carried along with an orbiting body. The term geodetic effect has two slightly different meanings as the moving body may be spinning or non-spinning. Non-spinning bodies move in geodesics, whereas spinning bodies move in slightly different orbits. The difference between de Sitter precession and Lense–Thirring precession (frame dragging) is that the de Sitter effect is due simply to the presence of a central mass, whereas Lense–Thirring precession is due to the rotation of the central mass. The total precession is calculated by combining the de Sitter precession with the Lense–Thirring precession.
  • https://en.wikipedia.org/wiki/Geodesic - a generalization of the notion of a "straight line" to "curved spaces". The term "geodesic" comes from geodesy, the science of measuring the size and shape of Earth; in the original sense, a geodesic was the shortest route between two points on the Earth's surface, namely, a segment of a great circle. The term has been generalized to include measurements in much more general mathematical spaces; for example, in graph theory, one might consider a geodesic between two vertices/nodes of a graph.

In the presence of an affine connection, a geodesic is defined to be a curve whose tangent vectors remain parallel if they are transported along it. If this connection is the Levi-Civita connection induced by a Riemannian metric, then the geodesics are (locally) the shortest path between points in the space. Geodesics are of particular importance in general relativity. Timelike geodesics in general relativity describe the motion of free falling test particles.


  • https://en.wikipedia.org/wiki/Geodesics_in_general_relativity - generalizes the notion of a "straight line" to curved spacetime. Importantly, the world line of a particle free from all external, non-gravitational force, is a particular type of geodesic. In other words, a freely moving or falling particle always moves along a geodesic. In general relativity, gravity can be regarded as not a force but a consequence of a curved spacetime geometry where the source of curvature is the stress–energy tensor (representing matter, for instance). Thus, for example, the path of a planet orbiting around a star is the projection of a geodesic of the curved 4-D spacetime geometry around the star onto 3-D space.






  • https://en.wikipedia.org/wiki/Length_contraction - the phenomenon of a decrease in length of an object as measured by an observer who is traveling at any non-zero velocity relative to the object. This contraction (more formally called Lorentz contraction or Lorentz–FitzGerald contraction after Hendrik Lorentz and George Francis FitzGerald) is usually only noticeable at a substantial fraction of the speed of light.
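The size of the effect is governed by the Lorentz factor γ = 1/√(1 − v²/c²): the measured length is L = L₀/γ. A quick Python sketch (the sample speeds are illustrative):

```python
import math

def gamma(beta):                     # Lorentz factor, beta = v/c
    return 1.0 / math.sqrt(1.0 - beta**2)

def contracted_length(proper_length, beta):
    """Length measured by an observer moving at v = beta*c along the rod."""
    return proper_length / gamma(beta)

# At 86.6% of c (gamma = 2), a 1 m rod measures half a metre.
assert math.isclose(contracted_length(1.0, math.sqrt(3) / 2), 0.5)
# At everyday speeds the effect is negligible.
assert math.isclose(contracted_length(1.0, 1e-7), 1.0, rel_tol=1e-12)
```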




  • https://en.wikipedia.org/wiki/World_line - the unique path of an object as it travels through 4-dimensional spacetime. The concept of "world line" is distinguished from the concept of "orbit" or "trajectory" (such as an orbit in space or a trajectory of a truck on a road map) by the time dimension, and typically encompasses a large area of spacetime wherein perceptually straight paths are recalculated to show their (relatively) more absolute position states — to reveal the nature of special relativity or gravitational interactions. The idea of world lines originates in physics and was pioneered by Hermann Minkowski. The term is now most often used in relativity theories.


  • https://en.wikipedia.org/wiki/Schwarzschild_metric - also known as the Schwarzschild vacuum or Schwarzschild solution, is the solution to the Einstein field equations that describes the gravitational field outside a spherical mass, on the assumption that the electric charge of the mass, angular momentum of the mass, and universal cosmological constant are all zero. The solution is a useful approximation for describing slowly rotating astronomical objects such as many stars and planets, including Earth and the Sun. The solution is named after Karl Schwarzschild, who first published the solution in 1916.
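For a non-rotating, uncharged mass, the metric's horizon sits at the Schwarzschild radius r_s = 2GM/c². A quick back-of-the-envelope sketch in Python (constants rounded to four significant figures):

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 299_792_458.0    # speed of light, m/s

def schwarzschild_radius(mass_kg):
    """r_s = 2GM/c^2: the horizon radius of the Schwarzschild solution."""
    return 2.0 * G * mass_kg / C**2

r_sun   = schwarzschild_radius(1.989e30)   # one solar mass: roughly 3 km
r_earth = schwarzschild_radius(5.972e24)   # one Earth mass: roughly 9 mm
```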







  • https://en.wikipedia.org/wiki/Unruh_effect - (sometimes the Fulling–Davies–Unruh effect) the prediction that an accelerating observer will observe blackbody radiation where an inertial observer would observe none. In other words, the background appears to be warm from an accelerating reference frame; in layman's terms, a thermometer waved around in empty space, subtracting any other contribution to its temperature, will record a non-zero temperature. For a uniformly accelerating observer, the ground state of an inertial observer is seen as in thermodynamic equilibrium with a non-zero temperature.



  • https://en.wikipedia.org/wiki/Scalar_theories_of_gravitation - field theories of gravitation in which the gravitational field is described using a scalar field, which is required to satisfy some field equation. Note: This article focuses on relativistic classical field theories of gravitation. The best known relativistic classical field theory of gravitation, general relativity, is a tensor theory, in which the gravitational interaction is described using a tensor field.





Quantum mechanics

a meeeess. still matching fields to excitations to particles etc.


Quanta is the plural of quantum.





  • Notes: Quantum Mechanics - This short lecture series has two main goals: 1. To introduce you to quantum mechanics at a level necessary for a good understanding of the fundamentals of nuclear magnetic resonance (NMR) and 2. To present the quantum mechanical description of NMR in sufficient detail so that you can understand multiple quantum coherence effects. [12]




  • YouTube playlist: Carthage PHY4200 Quantum mechanics video lectures for PHY4200, spring 2013, at Carthage College. These videos are meant to accompany in-class tutorials, problem solving exercises, and homeworks.




  • https://en.wikipedia.org/wiki/Quantum_system - the portion of the Universe that is singled out for quantum-mechanical analysis; everything outside the system (the environment) is considered only through its effects on the system. A quantum system is described by a wave function, from which quantities such as the momentum and wavelength of the associated wave are obtained.

to reorder, merge with above



  • https://en.wikipedia.org/wiki/Planck_units - a set of units of measurement defined exclusively in terms of five universal physical constants, in such a manner that these five physical constants take on the numerical value of 1 when expressed in terms of these units. Originally proposed in 1899 by German physicist Max Planck, these units are also known as natural units because the origin of their definition comes only from properties of nature and not from any human construct. Planck units are only one system of several systems of natural units, but Planck units are not based on properties of any prototype object or particle (that would be arbitrarily chosen), but rather on only the properties of free space.

Planck units have significance for theoretical physics since they simplify several recurring algebraic expressions of physical law by nondimensionalization. They are relevant in research on unified theories such as quantum gravity. The Planck scale expresses the region in which the predictions of the Standard Model of quantum field theory and of general relativity are no longer reconcilable, and quantum effects of gravity are expected to dominate. This occurs at energy 1.22×10¹⁹ GeV (the Planck energy), at times 5.39×10⁻⁴⁴ s (the Planck time) and length 1.62×10⁻³⁵ m (the Planck length).


The five universal constants that Planck units, by definition, normalize to 1 are:

  • the speed of light in a vacuum, c,
  • the gravitational constant, G,
  • the reduced Planck constant, ħ,
  • the Coulomb constant, 1/4πε0
  • the Boltzmann constant, kB

Each of these constants can be associated with a fundamental physical theory or concept: c with special relativity, G with general relativity, ħ with quantum mechanics, ε0 with electric charge (electromagnetism), and kB with the notion of temperature (statistical mechanics and thermodynamics).
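Given these constants, the base Planck units follow from simple dimensional combinations; a quick sketch in Python (SI constant values rounded to a few significant figures):

```python
import math

# SI values, rounded (CODATA-style)
c    = 299_792_458.0   # speed of light, m/s
G    = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.0546e-34      # reduced Planck constant, J s
kB   = 1.3807e-23      # Boltzmann constant, J/K

planck_length = math.sqrt(hbar * G / c**3)       # ~1.62e-35 m
planck_time   = math.sqrt(hbar * G / c**5)       # ~5.39e-44 s
planck_mass   = math.sqrt(hbar * c / G)          # ~2.18e-8 kg
planck_temp   = math.sqrt(hbar * c**5 / G) / kB  # ~1.42e32 K
```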


  • https://en.wikipedia.org/wiki/Planck_constant - the quantum of action in quantum mechanics, first described as the proportionality constant between the energy (E) of a photon and the frequency (ν) of its associated electromagnetic wave: E = hν.






E = mc², where E is the energy of a physical system, m is the mass of the system, and c is the speed of light in a vacuum (about 3×10⁸ m/s). In words, energy equals mass multiplied by the square of the speed of light. Because the speed of light is a very large number in everyday units, the formula implies that any small amount of matter contains a very large amount of energy. Some of this energy may be released as heat and light by chemical or nuclear transformations. This also serves to convert units of mass to units of energy, no matter what system of measurement units is used.
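A quick numeric illustration of the scale involved (the one-gram figure is just an example):

```python
C = 299_792_458.0   # speed of light, m/s

def rest_energy_joules(mass_kg):
    """E = m c^2: the rest energy equivalent of a given mass."""
    return mass_kg * C**2

# One gram of matter corresponds to roughly 9e13 joules,
# on the order of a large nuclear weapon's yield.
e_one_gram = rest_energy_joules(1e-3)
```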




  • https://en.wikipedia.org/wiki/Correspondence_principle - states that the behavior of systems described by the theory of quantum mechanics (or by the old quantum theory) reproduces classical physics in the limit of large quantum numbers. In other words, it says that for large orbits and for large energies, quantum calculations must agree with classical calculations. The principle was formulated by Niels Bohr in 1920,[2] though he had previously made use of it as early as 1913 in developing his model of the atom.

The term is also used more generally, to represent the idea that a new theory should reproduce the results of older well-established theories (which become limiting cases) in those domains where the old theories work. Classical quantities appear in quantum mechanics in the form of expected values of observables, and as such the Ehrenfest theorem (which predicts the time evolution of the expected values) lends support to the correspondence principle.









  • https://en.wikipedia.org/wiki/Quantum_spacetime - a generalization of the usual concept of spacetime in which some variables that ordinarily commute are assumed not to commute and form a different Lie algebra. The choice of that algebra still varies from theory to theory. As a result of this change some variables that are usually continuous may become discrete. Often only such discrete variables are called "quantized"; usage varies. The idea of quantum spacetime was proposed in the early days of quantum theory by Heisenberg and Ivanenko as a way to eliminate infinities from quantum field theory. The germ of the idea passed from Heisenberg to Rudolf Peierls, who noted that electrons in a magnetic field can be regarded as moving in a quantum space-time, and to Robert Oppenheimer, who carried it to Hartland Snyder, who published the first concrete example. Snyder's Lie algebra was made simple by C. N. Yang in the same year.












Wave function

  • https://en.wikipedia.org/wiki/Wave_function - a mathematical description of the quantum state of a system. The wave function is a complex-valued probability amplitude, and the probabilities for the possible results of measurements made on the system can be derived from it. The most common symbols for a wave function are the Greek letters ψ or Ψ (lower-case and capital psi, respectively). The wave function is a function of the degrees of freedom corresponding to some maximal set of commuting observables. Once such a representation is chosen, the wave function can be derived from the quantum state.





  • https://en.wikipedia.org/wiki/Operator_(physics) - a function over a space of physical states to another space of physical states. The simplest example of the utility of operators is the study of symmetry (which makes the concept of a group useful in this context). Because of this, they are a very useful tool in classical mechanics. Operators are even more important in quantum mechanics, where they form an intrinsic part of the formulation of the theory.

The mathematical formulation of quantum mechanics (QM) is built upon the concept of an operator. The wavefunction represents the probability amplitude of finding the system in that state. The terms "wavefunction" and "state" in QM context are usually used interchangeably. Physical pure states in quantum mechanics are represented as unit-norm vectors (probabilities are normalized to one) in a special complex Hilbert space. Time evolution in this vector space is given by the application of the evolution operator.

Any observable, i.e., any quantity which can be measured in a physical experiment, should be associated with a self-adjoint linear operator. The operators must yield real eigenvalues, since they are values which may come up as the result of the experiment. Mathematically this means the operators must be Hermitian. The probability of each eigenvalue is related to the projection of the physical state on the subspace related to that eigenvalue. See below for mathematical details.

In the wave mechanics formulation of QM, the wavefunction varies with space and time, or equivalently momentum and time (see position and momentum space for details), so observables are differential operators. In the matrix mechanics formulation, the norm of the physical state should stay fixed, so the evolution operator should be unitary, and the operators can be represented as matrices. Any other symmetry, mapping a physical state into another, should keep this restriction.
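A minimal numerical sketch of these statements (an illustrative toy example, not from the article): a Hermitian matrix plays the role of an observable with real eigenvalues, a unit-norm vector the role of a state, and a unitary matrix the role of the evolution operator that keeps the norm fixed.

```python
import numpy as np

# A Hermitian observable (the Pauli-y matrix); its eigenvalues must be real.
sigma_y = np.array([[0, -1j], [1j, 0]])
assert np.allclose(np.linalg.eigvalsh(sigma_y).imag, 0)

# A unit-norm state (probabilities normalized to one).
psi = np.array([1.0, 1.0j]) / np.sqrt(2)

# Time evolution by a diagonal Hamiltonian H = diag(E0, E1): U = exp(-i H t).
E0, E1, t = 0.3, 1.7, 2.0
U = np.diag([np.exp(-1j * E0 * t), np.exp(-1j * E1 * t)])

psi_t = U @ psi
# Unitary evolution preserves the norm of the state.
print(np.linalg.norm(psi_t))  # -> 1.0 (up to floating point)
```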


  • https://en.wikipedia.org/wiki/Hamiltonian_(quantum_mechanics) - the operator corresponding to the total energy of the system in most of the cases. It is usually denoted by H, also Ȟ or Ĥ. Its spectrum is the set of possible outcomes when one measures the total energy of a system. Because of its close relation to the time-evolution of a system, it is of fundamental importance in most formulations of quantum theory. The Hamiltonian is named after William Rowan Hamilton, who in his spare time also created a revolutionary reformation of Newtonian mechanics, now called Hamiltonian mechanics, that is important in quantum physics.




  • https://en.wikipedia.org/wiki/Momentum_operator - an operator which maps the wave function ψ(x, t) to another function. If this new function is a constant p multiplied by the original wave function ψ, then p is the eigenvalue of the momentum operator, and ψ is the eigenfunction of the momentum operator. In quantum mechanics, the set of eigenvalues of an operator are the possible results measured in an experiment.
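A small sketch of the eigenvalue statement (illustrative, natural units with ħ = 1): a plane wave e^(ipx/ħ) is an eigenfunction of the momentum operator p̂ = −iħ d/dx, which can be checked with a finite-difference derivative.

```python
import numpy as np

hbar = 1.0          # natural units (an assumption for this sketch)
p = 2.5             # chosen momentum eigenvalue
x = np.linspace(0, 10, 100001)
psi = np.exp(1j * p * x / hbar)          # plane wave e^(ipx/hbar)

# Apply the momentum operator p_hat = -i*hbar d/dx via a finite-difference gradient.
p_psi = -1j * hbar * np.gradient(psi, x)

# p_hat psi ~ p * psi everywhere: psi is an eigenfunction with eigenvalue p.
ratio = p_psi[1:-1] / psi[1:-1]          # drop endpoints (one-sided differences)
print(ratio.real.mean())                 # -> approximately 2.5
```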



  • https://en.wikipedia.org/wiki/Double-slit_experiment - a demonstration that light and matter can display characteristics of both classically defined waves and particles; moreover, it displays the fundamentally probabilistic nature of quantum mechanical phenomena. A simpler form of the double-slit experiment was performed originally by Thomas Young in 1801 (well before quantum mechanics). He believed it demonstrated that the wave theory of light was correct and his experiment is sometimes referred to as Young's experiment or Young's slits. The experiment belongs to a general class of "double path" experiments, in which a wave is split into two separate waves that later combine into a single wave. Changes in the path lengths of both waves result in a phase shift, creating an interference pattern. Another version is the Mach–Zehnder interferometer, which splits the beam with a mirror.




  • https://en.wikipedia.org/wiki/Universal_wavefunction - a term introduced by Hugh Everett in his Princeton PhD thesis, The Theory of the Universal Wave Function; it forms a core concept in the relative state interpretation or many-worlds interpretation of quantum mechanics. It has also received more recent investigation from James Hartle and Stephen Hawking, in which they derive a specific solution to the Wheeler–DeWitt equation to explain the initial conditions of Big Bang cosmology.


  • https://en.wikipedia.org/wiki/Noether%27s_theorem - states that every differentiable symmetry of the action of a physical system has a corresponding conservation law. The theorem was proven by mathematician Emmy Noether in 1915 and published in 1918. The action of a physical system is the integral over time of a Lagrangian function (which may or may not be an integral over space of a Lagrangian density function), from which the system's behavior can be determined by the principle of least action. Noether's theorem is used in theoretical physics and the calculus of variations. A generalization of the formulations on constants of motion in Lagrangian and Hamiltonian mechanics (developed in 1788 and 1833, respectively), it does not apply to systems that cannot be modeled with a Lagrangian alone (e.g. systems with a Rayleigh dissipation function). In particular, dissipative systems with continuous symmetries need not have a corresponding conservation law.



  • https://en.wikipedia.org/wiki/Quantum_entanglement - a physical phenomenon that occurs when pairs or groups of particles are generated or interact in ways such that the quantum state of each particle cannot be described independently of the others, even when the particles are separated by a large distance – instead, a quantum state must be described for the system as a whole.

Measurements of physical properties such as position, momentum, spin, and polarization, performed on entangled particles are found to be appropriately correlated. For example, if a pair of particles are generated in such a way that their total spin is known to be zero, and one particle is found to have clockwise spin on a certain axis, the spin of the other particle, measured on the same axis, will be found to be counterclockwise, as to be expected due to their entanglement. However, this behavior gives rise to paradoxical effects: any measurement of a property of a particle can be seen as acting on that particle (e.g., by collapsing a number of superposed states) and will change the original quantum property by some unknown amount; and in the case of entangled particles, such a measurement will be on the entangled system as a whole. It thus appears that one particle of an entangled pair "knows" what measurement has been performed on the other, and with what outcome, even though there is no known means for such information to be communicated between the particles, which at the time of measurement may be separated by arbitrarily large distances.

Such phenomena were the subject of a 1935 paper by Albert Einstein, Boris Podolsky, and Nathan Rosen, and several papers by Erwin Schrödinger shortly thereafter, describing what came to be known as the EPR paradox. Einstein and others considered such behavior to be impossible, as it violated the local realist view of causality (Einstein referring to it as "spooky action at a distance") and argued that the accepted formulation of quantum mechanics must therefore be incomplete. Later, however, the counterintuitive predictions of quantum mechanics were verified experimentally.

Experiments have been performed involving measuring the polarization or spin of entangled particles in different directions, which – by producing violations of Bell's inequality – demonstrate statistically that the local realist view cannot be correct. This has been shown to occur even when the measurements are performed more quickly than light could travel between the sites of measurement: there is no lightspeed or slower influence that can pass between the entangled particles. Recent experiments have measured entangled particles within less than one hundredth of a percent of the travel time of light between them. According to the formalism of quantum theory, the effect of measurement happens instantly. It is not possible, however, to use this effect to transmit classical information at faster-than-light speeds.
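The perfect same-axis anticorrelation of a spin-zero pair described above can be sketched numerically (an illustrative construction, not from the article) with the two-qubit singlet state:

```python
import numpy as np

# Two-qubit singlet state (|01> - |10>)/sqrt(2): total spin zero.
up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
singlet = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)

# Projectors for measuring both spins along the same (z) axis.
P_up = np.outer(up, up)
P_down = np.outer(down, down)

# Probability that both particles give the SAME result (both up or both down):
p_same = (singlet @ np.kron(P_up, P_up) @ singlet
          + singlet @ np.kron(P_down, P_down) @ singlet)
print(p_same)   # -> 0.0: same-axis outcomes are perfectly anticorrelated
```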






  • https://en.wikipedia.org/wiki/Matter_wave - All matter can exhibit wave-like behavior. For example, a beam of electrons can be diffracted just like a beam of light or a water wave. Matter waves are a central part of the theory of quantum mechanics, being an example of wave–particle duality. The concept that matter behaves like a wave is also referred to as the de Broglie hypothesis due to having been proposed by Louis de Broglie in 1924. Matter waves are often referred to as de Broglie waves. The de Broglie wavelength is the wavelength, λ, associated with a massive particle and is related to its momentum, p, through the Planck constant, h: λ = h/p.

Wave-like behavior of matter was first experimentally demonstrated by George Paget Thomson's thin metal diffraction experiment, and independently in the Davisson–Germer experiment both using electrons, and it has also been confirmed for other elementary particles, neutral atoms and even molecules. The wave-like behavior of matter is crucial to the modern theory of atomic structure and particle physics.
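A quick worked example of the de Broglie relation λ = h/p (the chosen electron speed is an assumption for illustration):

```python
# de Broglie wavelength lambda = h / p for an electron at 1% of light speed.
h = 6.62607015e-34       # Planck constant, J*s
m_e = 9.1093837015e-31   # electron mass, kg
c = 2.99792458e8         # speed of light, m/s

v = 0.01 * c             # slow enough to ignore relativistic corrections
p = m_e * v
lam = h / p
print(lam)               # ~2.4e-10 m, comparable to atomic spacings in a crystal
```

Wavelengths of this size are why electron beams diffract off crystal lattices, as in the Davisson–Germer experiment.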

  • https://en.wikipedia.org/wiki/Bloch_wave - also called a Bloch state, Bloch function, or Bloch wave function; named after Swiss physicist Felix Bloch, it is a type of wavefunction for a particle in a periodically repeating environment, most commonly an electron in a crystal: the product of a plane wave and a periodic function with the same periodicity as the crystal. In other words, if you multiply a plane wave by a periodic function, you get a Bloch wave.

Bloch waves are important because of Bloch's theorem, which states that the energy eigenstates for an electron in a crystal can be written as Bloch waves. (More precisely, it states that the electron wave functions in a crystal have a basis consisting entirely of Bloch wave energy eigenstates.) This fact underlies the concept of electronic band structures.
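A sketch of the defining property (illustrative values; the periodic function u is an arbitrary choice): translating a Bloch wave by one lattice period only multiplies it by a phase, ψ(x + a) = e^(ika) ψ(x).

```python
import numpy as np

a = 1.0                          # lattice period
k = 0.7                          # crystal momentum
x = np.linspace(0, 5, 1000)

u = lambda x: 1.0 + 0.3 * np.cos(2 * np.pi * x / a)   # periodic: u(x+a) = u(x)
psi = lambda x: np.exp(1j * k * x) * u(x)             # Bloch wave = plane wave * u

# Bloch's condition: translation by one period is just a phase factor.
lhs = psi(x + a)
rhs = np.exp(1j * k * a) * psi(x)
print(np.allclose(lhs, rhs))     # -> True
```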



Uncertainty

  • https://en.wikipedia.org/wiki/Uncertainty_principle - also known as Heisenberg's uncertainty principle, is any of a variety of mathematical inequalities asserting a fundamental limit to the precision with which certain pairs of physical properties of a particle known as complementary variables, such as position x and momentum p, can be known simultaneously. First introduced in 1927 by the German physicist Werner Heisenberg, it states that the more precisely the position of some particle is determined, the less precisely its momentum can be known, and vice versa.
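A numerical sketch (illustrative, natural units ħ = 1): a Gaussian wave packet saturates the bound Δx·Δp ≥ ħ/2, so computing both spreads directly should give exactly 1/2.

```python
import numpy as np

hbar = 1.0
sigma = 1.3
x = np.linspace(-20, 20, 200001)
dx = x[1] - x[0]

# Normalized Gaussian wave packet.
psi = (np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (2 * sigma**2))

prob = psi**2
mean_x = np.sum(x * prob) * dx
dx_unc = np.sqrt(np.sum((x - mean_x)**2 * prob) * dx)   # position spread

# <p> = 0 for a real wavefunction; <p^2> = hbar^2 * integral |dpsi/dx|^2 dx.
dpsi = np.gradient(psi, x)
dp_unc = hbar * np.sqrt(np.sum(dpsi**2) * dx)           # momentum spread

print(dx_unc * dp_unc)   # -> 0.5 (= hbar/2): the Gaussian saturates the bound
```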
  • https://en.wikipedia.org/wiki/Quantum_fluctuation - the temporary change in the amount of energy in a point in space, as explained in Werner Heisenberg's uncertainty principle. This allows the creation of particle-antiparticle pairs of virtual particles. The effects of these particles are measurable, for example, in the effective charge of the electron, different from its "naked" charge.

In the modern view, energy is always conserved, but because the particle number operator does not commute with a field's Hamiltonian or energy operator, the field's lowest-energy or ground state, often called the vacuum state, is not, as one might expect from that name, a state with no particles, but rather a quantum superposition of particle number eigenstates with 0, 1, 2...etc. particles.

Quantum fluctuations may have been very important in the origin of the structure of the universe: according to the model of inflation the ones that existed when inflation began were amplified and formed the seed of all current observed structure. Vacuum energy may also be responsible for the current accelerated expansion of the universe (cosmological constant).


  • https://en.wikipedia.org/wiki/Born_rule - a law of quantum mechanics which gives the probability that a measurement on a quantum system will yield a given result. It is named after its originator, the physicist Max Born. The Born rule is one of the key principles of quantum mechanics. There have been many attempts to derive the Born rule from the other assumptions of quantum mechanics, with inconclusive results.
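The rule itself is short enough to compute directly. A sketch (illustrative two-level example): the probability of outcome i is the squared magnitude of the projection of the state onto the corresponding eigenvector, P(i) = |⟨e_i|ψ⟩|².

```python
import numpy as np

# A unit-norm state and the eigenbasis of some observable (assumed 2-level example).
psi = np.array([np.sqrt(0.2), np.sqrt(0.8)])
basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]

# Born rule: P(outcome i) = |<e_i | psi>|^2
probs = [abs(np.vdot(e, psi))**2 for e in basis]
print(probs)            # -> [0.2, 0.8]; the probabilities sum to 1
```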



  • YouTube: Understanding the Uncertainty Principle with Quantum Fourier Series | Space Time
  • https://en.wikipedia.org/wiki/Complementarity_(physics) - a theoretical and experimental result of quantum mechanics, also referred to as the principle of complementarity, closely associated with the Copenhagen interpretation. It holds that objects have complementary properties which cannot all be observed or measured at the same time. Bohr's complementarity principle should not be confused with observables that do not commute, like position and momentum; that is an entirely different subject. The complementarity principle was formulated by Niels Bohr, a leading founder of quantum mechanics.

Examples of complementary properties that Bohr considered:

  • Position and momentum
  • Energy and duration
  • Spin along different axes
  • Wave and particle
  • Value of a field and its change (at a certain position)
  • Entanglement and coherence


  • https://en.wikipedia.org/wiki/Charge_radius - a measure of the size of an atomic nucleus, particularly of a proton or a deuteron. It can be measured by the scattering of electrons by the nucleus and also inferred from the effects of finite nuclear size on electron energy levels as measured in atomic spectra.


  • https://en.wikipedia.org/wiki/Weak_measurement - a type of quantum measurement that results in an observer obtaining very little information about the system on average, but also disturbs the state very little. From Busch's theorem, the system is necessarily disturbed by the measurement. In the literature weak measurements are also known as unsharp, fuzzy, dull, noisy, approximate, and gentle measurements. Additionally, weak measurements are often confused with the distinct but related concept of the weak value.

"Heisenberg proposed that the act of measurement itself was responsible for the indeterminacy: using a photon of sufficient energy to locate the particle would give it a kick, making its momentum unpredictable. However, later more rigorous derivations showed that the HUP—while still concerned with the measurement of physical quantities—didn't require a specific measurement to be performed. Instead, the HUP was a statement of the intrinsic limitation of any measurement that could be taken, without needing to do an experiment.

"The Canadian researchers used another set of complementary physical quantities instead of position and momentum: the polarization states of a photon. Precision measurement of polarization along one axis (for example) means that the measurement along another perpendicular axis is indeterminate. ... The use of weak measurements allowed the researchers to quantify the amount of disturbance the experimental apparatus introduced. In that way, they could rule out Heisenberg's idea that it was the measurement itself that led to uncertainty. These results help clarify the role of measurement in quantum mechanics."

  • YouTube: Weak Measurement: A Peephole into the Quantum World - ICTP Colloquium - by Prof. Yuval Gefen. ABSTRACT: The measurement of observables in quantum mechanics is a probabilistic process, traditionally described by von Neumann's projection postulate. Each eigenvalue of the observable happens to be a possible outcome of the measurement process with a given probability, and the original state of the system collapses into the corresponding eigenstate. Weakly measuring an observable (i.e., coupling the system weakly to the measuring device) perturbs the former weakly, yet, at the same time, provides only partial information on the state of the measured system. Employing composite measurement protocols, e.g., a weak measurement followed by a strong one, opens new horizons. Such composite protocols can be employed, inter alia, for efficient weak signal amplification; they provide a tool for quantum state discrimination, and may facilitate direct, yet non-destructive, observation of quantum virtual states. A very recent challenge is their utility in probing topological states of matter. I will address the principles and applications, present and future, of weak measurement protocols, paying particular attention to the arena of solid state physics.


  • https://en.wikipedia.org/wiki/Quantum_nondemolition_measurement - a special type of measurement of a quantum system in which the uncertainty of the measured observable does not increase from its measured value during the subsequent normal evolution of the system. This necessarily requires that the measurement process preserve the physical integrity of the measured system, and moreover places requirements on the relationship between the measured observable and the self-Hamiltonian of the system. In a sense, QND measurements are the "most classical" and least disturbing type of measurement in quantum mechanics.

States


  • https://en.wikipedia.org/wiki/Stationary_state - a quantum state with all observables independent of time. It is an eigenvector of the Hamiltonian. This corresponds to a state with a single definite energy (instead of a quantum superposition of different energies). It is also called an energy eigenvector, energy eigenstate, energy eigenfunction, or energy eigenket. It is very similar to the concept of atomic orbital and molecular orbital in chemistry, with some slight differences.
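A minimal sketch (illustrative toy matrices, ħ = 1): for an eigenvector of the Hamiltonian, the expectation value of any observable is constant in time, since evolution only multiplies the state by a phase.

```python
import numpy as np

# A 2-level Hamiltonian and one of its eigenvectors (a stationary state).
H = np.array([[1.0, 0.5], [0.5, 2.0]])
E, V = np.linalg.eigh(H)
psi0 = V[:, 0]                           # energy eigenstate with energy E[0]

A = np.array([[0.0, 1.0], [1.0, 0.0]])   # some observable

def expect(t):
    # psi(t) = exp(-iHt) psi(0), built from the spectral decomposition of H.
    U = V @ np.diag(np.exp(-1j * E * t)) @ V.T.conj()
    psi_t = U @ psi0
    return np.vdot(psi_t, A @ psi_t).real

print(expect(0.0), expect(5.0))          # equal: the observable does not change
```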


  • https://en.wikipedia.org/wiki/Bound_state - a special quantum state of a particle subject to a potential such that the particle has a tendency to remain localised in one or more regions of space. The potential may be external or it may be the result of the presence of another particle; in the latter case, one can equivalently define a bound state as a state representing two or more particles whose interaction energy exceeds the total energy of each separate particle. One consequence is that, given a potential vanishing at infinity, negative energy states must be bound. In general, the energy spectrum of the set of bound states is discrete, unlike free particles, which have a continuous spectrum.



  • https://en.wikipedia.org/wiki/Ground_state - the lowest-energy state of a quantum mechanical system; the energy of the ground state is known as the zero-point energy of the system. An excited state is any state with energy greater than the ground state. The ground state of a quantum field theory is usually called the vacuum state or the vacuum.


  • https://en.wikipedia.org/wiki/Vacuum_energy - an underlying background energy that exists in space throughout the entire Universe. One contribution to the vacuum energy may be from virtual particles which are thought to be particle pairs that blink into existence and then annihilate in a timespan too short to observe. They are expected to do this everywhere, throughout the Universe. Their behavior is codified in Heisenberg's energy–time uncertainty principle. Still, the exact effect of such fleeting bits of energy is difficult to quantify.

The effects of vacuum energy can be experimentally observed in various phenomena such as spontaneous emission, the Casimir effect and the Lamb shift, and are thought to influence the behavior of the Universe on cosmological scales. Using the upper limit of the cosmological constant, the vacuum energy of free space has been estimated to be 10⁻⁹ joules (10⁻² ergs) per cubic meter. However, in both quantum electrodynamics (QED) and stochastic electrodynamics (SED), consistency with the principle of Lorentz covariance and with the magnitude of the Planck constant requires it to have a much larger value of 10¹¹³ joules per cubic meter. This huge discrepancy is known as the vacuum catastrophe.



  • https://en.wikipedia.org/wiki/QED_vacuum - the field-theoretic vacuum of quantum electrodynamics. It is the lowest energy state (the ground state) of the electromagnetic field when the fields are quantized. When Planck's constant is hypothetically allowed to approach zero, QED vacuum is converted to classical vacuum, which is to say, the vacuum of classical electromagnetism.


  • https://en.wikipedia.org/wiki/QCD_vacuum - a non-perturbative vacuum state, characterized by non-vanishing condensates such as the gluon condensate and the quark condensate in the complete theory which includes quarks. The presence of these condensates characterizes the confined phase of quark matter.







Energy

to move




  • https://en.wikipedia.org/wiki/Force_carrier - particles that give rise to forces between other particles. These particles are bundles of energy (quanta) of a particular kind of field. There is one kind of field for every species of elementary particle. A force between two particles can be described either as the action of a force field generated by one particle on the other, or in terms of the exchange of virtual force carrier particles between them.



  • https://en.wikipedia.org/wiki/Electronvolt - (symbol eV, also written electron volt) a unit of energy equal to approximately 160 zeptojoules (10⁻²¹ joules, symbol zJ) or 1.6×10⁻¹⁹ joules (symbol J). By definition, it is the amount of energy gained (or lost) by the charge of a single electron moving across an electric potential difference of one volt. Thus it is 1 volt (1 joule per coulomb, 1 J/C) multiplied by the elementary charge (e, or 1.6021766208(98)×10⁻¹⁹ C). Therefore, one electronvolt is equal to 1.6021766208(98)×10⁻¹⁹ J. Historically, the electronvolt was devised as a standard unit of measure through its usefulness in electrostatic particle accelerator sciences, because a particle with charge q has an energy E = qV after passing through the potential V; if q is quoted in integer units of the elementary charge and the terminal bias in volts, one gets an energy in eV.

The electronvolt is not an SI unit, and its definition is empirical (unlike the litre, the light-year and other such non-SI units), thus its value in SI units must be obtained experimentally. Like the elementary charge on which it is based, it is not an independent quantity but is equal to 1 J/C × √(2hα/(μ₀c₀)). It is a common unit of energy within physics, widely used in solid state, atomic, nuclear, and particle physics. It is commonly used with the metric prefixes milli-, kilo-, mega-, giga-, tera-, peta- or exa- (meV, keV, MeV, GeV, TeV, PeV and EeV respectively). Thus meV stands for milli-electronvolt.
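A conversion sketch using the elementary charge; note that since the 2019 SI redefinition the charge is fixed exactly at 1.602176634×10⁻¹⁹ C (the figure quoted above is the older measured value).

```python
# eV <-> joule conversion via the elementary charge (exact since the 2019 SI).
e = 1.602176634e-19     # elementary charge in coulombs

def ev_to_joule(energy_ev):
    """Energy gained by charge q = e crossing a potential of energy_ev volts."""
    return energy_ev * e

print(ev_to_joule(1.0))        # -> 1.602176634e-19 J
print(ev_to_joule(13.6))       # hydrogen ionization energy, about 2.18e-18 J
```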



  • https://en.wikipedia.org/wiki/Energy_density - the amount of energy stored in a given system or region of space per unit volume or mass, though the latter is more accurately termed specific energy. Often only the useful or extractable energy is measured, which is to say that chemically inaccessible energy such as rest mass energy is ignored. In cosmological and other general relativistic contexts, however, the energy densities considered are those that correspond to the elements of the stress–energy tensor and therefore do include mass energy as well as energy densities associated with the pressures described in the next paragraph.

Energy per unit volume has the same physical units as pressure, and in many circumstances is a synonym: for example, the energy density of a magnetic field may be expressed as (and behaves as) a physical pressure, and the energy required to compress a compressed gas a little more may be determined by multiplying the difference between the gas pressure and the external pressure by the change in volume. In short, pressure is a measure of the enthalpy per unit volume of a system. A pressure gradient has a potential to perform work on the surroundings by converting enthalpy until equilibrium is reached.



  • https://en.wikipedia.org/wiki/Specific_energy - is energy per unit mass. It is used to quantify, for example, stored heat or other thermodynamic properties of substances such as specific internal energy, specific enthalpy, specific Gibbs free energy, and specific Helmholtz free energy. It may also be used for the kinetic energy or potential energy of a body. Specific energy is an intensive property, whereas energy and mass are extensive properties. The SI unit for specific energy is the joule per kilogram (J/kg).


  • https://en.wikipedia.org/wiki/Energy_condition - one of various alternative conditions which can be applied to the matter content of the theory, when it is either not possible or desirable to specify this content explicitly. The hope is then that any reasonable matter theory will satisfy this condition or at least will preserve the condition if it is satisfied by the starting conditions. In general relativity, energy conditions are often used (and required) in proofs of various important theorems about black holes, such as the no hair theorem or the laws of black hole thermodynamics.


  • https://en.wikipedia.org/wiki/Physical_constant - a physical quantity that is generally believed to be both universal in nature and constant in time. It can be contrasted with a mathematical constant, which is a fixed numerical value, but does not directly involve any physical measurement. There are many physical constants in science, some of the most widely recognized being the speed of light in vacuum c, the gravitational constant G, Planck's constant h, the electric constant ε0, and the elementary charge e. Physical constants can take many dimensional forms: the speed of light signifies a maximum speed limit of the Universe and is expressed dimensionally as length divided by time; while the fine-structure constant α, which characterizes the strength of the electromagnetic interaction, is dimensionless.


  • https://en.wikipedia.org/wiki/Spontaneous_emission - the process in which a quantum mechanical system (such as an atom, molecule or subatomic particle) transitions from an excited energy state to a lower energy state (e.g., its ground state) and emits a quantum in the form of a photon. Spontaneous emission is ultimately responsible for most of the light we see all around us; it is so ubiquitous that there are many names given to what is essentially the same process. If atoms (or molecules) are excited by some means other than heating, the spontaneous emission is called luminescence.

The rate constant in this exponential decay is referred to as the Einstein A coefficient, and has units of s⁻¹. The number of excited states N decays exponentially with time, similar to radioactive decay: after one lifetime, the number of excited states falls to 36.8% (1/e) of its original value. The radiative decay rate is inversely proportional to the lifetime.
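The 1/e decay can be sketched directly (the rate value below is an arbitrary assumption for illustration):

```python
import numpy as np

A = 2.0e7                     # assumed spontaneous-emission rate, s^-1
tau = 1.0 / A                 # radiative lifetime: inverse of the decay rate

N0 = 1.0e6                    # initial number of excited atoms
N = lambda t: N0 * np.exp(-A * t)   # exponential decay, like radioactive decay

# After one lifetime the excited population falls to 1/e, about 36.8%.
print(N(tau) / N0)            # -> 0.3678...
```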


  • https://en.wikipedia.org/wiki/Einstein_coefficients - are mathematical quantities which are a measure of the probability of absorption or emission of light by an atom or molecule. The Einstein A coefficient is related to the rate of spontaneous emission of light and the Einstein B coefficients are related to the absorption and stimulated emission of light.




  • https://en.wikipedia.org/wiki/Hyperfine_structure - refers to small shifts and splittings in the energy levels of atoms, molecules and ions, due to interaction between the state of the nucleus and the state of the electron clouds. Hyperfine structure contrasts with mere fine structure, which results from the interaction between the magnetic moments associated with electron spin and the electrons' orbital angular momentum. Hyperfine structure, with energy shifts typically orders of magnitude smaller than those of a fine-structure shift, results from the interactions of the nucleus (or nuclei, in molecules) with internally generated electric and magnetic fields.

Formulations



  • https://en.wikipedia.org/wiki/Quantum_statistical_mechanics - statistical mechanics applied to quantum mechanical systems; a statistical ensemble (probability distribution over possible quantum states) is described by a density operator S, which is a non-negative, self-adjoint, trace-class operator of trace 1 on the Hilbert space H describing the quantum system


  • https://en.wikipedia.org/wiki/Hilbert_space - generalizes the notion of Euclidean space, extends the methods of vector algebra and calculus from the two-dimensional Euclidean plane and three-dimensional space to spaces with any finite or infinite number of dimensions, an abstract vector space possessing the structure of an inner product that allows length and angle to be measured









Schrödinger picture

  • https://en.wikipedia.org/wiki/Schrödinger_equation - a partial differential equation that describes how the quantum state of a physical system changes with time. It was formulated in late 1925, and published in 1926, by the Austrian physicist Erwin Schrödinger.

In quantum mechanics, the analogue of Newton's second law of motion is Schrödinger's equation for a quantum system (usually atoms, molecules, and subatomic particles whether free, bound, or localized). It is not a simple algebraic equation, but in general a linear partial differential equation, describing the time-evolution of the system's wave function (also called a "state function").

Solutions to Schrödinger's equation describe not only molecular, atomic, and subatomic systems, but also macroscopic systems, possibly even the whole universe. The Schrödinger equation, in its most general form, is consistent with both classical mechanics and special relativity, but the original formulation by Schrödinger himself was non-relativistic.
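A toy check (illustrative two-level system, ħ = 1, not from the article): the exact solution ψ(t) = e^(−iHt/ħ) ψ(0) satisfies the Schrödinger equation iħ dψ/dt = Hψ, which can be verified with a central difference in time.

```python
import numpy as np

hbar = 1.0
H = np.array([[0.0, 1.0], [1.0, 0.0]])      # a toy 2-level Hamiltonian
E, V = np.linalg.eigh(H)

def psi(t, psi0):
    """Exact solution psi(t) = exp(-iHt/hbar) psi(0) via spectral decomposition."""
    return V @ np.diag(np.exp(-1j * E * t / hbar)) @ V.T.conj() @ psi0

psi0 = np.array([1.0, 0.0])
t, dt = 0.8, 1e-6

# Check the Schrodinger equation i*hbar dpsi/dt = H psi by central difference in t.
lhs = 1j * hbar * (psi(t + dt, psi0) - psi(t - dt, psi0)) / (2 * dt)
rhs = H @ psi(t, psi0)
print(np.allclose(lhs, rhs, atol=1e-6))     # -> True
```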

Heisenberg picture

  • https://en.wikipedia.org/wiki/Heisenberg_picture - a formulation (largely due to Werner Heisenberg in 1925) of quantum mechanics in which the operators (observables and others) incorporate a dependency on time, but the state vectors are time-independent, an arbitrary fixed basis rigidly underlying the theory.

It stands in contrast to the Schrödinger picture in which the operators are constant, instead, and the states evolve in time. The two pictures only differ by a basis change with respect to time-dependency, which corresponds to the difference between active and passive transformations. The Heisenberg picture is the formulation of matrix mechanics in an arbitrary basis, in which the Hamiltonian is not necessarily diagonal.
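A sketch of the equivalence of the two pictures (illustrative toy matrices, ħ = 1): the Schrödinger-picture expectation value ⟨ψ(t)|A|ψ(t)⟩ equals the Heisenberg-picture value ⟨ψ(0)|A(t)|ψ(0)⟩ with A(t) = U†AU.

```python
import numpy as np

H = np.array([[1.0, 0.3], [0.3, 2.0]])   # toy Hamiltonian
A = np.array([[0.0, 1.0], [1.0, 0.0]])   # observable
psi0 = np.array([0.6, 0.8])              # unit-norm initial state

E, V = np.linalg.eigh(H)
t = 1.7
U = V @ np.diag(np.exp(-1j * E * t)) @ V.T.conj()   # U = exp(-iHt)

# Schrodinger picture: the state evolves, the operator is fixed.
schrod = np.vdot(U @ psi0, A @ (U @ psi0)).real

# Heisenberg picture: the operator evolves, the state is fixed.
A_t = U.conj().T @ A @ U
heis = np.vdot(psi0, A_t @ psi0).real

print(np.isclose(schrod, heis))          # -> True: the two pictures agree
```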


Matrix mechanics

See also Maths#Matrix multiplication

  • https://en.wikipedia.org/wiki/Matrix_mechanics - a formulation of quantum mechanics created by Werner Heisenberg, Max Born, and Pascual Jordan in 1925. Matrix mechanics was the first conceptually autonomous and logically consistent formulation of quantum mechanics. Its account of quantum jumps supplanted the Bohr Model's electron orbits. It did so by interpreting the physical properties of particles as matrices that evolve in time. It is equivalent to the Schrödinger wave formulation of quantum mechanics, as manifest in Dirac's bra–ket notation. In some contrast to the wave formulation, it produces spectra of energy operators by purely algebraic, ladder operator, methods. Relying on these methods, Pauli derived the hydrogen atom spectrum in 1926, before the development of wave mechanics.



  • https://en.wikipedia.org/wiki/Ladder_operator - a raising or lowering operator (collectively known as ladder operators) is an operator that increases or decreases the eigenvalue of another operator. In quantum mechanics, the raising operator is sometimes called the creation operator, and the lowering operator the annihilation operator. Well-known applications of ladder operators in quantum mechanics are in the formalisms of the quantum harmonic oscillator and angular momentum.
  • https://en.wikipedia.org/wiki/Creation_and_annihilation_operators - are mathematical operators that have widespread applications in quantum mechanics, notably in the study of quantum harmonic oscillators and many-particle systems. An annihilation operator lowers the number of particles in a given state by one. A creation operator increases the number of particles in a given state by one, and it is the adjoint of the annihilation operator. In many subfields of physics and chemistry, the use of these operators instead of wavefunctions is known as second quantization.

The mathematics for the creation and annihilation operators for bosons is the same as for the ladder operators of the quantum harmonic oscillator. For example, the commutator of the creation and annihilation operators that are associated with the same boson state equals one, while all other commutators vanish. However, for fermions the mathematics is different, involving anticommutators instead of commutators.[3]
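As a quick numerical sketch (using NumPy, with an 8-level truncation of the infinite-dimensional number basis — the truncation is the only assumption here), the bosonic relation [a, a†] = 1 can be checked directly; the cutoff spoils the relation only in the last row and column:

```python
import numpy as np

# Truncated harmonic-oscillator ladder operators in the number basis.
# a lowers the occupation number by one, a_dag raises it by one.
N = 8
a = np.diag(np.sqrt(np.arange(1, N)), k=1)  # annihilation operator
a_dag = a.conj().T                          # creation operator (adjoint of a)

commutator = a @ a_dag - a_dag @ a

# Away from the truncation edge, [a, a_dag] equals the identity.
print(np.allclose(commutator[:-1, :-1], np.eye(N - 1)))  # True
```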


  • https://en.wikipedia.org/wiki/CCR_and_CAR_algebras - CCR algebras (after canonical commutation relations) and CAR algebras (after canonical anticommutation relations) arise from the quantum mechanical study of bosons and fermions respectively. They play a prominent role in quantum statistical mechanics and quantum field theory.

  • https://en.wikipedia.org/wiki/Commutator - the commutator of two elements a and b of a ring or an associative algebra is defined by [a, b] = ab − ba. It is zero if and only if a and b commute. In linear algebra, if two endomorphisms of a space are represented by commuting matrices with respect to one basis, then they are so represented with respect to every basis. By using the commutator as a Lie bracket, every associative algebra can be turned into a Lie algebra.

The anticommutator of two elements a and b of a ring or an associative algebra is defined by {a, b} = ab + ba.

Sometimes the bracket [a, b]+ is also used to denote the anticommutator. The anticommutator is used less often than the commutator, but it can be used, for example, to define Clifford algebras and Jordan algebras, and it is used to derive the Dirac equation in particle physics. The commutator of two operators acting on a Hilbert space is a central concept in quantum mechanics, since it quantifies how well the two observables described by these operators can be measured simultaneously. The uncertainty principle is ultimately a theorem about such commutators, by virtue of the Robertson–Schrödinger relation. In phase space, equivalent commutators of function star-products are called Moyal brackets, and are completely isomorphic to the Hilbert-space commutator structures mentioned.
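The commutator and anticommutator can be made concrete with the Pauli matrices (a standard textbook example, sketched here in Python/NumPy):

```python
import numpy as np

# Commutator [a,b] = ab - ba and anticommutator {a,b} = ab + ba,
# illustrated with the Pauli matrices: [sx, sy] = 2i*sz and {sx, sy} = 0.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def commutator(a, b):
    return a @ b - b @ a

def anticommutator(a, b):
    return a @ b + b @ a

print(np.allclose(commutator(sx, sy), 2j * sz))               # True
print(np.allclose(anticommutator(sx, sy), np.zeros((2, 2))))  # True
```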

Interaction picture

Phase space formulation

Path integral formulation


Symmetry


  • https://en.wikipedia.org/wiki/C-symmetry - the symmetry of physical laws under a charge-conjugation transformation. Electromagnetism, gravity and the strong interaction all obey C-symmetry, but weak interactions violate C-symmetry.








  • https://en.wikipedia.org/wiki/Parity_(physics) - a parity transformation (also called parity inversion) is the flip in the sign of one spatial coordinate. In three dimensions, it is also often described by the simultaneous flip in the sign of all three spatial coordinates (a point reflection).

It can also be thought of as a test for chirality of a physical phenomenon, in that a parity inversion transforms a phenomenon into its mirror image. A parity transformation on something achiral, on the other hand, can be viewed as an identity transformation. All fundamental interactions of elementary particles, with the exception of the weak interaction, are symmetric under parity. The weak interaction is chiral and thus provides a means for probing chirality in physics. In interactions that are symmetric under parity, such as electromagnetism in atomic and molecular physics, parity serves as a powerful controlling principle underlying quantum transitions.
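A small sketch of a parity transformation as a point reflection in three dimensions — the matrix −I, which flips all spatial coordinates, has determinant −1 (an improper rotation), and applying it twice gives the identity:

```python
import numpy as np

# Parity as point reflection: flip the sign of all three coordinates.
P = -np.eye(3)

r = np.array([1.0, -2.0, 3.0])
print(P @ r)                          # [-1.  2. -3.]
print(np.linalg.det(P))               # -1.0 (improper rotation)
print(np.allclose(P @ P, np.eye(3)))  # True: parity squared is the identity
```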


  • https://en.wikipedia.org/wiki/C_parity - a multiplicative quantum number of some particles that describes their behavior under the symmetry operation of charge conjugation. Charge conjugation changes the sign of all quantum charges (that is, additive quantum numbers), including the electrical charge, baryon number and lepton number, and the flavor charges strangeness, charm, bottomness, topness and Isospin (I3). In contrast, it doesn't affect the mass, linear momentum or spin of a particle.







Quantum numbers

  • https://en.wikipedia.org/wiki/Quantum_number - describe values of conserved quantities in the dynamics of a quantum system. In the case of electrons, the quantum numbers can be defined as "the sets of numerical values which give acceptable solutions to the Schrödinger wave equation for the hydrogen atom". An important aspect of quantum mechanics is the quantization of observable quantities, since quantum numbers are discrete sets of integers or half-integers, although they could approach infinity in some cases. This distinguishes quantum mechanics from classical mechanics where the values that characterize the system such as mass, charge, or momentum, range continuously. Quantum numbers often describe specifically the energy levels of electrons in atoms, but other possibilities include angular momentum, spin, etc. Any quantum system can have one or more quantum numbers; it is thus difficult to list all possible quantum numbers.


Principal
  • https://en.wikipedia.org/wiki/Principal_quantum_number - one of four quantum numbers which are assigned to each electron in an atom to describe that electron's state. As a discrete variable, the principal quantum number is always an integer. As n increases, the number of electronic shells increases and the electron spends more time farther from the nucleus. As n increases, the electron is also at a higher potential energy and is therefore less tightly bound to the nucleus. The principal quantum number was first created for use in the semiclassical Bohr model of the atom, distinguishing between different energy levels. With the development of modern quantum mechanics, the simple Bohr model was replaced with a more complex theory of atomic orbitals. However, modern theory still requires the principal quantum number.
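As an illustration of how the principal quantum number sets the energy levels, a sketch of the Bohr-model formula E_n = −13.6 eV / n², using the standard Rydberg energy and hc ≈ 1239.84 eV·nm:

```python
# Bohr-model hydrogen energy levels and a transition wavelength.
RYDBERG_EV = 13.605693  # hydrogen ionization energy, in eV

def energy(n):
    return -RYDBERG_EV / n**2

# Lyman-alpha transition, n=2 -> n=1:
delta_e = energy(2) - energy(1)     # ~10.2 eV, emitted as a photon
wavelength_nm = 1239.842 / delta_e  # hc ~ 1239.84 eV*nm
print(round(wavelength_nm, 1))      # 121.5
```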


Magnetic
  • https://en.wikipedia.org/wiki/Magnetic_quantum_number - distinguishes the orbitals available within a subshell, and is used to calculate the azimuthal component of the orientation of orbital in space. Electrons in a particular subshell (such as s, p, d, or f) are defined by values of ℓ (0, 1, 2, or 3). The value of m can range from -ℓ to +ℓ, inclusive of zero. Thus the s, p, d, and f subshells contain 1, 3, 5, and 7 orbitals each, with values of m within the ranges ±0, ±1, ±2, ±3 respectively. Each of these orbitals can accommodate up to two electrons (with opposite spins), forming the basis of the periodic table.
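The orbital counting described above can be sketched in a few lines: for each ℓ, the magnetic quantum number m ranges over −ℓ..+ℓ, giving 2ℓ+1 orbitals, each holding two electrons of opposite spin:

```python
# Orbitals and electron capacity per subshell.
subshells = {0: "s", 1: "p", 2: "d", 3: "f"}

for l, name in subshells.items():
    m_values = list(range(-l, l + 1))  # m = -l, ..., 0, ..., +l
    print(name, len(m_values), "orbitals,", 2 * len(m_values), "electrons")
# s 1 orbitals, 2 electrons
# p 3 orbitals, 6 electrons
# d 5 orbitals, 10 electrons
# f 7 orbitals, 14 electrons
```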

Spin

  • https://en.wikipedia.org/wiki/Spin_(physics) - an intrinsic form of angular momentum carried by elementary particles, composite particles (hadrons), and atomic nuclei. Spin is one of two types of angular momentum in quantum mechanics, the other being orbital angular momentum. The orbital angular momentum operator is the quantum-mechanical counterpart to the classical angular momentum of orbital revolution: it arises when a particle executes a rotating or twisting trajectory (such as when an electron orbits a nucleus). The existence of spin angular momentum is inferred from experiments, such as the Stern–Gerlach experiment, in which particles are observed to possess angular momentum that cannot be accounted for by orbital angular momentum alone.

In some ways, spin is like a vector quantity; it has a definite magnitude, and it has a "direction" (but quantization makes this "direction" different from the direction of an ordinary vector). All elementary particles of a given kind have the same magnitude of spin angular momentum, which is indicated by assigning the particle a spin quantum number.
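The quantized magnitude mentioned above is |S| = ħ√(s(s+1)) for spin quantum number s; a small sketch using the CODATA value of ħ:

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s

def spin_magnitude(s):
    # |S| = hbar * sqrt(s(s+1))
    return HBAR * math.sqrt(s * (s + 1))

# A spin-1/2 particle such as an electron:
print(spin_magnitude(0.5) / HBAR)  # sqrt(3)/2 ~ 0.866
```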


  • https://en.wikipedia.org/wiki/Spin–statistics_theorem - relates the spin of a particle to the particle statistics it obeys. The spin of a particle is its intrinsic angular momentum (that is, the contribution to the total angular momentum that is not due to the orbital motion of the particle). All particles have either integer spin or half-integer spin (in units of the reduced Planck constant ħ).




  • https://en.wikipedia.org/wiki/Angular_momentum_coupling - in quantum mechanics, the procedure of constructing eigenstates of total angular momentum out of eigenstates of separate angular momenta. For instance, the orbit and spin of a single particle can interact through spin–orbit interaction, in which case the complete physical picture must include spin–orbit coupling. Or two charged particles, each with a well-defined angular momentum, may interact by Coulomb forces, in which case coupling of the two one-particle angular momenta to a total angular momentum is a useful step in the solution of the two-particle Schrödinger equation. In both cases the separate angular momenta are no longer constants of motion, but the sum of the two angular momenta usually still is. Angular momentum coupling in atoms is of importance in atomic spectroscopy, angular momentum coupling of electron spins is of importance in quantum chemistry, and in the nuclear shell model angular momentum coupling is ubiquitous.



  • https://en.wikipedia.org/wiki/Angular_momentum - the rotational analog of linear momentum. It is an important quantity in physics because it is a conserved quantity – the angular momentum of a system remains constant unless acted on by an external torque.


  • https://en.wikipedia.org/wiki/Relativistic_angular_momentum - refers to the mathematical formalisms and physical concepts that define angular momentum in special relativity (SR) and general relativity (GR). The relativistic quantity is subtly different from the three-dimensional quantity in classical mechanics. Angular momentum is a dynamical quantity derived from position and momentum; it is a measure of an object's "amount of rotational motion" and resistance to stop rotating. Also, in the same way momentum conservation corresponds to translational symmetry, angular momentum conservation corresponds to rotational symmetry – the connection between symmetries and conservation laws is made by Noether's theorem. While these concepts were originally discovered in classical mechanics, they are also true and significant in special and general relativity. In terms of abstract algebra, the invariance of angular momentum, four-momentum, and other symmetries in spacetime is described by the Lorentz group, or more generally the Poincaré group.

Physical quantities which remain separate in classical physics are naturally combined in SR and GR by enforcing the postulates of relativity. Most notably; space and time coordinates combine into the four-position, and energy and momentum combine into the four-momentum. The components of these four-vectors depend on the frame of reference used, and change under Lorentz transformations to other inertial frames or accelerated frames.

Relativistic angular momentum is less obvious. The classical definition of angular momentum is the cross product of position x with momentum p to obtain a pseudovector x×p, or alternatively as the exterior product to obtain a second order antisymmetric tensor x∧p. What does this combine with, if anything? There is another vector quantity not often discussed – it is the time-varying moment of mass (not the moment of inertia) related to the boost of the centre of mass of the system, and this combines with the classical angular momentum to form an antisymmetric tensor of second order. For rotating mass–energy distributions (such as gyroscopes, planets, stars, and black holes) instead of point-like particles, the angular momentum tensor is expressed in terms of the stress–energy tensor of the rotating object.

In special relativity alone, in the rest frame of a spinning object; there is an intrinsic angular momentum analogous to the "spin" in quantum mechanics and relativistic quantum mechanics, although for an extended body rather than a point particle. In relativistic quantum mechanics, elementary particles have spin and this is an additional contribution to the orbital angular momentum operator, yielding the total angular momentum tensor operator. In any case, the intrinsic "spin" addition to the orbital angular momentum of an object can be expressed in terms of the Pauli–Lubanski pseudovector.


  • https://en.wikipedia.org/wiki/Azimuthal_quantum_number - a quantum number for an atomic orbital that determines its orbital angular momentum and describes the shape of the orbital. The azimuthal quantum number is the second of a set of quantum numbers which describe the unique quantum state of an electron (the others being the principal quantum number, the magnetic quantum number, and the spin quantum number). It is also known as the orbital angular momentum quantum number, orbital quantum number or second quantum number, and is symbolized as ℓ.


  • https://en.wikipedia.org/wiki/Total_angular_momentum_quantum_number - parameterises the total angular momentum of a given particle, by combining its orbital angular momentum and its intrinsic angular momentum (i.e., its spin). The total angular momentum corresponds to the Casimir invariant of the Lie algebra so(3) of the three-dimensional rotation group.
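A small sketch enumerating the allowed total angular momentum values: j runs from |ℓ − s| to ℓ + s in integer steps (exact half-integer arithmetic via `fractions`):

```python
from fractions import Fraction

def allowed_j(l, s):
    # j = |l - s|, |l - s| + 1, ..., l + s
    l, s = Fraction(l), Fraction(s)
    j = abs(l - s)
    values = []
    while j <= l + s:
        values.append(j)
        j += 1
    return values

# A p electron (l = 1, s = 1/2) has j = 1/2 or 3/2:
print(allowed_j(1, Fraction(1, 2)))  # [Fraction(1, 2), Fraction(3, 2)]
```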





  • https://en.wikipedia.org/wiki/Multiplicative_quantum_number - conserved quantum numbers of a special kind. A given quantum number q is said to be additive if in a particle reaction the sum of the q-values of the interacting particles is the same before and after the reaction. Most conserved quantum numbers are additive in this sense; the electric charge is one example. A multiplicative quantum number q is one for which the corresponding product, rather than the sum, is preserved.

Any conserved quantum number corresponds to a symmetry of the Hamiltonian of the system (see Noether's theorem). Symmetry groups which are examples of the abstract group called Z2 give rise to multiplicative quantum numbers. This group consists of an operation P whose square is the identity, P2 = 1. Thus, all symmetries which are mathematically similar to parity give rise to multiplicative quantum numbers.

In principle, multiplicative quantum numbers can be defined for any abelian group. An example would be to trade the electric charge, Q, (related to the abelian group U(1) of electromagnetism), for the new quantum number exp(2iπ Q). Then this becomes a multiplicative quantum number by virtue of the charge being an additive quantum number. However, this route is usually followed only for discrete subgroups of U(1), of which Z2 finds the widest possible use.
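This construction can be sketched numerically; the example reaction (beta decay of a down quark, d → u + e⁻ + ν̄) and the helper name `multiplicative` are illustrative choices, not notation from the article:

```python
import cmath
from fractions import Fraction
from functools import reduce

def multiplicative(q):
    # Trade the additive charge Q for exp(2*pi*i*Q).
    return cmath.exp(2j * cmath.pi * float(q))

before = [Fraction(-1, 3)]                 # d quark
after = [Fraction(2, 3), Fraction(-1), 0]  # u quark, electron, antineutrino

# Additive conservation of charge implies multiplicative conservation
# of the exponentials: the products on both sides agree.
prod_before = reduce(lambda a, b: a * b, (multiplicative(q) for q in before), 1)
prod_after = reduce(lambda a, b: a * b, (multiplicative(q) for q in after), 1)

print(abs(prod_before - prod_after) < 1e-9)  # True
```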




Spinors

  • https://en.wikipedia.org/wiki/Spinor - elements of a (complex) vector space that can be associated with Euclidean space. Like geometric vectors and more general tensors, spinors transform linearly when the Euclidean space is subjected to a slight (infinitesimal) rotation. When a sequence of such small rotations is composed (integrated) to form an overall final rotation, however, the resulting spinor transformation depends on which sequence of small rotations was used: unlike vectors and tensors, a spinor transforms to its negative when the space is rotated through a complete turn from 0° to 360° (see picture). This property characterizes spinors. It is also possible to associate a substantially similar notion of spinor to Minkowski space in which case the Lorentz transformations of special relativity play the role of rotations. Spinors were introduced in geometry by Élie Cartan in 1913. In the 1920s physicists discovered that spinors are essential to describe the intrinsic angular momentum, or "spin", of the electron and other subatomic particles.
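The sign flip under a full turn can be checked directly with the standard spin-1/2 rotation operator U(θ) = exp(−iθσz/2), sketched here with NumPy:

```python
import numpy as np

sz = np.array([[1, 0], [0, -1]], dtype=complex)

def rotation(theta):
    # exp(-i*theta*sz/2); sz is diagonal, so exponentiate elementwise.
    return np.diag(np.exp(-1j * theta / 2 * np.diag(sz)))

# A 2*pi rotation sends a spinor to its negative; only 4*pi is the identity.
print(np.allclose(rotation(2 * np.pi), -np.eye(2)))  # True
print(np.allclose(rotation(4 * np.pi), np.eye(2)))   # True
```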



  • https://en.wikipedia.org/wiki/Weyl_equation#Weyl_spinors - also frequently used in a more general setting, as an element of a Clifford module. This is closely related to the solutions given above, and gives a natural geometric interpretation to spinors as geometric objects living on a manifold. This general setting has multiple strengths: it clarifies their interpretation as fermions in physics, and it shows precisely how to define spin in General Relativity, or, indeed, for any Riemannian manifold or pseudo-Riemannian manifold. This is informally sketched as follows.

The Weyl equation is invariant under the action of the Lorentz group. This means that, as boosts and rotations are applied, the form of the equation itself does not change. However, the form of the spinor ψ itself does change. Ignoring spacetime entirely, the algebra of the spinors is described by a (complexified) Clifford algebra. The spinors transform under the action of the spin group. This is entirely analogous to how one might talk about a vector, and how it transforms under the rotation group, except that now it has been adapted to the case of spinors.


  • https://en.wikipedia.org/wiki/Dirac_spinor - the spinor that describes all known fundamental particles that are fermions, with the possible exception of neutrinos. It appears in the plane-wave solution to the Dirac equation, and is a certain combination of two Weyl spinors, specifically, a bispinor that transforms "spinorially" under the action of the Lorentz group. Dirac spinors are important and interesting in numerous ways. Foremost, they are important as they do describe all of the known fundamental particle fermions in nature; this includes the electron and the quarks. Algebraically they behave, in a certain sense, as the "square root" of a vector. This is not readily apparent from direct examination, but it has slowly become clear over the last 60 years that spinorial representations are fundamental to geometry. For example, effectively all Riemannian manifolds can have spinors and spin connections built upon them, via the Clifford algebra. The Dirac spinor is specific to that of Minkowski spacetime and Lorentz transformations; the general case is quite similar.
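The "square root of a vector" behaviour is encoded in the Clifford algebra relation {γμ, γν} = 2ημν; a sketch verifying it for the gamma matrices in the Dirac representation, with metric signature (+,−,−,−):

```python
import numpy as np

# Pauli matrices, the building blocks of the Dirac representation.
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# gamma^0 = diag(I, -I); gamma^i built from the Pauli matrices.
gamma0 = np.block([[I2, 0 * I2], [0 * I2, -I2]])
gammas = [gamma0] + [
    np.block([[0 * I2, s], [-s, 0 * I2]]) for s in (sx, sy, sz)
]
eta = np.diag([1, -1, -1, -1]).astype(complex)  # Minkowski metric

# Check {gamma_mu, gamma_nu} = 2 * eta_mu_nu * I for all pairs.
ok = all(
    np.allclose(gammas[m] @ gammas[n] + gammas[n] @ gammas[m],
                2 * eta[m, n] * np.eye(4))
    for m in range(4) for n in range(4)
)
print(ok)  # True
```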


  • https://en.wikipedia.org/wiki/Spinor_field - particles with spin s are described by a 2s-dimensional spinor field, where s is an integer or a half-integer. Fermions are described by spinor fields, while bosons are described by tensor fields.


  • https://en.wikipedia.org/wiki/Spin_structure - on an orientable Riemannian manifold (M,g) allows one to define associated spinor bundles, giving rise to the notion of a spinor in differential geometry. Spin structures have wide applications to mathematical physics, in particular to quantum field theory where they are an essential ingredient in the definition of any theory with uncharged fermions. They are also of purely mathematical interest in differential geometry, algebraic topology, and K theory. They form the foundation for spin geometry.


  • https://en.wikipedia.org/wiki/Spinor_bundle - the complex vector bundle associated to the corresponding principal bundle of spin frames over and the spin representation of its structure group on the space of spinors. A section of the spinor bundle is called a spinor field.




  • https://en.wikipedia.org/wiki/Spin_network - a type of diagram which can be used to represent states and interactions between particles and fields in quantum mechanics. From a mathematical perspective, the diagrams are a concise way to represent multilinear functions and functions between representations of matrix groups. The diagrammatic notation can thus greatly simplify calculations. Roger Penrose described spin networks in 1971. Spin networks have since been applied to the theory of quantum gravity by Carlo Rovelli, Lee Smolin, Jorge Pullin, Rodolfo Gambini and others. Spin networks can also be used to construct a particular functional on the space of connections which is invariant under local gauge transformations.


Twistors

  • https://en.wikipedia.org/wiki/Twistor_theory - was proposed by Roger Penrose in 1967 as a possible path to quantum gravity and has evolved into a widely studied branch of theoretical and mathematical physics. Penrose's idea was that twistor space should be the basic arena for physics from which space-time itself should emerge. It has led to powerful mathematical tools that have applications to differential and integral geometry, nonlinear differential equations and representation theory, and in physics to general relativity, quantum field theory, and the theory of scattering amplitudes. Twistor theory arose in the context of the rapidly expanding mathematical developments in Einstein's theory of general relativity in the late 1950s and in the 1960s and carries a number of influences from that period. In particular, Roger Penrose has credited Ivor Robinson as an important early influence in the development of twistor theory, through his construction of so-called Robinson congruences.

Coherence

  • https://en.wikipedia.org/wiki/Quantum_decoherence - the loss of quantum coherence. In quantum mechanics, particles such as electrons behave like waves and are described by a wavefunction. These waves can interfere, leading to the peculiar behaviour of quantum particles. As long as there exists a definite phase relation between different states, the system is said to be coherent. This coherence is a fundamental property of quantum mechanics, and is necessary for the function of quantum computers. However, when a quantum system is not perfectly isolated, but in contact with its surroundings, the coherence decays with time, a process called quantum decoherence. As a result of this process, the quantum behaviour is lost.

Decoherence was first introduced in 1970 by the German physicist H. Dieter Zeh and has been a subject of active research since the 1980s. Decoherence can be viewed as the loss of information from a system into the environment (often modeled as a heat bath), since every system is loosely coupled with the energetic state of its surroundings. Viewed in isolation, the system's dynamics are non-unitary (although the combined system plus environment evolves in a unitary fashion). Thus the dynamics of the system alone are irreversible. As with any coupling, entanglements are generated between the system and environment. These have the effect of sharing quantum information with—or transferring it to—the surroundings.

Decoherence does not generate actual wave function collapse. It only provides an explanation for the observation of wave function collapse, as the quantum nature of the system "leaks" into the environment. That is, components of the wavefunction are decoupled from a coherent system, and acquire phases from their immediate surroundings. A total superposition of the global or universal wavefunction still exists (and remains coherent at the global level), but its ultimate fate remains an interpretational issue. Specifically, decoherence does not attempt to explain the measurement problem. Rather, decoherence provides an explanation for the transition of the system to a mixture of states that seem to correspond to those states observers perceive. Moreover, our observation tells us that this mixture looks like a proper quantum ensemble in a measurement situation, as we observe that measurements lead to the "realization" of precisely one state in the "ensemble".

Decoherence represents a challenge for the practical realization of quantum computers, since such machines are expected to rely heavily on the undisturbed evolution of quantum coherences. Simply put, they require that coherent states be preserved and that decoherence is managed, in order to actually perform quantum computation.
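A toy sketch of the loss of coherence (a pure-dephasing model with a hypothetical decay time T2, not a full open-system treatment): the off-diagonal "coherences" of a qubit density matrix decay while the populations survive, and the purity Tr(ρ²) drops from 1 toward 1/2:

```python
import numpy as np

def dephase(rho, t, t2=1.0):
    # Off-diagonal elements decay as exp(-t/T2); populations are untouched.
    decay = np.exp(-t / t2)
    out = rho.copy()
    out[0, 1] *= decay
    out[1, 0] *= decay
    return out

# Density matrix of the pure superposition |+> = (|0> + |1>)/sqrt(2).
plus = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)

purity_start = np.trace(plus @ plus).real
rho_late = dephase(plus, 10.0)
purity_late = np.trace(rho_late @ rho_late).real
print(round(purity_start, 3), round(purity_late, 3))  # 1.0 0.5
```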





to sort


  • https://en.wikipedia.org/wiki/Perturbation_theory_(quantum_mechanics) - a set of approximation schemes directly related to mathematical perturbation for describing a complicated quantum system in terms of a simpler one. The idea is to start with a simple system for which a mathematical solution is known, and add an additional "perturbing" Hamiltonian representing a weak disturbance to the system. If the disturbance is not too large, the various physical quantities associated with the perturbed system (e.g. its energy levels and eigenstates) can be expressed as "corrections" to those of the simple system. These corrections, being small compared to the size of the quantities themselves, can be calculated using approximate methods such as asymptotic series. The complicated system can therefore be studied based on knowledge of the simpler one.

Perturbation theory is applicable if the problem at hand cannot be solved exactly, but can be formulated by adding a "small" term to the mathematical description of the exactly solvable problem.

Perturbation theory is an important tool for describing real quantum systems, as it turns out to be very difficult to find exact solutions to the Schrödinger equation for Hamiltonians of even moderate complexity. The Hamiltonians to which we know exact solutions, such as the hydrogen atom, the quantum harmonic oscillator and the particle in a box, are too idealized to adequately describe most systems. Using perturbation theory, we can use the known solutions of these simple Hamiltonians to generate solutions for a range of more complicated systems. For example, by adding a perturbative electric potential to the quantum mechanical model of the hydrogen atom, we can calculate the tiny shifts in the spectral lines of hydrogen caused by the presence of an electric field (the Stark effect). This is only approximate because the sum of a Coulomb potential with a linear potential is unstable (has no true bound states) although the tunneling time (decay rate) is very long. This instability shows up as a broadening of the energy spectrum lines, which perturbation theory fails to reproduce entirely.

The expressions produced by perturbation theory are not exact, but they can lead to accurate results as long as the expansion parameter, say α, is very small. Typically, the results are expressed in terms of finite power series in α that seem to converge to the exact values when summed to higher order. After a certain order n ~ 1/α however, the results become increasingly worse since the series are usually divergent (being asymptotic series). There exist ways to convert them into convergent series, which can be evaluated for large-expansion parameters, most efficiently by Variational method.

In the theory of quantum electrodynamics (QED), in which the electron–photon interaction is treated perturbatively, the calculation of the electron's magnetic moment has been found to agree with experiment to eleven decimal places. In QED and other quantum field theories, special calculation techniques known as Feynman diagrams are used to systematically sum the power series terms. Under some circumstances, perturbation theory is an invalid approach to take. This happens when the system we wish to describe cannot be described by a small perturbation imposed on some simple system. In quantum chromodynamics, for instance, the interaction of quarks with the gluon field cannot be treated perturbatively at low energies because the coupling constant (the expansion parameter) becomes too large.
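The perturbative expansion can be sketched for a two-level system, where the second-order result is compared directly with exact diagonalization (the Hamiltonian entries here are arbitrary illustrative numbers):

```python
import numpy as np

# H = H0 + lam*V with a small coupling lam; second-order perturbation
# theory gives E_n ~ E_n0 + lam*V_nn + lam^2 * |V_nm|^2 / (E_n0 - E_m0).
E0, E1 = 0.0, 1.0
H0 = np.diag([E0, E1])
V = np.array([[0.0, 0.3], [0.3, 0.0]])
lam = 0.05

# Ground-state energy to second order (the first-order term V_00 is 0 here):
e_pert = E0 + lam * V[0, 0] + lam**2 * V[0, 1]**2 / (E0 - E1)

e_exact = np.linalg.eigvalsh(H0 + lam * V)[0]
print(abs(e_pert - e_exact) < 1e-6)  # True
```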


  • https://en.wikipedia.org/wiki/Feynman_diagram - pictorial representations of the mathematical expressions describing the behavior of subatomic particles. The scheme is named after its inventor, American physicist Richard Feynman, and was first introduced in 1948. The interaction of sub-atomic particles can be complex and difficult to understand intuitively. Feynman diagrams give a simple visualization of what would otherwise be a rather arcane and abstract formula. As David Kaiser writes, "since the middle of the 20th century, theoretical physicists have increasingly turned to this tool to help them undertake critical calculations", and so "Feynman diagrams have revolutionized nearly every aspect of theoretical physics". While the diagrams are applied primarily to quantum field theory, they can also be used in other fields, such as solid-state theory.

Feynman used Ernst Stueckelberg's interpretation of the positron as if it were an electron moving backward in time. Thus, antiparticles are represented as moving backward along the time axis in Feynman diagrams.

The calculation of probability amplitudes in theoretical particle physics requires the use of rather large and complicated integrals over a large number of variables. These integrals do, however, have a regular structure, and may be represented graphically as Feynman diagrams. A Feynman diagram is a contribution of a particular class of particle paths, which join and split as described by the diagram. More precisely, and technically, a Feynman diagram is a graphical representation of a perturbative contribution to the transition amplitude or correlation function of a quantum mechanical or statistical field theory. Within the canonical formulation of quantum field theory, a Feynman diagram represents a term in the Wick's expansion of the perturbative S-matrix. Alternatively, the path integral formulation of quantum field theory represents the transition amplitude as a weighted sum of all possible histories of the system from the initial to the final state, in terms of either particles or fields. The transition amplitude is then given as the matrix element of the S-matrix between the initial and the final states of the quantum system.





https://news.ycombinator.com/item?id=12048170




  • https://en.wikipedia.org/wiki/Phase_velocity - of a wave is the rate at which the phase of the wave propagates in space. This is the velocity at which the phase of any one frequency component of the wave travels. For such a component, any given phase of the wave (for example, the crest) will appear to travel at the phase velocity.
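A small sketch contrasting phase velocity v_p = ω/k with group velocity v_g = dω/dk, using the deep-water dispersion relation ω(k) = √(gk), for which the group velocity is exactly half the phase velocity:

```python
import numpy as np

g = 9.81  # gravitational acceleration, m/s^2

def omega(k):
    # Deep-water gravity-wave dispersion relation.
    return np.sqrt(g * k)

k = 2.0
v_phase = omega(k) / k
# Central-difference numerical derivative for the group velocity.
dk = 1e-6
v_group = (omega(k + dk) - omega(k - dk)) / (2 * dk)

print(round(v_group / v_phase, 4))  # 0.5
```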





  • https://en.wikipedia.org/wiki/Radiation - The gravitational field and the electromagnetic field are the only two fundamental fields in nature that have infinite range and a corresponding classical low-energy limit, which greatly diminishes and hides their "particle-like" excitations. Albert Einstein in 1905, attributed "particle-like" and discrete exchanges of momenta and energy, characteristic of "field quanta", to the electromagnetic field. Originally, his principal motivation was to explain the thermodynamics of radiation. Although the photoelectric effect and Compton scattering strongly suggest the existence of the photon, it might alternatively be explained by a mere quantization of emission; more definitive evidence of the quantum nature of radiation is now taken up into modern quantum optics as in the antibunching effect.


  • https://en.wikipedia.org/wiki/Black-body_radiation - the thermal electromagnetic radiation within or surrounding a body in thermodynamic equilibrium with its environment, or emitted by a black body (an opaque and non-reflective body). It has a specific spectrum and intensity that depends only on the body's temperature, which is assumed for the sake of calculations and theory to be uniform and constant.
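Planck's law makes the temperature dependence concrete; a sketch locating the spectral peak numerically and comparing it with Wien's displacement law λ_max = b/T (constants are the standard CODATA values):

```python
import numpy as np

h, c, kB = 6.62607015e-34, 2.99792458e8, 1.380649e-23

def planck(lam, T):
    # Spectral radiance per unit wavelength.
    return (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * kB * T))

T = 5778.0  # roughly the Sun's surface temperature
lams = np.linspace(1e-7, 3e-6, 200000)
lam_peak = lams[np.argmax(planck(lams, T))]

print(lam_peak * 1e9)                # ~501 nm, in the visible range
print(2.897771955e-3 / T * 1e9)      # Wien's law gives the same peak
```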




  • https://en.wikipedia.org/wiki/Bremsstrahlung - from German bremsen "to brake" and Strahlung "radiation"; i.e., "braking radiation" or "deceleration radiation", is electromagnetic radiation produced by the deceleration of a charged particle when deflected by another charged particle, typically an electron by an atomic nucleus. The moving particle loses kinetic energy, which is converted into radiation, i.e. a photon, thus satisfying the law of conservation of energy. The term is also used to refer to the process of producing the radiation. Bremsstrahlung has a continuous spectrum, which becomes more intense and whose peak intensity shifts toward higher frequencies as the change of the energy of the decelerated particles increases. Broadly speaking, bremsstrahlung or braking radiation is any radiation produced due to the deceleration (negative acceleration) of a charged particle, which includes synchrotron radiation (i.e. photon emission by a relativistic particle), cyclotron radiation (i.e. photon emission by a non-relativistic particle), and the emission of electrons and positrons during beta decay. However, the term is frequently used in the more narrow sense of radiation from electrons (from whatever source) slowing in matter. Bremsstrahlung emitted from plasma is sometimes referred to as free/free radiation. This refers to the fact that the radiation in this case is created by charged particles that are free; i.e., not part of an ion, atom or molecule, both before and after the deflection (acceleration) that caused the emission.







  • https://en.wikipedia.org/wiki/Charge_carrier - a particle free to move, carrying an electric charge, especially the particles that carry electric charges in electrical conductors. Examples are electrons, ions and holes. In a conducting medium, an electric field can exert force on these free particles, causing a net motion of the particles through the medium; this is what constitutes an electric current. In different conducting media, different particles serve to carry charge.















See also Maths#Category theory





  • https://en.wikipedia.org/wiki/Weinberg%E2%80%93Witten_theorem - proved by Steven Weinberg and Edward Witten, states that massless particles (either composite or elementary) with spin j > 1/2 cannot carry a Lorentz-covariant current, while massless particles with spin j > 1 cannot carry a Lorentz-covariant stress-energy. The theorem is usually interpreted to mean that the graviton (j = 2) cannot be a composite particle in a relativistic quantum field theory.






  • Ray Optics Simulation - An open-source web application to simulate reflection and refraction of light. Written in HTML, CSS and JavaScript.


Quantum optics





  • https://en.wikipedia.org/wiki/Geometrical_optics - or ray optics, describes light propagation in terms of rays. The ray in geometric optics is an abstraction useful for approximating the paths along which light propagates under certain circumstances. The simplifying assumptions of geometrical optics include that light rays: propagate in straight-line paths as they travel in a homogeneous medium; bend, and in particular circumstances may split in two, at the interface between two dissimilar media; follow curved paths in a medium in which the refractive index changes; and may be absorbed or reflected. Geometrical optics does not account for certain optical effects such as diffraction and interference. This simplification is useful in practice; it is an excellent approximation when the wavelength is small compared to the size of structures with which the light interacts. The techniques are particularly useful in describing geometrical aspects of imaging, including optical aberrations.
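The bending of rays at an interface between dissimilar media is governed by Snell's law, n₁ sin θ₁ = n₂ sin θ₂; a small sketch (the refractive indices are typical textbook values for air and water):

```python
import math

def refraction_angle(n1, n2, incidence_deg):
    """Snell's law: n1*sin(t1) = n2*sin(t2). Returns the refraction angle
    in degrees, or None when total internal reflection occurs."""
    s = n1 * math.sin(math.radians(incidence_deg)) / n2
    if abs(s) > 1.0:
        return None  # no transmitted ray: total internal reflection
    return math.degrees(math.asin(s))

# Air (n ~ 1.00) into water (n ~ 1.33): the ray bends toward the normal.
print(refraction_angle(1.00, 1.33, 45.0))  # ~32.1 degrees
# Water into air beyond the critical angle (~48.8 deg): totally reflected.
print(refraction_angle(1.33, 1.00, 60.0))  # None
```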












  • https://en.wikipedia.org/wiki/Luminescence - emission of light by a substance not resulting from heat; it is thus a form of cold-body radiation. It can be caused by chemical reactions, electrical energy, subatomic motions or stress on a crystal, all of which are ultimately caused by spontaneous emission. This distinguishes luminescence from incandescence, which is light emitted by a substance as a result of heating. Historically, radioactivity was thought of as a form of "radio-luminescence", although it is today considered to be separate since it involves more than electromagnetic radiation. The dials, hands, scales, and signs of aviation and navigational instruments and markings are often coated with luminescent materials in a process known as "luminising".


Interpretations

Copenhagen

  • https://en.wikipedia.org/wiki/Copenhagen_interpretation - a loosely-knit informal collection of axioms or doctrines that attempt to express in quotidian language the mathematical formalism of quantum mechanics. The interpretation was largely devised in the years 1925–1927 by Niels Bohr and Werner Heisenberg. It is fundamental to the Copenhagen interpretation that the results of experiments must be reported in ordinary language, not relying on arcane terminology or words that refer only to clusters of mathematical symbols.


De Broglie–Bohm

  • https://en.wikipedia.org/wiki/De_Broglie–Bohm_theory - also known as the pilot-wave theory, Bohmian mechanics, the Bohm (or Bohm's) interpretation, and the causal interpretation, is an interpretation of quantum theory. In addition to a wavefunction on the space of all possible configurations, it also postulates an actual configuration that exists even when unobserved. The evolution over time of the configuration (that is, the positions of all particles or the configuration of all fields) is defined by the wave function by a guiding equation. The evolution of the wave function over time is given by Schrödinger's equation. The theory is named after Louis de Broglie (1892–1987) and David Bohm (1917–1992).





Transactional

  • https://en.wikipedia.org/wiki/Transactional_interpretation - takes the psi and psi* wave functions of the standard quantum formalism to be retarded (forward in time) and advanced (backward in time) waves that form a quantum interaction as a Wheeler–Feynman handshake or transaction. It was first proposed in 1986 by John G. Cramer, who argues that it helps in developing intuition for quantum processes. He also suggests that it avoids the philosophical problems with the Copenhagen interpretation and the role of the observer, and also resolves various quantum paradoxes.


Quantum Field Theory

to sort, move up

there are only field waveforms, and particles are waveform excitations





  • YouTube: Understanding Quantum Field Theory - Dr. Rodney Brooks, author of "Fields of Color: The theory that escaped Einstein", shows why the answer is quantum field theory. He shows how quantum field theory, so often overlooked or misunderstood, resolves the weirdness of quantum mechanics and the paradoxes of relativity. Once the concepts of quantum field theory are grasped, understanding physics is within anyone's grasp.






  • https://en.wikipedia.org/wiki/False_vacuum - a metastable sector of space that appears to be a perturbative vacuum, but is unstable due to instanton effects that may tunnel to a lower energy state.


  • https://en.wikipedia.org/wiki/Lagrangian_mechanics - a re-formulation of classical mechanics using the principle of stationary action (also called the principle of least action). It applies to systems whether or not they conserve energy or momentum, and provides conditions under which energy, momentum or both are conserved.









  • https://en.wikipedia.org/wiki/AdS/CFT_correspondence - anti-de Sitter/conformal field theory correspondence, sometimes called Maldacena duality or gauge/gravity duality, is a conjectured relationship between two kinds of physical theories. On one side are anti-de Sitter spaces (AdS) which are used in theories of quantum gravity, formulated in terms of string theory or M-theory. On the other side of the correspondence are conformal field theories (CFT) which are quantum field theories, including theories similar to the Yang–Mills theories that describe elementary particles.

The duality represents a major advance in our understanding of string theory and quantum gravity.[1] This is because it provides a non-perturbative formulation of string theory with certain boundary conditions and because it is the most successful realization of the holographic principle, an idea in quantum gravity originally proposed by Gerard 't Hooft and promoted by Leonard Susskind.



Standard Model



  • https://en.wikipedia.org/wiki/Yang–Mills_theory - a gauge theory based on the SU(N) group, or more generally any compact, semi-simple Lie group. Yang–Mills theory seeks to describe the behavior of elementary particles using these non-Abelian Lie groups and is at the core of the unification of the electromagnetic and weak forces (i.e. U(1) × SU(2)) as well as quantum chromodynamics, the theory of the strong force (based on SU(3)). Thus it forms the basis of our understanding of the Standard Model of particle physics.


Other

Particles



  • https://en.wikipedia.org/wiki/Elementary_particle - or fundamental particle, is a particle whose substructure is unknown; thus it is unknown whether it is composed of other particles. Known elementary particles include the fundamental fermions (quarks, leptons, antiquarks, and antileptons), which generally are "matter particles" and "antimatter particles", as well as the fundamental bosons (gauge bosons and the Higgs boson), which generally are "force particles" that mediate interactions among fermions. A particle containing two or more elementary particles is a composite particle.

Everyday matter is composed of atoms, once presumed to be matter's elementary particles—atom meaning "unable to cut" in Greek—although the atom's existence remained controversial until about 1910, as some leading physicists regarded molecules as mathematical illusions, and matter as ultimately composed of energy. Soon, subatomic constituents of the atom were identified. As the 1930s opened, the electron and the proton had been observed, along with the photon, the particle of electromagnetic radiation. At that time, the recent advent of quantum mechanics was radically altering the conception of particles, as a single particle could seemingly span a field as would a wave, a paradox still eluding satisfactory explanation.

Via quantum theory, protons and neutrons were found to contain quarks—up quarks and down quarks—now considered elementary particles. And within a molecule, the electron's three degrees of freedom (charge, spin, orbital) can separate via wavefunction into three quasiparticles (holon, spinon, orbiton). Yet a free electron—which, not orbiting an atomic nucleus, lacks orbital motion—appears unsplittable and remains regarded as an elementary particle.

Around 1980, an elementary particle's status as indeed elementary—an ultimate constituent of substance—was mostly discarded for a more practical outlook, embodied in particle physics' Standard Model, science's most experimentally successful theory. Many elaborations upon and theories beyond the Standard Model, including the extremely popular supersymmetry, double the number of elementary particles by hypothesizing that each known particle associates with a "shadow" partner far more massive, although all such superpartners remain undiscovered. Meanwhile, an elementary boson mediating gravitation—the graviton—remains hypothetical.



  • https://en.wikipedia.org/wiki/Particle_decay - the spontaneous process of one unstable subatomic particle transforming into multiple other particles. The particles created in this process (the final state) must each be less massive than the original, although the total invariant mass of the system must be conserved. A particle is unstable if there is at least one allowed final state that it can decay into. Unstable particles will often have multiple ways of decaying, each with its own associated probability. Decays are mediated by one or several fundamental forces. The particles in the final state may themselves be unstable and subject to further decay. The term is typically distinct from radioactive decay, in which an unstable atomic nucleus is transformed into a lighter nucleus accompanied by the emission of particles or radiation, although the two are conceptually similar and are often described using the same terminology.
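The "multiple ways of decaying, each with its own associated probability" can be made concrete: each decay channel contributes a partial width Γᵢ, the total width is their sum, the branching ratio of a channel is Γᵢ/Γ, and the mean lifetime is τ = ħ/Γ. A minimal sketch — the channel names and widths below are invented for illustration, not data for any real particle:

```python
# Branching ratios and mean lifetime from partial decay widths.
HBAR_GEV_S = 6.582119569e-25  # reduced Planck constant, GeV*s

# Hypothetical partial widths (GeV) for two decay channels.
partial_widths_gev = {"channel A": 1.0e-9, "channel B": 3.0e-9}

total_width = sum(partial_widths_gev.values())
branching = {ch: w / total_width for ch, w in partial_widths_gev.items()}
lifetime_s = HBAR_GEV_S / total_width  # tau = hbar / Gamma

print(branching)   # channel B is three times as likely as channel A
print(lifetime_s)  # ~1.6e-16 s for these made-up widths
```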


  • https://en.wikipedia.org/wiki/Free_particle - a particle that, in some sense, is not bound by an external force, or equivalently not in a region where its potential energy varies. In classical physics, this means the particle is present in a "field-free" space. In quantum mechanics, it means a region of uniform potential, usually set to zero in the region of interest since potential can be arbitrarily set to zero at any point (or surface in three dimensions) in space.





Antiparticles

  • https://en.wikipedia.org/wiki/Antiparticle - Corresponding to most kinds of particles, there is an associated antimatter antiparticle with the same mass and opposite charge (including electric charge). For example, the antiparticle of the electron is the positively charged positron, which is produced naturally in certain types of radioactive decay.

The laws of nature are very nearly symmetrical with respect to particles and antiparticles. For example, an antiproton and a positron can form an antihydrogen atom, which is believed to have the same properties as a hydrogen atom. This leads to the question of why the formation of matter after the Big Bang resulted in a universe consisting almost entirely of matter, rather than being a half-and-half mixture of matter and antimatter. The discovery of Charge Parity violation helped to shed light on this problem by showing that this symmetry, originally thought to be perfect, was only approximate.

Particle-antiparticle pairs can annihilate each other, producing photons; since the charges of the particle and antiparticle are opposite, total charge is conserved. For example, the positrons produced in natural radioactive decay quickly annihilate themselves with electrons, producing pairs of gamma rays, a process exploited in positron emission tomography.

Antiparticles are produced naturally in beta decay, and in the interaction of cosmic rays in the Earth's atmosphere. Because charge is conserved, it is not possible to create an antiparticle without either destroying a particle of the same charge (as in β+ decay, when a proton (positive charge) is destroyed, a neutron created and a positron (positive charge, antiparticle) is also created and emitted) or by creating a particle of the opposite charge. The latter is seen in many processes in which both a particle and its antiparticle are created simultaneously, as in particle accelerators. This is the inverse of the particle-antiparticle annihilation process.

Although particles and their antiparticles have opposite charges, electrically neutral particles need not be identical to their antiparticles. The neutron, for example, is made out of quarks, the antineutron from antiquarks, and they are distinguishable from one another because neutrons and antineutrons annihilate each other upon contact. However, other neutral particles are their own antiparticles, such as photons, hypothetical gravitons, and some WIMPs.





Fermions

  • https://en.wikipedia.org/wiki/Fermion - any particle characterized by Fermi–Dirac statistics and following the Pauli exclusion principle; fermions include all quarks and leptons, as well as any composite particle made of an odd number of these, such as all baryons and many atoms and nuclei. Fermions have half-integer spin (e.g. the spin-1⁄2 electron). Composite fermions, such as protons and neutrons, are key building blocks of everyday matter.


  • https://en.wikipedia.org/wiki/Fermionic_field - a quantum field whose quanta are fermions; that is, they obey Fermi–Dirac statistics. Fermionic fields obey canonical anticommutation relations rather than the canonical commutation relations of bosonic fields.

The most prominent example of a fermionic field is the Dirac field, which describes fermions with spin-1/2: electrons, protons, quarks, etc. The Dirac field can be described as either a 4-component spinor or as a pair of 2-component Weyl spinors. Spin-1/2 Majorana fermions, such as the hypothetical neutralino, can be described as either a dependent 4-component Majorana spinor or a single 2-component Weyl spinor. It is not known whether the neutrino is a Majorana fermion or a Dirac fermion; observing neutrinoless double-beta decay experimentally would settle this question.


  • https://en.wikipedia.org/wiki/Fermi–Dirac_statistics - describe a distribution of particles over energy states in systems consisting of many identical particles that obey the Pauli exclusion principle. It is named after Enrico Fermi and Paul Dirac, each of whom discovered the method independently (although Fermi defined the statistics earlier than Dirac).

Fermi–Dirac (F–D) statistics apply to identical particles with half-integer spin in a system with thermodynamic equilibrium. Additionally, the particles in this system are assumed to have negligible mutual interaction. That allows the many-particle system to be described in terms of single-particle energy states. The result is the F–D distribution of particles over these states which includes the condition that no two particles can occupy the same state; this has a considerable effect on the properties of the system. Since F–D statistics apply to particles with half-integer spin, these particles have come to be called fermions. It is most commonly applied to electrons, which are fermions with spin 1/2. Fermi–Dirac statistics are a part of the more general field of statistical mechanics and use the principles of quantum mechanics.
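The F–D distribution itself is f(E) = 1/(exp((E−μ)/kT) + 1): the mean occupation of a single-particle state of energy E, which never exceeds 1, reflecting the exclusion principle. A minimal sketch:

```python
import math

K_B = 8.617333262e-5  # Boltzmann constant, eV/K

def fermi_dirac(energy_ev, mu_ev, temperature_k):
    """Mean occupation of a state at the given energy; bounded by 1."""
    return 1.0 / (math.exp((energy_ev - mu_ev) / (K_B * temperature_k)) + 1.0)

# At E = mu the occupation is exactly 1/2, at any temperature.
print(fermi_dirac(1.0, 1.0, 300.0))  # 0.5
# Far below mu the state is almost surely filled; far above, almost surely empty.
print(fermi_dirac(0.5, 1.0, 300.0))  # ~1.0
print(fermi_dirac(1.5, 1.0, 300.0))  # ~0.0
```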




  • https://en.wikipedia.org/wiki/Dirac_fermion - a fermion which is not its own antiparticle. The vast majority of particles fall under this category, as they are not their own antiparticles, and in particle physics all fermions in the standard model, except possibly neutrinos, are Dirac fermions. They are named for Paul Dirac, and can be modeled with the Dirac equation. A Dirac fermion is equivalent to two Weyl fermions. The counterpart to a Dirac fermion is a Majorana fermion, a particle that is its own antiparticle. In condensed matter physics, low-energy excitations in graphene and topological insulators, among others, are fermionic quasiparticles described by a pseudo-relativistic Dirac equation.


  • https://en.wikipedia.org/wiki/Majorana_fermion - also referred to as a Majorana particle, is a fermion that is its own antiparticle. They were hypothesized by Ettore Majorana in 1937. The term is sometimes used in opposition to a Dirac fermion, which describes fermions that are not their own antiparticles. With the exception of the neutrino, all of the Standard Model fermions are known to behave as Dirac fermions at low energy (after electroweak symmetry breaking). The nature of the neutrino is not settled; it may be either Dirac or Majorana. In condensed matter physics, bound Majorana fermions can appear as quasiparticle excitations – the collective movement of several individual particles, not a single one, and they are governed by non-abelian statistics.


  • https://en.wikipedia.org/wiki/Four-fermion_interactions - fermions are described by anticommuting spinor fields. A four-fermion interaction describes a local interaction between four fermionic fields at a point. Local here means that it all happens at the same spacetime point. This might be an effective field theory or it might be fundamental.


Leptons - not subject to strong interaction

  • https://en.wikipedia.org/wiki/Lepton - an elementary, half-integer spin (spin 1⁄2) particle that does not undergo strong interactions. Two main classes of leptons exist: charged leptons (also known as the electron-like leptons), and neutral leptons (better known as neutrinos). Charged leptons can combine with other particles to form various composite particles such as atoms and positronium, while neutrinos rarely interact with anything, and are consequently rarely observed. The best known of all leptons is the electron.

There are six types of leptons, known as flavours, forming three generations. The first generation is the electronic leptons, comprising the electron (e−) and electron neutrino (νe); the second is the muonic leptons, comprising the muon (μ−) and muon neutrino (νμ); and the third is the tauonic leptons, comprising the tau (τ−) and the tau neutrino (ντ). Electrons have the least mass of all the charged leptons. The heavier muons and taus will rapidly change into electrons and neutrinos through a process of particle decay: the transformation from a higher mass state to a lower mass state. Thus electrons are stable and the most common charged lepton in the universe, whereas muons and taus can only be produced in high energy collisions (such as those involving cosmic rays and those carried out in particle accelerators).

Leptons have various intrinsic properties, including electric charge, spin, and mass. Unlike quarks, however, leptons are not subject to the strong interaction, but they are subject to the other three fundamental interactions: gravitation, electromagnetism (excluding neutrinos, which are electrically neutral), and the weak interaction. For every lepton flavor there is a corresponding type of antiparticle, known as an antilepton, that differs from the lepton only in that some of its properties have equal magnitude but opposite sign. According to certain theories, neutrinos may be their own antiparticles, but it is not yet known whether this is the case.


Electron
  • https://en.wikipedia.org/wiki/Electron - a subatomic particle, symbol e− or β−, with a negative elementary electric charge. Electrons belong to the first generation of the lepton particle family, and are generally thought to be elementary particles because they have no known components or substructure. The electron has a mass that is approximately 1/1836 that of the proton. Quantum mechanical properties of the electron include an intrinsic angular momentum (spin) of a half-integer value, expressed in units of the reduced Planck constant, ħ. As it is a fermion, no two electrons can occupy the same quantum state, in accordance with the Pauli exclusion principle. Like all matter, electrons have properties of both particles and waves: they can collide with other particles and can be diffracted like light. The wave properties of electrons are easier to observe with experiments than those of other particles like neutrons and protons because electrons have a lower mass and hence a larger de Broglie wavelength for a given energy.
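The mass dependence of the de Broglie wavelength can be checked numerically: for a non-relativistic particle λ = h/p with p = √(2mE), so at the same kinetic energy the wavelength ratio of electron to proton is √(m_p/m_e) ≈ 43. A quick sketch:

```python
import math

H = 6.62607015e-34          # Planck constant, J*s
M_ELECTRON = 9.1093837e-31  # electron mass, kg
M_PROTON = 1.67262192e-27   # proton mass, kg
EV = 1.602176634e-19        # joules per electronvolt

def de_broglie_wavelength(mass_kg, kinetic_energy_ev):
    """Non-relativistic de Broglie wavelength: lambda = h / sqrt(2 m E)."""
    momentum = math.sqrt(2.0 * mass_kg * kinetic_energy_ev * EV)
    return H / momentum

# At the same kinetic energy, the lighter electron has the longer wavelength.
ratio = de_broglie_wavelength(M_ELECTRON, 10.0) / de_broglie_wavelength(M_PROTON, 10.0)
print(ratio)  # ~42.8
```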

Electrons play an essential role in numerous physical phenomena, such as electricity, magnetism, and thermal conductivity, and they also participate in gravitational, electromagnetic and weak interactions. Since an electron has charge, it has a surrounding electric field, and if that electron is moving relative to an observer it will generate a magnetic field. Electromagnetic fields produced from other sources (not those self-produced) will affect the motion of an electron according to the Lorentz force law. Electrons radiate or absorb energy in the form of photons when they are accelerated. Laboratory instruments are capable of trapping individual electrons as well as electron plasma by the use of electromagnetic fields. Special telescopes can detect electron plasma in outer space. Electrons are involved in many applications such as electronics, welding, cathode ray tubes, electron microscopes, radiation therapy, lasers, gaseous ionization detectors and particle accelerators.

Interactions involving electrons with other subatomic particles are of interest in fields such as chemistry and nuclear physics. The Coulomb force interaction between the positive protons within atomic nuclei and the negative electrons without, allows the composition of the two known as atoms. Ionization or differences in the proportions of negative electrons versus positive nuclei changes the binding energy of an atomic system. The exchange or sharing of the electrons between two or more atoms is the main cause of chemical bonding. In 1838, British natural philosopher Richard Laming first hypothesized the concept of an indivisible quantity of electric charge to explain the chemical properties of atoms. Irish physicist George Johnstone Stoney named this charge 'electron' in 1891, and J. J. Thomson and his team of British physicists identified it as a particle in 1897.

Electrons can also participate in nuclear reactions, such as nucleosynthesis in stars, where they are known as beta particles. Electrons can be created through beta decay of radioactive isotopes and in high-energy collisions, for instance when cosmic rays enter the atmosphere. The antiparticle of the electron is called the positron; it is identical to the electron except that it carries electrical and other charges of the opposite sign. When an electron collides with a positron, both particles can be totally annihilated, producing gamma ray photons.

Muon

The muon is an unstable subatomic particle with a mean lifetime of 2.2 µs, much longer than that of many other subatomic particles. As with the decay of the non-elementary neutron (with a lifetime around 15 minutes), muon decay is slow (by "subatomic standards") because the decay is mediated by the weak interaction exclusively (rather than the more powerful strong interaction or electromagnetic interaction), and because the mass difference between the muon and the set of its decay products is small, providing few kinetic degrees of freedom for decay. Muon decay almost always produces at least three particles, which must include an electron of the same charge as the muon and two neutrinos of different types.
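The quoted 2.2 µs mean lifetime implies the exponential survival law N(t)/N₀ = exp(−t/τ). A minimal sketch (ignoring relativistic time dilation, which greatly extends the lab-frame lifetime of fast cosmic-ray muons):

```python
import math

MUON_LIFETIME_S = 2.2e-6  # mean (muon rest-frame) lifetime, seconds

def survival_fraction(t_seconds):
    """Fraction of muons not yet decayed after time t (exponential decay law)."""
    return math.exp(-t_seconds / MUON_LIFETIME_S)

# After one mean lifetime ~37% survive; after five, under 1%.
print(survival_fraction(2.2e-6))      # ~0.368
print(survival_fraction(5 * 2.2e-6))  # ~0.0067
```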


Tau


Electron neutrino - no net electric charge, weak & gravity


  • https://en.wikipedia.org/wiki/Pontecorvo–Maki–Nakagawa–Sakata_matrix - PMNS matrix, Maki–Nakagawa–Sakata matrix (MNS matrix), lepton mixing matrix, or neutrino mixing matrix, is a unitary mixing matrix which contains information on the mismatch of quantum states of neutrinos when they propagate freely and when they take part in the weak interactions. It is a model of neutrino oscillation. This matrix was introduced in 1962 by Ziro Maki, Masami Nakagawa and Shoichi Sakata, to explain the neutrino oscillations predicted by Bruno Pontecorvo.
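In the simplified two-flavor limit, the mixing described by such a matrix gives the standard oscillation probability P = sin²(2θ) · sin²(1.27 Δm² L/E), with Δm² in eV², L in km and E in GeV. A sketch with illustrative numbers (maximal mixing and round baseline/energy values, not a fit to any experiment):

```python
import math

def oscillation_probability(theta_rad, dm2_ev2, length_km, energy_gev):
    """Two-flavor neutrino oscillation probability:
    P = sin^2(2*theta) * sin^2(1.27 * dm2 * L / E)."""
    return (math.sin(2.0 * theta_rad) ** 2
            * math.sin(1.27 * dm2_ev2 * length_km / energy_gev) ** 2)

# Illustrative only: maximal mixing (theta = pi/4), dm2 = 2.5e-3 eV^2,
# a 500 km baseline and a 1 GeV beam give a near-maximal probability.
print(oscillation_probability(math.pi / 4, 2.5e-3, 500.0, 1.0))
```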



Muon neutrino
Tau neutrino

Quarks

  • https://en.wikipedia.org/wiki/Quark - an elementary particle and fundamental constituent of matter. Quarks have various intrinsic properties, including electric charge, mass, color charge, and spin. They are the only elementary particles in the Standard Model of particle physics to experience all four fundamental interactions, also known as fundamental forces (electromagnetism, gravitation, strong interaction, and weak interaction), as well as the only known particles whose electric charges are not integer multiples of the elementary charge.

There are six types of quarks, known as flavors: up, down, strange, charm, top, and bottom. Up and down quarks have the lowest masses of all quarks. The heavier quarks rapidly change into up and down quarks through a process of particle decay: the transformation from a higher mass state to a lower mass state. Because of this, up and down quarks are generally stable and the most common in the universe, whereas strange, charm, bottom, and top quarks can only be produced in high energy collisions (such as those involving cosmic rays and in particle accelerators). For every quark flavor there is a corresponding type of antiparticle, known as an antiquark, that differs from the quark only in that some of its properties have equal magnitude but opposite sign.

The quark model was independently proposed by physicists Murray Gell-Mann and George Zweig in 1964. Quarks were introduced as parts of an ordering scheme for hadrons, and there was little evidence for their physical existence until deep inelastic scattering experiments at the Stanford Linear Accelerator Center in 1968. Accelerator experiments have provided evidence for all six flavors. The top quark was the last to be discovered at Fermilab in 1995.


  • https://en.wikipedia.org/wiki/Eightfold_way_(physics) - a term coined by American physicist Murray Gell-Mann for a theory organizing subatomic baryons and mesons into octets (alluding to the Noble Eightfold Path of Buddhism). The theory was independently proposed by Israeli physicist Yuval Ne'eman and led to the subsequent development of the quark model. In addition to organizing the mesons and spin-1/2 baryons into an octet, the principles of the Eightfold Way also applied to the spin-3/2 baryons, forming a decuplet. However, one of the particles of this decuplet had never been previously observed. Gell-Mann called this particle the Ω− and predicted in 1962 that it would have a strangeness −3, electric charge −1 and a mass near 1680 MeV/c2. In 1964, a particle closely matching these predictions was discovered by a particle accelerator group at Brookhaven. Gell-Mann received the 1969 Nobel Prize in Physics for his work on the theory of elementary particles.



  • https://en.wikipedia.org/wiki/Current_quark - (also called naked quarks or bare quarks) the quark cores of constituent quarks, i.e. constituent quarks stripped of their surrounding covering of gluons and virtual quark–antiquark pairs. If a current quark is struck inside its covering with enough force, it accelerates through the covering and leaves it behind. Within the limits described by perturbation theory, current quarks exhibit asymptotic freedom. In quantum chromodynamics, the mass of a current quark is designated the current quark mass.


  • https://en.wikipedia.org/wiki/Cabibbo–Kobayashi–Maskawa_matrix - CKM matrix, quark mixing matrix, or KM matrix is a unitary matrix which contains information on the strength of flavour-changing weak decays. Technically, it specifies the mismatch of quantum states of quarks when they propagate freely and when they take part in the weak interactions. It is important in the understanding of CP violation. This matrix was introduced for three generations of quarks by Makoto Kobayashi and Toshihide Maskawa, adding one generation to the matrix previously introduced by Nicola Cabibbo. This matrix is also an extension of the GIM mechanism, which only includes two of the three current families of quarks.


Up quark
Down quark
Charm quark
Strange quark
Top quark
Bottom quark


Bosons



  • https://en.wikipedia.org/wiki/Goldstone_boson - bosons that appear necessarily in models exhibiting spontaneous breakdown of continuous symmetries. They were discovered by Yoichiro Nambu in the context of the BCS superconductivity mechanism, and subsequently elucidated by Jeffrey Goldstone, and systematically generalized in the context of quantum field theory. These spinless bosons correspond to the spontaneously broken internal symmetry generators, and are characterized by the quantum numbers of these. They transform nonlinearly (shift) under the action of these generators, and can thus be excited out of the asymmetric vacuum by these generators. Thus, they can be thought of as the excitations of the field in the broken symmetry directions in group space—and are massless if the spontaneously broken symmetry is not also broken explicitly.



Higgs scalar field

  • https://en.wikipedia.org/wiki/Scalar_field - spin 0, associates a scalar value to every point in a space. The scalar may either be a mathematical number or a physical quantity. Scalar fields are required to be coordinate-independent, meaning that any two observers using the same units will agree on the value of the scalar field at the same absolute point in space (or spacetime) regardless of their respective points of origin. Examples used in physics include the temperature distribution throughout space, the pressure distribution in a fluid, and spin-zero quantum fields, such as the Higgs field. These fields are the subject of scalar field theory.


  • https://en.wikipedia.org/wiki/Scalar_boson - a boson whose spin equals zero. Boson means that it has an integer-valued spin; the scalar fixes this value to 0. The name scalar boson arises from quantum field theory. It refers to the particular transformation properties under Lorentz transformation.

The only fundamental scalar boson in the Standard Model of particle physics is the Higgs boson, the existence of which was confirmed on 14 March 2013 at the Large Hadron Collider. As a result of this confirmation, the 2013 Nobel Prize in Physics was awarded to Peter Higgs and François Englert. Various known composite particles are scalar bosons, e.g. the alpha particle and the pion. Scalar mesons have invariant parity under Lorentz transformation, whereas pseudoscalar mesons reverse parity under Lorentz transformation.






  • https://en.wikipedia.org/wiki/Higgs_mechanism - explains the generation of the property "mass" for gauge bosons. Without the Higgs mechanism, all bosons (a type of fundamental particle) would be massless, but measurements show that the W+, W−, and Z bosons actually have relatively large masses of around 80 GeV/c2. The Higgs field resolves this conundrum. The simplest description of the mechanism adds a quantum field (the Higgs field) that permeates all space, to the Standard Model. Below some extremely high temperature, the field causes spontaneous symmetry breaking during interactions. The breaking of symmetry triggers the Higgs mechanism, causing the bosons it interacts with to have mass. In the Standard Model, the phrase "Higgs mechanism" refers specifically to the generation of masses for the W±, and Z weak gauge bosons through electroweak symmetry breaking.

The mechanism was proposed in 1962 by Philip Warren Anderson, following work in the late 1950s on symmetry breaking in superconductivity and a 1960 paper by Yoichiro Nambu that discussed its application within particle physics. A theory able to finally explain mass generation without "breaking" gauge theory was published almost simultaneously by three independent groups in 1964: by Robert Brout and François Englert; by Peter Higgs; and by Gerald Guralnik, C. R. Hagen, and Tom Kibble. The Higgs mechanism is therefore also called the Brout–Englert–Higgs mechanism or Englert–Brout–Higgs–Guralnik–Hagen–Kibble mechanism, Anderson–Higgs mechanism, Anderson–Higgs-Kibble mechanism, Higgs–Kibble mechanism by Abdus Salam and ABEGHHK'tH mechanism [for Anderson, Brout, Englert, Guralnik, Hagen, Higgs, Kibble and 't Hooft] by Peter Higgs.

On October 8, 2013, following the discovery at CERN's Large Hadron Collider of a new particle that appeared to be the long-sought Higgs boson predicted by the theory, it was announced that Peter Higgs and François Englert had been awarded the 2013 Nobel Prize in Physics (Englert's co-author Robert Brout had died in 2011 and the Nobel Prize is not usually awarded posthumously).



"the mass of almost all matter that we are used to is basically independent of the Higgs mechanism. Over 99% of all the mass that we ever interact with is due to the mass of the protons and neutrons in atomic nuclei, and the masses of protons and neutrons, which are in turn made out of very light quarks, is determined by quantum chromodynamics (nuclear strong force interactions), not the Higgs mechanism, which only applies to elementary particles!"

"It's often said that mass bends space-time, but in reality it's more complex than that: energy density bends space-time (and even that is a simplification...). The gravitational field of a massless photon is just as real and extant as the gravitational field of a planet; one is just hugely larger than the other. A photon with 100 Joules of energy has the same gravitational pull as an ecoli bacterium weighing 1 picogram."

"Higgs field interacts with certain fundamental particles (i.e. quarks, electrons) and gives them mass. All particles (even massless ones that never interact with the higgs field) have some energy and gravity acts on energy (not just mass!). So the higgs field give some things more mass which makes gravity affect them more, but gravity affects all things and would even if there was no higgs field."


Gauge vector bosons

  • https://en.wikipedia.org/wiki/Vector_boson - a boson with the spin equal to 1. The vector bosons regarded as elementary particles in the Standard Model are the gauge bosons, which are the force carriers of fundamental interactions: the photon of electromagnetism, the W and Z bosons of the weak interaction, and the gluons of the strong interaction. Some composite particles are vector bosons, for instance any vector meson (quark and antiquark).


The term gauge refers to redundant degrees of freedom in the Lagrangian. The transformations between possible gauges, called gauge transformations, form a Lie group—referred to as the symmetry group or the gauge group of the theory. Associated with any Lie group is the Lie algebra of group generators. For each group generator there necessarily arises a corresponding field (usually a vector field) called the gauge field. Gauge fields are included in the Lagrangian to ensure its invariance under the local group transformations (called gauge invariance). When such a theory is quantized, the quanta of the gauge fields are called gauge bosons. If the symmetry group is non-commutative, the gauge theory is referred to as non-abelian, the usual example being the Yang–Mills theory.

Many powerful theories in physics are described by Lagrangians that are invariant under some symmetry transformation groups. When they are invariant under a transformation identically performed at every point in the spacetime in which the physical processes occur, they are said to have a global symmetry. Local symmetry, the cornerstone of gauge theories, is a stricter constraint. In fact, a global symmetry is just a local symmetry whose group's parameters are fixed in spacetime.


  • https://en.wikipedia.org/wiki/Lattice_gauge_theory - the study of gauge theories on a spacetime that has been discretized into a lattice. Gauge theories are important in particle physics, and include the prevailing theories of elementary particles: quantum electrodynamics, quantum chromodynamics (QCD) and the Standard Model. Non-perturbative gauge theory calculations in continuous spacetime formally involve evaluating an infinite-dimensional path integral, which is computationally intractable. By working on a discrete spacetime, the path integral becomes finite-dimensional, and can be evaluated by stochastic simulation techniques such as the Monte Carlo method. When the size of the lattice is taken infinitely large and its sites infinitesimally close to each other, the continuum gauge theory is recovered.




  • http://www.scholarpedia.org/article/Gauge_invariance - In electrodynamics, the structure of the field equations is such that the electric field E(t,x) and the magnetic field B(t,x) can be expressed in terms of a scalar field A0(t,x) (scalar potential) and a vector field A(t,x) (vector potential). The term gauge invariance refers to the property that a whole class of scalar and vector potentials, related by so-called gauge transformations, describe the same electric and magnetic fields. As a consequence, the dynamics of the electromagnetic fields and the dynamics of a charged system in an electromagnetic background do not depend on the choice of the representative (A0(t,x),A(t,x)) within the appropriate class. The concept of gauge invariance has since been extended to more general theories like, for example, Yang-Mills theories or General Relativity.



  • https://en.wikipedia.org/wiki/G-factor_(physics) - a dimensionless quantity that characterizes the magnetic moment and gyromagnetic ratio of a particle or nucleus. It is essentially a proportionality constant that relates the observed magnetic moment μ of a particle to its angular momentum quantum number and a unit of magnetic moment, usually the Bohr magneton or nuclear magneton.
  • https://en.wikipedia.org/wiki/Gyromagnetic_ratio - also sometimes known as the magnetogyric ratio in other disciplines, of a particle or system is the ratio of its magnetic moment to its angular momentum, and it is often denoted by the symbol γ, gamma. Its SI unit is the radian per second per tesla (rad⋅s−1⋅T−1) or, equivalently, the coulomb per kilogram (C⋅kg−1). The term "gyromagnetic ratio" is often used as a synonym for a different but closely related quantity, the g-factor. The g-factor, unlike the gyromagnetic ratio, is dimensionless.
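For a particle of charge q and mass m, the two quantities are related by γ = g·q/(2m). A minimal Python sketch evaluating this for the electron, using the standard constant values:

```python
# Relation between the g-factor and the gyromagnetic ratio: gamma = g * q / (2 m),
# evaluated for the electron. Constants are the standard CODATA values.
e = 1.602176634e-19      # elementary charge, C
m_e = 9.1093837015e-31   # electron mass, kg
g_e = 2.00231930436      # electron g-factor, dimensionless

gamma_e = g_e * e / (2 * m_e)   # rad s^-1 T^-1
print(f"electron gyromagnetic ratio ~ {gamma_e:.3e} rad/(s*T)")   # ~1.76e11
```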




Gluon - strong interaction

The strong interaction is observable at two ranges: on a larger scale (about 1 to 3 femtometers (fm)), it is the force that binds protons and neutrons (nucleons) together to form the nucleus of an atom. On the smaller scale (less than about 0.8 fm, the radius of a nucleon), it is the force (carried by gluons) that holds quarks together to form protons, neutrons, and other hadron particles. In the latter context, it is often known as the color force. The strong force inherently has such a high strength that hadrons bound by the strong force can produce new massive particles. Thus, if hadrons are struck by high-energy particles, they give rise to new hadrons instead of emitting freely moving radiation (gluons). This property of the strong force is called color confinement, and it prevents the free "emission" of the strong force: instead, in practice, jets of massive particles are observed.




Most of the mass of hadrons is actually QCD binding energy, through mass-energy equivalence. This phenomenon is related to chiral symmetry breaking. In the case of nucleons – protons and neutrons – QCD binding energy forms about 99% of the nucleon's mass. That is, counting the kinetic energy of the hadron's constituents, which move at near the speed of light and contribute greatly to the hadron's mass, as part of the QCD binding energy. For protons, the sum of the rest masses of the three valence quarks (two up quarks and one down quark) is approximately 9.4 MeV, while the proton's total mass is about 938.3 MeV. For neutrons, the sum of the rest masses of the three valence quarks (two down quarks and one up quark) is approximately 11.9 MeV, while the neutron's total mass is about 939.6 MeV. Considering that nearly all of the atom's mass is concentrated in the nucleons, this means that about 99% of the mass of everyday matter (baryonic matter) is, in fact, chromodynamic binding energy.
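The ~99% figure follows directly from the numbers quoted above; a quick arithmetic check in Python:

```python
# Fraction of the nucleon mass not accounted for by valence-quark rest masses
# (figures in MeV/c^2, taken from the paragraph above; the remainder is
# QCD binding energy plus constituent kinetic energy).
proton_mass = 938.3      # total proton mass
proton_quarks = 9.4      # two up quarks + one down quark
neutron_mass = 939.6     # total neutron mass
neutron_quarks = 11.9    # two down quarks + one up quark

for name, total, quarks in [("proton", proton_mass, proton_quarks),
                            ("neutron", neutron_mass, neutron_quarks)]:
    binding_fraction = 1 - quarks / total
    print(f"{name}: ~{binding_fraction:.1%} of mass is QCD binding energy")
```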

While gluons are massless, they still possess energy – chromodynamic binding energy. In that they are similar to photons, which are also massless particles carrying energy – photon energy. The amount of energy per single gluon, or "gluon energy", cannot be calculated. Unlike photon energy, which is quantifiable, described by the Planck-Einstein relation and depends on a single variable (the photon's frequency), no formula exists for the quantity of energy carried by each gluon. While the effects of a single photon can be observed, single gluons have not been observed outside of a hadron. Due to the mathematical complexity of quantum chromodynamics and the somewhat chaotic structure of hadrons, which are composed of gluons, valence quarks, sea quarks and other virtual particles, it is not even measurable how many gluons exist at a given moment inside a hadron. Additionally, not all of the QCD binding energy is gluon energy, but rather, some of it comes from the kinetic energy of the hadron's constituents. Therefore, only the total QCD binding energy per hadron can be stated. However, in the future, studies into quark-gluon plasma might be able to overcome this.



Gluons carry one of eight colour-charge combinations, so there are eight gluon fields; photons, by contrast, are neutral, so there is only one photon field.


  • https://en.wikipedia.org/wiki/Color_charge - a property of quarks and gluons that is related to the particles' strong interactions in the theory of quantum chromodynamics (QCD). A quark's color can take one of three values or charges, red, green, and blue. An antiquark can take one of three anticolors, called antired, antigreen, and antiblue (represented as cyan, magenta and yellow, respectively). Gluons are mixtures of a color and an anticolor, such as red and antigreen, which constitutes their color charge. QCD considers eight gluons of the possible nine color–anticolor combinations to be unique; see eight gluon colors for an explanation.
  • https://en.wikipedia.org/wiki/Color_confinement - often simply called confinement, is the phenomenon that color charged particles (such as quarks) cannot be isolated singularly, and therefore cannot be directly observed. Quarks, by default, clump together to form groups, or hadrons. The two types of hadrons are the mesons (one quark, one antiquark) and the baryons (three quarks). The constituent quarks in a group cannot be separated from their parent hadron, and this is why quarks currently cannot be studied or observed in any more direct way than at a hadron level.


  • https://en.wikipedia.org/wiki/QCD_string - In quantum chromodynamics (or in the more general case of quantum gauge theories), if a connection which is color confining occurs, it is possible for stringlike degrees of freedom called QCD strings or QCD flux tubes to form. These stringlike excitations are responsible for the confinement of color charges since they are always attached to at least one string which exhibits tension. Their existence can be predicted from the dual spin network/spin foam models (this duality is exact over a lattice). To a surprisingly good approximation, these strings are described phenomenologically by the Polyakov action, making them noncritical strings.


Electroweak - electromagnetic and weak interactions
  • https://en.wikipedia.org/wiki/Representation_theory_of_SU(2) - SU(2) is the universal covering group of SO(3), and so its representation theory includes that of the latter, by dint of a surjective homomorphism to it. This underlies the significance of SU(2) for the description of non-relativistic spin in theoretical physics; see below for other physical and historical context. Representations of SU(2) describe non-relativistic spin, due to being a double covering of the rotation group of Euclidean 3-space. Relativistic spin is described by the representation theory of SL2(C), a supergroup of SU(2), which in a similar way covers SO+(1;3), the relativistic version of the rotation group. SU(2) symmetry also supports concepts of isobaric spin and weak isospin, collectively known as isospin.




  • https://en.wikipedia.org/wiki/Weinberg_angle - a parameter in the Weinberg–Salam theory of the electroweak interaction, part of the standard model of particle physics, and is usually denoted as θW. It is the angle by which spontaneous symmetry breaking rotates the original W0 and B0 vector boson plane, producing as a result the Z0 boson, and the photon. It also gives the relationship between the masses of the W and Z bosons (denoted as mW and mZ).
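At tree level the mass relationship is cos θ_W = m_W/m_Z. Plugging in approximate measured masses (the values below are rounded, indicative figures, not a precision fit) recovers the familiar sin²θ_W ≈ 0.22:

```python
import math

# Tree-level Weinberg angle from the W and Z boson masses:
# cos(theta_W) = m_W / m_Z. Masses in GeV/c^2, rounded.
m_W = 80.4
m_Z = 91.19

cos_tw = m_W / m_Z
sin2_tw = 1 - cos_tw**2                 # the commonly quoted sin^2(theta_W)
theta_W = math.degrees(math.acos(cos_tw))
print(f"sin^2(theta_W) ~ {sin2_tw:.3f}, theta_W ~ {theta_W:.1f} deg")
```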


  • https://en.wikipedia.org/wiki/Hypercharge - from hyperonic + charge, Y of a particle is related to the strong interaction, and is distinct from the similarly named weak hypercharge, which has an analogous role in the electroweak interaction. The concept of hypercharge combines and unifies isospin and flavour into a single charge operator.


Photon - electromagnetic radiation/interactions

See also Radio, Electrical

  • https://en.wikipedia.org/wiki/Photon - a type of elementary particle, the quantum of the electromagnetic field including electromagnetic radiation such as light, and the force carrier for the electromagnetic force (even when static via virtual particles). The photon has zero rest mass and always moves at the speed of light within a vacuum.

Like all elementary particles, photons are currently best explained by quantum mechanics and exhibit wave–particle duality, exhibiting properties of both waves and particles. For example, a single photon may be refracted by a lens and exhibit wave interference with itself, and it can behave as a particle with definite and finite measurable position or momentum, though not both at the same time. The photon's wave and quanta qualities are two observable aspects of a single phenomenon, and cannot be described by any mechanical model; a representation of this dual property of light, which assumes certain points on the wavefront to be the seat of the energy, is not possible. The quanta in a light wave cannot be spatially localized.


Has no mass, always moves as fast as causality.



See also Electronics



  • https://en.wikipedia.org/wiki/Photon_energy - the energy carried by a single photon. The amount of energy is directly proportional to the photon's electromagnetic frequency and inversely proportional to the wavelength. The higher the photon's frequency, the higher its energy. Equivalently, the longer the photon's wavelength, the lower its energy.

Photon energy is solely a function of the photon's wavelength. Other factors, such as the intensity of the radiation, do not affect photon energy. In other words, two photons of light with the same color and therefore, same frequency, will have the same photon energy, even if one was emitted from a wax candle and the other from the Sun.

Photon energy can be represented by any unit of energy. Among the units commonly used to denote photon energy are the electronvolt (eV) and the joule (as well as its multiples, such as the microjoule). As one joule equals 6.24 × 1018 eV, the larger units may be more useful in denoting the energy of photons with higher frequency and higher energy, such as gamma rays, as opposed to lower energy photons, such as those in the radiofrequency region of the electromagnetic spectrum.
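The relation described above is E = hν = hc/λ; a quick Python check for a 500 nm (green) photon, using standard constants:

```python
# Photon energy E = h*c / lambda, illustrating that the energy depends only
# on the wavelength (equivalently, the frequency), not on intensity or source.
h = 6.62607015e-34    # Planck constant, J*s
c = 2.99792458e8      # speed of light, m/s
eV = 1.602176634e-19  # joules per electronvolt

wavelength = 500e-9   # green light, m
E_joule = h * c / wavelength
E_eV = E_joule / eV
print(f"500 nm photon: {E_joule:.3e} J = {E_eV:.2f} eV")   # ~2.5 eV
```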




  • https://en.wikipedia.org/wiki/Maxwell's_equations - are a set of partial differential equations that, together with the Lorentz force law, form the foundation of classical electromagnetism, classical optics, and electric circuits. The equations provide a conceptual underpinning for all electric, optical and radio technologies, including power generation, electric motors, wireless communication, cameras, televisions, computers etc. Maxwell's equations describe how electric and magnetic fields are generated by charges, currents, and changes of each other. One important consequence of the equations is that they demonstrate how fluctuating electric and magnetic fields propagate at the speed of light. Known as electromagnetic radiation, these waves may occur at various wavelengths to produce a spectrum from radio waves to γ-rays. The equations are named after the physicist and mathematician James Clerk Maxwell, who between 1861 and 1862 published an early form of the equations. Covariant formulations of the equations are often used in high energy and gravitational physics because they make the compatibility of the equations with special and general relativity manifest. In fact, Einstein developed special and general relativity to accommodate the absolute speed of light that drops out of the Maxwell equations with the principle that only relative movement has physical consequences.

Since the mid-20th century, it has been understood that Maxwell's equations are not exact, but a classical field theory approximation of some aspects of the fundamental theory of quantum electrodynamics, although some quantum features, such as quantum entanglement, are completely absent and in no way approximated. (For example, quantum cryptography has no approximate version in Maxwell theory.) In many situations, though, deviations from Maxwell's equations are immeasurably small. Exceptions include nonclassical light, photon–photon scattering, quantum optics, and many other phenomena related to photons or virtual photons.



  • https://en.wikipedia.org/wiki/Faraday_effect - or Faraday rotation is a magneto-optical phenomenon—that is, an interaction between light and a magnetic field in a medium. The Faraday effect causes a rotation of the plane of polarization which is linearly proportional to the component of the magnetic field in the direction of propagation. Formally, it is a special case of gyroelectromagnetism obtained when the dielectric permittivity tensor is diagonal.


  • https://en.wikipedia.org/wiki/Compton_scattering - discovered by Arthur Holly Compton, is the scattering of a photon by a charged particle, usually an electron. It results in a decrease in energy (increase in wavelength) of the photon (which may be an X-ray or gamma ray photon), called the Compton effect. Part of the energy of the photon is transferred to the recoiling electron. Inverse Compton scattering occurs, in which a charged particle transfers part of its energy to a photon.
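The wavelength increase follows the standard Compton shift formula Δλ = (h/mₑc)(1 − cos θ). A quick Python check of the electron's Compton wavelength and the shift at a 90° scattering angle:

```python
import math

# Compton shift: delta_lambda = (h / (m_e * c)) * (1 - cos(theta)),
# for a photon scattering off an electron. Constants are standard values.
h = 6.62607015e-34     # Planck constant, J*s
m_e = 9.1093837015e-31 # electron mass, kg
c = 2.99792458e8       # speed of light, m/s

compton_wavelength = h / (m_e * c)   # ~2.43 pm
theta = math.radians(90)
shift = compton_wavelength * (1 - math.cos(theta))
print(f"Compton wavelength: {compton_wavelength*1e12:.3f} pm, "
      f"shift at 90 deg: {shift*1e12:.3f} pm")
```

At 90° the shift equals the Compton wavelength itself, since cos 90° = 0.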


  • https://en.wikipedia.org/wiki/Thomson_scattering - the elastic scattering of electromagnetic radiation by a free charged particle, as described by classical electromagnetism. It is just the low-energy limit of Compton scattering: the particle kinetic energy and photon frequency do not change as a result of the scattering. This limit is valid as long as the photon energy is much less than the mass energy of the particle: ν ≪ mc²/h, or equivalently, if the wavelength of the light is much greater than the Compton wavelength of the particle.



Richard Feynman - 1979 Sir Douglas Robb Memorial Lectures:



  • https://en.wikipedia.org/wiki/Lorentz_force - the combination of electric and magnetic force on a point charge due to electromagnetic fields. If a particle of charge q moves with velocity v in the presence of an electric field E and a magnetic field B, then it will experience a force. Variations on this basic formula describe the magnetic force on a current-carrying wire (sometimes called Laplace force), the electromotive force in a wire loop moving through a magnetic field (an aspect of Faraday's law of induction), and the force on a charged particle which might be travelling near the speed of light (relativistic form of the Lorentz force).
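The basic (non-relativistic) formula is F = q(E + v × B); a self-contained Python sketch, where the field and velocity values are arbitrary illustration numbers:

```python
# Lorentz force F = q * (E + v x B) on a point charge, in plain Python.
def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def lorentz_force(q, E, v, B):
    vxB = cross(v, B)
    return tuple(q * (Ei + vxBi) for Ei, vxBi in zip(E, vxB))

q = 1.602176634e-19   # charge of a proton, C
E = (0.0, 0.0, 1e3)   # electric field, V/m (illustrative)
v = (1e5, 0.0, 0.0)   # velocity, m/s (illustrative)
B = (0.0, 1.0, 0.0)   # magnetic field, T (illustrative)

F = lorentz_force(q, E, v, B)
print(F)   # both E and v x B point along +z here, so F is along +z
```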





  • https://en.wikipedia.org/wiki/Optical_field - a term used in physics and vector calculus to designate the electric field shown as E in the electromagnetic wave equation which can be derived from Maxwell's Equations. In electromagnetic theory, the electromagnetic wave propagates such that both the magnetic field oscillation, and the electric field oscillation is perpendicular to the direction of propagation of the wave. As with any wave, the electromagnetic wave transports energy, thus the total energy density is shared between the constituent electric and magnetic fields. Since the electric field is considerably more effective at exerting forces and doing work on charges than the magnetic field, the electric field E is referred to as the optical field.


Electric and magnetic fields are related: a field that changes over time induces the other.



  • https://en.wikipedia.org/wiki/Electric_field - a vector field that associates to each point in space the Coulomb force that would be experienced per unit of electric charge, by an infinitesimal test charge at that point. Electric fields converge and diverge at electric charges and can be induced by time-varying magnetic fields. The electric field combines with the magnetic field to form the electromagnetic field.

Electric fields are caused by electric charges or varying magnetic fields. The former effect is described by Gauss's law, the latter by Faraday's law of induction, which together are enough to define the behavior of the electric field as a function of charge repartition and magnetic field. However, since the magnetic field is described as a function of electric field, the equations of both fields are coupled and together form Maxwell's equations that describe both fields as a function of charges and currents. Electric fields satisfy the superposition principle, because Maxwell's equations are linear.
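The superposition principle means the field of several point charges is simply the vector sum of their individual Coulomb fields, E = kq r̂/r². A toy 2D Python sketch (the charge values and positions are made-up illustration numbers):

```python
# Superposition of point-charge fields in 2D: E_total is the vector sum of
# the individual Coulomb fields E = k*q*r_hat / r^2.
k = 8.9875517923e9   # Coulomb constant, N*m^2/C^2

def field_of_charge(q, pos, at):
    """Coulomb field of a point charge q at position pos, evaluated at 'at'."""
    dx, dy = at[0] - pos[0], at[1] - pos[1]
    r2 = dx*dx + dy*dy
    r = r2 ** 0.5
    mag = k * q / r2
    return (mag * dx / r, mag * dy / r)

# A +1 nC / -1 nC pair, one metre either side of the origin (a tiny dipole).
charges = [(1e-9, (-1.0, 0.0)),
           (-1e-9, (1.0, 0.0))]

point = (0.0, 0.0)
Ex = sum(field_of_charge(q, pos, point)[0] for q, pos in charges)
Ey = sum(field_of_charge(q, pos, point)[1] for q, pos in charges)
print(f"E at origin: ({Ex:.2f}, {Ey:.2f}) N/C")  # both fields add along +x
```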


  • https://en.wikipedia.org/wiki/Electrostatics - Electrostatic fields are E-fields which do not change with time, which happens when charges and currents are stationary. In that case, Coulomb's law fully describes the field. Electrostatics involves the buildup of charge on the surface of objects due to contact with other surfaces. Even though electrostatically induced forces seem to be rather weak, the electrostatic force between e.g. an electron and a proton, that together make up a hydrogen atom, is about 39 orders of magnitude stronger than the gravitational force acting between them. There are many examples of electrostatic phenomena, from those as simple as the attraction of the plastic wrap to your hand after you remove it from a package, and the attraction of paper to a charged scale, to the apparently spontaneous explosion of grain silos, the damage of electronic components during manufacturing, and photocopier and laser printer operation.

Although charge exchange happens whenever any two surfaces contact and separate, the effects of charge exchange are usually only noticed when at least one of the surfaces has a high resistance to electrical flow. This is because the charges that transfer are trapped there for a time long enough for their effects to be observed. These charges then remain on the object until they either bleed off to ground or are quickly neutralized by a discharge: e.g., the familiar phenomenon of a static 'shock' is caused by the neutralization of charge built up in the body from contact with insulated surfaces.

Electrostatics does not require the absence of magnetic fields or electric currents. Rather, if magnetic fields or electric currents do exist, they must not change with time, or in the worst-case, they must change with time only very slowly. In some problems, both electrostatics and magnetostatics may be required for accurate predictions, but the coupling between the two can still be ignored. Electrostatics and magnetostatics can both be seen as Galilean limits for electromagnetism.



  • https://en.wikipedia.org/wiki/Coulomb's_law - or Coulomb's inverse-square law, is a law of physics describing the electrostatic interaction between electrically charged particles. The law was first published in 1785 by French physicist Charles Augustin de Coulomb and was essential to the development of the theory of electromagnetism. It is analogous to Isaac Newton's inverse-square law of universal gravitation. Coulomb's law can be used to derive Gauss's law, and vice versa. The law has been tested heavily, and all observations have upheld the law's principle.
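Because Coulomb's law and Newton's law of gravitation are both inverse-square laws, the ratio of the two forces between an electron and a proton is independent of distance. A quick Python check with standard constants:

```python
# Ratio of the Coulomb attraction to the gravitational attraction between an
# electron and a proton. The r^-2 dependence cancels, so no distance is needed.
k = 8.9875517923e9     # Coulomb constant, N*m^2/C^2
G = 6.67430e-11        # gravitational constant, N*m^2/kg^2
e = 1.602176634e-19    # elementary charge, C
m_e = 9.1093837015e-31 # electron mass, kg
m_p = 1.67262192369e-27  # proton mass, kg

ratio = (k * e**2) / (G * m_e * m_p)
print(f"F_coulomb / F_gravity ~ {ratio:.2e}")   # ~2.3e39, i.e. ~39 orders of magnitude
```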




  • https://en.wikipedia.org/wiki/Magnetic_dipole - the limit of either a closed loop of electric current or a pair of poles as the dimensions of the source are reduced to zero while keeping the magnetic moment constant. It is a magnetic analogue of the electric dipole, but the analogy is not complete. In particular, a magnetic monopole, the magnetic analogue of an electric charge, has never been observed. Moreover, one form of magnetic dipole moment is associated with a fundamental quantum property—the spin of elementary particles. The magnetic field around any magnetic source looks increasingly like the field of a magnetic dipole as the distance from the source increases.


  • https://en.wikipedia.org/wiki/Magnetic_moment - of a magnet is a quantity that determines the torque it will experience in an external magnetic field. A loop of electric current, a bar magnet, an electron (revolving around a nucleus), a molecule, and a planet all have magnetic moments. The magnetic moment may be considered to be a vector having a magnitude and direction. The direction of the magnetic moment points from the south to north pole of the magnet. The magnetic field produced by the magnet is proportional to its magnetic moment. More precisely, the term magnetic moment normally refers to a system's magnetic dipole moment, which produces the first term in the multipole expansion of a general magnetic field. The dipole component of an object's magnetic field is symmetric about the direction of its magnetic dipole moment, and decreases as the inverse cube of the distance from the object.


  • https://en.wikipedia.org/wiki/Electric_dipole_moment - a measure of the separation of positive and negative electrical charges within a system, that is, a measure of the system's overall polarity. The electric-field strength of the dipole is proportional to the magnitude of dipole moment. The SI units for electric dipole moment are coulomb-meter (C·m); however, the most common unit is the debye (D).


  • https://en.wikipedia.org/wiki/Transition_dipole_moment - for a transition between an initial state and a final state, is the electric dipole moment associated with the transition between the two states. In general the transition dipole moment is a complex vector quantity that includes the phase factors associated with the two states. Its direction gives the polarization of the transition, which determines how the system will interact with an electromagnetic wave of a given polarization, while the square of the magnitude gives the strength of the interaction due to the distribution of charge within the system. The SI unit of the transition dipole moment is the coulomb-meter (C·m); a more conveniently sized unit is the debye (D).


  • https://en.wikipedia.org/wiki/Zeeman_effect - named after the Dutch physicist Pieter Zeeman, is the effect of splitting a spectral line into several components in the presence of a static magnetic field. It is analogous to the Stark effect, the splitting of a spectral line into several components in the presence of an electric field. Also similar to the Stark effect, transitions between different components have, in general, different intensities, with some being entirely forbidden (in the dipole approximation), as governed by the selection rules.

Since the distance between the Zeeman sub-levels is a function of the magnetic field, this effect can be used to measure the magnetic field, e.g. that of the Sun and other stars or in laboratory plasmas. The Zeeman effect is very important in applications such as nuclear magnetic resonance spectroscopy, electron spin resonance spectroscopy, magnetic resonance imaging (MRI) and Mössbauer spectroscopy. It may also be utilized to improve accuracy in atomic absorption spectroscopy. A theory about the magnetic sense of birds assumes that a protein in the retina is changed due to the Zeeman effect. When the spectral lines are absorption lines, the effect is called inverse Zeeman effect.
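For a sense of the energy scale, the linear Zeeman shift of a sub-level is ΔE = g_j·m_j·μ_B·B. A rough Python sketch; the level parameters (g_j = 2, m_j = 1/2) and the 1 T field are assumed illustration values, not taken from the text above:

```python
# Linear Zeeman shift, delta_E = g_j * m_j * mu_B * B, for an illustrative
# level with g_j = 2, m_j = 1/2 in a 1 tesla field.
mu_B = 5.7883818060e-5   # Bohr magneton, eV/T
B = 1.0                  # magnetic field, T (assumed)
g_j = 2.0                # Lande g-factor (assumed)
m_j = 0.5                # magnetic quantum number (assumed)

delta_E = g_j * m_j * mu_B * B
print(f"Zeeman shift: {delta_E*1e6:.1f} micro-eV")   # tens of micro-eV per tesla
```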


  • https://en.wikipedia.org/wiki/Magnetic_flux - through a surface is the surface integral of the normal component of the magnetic field B passing through that surface. The SI unit of magnetic flux is the weber (Wb) (in derived units: volt-seconds), and the CGS unit is the maxwell. Magnetic flux is usually measured with a fluxmeter, which contains measuring coils and electronics, that evaluates the change of voltage in the measuring coils to calculate the magnetic flux.
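For the simple case of a flat loop in a uniform field, the surface integral collapses to Φ = B·A·cos θ, with θ the angle between the field and the loop normal. A toy Python calculation (the loop size, field strength, and angle are made-up illustration values):

```python
import math

# Magnetic flux through a flat circular loop in a uniform field:
# Phi = B * A * cos(theta).
B = 0.5                    # uniform field strength, T (illustrative)
radius = 0.1               # loop radius, m (illustrative)
theta = math.radians(30)   # angle between B and the loop normal (illustrative)

area = math.pi * radius**2
flux = B * area * math.cos(theta)   # in webers
print(f"flux ~ {flux*1e3:.2f} mWb")
```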




  • https://en.wikipedia.org/wiki/Eddy_current - loops of electrical current induced within conductors by a changing magnetic field in the conductor, due to Faraday's law of induction. Eddy currents flow in closed loops within conductors, in planes perpendicular to the magnetic field. They can be induced within nearby stationary conductors by a time-varying magnetic field created by an AC electromagnet or transformer, for example, or by relative motion between a magnet and a nearby conductor. The magnitude of the current in a given loop is proportional to the strength of the magnetic field, the area of the loop, and the rate of change of flux, and inversely proportional to the resistivity of the material.




  • https://en.wikipedia.org/wiki/Orbital_angular_momentum_of_light - the component of angular momentum of a light beam that is dependent on the field spatial distribution, and not on the polarization. It can be further split into an internal and an external OAM. The internal OAM is an origin-independent angular momentum of a light beam that can be associated with a helical or twisted wavefront. The external OAM is the origin-dependent angular momentum that can be obtained as cross product of the light beam position (center of the beam) and its total linear momentum.


  • https://en.wikipedia.org/wiki/Spin–orbit_interaction - also called spin–orbit effect or spin–orbit coupling) is an interaction of a particle's spin with its motion. The first and best known example of this is that spin–orbit interaction causes shifts in an electron's atomic energy levels due to electromagnetic interaction between the electron's spin and the magnetic field generated by the electron's orbit around the nucleus. This is detectable as a splitting of spectral lines, which can be thought of as a Zeeman Effect due to the internal field. A similar effect, due to the relationship between angular momentum and the strong nuclear force, occurs for protons and neutrons moving inside the nucleus, leading to a shift in their energy levels in the nucleus shell model. In the field of spintronics, spin–orbit effects for electrons in semiconductors and other materials are explored for technological applications. The spin–orbit interaction is one cause of magnetocrystalline anisotropy.


  • https://en.wikipedia.org/wiki/Polarization_(waves) - a parameter applying to waves that specifies the geometrical orientation of the oscillation. Electromagnetic waves such as light exhibit multiple polarizations, as do many other types of waves such as gravitational waves and sound waves in solids. On the other hand, sound waves in a gas or liquid only oscillate in the wave's direction of propagation, and the oscillation of ocean waves is always in the vertical direction. In these cases one doesn't normally speak of "polarization" since the oscillation's direction is not in question.

In an electromagnetic wave, both the electric field and magnetic field are oscillating but in different directions; by convention the "polarization" of light refers to the polarization of the electric field. Light which can be approximated as a plane wave in free space or in an isotropic medium propagates as a transverse wave—both the electric and magnetic fields are perpendicular to the wave's direction of travel. The oscillation of these fields may be in a single direction (linear polarization), or the field may rotate at the optical frequency (circular or elliptical polarization). In that case the direction of the fields' rotation, and thus the specified polarization, may be either clockwise or counter clockwise; this is referred to as the wave's chirality or handedness.

The most common optical materials (such as glass) are isotropic and simply preserve the polarization of a wave but do not differentiate between polarization states. However, there are important classes of lossless materials classified as birefringent or optically active in which this is not the case and a wave's polarization will generally be modified or will affect propagation through it. In linear dichroism and circular dichroism, attenuation in propagation is dependent on the wave's polarization. One familiar example is the polarizer, an optical filter that transmits only one polarization.

Polarization is an important parameter in areas of science dealing with transverse wave propagation, such as optics, seismology, radio, and microwaves. Especially impacted are technologies such as lasers, wireless and optical fiber telecommunications, and radar. According to quantum mechanics, the energy, linear momentum, and angular momentum of an electromagnetic wave are quantized in the form of photons. Then there is an identification between the electromagnetic polarization of the wave and polarization operators which determine the probability of a photon to be found in a given polarization state. In particular, the spin operator is shown to correspond to the basis states of circular polarization as described below in terms of electromagnetic fields. This is described in detail at Photon polarization.


  • https://en.wikipedia.org/wiki/Linear_polarization - a confinement of the electric field vector or magnetic field vector to a given plane along the direction of propagation. The orientation of a linearly polarized electromagnetic wave is defined by the direction of the electric field vector. For example, if the electric field vector is vertical (alternately up and down as the wave travels) the radiation is said to be vertically polarized.
  • https://en.wikipedia.org/wiki/Elliptical_polarization - polarization of electromagnetic radiation such that the tip of the electric field vector describes an ellipse in any fixed plane intersecting, and normal to, the direction of propagation. An elliptically polarized wave may be resolved into two linearly polarized waves in phase quadrature, with their polarization planes at right angles to each other. Since the electric field can rotate clockwise or counterclockwise as it propagates, elliptically polarized waves exhibit chirality.


  • https://en.wikipedia.org/wiki/Circular_polarization - a polarization in which the electric field of the passing wave does not change strength but only changes direction in a rotary manner. In electrodynamics the strength and direction of an electric field is defined by what is called an electric field vector. In the case of a circularly polarized wave, as seen in the accompanying animation, the tip of the electric field vector, at a given point in space, describes a circle as time progresses. If the wave is frozen in time, the electric field vector of the wave describes a helix along the direction of propagation. Circular polarization is a limiting case of the more general condition of elliptical polarization. The other special case is the easier-to-understand linear polarization.


  • https://en.wikipedia.org/wiki/Photon_polarization - the quantum mechanical description of the classical polarized sinusoidal plane electromagnetic wave. Individual photon eigenstates have either right or left circular polarization. A photon that is in a superposition of eigenstates can have linear, circular, or elliptical polarization.

The description of photon polarization contains many of the physical concepts and much of the mathematical machinery of more involved quantum descriptions, such as the quantum mechanics of an electron in a potential well, and forms a fundamental basis for an understanding of more complicated quantum phenomena. Much of the mathematical machinery of quantum mechanics, such as state vectors, probability amplitudes, unitary operators, and Hermitian operators, emerge naturally from the classical Maxwell's equations in the description. The quantum polarization state vector for the photon, for instance, is identical with the Jones vector, usually used to describe the polarization of a classical wave. Unitary operators emerge from the classical requirement of the conservation of energy of a classical wave propagating through lossless media that alter the polarization state of the wave. Hermitian operators then follow for infinitesimal transformations of a classical polarization state.

Many of the implications of the mathematical machinery are easily verified experimentally. In fact, many of the experiments can be performed with two pairs (or one broken pair) of polaroid sunglasses. The connection with quantum mechanics is made through the identification of a minimum packet size, called a photon, for energy in the electromagnetic field. The identification is based on the theories of Planck and the interpretation of those theories by Einstein. The correspondence principle then allows the identification of momentum and angular momentum (called spin), as well as energy, with the photon.
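The Jones-vector machinery described above is easy to play with numerically. A minimal sketch (using NumPy; the basis vectors and the quarter-wave-plate matrix follow one common sign convention, which varies between texts):

```python
import numpy as np

# Jones vectors in the (x, y) basis.
H = np.array([1, 0], dtype=complex)                  # horizontal linear
V = np.array([0, 1], dtype=complex)                  # vertical linear
R = np.array([1, -1j], dtype=complex) / np.sqrt(2)   # right circular
L = np.array([1,  1j], dtype=complex) / np.sqrt(2)   # left circular

# A linear state is an equal-weight superposition of the two circular
# eigenstates, as the article states for photon polarization:
reconstructed_H = (R + L) / np.sqrt(2)
assert np.allclose(reconstructed_H, H)

# A lossless polarization element is a unitary (Jones) matrix -- e.g. a
# quarter-wave plate with its fast axis along x:
QWP = np.array([[1, 0], [0, 1j]], dtype=complex)
assert np.allclose(QWP.conj().T @ QWP, np.eye(2))    # unitarity = energy conservation

# 45-degree linear light through the quarter-wave plate becomes circular:
D = (H + V) / np.sqrt(2)
out = QWP @ D
# the output overlaps completely with one circular state (up to sign convention)
assert np.isclose(abs(np.vdot(L, out))**2, 1.0)
```

The unitarity check is the point of contact with the article: conservation of energy in lossless media forces the transformation matrices to be unitary.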


  • https://en.wikipedia.org/wiki/Electromagnetic_tensor - or electromagnetic field tensor (sometimes called the field strength tensor, Faraday tensor or Maxwell bivector) is a mathematical object that describes the electromagnetic field in space-time of a physical system. The field tensor was first used after the 4-dimensional tensor formulation of special relativity was introduced by Hermann Minkowski. The tensor allows some physical laws to be written in a very concise form.


  • https://en.wikipedia.org/wiki/Electromagnetic_stress–energy_tensor - the contribution to the stress–energy tensor due to the electromagnetic field. The stress–energy tensor describes the flow of energy and momentum in spacetime. The electromagnetic stress–energy tensor contains the classical Maxwell stress tensor that governs the electromagnetic interactions.












  • https://en.wikipedia.org/wiki/Gravitational_redshift - known as Einstein shift in older literature, is the phenomenon that electromagnetic waves or photons travelling out of a gravitational well (seem to) lose energy. This loss of energy corresponds to a decrease in the wave frequency and an increase in the wavelength, known as a redshift. Gravitational redshift can be interpreted as a consequence of the equivalence principle (that gravity and acceleration are equivalent and the redshift is caused by the Doppler effect) or as a consequence of the mass–energy equivalence ('falling' photons seem to gain energy), though there are numerous subtleties that complicate a rigorous derivation. The effect was first described by Einstein in 1907, eight years before his publication of the full theory of general relativity.
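In the weak-field limit the fractional shift is approximately z ≈ gh/c². A rough numerical sketch (the 22.5 m height is an assumption added here, roughly the tower used in the Pound–Rebka experiment):

```python
g = 9.81           # m/s^2, surface gravity
c = 2.998e8        # m/s, speed of light
h = 22.5           # m, assumed height (roughly the Pound-Rebka tower)

# Weak-field approximation for the fractional frequency shift:
z = g * h / c**2
print(f"z ~ {z:.2e}")   # ~2.46e-15, a minuscule but measurable shift
```

A shift this small is why the effect needed the Mössbauer technique to detect on Earth, while it is routine at astronomical scales.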
W+, W− and Z0 bosons - weak interaction
  • https://en.wikipedia.org/wiki/Weak_interaction - (the weak force or weak nuclear force) is the mechanism of interaction between sub-atomic particles that causes radioactive decay and thus plays an essential role in nuclear fission. The theory of the weak interaction is sometimes called quantum flavourdynamics (QFD), in analogy with the terms quantum chromodynamics (QCD) dealing with the strong interaction and quantum electrodynamics (QED) dealing with the electromagnetic force. However, the term QFD is rarely used because the weak force is best understood in terms of electro-weak theory (EWT). The weak interaction takes place only at very small, sub-atomic distances, less than the diameter of a proton. It is one of the four known fundamental interactions of nature, alongside the strong interaction, electromagnetism, and gravitation.


  • https://en.wikipedia.org/wiki/W_and_Z_bosons - are together known as the weak or more generally as the intermediate vector bosons. These elementary particles mediate the weak interaction; the respective symbols are W+, W−, and Z. The W bosons have either a positive or negative electric charge of 1 elementary charge and are each other's antiparticles. The Z boson is electrically neutral and is its own antiparticle. The three particles have a spin of 1. The W bosons have a magnetic moment, but the Z has none. All three of these particles are very short-lived, with a half-life of about 3×10−25 s. Their experimental discovery was a triumph for what is now known as the Standard Model of particle physics.
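A half-life of 3×10−25 s corresponds, via Γ = ħ/τ, to a decay width of order a GeV. A back-of-the-envelope check (illustrative only; the quoted half-life is a rounded figure):

```python
import math

HBAR_GEV_S = 6.582e-25               # reduced Planck constant in GeV*s

half_life = 3e-25                    # s, the half-life quoted above
mean_life = half_life / math.log(2)  # tau = t_1/2 / ln 2
width = HBAR_GEV_S / mean_life       # Gamma = hbar / tau

print(f"decay width ~ {width:.2f} GeV")
# ~1.5 GeV -- the right order of magnitude for the measured widths
# (about 2.1 GeV for the W and 2.5 GeV for the Z)
```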

The W and Z particles interact with the Higgs boson, i.e., they have mass.


The W bosons are best known for their role in nuclear decay. Consider, for example, the beta decay of cobalt-60. This reaction does not involve the whole cobalt-60 nucleus, but affects only one of its 33 neutrons. The neutron is converted into a proton while also emitting an electron (called a beta particle in this context) and an electron antineutrino: n0 → p+ + e− + νe

Again, the neutron is not an elementary particle but a composite of an up quark and two down quarks (udd). It is in fact one of the down quarks that interacts in beta decay, turning into an up quark to form a proton (uud). At the most fundamental level, then, the weak force changes the flavour of a single quark: d → u + W− which is immediately followed by decay of the W− itself: W− → e− + νe


The Z boson is its own antiparticle. Thus, all of its flavour quantum numbers and charges are zero. The exchange of a Z boson between particles, called a neutral current interaction, therefore leaves the interacting particles unaffected, except for a transfer of momentum. Z boson interactions involving neutrinos have distinctive signatures: They provide the only known mechanism for elastic scattering of neutrinos in matter; neutrinos are almost as likely to scatter elastically (via Z boson exchange) as inelastically (via W boson exchange). The first prediction of Z bosons was made by Brazilian physicist José Leite Lopes in 1958, by devising an equation which showed the analogy of the weak nuclear interactions with electromagnetism. Steven Weinberg, Sheldon Glashow and Abdus Salam later used these results to develop electroweak unification. Weak neutral currents via Z boson exchange were confirmed shortly thereafter, in 1973, in a neutrino experiment in the Gargamelle bubble chamber at CERN.



Composite fermions

Hadron - quark and gluon

  • https://en.wikipedia.org/wiki/Hadron - a composite particle made of quarks held together by the strong force, mediated by gluons, in a similar way as molecules are held together by the electromagnetic force.

Hadrons are categorized into two families: baryons, made of three quarks, and mesons, made of one quark and one antiquark. Protons and neutrons are examples of baryons; pions are an example of a meson. Hadrons containing more than three valence quarks (exotic hadrons) have been discovered in recent years. A tetraquark state (an exotic meson), named the Z(4430)−, was discovered in 2007 by the Belle Collaboration and confirmed as a resonance in 2014 by the LHCb collaboration. Two pentaquark states (exotic baryons), named Pc(4380)+ and Pc(4450)+, were discovered in 2015 by the LHCb collaboration. There are several more exotic hadron candidates, and other colour-singlet quark combinations may also exist.

Of the hadrons, protons are stable, and neutrons bound within atomic nuclei are stable. Other hadrons are unstable under ordinary conditions; free neutrons decay with a half-life of about 611 seconds. Experimentally, hadron physics is studied by colliding protons or nuclei of heavy elements such as lead, and detecting the debris in the produced particle showers.



  • https://en.wikipedia.org/wiki/Hadronization - the process of the formation of hadrons out of quarks and gluons. This occurs after high-energy collisions in a particle collider in which free quarks or gluons are created. Due to colour confinement, these cannot exist individually. In the Standard Model they combine with quarks and antiquarks spontaneously created from the vacuum to form hadrons. The details of the hadronization process are not yet fully understood in QCD (quantum chromodynamics), but are modeled and parameterized in a number of phenomenological studies, including the Lund string model and various long-range QCD approximation schemes.

The tight cone of particles created by the hadronization of a single quark is called a jet. In particle detectors, jets are observed rather than quarks, whose existence must be inferred. The models and approximation schemes and their predicted jet hadronization, or fragmentation, have been extensively compared with measurements in a number of high-energy particle physics experiments, e.g. TASSO, OPAL, and H1. Hadronization also occurred shortly after the Big Bang, when the quark–gluon plasma cooled to the temperature below which free quarks and gluons cannot exist (about 170 MeV). The quarks and gluons then combined into hadrons.

A top quark, however, has a mean lifetime of 5×10−25 seconds, which is shorter than the timescale on which the strong force of QCD acts, so a top quark decays before it can hadronize, allowing physicists to observe a "bare quark". Top quarks have therefore never been observed as components of any hadron, while all other quarks have been observed only as components of hadrons.
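The comparison above can be made concrete: the natural timescale of the strong interaction is roughly ħ/Λ_QCD. A quick estimate (illustrative; Λ_QCD ≈ 200 MeV as quoted in the QCD matter entry below):

```python
HBAR_GEV_S = 6.582e-25                    # reduced Planck constant, GeV*s
LAMBDA_QCD = 0.2                          # GeV, approximate QCD energy scale

top_lifetime = 5e-25                      # s, mean lifetime quoted above
qcd_timescale = HBAR_GEV_S / LAMBDA_QCD   # ~3.3e-24 s, hadronization timescale

# The top quark decays several times faster than hadronization can occur:
assert top_lifetime < qcd_timescale
print(f"hadronization timescale ~ {qcd_timescale:.1e} s, "
      f"about {qcd_timescale / top_lifetime:.0f}x the top lifetime")
```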


  • https://en.wikipedia.org/wiki/Quantum_hadrodynamics - an effective field theory pertaining to interactions between hadrons, that is, hadron-hadron interactions or the inter-hadron force. It is "a framework for describing the nuclear many-body problem as a relativistic system of baryons and mesons". Quantum hadrodynamics is closely related and partly derived from quantum chromodynamics, which is the theory of interactions between quarks and gluons that bind them together to form hadrons, via the strong force.

An important phenomenon in quantum hadrodynamics is the nuclear force, or residual strong force. It is the force operating between those hadrons which are nucleons – protons and neutrons – as it binds them together to form the atomic nucleus. The bosons which mediate the nuclear force are three types of mesons: pions, rho mesons and omega mesons. Since mesons are themselves hadrons, quantum hadrodynamics also deals with the interaction between the carriers of the nuclear force itself, alongside the nucleons bound by it. The hadrodynamic force keeps nuclei bound, against the electrodynamic force which operates to break them apart (due to the mutual repulsion between protons in the nucleus).

Quantum hadrodynamics, dealing with the nuclear force and its mediating mesons, can be compared to other quantum field theories which describe fundamental forces and their associated bosons: quantum chromodynamics, dealing with the strong interaction and gluons; quantum electrodynamics, dealing with electromagnetism and photons; quantum flavordynamics, dealing with the weak interaction and W and Z bosons.



  • https://en.wikipedia.org/wiki/Jet_(particle_physics) - a narrow cone of hadrons and other particles produced by the hadronization of a quark or gluon in a particle physics or heavy ion experiment. Particles carrying a color charge, such as quarks, cannot exist in free form because of QCD confinement which only allows for colorless states. When an object containing color charge fragments, each fragment carries away some of the color charge. In order to obey confinement, these fragments create other colored objects around them to form colorless objects. The ensemble of these objects is called a jet.

Jets are measured in particle detectors and studied in order to determine the properties of the original quarks. In relativistic heavy ion physics, jets are important because the originating hard scattering is a natural probe for the QCD matter created in the collision, and indicates its phase. When the QCD matter undergoes a phase crossover into quark–gluon plasma, the energy loss in the medium grows significantly, effectively quenching the outgoing jet. Examples of jet analysis techniques are: jet reconstruction (e.g., the kT algorithm, the cone algorithm), jet correlation, and flavor tagging (e.g., b-tagging). The Lund string model is an example of a jet fragmentation model.


  • https://en.wikipedia.org/wiki/QCD_matter - refers to any of a number of theorized phases of matter whose degrees of freedom include quarks and gluons. These theoretical phases would occur at extremely high temperatures and/or densities, billions of times higher than can be produced in equilibrium in laboratories. Under such extreme conditions, the familiar structure of matter, where the basic constituents are nuclei (consisting of nucleons which are bound states of quarks) and electrons, is disrupted. In quark matter it is more appropriate to treat the quarks themselves as the basic degrees of freedom.

At ordinary temperatures or densities the strong force just confines the quarks into composite particles (hadrons) of size around 10−15 m = 1 femtometer = 1 fm (corresponding to the QCD energy scale ΛQCD ≈ 200 MeV) and its effects are not noticeable at longer distances. However, when the temperature reaches the QCD energy scale (T of order 1012 kelvins) or the density rises to the point where the average inter-quark separation is less than 1 fm (quark chemical potential μ around 400 MeV), the hadrons are melted into their constituent quarks, and the strong interaction becomes the dominant feature of the physics. Such phases are called quark matter or QCD matter. The strength of the color force makes the properties of quark matter unlike gas or plasma, instead leading to a state of matter more reminiscent of a liquid. At high densities, quark matter is a Fermi liquid, but is predicted to exhibit color superconductivity at high densities and temperatures below 1012 K.
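The statement that the QCD energy scale corresponds to a temperature of order 10^12 K is a one-line conversion, T = E/k_B. A quick check (illustrative arithmetic only):

```python
K_B_MEV_PER_K = 8.617e-11   # Boltzmann constant in MeV per kelvin

lambda_qcd = 200.0          # MeV, the QCD energy scale quoted above
T = lambda_qcd / K_B_MEV_PER_K
print(f"T ~ {T:.1e} K")     # ~2.3e12 K, i.e. "of order 10^12 kelvins"
```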


  • https://en.wikipedia.org/wiki/Quark–gluon_plasma - or quark soup is a state of matter in quantum chromodynamics (QCD) which is hypothesized to exist at extremely high temperature, density, or both temperature and density. This state is thought to consist of asymptotically free quarks and gluons, which are several of the basic building blocks of matter. It is believed that up to a few milliseconds after the Big Bang, known as the Quark epoch, the Universe was in a quark–gluon plasma state. In June 2015, an international team of physicists produced quark-gluon plasma at the Large Hadron Collider by colliding protons with lead nuclei at high energy inside the supercollider’s Compact Muon Solenoid detector. They also discovered that this new state of matter behaves like a fluid. The strength of the color force means that unlike the gas-like plasma, quark–gluon plasma behaves as a near-ideal Fermi liquid, although research on flow characteristics is ongoing. In the quark matter phase diagram, QGP is placed in the high-temperature, high-density regime; whereas, ordinary matter is a cold and rarefied mixture of nuclei and vacuum, and the hypothetical quark stars would consist of relatively cold, but dense quark matter.


Baryon
  • https://en.wikipedia.org/wiki/Baryon - a composite subatomic particle made up of three quarks (as distinct from mesons, which are composed of one quark and one antiquark). Baryons and mesons belong to the hadron family of particles, which are the quark-based particles. The name "baryon" comes from the Greek word for "heavy" (βαρύς, barys), because, at the time of their naming, most known elementary particles had lower masses than the baryons.





Proton
  • https://en.wikipedia.org/wiki/Proton - spin-½ fermions composed of three valence quarks, making them baryons (a sub-type of hadrons). The two up quarks and one down quark of a proton are held together by the strong force, mediated by gluons. A modern perspective has a proton composed of the valence quarks (up, up, down), the gluons, and transitory pairs of sea quarks. Protons have an approximately exponentially decaying positive charge distribution with a mean square radius of about 0.8 fm.

Protons and neutrons are both nucleons, which may be bound together by the nuclear force to form atomic nuclei. The nucleus of the most common isotope of the hydrogen atom (with the chemical symbol "H") is a lone proton. The nuclei of the heavy hydrogen isotopes deuterium and tritium contain one proton bound to one and two neutrons, respectively. All other types of atomic nuclei are composed of two or more protons and various numbers of neutrons.

Neutron
Meson


  • https://en.wikipedia.org/wiki/B_meson - mesons composed of a bottom antiquark and either an up (B+), down (B0), strange (B0s) or charm quark (B+c). The combination of a bottom antiquark and a top quark is not thought to be possible because of the top quark's short lifetime. The combination of a bottom antiquark and a bottom quark is not a B meson, but rather bottomonium.

Each B meson has an antiparticle that is composed of a bottom quark and an up (B−), down (B0), strange (B0s) or charm antiquark (B−c) respectively.


  • https://en.wikipedia.org/wiki/D_meson - the lightest particle containing charm quarks. They are often studied to gain knowledge on the weak interaction. The strange D mesons (Ds) were called the "F mesons" prior to 1986.




  • https://en.wikipedia.org/wiki/Kaon - also called a K meson and denoted K, is any of a group of four mesons distinguished by a quantum number called strangeness. In the quark model they are understood to be bound states of a strange quark (or antiquark) and an up or down antiquark (or quark).

Kaons have proved to be a copious source of information on the nature of fundamental interactions since their discovery in cosmic rays in 1947. They were essential in establishing the foundations of the Standard Model of particle physics, such as the quark model of hadrons and the theory of quark mixing (the latter was acknowledged by a Nobel Prize in Physics in 2008). Kaons have played a distinguished role in our understanding of fundamental conservation laws: CP violation, a phenomenon generating the observed matter–antimatter asymmetry of the universe, was discovered in the kaon system in 1964 (which was acknowledged by a Nobel Prize in 1980). Moreover, direct CP violation was discovered in the kaon decays in the early 2000s by the NA48 and KTeV experiments at CERN and Fermilab.




  • https://en.wikipedia.org/wiki/J/psi_meson - a flavor-neutral meson consisting of a charm quark and a charm antiquark. Mesons formed by a bound state of a charm quark and a charm antiquark are generally known as "charmonium". The J/ψ is the most common form of charmonium, due to its low rest mass. The J/ψ has a rest mass of 3.0969 GeV/c2, just above that of the ηc (2.9836 GeV/c2), and a mean lifetime of 7.2×10−21 s.
  • https://en.wikipedia.org/wiki/Upsilon_meson - a quarkonium state (i.e. flavourless meson) formed from a bottom quark and its antiparticle. It was discovered by the E288 collaboration, headed by Leon Lederman, at Fermilab in 1977, and was the first particle containing a bottom quark to be discovered because it is the lightest that can be produced without additional massive particles. It has a lifetime of 1.21×10−20 s and a mass about 9.46 GeV/c2 in the ground state.



Quasiparticle



Mass



  • https://en.wikipedia.org/wiki/Invariant_mass - rest mass, intrinsic mass, proper mass, or in the case of bound systems simply mass, is a characteristic of the total energy and momentum of an object or a system of objects that is the same in all frames of reference related by Lorentz transformations. If a center of momentum frame exists for the system, then the invariant mass of a system is simply the total energy divided by the speed of light squared. In other reference frames, the energy of the system increases, but system momentum is subtracted from this, so that the invariant mass remains unchanged.

Systems whose four-momentum is a null vector (for example a single photon or many photons moving in exactly the same direction) have zero invariant mass, and are referred to as massless. A physical object or particle moving faster than the speed of light would have space-like four-momenta (such as the hypothesized tachyon), and these do not appear to exist. Any time-like four-momentum possesses a reference frame where the momentum (3-dimensional) is zero, which is a center of momentum frame. In this case, invariant mass is positive and is referred to as the rest mass.
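The defining formula, m² = (ΣE)² − |Σp|² in natural units (c = 1), is simple to compute. A sketch illustrating the photon cases described above (hypothetical helper, not from the article):

```python
import math

def invariant_mass(particles):
    """Invariant mass of a system given four-momenta (E, px, py, pz),
    in natural units (c = 1): m^2 = E_total^2 - |p_total|^2."""
    E  = sum(p[0] for p in particles)
    px = sum(p[1] for p in particles)
    py = sum(p[2] for p in particles)
    pz = sum(p[3] for p in particles)
    return math.sqrt(max(E**2 - (px**2 + py**2 + pz**2), 0.0))

# A single photon has a null four-momentum: massless.
assert invariant_mass([(1.0, 0.0, 0.0, 1.0)]) == 0.0

# Two photons moving in exactly the same direction: still massless,
# as the text says.
assert invariant_mass([(1, 0, 0, 1), (1, 0, 0, 1)]) == 0.0

# Two 1 GeV photons back to back: total momentum is zero (a centre of
# momentum frame exists), so the invariant mass is the total energy, 2 GeV.
assert math.isclose(invariant_mass([(1, 0, 0, 1), (1, 0, 0, -1)]), 2.0)
```

The last case shows why a system of massless particles can nonetheless have nonzero invariant mass.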









  • Ep = mgh
    • gravitational potential energy (J) = mass (kg) × gravitational field strength (g, N/kg) × height (m)
  • Ek = 1/2 mv²
    • kinetic energy (J) = 1/2 × mass (kg) × speed² (m/s)²
  • Eh = cm∆T
    • heat energy (J) = specific heat capacity (J/(kg·°C)) × mass (kg) × change in temperature (°C)
  • Eh = mLf
    • heat energy (J) = mass (kg) × specific latent heat (J/kg), a property of the material
  • a = (v − u) / t
    • acceleration = change in velocity / time
  • v = d/t
    • velocity = distance / time
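The formulas above with some worked numbers (the specific heat of water, ≈4180 J/(kg·°C), and the latent heat of fusion of ice, ≈3.34×10⁵ J/kg, are standard textbook values assumed here):

```python
# Worked numbers for the formulas above (SI units; g = 9.8 N/kg).
g = 9.8

Ep = 2.0 * g * 5.0        # Ep = m*g*h: 2 kg raised 5 m -> 98 J
Ek = 0.5 * 2.0 * 3.0**2   # Ek = 1/2*m*v^2: 2 kg at 3 m/s -> 9 J
Eh = 4180 * 2.0 * 10.0    # Eh = c*m*dT: heating 2 kg of water by 10 C
Ef = 2.0 * 3.34e5         # Eh = m*Lf: melting 2 kg of ice
a = (13.0 - 3.0) / 5.0    # a = (v - u)/t: 3 to 13 m/s in 5 s -> 2 m/s^2
v = 100.0 / 20.0          # v = d/t: 100 m in 20 s -> 5 m/s
```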

Atomic






  • https://en.wikipedia.org/wiki/Nuclide - an atomic species characterized by the specific constitution of its nucleus, i.e., by its number of protons Z, its number of neutrons N, and its nuclear energy state.


  • https://en.wikipedia.org/wiki/Nuclear_binding_energy - the energy that would be required to disassemble the nucleus of an atom into its component parts. These component parts are neutrons and protons, which are collectively called nucleons. The binding energy of nuclei is due to the attractive forces that hold these nucleons together, and it is usually a positive number, since most nuclei would require the expenditure of energy to separate them into individual protons and neutrons. The mass of an atomic nucleus is usually less than the sum of the individual masses of the constituent protons and neutrons (according to Einstein's equation E=mc2) and this 'missing mass' is known as the mass defect, and represents the energy that was released when the nucleus was formed.
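The mass defect described above can be turned into a binding energy directly via E = mc². A sketch for helium-4 (the particle masses are standard values, assumed here rather than taken from the article):

```python
U_TO_MEV = 931.494       # energy equivalent of 1 atomic mass unit, MeV

m_p = 1.007276           # proton mass, u
m_n = 1.008665           # neutron mass, u
m_he4 = 4.001506         # helium-4 nucleus mass, u

# The nucleus weighs less than its parts; the difference is the mass defect:
mass_defect = 2 * m_p + 2 * m_n - m_he4       # ~0.0304 u
binding_energy = mass_defect * U_TO_MEV       # E = m c^2
print(f"He-4 binding energy ~ {binding_energy:.1f} MeV")   # ~28.3 MeV
```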


















  • https://en.wikipedia.org/wiki/Mole_(unit) - the unit of measurement for amount of substance in the International System of Units (SI). The unit is defined as the amount or sample of a chemical substance that contains as many constitutive particles, e.g., atoms, molecules, ions, electrons, or photons, as there are atoms in 12 grams of carbon-12 (12C), the isotope of carbon with standard atomic weight 12 by definition. This number is expressed by the Avogadro constant, which has a value of 6.022140857×1023 mol−1. The mole is an SI base unit, with the unit symbol mol.


  • https://en.wikipedia.org/wiki/Avogadro_constant - named after the scientist Amedeo Avogadro, is the number of constituent particles, usually atoms or molecules, that are contained in the amount of substance given by one mole. Thus, it is the proportionality factor that relates the molar mass of a substance to the mass of a sample. The Avogadro constant, often designated with the symbol NA or L, has the value 6.022140857(74)×1023 mol−1 in the International System of Units (SI). (The parentheses there represent the degree of uncertainty.)
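The proportionality the two entries above describe is just a multiplication. A trivial sketch using the value quoted above:

```python
N_A = 6.022140857e23        # Avogadro constant, 1/mol (value quoted above)

# 12 g of carbon-12 is one mole by definition, so the molar mass is 12 g/mol:
molar_mass_c12 = 12.0                    # g/mol
atoms = (24.0 / molar_mass_c12) * N_A    # atoms in a 24 g sample of carbon-12
print(f"{atoms:.3e} atoms")              # 2 mol -> ~1.204e+24 atoms
```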

Matter




  • https://en.wikipedia.org/wiki/Equation_of_state - a thermodynamic equation relating state variables which describe the state of matter under a given set of physical conditions, such as pressure, volume, temperature (PVT), or internal energy. Equations of state are useful in describing the properties of fluids, mixtures of fluids, solids, and the interior of stars.



  • https://en.wikipedia.org/wiki/Ideal_gas_law - also called the general gas equation, is the equation of state of a hypothetical ideal gas. It is a good approximation of the behavior of many gases under many conditions, although it has several limitations. It was first stated by Émile Clapeyron in 1834 as a combination of the empirical Boyle's law, Charles's law, Avogadro's law, and Gay-Lussac's law. It can also be derived from the microscopic kinetic theory, as was achieved (apparently independently) by August Krönig in 1856 and Rudolf Clausius in 1857.



  • https://en.wikipedia.org/wiki/Perfect_fluid - a fluid that can be completely characterized by its rest frame mass density ρ_m and isotropic pressure p. Real fluids are "sticky" and contain (and conduct) heat. Perfect fluids are idealized models in which these possibilities are neglected. Specifically, perfect fluids have no shear stresses, viscosity, or heat conduction.










See also Geology


  • https://en.wikipedia.org/wiki/Interface_(matter) - the boundary between two spatial regions occupied by different matter, or by matter in different physical states. The interface between matter and air, or matter and vacuum, is called a surface, and studied in surface science. In thermal equilibrium, the regions in contact are called phases, and the interface is called a phase boundary. An example of an interface out of equilibrium is the grain boundary in polycrystalline matter.

The importance of the interface depends on the type of system: the bigger the quotient area/volume, the greater the effect the interface will have. Consequently, interfaces are very important in systems with large interface area-to-volume ratios, such as colloids. Interfaces can be flat or curved. For example, oil droplets in a salad dressing are spherical but the interface between water and air in a glass of water is mostly flat.

Surface tension is the physical property which rules interface processes involving liquids. For a liquid film on flat surfaces, the liquid-vapor interface keeps flat to minimize interfacial area and system free energy. For a liquid film on rough surfaces, the surface tension tends to keep the meniscus flat, while the disjoining pressure makes the film conformal to the substrate. The equilibrium meniscus shape is a result of the competition between the capillary pressure and disjoining pressure. Interfaces may cause various optical phenomena, such as refraction. Optical lenses serve as an example of a practical application of the interface between glass and air. One topical interface system is the gas-liquid interface between aerosols and other atmospheric molecules.


  • https://en.wikipedia.org/wiki/Interface_and_colloid_science - an interdisciplinary intersection of branches of chemistry, physics, nanoscience and other fields dealing with colloids, heterogeneous systems consisting of a mechanical mixture of particles between 1 nm and 1000 nm dispersed in a continuous medium. A colloidal solution is a heterogeneous mixture in which the particle size of the substance is intermediate between a true solution and a suspension, i.e. between 1–1000 nm. Smoke from a fire is an example of a colloidal system in which tiny particles of solid float in air. Just like true solutions, colloidal particles are small and cannot be seen by the naked eye. They easily pass through filter paper. But colloidal particles are big enough to be blocked by parchment paper or animal membrane.


  • https://en.wikipedia.org/wiki/Colloid - a mixture in which one substance of microscopically dispersed insoluble particles is suspended throughout another substance. Sometimes the dispersed substance alone is called the colloid; the term colloidal suspension refers unambiguously to the overall mixture (although a narrower sense of the word suspension is distinguished from colloids by larger particle size). Unlike a solution, whose solute and solvent constitute only one phase, a colloid has a dispersed phase (the suspended particles) and a continuous phase (the medium of suspension) that arise by phase separation. To qualify as a colloid, the mixture must be one that does not settle or would take a very long time to settle appreciably.

The dispersed-phase particles have a diameter between approximately 1 and 1000 nanometers. Such particles are normally easily visible in an optical microscope, although at the smaller size range (r < 250 nm), an ultramicroscope or an electron microscope may be required. Homogeneous mixtures with a dispersed phase in this size range may be called colloidal aerosols, colloidal emulsions, colloidal foams, colloidal dispersions, or hydrosols. The dispersed-phase particles or droplets are affected largely by the surface chemistry present in the colloid.

Some colloids are translucent because of the Tyndall effect, which is the scattering of light by particles in the colloid. Other colloids may be opaque or have a slight color. The cytoplasm of living cells is an example of a colloid, containing many types of biomolecular condensate. Colloidal suspensions are the subject of interface and colloid science. This field of study was introduced in 1845 by Italian chemist Francesco Selmi and further investigated since 1861 by Scottish scientist Thomas Graham.




  • https://en.wikipedia.org/wiki/Ion - an atom or molecule in which the total number of electrons is not equal to the total number of protons, giving the atom a net positive or negative electrical charge


  • https://en.wikipedia.org/wiki/Condensed_matter_physics - a branch of physics that deals with the physical properties of condensed phases of matter. Condensed matter physicists seek to understand the behavior of these phases by using physical laws. In particular, they include the laws of quantum mechanics, electromagnetism and statistical mechanics.

The most familiar condensed phases are solids and liquids while more exotic condensed phases include the superconducting phase exhibited by certain materials at low temperature, the ferromagnetic and antiferromagnetic phases of spins on atomic lattices, and the Bose–Einstein condensate found in cold atomic systems. The study of condensed matter physics involves measuring various material properties via experimental probes along with using techniques of theoretical physics to develop mathematical models that help in understanding physical behavior.

A variety of topics in physics such as crystallography, metallurgy, elasticity, magnetism, etc., were treated as distinct areas until the 1940s, when they were grouped together as solid state physics. Around the 1960s, the study of physical properties of liquids was added to this list, forming the basis for the new, related specialty of condensed matter physics.






  • https://en.wikipedia.org/wiki/Lattice_gas_automaton - or lattice gas cellular automata, a type of cellular automaton used to simulate fluid flows, pioneered by Hardy–Pomeau–de Pazzis and Frisch–Hasslacher–Pomeau. They were the precursor to the lattice Boltzmann methods. From lattice gas automata, it is possible to derive the macroscopic Navier–Stokes equations. Interest in lattice gas automaton methods levelled off in the early 1990s as interest in the lattice Boltzmann methods started to rise.
  • https://en.wikipedia.org/wiki/HPP_model - a fundamental lattice gas automaton for the simulation of gases and liquids, and a precursor to the lattice Boltzmann methods.


  • https://en.wikipedia.org/wiki/Lattice_Boltzmann_methods - a class of computational fluid dynamics (CFD) methods for fluid simulation that originated from the lattice gas automata (LGA) method (the Hardy–Pomeau–de Pazzis and Frisch–Hasslacher–Pomeau models). Instead of solving the Navier–Stokes equations directly, a fluid density on a lattice is simulated with streaming and collision (relaxation) processes. The method is versatile: the model fluid can straightforwardly be made to mimic common fluid behaviour like vapour/liquid coexistence, so fluid systems such as liquid droplets can be simulated. Fluids in complex environments such as porous media can also be simulated straightforwardly, whereas other CFD methods can be hard to work with when boundaries are complex.
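The streaming-and-collision idea shared by lattice gas automata and lattice Boltzmann methods can be illustrated with a minimal HPP-style sketch in Python. Assumptions: a periodic square lattice, boolean occupation numbers, and only the head-on collision rule; a real solver would add boundaries and observables.

```python
import numpy as np

# Minimal HPP lattice gas sketch (illustrative, not a validated solver).
# Four boolean channels per site: 0 = east, 1 = west, 2 = north, 3 = south.
rng = np.random.default_rng(0)
N = 32
cells = rng.random((4, N, N)) < 0.2  # random initial occupation

def step(c):
    # Collision: a head-on pair (E+W with no N/S, or N+S with no E/W)
    # is rotated 90 degrees; every other configuration passes through.
    ew = c[0] & c[1] & ~c[2] & ~c[3]
    ns = c[2] & c[3] & ~c[0] & ~c[1]
    flip = ew | ns
    c = c.copy()
    for k in range(4):
        c[k] ^= flip  # swap the E/W pair for the N/S pair (and vice versa)
    # Streaming: each channel moves one site in its direction (periodic).
    c[0] = np.roll(c[0], 1, axis=1)   # east
    c[1] = np.roll(c[1], -1, axis=1)  # west
    c[2] = np.roll(c[2], -1, axis=0)  # north
    c[3] = np.roll(c[3], 1, axis=0)   # south
    return c

n0 = int(cells.sum())
for _ in range(100):
    cells = step(cells)
assert int(cells.sum()) == n0  # particle number is conserved
```

Because both the collision and streaming steps are local and conservative, mass (and, per direction pair, momentum) is conserved exactly, which is the property the macroscopic Navier–Stokes limit is built on.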

Superconductivity

  • https://en.wikipedia.org/wiki/Superconductivity - a phenomenon of exactly zero electrical resistance and expulsion of magnetic flux fields occurring in certain materials, called superconductors, when cooled below a characteristic critical temperature. It was discovered by Dutch physicist Heike Kamerlingh Onnes on April 8, 1911, in Leiden. Like ferromagnetism and atomic spectral lines, superconductivity is a quantum mechanical phenomenon. It is characterized by the Meissner effect, the complete ejection of magnetic field lines from the interior of the superconductor during its transitions into the superconducting state. The occurrence of the Meissner effect indicates that superconductivity cannot be understood simply as the idealization of perfect conductivity in classical physics.

The electrical resistance of a metallic conductor decreases gradually as temperature is lowered. In ordinary conductors, such as copper or silver, this decrease is limited by impurities and other defects. Even near absolute zero, a real sample of a normal conductor shows some resistance. In a superconductor, the resistance drops abruptly to zero when the material is cooled below its critical temperature. An electric current through a loop of superconducting wire can persist indefinitely with no power source.

In 1986, it was discovered that some cuprate-perovskite ceramic materials have a critical temperature above 90 K (−183 °C). Such a high transition temperature is theoretically impossible for a conventional superconductor, leading the materials to be termed high-temperature superconductors. The cheaply-available coolant liquid nitrogen boils at 77 K, and thus superconduction at higher temperatures than this facilitates many experiments and applications that are less practical at lower temperatures.


  • https://en.wikipedia.org/wiki/Covalent_superconductor - are superconducting materials where the atoms are linked by covalent bonds. The first such material was boron-doped synthetic diamond grown by the high-pressure high-temperature (HPHT) method. The discovery had no practical importance, but surprised most scientists as superconductivity had not been observed in covalent semiconductors, including diamond and silicon.

Chemistry



  • https://en.wikipedia.org/wiki/Reactivity_series - or activity series, is an empirical, calculated, and structurally analytical progression of a series of metals, arranged by their "reactivity" from highest to lowest. It is used to summarize information about the reactions of metals with acids and water, double displacement reactions and the extraction of metals from their ores.



  • https://en.wikipedia.org/wiki/Relativistic_quantum_chemistry - combines relativistic mechanics with quantum chemistry to explain elemental properties and structure, especially for the heavier elements of the periodic table. A prominent example of such an explanation is the color of gold: due to relativistic effects, it is not silvery like most other metals. The term relativistic effects was developed in light of the history of quantum mechanics. Initially quantum mechanics was developed without considering the theory of relativity. Relativistic effects are those discrepancies between values calculated by models that consider and that do not consider relativity. Relativistic effects are important for the heavier elements with high atomic numbers. In the most common layout of the periodic table, these elements are shown in the lower area. Examples are the lanthanides and actinides. Relativistic effects in chemistry can be considered to be perturbations, or small corrections, to the non-relativistic theory of chemistry, which is developed from the solutions of the Schrödinger equation. These corrections affect the electrons differently depending on the electron speed relative to the speed of light. Relativistic effects are more prominent in heavy elements because only in these elements do electrons attain sufficient speeds for the elements to have properties that differ from what non-relativistic chemistry predicts.






  • The Molecule of the Month - one of the longest running chemistry webpages on the internet. Each month since January 1996 a new molecule has been added to the list on this page. The links will take you to a page at one of the Web sites at a University Chemistry Department or commercial site in the UK, the US, or anywhere in the world, where useful (and hopefully entertaining!), information can be found about a particularly interesting molecule. [36]



Bonds




  • https://en.wikipedia.org/wiki/Steric_effects - arise from the fact that each atom within a molecule occupies a certain amount of space. If atoms are brought too close together, there is an associated cost in energy due to overlapping electron clouds (Pauli or Born repulsion), and this may affect the molecule's preferred shape (conformation) and reactivity.



  • https://en.wikipedia.org/wiki/Resonance_(chemistry) - or mesomerism, a quantum superposition of wave functions built from several contributing structures (also called resonance structures or canonical forms) as a way of describing delocalized electrons within certain molecules or polyatomic ions. Each contributing structure can be represented by a Lewis structure, with only an integer number of covalent bonds between each pair of atoms within the structure. Several Lewis structures are used collectively to describe the actual molecular structure. The actual structure is an approximate intermediate between the canonical forms, but its overall energy is lower than each of the contributors. This intermediate form between different contributing structures is called a resonance hybrid. Contributing structures differ only in the position of electrons, not in the position of nuclei. Resonance is a key component of valence bond theory.


Compounds

  • https://en.wikipedia.org/wiki/Chemical_compound - a chemical substance composed of many identical molecules (or molecular entities) containing atoms from more than one chemical element held together by chemical bonds. A molecule consisting of atoms of only one element is therefore not a compound. A compound can be transformed into a different substance by a chemical reaction, which may involve interactions with other substances. In this process, bonds between atoms may be broken and/or new bonds formed. There are four major types of compounds, distinguished by how the constituent atoms are bonded together. Molecular compounds are held together by covalent bonds; ionic compounds are held together by ionic bonds; intermetallic compounds are held together by metallic bonds; coordination complexes are held together by coordinate covalent bonds. Non-stoichiometric compounds form a disputed marginal case. A chemical formula specifies the number of atoms of each element in a compound molecule, using the standard chemical symbols with numerical subscripts. Many chemical compounds have a unique CAS number identifier assigned by the Chemical Abstracts Service. Globally, more than 350,000 chemical compounds (including mixtures of chemicals) have been registered for production and use.




  • https://en.wikipedia.org/wiki/Amine - a molecule which has a nitrogen atom that can behave as a base. An amine can be thought of as being derived from ammonia, with the three hydrogens of ammonia replaced by other groups or atoms. If only one hydrogen is replaced, the amine is called primary; if two are replaced, secondary; if all three, tertiary.


  • https://en.wikipedia.org/wiki/Ketone - an organic compound that has a carbon atom with a double bond to an oxygen atom (this is called carbonyl functional group). This carbon atom must also have single bonds to two other carbon atoms.

Structure


  • https://en.wikipedia.org/wiki/Structure–activity_relationship - the relationship between the chemical structure of a molecule and its biological activity. This idea was first presented by Crum-Brown and Fraser in 1865. The analysis of SAR enables the determination of the chemical group responsible for evoking a target biological effect in the organism. This allows modification of the effect or the potency of a bioactive compound (typically a drug) by changing its chemical structure. Medicinal chemists use the techniques of chemical synthesis to insert new chemical groups into the biomedical compound and test the modifications for their biological effects.



  • Materials Project - Harnessing the power of supercomputing and state of the art electronic structure methods, the Materials Project provides open web-based access to computed information on known and predicted materials as well as powerful analysis tools to inspire and design novel materials.







  • https://en.wikipedia.org/wiki/Functional_group - specific groups of atoms or bonds within molecules that are responsible for the characteristic chemical reactions of those molecules. The same functional group will undergo the same or similar chemical reaction(s) regardless of the size of the molecule it is a part of. However, its relative reactivity can be modified by nearby functional groups.
  • https://en.wikipedia.org/wiki/Aromaticity - formally used to describe an unusually stable nature of some flat rings of atoms. These structures contain a number of double bonds that interact with each other according to certain rules. As a result of their being so stable, such rings tend to form easily, and once formed, tend to be difficult to break in chemical reactions.


  • https://en.wikipedia.org/wiki/Isomer - molecules with the same chemical formula but different chemical structures. That is, isomers contain the same number of atoms of each element, but have different arrangements of their atoms in space. Isomers do not necessarily share similar properties unless they also have the same functional groups. There are many different classes of isomers, such as positional isomers, cis-trans isomers and enantiomers. There are two main forms of isomerism: structural isomerism and stereoisomerism (spatial isomerism).
  • https://en.wikipedia.org/wiki/Conformational_isomerism - a form of stereoisomerism in which the isomers can be interconverted exclusively by rotations about formally single bonds (refer to figure on single bond rotation). Such isomers are generally referred to as conformational isomers or conformers and, specifically, as rotamers. Rotations about single bonds are restricted by a rotational energy barrier which must be overcome to interconvert one conformer to another. Conformational isomerism arises when the rotation about a single bond is relatively unhindered. That is, the energy barrier must be small enough for the interconversion to occur.



Spectroscopy

Organic chemistry

  • https://en.wikipedia.org/wiki/Organic_chemistry - subdiscipline involving the scientific study of the structure, properties, and reactions of organic compounds and organic materials, i.e., matter in its various forms that contain carbon atoms.


Inorganic chemistry


  • The colors of chemistry - This notebook documents my exploration of color theory and its applications to photochemistry. It also shows off the functionality of several Julia packages: Color.jl for color theory and colorimetry, SIUnits.jl for unitful computations, and Gadfly.jl for graph plotting.





  • https://en.wikipedia.org/wiki/Lipid - a group of naturally occurring molecules that include fats, waxes, sterols, fat-soluble vitamins (such as vitamins A, D, E, and K), monoglycerides, diglycerides, triglycerides, phospholipids, and others. The main biological functions of lipids include storing energy, signaling, and acting as structural components of cell membranes. Lipids have applications in the cosmetic and food industries as well as in nanotechnology.




pH

  • https://en.wikipedia.org/wiki/pH - the negative log of the activity of the hydrogen ion in an aqueous solution. Solutions with a pH less than 7 are acidic, and solutions with a pH greater than 7 are basic or alkaline. Pure water has a pH of 7.
  • https://en.wikipedia.org/wiki/Base_(chemistry) - substances that, in aqueous solution, are slippery to the touch, taste bitter, change the color of indicators (e.g., turn red litmus paper blue), react with acids to form salts, and promote certain chemical reactions (base catalysis).
  • https://en.wikipedia.org/wiki/Acid - a chemical substance whose aqueous solutions are characterized by a sour taste, the ability to turn blue litmus red, and the ability to react with bases and certain metals (like calcium) to form salts. Aqueous solutions of acids have a pH of less than 7. A lower pH means a higher acidity, and thus a higher concentration of positive hydrogen ions in the solution. Chemicals or substances having the property of an acid are said to be acidic.
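The pH definition above is a one-line formula; a minimal sketch in Python, treating activity as molar concentration (a common approximation for dilute solutions):

```python
import math

def pH(h_activity):
    """pH = -log10(a_H+); activity approximated by molar concentration."""
    return -math.log10(h_activity)

assert abs(pH(1e-7) - 7.0) < 1e-9  # pure water at 25 degrees C
assert pH(1e-3) < 7 < pH(1e-9)     # acidic solution vs basic solution
```

Note the logarithmic scale: each unit of pH corresponds to a factor of ten in hydrogen ion activity.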

Reaction


  • https://en.wikipedia.org/wiki/Catalysis - the increase in the rate of a chemical reaction due to the participation of an additional substance called a catalyst. With a catalyst, reactions occur faster and with less energy. Because catalysts are not consumed, they are recycled. Often only tiny amounts are required.






  • https://en.wikipedia.org/wiki/Redox - (a portmanteau of reduction and oxidation) reactions include all chemical reactions in which atoms have their oxidation state changed; in general, redox reactions involve the transfer of electrons between species. The term "redox" comes from the two concepts involved with electron transfer: reduction and oxidation. In simple terms: oxidation is the loss of electrons or an increase in oxidation state by a molecule, atom, or ion; reduction is the gain of electrons or a decrease in oxidation state by a molecule, atom, or ion.




  • CFAST - Consolidated Fire and Smoke Transport CFAST is a two-zone fire model capable of predicting the environment in a multi-compartment structure subjected to a fire. It calculates the time-evolving distribution of smoke and gaseous combustion products as well as the temperature throughout a building during a user-prescribed fire.
  • FDS-SMV - Fire Dynamics Simulator (FDS) and Smokeview (SMV). Fire Dynamics Simulator (FDS) is a large-eddy simulation (LES) code for low-speed flows, with an emphasis on smoke and heat transport from fires. Smokeview (SMV) is a visualization program used to display the output of FDS and CFAST simulations.

Synthesis



Substances

  • https://en.wikipedia.org/wiki/Mixture - a material system made up of two or more different substances which are mixed but not chemically combined. A mixture is a physical combination of two or more substances in which each retains its identity; mixtures take the form of solutions, suspensions, and colloids.

Mixtures are the product of mechanical blending or mixing of chemical substances such as elements and compounds, without chemical bonding or other chemical change, so that each ingredient substance retains its own chemical properties and makeup. Although there are no chemical changes to its constituents, the physical properties of a mixture, such as its melting point, may differ from those of its components.






  • https://en.wikipedia.org/wiki/Colloid - a substance in which microscopically dispersed insoluble particles are suspended throughout another substance. Sometimes the dispersed substance alone is called the colloid; the term colloidal suspension refers unambiguously to the overall mixture (although a narrower sense of the word suspension is contradistinguished from colloids by larger particle size). Unlike a solution, whose solute and solvent constitute only one phase, a colloid has a dispersed phase (the suspended particles) and a continuous phase (the medium of suspension). To qualify as a colloid, the mixture must be one that does not settle or would take a very long time to settle appreciably.


  • https://en.wikipedia.org/wiki/Emulsion - a mixture of two or more liquids that are normally immiscible (unmixable or unblendable). Emulsions are part of a more general class of two-phase systems of matter called colloids. Although the terms colloid and emulsion are sometimes used interchangeably, emulsion should be used when both the dispersed and the continuous phase are liquids. In an emulsion, one liquid (the dispersed phase) is dispersed in the other (the continuous phase). Examples of emulsions include vinaigrettes, milk, mayonnaise, and some cutting fluids for metal working.


  • https://en.wikipedia.org/wiki/Sol_(colloid) - a colloidal suspension of very small solid particles[1] in a continuous liquid medium. Sols are quite stable and show the Tyndall effect. Examples include blood, pigmented ink, cell fluids and paint.




  • https://en.wikipedia.org/wiki/Clathrate_hydrate - also gas clathrates, gas hydrates, clathrates, hydrates, etc., are crystalline water-based solids physically resembling ice, in which small non-polar molecules (typically gases) or polar molecules with large hydrophobic moieties are trapped inside "cages" of hydrogen-bonded, frozen water molecules. In other words, clathrate hydrates are clathrate compounds in which the host molecule is water and the guest molecule is typically a gas or liquid. Without the support of the trapped molecules, the lattice structure of hydrate clathrates would collapse into conventional ice crystal structure or liquid water. Most low-molecular-weight gases, including O2, H2, N2, CO2, CH4, H2S, Ar, Kr, and Xe, as well as some higher hydrocarbons and freons, will form hydrates at suitable temperatures and pressures. Clathrate hydrates are not officially chemical compounds, as the sequestered molecules are never bonded to the lattice. The formation and decomposition of clathrate hydrates are first-order phase transitions, not chemical reactions, and their detailed formation and decomposition mechanisms on a molecular level are still not well understood. Clathrate hydrates were first documented in 1810 by Sir Humphry Davy, who found that water was a primary component of what was earlier thought to be solidified chlorine.

Clathrates have been found to occur naturally in large quantities. Around 6.4 trillion (6.4×10¹²) tonnes of methane is trapped in deposits of methane clathrate on the deep ocean floor. Such deposits can be found on the Norwegian continental shelf in the northern headwall flank of the Storegga Slide. Clathrates can also exist as permafrost, as at the Mallik gas hydrate site in the Mackenzie Delta of the northwestern Canadian Arctic. These natural gas hydrates are seen as a potentially vast energy resource, but an economical extraction method has so far proven elusive. Hydrocarbon clathrates cause problems for the petroleum industry, because they can form inside gas pipelines, often resulting in obstructions. Deep-sea deposition of carbon dioxide clathrate has been proposed as a method to remove this greenhouse gas from the atmosphere and control climate change. Clathrates are suspected to occur in large quantities on some outer planets, moons and trans-Neptunian objects, binding gas at fairly high temperatures.

Phytochemistry

  • https://en.wikipedia.org/wiki/Phytochemistry - in the strict sense of the word, the study of phytochemicals, chemicals derived from plants. In a narrower sense, the term is often used to describe the large number of secondary metabolic compounds found in plants. Many of these are known to provide protection against insect attacks and plant diseases; they also exhibit a number of protective functions for human consumers.

Phytochemistry can be considered a sub-field of botany or chemistry. Work can be carried out in botanical gardens or in the wild with the aid of ethnobotany. The discipline finds application in pharmacognosy, the discovery of new drugs, and as an aid in plant physiology studies.


  • https://en.wikipedia.org/wiki/Phytochemical - chemical compounds that occur naturally in plants (phyto means "plant" in Greek). Some are responsible for color and other organoleptic properties, such as the deep purple of blueberries and the smell of garlic. Phytochemicals may have biological significance, for example carotenoids or flavonoids, but are not established as essential nutrients. There may be as many as 4,000 different phytochemicals.


  • https://en.wikipedia.org/wiki/Terpene - a large and diverse class of organic compounds, produced by a variety of plants, particularly conifers, though also by some insects such as termites or swallowtail butterflies, which emit terpenes from their osmeteria. They are often strong-smelling. They may protect the plants that produce them by deterring herbivores and by attracting predators and parasites of herbivores. Many terpenes are aromatic hydrocarbons and thus may have had a protective function. The difference between terpenes and terpenoids is that terpenes are hydrocarbons, whereas terpenoids contain additional functional groups.




Molecular

See also Chemistry






Exotic



  • https://en.wikipedia.org/wiki/Protonium - also known as antiprotonic hydrogen, is a type of exotic atom in which a proton (symbol: p) and an antiproton (symbol: p̄) orbit each other. Since protonium is a bound system of a particle and its corresponding antiparticle, it is one of many types of onium. Like all onia, protonium is a boson with all quantum numbers (baryon number, flavour quantum numbers, etc.) and electrical charge equal to 0.
  • https://en.wikipedia.org/wiki/Pionium - an exotic atom consisting of one π+ and one π− meson. It can be created, for instance, by interaction of a proton beam accelerated by a particle accelerator and a target nucleus. Pionium has a short lifetime, predicted by chiral perturbation theory to be 2.89×10−15 s. It decays mainly into two π0 mesons, and to a smaller extent into two photons.
  • https://en.wikipedia.org/wiki/Muonium - an exotic atom made up of an antimuon and an electron, which was discovered in 1960 and is given the chemical symbol Mu. During the muon's 2.2 µs lifetime, muonium can enter into compounds such as muonium chloride (Mu Cl) or sodium muonide (Na Mu). Due to the mass difference between the antimuon and the electron, muonium (μ+ e−) is more similar to atomic hydrogen (p+ e−) than positronium (e+ e−). Its Bohr radius and ionization energy are within 0.5% of hydrogen, deuterium, and tritium, and thus it can usefully be considered as an exotic light isotope of hydrogen.


Ionization

  • https://en.wikipedia.org/wiki/Ion - an atom or molecule in which the total number of electrons is not equal to the total number of protons, giving the atom a net positive or negative electrical charge



Reactions



  • https://en.wikipedia.org/wiki/Fissile_material - material capable of sustaining a nuclear fission chain reaction. By definition, fissile material can sustain a chain reaction with neutrons of thermal energy. The predominant neutron energy may be typified by either slow neutrons (i.e., a thermal system) or fast neutrons. Fissile material can be used to fuel thermal-neutron reactors, fast-neutron reactors and nuclear explosives.












  • https://en.wikipedia.org/wiki/Critical_mass - the smallest amount of fissile material needed for a sustained nuclear chain reaction. The critical mass of a fissionable material depends upon its nuclear properties (specifically, the nuclear fission cross-section), its density, its shape, its enrichment, its purity, its temperature, and its surroundings. The concept is important in nuclear weapon design.


  • https://en.wikipedia.org/wiki/Dollar_(reactivity) - a unit of reactivity for a nuclear reactor, calibrated to the interval between the conditions of delayed criticality and prompt criticality. Zero dollars corresponds to delayed criticality, which means a steady reaction rate; one dollar corresponds to prompt criticality, which means a nuclear excursion or explosion. A cent is 1/100 of a dollar.
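As a sketch, reactivity in dollars is the reactivity ρ = (k_eff − 1)/k_eff divided by the delayed-neutron fraction β. The value β = 0.0065 below is an illustrative figure for thermal fission of U-235; the actual fraction depends on the fuel and neutron spectrum.

```python
def reactivity_in_dollars(k_eff, beta=0.0065):
    """Reactivity rho = (k_eff - 1)/k_eff, expressed in dollars (rho/beta).
    beta = 0.0065 is an illustrative delayed-neutron fraction for thermal
    fission of U-235; real values depend on fuel and spectrum."""
    rho = (k_eff - 1.0) / k_eff
    return rho / beta

# Delayed criticality (k_eff = 1) sits at $0; prompt criticality (rho = beta) at $1.
assert reactivity_in_dollars(1.0) == 0.0
k_prompt = 1.0 / (1.0 - 0.0065)
assert abs(reactivity_in_dollars(k_prompt) - 1.0) < 1e-9
```

Expressing reactivity in dollars is convenient precisely because $1 marks the boundary where the chain reaction no longer depends on delayed neutrons.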





Accelerators


Software


  • https://en.wikipedia.org/wiki/Chemical_Markup_Language - ChemML or CML, an approach to managing molecular information using tools such as XML and Java. It was the first domain-specific markup language based strictly on XML, the most robust and widely used system for precise information management in many areas, defined first by a DTD and later by an XML Schema. It has been developed over more than a decade by Murray-Rust, Rzepa and others and has been tested in many areas and on a variety of machines. Chemical information is traditionally stored in many different file types, which inhibits reuse of the documents. CML uses XML's portability to help CML developers and chemists design interoperable documents. There are a number of tools that can generate, process and view CML documents. Publishers can distribute chemistry within XML documents by using CML, e.g. in RSS documents.

CML is capable of supporting a wide range of chemical concepts including:

  • molecules
  • reactions
  • spectra and analytical data
  • computational chemistry
  • chemical crystallography and materials


Qbox

  • Qbox - home of the first-principles molecular dynamics code. Qbox is a C++/MPI scalable parallel implementation of first-principles molecular dynamics (FPMD) based on the plane-wave, pseudopotential formalism. Qbox is designed for operation on large parallel computers.
    • http://scherzo.ucdavis.edu/qbox/qbox-public
    • https://en.wikipedia.org/wiki/Qbox - an open-source software package for atomic-scale simulations of molecules, liquids and solids. It implements first principles (or ab initio) molecular dynamics, a simulation method in which inter-atomic forces are derived from quantum mechanics. Qbox is released under a GNU General Public License (GPL), and it is available as a FreeBSD port.


OpenFermion

    • https://github.com/quantumlib/OpenFermion - an open source library for compiling and analyzing quantum algorithms to simulate fermionic systems, including quantum chemistry. Among other functionalities, this version features data structures and tools for obtaining and manipulating representations of fermionic and qubit Hamiltonians.


Psi4

  • Psi4 - an electronic structure package in C++ driven by Python, an open-source suite of ab initio quantum chemistry programs designed for efficient, high-accuracy simulations of molecular properties. We routinely perform computations with >2500 basis functions on multi-core machines. With computationally demanding portions written in C++, exports of many C++ classes into Python via Pybind11, and a flexible Python driver, Psi4 strives to be friendly to both users and developers.


Brailchem

  • Brailchem - a software tool helping blind and visually impaired students to access chemical information commonly presented visually. It consists of two parts: a server providing chemical data and a client presenting the server data to a user. The user can use the client for activities like browsing molecule structures or studying the periodic table of elements. The server part is intended to run on a central server to which clients connect; it serves data to them over HTTP, running its own HTTP server with the help of the Twisted engine. The client part can be run on client machines either as a Firefox extension or as a standalone XUL application.

Mechanics







Cosmology

See also Space










  • https://en.wikipedia.org/wiki/Cosmological_principle - an axiom that embodies the working assumption or premise that the distribution of matter in the universe is homogeneous and isotropic when viewed on a large enough scale, since the forces are expected to act uniformly throughout the universe







Between 10−12 second and 10−6 second after the Big Bang


Between 10−6 second and 1 second after the Big Bang


Between 1 second and 10 seconds after the Big Bang


Between 10 seconds and 380,000 years after the Big Bang

Between 3 minutes and 20 minutes after the Big Bang


  • https://en.wikipedia.org/wiki/Recombination_(cosmology) - refers to the epoch at which charged electrons and protons first became bound to form electrically neutral hydrogen atoms. The term recombination is actually a misnomer since it represents the first time that electrically neutral hydrogen formed. Recombination occurred about 378,000 years after the Big Bang (at a redshift of z = 1100).
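Since the photon temperature scales as (1 + z), the quoted redshift implies a recombination temperature of roughly 3000 K; a quick check in Python, using the present-day CMB temperature of 2.725 K:

```python
T0 = 2.725  # present-day CMB temperature in kelvin

def cmb_temperature(z):
    """Photon temperature at redshift z: T scales as (1 + z)."""
    return T0 * (1.0 + z)

T_rec = cmb_temperature(1100)
assert 2900 < T_rec < 3100  # roughly 3000 K at recombination
```

At about 3000 K the photon bath is finally too cool to keep hydrogen ionized, which is why neutral atoms first form at this epoch.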

ca. 377,000 years after the Big Bang


150 million to 1 billion years after the Big Bang







  • https://en.wikipedia.org/wiki/Interstellar_cloud - the generic name given to an accumulation of gas, plasma and dust in our and other galaxies. Put differently, an interstellar cloud is a denser-than-average region of the interstellar medium.














  • https://en.wikipedia.org/wiki/Lambda-CDM_model - a parametrization of the Big Bang cosmological model in which the universe contains a cosmological constant, denoted by Lambda (Greek Λ), associated with dark energy, and cold dark matter (abbreviated CDM). It is frequently referred to as the standard model of Big Bang cosmology because it is the simplest model that provides a reasonably good account of the following properties of the cosmos: the existence and structure of the cosmic microwave background; the large-scale structure in the distribution of galaxies; the abundances of hydrogen (including deuterium), helium, and lithium; and the accelerating expansion of the universe observed in the light from distant galaxies and supernovae.
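The parametrization enters through the Friedmann equation, which fixes the expansion history once the density parameters are specified. In standard notation, with scale factor a and Hubble constant H₀:

```latex
H(a)^2 = H_0^2 \left[ \Omega_r a^{-4} + \Omega_m a^{-3} + \Omega_k a^{-2} + \Omega_\Lambda \right]
```

Here Ω_m includes both baryons and cold dark matter, Ω_Λ is the cosmological-constant (dark energy) term, and flat ΛCDM sets the curvature term Ω_k = 0.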


Quasar

Black hole


Beyond the Standard Model



"But conformal systems, described by “conformal field theories” (CFTs), are uniform all the way up and down, and this, Polyakov discovered, makes them highly amenable to a bootstrap approach. In a magnet at its critical point, for instance, scale symmetry constrains the two-point correlation function by requiring that it must stay the same when you rescale the distance between the two points. Another conformal symmetry says the three-point function must not change when you invert the three distances involved. In a landmark 1983 paper known simply as “BPZ,” Alexander Belavin, Polyakov and Alexander Zamolodchikov showed that there are an infinite number of conformal symmetries in two spatial dimensions that could be used to constrain the correlation functions of two-dimensional conformal field theories. The authors exploited these symmetries to solve for the critical exponents of a famous CFT called the 2-D Ising model — essentially the theory of a flat magnet. The “conformal bootstrap,” BPZ’s bespoke procedure for exploiting conformal symmetries, shot to fame."
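For concreteness, the constraints described in the quote take a standard form: scale and conformal symmetry fix the two- and three-point functions of scalar operators of dimensions Δᵢ up to overall constants (C and C₁₂₃ below are the usual textbook normalizations):

```latex
\langle \phi(x_1)\,\phi(x_2) \rangle = \frac{C}{|x_1 - x_2|^{2\Delta}},
\qquad
\langle \phi_1(x_1)\,\phi_2(x_2)\,\phi_3(x_3) \rangle
= \frac{C_{123}}{x_{12}^{\Delta_1+\Delta_2-\Delta_3}\,
                 x_{23}^{\Delta_2+\Delta_3-\Delta_1}\,
                 x_{13}^{\Delta_1+\Delta_3-\Delta_2}}
```

where x_ij = |x_i − x_j|. Rescaling or inverting the distances leaves these forms unchanged, which is exactly the invariance the bootstrap exploits.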

"The conformal bootstrap, like the original bootstrap more than a decade earlier, fell into disuse. The lull lasted until 2008, when a group of researchers discovered a powerful trick for approximating solutions to Polyakov's bootstrap equation for CFTs with three or more dimensions. 'Frankly, I didn't expect this, and I thought originally that there is some mistake there,' Polyakov said. 'It seemed to me that the information put into the equations is too little to get such results.'"



  • https://en.wikipedia.org/wiki/Unified_field_theory - occasionally referred to as a uniform field theory, is a type of field theory that allows all that is usually thought of as fundamental forces and elementary particles to be written in terms of a single field. There is no accepted unified field theory, and thus it remains an open line of research. The term was coined by Einstein, who attempted to unify the general theory of relativity with electromagnetism. The "theory of everything" and Grand Unified Theory are closely related to unified field theory, but differ by not requiring the basis of nature to be fields, and often by attempting to explain physical constants of nature.





  • arxiv: Physics in 100 Years - Frank Wilczek. "Here I indulge in wide-ranging speculations on the shape of physics, and technology closely related to physics, over the next one hundred years. Themes include the many faces of unification, the re-imagining of quantum theory, and new forms of engineering on small, intermediate, and large scales." [49]





Georgi-Glashow model

  • https://en.wikipedia.org/wiki/Georgi-Glashow_model - a particular grand unification theory (GUT) proposed by Howard Georgi and Sheldon Glashow in 1974. In this model the standard model gauge groups SU(3) × SU(2) × U(1) are combined into a single simple gauge group—SU(5). The unified group SU(5) is then thought to be spontaneously broken into the standard model subgroup below some very high energy scale called the grand unification scale.
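The statement that SU(3) × SU(2) × U(1) fits inside SU(5) can be made concrete: one Standard Model generation of fermions fills exactly the 5̄ ⊕ 10 representations. In the standard decomposition (hypercharge as subscript):

```latex
\bar{5} = (\bar{3},1)_{1/3} \oplus (1,2)_{-1/2},
\qquad
10 = (3,2)_{1/6} \oplus (\bar{3},1)_{-2/3} \oplus (1,1)_{1}
```

The 5̄ holds the down antiquark and the lepton doublet; the 10 holds the quark doublet, the up antiquark, and the positron.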

Flipped SU(5)

SO(10)

  • https://en.wikipedia.org/wiki/SO(10)_(physics) - a grand unified theory (GUT) based on the SO(10) Lie group. (The Lie group involved is not really the special orthogonal group SO(10), but rather its double cover Spin(10); calling it SO(10) is the standard convention.)
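Under its SU(5) subgroup, the 16-dimensional spinor representation of Spin(10) decomposes as

```latex
16 = 10 \oplus \bar{5} \oplus 1
```

so a single 16 accommodates one full Standard Model generation plus a right-handed neutrino (the SU(5) singlet), which is one of the main attractions of SO(10) unification.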

Supersymmetry




String theory





  • https://en.wikipedia.org/wiki/Bosonic_string_theory - the original version of string theory, developed in the late 1960s. It is so called because it only contains bosons in the spectrum. In the 1980s, supersymmetry was discovered in the context of string theory, and a new version of string theory called superstring theory (supersymmetric string theory) became the real focus. Nevertheless, bosonic string theory remains a very useful model to understand many general features of perturbative string theory, and many theoretical difficulties of superstrings can actually already be found in the context of bosonic strings.



  • https://en.wikipedia.org/wiki/M-theory - a theory in physics that unifies all consistent versions of superstring theory. The existence of such a theory was first conjectured by Edward Witten at a string theory conference at the University of Southern California in the spring of 1995. Witten's announcement initiated a flurry of research activity known as the second superstring revolution.




Loop quantum gravity

Emergent gravity

  • https://en.wikipedia.org/wiki/Induced_gravity - or emergent gravity, is an idea in quantum gravity that space-time curvature and its dynamics emerge as a mean field approximation of underlying microscopic degrees of freedom, similar to the fluid mechanics approximation of Bose–Einstein condensates. The concept was originally proposed by Andrei Sakharov in 1967.


  • https://en.wikipedia.org/wiki/Entropic_gravity - also known as emergent gravity, is a theory in modern physics that describes gravity as an entropic force—a force with macro-scale homogeneity but which is subject to quantum-level disorder—and not a fundamental interaction. The theory, based on string theory, black hole physics, and quantum information theory, describes gravity as an emergent phenomenon that springs from the quantum entanglement of small bits of spacetime information. As such, entropic gravity is said to abide by the second law of thermodynamics, under which the entropy of a physical system tends to increase over time. At its simplest, the theory holds that when gravity becomes vanishingly weak—at levels seen only over interstellar distances—it diverges from its classically understood nature and its strength begins to decay linearly with distance from a mass. Entropic gravity provides an underlying framework for Modified Newtonian Dynamics, or MOND, which holds that below a gravitational acceleration threshold of approximately 1.2×10⁻¹⁰ m/s², gravitational strength begins to vary inversely (linearly) with distance from a mass rather than following the normal inverse-square law. This is an exceedingly low threshold, measuring only about 12 trillionths of gravity's strength at Earth's surface; an object dropped from a height of one meter would fall for roughly 36 hours were Earth's gravity this weak. It is also about 3,000 times weaker than the gravity at the point where Voyager 1 crossed our solar system's heliopause and entered interstellar space. The theory claims to be consistent with both the macro-level observations of Newtonian gravity and Einstein's theory of general relativity with its gravitational distortion of spacetime.
Importantly, the theory also explains (without invoking the existence of dark matter and its accompanying math featuring new free parameters that are tweaked to obtain the desired outcome) why galactic rotation curves differ from the profile expected with visible matter. The theory of entropic gravity posits that what has been interpreted as unobserved dark matter is the product of quantum effects that can be regarded as a form of positive dark energy that lifts the vacuum energy of space from its ground state value. A central tenet of the theory is that the positive dark energy leads to a thermal-volume law contribution to entropy that overtakes the area law of anti-de Sitter space precisely at the cosmological horizon. The theory has been controversial within the physics community but has sparked research and experiments to test its validity. [53]
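The numbers quoted above are easy to check with constant-acceleration kinematics. A minimal sketch (Earth's surface gravity taken as 9.81 m/s², an assumed standard value):

```python
import math

a0 = 1.2e-10   # m/s^2, MOND acceleration threshold from the entry above
g = 9.81       # m/s^2, Earth surface gravity (assumed standard value)

# Ratio of the threshold to Earth's surface gravity: ~12 parts per trillion.
ratio = a0 / g
print(ratio)                       # about 1.2e-11

# Time to fall h = 1 m under constant acceleration a0: t = sqrt(2h / a0).
t_hours = math.sqrt(2 * 1.0 / a0) / 3600
print(t_hours)                     # about 36 hours
```

Both figures in the entry (12 trillionths, 36 hours) come out of this back-of-the-envelope calculation.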



Holographic principle





  • arxiv 2304.09900: Constraints on physical computers in holographic spacetimes - Within the setting of the AdS/CFT correspondence, we ask about the power of computers in the presence of gravity. We show that there are computations on n qubits which cannot be implemented inside of black holes with entropy less than O(2^n). To establish our claim, we argue computations happening inside the black hole must be implementable in a programmable quantum processor, so long as the inputs and description of the unitary to be run are not too large. We then prove a bound on quantum processors which shows many unitaries cannot be implemented inside the black hole, and further show some of these have short descriptions and act on small systems. These unitaries with short descriptions must be computationally forbidden from happening inside the black hole. [57]

Entropic gravity


E8


Causal fermion system

Instead of introducing physical objects on a preexisting space-time manifold, the general concept is to derive space-time as well as all the objects therein as secondary objects from the structures of an underlying causal fermion system. This concept also makes it possible to generalize notions of differential geometry to the non-smooth setting. In particular, one can describe situations when space-time no longer has a manifold structure on the microscopic scale (like a space-time lattice or other discrete or continuous structures on the Planck scale). As a result, the theory of causal fermion systems is a proposal for quantum geometry and an approach to quantum gravity.


to sort