Topic Review
Electron Rest Mass
The electron rest mass (symbol: m_e) is the mass of a stationary electron, also known as the invariant mass of the electron. It is one of the fundamental constants of physics. It has a value of about 9.109×10^−31 kilograms or about 5.486×10^−4 daltons, equivalent to an energy of about 8.187×10^−14 joules or about 0.5110 MeV.
  • 13.7K
  • 31 Oct 2022
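The joule and MeV figures quoted in the entry above follow directly from the mass–energy relation E = m_e c². The short Python sketch below (not part of the entry) reproduces them from the kilogram value using standard constants.

```python
# Minimal sketch: checking the electron rest-energy E = m_e * c^2 quoted above.
m_e = 9.1093837e-31      # electron rest mass, kg
c = 299_792_458.0        # speed of light, m/s
eV = 1.602176634e-19     # joules per electronvolt

E_joule = m_e * c**2
E_MeV = E_joule / eV / 1e6
print(f"E = {E_joule:.4e} J = {E_MeV:.4f} MeV")   # ~8.187e-14 J, ~0.5110 MeV
```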
Topic Review
Proton–Proton Chain Reaction
The proton–proton chain reaction is one of two known sets of nuclear fusion reactions by which stars convert hydrogen to helium. It dominates in stars with masses less than or equal to that of the Sun, whereas the CNO cycle, the other known reaction, is suggested by theoretical models to dominate in stars with masses greater than about 1.3 times that of the Sun. In general, proton–proton fusion can occur only if the kinetic energy (i.e. temperature) of the protons is high enough to overcome their mutual electrostatic repulsion. In the Sun, deuterium-producing events are rare. Diprotons are the much more common result of proton–proton reactions within the star, and diprotons almost immediately decay back into two protons. Since the conversion of hydrogen to helium is slow, the complete conversion of the hydrogen in the core of the Sun is calculated to take more than ten billion years. Although called the "proton–proton chain reaction", it is not a chain reaction in the normal sense. In nuclear physics, a chain reaction usually designates a reaction that produces a product, such as the neutrons given off during fission, which quickly induces further reactions of the same kind. The proton–proton chain is instead, like a decay chain, a series of reactions in which the product of one reaction is the starting material of the next. There are two such chains leading from hydrogen to helium in the Sun: one chain has five reactions, the other has six.
  • 10.5K
  • 21 Oct 2022
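The overall energy release of the chain (not stated in the entry above) can be estimated from standard atomic masses for the net reaction 4 ¹H → ⁴He + 2 e⁺ + 2 ν. A hedged sketch, with the mass values taken as assumptions:

```python
# Hedged sketch: net energy release of the p-p chain from assumed atomic masses.
u_MeV = 931.494          # MeV per atomic mass unit (assumed value)
m_H1 = 1.007825          # atomic mass of 1H, u (assumed value)
m_He4 = 4.002602         # atomic mass of 4He, u (assumed value)

Q = (4 * m_H1 - m_He4) * u_MeV
print(f"Q ~ {Q:.2f} MeV")   # ~26.7 MeV, shared among photons, positrons and neutrinos
```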
Topic Review
Transmission Electron Microscopy
Transmission electron microscopy (TEM) is a microscopy technique in which a beam of electrons is transmitted through a specimen to form an image. The specimen is most often an ultrathin section less than 100 nm thick or a suspension on a grid. An image is formed from the interaction of the electrons with the sample as the beam is transmitted through the specimen. The image is then magnified and focused onto an imaging device, such as a fluorescent screen, a layer of photographic film, or a sensor such as a scintillator attached to a charge-coupled device. Transmission electron microscopes are capable of imaging at a significantly higher resolution than light microscopes, owing to the smaller de Broglie wavelength of electrons. This enables the instrument to capture fine detail—even as small as a single column of atoms, which is thousands of times smaller than the smallest object resolvable in a light microscope. Transmission electron microscopy is a major analytical method in the physical, chemical and biological sciences. TEMs find application in cancer research, virology, and materials science, as well as in pollution, nanotechnology, and semiconductor research, and in other fields such as paleontology and palynology. TEM instruments have multiple operating modes including conventional imaging, scanning TEM imaging (STEM), diffraction, spectroscopy, and combinations of these. Even within conventional imaging, there are many fundamentally different ways that contrast is produced, called "image contrast mechanisms". Contrast can arise from position-to-position differences in thickness or density ("mass-thickness contrast"), atomic number ("Z contrast", referring to the common abbreviation Z for atomic number), crystal structure or orientation ("crystallographic contrast" or "diffraction contrast"), the slight quantum-mechanical phase shifts that individual atoms produce in electrons that pass through them ("phase contrast"), the energy lost by electrons on passing through the sample ("spectrum imaging") and more. Each mechanism tells the user a different kind of information, depending not only on the contrast mechanism but on how the microscope is used—the settings of lenses, apertures, and detectors. This means that a TEM can return an extraordinary variety of nanometer- and atomic-resolution information, in ideal cases revealing not only where all the atoms are but what kinds of atoms they are and how they are bonded to each other. For this reason TEM is regarded as an essential tool for nanoscience in both the biological and materials fields. The first TEM was demonstrated by Max Knoll and Ernst Ruska in 1931; the same group built the first TEM with a resolution exceeding that of light microscopes in 1933 and the first commercial TEM in 1939. In 1986, Ruska was awarded the Nobel Prize in Physics for the development of transmission electron microscopy.
  • 3.8K
  • 05 Dec 2022
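The "smaller de Broglie wavelength" mentioned above can be made concrete with the standard relativistic wavelength formula. A minimal sketch, assuming a 200 kV accelerating voltage (a common value, not stated in the entry):

```python
import math

# Hedged sketch: relativistic de Broglie wavelength of imaging electrons,
# lambda = h / sqrt(2 m e V (1 + e V / (2 m c^2))), at an assumed 200 kV.
h = 6.62607015e-34       # Planck constant, J*s
m = 9.1093837e-31        # electron mass, kg
e = 1.602176634e-19      # elementary charge, C
c = 299_792_458.0        # speed of light, m/s
V = 200e3                # accelerating voltage, V (assumption)

lam = h / math.sqrt(2 * m * e * V * (1 + e * V / (2 * m * c**2)))
print(f"lambda ~ {lam*1e12:.2f} pm")   # ~2.51 pm, far below optical wavelengths
```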
Topic Review
G-Factor
A g-factor (also called g value or dimensionless magnetic moment) is a dimensionless quantity that characterizes the magnetic moment and angular momentum of an atom, a particle, or a nucleus. It is essentially a proportionality constant that relates the observed magnetic moment μ of a particle to its angular momentum quantum number and to a unit of magnetic moment (which makes it dimensionless), usually the Bohr magneton or the nuclear magneton.
  • 3.8K
  • 28 Oct 2022
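As a concrete use of this proportionality, the magnitude of the electron's spin magnetic moment can be written |μ| = g_e μ_B √(s(s+1)) with s = 1/2, a standard textbook relation rather than one quoted in the entry. A minimal sketch:

```python
import math

# Hedged sketch: electron spin magnetic moment from its g-factor,
# |mu| = g_e * mu_B * sqrt(s(s+1)) with s = 1/2.
mu_B = 9.2740100783e-24   # Bohr magneton, J/T
g_e = 2.00231930436       # electron spin g-factor (assumed CODATA-style value)
s = 0.5

mu = g_e * mu_B * math.sqrt(s * (s + 1))
print(f"|mu| ~ {mu:.3e} J/T  (~{mu/mu_B:.3f} Bohr magnetons)")
```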
Topic Review
Weak Interaction
In nuclear physics and particle physics, the weak interaction, which is also often called the weak force or weak nuclear force, is one of the four known fundamental interactions, with the others being electromagnetism, the strong interaction, and gravitation. It is the mechanism of interaction between subatomic particles that is responsible for the radioactive decay of atoms. The weak interaction participates in nuclear fission, and the theory describing its behaviour and effects is sometimes called quantum flavourdynamics (QFD). However, the term QFD is rarely used, because the weak force is better understood by electroweak theory (EWT). The effective range of the weak force is limited to subatomic distances, and is less than the diameter of a proton.
  • 3.6K
  • 27 Oct 2022
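The claim above that the weak force's range is smaller than a proton diameter can be illustrated with the usual Yukawa-style estimate, range ≈ ħc / (m_W c²), using an assumed W boson mass (the estimate and mass value are not from the entry):

```python
# Hedged sketch: Yukawa-style estimate of the weak force range from the W mass.
hbar_c_MeV_fm = 197.327      # hbar*c in MeV*fm
m_W_MeV = 80.4e3             # W boson rest energy, MeV (assumed value)

range_fm = hbar_c_MeV_fm / m_W_MeV
print(f"range ~ {range_fm:.2e} fm ~ {range_fm*1e-15:.1e} m")
# ~2.5e-3 fm, well below a proton diameter of roughly 1.7 fm
```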
Topic Review
Transactinide Element
In chemistry, transactinide elements (also transactinides or super-heavy elements) are the chemical elements with atomic numbers from 104 to 120. Their atomic numbers are immediately greater than those of the actinides, the heaviest of which is lawrencium (atomic number 103). Glenn T. Seaborg first proposed the actinide concept, which led to the acceptance of the actinide series. He also proposed the transactinide series, ranging from element 104 to 121, and the superactinide series, approximately spanning elements 122 to 153. The transactinide seaborgium was named in his honor. By definition, transactinide elements are also transuranic elements, i.e. they have an atomic number greater than that of uranium (92). The transactinide elements all have electrons in the 6d subshell in their ground state. Except for rutherfordium and dubnium, even the longest-lasting isotopes of transactinide elements have extremely short half-lives, measured in seconds or smaller units. The element naming controversy involved the first five or six transactinide elements; these elements therefore used systematic names for many years after their discovery had been confirmed. (Usually the systematic names are replaced with permanent names proposed by the discoverers relatively shortly after a discovery has been confirmed.) Transactinides are radioactive and have only been obtained synthetically in laboratories. None of these elements has ever been collected in a macroscopic sample. Transactinide elements are all named after physicists and chemists or important locations involved in the synthesis of the elements. IUPAC defines an element to exist if its lifetime is longer than 10^−14 seconds, which is the time it takes for the nucleus to form an electron cloud.
  • 3.5K
  • 01 Dec 2022
Topic Review
Void Coefficient
In nuclear engineering, the void coefficient (more properly called the void coefficient of reactivity) is a number that can be used to estimate how much the reactivity of a nuclear reactor changes as voids (typically steam bubbles) form in the reactor moderator or coolant. The net reactivity of a reactor is the sum of many contributions, of which the void effect is only one. Reactors in which either the moderator or the coolant is a liquid typically have a void coefficient that is either negative (if the reactor is under-moderated) or positive (if the reactor is over-moderated). Reactors in which neither the moderator nor the coolant is a liquid (e.g., a graphite-moderated, gas-cooled reactor) have a void coefficient equal to zero. It is unclear how the definition of a 'void' coefficient applies to reactors in which the moderator/coolant is neither a liquid nor a gas, such as a supercritical water reactor.
  • 2.7K
  • 22 Nov 2022
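A first-order way to use such a coefficient is Δρ ≈ α_V · Δ(void fraction). The sketch below is purely illustrative; the coefficient value, units (pcm per % void), and void change are hypothetical numbers, not taken from the entry.

```python
# Illustrative sketch only: first-order reactivity change from a void coefficient.
alpha_V = -100.0     # void coefficient, pcm per % void (hypothetical negative value)
delta_void = 2.0     # change in void fraction, % (hypothetical)

delta_rho_pcm = alpha_V * delta_void
print(f"reactivity change ~ {delta_rho_pcm:.0f} pcm")   # negative feedback in this example
```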
Topic Review
Inertial Confinement Fusion
Inertial confinement fusion (ICF) is a type of fusion energy research that attempts to initiate nuclear fusion reactions by heating and compressing a fuel target, typically in the form of a pellet that most often contains a mixture of deuterium and tritium. Typical fuel pellets are about the size of a pinhead and contain around 10 milligrams of fuel. To compress and heat the fuel, energy is delivered to the outer layer of the target using high-energy beams of laser light, electrons or ions, although for a variety of reasons almost all ICF devices as of 2015 have used lasers. The heated outer layer explodes outward, producing a reaction force against the remainder of the target, accelerating it inwards and compressing the target. This process is designed to create shock waves that travel inward through the target. A sufficiently powerful set of shock waves can compress and heat the fuel at the center so much that fusion reactions occur. ICF is one of two major branches of fusion energy research, the other being magnetic confinement fusion. When it was first proposed in the early 1970s, ICF appeared to be a practical approach to power production and the field flourished. Experiments during the 1970s and '80s demonstrated that the efficiency of these devices was much lower than expected and that reaching ignition would not be easy. Throughout the 1980s and '90s, many experiments were conducted in order to understand the complex interaction of high-intensity laser light and plasma. These led to the design of newer, much larger machines that would finally reach ignition energies. The largest operational ICF experiment is the National Ignition Facility (NIF) in the US, designed using the decades-long experience of earlier experiments. Like those earlier experiments, however, NIF had not reached ignition as of 2015 and was generating about one-third of the required energy levels.
  • 2.7K
  • 29 Sep 2022
Topic Review
Parity
In quantum mechanics, a parity transformation (also called parity inversion) is the flip in the sign of one spatial coordinate. In three dimensions, it can also refer to the simultaneous flip in the sign of all three spatial coordinates (a point reflection). It can also be thought of as a test for chirality of a physical phenomenon, in that a parity inversion transforms a phenomenon into its mirror image. All fundamental interactions of elementary particles, with the exception of the weak interaction, are symmetric under parity. The weak interaction is chiral and thus provides a means for probing chirality in physics. In interactions that are symmetric under parity, such as electromagnetism in atomic and molecular physics, parity serves as a powerful controlling principle underlying quantum transitions. A matrix representation of P (in any number of dimensions) has determinant equal to −1, and hence is distinct from a rotation, which has determinant equal to +1. In a two-dimensional plane, a simultaneous flip of both coordinates in sign is not a parity transformation; it is the same as a 180° rotation. In quantum mechanics, wave functions that are unchanged by a parity transformation are described as even functions, while those that change sign under a parity transformation are odd functions.
  • 2.4K
  • 10 Nov 2022
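The determinant statements above are easy to verify numerically. A minimal sketch (matrices chosen by us for illustration):

```python
import numpy as np

# Minimal sketch: determinants of coordinate-flip matrices.
# In 3D a full point reflection has det = -1 (a genuine parity transformation),
# while in 2D flipping both coordinates has det = +1 (just a 180-degree rotation).
P3 = -np.eye(3)                   # flip all three coordinates
P2 = -np.eye(2)                   # flip both coordinates in 2D
P1 = np.diag([-1.0, 1.0, 1.0])    # flip a single coordinate in 3D

print(np.linalg.det(P3))   # -1.0
print(np.linalg.det(P2))   # +1.0
print(np.linalg.det(P1))   # -1.0
```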
Topic Review
Triple-Alpha Process
The triple-alpha process is a set of nuclear fusion reactions by which three helium-4 nuclei (alpha particles) are transformed into carbon.
  • 2.4K
  • 22 Nov 2022
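The energy released per carbon nucleus formed can be estimated from standard atomic masses, taken here as assumptions rather than values stated in the entry:

```python
# Hedged sketch: net energy release of the triple-alpha process, 3 * 4He -> 12C.
u_MeV = 931.494        # MeV per atomic mass unit (assumed value)
m_He4 = 4.002602       # atomic mass of 4He, u (assumed value)
m_C12 = 12.000000      # atomic mass of 12C, u (exact by definition of the unit)

Q = (3 * m_He4 - m_C12) * u_MeV
print(f"Q ~ {Q:.2f} MeV")   # ~7.27 MeV per carbon-12 nucleus formed
```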
Topic Review
Ionizing Radiation
The development of protective agents against harmful radiation has been a subject of investigation for decades. However, effective (ideal) radioprotectors and radiomitigators remain an unsolved problem. Because ionizing radiation-induced cellular damage is primarily attributed to free radicals, radical scavengers are promising as potential radioprotectors. Early development of such agents focused on synthetic thiol compounds, e.g., amifostine (2-(3-aminopropylamino) ethylsulfanylphosphonic acid), approved as a radioprotector by the Food and Drug Administration (FDA, USA) but only for limited clinical indications and not for nonclinical uses. To date, no new chemical entity has been approved by the FDA as a radiation countermeasure for acute radiation syndrome (ARS). All FDA-approved radiation countermeasures (filgrastim, a recombinant DNA form of the naturally occurring granulocyte colony-stimulating factor, G-CSF; pegfilgrastim, a PEGylated form of the recombinant human G-CSF; sargramostim, a recombinant granulocyte macrophage colony-stimulating factor, GM-CSF) are classified as radiomitigators. No radioprotector that can be administered prior to exposure has been approved for ARS. This distinction separates radioprotectors (which reduce the direct damage caused by radiation) from radiomitigators (which minimize toxicity even after radiation has been delivered). Molecules under development with the aim of reaching clinical practice and other nonclinical applications are discussed. Assays to evaluate the biological effects of ionizing radiation are also analyzed. Ionizing radiation is the energy released by atoms in the form of electromagnetic waves (e.g., X-rays or gamma rays) or particle radiation (alpha, beta, electrons, protons, neutrons, mesons, pions, and heavy ions) with sufficient energy to ionize atoms or molecules.
  • 2.4K
  • 23 Feb 2022
Topic Review
Kikuchi Lines
Kikuchi lines are patterns of electrons formed by scattering. They pair up to form bands in electron diffraction from single crystal specimens, there to serve as "roads in orientation-space" for microscopists uncertain of what they are looking at. In transmission electron microscopes, they are easily seen in diffraction from regions of the specimen thick enough for multiple scattering. Unlike diffraction spots, which blink on and off as one tilts the crystal, Kikuchi bands mark orientation space with well-defined intersections (called zones or poles) as well as paths connecting one intersection to the next. Experimental and theoretical maps of Kikuchi band geometry, as well as their direct-space analogs e.g. bend contours, electron channeling patterns, and fringe visibility maps are increasingly useful tools in electron microscopy of crystalline and nanocrystalline materials. Because each Kikuchi line is associated with Bragg diffraction from one side of a single set of lattice planes, these lines can be labeled with the same Miller or reciprocal-lattice indices that are used to identify individual diffraction spots. Kikuchi band intersections, or zones, on the other hand are indexed with direct-lattice indices i.e. indices which represent integer multiples of the lattice basis vectors a, b and c. Kikuchi lines are formed in diffraction patterns by diffusely scattered electrons, e.g. as a result of thermal atom vibrations. The main features of their geometry can be deduced from a simple elastic mechanism proposed in 1928 by Seishi Kikuchi, although the dynamical theory of diffuse inelastic scattering is needed to understand them quantitatively. In x-ray scattering, these lines are referred to as Kossel lines (named after Walther Kossel).
  • 2.2K
  • 09 Nov 2022
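Since each Kikuchi line sits at the Bragg condition for one set of lattice planes, λ = 2d sin θ, the characteristic angles are tiny for TEM electrons. A hedged sketch with assumed inputs (200 kV electrons and a 0.20 nm plane spacing, neither quoted in the entry):

```python
import math

# Hedged sketch: Bragg angle for one set of lattice planes, lambda = 2 d sin(theta).
lam_nm = 2.51e-3      # electron wavelength at ~200 kV, nm (assumption)
d_nm = 0.20           # lattice plane spacing, nm (assumption)

theta = math.asin(lam_nm / (2 * d_nm))
print(f"Bragg angle ~ {math.degrees(theta):.2f} deg")   # a fraction of a degree, hence narrow bands
```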
Topic Review
Photon
A photon (from Ancient Greek φῶς, φωτός 'light') is an elementary particle that is a quantum of the electromagnetic field, including electromagnetic radiation such as light and radio waves, and the force carrier for the electromagnetic force. Photons are massless, so they always move at the speed of light in vacuum, 299792458 m/s (or about 186,282 mi/s). The photon belongs to the class of bosons. Like all elementary particles, photons are currently best explained by quantum mechanics and exhibit wave–particle duality, their behavior featuring properties of both waves and particles. The modern photon concept originated during the first two decades of the 20th century with the work of Albert Einstein, who built upon the research of Max Planck. While trying to explain how matter and electromagnetic radiation could be in thermal equilibrium with one another, Planck proposed that the energy stored within a material object should be regarded as composed of an integer number of discrete, equal-sized parts. To explain the photoelectric effect, Einstein introduced the idea that light itself is made of discrete units of energy. In 1926, Gilbert N. Lewis popularized the term photon for these energy units. Subsequently, many other experiments validated Einstein's approach. In the Standard Model of particle physics, photons and other elementary particles are described as a necessary consequence of physical laws having a certain symmetry at every point in spacetime. The intrinsic properties of particles, such as charge, mass, and spin, are determined by this gauge symmetry. The photon concept has led to momentous advances in experimental and theoretical physics, including lasers, Bose–Einstein condensation, quantum field theory, and the probabilistic interpretation of quantum mechanics. It has been applied to photochemistry, high-resolution microscopy, and measurements of molecular distances. Moreover, photons have been studied as elements of quantum computers and for applications in optical imaging and optical communication such as quantum cryptography.
  • 2.1K
  • 23 Oct 2022
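Planck's "discrete, equal-sized parts" correspond to photon energies E = hc/λ. A minimal sketch, assuming a 550 nm (green) wavelength chosen for illustration:

```python
# Minimal sketch: photon energy E = h*c / lambda at an assumed 550 nm wavelength.
h = 6.62607015e-34       # Planck constant, J*s
c = 299_792_458.0        # speed of light, m/s
eV = 1.602176634e-19     # joules per electronvolt
lam = 550e-9             # wavelength, m (assumption)

E = h * c / lam
print(f"E ~ {E:.3e} J ~ {E/eV:.2f} eV")   # a few eV for visible light
```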
Topic Review
CNO Cycle
The CNO cycle (for carbon–nitrogen–oxygen; sometimes called the Bethe–Weizsäcker cycle after Hans Albrecht Bethe and Carl Friedrich von Weizsäcker) is one of the two known sets of fusion reactions by which stars convert hydrogen to helium, the other being the proton–proton chain reaction (p-p chain), which is more efficient at the Sun's core temperature. The CNO cycle is hypothesized to be dominant in stars that are more than 1.3 times as massive as the Sun. Unlike the proton–proton reaction, which consumes all its constituents, the CNO cycle is a catalytic cycle. In the CNO cycle, four protons fuse, using carbon, nitrogen, and oxygen isotopes as catalysts, each of which is consumed at one step of the cycle but regenerated in a later step. The end product is one alpha particle (a stable helium nucleus), two positrons, and two electron neutrinos. There are various alternative paths and catalysts involved in the CNO cycles, but all of these cycles have the same net result: four protons become one alpha particle, two positrons, and two electron neutrinos. The positrons almost instantly annihilate with electrons, releasing energy in the form of gamma rays. The neutrinos escape from the star, carrying away some energy. One nucleus goes on to become carbon, nitrogen, and oxygen isotopes through a number of transformations in an endless loop. The proton–proton chain is more prominent in stars with the mass of the Sun or less. This difference stems from the different temperature dependencies of the two reactions; the p-p chain reaction starts at temperatures around 4×10^6 K (4 megakelvin), making it the dominant energy source in smaller stars. A self-maintaining CNO chain starts at approximately 15×10^6 K, but its energy output rises much more rapidly with increasing temperature, so that it becomes the dominant source of energy at approximately 17×10^6 K. The Sun has a core temperature of around 15.7×10^6 K, and only 1.7% of the helium-4 nuclei produced in the Sun are born in the CNO cycle. The CNO-I process was independently proposed by Carl von Weizsäcker and Hans Bethe in the late 1930s. The first reports of the experimental detection of the neutrinos produced by the CNO cycle in the Sun were published in 2020. This was also the first experimental confirmation that the Sun has a CNO cycle, that the proposed magnitude of the cycle is accurate, and that von Weizsäcker and Bethe were correct.
  • 2.0K
  • 03 Nov 2022
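The steep temperature dependence mentioned above is often summarized with the rough textbook power laws ε_pp ∝ T⁴ and ε_CNO ∝ T¹⁷ near solar-core temperatures; these exponents are approximations, not figures from the entry. A small illustrative sketch of how much more quickly the CNO rate grows with temperature:

```python
# Illustrative sketch (crude textbook power laws, arbitrary normalization):
# eps_pp ~ T^4 versus eps_CNO ~ T^17 near solar-core temperatures.
T_ref = 15.7e6   # assumed reference temperature, K (roughly the solar core)
for T in (14e6, 15.7e6, 20e6, 25e6):
    boost_pp = (T / T_ref) ** 4
    boost_cno = (T / T_ref) ** 17
    print(f"T = {T/1e6:5.1f} MK   pp x{boost_pp:6.2f}   CNO x{boost_cno:9.2f}")
```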
Topic Review
Position and Momentum Space
In physics and geometry, there are two closely related vector spaces, usually three-dimensional but in general of any finite number of dimensions. Position space (also real space or coordinate space) is the set of all position vectors r in space, and has dimensions of length. A position vector defines a point in space. If the position vector of a point particle varies with time, it traces out a path, the trajectory of the particle. Momentum space is the set of all momentum vectors p a physical system can have. The momentum vector of a particle corresponds to its motion, with units of [mass][length][time]^−1. Mathematically, the duality between position and momentum is an example of Pontryagin duality. In particular, if a function is given in position space, f(r), then its Fourier transform gives the function in momentum space, φ(p). Conversely, the inverse Fourier transform of a momentum-space function is a position-space function. These quantities and ideas transcend all of classical and quantum physics, and a physical system can be described using either the positions of the constituent particles or their momenta; both formulations equivalently provide the same information about the system in question. Another quantity is useful to define in the context of waves: the wave vector k (or simply "k-vector"), which has dimensions of reciprocal length, making it an analogue of angular frequency ω, which has dimensions of reciprocal time. The set of all wave vectors is k-space. Usually r is more intuitive and simpler than k, though the converse can also be true, as in solid-state physics. Quantum mechanics provides two fundamental examples of the duality between position and momentum: the Heisenberg uncertainty principle ΔxΔp ≥ ħ/2, stating that position and momentum cannot be simultaneously known to arbitrary precision, and the de Broglie relation p = ħk, which states that the momentum and wave vector of a free particle are proportional to each other. In this context, when it is unambiguous, the terms "momentum" and "wavevector" are used interchangeably. However, the de Broglie relation is not true in a crystal.
  • 2.0K
  • 24 Nov 2022
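For reference, one common symmetric convention for the Fourier-transform pair described above is sketched below (a standard form, not transcribed from the entry):

```latex
\varphi(\mathbf{p}) = \frac{1}{(2\pi\hbar)^{3/2}} \int f(\mathbf{r})\,
    e^{-i\,\mathbf{p}\cdot\mathbf{r}/\hbar}\, \mathrm{d}^3\mathbf{r},
\qquad
f(\mathbf{r}) = \frac{1}{(2\pi\hbar)^{3/2}} \int \varphi(\mathbf{p})\,
    e^{+i\,\mathbf{p}\cdot\mathbf{r}/\hbar}\, \mathrm{d}^3\mathbf{p}.
```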
Topic Review
FFAG Accelerator
A Fixed-Field Alternating Gradient accelerator (FFAG) is a circular particle accelerator concept, development of which began in the early 1950s, characterized by time-independent magnetic fields (fixed field, as in a cyclotron) and the use of strong focusing (alternating gradient, as in a synchrotron). Thus, FFAG accelerators combine the cyclotron's advantage of continuous, unpulsed operation with the synchrotron's relatively inexpensive, small, narrow-bore magnet ring. Although the development of FFAGs was not pursued for over a decade starting in 1967, it has regained interest since the mid-1980s for use in neutron spallation sources and as a driver for muon colliders, and since the mid-1990s for accelerating muons in a neutrino factory. The revival of FFAG research has been particularly strong in Japan, with the construction of several rings. This resurgence has been prompted in part by advances in RF cavities and in magnet design.
  • 1.7K
  • 01 Nov 2022
Topic Review
Radiative Transfer Equation and Diffusion Theory for Photon Transport in Biological Tissue
The RTE can mathematically model the transfer of energy as photons move inside a tissue. The flow of radiation energy through a small area element in the radiation field can be characterized by radiance [math]\displaystyle{ L(\vec{r},\hat{s},t) (\frac{W}{m^2 sr}) }[/math]. Radiance is defined as energy flow per unit normal area per unit solid angle per unit time. Here, [math]\displaystyle{ \vec{r} }[/math] denotes position, [math]\displaystyle{ \hat{s} }[/math] denotes unit direction vector and [math]\displaystyle{ t }[/math] denotes time. 
  • 1.6K
  • 16 Nov 2022
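For reference, the radiative transfer equation for the radiance defined above is commonly written in the following standard form (stated here as a sketch, not transcribed from the entry); [math]\displaystyle{ \mu_a }[/math] and [math]\displaystyle{ \mu_s }[/math] are the absorption and scattering coefficients, [math]\displaystyle{ P }[/math] is the scattering phase function, [math]\displaystyle{ S }[/math] is a source term, and [math]\displaystyle{ c }[/math] is the speed of light in the medium:

```latex
\frac{1}{c}\frac{\partial L(\vec{r},\hat{s},t)}{\partial t}
  + \hat{s}\cdot\nabla L(\vec{r},\hat{s},t)
  + (\mu_a + \mu_s)\, L(\vec{r},\hat{s},t)
  = \mu_s \int_{4\pi} L(\vec{r},\hat{s}',t)\, P(\hat{s}'\cdot\hat{s})\, \mathrm{d}\Omega'
  + S(\vec{r},\hat{s},t)
```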
Topic Review
4D Scanning Transmission Electron Microscopy
4D scanning transmission electron microscopy (4D STEM) is a subset of scanning transmission electron microscopy (STEM) which utilizes a pixelated electron detector to capture a convergent beam electron diffraction (CBED) pattern at each scan location. The technique captures a two-dimensional reciprocal-space image associated with each scan point as the beam rasters across a two-dimensional region in real space, hence the name 4D STEM. Its development was enabled by the evolution of STEM detectors and by improvements in computational power. The technique has applications in virtual diffraction imaging, phase, orientation, and strain mapping, and phase contrast analysis, among others. The name 4D STEM is common in the literature, but the technique is also known by other names: 4D STEM EELS, ND STEM (N, since the number of dimensions could be higher than 4), position-resolved diffraction (PRD), spatially resolved diffractometry, momentum-resolved STEM, "nanobeam precision electron diffraction", scanning electron nanodiffraction, nanobeam electron diffraction, or pixelated STEM.
  • 1.6K
  • 11 Oct 2022
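The data layout described above (one diffraction pattern per scan point) can be sketched as a 4-dimensional array, from which a virtual image is formed by integrating each pattern over a chosen detector mask. The sketch below uses synthetic random data and an assumed dataset shape, not any real detector interface:

```python
import numpy as np

# Hedged sketch: a 4D STEM dataset as (scan_y, scan_x, k_y, k_x), with a
# "virtual bright-field" image formed by summing each CBED pattern inside a
# circular mask around the central beam.
scan_y, scan_x, k_y, k_x = 32, 32, 64, 64                       # assumed shape
data = np.random.poisson(1.0, (scan_y, scan_x, k_y, k_x)).astype(float)  # synthetic counts

yy, xx = np.mgrid[:k_y, :k_x]
mask = (yy - k_y / 2) ** 2 + (xx - k_x / 2) ** 2 < 8 ** 2       # central detector mask

virtual_bf = (data * mask).sum(axis=(2, 3))                     # one value per scan position
print(virtual_bf.shape)                                         # (32, 32) real-space image
```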
Topic Review
J/Psi Meson
The J/ψ (J/psi) meson /ˈdʒeɪ ˈsaɪ ˈmiːzɒn/ or psion is a subatomic particle, a flavor-neutral meson consisting of a charm quark and a charm antiquark. Mesons formed by a bound state of a charm quark and a charm anti-quark are generally known as "charmonium". The J/ψ is the most common form of charmonium, due to its spin of 1 and its low rest mass. The J/ψ has a rest mass of 3.0969 GeV/c^2, just above that of the η_c (2.9836 GeV/c^2), and a mean lifetime of 7.2×10^−21 s. This lifetime was about a thousand times longer than expected. Its discovery was made independently by two research groups, one at the Stanford Linear Accelerator Center, headed by Burton Richter, and one at the Brookhaven National Laboratory, headed by Samuel Ting of MIT. They discovered they had actually found the same particle, and both announced their discoveries on 11 November 1974. The importance of this discovery is highlighted by the fact that the subsequent, rapid changes in high-energy physics at the time have become collectively known as the "November Revolution". Richter and Ting were awarded the 1976 Nobel Prize in Physics.
  • 1.5K
  • 09 Nov 2022
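The unusually long lifetime quoted above implies a narrow natural line width via Γ = ħ/τ. A minimal sketch using the lifetime value from the entry:

```python
# Minimal sketch: natural width implied by the quoted lifetime, Gamma = hbar / tau.
hbar_eV_s = 6.582119569e-16     # hbar in eV*s
tau = 7.2e-21                   # mean lifetime, s (value quoted above)

Gamma_keV = hbar_eV_s / tau / 1e3
print(f"Gamma ~ {Gamma_keV:.0f} keV")   # ~90 keV, very narrow for a ~3.1 GeV resonance
```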