Topic Review
Corium (Nuclear Reactor)
Corium, also called fuel-containing material (FCM) or lava-like fuel-containing material (LFCM), is a material that is created in the core of a nuclear reactor during a meltdown accident. It resembles natural lava in its consistency. It consists of a mixture of nuclear fuel, fission products, control rods, structural materials from the affected parts of the reactor, products of their chemical reaction with air, water and steam, and, in the event that the reactor vessel is breached, molten concrete from the floor of the reactor room.
  • 1.6K
  • 28 Oct 2022
Topic Review
Precursor
Precursors are characteristic wave patterns caused by dispersion of an impulse's frequency components as it propagates through a medium. Classically, precursors precede the main signal, although in certain situations they may also follow it. Precursor phenomena exist for all types of waves, as their appearance is only predicated on the prominence of dispersion effects in a given mode of wave propagation. This non-specificity has been confirmed by the observation of precursor patterns in different types of electromagnetic radiation (microwaves, visible light, and terahertz radiation) as well as in fluid surface waves and seismic waves.
  • 1.5K
  • 02 Dec 2022
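To illustrate the mechanism this entry describes, the following Python sketch propagates a Gaussian impulse through a hypothetical dispersive medium by applying a frequency-dependent phase delay in the Fourier domain. The dispersion relation and all parameters are invented for illustration; real precursor phenomena (Sommerfeld and Brillouin precursors) involve more detailed medium models.

```python
import numpy as np

# Sample a short Gaussian impulse in time.
fs = 1000.0                          # sampling rate (arbitrary units)
t = np.arange(0, 4, 1 / fs)
pulse = np.exp(-((t - 0.5) ** 2) / (2 * 0.01 ** 2))

# Hypothetical dispersion: the phase velocity depends on frequency, so
# each Fourier component accumulates a different phase delay over the
# propagation distance L, spreading the impulse in time.
L = 50.0
spectrum = np.fft.rfft(pulse)
f = np.fft.rfftfreq(len(pulse), 1 / fs)
c0, alpha = 100.0, 0.002             # assumed medium parameters
v_phase = c0 / (1 + alpha * f)       # illustrative dispersion relation
phase = 2 * np.pi * f * L / v_phase
propagated = np.fft.irfft(spectrum * np.exp(-1j * phase), n=len(pulse))

# Components travelling fastest arrive first, forming a low-amplitude
# forerunner separated from the main arrival.
print("input peak at t =", t[np.argmax(pulse)])
print("output peak at t =", t[np.argmax(np.abs(propagated))])
```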
Topic Review
Low-Energy Electron Microscopy
Low-energy electron microscopy, or LEEM, is an analytical surface science technique used to image atomically clean surfaces, atom–surface interactions, and thin (crystalline) films. In LEEM, high-energy electrons (15–20 keV) are emitted from an electron gun, focused using a set of condenser optics, and sent through a magnetic beam deflector (usually 60° or 90°). The "fast" electrons travel through an objective lens and decelerate to low energies (1–100 eV) near the sample surface because the sample is held at a potential near that of the gun. At these low energies the electrons are surface-sensitive, and the near-surface sampling depth can be varied by tuning the energy of the incident electrons (the difference between the sample and gun potentials minus the work functions of the sample and system). The low-energy elastically backscattered electrons travel back through the objective lens, reaccelerate to the gun voltage (because the objective lens is grounded), and pass through the beam separator again. Now, however, the electrons travel away from the condenser optics and into the projector lenses. Imaging the back focal plane of the objective lens into the object plane of the projector lens (using an intermediate lens) produces a diffraction pattern (low-energy electron diffraction, LEED) at the imaging plane, which can be recorded in a number of different ways. The intensity distribution of the diffraction pattern depends on the periodicity of the sample surface and is a direct result of the wave nature of the electrons. Individual images of the diffraction-pattern spot intensities can be produced by turning off the intermediate lens and inserting a contrast aperture in the back focal plane of the objective lens (or, in state-of-the-art instruments, in the center of the separator, as chosen by the excitation of the objective lens), allowing real-time observations of dynamic processes at surfaces. Such phenomena include (but are not limited to) tomography, phase transitions, adsorption, reaction, segregation, thin-film growth, etching, strain relief, sublimation, and magnetic microstructure. These investigations are possible only because of the accessibility of the sample, which allows a wide variety of in situ studies over a wide temperature range. LEEM was invented by Ernst Bauer in 1962 but was not fully developed (by Ernst Bauer and Wolfgang Telieps) until 1985.
  • 1.5K
  • 15 Nov 2022
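The landing-energy relation described in this entry (potential difference corrected for work functions) can be made concrete with a small calculation. The sketch below is a minimal illustration; the potentials and work-function values are assumed example numbers, not values from the entry, and sign conventions vary between instruments.

```python
# Minimal sketch of the LEEM landing-energy relation: electrons
# decelerate near the sample because it is held close to the gun
# potential, and the landing energy is the potential difference
# corrected for the work functions.

def landing_energy_eV(v_gun, v_sample, wf_sample, wf_gun):
    """Approximate landing energy in eV for assumed potentials (V)
    and work functions (eV)."""
    return (v_gun - v_sample) - (wf_sample - wf_gun)

# Hypothetical example: 15 kV gun, sample biased so electrons land
# at a few eV, with assumed work functions.
print(landing_energy_eV(v_gun=15_000.0, v_sample=14_995.0,
                        wf_sample=4.5, wf_gun=4.2))  # -> ~4.7 eV
```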
Topic Review
Fouling Prevention in Membranes by Radiation-Induced Graft Copolymerization
The application of membrane processes in various fields has now undergone accelerated development, despite the presence of some hurdles impacting process efficiency. Fouling is arguably the main hindrance to a wider implementation of polymeric membranes, particularly in pressure-driven membrane processes, causing higher costs of energy, operation, and maintenance. Radiation-induced graft copolymerization (RIGC) is a powerful, versatile technique for covalently imparting selected chemical functionalities to membrane surfaces, providing a potential solution to fouling problems.
  • 1.5K
  • 17 Jan 2022
Topic Review
Photoemission Electron Microscopy
Photoemission electron microscopy (PEEM, also called photoelectron microscopy, PEM) is a type of electron microscopy that utilizes local variations in electron emission to generate image contrast. The excitation is usually produced by ultraviolet light, synchrotron radiation, or X-ray sources. PEEM measures the absorption coefficient indirectly by collecting the emitted secondary electrons generated in the electron cascade that follows the creation of the primary core hole in the absorption process. PEEM is a surface-sensitive technique because the emitted electrons originate from a shallow layer. In physics, the technique is referred to as PEEM, which goes together naturally with low-energy electron diffraction (LEED) and low-energy electron microscopy (LEEM). In biology, it is called photoelectron microscopy (PEM), which fits with photoelectron spectroscopy (PES), transmission electron microscopy (TEM), and scanning electron microscopy (SEM).
  • 1.5K
  • 19 Oct 2022
Topic Review
Cross Section
In physics, the cross section is a measure of the probability that a specific process will take place when some kind of radiant excitation (e.g. a particle beam, sound wave, light, or an X-ray) intersects a localized phenomenon (e.g. a particle or density fluctuation). For example, the Rutherford cross section is a measure of the probability that an alpha particle will be deflected by a given angle during an interaction with an atomic nucleus. The cross section is typically denoted σ (sigma) and is expressed in units of area, more specifically in barns. In a way, it can be thought of as the size of the object that the excitation must hit in order for the process to occur, but more exactly, it is a parameter of a stochastic process. In classical physics, this probability often converges to a deterministic proportion of the excitation energy involved in the process, so that, for example, with light scattering off of a particle, the cross section specifies the amount of optical power scattered from light of a given irradiance (power per area). Although the cross section has the same units as area, it may not correspond to the actual physical size of the target given by other forms of measurement. It is not uncommon for the actual cross-sectional area of a scattering object to be much larger or smaller than its cross section for some physical process. For example, plasmonic nanoparticles can have light-scattering cross sections for particular frequencies that are much larger than their actual cross-sectional areas. When two discrete particles interact in classical physics, their mutual cross section is the area transverse to their relative motion within which they must meet in order to scatter from each other. If the particles are hard inelastic spheres that interact only upon contact, their scattering cross section is related to their geometric size. If the particles interact through some action-at-a-distance force, such as electromagnetism or gravity, their scattering cross section is generally larger than their geometric size. When a cross section is specified as the differential limit of a function of some final-state variable, such as particle angle or energy, it is called a differential cross section. When a cross section is integrated over all scattering angles (and possibly other variables), it is called a total cross section or integrated total cross section. For example, in Rayleigh scattering, the intensity scattered at the forward and backward angles is greater than the intensity scattered sideways, so the forward differential scattering cross section is greater than the perpendicular differential cross section; by adding all of the infinitesimal cross sections over the whole range of angles with integral calculus, we can find the total cross section. Scattering cross sections may be defined in nuclear, atomic, and particle physics for collisions of accelerated beams of one type of particle with targets (either stationary or moving) of a second type of particle. The probability for any given reaction to occur is in proportion to its cross section. Thus, specifying the cross section for a given reaction is a proxy for stating the probability that a given scattering process will occur.
The measured reaction rate of a given process depends strongly on experimental variables such as the density of the target material, the intensity of the beam, the detection efficiency of the apparatus, or the angle setting of the detection apparatus. However, these quantities can be factored away, allowing measurement of the underlying two-particle collisional cross section. Differential and total scattering cross sections are among the most important measurable quantities in nuclear, atomic, and particle physics.
  • 1.4K
  • 18 Nov 2022
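The relation between differential and total cross sections described in this entry can be checked numerically. The sketch below (an illustration, not part of the entry) integrates the classical Thomson differential cross section, dσ/dΩ = (r_e²/2)(1 + cos²θ), over all solid angles and recovers the known total of about 0.665 barn.

```python
import numpy as np
from scipy.integrate import quad

R_E = 2.8179403e-15          # classical electron radius (m)
BARN = 1e-28                 # 1 barn = 1e-28 m^2

def dsigma_domega(theta):
    """Thomson differential cross section (m^2/sr)."""
    return 0.5 * R_E**2 * (1.0 + np.cos(theta) ** 2)

# Total cross section: integrate over solid angle,
# sigma = 2*pi * integral of (dsigma/dOmega) * sin(theta) dtheta.
sigma, _ = quad(lambda th: dsigma_domega(th) * np.sin(th), 0.0, np.pi)
sigma *= 2.0 * np.pi

print(f"total Thomson cross section: {sigma / BARN:.3f} barn")  # ~0.665
# The reaction-rate proxy mentioned above is then rate ~ flux * sigma
# per target particle.
```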
Topic Review
Radiological Investigation of French Fossils
The detailed radiological investigation of uranium-containing fossils in a museum in the city of Basel revealed the following crucial points: (1) Storage of uranium-containing minerals or fossils may lead to elevated radon exposure for the staff. Monitoring of the room air is recommended, and any measures then taken (permanent air ventilation, airtight storage of contaminated objects) will reduce the exposure to radon. (2) The handling and preparation of such contaminated objects may lead to significant doses from inhalation of radioactive dust. Such work should not be undertaken without special precautions, such as wearing a protective mask and gloves and working at a clean bench.
  • 1.4K
  • 04 Jul 2021
Topic Review
Osmium-191
Osmium (₇₆Os) has seven naturally occurring isotopes, five of which are stable: ¹⁸⁷Os, ¹⁸⁸Os, ¹⁸⁹Os, ¹⁹⁰Os, and (most abundant) ¹⁹²Os. The other natural isotopes, ¹⁸⁴Os and ¹⁸⁶Os, have extremely long half-lives (1.12×10¹³ years and 2×10¹⁵ years, respectively) and for practical purposes can be considered stable as well. ¹⁸⁷Os is the daughter of ¹⁸⁷Re (half-life 4.56×10¹⁰ years) and is most often measured as an ¹⁸⁷Os/¹⁸⁸Os ratio. This ratio, as well as the ¹⁸⁷Re/¹⁸⁸Os ratio, has been used extensively in dating terrestrial as well as meteoric rocks. It has also been used to measure the intensity of continental weathering over geologic time and to fix minimum ages for the stabilization of the mantle roots of continental cratons. However, the most notable application of Os in dating has been in conjunction with iridium, to analyze the layer of shocked quartz along the Cretaceous–Paleogene boundary that marks the extinction of the dinosaurs 66 million years ago. There are also 30 artificial radioisotopes, the longest-lived of which is ¹⁹⁴Os with a half-life of six years; all others have half-lives under 94 days. There are also nine known nuclear isomers, the longest-lived of which is ¹⁹¹ᵐOs with a half-life of 13.10 hours. All isotopes and nuclear isomers of osmium are either radioactive or observationally stable, meaning that they are predicted to be radioactive but no actual decay has been observed.
  • 1.4K
  • 08 Nov 2022
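As a worked illustration of the Re–Os dating mentioned above, the sketch below solves the standard radiogenic-ingrowth equation for age, using the ¹⁸⁷Re half-life quoted in the entry. The isotope ratios are invented example values, not measured data.

```python
import math

HALF_LIFE_RE187 = 4.56e10                 # years, as quoted in the entry
LAMBDA = math.log(2) / HALF_LIFE_RE187    # decay constant (1/yr)

def re_os_model_age(os_now, os_init, re_os):
    """Age from the ingrowth equation:
    (187Os/188Os)_now = (187Os/188Os)_init
                        + (187Re/188Os) * (exp(lambda*t) - 1)."""
    return math.log(1.0 + (os_now - os_init) / re_os) / LAMBDA

# Hypothetical example ratios (illustrative only):
age = re_os_model_age(os_now=0.80, os_init=0.40, re_os=10.0)
print(f"model age: {age / 1e9:.2f} Gyr")  # ~2.58 Gyr
```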
Topic Review
Personal RF Safety Monitors
Electromagnetic field densitometers measure the exposure to electromagnetic radiation in certain ranges of the electromagnetic spectrum. This article concentrates on densitometers used in the telecommunication industry, which measure exposure to radio spectrum radiation. Other densitometers also exist, such as extremely-low-frequency densitometers, which measure exposure to radiation from electric power lines. The major difference between a densitometer and a dosimeter is that a dosimeter can measure the absorbed dose, a quantity that RF monitors cannot provide. Monitors are further divided into RF monitors, which simply measure fields, and RF personal monitors, which are designed to function while mounted on the human body.
  • 1.4K
  • 14 Oct 2022
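As a rough illustration of what such a densitometer reports, the sketch below converts a measured RMS electric field strength to a far-field power density using the free-space impedance (a standard plane-wave relation). The field reading and the alarm threshold are arbitrary example values, not regulatory limits, which in practice depend on frequency and the applicable standard.

```python
Z0 = 376.73          # impedance of free space (ohms)

def power_density_w_m2(e_field_v_m):
    """Far-field (plane-wave) power density S = E_rms^2 / Z0."""
    return e_field_v_m ** 2 / Z0

# Hypothetical field reading and an arbitrary example threshold.
reading = power_density_w_m2(61.0)            # V/m -> W/m^2
print(f"S = {reading:.1f} W/m^2, alarm: {reading > 10.0}")
```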
Topic Review
Isotopes of Osmium
Osmium (₇₆Os) has seven naturally occurring isotopes, five of which are stable: ¹⁸⁷Os, ¹⁸⁸Os, ¹⁸⁹Os, ¹⁹⁰Os, and (most abundant) ¹⁹²Os. The other natural isotopes, ¹⁸⁴Os and ¹⁸⁶Os, have extremely long half-lives (1.12×10¹³ years and 2×10¹⁵ years, respectively) and for practical purposes can be considered stable as well. ¹⁸⁷Os is the daughter of ¹⁸⁷Re (half-life 4.56×10¹⁰ years) and is most often measured as an ¹⁸⁷Os/¹⁸⁸Os ratio. This ratio, as well as the ¹⁸⁷Re/¹⁸⁸Os ratio, has been used extensively in dating terrestrial as well as meteoric rocks. It has also been used to measure the intensity of continental weathering over geologic time and to fix minimum ages for the stabilization of the mantle roots of continental cratons. However, the most notable application of Os in dating has been in conjunction with iridium, to analyze the layer of shocked quartz along the Cretaceous–Paleogene boundary that marks the extinction of the dinosaurs 66 million years ago. There are also 30 artificial radioisotopes, the longest-lived of which is ¹⁹⁴Os with a half-life of six years; all others have half-lives under 94 days. There are also nine known nuclear isomers, the longest-lived of which is ¹⁹¹ᵐOs with a half-life of 13.10 hours. All isotopes and nuclear isomers of osmium are either radioactive or observationally stable, meaning that they are predicted to be radioactive but no actual decay has been observed.
  • 1.3K
  • 27 Oct 2022
Topic Review
Biological Small-Angle Scattering
Biological small-angle scattering is a small-angle scattering method for structure analysis of biological materials. Small-angle scattering is used to study the structure of a variety of objects such as solutions of biological macromolecules, nanocomposites, alloys, and synthetic polymers. Small-angle X-ray scattering (SAXS) and small-angle neutron scattering (SANS) are the two complementary techniques known jointly as small-angle scattering (SAS). SAS is a method analogous to X-ray and neutron diffraction, wide-angle X-ray scattering, and static light scattering. In contrast to other X-ray and neutron scattering methods, SAS yields information on the sizes and shapes of both crystalline and non-crystalline particles. When used to study biological materials, which are very often in aqueous solution, the scattering pattern is orientation averaged. SAS patterns are collected at small angles of a few degrees. SAS is capable of delivering structural information in the resolution range between 1 and 25 nm, and of repeat distances in partially ordered systems of up to 150 nm in size. Ultra-small-angle scattering (USAS) can resolve even larger dimensions. Grazing-incidence small-angle scattering (GISAS) is a powerful technique for studying layers of biological molecules on surfaces. In biological applications, SAS is used to determine the structure of a particle in terms of average particle size and shape. One can also obtain information on the surface-to-volume ratio. Typically, the biological macromolecules are dispersed in a liquid. The method is accurate, mostly non-destructive, and usually requires only a minimum of sample preparation. However, biological molecules are always susceptible to radiation damage. In comparison to other structure determination methods, such as solution NMR or X-ray crystallography, SAS allows one to overcome some constraints. For example, solution NMR is limited by protein size, whereas SAS can be used for small molecules as well as for large multi-molecular assemblies. Solid-state NMR remains an indispensable tool for determining atomic-level information on macromolecules greater than 40 kDa or non-crystalline samples such as amyloid fibrils. Structure determination by X-ray crystallography may take several weeks or even years, whereas SAS measurements take days. SAS can also be coupled to other analytical techniques, such as size-exclusion chromatography, to study heterogeneous samples. However, with SAS it is not possible to measure the positions of the atoms within the molecule.
  • 1.2K
  • 09 Oct 2022
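The particle-size information mentioned in this entry is commonly extracted with a Guinier analysis, ln I(q) ≈ ln I(0) − q²Rg²/3 at small q. The sketch below fits that relation to synthetic data; it illustrates the standard approach rather than a procedure specified in the entry, and all numbers are invented.

```python
import numpy as np

# Synthetic small-angle data obeying the Guinier approximation,
# valid roughly for q * Rg < 1.3.
rg_true, i0_true = 2.5, 1000.0            # nm, arbitrary units
q = np.linspace(0.01, 1.3 / rg_true, 40)  # 1/nm, Guinier-valid range
rng = np.random.default_rng(0)
intensity = (i0_true * np.exp(-(q * rg_true) ** 2 / 3)
             * rng.normal(1.0, 0.01, q.size))   # 1% noise

# Linear least-squares fit of ln(I) versus q^2; the slope gives Rg.
slope, intercept = np.polyfit(q ** 2, np.log(intensity), 1)
rg_fit = np.sqrt(-3.0 * slope)
print(f"fitted Rg = {rg_fit:.2f} nm (true {rg_true} nm)")
```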
Topic Review
Naturalness
In physics, naturalness is the property that the dimensionless ratios between free parameters or physical constants appearing in a physical theory should take values "of order 1" and that free parameters are not fine-tuned. That is, a natural theory would have parameter ratios with values like 2.34 rather than 234000 or 0.000234. The requirement that satisfactory theories should be "natural" in this sense is a current of thought initiated around the 1960s in particle physics. It is a criterion that arises from the seeming non-naturalness of the Standard Model and the broader topics of the hierarchy problem, fine-tuning, and the anthropic principle. It does, however, suggest a possible area of weakness or future development for current theories such as the Standard Model, in which some parameters vary by many orders of magnitude and require extensive "fine-tuning" of their current values. The concern is that it is not yet clear whether these seemingly exact values have arisen by chance (based upon the anthropic principle or similar) or whether they arise from a more advanced theory, not yet developed, in which they turn out to be expected and well explained because of factors not yet part of particle physics models. The concept of naturalness is not always compatible with Occam's razor, since many instances of "natural" theories have more parameters than "fine-tuned" theories such as the Standard Model. Naturalness in physics is closely related to the issue of fine-tuning, and over the past decade many scientists have argued that the principle of naturalness is a specific application of Bayesian statistics. In the history of particle physics, the naturalness principle has given correct predictions three times: in the case of the electron self-energy, the pion mass difference, and the kaon mass difference.
  • 1.2K
  • 28 Oct 2022
Topic Review
Radiolabeled Gold Nanoseeds and Glioblastoma Multiforme
Glioblastoma multiforme (GBM), classified as a grade IV brain tumor, represents the most frequent brain tumor, accounting for approximately 12–15% of all intracranial neoplasms. Current therapeutic strategies for GBM rely on open surgery, chemotherapy and radiotherapy. Despite some progress in the past 30 years, the overall survival of patients with glioblastoma remains extremely poor. The average lifespan is approximately 15 months after diagnosis, with most patients experiencing tumor relapse and outgrowth within 7–10 months of initial radiation therapy.
  • 1.2K
  • 14 Jan 2022
Topic Review
Transient Reactor Test Facility (TREAT)
Coordinates: 43°41′11″N 112°45′36″W. The Transient Reactor Test Facility (TREAT) is an air-cooled, graphite-moderated, thermal-spectrum test nuclear reactor designed to test reactor fuels and structural materials. Constructed in 1958 and operated from 1959 until 1994, TREAT was built to conduct transient reactor tests in which the test material is subjected to neutron pulses that can simulate conditions ranging from mild transients to reactor accidents. TREAT was designed by Argonne National Laboratory and is located at the Idaho National Laboratory. Since its original construction, the facility has had additions or systems upgrades in 1963, 1972, 1982, and 1988. The 1988 addition was extensive and included upgrades of most of the instrumentation and control systems. The U.S. Department of Energy (DOE) decided to resume a program of transient testing, planning to invest about $75 million to restart the TREAT facility by 2018. The renewed interest in TREAT was sparked by the 2011 Fukushima Daiichi nuclear disaster, which prompted the shutdown of Japan's and Germany's nuclear plants. One planned use for TREAT is the testing of new accident-tolerant fuel for nuclear reactors. TREAT was successfully restarted in November 2017.
  • 1.1K
  • 13 Oct 2022
Topic Review
Synchrotron Light Source
A synchrotron light source is a source of electromagnetic radiation (EM) usually produced by a storage ring, for scientific and technical purposes. First observed in synchrotrons, synchrotron light is now produced by storage rings and other specialized particle accelerators, typically accelerating electrons. Once the high-energy electron beam has been generated, it is directed into auxiliary components such as bending magnets and insertion devices (undulators or wigglers) in storage rings and free-electron lasers. These supply the strong magnetic fields perpendicular to the beam that are needed to convert high-energy electrons into photons. The major applications of synchrotron light are in condensed matter physics, materials science, biology, and medicine. A large fraction of experiments using synchrotron light involve probing the structure of matter, from the sub-nanometer level of electronic structure to the micrometer and millimeter levels important in medical imaging. An example of a practical industrial application is the manufacturing of microstructures by the LIGA process.
  • 1.1K
  • 11 Oct 2022
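To give a sense of the photon energies a bending magnet produces, the sketch below evaluates the standard critical-energy rule of thumb, ε_c[keV] ≈ 0.665 · E²[GeV] · B[T]. The machine parameters are illustrative assumptions, not values from the entry.

```python
def critical_energy_keV(e_gev, b_tesla):
    """Bending-magnet critical photon energy (keV) for beam energy E
    in GeV and field B in tesla; half the radiated power is emitted
    above this energy."""
    return 0.665 * e_gev ** 2 * b_tesla

# Hypothetical 3 GeV storage ring with 1.4 T bending magnets.
print(f"critical energy: {critical_energy_keV(3.0, 1.4):.1f} keV")  # ~8.4
```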
Topic Review
Large Extra Dimension
In particle physics and string theory (M-theory), the ADD model, also known as the model with large extra dimensions (LED), is a model framework that attempts to solve the hierarchy problem (why is the force of gravity so weak compared to the electromagnetic force and the other fundamental forces?). The model tries to explain this problem by postulating that our universe, with its four dimensions (three spatial ones plus time), exists on a so-called membrane floating in 11-dimensional space. It is then suggested that the other forces of nature (the electromagnetic force, strong interaction, and weak interaction) operate within this membrane and its four dimensions, while gravity can operate across all 11 dimensions. This would explain why gravity is very weak compared to the other fundamental forces. This is a radical theory given that the other 7 dimensions, which we do not observe, had previously been assumed to be very small (about a Planck length), whereas this theory asserts that they might be very large. The model was proposed by Nima Arkani-Hamed, Savas Dimopoulos, and Gia Dvali in 1998. Attempts to test the theory involve smashing together two protons in the Large Hadron Collider so that they disperse and release elementary particles. If a postulated graviton appeared after a collision and then disappeared, its observed disappearance would suggest that the graviton had escaped into dimensions beyond our universe's observable four. No experiments from the Large Hadron Collider have been decisive thus far. However, the operating range of the LHC (13 TeV collision energy) covers only a small part of the predicted range in which evidence for LED would be recorded (a few TeV to 10¹⁶ TeV). This suggests that the theory might be more thoroughly tested with more advanced technology.
  • 1.1K
  • 01 Dec 2022
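The hierarchy argument in this entry can be quantified with the standard ADD relation M_Pl² ~ M*^(n+2) R^n, where n is the number of large extra dimensions, M* is the fundamental scale, and R the compactification radius. The sketch below drops all O(1) factors and uses an assumed ~1 TeV fundamental scale; it is purely illustrative.

```python
HBAR_C_M_GEV = 1.973269804e-16   # hbar*c in m*GeV (natural-unit conversion)
M_PLANCK_GEV = 1.22e19           # Planck scale (GeV)

def extra_dim_size_m(m_star_gev, n):
    """R ~ (1/M*) * (M_Pl / M*)**(2/n), ignoring O(1) prefactors."""
    r_natural = (M_PLANCK_GEV / m_star_gev) ** (2.0 / n) / m_star_gev
    return r_natural * HBAR_C_M_GEV

# For an assumed ~1 TeV fundamental scale:
for n in (1, 2, 3):
    print(f"n={n}: R ~ {extra_dim_size_m(1000.0, n):.2e} m")
# n=1 gives an astronomical size (excluded by observation);
# n=2 gives a sub-millimetre scale, the classic ADD benchmark.
```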
Topic Review
April 18: Albert Einstein Passed Away
On April 18, 1955, the world lost one of its greatest intellectual giants. Albert Einstein, the Nobel Prize-winning physicist whose theories reshaped modern science, passed away in Princeton, New Jersey, at the age of 76. His death marked not only the conclusion of an extraordinary life but also a turning point in 20th-century scientific history. Though his voice was silenced, his ideas continue to resonate through the cosmos.
  • 1.1K
  • 17 Apr 2025
Topic Review
Osmium-188
Osmium (₇₆Os) has seven naturally occurring isotopes, five of which are stable: ¹⁸⁷Os, ¹⁸⁸Os, ¹⁸⁹Os, ¹⁹⁰Os, and (most abundant) ¹⁹²Os. The other natural isotopes, ¹⁸⁴Os and ¹⁸⁶Os, have extremely long half-lives (1.12×10¹³ years and 2×10¹⁵ years, respectively) and for practical purposes can be considered stable as well. ¹⁸⁷Os is the daughter of ¹⁸⁷Re (half-life 4.56×10¹⁰ years) and is most often measured as an ¹⁸⁷Os/¹⁸⁸Os ratio. This ratio, as well as the ¹⁸⁷Re/¹⁸⁸Os ratio, has been used extensively in dating terrestrial as well as meteoric rocks. It has also been used to measure the intensity of continental weathering over geologic time and to fix minimum ages for the stabilization of the mantle roots of continental cratons. However, the most notable application of Os in dating has been in conjunction with iridium, to analyze the layer of shocked quartz along the Cretaceous–Paleogene boundary that marks the extinction of the dinosaurs 66 million years ago. There are also 30 artificial radioisotopes, the longest-lived of which is ¹⁹⁴Os with a half-life of six years; all others have half-lives under 94 days. There are also nine known nuclear isomers, the longest-lived of which is ¹⁹¹ᵐOs with a half-life of 13.10 hours. All isotopes and nuclear isomers of osmium are either radioactive or observationally stable, meaning that they are predicted to be radioactive but no actual decay has been observed.
  • 1.1K
  • 30 Sep 2022
Biography
Karl Wirtz
Karl Eugen Julius Wirtz (24 April 1910 – 12 February 1994) was a German nuclear physicist, born in Cologne. He was arrested by the Allied British and American armed forces and incarcerated at Farm Hall for six months in 1945 under Operation Epsilon. From 1929 to 1934, Wirtz studied physics, chemistry, and mathematics at the University of Bonn, the Albert Ludwigs University of Freiburg, and
  • 1.1K
  • 08 Dec 2022
Topic Review
Types of Compton Cameras
A Compton camera is a promising γ-ray detector that operates over a wide energy range, from a few tens of keV to several MeV. The γ-ray detection method of a Compton camera is based on Compton scattering kinematics, which is used to determine the direction and energy of the γ-rays without using a mechanical collimator. Although the Compton camera was originally designed for astrophysical applications, it was later applied in medical imaging as well. Moreover, its application in environmental radiation measurements is also under study.
  • 1.1K
  • 18 Oct 2022
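The kinematic reconstruction this entry describes follows from the Compton formula, cos θ = 1 − mₑc²(1/E′ − 1/E₀). Below is a minimal Python sketch recovering the scattering angle from the two deposited energies; the example energies are invented, and a real camera would intersect many such event cones to form an image.

```python
import math

ME_C2_KEV = 511.0   # electron rest energy (keV)

def compton_angle_deg(e_scatter_keV, e_absorb_keV):
    """Scattering angle from energies deposited in the scatterer (E1)
    and absorber (E2): E0 = E1 + E2, E' = E2,
    cos(theta) = 1 - me*c^2 * (1/E' - 1/E0)."""
    e0 = e_scatter_keV + e_absorb_keV       # incident gamma energy
    cos_theta = 1.0 - ME_C2_KEV * (1.0 / e_absorb_keV - 1.0 / e0)
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("energies inconsistent with Compton kinematics")
    return math.degrees(math.acos(cos_theta))

# Hypothetical event: 200 keV deposited in the scatterer, 462 keV in
# the absorber (a 662 keV Cs-137 line); the source direction lies on
# a cone with this half-angle.
print(f"{compton_angle_deg(200.0, 462.0):.1f} deg")  # ~48.3
```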