Topic Review
Denormal Number
In computer science, subnormal numbers are the subset of denormalized numbers (sometimes called denormals) that fill the underflow gap around zero in floating-point arithmetic. Any non-zero number with magnitude smaller than the smallest normal number is subnormal. In a normal floating-point value, there are no leading zeros in the significand or mantissa; rather, leading zeros are removed by adjusting the exponent (for example, the number 0.0123 would be written as 1.23 × 10⁻²). Conversely, a denormalized floating-point value has a significand with a leading digit of zero. Of these, the subnormal numbers represent values which, if normalized, would have exponents below the smallest representable exponent (the exponent having a limited range). The significand (or mantissa) of an IEEE floating-point number is the part of a floating-point number that represents the significant digits. For a positive normalised number it can be represented as [math]\displaystyle{ m_0.m_1 m_2 m_3 \ldots m_{p-2} m_{p-1} }[/math] (where m represents a significant digit and p is the precision) with non-zero m₀. Notice that for a binary radix, the leading binary digit is always 1. In a subnormal number, since the exponent is the least that it can be, zero is the leading significant digit ([math]\displaystyle{ 0.m_1 m_2 m_3 \ldots m_{p-2} m_{p-1} }[/math]), allowing the representation of numbers closer to zero than the smallest normal number. A floating-point number may be recognized as subnormal whenever its exponent is the least value possible. By filling the underflow gap like this, significant digits are lost, but not as abruptly as with the flush-to-zero-on-underflow approach (discarding all significant digits when underflow is reached). Hence the production of a subnormal number is sometimes called gradual underflow, because it allows a calculation to lose precision slowly when the result is small. In IEEE 754-2008, denormal numbers are renamed subnormal numbers and are supported in both binary and decimal formats. In binary interchange formats, subnormal numbers are encoded with a biased exponent of 0, but are interpreted with the value of the smallest allowed exponent, which is one greater (i.e., as if it were encoded as a 1). In decimal interchange formats they require no special encoding because the format supports unnormalized numbers directly. Mathematically speaking, the normalized floating-point numbers of a given sign are roughly logarithmically spaced, and as such any finite-sized normal float cannot include zero. The subnormal floats are a linearly spaced set of values which span the gap between the negative and positive normal floats.
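A minimal Python sketch of this gradual underflow, assuming the platform uses IEEE 754 binary64 doubles and does not flush subnormals to zero:

```python
import sys

# Smallest positive *normal* double (IEEE 754 binary64): about 2.2e-308.
smallest_normal = sys.float_info.min

# Dividing it further does not flush to zero immediately; instead the result
# becomes subnormal, losing significant bits gradually (gradual underflow).
x = smallest_normal
for _ in range(5):
    x /= 2.0
    print(x, "subnormal" if 0.0 < x < smallest_normal else "normal")

# The smallest positive subnormal double is 2**-1074 (about 5e-324);
# halving it once more finally underflows to exactly 0.0.
tiny = 2.0 ** -1074
print(tiny, tiny / 2.0)   # 5e-324  0.0
print(tiny == 0.0)        # False: subnormals fill the gap around zero
```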
  • 1.9K
  • 08 Nov 2022
Topic Review
Propagation Graph
Propagation graphs are a mathematical modelling method for radio propagation channels. A propagation graph is a signal flow graph in which vertices represent transmitters, receivers or scatterers, and edges model the propagation conditions between vertices. Propagation graph models were initially developed for multipath propagation in scenarios with multiple scattering, such as indoor radio propagation, and have since been applied in many other scenarios.
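As a rough numerical illustration of the idea (a toy sketch, not the model of any particular paper), the NumPy snippet below builds a graph with one transmitter, one receiver and two scatterers and sums the contributions of all multi-bounce paths as the matrix geometric series D + R(I − B)⁻¹T; all edge gains are arbitrary placeholder values:

```python
import numpy as np

# Toy propagation graph: 1 transmitter, 1 receiver, 2 scatterers.
# Edge weights are complex gains (amplitude and phase); the values below are
# arbitrary placeholders, not measured or calibrated parameters.
D = np.array([[0.10 + 0.05j]])                      # direct Tx -> Rx edge
T = np.array([[0.30 + 0.10j], [0.20 - 0.20j]])      # Tx -> scatterer edges (2x1)
R = np.array([[0.25 - 0.10j, 0.15 + 0.20j]])        # scatterer -> Rx edges (1x2)
B = np.array([[0.0,          0.30 + 0.10j],         # scatterer -> scatterer edges
              [0.20 - 0.10j, 0.0         ]])

# Summing every multi-bounce path R @ B^k @ T for k = 0, 1, 2, ... gives a
# geometric series; it converges when the spectral radius of B is < 1.
H = D + R @ np.linalg.inv(np.eye(2) - B) @ T
print("channel transfer coefficient:", H[0, 0])
```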
  • 1.9K
  • 21 Oct 2022
Topic Review
AI: When a Robot Writes a Play
AI: When a Robot Writes a Play (in Czech: AI: Když robot píše hru) is an experimental theatre play, 90% of whose script was automatically generated by artificial intelligence (the GPT-2 language model). The play is in Czech, but an English version of the script also exists.
  • 1.9K
  • 23 Nov 2022
Topic Review
Adder (Electronics)
An adder, or summer, is a digital circuit that performs addition of numbers. In many computers and other kinds of processors adders are used in the arithmetic logic units (ALUs). They are also used in other parts of the processor, where they are used to calculate addresses, table indices, increment and decrement operators and similar operations. Although adders can be constructed for many number representations, such as binary-coded decimal or excess-3, the most common adders operate on binary numbers. In cases where two's complement or ones' complement is being used to represent negative numbers, it is trivial to modify an adder into an adder–subtractor. Other signed number representations require more logic around the basic adder.
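A minimal Python sketch of the idea, chaining one-bit full adders into a ripple-carry adder (an illustration of the logic, not a description of any specific hardware design):

```python
def full_adder(a, b, carry_in):
    """One-bit full adder built from XOR/AND/OR gates."""
    s = a ^ b ^ carry_in                          # sum bit
    carry_out = (a & b) | (carry_in & (a ^ b))    # carry bit
    return s, carry_out

def ripple_carry_add(x, y, width=8):
    """Add two unsigned integers by chaining full adders, LSB first."""
    carry, result = 0, 0
    for i in range(width):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result, carry      # carry out signals unsigned overflow

print(ripple_carry_add(0b1011, 0b0110))   # (17, 0) since 11 + 6 = 0b10001
```

With two's complement operands, the same structure subtracts by inverting the bits of one input and setting the initial carry to 1, which is why the adder–subtractor mentioned above is only a small extension of the basic adder.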
  • 1.9K
  • 24 Nov 2022
Topic Review
IBM POWER Microprocessors
IBM has a series of high-performance microprocessors called POWER, followed by a number designating the generation, i.e. POWER1, POWER2, POWER3 and so forth up to the latest POWER9. These processors have been used by IBM in their RS/6000, AS/400, pSeries, iSeries, System p, System i and Power Systems lines of servers and supercomputers. They have also been used in data storage devices by IBM and by other server manufacturers such as Bull and Hitachi. The name "POWER" was originally presented as an acronym for "Performance Optimization With Enhanced RISC". The POWERn family of processors was developed in the late 1980s and is still in active development nearly 30 years later. In the beginning they used the POWER instruction set architecture (ISA), which evolved into PowerPC in later generations and then into the Power Architecture, so modern POWER processors do not use the POWER ISA; they use the Power ISA.
  • 1.9K
  • 23 Nov 2022
Topic Review
IBM Optical Mark and Character Readers
IBM designed, manufactured and sold optical mark and character readers from 1960 until 1984. The IBM 1287 is notable as being the first commercially sold scanner capable of reading handwritten numbers.
  • 1.9K
  • 17 Nov 2022
Topic Review
Big Data
Big data has become a frequent research topic due to the increase in data availability. Here we make the link between the use of big data and Econophysics, a research field which uses large amounts of data and deals with complex systems.
  • 1.9K
  • 25 Dec 2021
Topic Review
Joint Commission
The Joint Commission is a United States-based nonprofit tax-exempt 501(c) organization that accredits more than 21,000 US health care organizations and programs. The international branch accredits medical services from around the world. A majority of US state governments recognize Joint Commission accreditation as a condition of licensure for the receipt of Medicaid and Medicare reimbursements. The Joint Commission is based in the Chicago suburb of Oakbrook Terrace, Illinois.
  • 1.9K
  • 08 Oct 2022
Topic Review
Economic Impact Analysis
An economic impact analysis (EIA) examines the effect of an economic event on the economy in a specified area, ranging from a single neighborhood to the entire globe; the study region can be a neighborhood, town, city, county, statistical area, state, country, or continent. It usually measures changes in business revenue, business profits, personal wages, and/or jobs. The event analyzed can be the implementation of a new policy or project, or may simply be the presence of a business or organization, and the analysis is commonly conducted when there is public concern about the potential impacts of a proposed project or policy. An economic impact analysis typically measures or estimates the change in economic activity between two scenarios: one assuming the economic event occurs and one assuming it does not (the counterfactual case). This can be done either before or after the event (ex ante or ex post).
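As a toy sketch of the two-scenario comparison, with invented placeholder figures:

```python
# Hypothetical figures for a region, in millions of dollars and in jobs.
counterfactual = {"business_revenue": 120.0, "personal_wages": 45.0, "jobs": 900}
with_project   = {"business_revenue": 138.0, "personal_wages": 51.5, "jobs": 1010}

# The estimated economic impact is the difference between the two scenarios.
impact = {k: with_project[k] - counterfactual[k] for k in counterfactual}
print(impact)   # {'business_revenue': 18.0, 'personal_wages': 6.5, 'jobs': 110}
```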
  • 1.9K
  • 18 Oct 2022
Topic Review
Discipline (Academia)
An academic discipline or academic field is a subdivision of knowledge that is taught and researched at the college or university level. Disciplines are defined (in part) and recognized by the academic journals in which research is published, and by the learned societies and academic departments or faculties within colleges and universities to which their practitioners belong. The term covers languages, the arts and cultural studies, and the scientific disciplines. A discipline incorporates expertise, people, projects, communities, challenges, studies, inquiry, research areas, and facilities that are strongly associated with a given scholastic subject area or college department. For example, the branches of science are commonly referred to as the scientific disciplines, e.g. physics, chemistry, and biology. Individuals associated with academic disciplines are commonly referred to as experts or specialists. Others, who may have studied liberal arts or systems theory rather than concentrating in a specific academic discipline, are classified as generalists. While academic disciplines in and of themselves are more or less focused practices, scholarly approaches such as multidisciplinarity/interdisciplinarity, transdisciplinarity, and cross-disciplinarity integrate aspects of multiple academic disciplines, thereby addressing problems that may arise from narrow concentration within specialized fields of study. For example, professionals may encounter trouble communicating across academic disciplines because of differences in language, specialized concepts, or methodology. Some researchers believe that academic disciplines may, in the future, be replaced by what is known as Mode 2 or "post-academic science", which involves the acquisition of cross-disciplinary knowledge through the collaboration of specialists from various academic disciplines.
  • 1.8K
  • 24 Nov 2022
Topic Review
Qiskit
Qiskit is an open-source software development kit (SDK) for working with quantum computers at the level of circuits, pulses, and algorithms. It provides tools for creating and manipulating quantum programs and running them on prototype quantum devices on IBM Quantum Experience or on simulators on a local computer. It follows the circuit model for universal quantum computation, and can be used for any quantum hardware that follows this model (currently superconducting qubits and trapped ions). Qiskit was founded by IBM Research to allow software development for their cloud quantum computing service, IBM Quantum Experience. Contributions are also made by external supporters, typically from academic institutions. The primary version of Qiskit uses the Python programming language. Versions for Swift and JavaScript were initially explored, though development of these versions has halted. Instead, a minimal re-implementation of basic features is available as MicroQiskit, which is made to be easy to port to alternative platforms. A range of Jupyter notebooks is provided with examples of quantum computing being used. Examples include the source code behind scientific studies that use Qiskit, as well as a set of exercises to help people learn the basics of quantum programming. An open-source textbook based on Qiskit is available as a supplement for university-level courses on quantum algorithms or quantum computation.
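A minimal sketch of the circuit-level workflow (API details vary between Qiskit versions; this assumes a release where qiskit.quantum_info.Statevector is available), building a two-qubit Bell-state circuit and simulating it locally without any hardware access:

```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

# Build a two-qubit circuit preparing the Bell state (|00> + |11>) / sqrt(2).
qc = QuantumCircuit(2)
qc.h(0)        # Hadamard puts qubit 0 into superposition
qc.cx(0, 1)    # CNOT entangles qubit 1 with qubit 0

print(qc.draw())                          # ASCII circuit diagram
state = Statevector.from_instruction(qc)  # ideal statevector, simulated locally
print(state.probabilities_dict())         # {'00': 0.5, '11': 0.5}
```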
  • 1.8K
  • 31 Oct 2022
Topic Review
Windows 10 Version History (Version 2004)
The Windows 10 May 2020 Update (also known as version 2004 and codenamed "20H1") is the ninth major update to Windows 10. It carries the build number 10.0.19041.
  • 1.8K
  • 17 Oct 2022
Topic Review
Schwarz Triangle Function
In complex analysis, the Schwarz triangle function or Schwarz s-function is a function that conformally maps the upper half plane to a triangle in the upper half plane having lines or circular arcs for edges. Let πα, πβ, and πγ be the interior angles at the vertices of the triangle. If any of α, β, and γ are greater than zero, then the Schwarz triangle function can be given in terms of hypergeometric functions as [math]\displaystyle{ s(z) = z^{\alpha}\,\frac{{}_2F_1\left(a', b'; c'; z\right)}{{}_2F_1\left(a, b; c; z\right)} }[/math] where a = (1−α−β−γ)/2, b = (1−α+β−γ)/2, c = 1−α, a′ = a − c + 1 = (1+α−β−γ)/2, b′ = b − c + 1 = (1+α+β−γ)/2, and c′ = 2 − c = 1 + α. This mapping has singular points at z = 0, 1, and ∞, corresponding to the vertices of the triangle with angles πα, πγ, and πβ respectively. The formula can be derived using the Schwarzian derivative. This function can be used to map the upper half-plane to a spherical triangle on the Riemann sphere if α + β + γ > 1, or to a hyperbolic triangle on the Poincaré disk if α + β + γ < 1. When α + β + γ = 1, the triangle is a Euclidean triangle with straight edges: a = 0, [math]\displaystyle{ _2 F_1 \left(a, b; c; z\right) = 1 }[/math], and the formula reduces to that given by the Schwarz–Christoffel transformation. In the special case of ideal triangles, where all the angles are zero, the triangle function yields the modular lambda function. This function was introduced by H. A. Schwarz as the inverse function of the conformal mapping uniformizing a Schwarz triangle. Applying successive hyperbolic reflections in its sides, such a triangle generates a tessellation of the upper half plane (or of the unit disk after composition with the Cayley transform). The conformal mapping of the upper half plane onto the interior of the geodesic triangle generalizes the Schwarz–Christoffel transformation. By the Schwarz reflection principle, the discrete group generated by hyperbolic reflections in the sides of the triangle induces an action on the two-dimensional space of solutions. On the orientation-preserving normal subgroup, this two-dimensional representation corresponds to the monodromy of the ordinary differential equation and induces a group of Möbius transformations on quotients of solutions. Since the triangle function is the inverse function of such a quotient, it is therefore an automorphic function for this discrete group of Möbius transformations. This is a special case of a general method of Henri Poincaré that associates automorphic forms with ordinary differential equations having regular singular points.
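As a small numerical sketch of the hypergeometric expression above, using SciPy's hypergeometric function with arbitrarily chosen angles (a hyperbolic case, α + β + γ < 1; the values are for illustration only):

```python
import numpy as np
from scipy.special import hyp2f1

# Triangle angles pi*alpha, pi*beta, pi*gamma; here alpha + beta + gamma < 1,
# so the image is a hyperbolic triangle (values chosen only for illustration).
alpha, beta, gamma = 0.5, 0.25, 0.125

a  = (1 - alpha - beta - gamma) / 2
b  = (1 - alpha + beta - gamma) / 2
c  = 1 - alpha
ap = a - c + 1          # a' = (1 + alpha - beta - gamma) / 2
bp = b - c + 1          # b' = (1 + alpha + beta - gamma) / 2
cp = 2 - c              # c' = 1 + alpha

def schwarz_s(z):
    """s(z) = z^alpha * 2F1(a',b';c';z) / 2F1(a,b;c;z) for real 0 < z < 1."""
    return z**alpha * hyp2f1(ap, bp, cp, z) / hyp2f1(a, b, c, z)

for z in (0.1, 0.5, 0.9):
    print(z, schwarz_s(z))
```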
  • 1.8K
  • 13 Oct 2022
Topic Review
Berlekamp's Root Finding Algorithm
In number theory, Berlekamp's root finding algorithm, also called the Berlekamp–Rabin algorithm, is a probabilistic method for finding roots of polynomials over the field [math]\displaystyle{ \mathbb Z_p }[/math]. The method was discovered by Elwyn Berlekamp in 1970 as an auxiliary step in his algorithm for polynomial factorization over finite fields. It was later modified by Rabin for arbitrary finite fields in 1979. The method was also independently discovered before Berlekamp by other researchers.
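A minimal sketch of the idea for the special case of square roots modulo an odd prime p, i.e. the roots of f(x) = x² − a over Z_p: raise a randomly shifted x + δ to the power (p − 1)/2 in the quotient ring Z_p[x]/(f) and read a root off the resulting linear factor. Names and structure here are choices of this sketch, not a canonical implementation:

```python
import random

def sqrt_mod_p(a, p):
    """Find r with r*r = a (mod p) for an odd prime p, via Berlekamp–Rabin-style
    random splitting of f(x) = x^2 - a over Z_p. Returns None if a is a
    non-residue. Illustrative sketch, not a hardened implementation."""
    a %= p
    if a == 0:
        return 0
    if pow(a, (p - 1) // 2, p) != 1:      # Euler's criterion: no square root
        return None
    while True:
        delta = random.randrange(p)
        # Compute (x + delta)^((p-1)/2) in Z_p[x]/(x^2 - a).
        # Elements are pairs (c0, c1) meaning c0 + c1*x, with x^2 reduced to a.
        def mul(u, v):
            u0, u1 = u
            v0, v1 = v
            return ((u0 * v0 + u1 * v1 * a) % p, (u0 * v1 + u1 * v0) % p)
        result, base, e = (1, 0), (delta, 1), (p - 1) // 2
        while e:
            if e & 1:
                result = mul(result, base)
            base = mul(base, base)
            e >>= 1
        g0, g1 = result[0] - 1, result[1]   # coefficients of g(x) - 1
        if g1 == 0:
            continue                        # splitting failed; retry with new delta
        # gcd(x^2 - a, g1*x + g0) is at most linear: test its root against f.
        r = (-g0 * pow(g1, -1, p)) % p
        if (r * r) % p == a:
            return r

print(sqrt_mod_p(10, 13))   # 6 or 7, since 6*6 = 36 = 10 (mod 13)
```

Each random shift δ splits the two roots with probability roughly 1/2, so only a few iterations are expected before a root is found.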
  • 1.8K
  • 02 Nov 2022
Topic Review
Emotion Recognition
Emotion recognition is the process of identifying human emotion, most typically from facial expressions as well as from verbal expressions. Humans do this largely automatically, but computational methodologies have also been developed.
  • 1.8K
  • 10 Oct 2022
Topic Review
3D Interaction
In computing, 3D interaction is a form of human-machine interaction where users are able to move and perform interaction in 3D space. Both human and machine process information where the physical position of elements in the 3D space is relevant. The 3D space used for interaction can be the real physical space, a virtual space representation simulated in the computer, or a combination of both. When the real space is used for data input, humans perform actions or give commands to the machine using an input device that detects the 3D position of the human action. When it is used for data output, the simulated 3D virtual scene is projected onto the real environment through one output device or a combination of them.
  • 1.8K
  • 29 Nov 2022
Topic Review
XCore Architecture
The XCore Architecture is a 32-bit RISC microprocessor architecture designed by XMOS. The architecture is designed to be used in multi-core processors for embedded systems. Each XCore executes up to eight concurrent threads, each thread having its own register set, and the architecture directly supports inter-thread and inter-core communication and various forms of thread scheduling. Two versions of the XCore architecture exist: the XS1 architecture and the XS2 architecture. Processors with the XS1 architecture include the XCore XS1-G4 and XCore XS1-L1. Processors with the XS2 architecture include xCORE-200. The architecture encodes instructions compactly, using 16 bits for frequently used instructions (with up to three operands) and 32 bits for less frequently used instructions (with up to 6 operands). Almost all instructions execute in a single cycle, and the architecture is event-driven in order to decouple the timings that a program needs to make from the execution speed of the program. A program will normally perform its computations and then wait for an event (e.g. a message, time, or external I/O event) before continuing.
  • 1.8K
  • 24 Nov 2022
Topic Review
Geologic Modelling
Geologic modelling, geological modelling or geomodelling is the applied science of creating computerized representations of portions of the Earth's crust based on geophysical and geological observations made on and below the Earth's surface. A geomodel is the numerical equivalent of a three-dimensional geological map complemented by a description of physical quantities in the domain of interest. Geomodelling is related to the concept of a Shared Earth Model, which is a multidisciplinary, interoperable and updatable knowledge base about the subsurface. Geomodelling is commonly used for managing natural resources, identifying natural hazards, and quantifying geological processes, with main applications to oil and gas fields, groundwater aquifers and ore deposits. For example, in the oil and gas industry, realistic geologic models are required as input to reservoir simulator programs, which predict the behavior of the rocks under various hydrocarbon recovery scenarios. A reservoir can only be developed and produced once; therefore, making a mistake by selecting a site with poor conditions for development is tragic and wasteful. Using geological models and reservoir simulation allows reservoir engineers to identify which recovery options offer the safest and most economic, efficient, and effective development plan for a particular reservoir. Geologic modelling is a relatively recent subdiscipline of geology which integrates structural geology, sedimentology, stratigraphy, paleoclimatology, and diagenesis. In two dimensions (2D), a geologic formation or unit is represented by a polygon, which can be bounded by faults, unconformities or by its lateral extent, or crop. In geological models a geological unit is bounded by three-dimensional (3D) triangulated or gridded surfaces. The equivalent of the mapped polygon is the fully enclosed geological unit, defined by a triangulated mesh. For the purpose of property or fluid modelling these volumes can be subdivided further into an array of cells, often referred to as voxels (volumetric elements). These 3D grids are the equivalent of the 2D grids used to express properties of single surfaces. Geomodelling generally proceeds through a sequence of standard steps.
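As a toy sketch of the voxel idea, the NumPy snippet below fills a property grid with an arbitrary two-unit porosity model; all dimensions and values are placeholders:

```python
import numpy as np

# Toy voxel grid: 40 x 30 x 10 cells (x, y, z), each carrying a porosity value.
# Dimensions and the two-layer property model below are arbitrary placeholders.
nx, ny, nz = 40, 30, 10
porosity = np.empty((nx, ny, nz))

porosity[:, :, :5] = 0.22   # upper unit: higher-porosity rock
porosity[:, :, 5:] = 0.08   # lower unit: tighter rock

# Per-unit summaries of the kind passed on to a flow/reservoir simulator.
print("cells:", porosity.size)
print("mean porosity, upper unit:", porosity[:, :, :5].mean())
print("mean porosity, lower unit:", porosity[:, :, 5:].mean())
```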
  • 1.8K
  • 06 Oct 2022
Topic Review
Group Algebra
In functional analysis and related areas of mathematics, group algebras are constructions that generalize the concept of group ring to some classes of topological groups with the aim to reduce the theory of representations of topological groups to the theory of representations of topological algebras. There are several nonequivalent definitions of group algebra, each of which is considered convenient in a particular situation.
  • 1.8K
  • 02 Nov 2022
Topic Review
Converse (Logic)
In logic and mathematics, the converse of a categorical or implicational statement is the result of reversing its two constituent statements. For the implication P → Q, the converse is Q → P. For the categorical proposition All S are P, the converse is All P are S. Either way, the truth of the converse is generally independent of that of the original statement.
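A small truth-table sketch showing that an implication and its converse can take different truth values:

```python
def implies(p, q):
    """Material implication: p -> q is false only when p is true and q is false."""
    return (not p) or q

print("P     Q     P->Q  Q->P")
for p in (False, True):
    for q in (False, True):
        print(f"{p!s:<6}{q!s:<6}{implies(p, q)!s:<6}{implies(q, p)!s}")
# The rows with P=False, Q=True and P=True, Q=False show the implication and
# its converse disagreeing, so neither statement entails the other.
```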
  • 1.8K
  • 27 Oct 2022