Topic Review
Comparison of On-demand Music Streaming Services
The following is a list of on-demand music streaming services. These services offer streaming of full-length content via the Internet as part of their service, without the listener necessarily purchasing a file for download. This type of service is comparable to Internet radio. Many of these sites carry advertising and offer paid options in the style of an online music store. For a list of online music stores that provide a means of purchasing and downloading music as files, see: Comparison of online music stores. Sites of both types often offer services similar to an online music database.
  • 1.9K
  • 17 Oct 2022
Topic Review
Training, Validation, and Test Sets
In machine learning, a common task is the study and construction of algorithms that can learn from and make predictions on data. Such algorithms function by making data-driven predictions or decisions through building a mathematical model from input data. The input data used to build the model are usually divided into multiple data sets. In particular, three data sets are commonly used in different stages of the creation of the model: the training, validation, and test sets.

The model is initially fit on a training data set, a set of examples used to fit the parameters (e.g. the weights of connections between neurons in artificial neural networks) of the model. The model (e.g. a naive Bayes classifier) is trained on the training data set using a supervised learning method, for example an optimization method such as gradient descent or stochastic gradient descent. In practice, the training data set often consists of pairs of an input vector (or scalar) and the corresponding output vector (or scalar), where the answer key is commonly denoted as the target (or label). The current model is run with the training data set and produces a result, which is then compared with the target for each input vector in the training data set. Based on the result of the comparison and the specific learning algorithm being used, the parameters of the model are adjusted. The model fitting can include both variable selection and parameter estimation.

Subsequently, the fitted model is used to predict the responses for the observations in a second data set called the validation data set. The validation data set provides an unbiased evaluation of a model fit on the training data set while tuning the model's hyperparameters (e.g. the number of hidden units in a neural network, that is, the layers and layer widths).
Validation data sets can be used for regularization by early stopping: halting training when the error on the validation data set increases, as this is a sign of over-fitting to the training data set. This simple procedure is complicated in practice by the fact that the validation error may fluctuate during training, producing multiple local minima. This complication has led to the creation of many ad hoc rules for deciding when over-fitting has truly begun. Finally, the test data set is a data set used to provide an unbiased evaluation of a final model fit on the training data set. If the data in the test data set has never been used in training (as in cross-validation, for example), the test data set is also called a holdout data set. Some literature uses the term "validation set" instead of "test set" (e.g., if the original data set was partitioned into only two subsets, the test set might be referred to as the validation set). The sizes of the sets and the strategy for dividing data into training, validation, and test sets depend strongly on the problem and the data available.
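As a concrete illustration, the three-way division described above can be sketched in a few lines of Python. The 60/20/20 proportions and the helper name are arbitrary choices, not part of any standard:

```python
import random

def train_val_test_split(data, val_frac=0.2, test_frac=0.2, seed=0):
    """Shuffle a dataset and partition it into training, validation, and test sets."""
    indices = list(range(len(data)))
    random.Random(seed).shuffle(indices)           # deterministic shuffle
    n_test = int(len(data) * test_frac)
    n_val = int(len(data) * val_frac)
    test = [data[i] for i in indices[:n_test]]
    val = [data[i] for i in indices[n_test:n_test + n_val]]
    train = [data[i] for i in indices[n_test + n_val:]]
    return train, val, test

train, val, test = train_val_test_split(list(range(100)))
print(len(train), len(val), len(test))  # 60 20 20
```

A model would then be fit on `train`, its hyperparameters tuned against `val`, and its final performance reported once on `test`.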
  • 1.9K
  • 17 Oct 2022
Topic Review
Denormal Number
In computer science, subnormal numbers are the subset of denormalized numbers (sometimes called denormals) that fill the underflow gap around zero in floating-point arithmetic. Any non-zero number with magnitude smaller than the smallest normal number is subnormal. In a normal floating-point value, there are no leading zeros in the significand or mantissa; rather, leading zeros are removed by adjusting the exponent (for example, the number 0.0123 would be written as 1.23 × 10⁻²). Conversely, a denormalized floating-point value has a significand with a leading digit of zero. Of these, the subnormal numbers represent values which, if normalized, would have exponents below the smallest representable exponent (the exponent having a limited range). The significand (or mantissa) of an IEEE floating-point number is the part of a floating-point number that represents the significant digits. For a positive normalised number it can be represented as m₀.m₁m₂m₃…mₚ₋₂mₚ₋₁ (where m represents a significant digit and p is the precision) with non-zero m₀. Notice that for a binary radix, the leading binary digit is always 1. In a subnormal number, since the exponent is the least that it can be, zero is the leading significant digit (0.m₁m₂m₃…mₚ₋₂mₚ₋₁), allowing the representation of numbers closer to zero than the smallest normal number. A floating-point number may be recognized as subnormal whenever its exponent is the least value possible. By filling the underflow gap like this, significant digits are lost, but not as abruptly as with the flush-to-zero-on-underflow approach (discarding all significant digits when underflow is reached). Hence the production of a subnormal number is sometimes called gradual underflow, because it allows a calculation to lose precision slowly when the result is small. In IEEE 754-2008, denormal numbers are renamed subnormal numbers and are supported in both binary and decimal formats. 
In binary interchange formats, subnormal numbers are encoded with a biased exponent of 0, but are interpreted with the value of the smallest allowed exponent, which is one greater (i.e., as if it were encoded as a 1). In decimal interchange formats they require no special encoding because the format supports unnormalized numbers directly. Mathematically speaking, the normalized floating-point numbers of a given sign are roughly logarithmically spaced, and as such any finite-sized normal float cannot include zero. The subnormal floats are a linearly spaced set of values, which span the gap between the negative and positive normal floats.
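Gradual underflow can be observed directly in Python, whose floats are IEEE 754 binary64 values; a minimal sketch:

```python
import sys

smallest_normal = sys.float_info.min   # 2**-1022 for IEEE 754 binary64
x = smallest_normal / 2                # subnormal: below the smallest normal number
print(x > 0)                           # True: gradual underflow keeps the value nonzero
print(x == 2**-1023)                   # True: the result is still exact here

# The smallest positive subnormal double is 2**-1074 (written 5e-324);
# halving it finally underflows all the way to zero.
smallest_subnormal = 5e-324
print(smallest_subnormal / 2 == 0.0)   # True: no smaller positive value exists
```

With flush-to-zero semantics the first division would already have produced 0.0; subnormals postpone that loss until the value falls below 2⁻¹⁰⁷⁴.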
  • 1.9K
  • 08 Nov 2022
Topic Review
Operad Theory
Operad theory is a field of mathematics concerned with prototypical algebras that model properties such as commutativity or anticommutativity as well as various amounts of associativity. Operads generalize the various associativity properties already observed in algebras and coalgebras such as Lie algebras or Poisson algebras by modeling computational trees within the algebra. Algebras are to operads as group representations are to groups. An operad can be seen as a set of operations, each one having a fixed finite number of inputs (arguments) and one output, which can be composed one with others. They form a category-theoretic analog of universal algebra. Operads originate in algebraic topology from the study of iterated loop spaces by J. Michael Boardman and Rainer M. Vogt, and J. Peter May. The word "operad" was created by May as a portmanteau of "operations" and "monad" (and also because his mother was an opera singer). Interest in operads was considerably renewed in the early 90s when, based on early insights of Maxim Kontsevich, Victor Ginzburg and Mikhail Kapranov discovered that some duality phenomena in rational homotopy theory could be explained using Koszul duality of operads. Operads have since found many applications, such as in deformation quantization of Poisson manifolds, the Deligne conjecture, or graph homology in the work of Maxim Kontsevich and Thomas Willwacher.
  • 1.9K
  • 09 Oct 2022
Topic Review
IBM POWER Microprocessors
IBM has a series of high-performance microprocessors called POWER, followed by a number designating the generation, i.e. POWER1, POWER2, POWER3, and so forth up to the latest POWER9. These processors have been used in IBM's RS/6000, AS/400, pSeries, iSeries, System p, System i, and Power Systems lines of servers and supercomputers. They have also been used in data storage devices by IBM and by other server manufacturers such as Bull and Hitachi. The name "POWER" was originally presented as an acronym for "Performance Optimization With Enhanced RISC". The POWERn family of processors was developed in the late 1980s and is still in active development nearly 30 years later. The first generations used the POWER instruction set architecture (ISA), which evolved into PowerPC in later generations and then into the Power Architecture; modern POWER processors therefore use the Power ISA, not the original POWER ISA.
  • 1.9K
  • 23 Nov 2022
Topic Review
Adder (Electronics)
An adder, or summer, is a digital circuit that performs addition of numbers. In many computers and other kinds of processors adders are used in the arithmetic logic units (ALUs). They are also used in other parts of the processor, where they are used to calculate addresses, table indices, increment and decrement operators and similar operations. Although adders can be constructed for many number representations, such as binary-coded decimal or excess-3, the most common adders operate on binary numbers. In cases where two's complement or ones' complement is being used to represent negative numbers, it is trivial to modify an adder into an adder–subtractor. Other signed number representations require more logic around the basic adder.
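A one-bit full adder computes sum = a XOR b XOR carry-in and carry-out = (a AND b) OR (carry-in AND (a XOR b)); chaining full adders gives a ripple-carry adder. A minimal Python sketch of this logic (an illustration only, with bits stored least-significant first; real adders are of course combinational hardware, not software):

```python
def full_adder(a, b, carry_in):
    """One-bit full adder: returns (sum_bit, carry_out)."""
    s = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return s, carry_out

def ripple_carry_add(x_bits, y_bits):
    """Add two equal-length bit lists (least-significant bit first)
    by chaining full adders, propagating the carry."""
    carry, out = 0, []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out, carry

# 6 + 7 = 13: 6 is 0110 and 7 is 0111, stored LSB-first
bits, carry = ripple_carry_add([0, 1, 1, 0], [1, 1, 1, 0])
print(bits, carry)  # [1, 0, 1, 1] 0  (LSB-first 1011 is binary 1101 = 13)
```

The carry chain is what makes ripple-carry adders slow for wide words; designs such as carry-lookahead adders exist to shorten that critical path.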
  • 1.9K
  • 24 Nov 2022
Topic Review
Propagation Graph
Propagation graphs are a mathematical modelling method for radio propagation channels. A propagation graph is a signal flow graph in which vertices represent transmitters, receivers, or scatterers, and edges model the propagation conditions between vertices. Propagation graph models were initially developed for multipath propagation in scenarios with multiple scattering, such as indoor radio propagation, and have since been applied in many other scenarios.
  • 1.9K
  • 21 Oct 2022
Topic Review
Joint Commission
The Joint Commission is a United States-based nonprofit tax-exempt 501(c) organization that accredits more than 21,000 US health care organizations and programs. The international branch accredits medical services from around the world. A majority of US state governments recognize Joint Commission accreditation as a condition of licensure for the receipt of Medicaid and Medicare reimbursements. The Joint Commission is based in the Chicago suburb of Oakbrook Terrace, Illinois.
  • 1.9K
  • 08 Oct 2022
Topic Review
List of Mathematicians (B)
This is a list of mathematicians in alphabetical order beginning with 'B'.
  • 1.9K
  • 31 Oct 2022
Topic Review
Big Data
Big data has become a very frequent research topic due to the increase in data availability. Here we link the use of big data to Econophysics, a research field that uses large amounts of data and deals with complex systems.
  • 1.9K
  • 25 Dec 2021
Topic Review
Economic Impact Analysis
An economic impact analysis (EIA) examines the effect of an event on the economy in a specified area, ranging from a single neighborhood to the entire globe. It usually measures changes in business revenue, business profits, personal wages, and/or jobs. The event analyzed can be a specific business, organization, policy, program, project, or other economic activity, such as the implementation of a new policy or simply the presence of a business or organization. An economic impact analysis is commonly conducted when there is public concern about the potential impacts of a proposed project or policy. It typically measures or estimates the change in economic activity between two scenarios: one in which the economic event occurs and one in which it does not (referred to as the counterfactual case). The analysis can be carried out either before or after the event (ex ante or ex post). The study region can be a neighborhood, town, city, county, statistical area, state, country, continent, or the entire globe.
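The definition above, the change between the event scenario and the counterfactual, amounts to a per-indicator difference; a sketch with purely hypothetical figures:

```python
def economic_impact(with_event, counterfactual):
    """Impact of an event on each indicator: the difference between the
    scenario in which the event occurs and the counterfactual scenario
    in which it does not."""
    return {k: with_event[k] - counterfactual[k] for k in with_event}

# Hypothetical regional indicators (revenue and wages in millions; jobs in units)
impact = economic_impact(
    with_event={"business_revenue": 120.0, "wages": 45.0, "jobs": 800},
    counterfactual={"business_revenue": 100.0, "wages": 40.0, "jobs": 700},
)
print(impact)  # {'business_revenue': 20.0, 'wages': 5.0, 'jobs': 100}
```

Real studies estimate the counterfactual with economic models rather than observing it directly, which is where most of the analytical difficulty lies.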
  • 1.9K
  • 18 Oct 2022
Topic Review
Discipline (Academia)
An academic discipline or academic field is a subdivision of knowledge that is taught and researched at the college or university level. Disciplines are defined and recognized, in part, by the academic journals in which research is published and by the learned societies and academic departments or faculties to which their practitioners belong. Fields range from languages, the arts, and cultural studies to the scientific disciplines. A discipline incorporates the expertise, people, projects, communities, challenges, studies, inquiry, research areas, and facilities strongly associated with a given scholastic subject area or college department. For example, the branches of science are commonly referred to as the scientific disciplines, e.g. physics, chemistry, and biology. Individuals associated with academic disciplines are commonly referred to as experts or specialists. Others, who may have studied liberal arts or systems theory rather than concentrating in a specific academic discipline, are classified as generalists. While academic disciplines are in themselves more or less focused practices, scholarly approaches such as multidisciplinarity, interdisciplinarity, transdisciplinarity, and cross-disciplinarity integrate aspects of multiple academic disciplines, thereby addressing problems that arise from narrow concentration within specialized fields of study. For example, professionals may encounter trouble communicating across academic disciplines because of differences in language, specified concepts, or methodology. Some researchers believe that academic disciplines may, in the future, be replaced by what is known as Mode 2 or "post-academic science", which involves the acquisition of cross-disciplinary knowledge through the collaboration of specialists from various academic disciplines.
  • 1.9K
  • 24 Nov 2022
Topic Review
Fsutil
As the next version of Windows NT after Windows 2000, as well as the successor to Windows Me, Windows XP introduced many new features but it also removed some others.
  • 1.9K
  • 08 Nov 2022
Topic Review
Qiskit
Qiskit is an open-source software development kit (SDK) for working with quantum computers at the level of circuits, pulses, and algorithms. It provides tools for creating and manipulating quantum programs and running them on prototype quantum devices on IBM Quantum Experience or on simulators on a local computer. It follows the circuit model for universal quantum computation and can be used with any quantum hardware that follows this model (currently superconducting qubits and trapped ions). Qiskit was founded by IBM Research to allow software development for their cloud quantum computing service, IBM Quantum Experience. Contributions are also made by external supporters, typically from academic institutions. The primary version of Qiskit uses the Python programming language. Versions for Swift and JavaScript were initially explored, though development of these versions has halted. Instead, a minimal re-implementation of basic features is available as MicroQiskit, which is made to be easy to port to alternative platforms. A range of Jupyter notebooks are provided with examples of quantum computing being used. Examples include the source code behind scientific studies that use Qiskit, as well as a set of exercises to help people learn the basics of quantum programming. An open-source textbook based on Qiskit is available as a supplement for university-level courses on quantum algorithms or quantum computation.
  • 1.9K
  • 31 Oct 2022
Topic Review
Windows 10 Version History (Version 2004)
The Windows 10 May 2020 Update (also known as version 2004 and codenamed "20H1") is the ninth major update to Windows 10. It carries the build number 10.0.19041.
  • 1.9K
  • 17 Oct 2022
Topic Review
Berlekamp's Root Finding Algorithm
In number theory, Berlekamp's root finding algorithm, also called the Berlekamp–Rabin algorithm, is a probabilistic method for finding roots of polynomials over the field [math]\displaystyle{ \mathbb Z_p }[/math]. The method was discovered by Elwyn Berlekamp in 1970 as an auxiliary to his algorithm for polynomial factorization over finite fields. The algorithm was later modified by Rabin for arbitrary finite fields in 1979. The method was also independently discovered before Berlekamp by other researchers.
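The core of the method is a randomized splitting step: for a random shift δ, gcd(f(x), (x+δ)^((p−1)/2) − 1) separates the roots r with (r+δ) a quadratic residue from the rest. A compact Python sketch, assuming the input polynomial is a product of distinct linear factors over Z_p for an odd prime p (the general algorithm first reduces to this case); all function names here are ours:

```python
import random

def trim(a):
    """Drop trailing zero coefficients (polynomials are lists, low degree first)."""
    while a and a[-1] == 0:
        a.pop()
    return a

def pdivmod(a, b, p):
    """Quotient and remainder of polynomial a divided by b over Z_p."""
    a, q = a[:], [0] * max(len(a) - len(b) + 1, 1)
    inv = pow(b[-1], -1, p)
    while len(a) >= len(b):
        c = a[-1] * inv % p
        shift = len(a) - len(b)
        q[shift] = c
        for i, bc in enumerate(b):
            a[shift + i] = (a[shift + i] - c * bc) % p
        trim(a)
    return trim(q), a

def pmulmod(a, b, f, p):
    """(a * b) mod f over Z_p."""
    res = [0] * max(len(a) + len(b) - 1, 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            res[i + j] = (res[i + j] + ai * bj) % p
    return pdivmod(trim(res), f, p)[1]

def pgcd(a, b, p):
    """Monic greatest common divisor over Z_p."""
    while b:
        a, b = b, pdivmod(a, b, p)[1]
    inv = pow(a[-1], -1, p)
    return [c * inv % p for c in a]

def ppow(base, e, f, p):
    """base**e mod f over Z_p, by square-and-multiply."""
    result, base = [1], pdivmod(base, f, p)[1]
    while e:
        if e & 1:
            result = pmulmod(result, base, f, p)
        base = pmulmod(base, base, f, p)
        e >>= 1
    return result

def find_roots(f, p):
    """Berlekamp-Rabin: roots of f over Z_p, assuming f is a product
    of distinct linear factors modulo an odd prime p."""
    f = trim([c % p for c in f])
    if len(f) <= 1:                     # constant polynomial: no roots
        return []
    if len(f) == 2:                     # linear: the root is -f0/f1
        return [-f[0] * pow(f[1], -1, p) % p]
    while True:
        delta = random.randrange(p)     # random shift
        h = ppow([delta, 1], (p - 1) // 2, f, p)  # (x+delta)^((p-1)/2) mod f
        h = (h or [0])[:]
        h[0] = (h[0] - 1) % p           # ... minus 1
        h = trim(h)
        if not h:
            continue                    # this delta gives no split; retry
        g = pgcd(f, h, p)
        if 1 < len(g) < len(f):         # nontrivial factor: recurse on both parts
            q = pdivmod(f, g, p)[0]
            return sorted(find_roots(g, p) + find_roots(q, p))

print(find_roots([-30, 31, -10, 1], 11))  # roots of (x-2)(x-3)(x-5) mod 11 -> [2, 3, 5]
```

Each random δ splits the roots with probability roughly 1/2, so the expected number of retries is small; the algorithm is Las Vegas in the sense that the output, when it returns, is always correct.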
  • 1.9K
  • 02 Nov 2022
Topic Review
Group Algebra
In functional analysis and related areas of mathematics, group algebras are constructions that generalize the concept of group ring to some classes of topological groups with the aim to reduce the theory of representations of topological groups to the theory of representations of topological algebras. There are several nonequivalent definitions of group algebra, each of which is considered convenient in a particular situation.
  • 1.9K
  • 02 Nov 2022
Topic Review
History of Programming Languages
History of Programming Languages (HOPL) is an infrequent ACM SIGPLAN conference. Past conferences were held in 1978, 1993, and 2007. The fourth conference was originally intended to take place in June 2020, but has been postponed.
  • 1.8K
  • 27 Oct 2022
Topic Review
XCore Architecture
The XCore Architecture is a 32-bit RISC microprocessor architecture designed by XMOS. The architecture is designed to be used in multi-core processors for embedded systems. Each XCore executes up to eight concurrent threads, each thread having its own register set, and the architecture directly supports inter-thread and inter-core communication and various forms of thread scheduling. Two versions of the XCore architecture exist: the XS1 architecture and the XS2 architecture. Processors with the XS1 architecture include the XCore XS1-G4 and XCore XS1-L1. Processors with the XS2 architecture include xCORE-200. The architecture encodes instructions compactly, using 16 bits for frequently used instructions (with up to three operands) and 32 bits for less frequently used instructions (with up to six operands). Almost all instructions execute in a single cycle, and the architecture is event-driven in order to decouple a program's timing requirements from its execution speed. A program will normally perform its computations and then wait for an event (e.g. a message, timer, or external I/O event) before continuing.
  • 1.8K
  • 24 Nov 2022
Topic Review
Flocking (Behavior)
(Image: a swarm-like flock of starlings.) Flocking behavior is the behavior exhibited when a group of birds, called a flock, are foraging or in flight. There are parallels with the shoaling behavior of fish, the swarming behavior of insects, and the herd behavior of land animals. During the winter months, starlings are known for aggregating into huge flocks of hundreds to thousands of individuals, called murmurations, which, when they take flight together, produce striking swirling patterns in the skies above observers. Computer simulations and mathematical models developed to emulate the flocking behavior of birds can generally also be applied to the "flocking" behavior of other species; as a result, the term "flocking" is sometimes applied, in computer science, to species other than birds. This article is about the modelling of flocking behavior. From the perspective of the mathematical modeller, "flocking" is the collective motion of a group of self-propelled entities, a collective animal behavior exhibited by many living beings such as birds, fish, bacteria, and insects. It is considered an emergent behavior arising from simple rules followed by individuals and does not involve any central coordination. Flocking behavior was simulated on a computer in 1987 by Craig Reynolds with his simulation program, Boids. This program simulates simple agents (boids) that are allowed to move according to a set of basic rules. The result is akin to a flock of birds, a school of fish, or a swarm of insects.
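Boids-style flocking rests on three per-agent steering rules: separation (avoid crowding neighbours), alignment (match their average heading), and cohesion (move toward their centre of mass). A minimal 2-D Python sketch in that spirit; the weights, neighbourhood radius, and time step are arbitrary illustrative values, not Reynolds' original parameters:

```python
import math

def boids_step(positions, velocities, dt=0.1,
               w_sep=1.5, w_ali=0.5, w_coh=0.01, radius=2.0):
    """One update applying separation, alignment, and cohesion in 2-D."""
    new_pos, new_vel = [], []
    for i, (px, py) in enumerate(positions):
        sep = [0.0, 0.0]
        ali = [0.0, 0.0]
        coh = [0.0, 0.0]
        n = 0
        for j, (qx, qy) in enumerate(positions):
            if i == j:
                continue
            dx, dy = qx - px, qy - py
            if math.hypot(dx, dy) < radius:       # j is a neighbour of i
                n += 1
                sep[0] -= dx; sep[1] -= dy        # steer away from the neighbour
                ali[0] += velocities[j][0]; ali[1] += velocities[j][1]
                coh[0] += dx; coh[1] += dy        # steer toward the neighbour
        vx, vy = velocities[i]
        if n:                                     # average each rule over neighbours
            vx += (w_sep * sep[0] + w_ali * ali[0] + w_coh * coh[0]) / n
            vy += (w_sep * sep[1] + w_ali * ali[1] + w_coh * coh[1]) / n
        new_pos.append((px + vx * dt, py + vy * dt))
        new_vel.append((vx, vy))
    return new_pos, new_vel

positions = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
velocities = [(0.1, 0.0), (0.0, 0.1), (0.1, 0.1)]
for _ in range(10):
    positions, velocities = boids_step(positions, velocities)
```

No agent coordinates with the whole flock; each reacts only to its local neighbourhood, which is exactly the sense in which the collective motion is emergent.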
  • 1.8K
  • 27 Oct 2022