Topic Review
Multiscale-Deep-Learning Applications
In general, most existing convolutional neural network (CNN)-based deep-learning models suffer from spatial-information loss and inadequate feature representation. This is due to their inability to capture multiscale contextual information and to the loss of semantic information during pooling operations. In the early layers of a CNN, the network encodes simple representations, such as edges and corners, while in the later layers it encodes more complex semantic features, such as complex geometric shapes. In theory, a CNN benefits from extracting features at multiple levels of semantic representation, because tasks such as classification and segmentation work better when both simple and complex feature maps are utilized. Hence, it is crucial to embed multiscale capability throughout the network so that features at the various scales can be optimally captured to represent the intended task.
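As a minimal illustration of this idea (a toy sketch in PyTorch, not any specific published architecture; all layer sizes are invented), the snippet below fuses feature maps from a shallow and a deep stage by upsampling the deeper map and concatenating the two, so that both low-level and high-level representations reach the prediction head.

```python
# Toy multiscale feature fusion: concatenate shallow (edge/corner-level)
# and deep (shape-level) feature maps at a common resolution.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiscaleFusion(nn.Module):
    def __init__(self):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                    nn.MaxPool2d(2))  # shallow: low-level features
        self.stage2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                                    nn.MaxPool2d(2))  # deep: higher-level features
        self.head = nn.Conv2d(16 + 32, 10, 1)         # predicts from both scales

    def forward(self, x):
        f1 = self.stage1(x)                           # 1/2 input resolution
        f2 = self.stage2(f1)                          # 1/4 input resolution
        f2_up = F.interpolate(f2, size=f1.shape[2:],  # bring deep map back
                              mode="bilinear", align_corners=False)
        return self.head(torch.cat([f1, f2_up], dim=1))

out = MultiscaleFusion()(torch.randn(1, 3, 64, 64))
print(out.shape)  # torch.Size([1, 10, 32, 32])
```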
  • 544
  • 26 Oct 2022
Topic Review
MediaWiki Extension
MediaWiki extensions allow MediaWiki to be made more advanced and useful for various purposes. These extensions vary greatly in complexity. The Wikimedia Foundation operates a Git server where many extensions are hosted, and a directory of them can be found on the MediaWiki website. Other sites known for the development of, or support for, extensions include MediaWiki.org, which maintains an extension matrix, and Google Code. MediaWiki code review is itself facilitated through a Gerrit instance. Since version 1.16, MediaWiki has also used the jQuery library.
  • 544
  • 09 Nov 2022
Topic Review
Comprehensive School Mathematics Program
Comprehensive School Mathematics Program (CSMP) stands for both the name of a curriculum and the name of the project that was responsible for developing curriculum materials in the United States. Two major curricula were developed as part of the overall CSMP project: the Comprehensive School Mathematics Program (CSMP), a K–6 mathematics program for regular classroom instruction, and the Elements of Mathematics (EM) program, a grades 7–12 mathematics program for gifted students. EM treats traditional topics rigorously and in depth, and was the only curriculum that strictly adhered to Goals for School Mathematics: The Report of the Cambridge Conference on School Mathematics (1963). As a result, it includes much of the content generally required for an undergraduate mathematics major. These two curricula are unrelated to one another, but certain members of the CSMP staff contributed to the development of both projects. Additionally, some staff were involved with the Secondary School Mathematics Curriculum Improvement Study program being developed around the same time. What follows is a description of the K–6 program that was designed for a general, heterogeneous audience. The CSMP project was established in 1966, under the direction of Burt Kaufman, who remained director until 1979, when he was succeeded by Clare Heidema. It was originally affiliated with Southern Illinois University in Carbondale, Illinois. After a year of planning, CSMP was incorporated into the Central Midwest Regional Educational Laboratory (later CEMREL, Inc.), one of the national educational laboratories funded at that time by the U.S. Office of Education. In 1984, the project moved to the Mid-continental Research for Learning (McREL) Institute's Comprehensive School Reform program, which supported the program until 2003. Heidema remained director until its conclusion. In 1984, the program was in use in 150 school districts in 42 states, reaching about 55,000 students.
  • 543
  • 18 Oct 2022
Topic Review
Bunyakovsky Conjecture
The Bunyakovsky conjecture (or Bouniakowsky conjecture) gives a criterion for a polynomial $f(x)$ in one variable with integer coefficients to give infinitely many prime values in the sequence $f(1), f(2), f(3), \ldots$ It was stated in 1857 by the Russian mathematician Viktor Bunyakovsky. The following three conditions are necessary for $f(x)$ to have the desired prime-producing property: (1) the leading coefficient of $f(x)$ is positive; (2) $f(x)$ is irreducible over the integers; and (3) the values $f(1), f(2), f(3), \ldots$ have no common factor greater than 1. Bunyakovsky's conjecture is that these conditions are also sufficient: if $f(x)$ satisfies (1)-(3), then $f(n)$ is prime for infinitely many positive integers $n$.
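Condition (3) can be certified computationally: the common divisor of all values of $f$ divides the gcd of any finite prefix of them, so finding a prefix gcd of 1 proves the condition. The sketch below (Python, with SymPy assumed available for primality testing) checks this for $f(x) = x^2 + 1$, which satisfies all three conditions (whether it actually produces infinitely many primes remains open), and for $x^2 + x + 2$, which is always even and so fails condition (3).

```python
from math import gcd
from sympy import isprime

def prefix_gcd(f, k=20):
    """gcd of f(1), ..., f(k); a result of 1 certifies condition (3)."""
    g = 0
    for n in range(1, k + 1):
        g = gcd(g, f(n))
    return g

f = lambda x: x**2 + 1       # satisfies conditions (1)-(3)
h = lambda x: x**2 + x + 2   # always even: fails condition (3)

print(prefix_gcd(f))  # 1
print(prefix_gcd(h))  # 2
print([n for n in range(1, 30) if isprime(f(n))])  # prime values so far
```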
  • 543
  • 31 Oct 2022
Topic Review
Data Re-Identification
Data re-identification is the practice of matching anonymous data (also known as de-identified data) with publicly available information, or auxiliary data, in order to discover the individual to whom the data belongs. This is a concern because companies with privacy policies, health care providers, and financial institutions may release the data they collect after the data has gone through the de-identification process. The de-identification process involves masking, generalizing, or deleting both direct and indirect identifiers; the definition of this process is not universal, however. Information in the public domain, even if seemingly anonymized, may thus be re-identified in combination with other pieces of available data and basic computer science techniques. The Common Rule agencies, a collection of multiple U.S. federal agencies and departments including the U.S. Department of Health and Human Services, speculate that re-identification is becoming gradually easier because of "big data": the abundance and constant collection and analysis of information alongside the evolution of technologies and advances in algorithms. However, others have claimed that de-identification is a safe and effective data-liberation tool and do not view re-identification as a concern. A 2000 study found that 87 percent of the U.S. population can be identified using a combination of their gender, birthdate, and ZIP code. Others do not consider re-identification a serious threat, calling it a "myth"; they claim that the combination of ZIP code, date of birth, and gender is rarely available in full or is only partially complete (for example, only the year and month of birth without the day, or the county name instead of the specific ZIP code), so the risk of such re-identification is reduced in many instances.
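A minimal sketch of the masking and generalization step described above (records and field names are invented for illustration): the direct identifier is deleted, the birthdate is coarsened to year and month, and the five-digit ZIP code is truncated to its three-digit prefix.

```python
# Toy de-identification: delete direct identifiers, generalize
# quasi-identifiers (birthdate and ZIP) to coarser values.
records = [
    {"name": "J. Doe", "gender": "F", "birthdate": "1987-04-12", "zip": "60614"},
    {"name": "R. Roe", "gender": "M", "birthdate": "1990-11-03", "zip": "60615"},
]

def de_identify(rec):
    return {
        "gender": rec["gender"],              # kept as-is
        "birth_month": rec["birthdate"][:7],  # drop the day: "1987-04"
        "zip3": rec["zip"][:3],               # "60614" -> "606"
        # the direct identifier "name" is deleted entirely
    }

print([de_identify(r) for r in records])
```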
  • 543
  • 31 Oct 2022
Topic Review
Denormal Number
In computer science, subnormal numbers are the subset of denormalized numbers (sometimes called denormals) that fill the underflow gap around zero in floating-point arithmetic. Any non-zero number with magnitude smaller than the smallest normal number is subnormal. In a normal floating-point value, there are no leading zeros in the significand or mantissa; rather, leading zeros are removed by adjusting the exponent (for example, the number 0.0123 would be written as $1.23 \times 10^{-2}$). Conversely, a denormalized floating-point value has a significand with a leading digit of zero. Of these, the subnormal numbers represent values which, if normalized, would have exponents below the smallest representable exponent (the exponent having a limited range). The significand (or mantissa) of an IEEE floating-point number is the part of a floating-point number that represents the significant digits. For a positive normalised number it can be represented as $m_0.m_1m_2m_3 \ldots m_{p-2}m_{p-1}$ (where $m$ represents a significant digit and $p$ is the precision) with non-zero $m_0$. Notice that for a binary radix, the leading binary digit is always 1. In a subnormal number, since the exponent is the least that it can be, zero is the leading significant digit ($0.m_1m_2m_3 \ldots m_{p-2}m_{p-1}$), allowing the representation of numbers closer to zero than the smallest normal number. A floating-point number may be recognized as subnormal whenever its exponent is the least value possible. By filling the underflow gap like this, significant digits are lost, but not as abruptly as with the flush-to-zero-on-underflow approach (discarding all significant digits when underflow is reached). Hence the production of a subnormal number is sometimes called gradual underflow, because it allows a calculation to lose precision slowly when the result is small. In IEEE 754-2008, denormal numbers are renamed subnormal numbers and are supported in both binary and decimal formats. In binary interchange formats, subnormal numbers are encoded with a biased exponent of 0, but are interpreted with the value of the smallest allowed exponent, which is one greater (i.e., as if it were encoded as a 1). In decimal interchange formats they require no special encoding because the format supports unnormalized numbers directly. Mathematically speaking, the normalized floating-point numbers of a given sign are roughly logarithmically spaced, and as such any finite-sized normal float cannot include zero. The subnormal floats are a linearly spaced set of values, which span the gap between the negative and positive normal floats.
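This behaviour can be observed directly in Python, whose `float` is an IEEE 754 binary64. In the sketch below, dividing the smallest normal number by two yields a subnormal whose biased exponent field is zero, and the smallest positive subnormal ($2^{-1074}$) finally underflows to zero when halved, illustrating gradual underflow.

```python
import struct
import sys

def encoding(x):
    """Raw IEEE 754 binary64 bits of x, as 16 hex digits."""
    return struct.pack(">d", x).hex()

smallest_normal = sys.float_info.min   # 2**-1022
subnormal = smallest_normal / 2        # 2**-1023: below the normal range

print(encoding(smallest_normal))  # 0010000000000000 (biased exponent field 1)
print(encoding(subnormal))        # 0008000000000000 (biased exponent field 0)
print(5e-324 == 2.0**-1074)       # True: smallest positive subnormal
print(5e-324 / 2)                 # 0.0: gradual underflow reaches zero at last
```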
  • 543
  • 08 Nov 2022
Topic Review
Sarcasm and Irony Detection in Social Media
Sarcasm and irony represent intricate linguistic forms in social media communication, demanding nuanced comprehension of context and tone. 
  • 544
  • 30 Nov 2023
Topic Review
Service Virtualisation
Continuous delivery is an industry software development approach that aims to reduce the delivery time of software and increase quality assurance within a short development cycle. Fast delivery and improved quality require continuous testing of the developed software service. Testing services is complicated and costly, and it is often postponed to the end of development because the requisite services are unavailable. An empirical approach that has been utilised to overcome these challenges is to automate software testing by virtualising the behaviour of the requisite services for the system under test. Service virtualisation involves analysing the behaviour of software services to uncover their external behaviour in order to generate a lightweight executable model of the requisite services. Several research areas can be used to create such a virtual model of services from network interactions or service execution logs, including message-format extraction and the inference of control models, data models, and multi-service dependencies.
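As a toy illustration of the record-and-replay idea (standard-library Python only, with invented endpoints and payloads), a captured set of request/response pairs can be replayed by a stub HTTP server that stands in for the unavailable real service during testing. Production service-virtualisation tools go further, inferring message formats, state, and dependencies rather than matching requests verbatim.

```python
# Minimal virtual service: replays recorded interactions for the
# system under test when the real dependency is unavailable.
from http.server import BaseHTTPRequestHandler, HTTPServer

RECORDED = {  # hypothetical captured interactions: path -> response body
    "/users/1": b'{"id": 1, "name": "Ada"}',
    "/health":  b'{"status": "ok"}',
}

class VirtualService(BaseHTTPRequestHandler):
    def do_GET(self):
        body = RECORDED.get(self.path)
        if body is None:
            self.send_response(404)   # interaction was never recorded
            self.end_headers()
            return
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), VirtualService).serve_forever()
```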
  • 542
  • 07 Apr 2021
Topic Review
Bayes Factor and Prior Elicitation
The Bayes factor is the ratio of the marginal likelihoods of two competing models. The marginal likelihood of a model $M$ is a weighted average of the likelihood over all the parameter values represented by the prior distribution, $p(y \mid M) = \int p(y \mid \theta, M)\, p(\theta \mid M)\, d\theta$. Because the prior acts as the weighting function, carefully choosing priors and conducting a prior sensitivity analysis play an essential role when using Bayes factors as a model-selection tool. This section briefly discusses prior distributions, prior elicitation, and prior sensitivity analysis.
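A minimal sketch of how the prior enters the Bayes factor, using a binomial example that is not from the original text: comparing a point null $M_0{:}\ \theta = 0.5$ against $M_1{:}\ \theta \sim \mathrm{Beta}(a, b)$, the marginal likelihood under $M_1$ integrates the likelihood against the prior, so re-running with different $(a, b)$ values is a simple prior sensitivity analysis.

```python
# Bayes factor BF10 for k successes in n binomial trials:
# M0 fixes theta = 0.5; M1 places a Beta(a, b) prior on theta.
from math import comb
import numpy as np
from scipy.special import betaln

def log_marglik_m1(k, n, a, b):
    """log p(k | M1) = log C(n,k) + log B(k+a, n-k+b) - log B(a, b)."""
    return np.log(comb(n, k)) + betaln(k + a, n - k + b) - betaln(a, b)

def log_marglik_m0(k, n):
    """log p(k | M0), the point null theta = 0.5."""
    return np.log(comb(n, k)) + n * np.log(0.5)

k, n = 61, 100  # e.g., 61 successes in 100 trials
for a, b in [(1, 1), (10, 10), (0.5, 0.5)]:  # prior sensitivity analysis
    bf10 = np.exp(log_marglik_m1(k, n, a, b) - log_marglik_m0(k, n))
    print(f"Beta({a},{b}) prior: BF10 = {bf10:.3f}")
```

The printed Bayes factors differ across the three priors even though the data are identical, which is exactly why prior elicitation and sensitivity analysis matter for this tool.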
  • 542
  • 24 Feb 2022
Topic Review
State-of-the-Art on Recommender Systems for E-Learning
Recommender systems (RSs) are increasingly recognized as intelligent software for predicting users' opinions of specific items. Various RSs have been developed in different domains, such as e-commerce, e-government, e-resource services, e-business, e-library, e-tourism, and e-learning, to provide users with high-quality recommendations. In e-learning technology, RSs are designed to support and improve the learning practices of a student or an organization.
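As a minimal sketch of the underlying idea (invented ratings, not a production e-learning RS), user-based collaborative filtering predicts a learner's rating of an unseen item as a similarity-weighted average of other users' ratings.

```python
# Toy user-based collaborative filtering with cosine similarity.
import numpy as np

# rows: users, cols: e-learning items; 0 = not yet rated
R = np.array([[5, 4, 0, 1],
              [4, 5, 4, 0],
              [1, 0, 2, 5]], dtype=float)

def predict(u, i):
    rated = R[:, i] > 0                       # neighbours who rated item i
    sims = np.array([
        np.dot(R[u], R[v]) / (np.linalg.norm(R[u]) * np.linalg.norm(R[v]))
        for v in range(len(R))
    ])
    sims[u] = 0                               # exclude the target user
    w = sims * rated
    return np.dot(w, R[:, i]) / w.sum()       # similarity-weighted average

print(f"Predicted rating of user 0 for item 2: {predict(0, 2):.2f}")
```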
  • 542
  • 06 Dec 2022