Topic Review
Circle-ellipse Problem
The circle-ellipse problem in software development (sometimes termed the square-rectangle problem) illustrates several pitfalls which can arise when using subtype polymorphism in object modelling. The issues are most commonly encountered when using object-oriented programming (OOP). By definition, this problem is a violation of the Liskov substitution principle, one of the SOLID principles. The problem concerns which subtyping or inheritance relationship should exist between classes which represent circles and ellipses (or, similarly, squares and rectangles). More generally, the problem illustrates the difficulties which can occur when a base class contains methods which mutate an object in a manner which may invalidate a (stronger) invariant found in a derived class, causing the Liskov substitution principle to be violated. The existence of the circle-ellipse problem is sometimes used to criticize object-oriented programming. It may also suggest that hierarchical taxonomies are difficult to make universal, and that situational classification systems may be more practical.
  • 213
  • 18 Oct 2022
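The violation described above can be made concrete with a minimal Python sketch (the class and method names are illustrative, not from any particular library). A `Square` that inherits from a mutable `Rectangle` must strengthen the invariant to `width == height`, so its setters mutate both dimensions, and code written against `Rectangle` breaks:

```python
class Rectangle:
    def __init__(self, width, height):
        self.width = width
        self.height = height

    def set_width(self, w):
        self.width = w

    def set_height(self, h):
        self.height = h

    def area(self):
        return self.width * self.height


class Square(Rectangle):
    """A square must keep width == height, so each setter mutates both
    dimensions, strengthening the invariant inherited from Rectangle."""

    def __init__(self, side):
        super().__init__(side, side)

    def set_width(self, w):
        self.width = self.height = w

    def set_height(self, h):
        self.width = self.height = h


def stretch(rect):
    # A caller's reasonable expectation for any Rectangle:
    # changing the width leaves the height untouched.
    rect.set_width(5)
    rect.set_height(4)
    return rect.area()


print(stretch(Rectangle(2, 3)))  # 20, as the caller expects (5 * 4)
print(stretch(Square(2)))        # 16: the second setter also reset the width
```

Because `stretch` produces a different result when handed a `Square`, the subclass is not substitutable for its base class, which is exactly the Liskov violation the entry describes.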
Topic Review
Moz (Marketing Software)
Moz is a software as a service (SaaS) company based in Seattle that sells inbound marketing and marketing analytics software subscriptions. It was founded by Rand Fishkin and Gillian Muessig in 2004 as a consulting firm and shifted to SEO software development in 2008. The company hosts a website that includes an online community of more than one million digital marketers worldwide, along with marketing-related tools. Moz offers SEO tools that include keyword research, link building, site audits, and page optimization insights, helping companies see where they rank on search engines and how to improve that ranking. The company also developed the most commonly used algorithm for Domain Authority, a score from 1 to 100 that many SEO companies use to estimate a website's overall standing with search engines.
  • 1.7K
  • 18 Oct 2022
Topic Review
Scientific Community Metaphor
In computer science, the scientific community metaphor is a metaphor used to aid understanding scientific communities. The first publications on the scientific community metaphor in 1981 and 1982 involved the development of a programming language named Ether that invoked procedural plans to process goals and assertions concurrently by dynamically creating new rules during program execution. Ether also addressed issues of conflict and contradiction with multiple sources of knowledge and multiple viewpoints.
  • 218
  • 18 Oct 2022
Topic Review
HandWiki
HandWiki is an internet wiki-style encyclopedia for professional researchers in various branches of science and computer science. Like other wiki-type encyclopedias, HandWiki is designed for collaborative editing of articles. Unlike traditional Wikipedia, which uses the category concept for all articles located in the main namespace, HandWiki uses a dedicated namespace for each topic. This allows the creation of "Books" or "Manuals" by grouping articles under the same namespace. According to the HandWiki designers, this can simplify the organization of articles by topic. HandWiki includes the following topics as dedicated namespaces: Mathematics, Computers, Analysis, Physics, Astronomy, Biology, Chemistry, Unsolved. In addition to the categories preserved from Wikipedia, HandWiki has its own categories for original articles posted to HandWiki. One notable feature of HandWiki is that it allows multiple authors to collaborate in real time on many types of documents (lectures, books, technical documents, etc.). The text can be protected from viewing and made available only to the group of people working on the same project. HandWiki can convert such articles to LaTeX and use BibTeX for referencing, two features that are a significant advantage when preparing research articles for publication. HandWiki is built on the MediaWiki software with additional extensions for including references to program code and BibTeX citations. HandWiki allows advertisements to be added to the end of articles; the advertising icons can be grouped according to the HandWiki topics.
  • 6.3K
  • 18 Oct 2022
Topic Review
FireEye
FireEye is a privately held cybersecurity company headquartered in Milpitas, California. It has been involved in the detection and prevention of major cyber attacks. It provides hardware, software, and services to investigate cybersecurity attacks, protect against malicious software, and analyze IT security risks. FireEye was founded in 2004. Initially, it focused on developing virtual machines that would download and test internet traffic before transferring it to a corporate or government network. The company diversified over time, in part through acquisitions. In 2014, it acquired Mandiant, which provides incident response services following the identification of a security breach. FireEye went public in 2013. USA Today says FireEye "has been called in to investigate high-profile attacks against Target, JP Morgan Chase, Sony Pictures, Anthem and others".
  • 1.4K
  • 18 Oct 2022
Topic Review
Modo
Modo (stylized as MODO, and originally modo) is a polygon and subdivision surface modeling, sculpting, 3D painting, animation and rendering package developed by Luxology, LLC, which is now merged with and known as Foundry. The program incorporates features such as n-gons and edge weighting, and runs on Microsoft Windows, Linux and macOS platforms.
  • 766
  • 18 Oct 2022
Topic Review
Winograd Schema Challenge
The Winograd schema challenge (WSC) is a test of machine intelligence proposed by Hector Levesque, a computer scientist at the University of Toronto. Designed to be an improvement on the Turing test, it is a multiple-choice test that employs questions of a very specific structure: they are instances of what are called Winograd schemas, named after Terry Winograd, professor of computer science at Stanford University. On the surface, Winograd schema questions simply require the resolution of anaphora: the machine must identify the antecedent of an ambiguous pronoun in a statement. This makes it a task of natural language processing, but Levesque argues that for Winograd schemas, the task requires the use of knowledge and commonsense reasoning. Nuance Communications announced in July 2014 that it would sponsor an annual WSC competition, with a prize of $25,000 for the best system that could match human performance. However, the prize is no longer offered.
  • 362
  • 18 Oct 2022
Topic Review
TScript
TScript is an object-oriented embeddable scripting language for C++ that supports hierarchical transient typed variables (TVariable). Its main design criterion is to create a scripting language that can interface with C++, transforming data and returning the result. This enables C++ applications to change their functionality after installation.
  • 388
  • 18 Oct 2022
Topic Review
Computer-Aided Diagnosis Approach for Breast Cancer
Breast cancer imposes an enormous burden on humanity, causing great loss of life and substantial economic cost. It is the world's leading type of cancer among women and a leading cause of mortality and morbidity. The histopathological examination of breast tissue biopsies is the gold standard for diagnosis. A computer-aided diagnosis (CAD) system based on deep learning is developed to ease the pathologist's mission. A new transfer learning approach is introduced for breast cancer classification using a set of pre-trained Convolutional Neural Network (CNN) models with the help of data augmentation techniques. Multiple experiments are performed to analyze the performance of these pre-trained CNN models by carrying out magnification-dependent and magnification-independent binary and eight-class classifications. The Xception model showed the most promising performance, achieving the highest classification accuracy in all experiments.
  • 372
  • 17 Oct 2022
Topic Review
Training, Validation, and Test Sets
In machine learning, a common task is the study and construction of algorithms that can learn from and make predictions on data. Such algorithms function by making data-driven predictions or decisions, through building a mathematical model from input data. These input data used to build the model are usually divided into multiple data sets. In particular, three data sets are commonly used in different stages of the creation of the model: training, validation and test sets. The model is initially fit on a training data set, which is a set of examples used to fit the parameters (e.g. weights of connections between neurons in artificial neural networks) of the model. The model (e.g. a naive Bayes classifier) is trained on the training data set using a supervised learning method, for example using optimization methods such as gradient descent or stochastic gradient descent. In practice, the training data set often consists of pairs of an input vector (or scalar) and the corresponding output vector (or scalar), where the answer key is commonly denoted as the target (or label). The current model is run with the training data set and produces a result, which is then compared with the target, for each input vector in the training data set. Based on the result of the comparison and the specific learning algorithm being used, the parameters of the model are adjusted. The model fitting can include both variable selection and parameter estimation. Subsequently, the fitted model is used to predict the responses for the observations in a second data set called the validation data set. The validation data set provides an unbiased evaluation of a model fit on the training data set while tuning the model's hyperparameters (e.g. the number of hidden units—layers and layer widths—in a neural network).
Validation datasets can be used for regularization by early stopping (stopping training when the error on the validation data set increases, as this is a sign of over-fitting to the training data set). This simple procedure is complicated in practice by the fact that the validation dataset's error may fluctuate during training, producing multiple local minima. This complication has led to the creation of many ad hoc rules for deciding when over-fitting has truly begun. Finally, the test data set is a data set used to provide an unbiased evaluation of a final model fit on the training data set. If the data in the test data set has never been used in training (for example in cross-validation), the test data set is also called a holdout data set. The term "validation set" is sometimes used instead of "test set" in some literature (e.g., if the original data set was partitioned into only two subsets, the test set might be referred to as the validation set). Deciding the sizes and strategies for data set division in training, test and validation sets is very dependent on the problem and data available.
  • 748
  • 17 Oct 2022
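The three-way partition and the early-stopping rule described above can be sketched in plain Python. This is a minimal illustration, not a library API: the function names, the 60/20/20 fractions, and the `patience` parameter are hypothetical choices for the example.

```python
import random


def train_val_test_split(data, val_frac=0.2, test_frac=0.2, seed=0):
    """Shuffle a dataset and partition it into training, validation and
    test sets (fractions here are example defaults, not a standard)."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_frac)
    n_val = int(len(shuffled) * val_frac)
    test = shuffled[:n_test]
    val = shuffled[n_test:n_test + n_val]
    train = shuffled[n_test + n_val:]
    return train, val, test


def early_stopping_epoch(val_errors, patience=3):
    """Return the epoch with the best validation error so far, halting
    once the error has failed to improve for `patience` epochs in a row."""
    best, best_epoch, waited = float("inf"), 0, 0
    for epoch, err in enumerate(val_errors):
        if err < best:
            best, best_epoch, waited = err, epoch, 0
        else:
            waited += 1
            if waited >= patience:
                break
    return best_epoch


data = list(range(100))
train, val, test = train_val_test_split(data)
print(len(train), len(val), len(test))  # 60 20 20

# Validation error improves, then rises: stop and keep epoch 2's model.
print(early_stopping_epoch([1.0, 0.8, 0.7, 0.75, 0.72, 0.9, 0.95]))  # 2
```

Note that the split happens once, before any training: the test partition is held out entirely, while the validation errors (the second function's input) are recomputed after each training epoch.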