Topic Review
Perl 6 Rules
Perl 6 rules are the regular expression, string matching and general-purpose parsing facility of Perl 6, and are a core part of the language. Since Perl's pattern-matching constructs have exceeded the capabilities of formal regular expressions for some time, Perl 6 documentation refers to them exclusively as regexes, distancing the term from the formal definition. Perl 6 provides a superset of Perl 5 features with respect to regexes, folding them into a larger framework called rules, which provide the capabilities of a parsing expression grammar, as well as acting as a closure with respect to their lexical scope. Rules are introduced with the rule keyword, which has a usage quite similar to subroutine definitions. Anonymous rules can be introduced with the regex (or rx) keyword, or simply be used inline as regexes were in Perl 5 via the m (matching) or s (substitution) operators.
  • 1.1K
  • 22 Nov 2022
Topic Review
The Simple Function Point (SFP) Method
The Simple Function Point (SFP) method is a lightweight functional size measurement method. It was designed to be compliant with the ISO 14143-1 standard and compatible with the International Function Point Users Group (IFPUG) Function Point Analysis (FPA) method. The original method is described in a manual produced by the Simple Function Point Association: the Simple Function Point Functional Size Measurement Method Reference Manual is available under the Creative Commons Attribution-NoDerivatives 4.0 International Public License.
  • 1.1K
  • 27 Sep 2022
Topic Review
MultiOTP
multiOTP is an open-source PHP class, a command-line tool, and a web interface that can be used to provide an operating-system-independent, strong authentication system. multiOTP has been OATH-certified since version 4.1.0 and is developed under the LGPL license. Starting with version 4.3.2.5, multiOTP open source is also available as a virtual appliance: as a standard OVA file, a customized OVA file with open-vm-tools, and also as a Hyper-V downloadable file. A QR code is generated automatically when printing the user-configuration page.
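The one-time passwords handled by OATH-certified tools such as multiOTP are typically time-based (TOTP, RFC 6238). The snippet below is not multiOTP's own code; it is a minimal, generic TOTP computation in Python, with the Base32 secret and parameter names chosen purely for illustration.

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32, period=30, digits=6):
    """Generic RFC 6238 TOTP; illustrative only, not taken from multiOTP."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period          # number of elapsed time steps
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# Example with a made-up Base32 secret (the kind of value usually shared via QR code):
print(totp("JBSWY3DPEHPK3PXP"))
```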
  • 1.0K
  • 21 Nov 2022
Biography
Andrey Korotayev
Andrey Vitalievich Korotayev (Russian: Андре́й Вита́льевич Корота́ев; born 17 February 1961) is a Russian anthropologist, economic historian, comparative political scientist, demographer and sociologist, with major contributions to world-systems theory, cross-cultural studies, Near Eastern history, Big History, and mathematical modelling of social and economic macrodynamics.
  • 1.0K
  • 08 Dec 2022
Topic Review
Bumble (App)
Bumble is a location-based social application that facilitates communication between interested users. In heterosexual matches, only female users can make the first contact with matched male users, while in same-sex matches either person can send a message first. Users can sign up using their phone number or Facebook profile, and have options of searching for romantic matches or, in "BFF mode", friends. Bumble Bizz facilitates business communications. Bumble was founded by Whitney Wolfe Herd shortly after she left Tinder, a dating app she co-founded, due to growing tensions with other company executives. Wolfe Herd has described Bumble as a "feminist dating app". As of September 2019, with a monthly user base of 5 million, Bumble is the second-most popular dating app in the U.S. after Tinder. According to a June 2016 survey, 46.2% of its users are female. According to Forbes, the company is valued at more than $1 billion, and has over 55 million users.
  • 1.0K
  • 17 Oct 2022
Topic Review
Rosetta Stone
Rosetta Stone Language Learning is proprietary, computer-assisted language learning (CALL) software published by Rosetta Stone Inc., part of the IXL Learning family of products. The software uses images, text, and sound to teach words and grammar by spaced repetition, without translation. Rosetta Stone calls its approach Dynamic Immersion. The software's name and logo allude to the ancient stone slab of the same name on which the Decree of Memphis is inscribed in three writing systems. IXL Learning acquired Rosetta Stone in March 2021.
  • 1.0K
  • 24 Oct 2022
Topic Review
Psycholinguistics
Psycholinguistics or psychology of language is the study of the psychological and neurobiological factors that enable humans to acquire, use, and understand language. Initial forays into psycholinguistics were largely philosophical ventures, due mainly to a lack of cohesive data on how the human brain functioned. Modern research makes use of biology, neuroscience, cognitive science, and information theory to study how the brain processes language. There are a number of subdisciplines; for example, as non-invasive techniques for studying the neurological workings of the brain become more and more widespread, neurolinguistics has become a field in its own right. Psycholinguistics covers the cognitive processes that make it possible to generate a grammatical and meaningful sentence out of vocabulary and grammatical structures, as well as the processes that make it possible to understand utterances, words, text, etc. Developmental psycholinguistics studies infants' and children's ability to learn language, usually with experimental or at least quantitative methods (as opposed to naturalistic observations such as those made by Jean Piaget in his research on the development of children).
  • 1.0K
  • 26 Oct 2022
Topic Review
Nontransitive Dice
A set of dice is nontransitive if it contains three dice, A, B, and C, with the property that A rolls higher than B more than half the time, and B rolls higher than C more than half the time, but it is not true that A rolls higher than C more than half the time. In other words, a set of dice is nontransitive if the binary relation – X rolls a higher number than Y more than half the time – on its elements is not transitive. It is possible to find sets of dice with the even stronger property that, for each die in the set, there is another die that rolls a higher number than it more than half the time. Using such a set of dice, one can invent games which are biased in ways that people unused to nontransitive dice might not expect (see Example).
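A concrete check helps here. The short Python snippet below uses one well-known nontransitive set of face values (a standard textbook example, not one given in this entry) and computes the exact pairwise win probabilities; A beats B, B beats C, and C beats A each with probability 5/9, so the "rolls higher more than half the time" relation forms a cycle.

```python
from fractions import Fraction
from itertools import product

# One standard nontransitive set (illustrative choice of faces):
dice = {
    "A": [2, 2, 4, 4, 9, 9],
    "B": [1, 1, 6, 6, 8, 8],
    "C": [3, 3, 5, 5, 7, 7],
}

def p_beats(x, y):
    """Exact probability that die x rolls strictly higher than die y."""
    wins = sum(a > b for a, b in product(dice[x], dice[y]))
    return Fraction(wins, len(dice[x]) * len(dice[y]))

for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(f"P({x} beats {y}) = {p_beats(x, y)}")   # each prints 5/9
```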
  • 1.0K
  • 28 Nov 2022
Topic Review
Rational ClearQuest
ClearQuest is an enterprise-level workflow automation tool from the Rational Software division of IBM. Commonly, ClearQuest is configured as a bug tracking system, but it can be configured to act as a CRM tool or to track a complex manufacturing process. It can also implement these functions together. IBM provides a number of predefined "schemas" for common tasks such as software defect tracking, which can themselves be further customized if required.
  • 1.0K
  • 31 Oct 2022
Topic Review
AmigaOS
AmigaOS is a family of proprietary native operating systems of the Amiga and AmigaOne personal computers. It was developed first by Commodore International and introduced with the launch of the first Amiga, the Amiga 1000, in 1985. Early versions of AmigaOS required the Motorola 68000 series of 16-bit and 32-bit microprocessors. Later versions were developed by Haage & Partner (AmigaOS 3.5 and 3.9) and then Hyperion Entertainment (AmigaOS 4.0-4.1). A PowerPC microprocessor is required for the most recent release, AmigaOS 4. AmigaOS is a single-user operating system based on a preemptive multitasking kernel, called Exec. It includes an abstraction of the Amiga's hardware, a disk operating system called AmigaDOS, a windowing system API called Intuition, and a desktop environment and file manager called Workbench. The Amiga intellectual property is fragmented between Amiga Inc., Cloanto, and Hyperion Entertainment. The copyrights for works created up to 1993 are owned by Cloanto. In 2001, Amiga Inc. contracted AmigaOS 4 development to Hyperion Entertainment and, in 2009, they granted Hyperion an exclusive, perpetual, worldwide license to AmigaOS 3.1 in order to develop and market AmigaOS 4 and subsequent versions. On December 29, 2015, the AmigaOS 3.1 source code leaked to the web; this was confirmed by the licensee, Hyperion Entertainment.
  • 1.0K
  • 07 Nov 2022
Topic Review
Operad Theory
Operad theory is a field of mathematics concerned with prototypical algebras that model properties such as commutativity or anticommutativity as well as various amounts of associativity. Operads generalize the various associativity properties already observed in algebras and coalgebras such as Lie algebras or Poisson algebras by modeling computational trees within the algebra. Algebras are to operads as group representations are to groups. An operad can be seen as a set of operations, each one having a fixed finite number of inputs (arguments) and one output, which can be composed with one another. They form a category-theoretic analog of universal algebra. Operads originate in algebraic topology from the study of iterated loop spaces by J. Michael Boardman and Rainer M. Vogt, and J. Peter May. The word "operad" was created by May as a portmanteau of "operations" and "monad" (and also because his mother was an opera singer). Interest in operads was considerably renewed in the early 1990s when, based on early insights of Maxim Kontsevich, Victor Ginzburg and Mikhail Kapranov discovered that some duality phenomena in rational homotopy theory could be explained using Koszul duality of operads. Operads have since found many applications, such as in deformation quantization of Poisson manifolds, the Deligne conjecture, or graph homology in the work of Maxim Kontsevich and Thomas Willwacher.
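To make the "operations with many inputs and one output" picture concrete, the structure maps of an operad P and the motivating endomorphism operad can be written as below; this is one standard convention, not notation taken from this entry.

```latex
% Composition in an operad P: grafting n further operations into an n-ary one
\[
  \gamma \colon P(n) \otimes P(k_1) \otimes \cdots \otimes P(k_n)
  \;\longrightarrow\; P(k_1 + \cdots + k_n),
\]
% subject to associativity, a unit 1 \in P(1), and (for symmetric operads)
% equivariance axioms.  The motivating example is the endomorphism operad of X,
\[
  \mathrm{End}_X(n) = \mathrm{Hom}(X^{\otimes n}, X),
\]
% with \gamma given by ordinary composition of multi-input maps; an algebra
% over P is then a morphism of operads P \to \mathrm{End}_X.
```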
  • 1.0K
  • 09 Oct 2022
Topic Review
The Hussaini Encyclopedia
The Hussaini Encyclopedia (Arabic: دائرة المعارف الحسينية, Dāʾirat al-maʿārif al-Ḥusaynīyah) is a one-of-a-kind Arabic encyclopedia devoted entirely to the third Holy Imam, Husayn ibn Ali: his biography, thought and way of conduct, the circle of personalities around him, and related places, chronicles and various other subjects. The encyclopedia comprises 700 volumes and well over 95 million words. Its author, Sheikh Mohammed Sadiq Al-Karbassi, began the work by establishing the Hussaini Center for Research in London in 1993. The Hussaini Encyclopedia is a historical study of al-Hussain and of his legacy, particularly the aftermath of the Battle of Karbala and its continuing impact in recent times.
  • 1.0K
  • 01 Dec 2022
Topic Review
Google Book Search Settlement Agreement
The Google Book Search Settlement Agreement was a proposal between the Authors Guild, the Association of American Publishers, and Google in the settlement of Authors Guild et al. v. Google, a class action lawsuit alleging copyright infringement on the part of Google. The settlement was initially proposed in 2008, but ultimately rejected by the court in 2011. In November 2013, the presiding U.S. Circuit Judge dismissed Authors Guild et al. v. Google. On April 18, 2016, the Supreme Court turned down an appeal.
  • 1.0K
  • 14 Nov 2022
Topic Review
Berlekamp's Root Finding Algorithm
In number theory, Berlekamp's root-finding algorithm, also called the Berlekamp–Rabin algorithm, is a probabilistic method for finding roots of polynomials over the field ℤ_p of integers modulo a prime p. The method was discovered by Elwyn Berlekamp in 1970 as an auxiliary step in his algorithm for polynomial factorization over finite fields. The algorithm was later modified by Rabin for arbitrary finite fields in 1979. The method was also independently discovered before Berlekamp by other researchers.
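The core trick is easy to sketch: first reduce f to the product of its distinct linear factors via gcd(f, x^p - x); then pick a random shift delta and compute gcd(f(x), (x + delta)^((p-1)/2) - 1), which separates the roots r according to whether r + delta is a quadratic residue, and recursing on the two factors isolates every root. The Python toy below is a from-scratch illustration of that description (all helper names are ours, not from any library) and assumes p is an odd prime and f is a nonconstant coefficient list, lowest degree first.

```python
import random

def trim(f):
    while len(f) > 1 and f[-1] == 0:
        f.pop()
    return f

def poly_mul(a, b, p):
    res = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            res[i + j] = (res[i + j] + ai * bj) % p
    return trim(res)

def poly_sub(a, b, p):
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a))
    b = b + [0] * (n - len(b))
    return trim([(x - y) % p for x, y in zip(a, b)])

def poly_divmod(a, b, p):
    a = a[:]
    q = [0] * max(1, len(a) - len(b) + 1)
    inv = pow(b[-1], p - 2, p)                 # inverse of leading coefficient
    while len(a) >= len(b) and any(a):
        k = a[-1] * inv % p
        shift = len(a) - len(b)
        q[shift] = k
        for i, bi in enumerate(b):
            a[shift + i] = (a[shift + i] - k * bi) % p
        trim(a)
    return trim(q), trim(a)

def poly_gcd(a, b, p):
    while any(b):
        a, b = b, poly_divmod(a, b, p)[1]
    inv = pow(a[-1], p - 2, p)                 # make the gcd monic
    return trim([c * inv % p for c in a])

def poly_powmod(base, e, f, p):
    result, base = [1], poly_divmod(base, f, p)[1]
    while e:
        if e & 1:
            result = poly_divmod(poly_mul(result, base, p), f, p)[1]
        base = poly_divmod(poly_mul(base, base, p), f, p)[1]
        e >>= 1
    return result

def roots_mod_p(f, p):
    """All roots of f(x) in Z_p for an odd prime p, by random splitting."""
    # keep only the product of distinct linear factors: gcd(f, x^p - x)
    xp = poly_powmod([0, 1], p, f, p)
    f = poly_gcd(f, poly_sub(xp, [0, 1], p), p)
    def split(g):
        if len(g) == 1:                        # nonzero constant: no roots
            return []
        if len(g) == 2:                        # linear g0 + g1*x
            return [(-g[0] * pow(g[1], p - 2, p)) % p]
        while True:
            delta = random.randrange(p)
            # (x+delta)^((p-1)/2) - 1 vanishes exactly on roots r for which
            # r + delta is a nonzero quadratic residue
            h = poly_powmod([delta, 1], (p - 1) // 2, g, p)
            d = poly_gcd(g, poly_sub(h, [1], p), p)
            if 1 < len(d) < len(g):            # proper split: recurse on both parts
                return split(d) + split(poly_divmod(g, d, p)[0])
    return sorted(split(f))

# Example: x^2 - 1 over Z_11 has the roots {1, 10}
print(roots_mod_p([10, 0, 1], 11))             # -> [1, 10]
```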
  • 1.0K
  • 02 Nov 2022
Topic Review
AI: When a Robot Writes a Play
AI: When a Robot Writes a Play (in Czech: AI: Když robot píše hru) is an experimental theatre play in which 90% of the script was automatically generated by artificial intelligence (the GPT-2 language model). The play is in the Czech language, but an English version of the script also exists.
  • 1.0K
  • 23 Nov 2022
Topic Review
Ceph
Ceph (pronounced /ˈsɛf/) is an open-source software-defined storage platform that implements object storage on a single distributed computer cluster and provides 3-in-1 interfaces for object-, block- and file-level storage. Ceph aims primarily for completely distributed operation without a single point of failure, scalability to the exabyte level, and to be freely available. Since version 12, Ceph does not rely on other filesystems and can directly manage HDDs and SSDs with its own storage backend BlueStore, and it can expose a POSIX filesystem entirely on its own. Ceph replicates data and makes it fault-tolerant, using commodity hardware and Ethernet IP and requiring no specific hardware support. Ceph offers disaster recovery and data redundancy through techniques such as replication, erasure coding, snapshots and storage cloning. As a result of its design, the system is both self-healing and self-managing, aiming to minimize administration time and other costs. In this way, administrators have a single, consolidated system that avoids silos and collects the storage within a common management framework. Ceph consolidates several storage use cases and improves resource utilization. It also lets an organization deploy servers where needed.
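As an illustration of the object-level interface, the sketch below writes and reads one object through the python-rados bindings. The config path, pool name, and object name are placeholders, and it assumes a reachable cluster with the bindings and a keyring already set up.

```python
import rados  # python-rados bindings shipped with Ceph (assumed installed)

# Connect using a cluster configuration file (all names below are placeholders).
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("mypool")       # I/O context bound to one pool
    try:
        ioctx.write_full("hello-object", b"stored via RADOS")
        print(ioctx.read("hello-object"))      # -> b'stored via RADOS'
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```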
  • 1.0K
  • 14 Oct 2022
Topic Review
Forensic Identification
Forensic identification is the application of forensic science, or "forensics", and technology to identify specific objects from the trace evidence they leave, often at a crime scene or the scene of an accident. Forensic means "for the courts".
  • 1.0K
  • 25 Oct 2022
Topic Review
List of Medical Wikis
This is a list of medical wikis, collaboratively-editable websites that focus on medical information. Many of the most popular medical wikis take the form of encyclopedias, with a separate article for each medical term. Some of these websites, such as WikiDoc and Radiopaedia, are editable by anyone, while others, such as Ganfyd, restrict editing access to professionals. The majority of them have content available only in English. The largest and most popular general encyclopedia, Wikipedia, also hosts a significant amount of health and medical information.
  • 1.0K
  • 01 Nov 2022
Topic Review
Technical Support Scam
A technical support scam refers to any class of telephone fraud activities in which a scammer claims to offer a legitimate technical support service, often via cold calls to unsuspecting users. Such calls are mostly targeted at Microsoft Windows users, with the caller often claiming to represent a Microsoft technical support department. In English-speaking countries such as the United States, Canada, the United Kingdom, Ireland, Australia and New Zealand, such cold call scams have occurred as early as 2008, and they primarily originate from call centers in India. The scammer will typically attempt to get the victim to allow remote access to their computer. After remote access is gained, the scammer relies on confidence tricks, typically involving utilities built into Windows and other software, in order to gain the victim's trust to pay for the supposed "support" services. The scammer will often then steal the victim's credit card account information or persuade the victim to log in to their online banking account to receive a promised refund, only to steal more money, claiming that a secure server is connected and that the scammer cannot see the details. Many schemes involve convincing the victim to purchase expensive gift cards and then to divulge the card information to the scammer.
  • 1.0K
  • 21 Nov 2022
Topic Review
Training, Validation, and Test Sets
In machine learning, a common task is the study and construction of algorithms that can learn from and make predictions on data. Such algorithms function by making data-driven predictions or decisions, through building a mathematical model from input data. These input data used to build the model are usually divided into multiple data sets. In particular, three data sets are commonly used in different stages of the creation of the model: training, validation and test sets. The model is initially fit on a training data set, which is a set of examples used to fit the parameters (e.g. weights of connections between neurons in artificial neural networks) of the model. The model (e.g. a naive Bayes classifier) is trained on the training data set using a supervised learning method, for example using optimization methods such as gradient descent or stochastic gradient descent. In practice, the training data set often consists of pairs of an input vector (or scalar) and the corresponding output vector (or scalar), where the answer key is commonly denoted as the target (or label). The current model is run with the training data set and produces a result, which is then compared with the target, for each input vector in the training data set. Based on the result of the comparison and the specific learning algorithm being used, the parameters of the model are adjusted. The model fitting can include both variable selection and parameter estimation. Subsequently, the fitted model is used to predict the responses for the observations in a second data set called the validation data set. The validation data set provides an unbiased evaluation of a model fit on the training data set while tuning the model's hyperparameters (e.g. the number of hidden units—layers and layer widths—in a neural network). Validation datasets can be used for regularization by early stopping (stopping training when the error on the validation data set increases, as this is a sign of over-fitting to the training data set). This simple procedure is complicated in practice by the fact that the validation dataset's error may fluctuate during training, producing multiple local minima. This complication has led to the creation of many ad-hoc rules for deciding when over-fitting has truly begun. Finally, the test data set is a data set used to provide an unbiased evaluation of a final model fit on the training data set. If the data in the test data set has never been used in training (for example in cross-validation), the test data set is also called a holdout data set. The term "validation set" is sometimes used instead of "test set" in some literature (e.g., if the original data set was partitioned into only two subsets, the test set might be referred to as the validation set). Deciding the sizes and strategies for data set division in training, test and validation sets is very dependent on the problem and data available.
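A minimal sketch of such a three-way split, assuming scikit-learn is available: the 60/20/20 proportions, the iris data, and the logistic-regression model are arbitrary illustrative choices, but the discipline is the one described above (fit on the training set, tune on the validation set, report the test set once).

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)

# Hold out 20% as the final test set, then carve a validation set
# out of the remainder (0.25 * 0.8 = 0.2 of the original data).
X_trainval, X_test, y_trainval, y_test = train_test_split(
    X, y, test_size=0.20, random_state=0, stratify=y)
X_train, X_val, y_train, y_val = train_test_split(
    X_trainval, y_trainval, test_size=0.25, random_state=0, stratify=y_trainval)

# Fit on the training set only; use the validation set to compare
# hyperparameter settings; touch the test set once, at the very end.
best_model, best_val = None, -1.0
for c in (0.01, 0.1, 1.0, 10.0):                 # hyperparameter candidates
    model = LogisticRegression(C=c, max_iter=1000).fit(X_train, y_train)
    val_acc = model.score(X_val, y_val)
    if val_acc > best_val:
        best_model, best_val = model, val_acc

print("validation accuracy of chosen model:", best_val)
print("test accuracy (reported once):", best_model.score(X_test, y_test))
```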
  • 1.0K
  • 17 Oct 2022