Topic Review
Microsoft DoubleSpace FAT
DriveSpace (initially known as DoubleSpace) is a disk compression utility supplied with MS-DOS starting from version 6.0 in 1993 and ending in 2000 with the release of Windows Me. The purpose of DriveSpace is to increase the amount of data users can store on disks by transparently compressing and decompressing data on the fly. It is primarily intended for hard drives, but floppy disks are also supported. The feature was removed in Windows XP and later.
  • 442
  • 16 Nov 2022
Topic Review
BOSH
BOSH is an open-source software project that offers a toolchain for release engineering, software deployment and application lifecycle management of large-scale distributed services. The toolchain is made up of a server (the BOSH Director) and a command line tool. BOSH is typically used to package, deploy and manage cloud software. While BOSH was initially developed by VMware in 2010 to deploy the Cloud Foundry PaaS, it can be used to deploy other software as well (such as Hadoop, RabbitMQ, or MySQL). BOSH is designed to manage the whole lifecycle of large distributed systems. Since March 2016, BOSH can manage deployments on both Microsoft Windows and Linux servers. A BOSH Director communicates with a single Infrastructure as a Service (IaaS) provider to manage the underlying networking and virtual machines (VMs) or containers. Several IaaS providers are supported: Amazon Web Services EC2, Apache CloudStack, Google Compute Engine, Microsoft Azure, OpenStack, and VMware vSphere. To support additional underlying IaaS providers, BOSH uses the concept of a Cloud Provider Interface (CPI); there is an implementation of the CPI for each of the IaaS providers listed above. Typically the CPI is used to deploy VMs, but it can also deploy containers. Few CPIs exist for deploying containers with BOSH, and only one is actively supported: it deploys Pivotal Software's Garden containers (Garden is very similar to Docker) on a single virtual machine run by VirtualBox or VMware Workstation. In principle, any other container engine could be supported if the necessary CPIs were developed. Because BOSH supports deployments on VMs and containers alike, it uses the generic term "instances" for both; it is up to the CPI to decide whether a BOSH "instance" is actually a VM or a container.
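To illustrate the CPI idea described above, here is a minimal Python sketch: one abstract interface per IaaS, so a Director-like component can stay provider-agnostic. The method names echo common CPI operations (create_vm, delete_vm), but the class and the in-memory provider are illustrative assumptions, not part of the actual BOSH codebase or its API.

```python
# Illustrative sketch of the CPI concept: an abstract interface a
# BOSH-style director could delegate to. Not actual BOSH code.
from abc import ABC, abstractmethod


class CloudProviderInterface(ABC):
    """Operations delegated to an IaaS-specific adapter."""

    @abstractmethod
    def create_vm(self, stemcell_id: str, cloud_properties: dict) -> str:
        """Create an instance (VM or container) and return its ID."""

    @abstractmethod
    def delete_vm(self, vm_id: str) -> None:
        """Destroy the instance with the given ID."""


class InMemoryCPI(CloudProviderInterface):
    """Toy provider that only records what it was asked to do."""

    def __init__(self) -> None:
        self._vms: dict[str, dict] = {}
        self._counter = 0

    def create_vm(self, stemcell_id: str, cloud_properties: dict) -> str:
        self._counter += 1
        vm_id = f"vm-{self._counter}"
        self._vms[vm_id] = {"stemcell": stemcell_id, **cloud_properties}
        return vm_id

    def delete_vm(self, vm_id: str) -> None:
        self._vms.pop(vm_id, None)


if __name__ == "__main__":
    cpi = InMemoryCPI()
    instance = cpi.create_vm("ubuntu-jammy-1.2", {"instance_type": "small"})
    print("created", instance)  # created vm-1
    cpi.delete_vm(instance)
```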
  • 441
  • 28 Nov 2022
Topic Review
Marketing and Artificial Intelligence
The fields of marketing and artificial intelligence converge in systems which assist in areas such as market forecasting, and automation of processes and decision making, along with increased efficiency of tasks which would usually be performed by humans. The science behind these systems can be explained through neural networks and expert systems, computer programs that process input and provide valuable output for marketers. Artificial intelligence systems stemming from social computing technology can be applied to understand social networks on the Web. Data mining techniques can be used to analyze different types of social networks. This analysis helps a marketer to identify influential actors or nodes within networks, information which can then be applied to take a societal marketing approach.
  • 436
  • 21 Nov 2022
Topic Review
Infinite-dimensional Vector Function
An infinite-dimensional vector function is a function whose values lie in an infinite-dimensional topological vector space, such as a Hilbert space or a Banach space. Such functions are applied in most sciences including physics.
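A standard illustration (not taken from the entry above) is a curve with values in the Hilbert space ℓ² of square-summable sequences:

```latex
% Example of an infinite-dimensional vector function: a curve in \ell^2.
f \colon \mathbb{R} \to \ell^2, \qquad
f(t) = \Bigl(t,\ \tfrac{t}{2},\ \tfrac{t}{3},\ \dots,\ \tfrac{t}{k},\ \dots\Bigr),
\qquad
\|f(t)\|_{\ell^2}^{2} = \sum_{k=1}^{\infty} \frac{t^{2}}{k^{2}}
  = \frac{\pi^{2}}{6}\, t^{2} < \infty .
```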
  • 431
  • 15 Nov 2022
Topic Review
Serial Number Arithmetic
Many protocols and algorithms require the serialization or enumeration of related entities. For example, a communication protocol must know whether some packet comes "before" or "after" some other packet. The IETF (Internet Engineering Task Force) RFC 1982 attempts to define "Serial Number Arithmetic" for the purposes of manipulating and comparing these sequence numbers. This task is rather more complex than it might first appear, because most algorithms use fixed-size (binary) representations for sequence numbers. It is often important for the algorithm not to "break down" when the numbers become so large that they are incremented one last time and "wrap" around their maximum numeric ranges (going instantly from a large positive number to 0 or to a large negative number). Unfortunately, some protocols choose to ignore these issues and simply use very large integers for their counters, in the hope that the program will be replaced (or the designers will retire) before the problem occurs (see Y2K). Many communication protocols apply serial number arithmetic to packet sequence numbers in their implementation of a sliding window protocol. Some versions of TCP use protection against wrapped sequence numbers (PAWS). PAWS applies the same serial number arithmetic to packet timestamps, using the timestamp as an extension of the high-order bits of the sequence number.
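A small Python sketch of the RFC 1982 comparison rule for n-bit serial numbers may make the wrap-around behaviour concrete. The function names are ours, but the arithmetic follows the RFC: values are added modulo 2^n, and i1 is "before" i2 when the forward distance from i1 to i2 is less than half the number space.

```python
# Sketch of RFC 1982 serial number arithmetic for fixed-size counters.
SERIAL_BITS = 32
MOD = 1 << SERIAL_BITS
HALF = 1 << (SERIAL_BITS - 1)


def serial_add(s: int, n: int) -> int:
    """Add n to serial s, wrapping modulo 2**SERIAL_BITS.

    RFC 1982 only defines addition for 0 <= n <= 2**(SERIAL_BITS - 1) - 1.
    """
    if not 0 <= n <= HALF - 1:
        raise ValueError("increment out of the range allowed by RFC 1982")
    return (s + n) % MOD


def serial_lt(i1: int, i2: int) -> bool:
    """True if i1 comes 'before' i2 in the RFC 1982 ordering."""
    return (i1 < i2 and i2 - i1 < HALF) or (i1 > i2 and i1 - i2 > HALF)


if __name__ == "__main__":
    near_max = MOD - 1                   # a sequence number about to wrap
    wrapped = serial_add(near_max, 10)   # wraps around to 9
    print(wrapped)                       # 9
    print(serial_lt(near_max, wrapped))  # True: the wrapped value still sorts 'after'
```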
  • 427
  • 11 Oct 2022
Topic Review
Cloud Digital Forensics
Cloud computing technology is rapidly becoming ubiquitous and indispensable. Despite the multiple advantages the cloud offers, organizations remain cautious about migrating their data and applications to the cloud due to fears of data breaches and security compromises.
  • 425
  • 18 Jan 2024
Topic Review
Microsoft DoubleSpace BIOS Parameter Block
DriveSpace (initially known as DoubleSpace) is a disk compression utility supplied with MS-DOS starting from version 6.0 in 1993 and ending in 2000 with the release of Windows Me. The purpose of DriveSpace is to increase the amount of data users can store on disks by transparently compressing and decompressing data on the fly. It is primarily intended for hard drives, but floppy disks are also supported. The feature was removed in Windows XP and later.
  • 422
  • 26 Oct 2022
Topic Review
Probabilistic Soft Logic
Probabilistic Soft Logic (PSL) is a statistical relational learning (SRL) framework for modeling probabilistic and relational domains. It is applicable to a variety of machine learning problems, such as collective classification, entity resolution, link prediction, and ontology alignment. PSL combines two tools: first-order logic, with its ability to succinctly represent complex phenomena, and probabilistic graphical models, which capture the uncertainty and incompleteness inherent in real-world knowledge. More specifically, PSL uses "soft" logic as its logical component and Markov random fields as its statistical model. PSL provides sophisticated inference techniques for finding the most likely answer (i.e. the maximum a posteriori (MAP) state). The "softening" of the logical formulas makes inference a polynomial time operation rather than an NP-hard operation.
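To make the "softening" concrete, here is a short Python sketch of the Łukasiewicz-style relaxations that PSL-like soft logic typically uses for conjunction, disjunction and negation, together with a rule's distance to satisfaction. The function names and the Friends/Smokes example rule are illustrative assumptions, not the PSL library's API.

```python
# Illustrative Lukasiewicz-style soft logic as used in PSL-like frameworks.
# Truth values are reals in [0, 1]; names and the example rule are ours.
def soft_and(a: float, b: float) -> float:
    return max(0.0, a + b - 1.0)


def soft_or(a: float, b: float) -> float:
    return min(1.0, a + b)


def soft_not(a: float) -> float:
    return 1.0 - a


def distance_to_satisfaction(body: float, head: float) -> float:
    """How far a rule body -> head is from being satisfied (0 = satisfied)."""
    return max(0.0, body - head)


if __name__ == "__main__":
    # Toy rule: Friends(A, B) AND Smokes(A) -> Smokes(B)
    friends_ab, smokes_a, smokes_b = 0.9, 0.8, 0.4
    body = soft_and(friends_ab, smokes_a)            # 0.7
    print(distance_to_satisfaction(body, smokes_b))  # 0.3, penalized in the MAP objective
```

Because these relaxations are piecewise-linear (hinge) functions, minimizing the total distance to satisfaction is a convex problem, which is why MAP inference becomes polynomial-time rather than NP-hard.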
  • 422
  • 21 Nov 2022
Topic Review
Incompatible Timesharing System
Incompatible Timesharing System (ITS) is a time-sharing operating system developed principally by the MIT Artificial Intelligence Laboratory, with help from Project MAC. The name is the jocular complement of the MIT Compatible Time-Sharing System (CTSS). ITS, and the software developed on it, were technically and culturally influential far beyond their core user community. Remote "guest" or "tourist" access was easily available via the early ARPAnet, allowing many interested parties to informally try out features of the operating system and application programs. The wide-open ITS philosophy and collaborative online community were a major influence on the hacker culture, as described in Steven Levy's book Hackers, and were the direct forerunners of the free and open-source software, open-design, and Wiki movements.
  • 421
  • 04 Nov 2022
Topic Review
Augmented and Virtual Reality Exergames for Elderly People
Augmented and virtual reality (AR/VR) can be used in the context of exergames to train motor and cognitive skills in the elderly population for health improvement.
  • 420
  • 04 Feb 2024
Topic Review
Patch Verb
In computing, the PATCH method is a request method supported by the Hypertext Transfer Protocol (HTTP) for making partial changes to an existing resource. The PATCH method provides an entity containing a list of changes to be applied to the resource identified by the HTTP Uniform Resource Identifier (URI). The list of changes is supplied in the form of a PATCH document. If the requested resource does not exist, the server may create it, depending on the PATCH document media type and permissions. The changes described in the PATCH document must be semantically well defined, but they can have a different media type than the resource being patched. Formats such as XML or JSON can be used to describe the changes in the PATCH document.
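As a sketch, the following Python snippet builds a PATCH request carrying a JSON Patch (RFC 6902) document using only the standard library. The endpoint URL and the fields being changed are hypothetical examples, so the actual request is left commented out.

```python
# Sketch of an HTTP PATCH request carrying a JSON Patch document.
# The URL and the fields being changed are hypothetical.
import json
import urllib.request

patch_document = [
    {"op": "replace", "path": "/email", "value": "new@example.com"},
    {"op": "remove", "path": "/nickname"},
]

request = urllib.request.Request(
    "https://api.example.com/users/42",  # hypothetical resource URI
    data=json.dumps(patch_document).encode("utf-8"),
    method="PATCH",
    headers={"Content-Type": "application/json-patch+json"},
)

# with urllib.request.urlopen(request) as response:
#     print(response.status)  # e.g. 200 or 204 if the partial update succeeded
```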
  • 419
  • 14 Nov 2022
Topic Review
Plural Quantification
In mathematics and logic, plural quantification is the theory that an individual variable x may take on plural, as well as singular, values. As well as substituting individual objects such as Alice, the number 1, the tallest building in London, etc., for x, we may substitute both Alice and Bob, or all the numbers between 0 and 10, or all the buildings in London over 20 stories. The point of the theory is to give first-order logic the power of set theory, but without any "existential commitment" to such objects as sets. The classic expositions are Boolos 1984 and Lewis 1991.
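A standard illustration (the Geach–Kaplan sentence, not quoted in the entry above) is "Some critics admire only one another." One common rendering uses a plural variable xx, with "≺" read as "is one of":

```latex
% The Geach–Kaplan sentence "Some critics admire only one another",
% rendered with a plural variable xx ("\prec" read as "is one of").
\exists xx\, \bigl[\, \exists x\, (x \prec xx)
  \;\wedge\; \forall x\, \bigl(x \prec xx \rightarrow \mathrm{Critic}(x)\bigr)
  \;\wedge\; \forall x\, \forall y\, \bigl( (x \prec xx \wedge \mathrm{Admires}(x,y))
      \rightarrow (x \neq y \;\wedge\; y \prec xx) \bigr) \,\bigr]
```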
  • 417
  • 08 Oct 2022
Topic Review
National Council of Teachers of Mathematics
Founded in 1920, the National Council of Teachers of Mathematics (NCTM) is the world's largest mathematics education organization. NCTM holds annual national and regional conferences for teachers and publishes five journals.
  • 414
  • 10 Nov 2022
Topic Review
Feature Extracted Deep Neural Collaborative Filtering
The electronic publication market is growing along with the electronic commerce market. Electronic publishing companies use recommendation systems to recommend various services to consumers and increase sales. However, due to data sparsity, these recommendation systems have low accuracy. In addition, previous deep neural collaborative filtering models utilize many dataset variables, such as user information, author information, and book information, and therefore have the disadvantage of requiring significant computing resources and long training times.
  • 412
  • 27 Jul 2023
Topic Review
Seismic Data Query Algorithm
Edge computing can reduce the transmission pressure on wireless networks during earthquakes by pushing computing functionality to network edges and avoiding data transmission to cloud servers. However, this also leads to data content being scattered across individual edge servers, increasing the difficulty of content search.
  • 411
  • 29 Jun 2023
Topic Review
Multi-Eye to Robot Indoor Calibration Dataset
The METRIC dataset comprises more than 10,000 synthetic and real images of ChAruCo and checkerboard patterns. Each pattern is securely attached to the robot's end-effector, which is systematically moved in front of four cameras surrounding the manipulator. This movement allows for image acquisition from various viewpoints. The real images in the dataset encompass multiple sets of images captured by three distinct types of sensor networks: Microsoft Kinect V2, Intel RealSense Depth D455, and Intel RealSense Lidar L515. The purpose of including these images is to evaluate the advantages and disadvantages of each sensor network for calibration purposes. Additionally, to accurately assess the impact of the distance between the camera and robot on calibration, researchers obtained a comprehensive synthetic dataset. This dataset contains associated ground truth data and is divided into three different camera network setups, corresponding to three levels of calibration difficulty based on the cell size.
  • 411
  • 09 Jun 2023
Topic Review
Real-Time Sensing in Smart Cities
To aid urban planners and residents in understanding the nuances of day-to-day urban dynamics, we actively pursue the improvement of data visualisation tools that can adapt to changing conditions. An architecture was created and implemented that ensures secure and easy connectivity between various sources, such as a network of Internet of Things (IoT) devices, so that their data can be merged with crowdsensing data and used efficiently.
  • 407
  • 21 Feb 2024
Topic Review
CSC Version 6.0
The Center for Internet Security Critical Security Controls Version 6.0 was released October 15, 2015.
  • 405
  • 09 Nov 2022
Topic Review
Power (Statistics)
The power of a binary hypothesis test is the probability that the test rejects the null hypothesis (H0) when a specific alternative hypothesis (H1) is true. The statistical power ranges from 0 to 1, and as statistical power increases, the probability of making a type II error (wrongly failing to reject the null hypothesis) decreases. For a type II error probability of β, the corresponding statistical power is 1 − β. For example, if experiment 1 has a statistical power of 0.7, and experiment 2 has a statistical power of 0.95, then there is a stronger probability that experiment 1 had a type II error than experiment 2, and experiment 2 is more reliable than experiment 1 due to its lower probability of a type II error. Power can equivalently be thought of as the probability of accepting the alternative hypothesis (H1) when it is true, that is, the ability of a test to detect a specific effect, if that specific effect actually exists. If H1 is not an equality but simply the negation of H0 (for example, with H0: μ = 0 for some unobserved population parameter μ, we have simply H1: μ ≠ 0), then power cannot be calculated unless probabilities are known for all possible values of the parameter that violate the null hypothesis. Thus one generally refers to a test's power against a specific alternative hypothesis. As the power increases, the probability of a type II error, also referred to as the false negative rate (β), decreases, since the power is equal to 1 − β. A similar concept is the type I error probability, also referred to as the "false positive rate" or the level of a test under the null hypothesis. Power analysis can be used to calculate the minimum sample size required so that one can be reasonably likely to detect an effect of a given size. For example: "how many times do I need to toss a coin to conclude it is rigged by a certain amount?" Power analysis can also be used to calculate the minimum effect size that is likely to be detected in a study using a given sample size. In addition, the concept of power is used to make comparisons between different statistical testing procedures: for example, between a parametric test and a nonparametric test of the same hypothesis. In the context of binary classification, the power of a test is called its statistical sensitivity, its true positive rate, or its probability of detection.
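As a sketch of the idea, the Python snippet below computes the power of a two-sided one-sample z-test (known σ) against a specific alternative μ1, using SciPy's normal distribution. The function name and the numbers are illustrative, not from the entry above.

```python
# Sketch: power of a two-sided one-sample z-test against a specific
# alternative mu1, with known sigma. Numbers below are illustrative.
from scipy.stats import norm


def z_test_power(mu0: float, mu1: float, sigma: float, n: int,
                 alpha: float = 0.05) -> float:
    """P(reject H0: mu = mu0 | true mean is mu1) = 1 - beta."""
    z_crit = norm.ppf(1 - alpha / 2)            # two-sided critical value
    shift = abs(mu1 - mu0) * n ** 0.5 / sigma   # standardized effect * sqrt(n)
    return norm.cdf(shift - z_crit) + norm.cdf(-shift - z_crit)


if __name__ == "__main__":
    # Detecting a mean shift of 0.5 sigma with n = 50 observations:
    print(round(z_test_power(mu0=0.0, mu1=0.5, sigma=1.0, n=50), 3))  # about 0.94
```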
  • 404
  • 28 Oct 2022
Topic Review
Power Set
In mathematics, the power set (or powerset) of any set S is the set of all subsets of S, including the empty set and S itself. It is variously denoted as P(S), 𝒫(S), ℘(S) (using the "Weierstrass p"), ℙ(S), or, identifying the power set of S with the set of all functions from S to a given set of two elements, 2^S. In axiomatic set theory (as developed, for example, in the ZFC axioms), the existence of the power set of any set is postulated by the axiom of power set. Any subset of P(S) is called a family of sets over S.
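As a small sketch, the power set of a finite set can be enumerated with the standard itertools recipe; the function name here is ours.

```python
# Sketch: enumerate the power set of a finite set with itertools.
from itertools import chain, combinations


def power_set(s):
    """Yield every subset of s (including the empty set and s itself)."""
    items = list(s)
    return chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1)
    )


if __name__ == "__main__":
    print(list(power_set({1, 2, 3})))
    # [(), (1,), (2,), (3,), (1, 2), (1, 3), (2, 3), (1, 2, 3)]
    # 2**3 = 8 subsets (element order within tuples may vary)
```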
  • 404
  • 04 Nov 2022