Topic Review
AlphaServer
AlphaServer is a series of server computers, produced from 1994 onwards by Digital Equipment Corporation, and later by Compaq and HP. AlphaServers were based on the DEC Alpha 64-bit microprocessor. Supported operating systems for AlphaServers are Tru64 UNIX (formerly Digital UNIX), OpenVMS, MEDITECH MAGIC and Windows NT (on earlier systems, with AlphaBIOS ARC firmware), while enthusiasts have provided alternative operating systems such as Linux, NetBSD, OpenBSD and FreeBSD. The Alpha processor was also used in a line of workstations, the AlphaStation. Some AlphaServer models were rebadged in white enclosures, as so-called "white box" models, and sold as Digital Servers for the Windows NT server market. As part of HP's roadmap to phase out Alpha-, MIPS- and PA-RISC-based systems in favor of Itanium-based systems, the most recent AlphaServer systems reached their end of general availability on 27 April 2007. The availability of upgrades and options was discontinued on 25 April 2008, approximately one year after the systems were discontinued. Support for the most recent AlphaServer systems, the DS15A, DS25, ES45, ES47, ES80 and GS1280, is being provided by HP Services as of 2008. These systems are scheduled to reach end of support sometime during 2012, although HP has stated that this date may be delayed.
  • 474
  • 25 Oct 2022
Topic Review
Amazon Elastic Compute Cloud
Amazon Elastic Compute Cloud (EC2) is a part of Amazon.com's cloud-computing platform, Amazon Web Services (AWS), that allows users to rent virtual computers on which to run their own computer applications. EC2 encourages scalable deployment of applications by providing a web service through which a user can boot an Amazon Machine Image (AMI) to configure a virtual machine, which Amazon calls an "instance", containing any software desired. A user can create, launch, and terminate server-instances as needed, paying by the second for active servers – hence the term "elastic". EC2 provides users with control over the geographical location of instances that allows for latency optimization and high levels of redundancy. In November 2010, Amazon switched its own retail website platform to EC2 and AWS.
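As a rough sketch of the instance lifecycle described above, the snippet below uses boto3 (the AWS SDK for Python) to launch and then terminate a single instance. The AMI ID, instance type, and region are placeholders, and valid AWS credentials are assumed.

```python
# Illustrative sketch of the EC2 "instance" lifecycle using boto3.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch ("boot") one instance from an Amazon Machine Image (AMI).
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print("Launched instance:", instance_id)

# Terminate the instance when it is no longer needed; per-second billing
# stops once the instance leaves the running state.
ec2.terminate_instances(InstanceIds=[instance_id])
```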
  • 1.3K
  • 04 Nov 2022
Topic Review
Asperitas Microdatacenter
A microdatacenter is a small, self-contained data center consisting of computing, storage, networking, power and cooling. Micro data centers typically employ water cooling to achieve compactness, a low component count, low cost and high energy efficiency. Their small size allows decentralised deployment in places where traditional data centers cannot go, for instance edge computing for the Internet of Things.
  • 299
  • 01 Nov 2022
Topic Review
Associative Classification Method
Machine learning techniques are increasingly prevalent as datasets continue to grow. Associative classification (AC), which combines classification and association rule mining algorithms, plays an important role in understanding big datasets that generate a large number of rules. Clustering, on the other hand, can contribute by reducing the rule space to produce compact models. 
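A toy sketch of the general associative-classification idea follows: mine simple single-item class association rules from a small, made-up dataset, rank them by confidence, and classify new records with the first matching rule. Real AC algorithms such as CBA mine multi-item rules and prune them more carefully; the data and thresholds here are purely illustrative.

```python
# Toy associative classification: mine (feature=value -> class) rules,
# rank them by confidence, classify with the first matching rule.
from collections import defaultdict

# Hypothetical categorical training data: (features, class_label)
train = [
    ({"outlook": "sunny", "windy": "no"}, "play"),
    ({"outlook": "sunny", "windy": "yes"}, "stay"),
    ({"outlook": "rain",  "windy": "yes"}, "stay"),
    ({"outlook": "rain",  "windy": "no"}, "play"),
    ({"outlook": "sunny", "windy": "no"}, "play"),
]
MIN_SUPPORT, MIN_CONFIDENCE = 0.2, 0.6

# Count how often each (feature, value) item occurs, and with which class.
item_counts = defaultdict(int)
item_class_counts = defaultdict(int)
for features, label in train:
    for item in features.items():
        item_counts[item] += 1
        item_class_counts[(item, label)] += 1

# Keep rules meeting the support and confidence thresholds.
rules = []
for (item, label), joint in item_class_counts.items():
    support = joint / len(train)
    confidence = joint / item_counts[item]
    if support >= MIN_SUPPORT and confidence >= MIN_CONFIDENCE:
        rules.append((confidence, item, label))
rules.sort(reverse=True)  # highest-confidence rules are tried first

def classify(features, default="play"):
    for _, (attr, value), label in rules:
        if features.get(attr) == value:
            return label
    return default

print(classify({"outlook": "sunny", "windy": "yes"}))  # -> "stay"
```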
  • 846
  • 20 Sep 2022
Topic Review
Asynchronous Transfer Mode
Asynchronous Transfer Mode (ATM) is a telecommunications standard defined by ANSI and ITU (formerly CCITT) for digital transmission of multiple types of traffic, including telephony (voice), data, and video signals, in one network without the use of separate overlay networks. ATM was developed to meet the needs of the Broadband Integrated Services Digital Network, as defined in the late 1980s, and was designed to integrate telecommunication networks. It can handle both traditional high-throughput data traffic and real-time, low-latency content such as voice and video. ATM combines features of circuit-switching and packet-switching networks. It uses asynchronous time-division multiplexing and encodes data into small, fixed-size network packets. In the ISO-OSI reference model data link layer (layer 2), the basic transfer units are generically called frames. In ATM these frames have a fixed length of 53 octets (bytes) and are specifically called cells. This differs from approaches such as IP or Ethernet that use variable-sized packets or frames. ATM uses a connection-oriented model in which a virtual circuit must be established between two endpoints before the data exchange begins. These virtual circuits may be either permanent, i.e. dedicated connections that are usually preconfigured by the service provider, or switched, i.e. set up on a per-call basis using signaling and disconnected when the call is terminated. The ATM network reference model approximately maps to the three lowest layers of the OSI model: the physical layer, data link layer, and network layer. ATM is a core protocol used in the SONET/SDH backbone of the public switched telephone network (PSTN) and in the Integrated Services Digital Network (ISDN), but it has largely been superseded by next-generation networks based on Internet Protocol (IP) technology, while wireless and mobile ATM never established a significant foothold.
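The fixed 53-byte cell structure can be sketched as follows. This is an illustrative packing of a UNI cell header (GFC, VPI, VCI, payload type, CLP, HEC) plus a zero-padded 48-byte payload, not production code; the field values in the example call are arbitrary.

```python
# Rough sketch of packing an ATM UNI cell: 5-byte header + 48-byte payload.

def hec(header4: bytes) -> int:
    """CRC-8 (x^8 + x^2 + x + 1) over the first four header bytes, XOR 0x55."""
    crc = 0
    for byte in header4:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ 0x07) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc ^ 0x55

def pack_cell(gfc: int, vpi: int, vci: int, pt: int, clp: int, payload: bytes) -> bytes:
    if len(payload) > 48:
        raise ValueError("ATM cell payload is limited to 48 bytes")
    header = bytes([
        (gfc & 0x0F) << 4 | (vpi >> 4) & 0x0F,               # GFC + VPI high nibble
        (vpi & 0x0F) << 4 | (vci >> 12) & 0x0F,              # VPI low nibble + VCI high bits
        (vci >> 4) & 0xFF,                                    # VCI middle bits
        (vci & 0x0F) << 4 | (pt & 0x07) << 1 | clp & 0x01,    # VCI low bits + PT + CLP
    ])
    header += bytes([hec(header)])
    return header + payload.ljust(48, b"\x00")                # pad payload to 48 bytes

cell = pack_cell(gfc=0, vpi=1, vci=32, pt=0, clp=0, payload=b"hello")
assert len(cell) == 53  # every ATM cell is exactly 53 bytes
```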
  • 533
  • 27 Oct 2022
Topic Review
Atom (Web Standard)
The name Atom applies to a pair of related Web standards. The Atom Syndication Format is an XML language used for web feeds, while the Atom Publishing Protocol (AtomPub or APP) is a simple HTTP-based protocol for creating and updating web resources. Web feeds allow software programs to check for updates published on a website. To provide a web feed, the site owner may use specialized software (such as a content management system) that publishes a list (or "feed") of recent articles or content in a standardized, machine-readable format. The feed can then be downloaded by programs that use it, like websites that syndicate content from the feed, or by feed reader programs that allow internet users to subscribe to feeds and view their content. A feed contains entries, which may be headlines, full-text articles, excerpts, summaries or links to content on a website along with various metadata. The Atom format was developed as an alternative to RSS. Ben Trott, an advocate of the new format that became Atom, believed that RSS had limitations and flaws—such as lack of on-going innovation and its necessity to remain backward compatible—and that there were advantages to a fresh design. Proponents of the new format formed the IETF Atom Publishing Format and Protocol Workgroup. The Atom Syndication Format was published as an IETF proposed standard in RFC 4287 (December 2005), and the Atom Publishing Protocol was published as RFC 5023 (October 2007).
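A minimal Atom 1.0 feed, assembled with Python's standard library, might look like the sketch below; the ids, timestamps, and URLs are placeholders.

```python
# Sketch of a minimal Atom 1.0 feed (RFC 4287) built with the standard library.
import xml.etree.ElementTree as ET

ATOM_NS = "http://www.w3.org/2005/Atom"
ET.register_namespace("", ATOM_NS)

# A feed requires an id, a title, and an updated timestamp.
feed = ET.Element(f"{{{ATOM_NS}}}feed")
ET.SubElement(feed, f"{{{ATOM_NS}}}title").text = "Example Feed"
ET.SubElement(feed, f"{{{ATOM_NS}}}id").text = "urn:uuid:60a76c80-d399-11d9-b93c-0003939e0af6"
ET.SubElement(feed, f"{{{ATOM_NS}}}updated").text = "2022-11-02T18:30:02Z"

# Each entry likewise carries an id, a title, an updated timestamp, and a link.
entry = ET.SubElement(feed, f"{{{ATOM_NS}}}entry")
ET.SubElement(entry, f"{{{ATOM_NS}}}title").text = "First post"
ET.SubElement(entry, f"{{{ATOM_NS}}}id").text = "urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a"
ET.SubElement(entry, f"{{{ATOM_NS}}}updated").text = "2022-11-02T18:30:02Z"
ET.SubElement(entry, f"{{{ATOM_NS}}}link").set("href", "http://example.org/2022/11/02/first-post")

print(ET.tostring(feed, encoding="unicode"))
```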
  • 316
  • 02 Nov 2022
Topic Review
Atomic Commit
In the field of computer science, an atomic commit is an operation that applies a set of distinct changes as a single operation. If the changes are applied, the atomic commit is said to have succeeded. If there is a failure before the atomic commit can be completed, then all of the changes made as part of the atomic commit are reversed. This ensures that the system is always left in a consistent state. Atomic commits also provide isolation: because they are atomic operations, only one atomic commit is processed at a time. The most common uses of atomic commits are in database systems and version control systems. The problem with atomic commits is that they require coordination between multiple systems. Because computer networks are unreliable services, no algorithm can guarantee coordination among all systems, as shown by the Two Generals Problem. As databases become more and more distributed, this coordination increases the difficulty of making truly atomic commits.
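A common concrete example is a database transaction. In the sketch below, using Python's built-in sqlite3 module, either both account updates take effect or, if an error occurs, neither does; the schema and amounts are illustrative.

```python
# Sketch of an atomic commit: transfer funds so that both updates succeed
# together or are rolled back together, leaving the database consistent.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")
conn.commit()

try:
    with conn:  # opens a transaction; commits on success, rolls back on error
        conn.execute("UPDATE accounts SET balance = balance - 50 WHERE name = 'alice'")
        conn.execute("UPDATE accounts SET balance = balance + 50 WHERE name = 'bob'")
        # raise RuntimeError("simulated crash")  # uncomment: both updates are undone
except RuntimeError:
    pass

print(conn.execute("SELECT * FROM accounts ORDER BY name").fetchall())
# [('alice', 50), ('bob', 50)]
```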
  • 1.1K
  • 07 Nov 2022
Topic Review
Bayesian Nonlinear Mixed Effects Models
Nonlinear mixed effects models have become a standard platform for analysis when data take the form of continuous, repeated measurements of subjects from a population of interest, and the temporal profiles of subjects commonly follow a nonlinear trend. While frequentist analysis of nonlinear mixed effects models has a long history, Bayesian analysis of these models received comparatively little attention until the late 1980s, primarily because of the time-consuming nature of Bayesian computation. Since the early 1990s, Bayesian approaches for these models began to emerge, leveraging rapid developments in computing power, and have recently received significant attention due to (1) their superior ability to quantify the uncertainty of parameter estimates; (2) their capacity to incorporate prior knowledge into the models; and (3) their flexibility in matching the increasing complexity of scientific research arising from diverse industrial and academic fields. 
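A generic two-stage specification of such a model, with an illustrative choice of priors, can be written as in the sketch below; the notation is a common convention assumed here, not taken from the entry itself.

```latex
% Illustrative Bayesian nonlinear mixed effects specification
\begin{align*}
  y_{ij} &= f(t_{ij}, \theta_i) + \varepsilon_{ij},
      & \varepsilon_{ij} &\sim \mathcal{N}(0, \sigma^2)
      && \text{(stage 1: repeated measurements of subject $i$ at times $t_{ij}$)} \\
  \theta_i &= \mu + b_i,
      & b_i &\sim \mathcal{N}(0, \Omega)
      && \text{(stage 2: between-subject variation around population parameters)} \\
  \mu &\sim \mathcal{N}(\mu_0, \Sigma_0),
      & \Omega &\sim \mathcal{IW}(\nu_0, \Lambda_0), \quad \sigma^2 \sim \mathcal{IG}(a_0, b_0)
      && \text{(stage 3: priors encoding prior knowledge)}
\end{align*}
```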
  • 898
  • 23 Mar 2022
Topic Review
Big Data Mining
Big data mining (BDM) is an approach that applies cumulative data mining or extraction techniques to large datasets / volumes of data. It is mainly focused on retrieving relevant, sought-after information (or patterns) and thereby extracting the value hidden in data of an immense volume. BDM draws from the conventional data mining notion but also incorporates the aspects of big data, i.e. it makes it possible to acquire useful information from databases or data streams that are huge in terms of the "big data V's", such as volume, velocity, and variety.
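As a toy illustration of mining data that are too large to hold in memory, the sketch below counts item frequencies over a simulated transaction stream, processing it in bounded chunks; the data, items, and chunk size are made up.

```python
# Toy illustration of incremental mining over a high-volume stream:
# patterns (here, single items) are counted chunk by chunk, so the full
# dataset is never materialized in memory.
from collections import Counter
from itertools import islice
import random

def transaction_stream(n):
    """Simulate a high-velocity stream of transactions (synthetic data)."""
    items = ["milk", "bread", "beer", "diapers", "eggs"]
    for _ in range(n):
        yield random.sample(items, k=random.randint(1, 3))

counts = Counter()
stream = transaction_stream(100_000)
while True:
    chunk = list(islice(stream, 10_000))   # process the stream in bounded chunks
    if not chunk:
        break
    for transaction in chunk:
        counts.update(transaction)

print(counts.most_common(3))   # most frequent items across the whole stream
```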
  • 5.8K
  • 05 Aug 2021
Topic Review
Blockchain Enabled Cyber-Physical Systems
Cyber-physical systems (CPS) control and monitor the physical world around us. Advancing these systems requires an explicit focus on making them efficient. Blockchains and their inherent combination of consensus algorithms, distributed data storage, and secure protocols can be utilized to build robustness and reliability into these systems. Blockchain is the underlying technology behind Bitcoin, and it provides a decentralized framework to validate transactions and ensure that they cannot be modified. 
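The tamper-evidence property can be sketched with a minimal hash chain, as below. This toy example omits consensus, networking, and digital signatures, and the sensor records are made up.

```python
# Minimal hash-chain sketch: each block stores the hash of its predecessor,
# so modifying an earlier record breaks verification of the chain.
import hashlib
import json

def block_hash(block):
    # Hash the block contents (including the previous block's hash).
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, transactions):
    prev_hash = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev_hash, "transactions": transactions})

def verify(chain):
    # Each block must reference the hash of the block before it.
    for prev, current in zip(chain, chain[1:]):
        if current["prev_hash"] != block_hash(prev):
            return False
    return True

chain = []
append_block(chain, [{"sensor": "valve-7", "reading": 3.2}])
append_block(chain, [{"sensor": "valve-7", "reading": 3.4}])
print(verify(chain))                            # True
chain[0]["transactions"][0]["reading"] = 9.9    # tamper with an earlier record
print(verify(chain))                            # False: the chain no longer validates
```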
  • 797
  • 25 Apr 2022