Topic Review
Timeline of Computing 1950–79
This article presents a detailed timeline of events in the history of computing from 1950 to 1979. For narratives explaining the overall developments, see the History of computing.
  • 1.8K
  • 17 Oct 2022
Topic Review
Convolutional Sparse Coding
The convolutional sparse coding paradigm is an extension of the global sparse coding model, in which a redundant dictionary is modeled as a concatenation of circulant matrices. While the global sparsity constraint describes a signal $\mathbf{x}\in \mathbb{R}^{N}$ as a linear combination of a few atoms in the redundant dictionary $\mathbf{D}\in\mathbb{R}^{N\times M}$, $M\gg N$, usually expressed as $\mathbf{x}=\mathbf{D}\mathbf{\Gamma}$ for a sparse vector $\mathbf{\Gamma}\in \mathbb{R}^{M}$, the alternative dictionary structure adopted by the convolutional sparse coding model allows the sparsity prior to be applied locally instead of globally: independent patches of $\mathbf{x}$ are generated by "local" dictionaries operating over stripes of $\mathbf{\Gamma}$. The local sparsity constraint admits stronger uniqueness and stability conditions than the global sparsity prior, and has been shown to be a versatile tool for inverse problems in fields such as image understanding and computer vision. A recently proposed multi-layer extension of the model has also shown conceptual benefits for more complex signal decompositions, as well as a tight connection to the convolutional neural network model, allowing a deeper understanding of how the latter operates.
  • 1.8K
  • 24 Nov 2022
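A minimal sketch of the generative model above, in pure Python with illustrative names (`circulant`, `reconstruct` are not from any library): a dictionary built as a concatenation of circulant matrices, applied to a sparse code with only two active atoms.

```python
# Sketch: a convolutional dictionary as a concatenation of circulant blocks.
# All names and the example filters are illustrative, not from any library.

def circulant(kernel, n):
    """n x n circulant matrix whose columns are cyclic shifts of `kernel`."""
    col = kernel + [0.0] * (n - len(kernel))
    return [[col[(i - j) % n] for j in range(n)] for i in range(n)]

def reconstruct(D, gamma):
    """x = D @ gamma: each signal entry sums over the few active atoms."""
    n, m = len(D), len(D[0])
    return [sum(D[i][j] * gamma[j] for j in range(m)) for i in range(n)]

N = 8
# Dictionary: two circulant blocks built from two local filters (M = 2N > N).
D1 = circulant([1.0, 0.5], N)
D2 = circulant([0.0, 1.0, -1.0], N)
D = [row1 + row2 for row1, row2 in zip(D1, D2)]  # N x 2N

# Sparse code: only two of the 2N atoms are active.
gamma = [0.0] * (2 * N)
gamma[0] = 2.0      # first filter, shift 0
gamma[N + 3] = 1.0  # second filter, shift 3

x = reconstruct(D, gamma)  # each local patch of x is shaped by a local filter
```

Because each column of a circulant block is a shifted copy of one local filter, a stripe of `gamma` only influences a local patch of `x`, which is exactly the local sparsity structure the entry describes.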
Topic Review
Value Co-Creation in Digital Innovation Ecosystems
The innovation ecosystem guides the transition from individual value creation to multi-actor value co-creation by coordinating the interests of multiple parties for cross-border cooperation, enhancing the efficiency of technological innovation and resource integration within the system.
  • 1.8K
  • 16 May 2023
Topic Review
Token Ring
Token Ring is a computer networking technology used to build local area networks. It uses a special three-byte frame called a token that travels around a logical ring of workstations or servers. This token passing is a channel access method providing fair access for all stations, and eliminating the collisions of contention-based access methods. There were several other earlier implementations of token-passing networks. Token Ring was introduced by IBM in 1984, and standardized in 1989 as IEEE 802.5. It was a successful technology, particularly in corporate environments, but was gradually eclipsed by the later versions of Ethernet.
  • 1.7K
  • 10 Oct 2022
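The channel access method described above can be sketched in a few lines of Python: because only the station currently holding the token may transmit, access is fair and collision-free by construction. Station names and queues here are illustrative.

```python
# Toy model of token passing: the token visits stations in a fixed logical
# ring order, and only the current holder may send one queued frame.

def token_ring_rounds(stations, queued, rounds=1):
    """Pass the token around the ring `rounds` times; log every transmission."""
    sent = []
    for _ in range(rounds):
        for s in stations:            # the token travels the ring in order
            if queued.get(s):         # only the token holder transmits
                sent.append((s, queued[s].pop(0)))
    return sent

stations = ["A", "B", "C"]
queued = {"A": ["a1", "a2"], "C": ["c1"]}
log = token_ring_rounds(stations, queued, rounds=2)
```

Note how station A, despite having two frames queued, must wait a full rotation before sending its second frame, which is the fairness property contention-based methods such as classic Ethernet do not guarantee.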
Topic Review
SIMD
Single instruction, multiple data (SIMD) is a type of parallel processing in Flynn's taxonomy. SIMD can be internal (part of the hardware design) or directly accessible through an instruction set architecture (ISA), but it should not be confused with an ISA itself. SIMD describes computers with multiple processing elements that perform the same operation on multiple data points simultaneously. Such machines exploit data-level parallelism, but not concurrency: there are simultaneous (parallel) computations, but each unit performs the exact same instruction at any given moment (just with different data). SIMD is particularly applicable to common tasks such as adjusting the contrast in a digital image or adjusting the volume of digital audio. Most modern CPU designs include SIMD instructions to improve multimedia performance. SIMD has three different subcategories in Flynn's 1972 taxonomy, one of which is SIMT. SIMT should not be confused with software threads or hardware threads, both of which are task time-sharing (time-slicing). SIMT is true simultaneous parallel hardware-level execution.
  • 1.7K
  • 23 Nov 2022
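The lockstep semantics described above can be modeled in a short Python sketch: one operation applied identically across every lane of two "vector registers". Real SIMD hardware does this in a single instruction; the list comprehensions here only illustrate the semantics.

```python
# Conceptual model of SIMD: the same instruction (add, or scale) is applied
# to every lane pair of two vector registers. Pure-Python illustration only.

def simd_add(a, b):
    """One conceptual instruction: lane-wise add across equal-width registers."""
    assert len(a) == len(b), "vector registers must have equal lane counts"
    return [x + y for x, y in zip(a, b)]

def simd_scale(pixels, factor):
    """The contrast/volume example from the text: one factor, every lane."""
    return [min(255, int(p * factor)) for p in pixels]

reg = simd_add([1, 2, 3, 4], [10, 20, 30, 40])
bright = simd_scale([10, 100, 200], 1.5)   # saturating 8-bit scale
```

Every lane executes the exact same operation at the same conceptual moment, just on different data, which is the data-level parallelism (without concurrency) the entry describes.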
Topic Review
Syslog
In computing, syslog /ˈsɪslɒɡ/ is a standard for message logging. It allows separation of the software that generates messages, the system that stores them, and the software that reports and analyzes them. Each message is labeled with a facility code, indicating the type of system generating the message, and is assigned a severity level. Computer system designers may use syslog for system management and security auditing as well as general informational, analysis, and debugging messages. A wide variety of devices, such as printers, routers, and message receivers across many platforms use the syslog standard. This permits the consolidation of logging data from different types of systems in a central repository. Implementations of syslog exist for many operating systems. When operating over a network, syslog uses a client-server architecture where a syslog server listens for and logs messages coming from clients.
  • 1.6K
  • 19 Oct 2022
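The facility code and severity level mentioned above are combined into the numeric priority (PRI) that prefixes every syslog message: PRI = facility × 8 + severity, per RFC 5424 (and the earlier RFC 3164). A minimal sketch, with timestamp and hostname fields omitted for brevity:

```python
# Syslog priority: PRI = facility * 8 + severity (RFC 5424).
# Subset of the standard facility and severity tables shown here.

FACILITY = {"kern": 0, "user": 1, "mail": 2, "daemon": 3, "auth": 4, "local0": 16}
SEVERITY = {"emerg": 0, "alert": 1, "crit": 2, "err": 3,
            "warning": 4, "notice": 5, "info": 6, "debug": 7}

def pri(facility, severity):
    """Numeric priority combining the facility code and severity level."""
    return FACILITY[facility] * 8 + SEVERITY[severity]

def format_msg(facility, severity, tag, text):
    """Simplified BSD-style framing: '<PRI>TAG: message' (no timestamp/host)."""
    return f"<{pri(facility, severity)}>{tag}: {text}"

line = format_msg("auth", "err", "sshd", "failed login")
```

A collector receiving `line` can recover both the type of system that generated the message (facility 4, auth) and its severity (3, error) from the single PRI value, which is what makes centralized consolidation across device types practical.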
Topic Review
Asynchronous Transfer Mode
Asynchronous Transfer Mode (ATM) is a telecommunications standard defined by ANSI and ITU (formerly CCITT) for digital transmission of multiple types of traffic, including telephony (voice), data, and video signals in one network without the use of separate overlay networks. ATM was developed to meet the needs of the Broadband Integrated Services Digital Network, as defined in the late 1980s, and designed to integrate telecommunication networks. It can handle both traditional high-throughput data traffic and real-time, low-latency content such as voice and video. ATM provides functionality that uses features of both circuit switching and packet switching networks. It uses asynchronous time-division multiplexing, and encodes data into small, fixed-sized network packets. In the ISO-OSI reference model data link layer (layer 2), the basic transfer units are generically called frames. In ATM these frames are of a fixed length (53 octets or bytes) and are specifically called cells. This differs from approaches such as IP or Ethernet that use variable-sized packets or frames. ATM uses a connection-oriented model in which a virtual circuit must be established between two endpoints before the data exchange begins. These virtual circuits may be either permanent, i.e. dedicated connections that are usually preconfigured by the service provider, or switched, i.e. set up on a per-call basis using signaling and disconnected when the call is terminated. The ATM network reference model approximately maps to the three lowest layers of the OSI model: physical layer, data link layer, and network layer. ATM is a core protocol used in the SONET/SDH backbone of the public switched telephone network (PSTN) and in the Integrated Services Digital Network (ISDN), but has largely been superseded by next-generation networks based on Internet Protocol (IP) technology, while wireless and mobile ATM never established a significant foothold.
  • 1.6K
  • 27 Oct 2022
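The fixed cell size described above (53 bytes: a 5-byte header plus a 48-byte payload) can be illustrated by segmenting a byte stream into cells. This is a sketch only: the header here is a zero placeholder, not a real VPI/VCI encoding.

```python
# ATM carries all traffic in fixed 53-byte cells: 5-byte header + 48-byte
# payload. Sketch of segmentation; header contents are a placeholder.

CELL, HEADER, PAYLOAD = 53, 5, 48

def segment(data: bytes, header: bytes = b"\x00" * HEADER):
    """Split a byte stream into 53-byte cells, zero-padding the final payload."""
    assert len(header) == HEADER
    cells = []
    for i in range(0, len(data), PAYLOAD):
        payload = data[i:i + PAYLOAD].ljust(PAYLOAD, b"\x00")  # pad last cell
        cells.append(header + payload)
    return cells

cells = segment(b"x" * 100)   # 100 bytes -> 48 + 48 + 4(padded) -> 3 cells
```

The fixed size is what lets ATM switches schedule voice, video, and data with predictable latency, in contrast to the variable-sized frames of IP or Ethernet mentioned in the entry.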
Topic Review
MOS Technology VIC-II
The VIC-II (Video Interface Chip II), specifically known as the MOS Technology 6567/8562/8564 (NTSC versions), 6569/8565/8566 (PAL), is the microchip tasked with generating Y/C video signals (combined to composite video in the RF modulator) and DRAM refresh signals in the Commodore 64 and C128 home computers. Succeeding MOS's original VIC (used in the VIC-20), the VIC-II was one of the two chips mainly responsible for the C64's success (the other chip being the 6581 SID).
  • 1.6K
  • 10 Nov 2022
Topic Review
Technological Breakthroughs in Sport
We are currently witnessing an unprecedented era of digital transformation in sports, driven by the revolutions in Artificial Intelligence (AI), Virtual Reality (VR), Augmented Reality (AR), and Data Visualization (DV). These technologies hold the promise of redefining sports performance analysis, automating data collection, creating immersive training environments, and enhancing decision-making processes. Traditionally, performance analysis in sports relied on manual data collection, subjective observations, and standard statistical models. These methods, while effective, had limitations in terms of time and subjectivity.
  • 1.5K
  • 26 Sep 2025
Topic Review
Integrated GNN and DRL in E2E Networking Solutions
Graph neural networks (GNN) and deep reinforcement learning (DRL) are at the forefront of algorithms for advancing network automation, with capabilities for extracting features and building multi-aspect awareness into controller policies. While GNN offers non-Euclidean topology awareness, feature learning on graphs, generalization, representation learning, permutation equivariance, and propagation analysis, it lacks capabilities in continuous optimization and long-term exploration/exploitation strategies. Therefore, DRL is an optimal complement to GNN, enhancing applications toward specific policy goals within the scope of end-to-end (E2E) network automation.
  • 1.5K
  • 18 Mar 2024
Topic Review
DiamondTouch
The DiamondTouch table is a multi-touch, interactive PC interface product from Circle Twelve Inc. It is a human interface device that has the capability of allowing multiple people to interact simultaneously while identifying which person is touching where. The technology was originally developed at Mitsubishi Electric Research Laboratories (MERL) in 2001 and later licensed to Circle Twelve Inc in 2008. The DiamondTouch table is used to facilitate face-to-face collaboration, brainstorming, and decision-making, and users include construction management company Parsons Brinckerhoff, the Methodist Hospital, and the US National Geospatial-Intelligence Agency (NGA).
  • 1.5K
  • 24 Oct 2022
Topic Review
Copy and Paste Programming
Copy-and-paste programming, sometimes referred to as just 'pasting', is the production of highly repetitive computer programming code, as produced by copy and paste operations. It is primarily a pejorative term; those who use the term are often implying a lack of programming competence. It may also be the result of technology limitations (e.g., an insufficiently expressive development environment) as subroutines or libraries would normally be used instead. However, there are occasions when copy and paste programming is considered acceptable or necessary, such as for boilerplate, loop unrolling (when not supported automatically by the compiler), or certain programming idioms, and it is supported by some source code editors in the form of snippets.
  • 1.5K
  • 28 Oct 2022
Topic Review
Reduction Operator
In computer science, the reduction operator is a type of operator that is commonly used in parallel programming to reduce the elements of an array into a single result. Reduction operators are associative and often (but not necessarily) commutative. The reduction of sets of elements is an integral part of programming models such as MapReduce, where a reduction operator is applied (mapped) to all elements before they are reduced. Other parallel algorithms use reduction operators as primary operations to solve more complex problems. Many reduction operators can be used for broadcasting to distribute data to all processors.
  • 1.4K
  • 11 Oct 2022
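A sketch of how a parallel reduction is typically scheduled: adjacent pairs are combined level by level, halving the element count each step. All pairs on one level are independent, so a parallel machine can process them simultaneously; this sequential Python only models the schedule.

```python
# Pairwise (tree) reduction. The operator must be associative; commutativity
# is not required, as the string example below shows.

def tree_reduce(op, xs):
    """Reduce xs to a single value by repeatedly combining adjacent pairs."""
    xs = list(xs)
    while len(xs) > 1:
        # Each pair on this level is independent -> parallelizable in O(log n) steps.
        xs = ([op(xs[i], xs[i + 1]) for i in range(0, len(xs) - 1, 2)]
              + (xs[-1:] if len(xs) % 2 else []))
    return xs[0]

total = tree_reduce(lambda a, b: a + b, [1, 2, 3, 4, 5])
joined = tree_reduce(lambda a, b: a + b, ["ab", "cd", "ef"])  # order preserved
```

Associativity is what makes the regrouping legal: the tree schedule computes ((1+2)+(3+4))+5 rather than (((1+2)+3)+4)+5, yet the result is the same.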
Topic Review
JPEG XT
JPEG XT (ISO/IEC 18477) is an image compression standard which specifies backward-compatible extensions of the base JPEG standard (ISO/IEC 10918-1 and ITU Rec. T.81). JPEG XT extends JPEG with support for higher integer bit depths, high dynamic range imaging and floating-point coding, lossless coding, alpha channel coding, and an extensible file format based on JFIF. It also includes a reference software implementation and a conformance testing specification. JPEG XT extensions are backward compatible with the base JPEG/JFIF file format: existing software is forward compatible and can read the JPEG XT binary stream, though it will only decode the base 8-bit lossy image.
  • 1.4K
  • 28 Sep 2022
Topic Review
Windows Service
In Windows NT operating systems, a Windows service is a computer program that operates in the background. It is similar in concept to a Unix daemon. A Windows service must conform to the interface rules and protocols of the Service Control Manager, the component responsible for managing Windows services. The Services and Controller app, services.exe, launches all the services and manages their actions, such as starting and stopping them. Windows services can be configured to start when the operating system is started and run in the background as long as Windows is running. Alternatively, they can be started manually or by an event. Windows NT operating systems include numerous services which run in the context of three user accounts: System, Network Service and Local Service. These Windows components are often associated with Host Process for Windows Services. Because Windows services operate in the context of their own dedicated user accounts, they can operate when a user is not logged on. Prior to Windows Vista, services installed as an "interactive service" could interact with the Windows desktop and show a graphical user interface. In Windows Vista, however, interactive services are deprecated and may not operate properly, as a result of Windows Service hardening.
  • 1.4K
  • 21 Oct 2022
Topic Review
Challenges According to the ‘7Vs’ of Big Data
Big Data presents challenges that often coincide with its own characteristics, which are known as the Vs. The researchers describe the ‘7Vs’ they found to be the most widely used and the most relevant to the general data handled in Big Data, including an explanation of each and the challenges that arise from it. Some Vs have different subtypes, reflecting differences detected in the literature and in working with them; where different authors create separate Vs for similar purposes, the researchers gather them into one due to their similarities.
  • 1.4K
  • 12 Apr 2023
Topic Review
Shibboleth Single Sign-on Architecture
Shibboleth is a single sign-on log-in system for computer networks and the Internet. It allows people to sign in using just one identity to various systems run by federations of different organizations or institutions. The federations are often universities or public service organizations. The Shibboleth Internet2 middleware initiative created an architecture and open-source implementation for identity management and federated identity-based authentication and authorization (or access control) infrastructure based on Security Assertion Markup Language (SAML). Federated identity allows the sharing of information about users from one security domain to the other organizations in a federation. This allows for cross-domain single sign-on and removes the need for content providers to maintain user names and passwords. Identity providers (IdPs) supply user information, while service providers (SPs) consume this information and give access to secure content.
  • 1.4K
  • 28 Oct 2022
Topic Review
Image Derivatives
Image derivatives can be computed using small convolution filters of size 2 × 2 or 3 × 3, such as the Laplacian, Sobel, Roberts and Prewitt operators. However, a larger mask will generally give a better approximation of the derivative; examples of such filters are Gaussian derivatives and Gabor filters. Sometimes high-frequency noise needs to be removed, and this can be incorporated in the filter so that the Gaussian kernel will act as a band-pass filter. The use of Gabor filters in image processing has been motivated by some of their similarities to perception in the human visual system. The pixel value is computed as a convolution $p = \mathbf{d} \ast G$, where $\mathbf{d}$ is the derivative kernel, $G$ is the pixel values in a region of the image, and $\ast$ is the operator that performs the convolution.
  • 1.4K
  • 28 Oct 2022
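A minimal sketch of the derivative-as-convolution idea using the 3 × 3 Sobel operator named above: each output value combines the kernel with the 3 × 3 neighbourhood around a pixel. The tiny test image and all names are illustrative.

```python
# Horizontal derivative with the 3x3 Sobel kernel: each output pixel combines
# the kernel d with the 3x3 neighbourhood G around that pixel.

SOBEL_X = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]

def convolve_at(img, y, x, kernel):
    """Correlation form of d * G at (y, x); image borders are not handled."""
    return sum(kernel[i][j] * img[y - 1 + i][x - 1 + j]
               for i in range(3) for j in range(3))

# A vertical step edge: intensity jumps from 0 to 10 between columns 2 and 3.
img = [[0, 0, 0, 10, 10, 10]] * 4

edge = convolve_at(img, 1, 3, SOBEL_X)  # strong response on the edge
flat = convolve_at(img, 1, 1, SOBEL_X)  # zero response in the flat region
```

The Sobel kernel responds strongly where intensity changes horizontally and not at all in flat regions, which is exactly the derivative behaviour the entry describes; larger masks such as Gaussian derivatives refine the same computation.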
Topic Review
Developments in Algorithms for Sequence Alignment
Pairwise sequence alignment is the basis of multiple sequence alignment and is mainly divided into local alignment and global alignment: the former finds and aligns similar local regions, while the latter aligns sequences end to end. A commonly used global alignment algorithm is the Needleman–Wunsch algorithm, which has become the basic algorithm used in many types of multiple sequence alignment software. The algorithm usually consists of two steps: one is calculating the states of the dynamic programming matrix; the other is tracking back from the final state to the initial state of the matrix to obtain the alignment. The time and space complexity of pairwise sequence alignment algorithms based on dynamic programming is O(l1l2), where l1 and l2 are the lengths of the two sequences to be aligned. Such overheads are acceptable for short sequences but not for sequences with more than several thousand sites. As a space-saving strategy for the dynamic programming algorithm, the Hirschberg algorithm can complete the alignment with space complexity O(l) without any sacrifice of quality.
  • 1.3K
  • 22 Apr 2022
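The two steps described above — filling the O(l1l2) dynamic-programming matrix, then tracing back from the final state — can be sketched for the Needleman–Wunsch algorithm as follows. The scoring scheme (match +1, mismatch −1, gap −1) is an illustrative choice, not mandated by the algorithm.

```python
# Needleman–Wunsch global alignment: step 1 fills the DP matrix, step 2
# traces back from F[n][m] to F[0][0] to recover one optimal alignment.

def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    n, m = len(a), len(b)
    # Step 1: compute all states of the (n+1) x (m+1) DP matrix.
    F = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        F[i][0] = i * gap
    for j in range(1, m + 1):
        F[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            F[i][j] = max(F[i - 1][j - 1] + s,   # align a[i-1] with b[j-1]
                          F[i - 1][j] + gap,     # gap in b
                          F[i][j - 1] + gap)     # gap in a
    # Step 2: traceback from the final state to the initial state.
    out_a, out_b, i, j = [], [], n, m
    while i > 0 or j > 0:
        s = (match if i > 0 and j > 0 and a[i - 1] == b[j - 1] else mismatch)
        if i > 0 and j > 0 and F[i][j] == F[i - 1][j - 1] + s:
            out_a.append(a[i - 1]); out_b.append(b[j - 1]); i -= 1; j -= 1
        elif i > 0 and F[i][j] == F[i - 1][j] + gap:
            out_a.append(a[i - 1]); out_b.append("-"); i -= 1
        else:
            out_a.append("-"); out_b.append(b[j - 1]); j -= 1
    return F[n][m], "".join(reversed(out_a)), "".join(reversed(out_b))

score, al_a, al_b = needleman_wunsch("GATTACA", "GCATGCU")
```

Both the matrix and the traceback touch l1 × l2 cells, which is the quadratic overhead the entry notes; the Hirschberg algorithm reduces the space (but not the time) to linear by recomputing matrix halves during a divide-and-conquer traceback.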
Topic Review
Self-Sovereign Identity Technology
Self-sovereign identity (SSI) is a set of technologies that build on core concepts in identity management, blockchain technology, and cryptography. SSI enables entities to create fraud-proof verifiable credentials and instantly verify the authenticity of a digital credential.
  • 1.3K
  • 29 Nov 2022