Topic Review
Shibboleth (Shibboleth Consortium)
Shibboleth is a single sign-on system for computer networks and the Internet. It allows people to sign in to various systems run by federations of different organizations or institutions using just one identity. The federations are often universities or public service organizations. The Shibboleth Internet2 middleware initiative created an architecture and open-source implementation for identity management and federated identity-based authentication and authorization (or access control) infrastructure based on the Security Assertion Markup Language (SAML). Federated identity allows information about users in one security domain to be shared with the other organizations in a federation. This enables cross-domain single sign-on and removes the need for content providers to maintain their own user names and passwords. Identity providers (IdPs) supply user information, while service providers (SPs) consume this information and grant access to secure content.
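The IdP/SP division of labor can be sketched schematically. Real Shibboleth deployments exchange digitally signed SAML XML assertions (typically using public-key signatures), not the HMAC-signed JSON token below; every key, name, and field here is invented purely to illustrate why the SP never needs its own password database:

```python
import hmac, hashlib, json

IDP_KEY = b"secret-shared-within-the-federation"   # placeholder trust anchor

def idp_issue_assertion(user_id, attributes):
    """Identity provider: sign a statement about an authenticated user."""
    body = json.dumps({"sub": user_id, "attrs": attributes}).encode()
    sig = hmac.new(IDP_KEY, body, hashlib.sha256).hexdigest()
    return body, sig

def sp_grant_access(body, sig):
    """Service provider: trust the assertion only if the signature verifies."""
    expected = hmac.new(IDP_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

body, sig = idp_issue_assertion("alice@uni.example", {"affiliation": "staff"})
assert sp_grant_access(body, sig)   # the SP never sees or stores a password
```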
  • 548
  • 20 Oct 2022
Topic Review
Discrete Event Simulation
A discrete-event simulation (DES) models the operation of a system as a discrete sequence of events in time. Each event occurs at a particular instant in time and marks a change of state in the system. Between consecutive events, no change in the system is assumed to occur; thus the simulation can jump directly in time from one event to the next. This contrasts with continuous simulation, in which the simulation continuously tracks the system dynamics over time. Instead of being event-based, such a simulation is called activity-based: time is broken up into small time slices and the system state is updated according to the set of activities happening in each time slice. Because discrete-event simulations do not have to simulate every time slice, they can typically run much faster than the corresponding continuous simulation. A more recent method is the three-phase approach to discrete-event simulation (Pidd, 1998). In this approach, the first phase is to jump to the next chronological event. The second phase is to execute all events that unconditionally occur at that time (these are called B-events). The third phase is to execute all events that conditionally occur at that time (these are called C-events). The three-phase approach is a refinement of the event-based approach in which simultaneous events are ordered so as to make the most efficient use of computer resources. The three-phase approach is used by a number of commercial simulation software packages, but from the user's point of view, the specifics of the underlying simulation method are generally hidden.
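A minimal sketch of the event-queue core shared by event-based approaches (a generic illustration, not Pidd's exact three-phase formulation): pending events sit in a priority queue keyed by timestamp, and the clock jumps directly from one event to the next.

```python
import heapq, itertools

_tiebreak = itertools.count()   # avoids comparing handlers when times are equal

def simulate(initial_events, horizon):
    """initial_events: iterable of (time, handler) pairs; each handler
    receives (time, schedule) and may schedule further events."""
    queue = [(t, next(_tiebreak), h) for t, h in initial_events]
    heapq.heapify(queue)

    def schedule(time, handler):
        heapq.heappush(queue, (time, next(_tiebreak), handler))

    while queue:
        time, _, handler = heapq.heappop(queue)
        if time > horizon:
            break                # nothing changes between events, so stop here
        handler(time, schedule)  # the clock jumps straight to this event

# Example: a machine that finishes a job every 5 time units.
def job_done(t, schedule):
    print(f"t={t}: job finished, next one started")
    schedule(t + 5, job_done)

simulate([(0, job_done)], horizon=20)
```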
  • 546
  • 02 Nov 2022
Topic Review
Data Integrity Tracking and Verification System
Data integrity is a prerequisite for ensuring the availability of IoT data and has received extensive attention in the field of IoT big data security. Stream computing systems are widely used in IoT for real-time data acquisition and computing. The real-time, volatile, sudden, and disordered nature of stream data makes data integrity verification difficult. The data integrity tracking and verification system is built on a data integrity verification algorithm scheme for stream computing systems (S-DIV) to track and analyze the message data stream in real time. By verifying the integrity of each message over its whole life cycle, data corruption or data loss can be detected in time, and error alarms and message recovery can be triggered actively.
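This summary does not spell out the S-DIV algorithm itself; purely as a generic illustration of tracking per-message integrity across a stream's life cycle, the sketch below attaches an HMAC tag to each message at ingestion and re-verifies it at later processing stages (the key and field names are invented):

```python
import hmac, hashlib, json

KEY = b"integrity-tracking-key"   # placeholder shared secret

def tag(message: dict) -> dict:
    """Attach an integrity tag when the message enters the stream."""
    payload = json.dumps(message, sort_keys=True).encode()
    message["_mac"] = hmac.new(KEY, payload, hashlib.sha256).hexdigest()
    return message

def verify(message: dict) -> bool:
    """Re-check the tag at any later stage of the message life cycle."""
    mac = message.pop("_mac", None)
    payload = json.dumps(message, sort_keys=True).encode()
    expected = hmac.new(KEY, payload, hashlib.sha256).hexdigest()
    return mac is not None and hmac.compare_digest(mac, expected)

msg = tag({"sensor": "s1", "seq": 42, "value": 3.7})
assert verify(msg)   # a corrupted or truncated message would fail here
```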
  • 529
  • 13 Feb 2023
Topic Review
Technological Breakthroughs in Sport
We are currently witnessing an unprecedented era of digital transformation in sports, driven by the revolutions in Artificial Intelligence (AI), Virtual Reality (VR), Augmented Reality (AR), and Data Visualization (DV). These technologies hold the promise of redefining sports performance analysis, automating data collection, creating immersive training environments, and enhancing decision-making processes. Traditionally, performance analysis in sports relied on manual data collection, subjective observations, and standard statistical models. These methods, while effective, had limitations in terms of time and subjectivity.
  • 525
  • 29 Feb 2024
Topic Review
SLinCA@Home
SLinCA@Home (Scaling Laws in Cluster Aggregation) was a research project that used Internet-connected computers to carry out research in fields such as physics and materials science.
  • 505
  • 08 Nov 2022
Topic Review
Elastic Load Balancing
Amazon Elastic Compute Cloud (EC2) is a part of Amazon.com's cloud-computing platform, Amazon Web Services (AWS), that allows users to rent virtual computers on which to run their own computer applications. EC2 encourages scalable deployment of applications by providing a web service through which a user can boot an Amazon Machine Image (AMI) to configure a virtual machine, which Amazon calls an "instance", containing any software desired. A user can create, launch, and terminate server-instances as needed, paying by the second for active servers – hence the term "elastic". EC2 provides users with control over the geographical location of instances that allows for latency optimization and high levels of redundancy. In November 2010, Amazon switched its own retail website platform to EC2 and AWS.
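The create/launch/terminate lifecycle looks roughly as follows with boto3, the AWS SDK for Python (a sketch assuming configured credentials; the AMI ID and instance type are placeholders):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Boot an instance from an Amazon Machine Image (AMI).
resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = resp["Instances"][0]["InstanceId"]

# ... the instance runs, billed only while active ...

# Terminate it when no longer needed; this on-demand scaling up and
# down is what "elastic" refers to.
ec2.terminate_instances(InstanceIds=[instance_id])
```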
  • 495
  • 27 Oct 2022
Topic Review
Juniper M Series
Juniper M series is a line of multiservice edge routers designed and manufactured by Juniper Networks for enterprise and service provider networks. The line spans the M7i, M10i, M40e, M120, and M320 platforms, ranging from 5 Gbit/s up to 160 Gbit/s of full-duplex throughput. The M40 router, released in 1998, was Juniper Networks' first product. The M-series routers run the JUNOS operating system.
  • 493
  • 25 Oct 2022
Topic Review
Conservation and Restoration OpenLab
Open laboratories (OpenLabs) in Cultural Heritage institutions are an effective way to provide visibility into behind-the-scenes processes and to promote the documentation data collected and produced by domain specialists. Cultural Heritage (CH) institutions have been adopting new practices to improve their services and meet the preferences and needs of potential audiences. One such practice is the transformation of conservation and restoration (CnR) laboratories into OpenLabs, which allow visitors to see the various processes that take place “behind the scenes”.
  • 490
  • 19 May 2023
Topic Review
Range Encoding
Range encoding is an entropy coding method defined by G. Nigel N. Martin in a 1979 paper, which effectively rediscovered the FIFO arithmetic code first introduced by Richard Clark Pasco in 1976. Given a stream of symbols and their probabilities, a range coder produces a space-efficient stream of bits to represent these symbols and, given the stream and the probabilities, a range decoder reverses the process. Range coding is very similar to arithmetic encoding, except that encoding is done with digits in any base, instead of with bits, and so it is faster when using larger bases (e.g. a byte) at small cost in compression efficiency. After the expiration of the first (1978) arithmetic coding patent, range encoding appeared to clearly be free of patent encumbrances. This particularly drove interest in the technique in the open source community. Since that time, patents on various well-known arithmetic coding techniques have also expired.
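To make the interval-narrowing idea shared by range and arithmetic coders concrete, here is a sketch using exact fractions (an illustration only: a production range coder instead keeps a fixed-width integer low/range pair and renormalizes by emitting digits, typically bytes, as they become determined):

```python
from fractions import Fraction

def model(freqs):
    """Map each symbol to its cumulative probability sub-interval of [0, 1)."""
    total = sum(freqs.values())
    cum, acc = {}, 0
    for s, f in freqs.items():
        cum[s] = (Fraction(acc, total), Fraction(acc + f, total))
        acc += f
    return cum

def encode(symbols, freqs):
    cum = model(freqs)
    lo, hi = Fraction(0), Fraction(1)
    for s in symbols:
        width = hi - lo
        s_lo, s_hi = cum[s]
        lo, hi = lo + width * s_lo, lo + width * s_hi
    return lo                     # any number in [lo, hi) identifies the message

def decode(code, n, freqs):
    cum = model(freqs)
    out, lo, hi = [], Fraction(0), Fraction(1)
    for _ in range(n):
        width = hi - lo
        target = (code - lo) / width          # rescale into [0, 1)
        for s, (s_lo, s_hi) in cum.items():
            if s_lo <= target < s_hi:         # whose sub-interval holds it?
                out.append(s)
                lo, hi = lo + width * s_lo, lo + width * s_hi
                break
    return out

freqs = {"a": 3, "b": 1}          # symbol probabilities 3/4 and 1/4
msg = list("abaab")
assert decode(encode(msg, freqs), len(msg), freqs) == msg
```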
  • 482
  • 14 Oct 2022
Topic Review Video
Graph Burning
The graph burning problem is a relatively new combinatorial optimization problem that helps quantify a graph's vulnerability. It is defined in terms of a fundamental diffusion model.
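The diffusion model is simple to state: in each round the fire spreads one hop from every burned vertex and one new vertex is ignited, and the burning number of a graph is the minimum number of rounds needed to burn every vertex. A brute-force sketch for tiny graphs (the general problem is NP-hard, so this exponential search is illustration only):

```python
from itertools import permutations

def burns_all(adj, sources):
    """Simulate burning with the given ignition sequence."""
    burned = set()
    for s in sources:
        burned |= {v for u in burned for v in adj[u]}  # fire spreads one hop
        burned.add(s)                                  # ignite the next source
    return len(burned) == len(adj)

def burning_number(adj):
    nodes = list(adj)
    for k in range(1, len(nodes) + 1):
        if any(burns_all(adj, seq) for seq in permutations(nodes, k)):
            return k

path4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # a path on 4 vertices
print(burning_number(path4))                      # 2: burn vertex 1, then vertex 3
```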
  • 476
  • 21 Jul 2023
Topic Review
SGI Octane
The Octane series of IRIX workstations was developed and sold by SGI in the late 1990s and 2000s. Octane and Octane2 are two-way multiprocessing-capable workstations, originally based on the MIPS Technologies R10000 microprocessor; newer Octanes are based on the R12000 and R14000. The Octane2 has four improvements over the Octane: a revised power supply, system board, and Xbow ASIC, as well as VPro graphics, and it supports all the VPro cards. Later revisions of the Octane include some of the improvements introduced in the Octane2. The codenames for the Octane and Octane2 are "Racer" and "Speedracer" respectively. The Octane is the direct successor to the Indigo2 and was succeeded by the Tezro; its immediate sibling is the O2. SGI withdrew the Octane2 from the price book on May 26, 2004, and ceased Octane2 production on June 25, 2004. Support for the Octane2 ceased in June 2009. The Octane III was introduced in early 2010, after SGI's bankruptcy reorganization. It is a series of Intel-based deskside systems, offered as a Xeon-based workstation with one or two 3U EATX trays, or as cluster servers with 10 system trays configured with up to 10 Twin Blade nodes or 20 Intel Atom mini-ITX nodes.
  • 473
  • 01 Nov 2022
Topic Review
Grand Central Dispatch
Grand Central Dispatch (GCD or libdispatch) is a technology developed by Apple Inc. to optimize application support for systems with multi-core processors and other symmetric multiprocessing systems. It is an implementation of task parallelism based on the thread pool pattern. The fundamental idea is to move the management of the thread pool out of the hands of the developer and closer to the operating system. The developer injects "work packages" into the pool, oblivious of the pool's architecture. This model improves simplicity, portability, and performance. GCD was first released with Mac OS X 10.6, and is also available with iOS 4 and above. The name "Grand Central Dispatch" is a reference to Grand Central Terminal. The source code for the library that provides the implementation of GCD's services, libdispatch, was released by Apple under the Apache License on September 10, 2009. It has been ported to FreeBSD 8.1+, MidnightBSD 0.3+, Linux, and Solaris. Attempts in 2011 to make libdispatch work on Windows were not merged into upstream. Apple has its own port of libdispatch.dll for Windows, shipped with Safari and iTunes, but no SDK is provided. Since around 2017, the original libdispatch repository hosted by Nick Hutchinson has been deprecated in favor of a version that is part of the Swift core library created in June 2016. The new version supports more platforms, notably including Windows.
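GCD itself is driven from C, Objective-C, or Swift; purely as a language-neutral illustration of the thread-pool pattern it is built on (not GCD's actual API), here is a sketch using Python's standard library, where the caller submits work packages without knowing how the pool schedules them:

```python
from concurrent.futures import ThreadPoolExecutor

def work_package(n):
    return n * n                  # any independent unit of work

with ThreadPoolExecutor() as pool:          # pool size chosen by the runtime,
    futures = [pool.submit(work_package, n)  # not by the developer
               for n in range(8)]
    results = [f.result() for f in futures]
print(results)
```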
  • 473
  • 04 Nov 2022
Topic Review
Smart Parking System Based on Edge-Cloud-Dew Computing Architecture
In a smart parking system, the license plate recognition service controls the car’s entry and exit and plays the core role in the parking lot system. When the Internet connection is interrupted, the parking lot’s business is interrupted with it. Hence, an Edge-Cloud-Dew architecture for the mobile industry was proposed to tackle this critical problem. The architecture has an innovative design, including LAN-level deployment, Platform-as-a-Dew Service (PaaDS), a dew version of license plate recognition, and a dew type of machine learning model training. Based on these designs, the architecture offers several benefits: (1) reduced maintenance and deployment issues and increased dew service reliability and sustainability; (2) effective release of the network constraint on cloud computing and an increase in the horizontal and vertical scalability of the system; (3) enhancement of dew computing to handle computationally heavy processes; and (4) a dew type of machine learning training mechanism that does not require periodic retraining yet retains acceptable accuracy.
  • 472
  • 03 Jul 2023
Topic Review
Atom (Web Standard)
The name Atom applies to a pair of related Web standards. The Atom Syndication Format is an XML language used for web feeds, while the Atom Publishing Protocol (AtomPub or APP) is a simple HTTP-based protocol for creating and updating web resources. Web feeds allow software programs to check for updates published on a website. To provide a web feed, the site owner may use specialized software (such as a content management system) that publishes a list (or "feed") of recent articles or content in a standardized, machine-readable format. The feed can then be downloaded by programs that use it, like websites that syndicate content from the feed, or by feed reader programs that allow internet users to subscribe to feeds and view their content. A feed contains entries, which may be headlines, full-text articles, excerpts, summaries or links to content on a website along with various metadata. The Atom format was developed as an alternative to RSS. Ben Trott, an advocate of the new format that became Atom, believed that RSS had limitations and flaws—such as lack of on-going innovation and its necessity to remain backward compatible—and that there were advantages to a fresh design. Proponents of the new format formed the IETF Atom Publishing Format and Protocol Workgroup. The Atom Syndication Format was published as an IETF proposed standard in RFC 4287 (December 2005), and the Atom Publishing Protocol was published as RFC 5023 (October 2007).
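A minimal RFC 4287 feed carries a title, a unique id, and an updated timestamp, both at the feed level and per entry. As a sketch (the titles, URN ids, and dates below are invented examples), Python's standard ElementTree can emit one:

```python
import xml.etree.ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"
ET.register_namespace("", ATOM)     # serialize with Atom as the default namespace

feed = ET.Element(f"{{{ATOM}}}feed")
ET.SubElement(feed, f"{{{ATOM}}}title").text = "Example Feed"
ET.SubElement(feed, f"{{{ATOM}}}id").text = "urn:uuid:60a76c80-d399-11d9-b93C-0003939e0af6"
ET.SubElement(feed, f"{{{ATOM}}}updated").text = "2023-01-01T00:00:00Z"

entry = ET.SubElement(feed, f"{{{ATOM}}}entry")
ET.SubElement(entry, f"{{{ATOM}}}title").text = "First Post"
ET.SubElement(entry, f"{{{ATOM}}}id").text = "urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a"
ET.SubElement(entry, f"{{{ATOM}}}updated").text = "2023-01-01T00:00:00Z"

print(ET.tostring(feed, encoding="unicode"))   # machine-readable feed for readers
```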
  • 467
  • 02 Nov 2022
Topic Review
Trust Computation in Internet of Vehicles
Current trust computation schemes in the Internet of Vehicles can be divided, according to the decision logic they adopt, into approaches based on multi-weight fusion, Bayesian inference (BI), Dempster–Shafer (D-S) theory, fuzzy logic, three-valued subjective logic (3VSL), and others.
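One of the listed decision logics, Bayesian inference, is often realized as a beta-reputation update; the sketch below shows that general form (not any specific IoV scheme): each positive or negative interaction report updates a Beta(alpha, beta) belief, and the trust value is the posterior mean.

```python
class BetaTrust:
    """Beta-reputation trust: posterior mean of a Beta(alpha, beta) belief."""
    def __init__(self):
        self.alpha = 1.0   # prior pseudo-count of good interactions
        self.beta = 1.0    # prior pseudo-count of bad interactions

    def observe(self, positive: bool):
        if positive:
            self.alpha += 1
        else:
            self.beta += 1

    @property
    def trust(self) -> float:
        return self.alpha / (self.alpha + self.beta)

t = BetaTrust()
for outcome in [True, True, True, False]:
    t.observe(outcome)
print(round(t.trust, 3))   # 4/6, i.e. about 0.667 after 3 good and 1 bad report
```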
  • 450
  • 05 Jun 2023
Topic Review
Dask
Dask is a flexible open-source Python library for parallel computing. Dask scales Python code from multi-core local machines to large distributed clusters in the cloud. Dask provides a familiar user interface by mirroring the APIs of other libraries in the PyData ecosystem, including Pandas, Scikit-learn, and NumPy. It also exposes low-level APIs that help programmers run custom algorithms in parallel. Dask was created by Matthew Rocklin in December 2014 and has over 9.8k stars and 500 contributors on GitHub. Dask is used by retail, financial, and governmental organizations, as well as life science and geophysical institutes. Walmart, Wayfair, JDA, GrubHub, General Motors, NVIDIA, Harvard Medical School, Capital One, and NASA are among the organizations that use Dask.
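A short example of the NumPy-mirroring API (sizes are illustrative; assumes Dask is installed): the array is split into chunks, and the computation runs lazily in parallel across them only when .compute() is called.

```python
import dask.array as da

# A 10,000 x 10,000 array split into 100 chunks of 1,000 x 1,000.
x = da.random.random((10_000, 10_000), chunks=(1_000, 1_000))

result = (x + x.T).mean()   # builds a task graph; nothing runs yet
print(result.compute())     # executes the graph across cores or workers
```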
  • 446
  • 20 Oct 2022
Topic Review
Interactive Visual Analysis
Interactive Visual Analysis (IVA) is a set of techniques for combining the computational power of computers with the perceptive and cognitive capabilities of humans, in order to extract knowledge from large and complex datasets. The techniques rely heavily on user interaction and the human visual system, and exist in the intersection between visual analytics and big data. It is a branch of data visualization. IVA is a suitable technique for analyzing high-dimensional data that has a large number of data points, where simple graphing and non-interactive techniques give an insufficient understanding of the information. These techniques involve looking at datasets through different, correlated views and iteratively selecting and examining features the user finds interesting. The objective of IVA is to gain knowledge which is not readily apparent from a dataset, typically in tabular form. This can involve generating, testing or verifying hypotheses, or simply exploring the dataset to look for correlations between different variables.
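A tiny sketch of the linked-views idea, often called brushing and linking (synthetic data; matplotlib assumed): dragging a rectangle over one scatter plot highlights the same records in a second, correlated view.

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.widgets import RectangleSelector

rng = np.random.default_rng(0)
data = rng.normal(size=(500, 3))          # 500 records, 3 variables

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))
pts1 = ax1.scatter(data[:, 0], data[:, 1], s=10)
pts2 = ax2.scatter(data[:, 0], data[:, 2], s=10)

def onselect(eclick, erelease):
    x0, x1 = sorted([eclick.xdata, erelease.xdata])
    y0, y1 = sorted([eclick.ydata, erelease.ydata])
    sel = ((data[:, 0] >= x0) & (data[:, 0] <= x1) &
           (data[:, 1] >= y0) & (data[:, 1] <= y1))
    colors = np.where(sel, "crimson", "steelblue")
    pts1.set_color(colors)
    pts2.set_color(colors)    # the same selection lights up in the linked view
    fig.canvas.draw_idle()

selector = RectangleSelector(ax1, onselect, useblit=True)
plt.show()
```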
  • 442
  • 10 Oct 2022
Topic Review
Integrated GNN and DRL in E2E Networking Solutions
Graph neural networks (GNN) and deep reinforcement learning (DRL) are at the forefront of algorithms for advancing network automation with capabilities of extracting features and multi-aspect awareness in building controller policies. While GNN offers non-Euclidean topology awareness, feature learning on graphs, generalization, representation learning, permutation equivariance, and propagation analysis, it lacks capabilities in continuous optimization and long-term exploration/exploitation strategies. Therefore, DRL is an optimal complement to GNN, enhancing the applications towards achieving specific policies within the scope of end-to-end (E2E) network automation.
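As a toy illustration of the pairing (all shapes and weights below are invented, not the entry's model): one round of permutation-equivariant neighbor aggregation produces node embeddings, and a softmax head turns them into the kind of per-node action policy a DRL agent would train, for example with policy gradients.

```python
import numpy as np

def gnn_policy(adj, feats, w_msg, w_pol):
    """adj: (n, n) 0/1 adjacency; feats: (n, d) node features."""
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    agg = adj @ feats / deg                 # mean over each node's neighbors
    h = np.tanh((feats + agg) @ w_msg)      # permutation-equivariant embedding
    logits = (h @ w_pol).ravel()            # one score per node
    e = np.exp(logits - logits.max())
    return e / e.sum()                      # policy: pick a node, e.g. a next hop

rng = np.random.default_rng(0)
adj = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
probs = gnn_policy(adj, rng.normal(size=(3, 4)),
                   rng.normal(size=(4, 8)), rng.normal(size=(8, 1)))
print(probs)   # in DRL, these weights would be learned from reward signals
```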
  • 440
  • 18 Mar 2024
Topic Review
Smart City Infrastructure Threat Modelling Methodologies
Smart city infrastructure and the related theme of critical national infrastructure have attracted growing interest in the academic literature in recent years, notably regarding how cyber-security can be applied effectively within environments that rely on cyber-physical systems. These systems operate across domains and have grown massively in functionality and complexity, which is felt especially in threat-modelling cyber-security analysis: there is a disparity between current cyber-security proficiency and the requirements of an effective cyber-security implementation.
  • 436
  • 14 Sep 2022
Topic Review
Vertex Chunk-Based Object Culling
Well-known content built on the Metaverse concept allows users to freely place objects in a world space without constraints. To render the many high-resolution objects placed by users in real time, various algorithms exist, such as view frustum culling, visibility culling, and occlusion culling. These algorithms selectively remove objects outside the camera’s view and eliminate objects that are too small to be worth rendering.
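As a generic sketch of the first of these techniques (not the entry's vertex-chunk algorithm): view frustum culling commonly tests an object's bounding sphere against the six frustum planes, rejecting it as soon as it lies entirely outside any one plane.

```python
def sphere_in_frustum(planes, center, radius):
    """planes: iterable of ((nx, ny, nz), d) with inward-facing unit normals;
    a point p is inside a plane when dot(n, p) + d >= 0."""
    for (nx, ny, nz), d in planes:
        dist = nx * center[0] + ny * center[1] + nz * center[2] + d
        if dist < -radius:       # sphere is completely outside this plane
            return False         # cull: the object cannot be visible
    return True                  # keep: it intersects or lies inside the frustum

# Example: a single plane x >= 0 (normal (1, 0, 0), d = 0).
assert sphere_in_frustum([((1, 0, 0), 0)], center=(2, 0, 0), radius=1)
assert not sphere_in_frustum([((1, 0, 0), 0)], center=(-3, 0, 0), radius=1)
```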
  • 436
  • 26 Jun 2023