Topic Review
Support Vector Machine
In machine learning, support-vector machines (SVMs, also support-vector networks) are supervised learning models with associated learning algorithms that analyze data for classification and regression analysis. Developed at AT&T Bell Laboratories by Vladimir Vapnik and colleagues (Boser et al., 1992; Guyon et al., 1993; Vapnik et al., 1997), SVMs are among the most robust prediction methods, being based on the statistical learning framework, or VC theory, proposed by Vapnik and Chervonenkis (1974) and Vapnik (1982, 1995). Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that assigns new examples to one category or the other, making it a non-probabilistic binary linear classifier (although methods such as Platt scaling exist to use SVMs in a probabilistic classification setting). An SVM model is a representation of the examples as points in space, mapped so that the examples of the separate categories are divided by a clear gap that is as wide as possible. New examples are then mapped into the same space and predicted to belong to a category based on the side of the gap on which they fall. In addition to performing linear classification, SVMs can efficiently perform non-linear classification using what is called the kernel trick, implicitly mapping their inputs into high-dimensional feature spaces. When data are unlabelled, supervised learning is not possible, and an unsupervised learning approach is required, which attempts to find natural clustering of the data into groups and then map new data to these groups.
The support-vector clustering algorithm, created by Hava Siegelmann and Vladimir Vapnik, applies the statistics of support vectors, developed in the support vector machines algorithm, to categorize unlabeled data, and is one of the most widely used clustering algorithms in industrial applications.
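As an illustrative sketch of the maximum-margin idea, a minimal linear SVM can be trained by subgradient descent on the hinge loss (a Pegasos-style update, not the original Boser–Guyon–Vapnik training procedure); all function names and the toy data below are hypothetical:

```python
import random

def train_linear_svm(data, lam=0.01, epochs=200, seed=0):
    """Soft-margin linear SVM via Pegasos-style subgradient descent.

    data: list of (x, y) pairs with x a feature tuple and y in {-1, +1}.
    Returns the weights w and bias b of the separating hyperplane.
    """
    rng = random.Random(seed)
    dim = len(data[0][0])
    w = [0.0] * dim
    b = 0.0
    t = 0
    for _ in range(epochs):
        rng.shuffle(data)
        for x, y in data:
            t += 1
            eta = 1.0 / (lam * t)  # decaying step size
            margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
            # Regularization shrinks w every step; margin violators
            # additionally push the hyperplane toward correctness.
            w = [(1.0 - eta * lam) * wi for wi in w]
            if margin < 1.0:
                w = [wi + eta * y * xi for wi, xi in zip(w, x)]
                b += eta * y
    return w, b

def predict(w, b, x):
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score >= 0 else -1

# Two linearly separable clusters in the plane.
points = [((2.0, 2.5), 1), ((3.0, 3.0), 1), ((2.5, 2.0), 1),
          ((-2.0, -2.5), -1), ((-3.0, -3.0), -1), ((-2.5, -2.0), -1)]
w, b = train_linear_svm(list(points))
assert all(predict(w, b, x) == y for x, y in points)
```

The hinge loss penalizes points inside the margin, so minimizing it with the L2 regularizer lam widens the gap between the classes, which is the geometric picture described above. Non-linear classification would replace the inner products with kernel evaluations.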
  • 4.3K
  • 14 Oct 2022
Topic Review
Supply Chain Management Using Blockchain
Blockchain is a groundbreaking technology widely adopted in industrial applications for improving supply chain management (SCM). The SCM and logistics communities have paid close attention to the development of blockchain technology. The primary purpose of employing a blockchain for SCM is to lower production costs while enhancing the system’s security. Blockchain-related SCM research has drawn much interest, and this technology is now among the most promising options for delivering reliable services and goods in supply chain networks.
  • 345
  • 25 Sep 2023
Topic Review
Supply Chain Management in Pandemics
Pandemics cause chaotic situations in supply chains (SC) around the globe, which can create survivability challenges. The ongoing COVID-19 pandemic is an unprecedented humanitarian crisis that has severely affected global business dynamics, and similar vulnerabilities have been caused by other outbreaks in the past. Prevention strategies against propagating disruptions therefore require vigilant goal conceptualization and roadmaps. In this respect, there is a need to explore supply chain operation management strategies to overcome the challenges that emerge in COVID-19-like situations.
  • 676
  • 16 Mar 2021
Topic Review
Supervised Log Anomaly with Probabilistic Polynomial Approximation
Audit and security log collection and storage are essential for organizations worldwide to recognize security breaches and are required by law. Logs often contain sensitive information about an organization or its customers. Fully Homomorphic Encryption (FHE) allows calculations on encrypted data, making it very useful for privacy-preserving tasks such as log anomaly detection. While word-wise FHE schemes can perform additions and multiplications, complex functions such as the Sigmoid must be approximated. Probabilistic polynomial approximations using a Perceptron can achieve lower errors than deterministic approaches such as Taylor and Chebyshev approximation.
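To illustrate why the choice of polynomial approximation matters under FHE, the sketch below compares a truncated Taylor expansion of the Sigmoid with a global least-squares polynomial fit over an interval. This is a generic deterministic baseline for illustration only, not the perceptron-based probabilistic method described above; all function names are hypothetical:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def fit_poly(xs, ys, degree):
    """Least-squares polynomial fit via the normal equations.

    Returns coefficients c[0..degree] of sum(c[k] * x**k).
    """
    n = degree + 1
    # Normal equations A^T A c = A^T y for the monomial basis.
    ata = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    aty = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        aty[col], aty[piv] = aty[piv], aty[col]
        for r in range(col + 1, n):
            f = ata[r][col] / ata[col][col]
            for c in range(col, n):
                ata[r][c] -= f * ata[col][c]
            aty[r] -= f * aty[col]
    coeffs = [0.0] * n
    for i in reversed(range(n)):
        s = aty[i] - sum(ata[i][j] * coeffs[j] for j in range(i + 1, n))
        coeffs[i] = s / ata[i][i]
    return coeffs

def poly_eval(coeffs, x):
    return sum(c * x ** k for k, c in enumerate(coeffs))

# Sample the Sigmoid on [-6, 6] and fit a degree-3 polynomial.
xs = [i / 10.0 for i in range(-60, 61)]
ys = [sigmoid(x) for x in xs]
fit = fit_poly(xs, ys, 3)

# Taylor series of the Sigmoid around 0, truncated at degree 3:
# 1/2 + x/4 - x^3/48.
taylor = [0.5, 0.25, 0.0, -1.0 / 48.0]

max_err = lambda c: max(abs(poly_eval(c, x) - sigmoid(x)) for x in xs)
# A global fit beats the purely local Taylor expansion on this range.
assert max_err(fit) < max_err(taylor)
```

Both candidates use only additions and multiplications, so either could be evaluated homomorphically; the point of the comparison is that approximation error over the whole input range, not just near zero, drives anomaly-detection accuracy on encrypted logs.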
  • 201
  • 23 Oct 2023
Topic Review
Supercomputer Operating Systems
Since the end of the 20th century, supercomputer operating systems have undergone major transformations as fundamental changes have occurred in supercomputer architecture. While early operating systems were custom-tailored to each supercomputer to gain speed, the trend has moved away from in-house operating systems toward some form of Linux, which ran all of the supercomputers on the TOP500 list in November 2017. Given that modern massively parallel supercomputers typically separate computations from other services by using multiple types of nodes, they usually run different operating systems on different nodes, e.g., a small and efficient lightweight kernel such as the Compute Node Kernel (CNK) or Compute Node Linux (CNL) on compute nodes, but a larger system such as a Linux derivative on server and input/output (I/O) nodes. While in a traditional multi-user computer system job scheduling is, in effect, a tasking problem for processing and peripheral resources, in a massively parallel system the job management system needs to manage the allocation of both computational and communication resources, as well as gracefully deal with the inevitable hardware failures that occur when tens of thousands of processors are present. Although most modern supercomputers use the Linux operating system, each manufacturer has made its own specific changes to the Linux derivative it uses, and no industry standard exists, partly because differences in hardware architectures require changes to optimize the operating system for each hardware design.
  • 1.1K
  • 29 Nov 2022
Biography
Sunil Mukhi
Sunil Mukhi is an Indian theoretical physicist working in the areas of string theory, quantum field theory and particle physics. He is currently a physics professor at IISER Pune, where he also serves as dean of faculty. He obtained a B.Sc. degree at St. Xavier's College, Mumbai and a Ph.D. in Theoretical Physics in 1981 from Stony Brook University (then called the State University of New York at Stony Brook).
  • 548
  • 16 Nov 2022
Topic Review
Sun-Ni Law
Sun-Ni's Law (or Sun and Ni's Law, also known as memory-bounded speedup) is a memory-bounded speedup model which states that, as computing power increases, the corresponding increase in problem size is constrained by the system's memory capacity. In general, as a system grows in computational power, the problems run on it grow in size. Whereas Amdahl's law holds the problem size constant as system size grows, and Gustafson's law scales the problem size but bounds it by a fixed amount of time, Sun-Ni's Law scales the problem size but bounds it by the memory capacity of the system. Sun-Ni's Law was first proposed by Xian-He Sun and Lionel Ni in the Proceedings of the 1990 IEEE Supercomputing Conference. With the increasing disparity between CPU speed and memory access latency, application execution time often depends on the memory speed of the system. As predicted by Sun and Ni, data access has become the premier performance bottleneck for high-end computing. This fact underlies the intuition behind Sun-Ni's Law: as system resources increase, applications are often bottlenecked by memory speed and bandwidth, so an application can achieve a larger speedup by utilizing all of the memory capacity in the system. Sun-Ni's Law can be applied to different layers of a memory hierarchy, from L1 cache to main memory. Through its memory-bounded function, W = G(M), it reveals the trade-off between computing and memory in algorithm and system-architecture design. All three speedup models (Sun-Ni, Gustafson, and Amdahl) provide a metric for analyzing speedup in parallel computing. Amdahl's law focuses on the time reduction for a given fixed-size problem and states that the sequential portion of the problem (algorithm) limits the total speedup that can be achieved as system resources increase.
Gustafson's law suggests that it is beneficial to build a large-scale parallel system, as the speedup can grow linearly with system size if the problem size is scaled up to maintain a fixed execution time. Yet because memory access latency often becomes the dominant factor in an application's execution time, applications may not scale up to meet the time-bound constraint. Sun-Ni's Law, instead of constraining the problem size by time, constrains it by the memory capacity of the system; in other words, it bounds based on memory. Sun-Ni's Law is a generalization of Amdahl's Law and Gustafson's Law: when the memory-bounded function G(M) = 1, it reduces to Amdahl's Law; when G(M) = m, the number of processors, it reduces to Gustafson's Law.
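The two limiting cases can be checked numerically. The sketch below writes the memory-bounded speedup in the common form with sequential fraction 1 − f and parallel fraction f, taking the scaled workload g = G(M) as a single number for a given processor count m (the function name and chosen values of f and m are illustrative assumptions):

```python
def memory_bounded_speedup(f_par, m, g):
    """Sun-Ni memory-bounded speedup on m processors.

    f_par: parallel fraction of the work.
    g: memory-bounded workload scaling factor G(M) when all of
       the m processors' memory capacity is utilized.
    """
    f_seq = 1.0 - f_par
    return (f_seq + f_par * g) / (f_seq + f_par * g / m)

f, m = 0.9, 64
amdahl = 1.0 / ((1 - f) + f / m)       # fixed-size speedup
gustafson = (1 - f) + f * m            # fixed-time (scaled) speedup

# G(M) = 1 recovers Amdahl's law; G(M) = m recovers Gustafson's law.
assert abs(memory_bounded_speedup(f, m, 1) - amdahl) < 1e-9
assert abs(memory_bounded_speedup(f, m, m) - gustafson) < 1e-9
```

Between the two extremes, a workload that scales superlinearly in memory (g between 1 and m or beyond) yields a speedup between the Amdahl and Gustafson bounds, which is the trade-off the memory-bounded function W = G(M) captures.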
  • 511
  • 31 Oct 2022
Topic Review
Sun Ray
The Sun Ray was a stateless thin client computer (and associated software) aimed at corporate environments, originally introduced by Sun Microsystems in September 1999 and discontinued by Oracle Corporation in 2014. It featured a smart card reader, and several models included an integrated flat-panel display. The idea of a stateless desktop was a significant shift from, and the eventual successor to, Sun's earlier line of diskless Java-only desktops, the JavaStation.
  • 531
  • 10 Nov 2022
Topic Review
Suitability of NB-IoT
Narrow-Band Internet of Things (NB-IoT) shares the challenges faced by the Internet of Things (IoT), whose applications in industrial settings are set to bring in the fourth industrial revolution. The industrial environment, consisting of high-profile manufacturing plants and a variety of equipment, is inherently highly reflective, causing significant multi-path components that affect the propagation of wireless communications, a challenge among others that needs to be resolved. The suitability of NB-IoT for industrial applications is therewith explained.
  • 938
  • 19 Aug 2021
Topic Review
Sugar
Sugar is a free and open-source desktop environment designed for interactive learning by children, copyrighted by Sugar Labs. Developed as part of the One Laptop per Child (OLPC) project, Sugar was the default interface on OLPC XO-1 laptop computers; the OLPC XO-1.5 and later models provided the option of either the GNOME or Sugar interface. Sugar is available as a Live CD, as a Live USB, and as a package installable through several Linux distributions, and it can run in a Linux virtual machine under Windows and Mac OS. Unlike most other desktop environments, Sugar does not use the "desktop", "folder" and "window" metaphors; instead, its default full-screen activities require users to focus on only one program at a time. Sugar implements a journal that automatically saves the user's running program session and allows them to later pull up past work by date, activity used, or file type.
  • 580
  • 08 Nov 2022