Topic Review
Optimal Design of Neural Networks Based on FPGA
Deep learning based on neural networks has been widely used in image recognition, speech recognition, natural language processing, autonomous driving, and other fields, where it has made breakthrough progress. FPGAs stand out in the field of deep learning acceleration thanks to their flexible architecture and logic units, high energy-efficiency ratio, strong compatibility, and low latency.
  • 2.0K
  • 15 Sep 2023
Topic Review
Wearable Technology in Sports
Wearable technology is increasingly vital for improving sports performance through real-time data analysis and tracking. Both professional and amateur athletes rely on wearable sensors to enhance training efficiency and competition outcomes.
  • 1.9K
  • 10 Oct 2025
Topic Review
9 Track Tape
The IBM System/360, announced in 1964, introduced what is now generally known as 9 track tape. The 1⁄2-inch (12.7 mm) wide magnetic tape media and reels are the same size as the earlier IBM 7 track format it replaced, but the new format has eight data tracks and one parity track for a total of nine parallel tracks (see the parity sketch after this entry). Data is stored as 8-bit characters, spanning the full width of the tape (including the parity bit). Various recording methods have been employed during its lifetime as tape speed and data density increased, including PE (phase encoding), GCR (group coded recording) and NRZI (non-return-to-zero, inverted, sometimes pronounced "nur-zee"). Tapes come in various sizes up to 3,600 feet (1,100 m) in length. The standard size of a byte was effectively set at eight bits with the S/360 and nine-track tape. For over 30 years the format dominated offline storage and data transfer, but by the end of the 20th century it was obsolete; the last manufacturer of tapes ceased production in early 2002, with drive production ending the next year.
  • 1.9K
  • 14 Oct 2022
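As a hedged illustration of the frame layout described above, the following Python sketch computes the ninth (parity) bit for one 8-bit character. Nine-track drives commonly used odd parity for NRZI recording; the helper names here are illustrative, not taken from any tape-drive specification.

```python
def parity_bit(byte, odd=True):
    """Return the parity-track bit for one 8-bit character.

    With odd parity (common for 9-track NRZI recording), the bit is
    chosen so the nine recorded bits contain an odd number of ones.
    """
    ones = bin(byte & 0xFF).count("1")
    if odd:
        return 0 if ones % 2 == 1 else 1
    return ones % 2  # even parity

def frame(byte, odd=True):
    """Nine parallel track values for one character: 8 data bits + parity."""
    bits = [(byte >> i) & 1 for i in range(8)]
    return bits + [parity_bit(byte, odd)]

# Example: 'A' (0x41) has two 1-bits, so odd parity adds a third.
assert sum(frame(0x41)) % 2 == 1
```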
Topic Review
Embedded Brain Computer Interface
We summarize the last two decades of embedded Brain-Computer Interfaces (EBCIs), focusing on the influence of electroencephalography (EEG) on these systems. Numerous noninvasive EBCIs have been developed, described, and tested. The noninvasive nature of EEG-based BCIs has made them the most popular BCI systems.
  • 1.8K
  • 15 Jul 2021
Topic Review
Delay Line Memory
Delay line memory is a form of computer memory, now obsolete, that was used on some of the earliest digital computers. Like many modern forms of electronic computer memory, delay line memory was a refreshable memory, but as opposed to modern random-access memory, delay line memory was sequential-access. Analog delay line technology had been used since the 1920s to delay the propagation of analog signals. When a delay line is used as a memory device, an amplifier and a pulse shaper are connected between the output of the delay line and the input. These devices recirculate the signals from the output back into the input, creating a loop that maintains the signal as long as power is applied. The shaper ensures the pulses remain well-formed, removing any degradation due to losses in the medium. The memory capacity is determined by dividing the recirculation time, the time it takes for data to circulate through the delay line, by the time taken to transmit one bit (see the capacity sketch after this entry). Early delay-line memory systems had capacities of a few thousand bits, with recirculation times measured in microseconds. To read or write a particular bit stored in such a memory, it is necessary to wait for that bit to circulate through the delay line into the electronics. The delay to read or write any particular bit is no longer than the recirculation time. Use of a delay line for a computer memory was invented by J. Presper Eckert in the mid-1940s for use in computers such as the EDVAC and the UNIVAC I. Eckert and John Mauchly applied for a patent for a delay line memory system on October 31, 1947; the patent was issued in 1953. This patent focused on mercury delay lines, but it also discussed delay lines made of strings of inductors and capacitors, magnetostrictive delay lines, and delay lines built using rotating disks to transfer data to a read head at one point on the circumference from a write head elsewhere around the circumference.
  • 1.7K
  • 01 Dec 2022
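A minimal sketch of the capacity arithmetic described above, using illustrative numbers rather than figures from any particular machine:

```python
def delay_line_capacity(recirculation_time_us, bit_time_us):
    """Capacity in bits = recirculation time / time to transmit one bit."""
    return int(recirculation_time_us / bit_time_us)

# Illustrative values only: a 1,000 microsecond loop at 1 bit per microsecond
# stores about 1,000 bits, and the worst-case access latency is one full
# recirculation (1,000 microseconds) while waiting for a bit to come around.
print(delay_line_capacity(1000, 1.0))  # -> 1000
```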
Topic Review
FPGA in Decimal Arithmetic
Decimal operations are executed with slow software-based decimal arithmetic functions. For the fast execution of decimal operations, dedicated hardware units have been proposed and designed in FPGA. Decimal addition and multiplication are found in most decimal-based applications, so their design is very important for fast execution (see the BCD addition sketch after this entry). This entry describes recent solutions for decimal multiplication and addition in FPGA.
  • 1.7K
  • 16 Jul 2021
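As a hedged illustration of what such a hardware unit computes, the following Python sketch shows the classic digit-wise BCD addition with the +6 correction; it is the generic textbook scheme, not the specific FPGA designs surveyed in the entry.

```python
def bcd_add(a_digits, b_digits):
    """Add two BCD numbers given as lists of decimal digits (LSD first).

    Each digit pair is summed in binary; sums above 9 are corrected by
    adding 6, which skips the six unused 4-bit codes and yields a carry.
    """
    result, carry = [], 0
    for a, b in zip(a_digits, b_digits):
        s = a + b + carry
        if s > 9:
            s += 6          # correction: skip codes 10..15
            carry = 1
            s &= 0xF        # keep the low 4 bits as the digit
        else:
            carry = 0
        result.append(s)
    if carry:
        result.append(1)
    return result

# 58 + 47 = 105: digits are given least-significant first.
assert bcd_add([8, 5], [7, 4]) == [5, 0, 1]
```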
Topic Review
Routing in the Data Center
To have adequate routing and forwarding, it is imperative to fully exploit the topological characteristics of fat trees. Some basic requirements should be satisfied: avoidance of forwarding loops, rapid failure detection, efficient network utilization (e.g., spanning-tree solutions are not acceptable), and routing scalability (in addition to physical scalability). In principle, since the data center is a single administrative domain, the candidates to fulfill the routing role are popular link-state IGPs. However, as they have been designed for arbitrary topologies, the flooding of link-state advertisements may suffer from scalability issues. Therefore, the possible solutions should entail reducing the message flooding, exploiting the topology knowledge, or using other routing algorithms. In this regard, the following routing protocols are considered in this work: BGP with a specific configuration for the data center, link-state algorithms with flooding reduction, and ongoing Internet Engineering Task Force (IETF) efforts, namely Routing in Fat Trees (RIFT) and Link State Vector Routing (LSVR), which leverage link-state and distance-vector advantages to design specific routing algorithms for data centers. This entry only considers distributed control plane solutions, i.e., routing protocols. Consequently, logically centralized Software-Defined Networking (SDN) solutions are not analyzed.
  • 1.7K
  • 27 Jan 2022
Topic Review
Olfactory Displays in Education and Training
Olfactory displays are defined as human–computer interfaces that generate and diffuse or transmit one or more odors to a user for a purpose. Computer-generated odors, in conjunction with other sensory information, have been proposed and used in education and training settings over the past four decades, supporting memorization of information, helping immerse learners into 3D educational environments, and complementing or supplementing human senses.
  • 1.7K
  • 19 Nov 2021
Topic Review
Topology Designs for Data Centers
The adoption of simple network topologies allows for an easier way to forward packets. On the other hand, more complex topologies may achieve greater performance, although network maintenance may become harder. Hence, a balance between performance and simplicity is a convenient target when choosing a data center design. Accordingly, several topology designs for data centers are presented, classified into tree-like and graph-like architectures. In the former, a hierarchical switching layout interconnects all nodes, taking the form of an inverted tree with multiple roots, where the nodes are the leaves of the tree (see the fat-tree sketch after this entry). In the latter, nodes are directly interconnected to each other, so no switch is involved in the design.
  • 1.6K
  • 07 Jul 2023
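For concreteness, here is a small sketch of the switch and host counts in a k-ary fat-tree, one common tree-like design; the formulas are the standard ones for that particular construction and are offered as an assumption about which variant a reader might build, not as the entry's own proposal.

```python
def fat_tree_counts(k):
    """Switch and host counts for a k-ary fat-tree built from k-port switches."""
    assert k % 2 == 0, "k-ary fat-trees use an even port count"
    edge = aggregation = k * (k // 2)   # k pods, k/2 switches per layer per pod
    core = (k // 2) ** 2
    hosts = (k ** 3) // 4               # k/2 hosts on each edge switch
    return {"edge": edge, "aggregation": aggregation, "core": core, "hosts": hosts}

# A k=4 fat-tree: 8 edge, 8 aggregation, and 4 core switches serving 16 hosts.
print(fat_tree_counts(4))
```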
Topic Review
Electrochemical Random-Access Memory
Electrochemical Random-Access Memory (ECRAM) is a type of non-volatile memory (NVM) with multiple levels per cell (MLC) designed for deep learning analog acceleration. An ECRAM cell is a three-terminal device composed of a conductive channel, an insulating electrolyte, an ionic reservoir, and metal contacts. The resistance of the channel is modulated by ionic exchange at the interface between the channel and the electrolyte upon application of an electric field. The charge-transfer process allows both for state retention in the absence of applied power and for programming of multiple distinct levels, both differentiating ECRAM operation from that of a field-effect transistor (FET). The write operation is deterministic and can result in symmetrical potentiation and depression (see the update sketch after this entry), making ECRAM arrays attractive for acting as artificial synaptic weights in physical implementations of artificial neural networks (ANN). The technology challenges include open circuit potential (OCP) and semiconductor foundry compatibility associated with energy materials. Universities, government laboratories, and corporate research teams have contributed to the development of ECRAM for analog computing. Notably, Sandia National Laboratories designed a lithium-based cell inspired by solid-state battery materials, Stanford University built an organic proton-based cell, and International Business Machines (IBM) demonstrated in-memory selector-free parallel programming for a logistic regression task in an array of metal-oxide ECRAM designed for insertion in the back end of line (BEOL).
  • 1.6K
  • 17 Nov 2022
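A toy sketch of why symmetric potentiation and depression matter for synaptic weights: this linear pulse-update model is an illustrative abstraction, not a physical ECRAM model, and `dg`, `g_min`, and `g_max` are made-up parameters.

```python
def apply_pulses(g, n_pulses, dg=0.01, g_min=0.0, g_max=1.0):
    """Update a conductance state by n_pulses (positive = potentiate,
    negative = depress). In an idealized symmetric device, each pulse
    moves the state by the same fixed step in either direction."""
    return min(g_max, max(g_min, g + n_pulses * dg))

# A weight raised by 10 pulses and then lowered by 10 returns to its start:
# this symmetry is what makes analog training updates well behaved.
g = 0.5
g = apply_pulses(g, +10)
g = apply_pulses(g, -10)
assert abs(g - 0.5) < 1e-9
```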
Topic Review
Hardware Heritage
Hardware heritage is the history of both hardware and software. Human knowledge, experience, and skills are translated into computational models (binary code) within software and the computing environment (the 'digital ecosystem'). The history of software is a history of how different communities of practitioners 'put their world into a computer'.
  • 1.5K
  • 23 Sep 2021
Topic Review
Wireless Technologies for Social Distancing in COVID-19 Pandemic
Social distancing refers to measures that work to prevent disease spread by minimizing the frequency and intensity of physical contact between people, including the closure of public spaces (e.g., schools and offices), avoiding large crowds, and maintaining a safe distance between individuals. Because it reduces the likelihood that an infected person will transmit the illness to a healthy individual, social distancing slows the disease's progression and softens its impact. During the early stages of a pandemic, social distancing techniques can play a crucial role in decreasing the infection rate and delaying the disease's peak, which in turn reduces the load on healthcare systems and lowers death rates. The concept of social distancing may not be as simple as physical distancing, given the rising complexity of viruses and the rapid expansion of social interaction and globalization. It encompasses numerous non-pharmaceutical activities or efforts designed to reduce the spread of infectious diseases, including monitoring, detection, and alerting people. Different technologies can assist in maintaining a safe distance (e.g., 1.5 m) between persons in the adopted scenarios, and a number of wireless and similar technologies can be used to monitor people and public locations in real time (see the distance-estimation sketch after this entry).
  • 1.5K
  • 25 Mar 2022
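As one hedged example of how a wireless signal can stand in for distance, the sketch below applies the standard log-distance path-loss model often used with Bluetooth RSSI; the reference power and path-loss exponent are assumed calibration values, not parameters from the entry.

```python
def estimate_distance_m(rssi_dbm, rssi_at_1m_dbm=-59.0, path_loss_exp=2.0):
    """Log-distance path-loss model: d = 10 ** ((P_1m - RSSI) / (10 * n)).

    rssi_at_1m_dbm and path_loss_exp are device- and environment-specific
    calibration values; the defaults here are illustrative assumptions.
    """
    return 10 ** ((rssi_at_1m_dbm - rssi_dbm) / (10 * path_loss_exp))

def too_close(rssi_dbm, threshold_m=1.5):
    """Flag a contact when the estimated distance is under the 1.5 m rule."""
    return estimate_distance_m(rssi_dbm) < threshold_m

print(too_close(-70))  # roughly 3.5 m away -> False
print(too_close(-60))  # roughly 1.1 m away -> True
```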
Topic Review
Gate-Level Static Approximate Adders
This work compares and analyzes static approximate adders suitable for FPGA and ASIC type implementations. We consider many static approximate adders and evaluate their performance on a digital image processing application using standard figures of merit such as the peak signal-to-noise ratio (PSNR) and the structural similarity index metric (SSIM). We provide the error metrics of the approximate adders, and the design metrics of accurate and approximate adders corresponding to FPGA and ASIC type implementations (see the lower-part-OR sketch after this entry). For the FPGA implementation, we considered a Xilinx Artix-7 FPGA, and for an ASIC type implementation, we considered a 32-28 nm CMOS standard digital cell library. While the inferences from this work could serve as a useful reference for determining an optimum static approximate adder for a practical application, in particular we found the approximate adders HOAANED, HERLOA and M-HERLOA to be preferable.
  • 1.4K
  • 14 Dec 2021
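To make the idea of a static approximate adder concrete, here is a sketch of the generic lower-part-OR adder (LOA) scheme from which HERLOA-style designs descend; it is not the exact HOAANED/HERLOA logic evaluated in the entry, only the baseline family.

```python
def loa_add(a, b, k, width=16):
    """Lower-part-OR adder: the low k bits are approximated by a bitwise OR
    (no carries), and only the high bits use an exact adder, which is what
    shortens the carry chain in FPGA/ASIC implementations."""
    low_mask = (1 << k) - 1
    low = (a | b) & low_mask            # approximate lower part
    high = ((a >> k) + (b >> k)) << k   # exact upper part, no carry-in
    return (high | low) & ((1 << (width + 1)) - 1)

# Compare against exact addition to see the bounded low-order error.
for a, b in [(0x00FF, 0x0001), (0x1234, 0x0ACD)]:
    print(hex(loa_add(a, b, k=4)), "vs exact", hex(a + b))
```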
Topic Review
The Agricultural Internet of Things Ecosystem
The negative impacts of climate change and a growing global population on food security, together with unemployment threats, have motivated the adoption of the wireless sensor network (WSN)-based Agri-IoT as an indispensable underlying technology in precision agriculture and greenhouses, improving food production capacity and quality.
  • 1.4K
  • 03 Aug 2023
Topic Review
HPE BladeSystem
BladeSystem is a line of blade server machines from Hewlett Packard Enterprise (formerly Hewlett-Packard) that was introduced in June 2006. The BladeSystem forms part of the HPE Converged Systems platform, which uses a common converged infrastructure architecture for server, storage, and networking products. Designed for enterprise installations of 100 to more than 1,000 virtual machines, the HP ConvergedSystem 700 is configured with BladeSystem servers. When managing a software-defined data center, a system administrator can perform automated lifecycle management for BladeSystems using HPE OneView for converged infrastructure management. The BladeSystem allows users to build a high-density system with up to 128 servers in each rack.
  • 1.4K
  • 28 Sep 2022
Topic Review
The Evolution of Phone Technology
This research traces the evolution of phone technology from its early days as a mechanical device to the latest cutting-edge smartphones and smartwatches. It highlights the significant breakthroughs in communication technology and the companies and countries that have been at the forefront of this transformation. The research explores the development of rotary phones, touch-tone phones, mobile phones, and smartphones, and how each has changed the way people communicate and access information. It also delves into smartwatches and the latest advances in phone technology, including 5G connectivity, foldable screens, and augmented reality. Finally, the research considers the ethical questions raised by the constant connectivity provided by phones and smartwatches and emphasizes the importance of using emerging technologies for the benefit of all.
  • 1.4K
  • 22 May 2023
Topic Review
Fog-Based IoT Platform Performance Modeling and Optimization
A fog-based IoT platform model involving three layers, i.e., IoT devices, fog nodes, and the cloud, was proposed using an open Jackson network with feedback. The system performance was analyzed for the individual subsystems and for the overall system, based on different input parameters, and interesting performance metrics were derived from the analytical results (see the traffic-equation sketch after this entry). A resource optimization problem was developed and solved to determine the optimal service rates at the individual fog nodes under constraint conditions.
  • 1.3K
  • 19 Jun 2023
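A minimal sketch of the traffic equations behind such a model: the topology, routing probabilities, and rates below are invented toy values, not the parameters of the entry's platform; each node is treated as an M/M/1 queue once its effective arrival rate is known.

```python
def solve_traffic(gamma, P, iters=1000):
    """Open Jackson network traffic equations:
    lambda_i = gamma_i + sum_j lambda_j * P[j][i], solved by fixed-point
    iteration (converges when the routing matrix is substochastic)."""
    n = len(gamma)
    lam = gamma[:]
    for _ in range(iters):
        lam = [gamma[i] + sum(lam[j] * P[j][i] for j in range(n))
               for i in range(n)]
    return lam

# Toy 3-node chain: IoT devices -> fog node -> cloud, with 20% of cloud
# jobs fed back to the fog node (all numbers are illustrative).
gamma = [5.0, 0.0, 0.0]                # external arrivals (jobs/s)
P = [[0.0, 1.0, 0.0],                  # devices forward to the fog node
     [0.0, 0.0, 1.0],                  # the fog node forwards to the cloud
     [0.0, 0.2, 0.0]]                  # the cloud feeds 20% back to the fog
mu = [8.0, 10.0, 12.0]                 # service rates (jobs/s)

for lam_i, mu_i in zip(solve_traffic(gamma, P), mu):
    rho = lam_i / mu_i                 # utilization; must stay below 1
    print(f"lambda={lam_i:.2f}, rho={rho:.2f}, mean jobs={rho / (1 - rho):.2f}")
```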
Topic Review
Databases in Metabolomics
Metabolomics has advanced from an innovation and functional-genomics tool to a foundation of the big-data-led precision medicine era. Metabolomics is promising in the pharmaceutical field and in clinical research.
  • 1.3K
  • 27 Oct 2022
Topic Review
Embedded Machine Learning
Embedded machine learning (EML) can be applied in the areas of accurate computer vision schemes, reliable speech recognition, innovative healthcare, robotics, and more. However, there exists a critical drawback in the efficient implementation of ML algorithms targeting embedded applications. Machine learning algorithms are generally computationally and memory intensive, making them unsuitable for resource-constrained environments such as embedded and mobile devices. In order to efficiently implement these compute- and memory-intensive algorithms within the embedded and mobile computing space, innovative optimization techniques, such as model quantization, are required at the algorithm and hardware levels (see the quantization sketch after this entry).
  • 1.3K
  • 01 Nov 2021
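As a hedged example of the algorithm-level optimizations the entry alludes to, this sketch applies post-training affine 8-bit quantization to a weight list; it is a generic scheme, not any specific framework's implementation.

```python
def quantize_int8(weights):
    """Affine (asymmetric) 8-bit quantization: map the float range
    [w_min, w_max] onto the integers 0..255 so each weight needs one byte
    instead of four, a common memory optimization for embedded ML."""
    w_min, w_max = min(weights), max(weights)
    scale = (w_max - w_min) / 255.0 or 1.0  # guard against a constant tensor
    zero_point = round(-w_min / scale)
    q = [max(0, min(255, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float weights from the quantized bytes."""
    return [(qi - zero_point) * scale for qi in q]

w = [-0.42, 0.0, 0.17, 0.91]
q, s, zp = quantize_int8(w)
print(q, [round(x, 3) for x in dequantize(q, s, zp)])
```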
Topic Review
Comparative Study of Keccak SHA-3 Implementations
SHA-3, a pivotal component in modern cryptography, has spawned numerous implementations across diverse platforms and technologies. This text provides insights into selecting and optimizing Keccak SHA-3 implementations, encompassing an in-depth analysis of hardware, software, and software–hardware (hybrid) solutions. Researchers assess the strengths, weaknesses, and performance metrics of each approach. Critical factors, including computational efficiency, scalability, and flexibility, are evaluated across different use cases, and each implementation's speed and resource utilization are investigated (see the baseline sketch after this entry). The goal is to aid the informed design and deployment of efficient cryptographic solutions: by providing a comprehensive overview of SHA-3 implementations, the text offers a clear understanding of the available options and equips professionals and researchers with the insights needed to make informed decisions in their cryptographic endeavors.
  • 1.3K
  • 15 Dec 2023
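For readers evaluating software implementations, here is a minimal baseline sketch using Python's standard library, whose hashlib module has shipped the FIPS 202 SHA-3 functions since Python 3.6; timing a pure-software run like this is one of the speed measurements such comparisons rely on (absolute numbers depend entirely on the host machine).

```python
import hashlib
import time

data = b"x" * (1 << 20)  # 1 MiB of input

start = time.perf_counter()
digest = hashlib.sha3_256(data).hexdigest()
elapsed = time.perf_counter() - start

print(f"SHA3-256: {digest[:16]}...")
print(f"Software throughput: {len(data) / elapsed / 1e6:.1f} MB/s")
```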