Topic Review
Nirvana
Nirvana was virtual object storage software developed and maintained by General Atomics. It can be described as metadata, data-placement, and data-management software that lets organizations manage unstructured data across multiple storage devices located anywhere in the world, orchestrate global data-intensive workflows, and search for and locate data no matter where it resides or when it was created. Nirvana does this by capturing system- and user-defined metadata to enable detailed search and by enacting policies that control data movement and protection. Nirvana also maintains data provenance, audit, security, and access control, and it can reduce storage costs by identifying data that can be moved to lower-cost storage and data that no longer needs to be stored.
  • 1.2K
  • 22 Nov 2022
Topic Review
Measuring Network Throughput
Throughput of a network can be measured using various tools available on different platforms. This page explains the theory behind what these tools set out to measure and the issues surrounding such measurements. People are often concerned with measuring the maximum data throughput, in bits per second, of a communications link or network access. A typical method is to transfer a 'large' file from one system to another and measure the time required to complete the transfer or copy. The throughput is then calculated by dividing the file size by the elapsed time, giving a result in megabits, kilobits, or bits per second. Unfortunately, such an exercise actually measures the goodput, which is less than the maximum theoretical throughput, leading people to believe that their communications link is not operating correctly. In fact, many overheads beyond raw transmission reduce the achieved figure, including latency, the TCP receive window size, and system limitations, so the calculated goodput does not reflect the maximum achievable throughput.
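The file-transfer measurement described above can be sketched as follows; the function name and the figures are illustrative, not taken from any particular tool:

```python
# Hypothetical illustration of the measurement described above: transfer a
# file of known size, time it, and compute the achieved goodput.
def goodput_mbps(file_size_bytes: int, transfer_seconds: float) -> float:
    """Return achieved goodput in megabits per second."""
    bits = file_size_bytes * 8
    return bits / transfer_seconds / 1_000_000

# Example: a 100 MB file transferred in 9.2 s over a nominal 100 Mbit/s link.
rate = goodput_mbps(100 * 1_000_000, 9.2)
# The result (~87 Mbit/s) is below the 100 Mbit/s line rate because of
# protocol overheads, latency, and TCP receive-window limits.
```

Comparing the computed rate against the link's nominal line rate shows exactly the gap between goodput and theoretical throughput that the entry warns about.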
  • 1.2K
  • 17 Oct 2022
Topic Review
Sustainability Budgets
Artificial Intelligence (AI) is increasingly being used to solve global problems, and its use could potentially address challenges relating to climate change; however, building AI systems often requires vast amounts of up-front computing power and can therefore be a significant contributor to greenhouse gas emissions. In 2015, the United Nations (UN) set its ‘Sustainable Development Goals’ (SDGs) as key global priorities for a better world by 2030. One of the key goals (Goal 13) is ‘to take urgent action to combat climate change and its impacts’. More recently, at the 2021 World Climate Summit (a global conference aimed at addressing climate change), a worldwide pledge was made for countries to do more to reduce their carbon footprints. Towards this aim, the concept of ‘Sustainability Budgets’, by analogy with privacy budgets in differential privacy, provides a procedure that empowers developers, allows management sufficient oversight, and offers a governance framework for achieving Goal 13 of the SDGs.
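The budget analogy can be made concrete with a small sketch. This is a hypothetical illustration of the idea, not the entry's actual procedure: compute jobs draw estimated CO2e from a fixed allowance, just as differential-privacy mechanisms spend a privacy budget; the class name, parameters, and figures are all assumptions.

```python
# Hypothetical sketch of a "sustainability budget": each compute job draws
# estimated emissions (energy x grid carbon intensity) from a fixed allowance.
class SustainabilityBudget:
    def __init__(self, budget_kg_co2e: float):
        self.remaining = budget_kg_co2e

    def request(self, kwh: float, grid_intensity_kg_per_kwh: float) -> bool:
        """Approve a job only if its estimated emissions fit the remaining budget."""
        cost = kwh * grid_intensity_kg_per_kwh
        if cost > self.remaining:
            return False          # job must be deferred, shrunk, or rescheduled
        self.remaining -= cost
        return True

budget = SustainabilityBudget(budget_kg_co2e=50.0)
approved = budget.request(kwh=120.0, grid_intensity_kg_per_kwh=0.3)  # 36 kg CO2e
```

As with a privacy budget, the key governance property is that the allowance only decreases, so management can audit total emissions against a pre-agreed cap.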
  • 1.2K
  • 14 Jun 2022
Topic Review
Mascot
Mascot is a software search engine that uses mass spectrometry data to identify proteins from peptide sequence databases. It is widely used by research facilities around the world. Mascot uses a probabilistic scoring algorithm for protein identification that was adapted from the MOWSE algorithm. Mascot is freely available on the Matrix Science website; a license is required for in-house use, which makes additional features available.
  • 1.2K
  • 26 Oct 2022
Topic Review
Real-Time Information Processing and Visualization
The processing of information in real time (through the processing of complex events) has become an essential task for the optimal functioning of manufacturing plants. Only in this way can artificial intelligence, data extraction, and even business intelligence techniques be applied, and the data produced daily be used in a beneficial way, enhancing automation processes and improving service delivery.
  • 1.2K
  • 10 Jun 2021
Topic Review
Improvement of Agricultural Product Traceability with Blockchain
In recent years, agricultural product safety accidents have raised public concern, jeopardizing people’s dietary safety and health. In order to keep track of specific information through the entire supply chain, including the production, logistics, processing, and sales processes, as well as to quickly find and prevent agricultural product safety problems, it is important to build a trusted traceability system. Traditional centralized traceability systems suffer from insecure data storage, low traceability reliability, and vulnerability to single points of attack. Blockchain technology is tamper-proof, distributed, decentralized, and traceable, which makes it a promising technology for agricultural product traceability.
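The tamper-evidence property that makes blockchains attractive for traceability can be illustrated with a minimal sketch. This is a generic hash chain under assumed record fields (stage, batch), not any specific traceability system: each record stores the hash of the previous one, so altering an earlier record breaks every later link.

```python
# Minimal hash-chain sketch (hypothetical record fields): altering any record
# invalidates its own hash and breaks the link from the following record.
import hashlib
import json

def make_record(prev_hash: str, data: dict) -> dict:
    record = {"prev": prev_hash, "data": data}
    record["hash"] = hashlib.sha256(
        json.dumps({"prev": prev_hash, "data": data}, sort_keys=True).encode()
    ).hexdigest()
    return record

def chain_is_valid(chain: list) -> bool:
    for i, rec in enumerate(chain):
        expected = hashlib.sha256(
            json.dumps({"prev": rec["prev"], "data": rec["data"]},
                       sort_keys=True).encode()
        ).hexdigest()
        if rec["hash"] != expected:
            return False                       # record was altered
        if i > 0 and rec["prev"] != chain[i - 1]["hash"]:
            return False                       # link to predecessor broken
    return True

genesis = make_record("0" * 64, {"stage": "production", "batch": "A1"})
chain = [genesis, make_record(genesis["hash"], {"stage": "logistics", "batch": "A1"})]
assert chain_is_valid(chain)
chain[0]["data"]["batch"] = "B9"   # tampering with the production record...
assert not chain_is_valid(chain)   # ...is detected during verification
```

A real blockchain adds distributed consensus on top of this structure, so no single party can rewrite the chain even if they recompute all the hashes.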
  • 1.2K
  • 16 May 2022
Topic Review
Grsecurity
Grsecurity is a set of patches for the Linux kernel that emphasize security enhancements. The patches are typically used by computer systems that accept remote connections from untrusted locations, such as web servers and systems offering shell access to their users. Grsecurity provides a collection of security features for the Linux kernel, including address space protection, enhanced auditing, and process control. Grsecurity is produced by Open Source Security, Inc., headquartered in Pennsylvania, and since April 2017 (Linux 4.9) the patches (including test ones) are available only to its paying customers.
  • 1.2K
  • 01 Dec 2022
Topic Review
NetApp Filer
In computer storage, the term NetApp "filer" referred to NetApp's storage systems product before block protocols were supported. A filer can serve storage over a network using file-based protocols such as NFS, SMB, FTP, TFTP, and HTTP, but filers can also serve data over block-based protocols, such as the SCSI command protocol over the Fibre Channel Protocol on a Fibre Channel network, Fibre Channel over Ethernet (FCoE), FC-NVMe, or the iSCSI transport layer. The product is also known as NetApp Fabric-Attached Storage (FAS) and NetApp All Flash FAS (AFF). NetApp filers implement their physical storage in large disk arrays. While most large-storage filers are implemented with commodity computers running an operating system such as Microsoft Windows Server, VxWorks, or tuned Linux, NetApp filers use highly customized hardware and the proprietary Data ONTAP operating system with the WAFL file system, all originally designed by NetApp founders David Hitz and James Lau specifically for storage-serving purposes. Data ONTAP is NetApp's internal operating system, specially optimised for storage functions at high and low level. It boots from FreeBSD as a stand-alone kernel-space module and uses some FreeBSD functions (the command interpreter and driver stack, for example). All filers have battery-backed non-volatile random access memory (NVRAM) or NVDIMM, which allows them to commit writes to stable storage more quickly than traditional systems with only volatile memory. Early filers connected to external disk enclosures via parallel SCSI, while modern models (as of 2009) use Fibre Channel and SAS (Serial Attached SCSI) transport protocols. The disk enclosures (shelves) use Fibre Channel hard disk drives, as well as parallel ATA, serial ATA, and Serial Attached SCSI drives. Starting with the AFF A800, the NVRAM PCI card is no longer used for NVLOGs; it was replaced with NVDIMM memory connected directly to the memory bus.
Implementers often organize two filers in a high-availability cluster with a private high-speed link, either Fibre Channel, InfiniBand, 10 Gigabit Ethernet, 40 Gigabit Ethernet or 100 Gigabit Ethernet. One can additionally group such clusters together under a single namespace when running in the "cluster mode" of the Data ONTAP 8 operating system.
  • 1.2K
  • 28 Sep 2022
Topic Review
GSOAP
gSOAP is a C and C++ software development toolkit for SOAP/XML web services and generic XML data bindings. Given a set of C/C++ type declarations, the compiler-based gSOAP tools generate serialization routines in source code for efficient XML serialization of the specified C and C++ data structures. Serialization is performed with zero-copy overhead.
  • 1.2K
  • 04 Nov 2022
Topic Review
Computational Resource for Drug Discovery
Computational Resources for Drug Discovery (CRDD) is one of the important in silico modules of Open Source for Drug Discovery (OSDD). The CRDD web portal provides computational resources related to drug discovery on a single platform, including resources for researchers in computer-aided drug design, a discussion forum, resources to maintain a Wikipedia related to drug discovery, and tools to predict inhibitors and the ADME-Tox properties of molecules. One of the major objectives of CRDD is to promote open source software in the fields of chemoinformatics and pharmacoinformatics.
  • 1.2K
  • 27 Oct 2022
Topic Review
Comparison of File Verification Software
The following tables compare file verification software that typically use checksums to confirm the integrity or authenticity of a file.
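What such tools do can be sketched briefly: compute a file's digest and compare it with a published checksum. This is a generic illustration using Python's standard library, not the implementation of any tool in the comparison; the function name is an assumption.

```python
# Generic sketch of file verification: hash the file in chunks (so large
# files need not fit in memory) and compare against an expected digest.
import hashlib
import hmac

def verify_sha256(path: str, expected_hex: str, chunk_size: int = 65536) -> bool:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    # hmac.compare_digest gives a timing-safe comparison of the two digests
    return hmac.compare_digest(h.hexdigest(), expected_hex)
```

Integrity checks typically use a plain digest like this; authenticity checks additionally require a signature or a keyed MAC, which is why some tools in the tables support GPG or HMAC modes.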
  • 1.2K
  • 11 Nov 2022
Topic Review
CeCILL
CeCILL (from CEA CNRS INRIA Logiciel Libre) is a free software license adapted to both international and French legal matters, in the spirit of and retaining compatibility with the GNU General Public License (GPL). It was jointly developed by a number of French agencies: the Commissariat à l'Énergie Atomique (Atomic Energy Commission), the Centre national de la recherche scientifique (National Centre for Scientific Research) and the Institut national de recherche en informatique et en automatique (National Institute for Research in Computer Science and Control). It was announced on 5 July 2004 in a joint press communication of the CEA, CNRS and INRIA. It has gained the support of the main French Linux User Group and the Minister of Public Function, and was considered for adoption at the European level before the European Union Public Licence was created.
  • 1.2K
  • 09 Oct 2022
Topic Review
Perl Compatible Regular Expressions
Perl Compatible Regular Expressions (PCRE) is a library written in C, which implements a regular expression engine inspired by the capabilities of the Perl programming language. Philip Hazel started writing PCRE in summer 1997. PCRE's syntax is much more powerful and flexible than either of the POSIX regular expression flavors (BRE, ERE) and than those of many other regular-expression libraries. While PCRE originally aimed at feature-equivalence with Perl, the two implementations are not fully equivalent. During the PCRE 7.x and Perl 5.9.x phase, the two projects coordinated development, with features being ported between them in both directions. In 2015 a fork of PCRE was released with a revised programming interface (API). The original software, now called PCRE1 (the 1.xx–8.xx series), has had bugs mended, but no further development. As of 2020, it is considered obsolete, and the current 8.45 release is likely to be the last. The new PCRE2 code (the 10.xx series) has had a number of extensions and coding improvements and is where development takes place. A number of prominent open-source programs, such as the Apache and Nginx HTTP servers, and the PHP and R scripting languages, incorporate the PCRE library; proprietary software can do likewise, as the library is BSD-licensed. As of Perl 5.10, PCRE is also available as a replacement for Perl's default regular-expression engine through the "re::engine::PCRE" module. The library can be built on Unix, Windows, and several other environments. PCRE2 is distributed with a POSIX C wrapper, several test programs, and the utility program "pcre2grep" built in tandem with the library.
  • 1.2K
  • 14 Nov 2022
Topic Review
JavaServer Faces
JavaServer Faces (JSF) is a Java specification for building component-based user interfaces for web applications; it was formalized as a standard through the Java Community Process and is part of the Java Platform, Enterprise Edition. It is also an MVC web framework that simplifies construction of user interfaces (UI) for server-based applications by using reusable UI components in a page. JSF 2 uses Facelets as its default templating system. Other view technologies such as XUL or plain Java can also be employed. In contrast, JSF 1.x uses JavaServer Pages (JSP) as its default templating system.
  • 1.2K
  • 25 Nov 2022
Topic Review
Secure Access Service Edge
Secure Access Service Edge (SASE) is a term coined by the analyst firm Gartner. SASE simplifies wide-area networking (WAN) and security by delivering both as a cloud service directly to the source of the connection (user, device, branch office, IoT device, edge computing location) rather than to the enterprise data center. Security is based on identity, real-time context, and enterprise security and compliance policies. An identity may be attached to anything from a person/user to a device, branch office, cloud service, application, IoT system, or an edge computing location. SASE is meant to be a simplified WAN and security solution for a mobile, global workplace that relies on cloud applications and data. The common solution of backhauling all WAN traffic over long distances to one or a few corporate data centers for security functions adds network latency when users and their cloud applications are globally dispersed rather than on-premises. By delivering services at the edge, at the connection source, SASE eliminates the latency caused by backhauling.
  • 1.1K
  • 23 Nov 2022
Topic Review
VRPN
VRPN (Virtual-Reality Peripheral Network) is a device-independent, network-based interface for accessing virtual reality peripherals in VR applications. It was originally designed and implemented by Russell M. Taylor II at the Department of Computer Science of the University of North Carolina at Chapel Hill. VRPN was maintained and supported by Sensics while it was in business. It is currently maintained by ReliaSolve and developed in collaboration with a productive community of contributors. It is described more fully at vrpn.org and in VRPN-VRST. The purpose of VRPN is to provide a unified interface to input devices, like motion trackers or joystick controllers. The VRPN system consists of programming interfaces for both the client application and the hardware drivers, and a server application that communicates with the hardware devices. The client interfaces are written in C++ but have been wrapped in C#, Python and Java. A typical application of VRPN is to encode and send 6DoF motion capture data through the network in real time.
  • 1.1K
  • 28 Nov 2022
Topic Review
GNU Bison
GNU Bison, commonly known as Bison, is a parser generator that is part of the GNU Project. Bison reads a specification of a context-free language, warns about any parsing ambiguities, and generates a parser (in C, C++, or Java) which reads sequences of tokens and decides whether the sequence conforms to the syntax specified by the grammar. Bison generates LALR parsers by default but can also create GLR parsers. In POSIX mode, Bison is compatible with Yacc, but it also has several extensions over this earlier program. Flex, an automatic lexical analyser generator, is often used with Bison to tokenise input data and provide Bison with tokens. Bison was originally written by Robert Corbett in 1985. Later, in 1989, Robert Corbett released another parser generator named Berkeley Yacc. Bison was made Yacc-compatible by Richard Stallman. Bison is free software and is available under the GNU General Public License, with an exception (discussed below) allowing its generated code to be used without triggering the copyleft requirements of the licence.
  • 1.1K
  • 24 Oct 2022
Topic Review
Use Case Points
Use Case Points (UCP) is a software estimation technique used to forecast the software size for software development projects. UCP is used when the Unified Modeling Language (UML) and Rational Unified Process (RUP) methodologies are being used for the software design and development. The concept of UCP is based on the requirements for the system being written using use cases, which is part of the UML set of modeling techniques. The software size (UCP) is calculated based on elements of the system use cases with factoring to account for technical and environmental considerations. The UCP for a project can then be used to calculate the estimated effort for a project.
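The calculation described above can be sketched compactly. This sketch uses the commonly published Karner weights (simple/average/complex use cases weighted 5/10/15, actors weighted 1/2/3) and factor formulas; the specific counts and factor totals below are illustrative assumptions, not values from the entry.

```python
# Sketch of the Use Case Points calculation (Karner's commonly cited weights).
def use_case_points(actors, use_cases, tfactor, efactor):
    """actors / use_cases: counts per complexity class (simple, average, complex);
    tfactor / efactor: weighted technical and environmental factor totals."""
    uaw = 1 * actors[0] + 2 * actors[1] + 3 * actors[2]            # unadjusted actor weight
    uucw = 5 * use_cases[0] + 10 * use_cases[1] + 15 * use_cases[2]  # unadjusted use case weight
    tcf = 0.6 + 0.01 * tfactor       # technical complexity factor
    ecf = 1.4 - 0.03 * efactor       # environmental complexity factor
    return (uaw + uucw) * tcf * ecf

# Illustrative inputs: 2 simple, 2 average, 1 complex actor; 3/4/2 use cases.
ucp = use_case_points(actors=(2, 2, 1), use_cases=(3, 4, 2), tfactor=30, efactor=15)
effort_hours = ucp * 20   # a commonly cited default of 20 person-hours per UCP
```

The final multiplication is the estimation step mentioned at the end of the entry: once the UCP figure is known, a productivity ratio (hours per UCP) converts size into effort.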
  • 1.1K
  • 18 Oct 2022
Topic Review
PHYLIP
PHYLogeny Inference Package (PHYLIP) is a free computational phylogenetics package of programs for inferring evolutionary trees (phylogenies). It consists of 35 portable programs whose source code is written in the programming language C. As of version 3.696, it is licensed as open-source software; versions 3.695 and older were proprietary freeware. Releases occur as source code, and as precompiled executables for many operating systems including Windows (95, 98, ME, NT, 2000, XP, Vista), Mac OS 8, Mac OS 9, OS X, Linux (Debian, Red Hat), and FreeBSD from FreeBSD.org. Full documentation is written for all the programs in the package and is included therein. The programs in the PHYLIP package were written by Professor Joseph Felsenstein, of the Department of Genome Sciences and the Department of Biology, University of Washington, Seattle. Methods (implemented by each program) that are available in the package include parsimony, distance matrix, and likelihood methods, including bootstrapping and consensus trees. Data types that can be handled include molecular sequences, gene frequencies, restriction sites and fragments, distance matrices, and discrete characters. Each program is controlled through a menu, which asks users which options they want to set, and allows them to start the computation. The data is read into the program from a text file, which the user can prepare using any word processor or text editor (but this text file cannot be in the special format of the word processor; it must instead be in flat ASCII or text-only format). Some sequence analysis programs such as the ClustalW alignment program can write data files in the PHYLIP format. Most of the programs look for the data in a file called infile. If the PHYLIP programs do not find this file, they ask the user to type in the file name of the data file.
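Preparing the flat-ASCII `infile` described above can be sketched as follows. This assumes the basic sequential PHYLIP layout (a header line with the number of taxa and sites, then one line per taxon with the name padded to 10 characters before the sequence); the sequences and helper name are hypothetical.

```python
# Sketch of writing a data file in the basic sequential PHYLIP layout:
# header "ntaxa nsites", then name padded to 10 characters plus the sequence.
def write_phylip(taxa: dict, path: str = "infile") -> None:
    nchar = len(next(iter(taxa.values())))
    with open(path, "w") as f:
        f.write(f" {len(taxa)} {nchar}\n")
        for name, seq in taxa.items():
            f.write(f"{name:<10}{seq}\n")   # names longer than 10 chars would need truncating

write_phylip({"Human": "ACGTACGTAC", "Mouse": "ACGTACGAAC", "Chicken": "ACTTACGTTC"})
```

Writing the file under the default name `infile` means the PHYLIP programs will pick it up automatically from the working directory, as the entry notes.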
  • 1.1K
  • 27 Oct 2022
Topic Review
Developing Microservice-Based Applications
Microservice Architecture (MSA) is a rising trend in software architecture design. Applications based on MSA are distributed applications whose components are microservices. MSA has already been adopted with great success by numerous companies, and a significant number of published papers discuss its advantages. However, there are several important challenges in the adoption of microservices such as finding the right decomposition approach, heterogeneous technology stacks, lack of relevant skills, out-of-date documentation, etc.
  • 1.1K
  • 15 Jul 2022