Topic Review
Fifth Generation Computer
The Fifth Generation Computer Systems (FGCS) project was an initiative begun in 1982 by Japan's Ministry of International Trade and Industry (MITI) to create computers using massively parallel computing and logic programming. The joint government/industry research project, carried out in Japan during the 1980s, aimed to create an "epoch-making computer" with supercomputer-like performance and to provide a platform for future developments in artificial intelligence. (An unrelated Russian project was also named a fifth-generation computer; see Kronos (computer).) Ehud Shapiro, in his "Trip Report" paper, which focused the project on concurrent logic programming as its software foundation, captured the rationale and motivations driving it.

The term "fifth generation" was intended to convey the system as being advanced. In the history of computing hardware, computers using vacuum tubes were called the first generation; transistors and diodes, the second; integrated circuits, the third; and those using microprocessors, the fourth. Whereas previous computer generations had focused on increasing the number of logic elements in a single CPU, the fifth generation, it was widely believed at the time, would instead turn to massive numbers of CPUs for added performance.

The project was to create the computer over a ten-year period, after which it was considered ended and investment in a new "sixth generation" project would begin. Opinions about its outcome are divided: either it was a failure, or it was ahead of its time.
12.9K
21 Oct 2022
Topic Review
Root-Finding Algorithm
In mathematics and computing, a root-finding algorithm is an algorithm for finding zeroes, also called "roots", of continuous functions. A zero of a function f, from the real numbers to the real numbers or from the complex numbers to the complex numbers, is a number x such that f(x) = 0. As the zeroes of a function generally cannot be computed exactly or expressed in closed form, root-finding algorithms provide approximations to zeroes, expressed either as floating-point numbers or as small isolating intervals (or disks, for complex roots); an interval or disk output is equivalent to an approximate output together with an error bound. Solving an equation f(x) = g(x) is the same as finding the roots of the function h(x) = f(x) - g(x), so root-finding algorithms can solve any equation defined by continuous functions. However, most root-finding algorithms do not guarantee that they will find all the roots; in particular, if such an algorithm does not find any root, that does not mean that no root exists.

Most numerical root-finding methods use iteration, producing a sequence of numbers that ideally converges towards a root as a limit. They require one or more initial guesses of the root as starting values; each iteration then produces a successively more accurate approximation to the root. Since the iteration must be stopped at some point, these methods produce an approximation to the root, not an exact solution. Many methods compute subsequent values by evaluating an auxiliary function on the preceding values. The limit is thus a fixed point of the auxiliary function, which is chosen for having the roots of the original equation as fixed points and for converging rapidly to them.

The behaviour of general root-finding algorithms is studied in numerical analysis. For polynomials, however, root-finding belongs generally to computer algebra, since algebraic properties of polynomials are fundamental for the most efficient algorithms. The efficiency of an algorithm may depend dramatically on the characteristics of the given functions; for example, many algorithms use the derivative of the input function, while others work on every continuous function. For polynomials there are specific algorithms that use algebraic properties to certify that no root is missed and to locate the roots in separate intervals (or disks for complex roots) that are small enough to ensure the convergence of numerical methods (typically Newton's method) to the unique root so located.
11.2K
14 Oct 2022
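The two behaviours described in the entry, bracketing with an error bound and fast fixed-point iteration, can be seen in a minimal sketch. The function names and tolerances below are illustrative, not from the entry: bisection keeps a sign-changing interval, so its output carries an error bound, while Newton's method iterates an auxiliary function whose fixed point is the root.

```python
import math

def bisect(f, a, b, tol=1e-12, max_iter=200):
    """Bisection: halve a sign-changing interval [a, b] until it is tol-wide."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        m = (a + b) / 2
        fm = f(m)
        if fm == 0 or (b - a) / 2 < tol:
            return m
        if fa * fm < 0:
            b, fb = m, fm
        else:
            a, fa = m, fm
    return (a + b) / 2

def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton's method: iterate x <- x - f(x)/f'(x) from an initial guess x0."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x
    return x

# Approximate sqrt(2) as the positive root of f(x) = x^2 - 2.
f = lambda x: x * x - 2
print(bisect(f, 0.0, 2.0))               # ~1.414213562..., with a width bound
print(newton(f, lambda x: 2 * x, 1.0))   # converges quadratically near the root
print(math.sqrt(2))                      # reference value
```

Note how both methods stop at a tolerance rather than at the exact root, matching the entry's point that iteration yields an approximation, not an exact solution.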
Topic Review
Big Data Mining
Big data mining (BDM) is an approach that applies cumulative data mining and extraction techniques to large datasets and volumes of data. It is mainly focused on retrieving relevant, in-demand information (or patterns) and thus on extracting the value hidden in data of immense volume. BDM draws on conventional data mining notions but also incorporates the characteristics of big data: it makes it possible to acquire useful information from databases or data streams that are huge in terms of the "big data V's", such as volume, velocity, and variety.
7.1K
05 Aug 2021
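The entry names no specific mining technique, so as one hedged illustration of extracting patterns from a high-velocity stream under the volume/velocity constraints it describes, here is the classic Misra-Gries summary, which finds frequent items in a single pass using bounded memory; the names below are illustrative.

```python
def misra_gries(stream, k):
    """One-pass frequent-items summary using at most k-1 counters.
    Any item occurring more than len(stream)/k times is guaranteed to survive."""
    counters = {}
    for item in stream:
        if item in counters:
            counters[item] += 1
        elif len(counters) < k - 1:
            counters[item] = 1
        else:
            # Decrement every counter; drop those that reach zero.
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters

stream = ["a", "b", "a", "c", "a", "b", "a", "d", "a"]
print(misra_gries(stream, k=3))   # {'a': 3}: 'a' (5 of 9 items) is retained
```

The memory use depends only on k, not on the stream length, which is the property that makes such summaries usable when the data volume rules out storing exact counts.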
Topic Review
Timeline of Operating Systems
This article presents a timeline of events in the history of computer operating systems from 1951 to the present day. For a narrative explaining the overall developments, see the history of operating systems.
6.6K
04 Nov 2022
Topic Review
Feature Detection (Computer Vision)
In computer vision and image processing, feature detection comprises methods for computing abstractions of image information and making a local decision at every image point as to whether an image feature of a given type is present at that point. The resulting features are subsets of the image domain, often in the form of isolated points, continuous curves, or connected regions.
4.9K
10 Oct 2022
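As a minimal sketch of "a local decision at every image point", the following computes the Harris corner response, one common feature detector, over a synthetic image. The entry does not prescribe this particular method; the function name, parameters, and threshold below are illustrative, and NumPy and SciPy are assumed to be available.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def harris_response(img, k=0.05, window=5):
    """Per-pixel corner response: decide at every point whether a corner-like
    feature is present, based on the local structure tensor of the gradients."""
    iy, ix = np.gradient(img.astype(float))
    # Smooth products of gradients over a local window (the structure tensor).
    sxx = uniform_filter(ix * ix, window)
    syy = uniform_filter(iy * iy, window)
    sxy = uniform_filter(ix * iy, window)
    det = sxx * syy - sxy * sxy
    trace = sxx + syy
    return det - k * trace ** 2

# A white square on a black background: its corners score highest.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
r = harris_response(img)
corners = np.argwhere(r > 0.1 * r.max())   # the local decision, by thresholding
print(corners[:4])                          # isolated points near the corners
```

The detected features here are isolated points, one of the three feature forms (points, curves, regions) the entry mentions.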
Topic Review
List of Instruction Sets
A list of computer central processor instruction sets: (By alphabetical order by its manufacturer.)
4.8K
04 Nov 2022
Topic Review
Digital Literacy
Digital literacy refers to an individual's ability to find, evaluate, and compose clear information through writing and other media on various digital platforms. It is evaluated by an individual's grammar, composition, and typing skills, and by the ability to produce text, images, audio, and designs using technology. While digital literacy initially focused on digital skills and stand-alone computers, the advent of the internet and the use of social media have shifted some of its focus to mobile devices. Like other expanding definitions of literacy that recognize cultural and historical ways of making meaning, digital literacy does not replace traditional forms of literacy; instead it builds upon and expands the skills that form their foundation. Digital literacy should be considered part of the path to knowledge.

Digital literacy is built on the expanding role of social science research in the field of literacy, as well as on concepts of visual literacy, computer literacy, and information literacy. Overall, digital literacy shares many defining principles with other fields that use modifiers in front of "literacy" to define ways of being and domain-specific knowledge or competence. The term has grown in popularity in education and higher-education settings and is used in both international and national standards.
4.0K
31 Oct 2022
Topic Review
Windows NT
Windows NT is a proprietary graphical operating system produced by Microsoft, the first version of which was released on July 27, 1993. It is a processor-independent, multiprocessing, multi-user operating system. The first version, Windows NT 3.1, was produced for workstations and server computers. It was a commercially focused operating system intended to complement consumer versions of Windows that were based on MS-DOS (including Windows 1.0 through Windows 3.1x). Gradually, the Windows NT family was expanded into Microsoft's general-purpose operating system product line for all personal computers, deprecating the Windows 9x family. "NT" formerly stood for "New Technology" but no longer carries any specific meaning; starting with Windows 2000, "NT" was removed from the product name and appears only in the product version string and in several low-level places within the system.

NT was the first purely 32-bit version of Windows, whereas its consumer-oriented counterparts, Windows 3.1x and Windows 9x, were 16-bit/32-bit hybrids. It is a multi-architecture operating system: initially it supported several instruction set architectures, including IA-32, MIPS, and DEC Alpha, and support for PowerPC, Itanium, x64, and ARM was added later. The latest versions support x86 (including IA-32 and x64) and ARM. Major features of the Windows NT family include the Windows Shell, the Windows API, the Native API, Active Directory, Group Policy, the Hardware Abstraction Layer, NTFS, BitLocker, the Windows Store, Windows Update, and Hyper-V.
3.0K
02 Nov 2022
Topic Review
Chaotic Image Encryption
Chaos is the characteristic of a system whose current state is highly sensitive to the previous state (spatial chaos), to the initial conditions (temporal chaos), or to both (spatio-temporal chaos). Such sensitivity makes the output or behavior of a chaotic system difficult to predict. Chaos theory explains and formulates the apparent disorder of chaotic systems in terms of orderly patterns, structured feedback loops, iterative repetition, self-organization, self-similarity, fractals, and so on; chaotic maps, attractors, and sequences are the mathematical structures used for this formulation. Chaotic systems, maps, attractors, and sequences have been of great interest to the research community and have been used for security purposes in a broad variety of applications, ranging from smart grids to communication systems. In particular, chaotic encryption has been used to encrypt a variety of content types in addition to images.
2.9K
27 Jul 2022
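A minimal sketch of the idea, not any particular published scheme: iterate the chaotic logistic map from a secret initial condition to produce a keystream and XOR it with the pixel bytes. The sensitivity to the initial condition described above is what makes the keystream hard to predict, and the key is the initial condition itself. All names and parameters here are illustrative; NumPy is assumed.

```python
import numpy as np

def logistic_keystream(n, x0, r=3.99):
    """Iterate the chaotic logistic map x <- r*x*(1-x) and quantize to bytes."""
    ks = np.empty(n, dtype=np.uint8)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        ks[i] = int(x * 256) % 256
    return ks

def chaotic_xor(image, x0):
    """Encrypt or decrypt (XOR is its own inverse) a uint8 image."""
    flat = image.ravel()
    ks = logistic_keystream(flat.size, x0)
    return (flat ^ ks).reshape(image.shape)

img = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
enc = chaotic_xor(img, x0=0.6180339887)   # the key is the initial condition
assert np.array_equal(chaotic_xor(enc, 0.6180339887), img)  # exact key recovers
# A key differing only in the 10th decimal yields an unrelated image:
print(np.mean(chaotic_xor(enc, 0.6180339888) == img))       # ~ chance level
```

The last line demonstrates the temporal-chaos property the entry describes: a tiny change in the initial condition produces a completely different trajectory, so a near-miss key recovers essentially nothing.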
Topic Review
IBM Systems Network Architecture
Systems Network Architecture (SNA) is IBM's proprietary networking architecture, created in 1974. It is a complete protocol stack for interconnecting computers and their resources. SNA describes formats and protocols and is, in itself, not a piece of software. The implementation of SNA takes the form of various communications packages, most notably Virtual Telecommunications Access Method (VTAM), the mainframe software package for SNA communications.
2.1K
28 Oct 2022
Topic Review
Home Theater PC
A home theater PC (HTPC) or media center computer is a convergent device that combines some or all of the capabilities of a personal computer with a software application supporting video, photo, and audio playback, and sometimes video recording. In recent years, other types of consumer electronics, including game consoles and dedicated media devices, have crossed over to manage video and music content. The term "media center" also refers to specialized application software designed to run on standard personal computers.

HTPCs and other convergent devices integrate components of a home theater into a unit co-located with a home entertainment system. An HTPC system typically has a remote control, and its software interface normally follows a 10-foot (3 m) user interface design so that it can be comfortably viewed at typical television viewing distances. An HTPC can be purchased pre-configured with the hardware and software needed to add video programming or music to the PC; enthusiasts can also piece together a system out of discrete components as part of a software-based HTPC. Since 2007, digital media player and smart TV software has been incorporated into consumer electronics through software or hardware changes, including video game consoles, Blu-ray players, networked media players, televisions, and set-top boxes. The increased availability of specialized devices, coupled with paid and free digital online content, now offers an alternative to multipurpose (and more costly) personal computers.
2.1K
08 Nov 2022
Topic Review
Cache Placement Policies
A cache is a memory that holds data recently used by the processor. A block of main memory cannot be placed just anywhere in the cache; the placement policy restricts it to a particular cache line or set of lines. In other words, the placement policy determines where a particular memory block can go when it is brought into the cache. There are three placement policies: direct-mapped, fully associative, and set-associative.
2.0K
06 Nov 2022
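A short sketch of the address arithmetic shared by all three policies (the function name and the cache geometry in the example are illustrative): the placement policy fixes how many sets the cache has, which in turn fixes where a block may be placed.

```python
def cache_location(address, block_size, num_sets):
    """Split an address into (tag, set index, block offset).

    The placement policy determines num_sets:
      direct-mapped:        num_sets = number of lines (1 candidate line)
      k-way set-associative: num_sets = lines / k      (k candidate lines)
      fully associative:    num_sets = 1               (any line may be used)
    """
    offset = address % block_size          # byte within the block
    block_number = address // block_size   # which memory block this is
    set_index = block_number % num_sets    # which set it must go to
    tag = block_number // num_sets         # identifies the block within the set
    return tag, set_index, offset

# Example: 32 KiB cache, 64-byte blocks (512 lines), 4-way -> 128 sets.
tag, s, off = cache_location(0x1234ABCD, block_size=64, num_sets=128)
print(hex(tag), s, off)
```

The same address therefore maps to exactly one set; how many lines within that set may hold it is what distinguishes the three policies.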
Topic Review
Thread Block
A thread block is a programming abstraction that represents a group of threads that can be executed serially or in parallel. For better process and data mapping, threads are grouped into thread blocks. The number of threads in a block varies with the available shared memory and is also limited by the architecture (for example, to 512 threads per block on early CUDA devices and 1024 on later ones). The threads in the same thread block run on the same streaming multiprocessor. Threads in the same block can communicate with each other via shared memory, barrier synchronization, or other synchronization primitives such as atomic operations.

Multiple blocks are combined to form a grid, and all blocks in the same grid contain the same number of threads. Since the number of threads in a block is limited, grids can be used for computations that require a large number of thread blocks to operate in parallel. CUDA is a parallel computing platform and programming model that higher-level languages can use to exploit parallelism. In CUDA, a kernel, a small program or function, is executed with the aid of threads; the thread is an abstract entity that represents the execution of the kernel. Multithreaded applications use many such threads, running at the same time, to organize parallel computation. Every thread has an index, which is used for calculating memory address locations and for taking control decisions.
1.8K
29 Nov 2022
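The indexing arithmetic described above can be illustrated with a plain-Python simulation; this is not GPU code, and the names are illustrative. Each thread derives a global index from its block index, the block size, and its thread index (exactly the blockIdx * blockDim + threadIdx pattern in CUDA), and the grid is sized so the blocks together cover the data.

```python
def launch_kernel(kernel, num_blocks, threads_per_block, *args):
    """Sequentially simulate a grid of thread blocks (a real GPU runs them in
    parallel): every thread computes a global index from its block and thread
    indices and uses it to address memory."""
    for block_idx in range(num_blocks):
        for thread_idx in range(threads_per_block):
            global_idx = block_idx * threads_per_block + thread_idx
            kernel(global_idx, *args)

def vector_add(i, a, b, out):
    """Per-thread body: each thread handles one element; the bounds check
    guards threads in the last, partially filled block."""
    if i < len(out):
        out[i] = a[i] + b[i]

n = 1000
a, b, out = list(range(n)), list(range(n)), [0] * n
threads_per_block = 256                                        # bounded by the architecture
num_blocks = (n + threads_per_block - 1) // threads_per_block  # grid covers all n elements
launch_kernel(vector_add, num_blocks, threads_per_block, a, b, out)
print(out[999])   # 1998
```

The ceiling division when sizing the grid is why the per-thread bounds check is needed: the last block has more threads than remaining elements.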
Topic Review
ISO 29110
ISO/IEC 29110, "Systems and Software Engineering — Lifecycle Profiles for Very Small Entities (VSEs)", is a series of International Standards (IS) and Technical Reports (TR) targeted at very small entities. A Very Small Entity (VSE) is an enterprise, an organization, a department, or a project having up to 25 people. The standards and technical reports were developed by Working Group 24 (WG24) of Sub-Committee 7 (SC7) of Joint Technical Committee 1 (JTC1) of the International Organization for Standardization and the International Electrotechnical Commission.

Industries around the world have agreed that certain ways of working produce predictable results. Companies that agree to use these methods, and then to have their compliance measured, are called ISO certified; some ISO-certified organizations require that their vendors also be ISO certified. The general standard for software development, ISO/IEC/IEEE 12207, is appropriate for medium and large software development efforts, and the general standard for system development, ISO/IEC/IEEE 15288, is likewise appropriate for medium and large system development efforts. Systems, in the context of ISO/IEC 29110, are typically composed of hardware and software components. Things work differently in small organisations, and ISO/IEC 29110 reflects that.
1.8K
07 Nov 2022
Topic Review
Marker-Controlled Watershed for Segmentation of Images
Watershed is a widely used image segmentation algorithm in which a grayscale image is treated as a topographic relief that is flooded from initial basins. There are many watershed implementations in software packages and products, but users are frequently unaware of the algorithm's options and the peculiarities of its realizations. Even when these packages are based on the identical algorithm, watershed by flooding, their outcomes, processing speed, and memory consumption vary greatly.
1.8K
07 Jul 2022
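A minimal marker-controlled example, assuming scikit-image and SciPy are available (one implementation among the many the entry mentions): markers seeded at peaks of the distance transform let the flooding separate two touching objects that plain thresholding cannot split.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed
from skimage.feature import peak_local_max

# Two overlapping disks: a binary image thresholding alone cannot split.
y, x = np.indices((80, 80))
mask = ((x - 28) ** 2 + (y - 28) ** 2 < 16 ** 2) | \
       ((x - 48) ** 2 + (y - 48) ** 2 < 16 ** 2)

# Markers: one seed per object, taken from peaks of the distance transform.
distance = ndi.distance_transform_edt(mask)
coords = peak_local_max(distance, min_distance=10, labels=mask)
seeds = np.zeros(distance.shape, dtype=bool)
seeds[tuple(coords.T)] = True
markers, _ = ndi.label(seeds)

# Flood the inverted distance map from the markers, restricted to the mask.
labels = watershed(-distance, markers, mask=mask)
print(np.unique(labels))   # 0 (background) plus one label per disk
```

Using the negated distance transform as the relief makes each object's center a basin, so flooding from the markers assigns the overlap region to whichever object is nearer.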
Topic Review
List of Years in Home Video
This page indexes the individual "year in home video" pages. Some years are annotated with a significant event as a reference point.
1.7K
21 Oct 2022
Topic Review
LINPACK Benchmarks
The LINPACK benchmarks are a measure of a system's floating-point computing power. Introduced by Jack Dongarra, they measure how fast a computer solves a dense n-by-n system of linear equations Ax = b, a common task in engineering. The latest version of these benchmarks is used to build the TOP500 list, ranking the world's most powerful supercomputers.

The aim is to approximate how fast a computer will perform when solving real problems. This is a simplification, since no single computational task can reflect the overall performance of a computer system. Nevertheless, LINPACK benchmark performance provides a useful corrective to the peak performance quoted by the manufacturer. Peak performance is the maximal theoretical performance a computer can achieve, calculated as the machine's frequency, in cycles per second, times the number of operations per cycle it can perform; actual performance is always lower. The performance of a computer is a complex issue that depends on many interconnected variables. The performance measured by the LINPACK benchmark is the number of 64-bit floating-point operations, generally additions and multiplications, that a computer can perform per second, also known as FLOPS. However, a computer's performance when running actual applications is likely to be far behind the maximal performance it achieves running the appropriate LINPACK benchmark.

The name of these benchmarks comes from the LINPACK package, a collection of Fortran linear algebra subroutines widely used in the 1980s and initially tightly linked to the LINPACK benchmark. The LINPACK package has since been replaced by other libraries.
1.7K
02 Dec 2022
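A LINPACK-style measurement can be sketched in a few lines, assuming NumPy; this is not the official HPL benchmark, just the same idea: time a dense solve of Ax = b and convert the standard LU operation count, 2/3 n^3 + 2 n^2 floating-point operations, into GFLOPS.

```python
import time
import numpy as np

def linpack_like(n):
    """Time the solution of a dense n-by-n system Ax = b and report GFLOPS,
    using the standard LU operation count of 2/3 n^3 + 2 n^2."""
    rng = np.random.default_rng(0)
    a = rng.random((n, n))
    b = rng.random(n)
    t0 = time.perf_counter()
    x = np.linalg.solve(a, b)            # LU factorization + triangular solves
    elapsed = time.perf_counter() - t0
    flops = (2.0 / 3.0) * n ** 3 + 2.0 * n ** 2
    residual = np.linalg.norm(a @ x - b)  # sanity check on the solution
    return flops / elapsed / 1e9, residual

gflops, res = linpack_like(2000)
print(f"{gflops:.1f} GFLOPS (residual {res:.2e})")
# Compare against peak = cores x frequency x FLOPs/cycle; the measured
# figure is always lower, as the entry notes.
```

Reporting the residual alongside the rate mirrors the benchmark's requirement that the computed solution actually be accurate, not just fast.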
Topic Review
List of Mergers and Acquisitions by IBM
IBM has undergone a large number of mergers and acquisitions during a corporate history spanning over a century, and the company has also produced a number of spinoffs during that time. The acquisition date listed is the date of the agreement between IBM and the subject of the acquisition. The value of each acquisition is listed in USD because IBM is based in the United States; if the value of an acquisition is not listed, it is undisclosed. Many of the companies listed in this article had subsidiaries of their own, which in turn had their own subsidiaries; for examples, see Pugh's book Building IBM, page 26.
1.7K
17 Oct 2022
Topic Review
Social Login
Social login is a form of single sign-on that uses existing information from a social networking service such as Facebook, Twitter, or Google to sign in to a third-party website, instead of creating a new login account specifically for that website. It is designed to simplify logins for end users and to provide web developers with more, and often more reliable, demographic information.
1.6K
03 Nov 2022
Topic Review
Minix 3
MINIX 3 is a small, Unix-like operating system. It is published under a BSD 3-Clause license and is a successor project to the earlier versions, MINIX 1 and 2. The project's main goal is for the system to be fault-tolerant by detecting and repairing its own faults on the fly, with no user intervention. Its main envisaged uses are embedded systems and education.

As of 2017, MINIX 3 supports IA-32 and ARM architecture processors. It can also run on emulators or virtual machines such as Bochs, VMware Workstation, Microsoft Virtual PC, Oracle VirtualBox, and QEMU. A port to the PowerPC architecture is in development. The distribution comes on a live CD and does not support live USB installation. MINIX 3 is believed to have inspired the Intel Management Engine (ME) OS found in Intel's Platform Controller Hub, starting with the introduction of ME 11, which is used with Skylake and Kaby Lake processors. It has been argued that, because of its use in the Intel ME, MINIX may be the most widely used OS on x86/AMD64 processors, with more installations than Microsoft Windows, Linux, or macOS. The project has been dormant since 2018, and the latest release is 3.4.0rc6 from 2017, although the MINIX 3 discussion group is still active.
1.6K
07 Nov 2022