Topic Review
Kissing Number Problem
In geometry, a kissing number is defined as the number of non-overlapping unit spheres that can be arranged so that they each touch a common unit sphere. For a lattice packing the kissing number is the same for every sphere, but for an arbitrary sphere packing the kissing number may vary from one sphere to another. Other names for the kissing number are Newton number (after the originator of the problem) and contact number. In general, the kissing number problem seeks the maximum possible kissing number for n-dimensional spheres in (n + 1)-dimensional Euclidean space. Ordinary spheres correspond to two-dimensional closed surfaces in three-dimensional space. Finding the kissing number when the centers of the spheres are confined to a line (the one-dimensional case) or a plane (the two-dimensional case) is trivial. Proving a solution to the three-dimensional case, despite being easy to conceptualise and model in the physical world, eluded mathematicians until the mid-20th century. Solutions in higher dimensions are considerably more challenging, and only a handful of cases have been solved exactly. For the others, investigations have determined upper and lower bounds, but not exact solutions.
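For reference, the handful of dimensions with exactly known kissing numbers can be listed directly; the short sketch below (not part of the original entry) records these widely cited values, while all other dimensions have only upper and lower bounds.

```python
# Dimensions for which the kissing number is known exactly (widely cited values);
# every other dimension currently has only upper and lower bounds.
KNOWN_KISSING_NUMBERS = {
    1: 2,        # two neighbours on a line
    2: 6,        # hexagonal arrangement in the plane
    3: 12,       # settled in the mid-20th century
    4: 24,       # D4 lattice
    8: 240,      # E8 lattice
    24: 196560,  # Leech lattice
}

def kissing_number(dimension: int):
    """Return the exact kissing number if known, otherwise None (only bounds exist)."""
    return KNOWN_KISSING_NUMBERS.get(dimension)

for d in (1, 2, 3, 4, 5, 8, 24):
    k = kissing_number(d)
    print(f"dimension {d}: {k if k is not None else 'only upper/lower bounds known'}")
```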
  • 706
  • 19 Oct 2022
Topic Review
Low-Cost Water Quality Sensors for IoT
In many countries, water quality monitoring is limited due to the high cost of logistics and professional equipment such as multiparametric probes. However, low-cost sensors integrated with the Internet of Things (IoT) can enable real-time environmental monitoring networks, providing valuable water quality information to the public.
  • 706
  • 06 May 2023
Topic Review
Conceptual Interoperability
Conceptual interoperability is a concept in simulation theory; however, it is broadly applicable to other model-based information technology domains. Building on the early ideas of Harkrider and Lunceford, simulation composability has been studied in more detail. Petty and Weisel formulated the current working definition: "Composability is the capability to select and assemble simulation components in various combinations into simulation systems to satisfy specific user requirements. The defining characteristic of composability is the ability to combine and recombine components into different simulation systems for different purposes." A recent RAND study provided a coherent overview of the state of composability for military simulation systems within the U.S. Department of Defense; many of its findings have much broader applicability.
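As a schematic illustration of the composability definition quoted above, the sketch below shows components with a shared interface being selected and recombined into different simulation systems; the component names and interface are hypothetical, not taken from the RAND study or any particular framework.

```python
# Minimal sketch of simulation composability: components implementing a common
# step() interface are selected and assembled into different simulation systems.
# All names here are hypothetical illustrations.
from typing import Protocol

class SimulationComponent(Protocol):
    def step(self, state: dict) -> dict: ...

class WeatherModel:
    def step(self, state: dict) -> dict:
        return {**state, "weather": "updated"}

class TerrainModel:
    def step(self, state: dict) -> dict:
        return {**state, "terrain": "updated"}

class VehicleModel:
    def step(self, state: dict) -> dict:
        return {**state, "vehicles": "updated"}

def compose(*components: SimulationComponent):
    """Assemble the selected components into one simulation system (a step pipeline)."""
    def run(state: dict) -> dict:
        for component in components:
            state = component.step(state)
        return state
    return run

# The same components can be recombined for different purposes:
training_sim = compose(TerrainModel(), VehicleModel())
analysis_sim = compose(WeatherModel(), TerrainModel(), VehicleModel())
print(training_sim({}))
print(analysis_sim({}))
```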
  • 705
  • 25 Oct 2022
Topic Review
Decision Intelligence
Decision intelligence is an engineering discipline that augments data science with theory from social science, decision theory, and managerial science. Its application provides a framework for best practices in organizational decision-making and processes for applying machine learning at scale. The basic idea is that decisions are based on our understanding of how actions lead to outcomes. Decision intelligence is a discipline for analyzing this chain of cause-and-effect, and decision modeling is a visual language for representing these chains.
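As a minimal illustration of a cause-and-effect chain behind a decision, the sketch below links a hypothetical action to an outcome through intermediate effects; the lever, effects, and numbers are invented for illustration only.

```python
# Minimal sketch of a decision model: an action linked to an outcome through a
# chain of assumed intermediate effects. All quantities are hypothetical.
def marketing_spend_to_revenue(spend: float) -> float:
    """Hypothetical causal chain: spend -> impressions -> conversions -> revenue."""
    impressions = spend * 100          # assumed reach per currency unit
    conversions = impressions * 0.02   # assumed conversion rate
    revenue = conversions * 30.0       # assumed revenue per conversion
    return revenue

# Candidate actions can then be compared by their modelled outcomes:
for spend in (1_000, 5_000, 10_000):
    print(f"spend={spend}: expected revenue={marketing_spend_to_revenue(spend):.0f}")
```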
  • 704
  • 09 Nov 2022
Topic Review Peer Reviewed
Tokenization in the Theory of Knowledge
Tokenization is a procedure for recovering the elements of interest in a sequence of data. This term is commonly used to describe an initial step in the processing of programming languages, and also for the preparation of input data in the case of artificial neural networks; however, it is a generalizable concept that applies to reducing a complex form to its basic elements, whether in the context of computer science or in natural processes. In this entry, the general concept of a token and its attributes are defined, along with its role in different contexts, such as deep learning methods. Included here are suggestions for further theoretical and empirical analysis of tokenization, particularly regarding its use in deep learning, as it is a rate-limiting step and a possible bottleneck when the results do not meet expectations.
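As a minimal illustration of tokenization, the sketch below splits a character sequence into word, number, and punctuation tokens; the regular expression is a simple example, not the tokenizer of any particular programming language or neural network.

```python
# Minimal tokenizer sketch: reduce a sequence of characters to its elements of interest.
import re

TOKEN_PATTERN = re.compile(r"\d+|\w+|[^\w\s]")

def tokenize(text: str) -> list[str]:
    """Split raw text into word, number, and punctuation tokens."""
    return TOKEN_PATTERN.findall(text)

print(tokenize("x = 3 * (y + 1)"))
# ['x', '=', '3', '*', '(', 'y', '+', '1', ')']
```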
  • 704
  • 11 Apr 2023
Topic Review
Exclusion of the Null Hypothesis
In inferential statistics, the null hypothesis (often denoted H0) is a default hypothesis that a quantity to be measured is zero (null). Typically, the quantity to be measured is the difference between two situations, for instance when trying to determine whether there is positive evidence that an effect has occurred or that samples derive from different batches. The null hypothesis effectively states that a quantity of interest is both greater than or equal to zero AND less than or equal to zero. If either requirement can be positively overturned, the null hypothesis is "excluded from the realm of possibilities". The null hypothesis is generally assumed to remain possibly true; multiple analyses can be performed to show whether the hypothesis should be rejected or excluded at a high confidence level, thus demonstrating a statistically significant difference. This is demonstrated by showing that zero lies outside the specified confidence interval of the measurement, on either side. Failure to exclude the null hypothesis (at any confidence level) does NOT logically confirm or support the (unprovable) null hypothesis. (Failing to prove that a quantity is, say, larger than x does not necessarily make it plausible that it is smaller than or equal to x; the measurement may simply have been too inaccurate. Confirming the null hypothesis two-sided would amount to positively proving that the quantity is both greater than or equal to zero AND less than or equal to zero; this requires infinite accuracy as well as an exactly zero effect, neither of which is normally realistic. Moreover, measurements will never indicate a non-zero probability of an exactly zero difference.) Failure to exclude the null hypothesis therefore amounts to a "don't know" at the specified confidence level; it does not immediately imply the null, as the data may already show a (weaker) indication of a non-null effect. The confidence level used certainly does not correspond to the likelihood of the null when exclusion fails; in fact, a higher confidence level widens the range that remains plausible. Depending on the author, a non-null hypothesis can mean (a) that a value other than zero is used, (b) that some margin other than zero is used, or (c) the "alternative" hypothesis. Testing (excluding or failing to exclude) the null hypothesis provides evidence that there are (or are not) statistically sufficient grounds to believe there is a relationship between two phenomena (e.g., that a potential treatment has a non-zero effect, either way). Testing the null hypothesis is a central task in statistical hypothesis testing in the modern practice of science. There are precise criteria for excluding or not excluding a null hypothesis at a certain confidence level; the confidence level should indicate the likelihood that much more and better data would still be able to exclude the null hypothesis on the same side. The concept of a null hypothesis is used differently in two approaches to statistical inference. In the significance testing approach of Ronald Fisher, a null hypothesis is rejected if the observed data are significantly unlikely to have occurred if the null hypothesis were true; in that case the null hypothesis is rejected and an alternative hypothesis is accepted in its place. If the data are consistent with the null hypothesis being possibly true, then the null hypothesis is not rejected.
In neither case is the null hypothesis or its alternative proven; with better or more data, the null may still be rejected. This is analogous to the legal principle of presumption of innocence, in which a suspect or defendant is assumed to be innocent (the null is not rejected) until proven guilty (the null is rejected) beyond a reasonable doubt (to a statistically significant degree). In the hypothesis testing approach of Jerzy Neyman and Egon Pearson, a null hypothesis is contrasted with an alternative hypothesis, and the two hypotheses are distinguished on the basis of data, with certain error rates. This framework is used in formulating answers in research. Statistical inference can also be done without a null hypothesis, by specifying a statistical model corresponding to each candidate hypothesis and using model selection techniques to choose the most appropriate model (the most common selection techniques are based on either the Akaike information criterion or the Bayes factor).
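As an illustrative sketch of "excluding" the null hypothesis by showing that zero lies outside a confidence interval, the example below compares two simulated groups with a standard two-sample t interval; the data, group sizes, and 95% level are arbitrary choices, not taken from the entry.

```python
# Illustrative sketch: exclude the null hypothesis of zero difference in means
# by checking whether zero lies outside the confidence interval. Data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(loc=10.0, scale=2.0, size=50)   # hypothetical control measurements
group_b = rng.normal(loc=11.0, scale=2.0, size=50)   # hypothetical treatment measurements

# Two-sample t confidence interval for the difference in means (pooled variance).
n_a, n_b = len(group_a), len(group_b)
diff = group_b.mean() - group_a.mean()
pooled_var = ((n_a - 1) * group_a.var(ddof=1) + (n_b - 1) * group_b.var(ddof=1)) / (n_a + n_b - 2)
se = np.sqrt(pooled_var * (1 / n_a + 1 / n_b))
t_crit = stats.t.ppf(0.975, n_a + n_b - 2)           # two-sided 95% interval
ci_low, ci_high = diff - t_crit * se, diff + t_crit * se

# "Exclusion": zero lies outside the confidence interval on one side or the other.
if ci_low > 0 or ci_high < 0:
    print(f"difference {diff:.2f}, 95% CI ({ci_low:.2f}, {ci_high:.2f}): zero excluded")
else:
    print(f"difference {diff:.2f}, 95% CI ({ci_low:.2f}, {ci_high:.2f}): zero not excluded ('don't know')")
```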
  • 704
  • 17 Oct 2022
Topic Review
Application of Triboelectric Nanogenerator in Fluid Dynamics Sensing
The triboelectric nanogenerator (TENG), developed by Z. L. Wang's team to harvest random mechanical energy, is a promising new energy source for distributed sensing systems in the new era of the Internet of Things (IoT) and artificial intelligence (AI) for a smart world. In industry and academia, fluid dynamics sensing for liquids and air is urgently needed but lacking. In particular, local fluid sensing is difficult and limited with traditional sensors. Fortunately, given the general advantages of TENGs and their specific advantages as fluid dynamics sensors, fluid dynamics sensing can be better realized.
  • 704
  • 30 Sep 2022
Topic Review
Low Rate DDoS Detection Techniques in Software-Defined Networks
Software-defined networking (SDN) is a new networking paradigm that provides centralized control, programmability, and a global view of the topology in the controller. SDN is becoming more popular due to its high auditability, which also raises security and privacy concerns. SDN must be outfitted with the best security scheme to counter evolving security attacks. A Distributed Denial-of-Service (DDoS) attack is a network attack that floods network links with illegitimate data using high-rate packet transmission. Illegitimate traffic can overload network links, causing legitimate data to be dropped and network services to become unavailable. Low-rate Distributed Denial-of-Service (LDDoS) is a recent evolution of the DDoS attack that has emerged as one of the most serious threats to the Internet, cloud computing platforms, the Internet of Things (IoT), and large data centers. Moreover, LDDoS attacks are more challenging to detect because the illegitimate traffic is sent at a low rate and disguised as legitimate traffic. Thus, traditional security mechanisms such as symmetric/asymmetric detection schemes that have been proposed to protect SDN from DDoS attacks may be unsuitable or inefficient for detecting LDDoS attacks.
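As a schematic illustration only (not a detection scheme from the entry), the sketch below applies a naive per-flow packet-rate threshold to hypothetical traffic, showing how a flooding source is flagged while a low-rate source that blends in with legitimate traffic is missed.

```python
# Illustrative sketch: a naive per-flow packet-rate threshold, the kind of simple
# check that a low-rate DDoS flow can evade by staying under the limit.
# Flow records and the threshold are hypothetical.
from collections import Counter

def flag_high_rate_flows(packets, window_seconds=10, threshold_pps=100):
    """Flag source addresses whose packet rate in the window exceeds the threshold."""
    counts = Counter(src for src, _timestamp in packets)
    return {src for src, count in counts.items() if count / window_seconds > threshold_pps}

packets = [("10.0.0.1", t) for t in range(5000)]          # high-rate flood
packets += [("10.0.0.2", t) for t in range(0, 900, 2)]    # low-rate attack traffic
packets += [("10.0.0.3", t) for t in range(0, 300, 5)]    # legitimate client
print(flag_high_rate_flows(packets))   # only the flooding source exceeds the threshold
```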
  • 704
  • 08 Aug 2022
Topic Review
Quietism
Quietism in philosophy sees the role of philosophy as broadly therapeutic or remedial. Quietist philosophers believe that philosophy has no positive thesis to contribute; rather, its value lies in defusing confusions in the linguistic and conceptual frameworks of other subjects, including non-quietist philosophy. For quietists, advancing knowledge or settling debates (particularly those between realists and non-realists) is not the job of philosophy; rather, philosophy should liberate the mind by diagnosing confusing concepts.
  • 703
  • 17 Nov 2022
Topic Review
Computer Vision in Self-Steering Tractors
Agricultural machinery, such as tractors, is meant to operate for many hours over large areas, performing repetitive tasks. The automatic navigation of agricultural vehicles can ensure a high degree of automation of cultivation tasks, enhanced precision of navigation between crop structures, increased operational safety, and reduced human labor and operating costs.
  • 702
  • 24 Feb 2022