Topic Review
Interactive Storytelling
Interactive storytelling (also known as interactive drama) is a form of digital entertainment in which the storyline is not predetermined. The author creates the setting, characters, and situation that the narrative must address, but the user (also reader or player) experiences a unique story based on their interactions with the story world. The architecture of an interactive storytelling program includes a drama manager, user model, and agent model to control, respectively, aspects of narrative production, player uniqueness, and character knowledge and behavior. Together, these systems generate characters that act "human," alter the world in real time in reaction to the player, and ensure that new narrative events unfold comprehensibly. The field of study surrounding interactive storytelling encompasses many disparate fields, including psychology, sociology, cognitive science, linguistics, natural language processing, user interface design, computer science, and emergent intelligence. These fall under the umbrella term of Human-Computer Interaction (HCI), at the intersection of hard science and the humanities. The difficulty of producing an effective interactive storytelling system is attributed to the ideological division between professionals in each field: artists have trouble constraining themselves to logical and linear systems, and programmers are disinclined to appreciate or incorporate the abstract and unproven concepts of the humanities.
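As a rough illustration of how the three components named above might fit together, the sketch below wires a drama manager to a user model and a set of character agents. It is a minimal sketch only; the class and method names (DramaManager, UserModel, AgentModel, next_beat) are hypothetical and do not come from any particular interactive-storytelling system.

```python
# Hypothetical sketch of the drama manager / user model / agent model architecture.
from dataclasses import dataclass, field

@dataclass
class UserModel:
    """Tracks what this particular player tends to do, so the story can adapt."""
    preferences: dict = field(default_factory=dict)

    def observe(self, action: str) -> None:
        self.preferences[action] = self.preferences.get(action, 0) + 1

@dataclass
class AgentModel:
    """Knowledge and behavior of one non-player character."""
    name: str
    knowledge: set = field(default_factory=set)

    def react(self, event: str) -> str:
        self.knowledge.add(event)
        return f"{self.name} responds to '{event}'"

class DramaManager:
    """Selects the next narrative events given player history and character state."""
    def __init__(self, agents, user_model):
        self.agents = agents
        self.user_model = user_model

    def next_beat(self, player_action: str) -> list[str]:
        self.user_model.observe(player_action)
        # Each character reacts; a real system would also check plot constraints here.
        return [agent.react(player_action) for agent in self.agents]

# Example interaction loop
manager = DramaManager([AgentModel("Guard"), AgentModel("Merchant")], UserModel())
print(manager.next_beat("steal apple"))
```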
  • 404
  • 07 Nov 2022
Topic Review
Reinforcement Learning
Reinforcement Learning (RL) is an approach in Machine Learning that aims to solve dynamic and complex problems in which autonomous entities, called agents, are trained to take actions that lead them to an optimal solution.
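A minimal sketch of this idea is tabular Q-learning, shown below on an invented five-state corridor environment; the environment, reward, and hyperparameters are illustrative placeholders rather than part of any specific RL system.

```python
# Minimal tabular Q-learning sketch; the 5-state "corridor" environment and all
# hyperparameters are invented for illustration only.
import random

N_STATES, GOAL = 5, 4            # states 0..4, reward at state 4
ACTIONS = [-1, +1]               # move left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.2

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

for episode in range(500):
    state, done = 0, False
    while not done:
        # epsilon-greedy action selection
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        # Q-learning update rule
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = nxt

print("Learned action in state 0:", max(ACTIONS, key=lambda a: Q[(0, a)]))
```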
  • 404
  • 25 Jul 2023
Topic Review
Fluid-Structure Interaction Methods in Biomechanics
Fluid-structure interaction algorithms are utilized to examine how the human circulatory system functions by simulating blood flow and capturing mechanical responses within blood vessels. These sophisticated algorithms take into account interactions between fluid dynamics, vessel walls, heart walls, and valves. By combining advanced medical imaging techniques with fluid-structure interaction models, it becomes possible to customize these models for individual patients. 
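The core numerical pattern behind many such algorithms is a partitioned coupling loop, in which a fluid solve and a structure solve exchange interface data until they agree. The sketch below is a toy version of that loop only: the scalar "fluid" and "structure" operators are invented stand-ins, not hemodynamic models, and the function names and relaxation factor are assumptions made for illustration.

```python
# Toy partitioned fluid-structure coupling iteration. The scalar "fluid" and
# "structure" operators below are stand-ins; a real hemodynamics code would
# replace them with Navier-Stokes and vessel-wall mechanics solvers.

def fluid_solver(wall_displacement):
    # Pretend pressure drops as the vessel widens (purely illustrative).
    return 100.0 / (1.0 + wall_displacement)

def structure_solver(pressure):
    # Pretend wall displacement is proportional to pressure (purely illustrative).
    return 0.01 * pressure

def coupled_step(tol=1e-8, max_iters=100, relaxation=0.5):
    """Fixed-point iteration with under-relaxation until the interface converges."""
    displacement = 0.0
    for i in range(max_iters):
        pressure = fluid_solver(displacement)
        new_displacement = structure_solver(pressure)
        if abs(new_displacement - displacement) < tol:
            return pressure, new_displacement, i
        # Under-relaxation stabilizes strongly coupled problems.
        displacement += relaxation * (new_displacement - displacement)
    raise RuntimeError("coupling did not converge")

pressure, displacement, iters = coupled_step()
print(f"converged in {iters} iterations: p={pressure:.3f}, d={displacement:.5f}")
```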
  • 404
  • 14 Aug 2023
Topic Review
Interactive Visual Analysis
Interactive Visual Analysis (IVA) is a set of techniques for combining the computational power of computers with the perceptive and cognitive capabilities of humans, in order to extract knowledge from large and complex datasets. The techniques rely heavily on user interaction and the human visual system, and exist in the intersection between visual analytics and big data. It is a branch of data visualization. IVA is a suitable technique for analyzing high-dimensional data that has a large number of data points, where simple graphing and non-interactive techniques give an insufficient understanding of the information. These techniques involve looking at datasets through different, correlated views and iteratively selecting and examining features the user finds interesting. The objective of IVA is to gain knowledge which is not readily apparent from a dataset, typically in tabular form. This can involve generating, testing or verifying hypotheses, or simply exploring the dataset to look for correlations between different variables.
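The sketch below is a non-graphical stand-in for the "brush in one view, inspect in another" workflow described above: a subset is selected in one variable and the same records are examined in a second, linked view. The synthetic dataset, column names, and injected sub-population correlation are assumptions for illustration; a real IVA tool would drive this through linked interactive plots rather than a script.

```python
# Non-graphical stand-in for brushing-and-linking across correlated views.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 10_000
df = pd.DataFrame({
    "temperature": rng.normal(20, 5, n),
    "pressure":    rng.normal(1.0, 0.1, n),
})
# Inject a correlation that only holds in a sub-population -- exactly the kind of
# structure a single global, non-interactive summary tends to miss.
hot = df["temperature"] > 25
df["vibration"] = rng.normal(0, 1, n) + np.where(hot, 2.0 * df["pressure"], 0.0)

# "Brush" a region in the temperature view...
selection = df[df["temperature"] > 25]

# ...and look at the same records in the pressure/vibration view.
print("correlation over all data:     ", round(df["pressure"].corr(df["vibration"]), 3))
print("correlation in brushed subset: ", round(selection["pressure"].corr(selection["vibration"]), 3))
```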
  • 403
  • 10 Oct 2022
Topic Review
Natural-language Generation
Natural-language generation (NLG) is a software process that produces natural language output. While it is widely agreed that the output of any NLG process is text, there is some disagreement on whether the inputs of an NLG system need to be non-linguistic. Common applications of NLG methods include the production of various reports, for example weather and patient reports; image captions; and chatbots. Automated NLG can be compared to the process humans use when they turn ideas into writing or speech. Psycholinguists prefer the term language production for this process, which can also be described in mathematical terms or modeled in a computer for psychological research. NLG systems can also be compared to translators of artificial computer languages, such as decompilers or transpilers, which also produce human-readable code generated from an intermediate representation. Human languages tend to be considerably more complex and allow for much more ambiguity and variety of expression than programming languages, which makes NLG more challenging. NLG may be viewed as complementary to natural-language understanding (NLU): whereas in natural-language understanding, the system needs to disambiguate the input sentence to produce the machine representation language, in NLG the system needs to make decisions about how to put a representation into words. The practical considerations in building NLU vs. NLG systems are not symmetrical. NLU needs to deal with ambiguous or erroneous user input, whereas the ideas the system wants to express through NLG are generally known precisely. NLG needs to choose a specific, self-consistent textual representation from many potential representations, whereas NLU generally tries to produce a single, normalized representation of the idea expressed. NLG has existed since ELIZA was developed in the mid-1960s, but the methods were first used commercially in the 1990s. NLG techniques range from simple template-based systems like a mail merge that generates form letters, to systems that have a complex understanding of human grammar. NLG can also be accomplished by training a statistical model using machine learning, typically on a large corpus of human-written texts.
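At the simple end of that range, a template-based generator in the "mail merge" spirit looks like the sketch below. The weather record, phrasing thresholds, and function name are invented for illustration and are not taken from any particular NLG system.

```python
# Minimal template-based NLG: lexical choice plus a fixed sentence template.
weather = {"city": "Bergen", "temp_c": 7, "rain_mm": 12}

def weather_report(record: dict) -> str:
    # Simple lexical choice: map a numeric value to a word or phrase.
    if record["rain_mm"] == 0:
        rain_phrase = "dry weather"
    elif record["rain_mm"] < 5:
        rain_phrase = "light rain"
    else:
        rain_phrase = "heavy rain"
    # Surface realization via a fixed sentence template.
    return (f"In {record['city']} today, expect {rain_phrase} "
            f"with temperatures around {record['temp_c']} °C.")

print(weather_report(weather))
```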
  • 403
  • 07 Nov 2022
Topic Review
State-of-the-Art Constraint-Based Modeling of Microbial Metabolism
Methanotrophy is the ability of an organism to capture and utilize the greenhouse gas methane as a source of energy-rich carbon. Over the years, significant progress has been made in understanding the mechanisms of methane utilization, mostly in bacterial systems, including the key metabolic pathways, regulation, and the impact of various factors (iron, copper, calcium, lanthanum, and tungsten) on cell growth and methane bioconversion. The implementation of -omics approaches has provided a vast amount of heterogeneous data that require the adaptation or development of computational tools for a system-wide interrogative analysis of methanotrophy. Genome-scale mathematical modeling of methanotroph metabolism has been envisioned as one of the most productive strategies for the integration of multi-scale data to better understand methane metabolism and enable its biotechnological implementation.
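The calculation at the heart of such constraint-based, genome-scale models is flux balance analysis (FBA), a linear program over the stoichiometric matrix. The sketch below runs FBA on an invented three-metabolite, four-reaction toy network using SciPy's linear programming routine; the network, bounds, and objective are placeholders, and real methanotroph models contain thousands of reactions.

```python
# Toy flux-balance analysis (FBA), the core calculation in constraint-based models.
import numpy as np
from scipy.optimize import linprog

# Stoichiometric matrix S (rows: metabolites A, B, C; columns: reactions)
#   R1: -> A (uptake)    R2: A -> B    R3: B -> C    R4: C -> (biomass)
S = np.array([
    [ 1, -1,  0,  0],   # A
    [ 0,  1, -1,  0],   # B
    [ 0,  0,  1, -1],   # C
])
bounds = [(0, 10), (0, 1000), (0, 1000), (0, 1000)]  # uptake flux capped at 10

# Maximize flux through R4 (biomass) subject to the steady-state constraint S v = 0.
c = np.array([0, 0, 0, -1])          # linprog minimizes, so negate the objective
res = linprog(c, A_eq=S, b_eq=np.zeros(3), bounds=bounds, method="highs")
print("optimal biomass flux:", res.x[3])   # expected: 10, limited by uptake
```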
  • 403
  • 03 Jan 2024
Topic Review
Mathematical Background of the 5D Model of the Aorta
Visualization is crucial for the display and understanding of medical image data. For diagnostic and surgical planning, radiologists and surgeons must be able to evaluate the data appropriately. Data from many imaging systems can incorporate both functional and structural information, resulting in 4D datasets. When the images also contain spectral information, the data can in some circumstances be extended to 5D. Overall, 5D imaging reveals more information than 4D imaging.
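In array terms, such a 5D dataset can be thought of as three spatial axes plus a time axis and a spectral axis. The sketch below shows one way to hold and slice data of that shape with NumPy; the array dimensions and index values are invented placeholders, not real acquisition parameters.

```python
# Sketch of a 5D dataset (x, y, z, time, spectral channel) and typical slices.
import numpy as np

# 64 x 64 x 32 volume, 20 time frames, 8 spectral channels (placeholder sizes)
volume_5d = np.zeros((64, 64, 32, 20, 8), dtype=np.float32)

# A conventional 4D view: one spectral channel over all time frames.
structural_4d = volume_5d[..., 0]            # shape (64, 64, 32, 20)

# A single 3D volume for rendering: one time frame, one channel.
frame = volume_5d[:, :, :, 5, 2]             # shape (64, 64, 32)

print(structural_4d.shape, frame.shape)
```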
  • 402
  • 17 Jan 2022
Topic Review
CVF (File Format)
DriveSpace (initially known as DoubleSpace) is a disk compression utility supplied with MS-DOS starting from version 6.0 in 1993 and ending in 2000 with the release of Windows Me. The purpose of DriveSpace was to increase the amount of data the user could store on disks by transparently compressing and decompressing data on-the-fly. It was primarily intended for use with hard drives, but floppy disks were also supported. The feature was removed in Windows XP and later.
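The sketch below illustrates only the general idea of transparent on-the-fly compression, compressing on write and decompressing on read. It is not the CVF on-disk format or DriveSpace's actual mechanism; the class and method names are hypothetical and zlib stands in for the real compression scheme.

```python
# Illustration of the write-compress / read-decompress round trip (not the CVF format).
import zlib

class TransparentStore:
    def __init__(self):
        self._blocks = {}          # name -> compressed bytes

    def write(self, name: str, data: bytes) -> None:
        self._blocks[name] = zlib.compress(data)

    def read(self, name: str) -> bytes:
        return zlib.decompress(self._blocks[name])

    def stored_size(self, name: str) -> int:
        return len(self._blocks[name])

store = TransparentStore()
payload = b"AUTOEXEC.BAT contents... " * 100
store.write("autoexec.bat", payload)
print(len(payload), "bytes logical,", store.stored_size("autoexec.bat"), "bytes on 'disk'")
assert store.read("autoexec.bat") == payload
```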
  • 402
  • 15 Nov 2022
Topic Review
Vertex Chunk-Based Object Culling
Popular content built on the Metaverse concept allows users to place objects freely in a world space without constraints. To render the many high-resolution objects placed by users in real time, algorithms such as view frustum culling, visibility culling, and occlusion culling are used. These algorithms selectively remove objects outside the camera's view and eliminate objects that are too small to be worth rendering.
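The sketch below shows simplified versions of two of those tests: bounding-sphere view-frustum culling and a projected-size ("too small to render") check. The plane representation, thresholds, and function names are illustrative assumptions rather than the method of any specific engine or of the vertex chunk-based approach itself.

```python
# Simplified frustum culling and small-object culling tests.
import math

def outside_frustum(center, radius, planes):
    """planes: list of (nx, ny, nz, d) with inward-pointing normals."""
    for nx, ny, nz, d in planes:
        if nx * center[0] + ny * center[1] + nz * center[2] + d < -radius:
            return True          # bounding sphere is completely behind one plane -> cull
    return False

def too_small(radius, distance, fov_y, screen_height_px, min_pixels=2.0):
    """Approximate the projected size in pixels and compare to a threshold."""
    if distance <= 0:
        return False
    projected = (radius / (distance * math.tan(fov_y / 2))) * screen_height_px
    return projected < min_pixels

# Example: a unit sphere 500 units away, 1080p screen, 60-degree vertical FOV.
planes = [(0, 0, 1, 0)]          # single near-style plane through the origin
center, radius = (0.0, 0.0, 500.0), 1.0
print("frustum-culled:", outside_frustum(center, radius, planes))
print("too small:", too_small(radius, 500.0, math.radians(60), 1080))
```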
  • 402
  • 26 Jun 2023
Topic Review
VGG-C Transform Model to Predict Alzheimer’s Disease
Alzheimer’s disease (AD) is a chronic neurobiological brain disorder that progressively kills brain cells and causes deficits in memory and thinking skills, eventually accelerating the loss of the ability to perform even the most basic tasks. Methods for early detection and automatic AD classification have emerged, producing large-scale multimodal neuroimaging results. Other data sources used in AD research include MRI, positron emission tomography (PET), and genotype sequencing. Analyzing the different modalities to reach a decision is time-consuming.
  • 401
  • 09 Sep 2022