Immersive learning conceptualizes education as a set of active, phenomenological experiences grounded in presence. It can be implemented through both physical and digital means, such as virtual reality and augmented reality.
The suggested methodology for the characterization of temperature extrema presents a multistep preprocessing procedure that derives time series of correctly identified, thermally defined daily air temperature extrema pairs. The underlying conceptual framework was developed in response to gaps in the current practice of daily extrema identification and in the construction of extrema-based synthetic air temperature time series. A series of algorithms was developed to establish four-parameter criteria for a more accurate representation of daily variability, allowing the temperature distribution to be replicated from a correct characterization of daily temperature patterns. The first preprocessing step subjects the high-frequency temperature time series to a theoretical diurnal observing window that imposes latitudinally and seasonally crafted limits for the individual identification of daily minima and maxima. The next step supplements the air temperature extrema with their times of occurrence, information deemed vital for the reconstruction of the temperature time series. The subsequent step applies a novel temperature pattern recognition algorithm that identifies physically homogeneous air temperature populations based on the information obtained in the previous steps. The last step applies a metric that assesses the susceptibility of the extrema temperature and timing parameters to climate change. Applied to high-frequency temperature data, the procedure yields two strands of physically homogeneous extrema time series that preserve the characteristics of the overall temperature variability.
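The first two preprocessing steps (extrema identification within a diurnal observing window, and the retention of extrema timing) can be sketched in code. This is a minimal illustration under stated assumptions, not the authors' implementation: the observing-window limits, which the methodology crafts from latitude and season, are replaced here by fixed hypothetical hour ranges, and the function name `daily_extrema` is likewise illustrative.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical observing-window limits (hours of day). In the actual
# methodology these would be derived from latitude and season; here they
# are fixed placeholders for illustration only.
MIN_WINDOW = (0, 12)   # assumed: the daily minimum is searched for before noon
MAX_WINDOW = (10, 20)  # assumed: the daily maximum is searched for in the afternoon

def daily_extrema(samples):
    """Identify each day's minimum and maximum inside its observing window,
    keeping the hour of occurrence alongside the value (steps 1 and 2).

    `samples` is an iterable of (datetime, temperature) pairs from a
    high-frequency record. Days with no reading inside a window are
    rejected rather than assigned a spurious extremum.
    """
    by_day = defaultdict(list)
    for ts, temp in samples:
        by_day[ts.date()].append((ts, temp))

    series = []
    for day in sorted(by_day):
        obs = by_day[day]
        mins = [(t, v) for t, v in obs if MIN_WINDOW[0] <= t.hour < MIN_WINDOW[1]]
        maxs = [(t, v) for t, v in obs if MAX_WINDOW[0] <= t.hour < MAX_WINDOW[1]]
        if not mins or not maxs:
            continue  # no valid extremum inside its window: day is excluded
        t_min, v_min = min(mins, key=lambda p: p[1])
        t_max, v_max = max(maxs, key=lambda p: p[1])
        series.append({"date": day,
                       "tmin": v_min, "tmin_hour": t_min.hour,
                       "tmax": v_max, "tmax_hour": t_max.hour})
    return series
```

The resulting records carry both the extrema values and their timing, which is the input the later pattern recognition step would classify into physically homogeneous populations.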
In their present form, individual elements of this methodology are applicable to correcting historical sampling and air temperature averaging biases, improving the reproducibility of daily air temperature variation, and enhancing the performance of temperature index formulae based on daily temperature extrema. The objective of this analysis is the eventual adoption of the presented methodology in the practice of systematic temperature extrema identification and in the preprocessing of temperature time series for the configuration of physically homogeneous air temperature subpopulations.
Tokenization is a procedure for recovering the elements of interest in a sequence of data. The term commonly describes an initial step in the processing of programming languages, and also the preparation of input data for artificial neural networks; however, it is a generalizable concept that applies to reducing any complex form to its basic elements, whether in computer science or in natural processes. In this entry, the general concept of a token and its attributes are defined, along with its role in different contexts, such as deep learning methods. Suggestions are included for further theoretical and empirical analysis of tokenization, particularly regarding its use in deep learning, where it is a rate-limiting step and a possible bottleneck when results fall short of expectations.
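The general notion of tokenization as "recovering the elements of interest" can be made concrete with a small sketch: split raw text into word and punctuation tokens, then map each token to an integer identifier, as a neural network input pipeline would. The regular expression and the vocabulary scheme below are illustrative choices, not the API of any particular library.

```python
import re

# One simple definition of the "elements of interest": runs of word
# characters, or single non-whitespace punctuation marks.
TOKEN_RE = re.compile(r"\w+|[^\w\s]")

def tokenize(text):
    """Reduce a text to its basic elements (lowercased tokens)."""
    return TOKEN_RE.findall(text.lower())

def build_vocab(tokens):
    """Assign each distinct token an integer id in order of first appearance."""
    vocab = {}
    for tok in tokens:
        vocab.setdefault(tok, len(vocab))
    return vocab

def encode(text, vocab):
    """Map a text to the id sequence a neural network would consume."""
    return [vocab[t] for t in tokenize(text)]
```

The choice of `TOKEN_RE` is exactly the kind of design decision the entry flags as rate-limiting: a different token definition (characters, subwords, sentences) changes every downstream result while leaving the rest of the pipeline untouched.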