Topic Review
Transformer Framework and YOLO Framework for Object Detection
Object detection is a fundamental task in remote sensing image processing; as one of its core components, small or tiny object detection plays an important role.
  • 374
  • 25 Aug 2023
Topic Review
Transformer Architecture and Attention Mechanisms in Genome Data
The emergence and rapid development of deep learning, specifically transformer-based architectures and attention mechanisms, have had transformative implications across several domains, including bioinformatics and genome data analysis. The analogy between genome sequences and language texts has enabled techniques that proved successful in natural language processing to be applied to genomic data.
  • 370
  • 26 Jul 2023
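To make the attention mechanism named in this entry concrete, here is a minimal sketch of scaled dot-product attention, the core transformer operation. The NumPy arrays stand in for learned embeddings of genome-sequence tokens (e.g., k-mers) and are illustrative assumptions, not any specific published genome model.

```python
# Hedged sketch: scaled dot-product attention, the core operation of
# transformer architectures. Inputs are random stand-ins for learned
# embeddings of genome-sequence tokens (e.g., k-mers).
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # softmax(Q K^T / sqrt(d_k)) V, with softmax taken over key positions
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # pairwise token similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # normalized attention weights
    return weights @ V                               # weighted mix of value vectors

rng = np.random.default_rng(0)
tokens, d_model = 8, 16                              # e.g., 8 tokens from a DNA read
X = rng.normal(size=(tokens, d_model))               # hypothetical token embeddings
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))

out = scaled_dot_product_attention(X @ Wq, X @ Wk, X @ Wv)
print(out.shape)  # (8, 16): one context-aware vector per token
```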
Topic Review
Transformation of Business Process Manager Profession
Emerging technologies such as big data, the Internet of Things, artificial intelligence (AI), cognitive technologies, cloud computing, and mobile technologies play an increasingly essential role in the sustainable development of the business process manager profession. Nevertheless, these technologies could also pose new challenges in labor markets.
  • 493
  • 11 Jan 2022
Topic Review
Transfer Learning Strategies
Discriminatively trained models perform well if labeled data are available in abundance, but they do not perform adequately on tasks with scarce datasets, as this limits their learning abilities. To address this issue, large language models (LLMs) were first pretrained on large unlabeled datasets using a self-supervised approach, after which the learned representations were transferred to specific tasks through discriminative fine-tuning. As a result, transfer learning helps to leverage the capabilities of pretrained models and is advantageous, especially in data-scarce settings. For example, the generative pretrained transformer (GPT) used a generative language-modeling objective for pretraining, followed by discriminative fine-tuning (sketched below). Compared to pretraining, the transfer learning process is inexpensive and converges faster than training the model from scratch. Additionally, pretraining uses an unlabeled dataset and follows a self-supervised approach, whereas transfer learning follows a supervised technique using a labeled dataset particular to the downstream task. The pretraining dataset comes from a generic domain, whereas, during transfer learning, data come from specific distributions (supervised datasets specific to the desired task).
  • 222
  • 08 Mar 2024
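A minimal sketch of the pretrain-then-fine-tune workflow described in this entry, assuming the Hugging Face transformers library; the two labeled examples are a hypothetical stand-in for a task-specific dataset, and this illustrates the general recipe rather than the original GPT training code.

```python
# Hedged sketch: discriminative fine-tuning of a generatively pretrained GPT-2
# with the Hugging Face `transformers` library. The labeled examples below are
# hypothetical placeholders for a real downstream dataset.
import torch
from transformers import GPT2Tokenizer, GPT2ForSequenceClassification

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default

# Weights come from self-supervised pretraining; the classification head
# added on top is randomly initialized and learned during fine-tuning.
model = GPT2ForSequenceClassification.from_pretrained("gpt2", num_labels=2)
model.config.pad_token_id = tokenizer.pad_token_id

texts = ["the movie was great", "the movie was terrible"]  # hypothetical task data
labels = torch.tensor([1, 0])

batch = tokenizer(texts, padding=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

model.train()
for _ in range(3):  # a few epochs often suffice, since pretraining did most of the work
    optimizer.zero_grad()
    out = model(**batch, labels=labels)  # supervised cross-entropy loss
    out.loss.backward()
    optimizer.step()
```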
Topic Review
Transfer Learning
Transfer learning refers to machine learning techniques that focus on acquiring knowledge from related tasks/domains to improve generalization in the tasks/domains of interest.
  • 640
  • 09 Jan 2022
Topic Review
Training, Validation, and Test Sets
In machine learning, a common task is the study and construction of algorithms that can learn from and make predictions on data. Such algorithms function by making data-driven predictions or decisions through building a mathematical model from input data. The input data used to build the model are usually divided into multiple data sets; in particular, three data sets are commonly used in different stages of the creation of the model: training, validation, and test sets.

The model is initially fit on a training data set, which is a set of examples used to fit the parameters (e.g., weights of connections between neurons in artificial neural networks) of the model. The model (e.g., a naive Bayes classifier) is trained on the training data set using a supervised learning method, for example using optimization methods such as gradient descent or stochastic gradient descent. In practice, the training data set often consists of pairs of an input vector (or scalar) and the corresponding output vector (or scalar), where the answer key is commonly denoted as the target (or label). The current model is run with the training data set and produces a result, which is then compared with the target for each input vector in the training data set. Based on the result of the comparison and the specific learning algorithm being used, the parameters of the model are adjusted. The model fitting can include both variable selection and parameter estimation.

Successively, the fitted model is used to predict the responses for the observations in a second data set called the validation data set. The validation data set provides an unbiased evaluation of a model fit on the training data set while tuning the model's hyperparameters (e.g., the number of hidden layers and layer widths in a neural network). Validation data sets can be used for regularization by early stopping: training is stopped when the error on the validation data set increases, as this is a sign of over-fitting to the training data set. This simple procedure is complicated in practice by the fact that the validation data set's error may fluctuate during training, producing multiple local minima. This complication has led to the creation of many ad hoc rules for deciding when over-fitting has truly begun.

Finally, the test data set is a data set used to provide an unbiased evaluation of a final model fit on the training data set. If the data in the test data set have never been used in training (for example, in cross-validation), the test data set is also called a holdout data set. The term "validation set" is sometimes used instead of "test set" in some literature (e.g., if the original data set was partitioned into only two subsets, the test set might be referred to as the validation set). Deciding the sizes and strategies for dividing data into training, validation, and test sets is very dependent on the problem and the data available (a minimal split-and-early-stopping sketch follows below).
  • 690
  • 17 Oct 2022
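A minimal sketch of the three-way split and validation-based early stopping described in this entry, assuming scikit-learn; the random data and the choice of SGDClassifier are illustrative placeholders for any supervised learner.

```python
# Hedged sketch: a train/validation/test split with early stopping on the
# validation error. Data and model below are hypothetical placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import log_loss

X, y = np.random.rand(1000, 20), np.random.randint(0, 2, 1000)

# 60% training, 20% validation, 20% held-out test.
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

model = SGDClassifier(loss="log_loss", random_state=0)
classes = np.unique(y_train)
best_val, patience, bad_epochs = np.inf, 5, 0

for epoch in range(100):
    model.partial_fit(X_train, y_train, classes=classes)  # one pass over the training set
    val_loss = log_loss(y_val, model.predict_proba(X_val))
    if val_loss < best_val:
        best_val, bad_epochs = val_loss, 0
    else:
        bad_epochs += 1
    if bad_epochs >= patience:  # validation error stopped improving: likely over-fitting
        break

# The test set is touched exactly once, for the final unbiased estimate.
print("held-out accuracy:", model.score(X_test, y_test))
```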
Topic Review
Training, Test, and Validation Sets
In machine learning, the study and construction of algorithms that can learn from and make predictions on data is a common task. Such algorithms work by making data-driven predictions or decisions through building a mathematical model from input data. The data used to build the final model usually come from multiple datasets; in particular, three datasets are commonly used in different stages of the creation of the model.

The model is initially fit on a training dataset, that is, a set of examples used to fit the parameters (e.g., weights of connections between neurons in artificial neural networks) of the model. The model (e.g., a neural net or a naive Bayes classifier) is trained on the training dataset using a supervised learning method (e.g., gradient descent or stochastic gradient descent). In practice, the training dataset often consists of pairs of an input vector and the corresponding "answer" vector (or scalar), which is commonly denoted as the target (or label). The current model is run with the training dataset and produces a result, which is then compared with the target for each input vector in the training dataset. Based on the result of the comparison and the specific learning algorithm being used, the parameters of the model are adjusted. The model fitting can include both variable selection and parameter estimation.

Successively, the fitted model is used to predict the responses for the observations in a second dataset called the validation dataset. The validation dataset provides an unbiased evaluation of a model fit on the training dataset while tuning the model's hyperparameters (e.g., the number of hidden units in a neural network). Validation datasets can be used for regularization by early stopping: stop training when the error on the validation dataset increases, as this is a sign of overfitting to the training dataset. This simple procedure is complicated in practice by the fact that the validation dataset's error may fluctuate during training, producing multiple local minima. This complication has led to the creation of many ad hoc rules for deciding when overfitting has truly begun.

Finally, the test dataset is a dataset used to provide an unbiased evaluation of a final model fit on the training dataset. When the data in the test dataset have never been used in training (for example, in cross-validation), the test dataset is also called a holdout dataset.
  • 629
  • 03 Nov 2022
Topic Review
Train Multiplayer First-Person Shooter Game Agents
Artificial intelligence bots are extensively used in multiplayer first-person shooter (FPS) games. Machine learning techniques can improve the bots' performance and bring them to human skill levels.
  • 151
  • 18 Mar 2024
Topic Review
Traffic Pattern in Smart Cities
Smart cities have large-scale infrastructures developed to monitor a wide variety of urban occurrences, with the aim of improving the quality of urban life. In most instances, these systems have a very restricted and specific focus (e.g., monitoring traffic). They are expensive, require management by specialists, and are not universally well-liked among residents, since they often focus on topics that are not of public importance.
  • 190
  • 23 Oct 2023
Topic Review
Traffic Load Distribution Fairness in Mobile Social Networks
Mobile social networks suffer from an unbalanced traffic load distribution due to the heterogeneous mobility of the nodes (humans) in the network. A few nodes in these networks are highly mobile, and proposed social-based routing algorithms are likely to choose these most "social" nodes as the best message relays (an illustrative sketch follows below).
  • 309
  • 04 Jul 2022
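A minimal sketch of how a greedy "most social" relay rule concentrates traffic on a handful of nodes; the contact model and sociality scores are illustrative assumptions, not a specific published routing protocol.

```python
# Hedged sketch: greedy social-based relay selection concentrating load on
# the few most "social" nodes. Contact graph and scores are illustrative.
import random
from collections import Counter

random.seed(0)
nodes = list(range(50))
# Heterogeneous sociality: a few highly mobile nodes, many less mobile ones.
sociality = {n: (30 if n < 5 else 3) for n in nodes}

def first_hop_relay(src, dst):
    # Greedy rule: among the nodes the source happens to contact, forward
    # the message to the destination if present, else to the most "social" node.
    contacts = random.sample([n for n in nodes if n != src], 10)
    if dst in contacts:
        return dst
    return max(contacts, key=sociality.get)

relay_load = Counter()
for _ in range(1000):  # route 1000 messages between random node pairs
    src, dst = random.sample(nodes, 2)
    relay_load[first_hop_relay(src, dst)] += 1

top5 = sum(count for _, count in relay_load.most_common(5))
print(f"share of traffic carried by the 5 most social nodes: {top5 / 1000:.0%}")
```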