Topic Review
Things
Things is a task management app for macOS, iPadOS, iOS, and watchOS made by Cultured Code, a software startup based in Stuttgart, Germany. It was first released for Mac as an alpha sent to 12,000 people in late 2007 and quickly gained popularity. The following July, when the App Store launched, it was among the first 552 apps available for iPhone. It was then released alongside the iPad in 2010, and became one of the first apps available for Apple Watch in 2015. In December 2013, Cultured Code announced that they had sold one million copies of the software to date, and in December 2014 the company announced that downloads had increased by a further three million.
  • 821
  • 24 Oct 2022
Topic Review
Mathematics Problems Solving
Mathematics problem solving (MPS) has for decades been considered the centre of mathematics teaching, as it demonstrates the ability to analyse, understand, reason, and apply. At the same time, it is also treated as specific content in its own right when highlighted as a basic competence that students should acquire.
  • 820
  • 12 Nov 2021
Topic Review
Pipe and Filter Architecture
In software engineering, a pipeline consists of a chain of processing elements (processes, threads, coroutines, functions, etc.), arranged so that the output of each element is the input of the next; the name is by analogy to a physical pipeline. Usually some amount of buffering is provided between consecutive elements. The information that flows in these pipelines is often a stream of records, bytes, or bits, and the elements of a pipeline may be called filters; this is also called the pipes and filters design pattern. Connecting elements into a pipeline is analogous to function composition. Narrowly speaking, a pipeline is linear and one-directional, though sometimes the term is applied to more general flows. For example, a primarily one-directional pipeline may have some communication in the other direction, known as a return channel or backchannel, as in the lexer hack, or a pipeline may be fully bi-directional. Flows with one-directional tree and directed acyclic graph topologies behave similarly to (linear) pipelines – the lack of cycles makes them simple – and thus may be loosely referred to as "pipelines".
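As a hedged illustration of the pattern (not taken from the entry itself), the following Python sketch chains a source, two filters, and a sink using generators, so each element streams records to the next; all names and the sample input are invented for the example.

# Minimal pipes-and-filters sketch: each filter consumes an iterable of
# records and yields transformed records, so stages compose like functions.

def read_lines(text):
    # Source: emit one record (line) at a time.
    for line in text.splitlines():
        yield line

def strip_comments(lines):
    # Filter: drop comment lines and blank lines.
    for line in lines:
        line = line.strip()
        if line and not line.startswith("#"):
            yield line

def to_upper(lines):
    # Filter: transform each surviving record.
    for line in lines:
        yield line.upper()

raw = "# config\nhost = example.org\n\nport = 8080\n"
# Connecting the elements is analogous to function composition:
print(list(to_upper(strip_comments(read_lines(raw)))))
# -> ['HOST = EXAMPLE.ORG', 'PORT = 8080']

Because each stage is a lazy generator, records flow through one at a time rather than in whole batches, mirroring the streaming behaviour the pattern describes.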
  • 820
  • 23 Nov 2022
Topic Review
Trillian
Trillian is a proprietary multiprotocol instant messaging application created by Cerulean Studios. It is currently available for Microsoft Windows, Mac OS X, Linux, Android, iOS, BlackBerry OS, and the Web. It can connect to multiple IM services, such as AIM, Bonjour, Facebook Messenger, Google Talk (Hangouts), IRC, XMPP (Jabber), VZ, and Yahoo! Messenger networks; as well as social networking sites, such as Facebook, Foursquare, LinkedIn, and Twitter; and email services, such as POP3 and IMAP. Trillian no longer supports Windows Live Messenger or Skype, as these services have been combined and Microsoft chose to discontinue SkypeKit. It also no longer supports connecting to MySpace, and no longer offers a distinct connection for Gmail, Hotmail, or Yahoo! Mail, although these can still be connected to via POP3 or IMAP. Currently, Trillian supports Facebook, Google, Jabber (XMPP), and Olark. Trillian was initially released on July 1, 2000, as a freeware IRC client; the first commercial version (Trillian Pro 1.0) was published on September 10, 2002. The program was named after Trillian, a fictional character in The Hitchhiker's Guide to the Galaxy by Douglas Adams. A previous version of the official web site even had a tribute to Douglas Adams on its front page. On August 14, 2009, Trillian "Astra" (4.0) for Windows was released, along with its own Astra network. Trillian 5 for Windows was released in May 2011, and Trillian 6.0 was initially released in February 2017.
  • 819
  • 21 Oct 2022
Topic Review
Business Recommender Systems
Besides the typical applications of recommender systems in B2C scenarios such as movie or shopping platforms, there is rising interest in transforming human-driven advice, as provided, e.g., in consultancy, through the use of recommender systems. There are two main classes of recommender systems: information-filtering-based and knowledge-based systems. The former category selects items from a large collection based on user preferences and is further divided into collaborative-filtering and content-based filtering recommenders. Knowledge-based recommenders make recommendations by applying constraints or similarities based on domain or contextual knowledge. Common applications are in B2C scenarios such as e-commerce, tourism, news, movies, and music.
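As an illustrative sketch only (not taken from the entry), the following Python snippet shows the collaborative-filtering idea on a tiny hypothetical user-item rating matrix: a user's unrated items are scored from the ratings of similar users.

# Toy user-based collaborative filtering; data and shapes are hypothetical.
import numpy as np

ratings = np.array([
    [5, 4, 0, 1],   # user 0 (0 = not yet rated)
    [4, 5, 0, 2],   # user 1
    [1, 0, 5, 4],   # user 2
], dtype=float)

def cosine_sim(a, b):
    # Cosine similarity between two rating vectors.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def recommend(user, k=1):
    # Score items by similarity-weighted ratings of the other users,
    # then ignore items the user has already rated.
    others_idx = [u for u in range(len(ratings)) if u != user]
    sims = np.array([cosine_sim(ratings[user], ratings[u]) for u in others_idx])
    others = ratings[others_idx]
    scores = sims @ others / (sims.sum() + 1e-9)
    scores[ratings[user] > 0] = -np.inf
    return np.argsort(scores)[::-1][:k]

print(recommend(0))  # index of the item suggested for user 0

A content-based filter would instead score items by comparing item attributes with a profile of the user's past preferences, and a knowledge-based recommender would apply explicit domain constraints rather than learned similarities.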
  • 818
  • 10 Jun 2022
Topic Review
Authority-Based Conversation Tracking in Twitter
Twitter is undoubtedly one of the most widely used data sources for analyzing human communication. The literature is full of examples where Twitter is accessed and data are downloaded as the first step toward a more in-depth analysis in a wide variety of knowledge areas. Unfortunately, extracting relevant information from the opinions that users freely express on Twitter is complicated, both because of the volume generated (more than 6000 tweets per second) and because of the difficulty of retaining only what is pertinent to our research. Inspired by the fact that a large share of users turn to Twitter to communicate or receive political information, we created a method that allows a set of users (which we will call authorities) to be monitored and the information they publish about an event to be tracked. Our approach consists of dynamically and automatically monitoring the hottest topics among all the conversations in which the authorities are involved, and retrieving the tweets connected with those topics while filtering other conversations out. Although our case study applies the method to the political discussions held during the Spanish general, local, and European elections of April/May 2019, the method is equally applicable to many other contexts, such as sporting events, marketing campaigns, or health crises.
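A minimal sketch of the idea (not the authors' implementation) is shown below in Python. It assumes tweets have already been collected as simple dictionaries, and all handles and hashtags are hypothetical: the hottest topics among the authorities' tweets are detected first, and the wider stream is then filtered to those topics.

# Hypothetical authority handles and pre-collected tweets; no Twitter API calls.
from collections import Counter
import re

AUTHORITIES = {"party_account_a", "candidate_b"}

def hashtags(text):
    # Extract lower-cased hashtags from a tweet's text.
    return {tag.lower() for tag in re.findall(r"#(\w+)", text)}

def hottest_topics(tweets, top_n=3):
    # Count hashtags used by the authorities and keep the most frequent ones.
    counts = Counter()
    for t in tweets:
        if t["user"] in AUTHORITIES:
            counts.update(hashtags(t["text"]))
    return {tag for tag, _ in counts.most_common(top_n)}

def track(stream, topics):
    # Keep only tweets that mention at least one of the tracked topics.
    return [t for t in stream if hashtags(t["text"]) & topics]

stream = [
    {"user": "party_account_a", "text": "Join the debate tonight #elections2019"},
    {"user": "random_user",     "text": "What a match! #football"},
    {"user": "another_user",    "text": "Watching the debate #elections2019"},
]
print(track(stream, hottest_topics(stream)))  # keeps only the #elections2019 tweets

In a live setting, the topic set would be recomputed periodically so that the tracked conversation follows whatever the authorities are currently discussing.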
  • 818
  • 28 Oct 2020
Topic Review
Outline of Ubuntu
The following outline is provided as an overview of and topical guide to Ubuntu, a Debian-based Linux operating system for personal computers, tablets, and smartphones (where the Ubuntu Touch edition is used). It also runs on network servers, usually with the Ubuntu Server edition, either on physical or virtual servers (such as mainframes) or in containers, with enterprise-class features, and it supports the most popular architectures, including server-class ARM-based systems. Ubuntu is published by Canonical Ltd., which offers commercial support.
  • 817
  • 10 Nov 2022
Topic Review
Training, Validation, and Test Sets
In machine learning, a common task is the study and construction of algorithms that can learn from and make predictions on data. Such algorithms function by making data-driven predictions or decisions through building a mathematical model from input data. The input data used to build the model are usually divided into multiple data sets. In particular, three data sets are commonly used in different stages of the creation of the model: the training, validation, and test sets.

The model is initially fit on a training data set, which is a set of examples used to fit the parameters (e.g. the weights of connections between neurons in artificial neural networks) of the model. The model (e.g. a naive Bayes classifier) is trained on the training data set using a supervised learning method, for example an optimization method such as gradient descent or stochastic gradient descent. In practice, the training data set often consists of pairs of an input vector (or scalar) and the corresponding output vector (or scalar), where the answer key is commonly denoted as the target (or label). The current model is run with the training data set and produces a result, which is then compared with the target for each input vector in the training data set. Based on the result of the comparison and the specific learning algorithm being used, the parameters of the model are adjusted. The model fitting can include both variable selection and parameter estimation.

Subsequently, the fitted model is used to predict the responses for the observations in a second data set, called the validation data set. The validation data set provides an unbiased evaluation of a model fit on the training data set while tuning the model's hyperparameters (e.g. the number of hidden units, i.e. layers and layer widths, in a neural network). Validation data sets can also be used for regularization by early stopping: training is stopped when the error on the validation data set increases, as this is a sign of over-fitting to the training data set. This simple procedure is complicated in practice by the fact that the validation error may fluctuate during training, producing multiple local minima; this complication has led to many ad hoc rules for deciding when over-fitting has truly begun.

Finally, the test data set is used to provide an unbiased evaluation of a final model fit on the training data set. If the data in the test data set have never been used in training (for example in cross-validation), the test data set is also called a holdout data set. The term "validation set" is sometimes used instead of "test set" in the literature (e.g., if the original data set was partitioned into only two subsets, the test set might be referred to as the validation set). Deciding the sizes and strategies for dividing the data into training, validation, and test sets depends heavily on the problem and the data available.
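As a hedged, self-contained illustration (the data set and hyperparameter grid are invented for the example), the following Python/scikit-learn sketch performs the three-way split described above: hyperparameters are tuned on the validation set, and the test set is touched only once for the final, unbiased estimate.

# Three-way split and validation-based model selection on synthetic data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
y = (X[:, 0] + 0.1 * rng.normal(size=300) > 0).astype(int)

# Carve out the held-out test set first, then split the rest into train/validation.
X_trainval, X_test, y_trainval, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(
    X_trainval, y_trainval, test_size=0.25, random_state=0)

# Tune a hyperparameter (number of neighbours) using the validation set only.
best_k, best_acc = None, -1.0
for k in (1, 3, 5, 9):
    model = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
    acc = model.score(X_val, y_val)
    if acc > best_acc:
        best_k, best_acc = k, acc

# The test set is used once, for the final unbiased performance estimate.
final_model = KNeighborsClassifier(n_neighbors=best_k).fit(X_trainval, y_trainval)
print(f"best k={best_k}, validation acc={best_acc:.2f}, "
      f"test acc={final_model.score(X_test, y_test):.2f}")

Early stopping follows the same logic: the metric monitored on the validation set decides when to halt training, while the test set remains untouched until the very end.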
  • 816
  • 17 Oct 2022
Topic Review
Facial Information for Healthcare Applications
The document is not limited to global face analysis; it also concentrates on methods related to local cues (e.g. the eyes). A research taxonomy is introduced by dividing the face into its main features: eyes, mouth, muscles, skin, and shape. For each facial feature, the computer-vision tasks aimed at analyzing it, and the related healthcare goals that could be pursued, are detailed.
  • 816
  • 28 Oct 2020
Topic Review
Quantum Natural Language Processing
Quantum Natural Language Processing (QNLP) is a hybrid field that combines aspects derived from Quantum Computing (QC) with tasks of Natural Language Processing (NLP).
  • 815
  • 14 Jun 2022