OpenCog is a project that aims to build an open-source artificial intelligence framework. OpenCog Prime is an architecture for robotic and virtual embodied cognition that defines a set of interacting components designed to give rise to human-equivalent artificial general intelligence (AGI) as an emergent phenomenon of the whole system. OpenCog Prime's design is primarily the work of Ben Goertzel, while the OpenCog framework itself is intended as a generic platform for broad-based AGI research. Research using OpenCog has been published in journals and presented at conferences and workshops, including the annual Conference on Artificial General Intelligence. OpenCog is released under the terms of the GNU Affero General Public License. OpenCog is in use by more than 50 companies, including Huawei and Cisco.
1. Origin
OpenCog was originally based on the 2008 release of the source code of the proprietary "Novamente Cognition Engine" (NCE) of Novamente LLC. The original NCE code is discussed in the PLN book (ref below). Ongoing development of OpenCog is supported by the Artificial General Intelligence Research Institute (AGIRI), the Google Summer of Code project, Hanson Robotics, SingularityNET and others.
2. Components
OpenCog consists of:
- A graph database, dubbed the AtomSpace, that holds "atoms" (that is, terms, atomic formulas, sentences and relationships) together with their "values" (valuations or interpretations, which can be thought of as per-atom key-value databases). An example of a value would be a truth value. Atoms are globally unique, immutable and indexed (searchable); values are fleeting and changeable. A minimal sketch of this atom/value split appears after this list.
- A collection of pre-defined atoms, termed Atomese, used for generic knowledge representation, such as conceptual graphs and semantic networks, as well as to represent and store the rules (in the sense of term rewriting) needed to manipulate such graphs.
- A collection of pre-defined atoms that encode a type subsystem, including type constructors and function types. These are used to specify the types of variables, terms and expressions, and to specify the structure of generic graphs containing variables.
- A collection of pre-defined atoms that encode both functional and imperative programming styles. These include lambda abstraction, for binding free variables into bound variables, as well as atoms for performing beta reduction.
- A collection of pre-defined atoms that encode a satisfiability modulo theories solver, built in as part of a generic graph query engine for performing graph and hypergraph pattern matching (isomorphic subgraph discovery). This generalizes the idea of a structured query language (SQL) to the domain of generic graph queries; it is an extended form of a graph query language. A toy pattern-matching example appears after this list.
- A generic rule engine, including a forward chainer and a backward chainer, that chains rules together. The rules are exactly the graph queries of the query subsystem, so the rule engine loosely resembles a query planner. It is designed to allow different kinds of inference engines and reasoning systems to be implemented, such as Bayesian inference or fuzzy logic, as well as practical components such as constraint solvers or motion planners. A toy forward-chaining loop appears after this list.
- An attention-allocation subsystem based on economic theory, termed ECAN.[1] This subsystem is used to control the combinatorial explosion of search possibilities encountered during inference and chaining.
- An implementation of a probabilistic reasoning engine based on probabilistic logic networks (PLN). The current implementation uses the rule engine to chain together specific rules of logical inference (such as modus ponens), together with specific mathematical formulas that assign a probability and a confidence to each deduction. This subsystem can be thought of as a kind of proof assistant that works with a modified form of Bayesian inference. A simplified form of the deduction strength formula is sketched after this list.
- A probabilistic genetic program evolver called Meta-Optimizing Semantic Evolutionary Search, or MOSES.[2] It is used to discover collections of short Atomese programs that accomplish tasks; these can be thought of as performing a kind of decision-tree learning, resulting in a kind of decision forest, or rather a generalization thereof. A toy program-evolution loop in this spirit appears after this list.
- A natural language input system based on Link Grammar, partly inspired by both Meaning-Text Theory and Dick Hudson's Word Grammar, which encodes semantic and syntactic relations in Atomese.
- A natural language generation system.[3]
- An implementation of Psi-Theory for handling emotional states, drives and urges, dubbed OpenPsi.[4]
- Interfaces to Hanson Robotics robots, including emotion modelling[5] via OpenPsi. This includes the Loving AI project, used to demonstrate meditation techniques.
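The atom/value split of the AtomSpace can be illustrated with a minimal sketch in plain Python. The class and method names below (AtomSpaceSketch, add_node, add_link, set_value) are invented for illustration and are not the OpenCog API; only the atom type names (ConceptNode, InheritanceLink) come from Atomese.

```python
# Minimal sketch of the atom/value split described above.  Illustrative
# Python only; it is not the OpenCog AtomSpace API, and all class and
# method names here are invented.

class AtomSpaceSketch:
    """Globally unique, immutable atoms plus mutable per-atom values."""

    def __init__(self):
        self._atoms = {}    # (type, name or outgoing tuple) -> atom; enforces uniqueness
        self._values = {}   # atom -> {key: value}; the mutable "valuation" store

    def add_node(self, atom_type, name):
        key = (atom_type, name)
        return self._atoms.setdefault(key, key)        # adding twice yields the same atom

    def add_link(self, atom_type, outgoing):
        key = (atom_type, tuple(outgoing))
        return self._atoms.setdefault(key, key)

    def set_value(self, atom, key, value):
        self._values.setdefault(atom, {})[key] = value  # values may change at any time

    def get_value(self, atom, key):
        return self._values.get(atom, {}).get(key)


space = AtomSpaceSketch()
cat = space.add_node("ConceptNode", "cat")
animal = space.add_node("ConceptNode", "animal")
edge = space.add_link("InheritanceLink", [cat, animal])

# A truth value is just one kind of value attached to an atom.
space.set_value(edge, "truth", (0.9, 0.8))              # (strength, confidence)
print(space.get_value(edge, "truth"))                   # -> (0.9, 0.8)
```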
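The graph query engine answers questions of the form "find every $X such that a given sub-graph containing $X is present". The brute-force matcher below is a toy stand-in that works over a flat set of labelled edges rather than Atomese hypergraphs; it is not OpenCog's pattern matcher.

```python
# Toy stand-in for the kind of question the pattern matcher answers:
# "find every $X such that Inheritance($X, animal) is present in the graph".

graph = {
    ("Inheritance", "cat", "animal"),
    ("Inheritance", "dog", "animal"),
    ("Inheritance", "car", "machine"),
}

def match(pattern, graph):
    """Return a variable binding for every edge that unifies with the pattern.

    Pattern elements starting with '$' are variables; anything else must
    match literally, and repeated variables must bind consistently.
    """
    results = []
    for edge in graph:
        if len(edge) != len(pattern):
            continue
        bindings = {}
        for p, g in zip(pattern, edge):
            if p.startswith("$"):
                if bindings.setdefault(p, g) != g:
                    break
            elif p != g:
                break
        else:
            results.append(bindings)
    return results

print(match(("Inheritance", "$X", "animal"), graph))
# [{'$X': 'cat'}, {'$X': 'dog'}]  (in some order, since the graph is a set)
```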
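A forward chainer repeatedly applies its rules to the current knowledge base, adds whatever new conclusions they produce, and stops when nothing new appears. The sketch below shows only that loop, using Python tuples in place of Atomese graph queries; the function names are illustrative, not part of OpenCog's rule engine.

```python
# Toy forward-chaining loop: apply every rule to the current set of facts,
# add the new conclusions, and repeat until a fixed point is reached.

facts = {
    ("Inheritance", "cat", "mammal"),
    ("Inheritance", "mammal", "animal"),
}

def deduction_rule(facts):
    """Inheritance(X, Y) and Inheritance(Y, Z)  =>  Inheritance(X, Z)."""
    inh = [f for f in facts if f[0] == "Inheritance"]
    new = {("Inheritance", x, z)
           for (_, x, y1) in inh
           for (_, y2, z) in inh
           if y1 == y2 and x != z}
    return new - facts

def forward_chain(facts, rules):
    while True:
        new = set().union(*(rule(facts) for rule in rules))
        if not new:
            return facts
        facts = facts | new

for fact in sorted(forward_chain(facts, [deduction_rule])):
    print(fact)
# The derived fact ("Inheritance", "cat", "animal") appears in the output.
```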
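For the PLN deduction rule mentioned above, one commonly quoted, simplified (independence-based) form of the strength formula is shown below. This is a sketch under that simplifying assumption, not necessarily the exact formula in the current code, and the confidence component that PLN also propagates is omitted.

```python
# Simplified (independence-based) strength formula for PLN deduction:
# given Inheritance A B and Inheritance B C, estimate the strength of
# Inheritance A C.  The actual rule also propagates a confidence value.

def pln_deduction_strength(s_ab, s_bc, s_b, s_c):
    """Estimate P(C|A) from P(B|A), P(C|B) and the term strengths P(B), P(C)."""
    if s_b >= 1.0:
        return s_c                  # degenerate case: B covers everything
    return s_ab * s_bc + (1.0 - s_ab) * (s_c - s_b * s_bc) / (1.0 - s_b)

# Example: "cats are mammals" (0.9) and "mammals are animals" (0.9), with
# assumed base-rate strengths for "mammal" (0.2) and "animal" (0.3).
print(round(pln_deduction_strength(0.9, 0.9, 0.2, 0.3), 3))   # -> 0.825
```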
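MOSES layers several refinements (deme management, reduction of candidate programs to an "elegant normal form", probabilistic model building) on top of a basic grow-score-select loop. The sketch below shows only that basic loop, evolving a small boolean program toward an exclusive-or target; everything in it is a generic toy, not MOSES itself.

```python
# Generic toy genetic-programming loop in the spirit of MOSES: grow random
# candidate programs, score them against training cases, keep the best and
# mutate them.

import random

random.seed(1)

# Target behaviour: exclusive-or of two boolean inputs.
CASES = [((a, b), a != b) for a in (False, True) for b in (False, True)]

def random_program(depth=2):
    """Grow a random boolean expression tree over the inputs 'a' and 'b'."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(["a", "b"])
    op = random.choice(["and", "or", "not"])
    if op == "not":
        return ("not", random_program(depth - 1))
    return (op, random_program(depth - 1), random_program(depth - 1))

def run(prog, a, b):
    """Evaluate an expression tree on concrete inputs."""
    if prog == "a": return a
    if prog == "b": return b
    if prog[0] == "not": return not run(prog[1], a, b)
    left, right = run(prog[1], a, b), run(prog[2], a, b)
    return (left and right) if prog[0] == "and" else (left or right)

def fitness(prog):
    """Number of training cases the candidate gets right."""
    return sum(run(prog, a, b) == out for (a, b), out in CASES)

def mutate(prog):
    """Replace a random subtree with a freshly grown one."""
    if isinstance(prog, str) or random.random() < 0.5:
        return random_program()
    return (prog[0],) + tuple(mutate(child) for child in prog[1:])

population = [random_program() for _ in range(20)]
for _ in range(50):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(CASES):
        break
    population = population[:5] + [mutate(random.choice(population[:5]))
                                   for _ in range(15)]

best = max(population, key=fitness)
print(best, fitness(best))
```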
3. Organization and Funding
In 2008, the Machine Intelligence Research Institute (MIRI), then known as the Singularity Institute for Artificial Intelligence (SIAI), sponsored several researchers and engineers. Many contributions from the open-source community have been made since OpenCog's participation in the Google Summer of Code in 2008 and 2009. MIRI no longer supports OpenCog.[6] OpenCog has received funding and support from several sources, including the Hong Kong government, Hong Kong Polytechnic University, the Jeffrey Epstein VI Foundation[7] and Hanson Robotics. The OpenCog project is currently affiliated with SingularityNET and Hanson Robotics.
4. Applications
As with other cognitive architectures, the main purpose is to create virtual humans: three-dimensional avatar characters that mimic human behaviors such as emotion, gesture and learning. The emotion module, for example, was included precisely because humans have emotions; the underlying premise is that artificial general intelligence can be realized by simulating human intelligence.[8]
The OpenCog project's own self-description mentions further possible applications, in the direction of natural language processing and the simulation of a dog.[9]