Glossary in the fields of Modeling and Artificial Intelligence
- Artificial Neural Network - ANN
- AR Models
- Artificial Intelligence - AI
- Behavioral analytics
- Big Data
- Business Intelligence - BI
- Case-Based Reasoning - CBR
- Call Detail Record (CDR) Analysis
- Clickstream Analytics
- Clustering Analysis
- Columnar Database
- Complex Event Processing (CEP)
- Convolutional Neural Networks
- Data Analytics
- Data Governance
- Data Mining
- Data Science
- Deep Learning
- Errors used in modeling problems
- Evolutionary Algorithms (EA)
- Evolving Intelligent Systems (EIS)
- Fuzzy Logic
- Genetic Algorithms
- Grid Computing
- Growing Self-Organizing Networks
- Growing Neural Gas
- Human Level Artificial Intelligence - HLAI
- k-Nearest Neighbor (k-NN)
- Learning Vector Quantization - LVQ
- Machine Learning - ML
- Minimum Classification Error - MCE
- Modified Learning Vector Quantization - MLVQ
- Multilayer Perceptron - MLP
- Multiple Self-Organized Maps - MSOM
- OJA’s Network
- Pattern Recognition
- Predictive Modeling
- Principal Component Analysis - PCA
- Radial Basis Function - RBF
- Recommendation Engine
- Recurrent Neural Network
- Recursive Neural Network
- Self Organized Maps - SOM
- Soft computing
- Supervised Neural Network
- Text Mining
- The Turing Test
- Unsupervised Neural Network
Glossary in the fields of Modeling and Artificial Intelligence
Adaptive Linear Neuron - ADALINE
The evolutionary version MADALINE (Many Adaptive Linear Neurons) is considered the origin of today's multilayer neural networks.
Reference: B. Widrow. Generalization and information storage in networks of adaline neurons.
Algorithm
An algorithm consists of a self-contained, step-by-step set of operations to be performed. In analytics, its goal is to operate on data in order to solve a particular question or problem.
Analytics
Analytics is the discipline of using software-based algorithms for the discovery, interpretation, and communication of meaningful patterns in data.
Artificial Neural Network - ANN
Inspired by biological nervous systems. Artificial Neural Networks are based on interconnected nodes (neurons) which perform very simple operations. The connections between nodes have weights simulating the synaptic process in the neuron dendrites, and the modification of the signal in the neuron is simulated by the activation function.
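A minimal sketch of one such node (function name and numbers are illustrative, not from the glossary): a weighted sum of the inputs passed through an activation function.

```python
import math

def neuron_output(inputs, weights, bias):
    """One artificial neuron: the weighted sum simulates the synaptic
    weights, and the sigmoid simulates the activation function."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation

# A node with two inputs: each weight modulates one incoming signal.
out = neuron_output([1.0, 0.5], weights=[0.8, -0.4], bias=0.1)
```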
AR Models
AR: an autoregressive model specifies that the output variable depends linearly on its own previous values.
ARMAX: an autoregressive moving average model with exogenous inputs.
ARX: an autoregressive model with exogenous inputs.
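A minimal sketch of a one-step-ahead AR(p) prediction (function name, coefficients and data are illustrative):

```python
def ar_predict(history, coeffs, intercept=0.0):
    """One-step-ahead prediction of an AR(p) model:
    y_t = c + a1*y_{t-1} + a2*y_{t-2} + ... + ap*y_{t-p}."""
    p = len(coeffs)
    lags = history[-1:-p - 1:-1]  # most recent value first
    return intercept + sum(a * y for a, y in zip(coeffs, lags))

# AR(2) example: the next value depends linearly on the two previous ones.
series = [1.0, 2.0, 3.0]
pred = ar_predict(series, coeffs=[0.5, 0.25])
```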
Artificial Intelligence - AI
Some of the most famous definitions of Artificial Intelligence, following the classification given by Russell and Norvig in their book “Artificial Intelligence: A Modern Approach”:
“The exciting new effort to make computers think […] machines with minds, in the full and literal sense.” (Haugeland, 1985)
“The automation of activities that we associate with human thinking, activities such as decision-making, problem solving, learning […]” (Bellman, 1978)
“The study of mental faculties through the use of computational models.”
(Charniak and McDermott, 1985)
“The study of the computations that make it possible to perceive, reason, and act.” (Winston, 1992)
“The art of creating machines that perform functions that require intelligence when performed by people.” (Kurzweil, 1990)
“The study of how to make computers do things at which, at the moment, people are better.” (Rich and Knight, 1991)
“Computational Intelligence is the study of the design of intelligent agents.” (Poole et al., 1998)
“AI is concerned with intelligent behavior in artifacts.” (Nilsson, 1998)
Backpropagation
A neural network learns by training, using an algorithm called backpropagation. To train a neural network, it is first given an input, which produces an output. The first step is to teach the neural network what the correct, or ideal, output should have been for that input. The ANN can then take this ideal output and begin adapting the weights to yield an enhanced, more precise output (based on how much each weight contributed to the overall prediction) the next time it receives a similar input. This process is repeated many times until the margin of error between the actual output and the ideal output is considered acceptable.
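A minimal sketch of this loop for a single sigmoid neuron (learning rate, target and iteration count are illustrative): forward pass, compare with the ideal output, then adjust each weight in proportion to its contribution to the error.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Train one sigmoid neuron so that input [1, 1] yields target 1.0.
weights, bias, rate = [0.0, 0.0], 0.0, 1.0
x, target = [1.0, 1.0], 1.0
for _ in range(200):
    out = sigmoid(sum(w * xi for w, xi in zip(weights, x)) + bias)
    delta = (out - target) * out * (1.0 - out)   # gradient of squared error
    weights = [w - rate * delta * xi for w, xi in zip(weights, x)]
    bias -= rate * delta
final = sigmoid(sum(w * xi for w, xi in zip(weights, x)) + bias)
```

After enough passes the output approaches the ideal output, illustrating how the error shrinks as the weights adapt.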
Behavioral analytics
A systematic approach that uses collected data to understand human behavior and to predict future actions.
Big Data
Big data refers to datasets whose size is beyond the ability of typical database software tools to capture, store, manage, and analyze.
The modern concept of Big data appeared in the McKinsey report: Big data: The next frontier for innovation, competition, and productivity.
The term was used for the first time in the scientific paper: M. Cox and D. Ellsworth. Managing Big Data for Scientific Visualization.
Nowadays, the characterization of Big data given by IBM, the three Vs (Volume, Variety and Velocity), is commonly used.
Business Intelligence - BI
Tools focused mainly in the visualization and controls of the relevant indicators of the business. Focused on the reporting and dashboards generation.
Case-Based Reasoning - CBR
It is an Artificial Intelligence paradigm based on the idea: similar problems have similar solutions.
CBR implementations are often based on k-NN algorithms.
Call Detail Record (CDR) Analysis
CDRs contain data that a telecommunications company collects about phone calls, such as time and length of call. This data can be used in analytical applications.
Clickstream Analytics
The analysis of users' online activity based on the items they click on a web page.
Clustering Analysis
The process of identifying objects that are similar to each other and grouping them in order to understand the differences, as well as the similarities, within the data.
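One common clustering algorithm is k-means; a minimal 1-D sketch (function name, data and starting centers are illustrative):

```python
def kmeans_1d(points, centers, iterations=10):
    """Minimal 1-D k-means: assign each point to the nearest center,
    then move each center to the mean of its assigned points."""
    for _ in range(iterations):
        clusters = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# Two well-separated groups of values end up in two clusters.
data = [1.0, 1.2, 0.8, 10.0, 10.4, 9.6]
centers, clusters = kmeans_1d(data, centers=[0.0, 5.0])
```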
Columnar Database
A database that stores data by column rather than by row. Both columnar and row databases use traditional database languages like SQL to load data and perform queries. A key advantage of a columnar database is faster hard disk access.
Complex Event Processing (CEP)
CEP is the process of monitoring and analyzing all events across an organization’s systems and acting on them when necessary in real time. It is event processing that combines data from multiple sources to infer events or patterns that suggest more complicated circumstances. The goal of complex event processing is to identify meaningful events and respond to them as quickly as possible.
Convolutional Neural Networks
A convolutional neural network (CNN) can be considered as a neural network that utilizes numerous identical replicas of the same neuron. The benefit of this is that it enables a network to learn a neuron once and use it in numerous places, simplifying the model learning process and thus reducing error. This has made CNNs particularly useful in the area of object recognition and image tagging.
CNNs learn more and more abstract representations of the input with each convolution. In the case of object recognition, a CNN might start with raw pixel data, then learn highly discriminative features such as edges, followed by basic shapes, complex shapes, patterns and textures.
Data Analytics
The science of inspecting, cleaning, transforming, and modeling data with the goal of discovering useful information, suggesting conclusions, and supporting decision-making.
Data Governance
A set of data management policies and practices defined to ensure that data availability, usability, quality, integrity, and security are maintained.
Data Mining
Data Mining is the process of discovering patterns in large data sets involving methods from artificial intelligence, statistics, and database systems.
Reference: U. Fayyad, G. Piatetsky-Shapiro and P. Smyth. From Data Mining to Knowledge Discovery in Databases.
Data Science
Data science is an interdisciplinary field about processes to extract knowledge or insights from data in various forms, either structured or unstructured. It is a continuation of some of the data analysis fields such as statistics, Data Mining, and Artificial Intelligence.
Deep Learning
Deep Learning tries to model high-level abstractions in data by using multiple processing layers, composed of multiple non-linear transformations.
Deep Learning is a branch of Machine Learning.
Reference: Yoshua Bengio. Learning Deep Architectures for AI.
Errors used in modeling problems
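Typical error measures used in modeling problems include the mean absolute error (MAE), the mean squared error (MSE) and the root mean squared error (RMSE); a minimal sketch (function names and data are illustrative):

```python
import math

def mae(actual, predicted):
    """Mean Absolute Error: average magnitude of the residuals."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def mse(actual, predicted):
    """Mean Squared Error: penalizes large residuals more heavily."""
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    """Root Mean Squared Error: MSE in the units of the target variable."""
    return math.sqrt(mse(actual, predicted))

y_true = [3.0, 5.0, 2.0]
y_pred = [2.0, 5.0, 4.0]
err = rmse(y_true, y_pred)
```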
Evolutionary Algorithms (EA)
Evolutionary Algorithms are used in optimization problems. EA uses mechanisms inspired by biological evolution, such as reproduction, mutation, recombination, and selection. Candidate solutions to the optimization problem play the role of individuals in a population, and the fitness function determines the quality of the solutions.
Evolving Intelligent Systems (EIS)
EIS defines a new approach focused on learning: developing soft computing models in which both the parameters and the structure adapt on-line.
Reference: Evolving intelligent systems: methodology and applications.
Fuzzy Logic
It is a superset of conventional (Boolean) logic that has been extended to handle the concept of partial truth: truth values between “completely true” and “completely false”.
Reference: Fuzzy logic and its applications.
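A minimal sketch of a partial truth value (the "warm" fuzzy set and its breakpoints are illustrative): a membership function that returns values between completely false (0.0) and completely true (1.0).

```python
def warm_membership(temp_c):
    """Triangular membership function for the fuzzy set "warm":
    completely true at 25 degrees C, fading linearly to completely
    false at 15 degrees C and 35 degrees C."""
    if temp_c <= 15.0 or temp_c >= 35.0:
        return 0.0
    if temp_c <= 25.0:
        return (temp_c - 15.0) / 10.0
    return (35.0 - temp_c) / 10.0
```

A temperature of 20 degrees C is "warm" to degree 0.5 rather than simply true or false.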
Genetic Algorithms
Genetic algorithms belong to the larger class of evolutionary algorithms (EA), which generate solutions to optimization problems using techniques inspired by natural evolution, such as inheritance, mutation, selection and crossover.
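A minimal sketch of these mechanisms (the OneMax toy problem and all parameter values are illustrative): a fitness function scores candidate solutions, the fitter half is selected, and children are produced by crossover and mutation.

```python
import random

random.seed(0)  # reproducible run

def fitness(bits):
    """OneMax toy problem: fitness is the number of 1 bits."""
    return sum(bits)

def evolve(pop_size=20, length=12, generations=40):
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)   # crossover point
            child = a[:cut] + b[cut:]           # recombination
            if random.random() < 0.1:           # mutation
                i = random.randrange(length)
                child[i] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```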
Grid Computing
The performing of computing functions using resources from multiple distributed systems. The systems that comprise a grid computing network do not have to be similar in design or in the same geographic location.
Growing Self-Organizing Networks
Self-organizing networks such as Neural Gas, Growing Neural Gas and many others have been adopted in applications for both dimensionality reduction and manifold learning.
Reference: T. Martinetz and K. Schulten. A “neural gas” network learns topologies. In Artificial Neural Networks, Elsevier, 1991, pp. 397–402.
Growing Neural Gas
Human Level Artificial Intelligence - HLAI
To build a machine that has “common sense”: develop ways to combine the advantages of multiple methods to represent knowledge, multiple ways to make inferences, and multiple ways to learn (see Minsky article).
k-Nearest Neighbor (k-NN)
Starting from a new case, explore the knowledge repository (a set of known cases) to find similar cases and then determine a solution to the new case.
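A minimal sketch of this procedure (function names, distance choice and data are illustrative):

```python
from collections import Counter

def knn_classify(known_cases, new_case, k=3):
    """Classify a new case by majority vote among the k most similar
    known cases (similarity here is Euclidean distance)."""
    def distance(case):
        return sum((a - b) ** 2 for a, b in zip(case[0], new_case)) ** 0.5
    nearest = sorted(known_cases, key=distance)[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# A tiny knowledge repository of (features, label) cases.
cases = [((1.0, 1.0), "A"), ((1.2, 0.9), "A"),
         ((8.0, 8.0), "B"), ((7.5, 8.2), "B"), ((8.3, 7.7), "B")]
label = knn_classify(cases, (1.1, 1.0), k=3)
```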
Learning Vector Quantization - LVQ
A prototype-training technique for improving the 1-NN algorithm.
Reference: T. Kohonen. Learning vector quantization for pattern recognition.
Machine Learning - ML
Machine Learning has developed learning algorithms, inspired by biological nervous systems, that emulate the biological processes of recognition, learning and generalization.
Mashup
In web development, a mashup is a web application that uses content from more than one source to create a single new service. The term implies easy, fast integration, using open application programming interfaces (open APIs) and data sources to produce enriched results.
Minimum Classification Error - MCE
An evolution of the LVQ algorithm.
Reference: Biing-Hwang Juang and Shigeru Katagiri. Discriminative learning for minimum error classification.
Modified Learning Vector Quantization - MLVQ
Multilayer Perceptron - MLP
An Artificial Neural Network consisting of multiple layers of nodes, with each layer fully connected to the next one. To specify an MLP it is necessary to define: the activation function of the neurons, the number of hidden layers, and the number of nodes in each layer.
Usually MLP uses a supervised learning technique called backpropagation for training the neural network.
Multiple Self-Organized Maps - MSOM
A network architecture based on multiple Kohonen’s Self-Organized Maps.
NoSQL
NoSQL (commonly interpreted as “non SQL”, “non relational” or “not only SQL”) is a broad class of database management systems identified by non-adherence to the relational database management system model. NoSQL databases are not built primarily on tables, and generally do not use SQL for data manipulation.
OJA’s Network
A feedforward neural network consisting of only one layer, designed to extract the principal components from the input vectors.
Pattern Recognition
It focuses on the recognition of patterns and regularities in data, that is, the identification by a machine of implicit objects, types or relationships in raw data.
Perceptron
In the human brain, a neuron is a cell that processes and transmits information. A perceptron can be considered as a super-simplified version of a biological neuron.
A perceptron will take several inputs and weigh them up to produce a single output. Each input is weighted according to its importance in the output decision.
See Multilayer Perceptron (MLP).
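A minimal sketch of a perceptron and its classic learning rule (the AND example and learning rate are illustrative): each input is weighted, the weighted sum is thresholded, and weights are nudged whenever the output is wrong.

```python
def perceptron(inputs, weights, bias):
    """Weighted sum of the inputs followed by a threshold: output 1 if
    the sum exceeds zero, otherwise 0."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

# Train with the perceptron learning rule on the logical AND function.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = [0.0, 0.0], 0.0
for _ in range(10):
    for x, target in samples:
        error = target - perceptron(x, weights, bias)
        weights = [w + 0.5 * error * xi for w, xi in zip(weights, x)]
        bias += 0.5 * error
```

After a few epochs the weights separate the AND cases correctly.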
Predictive Modeling
The process of developing a model that predicts a trend or outcome.
Principal Component Analysis - PCA
PCA converts a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components. It is a technique used for compressing the data without losing important information.
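A minimal sketch using NumPy (function name and data are illustrative): the principal components are obtained from the SVD of the centered data, and keeping only the top components compresses the observations.

```python
import numpy as np

def pca(data, n_components):
    """Project data onto its top principal components: center the data,
    take the SVD, and keep the directions of largest variance."""
    centered = data - data.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:n_components]           # principal directions
    return centered @ components.T, components

# Strongly correlated 2-D points compress well into a single component.
points = np.array([[1.0, 1.1], [2.0, 1.9], [3.0, 3.2], [4.0, 3.9]])
projected, components = pca(points, n_components=1)
```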
Radial Basis Function - RBF
An Artificial Neural Network consisting of three layers: input, hidden and output. The nodes in the hidden layer have an activation function called a basis function, usually a Gaussian function. The basis function has a non-zero output only in a limited domain of the input space.
Recommendation Engine
An algorithm that analyzes a customer’s purchases and actions and then uses that data to recommend complementary products.
Recurrent Neural Network
Recurrent Neural Networks (RNN) make use of sequential information. Unlike traditional neural networks, where it is assumed that all inputs and outputs are independent of one another, RNNs rely on the preceding computations and what has previously been calculated. RNNs can be conceptualized as a neural network unrolled over time. Where you would have different layers in a regular neural network, in an RNN you apply the same layer to the input at each timestep, using the output (i.e. the state of the previous timestep) as input. Connections between units in an RNN form a directed cycle, creating a sort of internal memory that helps the model leverage long chains of dependencies.
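A minimal sketch of the unrolling idea (function names, weights and data are illustrative): the same weights are applied at every timestep, and the previous state is fed back as an extra input.

```python
import math

def rnn_step(state, x, w_state, w_input, bias):
    """One timestep of a minimal scalar RNN: the same weights are used
    at every step, and the previous state feeds back as input."""
    return math.tanh(w_state * state + w_input * x + bias)

def run_rnn(sequence, w_state=0.5, w_input=1.0, bias=0.0):
    state = 0.0                      # internal memory starts empty
    for x in sequence:               # unrolled over time
        state = rnn_step(state, x, w_state, w_input, bias)
    return state

# The final state depends on the whole sequence, not just the last input.
a = run_rnn([1.0, 0.0, 0.0])
b = run_rnn([0.0, 0.0, 0.0])
```

Both sequences end in the same input, yet the final states differ: the earlier input persists in the internal memory.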
Recursive Neural Network
A Recursive Neural Network is a generalization of a Recurrent Neural Network, generated by applying a fixed and consistent set of weights repetitively, or recursively, over a structure. Recursive Neural Networks take the form of a tree, while Recurrent Neural Networks take the form of a chain. Recursive Neural Nets have been utilized in Natural Language Processing for tasks such as Sentiment Analysis.
Self Organized Maps - SOM
An Artificial Neural Network based on the knowledge that the various areas of the brain, especially the cerebral cortex, have a topological organization and perform specialized tasks: speech control, analysis of sensory signals (visual, auditory, …), and so on.
A SOM produces a discretized representation of the input space of the training samples, called a map. Self-organizing maps differ from other artificial neural networks in that they apply competitive learning as opposed to error-correction learning. A SOM uses a neighborhood function to preserve the topological properties of the input space.
Reference: Teuvo Kohonen. Self-Organized Formation of Topologically Correct Feature Maps.
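A minimal 1-D sketch of these ideas (unit count, rate, radius and data are illustrative): competitive learning picks a best-matching unit for each input, and a neighborhood function drags nearby units along with it.

```python
import random

def train_som_1d(data, n_units=5, steps=500, rate=0.3, radius=1):
    """Minimal 1-D SOM: competitive learning (the best-matching unit
    wins) plus a neighborhood function that also pulls the winner's
    neighbors toward the input."""
    random.seed(1)  # reproducible run
    units = [random.random() for _ in range(n_units)]
    for _ in range(steps):
        x = random.choice(data)
        winner = min(range(n_units), key=lambda i: abs(units[i] - x))
        for i in range(n_units):
            if abs(i - winner) <= radius:           # neighborhood function
                units[i] += rate * (x - units[i])   # move toward the input
    return units

# Inputs spread over [0, 1]; the map units organize to cover the space.
grid = train_som_1d([i / 9 for i in range(10)])
```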
Soft computing
Soft computing differs from conventional (hard) computing in that, unlike hard computing, it is tolerant of imprecision, uncertainty, partial truth, and approximation. In effect, the role model for soft computing is the human mind.
Supervised Neural Network
For a supervised neural network to produce an ideal output, it must have been previously given this output. It is ‘trained’ on a pre-defined dataset and based on this dataset, can produce accurate outputs depending on the input it has received. You could therefore say that it has been supervised in its learning, having for example been given both the question and the ideal answer.
Text Mining
Text mining is the analysis of data contained in natural language text. The application of text mining techniques to solve business problems is called text analytics. Text mining can help to derive potentially valuable business insights from text-based content such as postings on social media streams like Facebook, Twitter and LinkedIn. It mines unstructured data with natural language processing (NLP), statistical modeling and machine learning techniques.
The Turing Test
A test proposed by Turing in order to verify whether an artificial intelligence device can exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.
Reference: Alan M. Turing. Computing Machinery and Intelligence.
Unsupervised Neural Network
This involves providing a program with an unlabeled data set that it has not been previously trained on, with the goal of automatically discovering patterns and trends through clustering.