Companies of all types and sizes are increasingly looking to predictive analytics to improve business decision making in Marketing, Risk Management, Fraud Detection, Network Security, Product Quality and many other areas.
Predictive modeling solutions now abound, but they all share three key weaknesses that make it nearly impossible to align analytic models with business objectives.
- Valuable events are rare, but the tools are biased to the majority.
- Event value varies widely, but the tools treat all events as the same.
- Big Data modeling is compute intensive, but the tools don’t scale.
Optimized Prediction combines revolutionary technology on three different levels to deliver a powerful, highly transparent modeling solution.
Intelligence of a new machine learning algorithm – Machine learning explores the study and construction of algorithms that can learn from and make predictions on data. Such algorithms operate by building a model from example inputs in order to make data-driven predictions or decisions, rather than following strictly static program instructions. The new algorithm is built to handle unbalanced data, where target events, such as marketing conversions and detectable fraud, are rare and hard to predict. It doesn’t just predict valid target events; it predicts the ones with the highest business value, such as the best up-sell targets or the riskiest policies. And it doesn’t penalize dirty data or require extensive data tweaking.
Power of highly parallel systems programming – Parallel programming ingeniously combines the CPU and memory resources of numerous networked servers to work as a single powerful system to build new predictive models.
The system is faster, easier to use, and more cost-effective than traditional desktop and server-based predictive modeling solutions. As a result, users can build more models and refresh them more frequently than they previously could.
Scalability of public or private cloud computing – Optimized Prediction runs in either the Amazon AWS public cloud or on-premises in a Eucalyptus private cloud. This gives the user smooth, automatic compute scaling to build models from the largest data sets.
Cloud scaling enables the modeler to work with Big Data without sampling or other reduction techniques. And cloud software is highly automated, requiring no IT skills for configuration or operation.
Techniques Used
Decision tree learning
Decision tree learning uses a decision tree as a predictive model, which maps observations about an item to conclusions about the item’s target value.
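For concreteness, here is a minimal sketch using scikit-learn; the feature names and customer data are invented for illustration:

```python
# Fit a small decision tree and use it to map observations to a target value.
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical observations: [age, income] -> bought_product (0 or 1)
X = [[25, 30000], [40, 60000], [35, 52000], [22, 18000], [50, 80000], [28, 24000]]
y = [0, 1, 1, 0, 1, 0]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["age", "income"]))  # readable if/then rules
print(tree.predict([[30, 45000]]))  # conclusion about a new item's target value
```

The printed rules make the model’s logic transparent, which is one reason decision trees remain popular for business-facing models.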
Association rule learning
Association rule learning is a method for discovering interesting relations between variables in large databases.
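The idea can be sketched in a few lines of Python; the transactions and thresholds below are invented for illustration, and real systems use algorithms such as Apriori or FP-Growth to scale:

```python
# Toy association-rule mining: support counts plus a confidence filter.
from itertools import combinations

transactions = [{"bread", "milk"}, {"bread", "butter"}, {"milk", "butter"},
                {"bread", "milk", "butter"}, {"bread", "milk"}]

def support(itemset):
    # Fraction of transactions containing every item in the itemset.
    return sum(itemset <= t for t in transactions) / len(transactions)

min_support, min_confidence = 0.4, 0.7
items = sorted({i for t in transactions for i in t})
frequent_pairs = [frozenset(c) for c in combinations(items, 2)
                  if support(frozenset(c)) >= min_support]

# Rule A -> B is kept when confidence = support(A and B) / support(A) is high.
for pair in frequent_pairs:
    for a in pair:
        (b,) = pair - {a}
        conf = support(pair) / support(frozenset({a}))
        if conf >= min_confidence:
            print(f"{a} -> {b} (confidence {conf:.2f})")
```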
Artificial neural networks
An artificial neural network (ANN) learning algorithm, usually called “neural network” (NN), is a learning algorithm that is inspired by the structure and functional aspects of biological neural networks. Computations are structured in terms of an interconnected group of artificial neurons, processing information using a connectionist approach to computation. Modern neural networks are non-linear statistical data modeling tools. They are usually used to model complex relationships between inputs and outputs, to find patterns in data, or to capture the statistical structure in an unknown joint probability distribution between observed variables.
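As a hedged sketch, scikit-learn’s MLPClassifier fits a small feed-forward network of interconnected artificial neurons; the XOR-style data below is illustrative precisely because no linear model can capture it:

```python
# Small feed-forward neural network (multi-layer perceptron) on XOR-style data.
from sklearn.neural_network import MLPClassifier

X = [[0, 0], [0, 1], [1, 0], [1, 1]] * 50  # repeated so the solver has data
y = [0, 1, 1, 0] * 50                      # XOR: a non-linear input/output map

net = MLPClassifier(hidden_layer_sizes=(8,), activation="tanh",
                    max_iter=2000, random_state=0).fit(X, y)
print(net.predict([[0, 1], [1, 1]]))       # typically [1 0]
```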
Inductive logic programming
Inductive logic programming (ILP) is an approach to rule learning using logic programming as a uniform representation for input examples, background knowledge, and hypotheses. Given an encoding of the known background knowledge and a set of examples represented as a logical database of facts, an ILP system will derive a hypothesized logic program that entails all positive and no negative examples. Inductive programming is a related field that considers any kind of programming languages for representing hypotheses (and not only logic programming), such as functional programs.
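The acceptance criterion (entail all positive examples and no negative ones) can be illustrated with a toy Python check; the family facts, the grandparent examples, and the candidate rule are all invented here, and a real ILP system would search for the rule rather than merely verify it:

```python
# Toy ILP acceptance test: does the hypothesized rule cover all positive
# examples and none of the negative ones, given the background facts?
parent = {("ann", "bob"), ("bob", "carl"), ("ann", "dora"), ("dora", "eve")}
positives = {("ann", "carl"), ("ann", "eve")}  # grandparent(X, Z) holds
negatives = {("bob", "eve"), ("ann", "bob")}   # grandparent(X, Z) does not hold

# Candidate hypothesis: grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
def rule_covers(x, z):
    middles = {b for (_, b) in parent}
    return any((x, y) in parent and (y, z) in parent for y in middles)

accepted = (all(rule_covers(x, z) for (x, z) in positives)
            and not any(rule_covers(x, z) for (x, z) in negatives))
print("hypothesis accepted:", accepted)  # True
```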
Support vector machines
Support vector machines (SVMs) are a set of related supervised learning methods used for classification and regression. Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that predicts whether a new example falls into one category or the other.
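A minimal two-category sketch with scikit-learn (the points are made up) shows the train-then-predict workflow:

```python
# Linear SVM: learn a separating boundary between two categories.
from sklearn.svm import SVC

X = [[1.0, 2.0], [2.0, 3.0], [1.5, 1.8],   # category 0
     [6.0, 7.0], [7.0, 8.0], [6.5, 6.8]]   # category 1
y = [0, 0, 0, 1, 1, 1]

svm = SVC(kernel="linear").fit(X, y)
print(svm.predict([[2.0, 2.5], [6.8, 7.5]]))  # -> [0 1]
print(svm.support_vectors_)  # the training examples that define the margin
```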
Clustering
Cluster analysis is the assignment of a set of observations into subsets (called clusters) so that observations within the same cluster are similar according to some predesignated criterion or criteria, while observations drawn from different clusters are dissimilar. Different clustering techniques make different assumptions on the structure of the data, often defined by some similarity metric and evaluated for example by internal compactness (similarity between members of the same cluster) and separation between different clusters. Other methods are based on estimated density and graph connectivity. Clustering is a method of unsupervised learning, and a common technique for statistical data analysis.
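A short k-means sketch (scikit-learn, with two synthetic blobs) illustrates the unsupervised assignment of observations to clusters:

```python
# k-means clustering: group points without any labels.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
points = np.vstack([rng.normal(loc=(0, 0), scale=0.5, size=(20, 2)),
                    rng.normal(loc=(5, 5), scale=0.5, size=(20, 2))])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print(km.cluster_centers_)               # centers land near (0, 0) and (5, 5)
print(km.labels_[:5], km.labels_[-5:])   # similar observations share a label
```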
Bayesian networks
A Bayesian network, belief network or directed acyclic graphical model is a probabilistic graphical model that represents a set of random variables and their conditional independencies via a directed acyclic graph (DAG). For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms. Given symptoms, the network can be used to compute the probabilities of the presence of various diseases. Efficient algorithms exist that perform inference and learning.
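The disease/symptom example can be made concrete with a two-node network and inference by direct application of Bayes’ rule; all probabilities below are invented for illustration:

```python
# Two-node Bayesian network (Disease -> Symptom) with exact inference.
p_disease = 0.01                  # P(D = true), the prior
p_symptom = {True: 0.9,           # P(S = true | D = true)
             False: 0.2}          # P(S = true | D = false)

# P(D = true | S = true) = P(S | D) P(D) / sum over d of P(S | d) P(d)
numerator = p_symptom[True] * p_disease
evidence = p_symptom[True] * p_disease + p_symptom[False] * (1 - p_disease)
print(numerator / evidence)  # ~0.043: the symptom raises, but does not confirm, the disease
```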
Reinforcement learning
Reinforcement learning is concerned with how an agent ought to take actions in an environment so as to maximize some notion of long-term reward. Reinforcement learning algorithms attempt to find a policy that maps states of the world to the actions the agent ought to take in those states. Reinforcement learning differs from the supervised learning problem in that correct input/output pairs are never presented, nor sub-optimal actions explicitly corrected.
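A minimal sketch of tabular Q-learning on an invented five-state corridor (the agent moves left or right; reward arrives only at the far right) shows a policy emerging from delayed reward:

```python
# Tabular Q-learning on a toy 5-state corridor; reward 1 only in state 4.
import random

random.seed(0)
n_states, actions = 5, (-1, +1)            # actions: move left or move right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2      # learning rate, discount, exploration

for _ in range(500):
    s = 0
    while s != n_states - 1:
        a = (random.choice(actions) if random.random() < epsilon
             else max(actions, key=lambda act: Q[(s, act)]))
        s2 = min(max(s + a, 0), n_states - 1)
        r = 1.0 if s2 == n_states - 1 else 0.0
        target = r + gamma * max(Q[(s2, act)] for act in actions)
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s2

# The learned policy maps every non-terminal state to "move right" (+1).
print([max(actions, key=lambda act: Q[(s, act)]) for s in range(n_states - 1)])
```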
Representation learning
Several learning algorithms, mostly unsupervised learning algorithms, aim at discovering better representations of the inputs provided during training. Classical examples include principal components analysis and cluster analysis. Representation learning algorithms often attempt to preserve the information in their input but transform it in a way that makes it useful, often as a pre-processing step before performing classification or prediction. This allows reconstruction of inputs that come from the unknown data-generating distribution, while not necessarily being faithful to configurations that are implausible under that distribution.
Manifold learning algorithms attempt to do so under the constraint that the learned representation is low-dimensional. Sparse coding algorithms attempt to do so under the constraint that the learned representation is sparse (has many zeros). Multilinear subspace learning algorithms aim to learn low-dimensional representations directly from tensor representations for multidimensional data, without reshaping them into (high-dimensional) vectors. Deep learning algorithms discover multiple levels of representation, or a hierarchy of features, with higher-level, more abstract features defined in terms of (or generating) lower-level features. It has been argued that an intelligent machine is one that learns a representation that disentangles the underlying factors of variation that explain the observed data.
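As a small sketch of the classical case mentioned above, principal components analysis learns a low-dimensional representation and can approximately reconstruct the inputs from it; the correlated data here is synthetic:

```python
# PCA as representation learning: encode 3-D inputs as 1-D codes, then decode.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
t = rng.normal(size=(100, 1))
X = np.hstack([t, 2 * t, -t]) + 0.05 * rng.normal(size=(100, 3))  # nearly 1-D

pca = PCA(n_components=1).fit(X)
codes = pca.transform(X)              # the learned low-dimensional representation
X_hat = pca.inverse_transform(codes)  # reconstruction from the codes
print(pca.explained_variance_ratio_)  # ~[0.999]: one direction captures the data
print(np.abs(X - X_hat).mean())       # small average reconstruction error
```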
Similarity and metric learning
In this problem, the learning machine is given pairs of examples that are considered similar and pairs of less similar objects. It then needs to learn a similarity function (or a distance metric) that can predict whether new objects are similar. This approach is sometimes used in recommendation systems.
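One minimal sketch, assuming a diagonal (per-feature weight) metric and two invented pairs: gradient steps shrink the distance for the similar pair and push the dissimilar pair past a margin.

```python
# Learn feature weights w for the squared distance sum_i w_i * (x_i - y_i)^2.
import numpy as np

similar = [(np.array([1.0, 5.0]), np.array([1.1, 9.0]))]     # differ in feature 2
dissimilar = [(np.array([1.0, 5.0]), np.array([3.0, 5.1]))]  # differ in feature 1

w, margin, lr = np.ones(2), 4.0, 0.1
for _ in range(100):
    grad = np.zeros(2)
    for x, y in similar:                       # pull similar pairs together
        grad += (x - y) ** 2
    for x, y in dissimilar:                    # push dissimilar pairs apart
        if ((x - y) ** 2 * w).sum() < margin:  # only while inside the margin
            grad -= (x - y) ** 2
    w = np.clip(w - lr * grad, 0.0, None)      # keep weights non-negative

print(w)  # roughly [1.4, 0.0]: feature 1 now dominates the learned metric
```

The learned weights say that feature 1 matters for similarity while feature 2 should be ignored, which is exactly what the labeled pairs imply.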
Sparse dictionary learning
In this method, a datum is represented as a linear combination of basis functions, and the coefficients are assumed to be sparse. Let x be a d-dimensional datum and D a d × n matrix where each column represents a basis function; r is the coefficient vector that represents x using D. Mathematically, sparse dictionary learning means finding x ≈ Dr where r is sparse. Generally speaking, n is assumed to be larger than d to allow the freedom for a sparse representation.
Learning a dictionary along with sparse representations is strongly NP-hard and also difficult to solve approximately. A popular heuristic method for sparse dictionary learning is K-SVD.
Sparse dictionary learning has been applied in several contexts. In classification, the problem is to determine which class a previously unseen datum belongs to. Suppose a dictionary for each class has already been built. Then a new datum is associated with the class whose dictionary yields the best sparse representation of it. Sparse dictionary learning has also been applied in image de-noising. The key idea is that a clean image patch can be sparsely represented by an image dictionary, but the noise cannot.
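A short sketch with scikit-learn’s DictionaryLearning (synthetic data; the component counts are arbitrary) learns D and the sparse codes jointly; note that scikit-learn stores atoms as rows, so the reconstruction reads X ≈ RD rather than the column convention above:

```python
# Sparse dictionary learning: each x is rebuilt from a few dictionary atoms.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))             # 200 ten-dimensional data points

dl = DictionaryLearning(n_components=15,   # n > d leaves room for sparsity
                        transform_algorithm="omp",
                        transform_n_nonzero_coefs=3, random_state=0)
R = dl.fit_transform(X)                    # sparse codes, shape (200, 15)
D = dl.components_                         # dictionary atoms, shape (15, 10)

print((R != 0).sum(axis=1).max())          # at most 3 non-zeros per datum
print(np.abs(X - R @ D).mean())            # average reconstruction error
```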
Genetic algorithms
A genetic algorithm (GA) is a search heuristic that mimics the process of natural selection, using methods such as mutation and crossover to generate new genotypes in the hope of finding good solutions to a given problem. In machine learning, genetic algorithms found some uses in the 1980s and 1990s. Conversely, machine learning techniques have been used to improve the performance of genetic and evolutionary algorithms.
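A compact, pure-Python sketch (the "all ones" fitness function is a standard toy) shows selection, crossover, and mutation generating new genotypes:

```python
# Toy genetic algorithm: evolve bit strings toward the all-ones optimum.
import random

random.seed(0)
LENGTH, POP, GENERATIONS = 20, 30, 60

def fitness(genotype):
    return sum(genotype)  # count of 1-bits; the maximum is LENGTH

pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]                  # selection: keep the fittest half
    children = []
    while len(parents) + len(children) < POP:
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, LENGTH)     # single-point crossover
        child = a[:cut] + b[cut:]
        if random.random() < 0.3:             # occasional point mutation
            i = random.randrange(LENGTH)
            child[i] = 1 - child[i]
        children.append(child)
    pop = parents + children

print(max(fitness(g) for g in pop))  # approaches LENGTH as generations pass
```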