Unsupervised learning
In machine learning, the problem of unsupervised learning is that of trying to find hidden structure in unlabeled data. Since the examples given to the learner are unlabeled, there is no error or reward signal to evaluate a potential solution. This distinguishes unsupervised learning from supervised learning and reinforcement learning.
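
A minimal sketch of this idea, using k-means clustering as the structure-finding method (the data, k = 2, and the use of NumPy are illustrative assumptions, not part of the article): the algorithm groups points purely from their geometry, with no labels, error signal, or reward involved.

```python
# Sketch: finding structure in unlabeled data with k-means (assumes NumPy).
import numpy as np

rng = np.random.default_rng(0)
# Unlabeled data: two blobs, but the learner is never told which point came from which.
X = np.vstack([rng.normal(0.0, 0.5, size=(50, 2)),
               rng.normal(3.0, 0.5, size=(50, 2))])

def kmeans(X, k, iters=20):
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center.
        labels = np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
        # Move each center to the mean of its assigned points.
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

labels, centers = kmeans(X, k=2)
print(centers)   # two cluster centers recovered from unlabeled points alone
```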

Unsupervised learning is closely related to the problem of density estimation in statistics.[1] However, unsupervised learning also encompasses many other techniques that seek to summarize and explain key features of the data. Many methods employed in unsupervised learning are based on data mining methods used to preprocess data.
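
As a hedged illustration of the density-estimation view, the sketch below models p(x) directly with a Gaussian kernel density estimate; NumPy, the bandwidth of 0.3, and the two-component sample data are assumptions made for the example.

```python
# Sketch: density estimation from unlabeled samples (assumes NumPy).
import numpy as np

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 0.6, 200), rng.normal(2, 0.6, 200)])  # unlabeled samples

def kde(query, samples, bandwidth=0.3):
    # Average of Gaussian kernels centered on each observed sample.
    diffs = (query[:, None] - samples[None, :]) / bandwidth
    kernels = np.exp(-0.5 * diffs ** 2) / np.sqrt(2 * np.pi)
    return kernels.mean(axis=1) / bandwidth

grid = np.linspace(-5, 5, 11)
print(kde(grid, x))  # estimated density; peaks near -2 and 2 reveal the hidden structure
```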

Approaches to unsupervised learning include:

  - clustering (e.g., k-means, mixture models, hierarchical clustering);
  - blind signal separation using feature extraction techniques for dimensionality reduction (e.g., principal component analysis, independent component analysis, non-negative matrix factorization, singular value decomposition).[2]
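
The following is a minimal sketch of one of the dimensionality-reduction techniques listed above, principal component analysis computed via the singular value decomposition; the random correlated data, the use of NumPy, and the choice of two components are illustrative assumptions.

```python
# Sketch: PCA via SVD for dimensionality reduction on unlabeled data (assumes NumPy).
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 5)) @ rng.normal(size=(5, 5))  # correlated, unlabeled data

Xc = X - X.mean(axis=0)                        # center the data
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
components = Vt[:2]                            # top-2 principal directions
Z = Xc @ components.T                          # 2-D representation of each sample
print(Z.shape)                                 # (100, 2)
```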

Among neural network models, the self-organizing map (SOM) and adaptive resonance theory (ART) are commonly used unsupervised learning algorithms. The SOM is a topographic organization in which nearby locations in the map represent inputs with similar properties. The ART model allows the number of clusters to vary with problem size and lets the user control the degree of similarity between members of the same cluster by means of a user-defined constant called the vigilance parameter. ART networks are also used for many pattern recognition tasks, such as automatic target recognition and seismic signal processing. The first version of ART was "ART1", developed by Carpenter and Grossberg (1988).[3]
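
The sketch below is a toy self-organizing map along the lines of the description above: each input pulls its best-matching unit and that unit's map neighbors toward it, so nearby units come to represent similar inputs. The 1-D map of ten units, the decay schedules, and the uniform input data are assumptions chosen for brevity, not details from the cited sources.

```python
# Sketch: a tiny 1-D self-organizing map (assumes NumPy).
import numpy as np

rng = np.random.default_rng(3)
data = rng.uniform(0, 1, size=(500, 2))        # unlabeled 2-D inputs
weights = rng.uniform(0, 1, size=(10, 2))      # 10 map units arranged on a line

for t, x in enumerate(data):
    lr = 0.5 * (1 - t / len(data))             # decaying learning rate
    sigma = 3.0 * (1 - t / len(data)) + 0.5    # shrinking neighborhood radius
    bmu = np.argmin(((weights - x) ** 2).sum(axis=1))   # best matching unit
    # Units near the BMU on the map are pulled toward the input too,
    # which produces the topographic ordering described in the text.
    dist = np.abs(np.arange(10) - bmu)
    h = np.exp(-(dist ** 2) / (2 * sigma ** 2))
    weights += lr * h[:, None] * (x - weights)

print(weights)   # after training, adjacent units hold similar weight vectors
```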

Bibliography

  1. Jordan, Michael I.; Bishop, Christopher M. (2004). "Neural Networks". In Allen B. Tucker (ed.), Computer Science Handbook, Second Edition (Section VII: Intelligent Systems). Boca Raton, FL: Chapman & Hall/CRC Press. ISBN 1-58488-360-X.
  2. Acharyya, Ranjan (2008). A New Approach for Blind Source Separation of Convolutive Sources. ISBN 978-3-639-07797-1. (This book focuses on unsupervised learning with blind source separation.)
  3. Carpenter, G.A. and Grossberg, S. (1988). "The ART of adaptive pattern recognition by a self-organizing neural network". Computer 21: 77–88.