Unsupervised learning
In machine learning, the problem of unsupervised learning is that of trying to find hidden structure in unlabeled data. Since the examples given to the learner are unlabeled, there is no error or reward signal to evaluate a potential solution. This distinguishes unsupervised learning from supervised learning and reinforcement learning.

From Wikipedia, the free encyclopedia

Unsupervised learning is closely related to the problem of density estimation in statistics.[1] However, unsupervised learning also encompasses many other techniques that seek to summarize and explain key features of the data. Many methods employed in unsupervised learning are based on data mining methods used to preprocess data.
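The link to density estimation can be made concrete with a minimal sketch (an illustrative example, not taken from the article): fitting a single Gaussian to unlabeled samples by maximum likelihood, then using the fitted parameters to estimate the density at any point.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=5.0, scale=2.0, size=1000)  # unlabeled samples, true labels unknown to the learner

# Maximum-likelihood estimates of the Gaussian's parameters from the data alone
mu_hat = data.mean()
sigma_hat = data.std()

def density(x):
    """Estimated probability density at x under the fitted Gaussian."""
    return np.exp(-0.5 * ((x - mu_hat) / sigma_hat) ** 2) / (sigma_hat * np.sqrt(2 * np.pi))
```

No error signal is involved: the estimates come purely from the structure of the unlabeled samples.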

Approaches to unsupervised learning include:

- clustering (e.g. hierarchical clustering and partitional clustering)
- blind source separation using feature-extraction techniques for dimensionality reduction, such as the expectation–maximization algorithm, FastICA, and sparse principal component analysis[2]
- neural network models such as the self-organizing map (SOM) and adaptive resonance theory (ART)
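As a concrete illustration of the clustering approach, here is a minimal sketch of k-means, a standard partitional clustering algorithm (the implementation and data are illustrative, not from the article): it alternates between assigning each point to its nearest centroid and moving each centroid to the mean of its assigned points.

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means: alternate nearest-centroid assignment and centroid update."""
    rng = np.random.default_rng(seed)
    # initialize centroids at k distinct data points
    centroids = points[rng.choice(len(points), size=k, replace=False)].astype(float)
    labels = np.zeros(len(points), dtype=int)
    for _ in range(iters):
        # assignment step: nearest centroid for every point
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = np.argmin(dists, axis=1)
        # update step: move each centroid to the mean of its assigned points
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean(axis=0)
    return centroids, labels

# two well-separated blobs of unlabeled 2-D points
rng = np.random.default_rng(1)
points = np.vstack([rng.normal(0.0, 0.5, (50, 2)),
                    rng.normal(10.0, 0.5, (50, 2))])
centroids, labels = kmeans(points, k=2)
```

On this data the two recovered centroids settle near the blob centers, found without any labels or reward signal.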

Among neural network models, the self-organizing map (SOM) and adaptive resonance theory (ART) are commonly used unsupervised learning algorithms. The SOM is a topographic organization in which nearby locations in the map represent inputs with similar properties. The ART model allows the number of clusters to vary with problem size and lets the user control the degree of similarity between members of the same cluster by means of a user-defined constant called the vigilance parameter. ART networks are also used for many pattern recognition tasks, such as automatic target recognition and seismic signal processing. The first version of ART was "ART1", developed by Carpenter and Grossberg (1988).[3]
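The topographic organization described above can be sketched with a minimal 1-D SOM (an illustrative toy implementation under simplifying assumptions, not a reference SOM): each training input pulls its best-matching unit, and, through a neighbourhood function, the units near it on the map, toward the input, so neighbouring units end up responding to similar inputs.

```python
import numpy as np

def train_som(data, n_units=10, epochs=60, lr0=0.5, radius0=3.0, seed=0):
    """Toy 1-D self-organizing map: a line of units whose weight vectors
    organize so that nearby units on the line represent similar inputs."""
    rng = np.random.default_rng(seed)
    weights = rng.uniform(data.min(), data.max(), size=(n_units, data.shape[1]))
    positions = np.arange(n_units)  # unit coordinates on the 1-D map
    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / epochs)                 # learning rate decays over training
        radius = max(radius0 * (1.0 - epoch / epochs), 0.5)  # neighbourhood shrinks over training
        for x in data[rng.permutation(len(data))]:
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))  # best-matching unit
            # topographic neighbourhood: units near the BMU on the map also move toward x
            h = np.exp(-((positions - bmu) ** 2) / (2.0 * radius ** 2))
            weights += lr * h[:, None] * (x - weights)
    return weights

# train on unlabeled scalars drawn uniformly from [0, 1]
rng = np.random.default_rng(2)
data = rng.uniform(0.0, 1.0, size=(200, 1))
weights = train_som(data)
```

After training, the units spread out to cover the input range, giving a low quantization error even though no labels were ever provided.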

Bibliography

  1. Jordan, Michael I.; Bishop, Christopher M. (2004). "Neural Networks". In Allen B. Tucker. Computer Science Handbook, Second Edition (Section VII: Intelligent Systems). Boca Raton, FL: Chapman & Hall/CRC Press LLC. ISBN 1-58488-360-X.
  2. Acharyya, Ranjan (2008). A New Approach for Blind Source Separation of Convolutive Sources. ISBN 978-3-639-07797-1. (This book focuses on unsupervised learning with blind source separation.)
  3. Carpenter, G.A. and Grossberg, S. (1988). "The ART of adaptive pattern recognition by a self-organizing neural network". Computer 21: 77–88.