Resilient backpropagation (Rprop)
Rprop, short for resilient backpropagation, is a learning heuristic for supervised learning in feedforward artificial neural networks. It is a first-order optimization algorithm, created by Martin Riedmiller and Heinrich Braun in 1992.[1]

Similarly to the Manhattan update rule, Rprop takes into account only the sign of the partial derivative over all patterns (not the magnitude), and acts independently on each "weight". For each weight, if there was a sign change of the partial derivative of the total error function compared to the last iteration, the update value for that weight is multiplied by a factor η−, where η− < 1. If the last iteration produced the same sign, the update value is multiplied by a factor of η+, where η+ > 1. The update values are calculated for each weight in the above manner, and finally each weight is changed by its own update value, in the opposite direction of that weight's partial derivative, so as to minimise the total error function. η+ is empirically set to 1.2 and η− to 0.5.

Next to the cascade correlation algorithm and the Levenberg–Marquardt algorithm, Rprop is one of the fastest weight update mechanisms.

Rprop is a batch update algorithm: each step uses the gradient of the total error accumulated over the whole training set, rather than per-pattern updates.
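The sign-based update rule above can be sketched as follows. This is a minimal NumPy illustration in the style of the iRPROP− variant (no weight backtracking); the function name, parameter defaults beyond η+ = 1.2 and η− = 0.5, and the step-size bounds `delta_max`/`delta_min` are illustrative assumptions, not part of the original formulation.

```python
import numpy as np

def rprop_step(grads, prev_grads, deltas, eta_plus=1.2, eta_minus=0.5,
               delta_max=50.0, delta_min=1e-6):
    """One Rprop iteration (iRPROP- style sketch) for one weight array.

    grads:      current batch gradient of the error w.r.t. the weights
    prev_grads: gradient stored from the previous iteration
    deltas:     per-weight update values (step sizes)
    Returns (weight_step, grads_to_store, new_deltas).
    """
    sign_change = grads * prev_grads
    # Same sign as last iteration: grow the step size (capped at delta_max).
    deltas = np.where(sign_change > 0,
                      np.minimum(deltas * eta_plus, delta_max), deltas)
    # Sign flipped: shrink the step size (floored at delta_min).
    deltas = np.where(sign_change < 0,
                      np.maximum(deltas * eta_minus, delta_min), deltas)
    # In iRPROP-, the stored gradient is zeroed after a sign flip,
    # so the next iteration does not treat it as another change.
    grads = np.where(sign_change < 0, 0.0, grads)
    # Move each weight opposite to the sign of its own gradient.
    weight_step = -np.sign(grads) * deltas
    return weight_step, grads, deltas
```

Note that only `np.sign(grads)` enters the weight change; the gradient magnitude influences nothing but the sign comparison, which is the defining feature of Rprop.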

Variations

Martin Riedmiller developed three algorithms, all named RPROP. Igel and Hüsken assigned new names to them:[2]

  1. RPROP+ is defined in A Direct Adaptive Method for Faster Backpropagation Learning: The RPROP Algorithm.[3]
  2. RPROP− is defined in Advanced Supervised Learning in Multi-layer Perceptrons – From Backpropagation to Adaptive Learning Algorithms. It is RPROP+ with the backtracking step removed.[4]
  3. iRPROP− is defined in Rprop – Description and Implementation Details,[5] and was later reinvented by Igel and Hüsken. This variant is the simplest and one of the most widely used.
  4. iRPROP+ is defined in Improving the Rprop Learning Algorithm; it is very robust and typically faster than the other three variants.[6][2]

References

  1. Martin Riedmiller and Heinrich Braun: Rprop – A Fast Adaptive Learning Algorithm. Proceedings of the International Symposium on Computer and Information Science VII, 1992
  2. Christian Igel and Michael Hüsken. Empirical Evaluation of the Improved Rprop Learning Algorithm. Neurocomputing 50:105-123, 2003
  3. Martin Riedmiller and Heinrich Braun. A direct adaptive method for faster backpropagation learning: The Rprop algorithm. Proceedings of the IEEE International Conference on Neural Networks, 586-591, IEEE Press, 1993
  4. Martin Riedmiller. Advanced supervised learning in multi-layer perceptrons – From backpropagation to adaptive learning algorithms. Computer Standards and Interfaces 16(5), 265-278, 1994
  5. Martin Riedmiller. Rprop – Description and Implementation Details. Technical report, 1994
  6. Christian Igel and Michael Hüsken. Improving the Rprop Learning Algorithm. Second International Symposium on Neural Computation (NC 2000), pp. 115-121, ICSC Academic Press, 2000
