ANALYTICAL FORWARD LEARNING (AFL) ON ARTIFICIAL NEURAL NETWORKS (ANN)


Bibliographic Details
Main Authors: SYUKRON ABU ISHAQ ALFAROZI; Noor Akhmad Setiawan, S.T., M.T., Ph.D.
Format: Theses and Dissertations (non-peer-reviewed)
Published: [Yogyakarta] : Universitas Gadjah Mada, 2014
Subjects: ETD
Online Access: https://repository.ugm.ac.id/131510/
http://etd.ugm.ac.id/index.php?mod=penelitian_detail&sub=PenelitianDetail&act=view&typ=html&buku_id=72006
Institution: Universitas Gadjah Mada
Description
Abstract: Extreme learning machine (ELM) is a learning algorithm for neural networks that combines high learning speed with good performance. ELM was designed to avoid the problems of iterative methods such as the gradient-based backpropagation (BP) algorithm, which can suffer from local minima, difficult parameter selection, slow learning speed, and similar issues. On the other hand, BP shows good generalization performance when given appropriate initial conditions and parameters. ELM trains a single hidden layer feedforward neural network (SLFN) by computing the weights between the hidden layer and the output layer analytically using the Moore-Penrose generalized inverse, while the weights between the input layer and the hidden layer are assigned randomly. Although this algorithm is extremely fast, the random weights can make the network unstable. This research proposes a new algorithm called analytical forward learning (AFL). AFL computes the weights of the neural network analytically from the first layer to the last. AFL uses a poison vector, obtained by mapping from the output layer to a particular hidden layer with the Moore-Penrose generalized inverse, to compute each weight in the network. The poison vector acts as a directed mapper from the input layer to the particular hidden layer, so that the final weights can be computed easily with minimum error. The results of this research show that AFL provides good generalization performance, closer to BP than to ELM, while retaining a learning speed as high as ELM's. Furthermore, the algorithm can be applied to multilayer perceptrons in general to handle more complex network architectures.
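
Because the abstract describes the ELM baseline as a concrete computation (random input-to-hidden weights plus a Moore-Penrose generalized inverse for the output weights), a minimal NumPy sketch of that baseline is given below for orientation. This illustrates standard ELM as summarized in the abstract only, not the thesis's AFL algorithm; the sigmoid activation, the function names train_elm and predict_elm, and the toy usage example are assumptions made for illustration.

import numpy as np

def train_elm(X, T, n_hidden, seed=None):
    # Minimal ELM sketch for a single hidden layer feedforward network (SLFN).
    # X: (n_samples, n_inputs) inputs; T: (n_samples, n_outputs) targets.
    rng = np.random.default_rng(seed)
    n_inputs = X.shape[1]
    # Input-to-hidden weights and biases are assigned at random and never trained.
    W_in = rng.standard_normal((n_inputs, n_hidden))
    b = rng.standard_normal(n_hidden)
    # Hidden-layer activations (sigmoid chosen here purely for illustration).
    H = 1.0 / (1.0 + np.exp(-(X @ W_in + b)))
    # Hidden-to-output weights solved analytically with the
    # Moore-Penrose generalized inverse: W_out = pinv(H) @ T.
    W_out = np.linalg.pinv(H) @ T
    return W_in, b, W_out

def predict_elm(X, W_in, b, W_out):
    H = 1.0 / (1.0 + np.exp(-(X @ W_in + b)))
    return H @ W_out

# Hypothetical usage: fit a noisy sine curve and report the training MSE.
X = np.linspace(-3, 3, 200).reshape(-1, 1)
T = np.sin(X) + 0.05 * np.random.default_rng(0).standard_normal(X.shape)
params = train_elm(X, T, n_hidden=50, seed=0)
print(np.mean((predict_elm(X, *params) - T) ** 2))

The instability the abstract attributes to ELM shows up in such a sketch as run-to-run variance driven by the random W_in and b; AFL, as described above, aims to remove that randomness by computing every layer's weights analytically.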